\section{INTRODUCTION}
\label{sec:Intro}
The transitional stage when density perturbations first collapse to become
galaxies is now being reached with data of unprecedented depth. New
polarization measurements with the Planck satellite indicate an optical
depth to electron scattering of the cosmic microwave background (CMB) of
only $\taue=0.066\pm0.016$, which translates into an approximate
instantaneous reionization redshift of
$z \sim 8.8^{+1.7}_{-1.4}$ \citep{Planck2015}.
This may be compared with the ionization state of hydrogen absorption lines
revealed by distant, bright quasi-stellar objects (QSOs) and gamma-ray bursts
(GRBs), which has demonstrated that intergalactic hydrogen has remained
highly ionized since at least $z \sim 6$, beyond which evidence of patchy
opacity is claimed in some QSO and GRB spectra
\citep{Chornock2014,Melandri2015,Becker2015}, with a small mean enhancement
of the average neutral fraction by $z=6.5$ \citep{Hartoog2014}.
The Lyman-$\alpha$ forest at $z>6.5$ has not yet been measured with any
bright source, but a large jump in the neutral hydrogen fraction may be
implied in the range $6.5<z<7.0$ by the statistical absence of strong
Lyman-$\alpha$ emission from the most distant galaxies amenable to
spectroscopy \citep{Pentericci2014,Stark2015}.
Exceptions have been found, with strong Lyman-$\alpha$ emission detected
at $z \sim 7.7$ \citep{Oesch2015}
and in the currently highest-redshift galaxy established by spectroscopy,
at $z \sim 8.7$ \citep{Zitrin2015}.
A relatively sharp reionization transition is
considered likely if galaxies are the dominant source of reionization, based
on advanced hybrid techniques \citep{Mesinger2015}, which justifies the
instantaneous reionization redshift approximation \citep{Dijkstra2014,Mitra2015}.
More detailed 3D modeling of this transition remains very challenging when incorporating all relevant
processes which may affect ultraviolet (UV) ionization
\citep[e.g.,][]{Springel2003,Bromm2011,Wise2014}, including early galactic outflows
\citep[e.g.,][]{Frye2002} and their feedback effects
\citep[e.g.,][]{Scannapieco2001,Pieri2007,Booth2012}.
This relatively low value of $\taue$ implies that galaxies may not be expected
in abundance at redshifts much higher than $z \sim 9$. Indeed,
beyond this redshift only a handful of galaxies are claimed in the deepest
fields, the current most distant of which, at $z \sim 10.7$,
was discovered in the CLASH program and is highly magnified by
gravitational lensing \citep{Coe2013}.
Two other reliably estimated high-z galaxies are also known behind the new,
highly magnifying Hubble Frontier Fields (HFF) clusters, at $z \sim 9.6$
\citep{Zheng2012} and $z \sim 9.8$ \citep{Zitrin2014}. Nothing beyond this
has been found in the HFF so far, despite the high magnifications by these
clusters \citep{Zitrin2009,Lam2014,Diego2015} and the great depth of these
near-infrared images, with their potential to detect even higher-redshift
galaxies to a photometric limit of $z \sim 11.5$ \citep{Coe2015}.
The UV luminosity function (LF) of distant galaxies is now well constructed
at $z \sim 4 - 10$, relying mainly on dropout galaxies detected in deep
field searches \citep[e.g.,][hereafter B15b]{Bouwens2015b}. The LF is seen to
evolve steadily toward lower number densities at higher redshift and to
steepen at the faint end as it does so (B15b). Evolution is also seen in the
mean sizes of dropout galaxies, which steadily decrease with increasing
redshift \citep{Bouwens2003,Holwerda2015}.
Currently, the behavior of the UV luminosity density at $z>8$ is hotly
debated, with evidence of an accelerated decline at $z>8$ from B15b but a
counterclaim by \citet{McLeod2015} for the HFF, relying on parametric lens
modeling. This latter claim is at odds with \citet{Coe2015}, who provide some
evidence of a deficit at high redshifts for the HFF based on the first two
completed clusters. Of course, in this $z \gtrsim 9$ redshift range the data
are restricted to fewer detections in only infrared passbands lying close to
magnitude limits, so that crucial conclusions regarding galaxy formation are
still uncertain.
In this paper, we examine the high-redshift galaxy formation in the context of
a wave dark matter model, known as \emph{$\psiDM$}
\citep[][hereafter SCB14a]{Schive2014a} or \emph{fuzzy} dark matter
\citep{Hu2000}. In this scenario, dark matter is
assumed to be composed of extremely light bosons, such as axion-like
particles proposed by string theory \citep{Arvanitaki2010} or non-QCD axions
\citep{Chiueh2014}. They are non-thermally generated and can be described by
a single coherent wave function \citep{Turner1983,Goodman2000,Bohmer2007,Sikivie2009,Davidson2015,Guth2015}.
When self-interaction is negligible, the evolution of $\psiDM$ is governed
by the Schr\"{o}dinger-Poisson equation
\citep{Ruffini1969,Seidel1990,Widrow1993,Hu2000,Woo2009,Schive2014b}, with a single free
parameter, $m_{\psi}$, the dark matter particle mass.
The most prominent feature in $\psiDM$ is that the uncertainty principle
counters gravity below a Jeans scale, resulting in a
suppression of halos below $\sim 10^{10} \Msun$ and a flat density profile
within $\sim 0.1-1.0 \kpc$ of the centers of galaxies, assuming
$\PM \sim 1.0$ where $\PM = m_{\psi}/10^{-22}\eV$
\citep{Khlopov1985,Peebles2000,Hu2000,Matos2001,Lee2010,Marsh2010,Schive2014a}.
This boson mass scale can naturally arise in a non-QCD axion model
\citep{Chiueh2014}, lending support for the very light boson.
The $\psiDM$ model has become a viable dark matter candidate
\citep[e.g.,][]{Woo2009,Mielke2009,Chavanis2011,Suarez2011,Robles2012,
Marsh2014,Rindler2014,Lora2014,Bray2014,Suarez2014,Bozek2015,Marsh2015a,
Martinez2015,Guzman2015,Madarassy2015,Harko2015},
especially given the increasingly strict limits from non-detections of
weakly interacting massive particles (WIMPs) in the standard cold dark matter
scenario \citep[CDM,][]{Akerib2014}. Various observable properties of the
$\psiDM$ model have been proposed
\citep{Amendola2006,Arvanitaki2011,Schive2014a,Schive2014b,Khmelnitsky2014,
Hlozek2015,VanTilburg2015,Stadnik2015}.
The first high-resolution cosmological simulations for the $\psiDM$ model
have recently generated exciting results (SCB14a).
We have directly demonstrated that the large-scale structure of
$\psiDM$ is statistically indistinguishable from that of CDM, but differs radically
on small scales, where $\psiDM$ halos
form central solitonic cores surrounded by fine-scale, large-amplitude
granular textures. By applying a Jeans analysis to the stellar phase-space
distribution in the Fornax dwarf spheroidal (dSph) galaxy, which is known
to have a distinct core \citep[e.g.,][]{Amorisco2013}, we determine
$\PM=0.8\pm0.2$ ($1\sigma$), thereby providing the crucial normalization of this model
(SCB14a).
From our numerical simulations and theoretical arguments based on the
scaling symmetry of the Schr\"{o}dinger-Poisson equation and the uncertainty principle,
we subsequently derived a unique core-halo mass relation in $\psiDM$ \citep[][hereafter S14b]{Schive2014b},
$M_c \propto (1+z)^{1/2}\Mvir^{1/3}$, where $M_c$ and $\Mvir$ are the core
mass and halo mass, respectively, and $z$ is redshift.
This relation predicts that massive galaxies with $\Mvir \sim 10^{12} \Msun$ at
$z \sim 8$ will have compact solitons of $M_c \sim 10^9 \Msun$ within
$\sim 60 \pc$. Our simulations show that these dense solitonic cores form
promptly after halo collapse, and thus may help to explain the early onset
of QSO activity \citep{Trakhtenbrot2015} by acting as a massive focus for
gas accretion.
Another key prediction of $\psiDM$ is that galaxy formation is \emph{delayed}
relative to CDM because of the inherent Jeans scale. The preliminary results
of SCB14a showed that the first galaxies form at $z \sim 13$ with
$\Mvir \sim 10^9 - 10^{10} \Msun$, assuming $\PM \sim 1.0$ fixed by the scale
of dSph galaxy cores as described above. Halos below $\sim 10^9 \Msun$ are
significantly suppressed. We stress that the particle mass, $\PM$, is the only
free parameter here
assuming that the dark matter is made entirely of $\psiDM$ (see
\citealt{Marsh2014} for a mixed CDM and $\psiDM$ model).
The smaller the $\PM$, the greater the
difference between CDM and $\psiDM$.
Here we aim to establish whether a similar particle mass ($\PM \sim 1.0$) can
satisfy both the observed properties of dSph galaxies and the constraints
from high-z observations, such as galaxy counts, reionization history, and
Lyman-$\alpha$ forest, although some tension seems to exist \citep{Bozek2015}.
This is a well-known issue for warm dark matter (WDM), usually termed the
\emph{Catch 22} problem \citep{Maccio2012}: the kpc-scale cores in
dSph galaxies require a WDM particle mass so small that it
is in contradiction with high-z observations
\citep{Schneider2014,Schultz2014,Lovell2014}. Since the relation between core
radius and power-spectrum suppression is different in $\psiDM$ and WDM
\citep{Hu2000,Schive2014b,Marsh2015a}, in this work we examine in detail
whether $\psiDM$ is clear of this serious problem facing WDM.
In this paper we conduct cosmological simulations to study the evolution of
the halo mass function (MF) in the $\psiDM$ scenario, and connect it to the
recently established galaxy UV LF at $4 \lesssim z \lesssim 10$.
We explore the results for different $\psiDM$ particle masses ranging
from $\PM = 0.8$ to $3.2$. We predict the evolution of the LF beyond the
current observational limit as a future test to distinguish between CDM and
$\psiDM$. We also perform analytic calculations to study the reionization
history in this context, and compare it to the Thomson optical depth recently
reported by Planck.
All magnitudes in this paper are quoted in the AB system
\citep[$M_{\rm AB}$,][]{Oke1983}.
The paper is structured as follows. In \sref{sec:Simulations} we describe
our simulation setup, including initial conditions and simulation
characteristics. We show the $\psiDM$ halo mass function in \sref{sec:MF},
and compare it with observations in \sref{sec:Predictions}. Finally, we
discuss and summarize our results in \sref{sec:Discussion}.
\section{SIMULATIONS}
\label{sec:Simulations}
In this section, we describe the initial power spectra and other
characteristics of our simulations for the study of the evolution of the
$\psiDM$ halo MF at high redshifts.
\subsection{Initial Power Spectra}
\label{subsec:InitPS}
The suppression of the $\psiDM$ linear density power spectrum relative
to CDM can be expressed as
\be
P_{\psiDM}(k,z) = T_{\psiDM}^2(k,z)P_{\rm CDM}(k,z),
\label{eq:TranFunc}
\ee
where $P$ denotes the power spectrum and $T_{\psiDM}$ is the $\psiDM$ transfer
function
(strictly speaking it is the ratio between the transfer functions of $\psiDM$
and CDM).
In general $T_{\psiDM}$ is both redshift- and scale-dependent since
the balance between gravity and quantum pressure introduces a redshift-dependent
Jeans scale, $k_J(z)$, below which the structures cannot grow.
However, for the particle masses, redshift range, and halo masses of interest
in this work ($\PM \sim 1$, $z\sim 4-10$, $\Mvir \gtrsim 1\times10^9 \Msun$),
$T_{\psiDM}$ can be approximated as redshift-independent, as we demonstrate
below.
The redshift evolution of $\psiDM$ density perturbations during the
matter-dominated epoch can be described analytically by \citep{Woo2009}
\be
\rho_k(k,z) = A(k)\frac{3\cos\theta - \theta^2\cos\theta + 3\theta\sin\theta}{\theta^2},
\label{eq:GrowingMode}
\ee
where $\rho_k$ is the spatial Fourier component of the comoving density
perturbations, $A(k)$ is a normalization constant,
$\theta(k,z) = \hbar k^2 \sqrt{1+z}/m_{\psi}H_0\sqrt{\Omega_{m0}}$,
$H_0$ is the present Hubble parameter, and $\Omega_{m0}$ is the present
matter density parameter. Setting $\theta^2=6$ gives the Jeans scale,
\be
k_J(z) \approx 69.1\,\PM^{1/2} \left( \frac{\Omega_{m0}h^2}{0.14} \right)^{1/4} (1+z)^{-1/4} \Mpc^{-1},
\label{eq:JeansK}
\ee
where $h$ is the dimensionless Hubble constant. For $k \ll k_J$, we have
$\rho_k \propto (1+z)^{-1}$, and thus $\psiDM$ grows like CDM; while for
$k \gg k_J$ the perturbations oscillate as $\rho_k \propto \cos\theta$.
Note that $k_J(z)$ increases slowly with time, and hence an oscillating mode
may become a growing mode at lower redshifts, but not vice versa.
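As a quick numerical check of Equations (\ref{eq:GrowingMode}) and (\ref{eq:JeansK}) (a sketch in Python, not part of our analysis pipeline; the WMAP9 parameters of \sref{subsec:SimuChar} are assumed as defaults), one can verify the small-$\theta$ limit $\rho_k \propto 3/\theta^2 \propto (1+z)^{-1}$ and the slow growth of $k_J$ with time:

```python
import math

def growing_mode(theta):
    """Growing-mode amplitude of Eq. (2); for theta -> 0 it approaches
    3/theta^2 + 1/2, i.e. rho_k grows as (1+z)^-1, just like CDM."""
    return (3*math.cos(theta) - theta**2*math.cos(theta)
            + 3*theta*math.sin(theta)) / theta**2

def jeans_wavenumber(m22, z, omega_m0=0.284, h=0.696):
    """Comoving Jeans wavenumber k_J(z) of Eq. (3), in Mpc^-1."""
    return (69.1 * math.sqrt(m22)
            * (omega_m0 * h**2 / 0.14)**0.25 * (1 + z)**-0.25)
```

For $\PM=0.8$, `jeans_wavenumber(0.8, 30)` gives $k_J \approx 26 \Mpc^{-1}$, growing to $\approx 41 \Mpc^{-1}$ by $z=4$, illustrating that an oscillating mode can later become a growing mode but not vice versa.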
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__PowerSpec_CDM_vs_psiDM.pdf}
\caption{
Growth rate ratio between $\psiDM$ and CDM density perturbations
(Eq. [\ref{eq:EvolvePS}]) during $30 \ge z \ge 4$.
Vertical dashed lines highlight the mass of the faintest galaxies currently observed by HST at
$z\gtrsim 4$ ($M_{\rm HST}$) and that expected for JWST assuming a limiting
absolute magnitude of $\MUV=-15$ at $z=6$ ($M_{\rm JWST}$). Here we
have adopted the mass-luminosity relation described in \sref{subsubsec:CLF}.
Note that even for $\PM=0.8$ the growth rate ratios still reach
$\xi \sim 0.99$ at $\Mvir = M_{\rm HST}$ and $\xi \sim 0.97$ at
$\Mvir = M_{\rm JWST}$, indicating that the additional suppression of $\psiDM$
halos above $M_{\rm JWST}$ during this redshift interval of interest is almost
negligible (see text for details).
}
\label{fig:EvolvePS}
\end{figure}
To quantify the difference in growth rate between $\psiDM$ and CDM density
perturbations during a given redshift interval, $\zs \ge z \ge \ze$, we define
\be
\xi(k) = \frac{\rho_k(k,\ze)/\rho_k(k,\zs)}{\rho_k(k_0,\ze)/\rho_k(k_0,\zs)},
\label{eq:EvolvePS}
\ee
with $k_0 \ll k_J$ so that the denominator represents the growth in CDM.
For $k \ll k_J$, $\psiDM$ grows like CDM, and thus $\xi \sim 1$.
For $k \sim k_J$, quantum pressure starts to counter gravity and leads to
$\xi < 1$, indicative of `additional' suppression of $\psiDM$ halos during
this epoch.
\fref{fig:EvolvePS} shows $\xi(k)$ for various $\psiDM$ particle
masses. We take $\zs=30$ so that the $\psiDM$ density perturbations are still
in the linear regime, and $\ze=4$ to bracket the redshifts of interest in
this work. We convert the wavenumber to the halo virial mass via
$\Mvir = 4\pi(\pi/k)^3\rho_m/3$, where $\rho_m$ is the comoving matter density.
A relatively large deviation from CDM is found at the low-mass end for
a smaller particle mass because of the corresponding longer Jeans wavelength.
However, note that the faintest galaxies currently observed by the
Hubble Space Telescope (HST) at $z \gtrsim 4$ have $\Mvir \sim 10^{10} \Msun$
(see Sec. \ref{subsubsec:CLF} for the mass-luminosity relation adopted), at
which $\xi \sim 0.99$ for $\PM=0.8$. Even for the James Webb Space Telescope
\citep[JWST,][]{Gardner2006} assuming a limiting absolute magnitude of
$\MUV \sim -15$ at $z \sim 6$, we still have $\xi \sim 0.97$ for $\PM=0.8$.
This demonstrates that \emph{for the particle masses, redshifts, and halo
masses of interest when comparing with current and forthcoming observations,
(i) the growth rates of the linear density power spectra
in CDM and $\psiDM$ are similar, and (ii) the $\psiDM$ transfer function
$T_{\psiDM}$ can be well approximated as redshift-independent}.
This is primarily because the Jeans mass at $z=30$ for $\PM=0.8$ is
$\sim 2.7\times10^8 \Msun$, well below the observational limits.
Note, also, that the smallest halos resolved in our simulations have
$\Mvir \sim 3\times10^8 \Msun$, close to the Jeans mass and hence
$\xi \sim 0.52$ for $\PM=0.8$. Therefore the halo MF at the
low-mass end may be, in this sense, slightly underestimated in our simulations.
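The numbers quoted above can be reproduced from Equations (2)--(4) directly. The following sketch (again Python, illustrative only; physical constants are standard SI values, and the wavenumber-to-mass conversion is the $\Mvir = 4\pi(\pi/k)^3\rho_m/3$ relation used in the text) evaluates $\xi$ for the HST-limit halo mass:

```python
import math

HBAR  = 1.0546e-34   # J s
EV_KG = 1.7827e-36   # 1 eV/c^2 in kg
MPC_M = 3.0857e22    # metres per Mpc
H100  = 3.2408e-18   # 100 km/s/Mpc in s^-1
OMEGA_M, H = 0.284, 0.696

def mass_to_k(m_vir):
    """Comoving wavenumber (Mpc^-1) solving Mvir = 4 pi (pi/k)^3 rho_m / 3."""
    rho_m = OMEGA_M * 2.775e11 * H**2   # comoving matter density, Msun/Mpc^3
    return math.pi / (3 * m_vir / (4 * math.pi * rho_m))**(1.0/3.0)

def growing_mode(theta):
    """Growing-mode amplitude of Eq. (2)."""
    return (3*math.cos(theta) - theta**2*math.cos(theta)
            + 3*theta*math.sin(theta)) / theta**2

def theta(k, z, m22):
    """Dimensionless argument of Eq. (2); k comoving, in Mpc^-1."""
    m_psi = m22 * 1e-22 * EV_KG
    return (HBAR * (k / MPC_M)**2 * math.sqrt(1 + z)
            / (m_psi * H * H100 * math.sqrt(OMEGA_M)))

def xi(k, m22, z_start=30.0, z_end=4.0):
    """Growth-rate ratio of psiDM to CDM over [z_start, z_end], Eq. (4)."""
    psidm = (growing_mode(theta(k, z_end, m22))
             / growing_mode(theta(k, z_start, m22)))
    cdm = (1 + z_start) / (1 + z_end)   # k0 << k_J limit: rho_k ~ (1+z)^-1
    return psidm / cdm
```

With $\Mvir = 10^{10} \Msun$ (i.e., $k \approx 8 \Mpc^{-1}$), `xi(mass_to_k(1e10), 0.8)` indeed returns $\approx 0.99$, as quoted for $M_{\rm HST}$.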
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__InitPowerSpec.pdf}
\caption{
Linear matter power spectra at $z=30$ in CDM and $\psiDM$.
$\psiDM$ power spectra are obtained using Equations (\ref{eq:TranFunc})
and (\ref{eq:TranFuncHu}),
where we have assumed that $T_{\psiDM}$ can be well approximated as
redshift-independent during $30 \ge z \ge 0$ (see Fig. \ref{fig:EvolvePS}).
Note that the smaller the particle mass, the stronger
the suppression at the high-k end. Arrows indicate the half-mode wavenumbers,
$\khf$ (Eq. [\ref{eq:HalfMode}]), where the power spectra drop by a factor
of four compared to CDM.
}
\label{fig:InitPS}
\end{figure}
The $\psiDM$ transfer function at $z=0$ is given by \citep{Hu2000}
\be
T_{\psiDM} \approx \frac{\cos x^3}{1+x^8},\;\; x=1.61\,\PM^{1/18}\frac{k}{\kJeq},
\label{eq:TranFuncHu}
\ee
which we assume to be redshift-independent for the redshifts
and wavenumbers of interest. Here $\kJeq=9\,\PM^{1/2}\Mpc^{-1}$
is the Jeans wavenumber at matter-radiation equality.
\fref{fig:InitPS} presents the linear matter power spectra at $z=30$,
which exhibit sharp breaks at $k \sim \kJeq$
and strong oscillations for $k > \kJeq$.
We can also define the characteristic scale to be the `half-mode' scale,
$\khf$, where $T_{\psiDM}(\khf) = 1/2$. \eref{eq:TranFuncHu}
then gives
\be
\khf \approx 5.1\,\PM^{4/9}\Mpc^{-1},\;\;
M_{1/2} \approx 3.8\times10^{10}\,\PM^{-4/3} \Msun,
\label{eq:HalfMode}
\ee
where $M_{1/2} = 4\pi(\pi/\khf)^3\rho_m/3$ is the characteristic halo mass where
a noticeable difference between $\psiDM$ and CDM MFs is expected.
Note that the strong suppression at $k \sim \kJeq$ shown by \eref{eq:TranFuncHu}
is mainly determined during the radiation-dominated epoch \citep{Hu2000},
and thus cannot be solely explained by \eref{eq:GrowingMode} which is only
valid during the matter-dominated epoch.
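Equations (\ref{eq:TranFuncHu}) and (\ref{eq:HalfMode}) are straightforward to evaluate; the sketch below (illustrative Python, with the comoving matter density computed from the WMAP9 parameters of \sref{subsec:SimuChar}) confirms that $T_{\psiDM}(\khf) \approx 1/2$ and recovers the quoted half-mode mass:

```python
import math

def transfer_psidm(k, m22):
    """psiDM/CDM transfer function of Hu et al. (2000), Eq. (5);
    k is the comoving wavenumber in Mpc^-1."""
    k_jeq = 9.0 * math.sqrt(m22)          # Jeans wavenumber at equality
    x = 1.61 * m22**(1.0/18.0) * k / k_jeq
    return math.cos(x**3) / (1 + x**8)

def half_mode(m22, omega_m0=0.284, h=0.696):
    """Half-mode wavenumber (Mpc^-1) and mass (Msun), Eq. (6)."""
    k_half = 5.1 * m22**(4.0/9.0)
    rho_m = omega_m0 * 2.775e11 * h**2    # comoving density, Msun/Mpc^3
    m_half = 4 * math.pi / 3 * (math.pi / k_half)**3 * rho_m
    return k_half, m_half
```

For $\PM=1$ this gives $\khf \approx 5.1 \Mpc^{-1}$ and $M_{1/2} \approx 3.7\times10^{10} \Msun$, in line with \eref{eq:HalfMode}, while $T_{\psiDM} \to 1$ at small $k$.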
\subsection{Simulation Characteristics}
\label{subsec:SimuChar}
Bona fide $\psiDM$ simulations involve solving the Schr\"{o}dinger-Poisson
equation, which is extremely challenging owing to its wave nature. The matter
wave dispersion relation, $\omega \propto \lambda^{-2} \propto v^2$, where
$\omega$, $\lambda$, $v$ are the angular frequency, wavelength, and
velocity, respectively, indicates that exceptionally high spatial and
temporal resolutions are required for resolving the wave functions of
high-velocity flows throughout a simulation box (SCB14a).
Numerically, we find that a comoving spatial resolution as high as
$\sim 1 \kpch$ is required to properly resolve a flow with a moderate
peculiar velocity of $\sim 100 \kms$ at $z \sim 13$. Otherwise the
flow velocity can be underestimated, leading to a lower
mass accretion rate and an underestimated MF.
In the extreme case, a $\psiDM$ simulation in a $30 \Mpch$
box with a uniform $1 \kpch$ spatial resolution would consume $\sim 400$
terabytes of memory, which is impractical on any modern supercomputer.
As a result, even the state-of-the-art $\psiDM$ simulations currently can
only fully resolve a comoving box as small as $1.4 \Mpch$ to $z=0$
(SCB14a).
In this paper we mainly focus on determining the $\psiDM$ halo MF
above $\sim 1\times10^9 \Msun$ at $z \ge 4$, for which most halos are
isolated and thus insensitive to the subtle differences
between CDM and $\psiDM$ halos. Nor are we interested here in the complex
wave nature of the internal density profiles of the halos, which we have
already established in our previous wave-based simulations (SCB14a, S14b).
As we demonstrated in the previous subsection,
the growth rates of density perturbations in CDM and $\psiDM$ are similar in
the context of this work and differ mainly in their initial amplitudes.
Moreover, S14b verified that CDM and $\psiDM$ halos have
similar virial masses during the same collapse process. All these facts
indicate that, \emph{for the purpose of this study, it is appropriate to use
simulations of collisionless particles with $\psiDM$ initial power
spectra to approximate real $\psiDM$ simulations}. This is the approach
adopted in this work, which is essentially the same as most WDM simulations,
where initial thermal velocities are ignored. Real $\psiDM$ simulations
to support these arguments, solving either the wave function directly or an
alternative fluid-like formulation \citep[e.g.,][]{Mocz2015,Marsh2015b}, are
left for future work.
All simulations are run from $z=100$ to 4. Since the linear power spectra relevant
for this study do not change in shape after $z=30$, we can directly apply
\eref{eq:TranFuncHu} to obtain the $\psiDM$ power spectra at $z=30$
(see Fig. \ref{fig:InitPS}). Because nonlinearity sets in as early as
$z \sim 30$ for the rare non-Gaussian peaks that seed the first galaxies,
we then extrapolate the $z=30$ spectra back to $z=100$, where
the amplitude is $\sim 3.3$ times smaller, to ensure all perturbations are
Gaussian. By doing so, the simulations of collisionless particles preserve
the shape of the spectra from $z=100$ to $z=30$ but allow for the
development of rare non-Gaussian peaks.
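The amplitude rescaling factor follows directly from linear theory in the matter-dominated era, where $\delta \propto D(a) \propto a = 1/(1+z)$; a one-line check:

```python
# Linear perturbations in the matter-dominated era grow as
# delta ~ D(a) ~ a = 1/(1 + z), so rescaling the z = 30 spectrum
# back to z = 100 reduces the amplitude by
scale = (1 + 100) / (1 + 30)
print(round(scale, 2))  # 3.26, consistent with the ~3.3 quoted above
```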
We perform simulations with the \textsc{GADGET-2} N-body code
\citep{Springel2005}. We adopt the \textsc{CAMB} package \citep{Lewis2000} to
generate the CDM transfer function, and construct initial conditions for
simulations using the \textsc{MUSIC} code \citep{Hahn2011}. We adopt the
cosmological parameters consistent with the
WMAP9 data \citep{WMAP9}: $\Omega_{m0}=0.284$, $\Omega_{\Lambda0}=0.716$,
$h=0.696$, $\sigma_8=0.818$, and $n_s=0.962$. We choose three different
simulation configurations with $(L,N)=(15\Mpch,\,512^3)$,
$(15\Mpch,\,1024^3)$, and $(30\Mpch,\,1024^3)$, where $L$ is the comoving
box size and $N$ is the total number of simulation particles. The
corresponding simulation particle masses are $\sim 2.8\times10^6 \Msun$ and
$\sim 3.6\times10^5 \Msun$ for the lower and higher mass-resolution
simulations, respectively. For each simulation configuration, we run four
different dark matter models: CDM, $\psiDM$ with $\PM=0.8$, 1.6, and 3.2.
\section{MASS FUNCTION}
\label{sec:MF}
The main aim of our simulations is to determine the halo MF
as a function of $\psiDM$ particle mass. Intuitively, a sharp break in the
initial power spectrum should translate into a strong suppression of
low-mass halos, as verified by the Sheth-Tormen \citep[][hereafter ST99]{Sheth1999} MF
with a sharp k-space window function \citep{Schneider2013}. However, it is
well known that particle simulations with an initial power-spectrum
cutoff suffer from the formation of spurious halos, especially at
low masses \citep{Wang2007,Angulo2013,Schneider2013}.
These spurious halos are caused by artificial fragmentation due to numerical
artifacts \citep{Wang2007}, and are mostly confined along cosmic filaments (see
Fig. \ref{fig:HaloMap}, upper panel). They outnumber genuine halos
below a characteristic mass, which linearly depends on the mean interparticle
separation \citep{Wang2007}, resulting in a prominent upturn in MF at the
low-mass end (see Fig. \ref{fig:MF}, open symbols). We define `protohalo' as
the initial particle positions of an identified halo. \citet{Lovell2014} showed
that the protohalos of genuine and spurious halos have distinct features.
Genuine protohalos are spheroidal and have a good match between low- and
high-resolution simulations, while spurious protohalos have disc-like shapes
and their masses and positions are sensitive to the simulation resolution,
and thus do not have clear counterparts in simulations with different
resolution.
To identify and remove these artificial halos,
we adopt an approach similar to that suggested by \citet{Lovell2014}, based
on the shape of the protohalos and the spatial overlap between
low-resolution protohalos and their high-resolution counterparts.
See Appendix \ref{sec:SpuriousHalo} for a more detailed description
of the algorithms adopted.
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__HaloMap_All_vs_Genuine.pdf}
\caption{
Density field at $z=4$ for $\psiDM$ simulations with $\PM=1.6$ in a
$15 \Mpch$ box. Each image displays a projected field for a $3 \Mpch$ thick
slab with a size of $2.70 \times 1.35 \Mpch$.
White and red circles show halos more massive than $2\times10^7 \Msun$
in the $512^3$ and $1024^3$ simulations, respectively, where the radii
of circles equal the halos' virial radii. The most massive halo has a mass
of $\sim 1\times10^{12} \Msun$. The upper panel shows both genuine and
spurious halos, while the lower panel only shows genuine halos. Suspicious
low-mass halos, which are mostly confined along filaments and have no clear
counterparts in the $512^3$ and $1024^3$ runs, are identified as spurious,
while only massive halos with a good match between low- and high-resolution
simulations are regarded as genuine.
}
\label{fig:HaloMap}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__MassFunc_NonCumulative_RealHalo.pdf}
\caption{
Halo mass function (MF) in logarithmic mass bins. The open symbols
represent the original MF containing both genuine and spurious
halos in the $30 \Mpch$ simulations with $1024^3$ particles, and the
filled symbols show the genuine MF with spurious halos removed in the
$15 \Mpch$ simulations using $512^3$ and $1024^3$ particles for
estimating the spatial overlap factor. The spurious halos outnumber genuine halos
at low masses, resulting in an unphysical upturn at the low-mass end of
the original MF, especially for lower $\PM$. By contrast, the genuine
MF reveals a prominent drop at the low-mass end, as anticipated and seen in our
high-resolution wave-based simulations, reported earlier. The shaded regions
indicate the uncertainties of genuine MF by varying $\Scut$ and $\Ocut$
by $\pm 20\%$ (see Appendix \ref{sec:SpuriousHalo}). Various lines show the
analytic form, \eref{eq:MFFit}, which fit the simulation results well.
Arrows mark the minimum $\psiDM$ halo masses proposed by S14b
for $\PM=0.8$.
}
\label{fig:MF}
\end{figure}
\fref{fig:MF} shows the halo MF obtained in our simulations. For comparison,
we show both the `original' MF (containing both genuine and spurious
halos) in the $30 \Mpch$ simulations with $1024^3$ particles, and the `genuine'
MF (with spurious halos removed) in the $15 \Mpch$ simulations using $512^3$
and $1024^3$ particles for estimating the spatial overlap factor.
The original $\psiDM$ MF shows a prominent upturn at the
low-mass end due to the contamination from spurious halos, especially for
lower $\PM$. By contrast, the genuine $\psiDM$ MF features a clear drop
at low masses for all redshifts and particle masses, apparently different
from CDM and in agreement with the expectation from a sharp break in the
$\psiDM$ initial power spectra. It is also consistent with the minimum
$\psiDM$ halo mass at $z \gtrsim 1$ proposed by S14b,
$M_{min} = 3.7\times10^7\,\PM^{-3/2}\,(1+z)^{3/4} \Msun$
(indicated by arrows in Fig. \ref{fig:MF} for $\PM=0.8$).
On the other hand, the original and genuine
CDM MFs are almost indistinguishable, which is no surprise since we assume
most CDM halos are genuine when calibrating the thresholds for removing
spurious halos.
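The minimum halo mass proposed by S14b is a simple scaling and easy to evaluate; the following sketch (illustrative Python, not part of our pipeline) reproduces the arrow positions in \fref{fig:MF}:

```python
def m_min(m22, z):
    """Minimum psiDM halo mass of S14b, in Msun:
    M_min = 3.7e7 * m22^(-3/2) * (1+z)^(3/4)."""
    return 3.7e7 * m22**-1.5 * (1 + z)**0.75
```

For $\PM=0.8$ this gives $M_{min} \approx 1.7\times10^8 \Msun$ at $z=4$, rising slowly with redshift and falling for larger particle masses, consistent with the low-mass drop seen in the genuine MF.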
The shaded regions in \fref{fig:MF} indicate the uncertainties of genuine
$\psiDM$ MF by varying $\Scut$ and $\Ocut$ by $\pm 20\%$ (see Appendix
\ref{sec:SpuriousHalo}).
This shows that the strong suppression of low-mass halos in $\psiDM$ is a
robust finding, but the exact slope at the low-mass end is still uncertain.
At the high-mass end ($\Mvir \gtrsim 10^{11} \Msun$)
the original MF is smoother because of the larger simulation box. Note, however,
that in the intermediate mass range
($\Mvir \sim 3\times10^9 - 1\times10^{11} \Msun$)
the original and genuine MFs are reasonably consistent with
each other, suggesting that in this mass range (i) most halos are genuine
and (ii) a $15 \Mpch$ simulation box is sufficient to obtain an accurate MF.
These results make the $\psiDM$ MF obtained in this work more robust
for the purpose of comparing with observations.
As shown in \fref{fig:MF}, the genuine $\psiDM$ MF can be well fitted by
the following analytic form:
\be
\left.\frac{dn}{d\Mvir}\right\rvert_{\psiDM}(\Mvir,z) =
\left.\frac{dn}{d\Mvir}\right\rvert_{{\rm CDM}}(\Mvir,z)
\left[ 1 + \left( \frac{\Mvir}{M_0} \right)^{-1.1} \right]^{-2.2},
\label{eq:MFFit}
\ee
where $dn/d\Mvir$ is the halo MF and $M_0=1.6\times10^{10}\,\PM^{-4/3} \Msun$
is the characteristic mass below which MF starts to drop noticeably.
CDM corresponds to $\PM \to \infty$.
The facts that $M_0$ has the same particle-mass dependence as the
half-mode mass $M_{1/2}$ (Eq. [\ref{eq:HalfMode}]) and that the $\psiDM$ MF
drops by a factor of two relative to CDM at $\Mvir \sim M_{1/2}$
reinforce our simulation results.
Also note that the suppression term,
$( 1 + (\Mvir/M_0)^{-1.1})^{-2.2}$, is redshift-independent. This is
expected, since in this work the effect of quantum pressure is taken into
account only in the initial conditions. In detail, the $\psiDM$
suppression of low-mass halos will be redshift-dependent, but
the characteristic mass $M_0$ is still expected to be almost
redshift-independent since it is mainly determined during the
radiation-dominated epoch \citep{Hu2000}. We emphasize that the faintest
galaxies currently observed at $z \gtrsim 4$
have $\Mvir \sim 10^{10} \Msun$ (see Fig. \ref{fig:CumuMF} and
Sec. \ref{subsubsec:CLF}), which is close to $M_0$ and hence is insensitive to
the uncertainties at low masses of MF ($\Mvir \lesssim 10^9 \Msun$) caused by
neglecting the dynamical effect of quantum pressure
and the removal of spurious halos. \eref{eq:MFFit} thus provides a very
convenient comparison between models and observations (see next section).
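As an explicit check of the properties claimed above, \eref{eq:MFFit} can be coded in a few lines (an illustrative Python sketch; multiply any tabulated CDM MF by this factor to obtain the $\psiDM$ MF):

```python
def suppression(m_vir, m22):
    """psiDM-to-CDM mass-function ratio of Eq. (7); approaches 1 both
    for m_vir >> M_0 and in the CDM limit m22 -> infinity."""
    m0 = 1.6e10 * m22**(-4.0/3.0)   # characteristic mass, Msun
    return (1 + (m_vir / m0)**-1.1)**-2.2
```

Evaluating it at the half-mode mass $M_{1/2} = 3.8\times10^{10}\,\PM^{-4/3} \Msun$ gives a suppression of $\approx 0.5$, as stated, while massive halos ($\Mvir \sim 10^{12} \Msun$) are essentially unsuppressed.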
\section{PREDICTIONS VS OBSERVATIONS}
\label{sec:Predictions}
The rest-frame UV LF at high redshifts has become increasingly well defined
and hence useful for testing a range of dark matter models in detail. The
physical mechanisms assumed to solve the small-scale issues of CDM in the
Local Group will likely suppress the formation of faint galaxies at high
redshifts as well,
so it is important to examine whether too few high-z galaxies
are created, with too small a Thomson optical depth to the CMB.
This is a well-known issue for WDM, usually termed the \emph{Catch 22}
problem \citep{Maccio2012}. In this section we examine the level of
consistency in the case of $\psiDM$.
\subsection{Cumulative Mass Function}
\label{subsec:CumuMF}
The cumulative galaxy number density, defined as the total number of galaxies
per unit comoving volume at a given redshift, can be converted into a lower
limit of $\PM$, below which the $\psiDM$ MF cannot account for the observed
counts of galaxies. To relate the UV magnitude $\MUV$ of a
galaxy to its corresponding halo mass, we first adopt the abundance
matching technique \citep{Vale2004} which equates the cumulative UV LF,
$\Psi(\mathord{<}\MUV,z)$, to the cumulative halo MF,
$n(\mathord{>}\Mvir,z)$. An alternative approach using the conditional LF
formalism will be discussed later.
For a given LF, one can apply abundance matching to either $\psiDM$ MF
with the particle masses of interest \citep{Bozek2015} or CDM MF
\citep{Schultz2014}, both of which have advantages and disadvantages.
The former provides a model-independent constraint since it simply checks
whether the total numbers of $\psiDM$ halos at various redshifts are sufficient
to account for the observed counts, regardless of the underlying
mass-luminosity ($M$-$L$) relation. However, the inferred mass-to-light ratio
features a sharp, and probably unphysical, drop at the faint end
\citep{Bozek2015}. This approach therefore provides a more conservative
estimation of $\PM$. By contrast, the latter leads to a power-law \ML
relation at the faint end \citep{Schultz2014}, which is more plausible.
However, it fundamentally assumes that CDM matches the observed LF perfectly
and that $\psiDM$ follows exactly the same \ML relation as CDM, neither of
which is necessarily true. Consequently, any suppression in the $\psiDM$ MF
translates directly into a deficit of galaxies, resulting in
a higher, and likely overestimated, lower limit for $\PM$.
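The matching step itself, $\Psi(\mathord{<}\MUV,z) = n(\mathord{>}\Mvir,z)$, amounts to inverting tabulated cumulative functions. A minimal sketch (Python with toy grids; linear interpolation only, whereas log-space interpolation would be more accurate, and this is not our actual pipeline):

```python
import numpy as np

def abundance_match(cum_lf, m_vir_grid, cum_mf):
    """Map cumulative LF values Psi(<M_UV) to halo masses by solving
    Psi(<M_UV) = n(>M_vir) on a tabulated cumulative mass function.
    m_vir_grid is increasing and cum_mf decreases with mass, so both
    arrays are reversed to satisfy np.interp's increasing-x requirement."""
    cum_lf = np.atleast_1d(cum_lf)
    return np.interp(cum_lf, cum_mf[::-1], m_vir_grid[::-1])
```

For example, on a toy grid where $n(>10^{10}\,\Msun) = 10^{-2}\,{\rm Mpc}^{-3}$, a galaxy population with that cumulative abundance is assigned $\Mvir = 10^{10} \Msun$.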
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__MassFunc_Cumulative_vs_Obs.pdf}
\caption{Cumulative mass function (MF) at $z=6-10$. The shaded regions indicate
the $2\sigma$ uncertainties. Error bars show the
observational constraints ($2\sigma$ at $z=6-8$ and $1\sigma$ at $z=10$),
which match the CDM MFs by construction, since they are derived
from applying abundance matching to the CDM MFs.
Note that in this approach the faintest galaxies currently observed at
$z \sim 4-10$ all have $\Mvir \sim 10^{10} \Msun$.
$\psiDM$ cumulative MFs have finite upper limits due to the
strong suppression of low-mass halos. $\PM=3.2$ and $1.6$ are consistent
with the observations \citep{Bouwens2007,McLure2013,Schenker2013,Finkelstein2014,Oesch2014,Bouwens2015b},
while $\PM=0.8$ has insufficient halos with $\Mvir \sim 10^{10} \Msun$
at $z=6-8$ (see text for details).
}
\label{fig:CumuMF}
\end{figure}
\fref{fig:CumuMF} shows our cumulative MFs for both CDM and $\psiDM$.
The CDM MF is constructed from a $30 \Mpch$ simulation
with $1024^3$ particles, and the $\psiDM$ MF is obtained by
integrating \eref{eq:MFFit}. The $2\sigma$ uncertainties are estimated
using bootstrap resampling over 125 subvolumes, each with a side length of
$6 \Mpch$. Here the halo masses corresponding to various observational data
points are determined by applying abundance matching to the CDM MF,
which essentially forces observations to be consistent with CDM.
In this approach, we find that the faintest galaxies currently observed at
$z \sim 4-10$ all have $\Mvir \sim 10^{10} \Msun$.
We find that $\psiDM$ with $\PM=3.2$ and $1.6$ remains consistent
with the observations, while $\PM=0.8$ does not have a sufficient number of
halos with $\Mvir \sim 10^{10} \Msun$ at $z=6-8$. Using the recent LF of
B15b leads to a lower limit of $\psiDM$ particle mass,
$\PM \ge 1.5$ ($2\sigma$). By contrast,
to infer the results of applying abundance matching to the $\psiDM$ MF,
one can simply shift the observational data points in \fref{fig:CumuMF}
toward smaller halo masses until, where possible, they touch the $\psiDM$
cumulative MF for a given particle mass. This approach decreases the lower
limit to $\PM \ge 0.9$ ($2\sigma$), and hence significantly reduces the tension
between observational constraints and smaller $\psiDM$ particle masses.
We emphasize that the two estimations of $\psiDM$ particle mass given above,
namely, $\PM \ge 1.5$ and $\PM \ge 0.9$, are determined using two extreme
models of \ML relation, and therefore likely bracket the uncertainty
of the lower limit of $\PM$.
In fact, there is little physical justification for applying the CDM-based
\ML relation from abundance matching to $\psiDM$. The lower limit
$\PM \ge 1.5$ is therefore likely overestimated.
In the next subsection we provide a more plausible
estimation of $\PM$ based on a less model-dependent \ML relation.
\subsection{Luminosity Function}
\label{subsec:LF}
\subsubsection{Conditional Luminosity Function}
\label{subsubsec:CLF}
As our preferred method to constrain the $\psiDM$ particle mass we adopt the
conditional LF model \citep{Cooray2005}, which has
been shown to reproduce the high-z UV LF well in the context of CDM \citep{Bouwens2008,Bouwens2015b}.
The conditional LF, denoted as $\phi_c(L|\Mvir,z)$, describes the probability density of halos with
mass $\Mvir$ hosting galaxies with UV luminosity of $L$. It is modeled by
a lognormal distribution,
\be
\phi_c(L|\Mvir,z) = \frac{1}{\sqrt{2\pi}\ln(10)\Sigma L}
\exp\left\{ -\frac{\log^2[L/L_c(\Mvir,z)]}{2\Sigma^2} \right\},
\label{eq:CondLF_Main}
\ee
which has a dispersion of $\ln(10)\Sigma$ and peaks at $L_c(\Mvir,z)$,
the \ML relation of the central galaxy. Following B15b,
we parameterize $L_c$ as
\be
L_c(\Mvir,z) = L_0 \frac{(\Mvir/M_1)^{p}}{1+(\Mvir/M_1)^{q}}
\left( \frac{1+z}{4.8} \right)^r,
\label{eq:CondLF_Lc}
\ee
where $M_1$ gives the characteristic halo mass. The \ML relation asymptotes
to $L_c \propto \Mvir^{p}$ when $\Mvir \ll M_1$ and $L_c \propto \Mvir^{p-q}$
when $\Mvir \gg M_1$. For a given $\phi_c$, the LF can then be calculated by
\be
\phi(L,z) = \int_{0}^{\infty} \phi_c(L|\Mvir,z) \frac{dn}{d\Mvir}(\Mvir,z)d\Mvir,
\label{eq:CondLF_Lc2L}
\ee
where $dn/d\Mvir$ is the halo MF.
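The formalism of Equations (\ref{eq:CondLF_Main})-(\ref{eq:CondLF_Lc2L}) is
straightforward to evaluate numerically. The sketch below uses a simple
trapezoidal quadrature with illustrative parameter values and a toy halo MF
(not the fits of Table 1 or the simulation MF); the final check verifies that
the lognormal conditional LF is properly normalized over $L$:

```python
import numpy as np

LN10 = np.log(10.0)

def trapezoid(y, x):
    """Trapezoidal quadrature on a (possibly non-uniform) grid."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def L_c(Mvir, z, L0=1.0, M1=3e11, p=1.5, q=1.2, r=1.9):
    """Median M-L relation (double power law with redshift scaling);
    parameter values here are illustrative."""
    x = Mvir / M1
    return L0 * x**p / (1.0 + x**q) * ((1.0 + z) / 4.8) ** r

def phi_c(L, Mvir, z, Sigma=0.16):
    """Lognormal conditional LF: probability density per unit L of a halo
    of mass Mvir hosting a galaxy of UV luminosity L."""
    return (np.exp(-np.log10(L / L_c(Mvir, z)) ** 2 / (2 * Sigma**2))
            / (np.sqrt(2.0 * np.pi) * LN10 * Sigma * L))

def phi_L(L, z, dn_dM):
    """Galaxy LF: integrate phi_c against a halo MF dn/dM (a callable)."""
    M = np.logspace(8, 14, 600)
    return trapezoid(phi_c(L, M, z) * dn_dM(M), M)

# toy halo MF (illustrative only)
dn_dM = lambda M: 1e-2 / M * (M / 1e10) ** -0.9 * np.exp(-M / 1e13)

# sanity check: phi_c integrates to unity over L at fixed Mvir
Lgrid = np.logspace(-6, 4, 2000)
assert abs(trapezoid(phi_c(Lgrid, 1e11, 6.0), Lgrid) - 1.0) < 1e-3
```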
\begin{table}
\setlength{\tabcolsep}{3pt}
\label{tab:CondLF}
\begin{center}
\caption{Parameters of the Conditional LF Model}
\begin{tabular}{lccccccc}
\hline
Model & $L_0$ & $M_1$ & $\Sigma$ & $p$ & $q$ & $r$ & $\chi_{red}^2$ \\
& ($M_{\rm AB}$) & ($\Msun$) \\
\hline
CDM & -20.7 & $2.7\times10^{11}$ & 0.16 & 1.6 & 1.2 & 1.9 & 1.4 \\
$\PM=3.2$ & -20.9 & $3.1\times10^{11}$ & 0.16 & 1.5 & 1.2 & 1.9 & 1.5 \\
$\PM=1.6$ & -21.1 & $4.0\times10^{11}$ & 0.16 & 1.4 & 1.2 & 1.9 & 1.9 \\
$\PM=0.8$ & -21.7 & $7.8\times10^{11}$ & 0.16 & 1.2 & 1.1 & 1.8 & 3.1 \\
B15b\tablenotemark{a} & -21.9 & $1.2\times10^{12}$ & 0.16 & 1.2 & 1.0 & 1.5 & \\
\hline
\end{tabular}
\end{center}
\tablenotemark{a} \citet{Bouwens2015b}.
\end{table}
Given the above conditional LF formalism, we then apply chi-square fitting
to the observed LF of B15b at $z=5-10$ to determine the parameter set
($L_0$, $M_1$, $\Sigma$, $p$, $q$, $r$). Table 1 shows the best-fit parameters
and the corresponding reduced chi-square ($\chi_{red}^2$). We fix $\Sigma$ to
$0.16$ both because it is not well constrained owing to the substantial uncertainties
at the bright end and because it does not influence the faint-end slope, which is
most important for constraining $\PM$.
Note that the faint-end \ML relation, $L_c \propto \Mvir^{p}$, is flatter for
smaller $\PM$
so as to compensate for the stronger suppression
of faint galaxies. In addition, note that from
\eref{eq:CondLF_Lc} the faintest galaxies currently observed at
$z \sim 4-10$ all have $\Mvir \sim 10^{10} \Msun$, consistent with the
results obtained by applying abundance matching to CDM
(see Fig. \ref{fig:CumuMF}).
A limiting absolute magnitude of $\MUV \sim -15$ at $z \sim 6$,
appropriate for JWST, corresponds to $\Mvir \sim 4\times10^9 \Msun$.
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__CondLFunc_vs_Obs_z05-10.pdf}
\caption{Luminosity function (LF) at $z=5-10$ predicted by the
conditional LF model. The shaded regions indicate the $2\sigma$
uncertainties. Error bars show the observed LFs ($2\sigma$ at $z=5-8$ and $1\sigma$ at $z=10$)
of B15b (open circles) and \citet[][open triangles]{Oesch2014}.
$\psiDM$ LF shows a drop at the faint end due to the suppression of low-mass halos.
$\PM=3.2$ and $1.6$ are consistent with the observations,
while $\PM=0.8$ shows a deficit of faint galaxies with $\MUV \gtrsim -18$,
especially at $z \le 6$ (see text for details).
Vertical dashed lines highlight the limiting absolute magnitudes of JWST,
which are assumed to be two magnitudes fainter than those of HST.
}
\label{fig:CondLF}
\end{figure}
\fref{fig:CondLF} shows our predicted galaxy UV LF at $z=5-10$ using the conditional
LF formalism described above. We use the ST99 MF
for CDM, and combine it with the ratio given in \eref{eq:MFFit}
to get the $\psiDM$ MFs with various $\PM$.
We add $2\sigma$ variations estimated from the MF at $\Mvir \sim 10^{10} \Msun$
in our $30 \Mpch$ simulations to capture the uncertainties
of the predicted LF around the faintest LF bins of B15b.
The $\psiDM$ LF shows a clear decline at the faint end, which is distinctly
different from the CDM prediction and will be directly testable
with forthcoming observations such as JWST.
Note that this feature results from the assumption of a power-law \ML
relation at the faint end (Eq. [\ref{eq:CondLF_Lc}]), and thus cannot be
captured by the usual abundance matching.
The cases $\PM=3.2$ and $1.6$ are found to be consistent with the current observations,
while $\PM=0.8$ shows an apparent deficit of faint
galaxies with $\MUV \gtrsim -18$, especially at $z \le 6$.
This result is consistent with the analysis of cumulative MF
(see \fref{fig:CumuMF}). Using the faintest LF bins of
B15b (open circles in Fig. \ref{fig:CondLF}) leads to $\PM \ge 1.2$ ($2\sigma$).
Note that this constraint lies in-between
those obtained in \sref{subsec:CumuMF} ($\PM \ge 0.9$ and $1.5$), as expected,
since the faint-end \ML relation adopted here can be regarded as a
compromise between the two extreme cases using abundance matching
to CDM and to $\psiDM$ MFs, respectively.
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__CondLFunc_vs_Obs_z02-04.pdf}
\caption{
Luminosity function (LF) at $z=2-4$ predicted by the conditional LF model.
The shaded regions indicate the $2\sigma$ uncertainties. The data with $2\sigma$ error bars
are the LFs determined from B15b (open circles) and \citet[][open squares]{Parsa2015}.
The observed faint-end slope in this redshift range is consistent with the
$\psiDM$ model with $\PM \sim 1.6$ (see text). Notice that CDM overestimates
the number density of faint galaxies, especially at $z=3$.
}
\label{fig:LF2to4}
\end{figure}
It is also interesting to extend the comparison of LF to $z<5$
(Fig. \ref{fig:LF2to4}). To be self-consistent, we adopt the same
parameters of the conditional LF model shown in Table 1.
\citet{Parsa2015} recently found a much shallower faint-end slope at
$z=2-4$, distinctly different from the steep slope reported previously
\citep{Reddy2009,Alavi2014,Bouwens2015b}. $\psiDM$ with
$\PM \sim 1.6$ is found to provide a clearly better fit to this shallower slope, while CDM
overestimates the number density of faint galaxies, especially at $z=3$. It is thus
very important for future research to understand this apparent discrepancy
between the faint-end slopes determined by \citet{Parsa2015} and
the previous studies at $z<5$.
It should be emphasized that the conditional LF model provides a more
reasonable constraint on the $\psiDM$ particle mass. At first glance,
it may seem that this approach has too many free parameters to provide an
appropriate estimation. In fact, however, the only relevant parameter for
constraining $\PM$ is $p$, the faint-end slope of the \ML relation, as argued
below. Since the faint-end slope of the LF is insensitive to $\Sigma$, we have
$L \sim L_c \propto \Mvir^{p}$. This leads to
$\phi(\MUV)=0.4\ln(10)L\phi(L)=0.4\ln(10)p^{-1}\dndlnM$,
where $\dndlnM$ is the halo MF in logarithmic mass bins.
Therefore, for a given $p$, a maximum observed $\phi(\MUV)$ can be directly
converted to a minimum required peak $\dndlnM$, which then turns into
a lower limit of $\PM$. Moreover, if $\dndlnM \propto \Mvir^\eta$, which is
appropriate when $\Mvir \gg M_0$ in \eref{eq:MFFit} (i.e., when $\psiDM$
is still close to CDM), then $\phi(L) \propto L^{\eta/p-1}$. Since $\eta$
can be estimated by the ST99 MF, we can determine
$p$ from a given $\phi(L)$. Accordingly, \emph{$p$ itself is not unconstrained.}
As an example, consider the $\psiDM$ model with $\PM=1.6$ at $z=6$.
The observed LF has $\phi(L) \propto L^{-2.1}$ at $\MUV \sim -19$.
This luminosity corresponds to a halo mass of $\Mvir \sim 7\times10^{10} \Msun$,
at which $\dndlnM \propto \Mvir^{-1.5}$ (see Fig. \ref{fig:MF}). We thus
have $p=-1.5/(-2.1+1) \sim 1.4$, consistent with Table 1. The MF
has a peak of $\dndlnM \sim (1.0 \pm 0.3)\times10^{-2} \Mpc^{-3}$ around
$\Mvir = 10^{10} \Msun$, which can then be converted to a maximum LF of
$\phi(\MUV) = (6.8 \pm 2.0)\times 10^{-3} \Mpc^{-3}$.
This estimation is in excellent agreement with \fref{fig:CondLF},
even though the only assumption made here is that the \ML relation
is a power law at the faint end.
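The arithmetic of this worked example can be verified directly (all numbers
are taken from the text):

```python
import numpy as np

s = -2.1           # observed phi(L) ∝ L^s at MUV ~ -19 (z = 6)
eta = -1.5         # dn/dlnM ∝ Mvir^eta at Mvir ~ 7e10 Msun
p = eta / (s + 1)  # from phi(L) ∝ L^(eta/p - 1)
assert abs(p - 1.36) < 0.01            # text quotes ~1.4, as in Table 1

dn_dlnM_peak = 1.0e-2                  # Mpc^-3, MF peak near 1e10 Msun
phi_max = 0.4 * np.log(10) / p * dn_dlnM_peak
assert abs(phi_max - 6.8e-3) < 2e-4    # text: (6.8 ± 2.0)e-3 Mpc^-3
```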
Therefore, we conclude that the conditional LF model provides
a more plausible and less model-dependent estimation of the
$\psiDM$ particle mass than with abundance matching.
\subsubsection{Truncated Schechter Function}
\label{subsubsec:ModScheFunc}
It is also useful and convenient to parameterize the predicted LF of $\psiDM$
by a formula similar to the Schechter function. We adopt
\be
\phi(L) = \frac{\pstar}{\Lstar} \left( \frac{L}{\Lstar} \right)^\alpha
\exp\left( -\frac{L}{\Lstar} \right) \Gamma(L),
\label{eq:ModScheFunc_Main}
\ee
where
\be
\Gamma(L) = \left[ 1 + \left( \frac{L}{\Lpsi} \right)^{\gamma} \right]^{\beta/\gamma}
\label{eq:ModScheFunc_Supp}
\ee
represents the suppression of faint galaxies in $\psiDM$ ($\Gamma=1$ for CDM).
$\Lpsi$ is the characteristic luminosity of the suppression, below which
$\phi(L)$ asymptotes to $L^{\alpha+\beta}$. To describe the
time evolution of LF, we follow the literature
\citep{Bouwens2012,Kuhlen2012,Schultz2014,Bouwens2015a} and assume that the parameters
in Equations (\ref{eq:ModScheFunc_Main}) and (\ref{eq:ModScheFunc_Supp}) depend
linearly on redshift.
Applying chi-square fitting to the conditional LF model
(Fig. \ref{fig:CondLF}) then leads to
\begin{eqnarray}
\Mstar &=& -20.90 - 0.004(z-6) \nonumber \\
\pstar &=& 0.52\times10^{-0.28(z-6)-3} \Mpc^{-3} \nonumber \\
\alpha &=& -1.78 - 0.06(z-6) \nonumber \\
\Mpsi &=& -17.44 + 5.19\log(\PM/0.8) - 2.71\log((1+z)/7) \nonumber \\
\beta &=& 1.69 + 0.03(z-6) \nonumber \\
\gamma &=& -1.10,
\label{eq:ModScheFunc_Para}
\end{eqnarray}
where we have assumed $\MUV = -2.5 \log(L/\rm erg \, s^{-1} \, Hz^{-1})+51.6$.
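The truncated Schechter fit of Equations
(\ref{eq:ModScheFunc_Main})-(\ref{eq:ModScheFunc_Para}) can be transcribed
into code as follows, converting magnitudes via
$L/\Lstar = 10^{0.4(\Mstar-\MUV)}$ and
$\phi(\MUV) = 0.4\ln(10)\,L\,\phi(L)$ (a minimal sketch using only the fitted
coefficients listed above):

```python
import numpy as np

def trunc_schechter(MUV, z, m22=None):
    """phi(MUV) [Mpc^-3 mag^-1] from the truncated Schechter fit;
    m22=None gives the CDM limit (no suppression, Gamma = 1)."""
    Mstar = -20.90 - 0.004 * (z - 6)
    phistar = 0.52e-3 * 10 ** (-0.28 * (z - 6))     # Mpc^-3
    alpha = -1.78 - 0.06 * (z - 6)
    beta = 1.69 + 0.03 * (z - 6)
    gamma = -1.10
    x = 10 ** (0.4 * (Mstar - MUV))                 # L / Lstar
    if m22 is None:
        Gamma = 1.0
    else:
        Mpsi = (-17.44 + 5.19 * np.log10(m22 / 0.8)
                - 2.71 * np.log10((1 + z) / 7))
        xp = 10 ** (0.4 * (Mpsi - MUV))             # L / Lpsi
        Gamma = (1 + xp ** gamma) ** (beta / gamma)
    return 0.4 * np.log(10) * phistar * x ** (alpha + 1) * np.exp(-x) * Gamma

# suppression strengthens toward fainter magnitudes and lighter particles
lf_cdm = trunc_schechter(-16.0, 6.0)
lf_08 = trunc_schechter(-16.0, 6.0, m22=0.8)
lf_32 = trunc_schechter(-16.0, 6.0, m22=3.2)
assert lf_08 < lf_32 < lf_cdm
```

Since $\gamma < 0$, the factor $\Gamma$ is always below unity and approaches
one only for $L \gg \Lpsi$, so the bright end is essentially unaffected.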
\fref{fig:ModScheFunc} shows the LF obtained by the truncated Schechter
function described above. It fits the LF predicted by the
conditional LF model well at $z=4-8$, while at the bright end at $z=10$ it slightly
overpredicts the observed galaxy counts (see B15b and references therein) and
is marginally consistent with the conditional LF model.
This subtle discrepancy mainly results from the assumption of linear
dependence on redshift in \eref{eq:ModScheFunc_Para}, and it is unclear whether
it indicates a faster evolution at $z > 8$ given the substantial
uncertainties in the LF at $z=10$.
This possible acceleration in evolution
has been successfully modeled recently in the context of abundance matching
for CDM by additionally incorporating early stellar evolution \citep{Mason2015}.
It may be interesting to apply this approach in the context of $\psiDM$
as well, to extend predictions to $z \ge 10$ with greater confidence.
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__ModifiedSchechterFunc.pdf}
\caption{Luminosity function (LF) at $z=4-10$ obtained by a single
analytic formula similar to the Schechter function
(Eqs. [\ref{eq:ModScheFunc_Main}-\ref{eq:ModScheFunc_Para}]; central lines).
The shaded regions are the same as Fig. \ref{fig:CondLF}, showing the LF
predicted by the conditional LF model within $2\sigma$.
Error bars show the observed LFs ($2\sigma$ at $z=4-8$ and $1\sigma$ at $z=10$)
of \citet[][open squares]{Parsa2015}, B15b (open circles), and \citet[][open triangles]{Oesch2014}.
The analytic formula reproduces the conditional LF results well at $z=4-8$,
while at $z=10$ it slightly overpredicts the observed galaxy counts and is
marginally consistent with the conditional LF model.
}
\label{fig:ModScheFunc}
\end{figure}
\subsection{Magnification Bias for the Hubble Frontier Fields}
\label{subsec:MagBias}
We have seen above that the quantum pressure inherent to $\psiDM$ leads to a
suppression of low-mass galaxies, and hence our predictions for the LF have
the largest contrast with CDM at low luminosities. Here we examine the benefits of
gravitational magnification in the new HFF data, where statistical samples of
multiply lensed galaxies are magnified by typically $\sim 10$ \citep{Lam2014},
reaching two magnitudes or more further down the LF at high redshifts. This
corresponds to an intrinsic UV luminosity of $\MUV \sim -15$, where we
predict a sizeable difference between CDM and $\psiDM$ for the interesting
range of $\PM$ needed to provide the kpc-scale dark cores of local dSph
galaxies.
Gravitational lensing induces a bias in the number density of sources
detected above a flux limit, which in the case of lensing clusters is well
established \citep{Broadhurst1995,Umetsu2008} and has led to the detection
of the highest redshift and lowest luminosity galaxies currently known
\citep{Zheng2012,Coe2013,Zitrin2014}. The number density of such high-redshift
galaxies is modified in the following way:
\be
N_{\rm lensed}(\mathord{>}L) = \frac{1}{\mu}\,N_{\rm unlensed}(\mathord{>}L/\mu),
\label{eq:MagBias}
\ee
where $\mu$ is the magnification factor and $N(\mathord{>}L)$ is the galaxy number density
above a flux limit corresponding to $L$. It shows the competition between the
enhanced number density due to the lower, magnified limiting luminosity and
the diminished source plane area, which is smaller than the observed area
by the same magnification factor. The magnification bias can thus be
defined as $N_{\rm lensed}(\mathord{>}L) / N_{\rm unlensed}(\mathord{>}L)$, which equals one
when there is no bias.
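\eref{eq:MagBias} makes the competition explicit: a steep cumulative LF wins
over the area dilution, whereas a saturated (turned-over) one does not. A
minimal numerical illustration with toy cumulative LFs, not our fitted ones:

```python
import numpy as np

def magnification_bias(mu, L_lim, N_cum):
    """N_lensed(>L) / N_unlensed(>L) for magnification mu (Eq. MagBias);
    N_cum is a callable returning the unlensed cumulative number density."""
    return N_cum(L_lim / mu) / (mu * N_cum(L_lim))

# steep faint end (CDM-like): N(>L) ∝ L^-1.1, so the bias exceeds unity
N_steep = lambda L: L ** -1.1
assert magnification_bias(10.0, 1.0, N_steep) > 1.0

# turned-over LF (psiDM-like): counts saturate below the limit, bias < 1
N_flat = lambda L: 1.0
assert magnification_bias(10.0, 1.0, N_flat) < 1.0
```

In the saturated case the bias reduces to $1/\mu$, since magnifying the limit
adds no new sources while the source-plane area shrinks by $\mu$.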
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__LensedCumuLF.pdf}
\caption{
Magnification bias predicted by $\psiDM$ and CDM at $z=10$. The numbers at
the top of each panel indicate the adopted limiting luminosities. For CDM the
magnification bias continually rises owing to the steep faint-end slope of the
LF, whereas for $\psiDM$ the bias is generally lower than one because of the
strong suppression of low-mass halos. The lower the limiting luminosity, the
greater the contrast between $\psiDM$ and CDM.
}
\label{fig:MagBias}
\end{figure}
\fref{fig:MagBias} shows the difference between the magnification bias for
CDM and $\psiDM$ at $z=10$, which we predict to be a strong function of limiting
luminosity because of the difference in sign in the faint-end slope of the LFs
between these two models. The CDM LF is continually rising and hence the
magnification bias is greater than one, thereby enhancing the number of faint
galaxies detected at high redshifts, whereas for $\psiDM$ the turnover in the
LF leads to many fewer galaxies magnified above the flux limit.
The difference we predict at $\MUV=-14.5$ for example is a factor of
$\sim 10$ for $\mu \sim 10$ at $z \sim 10$ between CDM and $\psiDM$ with
$\PM \sim 1.2$. Going beyond this with JWST should probe another two
magnitudes deeper with the assistance of lensing \citep{Mason2015}, most
efficiently by employing the same deep lenses as the HFF, for
which the magnification maps have been widely studied \citep{Rodney2015} and
are best understood \citep{Lam2014,Diego2015}.
\subsection{Reionization}
\label{subsec:Reionization}
Based on the predicted $\psiDM$ LF, we
can calculate the reionization history using the standard approach that has been adopted for
various dark matter models \citep[e.g.,][]{Kuhlen2012,Schultz2014,Bozek2015}. The time
evolution of the volume filling fraction of ionized hydrogen, $\QHII(z)$,
is governed by
\be
\frac{d\QHII}{dt} = \frac{\nidot}{\nHbar} - \frac{\QHII}{\trec},
\label{eq:QHII}
\ee
where $\nHbar$ is the mean comoving hydrogen number density. We take
$\QHII=0$ at $z=25$. The volume-averaged recombination time, $\trec(z)$,
can be determined by
\be
\trec \sim 0.93\left( \frac{\CHII}{3} \right)^{-1}
\left( \frac{1+z}{7} \right)^{-3} {\rm Gyr},
\label{eq:trec}
\ee
where $\CHII \equiv \langle \nHII^2 \rangle / \langle \nHII \rangle^2$ is the
volume-averaged clumping factor. Here we have assumed an intergalactic medium
temperature of $2 \times 10^4\,{\rm K}$ and a primordial hydrogen mass
fraction of $0.76$.
The comoving ionizing emissivity, $\nidot(z)$, defined as the number of ionizing
photons produced per unit time per unit comoving volume, can be estimated from
the galaxy UV LF:
\be
\nidot = \frac{2 \times 10^{25}}{\rm erg\,Hz^{-1}} \, \zetai \fesc \int_{\Llim}^{\infty}
dL \phi(L) L,
\label{eq:niondot}
\ee
where $\zetai$ represents the efficiency of converting galaxy UV luminosity to
ionizing photon luminosity, and $\fesc$ is the escape fraction. Strictly speaking,
since the observed rest-frame UV luminosity (at $\sim$ 1500\,\AA) will also be
extinguished by dust, $\fesc$ in \eref{eq:niondot} is the \emph{relative} escape
fraction \citep{Steidel2001} defined as the \emph{absolute} escape fraction
(the fraction of `ionizing photons' that escapes the galaxy without being absorbed
by dust and neutral hydrogen) divided by the fraction of `UV photons' that
escapes. This relative escape fraction can be significantly higher than the absolute
escape fraction because of the efficient dust extinction at $\sim$ 1500\,\AA.
$\Mlim = -2.5 \log(\Llim/\rm erg \, s^{-1} \, Hz^{-1})+51.6$ is the limiting
UV magnitude, below which galaxy formation is assumed
to be inefficient. Note that $\nidot$ is sensitive to the faint-end
slope of the LF that differentiates various dark matter models.
For a given $\QHII(z)$, the Thomson optical depth to the CMB can be calculated via
\be
\taue = c \, \sigma_{\rm T} \, \nHbar \int_0^\infty dz \frac{(1+z)^2}{H(z)} \QHII(z)
(1 + \eta(z) Y/4X),
\label{eq:taue}
\ee
where $\sigma_{\rm T}$ is the Thomson cross-section, $H(z)$ is the Hubble parameter,
$X \sim 0.76$ and $Y=1-X$ are the primordial mass fraction of hydrogen and
helium, respectively, and we take $\eta(z>4)=1$ when helium is only singly ionized
and $\eta(z\le4)=2$ when helium is doubly ionized by quasars.
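Equations (\ref{eq:QHII})-(\ref{eq:taue}) can be integrated with a simple
explicit scheme. The sketch below assumes illustrative flat $\Lambda$CDM
parameters and a constant toy emissivity of $\sim 10^{50.5}$ photons
$\rm s^{-1} \, Mpc^{-3}$ purely for demonstration; the actual $\nidot$ follows
from \eref{eq:niondot} and the fitted LFs:

```python
import numpy as np

# flat LCDM, illustrative values
H0 = 67.7 * 3.241e-20                  # km/s/Mpc -> s^-1
Om, OL = 0.31, 0.69
nH = 1.9e-7                            # mean comoving H density, cm^-3
sigT, c = 6.652e-25, 2.998e10          # cm^2, cm/s
X = 0.76; Y = 1.0 - X
GYR = 3.156e16                         # s

H = lambda z: H0 * np.sqrt(Om * (1 + z) ** 3 + OL)
t_rec = lambda z, C=3.0: 0.93 * (C / 3.0) ** -1 * ((1 + z) / 7.0) ** -3 * GYR

def Q_HII(nion_dot, C=3.0, z_start=25.0, z_end=4.0, nz=4000):
    """Euler-integrate dQ/dt = nion_dot/nH - Q/t_rec, with
    dt = -dz / [(1+z) H(z)]; Q is capped at full ionization."""
    zs = np.linspace(z_start, z_end, nz)
    Q = np.zeros(nz)
    for i in range(1, nz):
        dt = (zs[i - 1] - zs[i]) / ((1 + zs[i]) * H(zs[i]))
        Q[i] = min(Q[i - 1] + (nion_dot(zs[i]) / nH
                               - Q[i - 1] / t_rec(zs[i], C)) * dt, 1.0)
    return zs, Q

def tau_e(zs, Q):
    """Thomson optical depth with eta = 1 (2) for z > 4 (z <= 4)."""
    eta = np.where(zs > 4, 1.0, 2.0)
    f = (1 + zs) ** 2 / H(zs) * Q * (1 + eta * Y / (4 * X))
    return c * sigT * nH * np.sum(0.5 * (f[1:] + f[:-1]) * -np.diff(zs))

# toy constant comoving emissivity: 1e-23 s^-1 cm^-3 (~1e50.5 s^-1 Mpc^-3)
zs, Q = Q_HII(lambda z: 1.0e-23)
```

Note that the integral in $\taue$ here runs only over $z > 4$, so the
contribution from the fully ionized low-redshift universe must be added
separately in a complete calculation.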
There are three free parameters in Equations (\ref{eq:QHII})-(\ref{eq:taue}), namely,
$\CHII$, $\Mlim$, and $\zetai \fesc$ ($\zetai$ and $\fesc$ are fully degenerate), which,
for simplicity, are assumed to be spatially uniform and redshift-independent in this
work. The typical parameter ranges adopted in the literature are
$\CHII=2\sim5$, $\Mlim=-17\sim-10$, $\zetai=0.5\sim2.0$, and $\fesc=0.1\sim0.5$
\citep{Bouwens2012,Kuhlen2012,Schultz2014,Bozek2015,Bouwens2015a,Robertson2015}.
To bracket the
uncertainties in these parameters, we consider three different parameter sets: a minimum
reionization model (MIN) with $(\CHII,\zetai \fesc,\Mlim)=(4.0,0.2,-13)$, a fiducial
reionization model (FID) with $(\CHII,\zetai \fesc,\Mlim)=(3.0,0.6,-13)$, and a maximum
reionization model (MAX) with $(\CHII,\zetai \fesc,\Mlim)=(2.0,1.0,-13)$. The adopted
values are also shown in Table 2. Note that since the results for $\psiDM$
should be insensitive to $\Mlim$ (because of the strong suppression of small
halos), we fix $\Mlim=-13$ unless otherwise specified.
\begin{table}
\label{tab:reionpara}
\begin{center}
\caption{Reionization parameters}
\begin{tabular}{lccc}
\hline
Model & $\CHII$ & $\zetai \fesc$ & $\Mlim$\tablenotemark{a} \\
\hline
MIN & 4.0 & 0.2 & -13 \\
FID & 3.0 & 0.6 & -13 \\
MAX & 2.0 & 1.0 & -13 \\
\hline
\end{tabular}
\end{center}
\tablenotemark{a} $\Mlim$ is allowed to vary when computing $\taue$.
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__Reionization_Q_vs_z.pdf}
\caption{Volume filling fraction of HII as a function of redshift. The central lines
correspond to the FID model and the shaded regions are bounded by the
MIN and MAX models. The limiting UV magnitude is fixed to $\Mlim=-13$.
Open circles and arrows mark the observational constraints
at $z \sim 6-8$ \citep{Fan2006,Schroeder2013,Schenker2014,McGreer2015}.
}
\label{fig:QHII}
\end{figure}
\fref{fig:QHII} shows the time evolution of $\QHII$ for various models.
For a given set of reionization parameters, $\psiDM$ predicts a faster increase
of ionized volume at later times. Recent observations indicate that reionization
undergoes a rapid evolution at $6<z<8$ and is completed at $z \sim 6$
\citep{Fan2006,Schroeder2013,Schenker2014,McGreer2015}. Correspondingly,
the required ionizing photon production efficiency, $\zetai \fesc$, is
$\sim 0.6$ for $\psiDM$ with $\PM=0.8$,
about two times higher than that required for CDM
($\zetai \fesc \sim 0.3$).
For $\psiDM$ this result is insensitive to the values of
both $\Mlim$ and $\CHII$ probed in this work.
\begin{figure}[t!]
\centering
\includegraphics[width=8.5cm]{Fig__Reionization_Tau_vs_Mlim.pdf}
\caption{Thomson optical depth versus limiting UV magnitude. The central lines
correspond to the FID model and the shaded regions are bounded by the
MIN and MAX models, where $\Mlim$ is allowed to vary. The cross-hatched
region shows the Planck 2015 $1\sigma$ confidence limit \citep{Planck2015}.
}
\label{fig:taue}
\end{figure}
\fref{fig:taue} shows the Thomson optical depth as a function of $\Mlim$ for
various models. The most recent Planck 2015 results give
$\taue = 0.066 \pm 0.016$ \citep[$1\sigma$,][]{Planck2015},
which is marginally consistent with $\PM=0.8$ assuming
$\zetai \fesc \sim 0.6$, in agreement with the estimation from $\QHII(z)$.
Note that this estimation is largely independent of both $\CHII$ and $\Mlim$
when $\Mlim \gtrsim -15$, since galaxies fainter than this magnitude
are highly suppressed at $z\ge6$ even for $\PM = 3.2$.
Given the considerable uncertainties in the reionization model at high
redshifts, it is clear that neither $\QHII(z)$ nor $\taue$
provides a stringent constraint on $\psiDM$, even for $\PM \sim 0.8$.
In comparison, CDM with an ionizing photon production efficiency as low as
$\zetai \fesc \sim 0.2$ can be consistent with Planck 2015 assuming
$\Mlim \gtrsim -13$, in agreement with the findings of
\citet{Bouwens2015a} and \citet{Robertson2015}.
\citet{Bozek2015} used a similar approach to estimate the $\psiDM$ particle mass.
They reported that $\PM = 1.0$ is disfavored by the observed value of $\taue$
at $3\sigma$, and consequently a particle mass as high as $\PM = 10$ is required,
significantly higher than our estimation. This inconsistency with our result mainly arises
from a higher value of $\taue$ they adopted from the previous Planck 2013 results,
$\taue = 0.090 \pm 0.013$ \citep[$1\sigma$,][]{Planck2013,Spergel2015}.
Also note that the LF we predict based on the `conditional LF model'
declines at the faint end (see Fig. \ref{fig:CondLF}),
while the LF adopted by \citet{Bozek2015} based on the `abundance matching'
approach does not have
this natural feature. Such a relative deficit of faint galaxies in our LF model
would only delay the reionization process and reduce the Thomson optical depth,
hence increasing the discrepancy between observations and the $\psiDM$ model
with a small particle mass. However, here we demonstrate that $\PM \sim 0.8$
can still be consistent with the latest Planck observations provided that the ionizing
photon production efficiency is sufficiently high.
\section{DISCUSSION AND CONCLUSION}
\label{sec:Discussion}
In this paper, we have constructed cosmological simulations designed to study the dark matter
halo MF in the wave dark matter ($\psiDM$) scenario. Here
the uncertainty principle counters gravity below a Jeans scale, which is
determined by the only free parameter in this model, $\PM$, the dark matter
particle mass. The smaller the particle mass, the larger the Jeans
scale, and hence the stronger the suppression of low-mass halos.
For this reason, we focus on determining a lower limit of $\PM$ based
on the observed UV LF at $z \sim 4-10$ and the reionization history.
The major findings in this study can be briefly summarized as follows:
\begin{itemize}
\item{The $\psiDM$ halo MF has a prominent drop below $\sim 10^{10} \Msun$,
which is well fitted by \eref{eq:MFFit}.}
\item{$\psiDM$ predicts a clear drop in the galaxy LF at $\MUV \gtrsim -16$
at $z \gtrsim 4$ based on a conditional LF model, which can be fitted by
a truncated Schechter function
(Eqs. [\ref{eq:ModScheFunc_Main}-\ref{eq:ModScheFunc_Para}]).}
\item{The newly established LF at $z\sim4-10$ constrains the $\psiDM$
particle mass to be $\PM \ge 1.2$ ($2\sigma$).}
\item{For galaxies magnified $\mathord{>}10\times$ in the Hubble Frontier Fields, $\psiDM$
predicts an order of magnitude fewer detections than CDM at $z \gtrsim 10$
down to an intrinsic UV luminosity of $\MUV \sim -15$}.
\item{$\psiDM$ with $\PM \gtrsim 0.74$ can satisfy the Thomson optical depth
reported by the latest Planck observations, on the assumption of a
reasonable ionizing photon production rate.}
\end{itemize}
In the following we give a more thorough discussion of this work.
We first argue that, for studying the $\psiDM$ MF with the
particle masses, redshift range, and halo masses of interest here
($\PM \sim 1$, $z\sim 4-10$, $\Mvir \gtrsim 1\times10^9 \Msun$),
it is reasonable to approximate both the $\psiDM$ transfer function as
redshift-independent and the evolution of the quantum fluid by collisionless
particles. The major drawback of these approximations is the inability
to capture the difference in the substructures between CDM and $\psiDM$
halos, where $\psiDM$ halos have prominent solitonic cores in the centers
surrounded by fine-scale, large-amplitude cellular interference (SCB14a, S14b).
However, this shortcoming is irrelevant when
one is only concerned with the halo masses, as in this work. This is
especially true because most halos above $\sim 10^9 \Msun$ have yet to
merge gravitationally with each other at $z \gtrsim 4$.
Simulations of collisionless particles with a cut-off in the initial power
spectrum suffer from the well-known side effect of inducing spurious halos due to
numerical artefacts, which must be accounted for when determining an accurate MF
at low masses. To identify and remove these artificial halos,
we adopt an approach similar to that suggested by \citet{Lovell2014}, based
on the shape of the progenitors of halos and the spatial overlap between
low-resolution halos and their high-resolution counterparts (see Appendix \ref{sec:SpuriousHalo}).
The resulting MF cleaned in this way then shows a clear decline
as expected below the Jeans mass and can be well fitted by \eref{eq:MFFit}.
Most importantly, this reinforces the MF we derive above
$\sim 3\times10^9 \Msun$, which is most relevant for comparison with
observations.
Comparing the halo MF with the observed UV LF
requires knowledge of the \ML relation. For this purpose, we first apply the
abundance matching technique to either CDM or $\psiDM$ MFs. In
both cases, $\psiDM$ with $\PM=0.8$ shows a deficit of low-mass galaxies with
$\Mvir \sim 10^{10} \Msun$ at $z=6-8$ (see Fig. \ref{fig:CumuMF}), leading to
lower limits of particle mass, $\PM \ge 1.5$ (abundance matching to CDM) and
$\PM \ge 0.9$ (abundance matching to $\psiDM$). We also explore the conditional
LF model as an alternative approach, which yields $\PM \ge 1.2$. The key
assumption here is a power-law \ML relation at the faint end.
We argue that this approach provides a more reasonable and less
model-dependent estimation of the $\psiDM$ particle mass. In addition, we
predict that the high-z LF should turn over slowly around $\MUV \gtrsim -16$
at $z \gtrsim 4$, distinctly different from CDM.
This predicted feature lies just beyond the detected luminosity range of the
current LFs at $z \gtrsim 4$, but will be directly testable
with forthcoming observations such as JWST and also with highly magnified
galaxies in the HFF data.
Note that the \ML relation for low-mass halos may be subject to large
uncertainties. \citet{Strigari2008} showed that the Milky Way dwarf
satellites, which have $\Mvir \lesssim 10^{10} \Msun$,
share a common mass scale but have luminosity differences over four orders of
magnitude. Therefore, a more complicated \ML relation might be expected for
halos below $\sim 10^{10} \Msun$, at least at lower redshifts.
Even in CDM simulations, \citet{OShea2015} found a relatively flat high-z
LF at the faint end compared with the Schechter function fits to observations
(B15b). In the future, $\psiDM$ simulations including baryonic
physics would be very helpful in further differentiating the high-z LFs
predicted by CDM and $\psiDM$.
The Thomson scattering optical depth to the CMB provides another constraint on
the $\psiDM$ particle mass. In general, $\psiDM$ predicts a faster increase
of ionized volume at later times due to the suppression of early galaxy
formation. We demonstrate that $\psiDM$ with $\PM \gtrsim 0.74$ can satisfy
the Planck 2015 results and have reionization completed at $z \gtrsim 6$, on
the assumption that the ionizing photon production rate is sufficiently
efficient (about three times higher than that required for CDM).
This result is largely independent of the limiting luminosity and the faint-end
slope adopted since galaxies fainter than $\MUV \sim -15$ are highly suppressed
at $z\ge6$ even for $\PM = 3.2$.
On the other hand, this constraint is somewhat undermined because of
the current large uncertainties associated with the escape fraction
and the efficiency of converting galaxy UV luminosity to ionizing photon
luminosity. Note that for CDM the reionization calculation is in fact much
more uncertain than for $\psiDM$, as it is dominated by the choice of
limiting luminosity assumed for the relatively steeply rising
LF ($\alpha > 1.0$), since otherwise the integrated luminosity diverges.
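For a step-function (instantaneous) reionization history, $\taue$ can be written in closed form for a flat matter-plus-$\Lambda$ cosmology. The sketch below is a rough consistency check only, not the calculation performed in this work; the Planck-like parameter values and the assumption of fully ionized hydrogen plus singly ionized helium are illustrative.

```python
import math

def tau_e(z_re, Om=0.31, OL=0.69, Obh2=0.022, h=0.68):
    """Thomson optical depth for instantaneous reionization at z_re.

    Closed-form integral of n_e * sigma_T * c / [(1+z) H(z)] for a flat
    matter + Lambda cosmology, assuming fully ionized hydrogen plus
    singly ionized helium (illustrative parameters, not a fit).
    """
    sigma_T = 6.652e-25            # Thomson cross-section [cm^2]
    c = 2.998e10                   # speed of light [cm/s]
    m_H = 1.673e-24                # hydrogen atom mass [g]
    X = 0.76                       # hydrogen mass fraction
    rho_crit = 1.879e-29 * h**2    # critical density today [g/cm^3]
    n_H0 = X * (Obh2 / h**2) * rho_crit / m_H    # H number density today
    n_e0 = n_H0 * (1.0 + (1.0 - X) / (4.0 * X))  # + 1 electron per He
    H0 = h * 3.241e-18             # 100h km/s/Mpc in s^-1
    # tau = c sigma_T n_e0 * Int_0^z_re (1+z)^2 / H(z) dz, analytically:
    bracket = math.sqrt(Om * (1.0 + z_re)**3 + OL) - 1.0
    return 2.0 * c * sigma_T * n_e0 * bracket / (3.0 * Om * H0)

print(tau_e(8.8))  # ~0.06, consistent with the Planck 2015 value
```

Pushing the completion of reionization to lower redshift reduces $\taue$ accordingly, which is the sense in which the Planck measurement bounds the allowed suppression of early galaxy formation.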
The MF below $\sim 10^9 \Msun$ determined by our simulations carries a
larger relative uncertainty. Firstly, approximating the $\psiDM$
transfer function as redshift-independent can somewhat underestimate the
matter power spectra at higher redshifts (see Fig. \ref{fig:EvolvePS}),
which in turn leads to an underestimation of the MF at low masses.
Secondly, we approximate the evolution of the quantum fluid by collisionless
particles. This may allow the formation of a small number of halos with
masses well below the Jeans mass, which would otherwise be suppressed further
if the dynamical effect of quantum pressure were taken into account.
In this sense, the MF in a bona-fide wave-based $\psiDM$ simulation may have
an even stronger break at the low-mass end than \fref{fig:MF}.
We may hope to check on this in the future by running the
full adaptive-mesh-refinement wave simulations that we have previously described
(SCB14a, S14b) on a substantially more powerful platform.
Thirdly, there are also uncertainties associated with the
removal of spurious halos below $\sim 10^9 \Msun$ (see the shaded areas in
Fig. \ref{fig:MF}). However, it should be emphasized that none of these
uncertainties are relevant for the purpose of this study at the level of
precision that is currently afforded by the data, as the depths of the
current Hubble and forthcoming JWST do not extend to such low-mass halos.
Though not in exact agreement, the $\psiDM$ particle mass estimated in
this work, $\PM \ge 1.2$ ($2\sigma$), is surprisingly close to the values
determined from local dwarf galaxies.
SCB14a established with the first wave-based $\psiDM$
simulations a distinct solitonic core in the center of every
$\psiDM$ halo. We have previously obtained $\PM=0.8\pm0.2$ ($1\sigma$) by
fitting the spatial distribution of the intermediate metallicity stellar
population in the Fornax dSph galaxy to the soliton mass profile, under
the assumption of a constant projected velocity dispersion. \citet{Marsh2015a}
have also determined a similar constraint, $\PM \le 1.1$ ($2\sigma$), by
fitting the mass profiles of Fornax and Sculptor dSph galaxies to the soliton
mass profile using an empirical relation between the half-light radius and
velocity dispersion. Note that for $\PM=1.2$, we predict that a halo with
$\Mvir = 2\times10^9 \Msun$ still has a core as large as $\sim 1.1 \kpc$ (S14b),
consistent with many estimates of the large cores found
in dSph galaxies \citep[e.g.,][]{Salucci2012,Amorisco2013}.
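For reference, the quoted core size follows from the $z=0$ core radius--halo mass scaling relation reported in S14b, $r_c \approx 1.6\, \PM^{-1} (\Mvir/10^9\,\Msun)^{-1/3}\,\kpc$ (redshift-dependent factors omitted here); a one-line check:

```python
def soliton_core_radius_kpc(m22, M_vir_Msun):
    """z = 0 soliton core radius from the S14b scaling relation,
    r_c ~ 1.6 / m22 * (M_vir / 1e9 Msun)**(-1/3) kpc
    (redshift-dependent factors omitted)."""
    return 1.6 / m22 * (M_vir_Msun / 1e9) ** (-1.0 / 3.0)

print(soliton_core_radius_kpc(1.2, 2e9))  # ~1.1 kpc
```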
A more thorough comparison between the stellar phase-space distribution in
dSph galaxies and the $\psiDM$ halo mass profile using a full Jeans analysis
will be extremely important for further clarifying how consistent the
particle masses determined by these various approaches are, and for
assessing the role of baryonic feedback in this context
(S-R. Chen et al., in preparation).
The inherent density granularity of the $\psiDM$ halo may also be examined
through internal dynamical effects and by lensing flux anomalies on sub-kpc
scales, which provide other key independent observational tests for
distinguishing $\psiDM$ from WDM and CDM.
The finding that a similar particle mass in $\psiDM$ can both solve the
cusp-core problem in dwarf galaxies \citep{Moore1994} and satisfy the high-z
LF and reionization observations is very encouraging for this model.
It demonstrates a great advantage of $\psiDM$ over WDM, for which the light
particle mass required for creating kpc-scale cores in dSph galaxies prevents
the formation of the host dwarf galaxies in the first place and overly
suppresses high-z galaxies \citep{Maccio2012,Schneider2014}.
The key reason for this striking difference is that the relation between
core radius and power spectrum truncation differs because of the very
different physics underlying $\psiDM$ and WDM, with the uncertainty principle
responsible in the case of $\psiDM$ and free streaming in the case of WDM.
The particle masses in the two models can always be chosen so that they have
similar truncated matter power spectra, but the corresponding core radius in
$\psiDM$ can then be several times larger than that in WDM. This is why
\emph{$\psiDM$ does not suffer from the Catch 22 problem affecting WDM}.
\section{ACKNOWLEDGEMENT}
We thank David Marsh for stimulating discussions.
This work is supported in part by the National Science Council of Taiwan
under the grant MOST 103-2112-M-002-020-MY3.
\section{\label{intro}Introduction}
For an incompressible Newtonian fluid of shear viscosity $\eta$, it is well known that the uniaxial extensional viscosity is $\eta_E=3 \eta$, the planar extensional viscosity is $\eta_P = 4 \eta$, and the biaxial extensional viscosity is $\eta_B = 6 \eta$, where the coefficients, 3, 4, and 6, are commonly referred to as the respective Trouton ratio $\text{Tr}$.~\cite{Trouton1906,Petrie2006} By contrast, for viscoelastic fluids such as polymer solutions and melts, these limiting values of the extensional viscosity are only approached at small rates of strain. At higher rates of strain, such that the dimensionless Weissenberg number $\text{Wi}\gtrsim 0.5$, the unraveling and orientation of polymer chains \cite{DeGennes1974,Hinch1974,Keller1985,Larson1989,Perkins1997} results in an increased elastic tensile stress difference $\Delta\sigma$ in the fluid and hence a non-linear increase in the extensional viscosity and apparent Trouton ratio $\text{Tr}_{app}$.
Understanding how the extensional viscosity and $\text{Tr}_{app}$ for viscoelastic fluids at $\text{Wi} > 0.5$ depends on the imposed mode of extension has interested a number of researchers over many years.~\cite{Stevenson1975,Meissner1982,Petrie1984,Demarmels1985,Jones1987,Khan1987,Khan1987b,Petrie1990,Isaki1991,Wagner1998,Nishioka2000,Kwan2001,Hachmann2003,Stadler2007,Shogin2021} A large number of studies have involved constitutive modeling, while most of the experimental work for the validation of those models has involved the extensional flow of polymer melts. For such highly elastic fluids, various instrumentation has been developed based on, e.g., the stretching of filaments or sheets of material held in rotary clamps,\cite{Meissner1981,Meissner1987,Hachmann2003} or by lubricated squeezing,\cite{Chatraei1981,Khan1987b,Nishioka2000} and the high elastic stresses resulting from the imposed deformation are quite readily measurable. By contrast, for less viscous, more mobile, viscoelastic fluids such as polymeric solutions, which can not be fixed in clamps and which generate relatively weak elastic stresses, the development of extensional rheometers is far more challenging.~\cite{James1993,Macosko1994,Haward2016} In this case, experimental comparisons between the response of viscoelastic fluids under different modes of extension are extremely rare.~\cite{Jones1987}
Extensional flows are potential flows characterized by diagonal rate-of-strain tensors $\text{\bf{D}}$, and come in three fundamental types. Uniaxial extension has one positive extensional axis and is compressional along the remaining directions, e.g.:
\begin{equation}
\text{\bf{D}}_U =
\begin{pmatrix}
-\dot\varepsilon /2 & 0 & 0 \\
0 & -\dot\varepsilon /2 & 0 \\
0 & 0 & \dot\varepsilon
\end{pmatrix}
.
\label{uni}
\end{equation}
\pagebreak
Planar extension has one neutral direction with equal and opposite extension and compression along two perpendicular directions, e.g.:
\begin{equation}
\text{\bf{D}}_P =
\begin{pmatrix}
\dot\varepsilon & 0 & 0 \\
0 & -\dot\varepsilon & 0 \\
0 & 0 & 0
\end{pmatrix}
.
\label{pla}
\end{equation}
Finally, biaxial extension is the kinematic reverse of uniaxial extension, having one compressional axis and with extension along the remaining directions, thus:
\begin{equation}
\text{\bf{D}}_B =
\begin{pmatrix}
\dot\varepsilon_B & 0 & 0 \\
0 & \dot\varepsilon_B & 0 \\
0 & 0 & -2\dot\varepsilon_B
\end{pmatrix}
.
\label{bi}
\end{equation}
Note that here we use similar definitions for uniaxial and biaxial extension as those used by Meissner and coworkers,~\cite{Meissner1982,Demarmels1985} and adopted by Petrie,~\cite{Petrie1984,Petrie1990,Petrie2006} in the sense that we always consider the relevant strain rate metric on which material functions will be defined as that along the stretching direction(s). In accordance with Society of Rheology notation,~\cite{Dealy1984,Dealy1995} we place the subscript ``$B$'' on $\dot\varepsilon$ for biaxial extension. This serves to distinguish these expressions from alternative definitions, such as those suggested by Stevenson et al.,~\cite{Stevenson1975} and by Bird,~\cite{Bird} for which biaxial extension is considered equivalent to uniaxial compression and is thus described by Eq.~\ref{uni} with a reversed sign, resulting in a strain rate of $\dot\varepsilon/2$ in the two orthogonal stretching directions.
The extra stresses that arise in viscoelastic polymer solutions for $\text{Wi}=\lambda \dot\varepsilon > 0.5$ (or $\text{Wi}= \lambda \dot\varepsilon_B > 0.5$ in biaxial flow) result from the entropic elasticity of the polymer chains (with characteristic relaxation time $\lambda$), causing them to resist deformation and stretching. The hydrodynamically-forced stretching causes optical anisotropy in the fluid, often visible in experiments as flow-induced birefringence.~\cite{Fuller,Odell2007} Birefringence is thus an optical signature of the elastic stress in the fluid; indeed the two may be directly proportional in cases for which the stress-optical rule is obeyed.~\cite{Fuller} In hyperbolic stagnation point extensional flows (such as those described by Eqs.~\ref{uni} to \ref{bi}), due to the long residence time (or equivalently the large accumulated strain) available for polymers to unravel, the birefringence is predominantly aligned along the axes of positive extension rate. Here, the fluid has passed through (or near) the hyperbolic point at the coordinate origin, where the residence time in the straining flow (and hence the strain) is theoretically infinite. The localization of the birefringence about the stretching axes gives rise to the common descriptor of ``birefringent strand''.~\cite{Crowley1976,Keller1985,Harlen1990,Harlen1992,Remmelgas1999,Becherer2008,Becherer2009,Haward2012c,Haward2019b} It is the growth of the elastic stress along the stretching axes only that leads us to consider the relevant strain rate for determination of the extensional viscosity as also being that directed along the same axes.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.4]{Fig1.pdf}
\caption {The apparent Trouton ratio $\text{Tr}_{app}$ predicted in the three fundamental modes of extensional deformation by various common constitutive models used to describe dilute polymer solutions: (a) the Oldroyd-B model, (b) the Giesekus model (mobility factor $\alpha=0.01$), (c) the FENE-P model (extensibility $L=10$), and (d) the linear PTT (l-PTT) model (PTT extensibility parameter $\epsilon=0.01$). In all cases, the solvent viscosity is set to $\eta_s = 0$.
}
\label{predictions}
\end{center}
\end{figure}
To obtain a flow curve of extensional viscosity as a function of the strain rate, the extensional kinematics applied to the fluid should be both homogeneous in space and constant in time (i.e., \emph{persistent}).~\cite{Petrie2006} Thus, at each imposed extension rate, the polymer chains have sufficient time to achieve an equilibrium degree of stretching in the flow, and for the elastic stresses to reach a steady state as the strain $\varepsilon = \dot\varepsilon t \rightarrow \infty$ (or $\varepsilon_B = \dot\varepsilon_B t \rightarrow \infty$) over a residence time, $t \gg \lambda$. If the steady-state diagonal stress tensor resulting from the homogeneous extensional deformation is:
\begin{equation}
\pmb{\upsigma} =
\begin{pmatrix}
\sigma_{xx} & 0 & 0 \\
0 & \sigma_{yy} & 0 \\
0 & 0 & \sigma_{zz}
\end{pmatrix}
,
\end{equation}
then the uniaxial extensional viscosity will be $\eta_{E}(\dot\varepsilon)~=~(\sigma_{zz}~-~\sigma_{xx})/\dot\varepsilon$, the planar extensional viscosity will be $\eta_{P}(\dot\varepsilon)~=~(\sigma_{xx}~-~\sigma_{yy})/\dot\varepsilon$, and the biaxial extensional viscosity will be $\eta_{B}(\dot\varepsilon_B)~=~(\sigma_{xx}~-~\sigma_{zz})/\dot\varepsilon_B$. The apparent Trouton ratio can be defined as $\text{Tr}_{app} = \eta_E/\eta_0$, $\eta_P/\eta_0$, or $\eta_B/\eta_0$ for uniaxial, planar, or biaxial extension (respectively), where $\eta_0$ is the steady shear viscosity of the fluid at zero shear rate.
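Substituting the Newtonian constitutive law $\pmb{\upsigma} = 2\eta\text{\bf{D}}$ into these definitions recovers the limiting Trouton ratios of 3, 4, and 6 quoted above; a minimal numerical check (with arbitrary illustrative values of the viscosity and strain rate):

```python
import numpy as np

eta, rate = 1.0, 2.0  # arbitrary shear viscosity and strain rate

# Rate-of-strain tensors for uniaxial, planar, and biaxial extension
D_U = np.diag([-rate / 2, -rate / 2, rate])
D_P = np.diag([rate, -rate, 0.0])
D_B = np.diag([rate, rate, -2 * rate])

def newtonian_stress(D):
    return 2 * eta * D  # incompressible Newtonian constitutive law

s = newtonian_stress(D_U)
Tr_E = (s[2, 2] - s[0, 0]) / rate / eta   # uniaxial  -> 3
s = newtonian_stress(D_P)
Tr_P = (s[0, 0] - s[1, 1]) / rate / eta   # planar    -> 4
s = newtonian_stress(D_B)
Tr_B = (s[0, 0] - s[2, 2]) / rate / eta   # biaxial   -> 6
print(Tr_E, Tr_P, Tr_B)  # 3.0 4.0 6.0
```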
Petrie (1990) obtained asymptotic results for the uniaxial and planar extensional viscosities given by various viscoelastic constitutive models commonly used to describe polymeric solutions.~\cite{Petrie1990} For all models examined, including the Phan-Thien and Tanner (PTT), the Giesekus, and the finitely extensible non-linear elastic dumbbell with Peterlin closure (FENE-P) model, the two extensional viscosities were equal at high $\text{Wi}$ (apart from the relatively small differences due to the different contribution of the solvent in each flow type). In fact, all viscoelastic constitutive models predict that at low deformation rates in uniaxial, planar, and biaxial elongation $\text{Tr}_{app}$ approaches the Newtonian limit of 3, 4, or 6 (respectively) as $\text{Wi} \rightarrow 0$, and that in all three flows, as the Weissenberg number exceeds 0.5, $\text{Tr}_{app}$ undergoes an abrupt increase (see Fig.~\ref{predictions}). For the infinitely extensible Oldroyd-B model (Fig.~\ref{predictions}(a)), $\text{Tr}_{app} \rightarrow \infty$ for $\text{Wi}\geq0.5$, in all cases. Models with a bounded elasticity show that under uniaxial and planar elongation, $\text{Tr}_{app}$ approaches the same limiting plateau value as $\text{Wi} \rightarrow \infty$ (Fig.~\ref{predictions}(b-d)). However, there is a disagreement between the predictions of different models in terms of the high $\text{Wi}$ limit of $\text{Tr}_{app}$ in biaxial elongation. Some models, such as the Giesekus model, predict that $\text{Tr}_{app}$ will tend to the same high $\text{Wi}$ plateau in biaxial extension as it does in uniaxial and planar extension (Fig.~\ref{predictions}(b)). However, the FENE-P and PTT models predict that in biaxial extension the limiting value of $\text{Tr}_{app}$ at high $\text{Wi}$ will be one-half of that for uniaxial and planar extension (Fig.~\ref{predictions}(c,d)). 
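The divergence of the Oldroyd-B curves at $\text{Wi}=0.5$ follows from the closed-form steady-state stresses of the upper-convected Maxwell model: for a steady diagonal velocity gradient $\text{diag}(l_1,l_2,l_3)$, each normal stress satisfies $\tau_{ii} = 2\eta_p l_i/(1-2\lambda l_i)$. The sketch below is our own illustration of this standard result (not the authors' code), with the strain rate scaled out so that each $\lambda l_i$ is expressed through $\text{Wi}$:

```python
def tr_app(Wi, mode, eta_p=1.0):
    """Apparent Trouton ratio for steady Oldroyd-B flow with eta_s = 0.

    Each normal stress of the upper-convected Maxwell model obeys
    tau_ii = 2 * eta_p * l_i / (1 - 2 * lam * l_i) for a steady diagonal
    velocity gradient diag(l1, l2, l3).
    """
    def tau(wl):  # wl = lam * l_i (dimensionless rate component)
        return 2.0 * eta_p * wl / (1.0 - 2.0 * wl)

    if mode == "uniaxial":   # l = (-e/2, -e/2, e),  Wi = lam * e
        return (tau(Wi) - tau(-Wi / 2)) / (Wi * eta_p)
    if mode == "planar":     # l = (e, -e, 0),       Wi = lam * e
        return (tau(Wi) - tau(-Wi)) / (Wi * eta_p)
    if mode == "biaxial":    # l = (eB, eB, -2 eB),  Wi = lam * eB
        return (tau(Wi) - tau(-2 * Wi)) / (Wi * eta_p)

for mode in ("uniaxial", "planar", "biaxial"):
    # low-Wi limits recover 3, 4, 6; all diverge as Wi -> 0.5
    print(mode, tr_app(1e-4, mode), tr_app(0.45, mode))
```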
Note that using the alternative deformation rate tensor formulation for biaxial extension outlined by Bird \cite{Bird} results in a doubling of the Weissenberg number and a halving of the apparent Trouton ratio compared to the formulation of Meissner et al.,~\cite{Meissner1982} as indicated by the dotted blue lines in Fig.~\ref{predictions}.~\cite{Shogin2021}
Due to the great difficulty associated with experimental extensional rheometry of low viscosity, mobile viscoelastic fluids such as dilute polymer solutions,~\cite{James1993,Macosko1994,Haward2016} these theoretical predictions are largely untested experimentally. Using a ``spin-line'' rheometer and a converging channel rheometer to generate uniaxial and planar extension, respectively, Jones et al. (1987) found a ``satisfactory'' (meaning order-of-magnitude) correspondence between $\eta_{E}$ and $\eta_{P}$ for a variety of polymer solutions. \cite{Williams1985,Jones1987} However, the two measurement methods employed differed greatly in terms of how the deformation was applied, its spatial homogeneity, the range of deformation rates probed, and how the tensile stress was estimated. \cite{Williams1985,Jones1987} The authors themselves expressed apparent surprise at the agreement they obtained given the inherent problems with making such measurements, remarking that they were not ``comparing like with like'' since one method was planar and the other uniaxial. To this day, a systematic experimental comparison of the uniaxial, planar, and biaxial extensional responses of dilute polymer solutions using comparable measurement methods is still missing from the literature.
In the present work, we employ numerically optimized stagnation point microfluidic devices to make measurements of the uniaxial, planar and biaxial extensional viscosities of a variety of model solutions formulated from dilute concentrations of linear polymers. For planar extensional viscosity measurements we utilize the two-dimensional (2D) Optimized-Shape Cross-slot Extensional Rheometer (OSCER, Fig.~\ref{schematic}(a)),~\cite{Alves2008} which over the last decade has proven useful for characterizing the extensional rheology and flow behavior of a variety of viscoelastic fluids. \cite{Haward2012c,Haward2013b,Haward2016c} For uniaxial and biaxial extensional viscosity measurements we utilize the three-dimensional (3D) Optimized Uniaxial and Biaxial Extensional Rheometer (OUBER, Fig.~\ref{schematic}(b)) presented in Part I of this paper.~\cite{Haward2023} The devices are designed to provide close approximations to the respective deformation rate tensors (given in Eqs.~\ref{uni}-\ref{bi}) over multiple characteristic device lengthscales in each spatial direction. Both devices allow the strain rate to be controlled by simply varying the volumetric flow rate. They also both generate stagnation points at the center of the flow field, such that strain can accumulate indefinitely at the set strain rate (a requirement for measuring the extensional viscosity). Under all three modes of elongation, we employ micro-particle image velocimetry ($\upmu$-PIV) to confirm and quantify the extensional strain rates, coupled with pressure drop measurements designed to enable estimation of the elastic tensile stress difference. The comparable (microfluidic) size scales of the two devices allow similar extension rates to be obtained in each mode of extension while always keeping inertia negligible.
\begin{figure}[!t]
\begin{center}
\includegraphics[scale=0.55]{Fig2.pdf}
\caption {Schematic illustrations of numerically optimized stagnation point elongational flow devices. (a) The optimized shape cross-slot extensional rheometer (OSCER) geometry consisting of two pairs of opposed planar inlet (outlet) channels (height $2H$, width $2W$) oriented along the $y$ ($x$) axes and joined by a numerically-determined profile designed to generate an optimal approximation to planar elongational flow. (b) The optimized uniaxial and biaxial extensional rheometer (OUBER), consisting of two pairs of opposed planar inlet (outlet) channels (height $2H$, width $2W$) oriented along the $x$ and $y$ axes and connected via a numerically-determined profile to a pair of opposing outlet (inlet) channels of circular cross-section oriented along the $z$ axis. Depending on the choice of imposed flow direction, the geometry can produce an optimal approximation to either uniaxial or biaxial elongational flow. For the OUBER geometry, along with the standard $(x,y,z)$ coordinate system, a $45^{\circ}$-rotated coordinate system $(x',y',z)$ is employed (see main text for details). The coordinate origin is located at the center of each device.
}
\label{schematic}
\end{center}
\end{figure}
We remark that experimental extensional viscosity measurements are always an approximation, and that extensional ``rheometers'' must always be considered ``indexers'' to some extent. In this work, for the first time we have assembled a pair of highly comparable and sophisticated indexers that permit a fair comparison between the extensional rheology of viscoelastic fluids under each of the three fundamental modes of extension. For the most dilute polymer solution that we test (which can be considered ``ultradilute''~\cite{Clasen2006}), our results at high $\text{Wi}>0.5$ indicate that $\eta_{E} \approx \eta_{P} \approx 2\eta_{B} $, in agreement with the prediction of the FENE-P constitutive model. Of some interest, we observe that these elongational flows lose stability at different $\text{Wi}$ in each of the three flows (lowest in uniaxial and highest in biaxial extension), which we discuss in terms of the region occupied by the elastic ``birefringent strand'' that forms along the stretching axis (or over the stretching plane). These observations have important implications for viscoelastic constitutive modeling as well as for experimental extensional rheometry.
\section{Experimental Methods}
\label{ExpMeth}
\subsection{Microfluidic geometries}
\label{geom}
\subsubsection{Planar extensional flow OSCER device}
The OSCER device, shown schematically in Fig.~\ref{schematic}(a), has been described in detail in several prior works. \cite{Haward2012c,Haward2013b,Haward2016c} Briefly, the channel is cut in stainless steel by wire-electrical discharge machining and sealed about the $z$ direction with soda glass viewing windows. The channel has a uniform half-height $H=1$~mm and a characteristic half-width $W=0.1$~mm upstream and downstream of the optimized region. The channel shape is optimized over a region spanning $\abs{x},\abs{y} \leq 15W$, and generates a close approximation to pure planar elongation over a large portion of that region.\cite{Haward2012c,Haward2016c} The high aspect ratio of the device ($H/W=10$) gives a good approximation to a two-dimensional (2D) flow ensuring that the flow field is also uniform through most of the channel height.
\subsubsection{Uni- and biaxial extensional flow OUBER device}
The fabrication of an OUBER device (Fig.~\ref{schematic}(b)),~\cite{Haward2023} is achieved by the technique of selective laser-induced etching (SLE) in fused silica glass, \cite{Gottmann2012,Meineke2016,Burshtein2019} and is described in detail in Part I of this paper.~\cite{Haward2023}
The circular cross-section channels aligned along $z$ have a radius $R=0.4$~mm, while the four planar channels aligned along $x$ and $y$ each have half-width $W=0.64$~mm and half-height $H=0.16$~mm. The channel shape is optimized to provide almost uniform velocity gradients over a region spanning $\abs{x},\abs{y},\abs{z} \leq 5R$, and (depending on how the flow is imposed) generates a close approximation to either pure uniaxial or pure biaxial elongation over a large portion of that region.~\cite{Haward2023}
Note that, as depicted in Fig.~\ref{schematic}, it is natural to align the $x$ and $y$ axes with adjacent planar inlet/outlet channels. However, obtaining an experimental view inside of the OUBER device along either of those two directions is problematic with our current design. As described in Part I of the paper, optical access to the stagnation point region inside the device is only possible by viewing at $45^{\circ}$ to the $x$-axis.~\cite{Haward2023} Therefore, in our experimental setup we consider a coordinate system described by $(x',y',z)$, where $x' = \frac{1}{\sqrt{2}}(x+y)$, and $y' = \frac{1}{\sqrt{2}}(y-x)$ (also shown in Fig.~\ref{schematic}).
\begin{figure}[!t]
\begin{center}
\includegraphics[scale=0.7]{Fig3.pdf}
\caption {Rheological response of the various test fluids employed. (a) Shear viscosity $\eta$ as a function of the applied shear rate $\dot\gamma$ of the Newtonian solvent (89.6\% glycerol in water, dashed line) and poly(acrylamide) (PAA) solutions at various polymer concentrations measured in steady shear using a stress-controlled TA Instruments DHR3 rotational rheometer with 40~mm diameter 1$^{\circ}$ cone-and-plate fixture. (b) Decay of the filament diameter $D$ as a function of time for the polymer solutions during capillary thinning in a CaBER device, used to obtain the extensional relaxation times $\lambda$ of the samples.
}
\label{flowcurves}
\end{center}
\end{figure}
\subsection{Test fluids}
\label{fluids}
Due to the surface curvature of the three-dimensional (3D) OUBER device (see Fig.~\ref{schematic}(b)), clear imaging inside the device (e.g., for performing flow velocimetry, as described below) requires that the channel be filled with a fluid of refractive index $RI$ similar to that of the fused silica glass ($RI=1.4584$).~\cite{Malitson1965} A mixture of 89.6~wt\% glycerol and 10.4~wt\% water, with $RI=1.4582$ at $25^{\circ}$C (measured using an Anton-Paar Abbemat MW refractometer operating at 589~nm) is found to be a sufficiently close match. The 89.6\,:\,10.4~wt\% glycerol\,:\,water mixture (with density $\rho = 1231$~kg~m$^{-3}$ and viscosity $\eta_s = 0.143$~Pa~s, Fig.~\ref{flowcurves}(a)) is used as both a Newtonian reference fluid and also as a solvent for viscoelastic polymeric test solutions.
The polymer sample used is a nonionic poly(acrylamide) (PAA) of molecular weight $M \approx 5$~MDa obtained from Sigma-Aldrich. Polymer solutions are prepared at four different concentrations $c = 50,~100,~200$, and 400 parts-per-million (ppm) by first dissolving the required mass of dry polymer powder in the aqueous component of the solvent, before adding the mass of glycerol necessary to achieve the final desired composition. To avoid mechanical degradation of the polymer during the solution preparation, mechanical stirring is not used. Rather, the fluids are mixed by gentle agitation on a roller-mixer (Ika, Japan). Typically, 24~h is required for complete dissolution of the polymer powder into the water, and a further 24~h for complete mixing with the glycerol. Subsequent to preparation, the fluids are stored at $5^{\circ}$C in unlit conditions, and are discarded if not used within one month.
The concentration regime and equilibrium conformation of the PAA in solution can be estimated based on the number of backbone bonds $n = 2 M/m \approx 140,000$ (where $m = 71$~Da is the monomer molecular weight), the average length per bond $l = 0.154$~nm, and the characteristic ratio $C_\infty=6.9$.~\cite{Scholtan1954,Winston1980,Kulicke1982} From this information, it is possible to estimate the contour length $L_c =nl \approx 21.6~\upmu$m and the equilibrium mean-square end-to-end length $\langle r_0^2\rangle=C_{\infty}nl^2\approx23,000~\text{nm}^2$. The radius of gyration is then $R_g=\frac{1}{\sqrt{6}}\langle r_0^2\rangle^{1/2} \approx 62$~nm, which can be used to estimate the overlap concentration $c^*=M/N_A(2R_g)^3\approx4400$~ppm (where $N_A$ is Avogadro's number).~\cite{Graessley1980} It can also be estimated that to achieve full stretch of the polymer chain, the end-to-end separation needs to be increased from its equilibrium value by an extensibility factor (or stretch ratio) of $L = L_c/\langle r_0^2\rangle^{1/2}\approx143$. We note that these molecular parameters are estimated based on the value of $C_\infty$ reported for PAA in water at $25^{\circ}$C.~\cite{Kulicke1982} However, recent molecular dynamics simulations indicate that the PAA chain adopts a roughly similar conformation in 90~wt\% aqueous glycerol as it does in pure water.~\cite{Hopkins_S_2020} Therefore, we have some confidence that our test fluids should be safely in the dilute solution regime with $0.011 \lesssim c/c^* \lesssim 0.088$, and that the PAA chains should be highly extensible.
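The chain of estimates above can be reproduced directly; the sketch below is our own arithmetic, with the ppm conversion approximating the solution density as 1~g~cm$^{-3}$, and small differences from the quoted values arising from rounding:

```python
import math

M, m = 5e6, 71.0        # polymer and monomer molar mass [Da]
l, C_inf = 0.154, 6.9   # backbone bond length [nm], characteristic ratio
N_A = 6.022e23          # Avogadro's number [1/mol]

n = 2 * M / m                    # number of backbone bonds, ~140,000
L_c = n * l                      # contour length [nm], ~21,600 nm
r0_sq = C_inf * n * l**2         # mean-square end-to-end length [nm^2]
R_g = math.sqrt(r0_sq / 6)       # radius of gyration [nm], ~62 nm
# Overlap concentration: one chain per (2 R_g)^3 cube [g/cm^3 -> ppm,
# approximating the solution density as 1 g/cm^3]
c_star_ppm = M / (N_A * (2 * R_g * 1e-7)**3) * 1e6
L_ext = L_c / math.sqrt(r0_sq)   # extensibility (stretch ratio), ~143

# roughly 21.7 um, 62 nm, 4.4e3 ppm, 143
print(round(L_c / 1e3, 1), round(R_g), round(c_star_ppm), round(L_ext))
```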
The steady shear rheology of the polymeric test fluids is measured using a stress-controlled DHR3 rotational rheometer (TA Instruments Inc.) fitted with a 40~mm diameter 1$^{\circ}$ angle cone-and-plate geometry (see Fig.~\ref{flowcurves}(a)). Over the range of accessible shear rates, the fluids each have a near-constant viscosity, close to that of the solvent. For this reason, we take the viscosity $\eta$ of each fluid as being the average of the respective data shown in Fig.~\ref{flowcurves}(a). The relaxation times $\lambda$ of the fluids are assessed by means of capillary thinning measurements using a CaBER device (Thermo-Haake). \cite{Anna2001b} The device is fitted with 6~mm diameter plates with the initial separation set to 1~mm and the final separation to 6~mm. Curves of the filament diameter at the midpoint between the plates $D$ as a function of time are shown in Fig.~\ref{flowcurves}(b). The value of $\lambda$ is extracted from the time constant of the exponential decay of the filament diameter observed within the elasto-capillary thinning regime. \cite{Anna2001b} The values of $\eta$ and $\lambda$ obtained for each polymeric fluid are given in Table~\ref{tab1}.
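In the elasto-capillary regime, an Oldroyd-B-like fluid thins as $D(t) \propto \exp(-t/3\lambda)$, so $\lambda$ follows from the slope of $\ln D$ versus $t$. A minimal sketch of this extraction using synthetic, noise-free data (illustrative only, not our measured filament profiles):

```python
import math

lam_true = 0.38           # relaxation time [s] (synthetic example)
t = [0.05 * i for i in range(40)]                       # time points [s]
D = [1.0 * math.exp(-ti / (3 * lam_true)) for ti in t]  # D(t) ~ e^(-t/3lam)

# Least-squares slope of ln D versus t
n = len(t)
tbar = sum(t) / n
ybar = sum(math.log(d) for d in D) / n
slope = sum((ti - tbar) * (math.log(d) - ybar) for ti, d in zip(t, D)) \
        / sum((ti - tbar) ** 2 for ti in t)
lam_fit = -1.0 / (3 * slope)
print(lam_fit)  # recovers ~0.38 s
```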
\subsection{Flow control and dimensionless groups}
\label{control}
The test fluids are driven through the microfluidic OSCER and OUBER devices using 29:1 gear ratio neMESYS low pressure syringe pumps (Cetoni, GmbH) to control the volumetric flow rate through each individual channel. For planar extensional flow in the OSCER device, two pumps are used to impose equal volumetric flow rates $Q$ into each of the two inlet channels, while two pumps withdraw fluid at equal and opposite rates from each of the two outlet channels. For uniaxial (biaxial) extensional flow in the OUBER device, two pumps are used to impose equal volumetric flow rates $Q$ through each of the two circular cross-section outlet (inlet) channels, while four pumps impose equal volumetric flow rates $Q/2$ through each of the four planar inlet (outlet) channels. The pumps are fitted with Hamilton Gastight syringes of appropriate volumes such that the specified ``pulsation free'' dosing rate is always exceeded. Connections between the syringes and the microfluidic devices are made using flexible Tygon tubing.
\begin{table}
\caption{\label{tab1}Values of viscosity $\eta$, solvent-to-total viscosity ratio $\beta$, and relaxation time $\lambda$ obtained from rheological characterization of the PAA solutions. }
\begin{ruledtabular}
\begin{tabular}{c c c c c}
PAA concentration [ppm] & $c/c^*$ & $\eta$ [Pa~s] & $\beta = \eta_s/\eta$ & $\lambda$ [s] \\
\hline
50 & 0.011 & 0.146 & 0.98 & 0.22 \\
100 & 0.022 & 0.151 & 0.95 & 0.38 \\
200 & 0.044 & 0.155 & 0.92 & 0.54 \\
400 & 0.088 & 0.181 & 0.79 & 1.03 \\
\end{tabular}
\end{ruledtabular}
\end{table}
For an imposed volumetric flow rate $Q$ in each channel of the OSCER device, the average flow velocity is $U=Q/4WH$ and the expected (or nominal) extension rate based on a Newtonian flow field prediction is given by $\dot\varepsilon_{nom}=0.1U/W$.~\cite{Haward2012c,Haward2013b,Haward2016c}
For the OUBER device, we consider the characteristic average flow velocity $U$ as being that in the two channels of circular cross-section, so that for an imposed volumetric flow rate $Q$ in each of those channels, $U=Q/\uppi R^2$. The expected nominal extension rates are $\dot\varepsilon_{nom}=0.4U/R$ for uniaxial extension, and $\dot\varepsilon_{B,nom}=0.2U/R$ for biaxial extension.~\cite{Haward2023}
The Reynolds number $\text{Re}$ describes the relative strength of inertial to viscous forces in the flow experiments. In the OSCER device, we define $\text{Re}=\rho U D_h / \eta$, where $D_h=2WH/(W+H)$ is the hydraulic diameter of the rectangular channels. The maximum Reynolds number reached in experiments using the OSCER device is $\text{Re} \approx 0.1$. In the case of the OUBER device, we define $\text{Re} = 2 \rho U R / \eta$, and the maximum values reached are $\text{Re} \approx 0.2$ (uniaxial extension), and $\text{Re} \approx 0.7$ (biaxial extension). Since $\text{Re} < 1$ in all experiments, inertial effects in the flow are considered negligible.
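The nominal extension rates and Reynolds numbers follow directly from the geometric and fluid parameters quoted above; the sketch below illustrates the arithmetic using the 50~ppm fluid properties and an arbitrary example flow rate $Q$ (not necessarily one used in the experiments):

```python
import math

# Geometry (from the text) and fluid properties (50 ppm PAA solution)
W, H = 0.1e-3, 1.0e-3     # OSCER channel half-width, half-height [m]
R = 0.4e-3                # OUBER circular-channel radius [m]
rho, eta = 1231.0, 0.146  # density [kg/m^3], viscosity [Pa s]

Q = 2e-9  # example volumetric flow rate per channel [m^3/s] (2 uL/s)

# OSCER (planar): U = Q / 4WH, nominal rate 0.1 U / W
U_oscer = Q / (4 * W * H)
eps_planar = 0.1 * U_oscer / W
D_h = 2 * W * H / (W + H)               # hydraulic diameter
Re_oscer = rho * U_oscer * D_h / eta

# OUBER: U = Q / (pi R^2), nominal rates 0.4 U/R (uni) and 0.2 U/R (bi)
U_ouber = Q / (math.pi * R**2)
eps_uni, eps_bi = 0.4 * U_ouber / R, 0.2 * U_ouber / R
Re_ouber = 2 * rho * U_ouber * R / eta

print(eps_planar, eps_uni, eps_bi, Re_oscer, Re_ouber)
```

Note that for the same per-channel flow rate the nominal uniaxial rate is exactly twice the biaxial one, and both Reynolds numbers remain far below unity.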
The Weissenberg number describes the relative strength of elastic to viscous forces in the flow and can be quantified by the product of the extension rate and the relaxation time $\lambda$. However, since we only have \emph{a priori} knowledge of the nominal extension rate, it is convenient to first also define a nominal Weissenberg number as $\text{Wi}_{nom} = \lambda \dot\varepsilon_{nom}$ in uniaxial and planar elongation, and $\text{Wi}_{nom} = \lambda \dot\varepsilon_{B,nom}$ in biaxial elongation.
Typically in elongational flows, it is found that polymer stretching for $\text{Wi}_{nom} \gtrsim 0.5$ will modify the flow field compared to the Newtonian case, resulting in a reduction of the true extension rate below its nominal value.~\cite{Mackley1978,Dunlap1987,Remmelgas1999,Haward2012c,Haward2013b,Haward2016c} In the present work, micro-particle image velocimetry ($\upmu$-PIV) experiments (described in Sec.~\ref{PIV}) will be used to directly measure the true extension rate (or velocity gradient) along the stretching axis $\dot\varepsilon$ (or $\dot\varepsilon_B$), allowing the true Weissenberg number to be evaluated as $\text{Wi} = \lambda \dot\varepsilon$ in uniaxial and planar elongation, and $\text{Wi} = \lambda \dot\varepsilon_{B}$ in biaxial elongation.
\subsection{Microparticle image velocimetry}
\label{PIV}
Quantitative measurement of the flow field in each extensional flow configuration is achieved using microparticle image velocimetry ($\upmu$-PIV, TSI Inc., MN).~\cite{Wereley2005,Wereley2010} For this purpose, the test fluids are seeded with a low concentration ($c_p \approx 0.02$~wt\%) of $3.2~\upmu$m diameter red fluorescent tracer particles (Fluor-Max, Thermo Scientific) with excitation (emission) wavelength 542~nm (612~nm). The plane of interest within the geometry (i.e., the $xy$ midplane in the OSCER geometry, and the $y'=0$ plane in the OUBER geometry) is brought into focus on an inverted microscope (Nikon Eclipse Ti) with a $4 \times$ magnification, $\text{NA}=0.13$ numerical aperture Nikon PlanFluor objective lens. Under these conditions, the measurement depth over which microparticles contribute to the determination of the velocity field is $\delta_m \approx 180~\upmu$m. \cite{Meinhart2000} Excitation with a dual-pulsed Nd:YLF laser with a wavelength of 527~nm induces the emission of particle fluorescence, which is detected by a high speed camera (Phantom MIRO, Vision Research). The camera is operated in frame-straddling mode and is synchronized with the laser in order to acquire pairs of particle images corresponding to pairs of laser pulses separated by a small time $\Delta t$. The value of $\Delta t$ is varied inversely to the imposed flow rate and set so that the average displacement of particles between the two images in each pair is always $\approx4$~pixels. In this work we are only concerned with steady flows, so at each flow rate tested 50 image pairs are acquired and are processed using an ensemble average cross-correlation PIV algorithm (TSI Insight 4G) in order to reduce noise. A recursive Nyquist criterion is employed with a final interrogation area of $16 \times 16$ pixels to enhance the spatial resolution and obtain two components of the velocity vector $\bf{u}$ spaced on a square grid of $26.6~\upmu$m~$\times 26.6~\upmu$m. 
In the OSCER device, the obtained components of $\bf{u}$ are $u$ and $v$ (the $x$ and $y$ component, respectively). In the OUBER device, the obtained components of $\bf{u}$ are $u'$ and $w$ (the $x'$ and $z$ component, respectively). Subsequent to data acquisition, the software Tecplot Focus (Tecplot Inc., WA) is used for generation of velocity contour plots and streamline traces and for extraction of velocity profiles.
\subsection{Pressure drop measurements and extensional rheometry}
\label{press}
Pressure drop measurements are made using a 35.5~kPa wet-wet differential pressure sensor (Omega Engineering Inc.) connected across one inlet and one outlet of each device. Pressure taps are taken by installing T-junction connectors in the upstream and downstream tubing connecting between the fluidic device and the syringes driving the flow. At each imposed flow rate in each extensional flow configuration (uniaxial, planar and biaxial), two independent measurements of the pressure drop are made. The first pressure drop measurement (labeled $\Delta P_{tot}$) is made with flow imposed in all the channels of the device (i.e., with the device in normal operation, as described in Sec~\ref{control}) and provides an estimate of the total stress. This combines stresses due to the shear induced by the walls of the channel and connecting tubing, as well as any extra stress due to the elongational kinematics in the flow. A second measurement (labeled $\Delta P_{sh}$) is made with half of the inlet channels and half of the outlet channels disabled and allows estimation of the shear stresses only. It is important to state that in the OUBER device, two adjacent (not opposing) planar channels are disabled during the measurement of $\Delta P_{sh}$ in order to avoid the formation of a `T-channel-like' flow configuration which would retain a stagnation point in the center of the device.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.8]{Fig4.pdf}
\caption {Normalized velocity fields with superimposed streamlines for creeping flow ($\text{Re}<0.05$) of the Newtonian solvent in (a) uniaxial, (b) planar, and (c) biaxial extension. Parts (d), (e) and (f) show the respective normalized velocity profiles measured along the flow axes (data points), which compare favorably against the target numerical profiles (lines). The nominal elongation rate in each case is indicated within the respective plot.
}
\label{NewtPIV}
\end{center}
\end{figure*}
From the raw pressure drop measurements, we obtain an excess pressure drop $\Delta P_{ex} = \Delta P_{tot} - \Delta P_{sh}$, which we assume arises predominantly from the extensional kinematics present in the flow field during the measurement of $\Delta P_{tot}$. Of course, this differential measurement cannot quantify the individual diagonal components of the stress tensor in order to precisely evaluate the principal stress difference $\Delta \sigma$ required to compute the extensional viscosity (see Sec.~\ref{intro}). In previous works involving planar extensional flows in the OSCER device, and also in the standard cross-slot geometry, it has simply been assumed that $\Delta P_{ex} \approx \Delta \sigma$, and the planar extensional viscosity has thus been computed as $\eta_{P} \approx \Delta P_{ex}/\dot\varepsilon$.~\cite{Haward2011,Haward2012a,Haward2012c,Haward2013b} Some support for this assumption comes from birefringent polymer solutions, for which direct proportionality has been demonstrated between $\Delta P_{ex}$ and the birefringence $\Delta n$ measured at the stagnation point. The constants of proportionality approximately matched the known stress-optical coefficients $C$ of the respective fluids, suggesting that $\Delta P_{ex} \approx \Delta n / C = \Delta \sigma$.~\cite{Sharma2015,Fuller}
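The previous-works approximation described above reduces to a one-line estimate; a sketch with illustrative numbers only:

```python
def planar_viscosity_simple(dP_tot, dP_sh, edot):
    """Earlier-works estimate: take the excess pressure drop
    dP_ex = dP_tot - dP_sh [Pa] as a direct proxy for the principal stress
    difference, giving eta_P ~ dP_ex / edot for extension rate edot [1/s].
    Returns (dP_ex, eta_P)."""
    dP_ex = dP_tot - dP_sh
    return dP_ex, dP_ex / edot
```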
In this work, we attempt to more properly relate $\Delta P_{ex}$ to $\Delta \sigma$ by considering the macroscopic power balance for flow through each of our geometries, thus enabling a more accurate estimation of the extensional viscosity to be obtained from the experimental pressure drop measurements (see details in Sec.~\ref{etaE}).
\section{Results}
\label{Res}
\subsection{Newtonian flow field characterization}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.8]{Fig5.pdf}
\vspace{-0.1in}
\caption {Normalized velocity fields with superimposed streamlines for the flow of the 100~ppm poly(acrylamide) solution in (a,d,g) uniaxial, (b,e,h) planar, and (c,f,i) biaxial extension at the nominal extension rates and nominal Weissenberg numbers indicated.
}
\label{100ppmPIV}
\end{center}
\end{figure*}
For completeness, we commence the presentation of the experimental results by showing the Newtonian flow field in each extensional flow configuration (Fig.~\ref{NewtPIV}). Normalized fields of the velocity magnitude measured by $\upmu$-PIV, with streamlines superimposed to indicate the direction of flow, are shown in Fig.~\ref{NewtPIV}(a), (b) and (c) for uniaxial, planar and biaxial extension, respectively, at Reynolds numbers $\text{Re} < 0.05$. In each case, the velocity field is symmetric about the flow axes, with a stagnation point at the coordinate origin, as expected. Velocity profiles extracted along the flow axes are shown in Fig.~\ref{NewtPIV}(d), (e) and (f) (below the respective velocity field), in comparison with the numerical predictions for Newtonian creeping flow (available in Refs. \citenum{Alves2008} and \citenum{Haward2023}, for the OSCER and the OUBER, respectively). It is clear that over the measurable ranges of the accessible flow axes in each device configuration, the experimental velocity profiles agree very well with the respective numerical predictions, giving confidence that the microfluidic devices and the experimental setup are performing satisfactorily. It should be noted that over the $z=0$ plane in the OUBER device, the 2D $\upmu$-PIV measurement provides the $u'(x')$ velocity profile, which agrees with the $u(x)$ profile for $\abs{x'} \lesssim R$ and $\abs{x} \lesssim R$, as shown in Part I.~\cite{Haward2023} In other words, the velocity field has good axisymmetry over a radial distance of $r = \sqrt{x^2 + y^2} \approx R$ about the $z$-axis.
\subsection{Polymer solution flow field characterization}
\label{flowfield}
We proceed to examine the flow field in the case of the viscoelastic PAA-based test solutions. Fig.~\ref{100ppmPIV} shows normalized velocity magnitude fields obtained in each of the three extensional flow configurations for the 100~ppm PAA solution over a range of imposed nominal extension rates. At lower nominal rates, such that $\text{Wi}_{nom}$ is only slightly above unity, the flow field in each case (uniaxial, planar, and biaxial, shown in Fig.~\ref{100ppmPIV}(a), (b), and (c), respectively) appears to be rather similar to that observed for the flow of the Newtonian fluid (shown in Fig.~\ref{NewtPIV}(a), (b), and (c), respectively). However, close inspection of the velocity magnitude contours for uniaxial extension of the polymer solution (Fig.~\ref{100ppmPIV}(a)) reveals a local minimum in the velocity along the stretching axis (i.e., along the $x'=0$ centerline). In contrast, for Newtonian flow, the Poiseuille-like velocity profile is maximal along the center of the outlet channels. This local minimum in velocity along the stretching direction is not evident in either the planar (Fig.~\ref{100ppmPIV}(b)) or biaxial (Fig.~\ref{100ppmPIV}(c)) flows at these rather low imposed values of $\text{Wi}_{nom}$. For the uniaxial flow, increasing the nominal Weissenberg number to $\text{Wi}_{nom}=2.5$ (Fig.~\ref{100ppmPIV}(d)) results in the centerline minimum of the velocity profile across the outlet channels becoming more pronounced. For the planar extensional flow at $\text{Wi}_{nom}=6.3$ (Fig.~\ref{100ppmPIV}(e)), and for biaxial extension at a similar $\text{Wi}_{nom}=5.0$ (Fig.~\ref{100ppmPIV}(f)), still no obvious difference from Newtonian flow can be discerned (see Fig.~\ref{NewtPIV}(b,c) for comparison). At sufficiently high $\text{Wi}_{nom}$, uniaxial and planar extensional flows of the 100~ppm PAA solution exhibit elastic instabilities manifested as an asymmetry of the flow. 
For uniaxial extension, this occurs for a critical nominal Weissenberg number $\text{Wi}_{nom,c} \approx 5$ as a distinct distortion of the streamlines close to the stagnation point, and develops with increasing $\text{Wi}_{nom}$ into the strong flow asymmetry shown in Fig.~\ref{100ppmPIV}(g) for $\text{Wi}_{nom}=10.1$. For planar extension, a somewhat higher critical value $\text{Wi}_{nom,c} \approx 13$ is necessary before the flow exhibits instability. Fig.~\ref{100ppmPIV}(h) shows a strongly asymmetric flow state observed in planar extension for $\text{Wi}_{nom}=25.4$. In contrast, for biaxial extension, no obvious sign of instability is observed even at the highest achievable nominal Weissenberg number $\text{Wi}_{nom}=20.2$ (Fig.~\ref{100ppmPIV}(i)); indeed, the kinematics appear to remain essentially Newtonian-like.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.75]{Fig6.pdf}
\caption {Illustration of the flow modification along the stretching direction resulting from the flow of poly(acrylamide) solutions in (a,d) uniaxial, (b,e) planar, and (c,f) biaxial extension. Flow velocity profiles are measured across a device outlet 1~mm downstream of the stagnation point. (a,b,c) show normalized profiles of the streamwise flow velocity for the 100~ppm PAA solution at various nominal extension rates and compared against the result for the Newtonian solvent. (d,e,f) show normalized profiles of the streamwise flow velocity for all the tested polymer solutions at the highest nominal extension rate (indicated in the plot) for which all are deemed to be steady and symmetric, again compared against the result for the Newtonian solvent.
}
\label{PAA_trans_PIV}
\end{center}
\end{figure*}
The flow asymmetry observed in the OSCER device (Fig.~\ref{100ppmPIV}(h)) has been observed previously in various experiments involving planar stagnation point extensional flows.~\cite{Gardner1982,Arratia2006,Poole2007,Rocha2009,Haward2012a,Haward2013,Haward2016c} It is considered to be a purely elastic phenomenon driven by elastic tensile stress on the strongly curving streamlines that pass through the birefringent strand in the vicinity of the stagnation point, a mechanism consistent with the well-known elastic instability criterion introduced by McKinley and coworkers.~\cite{Pakdel1996,McKinley1996,Oztekin1997,Haward2016c} The asymmetric flow state observed under uniaxial extension in the OUBER device (Fig.~\ref{100ppmPIV}(g)) appears to be similar in form to that observed in the OSCER (Fig.~\ref{100ppmPIV}(h)), but it is unclear exactly how this asymmetry is oriented in the 3D `axisymmetric' flow field of the OUBER. A detailed investigation is beyond the scope of the present work and will require careful visualization in 3D, possibly using microtomographic flow velocimetry.~\cite{Haward2023} A cursory investigation indicates that the asymmetry in the OUBER device (Fig.~\ref{100ppmPIV}(g)) is steady in time and not rotating around the stretching axis. Most likely, it selects a favored orientation due to the presence of the four planar inlet channels which break the perfect axisymmetry of the flow away from the $z$-axis, similar to the instability reported by Afonso et al.~\cite{Afonso2010} in numerical simulations of viscoelastic flow in a 6-arm cross-slot.
In this work, which is focused on extensional rheometry, we wish to avoid elastic instabilities. Each viscoelastic test fluid is driven to the point of instability only in order to determine the limiting values of $\text{Wi}_{nom}$ up to which the extensional flow field generated around the stagnation point remains stable and symmetric. For experimental determination of the extensional viscosity under each extensional flow configuration (Sec.~\ref{etaE}), the range of extension rates is restricted to $\text{Wi}_{nom} < 0.5 \text{Wi}_{nom,c}$ in order to ensure the measurement is made while the flow is stable and symmetric.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.75]{Fig7.pdf}
\caption {Quantification of extension rates determined from velocity fields measured with poly(acrylamide) solutions in (a,d) uniaxial, (b,e) planar, and (c,f) biaxial extension. (a,b,c) show normalized streamwise velocity profiles along the stretching axes measured for the 100~ppm PAA solution at various nominal extension rates and compared against the result for the Newtonian solvent. (d,e,f) show the measured extension rates as a function of the average flow velocity for all the tested polymer solutions and compared against the result for the Newtonian solvent. The extension rate in each case is determined by averaging the velocity gradient on the relevant axis over the spatial domain indicated in the respective plot. Solid lines are fits to the experimental data points of the form described in the main text.
}
\label{PAA_PIV}
\end{center}
\vspace{-0.1in}
\end{figure*}
The modification of the Newtonian flow field by the presence of polymer (i.e., the development of local minima in the velocity profiles, mentioned above) is rendered more apparent by extracting velocity profiles across the channel outlets. Fig.~\ref{PAA_trans_PIV} shows normalized profiles of the streamwise flow velocity taken across a channel outlet 1~mm downstream of the stagnation point. Such profiles for the 100~ppm PAA solution (for which example velocity fields at different extension rates are provided in Fig.~\ref{100ppmPIV}) are shown in comparison with the Newtonian case for uniaxial, planar, and biaxial extension in Fig.~\ref{PAA_trans_PIV}(a), (b), and (c), respectively. During uniaxial extension in the OUBER device (Fig.~\ref{PAA_trans_PIV}(a)), the velocity profile obtained for the polymer solution agrees well with that of the Newtonian fluid at low $\dot\varepsilon_{nom}$, but an increasingly pronounced local minimum develops at $x'=0$ as the flow rate through the device is increased. At high $\dot\varepsilon_{nom}$, the profile becomes asymmetric about $x'=0$ as the symmetric base flow becomes unstable and the elastic asymmetry (illustrated in Fig.~\ref{100ppmPIV}(g)) develops. For planar extension in the OSCER device (Fig.~\ref{PAA_trans_PIV}(b)), the velocity profile across the channel outlet for the 100~ppm PAA solution again agrees well with that of the Newtonian fluid at low $\dot\varepsilon_{nom}$. In this case, with increasing $\dot\varepsilon_{nom}$, there is a progressive modification to the velocity profile, with some flattening of the central peak at $y=0$, but the profile remains almost parabolic. For the 100~ppm PAA solution in biaxial extension in the OUBER device, no significant difference is noticeable compared to the Newtonian flow profile, even up to the highest values of $\dot\varepsilon_{B,nom}$ examined (Fig.~\ref{PAA_trans_PIV}(c)).
To illustrate the effects of increasing the polymer concentration, Fig.~\ref{PAA_trans_PIV}(d), (e), and (f) show the velocity profiles across the channel outlets for all of the tested fluids in uniaxial, planar, and biaxial extension, respectively. In each case the profiles are shown for a fixed value of the nominal extension rate (the highest for which all the flows are considered stable and symmetric). In general, the degree of flow modification caused by viscoelasticity becomes increasingly severe with increasing polymer concentration. For uniaxial extension at $\dot\varepsilon_{nom} = 1.66~\text{s}^{-1}$ (Fig.~\ref{PAA_trans_PIV}(d)), at the lower polymer concentrations of 50 and 100~ppm the velocity profile is flattened compared with the Newtonian case, but a local minimum in the centerline flow velocity develops and deepens as the PAA concentration is raised to 200~ppm and above. In planar extension at $\dot\varepsilon_{nom} = 4.16~\text{s}^{-1}$ (Fig.~\ref{PAA_trans_PIV}(e)), for low polymer concentrations of 50 and 100~ppm of PAA the profiles are essentially Newtonian-like. For 200~ppm of polymer, the profile becomes flattened compared with the Newtonian case, and for 400~ppm a local minimum around $y=0$ becomes evident. In the case of biaxial extension at $\dot\varepsilon_{B,nom} = 13.3~\text{s}^{-1}$, flow modification by the polymer is evident at 200 and 400~ppm of PAA, where the flow velocity is reduced about the centerline and the profiles become flattened (Fig.~\ref{PAA_trans_PIV}(f)). However, in biaxial extension, the flow profiles always remain essentially parabolic.
Velocity profiles with a local minimum in the streamwise velocity along the stretching axis have been reported a number of times in the literature studying stagnation point flows of viscoelastic fluids (e.g., Refs.~\citenum{Lyazid1980,Gardner1982,Dunlap1987,Harlen1990,Haward2010b,Haward2012c}). The reduction in flow velocity on the axis (relative to a Newtonian fluid) is associated with the localized stretching of polymers that pass near the stagnation point and are subsequently advected downstream along the outlet centerline. For polymer solutions that exhibit measurable flow-induced birefringence, this stretching results in the appearance of a characteristic `birefringent strand' localized along the stretching axis (e.g., Refs.~\citenum{Keller1985,Harlen1990,Harlen1992,Remmelgas1999,Becherer2008,Becherer2009,Haward2012c}), which is indicative of high extensional stress.~\cite{Fuller} Within the strand, the fluid behaves elastically and exhibits a much higher extensional viscosity than the fluid flowing outside the strand, where the polymer is relatively unstretched and the fluid remains Newtonian-like. The elastic strand thus acts as an internal stress boundary layer in the flow, driving velocity perturbations that resist the stretching and thus giving rise to the modified flow profile observed.~\cite{Harlen1990}
The modification to the Newtonian flow field by the stretching of the polymer along the extensional axis reduces the true extension rate along the stretching axis, as assessed in Fig.~\ref{PAA_PIV}. Normalized profiles of the streamwise velocity component along the stretching axis are shown for the 100~ppm PAA solution over a range of nominal extension rates under uniaxial, planar, and biaxial extension in Fig.~\ref{PAA_PIV}(a), (b), and (c), respectively. Here we only consider flows that are deemed stable and symmetric. Under uniaxial extension (Fig.~\ref{PAA_PIV}(a)), the velocity profile for the 100~ppm PAA solution agrees well with the Newtonian profile for $\dot\varepsilon_{nom} \leq 0.83~\text{s}^{-1}$, but increasingly deviates from the Newtonian profile as the imposed flow rate is increased beyond $\dot\varepsilon_{nom} \approx 0.83~\text{s}^{-1}$. Under planar extension (Fig.~\ref{PAA_PIV}(b)), as $\dot\varepsilon_{nom}$ is increased, only a slight deviation from the Newtonian profile is evident even at the highest nominal extension rates tested (up to 16.7~s$^{-1}$), while for biaxial extension (Fig.~\ref{PAA_PIV}(c)), the profiles for the polymer solution remain Newtonian-like for $\dot\varepsilon_{B,nom}$ up to 53.1~s$^{-1}$ (the highest imposed value).
In Fig.~\ref{PAA_PIV}(d), (e), and (f) we plot the measured extension rate (determined from velocity profiles such as those shown in Fig.~\ref{PAA_PIV}(a), (b), and (c)) as a function of the imposed flow rate or nominal extension rate in uniaxial, planar, and biaxial extension, respectively. Here, data are shown for all the tested polymer solutions and compared against the Newtonian result (shown by the dashed grey lines). For uniaxial extension (Fig.~\ref{PAA_PIV}(d)), we report $\dot\varepsilon = \partial w/\partial z$, which is evaluated along the $z$-axis for $\abs{z} \leq 4R$ (i.e., within the range over which the flow field is optimized~\cite{Haward2023}). Similarly, in planar extension (Fig.~\ref{PAA_PIV}(e)), we report $\dot\varepsilon = \partial u/\partial x$, evaluated along the $x$-axis for $\abs{x} \leq 12W$. In biaxial extension (Fig.~\ref{PAA_PIV}(f)), we report $\dot\varepsilon_B = \partial u'/\partial x'$, which is evaluated along the $x'$-axis for $\abs{x'} \leq R$ (the range over which axisymmetry applies such that $u'(x') \equiv u(x)$, see Fig.~\ref{NewtPIV}(f)). As shown in Fig.~\ref{PAA_PIV}(d), (e), and (f), for all three extensional flow configurations and all polymer concentrations, the polymer solutions follow the Newtonian trend for lower imposed nominal extension rates, but progressively deviate below the Newtonian trend at higher flow rates. The large polymeric stresses induced by the extensional stretching always retard the evolution of the velocity profile downstream of the stagnation point. For each polymeric fluid in each extensional flow configuration, the experimental data are well described by a curve of the form $\dot\varepsilon = \dot\varepsilon_{nom} - A{\dot\varepsilon_{nom}}^B$ (where $A$ and $B$ are fitting constants), as shown by the respective solid lines. The fitted curves allow calculation of the true strain rate for arbitrary imposed flow conditions with the given fluid.
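A fit of the empirical form $\dot\varepsilon = \dot\varepsilon_{nom} - A{\dot\varepsilon_{nom}}^B$ can be obtained by standard nonlinear least squares. The sketch below uses synthetic, noiseless data with assumed (hypothetical) constants; the actual fitted $A$ and $B$ values for each fluid are not reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

def true_rate_model(edot_nom, A, B):
    """Empirical form from the text: edot = edot_nom - A * edot_nom**B."""
    return edot_nom - A * edot_nom**B

# Synthetic data generated with assumed constants A = 0.05, B = 1.8
edot_nom = np.linspace(0.1, 15.0, 40)
edot_meas = true_rate_model(edot_nom, 0.05, 1.8)

# Recover the constants by nonlinear least squares
(A_fit, B_fit), _ = curve_fit(true_rate_model, edot_nom, edot_meas, p0=(0.01, 1.5))
```

Once fitted, `true_rate_model(edot_nom, A_fit, B_fit)` returns the true strain rate for any imposed nominal rate with that fluid.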
In general, from Figs.~\ref{PAA_trans_PIV} and \ref{PAA_PIV} it is evident that the deviation from Newtonian-like behavior becomes more severe with increasing polymer concentration and increasing extension rate. Also, the greatest effects are observed in uniaxial extension while biaxial extension causes the mildest modification of the flow field. The effect of planar extension appears to be intermediate between uniaxial and biaxial. A similar general trend is also apparent from the onset of instability, which for a given polymer solution occurs at the lowest nominal Weissenberg number in uniaxial extension, followed by planar extension, and finally biaxial extension. This may have implications for the utility of the different flows for extensional rheometry, as will be discussed further below.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.88]{Fig8.pdf}
\vspace{-0.05in}
\caption {Pressure drop measurements made with the Newtonian solvent and PAA solutions in uniaxial (left column), planar (middle column), and biaxial (right column) extensional flow. (a), (b) and (c) show representative raw measurements of the total pressure drop ($\Delta P_{tot}$) and the pressure due to shear ($\Delta P_{sh}$) versus time as the imposed flow rate $U$ is increased in steps. Inserts schematically indicate the flow configurations used for measurement of $\Delta P_{tot}$ and $\Delta P_{sh}$ in each case. The respective steady state plateau value of the pressure drop at each increment in $U$ is presented as a function of $U$ in (d), (e) and (f), where the dashed lines represent linear fits to $\Delta P_{sh}$ passing through the origin, for low $U < 2.5~\text{mm~s}^{-1}$. (g), (h), and (i) show the excess pressure drop ($\Delta P_{ex} = \Delta P_{tot}-\Delta P_{sh}$) for all of the tested fluids as a function of the nominal strain rate in uniaxial, planar, and biaxial extension, respectively. Dashed grey lines are linear fits through the data for the Newtonian fluid, with constants of proportionality $\approx1.3$~Pa~s, $\approx0.9$~Pa~s, and $\approx2.0$~Pa~s in parts (g), (h), and (i), respectively. Error bars on $\Delta P_{ex}$ for the polymer solutions represent the standard deviation over at least five repeated measurements.
}
\label{pressure}
\end{center}
\end{figure*}
\subsection{Pressure drop}
In Fig.~\ref{pressure}(a), (b), and (c), we illustrate raw measurements of the pressure drop in uniaxial, planar, and biaxial extension (respectively) using a few of the polymeric fluids and also the Newtonian solvent. As described in Sec.~\ref{press} (and illustrated schematically by the respective inserts to Fig.~\ref{pressure}(a), (b), and (c)), for each fluid and each flow configuration, the pressure drop is measured once with the device in full operation mode to obtain the total pressure drop $\Delta P_{tot}$, and once with half of the inlet channels and half of the outlet channels disabled in order to quantify the contribution of shear $\Delta P_{sh}$. The measurements are made by programming the syringe pumps to increment the average flow velocity through the device $U$ in a stepwise fashion, with sufficient time at each step for the pressure to rise and stabilize to a steady plateau value. Subsequently, the average plateau pressure drop is measured at each step in flow rate in order to obtain curves such as those shown in Fig.~\ref{pressure}(d), (e), and (f), which result from the raw pressure traces shown in Fig.~\ref{pressure}(a), (b), and (c), respectively. Note that in each flow configuration, for the Newtonian fluid $\Delta P_{tot} \approx \Delta P_{sh}$, with $\Delta P_{sh} \propto U$ (as indicated by the dashed grey lines). For lower concentration polymer solutions, $\Delta P_{sh} \propto U$ (as indicated by the dashed black and red lines in Fig.~\ref{pressure}(d) and (e), respectively), although at higher polymer concentrations, $\Delta P_{sh}$ may increase superlinearly at higher imposed flow rates (as shown by the deviation of the experimental data points from the dashed blue line in Fig.~\ref{pressure}(f)). 
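Extracting the steady plateau value at each flow-rate step from a raw pressure trace can be sketched as below; the step boundaries and the fraction of each step treated as settled are illustrative assumptions, not the actual acquisition parameters:

```python
import numpy as np

def plateau_pressures(t, P, steps, settle_fraction=0.5):
    """Average pressure over the settled tail of each flow-rate step.
    t, P: time [s] and pressure [Pa] arrays; steps: (t_start, t_end) pairs,
    one per imposed flow rate.  Only the final settle_fraction of each step
    is averaged, discarding the initial transient while the pressure rises
    towards its plateau."""
    plateaus = []
    for t0, t1 in steps:
        mask = (t >= t1 - settle_fraction * (t1 - t0)) & (t < t1)
        plateaus.append(P[mask].mean())
    return np.array(plateaus)
```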
Most notably, for the polymer solutions at low average flow velocities $\Delta P_{tot} \approx \Delta P_{sh}$ (as we also observe for the Newtonian fluid), but beyond a certain value of $U$, the two curves diverge and $\Delta P_{tot}$ rises clearly above $\Delta P_{sh}$, leading to a significant and clearly measurable excess pressure drop $\Delta P_{ex} = \Delta P_{tot} - \Delta P_{sh}$.
In Fig.~\ref{pressure}(g), (h), and (i), we present $\Delta P_{ex}$ as a function of the nominal extension rate for all of the tested fluids under uniaxial, planar, and biaxial extension (respectively). Here, error bars represent the standard deviation over a minimum of five repeated measurements. Scatter and uncertainty in the data for the Newtonian solvent are significant, so for better clarity the data in each plot are represented by a linear fit (dashed grey line). In general, at low nominal deformation rates the data from the polymer solutions follow a roughly linear trend (similar to the Newtonian fluid), but depart from that trend as the extension rate increases, turning upwards and tending towards eventual plateau values. Note that, in several cases, the measured pressure difference ($\Delta P_{tot}$ and/or $\Delta P_{sh}$) exhibits fluctuations at higher extension rates (even though the flow field may be deemed steady and symmetric, see Sec.~\ref{flowfield}). For this reason, the maximum extension rates at which the data are curtailed in Fig.~\ref{PAA_PIV}(d), (e), and (f) and in Fig.~\ref{pressure}(g), (h), and (i) (respectively) may not always precisely match, and in some cases there is an increase in the reported error bars in $\Delta P_{ex}$ at the highest rates tested.
\subsection{Extensional rheometry}
\label{SecetaE}
With the required experimental data in hand, i.e., the true measured extension rates and the excess pressure drops (see Figs.~\ref{PAA_PIV} and~\ref{pressure}, respectively), we turn to simple theory in order to understand how to obtain a robust estimate of the extensional viscosity from our measurements.
Considering for simplicity the 2D flow through the OSCER geometry, a macroscopic power balance leads to the following approximate expression (see Appendix~\ref{appendixA} for details):
\begin{equation}
2\Delta P_{ex} Q \approx (\sigma_{xx}-\sigma_{yy})\dot\varepsilon\mathcal{V}_P
,
\label{OSCERDtau}
\end{equation}
which allows estimation of the extensional viscosity:
\begin{equation}
\eta_{P} \approx 2\Delta P_{ex} Q/ \dot\varepsilon^2\mathcal{V}_P
,
\label{eta_p}
\end{equation}
where $\mathcal{V}_P$ is an appropriate volume of fluid within the device over which $\sigma_{xx}-\sigma_{yy}$ can be considered `homogeneous' for averaging purposes.
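Equation~\ref{eta_p} translates directly into code; a minimal sketch with placeholder inputs:

```python
def eta_P_power_balance(dP_ex, Q, edot, V_P):
    """Planar extensional viscosity from the macroscopic power balance:
    2 * dP_ex * Q ~ (sigma_xx - sigma_yy) * edot * V_P, hence
    eta_P ~ 2 * dP_ex * Q / (edot**2 * V_P).
    dP_ex [Pa], Q [m^3/s], edot [1/s], V_P [m^3]."""
    return 2.0 * dP_ex * Q / (edot**2 * V_P)
```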
For a Newtonian fluid, or for a viscoelastic fluid flowing at $\text{Wi} \ll 0.5$, $\mathcal{V}_P$ might be expected to roughly equate with the volume of the optimized region of the OSCER geometry, $V_{OSC} =480W^2H~(=4.8 \times10^{-9}~\text{m}^3$ for the specific device being used here with $H=1$~mm and $W=0.1$~mm, Sec.~\ref{geom}).
Since for a Newtonian fluid in planar extension the Trouton ratio is known to be $\text{Tr}=(\sigma_{xx}-\sigma_{yy})/\dot\varepsilon \eta = 4$, Eq.~\ref{OSCERDtau} can be rewritten and rearranged to give:
\begin{equation}
\mathcal{V}_{P,Newt} = 2 \Delta P_{ex} Q / 4\dot\varepsilon^2 \eta
.
\label{V_Newt}
\end{equation}
We know that for Newtonian flow in the OSCER device, $\dot\varepsilon \approx \dot\varepsilon_{nom} = 0.1U/W = 0.1Q/4W^2H$. Furthermore, from the fit to the Newtonian data shown in Fig.~\ref{pressure}(h), we know that $\Delta P_{ex} \approx 0.9 \dot\varepsilon$. Hence, Eq.~\ref{V_Newt} can be evaluated to give a unique value $\mathcal{V}_{P,Newt}\approx126W^2H$ (or $\approx 0.25V_{OSC}$). We should indeed anticipate that $\mathcal{V}_{P,Newt} < V_{OSC}$ since the extensional kinematics are not entirely homogeneous over the whole OSCER geometry due to the shear induced at the channel walls.~\cite{Haward2012c,Haward2016c} In fact, a volume of magnitude $0.25V_{OSC}$ corresponds well to the volume of the OSCER geometry over which the local extension rate is within 10\% of $\dot\varepsilon_{nom}$, i.e., the region where the extensional kinematics are almost homogeneous.
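The arithmetic leading to $\mathcal{V}_{P,Newt}\approx126W^2H$ can be checked numerically. Substituting $Q = 40W^2H\dot\varepsilon$ (from $\dot\varepsilon = 0.1Q/4W^2H$) and $\Delta P_{ex} = k\dot\varepsilon$ (with $k\approx0.9$~Pa~s from the Newtonian fit) into Eq.~\ref{V_Newt} makes the $\dot\varepsilon$-dependence cancel, leaving $\mathcal{V}_{P,Newt} = (20k/\eta)\,W^2H$. The solvent shear viscosity $\eta$ is not quoted in this section; the value used in the test below is back-calculated to reproduce the stated result and should be treated purely as an assumption:

```python
def V_P_Newt(W, H, eta, k=0.9):
    """Eq. (V_Newt) with Q = 40*W**2*H*edot and dP_ex = k*edot substituted:
    V = 2*k*edot * 40*W**2*H*edot / (4*edot**2*eta) = (20*k/eta) * W**2 * H.
    W, H: channel half-width and depth [m]; eta: solvent shear viscosity
    [Pa s]; k: slope of the Newtonian dP_ex fit [Pa s]."""
    return 20.0 * k / eta * W**2 * H
```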
As discussed in Sec.~\ref{flowfield} and elsewhere, for a viscoelastic fluid in an extensional flow at $\text{Wi} > 0.5$, a localized elastic `birefringent strand' develops along the stretching axis within which $\Delta \sigma$ becomes dominant.~\cite{Harlen1990,Harlen1992,Becherer2008,Becherer2009,Haward2012c} Accordingly, we expect that for $\text{Wi} > 0.5$, the relevant volume $\mathcal{V}_P$ to use in Eqs.~\ref{OSCERDtau} or \ref{eta_p}, would be that of the birefringent strand.
In principle, in certain cases, it may be possible to directly measure the dimensions of the birefringent elastic strand in order to determine its volume experimentally as a function of the Weissenberg number. However, this is not always practical or even possible; for instance in the present case, the fluids being used are too weakly birefringent to make the required optical measurements. For this reason, we seek a simple and pragmatic approach to estimate the volume of the elastic strand $\mathcal{V}_{P,strand}$, which may be used more generally in Eqs.~\ref{OSCERDtau} and \ref{eta_p} when $\text{Wi} \geq 0.5$.
For the FENE-P model in planar extension, an approximate scaling relation for the dimensionless half-width $w_{strand}^*$ of the sheet-like birefringent strand in terms of $\text{Wi}$ and the polymer extensibility $L$ has been presented by Becherer et al.~\cite{Becherer2008} The scaling has been shown to adequately describe measurements of the birefringent strands that develop in the OSCER device for $\text{Wi} \geq 0.5$.~\cite{Haward2012c} In the asymptotic limit of high $\text{Wi}$, the dimensionless strand half-width scales as $w_{strand}^* \sim 1/L$.~\cite{Crowley1976,Rallison1988,Harlen1992,Renardy2006,Becherer2008} A dimensional strand half-width can be computed as $w_{strand} \approx l_{opt}/L$, where $l_{opt}=15W$ is the lengthscale over which the flow field in the OSCER device is optimized and the flow is purely extensional.
Accordingly, we approximate the asymptotic volume of the sheet-like birefringent strand as $\mathcal{V}_{P,strand} \approx 1800 W^2H/L$ (strand length $l_{strand}=30W$, width $2w_{strand} = 30W/L$, height $h_{strand} = 2H$), which for $L=143$ (Sec.~\ref{fluids}) yields $\mathcal{V}_{P,strand} \approx \mathcal{V}_{P,Newt}/10$. We propose the following simple piecewise approximation to the volume $\mathcal{V}_P$ as a function of $\text{Wi}$ for planar extensional flow of dilute solutions of flexible polymers in the OSCER device:
\begin{equation}
\mathcal{V}_P=
\begin{cases}
\mathcal{V}_{P,Newt} \approx 126W^2H & \text{:} \quad \text{Wi}<0.5 \\
\mathcal{V}_{P,strand} \approx 1800W^2H/L & \text{:} \quad \text{Wi}\geq0.5 \\
\end{cases}
.
\label{Vp}
\end{equation}
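As a consistency check, the strand volume entering Eq.~\ref{Vp} is simply the product of the strand dimensions quoted above:
\begin{equation*}
\mathcal{V}_{P,strand} \approx l_{strand} \times 2w_{strand} \times h_{strand} = 30W \times \frac{30W}{L} \times 2H = \frac{1800W^2H}{L}.
\end{equation*}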
By following similar arguments, we arrive at the following equations to evaluate $\eta_E$ in the case of uniaxial extensional flow in the OUBER device:
\begin{equation}
2\Delta P_{ex} Q \approx (\sigma_{zz}-\sigma_{xx})\dot\varepsilon\mathcal{V}_E
,
\label{OUBER_uni_Dtau}
\end{equation}
i.e.:
\begin{equation}
\eta_{E} \approx 2\Delta P_{ex} Q/ \dot\varepsilon^2\mathcal{V}_E
,
\label{eta_E}
\end{equation}
where
\begin{equation}
\mathcal{V}_E=
\begin{cases}
\mathcal{V}_{E,Newt} \approx 47R^3 & \text{:} \quad \text{Wi}<0.5 \\
\mathcal{V}_{E,strand} \approx 250\uppi R^3/L & \text{:} \quad \text{Wi}\geq0.5 \\
\end{cases}
.
\label{VE}
\end{equation}
Here, $\mathcal{V}_{E,Newt} \approx 0.31 V_{OUB}$, where $V_{OUB} =154R^3$ is the volume of the optimized region of the OUBER device. For the particular OUBER device being used in this study, with $R=0.4$~mm (Sec.~\ref{geom}), $V_{OUB} \approx 9.86 \times 10^{-9}~\text{m}^3$. For uniaxial extension, Harlen et al.~\cite{Harlen1992} have shown for the FENE-CR model that the asymptotic radius of the birefringent strand at high $\text{Wi}$ scales as $1/\sqrt{L}$, which is consistent with experimental measurements made in classical opposed-jets apparatus,~\cite{Muller1988,Cathey1990} as well as in a 6-arm cross-slot device.~\cite{Haward2019b} In Eq.~\ref{VE}, $\mathcal{V}_{E,strand}$ represents the volume of a columnar birefringent strand of asymptotic diameter $10R/\sqrt{L}$ and length $10R$.
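Explicitly, this is the volume of a cylinder of radius $5R/\sqrt{L}$ and length $10R$:
\begin{equation*}
\mathcal{V}_{E,strand} = \uppi\left(\frac{5R}{\sqrt{L}}\right)^2 \times 10R = \frac{250\uppi R^3}{L}.
\end{equation*}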
\begin{figure*}[ht]
\begin{center}
\includegraphics[scale=0.59]{Fig9.pdf}
\caption {Extensional viscosity as a function of the extensional strain rate for the Newtonian solvent and the dilute PAA solutions, determined from excess pressure drop measurements made in (a) uniaxial, (b) planar, and (c) biaxial elongational flow.
}
\label{etaE}
\end{center}
\end{figure*}
For biaxial extensional flow in the OUBER device, we obtain:
\begin{equation}
\Delta P_{ex} Q \approx (\sigma_{xx}-\sigma_{zz})\dot\varepsilon_B\mathcal{V}_B
,
\label{OUBER_bi_Dtau}
\end{equation}
i.e.:
\begin{equation}
\eta_{B} \approx \Delta P_{ex} Q/ \dot\varepsilon_B^2\mathcal{V}_B
,
\label{eta_B}
\end{equation}
where
\begin{equation}
\mathcal{V}_B=
\begin{cases}
\mathcal{V}_{B,Newt} \approx 37.5R^3 & \text{:} \quad \text{Wi}<0.5 \\
\mathcal{V}_{B,strand} \approx 250\uppi R^3/L & \text{:} \quad \text{Wi}\geq0.5 \\
\end{cases}
,
\label{VB}
\end{equation}
and in this case $\mathcal{V}_{B,Newt} \approx 0.24 V_{OUB}$. For biaxial stagnation point extension, the thickness of the disk-like birefringent ``strand'' region (e.g., Refs.~\citenum{Frank1971,Backus2002}) that forms over the $z=0$ plane has not been well characterized in the literature. Additionally, since constitutive models disagree regarding the response of polymeric solutions to biaxial extension, we do not wish to rely on the prediction of a specific (e.g., FENE-type) model to describe the dimension of the resulting birefringent region. Limited experimental data obtained from a dilute solution of near-monodisperse atactic polystyrene in a 6-arm cross-slot device over a range of $\text{Wi}$ in both uniaxial and biaxial elongation are available in Ref.~\citenum{Haward2019b}. Assuming that the asymptotic strand radius in uniaxial extension scales as $1/\sqrt{L}$ (as established above), the data of Ref.~\citenum{Haward2019b} indicate that the asymptotic thickness of the birefringent region in biaxial extension scales as $\sim 1/L$, similar to the result for planar elongation.~\cite{Becherer2008,Haward2012c} Accordingly, in Eq.~\ref{VB}, $\mathcal{V}_{B,strand}$ represents the volume of a birefringent disk of asymptotic thickness $10R/L$ and diameter $10R$.
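Explicitly, this is the volume of a disk of radius $5R$ and thickness $10R/L$:
\begin{equation*}
\mathcal{V}_{B,strand} = \uppi(5R)^2 \times \frac{10R}{L} = \frac{250\uppi R^3}{L},
\end{equation*}
which happens to coincide with the expression for $\mathcal{V}_{E,strand}$ in Eq.~\ref{VE}, despite the different shapes of the two birefringent regions.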
Note that the computation of $\mathcal{V}_{P,Newt}$, $\mathcal{V}_{E,Newt}$, and $\mathcal{V}_{B,Newt}$ using the excess pressure drop measured for the Newtonian fluid serves as a Newtonian calibration of the respective flow, ensuring the correct value of $\text{Tr}$ will be obtained for the Newtonian fluid when those volumes are used to compute the extensional viscosity from Eqs.~\ref{eta_p}, \ref{eta_E}, and \ref{eta_B}, respectively.
The volumes computed for the Newtonian fluid and for the birefringent strand regions (given in Eqs.~\ref{Vp}, \ref{VE}, and \ref{VB}) should be valid for $\text{Wi}\rightarrow0$ and $\text{Wi}\rightarrow \infty$, respectively. The step change in $\mathcal{V}_P$, $\mathcal{V}_E$, and $\mathcal{V}_B$ at $\text{Wi}=0.5$ is clearly unphysical; however, at present the functional form that the volume should take across this transition between Newtonian-like and viscoelastic behavior is unclear. The question is further complicated if we are to consider a $\text{Wi}$-dependent strand volume, which vanishes for $\text{Wi}\leq0.5$,~\cite{Becherer2008} suggesting a possibly nonmonotonic variation of the volume with $\text{Wi}$. Despite this shortcoming, we consider the formulation described above to be an advance on earlier estimates of the extensional viscosity from pressure drop measurements. For instance, in the cross-slot and OSCER devices, the rather coarse approximation $\eta_P \approx \Delta P_{ex}/\dot\varepsilon$ was generally used (e.g., Ref.~\citenum{Haward2012c}), although some prior attempts have also been made to account for the dimensions of the birefringent strand.~\cite{Haward2010,Haward2010b} Given the experimentally established linear relation between $Q$ and $\dot\varepsilon$ (at least for the Newtonian fluid), it can be seen that our new approximation to the planar extensional viscosity measured in the OSCER device (Eq.~\ref{eta_p}) can be written $\eta_P \approx (\Delta P_{ex}/\dot\varepsilon) \times F$, where $F = 8W^2H/0.1\mathcal{V}_P$ is a dimensionless correction factor essentially consisting of a ratio of geometric parameters. An analogy can be drawn with the determination of the shear viscosity from the experimentally measured pressure drop along a pipe or channel of arbitrary cross-section, where the pressure drop must be scaled by the ratio of the hydraulic diameter to the length of the conduit.~\cite{Walters}
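The form of the correction factor can be made explicit in a few lines: combining Eq.~\ref{OSCERDtau} with $\eta_P=(\sigma_{xx}-\sigma_{yy})/\dot\varepsilon$ and the kinematic relation $Q = 4W^2H\dot\varepsilon/0.1$ gives
\begin{equation*}
\eta_P \approx \frac{2\Delta P_{ex}Q}{\dot\varepsilon^2\mathcal{V}_P} = \frac{2\Delta P_{ex}}{\dot\varepsilon^2\mathcal{V}_P} \times \frac{4W^2H\dot\varepsilon}{0.1} = \frac{\Delta P_{ex}}{\dot\varepsilon} \times \frac{8W^2H}{0.1\mathcal{V}_P},
\end{equation*}
recovering $F = 8W^2H/0.1\mathcal{V}_P$.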
The extensional viscosities $\eta_E(\dot\varepsilon)$, $\eta_P(\dot\varepsilon)$, and $\eta_B(\dot\varepsilon_B)$, computed as described above, are shown for each of the experimental test fluids in Fig.~\ref{etaE}(a), (b), and (c), respectively. In each plot, the Newtonian result (dashed gray line) is computed using the respective fit to the excess pressure drop data shown in Fig.~\ref{pressure}(g,h,i), resulting in a constant value for the extensional viscosity equal to $3\eta_s$, $4\eta_s$, and $6\eta_s$ in uniaxial, planar, and biaxial extension, respectively. At low extension rates, the results obtained for the dilute polymer solutions generally approach a constant value close to (or slightly higher than) that of the Newtonian solvent, as expected given the slightly higher shear viscosities of the polymer solutions (Table~\ref{tab1}). With increasing extension rate, each of the polymeric fluids undergoes a gradual increase in the extensional viscosity, before an abrupt jump takes place at a specific extension rate (corresponding to $\text{Wi}=0.5$) that decreases with increasing polymer concentration (due to the increasing relaxation time, see Table~\ref{tab1}). Subsequently, for further increasing extension rate, there is a general trend for the extensional viscosity to increase gradually towards an apparent plateau.
\begin{figure*}[ht]
\begin{center}
\includegraphics[scale=0.75]{Fig10.pdf}
\caption {Apparent Trouton ratio $\text{Tr}_{app}$ as a function of the Weissenberg number $\text{Wi}$ for (a) 50 ppm PAA, (b) 100 ppm PAA, (c) 200 ppm PAA, and (d) 400 ppm PAA in uniaxial, planar and biaxial elongational flow. Data points are experimentally determined from pressure loss measurements. Lines are computed from the FENE-P model with the solvent-to-total viscosity ratio $\beta$ matched to the respective fluid (given in Table~\ref{tab1}), and the extensibility parameter (or stretch ratio) $L=143$ (Sec.~\ref{fluids}).
}
\label{Tr_vs_Wi}
\end{center}
\end{figure*}
Similarities and differences between the responses of the fluids to the different modes of extensional flow are made more obvious by viewing their apparent Trouton ratio as a function of the Weissenberg number in Fig.~\ref{Tr_vs_Wi}. Here, we also plot the response predicted by the FENE-P model under homogeneous uniaxial, planar and biaxial elongation conditions, where the model parameters $\beta$ and $L^2$ are matched to those of the fluids (solvent-to-total viscosity ratio $\beta$ given in Table~\ref{tab1}, and extensibility $L=143$, as computed in Sec.~\ref{fluids}). In general, within experimental uncertainty, for all polymer solutions at low $\text{Wi}$, $\text{Tr}_{app}$ approaches the expected (i.e., Newtonian) limiting value. Also, consistent with the model prediction, in all cases $\text{Wi}=0.5$ marks the point of an abrupt increase in $\text{Tr}_{app}$. For the most dilute 50~ppm PAA solution (Fig.~\ref{Tr_vs_Wi}(a)), for $\text{Wi}>0.5$ the experimental data obtained from uniaxial, planar, and biaxial extension closely follow the respective FENE-P prediction towards the high-$\text{Wi}$ plateau, where $\text{Tr}_{app}(\text{uniaxial}) = \text{Tr}_{app}(\text{planar}) =2\times \text{Tr}_{app}(\text{biaxial})$. As the PAA concentration is increased through Fig.~\ref{Tr_vs_Wi}(b), (c), and (d), the agreement with the FENE-P model prediction becomes less convincing, with an increasingly gradual approach of the experimental data towards the eventual plateau in the extensional viscosity and a less distinct difference between the response in biaxial extension and that in uniaxial and planar extension.
These changes with polymer concentration may be because the polymer solutions (although dilute with $c/c^* \leq 0.1$) cannot all be considered ``ultradilute'', and at higher polymer concentrations intermolecular interactions may play an increasingly important role when the molecules become stretched by the flow.~\cite{Dunlap1987,Harrison1998,Clasen2006,Stoltz2006} Experimental data from uniaxial extension~\cite{Clasen2006} and molecular dynamics simulations in planar extension~\cite{Stoltz2006} suggest that the ultradilute limit, for which interchain interactions are negligible even at high polymer extensions, is approached as the polymer concentration is decreased towards $c/c^* \approx 0.01$, similar to the concentration regime of our 50~ppm PAA solution. Notably, at the higher polymer concentrations tested (Fig.~\ref{Tr_vs_Wi}(c,d)), we are unable to see a convincing high-Weissenberg number plateau in $\text{Tr}_{app}$ for uniaxial extensional flow. In these cases, the onset of elastic flow instability curtails the measurement before a plateau is reached. In fact, the flow modification is so severe in these cases (see Fig.~\ref{PAA_PIV}(d)) that $\dot\varepsilon$ almost ceases to increase with the imposed flow velocity. Since the excess pressure drop across the device continues to increase with the imposed flow velocity (Fig.~\ref{pressure}(g)), this causes an apparent upturn in $\eta_E$ and $\text{Tr}_{app}$ to an asymptote as $\text{Wi}\rightarrow1$, before the flow field breaks symmetry.
\section{Discussion and Conclusions}
\label{SumCon}
In this work we have used the new OUBER device (developed in Part I of this paper~\cite{Haward2023}), and also the pre-existing OSCER device,~\cite{Haward2012c} to perform the first experimental comparison of the extensional rheology of dilute mobile polymer solutions in planar, uniaxial and biaxial extensional flow. In each case the extensional viscosity is assessed using common methods: micro-particle image velocimetry is used to quantify the relevant extensional strain rate along the stretching axis (or axes) and excess pressure drop measurements are used to estimate the respective tensile stress difference as a function of the extension rate. The estimate of the tensile stress difference is based on a new analysis of the macroscopic power balance for each extensional flow configuration. In each case, the Reynolds number of the flow is maintained sufficiently low for inertial contributions to the pressure drop to be ignored.
Several differences are observed between the responses of the polymer solutions in the various extensional flow configurations. Specifically, for a given nominal extension rate, the flow field is most severely modified (compared to that of a Newtonian fluid) in uniaxial extension. By contrast, in biaxial extensional flow of the polymer solutions the kinematics remain essentially Newtonian-like even at much higher nominal extension rates. Planar extensional deformations of the polymer solutions have an intermediate effect, showing more significant flow modification than in biaxial extension, but being less severe than in uniaxial extension. Stability constraints follow a similar trend: for a given polymer solution, uniaxial flow destabilizes and becomes asymmetric at the lowest extension rate, while biaxial flow remains stable to much higher extension rates, with planar flow being intermediate.
Our estimates of the extensional viscosities and apparent Trouton ratios of the polymer solutions, based on our new analysis method, are broadly consistent with the predictions of the FENE-P constitutive model. Within experimental error, the data approach the expected limiting values at low extension rates or Weissenberg numbers, and all of the polymeric test solutions exhibit an increase in the extensional viscosity (or $\text{Tr}_{app}$) at $\text{Wi}=0.5$. For $\text{Wi}>0.5$, the extensional viscosities of the polymeric fluids generally approach high-$\text{Wi}$ plateau values. For our most dilute 50~ppm polymer solution (for which $c/c^* \approx0.01$ and which can be considered ``ultradilute''), the high-$\text{Wi}$ plateau values of the extensional viscosity agree very well with the prediction of the FENE-P model, for which $\eta_E = \eta_P = 2\eta_B$. However, this agreement progressively deteriorates with increasing polymer concentration. This is likely because, although the polymer chains are dilute and non-interacting under quiescent conditions (with $c/c^* \leq 0.1$), interchain interactions become increasingly important at higher concentrations as the molecules unravel in the extensional flow.
From a practical point of view, an important consideration is the early onset of instability in the uniaxial extensional flow. This can make the high-$\text{Wi}$ plateau of the extensional viscosity difficult to reach, and therefore limits the utility of the device for measurement of $\eta_E$. The greater relative stability of planar and biaxial extensional flows allows measurements to be made to much higher extension rates (or larger limiting Weissenberg numbers), and allows plateau values of $\eta_P$ and $\eta_B$ to be found more convincingly. On the other hand, it will be of fundamental interest to understand the three-dimensional form of the symmetry-breaking flow instability that occurs in uniaxial extension, which is not readily ascertained from the 2D flow velocimetry performed in the present work (see Fig.~\ref{100ppmPIV}(g)). It will also be important to better understand why uniaxial extension is the most prone to the onset of elastic instability. We speculate that this is related to the greater thickness of the birefringent strand (radius $\sim 1/\sqrt{L}$ in uniaxial extension, but half-width $\sim 1/L$ in planar and biaxial extension), which likely explains why the flow modification along the stretching axis is most severe in uniaxial extension and probably also promotes the onset of instability.
We reiterate that our estimates of the extensional viscosities $\eta_E,~\eta_P,$ and $\eta_B$ are just that (i.e., \emph{estimates}), as will necessarily always be the case since generating a spatially homogeneous extensional flow throughout the whole of the rheometric device is practically impossible. However, we have for the first time designed and fabricated microfluidic devices that generate reasonably homogeneous approximations to uniaxial, planar and biaxial extension over spatial regions much larger than the characteristic lengthscale of the geometry, and which also permit comparable assessments to be made of the tensile stress difference as a function of the imposed extension rate, all at low levels of fluid inertia. We believe that our new approach to estimating the tensile stress difference from the excess pressure drop, based on an approximate solution to the macroscopic power balance (see Sec.~\ref{SecetaE}), represents a significant advance on prior analyses of similar devices. Nevertheless, there remains significant scope for further improvement. Specifically, at the transition between Newtonian-like and viscoelastic behavior at $\text{Wi}=0.5$, the abrupt step down in the volume used to compute the extensional viscosity in Eqs.~\ref{eta_p}, \ref{eta_E}, and \ref{eta_B} is unphysical. Clearly this transition should be smooth, but at present it is unclear how it should be described mathematically. Numerical simulations may provide insight into this computational rheology problem, although it is possible that polydispersity of the polymer molecular weight also contributes to the form of this transition region, which may be confounding. Furthermore, the volume of the birefringent strand for $\text{Wi}>0.5$ should strictly depend on $\text{Wi}$, which further complicates the analysis.
An analytical solution (based on the FENE-P model) for the width of the birefringent strand as a function of $\text{Wi}$ is available for planar extension,~\cite{Becherer2008} but the corresponding elastic boundary layer analysis needs to be solved (and confirmed experimentally) for uniaxial and biaxial extension. In our ongoing work, we intend to focus our research efforts towards addressing these issues.
\begin{acknowledgments}
S.J.H, S.V., and A.Q.S. gratefully acknowledge the support of the Okinawa Institute of Science and Technology Graduate University (OIST) with subsidy funding from the Cabinet Office, Government of Japan, along with funding from the Japan Society for the Promotion of Science (JSPS, Grant Nos. 21K03884 and 22K14184). MAA acknowledges the support by LA/P/0045/2020 (ALiCE), UIDB/00532/2020 and UIDP/00532/2020 (CEFT), funded by national funds through FCT/MCTES (PIDDAC). We are indebted to Prof. R. J. Poole for insightful discussions.
\end{acknowledgments}
\section*{Conflict of Interest Statement}
The authors have no conflicts to disclose.
\section*{Data Availability Statement}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction}
\label{intro}
In \cite{bgwexact}, with an eye toward expanding the class of locally compact groups $G$ for which the Baum-Connes conjecture holds, the authors study ``crossed-product functors'' that take an action of $G$ on a $C^*$-algebra and produce an ``exotic crossed product'' between the full and reduced ones, in a functorial manner.
In \cite{graded}, inspired by \cite{BrownGuentner}, we studied certain quotients of $C^*(G)$
that lie ``above'' $C^*_r(G)$ --- namely those that carry a quotient coaction. We characterized these intermediate (which we now call ``large'') quotients as those for which the annihilator $E$, in the Fourier-Stieltjes algebra $B(G)$, of the kernel of the quotient map is a $G$-invariant weak* closed ideal containing the reduced Fourier-Stieltjes algebra $B_r(G)$ (which we now call ``large ideals'' of $B(G)$). We went on to show how, if $\alpha$ is an action of $G$ on a $C^*$-algebra $B$, large ideals $E$ induce
exotic crossed products
$B\rtimes_{\alpha,E}G$ intermediate between the full and reduced crossed products $B\rtimes_\alpha G$ and $B\rtimes_{\alpha,r} G$. One of the reasons this interested us is the possibility of ``$E$-crossed-product duality'' for a coaction $\delta$ of $G$ on a $C^*$-algebra $A$: namely, that the canonical surjection $\Phi:A\rtimes_\delta G\rtimes_{\widehat\delta} G\to A\otimes \mathcal K(L^2(G))$ descends to an isomorphism $A\rtimes_\delta G\rtimes_{\widehat\delta,E} G\cong A\otimes\mathcal K$. Crossed-product duality $A\rtimes_\delta G\rtimes_{\widehat\delta,r} G\cong A\otimes\mathcal K$ for normal coactions and $A\rtimes_\delta G\rtimes_{\widehat\delta} G\cong A\otimes\mathcal K$ for maximal coactions are the extreme cases with $E=B_r(G)$ and $B(G)$, respectively. We (rashly) conjectured that every coaction satisfies $E$-crossed-product duality for some $E$, and moreover that the dual coaction on every $E$-crossed product $B\rtimes_{\alpha,E} G$ satisfies $E$-crossed-product duality.
In \cite{BusEch} Buss and Echterhoff disproved the first of the above conjectures, and in \cite{exotic} we proved the second conjecture (also proved independently in \cite{BusEch}).
(Note: in \cite[Introduction]{exotic} we wrote
``We originally wondered whether every coaction satisfies $E$-crossed product duality for some $E$. In \cite[Conjecture~6.12]{graded} we even conjectured that this would be true for dual coactions.'' This is slightly inaccurate --- \cite[Conjecture~6.14]{graded} concerns dual coactions, while \cite[Conjecture~6.12]{graded} says
``Every coaction satisfies $E$-crossed-product duality for some $E$.'')
In \cite[Section~3]{exotic} we showed that every large ideal $E$ of $B(G)$ induces a transformation $(A,\delta)\mapsto (A^E,\delta^E)$ of $G$-coactions, where $A^E=A/A_E$ and
$A_E=\ker (\text{\textup{id}}\otimes q_E)\circ\delta$, and where in turn $q_E:C^*(G)\to C^*_E(G):=C^*(G)/{}\ann E$ is the quotient map.
In this paper we further study this assignment $(A,\delta)\mapsto (A^E,\delta^E)$.
When $(A,\delta)=(B\rtimes_\alpha G,\widehat\alpha)$, the composition
\[
(B,\alpha)\mapsto (B\rtimes_\alpha G,\widehat\alpha)
\mapsto (B\rtimes_{\alpha,E} G,\widehat\alpha^E)
\]
was shown to be functorial in \cite[Corollary~6.5]{BusEch};
here we show that $(A,\delta)\mapsto (A^E,\delta^E)$ is functorial, giving an alternate proof of the Buss-Echterhoff result.
In fact, we study more general functors on the category of coactions of $G$,
of which the functors induced by large ideals of $B(G)$ are special cases.
We are most interested in the connection with the crossed-product functors of \cite{bgwexact}.
In particular, we introduce a
``minimal exact and Morita compatible'' coaction functor.
When this functor is composed with the full-crossed-product functor for actions,
the result is a crossed-product functor in the sense of \cite{bgwexact}.
We briefly discuss various possibilities for how these functors are related:
for example, is the composition mentioned in the preceding sentence equal to the minimal exact and Morita compatible crossed-product functor of \cite{bgwexact}?
Also, is the greatest lower bound of the coaction functors defined by large ideals itself defined by a large ideal?
These are just two of the many questions that arise naturally from these considerations.
Unfortunately, at this early stage we have more questions than answers.
After a short section on preliminaries, in \secref{categories} we define the categories we will use for our functors.
In numerous previous papers, we have used ``nondegenerate categories'' of $C^*$-algebras and their equivariant counterparts. But these categories are inappropriate for the current paper, primarily due to our need for short exact sequences. Rather, here we must use ``classical'' categories, where the homomorphisms go between the $C^*$-algebras themselves, not into multiplier algebras. In order to avail ourselves of tools that have been developed for the equivariant nondegenerate categories, we include a brief summary of how the basic theory works for the classical categories.
Interestingly, the crossed products are the same in both versions of the categories (see Corollaries~\ref{same coaction product} and \ref{same product}).
In \secref{sec:coaction functor} we define \emph{coaction functors}, which are a special type of functor on the classical category of coactions.
Composing such a coaction functor with the full-crossed-product functor on actions, we get crossed-product functors in the sense of Baum, Guentner, and Willett (\cite{bgwexact}); it remains an open problem whether every such crossed-product functor is of this form.
Maximalization and normalization are examples of coaction functors, but there are lots more --- for example, the functors induced by large ideals of the Fourier-Stieltjes algebra (see \secref{large}).
In \secref{sec:coaction functor}
we also define a partial ordering on coaction functors, and prove in \thmref{glb} that the class of coaction functors is complete in the sense that every nonempty collection of them has a greatest lower bound.
We also introduce the general notions of \emph{exact} or \emph{Morita compatible} coaction functors, and prove in \thmref{glb exact} that they are preserved by greatest lower bounds.
We show in \propref{compose} that our partial order, as well as our exactness and Morita compatibility, are consistent with those of \cite{bgwexact}.
To help prepare for the study of coaction functors associated to large ideals,
in \secref{decreasing} we introduce \emph{decreasing coaction functors},
and show how Morita compatibility takes a particularly simple form for these functors in \propref{decreasing morita}.
In \secref{E coaction functor} we study the coaction functors $\tau_E$ induced by large ideals $E$ of $B(G)$.
Perhaps interestingly, maximalization is not among these functors.
We show that these functors $\tau_E$ are decreasing in \propref{E coaction functor}, and how the test for exactness simplifies significantly for them in \propref{exact E functor}. Moreover, $\tau_E$ is automatically Morita compatible (see \propref{morita}).
Composing maximalization followed by $\tau_E$, we get a related functor that we call \emph{$E$-ization}. We show that these functors are also Morita compatible in \thmref{E morita}. Although $E$-ization and $\tau_E$ have similar properties, they are not naturally isomorphic functors (see \remref{mE}).
The outputs of $E$-ization are precisely the coactions we call \emph{$E$-coactions},
namely those for which \emph{$E$-crossed-product duality} holds \cite[Theorem~4.6]{exotic} (see also \cite[Theorem~5.1]{BusEch}). \thmref{equivalence} shows that $\tau_E$ gives an equivalence of maximal coactions with $E$-coactions.
We close \secref{E coaction functor} with some open problems that mainly concern the application of the coaction functors $\tau_E$ to the theory of \cite{bgwexact}.
Finally, \appxref{module lemmas} supplies a few tools that show how some properties of coactions can be more easily handled using the associated $B(G)$-module structure.
We thank the referee for comments that significantly improved our paper.
\section{Preliminaries}\label{prelim}
We refer to \cite[Appendix~A]{enchilada}, \cite{maximal} for background material on coactions of locally compact groups on $C^*$-algebras, and \cite[Chapters~1--2]{enchilada} for
imprimitivity bimodules
and their linking algebras.
Throughout, $G$ will denote a locally compact group, and $A,B,C,\dots$ will denote $C^*$-algebras.
Recall from \cite[Definition~1.14]{enchilada} that the \emph{multiplier bimodule} of an $A-B$ imprimitivity bimodule $X$ is defined as $M(X)=\mathcal L_B(B,X)$, where $B$ is regarded as a Hilbert module over itself in the canonical way. Also recall \cite[Corollary~1.13]{enchilada} that $M(X)$ becomes an $M(A)-M(B)$ correspondence in a natural way.
The \emph{linking algebra} of an $A-B$ imprimitivity bimodule $X$ is $L(X)=\smtx{A&X\\\widetilde X&B}$,
where $\widetilde X$ is the \emph{dual} $B-A$ imprimitivity bimodule.
$A$, $B$, and $X$ are recovered from $L(X)$ via the \emph{corner projections} $p=\smtx{1&0\\0&0},q=\smtx{0&0\\0&1}\in M(L(X))$.
The multiplier algebra of $L(X)$ decomposes as $M(L(X))=\smtx{M(A)&M(X)\\M(\widetilde X)&M(B)}$.
We usually omit the lower left corner of the linking algebra, writing $L(X)=\smtx{A&X\\{*}&B}$, since it takes care of itself.
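In terms of the corner projections, the recovery of the constituent pieces takes the explicit form
\begin{equation*}
A \cong pL(X)p, \qquad X \cong pL(X)q, \qquad B \cong qL(X)q,
\end{equation*}
with the module actions and inner products implemented by multiplication in $L(X)$.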
Also recall from \cite[Lemma~1.52]{enchilada} (see also \cite[Remark~(2) on page 307]{er:multipliers}) that nondegenerate homomorphisms of imprimitivity bimodules correspond bijectively to nondegenerate homomorphisms of their linking algebras.
For an action $(A,\alpha)$ of $G$,
we use the following notation for the (full) crossed product $A\rtimes_\alpha G$:
\begin{itemize}
\item $i_A=i_A^\alpha:A\to M(A\rtimes_\alpha G)$
and $i_G=i_G^\alpha:G\to M(A\rtimes_\alpha G)$
comprise the universal covariant homomorphism $(i_A,i_G)$.
\item $\widehat\alpha$ is the dual coaction on $A\rtimes_\alpha G$.
\end{itemize}
On the other hand, for the reduced crossed product $A\rtimes_{\alpha,r} G$ we use:
\begin{itemize}
\item $\Lambda:A\rtimes_\alpha G\to A\rtimes_{\alpha,r} G$ is the regular representation.
\item $i_A^r=i_A^{\alpha,r}=\Lambda\circ i_A$
and $i_G^r=i_G^{\alpha,r}=\Lambda\circ i_G$
are the canonical maps into $M(A\rtimes_{\alpha,r} G)$.
\item $\widehat\alpha^n$ is the dual coaction on $A\rtimes_{\alpha,r} G$.
\end{itemize}
We will need to work extensively with morphisms between coactions,
in particular (but certainly not only) with maximalization and normalization.
In the literature, the notation for these maps has not yet stabilized.
Recall that a coaction $(A,\delta)$ is called \emph{normal} if the canonical surjection
$\Phi:A\rtimes_\delta G\rtimes_{\widehat\delta} G\to A\otimes \mathcal K(L^2(G))$
factors through an isomorphism of the reduced crossed product
$\Phi:A\rtimes_\delta G\rtimes_{\widehat\delta,r} G\to A\otimes \mathcal K(L^2(G))$,
and \emph{maximal} if $\Phi$ itself is an isomorphism.
One convention is,
for a coaction $(A,\delta)$ of $G$,
to write
\[
q^m_A:(A^m,\delta^m)\to (A,\delta)
\]
for a maximalization,
and
\[
q^n_A:(A,\delta)\to (A^n,\delta^n)
\]
for a normalization.
We will use this convention for maximalization, but we will need the letter ``$q$'' for other similar purposes, and it would be confusing to keep using it for normalization.
Instead, we will use
\[
\Lambda=\Lambda_A:(A,\delta)\to (A^n,\delta^n)
\]
for normalization --- this is supposed to remind us that for crossed products by actions the regular representation
\[
\Lambda:(A\rtimes_\alpha G,\widehat\alpha) \to (A\rtimes_{\alpha,r} G,\widehat\alpha^n)
\]
is a normalization.
\subsection*{$B(G)$-modules}
Every coaction $(A,\delta)$ of $G$ induces $B(G)$-module structures on both $A$ and $A^*$:
for $f\in B(G)$ define
\begin{align*}
&f\cdot a=(\text{\textup{id}}\otimes f)\circ\delta(a)\righttext{for}a\in A\\
&(\omega\cdot f)(a)=\omega(f\cdot a)\righttext{for}\omega\in A^*,a\in A.
\end{align*}
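For example, if $(A,\alpha)$ is an action and $\delta=\widehat\alpha$ is the dual coaction on $A\rtimes_\alpha G$, then, since $\widehat\alpha(i_A(a)i_G(s))=i_A(a)i_G(s)\otimes s$ for $a\in A,s\in G$, the module action on such products is just multiplication by the values of $f$:
\[
f\cdot \bigl(i_A(a)i_G(s)\bigr)=f(s)\,i_A(a)i_G(s)\righttext{for}f\in B(G).
\]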
Many properties of coactions can be handled using these module structures rather than the coactions themselves.
For example (see \appxref{module lemmas}),
letting $(A,\delta)$ and $(B,\epsilon)$ be coactions of $G$:
\begin{enumerate}
\item
A homomorphism
$\phi:A\to B$ is $\delta-\epsilon$ equivariant,
meaning $\epsilon\circ\phi=\bar{\phi\otimes\text{\textup{id}}}\circ\delta$,
if and only if
\[
\phi(f\cdot a)=f\cdot \phi(a)\righttext{for all}f\in B(G),a\in A.
\]
\item
An ideal $I$ of $A$ is \emph{weakly} $\delta$-invariant,
meaning $I\subset \ker\bar{q\otimes\text{\textup{id}}}\circ\delta$, where $q:A\to A/I$ is the quotient map,
if and only if
\[
B(G)\cdot I\subset I,
\]
because
the proof of \cite[Lemma~3.11]{graded} shows that
\[
\ker\bar{q\otimes\text{\textup{id}}}\circ\delta=\{a\in A:B(G)\cdot a\subset I\}.
\]
\end{enumerate}
If $I$ is a weakly $\delta$-invariant ideal of $A$, then in fact $I=\ker\bar{q\otimes\text{\textup{id}}}\circ\delta$, and the quotient map $q$ is $\delta-\delta^I$ equivariant for a unique coaction $\delta^I$ on $A/I$, which we call the \emph{quotient coaction}.
Since the slice map $\text{\textup{id}}\otimes f:M(A\otimes C^*(G))\to M(A)$ is strictly continuous \cite[Lemma~1.5]{lprs},
the $B(G)$-module structure extends to $M(A)$, and moreover $m\mapsto f\cdot m$ is strictly continuous on $M(A)$ for every $f\in B(G)$.
\subsection*{Short exact sequences}
Several times we will need the following elementary lemma.
\begin{lem}\label{nine}
Let
\[
\xymatrix{
&0\ar[d]&0\ar[d]&0\ar[d]
\\
0\ar[r]&A_1\ar[r]^{\phi_1}\ar[d]_{\iota_A}
&B_1\ar[r]^{\psi_1}\ar[d]_{\iota_B}
&C_1\ar[r]\ar[d]_{\iota_C}
&0
\\
0\ar[r]&A_2\ar[r]^{\phi_2}\ar[d]_{\pi_A}
&B_2\ar[r]^{\psi_2}\ar[d]_{\pi_B}
&C_2\ar[r]\ar[d]_{\pi_C}
&0
\\
0\ar[r]&A_3\ar[r]^{\phi_3}\ar[d]
&B_3\ar[r]^{\psi_3}\ar[d]
&C_3\ar[r]\ar[d]
&0
\\
&0&0&0
}
\]
be a commutative diagram of $C^*$-algebras,
where the columns and the middle row are exact.
Suppose that the $\iota$'s are inclusions of ideals and the $\pi$'s are quotient maps.
Then the bottom \(interesting\) row is exact if and only if both
\begin{equation}\label{exact 1}
\phi_2(A_1)=\phi_2(A_2)\cap B_1
\end{equation}
and
\begin{equation}\label{exact 2}
\phi_2(A_2)+B_1\supset \psi_2^{-1}(C_1).
\end{equation}
\end{lem}
\begin{proof}
Since $\psi_3\circ\pi_B=\pi_C\circ\psi_2$
and $\pi_C$ and $\psi_2$ are both surjective,
$\psi_3$ is surjective,
so
the bottom row is automatically exact at $C_3$.
Thus, the only items to consider are exactness of
the bottom row at $A_3$ and $B_3$,
i.e., whether
$\phi_3$ is injective and $\phi_3(A_3)=\ker\psi_3$.
$\phi_3$ is injective if and only if
$\ker \pi_A=\ker(\pi_B\circ\phi_2)$,
which, since $\phi_2$ is injective, is equivalent to
\eqref{exact 1}.
Since $\psi_2\circ\phi_2=0$ and $\pi_A$ is surjective, $\psi_3\circ\phi_3=0$,
so $\phi_3(A_3)\subset\ker\psi_3$ automatically.
Since
$\pi_B$ is surjective,
$\phi_3(A_3)\supset\ker \psi_3$
if and only if
\[
\pi_B^{-1}(\phi_3(A_3))\supset \pi_B^{-1}(\ker\psi_3).
\]
Since
$\pi_B^{-1}(\phi_3(A_3))$ consists of all $b\in B_2$
for which
\[
\pi_B(b)\in \phi_3(A_3)=\phi_3(\pi_A(A_2))=\pi_B(\phi_2(A_2)),
\]
equivalently
for which
\[
b\in \phi_2(A_2)+B_1,
\]
we see that
\[
\pi_B^{-1}(\phi_3(A_3))=\phi_2(A_2)+B_1.
\]
On the other hand,
\[
\pi_B^{-1}(\ker\psi_3)=\ker(\psi_3\circ \pi_B)=\ker(\pi_C\circ\psi_2)
=\psi_2^{-1}(C_1).
\]
Thus, the bottom row is exact at $B_3$ if and only if
\eqref{exact 2} holds.
\end{proof}
\begin{rem}
In the above lemma, we were interested in characterizing exactness of the bottom (interesting) row of the diagram.
\cite[Lemma~3.5]{bgwexact} does this
in terms of subsets of the spectrum $\widehat{B_2}$,
which could just as well be done with subsets of
$\prim B_2$,
but we instead did it directly in terms of ideals of $B_2$.
Note that, although the $\iota$'s were inclusion maps of ideals and the $\pi$'s were the associated quotient maps, for technical reasons we did \emph{not} make the analogous assumptions regarding the middle row.
There is a standard characterization from homological algebra,
namely that the bottom row is exact if and only if the top row is ---
this is sometimes called the nine lemma, and is an easy consequence of the snake lemma.
However, this doesn't seem to lead to a simplification of the above proof.
\end{rem}
\section{The categories and functors}\label{categories}
We want to study coaction functors.
Among other things, we want to apply the theory we've developed in \cite{graded, exotic}
concerning large ideals $E$ of $B(G)$.
On the other hand, it is important to us in this paper for
our theory to be consistent with the crossed-product functors of \cite{bgwexact}.
In particular, we want to be able to apply our coaction functors to short exact sequences.
But now a subtlety arises: some of us working in noncommutative duality for $C^*$-dynamical systems have grown accustomed to doing everything in the ``nondegenerate'' categories, where the morphisms are nondegenerate homomorphisms into multiplier algebras (possibly preserving some extra structure).
But the maps in a short exact sequence
\[
\xymatrix@C-10pt{
0\ar[r]& I\ar[r]^\phi& A\ar[r]^\psi& B\ar[r]&0
}
\]
are not of this type, most importantly $\phi$.
So, we must replace the nondegenerate category by something else.
We can't just allow arbitrary homomorphisms into multiplier algebras, because they wouldn't be composable.
We can't require ``extendible homomorphisms'' into multiplier algebras, because the inclusion of an ideal won't typically have that property.
Thus, it seems we need to use the ``classical category'' of homomorphisms between the $C^*$-algebras, not into multiplier algebras.
This is what \cite{bgwexact} uses, so presumably our best chance of seamlessly connecting with their work is to do likewise.
Since most of the existing categorical theory of coactions uses nondegenerate categories, it behooves us to establish the basic theory we need in the context of the classical categories, which we do below.
One drawback to this is that the covariant homomorphisms and crossed products can't be constructed using morphisms from the classical $C^*$-category --- so, it seems we have to abandon some of the appealing features of the nondegenerate category.
\begin{defn}
In the \emph{classical category $\mathbf{C}^*$ of $C^*$-algebras}, a morphism $\phi:A\to B$ is a *-homomorphism from $A$ to $B$ in the usual sense (no multipliers).
\end{defn}
\begin{defn}
In the \emph{classical category $\mathbf{Coact}$ of coactions}, a morphism $\phi:(A,\delta)\to (B,\epsilon)$ is a morphism $\phi:A\to B$ in $\mathbf{C}^*$ such that
the diagram
\[
\xymatrix@C+10pt{
A \ar[r]^-\delta \ar[d]_\phi
&\widetilde M(A\otimes C^*(G)) \ar[d]^{\bar{\phi\otimes\text{\textup{id}}}}
\\
B \ar[r]_-\epsilon
&\widetilde M(B\otimes C^*(G))
}
\]
commutes,
and we call $\phi$ a \emph{$\delta-\epsilon$ equivariant} homomorphism.
\end{defn}
To make sense of the above commuting diagram, recall that for any $C^*$-algebra $C$,
\[
\widetilde M(A\otimes C)=\{m\in M(A\otimes C):m(1\otimes C)\cup (1\otimes C)m\subset A\otimes C\},
\]
and that for any homomorphism $\phi:A\to B$ there is a canonical extension to a homomorphism
\[
\bar{\phi\otimes\text{\textup{id}}}:\widetilde M(A\otimes C)\to \widetilde M(B\otimes C),
\]
by \cite[Proposition~A.6]{enchilada}.
It is completely routine to verify that $\mathbf{C}^*$ and $\mathbf{Coact}$ are categories, i.e., there are identity morphisms and there is an associative composition.
\begin{rem}
Thus, a coaction is not itself a morphism in the classical category; this will cause no trouble.
\end{rem}
To work in the classical category of coactions, we need to be just a little bit careful with covariant homomorphisms and crossed products.
We write $w_G$ for the unitary element of
$M(C_0(G)\otimes C^*(G))=C_b(G,M^\beta(C^*(G)))$
defined by $w_G(s)=s$, where we have identified $G$ with its canonical image in $M(C^*(G))$,
and where the superscript $\beta$ means that we use the strict topology on $M(C^*(G))$.
\begin{defn}\label{cov def}
A \emph{degenerate covariant homomorphism} of a coaction $(A,\delta)$ to a $C^*$-algebra $B$ is a pair $(\pi,\mu)$,
where
$\pi:A\to M(B)$ and $\mu:C_0(G)\to M(B)$ are homomorphisms such that
$\mu$ is nondegenerate
and
the diagram
\[
\xymatrix@C+60pt{
A \ar[r]^-\delta \ar[d]_\pi
&\widetilde M(A\otimes C^*(G)) \ar[d]^{\bar{\pi\otimes\text{\textup{id}}}}
\\
M(B) \ar[r]_-{\ad(\mu\otimes\text{\textup{id}})(w_G)\circ (\cdot\otimes 1)}
&M(B\otimes C^*(G))
}
\]
commutes,
where the bottom arrow is the map
$b\mapsto \ad(\mu\otimes\text{\textup{id}})(w_G)(b\otimes 1)$.
If $\pi:A\to M(B)$ happens to be nondegenerate, we sometimes refer to $(\pi,\mu)$ as a \emph{nondegenerate covariant homomorphism} for clarity.
\end{defn}
\begin{rem}
The homomorphisms $\pi$ and $\mu$ are not morphisms in the classical category $\mathbf{C}^*$; this will cause no trouble, but does present a danger of confusion.
\end{rem}
\begin{rem}
Thus, in our new definition of degenerate covariant homomorphism, we include all the usual nondegenerate covariant homomorphisms, and we add more, allowing the homomorphism $\pi$ of $A$ (but not the homomorphism $\mu$ of $C_0(G)$) to be degenerate.
\end{rem}
\begin{rem}
We wrote $M(B\otimes C^*(G))$, rather than the relative multiplier algebra $\widetilde M(B\otimes C^*(G))$,
in the above diagram, because $\bar{\pi\otimes\text{\textup{id}}}$ will in general not map $\widetilde M(A\otimes C^*(G))$ into $\widetilde M(B\otimes C^*(G))$
since $\pi$ does not map $A$ into $B$.
\end{rem}
Although we have apparently enlarged the supply of covariant homomorphisms, in some sense we have not.
In \lemref{nd} below we use the following terminology:
given $C^*$-algebras $A\subset B$, the \emph{idealizer} of $A$ in $B$ is $\{b\in B:bA\cup Ab\subset A\}$.
\begin{lem}\label{nd}
Let $(\pi,\mu)$ be a degenerate covariant homomorphism of $(A,\delta)$ to $B$, as in \defnref{cov def}.
Put
\[
B_0=\clspn\{\pi(A)\mu(C_0(G))\}.
\]
Then:
\begin{enumerate}
\item
$B_0=\clspn\{\mu(C_0(G))\pi(A)\}$.
\item
$B_0$ is a $C^*$-subalgebra of $M(B)$.
\item
$\pi$ and $\mu$ map into the idealizer $D$ of $B_0$ in $M(B)$.
Let $\rho:D\to M(B_0)$
be the homomorphism
given by
\[
\rho(m)b_0=mb_0\righttext{for}m\in D\subset M(B),b_0\in B_0\subset M(B),
\]
and let $\pi_0=\rho\circ\pi:A\to M(B_0)$ and $\mu_0=\rho\circ\mu:C_0(G)\to M(B_0)$.
Then $(\pi_0,\mu_0)$ is a nondegenerate covariant homomorphism of $(A,\delta)$ to $B_0$.
\item
For all $a\in A$ and $f\in C_0(G)$ we have
\[
\pi_0(a)\mu_0(f)=\pi(a)\mu(f)\in B_0.
\]
\end{enumerate}
\end{lem}
\begin{proof}
For (1), by symmetry
it suffices to show that for $a\in A$ and $f\in C_0(G)$ we have
\[
\mu(f)\pi(a)\in B_0,
\]
and we use an old trick from \cite[proof of Lemma~2.5]{lprs}:
since $A(G)$ is dense in $C_0(G)$,
it suffices to take $f\in A(G)$,
and then since $A(G)$ is a nondegenerate $C^*(G)$-module via $\<y,g\cdot x\>=\<xy,g\>$ for $x,y\in C^*(G),g\in A(G)$,
by Cohen's Factorization Theorem we can write $f=g\cdot x$.
Then the following approximation suffices:
\begin{align*}
\mu(f)\pi(a)
&=\<(\mu\otimes\text{\textup{id}})(w_G),\text{\textup{id}}\otimes f\>\pi(a)
\\&=\<(\mu\otimes\text{\textup{id}})(w_G)(\pi(a)\otimes 1),\text{\textup{id}}\otimes f\>
\\&=\<\bar{\pi\otimes\text{\textup{id}}}(\delta(a))(\mu\otimes\text{\textup{id}})(w_G),\text{\textup{id}}\otimes g\cdot x\>
\\&=\<(\pi\otimes\text{\textup{id}})((1\otimes x)\delta(a))(\mu\otimes\text{\textup{id}})(w_G),\text{\textup{id}}\otimes g\>
\\&\approx \sum_i\<(\pi\otimes\text{\textup{id}})(a_i\otimes x_i)(\mu\otimes\text{\textup{id}})(w_G),\text{\textup{id}}\otimes g\>
\\&\hspace{1in}\text{for finitely many $a_i\in A,x_i\in C^*(G)$}
\\&= \sum_i\<(\pi(a_i)\otimes x_i)(\mu\otimes\text{\textup{id}})(w_G),\text{\textup{id}}\otimes g\>
\\&= \sum_i\pi(a_i)\<(\mu\otimes\text{\textup{id}})(w_G),\text{\textup{id}}\otimes g\cdot x_i\>
\\&= \sum_i\pi(a_i)\mu(g\cdot x_i).
\end{align*}
From (1) it follows that $B_0$ is a $*$-subalgebra of $M(B)$, giving (2).
(3).
It is now clear that
\[
\pi(A)B_0\cup B_0\pi(A)\subset B_0,
\]
and similarly for $\mu$,
so both $\pi$ and $\mu$ map into $D$.
It is also clear that $\pi_0$ and $\mu_0$ map nondegenerately into $M(B_0)$.
The covariance property for $(\pi_0,\mu_0)$ follows quickly from that of $(\pi,\mu)$:
if $a\in A$ then
\begin{align*}
\ad (\mu_0\otimes\text{\textup{id}})(w_G)(\pi_0(a)\otimes 1)
&=(\rho\otimes\text{\textup{id}})\circ \ad (\mu\otimes\text{\textup{id}})(w_G)(\pi(a)\otimes 1)
\\&=(\rho\otimes\text{\textup{id}})\circ \bar{\pi\otimes\text{\textup{id}}}\circ\delta(a)
\\&=\bar{\pi_0\otimes\text{\textup{id}}}\circ\delta(a).
\end{align*}
(4).
This follows from the construction.
\end{proof}
Let $(A\rtimes_\delta G,j_A,j_G)$ be the usual crossed product of the coaction $(A,\delta)$,
i.e., $(j_A,j_G)$ is a nondegenerate covariant homomorphism of $(A,\delta)$ to $A\rtimes_\delta G$
that is universal in the sense that if $(\pi,\mu)$ is any nondegenerate covariant homomorphism of $(A,\delta)$ to a $C^*$-algebra $B$,
then there is a unique homomorphism
$\pi\times\mu:A\rtimes_\delta G\to M(B)$
such that
\begin{align*}
\pi\times\mu\circ j_A&=\pi\\
\pi\times\mu\circ j_G&=\mu,
\end{align*}
equivalently such that
\begin{equation}\label{universal}
\pi\times\mu\bigl(j_A(a)j_G(f)\bigr)=\pi(a)\mu(f)\righttext{for all}a\in A,f\in C_0(G).
\end{equation}
\begin{cor}\label{same coaction product}
With the above notation, $(j_A,j_G)$ is also universal among degenerate covariant homomorphisms \(in the sense of \defnref{cov def}\).
More precisely:
for any degenerate covariant homomorphism $(\pi,\mu)$ of $(A,\delta)$ to $B$ as in \defnref{cov def},
there is a unique homomorphism $\pi\times\mu:A\rtimes_\delta G\to M(B)$
satisfying \eqref{universal}.
\end{cor}
\begin{proof}
Let $\pi_0,\mu_0,B_0$ be as in the preceding lemma.
Then we have a unique homomorphism $\pi_0\times\mu_0:A\rtimes_\delta G\to M(B_0)$
such that
\[
\pi_0\times\mu_0\bigl(j_A(a)j_G(f)\bigr)=\pi_0(a)\mu_0(f)\righttext{for all}a\in A,f\in C_0(G).
\]
By construction, $\pi_0\times\mu_0$ maps $A\rtimes_\delta G$ into $B_0$.
Since $B_0\subset M(B)$,
we can regard $\pi_0\times\mu_0$ as a homomorphism $\pi\times\mu:A\rtimes_\delta G\to M(B)$,
and \eqref{universal} holds because
$\pi_0(a)\mu_0(f)=\pi(a)\mu(f)\in B_0$ for all $a\in A,f\in C_0(G)$,
by part~(4) of the preceding lemma.
Since the products $j_A(a)j_G(f)$ densely span $A\rtimes_\delta G$,
the homomorphism $\pi\times\mu$ is unique.
\end{proof}
Similarly, and more easily, for actions:
\begin{defn}
In the \emph{classical category $\mathbf{Act}$ of actions}, a morphism $\phi:(A,\alpha)\to (B,\beta)$ is a morphism $\phi:A\to B$ in $\mathbf{C}^*$ such that
\[
\beta_s\circ\phi=\phi\circ \alpha_s\righttext{for all}s\in G.
\]
\end{defn}
\begin{defn}\label{cov act}
A \emph{degenerate covariant homomorphism} of an action $(A,\alpha)$ to a $C^*$-algebra $B$ is a pair $(\pi,u)$, where $\pi:A\to M(B)$ is a homomorphism and $u:G\to M(B)$ is a strictly continuous unitary homomorphism such that
\[
\pi\circ\alpha_s=\ad u_s\circ\pi\righttext{for all}s\in G.
\]
\end{defn}
We call $(\pi,u)$ \emph{nondegenerate} if $\pi:A\to M(B)$ is.
\begin{lem}\label{nd act}
Let $(\pi,u)$ be a degenerate covariant homomorphism of an action $(A,\alpha)$ to $B$,
and put
\[
B_0=\clspn\{\pi(A)u(C^*(G))\},
\]
where we use the same notation $u$ for the associated nondegenerate homomorphism $u:C^*(G)\to M(B)$.
Then:
\begin{enumerate}
\item
$B_0=\clspn\{u(C^*(G))\pi(A)\}$.
\item
$B_0$ is a $C^*$-subalgebra of $M(B)$.
\item
$\pi$ and $u$ map into the idealizer $D$ of $B_0$ in $M(B)$.
Let $\rho:D\to M(B_0)$
be the homomorphism
given by
\[
\rho(m)b_0=mb_0\righttext{for}m\in D\subset M(B),b_0\in B_0\subset M(B),
\]
and let $\pi_0=\rho\circ\pi:A\to M(B_0)$ and $u_0=\rho\circ u:G\to M(B_0)$.
Then $(\pi_0,u_0)$ is a nondegenerate covariant homomorphism of $(A,\alpha)$ to $B_0$.
\item
For all $a\in A$ and $c\in C^*(G)$ we have
\[
\pi_0(a)u_0(c)=\pi(a)u(c)\in B_0.
\]
\end{enumerate}
\end{lem}
Let $(A\rtimes_\alpha G,i_A,i_G)$ be the usual crossed product of the action $(A,\alpha)$,
i.e., $(i_A,i_G)$ is a nondegenerate covariant homomorphism of $(A,\alpha)$ to $A\rtimes_\alpha G$
that is universal in the sense that if $(\pi,u)$ is any nondegenerate covariant homomorphism of $(A,\alpha)$ to a $C^*$-algebra $B$,
then there is a unique homomorphism
$\pi\times u:A\rtimes_\alpha G\to M(B)$
such that
\begin{equation}\label{universal act}
\pi\times u\bigl(i_A(a)i_G(c)\bigr)=\pi(a)u(c)\righttext{for all}a\in A,c\in C^*(G).
\end{equation}
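Recall also that, via the identification of $C_c(G,A)$ with a dense subalgebra of $A\rtimes_\alpha G$, the integrated form of a covariant pair is given by the familiar formula
\[
\pi\times u(f)=\int_G \pi(f(s))u_s\,ds\righttext{for}f\in C_c(G,A),
\]
where $ds$ denotes a fixed left Haar measure on $G$.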
\begin{cor}\label{same product}
With the above notation, $(i_A,i_G)$ is also universal among degenerate covariant homomorphisms \(in the sense of \defnref{cov act}\):
for any degenerate covariant homomorphism $(\pi,u)$ of $(A,\alpha)$ to $B$ as in \defnref{cov act},
there is a unique homomorphism $\pi\times u:A\rtimes_\alpha G\to M(B)$
satisfying \eqref{universal act}.
\end{cor}
If $\phi:(A,\delta)\to (B,\epsilon)$ in $\mathbf{Coact}$,
then a routine adaptation of the usual arguments shows that we get a morphism
\[
\phi\rtimes G=(j_B\circ\phi)\times j_G^B:(A\rtimes_\delta G,\widehat\delta)\to (B\rtimes_\epsilon G,\widehat\epsilon)
\]
in $\mathbf{Act}$,
and similarly if $\phi:(A,\alpha)\to (B,\beta)$ in $\mathbf{Act}$ we get a morphism
\[
\phi\rtimes G=(i_B\circ\phi)\times i_G^B:(A\rtimes_\alpha G,\widehat\alpha)\to (B\rtimes_\beta G,\widehat\beta)
\]
in $\mathbf{Coact}$.
Thus we have crossed-product functors between the classical categories of coactions and actions.
It is also routine to verify that
if $(A,\delta)$ is a coaction then
the canonical surjection
\[
\Phi:A\rtimes_\delta G\rtimes_{\widehat\delta} G\to A\otimes\mathcal K
\]
is a natural transformation between the double crossed-product functor and stabilization.\footnote{It is completely routine to verify that stabilization $A\mapsto A\otimes\mathcal K$ is a functor on the classical category $\mathbf{C}^*$.}
We need to check that normalization and maximalization behave appropriately in the new coaction category.
\subsection*{Maximalization}
A \emph{maximalization} of a coaction $(A,\delta)$ consists of a maximal coaction $(A^m,\delta^m)$ and a surjective morphism $q^m:(A^m,\delta^m)\to (A,\delta)$ in $\mathbf{Coact}$ such that
\[
q^m\rtimes G:A^m\rtimes_{\delta^m} G\to A\rtimes_\delta G
\]
is an isomorphism.
Existence of maximalizations is established in \cite[Theorem~6.4]{fischer}, \cite[Theorem~3.3]{maximal}.
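For example (anticipating the discussion of normalization below), since dual coactions on full crossed products are maximal, for any action $(A,\alpha)$ the regular representation
\[
\Lambda:(A\rtimes_\alpha G,\widehat\alpha)\to (A\rtimes_{\alpha,r} G,\widehat\alpha^n)
\]
is a maximalization of $\widehat\alpha^n$: it is surjective, and $\Lambda\rtimes G$ is an isomorphism because $\Lambda$ is a normalization of $\widehat\alpha$.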
To make maximalization into a functor on the classical category of coactions, we
note that the argument of \cite[Proof of Lemma~6.2]{fischer} carries over to give an appropriate version of the universal property:
given coactions $(A,\delta)$ and $(B,\epsilon)$,
with $\epsilon$ maximal,
and a morphism $\phi:(B,\epsilon)\to (A,\delta)$ in $\mathbf{Coact}$,
there is a unique morphism $\widetilde\phi$ in $\mathbf{Coact}$ making the diagram
\[
\xymatrix@C+10pt{
(B,\epsilon) \ar@{-->}[r]^-{\widetilde\phi} \ar[dr]_\phi
&(A^m,\delta^m) \ar[d]^{q^m}
\\
&(A,\delta)
}
\]
commute.
Thus, given a morphism $\phi:(A,\delta)\to (B,\epsilon)$ in $\mathbf{Coact}$,
there is a unique morphism $\phi^m$ making the diagram
\[
\xymatrix@C+10pt{
(A^m,\delta^m) \ar@{-->}[r]^-{\phi^m} \ar[d]_{q^m_A}
&(B^m,\epsilon^m) \ar[d]^{q^m_B}
\\
(A,\delta) \ar[r]_-\phi
&(B,\epsilon)
}
\]
commute in $\mathbf{Coact}$. Uniqueness makes the assignments $\phi\mapsto \phi^m$ functorial,
and the \emph{maximalizing maps} $q^m$ give a natural transformation from the maximalization functor to the identity functor.
Also, the universal property implies that the maximalization functor is faithful,
i.e., if $\phi,\psi:(A,\delta)\to (B,\epsilon)$ are distinct morphisms in $\mathbf{Coact}$,
then the maximalizations $\phi^m,\psi^m:(A^m,\delta^m)\to (B^m,\epsilon^m)$ are also distinct.
\begin{rem}\label{choice}
It is important for us that maximalization is a \emph{functor};
however, when we refer to $(A^m,\delta^m)$ as ``the'' maximalization of a coaction $(A,\delta)$,
we do not have in mind a specific $C^*$-algebra $A^m$; rather, we regard the maximalization as being characterized up to isomorphism by its universal properties. For the purpose of having a functor, we imagine that a choice of maximalization has been made for every coaction; any other choices would give a naturally isomorphic functor.
On the other hand, whenever we have a maximal coaction $(B,\epsilon)$, we may call a morphism $\phi:(B,\epsilon)\to (A,\delta)$ with the defining property \emph{a maximalization} of $(A,\delta)$.
\end{rem}
\subsection*{Normalization}
A \emph{normalization} of a coaction $(A,\delta)$ consists of a normal coaction $(A^n,\delta^n)$ and a surjective morphism $\Lambda:(A,\delta)\to (A^n,\delta^n)$ in $\mathbf{Coact}$ such that
\[
\Lambda\rtimes G:A\rtimes_{\delta} G\to A^n\rtimes_{\delta^n} G
\]
is an isomorphism.
Existence of normalizations is established in \cite[Proposition~2.6]{fullred}.
To make normalization into a functor on the classical category of coactions, we
note that
\cite[Lemma~2.1]{maximal} says that,
given a morphism $\phi:(A,\delta)\to (B,\epsilon)$ in $\mathbf{Coact}$,
there is a unique morphism $\phi^n$ making the diagram
\[
\xymatrix@C+10pt{
(A,\delta) \ar[r]^-{\phi} \ar[d]_{\Lambda_A}
&(B,\epsilon) \ar[d]^{\Lambda_B}
\\
(A^n,\delta^n) \ar@{-->}[r]_-{\phi^n}
&(B^n,\epsilon^n)
}
\]
commute in $\mathbf{Coact}$. Uniqueness makes the assignments $\phi\mapsto \phi^n$ functorial,
and the \emph{normalizing maps} $\Lambda$ give a natural transformation from the identity functor to the normalization functor.
\begin{rem}
The comments of \remref{choice} can be adapted in an obvious way to normalization,
and also to crossed products, etc.
There are numerous ``natural'' relationships among such functors; for example, maximalization is naturally isomorphic to the composition
\[
(A,\delta)\mapsto (A^n,\delta^n) \mapsto (A^{nm},\delta^{nm})
\]
of normalization followed by maximalization,
and
the dual coaction $\widehat\alpha^n$ on the reduced crossed product $A\rtimes_{\alpha,r} G$ of an action $(A,\alpha)$ is naturally isomorphic to the normalization
of the dual coaction $\widehat\alpha$ on the full crossed product $A\rtimes_\alpha G$
\cite[Proposition~A.61]{enchilada}.
\end{rem}
The normalization $\Lambda:(A,\delta)\to (A^n,\delta^n)$ of a maximal coaction is also a maximalization of the normal coaction $\delta^n$.
It follows that the normalization functor is faithful,
i.e., if $\phi,\psi:(A,\delta)\to (B,\epsilon)$ are distinct morphisms in $\mathbf{Coact}$,
then the normalizations $\phi^n,\psi^n:(A^n,\delta^n)\to (B^n,\epsilon^n)$ are also distinct.
It follows from this
and surjectivity of
the normalizing maps $\Lambda_A:(A,\delta)\to (A^n,\delta^n)$
that the normalizing maps
are monomorphisms in the category $\mathbf{Coact}$,
i.e., if $\phi,\psi:(A,\delta)\to (B,\epsilon)$ are distinct morphisms in $\mathbf{Coact}$,
then the compositions $\Lambda_B\circ \phi,\Lambda_B\circ \psi:(A,\delta)\to (B^n,\epsilon^n)$ are also distinct.\footnote{The analogous fact for the nondegenerate category of coactions is
\cite[Corollary~6.1.20]{bkq1}.}
\subsection*{Exact sequences}
It is crucial for us to note that
in each of the classical categories $\mathbf{C}^*$, $\mathbf{Coact}$, and $\mathbf{Act}$
there is an obvious concept of short exact sequence.
Nilsen \cite{nil:full} develops the basic theory of short exact sequences for coactions and crossed products.
We briefly outline the essential facts here.
\begin{defn}
Let $(A,\delta)$ be a coaction.
An ideal $I$ of $A$ is \emph{strongly $\delta$-invariant}
if
\[
\clspn\{\delta(I)(1\otimes C^*(G))\}=I\otimes C^*(G).
\]
We will normally just write \emph{invariant} to mean strongly invariant.
\end{defn}
Nilsen proves in \cite[Proposition~2.1, Proposition~2.2, Theorem~2.3]{nil:full}
(see also \cite[Proposition~4.8]{lprs})
that, with the conventions of \cite{nil:full}, if $I$ is strongly invariant then:
\begin{enumerate}
\item $\delta$ restricts to a coaction $\delta_I$ on $I$.
\item $I\rtimes_{\delta_I} G$ is (canonically isomorphic to) an ideal of $A\rtimes_\delta G$.
\item $I$ is \emph{weakly} $\delta$-invariant, i.e.,
$\delta$ descends to a coaction $\delta^I$ on $A/I$.
\item
$0\to I\rtimes_{\delta_I} G\to A\rtimes_\delta G\to (A/I)\rtimes_{\delta^I} G\to 0$
is a short exact sequence in the classical category $\mathbf{C}^*$.
\end{enumerate}
We
point out that Nilsen had to do a bit of work to map $I\rtimes_{\delta_I} G$ into $A\rtimes_\delta G$;
in our framework with the classical categories,
we just
note that the inclusion $\phi:I\hookrightarrow A$ is $\delta_I-\delta$ equivariant,
hence gives a morphism in $\mathbf{Coact}$,
so we can apply the functor $\CP$ to get a morphism
\[
\phi\rtimes G:I\rtimes_{\delta_I} G\to A\rtimes_\delta G\midtext{in}\mathbf{C}^*.
\]
\begin{defn}\label{def:exact}
A functor between any two of the categories $\mathbf{C}^*, \mathbf{Coact}, \mathbf{Act}$ is \emph{exact} if it preserves short exact sequences.
\end{defn}
\begin{ex}
The full crossed-product functor
\begin{align*}
(A,\alpha)&\mapsto (A\rtimes_\alpha G,\widehat\alpha)
\\\phi&\mapsto \phi\rtimes G
\end{align*}
from $\mathbf{Act}$ to $\mathbf{Coact}$ is exact \cite[Proposition~12]{gre:local}.
However, the reduced crossed-product functor is not exact, due to Gromov's examples of non-exact groups.
\end{ex}
\begin{ex}
The crossed-product functor
\begin{align*}
(A,\delta)&\mapsto (A\rtimes_\delta G,\widehat\delta)
\\\phi&\mapsto \phi\rtimes G
\end{align*}
from $\mathbf{Coact}$ to $\mathbf{Act}$ is exact \cite[Theorem~2.3]{nil:full}.
\end{ex}
\begin{ex}
The stabilization functor
\begin{align*}
A&\mapsto A\otimes\mathcal K
\\\phi&\mapsto \phi\otimes\text{\textup{id}}
\end{align*}
on $\mathbf{C}^*$ is exact.
\end{ex}
\section{Coaction functors}\label{sec:coaction functor}
\cite{bgwexact} defined a \emph{crossed-product} as a functor
$(B,\alpha)\mapsto B\rtimes_{\alpha,\tau} G$,
from the category of actions to the category of $C^*$-algebras,
equipped with natural transformations
\[
\xymatrix{
B\rtimes_\alpha G \ar[r] \ar[d]
&B\rtimes_{\alpha,\tau} G \ar[dl]
\\
B\rtimes_{\alpha,r} G,
}
\]
where the vertical arrow is the regular representation,
such that the horizontal arrow is surjective.
Our predilection is to decompose such a crossed-product functor as a composition
\[
(B,\alpha)\mapsto (B\rtimes_\alpha G,\widehat\alpha) \mapsto B\rtimes_{\alpha,\tau} G,
\]
where the first arrow is the full crossed product and the second arrow depends only upon the dual coaction $\widehat\alpha$.
Our approach will require the target $C^*$-algebra $B\rtimes_{\alpha,\tau} G$ to carry a quotient of the dual coaction.
Thus, it is certainly not obvious that our techniques can handle all crossed-product functors of \cite{bgwexact},
because
\cite{bgwexact} do not require their crossed products $B\rtimes_{\alpha,\tau} G$ to have coactions, and
even if they all do, there is no reason to believe that the crossed-product functor factors in this way.
Nevertheless, we think that it is useful to study crossed-product functors that do factor, and thus we can focus upon the second functor, where all the action stays within the realm of coactions.
The following definition is adapted more or less directly from \cite[Definition~2.1]{bgwexact}:
\begin{defn}\label{coaction functor}
A \emph{coaction functor} is a functor
$\tau:(A,\delta)\mapsto (A^\tau,\delta^\tau)$
on
the category of coactions,
together with
a natural transformation
$q^\tau$ from maximalization to $\tau$
such that for every coaction $(A,\delta)$,
\begin{enumerate}
\item
$q^\tau_A:A^m\to A^\tau$ is surjective, and
\item
$\ker q^\tau_A\subset \ker \Lambda_{A^m}$.
\end{enumerate}
\end{defn}
\begin{ex}
\begin{enumerate}
\item
Maximalization $(A,\delta)\mapsto (A^m,\delta^m)$ is a coaction functor, with natural surjections
given by the identity maps $\text{\textup{id}}_{A^m}$.
\item
Normalization $(A,\delta)\mapsto (A^n,\delta^n)$ is a coaction functor, with natural surjections $\Lambda_{A^m}:A^m\to A^n$.
\item
The identity functor is a coaction functor, with natural surjections
$q^m_A:A^m\to A$.
\end{enumerate}
\end{ex}
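To verify condition (2) of \defnref{coaction functor} for the identity functor, note that $\Lambda_A\circ q^m_A:A^m\to A^n$ is a surjective equivariant map onto a normal coaction, and $(\Lambda_A\circ q^m_A)\rtimes G=(\Lambda_A\rtimes G)\circ (q^m_A\rtimes G)$ is an isomorphism, so $\Lambda_A\circ q^m_A$ is a normalization of $\delta^m$; thus, up to the canonical identification $A^{mn}\cong A^n$,
\[
\ker q^m_A\subset \ker(\Lambda_A\circ q^m_A)=\ker \Lambda_{A^m}.
\]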
\begin{lem}\label{axiom}
If $\tau$ is a coaction functor, then for every coaction $(A,\delta)$
there is a unique
$\delta^\tau-\delta^n$ equivariant
surjection $\Lambda^\tau_A$ making the diagram
\begin{equation}\label{Lambda tau}
\xymatrix{
A^m \ar[r]^-{q^\tau_A} \ar[d]_{\Lambda_{A^m}}
&A^\tau \ar@{-->}[dl]^{\Lambda^\tau_A}
\\
A^n
}
\end{equation}
commute.
Moreover, $\Lambda^\tau$ is a natural transformation from $\tau$ to normalization.
\end{lem}
\begin{proof}
The first statement follows immediately from the definitions.
To verify that $\Lambda^\tau$ is a natural transformation,
we must show that
the homomorphisms $\Lambda^\tau$
\begin{enumerate}
\item
are morphisms of coactions, and
\item
are natural.
\end{enumerate}
(1)
In the commuting triangle \eqref{Lambda tau},
we must show that $\Lambda^\tau_A$ is a $B(G)$-module map,
but this follows since $\Lambda_{A^m}$ and $q^\tau_A$ are module maps
and $q^\tau_A$ is surjective.
(2)
For the naturality,
let $\phi:(A,\delta)\to (B,\epsilon)$ be a morphism in the category of coactions.
Consider the diagram
\[
\xymatrix{
A^m \ar[rr]^{\phi^m} \ar[dd]_{\Lambda_{A^m}} \ar[dr]^{q^\tau_A}
&&B^m \ar[dr]^{q^\tau_B} \ar'[d][dd]_(.4){\Lambda_{B^m}}
\\
&A^\tau \ar[rr]^(.3){\phi^\tau} \ar[dl]^(.4){\Lambda^\tau_A}
&&B^\tau \ar[dl]^{\Lambda^\tau_B}
\\
A^n \ar[rr]_-{\phi^n}
&&B^n.
}
\]
We need to know that the lower quadrilateral, with horizontal and southwest arrows, commutes,
and this follows from
surjectivity of $q^\tau_A$ and
commutativity of the other two quadrilaterals and the two triangles.
\end{proof}
\begin{cor}
If $\tau$ is a coaction functor, then in \eqref{Lambda tau} we have:
\begin{enumerate}
\item
$q^\tau:A^m\to A^\tau$ is a maximalization of $\delta^\tau$, and
\item
$\Lambda^\tau:A^\tau\to A^n$ is a normalization of $\delta^\tau$.
\end{enumerate}
\end{cor}
\begin{proof}
Taking crossed products in \eqref{Lambda tau},
we get a commutative diagram
\[
\xymatrix@C+30pt@R+20pt{
A^m\rtimes_{\delta^m} G \ar[r]^-{q^\tau\rtimes G}_-\simeq \ar[d]_{\Lambda\rtimes G}^\simeq
&A^\tau\rtimes_{\delta^\tau} G \ar[dl]^{\Lambda^\tau\rtimes G}_-\simeq
\\
A^n\rtimes_{\delta^n} G,
}
\]
where the horizontal arrow is surjective because $q^\tau$ is, and is injective because of the vertical isomorphism, and then the diagonal arrow is an isomorphism because the other two arrows are.
Thus $q^\tau$ and $\Lambda^\tau$ satisfy the defining properties of maximalization and normalization, respectively.
\end{proof}
\begin{rem}
Caution: it might seem that $\tau$ should factor through the maximalization functor, at least up to natural isomorphism.
This would entail, in particular, that
\[
(A^{m\tau},\delta^{m\tau})\cong (A^\tau,\delta^\tau)\righttext{for every coaction $(A,\delta)$}.
\]
But this is violated with $\tau=\text{\textup{id}}$.
\end{rem}
\begin{notn}
With the above notation, we define an ideal of $A^m$ by
\[
A^m_\tau:=\ker q^\tau_A.
\]
Note that for the maximalization functor $m$ we have $A^m_m=\{0\}$,
while for the normalization functor $n$ the associated ideal $A^m_n$ is the kernel of the normalization map $\Lambda_{A^m}:A^m\to A^{mn}\cong A^n$.
\end{notn}
\subsection*{Partial ordering of coaction functors}
In \cite[page~8]{bgwexact}, one crossed-product functor $\sigma$ is defined to be \emph{smaller} than another one $\tau$ if the natural surjection $A\rtimes_{\alpha,\tau} G\to A\rtimes_{\alpha,r} G$ factors through the $\sigma$-crossed product.
We adapt this definition of partial order to coaction functors,
but ``from the top rather than toward the bottom''.
\begin{defn}\label{smaller}
If $\sigma$ and $\tau$ are coaction functors, then
$\sigma$ is \emph{smaller} than $\tau$, written $\sigma\le \tau$, if
for every coaction $(A,\delta)$ we have
\[
A^m_\tau\subset A^m_\sigma.
\]
\end{defn}
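\begin{rem}
For orientation, we record a sketch (immediate from the definitions, and not taken from \cite{bgwexact}) showing that maximalization and normalization are the largest and smallest coaction functors, respectively: we have $A^m_m=\{0\}$, and every coaction functor $\tau$ has small kernels, so
\[
\{0\}=A^m_m\subset A^m_\tau\subset \ker\Lambda_{A^m}=A^m_n
\midtext{for every coaction $(A,\delta)$,}
\]
and hence $n\le \tau\le m$ by \defnref{smaller}.
\end{rem}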
\begin{lem}\label{order}
For coaction functors $\sigma,\tau$, the following are equivalent:
\begin{enumerate}
\item $\sigma\le \tau$.
\item For every coaction $(A,\delta)$ there is a homomorphism $\Gamma^{\tau,\sigma}$ making the diagram
\[
\xymatrix{
A^m \ar[r]^-{q^\tau} \ar[dr]_{q^\sigma}
&A^\tau \ar@{-->}[d]^{\Gamma^{\tau,\sigma}}
\\
&A^\sigma
}
\]
commute.
\item For every coaction $(A,\delta)$ there is a homomorphism $\Gamma^{\tau,\sigma}$ making the diagram
\[
\xymatrix{
&A^\tau \ar[dl]_{\Lambda^\tau} \ar@{-->}[d]^{\Gamma^{\tau,\sigma}}
\\
A^n
&A^\sigma \ar[l]^{\Lambda^\sigma}
}
\]
commute.
\end{enumerate}
Moreover, if the above equivalent conditions hold then $\Gamma^{\tau,\sigma}$ is
unique, is
surjective, and is a natural transformation from $\tau$ to $\sigma$.
\end{lem}
\begin{proof}
(1) is equivalent to (2) since $A^m_\tau=\ker q^\tau$ and $A^m_\sigma=\ker q^\sigma$.
Moreover, (2) implies that $\Gamma^{\tau,\sigma}$ is unique, since $q^\tau$ is surjective,
and that $\Gamma^{\tau,\sigma}$ is surjective, since $q^\sigma$ is.
Assume (3).
Consider the combined diagram
\begin{equation}
\begin{split}\label{combined}
\xymatrix@+30pt{
A^m \ar[r]^-{q^\tau} \ar[dr] ^(.3){q^\sigma} |!{[r];[d]}\hole \ar[d]_{\Lambda_{A^m}}
&A^\tau \ar[dl]_(.3){\Lambda^\tau} \ar[d]^{\Gamma^{\tau,\sigma}}
\\
A^n
&A^\sigma. \ar[l]^{\Lambda^\sigma}
}
\end{split}
\end{equation}
The upper left and lower left triangles commute by definition of coaction functor,
and the lower right triangle commutes by assumption.
Thus the upper right triangle commutes after post-composing with $\Lambda^\sigma$.
Since the latter map is a normalizer,
by \cite[Corollary~6.1.20]{bkq1}
it is a monomorphism in the category of coactions.
Thus the upper right triangle commutes.
Similarly (but more easily), assuming (2),
the lower right triangle in the diagram \eqref{combined} commutes
because it commutes
after pre-composing with the surjection $q^\tau$.
Naturality of $\Gamma^{\tau,\sigma}$ can be proved by essentially the same argument as in \lemref{axiom}.
\end{proof}
The following is a coaction-functor analogue of \cite[Lemma~3.7]{bgwexact}, and we adapt their argument:
\begin{thm}\label{glb}
Every nonempty collection $\mathcal T$ of coaction functors has a greatest lower bound $\sigma$ with respect to the above partial ordering,
characterized by
\[
A^m_\sigma=\clspn_{\tau\in\mathcal T}A^m_\tau
\]
for every coaction $(A,\delta)$.
\end{thm}
\begin{proof}
Let $(A,\delta)$ be a coaction.
Then the ideal
\[
A^m_\sigma:=\clspn_{\tau\in\mathcal T}A^m_\tau
\]
of $A^m$ is contained in the kernel of the normalization map $\Lambda_{A^m}$.
Put
\[
A^\sigma=A^m/A^m_\sigma,
\]
and let
\[
q^\sigma_A:A^m\to A^\sigma
\]
be the quotient map.
For all $\tau\in \mathcal T$,
$A^m_\tau$ is a weakly $\delta^m$-invariant ideal of $A^m$,
so for all $f\in B(G)$ we have
\[
f\cdot A^m_\tau\subset A^m_\tau\subset A^m_\sigma,
\]
and
it follows that $f\cdot A^m_\sigma\subset A^m_\sigma$,
i.e.,
$A^m_\sigma$ is a weakly $\delta^m$-invariant ideal.
Thus there is a unique coaction $\delta^\sigma$ on $A^\sigma$ for which
$q^\sigma$ is $\delta^m-\delta^\sigma$ equivariant.
We now have assignments
\[
(A,\delta)\mapsto (A^\sigma,\delta^\sigma)
\]
on objects,
and we need to handle morphisms.
Thus, let $\phi:(A,\delta)\to (B,\epsilon)$ be a morphism of coactions,
i.e., $\phi:A\to B$ is a $\delta-\epsilon$ equivariant homomorphism.
Since
\[
A^m_\tau\subset (\phi^m)^{-1}(B^m_\tau)\subset (\phi^m)^{-1}(B^m_\sigma)
\midtext{for all $\tau\in\mathcal T$,}
\]
we have
\[
\ker q^\sigma_A=
A^m_\sigma=\clspn_{\tau\in\mathcal T}A^m_\tau\subset (\phi^m)^{-1}(B^m_\sigma)
=\ker q^\sigma_B\circ \phi^m.
\]
Thus there is a unique homomorphism $\phi^\sigma$
making the diagram
\begin{equation}\label{phi tau}
\xymatrix{
A^m \ar[r]^-{\phi^m} \ar[d]_{q^\sigma_A}
&B^m \ar[d]^{q^\sigma_B}
\\
A^\sigma \ar@{-->}[r]_-{\phi^\sigma}
&B^\sigma
}
\end{equation}
commute.
Moreover,
$\phi^\sigma$ is $\delta^\sigma-\epsilon^\sigma$ equivariant
because the other three maps are and $q^\sigma_A$ is surjective.
We need to verify that the assignments $\phi\mapsto \phi^\sigma$ of morphisms are functorial.
Obviously identity morphisms are preserved.
For compositions,
let
\[
\xymatrix{
(A,\delta) \ar[r]^-\phi \ar[dr]_\nu
&(B,\epsilon) \ar[d]^\rho
\\
&(C,\gamma)
}
\]
be a commuting diagram of coactions.
Consider the diagram
\[
\xymatrix{
A^m \ar[rr]^{\phi^m} \ar[dd]_{q^\sigma_A} \ar[dr]_{\nu^m}
&&B^m \ar[dd]^{q^\sigma_B} \ar[dl]^{\rho^m}
\\
&C^m \ar[dd]^(.3){q^\sigma_C}
\\
A^\sigma \ar[rr]|!{[ur];[dr]}\hole^(.3){\phi^\sigma} \ar[dr]_{\nu^\sigma}
&&B^\sigma \ar[dl]^{\rho^\sigma}
\\
&C^\sigma.
}
\]
The three
vertical
quadrilaterals and the
top
triangle commute,
and $q^\sigma_A$ is surjective.
It follows that the
bottom
triangle commutes,
and we have shown that composition is preserved.
Thus we have a functor $\sigma$ on the category of coactions.
Moreover,
$\sigma$
is a coaction functor,
since
the surjections $q^\sigma$
have small kernels and
the commuting diagram \eqref{phi tau} shows that
$q^\sigma$ gives
a natural transformation from maximalization to $\sigma$.
By construction, $\sigma$ is a greatest lower bound for $\mathcal T$.
\end{proof}
\subsection*{Exact coaction functors}
As a special case of our general \defnref{def:exact}, we explicitly record:
\begin{defn}\label{exact functor}
A coaction functor $\tau$ is \emph{exact} if for every short exact sequence
\[
\xymatrix{
0 \ar[r]
&(I,\gamma) \ar[r]^-\phi
&(A,\delta) \ar[r]^-\psi
&(B,\epsilon) \ar[r] &0
}
\]
of coactions the associated sequence
\[
\xymatrix{
0 \ar[r]
&(I^\tau,\gamma^\tau) \ar[r]^-{\phi^\tau}
&(A^\tau,\delta^\tau) \ar[r]^-{\psi^\tau}
&(B^\tau,\epsilon^\tau) \ar[r] &0
}
\]
is exact.
\end{defn}
\begin{thm}\label{max exact}
The maximalization functor is exact.
\end{thm}
\begin{proof}
Let
\[
\xymatrix{
0 \ar[r]
&(I,\gamma) \ar[r]^-\phi
&(A,\delta) \ar[r]^-\psi
&(B,\epsilon) \ar[r] &0
}
\]
be an exact sequence of coactions.
Taking crossed products twice, we get an exact sequence
\[
\xymatrix@C+20pt{
0 \ar[r]
&I\rtimes_\gamma G\rtimes_{\widehat\gamma} G \ar[r]^{\phi\rtimes G\rtimes G}
&A\rtimes_\delta G\rtimes_{\widehat\delta} G
\\&{\hphantom{I\rtimes_\gamma G\rtimes_{\widehat\gamma} G}}
\ar[r]^{\psi\rtimes G\rtimes G}
&B\rtimes_\epsilon G\rtimes_{\widehat\epsilon} G
\ar[r]&0.
}
\]
Since the identity functor on coactions is a coaction functor, we get an isomorphic sequence
\[
\xymatrix@C+20pt{
0 \ar[r]
&I^m\rtimes_{\gamma^m} G\rtimes_{\widehat{\gamma^m}} G \ar[r]^{\phi^m\rtimes G\rtimes G}
&A^m\rtimes_{\delta^m} G\rtimes_{\widehat{\delta^m}} G
\\&{\hphantom{I\rtimes_{\gamma^m} G\rtimes_{\widehat\gamma^m} G}}
\ar[r]^{\psi^m\rtimes G\rtimes G}
&B^m\rtimes_{\epsilon^m} G\rtimes_{\widehat{\epsilon^m}} G
\ar[r]&0,
}
\]
which is therefore also exact.
Since the canonical surjection $\Phi$ is a natural transformation from the double crossed-product functor to the stabilization functor, and since the coactions are now maximal, we get an isomorphic sequence
\[
\xymatrix@C+10pt{
0\ar[r]
&I^m\otimes\mathcal K \ar[r]^{\phi^m\otimes\text{\textup{id}}}
&A^m\otimes\mathcal K \ar[r]^{\psi^m\otimes\text{\textup{id}}}
&B^m\otimes\mathcal K
\ar[r]&0,
}
\]
which is therefore also exact.
Since $\mathcal K$ is an exact $C^*$-algebra,
\[
(\ker \phi^m)\otimes\mathcal K=\ker (\phi^m\otimes\text{\textup{id}})=\{0\},
\]
so $\ker \phi^m=\{0\}$,
and similarly
\[
(\ker \psi^m)\otimes\mathcal K=\ker (\psi^m\otimes\text{\textup{id}})=(\phi^m\otimes\text{\textup{id}})(I^m\otimes\mathcal K)
=\phi^m(I^m)\otimes\mathcal K,
\]
so, because $\phi^m(I^m)\subset \ker \psi^m$ by functoriality,
we must have $\phi^m(I^m)=\ker \psi^m$.
Therefore
the sequence
\[
\xymatrix{
0\ar[r]
&I^m \ar[r]^{\phi^m}
&A^m \ar[r]^{\psi^m}
&B^m
\ar[r]&0
}
\]
is exact.
\end{proof}
\begin{thm}\label{functor exact}
A coaction functor $\tau$ is exact if and only if
for
any short exact sequence
\[
\xymatrix{
0\ar[r]&(I,\delta_I)\ar[r]^\phi&(A,\delta)\ar[r]^\psi&(B,\delta^I)\ar[r]&0
}
\]
of coactions, both
\[
\phi^m(I^m_\tau)=\phi^m(I^m)\cap A^m_\tau
\]
and
\[
\phi^m(I^m)+A^m_\tau=(\psi^m)^{-1}(B^m_\tau)
\]
hold.
\end{thm}
\begin{proof}
We have a commutative diagram
\begin{equation}\label{exact diagram}
\xymatrix{
&0\ar[d]&0\ar[d]&0\ar[d]
\\
0\ar[r]&I^m_\tau\ar[d]_{\iota_I}\ar[r]^{\phi^m|}&A^m_\tau\ar[d]_{\iota_A}\ar[r]^{\psi^m|}&B^m_\tau\ar[d]_{\iota_{B}}\ar[r]&0
\\
0\ar[r]&I^m\ar[d]_{q_I}\ar[r]^{\phi^m}&A^m\ar[d]_{q_A}\ar[r]^{\psi^m}&B^m\ar[d]_{q_{B}}\ar[r]&0
\\
0\ar[r]&I^\tau\ar[r]^{\phi^\tau}\ar[d]&A^\tau\ar[r]^{\psi^\tau}\ar[d]&B^\tau\ar[r]\ar[d]&0
\\
&0&0&0,
}
\end{equation}
where the columns are exact by definition,
and the middle row is exact by \thmref{max exact}.
Thus the result follows immediately from \lemref{nine}.
\end{proof}
\subsection*{Morita compatible coaction functors}
If we have coactions $(A,\delta)$ and $(B,\epsilon)$,
and a $\delta-\epsilon$ compatible coaction $\zeta$ on an $A-B$ imprimitivity bimodule $X$,
we'll say that $(X,\zeta)$ is an \emph{$(A,\delta)-(B,\epsilon)$ imprimitivity bimodule}.
\begin{ex}\label{double dual}
The double dual bimodule coaction
\[
(Y,\eta):=\bigl(X\rtimes_\zeta G\rtimes_{\widehat\zeta} G,\widehat{\widehat\zeta}\,\bigr)
\]
is an
\[
\bigl(A\rtimes_\delta G\rtimes_{\widehat\delta} G,\widehat{\widehat\delta}\,\bigr)-
\bigl(B\rtimes_\epsilon G\rtimes_{\widehat\epsilon} G,\widehat{\widehat\epsilon}\,\bigr)
\]
imprimitivity bimodule.
Since the identity functor on coactions is a coaction functor,
$(Y,\eta)$ becomes an
\[
\bigl(A^m\rtimes_{\delta^m} G\rtimes_{\widehat{\delta^m}} G,\widehat{\widehat{\delta^m}}\,\bigr)-
\bigl(B^m\rtimes_{\epsilon^m} G\rtimes_{\widehat{\epsilon^m}} G,\widehat{\widehat{\epsilon^m}}\,\bigr)
\]
imprimitivity bimodule.
Since maximalizations satisfy full-crossed-product duality,
$(Y,\eta)$ becomes, after replacing the double dual coactions by exterior equivalent coactions,
an
\[
(A^m\otimes\mathcal K,\delta^m\otimes_* \text{\textup{id}})-(B^m\otimes\mathcal K,\epsilon^m\otimes_* \text{\textup{id}})
\]
imprimitivity bimodule
(see \cite[Lemma~3.6]{maximal}).
\end{ex}
We need the following basic lemma, which is probably folklore,
although we could not find it in the literature.
Our formulation is partially inspired by Fischer's treatment of relative commutants of $\mathcal K$
\cite[Section~3]{fischer}.
\begin{lem}\label{tensor}
Let $A$ and $B$ be $C^*$-algebras, and let $Y$ be an $(A\otimes\mathcal K)-(B\otimes\mathcal K)$ imprimitivity bimodule. Define
\[
X=\{m\in M(Y):(1_A\otimes k)\cdot m=m\cdot (1_B\otimes k)\in Y\text{ for all }k\in\mathcal K\}.
\]
Then:
\begin{enumerate}
\item
$X$ is an $(A\otimes 1_\mathcal K)-(B\otimes 1_\mathcal K)$ submodule of $M(Y)$;
\item
$\clspn\<X,X\>_{M(B\otimes\mathcal K)}=B\otimes 1_\mathcal K$;
\item
$\clspn{}_{M(A\otimes\mathcal K)}\<X,X\>=A\otimes 1_\mathcal K$.
\end{enumerate}
Thus $X$ becomes an $A-B$ imprimitivity bimodule in an obvious way,
and moreover there is a unique
$(A\otimes\mathcal K)-(B\otimes\mathcal K)$ imprimitivity bimodule isomorphism
\[
\theta:X\otimes\mathcal K\overset{\cong}{\longrightarrow} Y
\]
such that
\[
\theta(m\otimes k)=m\cdot (1_B\otimes k)\righttext{for}m\in X,k\in\mathcal K.
\]
\end{lem}
\begin{lem}\label{Xm}
Given coactions $(A,\delta)$ and $(B,\epsilon)$,
and a $\delta-\epsilon$ compatible coaction $\zeta$ on an $A-B$ imprimitivity bimodule $X$,
let $(Y,\eta)$ be the
\[
(A^m\otimes\mathcal K,\delta^m\otimes_* \text{\textup{id}})-(B^m\otimes\mathcal K,\epsilon^m\otimes_* \text{\textup{id}})
\]
imprimitivity bimodule from \exref{double dual},
and let $X^m$ denote the associated $A^m-B^m$ imprimitivity bimodule as in \lemref{tensor},
with an $(A^m\otimes\mathcal K)-(B^m\otimes\mathcal K)$ imprimitivity bimodule isomorphism
$\theta:X^m\otimes\mathcal K\to Y$.
Then there is a unique $\delta^m-\epsilon^m$ compatible coaction $\zeta^m$ on $X^m$ such that
$\theta$ transports $\zeta^m\otimes_*\text{\textup{id}}$ to $\eta$.
\end{lem}
\begin{proof}
The diagram
\[
\xymatrix@C+30pt{
X^m\otimes\mathcal K
\ar@{-->}[r]^-\kappa
\ar[d]_\theta^\simeq
&M(X^m\otimes\mathcal K\otimes C^*(G)) \ar[d]^{\theta\otimes\text{\textup{id}}}_\simeq
\\
Y \ar[r]_-\eta
&M(Y\otimes C^*(G))
}
\]
certainly has a unique commuting completion, and $\kappa$ is a $(\delta^m\otimes_*\text{\textup{id}})-(\epsilon^m\otimes_*\text{\textup{id}})$ compatible coaction on $X^m\otimes\mathcal K$.
In order to recognize that $\kappa$ is of the form $\zeta^m\otimes_*\text{\textup{id}}$,
we need to know that,
letting $\Sigma:\mathcal K\otimes C^*(G)\to C^*(G)\otimes\mathcal K$
be the flip isomorphism,
for every $\xi\in X^m$,
the element
\[
m:=(\text{\textup{id}}_{X^m}\otimes\Sigma)\circ(\theta\otimes\text{\textup{id}})^{-1}\circ\eta\circ\theta(\xi\otimes 1_\mathcal K)
\]
of the multiplier bimodule
$M(X^m\otimes C^*(G)\otimes \mathcal K)$
is contained in the subset
$M(X^m\otimes C^*(G))\otimes 1_\mathcal K$,
and for this we need only check that for all $k\in\mathcal K$ we have
\[
(1_{A^m\otimes C^*(G)}\otimes k)\cdot m=m\cdot (1_{B^m\otimes C^*(G)}\otimes k)
\in X^m\otimes C^*(G)\otimes \mathcal K,
\]
which follows from the properties of the maps involved.
Then it is routine to check that the resulting map $\zeta^m$ is a $\delta^m-\epsilon^m$ compatible coaction on $X^m$.
\end{proof}
\begin{defn}\label{morita functor}
A coaction functor $\tau$ is \emph{Morita compatible} if
whenever $(X,\zeta)$ is an
$(A,\delta)-(B,\epsilon)$ imprimitivity bimodule,
with associated $A^m-B^m$ imprimitivity bimodule $X^m$ as above,
the Rieffel correspondence of ideals satisfies
\begin{equation}\label{induce}
X^m\dashind B^m_\tau=A^m_\tau.
\end{equation}
\end{defn}
We will use without comment the simple observation that
if $(A,\delta)$ (and hence also $(B,\epsilon)$) is maximal,
then we can replace $X^m$ by $X$
and regard the natural surjection $q^\tau_A$ as going from $A$ to $A^\tau$
(and similarly for $B$),
since the maximalizing maps $q^m_A$ and $q^m_B$ can be combined to
give an isomorphism of the $A^m-B^m$ imprimitivity bimodule $X^m$ onto $X$.
\begin{rem}
Caution: \defnref{morita functor} is not a direct analogue of the definition of Morita compatibility in
\cite[Definition~3.2]{bgwexact}, but it suits our purposes in working with coaction functors, as we will see in \propref{compose}.
\end{rem}
\begin{rem}
\lemref{Xm} says in particular
that maximalization preserves Morita equivalence of coactions.
This
is almost new: it also follows from
first applying the crossed-product functor,
noting that the dual actions are ``weakly proper $G\rtimes G$-algebras'' in the sense of \cite{BusEch},
then applying \cite[Corollary~4.6]{BusEch2} with the universal crossed-product norm (denoted by $u$ in \cite{BusEch}).
\end{rem}
\begin{lem}\label{X tau}
A coaction functor $\tau$ is Morita compatible if and only if
whenever $(X,\zeta)$ is an $(A,\delta)-(B,\epsilon)$ imprimitivity bimodule,
there are an $A^\tau-B^\tau$ imprimitivity bimodule $X^\tau$
and a $q^\tau_A-q^\tau_B$ compatible imprimitivity-bimodule homomorphism $q^\tau_X:X^m\to X^\tau$.
\end{lem}
\begin{proof}
Given $X^\tau$ and $q^\tau_X$ with the indicated properties,
by \cite[Lemma~1.20]{enchilada} we have
\[
X^m\dashind B_\tau^m
=X^m\dashind \ker q^\tau_B
=\ker q^\tau_A
=A^m_\tau.
\]
It follows that $\tau$ is Morita compatible.
Conversely, suppose $\tau$ is Morita compatible,
and let $(X^m,\zeta^m)$ be as above.
Then by the Rieffel correspondence,
$X^\tau:=X^m/X^m\cdot B_\tau^m$
is an $A^m/A^m_\tau-B^m/B^m_\tau$ imprimitivity bimodule,
and the quotient map
$q^\tau_X:X^m\to X^\tau$
is compatible with the quotient maps $A^m\to A^m/A^m_\tau$ and $B^m\to B^m/B^m_\tau$.
Via the unique isomorphisms making the diagrams
\[
\begin{aligned}
\xymatrix{
A^m \ar[d]_{\txt{quotient\\map}} \ar[dr]^{q^\tau_A}
\\
A^m/A^m_\tau \ar@{-->}[r]_-\simeq
&A^\tau
}
&
\xymatrix{
B^m \ar[d]_{\txt{quotient\\map}} \ar[dr]^{q^\tau_B}
\\
B^m/B^m_\tau \ar@{-->}[r]_-\simeq
&B^\tau
}
\end{aligned}
\]
commute, $q^\tau_X$ becomes $q^\tau_A-q^\tau_B$ compatible.
\end{proof}
\begin{ex}
It follows trivially that the maximalization functor is Morita compatible.
\end{ex}
\begin{lem}\label{id}
The identity functor on coactions is Morita compatible.
\end{lem}
\begin{proof}
Let $(X,\zeta)$ be an $(A,\delta)-(B,\epsilon)$ imprimitivity bimodule,
and let $(X^m,\zeta^m)$ be the associated $(A^m,\delta^m)-(B^m,\epsilon^m)$ imprimitivity bimodule from \lemref{Xm}.
By
\lemref{X tau}
it suffices to find a
$q^m_A-q^m_B$ compatible
imprimitivity-bimodule homomorphism
$q^m_X:X^m\to X$.
Now, $X^m$ is the upper right corner of the $2\times 2$ matrix representation of the linking algebra $L^m$,
and the maximalization map $q^m_L$ of the linking algebra $L$ of $X$ preserves the upper right corners. Thus $q^m_L$ takes $X^m$ onto $X$, and simple algebraic manipulations show that it has the right properties.
\end{proof}
\begin{thm}\label{glb exact}
The greatest lower bound of the collection of all exact and Morita compatible coaction functors is itself exact and Morita compatible.
\end{thm}
\begin{proof}
Let $\mathcal T$ be the collection of all exact and Morita compatible coaction functors,
and let $\tau$ be the greatest lower bound of $\mathcal T$.
As in the proof of \thmref{glb},
for every coaction $(A,\delta)$ we have
\[
A^m_\tau=\clspn_{\sigma\in\mathcal T}A^m_\sigma.
\]
For exactness, we apply \thmref{functor exact}.
Let
\[
\xymatrix{
0\ar[r]
&(I,\gamma) \ar[r]^-\phi
&(A,\delta) \ar[r]^-\psi
&(B,\epsilon)
\ar[r]&0
}
\]
be a short exact sequence of coactions.
Then
\begin{align*}
\phi^m(I^m_\tau)
&=\phi^m\left(\clspn_{\sigma\in\mathcal T}I^m_\sigma\right)
\\&=\clspn_{\sigma\in\mathcal T}\phi^m(I^m_\sigma)
\\&=\clspn_{\sigma\in\mathcal T}\bigl(\phi^m(I^m)\cap A^m_\sigma\bigr)
\righttext{(since $\sigma$ is exact)}
\\&=\phi^m(I^m)\cap \clspn_{\sigma\in\mathcal T}A^m_\sigma
\\&\hspace{.5in}\text{(since all spaces involved are ideals in $C^*$-algebras)}
\\&=\phi^m(I^m)\cap A^m_\tau,
\end{align*}
and
\begin{align*}
\phi^m(I^m)+A^m_\tau
&=\phi^m(I^m)+\clspn_{\sigma\in\mathcal T}A^m_\sigma
\\&=\clspn_{\sigma\in\mathcal T}\bigl(\phi^m(I^m)+A^m_\sigma\bigr)
\\&=\clspn_{\sigma\in\mathcal T}(\psi^m)^{-1}(B^m_\sigma)
\righttext{(since $\sigma$ is exact)}
\\&=(\psi^m)^{-1}\bigl(\clspn_{\sigma\in\mathcal T}B^m_\sigma\bigr)
\\&=(\psi^m)^{-1}(B^m_\tau),
\end{align*}
so $\tau$ is exact.
For Morita compatibility,
let $(X,\zeta)$ be an $(A,\delta)-(B,\epsilon)$ imprimitivity bimodule,
with associated $A^m-B^m$ imprimitivity bimodule $X^m$.
Then
\begin{align*}
X^m\dashind B^m_\tau
&=X^m\dashind \clspn_{\sigma\in\mathcal T}B^m_\sigma
\\&=\clspn_{\sigma\in\mathcal T} X^m\dashind B^m_\sigma
\\&\hspace{.5in}\righttext{(by continuity of Rieffel induction)}
\\&=\clspn_{\sigma\in\mathcal T} A^m_\sigma\righttext{(since $\sigma$ is Morita compatible)}
\\&=A^m_\tau,
\end{align*}
so $\tau$ is Morita compatible.
\end{proof}
\begin{defn}
We call the above greatest lower bound of the collection of all exact and Morita compatible coaction functors
the \emph{minimal exact and Morita compatible coaction functor}.
\end{defn}
\subsection*{Comparison with \cite{bgwexact}}
As we mentioned previously, in \cite[page~8]{bgwexact} one crossed-product functor $\sigma_1$ is defined to be \emph{smaller} than another one $\sigma_2$, written $\sigma_1\le \sigma_2$, if the natural surjection $A\rtimes_{\alpha,\sigma_2} G\to A\rtimes_{\alpha,r} G$ factors through the $\sigma_1$-crossed product.
Let $\tau$ be a coaction functor, and let $\sigma=\tau\circ\CP$ be the associated crossed-product functor, i.e.,
\[
(A,\alpha)^\sigma=A\rtimes_{\alpha,\sigma} G:=(A\rtimes_\alpha G)^\tau.
\]
For a morphism $\phi:(A,\alpha)\to (B,\beta)$ of actions,
we write
\[
\phi\rtimes_\sigma G=(\phi\rtimes G)^\tau:A\rtimes_{\alpha,\sigma} G\to B\rtimes_{\beta,\sigma} G
\]
for the associated morphism of $\sigma$-crossed products.
\begin{prop}\label{compose}
With the above notation, if the coaction functor $\tau$ is exact or Morita compatible, then the associated crossed-product functor $\sigma$ has the same property.
Moreover, if $\tau_1\le \tau_2$ then $\sigma_1\le \sigma_2$.
\end{prop}
\begin{proof}
The last statement follows immediately from the definitions.
For the other statement, first assume that $\tau$ is exact,
and let
\[
\xymatrix{
0\ar[r]&(I,\gamma)\ar[r]^\phi &(A,\alpha)\ar[r]^\psi &(B,\beta)\ar[r]&0
}
\]
be a short exact sequence of actions.
Then the sequence
\[
\xymatrix{
0\ar[r] &(I\rtimes_\gamma G,\widehat\gamma) \ar[r]^{\phi\rtimes G} &(A\rtimes_\alpha G,\widehat\alpha) \ar[r]^{\psi\rtimes G} &(B\rtimes_\beta G,\widehat\beta)\ar[r]&0
}
\]
of coactions is exact,
since the full-crossed-product functor is exact.
Then by exactness of $\tau$ we see that the sequence
\[
\xymatrix@C+10pt{
0\ar[r] &I\rtimes_{\gamma,\sigma} G \ar[r]^{\phi\rtimes_\sigma G} & A\rtimes_{\alpha,\sigma} G \ar[r]^{\psi\rtimes_\sigma G} & B\rtimes_{\beta,\sigma} G \ar[r] & 0
}
\]
is also exact.
On the other hand, assume that the coaction functor $\tau$ is Morita compatible.
As in \cite[Section~3]{bgwexact}, the \emph{unwinding isomorphism} $\Phi$,
which is the integrated form of the covariant pair
\begin{align*}
\pi(a\otimes k)&=i_A(a)\otimes k\\
u_s&=i_G(s)\otimes \lambda_s,
\end{align*}
fits into a diagram
\begin{equation}\label{unwind}
\xymatrix@C+20pt{
(A\otimes\mathcal K)\rtimes_{\alpha\otimes\ad\lambda} G \ar[r]^-\Phi_-\simeq
\ar[d]_{q^\tau_{(A\otimes\mathcal K)\rtimes_{\alpha\otimes\ad\lambda} G}}
&(A\rtimes_\alpha G)\otimes\mathcal K \ar[d]^{q^\tau_{A\rtimes_\alpha G}\otimes\text{\textup{id}}}
\\
(A\otimes\mathcal K)\rtimes_{\alpha\otimes\ad\lambda,\sigma} G
\ar@{-->}[r]^-\simeq_-\Upsilon
&(A\rtimes_{\alpha,\sigma} G)\otimes\mathcal K,
}
\end{equation}
i.e.,
\[
\ker q^\tau_{(A\otimes\mathcal K)\rtimes_{\alpha\otimes\ad\lambda} G}
=\ker (q^\tau_{A\rtimes_\alpha G}\otimes\text{\textup{id}})\circ \Phi.
\]
The diagram \eqref{unwind} fits into a more elaborate diagram
\[
\xymatrix@C+5pt{
(A\otimes\mathcal K)\rtimes_{\alpha\otimes\ad\lambda} G \ar[r]^-\Phi_-\simeq
\ar[d]_{q^\tau_{(A\otimes\mathcal K)\rtimes_{\alpha\otimes\ad\lambda} G}}
&(A\rtimes_\alpha G)\otimes\mathcal K
\ar[d]|{q^\tau_{(A\rtimes_\alpha G)\otimes\mathcal K}}
\ar@/^1pc/[ddr]^{q^\tau_{A\rtimes_\alpha G}\otimes\text{\textup{id}}}
\\
(A\otimes\mathcal K)\rtimes_{\alpha\otimes\ad\lambda,\sigma} G
\ar@{-->}[r]^-\simeq_-{\Phi^\tau}
\ar@{-->}@/_1pc/[drr]_\Upsilon^\simeq
&((A\rtimes_\alpha G)\otimes\mathcal K)^\tau \ar@{-->}[dr]^\theta_\simeq
\\
&&(A\rtimes_{\alpha,\sigma} G)\otimes\mathcal K,
}
\]
which we proceed to analyze.
There is a unique
\[
\bigl(\widehat{\alpha\otimes\ad\lambda}\bigr)^\tau-(\widehat\alpha\otimes_*\text{\textup{id}})^\tau
\]
equivariant
homomorphism $\Phi^\tau$
making the upper-left rectangle commute,
since $\tau$ is functorial.
Moreover, $\Phi^\tau$ is an isomorphism since $\Phi$ is, again by functoriality.
Applying Morita compatibility of $\tau$ to the equivariant
$((A\rtimes_\alpha G)\otimes\mathcal K)-(A\rtimes_\alpha G)$
imprimitivity bimodule
$(A\rtimes_\alpha G)\otimes L^2(G)$
shows that there is a unique
\[
(\widehat\alpha\otimes_*\text{\textup{id}})^\tau-(\widehat\alpha^\tau\otimes_*\text{\textup{id}})
\]
equivariant
isomorphism $\theta$
that makes the upper right triangle commute.
Thus there is a unique isomorphism $\Upsilon$ making the lower left triangle commute,
and then the outer quadrilateral commutes, as desired.
\end{proof}
\begin{q}\
\begin{enumerate}
\item
Is the minimal exact and Morita compatible crossed product of \cite[Section~4]{bgwexact} naturally isomorphic to the composition of the minimal exact and Morita compatible coaction functor and the full crossed product?
\item
More generally, given a crossed-product functor on actions, when does it decompose as a full crossed product followed by a coaction functor?
Does it make any difference if the crossed-product functor is exact or Morita compatible?
\end{enumerate}
\end{q}
\section{Decreasing coaction functors}\label{decreasing}
In this section we introduce a particular type of coaction functor with the convenient property that we do not need to check things by going through the maximalization functor, as we'll see in Propositions~\ref{decreasing exact} and \ref{decreasing morita}.
Suppose that for each coaction $(A,\delta)$ we have a coaction $(A^\tau,\delta^\tau)$ and a $\delta-\delta^\tau$ equivariant surjection $Q^\tau:A\to A^\tau$,
and further suppose that for each morphism $\phi:(A,\delta)\to (B,\epsilon)$ we have
\[
\ker Q^\tau_A\subset \ker Q^\tau_B \circ \phi,
\]
so that there is a unique morphism $\phi^\tau$ making the diagram
\[
\xymatrix{
(A,\delta) \ar[r]^-\phi \ar[d]_{Q^\tau_A}
&(B,\epsilon) \ar[d]^{Q^\tau_B}
\\
(A^\tau,\delta^\tau) \ar@{-->}[r]_-{\phi^\tau}^-{!}
&(B^\tau,\epsilon^\tau)
}
\]
commute.
The uniqueness and surjectivity assumptions imply that $\tau$ constitutes a functor on the category of coactions, and moreover $Q^\tau:\text{\textup{id}}\to \tau$ is a natural transformation.
\begin{defn}\label{decreasing defn}
We call a functor $\tau$ as above \emph{decreasing} if for each coaction $(A,\delta)$ we have
\[
\ker Q^\tau_A\subset \ker \Lambda_A.
\]
\end{defn}
\begin{lem}\label{dec coact}
Every decreasing functor $\tau$ on coactions is a coaction functor, and moreover $\tau\le \text{\textup{id}}$.
\end{lem}
\begin{proof}
For each coaction $(A,\delta)$,
define a homomorphism $q^\tau_A$ by the commutative diagram
\[
\xymatrix{
A^m \ar[d]_{q^m_A} \ar[dr]^{q^\tau_A}
\\
A \ar[r]_-{Q^\tau_A}
&A^\tau,
}
\]
where $q^m_A$ is the maximalization map.
The map $q^\tau$ is natural and surjective since both $q^m$ and $Q^\tau$ are.
We have
\begin{align*}
\ker q^\tau_A
&=\{a\in A^m:q^m_A(a)\in \ker Q^\tau_A\}
\\&\subset \{a\in A^m:q^m_A(a)\in \ker \Lambda_A\}
\\&=\ker \Lambda_A\circ q^m_A
\\&=\ker \Lambda_{A^m}.
\end{align*}
Thus $\tau$ is a coaction functor, and then $\tau\le \text{\textup{id}}$ by \lemref{order}.
\end{proof}
\begin{notn}
For a decreasing coaction functor $\tau$ and any coaction $(A,\delta)$ put
\[
A_\tau=\ker Q^\tau_A.
\]
\end{notn}
\begin{prop}\label{decreasing exact}
A decreasing coaction functor $\tau$ is exact if and only if for any short exact sequence
\begin{equation}\label{seq dec}
\xymatrix{
0\ar[r] &(I,\delta_I)\ar[r]^\phi &(A,\delta)\ar[r]^\psi &(B,\delta^I)\ar[r]&0
}
\end{equation}
of coactions,
both
\[
\phi(I_\tau)=\phi(I)\cap A_\tau
\]
and
\[
\phi(I)+A_\tau\supset \psi^{-1}(B_\tau)
\]
hold.
\end{prop}
\begin{proof}
The proof is very similar to, and slightly easier than, that of \thmref{functor exact}, using the commutative diagram
\[
\xymatrix{
&0\ar[d]&0\ar[d]&0\ar[d]
\\
0\ar[r]&I_\tau\ar[d]_{\iota_I}\ar[r]^{\phi|}&A_\tau\ar[d]_{\iota_A}\ar[r]^{\psi|}&B_\tau\ar[d]_{\iota_{B}}\ar[r]&0
\\
0\ar[r]&I\ar[d]_{Q^\tau_I}\ar[r]^{\phi}&A\ar[d]_{Q^\tau_A}\ar[r]^{\psi}&B\ar[d]_{Q^\tau_{B}}\ar[r]&0
\\
0\ar[r]&I^\tau\ar[r]^{\phi^\tau}\ar[d]&A^\tau\ar[r]^{\psi^\tau}\ar[d]&B^\tau\ar[r]\ar[d]&0
\\
&0&0&0.
}
\]
\end{proof}
\begin{prop}\label{decreasing morita}
A decreasing coaction functor $\tau$ is Morita compatible if and only if
whenever $(X,\zeta)$ is an $(A,\delta)-(B,\epsilon)$ imprimitivity bimodule,
there are an $A^\tau-B^\tau$ imprimitivity bimodule $X^\tau$ and a $Q^\tau_A-Q^\tau_B$ compatible imprimitivity-bimodule homomorphism $Q^\tau_X:X\to X^\tau$.
\end{prop}
\begin{proof}
First suppose $\tau$ is Morita compatible.
Let $(X,\zeta)$ be an $(A,\delta)-(B,\epsilon)$ imprimitivity bimodule, and let $q^\tau_X:X^m\to X^\tau$
be a $q^\tau_A-q^\tau_B$ compatible imprimitivity-bimodule homomorphism onto an $A^\tau-B^\tau$ imprimitivity bimodule $X^\tau$, as in \lemref{X tau}.
By Lemmas~\ref{id} and \ref{X tau} there is also a $q^m_A-q^m_B$ compatible imprimitivity-bimodule homomorphism $q^m_X:X^m\to X$.
By definition, we have
\[
q^\tau_A=Q^\tau_A\circ q^m_A:A^m\to A^\tau.
\]
Thus
\begin{align*}
\ker q^m_X
&=(\ker q^m_A)\cdot X^m
\\&\subset (\ker Q^\tau_A\circ q^m_A)\cdot X^m
\\&=(\ker q^\tau_A)\cdot X^m
\\&=\ker q^\tau_X,
\end{align*}
and hence $q^\tau_X$ factors through a commutative diagram
\[
\xymatrix{
X^m \ar[dd]_{q^\tau_X} \ar[dr]^{q^m_X}
\\
&X \ar@{-->}[dl]^{Q^\tau_X}_{!}
\\
X^\tau
}
\]
for a unique imprimitivity bimodule homomorphism $Q^\tau_X$.
Moreover, $Q^\tau_X$ is compatible on the left with $Q^\tau_A$ by construction,
and similar reasoning, using the Rieffel correspondence of ideals, shows that it is also $Q^\tau_B$ compatible on the right.
Conversely, suppose we have $(X,\zeta)$, $X^\tau$, and $Q^\tau_X$ as indicated,
and let $(X^m,\zeta^m)$ be the associated $(A^m,\delta^m)-(B^m,\epsilon^m)$ imprimitivity bimodule from \lemref{Xm}.
By \lemref{X tau} it suffices to find a $q^\tau_A-q^\tau_B$ compatible imprimitivity-bimodule homomorphism $q^\tau_X:X^m\to X^\tau$.
Since $q^\tau=Q^\tau\circ q^m$ on both $A^m$ and $B^m$,
by \lemref{id} and our assumptions we can take
$q^\tau_X=Q^\tau_X\circ q^m_X$.
\end{proof}
\section{Coaction functors from large ideals}\label{large}
The most important source of examples of the decreasing coaction functors of the preceding section is large ideals of $B(G)$.
We recall some basic concepts from \cite{graded, exotic}.
Let $E$ be an ideal of $B(G)$ that is \emph{large},
meaning it is nonzero, $G$-invariant, and weak* closed.
Then the preannihilator ${}\ann E$ of $E$ in $C^*(G)$ is an ideal contained in the kernel of the regular representation $\lambda$.
Write $C^*_E(G)=C^*(G)/{}\ann E$ for the quotient group $C^*$-algebra
and
$q_E:C^*(G)\to C^*_E(G)$ for the quotient map.
The ideal ${}\ann E=\ker q_E$ of $C^*(G)$
is \emph{weakly} $\delta_G$-invariant,
i.e.,
$\delta_G$ descends to a coaction, which we denote by $\delta_G^E$, on the quotient $C^*_E(G)$.
For any coaction $(A,\delta)$ and any large ideal $E$ of $B(G)$,
\[
A_E:=\{a\in A:E\cdot a=\{0\}\}=\ker(\text{\textup{id}}\otimes q_E)\circ\delta
\]
is a \emph{small} ideal of $A$ (that is, an ideal contained in $\ker j_A=\ker \Lambda_A$)
and we write
$A^E=A/A_E$ for the quotient $C^*$-algebra and
$Q^E_A:A\to A^E$ for the quotient map.
$A_E$ is weakly $\delta$-invariant \cite[Lemma~3.5]{exotic},
and we write $\delta^E$ for the quotient coaction on $A^E$.
\begin{rem}
The properties of the $B(G)$-module structure (see \appxref{module lemmas}) allow for a shorter proof of invariance than in \cite{exotic}:
if $a\in A_E$, $f\in B(G)$, and $g\in E$ then
\[
g\cdot (f\cdot a)=(gf)\cdot a=0,
\]
because $E$ is an ideal, and it follows that $B(G)\cdot A_E\subset A_E$.
\end{rem}
\begin{prop}\label{E coaction functor}
$(A,\delta)\mapsto (A^E,\delta^E)$ is a decreasing coaction functor,
which we denote by $\tau_E$.
\end{prop}
\begin{proof}
By the above discussion and \lemref{dec coact}, it suffices to observe that for any morphism $\phi:(A,\delta)\to (B,\epsilon)$ of coactions
and
for all $a\in \ker Q^E_A$ and $f\in E$,
\begin{align*}
f\cdot \phi(a)
&=\phi(f\cdot a)=0,
\end{align*}
which implies that $\ker Q^E_A\subset \ker Q^E_B\circ \phi$.
\end{proof}
\begin{rem}
The above proposition should be compared with
\cite[Corollary~6.5 and Lemma~7.1]{BusEch},
\cite[Lemma~2.3]{BusEch2}, and
\cite[Lemma~A.3]{bgwexact}.
\end{rem}
\begin{ex}
$\tau_{B(G)}$ is the identity functor.
\end{ex}
\begin{ex}
$\tau_{B_r(G)}$ is naturally isomorphic to the normalization functor.
\end{ex}
\begin{ex}
The maximalization functor is not of the form $(A,\delta)\mapsto (A^E,\delta^E)$ for any large ideal $E$ of $B(G)$,
because the maximalization functor is not decreasing in the sense of \defnref{decreasing defn}.
\end{ex}
\begin{prop}\label{exact E functor}
For a large ideal $E$ of $B(G)$,
the coaction functor $\tau_E$ is exact if and only if
for every coaction $(A,\delta)$ and
every strongly invariant ideal $I$ of $A$,
\begin{equation}\label{test exact E}
I+A_E\supset \{a\in A:E\cdot a\subset I\}.
\end{equation}
\end{prop}
\begin{proof}
Let
\begin{equation}\label{seq}
\xymatrix{
0\ar[r] & (I,\zeta) \ar[r]^\phi & (A,\delta) \ar[r]^\psi & (B,\epsilon) \ar[r] &0
}
\end{equation}
be a short exact sequence of coactions.
Exactness of the associated sequence
\begin{equation}\label{E seq}
\xymatrix{
0\ar[r] & I^E \ar[r]^{\phi^E} & A^E \ar[r]^{\psi^E} & B^E \ar[r] &0
}
\end{equation}
will not be affected if we replace the short exact sequence \eqref{seq} by an isomorphic one, so without loss of generality
$\phi$ is the inclusion of an ideal $I$ of $A$
and $\psi$ is the quotient map onto $B=A/I$.
By \propref{decreasing exact},
the sequence \eqref{E seq} is exact if and only if
\begin{equation}\label{IE}
I_E=I\cap A_E
\end{equation}
and
\begin{equation}\label{AE}
I+A_E\supset \psi^{-1}(B_E).
\end{equation}
Since
\[
I_E=\{a\in I:E\cdot a=\{0\}\},
\]
\eqref{IE} automatically holds in this context.
On the other hand, \eqref{AE} is equivalent to \eqref{test exact E} because
\begin{align*}
B_E
&=\{a+I\in B=A/I:E\cdot (a+I)=\{0\}\}
\\&=\{a+I:E\cdot a\subset I\}.
\qedhere
\end{align*}
\end{proof}
\begin{rem}
Techniques similar to those used in the above proof, showing that \eqref{IE} holds automatically,
can also be used to show that
the functor $\tau_E$ preserves injectivity of morphisms:
if $\phi:A\to B$ is an injective equivariant homomorphism
and $a\in \ker \phi^E$,
then we can write $a=Q^E_A(a')$ for some $a'\in A$.
We have
\[
0=\phi^E(a)=\phi^E\circ Q^E_A(a')=Q^E_B\circ \phi(a'),
\]
so
\[
\phi(a')\in \ker Q^E_B=B_E.
\]
Thus for all $f\in E$ we have
\[
0=f\cdot \phi(a')=\phi(f\cdot a'),
\]
so $f\cdot a'=0$ since $\phi$ is injective.
But then $a'\in A_E=\ker Q^E_A$, so $a=0$.
This remark should be compared with \cite[Proposition~6.2]{BusEch}.
\end{rem}
\begin{cor}\label{intersect}
Let $E$ and $F$ be large ideals of $B(G)$,
and let $\<EF\>$ denote the weak*-closed linear span of the set $EF$ of products.
If $\tau_E$ or $\tau_F$ is exact then $\<EF\>=E\cap F$.
\end{cor}
\begin{proof}
Without loss of generality assume that $\tau_E$ is exact.
Note that, since $E$ is an ideal of $B(G)$,
\[
{}\ann E=\{a\in C^*(G):E\cdot a=\{0\}\},
\]
and similarly for ${}\ann F$.
Claim:
\[
{}\ann E+{}\ann F={}\ann \<EF\>.
\]
To see this, note that, since $\tau_E$ is exact, by \propref{exact E functor}
with
$(A,\delta)=(C^*(G),\delta_G)$ and
$I={}\ann F$
we have
\[
{}\ann F+{}\ann E\supset \{a\in C^*(G):E\cdot a\subset {}\ann F\}.
\]
Now, for $a\in C^*(G)$ we have
\begin{align*}
E\cdot a\subset {}\ann F
&\iff F\cdot (E\cdot a)=\{0\}
\\&\iff (EF)\cdot a=\{0\}
\\&\overset{*}{\iff} \<EF\>\cdot a=\{0\}
\\&\iff a\in {}\ann \<EF\>,
\end{align*}
where the equivalence at * holds since
for every $a\in C^*(G)$ the map from $B(G)$ to $C^*(G)$ defined by
$f\mapsto f\cdot a$
is weak*-weak continuous.
Thus ${}\ann F+{}\ann E\supset {}\ann \<EF\>$.
For the reverse containment, note that
$EF\subset E$ because $E$ is an ideal,
so $\<EF\>\subset E$ because $E$ is weak*-closed,
and hence ${}\ann E\subset {}\ann \<EF\>$.
Similarly, ${}\ann F\subset {}\ann \<EF\>$, and so ${}\ann E+{}\ann F\subset {}\ann \<EF\>$,
proving the claim.
Now,
since ${}\ann E$ and ${}\ann F$ are closed ideals of $C^*(G)$,
it follows from the elementary duality theory for Banach spaces that
\[
{}\ann E+{}\ann F={}\ann (E\cap F),
\]
and the corollary follows upon taking annihilators.
\end{proof}
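For completeness, the duality step at the end of the proof can be spelled out as follows (a standard argument, sketched here; the key point is that the sum of two closed ideals in a $C^*$-algebra is again a closed ideal):

```latex
\[
\bigl({}\ann E+{}\ann F\bigr)^\perp
=({}\ann E)^\perp\cap({}\ann F)^\perp
=E\cap F,
\]
since $E$ and $F$ are weak*-closed, and since ${}\ann E+{}\ann F$ is a
closed subspace of $C^*(G)$, taking preannihilators recovers it:
\[
{}\ann E+{}\ann F
={}\ann\bigl(({}\ann E+{}\ann F)^\perp\bigr)
={}\ann(E\cap F).
\]
```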
The following result should be compared with
\cite[Lemma~A.5]{bgwexact}:
\begin{prop}\label{morita}
The coaction functor $\tau_E$ is Morita compatible.
\end{prop}
\begin{proof}
Let $(X,\zeta)$ be an $(A,\delta)-(B,\epsilon)$ imprimitivity bimodule.
Since $\tau$ is decreasing, by \lemref{decreasing morita}, it suffices to show that
$X\dashind B_E=A_E$.
The external tensor product $X\otimes C^*_E(G)$ is an $(A\otimes C^*_E(G))-(B\otimes C^*_E(G))$ imprimitivity bimodule, and we have an $(\text{\textup{id}}_A\otimes q_E)-(\text{\textup{id}}_B\otimes q_E)$ compatible imprimitivity bimodule homomorphism
\[
\text{\textup{id}}_X\otimes q_E:X\otimes C^*(G)\to X\otimes C^*_E(G).
\]
The composition
\[
(\text{\textup{id}}_X\otimes q_E)\circ\zeta:X\to M(X\otimes C^*_E(G))
\]
is an
$(\text{\textup{id}}_A\otimes q_E)\circ\delta-(\text{\textup{id}}_B\otimes q_E)\circ\epsilon$
compatible imprimitivity bimodule homomorphism.
We have
\begin{align*}
\ker (\text{\textup{id}}_A\otimes q_E)\circ\delta&=A_E\\
\ker (\text{\textup{id}}_B\otimes q_E)\circ\epsilon&=B_E.
\end{align*}
Thus, by \cite[Lemma~1.20]{enchilada}, $A_E$ is the ideal of $A$ associated to the ideal $B_E$ of $B$ via the Rieffel correspondence.
\end{proof}
\begin{rem}
\propref{morita} subsumes \cite[Lemma~4.8]{exotic}, which is the special case of exterior equivalent coactions.
It is tempting to try to use this to simplify the proof of \cite[Theorem~4.6]{exotic}, which says that $(A,\delta)$ satisfies $E$-crossed-product duality if and only if it is isomorphic to $({A^m}^E,{\delta^m}^E)$,
since we have Morita equivalences
\[
(A^m,\delta^m)\sim_M
(A^m\otimes \mathcal K,\delta^m\otimes_*\text{\textup{id}})\sim_M
(A\rtimes_\delta G\rtimes_{\widehat\delta} G,\widehat{\widehat\delta}).
\]
However, it turns out that appealing to \propref{morita} would not shorten the proof of \cite[Theorem~4.6]{exotic} much.
Nevertheless, it is interesting to note that,
by \propref{morita},
we have
\begin{align*}
(A,\delta)=({A^m}^E,{\delta^m}^E)
&\iff
(A\otimes\mathcal K,\delta\otimes_*\text{\textup{id}})=((A^m\otimes\mathcal K)^E,(\delta^m\otimes_*\text{\textup{id}})^E),
\end{align*}
equivalently
\[
\ker\Phi=(A\rtimes_\delta G\rtimes_{\widehat\delta} G)_E,
\]
which by definition is equivalent to $E$-crossed-product duality for $(A,\delta)$.
\end{rem}
For some purposes,
albeit not for
the purposes of this paper, a more appropriate coaction functor associated to $E$ is the following (see also \cite[Theorem~5.1]{BusEch}):
\begin{defn}
The \emph{$E$-ization} of a coaction $(A,\delta)$ is
\[
(A^\text{$E$-ize},\delta^\text{$E$-ize}):=\bigl((A^m)^E,(\delta^m)^E\bigr).
\]
\end{defn}
$E$-ization is
a functor on the category of coactions,
being
the composition of the functors maximalization and $\tau_E$.
The $E$-ization of a $\delta-\epsilon$ equivariant homomorphism $\phi:A\to B$
is
\[
\phi^\text{$E$-ize}=(\phi^m)^E:A^{mE}\to B^{mE}.
\]
\begin{prop}
$E$-ization is a coaction functor.
\end{prop}
\begin{proof}
We must produce a suitable natural transformation $q^{\text{$E$-ize}}:(A^m,\delta^m)\to (A^\text{$E$-ize},\delta^\text{$E$-ize})$,
and we take
\[
q^{\text{$E$-ize}}_A=Q^E_{A^m}:A^m\to A^{mE}=A^\text{$E$-ize}.
\]
$q^{\text{$E$-ize}}$ is natural
since $\tau_E$ is a decreasing coaction functor.
\end{proof}
\begin{thm}\label{E morita}
For any large ideal $E$ of $B(G)$, the $E$-ization coaction functor is Morita compatible.
\end{thm}
\begin{proof}
Let $(X,\zeta)$ be an $(A,\delta)-(B,\epsilon)$ imprimitivity bimodule,
with associated $(A^m,\delta^m)-(B^m,\epsilon^m)$ imprimitivity bimodule $(X^m,\zeta^m)$.
We must show that
\[
X^m\dashind \ker q^{\text{$E$-ize}}_B=\ker q^{\text{$E$-ize}}_A.
\]
But this follows immediately by applying \propref{morita} to $(X^m,\zeta^m)$,
since
$q^{\text{$E$-ize}}_A=Q^E_{A^m}$ and
$q^{\text{$E$-ize}}_B=Q^E_{B^m}$.
\end{proof}
\begin{rem}\label{mE}
For any large ideal $E$,
the two coaction functors $\tau_E$ and $E$-ization have similar properties,
e.g., they are both Morita compatible
(\propref{morita} and \thmref{E morita}).
However, in general they are not naturally isomorphic functors. For example, if $E=B(G)$ then
$\tau_E$ is the identity functor and $E$-ization is maximalization.
That being said, for $E=B_r(G)$ we do have $\tau_E\cong \tau_E\circ \text{maximalization}$.
\end{rem}
Note that, given a coaction $(A,\delta)$, we have two homomorphisms of the maximalization $(A^m,\delta^m)$:
\[
\xymatrix{
(A^m,\delta^m) \ar[d]^{q^m} \ar[dr]^{q^{\text{$E$-ize}}}
\\
(A,\delta)
&(A^\text{$E$-ize},\delta^\text{$E$-ize});
}
\]
in \cite[Definition~3.7]{graded} we said $(A,\delta)$ is \emph{$E$-determined from its maximalization} if $\ker q^m=\ker q^{\text{$E$-ize}}$, in which case there is a natural isomorphism $(A,\delta)\cong (A^\text{$E$-ize},\delta^\text{$E$-ize})$.
Given an action $(B,\alpha)$, in \cite[Definition~6.1]{graded} we defined the \emph{$E$-crossed product} as
\[
B\rtimes_{\alpha,E} G=(B\rtimes_\alpha G)/(B\rtimes_\alpha G)_E=(B\rtimes_\alpha G)^E,
\]
where in the last expression we have composed the full-crossed-product functor with $\tau_E$.
As in \cite[Definition~4.5]{BusEch}, we say a coaction $(A,\delta)$ satisfies \emph{$E$-duality} (called ``$E$-crossed product duality'' in \cite[Definition~4.3]{exotic}), or is an \emph{$E$-coaction}, if
there is an isomorphism $\theta$ making the diagram
\[
\xymatrix{
A\rtimes_\delta G\rtimes_{\widehat\delta} G \ar[r]^-\Phi \ar[d]_{Q_E}
&A\otimes\mathcal K
\\
A\rtimes_\delta G\rtimes_{\widehat\delta,E} G \ar@{-->}[ur]_\theta^\simeq
}
\]
commute,
equivalently
\[
\ker\Phi=(A\rtimes_\delta G\rtimes_{\widehat\delta} G)_E,
\]
where $\Phi$ is the canonical surjection.
In \cite[Theorem~4.6]{exotic}
we proved that $(A,\delta)$ is an $E$-coaction if and only if it is $E$-determined from its maximalization.
(\cite[Theorem~5.1]{BusEch} proves the converse direction.)
\begin{lem}\label{determined}
For a coaction $(A,\delta)$, the following are equivalent:
\begin{enumerate}
\item
$(A,\delta)$ is an $E$-coaction.
\item
$(A,\delta)$ is $E$-determined from its maximalization.
\item
There exists a maximal coaction $(B,\epsilon)$ such that $(A,\delta)\cong (B^E,\epsilon^E)$.
\end{enumerate}
\end{lem}
\begin{proof}
The equivalence of (1) and (2) is
\cite[Theorem~4.6]{exotic}, and
(2) trivially implies (3).
Assume (3), i.e., that $(B,\epsilon)$ is maximal and we have an isomorphism $\theta:(B^E,\epsilon^E)\to (A,\delta)$.
The surjection $Q^E_B:(B,\epsilon)\to (B^E,\epsilon^E)$ is a maximalization,
since $\epsilon$ is maximal and $\ker Q^E_B\subset \ker q^n_B$.
Thus $\theta\circ Q_E^B$ is a maximalization of $(A,\delta)$.
Since any two maximalizations of $(A,\delta)$ are isomorphic,
there is an isomorphism $\psi$ making the diagram
\[
\xymatrix{
(A^m,\delta^m) \ar[d]_{q^m_A}
&(B,\epsilon) \ar@{-->}[l]_-\psi^-\simeq \ar[d]^{Q_E}
\\
(A,\delta)
&(B^E,\epsilon^E) \ar[l]^-\theta_-\simeq
}
\]
commute.
Thus $q^m_A\circ\psi$ is also a maximalization of $(A,\delta)$.
Therefore
\begin{align*}
\ker q^m_A
&=\psi\bigl(\ker Q_E\bigr)
\\&=\psi\bigl(B_E\bigr)
\\&=A^m_E,
\end{align*}
giving (2).
\end{proof}
\begin{thm}\label{equivalence}
The functor $\tau_E$
restricts
to give an equivalence of the category of maximal coactions to the category of $E$-coactions.
\end{thm}
Note: in the above statement, we mean the \emph{full} subcategories of the category of coactions.
\begin{proof}
By abstract nonsense, it suffices to show that the functor is essentially surjective and fully faithful, i.e.,
\begin{enumerate}
\item
every $E$-coaction $(A,\delta)$ is isomorphic to $(B^E,\epsilon^E)$ for some maximal coaction $(B,\epsilon)$, and
\item
for any two maximal coactions $(A,\delta)$ and $(B,\epsilon)$,
\[
\phi\mapsto \phi^E
\]
maps the set of equivariant homomorphisms $\phi:A\to B$
bijectively onto the set of equivariant homomorphisms $\psi:A^E\to B^E$.
\end{enumerate}
(1) is immediate from \lemref{determined}.
For (2),
given maximal coactions $(A,\delta)$ and $(B,\epsilon)$
and distinct nondegenerate equivariant homomorphisms $\phi,\psi:A\to B$,
we have an equivariant commutative diagram
\[
\xymatrix{
A \ar[rr]^-\phi \ar[dr]^{Q^E_A} \ar[dd]_{\Lambda_A}
&&B \ar[dr]^{Q^E_B} \ar@{-}[d]
\\
&A^E \ar[rr]^(.3){\phi^E} \ar[dl]^{\Lambda^E_A}
&{} \ar[d]_(.4){\Lambda_B}
&B^E \ar[dl]^{\Lambda^E_B}
\\
A^n \ar[rr]_-{\phi^n}
&&B^n,
}
\]
where $Q^E_A$ is a maximalization of $(A^E,\delta^E)$, $\Lambda_A$ is a normalization of $(A,\delta)$, and $\Lambda^E_A$ is a normalization of $(A^E,\delta^E)$, and similarly for the right-hand triangle involving the $B$'s.
There is a similar commutative diagram for $\psi$.
Since the normalizations $\phi^n$ and $\psi^n$ are distinct, by
\cite[Corollary~6.1.19]{bkq1}, we must have $\phi^E\ne \psi^E$ by commutativity of the diagram.
This proves injectivity.
For the surjectivity,
let $\sigma:A^E\to B^E$ be an equivariant homomorphism.
Then the maximalization $\sigma^m:A\to B$ of $\sigma$ is the unique equivariant homomorphism making the diagram
\[
\xymatrix{
A \ar[r]^-{\sigma^m} \ar[d]_{Q^E_A}
&B \ar[d]^{Q^E_B}
\\
A^E \ar[r]_-\sigma
&B^E
}
\]
commute.
Applying the functor $\tau_E$, we see that the diagram
\[
\xymatrix{
A \ar[r]^-{\sigma^m} \ar[d]_{Q^E_A}
&B \ar[d]^{Q^E_B}
\\
A^E \ar[r]_-{(\sigma^m)^E}
&B^E
}
\]
also commutes, so we must have $\sigma^m=((\sigma^m)^E)^m$ by the universal property of maximalization,
and hence $\sigma=(\sigma^m)^E$ by \cite[Corollary~6.1.19]{bkq1} again.
\end{proof}
\begin{rem}
Much of the development in this paper regarding ``classical'' categories carries over to the ``nondegenerate'' categories (involving multiplier algebras). The nondegenerate version of the above result
resembles the ``maximal-normal equivalence'' of \cite[Theorem~3.3]{clda},
which says that normalization restricts to an equivalence between maximal and normal coactions.
However, there are some properties missing: for example,
the functor $\tau_E$ is not a reflector in the categorical sense,
because
\[
Q_E:(A^E,\delta^E)\to (A^{EE},\delta^{EE})
\]
is not an isomorphism in general.
Indeed,
\cite[Proposition~8.4]{exotic} shows that
if $(A,\delta)$ is a maximal coaction
then the composition $(\text{\textup{id}}\otimes q_E)\circ\delta^E$ in the commutative diagram
\[
\xymatrix@C+30pt{
A \ar[d]_{Q_E}
\\
A^E \ar[r]^-{\delta^E} \ar[dr]_{(\text{\textup{id}}\otimes q_E)\circ\delta^E\hspace{.2in}}
&M(A^E\otimes C^*(G)) \ar[d]^{\text{\textup{id}}\otimes q_E}
\\
&M(A^E\otimes C^*_E(G))
}
\]
need not be faithful.
Thus we cannot characterize the $E$-coactions as the coactions that are ``$E$-normal''
in the sense that the map $Q_E$ is faithful.
Furthermore, unlike with
normalization,
\remref{mE} shows that $\tau_E$ is not isomorphic to its composition with maximalization.
\end{rem}
\begin{q}\label{min E}
Let $\mathcal F$ be a collection of large ideals of $B(G)$,
and let
\[
F=\bigcap_{E\in\mathcal F}E.
\]
Then $F$ is a large ideal of $B(G)$.
Is
$\tau_F$
a greatest lower bound for the coaction functors
$\{\tau_E:E\in\mathcal F\}$?
(It is easy to see that
$\tau_F$ is a lower bound.)
What if we take $\mathcal F$ to be the set of all large ideals $E$ of $B(G)$ for which
$\tau_E$
is exact?
\end{q}
\begin{q}
Given a coaction functor $\tau$,
is there a large ideal $E$ of $B(G)$
such that, after restricting to maximal coactions, $\tau$ is naturally isomorphic to $\tau_E$?
Note that at the level of objects the statement is false:
\cite[Example~5.4]{BusEch} gives a source of examples of a maximal coaction $(A,\delta)$ and a weakly invariant ideal $I\subset \ker q^n_A$ such that the quotient coaction $(A/I,\delta^I)$ is not of the form $(A^E,\delta^E)$ for any large ideal $E$.
(\cite[Theorem~6.10]{exotic} gives related examples, albeit not involving maximal coactions.)
Here is a related question: do there exist coaction functors that include the Buss-Echterhoff examples?
Such a functor could not be exact, since the Buss-Echterhoff examples are explicitly based upon short exact sequences whose images under the quotient maps are not exact.
We could ask the same question for the functors $\tau_E$, which, again, are exact for $E=B(G)$ but not for $E=B_r(G)$.
\end{q}
\begin{q}
For which large ideals $E$ is the coaction functor $E$-ization exact?
Exactness trivially holds for $E=B(G)$, since $B(G)$-ization coincides with maximalization.
On the other hand, exactness does not always hold for $E=B_r(G)$, because Gromov has shown the existence of nonexact groups.
\end{q}
\begin{q}
Let $\tau$ be the minimal exact and Morita compatible coaction functor.
Applying $\tau$ to the canonical coaction $(C^*(G),\delta_G)$,
we get a coaction $(C^*(G)^\tau,\delta_G^\tau)$,
with a canonical quotient map
\[
q^\tau:C^*(G)\to C^*(G)^\tau.
\]
Then
\[
E_\tau:=(\ker q^\tau)^\perp
\]
is a large ideal of $B(G)$,
by \cite[Corollary~3.13]{graded}.
Does the functor $\tau$ coincide with ${E_\tau}$-ization?
This is related to the following question:
is
\begin{align*}
E_\tau
&=\bigcap\{E:\text{$E$ is a large ideal such}
\\&\hspace{1in}\text{that $E$-ization is exact}\}?
\end{align*}
Again we could ask the analogous questions for $\tau_E$.
See also the discussion in \cite[Section~8.1]{bgwexact}.
\end{q}
\begin{rem}
Related to \qref{min E} above,
what if we consider only finitely many large ideals?
Let $E$ and $F$ be two large ideals, and let $D=E\cap F$, which is also a large ideal.
Suppose that the coaction functors $\tau_E$ and $\tau_F$ are both exact.
Is $\tau_D$ exact?
We proved in \corref{intersect} that
exactness of $\tau_E$ implies that $D$ is the weak*-closed span of the set of products $EF$,
and then we can deduce from this that if
\[
\xymatrix{
0\ar[r]
&(I,\gamma)\ar[r]^\phi
&(A,\delta)\ar[r]^\psi
&(B,\epsilon)\ar[r]
&0
}
\]
is a short exact sequence of coactions,
and if we assume that $\delta$ is \emph{w-proper} in the sense that $(\omega\otimes\text{\textup{id}})\circ\delta(A)\subset C^*(G)$ for all $\omega\in A^*$,
then the sequence
\[
\xymatrix{
0\ar[r]
&I^D\ar[r]^{\phi^D}
&A^D\ar[r]^{\psi^D}
&B^D\ar[r]
&0
}
\]
is exact.
We see a way to parlay this into a proof that $\tau_D$ is indeed exact,
but this requires a somewhat more elaborate version of Morita compatibility,
involving not only imprimitivity bimodules but more general $C^*$-correspondences.
This will perhaps resemble the property that Buss, Echterhoff, and Willett call \emph{correspondence functoriality} (see \cite[Theorem~4.9]{BusEchWil}).
We plan to address this in a forthcoming publication.
\end{rem}
\begin{appendix}
\section{$B(G)$-module lemmas}\label{module lemmas}
Every coaction $\delta:A\to M(A\otimes C^*(G))$ gives rise to a $B(G)$-module structure on $A$ via
\[
f\cdot a=(\text{\textup{id}}\otimes f)\circ\delta(a)\righttext{for}f\in B(G),a\in A.
\]
We feel that this module structure is under-appreciated,
and will point out here several situations in which it
makes things easier, since it allows us
to avoid computations with tensor products.
\begin{prop}\label{equivariant}
Let $(A,\delta)$ and $(B,\epsilon)$ be coactions of $G$, and let $\phi:A\to B$ be a homomorphism. Then $\phi$ is $\delta-\epsilon$ equivariant if and only if
it is a module map, i.e.,
\[
\phi(f\cdot a)=f\cdot \phi(a)\righttext{for all}f\in B(G),a\in A.
\]
\end{prop}
\begin{proof}
First assume that $\phi$ is $\delta-\epsilon$ equivariant,
and let $f\in B(G)$ and $a\in A$. Then
\begin{align*}
\phi(f\cdot a)
&=\phi\bigl((\text{\textup{id}}\otimes f)\circ\delta(a)\bigr)
\\&=(\text{\textup{id}}\otimes f)\bigl((\phi\otimes\text{\textup{id}})\circ\delta(a)\bigr)
\\&=(\text{\textup{id}}\otimes f)\bigl(\epsilon\circ\phi(a)\bigr)
\\&=f\cdot \phi(a).
\end{align*}
Conversely, assume that $\phi$ is a module map,
and let $a\in A$.
Then for every $f\in B(G)$ the above computation shows that
\[
(\text{\textup{id}}\otimes f)\bigl((\phi\otimes\text{\textup{id}})\circ\delta(a)\bigr)
=(\text{\textup{id}}\otimes f)\bigl(\epsilon\circ\phi(a)\bigr),
\]
and it follows that $(\phi\otimes\text{\textup{id}})\circ\delta(a)
=\epsilon\circ\phi(a)$ since slicing by $B(G)=C^*(G)^*$ separates points of $M(B\otimes C^*(G))$.
\end{proof}
\begin{prop}\label{invariant module}
Let $(A,\delta)$ be a coaction, and let $I$ be an ideal of $A$. Then $I$ is weakly $\delta$-invariant if and only if it is invariant for the module structure, i.e.,
\[
B(G)\cdot I\subset I.
\]
\end{prop}
\begin{proof}
First assume that $I$ is weakly $\delta$-invariant,
and let $f\in B(G)$ and $a\in I$.
We must show that $f\cdot a\in I$.
Let $q:A\to A/I$ be the quotient map.
We have
\begin{align*}
q(f\cdot a)
&=q\bigl((\text{\textup{id}}\otimes f)(\delta(a))\bigr)
\\&=(\text{\textup{id}}\otimes f)\bigl((q\otimes\text{\textup{id}})\circ\delta(a)\bigr)
\\&=0,\righttext{since $I\subset \ker (q\otimes\text{\textup{id}})\circ\delta$.}
\end{align*}
Conversely, assume that $I$ is $B(G)$-invariant,
and let $a\in I$.
We need to show that $a\in \ker (q\otimes\text{\textup{id}})\circ\delta$.
For every $f\in B(G)$ we have $f\cdot a\in I$, so
\begin{align*}
0
&=q(f\cdot a)
\\&=(\text{\textup{id}}\otimes f)\bigl((q\otimes\text{\textup{id}})\circ\delta(a)\bigr).
\end{align*}
It follows that $(q\otimes\text{\textup{id}})\circ\delta(a)=0$
since slicing by $B(G)$ separates points in $M(A\otimes C^*(G))$.
\end{proof}
\begin{rem}
It has been noticed elsewhere in the literature that the $B(G)$-module structure can be useful in other ways.
For example,
$\delta$ is slice-proper \cite[Definition~5.1]{exotic} if and only if the maps
\[
f\mapsto f\cdot a:B(G)\to A
\]
are weak*-weak continuous (for $a\in A$)
\cite[Lemma~5.3]{exotic}.
Also,
for any full coaction $(A,\delta)$,
\[
A_0:=\clspn\{A(G)\cdot A\}
\]
is a $C^*$-subalgebra and a nondegenerate $A(G)$-submodule of $A$,
where $A(G)$ is the Fourier algebra of $G$,
and $\delta$ is nondegenerate if and only if $A_0=A$
\cite[Lemma~1.2, Corollary~1.5]{fullred} (see also \cite[Lemma~2]{kat}).
In the same vein,
\cite[Corollary~1.7]{fullred} says that if $B$ is a nondegenerate $A(G)$-submodule of $M(A)$, then $\delta|_B$ is a nondegenerate coaction of $G$ on $B$.
\end{rem}
\end{appendix}
\section{Introduction}
It is known that perturbatively the rigid string
\cite{polrig,rig}
is trivial in the sense that at low energies it is equivalent to
the Nambu-Goto string. There is a hope that non-perturbative effects may change
this behaviour. In fact, lattice simulations indicate the appearance of a
non-perturbative IR
fixed point for the 3d rigid string \cite{amb}. For higher dimensional target
spaces the situation is unclear.
One of the non-perturbative signatures of field theory models is the presence of instantons.
Certain instanton equations of the rigid string appeared for
the first time in \cite{polrig}, and their solutions and
properties were then discussed in \cite{wheater,rob}. These instantons appeared to
be non-compact surfaces in $R^4$ and, as we shall show, they are somewhat
exceptional examples of more general instantons.
One can also find some
remarks about rigid string instantons in \cite{other}. Despite these works, not
much has been established toward a classification of instantons and their
relevance for string dynamics.
In this note we are going to investigate rigid string instantons in $R^4$ more
thoroughly. In particular we show that the
equations of \cite{polrig} give instantons of a very limited
type: our construction yields a much bigger family. The instantons are
classified
by two topological invariants: the Euler characteristic $\chi$ of the immersed
(closed) Riemann surface $\Sigma$ and the self-intersection number $I$ of the
immersion.
The constructed set of instantons is rich enough to cover all
possible values of $\chi,I$. It is interesting to note that, contrary to
ordinary instantons, the rigid string
instantons split into three families. The intersection of these families is
non-trivial and, except in one case, is equivalent to the instantons of
\cite{polrig}.
Let us recall some basic facts about the rigidity (sometimes called
extrinsic curvature).
The action is given by \refeq{extc1} and it is known to be
plagued with a plethora of identities
which allow one to exhibit its different aspects.
\begin{equation}
\int_\Sigma\sqrt{g}g^{ab}\p_a t^{\mu\nu}\p_b
t^{\mu\nu}=2\int_\Sigma\sqrt{g}K_a^{ib}K_b^{ia}=
2\int_\Sigma\sqrt{g}(\D{\vec X})^2-8\pi \chi.
\label{extc1}
\end{equation}
In the above $t^{\mu\nu}\equiv \epsilon^{ab}\p_aX^\mu \p_b X^\nu/\sqrt{g}$ is the
element
of the Grassmann manifold $G_{4,2}$. Throughout the paper we shall exclusively use
the induced metric
$g_{ab}\equiv \vX{a}\vX{b}$. The tension tensor $K_{ab}^{i}$ is defined by
the relation:
$\p_a\vX{b}=\Gamma_{ab}^{c}\vX{c}+K_{ab}^{i}{\vec N}_i$, where
${\vec N}_i$ ($i$=1,2) are two vectors normal to
the immersed surface. The Euler
characteristic of the Riemann surface $\Sigma$ is given by Gauss-Bonnet formula
$\chi=\frac{1}{4\pi}\int_\Sigma\sqrt{g}R$. In the course of the paper we shall
heavily use identities
expressing $\chi$ and the self-intersection number $I$ of an immersion in
terms of $t^{\mu\nu}$.
\begin{eqnarray}
\label{chi}
\chi&=&\frac{1}{4\pi}\int_\Sigma \epsilon^{ab}\p_a t^{\mu\nu}\p_b
t^{\mu\rho}t^{\nu\rho}\\
\label{inter}
I&=&-\frac{1}{16\pi}\int_\Sigma \sqrt{g}g^{ab}\p_a t^{\mu\nu}\p_b
{\tilde t}^{\mu\nu}
=\frac{1}{8\pi}\int_\Sigma \epsilon^{ab}\p_a t^{\mu\nu}\p_b
t^{\mu\rho}{\tilde t}^{\nu\rho}
\label{top}
\end{eqnarray}
where ${\tilde t}_{\mu\nu}={\mbox{\small $\frac{1}{2}$}} \epsilon^{\mu\nu\rho\sigma}t^{\rho\sigma}$.
The paper is organized as follows: in the first section we show that there is
an infinite energy barrier between instantons belonging to different topological
sectors of rigid string. This indicate the existence of instantons in each
topological sector of the model. In Sec.2 we derive basic equations,
while in the
next section we discuss their solutions. Finally we comment on
other works devoted to the subject and state conclusions.
\section{Energy barrier between different instantons}
Before we go to the discussion of the instanton equations we shall show that
any action containing the rigidity has a minimum in each
topological sector given by the Euler characteristic $\chi$ of $\Sigma$
and the self-intersection number $I$ of the immersion $X$.
The considerations are valid for compact
surfaces only.
It is known that generic maps of a Riemann surface of genus $h$ to $R^4$
($X:\Sigma\to R^4$) are
immersion. Immersions of given $\Sigma$ are classified,
up to regular homotopies,
by the self-intersection number $I$ \cite{whitney1} (see also \cite{nfold} for a
brief review and some definitions). Hereafter we shall
identify both topological numbers $\chi,I$ with analytical
expressions \refeq{chi} and
\refeq{inter}, respectively.
With this identification, the genus $h$ of $\Sigma$ is not really an invariant of continuous
deformations of $X$ but can acquire arbitrary values from metric singularities.
Similar behaviour characterizes $I$, as can be inferred from the similarity
of the
expressions \refeq{chi} and \refeq{inter}. In the following we shall discuss
the latter case more thoroughly. We shall construct a continuous family of maps
$X_\a$ which will connect two
immersions with $I$ differing by one. Thus the family will not be a regular
homotopy. $X_\a$ must go
through a singularity, i.e.\ a point where
the induced metric vanishes. We shall show that at this point the
rigidity is infinite. An action with rigidity will separate different
topological sectors of field configurations. Hence there must exist a minimum
of the rigidity for each $I$ and also for each $\chi$.
Any map $X$ with
given ($\chi,\,I$) can be locally deformed, by a homotopy which is not
regular, in such a way that
$I$ will change by one.
For a certain value of the deformation parameter, say $\a=0$,
the map $X_{\a=0}$ ceases to be
an immersion. The problem is to characterize singularities of
$X_\a$ under such deformations.
We shall parameterize the
family $X_\a$ by $\a$ in a neighborhood of zero, $\a\in D^1$.
Because the deformations are local, instead of
considering the whole Riemann surface $\Sigma$ we take a 2d disc $D^2\subset\Sigma$.
Thus the family of discs is a
3d manifold $D^1\times D^2$. Maps $X_\a$ from $D^2\subset\Sigma$ to
$R^4$ will be constructed as a composition of two maps: $X_\a=g\circ f_\a$,
where $f_\a:D^2\to D^1\times D^2$ and $g:D^1\times D^2\to R^4$.
The first map $f$ must be non-singular, i.e.\ it must be an embedding,
because $X_\a$ must be an immersion for all $\a\neq 0$.
In order to analyze singularities of $X_\a$ we must consider maps ${\tilde g}$
of $D^2$
together with the parameter space $\a\in D^1$ into
the 5-manifold $D^1\times R^4$. The requirement is that the parameter space
is embedded into $D^1$ of $D^1\times R^4$. Hence $\p_\a {\tilde g}$ is never
zero.
The generic singularities of such maps are well known
\cite{morin} to be cross-caps which in a suitable coordinate system have the
form:
\begin{equation}
{\tilde g}:(t_1,t_2,x)\to (t_1,t_2,t_1 x,t_2 x,x^2).
\label{sing}
\end{equation}
The map has the line of self-intersections ${\tilde g}(0,0,x)=
{\tilde g}(0,0,-x)$ which terminates at the
singular point $x=0$. We must immerse the family ($f_\a$) of discs,
$f_\a:D^2\to D^1\times D^2$, in such a way
that the image intersects (in 2 points) the line $t_1=t_2=0$ for
$\a<0$ and ceases to do so otherwise. Hence, for $\a<0$,
$X_\a=g\circ f_\a$ is an immersion of $D^2\subset\Sigma$ in $R^4$ with one
self-intersection point.
$X_0=g\circ f_0$ ceases to be an immersion because it passes through the singular
point $(0,0,0)$ of ${\tilde g}$.
As $f_\a$ we consider a family of quadrics:
$f_\a(s,t)=\{s^2+t^2+\a,s,t\}$ in $D^1\times D^2$.
It respects all requirements just
imposed on the family of embedded surfaces. As $g$ we take the last four
components of the map (\ref{sing}) dropping the coordinate which corresponds to
an embedding of the deformation parameter ($\a\sim t_1$) in $D^1$.
Thus $X_\a$ is:
\begin{equation}
\label{map}
X_\a(s,t)=g\circ f_\a(s,t)=\{s,(s^2+t^2+\a) t,s t,t^2\}
\end{equation}
As we expected, at $\a=0$ the image of $D^2$ under $X$ is singular i.e.
$\p X_0/\p t=0$ at $(s=t=0)$. For $\a\neq 0$ the map \refeq{map} is an immersion.
For $\a> 0$ it does not have self-intersection points ($I=0$).
For $\a< 0$ it has one self-intersection point: $X_\a(s=0,t=\sqrt{-\a})=
X_\a(s=0,t=-\sqrt{-\a})$ ($I=1$). The above arguments
show that (\ref{map}) is the generic form of maps with the desired properties.
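The qualitative claims about the family \refeq{map} are elementary to check by hand; the following numerical sketch (purely illustrative, not part of the argument) verifies them at sample points, using the hand-computed partial derivative $\p X_\a/\p t=(0,s^2+3t^2+\a,s,2t)$:

```python
# The family X_alpha(s,t) = (s, (s^2+t^2+alpha) t, s t, t^2) of maps D^2 -> R^4.

def X(s, t, alpha):
    return (s, (s * s + t * t + alpha) * t, s * t, t * t)

def dX_dt(s, t, alpha):
    # hand-computed partial derivative in t: (0, s^2 + 3 t^2 + alpha, s, 2 t)
    return (0.0, s * s + 3 * t * t + alpha, s, 2 * t)

# alpha = 0: X fails to be an immersion at s = t = 0 (the t-derivative vanishes)
assert dX_dt(0.0, 0.0, 0.0) == (0.0, 0.0, 0.0, 0.0)
# ... but it does not vanish there for alpha != 0
assert dX_dt(0.0, 0.0, -1.0) != (0.0, 0.0, 0.0, 0.0)

# alpha = -1 < 0: one self-intersection point, X(0, 1) = X(0, -1)
assert X(0.0, 1.0, -1.0) == X(0.0, -1.0, -1.0)

# alpha = +1 > 0: the same two parameter points no longer collide
assert X(0.0, 1.0, 1.0) != X(0.0, -1.0, 1.0)
```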
Now we calculate the rigidity for such a family of maps.
The relevant formulae are \refeq{extc1}.
At $\a=0$ the density
$\sqrt{g}g^{ab}g^{cd}K^i_{ac}K^i_{bd}$ diverges as:
$4\pi/r^3 + O(1/r)$, where $r$ is the polar coordinate on $D^2\subset \Sigma$.
Existence of the singularity means that the rigidity tends to
infinity at $\a=0$ i.e. when $X$ ceases to be an immersion.
Thus {\it
the rigidity separates configurations with
different self-intersection number by an infinite barrier}.
Hence, we can expect a
minimum of an action with the rigidity for each topological sector of the
theory.
This is the main conclusion of this part of the paper. Let us stress the local
aspect of the considerations, which implies their validity
for an arbitrary target space-time.
In the rest of the
paper we shall be looking for these minima in terms of instantons. It will
appear that in some cases the minima do not exist if we restrict considerations
to compact surfaces in $R^4$.
\section{Basic equations}
In this section we shall derive instanton equations. We recall that
the tensor $t^{\mu\nu}$
is the (Gauss) map $\Sigma\to G_{4,2}$, where $\Sigma$ is the Riemann
surface and $G_{4,2}\equiv SO(4)/(SO(2)\times SO(2))
=S^2\times S^2$ is the Grassmann manifold of oriented planes in
$R^4$ ($\mu=0,1,2,3$). The product structure of $G_{4,2}$ is related to the
fact that $t^{\mu\nu}$
splits into self-dual ($+$) and anti-self-dual ($-$) parts:
$t_\pm^{\mu\nu}\equiv t^{\mu\nu}\pm {\tilde t}^{\mu\nu}$. Both tensors assume
values in $S^2$ due to $t_\pm^{\mu\nu}t_\pm^{\mu\nu}=4$. In order to simplify
notation we introduce two vectors: $n_\pm^i=t_\pm^{0i}$ ($i=1\dots
3$) which parameterize all components of $t_\pm^{\mu\nu}$ and respect
${\vec n}_+^2={\vec n}_-^2=1$. There are associated topological
invariants $I_\pm$ which classify homotopy classes of maps\footnote{In the
case under consideration, homotopy classes of maps $\Sigma\to
G_{4,2}$ are classified by their degrees. This follows from
the Pontryagin-Thom construction \cite{bredon}.} $\Sigma\to
G_{4,2}$. These are the degrees (winding numbers) of maps
$t_\pm^{\mu\nu}:\Sigma\to S^2$.
Both topological invariants \refeq{top} can be expressed in terms of
$I_{\pm}$.
\begin{equation}
\chi=I_+-I_-\quad I={\mbox{\small $\frac{1}{2}$}}(I_++I_-)
\label{char}
\end{equation}
where $I_{\pm}=\frac{1}{8\pi}\int_\Sigma \epsilon^{ab}\p_a{\vec n}_\pm\cdot(\p_b{\vec n}_\pm
\times {\vec n}_\pm)$. We also note another useful
identity:
\begin{equation}
I=-\frac{1}{16\pi}\int_\Sigma \sqrt{g}g^{ab}(\p_a{\vec n}_+\p_b{\vec n}_+-\p_a{\vec n}_-\p_b{\vec n}_-)
\label{ichar}
\end{equation}
which stems from (\ref{top}).
Using \refeq{char} and \refeq{ichar} we get
\begin{eqnarray}
\int_\Sigma\sqrt{g}g^{ab}\p_a t^{\mu\nu}\p_b t^{\mu\nu}
&=&\int_\Sigma\sqrt{g}g^{ab}(\p_a{\vec n}_+\p_b{\vec n}_++\p_a{\vec n}_-\p_b{\vec n}_-
)\label{extc2}\\
&=&2\int_\Sigma\sqrt{g}g^{ab}\p_a{\vec n}_+\p_b{\vec n}_++16\pi I
\label{idp}\\
&=&2\int_\Sigma\sqrt{g}g^{ab}\p_a{\vec n}_-\p_b{\vec n}_--16\pi I
\label{idm}
\end{eqnarray}
In order to derive instanton equations we follow the standard route.
Let us write the inequalities:
\begin{equation}
\int_\Sigma \sqrt{g}g^{ab}(\p_a {\vec n}_+\pm \frac{\epsilon_a^{\;c}}{\sqrt{g}}
\p_c {\vec n}_+\times {\vec n}_+)(\p_b {\vec n}_+\pm
\frac{\epsilon_b^{\;d}}{\sqrt{g}}\p_d {\vec n}_+\times {\vec n}_+)\geq 0
\end{equation}
which imply $\int_\Sigma\sqrt{g}g^{ab}\p_a{\vec n}_+\p_b{\vec n}_+\geq 8\pi |I_+|$.
They are saturated by the following instanton equations for the
self-dual part of $t^{\mu\nu}$ i.e. for ${\vec n}_+$:
\begin{equation}
({+,\pm})\equiv\p_a {\vec n}_+\pm \frac{\epsilon_a^{\;c}}{\sqrt{g}}\p_c {\vec n}_+\times {\vec n}_+
=0
\label{instp}
\end{equation}
There is a twin set of instanton equations for ${\vec n}_-$ i.e. for the
anti-self-dual
part of $t^{\mu\nu}$.
\begin{equation}
({-,\pm})\equiv\p_a {\vec n}_-\pm \frac{\epsilon_a^{\;c}}{\sqrt{g}}\p_c {\vec n}_-\times
{\vec n}_- =0
\label{instm}
\end{equation}
As we shall see below, \refeq{instp} and \refeq{instm} are not
independent equations. This is obvious if one notices that ${\vec n}_+$ and ${\vec n}_-$
carry altogether the same degrees of freedom as $X$.
It follows that if one treats ${\vec n}_+$ and ${\vec n}_-$ as if they were
independent, the so-called
integrability conditions appear. These will be discussed at the end of the
paper. Moreover, one must realize that the metric also depends only on the same
degrees of freedom.
Hereafter we shall discuss relations between Eqs.(\ref{instp},\ref{instm}).
If (\ref{instp}) holds then
$I_+\geq 0$ for
$({+,-})=0$ and $I_+\leq 0$ for $({+,+})=0$.
On the other hand if (\ref{instm}) holds then $I_-\geq 0$ for
$({-,-})=0$ and $I_-\leq 0$ for $({-,+})=0$.
Let us check when two instanton equations can be respected
simultaneously. From (\ref{ichar}) we get
$I={\mbox{\small $\frac{1}{2}$}}(-|I_+|+|I_-|)$. Comparing this with
\refeq{char} we conclude that in this case $
|I_-|-I_-=I_+ +|I_+| $ must be respected. All possible solutions to
this condition are listed below.
\begin{enumerate}
\item $I_+>0$ implies $I_-< 0$. Instanton equations: $({-,-})=0,\;({+,+})=0$.
Below we shall show that, in fact $({-,-})=0 \Leftrightarrow
({+,+})=0\Leftrightarrow \D X^\mu=0 $.
\item $I_+=0$ implies $I_-\geq 0$. Instanton equations are
$({+,-})=({+,+})=0$ and $(-,-)=0$. Due to the first point we get
$\p_a t_+^{\mu\nu}=0$.
\item $I_-=0$ implies $I_+\leq 0$. Instanton equations $({-,+})=({-,-})=0$
and $(+,+)=0$. Due to the first point we get $\p_a t_-^{\mu\nu}=0$.
\item $I_+<0$ implies $I_-> 0$. Instanton equations:
$({+,-})=0,\;({-,+})=0$. Due to (\ref{char}) and $\chi\leq 2$,
both equations can be
respected simultaneously only for $I_+=1, I_-=-1$ (non self-intersecting
sphere).
\end{enumerate}
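The case analysis above can be double-checked by brute force; the following sketch (a sanity check only) enumerates integer pairs $(I_+,I_-)$ satisfying $|I_-|-I_-=I_+ +|I_+|$ over a small window and verifies the sign patterns of the four cases:

```python
# Enumerate integer pairs (I+, I-) obeying |I-| - I- = I+ + |I+|
# and confirm they fall into the cases listed above.
solutions = [(ip, im) for ip in range(-3, 4) for im in range(-3, 4)
             if abs(im) - im == ip + abs(ip)]

for ip, im in solutions:
    if ip > 0:
        assert im < 0          # case 1
    elif ip == 0:
        assert im >= 0         # case 2
    else:
        assert im >= 0         # cases 3 (im = 0) and 4 (im > 0)
```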
We rewrite $({+,+})=0$ in terms of components of the stress tensor
$K^i_{ab}$.
\begin{equation}
K^i_{ef}\le \d_{ij}\left(
-\d_a^e\frac{\epsilon^{\;f}_b}{\sqrt{g}}+\d_b^f\frac{\epsilon^{\;e}_a}{\sqrt{g}} \right) +
\epsilon_{ij}\left( \d_a^e\d_b^f+\frac{\epsilon^{\;e}_a\epsilon^{\;f}_b}{g}\right)\right\rbrack=0
\label{mini}
\end{equation}
The l.h.s. of \refeq{mini} is equivalent to $K^{i\;a}_a=0$ i.e. to $\D
X^\mu=0$. Analogously one can show that $({-,-})=0 \Leftrightarrow \D
X^\mu=0$. Instantons respecting $\D X^\mu=0$ are called minimal.
Because the l.h.s. of \refeq{extc1} is non-negative, minimal
instantons cannot exist for Riemann surfaces of
genus smaller than 2. There is one exception: for the torus with $I=0$,
from \refeq{extc1} we get $\p_a t^{\mu\nu}=0$, i.e. the ``torus'' is in fact
degenerate to $R^2$.
We summarize this discussion noting that we obtained three families of instanton
equations: $({+,-})=0,\;({-,+})=0,\;({+,+})\equiv({-,-})=0$.
Instantons considered in
\cite{polrig,wheater,rob} lie in the intersection of these families and
correspond to the equations $\p_a t_\pm^{\mu\nu}=0$.
It is useful to construct a map of all possible instantons on
the $(I,h)$ plane (here $h$ is the genus of the Riemann surface $\Sigma_h$).
Minimal instantons respect $I_+<0$ and $I_-> 0$, which leads to the inequality
$|I|\leq h-1$; $({+,-})=0$ instantons respect $I_+\geq 0$ and from
\refeq{idp} we get
$I+I_+=2I_++h-1\geq 0$; $({-,+})=0$ instantons respect $I_-\leq 0$ and from
\refeq{idm}
$I+I_-=2I_-+1-h\leq 0$. We also notice that instantons may
exist for all possible $\chi$ and $I$.
Fig.1 summarizes the relation between the different types of instanton equations.
\begin{figure}[t]
\label{pathnf}
\vspace{0.5cm}
\postscript{inst3.eps}{0.67}
\vspace{0.5cm}
\caption{ Rigid string instantons on the $(I,h)$ plane. Minimal instantons
are denoted by empty circles, $({+,-})=0$ instantons by $+$'s, and $({-,+})=0$
instantons by $-$'s, respectively.
Instantons $\p_a {\vec n}_\pm=0$ are denoted by full circles.
The solution found in this paper corresponds to the $\pm$ point.}
\end{figure}
\section{Solutions of instanton equations}
Apparently the problem of solving
Eqs.(\ref{instp},\ref{instm}) is very complicated. Despite this
some results are known.
One of the tools is
the Gauss map of an immersion
\cite{osserman}. Let us recall some basic facts. The Gauss map of an immersion
$X:\Sigma\to R^4$ is defined to be the map $G:\Sigma\to G_{2,4}=S^2\times S^2$ i.e.
$G(z)$ gives tangent plane to the immersion at the point $X(z)$.
For the conformal metric $g_{ab}\propto \d_{ab}$ one can
identify $G_{2,4}$ with a quadric in $CP^3$: $\sum_{\mu=1}^4 Z_\mu^2=0$, where
$Z_\mu$ are homogeneous coordinates on $CP^3$. $Z^\mu$ coincides with
$\p X^\mu$ up to a $C$-number function $\Psi$ : $\p X^\mu=\Psi Z^\mu$.
Unfortunately not
every map $G:\Sigma\to G_{2,4}=S^2\times S^2$ can be a Gauss map of an immersion.
The so-called integrability conditions have to be respected \cite{osserman}.
They originate from the fact that $\pb \p X^\mu$ must be orthogonal to
$\p X^\mu$
and real. Both conditions read:
\begin{equation}
\pb \ln(\Psi)=-\frac{\pb Z^\mu {\bar Z}^\mu}{|Z|^2}, \quad
{\rm Im}\le \Psi\left(\pb Z^\nu ( \d^{\mu\nu}-
\frac{Z^\mu {\bar Z}^\nu}{|Z|^2})\right)\right\rbrack=0
\label{second}
\end{equation}
There is a nice parameterization of $Z$
\begin{equation}
Z=\{ 1+f_+f_-,i(1-f_+f_-),f_+-f_-,-i(f_++f_-)\},
\label{param}
\end{equation}
where $f_i:\Sigma\to S^2$.
From Eq.\refeq{second} one can derive the following
integrability conditions \cite{osserman}:
\begin{eqnarray}
&&\frac{|\pb f_+|}{1+|f_+|^2}=\frac{|\pb f_-|}{1+|f_-|^2}
\label{con1},\\
&&{\rm Im}\le\pb\left(\frac{\p\pb f_+}{\pb f_+}-2\frac{\p f_+{\bar f}_+}{1+|f_+|^2}+
\frac{\p\pb f_-}{\pb f_-}-2\frac{\p f_-{\bar f}_-}{1+|f_-|^2}\right)\right\rbrack=0
\label{con2}
\end{eqnarray}
Both conditions take relatively simple form when expressed in
terms of $({+,+}),({-,-})$: $|({+,+})|=|({-,-})|\; , \quad dA=0$
where $A=[\overline{(+,+)}\p ({+,+}) +\overline{(-,-)}\p
({-,-})]/|({+,+})|^2\;dz+c.c.$. We see that the first integrability condition
guarantees the equality (\ref{idp})=(\ref{idm}).
For minimal instantons the first condition is a
tautology while the second one looks singular.
There is a theorem \cite{osserman} which says that once the integrability
conditions
(\ref{con1},\ref{con2}) are solved for appropriately regular maps, we can
reconstruct
the surface $X$ up to a 4d shift and a scale.
It is worth noticing that the
integrability conditions possess symmetry
groups. First of all, there is the rotation group $SO(4)\sim SO_+(3)\times
SO_-(3)$.
\begin{equation}
f_\pm\to \frac{\a_\pm f_\pm+\b_\pm}{-{\bar \b}_\pm f_\pm +{\bar \a}_\pm},
\quad |\a_\pm|^2+|\b_\pm|^2=1,\quad
\a_\pm,\b_\pm\in C
\label{so}
\end{equation}
Both conditions are also invariant under
(restricted) conformal transformations
performed on $f_+$ and $f_-$ simultaneously:
$f_\pm(z,\zb)\to f_\pm(g(z),\overline{g(z)})$.
This symmetry is the remnant of the reparameterization invariance of the
original theory.
Below we shall shortly discuss solutions to the instantons equations and the
above integrability
conditions. In the parameterization \refeq{param}:
\begin{eqnarray}
{\vec n}_+=(\frac{f_+{\bar f}_+-1}{1+|f_+|^2},\;i\frac{f_+-{\bar
f}_+}{1+|f_+|^2},\;\frac{f_++{\bar f}_+}{1+|f_+|^2})\nonumber\\
{\vec n}_-=(\frac{f_-{\bar f}_--1}{1+|f_-|^2},\;i\frac{{\bar
f}_--f_-}{1+|f_-|^2},\;\frac{f_-+{\bar f}_-}{1+|f_-|^2})
\end{eqnarray}
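As a quick consistency check (not part of the derivation), the following sketch confirms that the components of ${\vec n}_+$ above are real and that the vector lies on $S^2$ for arbitrary $f_+$:

```python
import numpy as np

rng = np.random.default_rng(1)

def n_plus(f):
    """n_+ built from a pointwise value of f_+ via the parameterization above."""
    d = 1 + abs(f)**2
    return np.array([(f * f.conjugate() - 1) / d,
                     1j * (f - f.conjugate()) / d,
                     (f + f.conjugate()) / d])

for _ in range(5):
    f = complex(rng.normal(), rng.normal())
    n = n_plus(f)
    # All components are real and the vector has unit length.
    assert np.allclose(n.imag, 0)
    assert np.isclose(np.sum(n.real**2), 1.0)
```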
so that we get
\begin{equation}
({+,-})=0\Leftrightarrow\p f_+=0,\quad ({-,+})=0 \Leftrightarrow\p f_-=0
\label{holo}
\end{equation}
respectively. Hereafter we shall concentrate on the first of Eqs \refeq{holo}.
It can be easily solved:
\begin{equation}
f_+=\eta_+\prod_{j=1}^{I_+}\frac{\zb-{\bar a}_j}{\zb-{\bar b}_j}.
\label{fplus}
\end{equation}
We solve the integrability conditions for the $I_+=1$ case.
Using conformal invariance we can
put $f_+=\zb$. Next we choose the ansatz for $f_-$:
${f}_-=\frac{a \zb+b}{c \zb+d},\quad ad-bc=1,\; a,b,c,d\in C$.
This is equivalent to an assumption that both Eqs.(\ref{holo}) are respected.
The second integrability condition holds identically.
The first
integrability condition gives $d={\bar a},\; c=-{\bar b}$ thus setting
the solution on the $SO_-(3)$ manifold. Hence the whole moduli space of
solutions for $f_\pm$ consists of one point (up to irrelevant rotations of
space-time and
reparameterizations of the world-sheet). From (\ref{second})
we can determine
$\Psi$ : $\Psi=i\lambda/|Z|^2$, $\lambda\in R$.
Integrating $\p X=\Psi Z$ we get the immersion $X$
\begin{equation}
X-X_0=\frac{\lambda}{1+|z|^2} \{y,x,0,1\}
\label{sphere}
\end{equation}
The above is the sphere
$(X^0-X_0^0)^2+(X^1-X_0^1)^2+(X^3-X_0^3-\lambda/2)^2=\lambda^2/4$, $X^2-X_0^2=0$.
The formula \refeq{sphere} gives a 5-dimensional family of instantons.
In the forthcoming paper \cite{next} we show that this is really the most
general instanton
family with $\chi=2,I=0$.
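The sphere identity quoted after \refeq{sphere} can be confirmed numerically; writing $z=x+iy$, a short check:

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 1.7   # an arbitrary scale lambda

# Sample world-sheet points z = x + i y and evaluate X - X_0 from Eq. (sphere).
x, y = rng.normal(size=(2, 100))
denom = 1 + x**2 + y**2
X = lam / denom * np.array([y, x, np.zeros_like(x), np.ones_like(x)])

# Sphere equation: (X^0)^2 + (X^1)^2 + (X^3 - lambda/2)^2 = lambda^2/4, X^2 = 0.
lhs = X[0]**2 + X[1]**2 + (X[3] - lam / 2)**2
assert np.allclose(lhs, lam**2 / 4)
assert np.allclose(X[2], 0)
```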
Unfortunately for $I_+>1$ the situation is much more complicated.
The simple ansatz
$f_-=\eta_-\prod_{j=1}^{m}\frac{\zb-{\bar a}_j}{\zb-{\bar b}_j}\prod_{k=1}^{m'}
\frac{z-a_k'}{z-b_k'}$ appeared to be too restrictive and we were not able
to find
any solutions to the integrability conditions. Definitely, different methods are
required \cite{next}.
\section{Final comments}
Let us finally comment on other works concerning the rigid string instantons
and state conclusions.
Certain instanton equations for rigid string were proposed in
\cite{polrig} and further elaborated in
\cite{wheater,rob}\footnote{Surfaces constructed in \cite{rob} must be singular
i.e. they are not immersions. By direct computation one can find that their
Euler characteristic is (except in one case) greater than 2.}.
The considered equations were
\begin{equation}
\p_a {\vec n}_\pm=0
\label{polinst}
\end{equation}
One can see that they belong to the set Eqs.(\ref{instp},\ref{instm})
restricted by the condition $I_\pm=0$. Eq.
(\ref{char}) implies that for (\ref{polinst}) instantons $I=\pm{\mbox{\small $\frac{1}{2}$}}\chi$
holds, so e.g. for the torus the
equations can describe only the standard ($I=0$) immersion in $R^3$.
Moreover (\ref{polinst}) implies also $\D X^\mu=0$.
Hence no compact surface can be immersed in $R^4$ while
(\ref{polinst}) is respected.
In the present paper we have shown that, contrary to
\refeq{polinst},
the general instanton equations (\ref{instp},\ref{instm}) can
have a representative for each value of $\chi$ and $I$.
Not all of them can have a compact
representative in the non-compact space-time $R^4$. Non-compactness of the
space-time makes some of the
immersions ``run away'' to infinity,
i.e. instantons become non-compact and hard to control.
It is known that minimal
instantons $({+,+})=0$
cannot exist in $R^4$ \cite{osserman2,eells}. For the $(+,-)$ and
the twin $(-,+)$ family
we have found
explicitly one compact instanton
with topological numbers $\chi=2,I=0$.
In the forthcoming paper we shall show that, in fact, all these instantons are
compact \cite{next}.
We want to stress
that despite this, the general arguments of
Sec.1 show that solutions to the instanton equations should
exist for all possible topological sectors for compact space-times.
This subject goes beyond the scope of this
paper.
We finish with a few remarks concerning possible applications of
the rigid string instantons described in this paper.
It is conceivable that they may play a
prominent role in the string description of gauge fields. For example, it is
known that YM$_2$ in 1/N expansion is localized on surface-to-surface
holomorphic and anti-holomorphic maps \cite{cmr} (see also \cite{horava}).
A four-dimensional
version of this construction was proposed in \cite{my,nfold}.
Unfortunately, in this
case no definite set of maps was given. One may speculate that
the rigid string instantons should be the appropriate maps.
Work in this direction is in progress.
{\bf Acknowledgment}. I would like to thank
A.Niemi for kind hospitality at Uppsala University where a part of this
paper was prepared.
\section{Introduction}
Learning quickly is a hallmark of human intelligence; even a child can recognize objects from a few examples. Fortunately, meta-learning provides a promising strategy for enabling efficient learning from a small amount of supervised information, and achieves great success in many fields~\cite{DBLP:conf/icml/RakellyZFLQ19,DBLP:conf/acl/QianY19}, especially in few-shot learning~\cite{DBLP:journals/corr/abs-2004-05439,DBLP:journals/csur/WangYKN20}. However, compared with humans, who can easily utilize experience from a seen environment (or domain) to help efficiently learn tasks from other unseen domains, most meta-learning models thus far have focused on the situation where
all the tasks are from the same domain. The ability to generalize experience to unseen domains is still a challenge for recent meta-learning methods.
Moreover, the ability of meta-learning to generalize to unseen domains is also critical in practice, since many settings where meta-learning is applied essentially involve a cross-domain problem. For example, it is impossible to construct large training datasets for rare classes (\eg, some rare bird species, or some diseases), and the auxiliary set for training the meta-learning model usually comes from other domains where annotated data is easily collected. Therefore, the meta-learning method is expected to leverage the meta-knowledge from a seen domain to help it learn efficiently in unseen domains.
Although some works have paid attention to this problem~\cite{DBLP:conf/iclr/TsengLH020,DBLP:journals/corr/abs-2010-07734,sun2020explain}, almost all the methods are tailored for classification, leading to limited applications. Typically, feature-wise transformation~\cite{DBLP:conf/iclr/TsengLH020} can only be applied to metric-based meta-learning models, and~\cite{sun2020explain} shows performance comparable to~\cite{DBLP:conf/iclr/TsengLH020}. STARTUP~\cite{DBLP:journals/corr/abs-2010-07734} must use unlabeled data from the target domain. Moreover, the performance of some methods is shown to be sensitive to the degree of the domain shift; they even underperform traditional meta-learning methods when there exists a large domain discrepancy between the training and target domains~\cite{DBLP:conf/eccv/GuoCKCSSRF20}.
Therefore, in this work, we aim to propose a \emph{model-agnostic} and \emph{domain-free} method to improve the generalization of various meta-learning frameworks on unseen domains, in the sense that it can be applied to different learning problems, and is robust whether the domain shift is small or large. Moreover, \emph{we do not need data from the unseen domains}.
The core idea is to generate tasks from other unseen domains, and utilize these pseudo tasks, combined with true tasks sampled from the source domain, to learn domain-invariant meta-knowledge, which can improve the generalization of meta-learning on unseen domains.
In order to achieve this goal, we propose a shift layer to learn how to simulate the domain shift and generate tasks from unseen domains. For training it, we also develop a new adversarial learning-to-learn mechanism. In this way, the meta-learning model and the shift layer can be jointly trained end-to-end.
We evaluate the proposed method with different meta-learning models on both regression and classification problems. Experiments demonstrate that our method is model-agnostic and robust to the degree of the domain shift.
The three primary contributions of this work are as follows:
\begin{itemize}
\item We propose a shift layer to generate pseudo tasks from unseen domains. With these pseudo tasks, the meta-learning model can easily learn cross-domain meta-knowledge.
\vspace{-1mm}
\item We develop an adversarial learning-to-learn mechanism to help the shift layer capture how to generate appropriate tasks which benefit for improving the generalization of the meta-learning model.
\vspace{-1mm}
\item The experimental results show that our method can achieve state-of-the-art performance on cross-domain few-shot classification, and also
effectively improves the generalization of various meta-learning models on unseen domains in few-shot regression.
\end{itemize}
\section{Related Work}
\subsection{Meta-learning}
Meta-learning aims to assist the learning process in the new task by studying how learning models perform on each learning task. Recent meta-learning methods can be broadly divided into three categories, metric-based, gradient-based, and model-based methods.
\textbf{Metric-based methods.} Metric-based meta-learning framework can be considered as learning to compare, and a nonparametric similarity function is designed to evaluate the
similarity between examples. For example, Matching networks~\cite{DBLP:conf/nips/VinyalsBLKW16} first use an attention recurrent network as a feature encoder to map images from different classes to a common meta-feature space, and apply cosine similarity to obtain the predicted result; Prototypical networks~\cite{DBLP:conf/nips/SnellSZ17} adopt Euclidean distance; and RelationNet~\cite{DBLP:conf/cvpr/SungYZXTH18} directly utilizes a deep distance metric to measure the similarity. In general, metric-based methods are simple and effective; however, thus far they are restricted to classification.
\textbf{Model-based methods.} In this category, meta-learning models are usually designed as a parameterized predictor to generate parameters for the new tasks. For example, Ravi \etal~\cite{DBLP:conf/iclr/RaviL17} and Santoro \etal~\cite{santoro2016meta} both used the recurrent neural network as the predictor.
\textbf{Gradient-based methods.} Gradient-based methods focus on extracting the meta-knowledge required to improve the optimization performance. Model-agnostic meta-learning (MAML)~\cite{DBLP:conf/icml/FinnAL17} regards the initialization of a deep network as meta-knowledge and aims to learn a good initialization for all tasks, so that the learner of a new task just needs a few gradient steps from this initialization. R2D2~\cite{DBLP:conf/iclr/BertinettoHTV19} and MetaOpt~\cite{DBLP:conf/cvpr/LeeMRS19} adopt ridge regression or support vector machines~\cite{cortes1995support} as the task-specific learner for each learning task, respectively. With these linear classification models, they can learn a feature embedding model which generalizes well on the new task. Compared with metric-based methods, gradient-based methods can be applied to many learning problems, \eg regression and reinforcement learning. However, they usually suffer from second-order derivatives; Raghu \etal~\cite{DBLP:conf/iclr/RaghuRBV20} proposed ANIL to relieve this problem.
\subsection{Domain Adaptation}
There is a substantial body of work on domain adaptation~\cite{DBLP:journals/ijon/WangD18}, which aims to learn from one or multiple source domains a well-performing model on a target domain. Early methods of domain adaptation generally rely on instance re-weighting~\cite{DBLP:conf/nips/DudikSP05} or model parameter adaptation~\cite{yang2007cross}. Since the emergence of domain adversarial neural networks
(DANN)~\cite{ganin2016domain}, recent frameworks~\cite{DBLP:conf/iccv/LeeKKJ19,DBLP:conf/iccv/XuLYL19} are mainly based on applying adversarial training to alleviate the domain shift existing in source and target domains. There are also some methods using the discrepancy-based framework to align the marginal distribution between domains~\cite{DBLP:conf/icml/LongZ0J17,DBLP:conf/cvpr/Kang0YH19}.
However, most domain adaptation frameworks follow two strict assumptions, \ie, the label spaces of the source and target domains are the same, and numerous unlabeled images in the target domain are available. According to the arguments in~\cite{DBLP:conf/iclr/TsengLH020,DBLP:conf/eccv/GuoCKCSSRF20}, these assumptions may not be realistic and restrict the domain adaptation framework from handling novel concepts. Our work considers the scenario of how to improve the generalization of the learning model on new concepts from unseen domains.
\begin{figure*}
\centering
\includegraphics[width=14.5cm]{fig_1.pdf}
\caption{Method overview. In each updating, we sample three tasks from the source domain, \eg, $\mathcal{T}_1$, $\mathcal{T}_2$ and $\mathcal{T}_3$ in this figure. $\mathcal{T}_1$ is used to help the proposed shift layer with the initialization $\boldsymbol \phi$ learn how to simulate the domain shift in unseen domains. Then, based on $\mathcal{T}_2$, the adapted shift layer $\boldsymbol \phi'$ can generate a pseudo task $\hat{\mathcal{T}}_2$ that is supposed to be from the other domains. Finally, $\hat{\mathcal{T}}_2$ and $\mathcal{T}_3$ are used to optimize the meta-parameter together.} \label{fig:1}
\vspace{-2mm}
\end{figure*}
\subsection{Domain Generalization}
In contrast to domain adaptation, domain generalization methods~\cite{balaji2018metareg,DBLP:conf/iclr/ShankarPCCJS18,DBLP:journals/corr/abs-2103-01134} are devoted to learning features that perform well when transferred to unseen domains. These models do not need data from the unseen domains during the training stage. Most recently, meta-learning based approaches~\cite{balaji2018metareg,li2019episodic} have been proposed for domain generalization and achieve impressive results. The idea of these methods is to use episodic training to simulate the domain shift between the training and evaluation stages. In this way, better generalization on the unseen domains is achieved. Yet, existing domain generalization approaches still aim at tackling the problem under the assumption that the instances in the training stage share the same label space with the data in the unseen domain.
Besides the label space, these algorithms also require access to training instances drawn from several source domains, not a single one. Our method does not have this limitation.
Compared with domain adaptation and generalization, this work studies a more challenging setting, which just requires one single source domain and needs to generalize well on the new concepts from new domains.
\subsection{Cross-domain Few-shot Classification}
Cross-domain few-shot classification is a scenario to which our work can be applied. In cross-domain few-shot classification, training and novel classes are drawn from different domains, and the class label sets are disjoint. This scenario is very difficult; therefore, few works aim at cross-domain few-shot classification. Typically, Tseng \etal~\cite{DBLP:conf/iclr/TsengLH020} used feature-wise transformations to improve the generalization ability of the learned representations, and Guo \etal~\cite{DBLP:conf/eccv/GuoCKCSSRF20} implemented a broader study of cross-domain few-shot classification and proposed a challenging benchmark. Phoo \etal~\cite{DBLP:journals/corr/abs-2010-07734} introduced a
self-training approach that allows few-shot learners to adapt feature representations to the
target domain using some unlabeled data from the target domain. LRP~\cite{sun2020explain} uses explanation-guided training to improve the performance.
However, all these approaches are tailored for classification. Specifically, some of them are sensitive to the degree of the domain shift or rely on the transductive setting. Our method is model-agnostic and inductive which can be applied to many learning problems.
\section{Preliminary}
We firstly present the meta-learning problem in the typical few-shot learning, and then generalize to the setting that our method aims to solve.
\subsection{Meta-learning in Typical Few-shot Learning}
In the typical few-shot learning, the meta-learning model accesses a set of training tasks $\mathcal{S}^\text{meta} = \{\mathcal{T}_i\}^T_{i=1}$ drawn from a task distribution $P(\mathcal{T})$. Each task $\mathcal{T}_i$ contains a dataset $\mathcal{D}_i$, which is usually divided into two disjoint sets: $\mathcal{D}^\text{tr}_i$ and $\mathcal{D}^\text{ts}_i$. Each of these sets is associated with $K$ data-label pairs, \ie, letting $\mathbf{x} \in \mathcal{X}$ and $\mathbf{y} \in \mathcal{Y}$ denote data and their labels, respectively, $\mathcal{D}^\text{tr}_i = \{
(\mathbf{x}^k_i, \mathbf{y}^k_i)\}^K_{k=1}$, and similarly for $\mathcal{D}^\text{ts}_i$. In general, $\mathcal{D}^\text{tr}_i$ and $\mathcal{D}^\text{ts}_i$ in the same task share the same label space. Different tasks have different label spaces.
We suppose that all the tasks share the same learning algorithm $\mathcal{A}lg$ and loss function $\mathcal{L}$. Each task $\mathcal{T}_i$ has its own learning model (or base-learner) $\mathcal{A}lg_i$ parameterized by $\mathbf{w}_i \in \mathbb{R}^d $. The meta-learning model is interested in learning a meta-learner, \eg, a neural network, from the meta-training set $\mathcal{S}^\text{meta}$, which can help the base-learner $\mathcal{A}lg_j$ of a new task $\mathcal{T}_{j}$ learn efficiently with a few labeled data. This motivation can be formulated as below:
\begin{align}
& \min_{\boldsymbol \theta} \sum_{i=1}^{T} \mathcal{L} ( \boldsymbol \theta, \mathbf{w}_i; \mathcal{D}^\text{ts}_i) \label{eq:bilevel_meta-learner} \\
& \text{s.t.}~\mathbf{w}_i = \min_{\mathbf{w}} \mathcal{L} (\mathbf{w}; \boldsymbol \theta, \mathcal{D}^\text{tr}_i ) \label{eq:bilelvel_base-learner},
\end{align}
where $\boldsymbol \theta$ denotes the meta-parameter. Particularly, Eq.~\ref{eq:bilevel_meta-learner} and Eq.~\ref{eq:bilelvel_base-learner} can be solved as a bi-level optimization problem~\cite{colson2007overview}.
\textbf{Note:} Metric-based meta-learning methods do not have the step of Eq.~\ref{eq:bilelvel_base-learner}, because their base-learners are nonparametric distance functions.
In the meta-test stage, when faced with a new task $\mathcal{T}_{j} \in P(\mathcal{T})$, the base-learner $\mathcal{A}lg_j$ can achieve good performance by adapting $\mathbf{w}_j$ with the learned meta-parameter.
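To make the bilevel formulation concrete, the following sketch runs a first-order (MAML-style) approximation of Eqs.~(\ref{eq:bilevel_meta-learner})-(\ref{eq:bilelvel_base-learner}) on a hypothetical 1-D regression family; the toy task family, learning rates, and iteration counts are our own illustrative choices, not part of the original formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A toy 1-D regression task y = w_true * x with a task-specific w_true."""
    w_true = rng.uniform(0.5, 1.5)
    x_tr, x_ts = rng.normal(size=(2, 10))
    return (x_tr, w_true * x_tr), (x_ts, w_true * x_ts)

def inner_step(theta, data, lr=0.1):
    """Base-learner: one gradient step on D^tr from the meta-init theta."""
    x, y = data
    grad = 2 * np.mean((theta * x - y) * x)
    return theta - lr * grad

theta = 5.0                      # meta-parameter: the shared initialization
for _ in range(200):             # outer loop, first-order approximation
    tr, ts = sample_task()
    w = inner_step(theta, tr)    # task-specific adaptation on D^tr
    x, y = ts                    # outer loss evaluated on D^ts
    outer_grad = 2 * np.mean((w * x - y) * x)   # dL/dw, treated as dL/dtheta
    theta -= 0.05 * outer_grad

# After meta-training, theta sits near the center of the task family,
# so one inner step adapts well to a new task.
assert abs(theta - 1.0) < 1.0
```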
\subsection{Review of Cross-domain Setting}
Different from the typical few-shot learning, we address the few-shot problem under the domain generalization setting. In other words, the meta-training set $\mathcal{S}^\text{meta} = \{\mathcal{T}_i\}^T_{i=1}$ is sampled from a seen (source) domain, and the meta-learning model trained on $\mathcal{S}^\text{meta}$ is supposed to help new tasks from other unseen domains learn fast. More specifically, there exists a distribution discrepancy between the dataset $\mathcal{D}_i$ in the training task $\mathcal{T}_i \in \mathcal{S}^\text{meta}$ and $\mathcal{D}_j$ in the new task $\mathcal{T}_j$. Moreover, we cannot access the images in the unseen domain at the training stage.
\section{Methodology}
In this paper, our main idea is to learn how to simulate the domain shift existing in different domains. In this way, we can generate pseudo tasks from other domains. With these tasks, the generalization of the meta-learning system on real unseen domains is expected to be improved.
\subsection{Feature-wise Shift Layer}
Firstly, we introduce a \textbf{F}eature-w\textbf{i}se \textbf{S}hift \textbf{L}ayer (FiSL) which is used to simulate the domain shift in our method. The architecture of FiSL is based on feature-wise transformation~\cite{DBLP:conf/aaai/PerezSVDC18}, which is proven to be capable of representing domain-specific information in many works~\cite{DBLP:conf/iclr/DumoulinSK17,DBLP:conf/aaai/PerezSVDC18}. In few-shot learning, feature-wise transformation is already adopted to dynamically represent domain-specific~\cite{DBLP:conf/iclr/TsengLH020} and task-specific information~\cite{DBLP:conf/nips/OreshkinLL18}.
Suppose that the meta-learner contains a feature encoder $f_{\boldsymbol \theta}$ with the parameter $\boldsymbol \theta \in \Theta$, we are given a feature activation map $\mathbf{z}_0 \in \mathcal{Z}$ of an image $\mathbf{x}_0$ from the last layer of the feature encoder with the dimension of $C \times H \times W$. The output $\mathbf{z}$ of our shift layer is
\begin{equation}\label{eq:shift block}
\mathbf{z} = \boldsymbol \gamma \odot \mathbf{z}_0 + \boldsymbol \beta, ~~\text{where}~~\mathbf{z}_0 = f_{\boldsymbol \theta} (\mathbf{x}_0) \in \mathbb{R}^{C \times H \times W},
\end{equation}
$\boldsymbol\gamma$ and $\boldsymbol \beta$ are learnable scaling and shift vectors applied to affine transformation. For easy notation, Eq.~\ref{eq:shift block} can be denoted by $\mathbf{z} = \text{FiSL}(\mathbf{x}_0)$.
After training, the shift layer with $\boldsymbol \phi =\{\boldsymbol \gamma, \boldsymbol \beta\}$ is supposed to be able to transfer images from the source domain to other unseen domains.
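The transformation of Eq.~\ref{eq:shift block} amounts to a per-channel affine map on the feature tensor; a minimal sketch (assuming $\boldsymbol\gamma$ and $\boldsymbol\beta$ are per-channel vectors broadcast over the spatial dimensions, which is our reading of the vector notation above):

```python
import numpy as np

rng = np.random.default_rng(0)

def fisl(z0, gamma, beta):
    """Feature-wise shift layer: z = gamma * z0 + beta,
    with per-channel gamma, beta broadcast over the H x W map."""
    C = z0.shape[0]
    return gamma.reshape(C, 1, 1) * z0 + beta.reshape(C, 1, 1)

C, H, W = 4, 3, 3
z0 = rng.normal(size=(C, H, W))   # feature map from the encoder f_theta
gamma = np.ones(C)                 # identity initialization
beta = np.zeros(C)

# Identity at initialization: the source-domain features are recovered.
assert np.allclose(fisl(z0, gamma, beta), z0)

# A non-trivial (gamma, beta) shifts channel statistics, simulating a
# domain change while preserving spatial structure.
z = fisl(z0, 2.0 * gamma, beta + 0.5)
assert np.allclose(z, 2.0 * z0 + 0.5)
```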
\subsection{Adversarial Learning-to-learn Mechanism}
However, how to train a meta-learning model with FiSL is an intractable problem, because of two questions:
\emph{
1. How to make FiSL learn the way of generating pseudo tasks.
2. How to make the meta-learning model learn useful domain-invariant meta-knowledge from these tasks.}
\subsubsection{How to Generate Pseudo Tasks}
In the first problem, we are interested in training FiSL in a single domain $P_0$ to simulate the domain shift and generate pseudo tasks from unforeseen domains $P$ for improving the generalization and robustness of meta-learning.
Inspired by the recent developments in robust optimization and adversarial data augmentation~\cite{sinha2017certifiable,DBLP:conf/nips/VolpiNSDMS18}, we consider the first problem following the worst-case problem around the (training) source distribution $P_0$, as
\begin{equation}\label{eq:worst-case}
\begin{split}
\min_{\boldsymbol \theta} & \sup_{P: D(P,P_0) \leq \rho} \mathbb{E}_P[ \mathcal{L}( \boldsymbol \theta, \mathbf{w}^*_i; \mathcal{D}^\text{ts}_i)], \\
& \text{where}~\mathbf{w}^*_i = \mathop{\arg\min}_{\mathbf{w}} \mathcal{L} (\mathbf{w}; \boldsymbol \theta, \mathcal{D}^\text{tr}_i ).
\end{split}
\end{equation}
Here, $P_0$ represents the distribution that images in the seen (source) domain follow. $P$ is the distribution that FiSL simulates. $D(P, P_0)$ is a distance metric on the space of probability distributions. $\mathcal{D}^\text{tr}_i$ and $\mathcal{D}^\text{ts}_i$ are the support (training) and query (test) sets of task $\mathcal{T}_i$ from the source domain $P_0$.
The solution of Eq.~\ref{eq:worst-case} guarantees good performance (robustness) of the learned $\boldsymbol \theta$ against any data distribution $P$ within distance $\rho$ of the source domain $P_0$. In other words, the meta-parameter $\boldsymbol \theta$ can achieve good generalization on unseen tasks by solving Eq.~\ref{eq:worst-case}.
We first focus on how to simulate the unforeseen distributions $P$ by FiSL.
To preserve the semantics of the input samples, similar to~\cite{DBLP:conf/nips/VolpiNSDMS18,DBLP:conf/nips/000300M20}, we use the Wasserstein distance defined in the latent feature space $\mathcal{Z}$ as our metric $D$ to constrain the distributions FiSL simulates. Let $c_{\boldsymbol \theta}: \mathcal{Z} \times \mathcal{Z} \rightarrow \mathbb{R}_{+} \cup \{\infty\}$ denote the transportation cost of moving mass from $(\mathbf{x}_0, \mathbf{y}_0)$ to $(\mathbf{x}, \mathbf{y})$:
\begin{equation}\label{eq:wdistance}
c_{\boldsymbol \theta}((\mathbf{x}_0, \mathbf{y}_0), ( \mathbf{x}, \mathbf{y})) \coloneqq \frac{1}{2} \|\mathbf{z}_0 - \mathbf{z} \|^2_2 + \infty \cdot \mathbf{1}\{\mathbf{y}_0 \neq \mathbf{y}\},
\end{equation}
where $\mathbf{z}_0 = f_{\boldsymbol \theta} (\mathbf{x}_0)$ and $\mathbf{z} = \text{FiSL}(\mathbf{x}_0)$. $\mathbf{x}$ is the pseudo data of $\mathbf{z}$.
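As a sanity check, the transportation cost of Eq.~\ref{eq:wdistance} is simply a squared feature distance when the label is preserved, and infinite otherwise, so mass can never move across classes. A minimal NumPy sketch (our own illustration, with features standing in for $\mathbf{z}_0$ and $\mathbf{z}$):

```python
import numpy as np

def transport_cost(z0, z, y0, y):
    """Transportation cost: 0.5 * ||z0 - z||^2 if labels agree,
    infinity otherwise, so label-changing transport is forbidden."""
    if y0 != y:
        return np.inf
    return 0.5 * np.sum((z0 - z) ** 2)

c_same = transport_cost(np.zeros(3), np.ones(3), y0=1, y=1)  # 0.5 * 3 = 1.5
c_diff = transport_cost(np.zeros(3), np.ones(3), y0=1, y=2)  # infinite
```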
For probability measures $P$ and $P_0$ supported on $\mathcal{Z}$, we consider that $\Pi(P, P_0)$ denotes their couplings. Then, the notion of our metric is defined by
\begin{equation}\label{eq:our_wdistance}
D_{\boldsymbol \theta} (P, P_0) \coloneqq \inf_{M \in \Pi(P, P_0)} \mathbb{E}_M[c_{\boldsymbol \theta}((\mathbf{x}_0, \mathbf{y}_0), (\mathbf{x}, \mathbf{y}))].
\end{equation}
Armed with this notion of distance on the semantic space, we now consider a variant of the worst-case problem Eq.~\ref{eq:worst-case} in which the distance is replaced with $D_{\boldsymbol \theta}$ from Eq.~\ref{eq:our_wdistance}:
\begin{equation}\label{eq:our_definit}
\min_{\boldsymbol \theta} \sup_P \{ \mathbb{E}_P[ \mathcal{L}( \boldsymbol \theta, \mathbf{w}^*_i; \mathcal{D}^\text{ts}_i )]: D_{\boldsymbol \theta}(P, P_0) \leq \rho \}.
\end{equation}
However, for deep neural networks, this formulation is intractable for arbitrary $\rho$. Instead, following the reformulation of~\cite{sinha2017certifiable,DBLP:conf/nips/VolpiNSDMS18}, we
consider its \emph{Lagrangian relaxation} $\mathcal{F}$ for a fixed penalty parameter $\gamma$
\begin{equation}\label{eq:sup_max}
\min_{\boldsymbol \theta} \sup_P \{ \mathbb{E}_P[ \mathcal{L}( \boldsymbol \theta, \mathbf{w}^*_i; \mathcal{D}^\text{ts}_i )- \gamma D_{\boldsymbol \theta}(P, P_0)] \},
\end{equation}
where $\mathbf{w}^*_i = \mathop{\arg\min}_\mathbf{w} \mathcal{L}(\mathbf{w}; \boldsymbol \theta, \mathcal{D}^\text{tr}_i)$.
Taking the dual reformulation of the penalty relaxation Eq.~\ref{eq:sup_max}, we obtain an efficient solution procedure: simulate the unseen distribution $P$ with FiSL, then learn a robust $\boldsymbol \theta$ on it.
Based on Theorem~\ref{the1}, which is a minor adaptation of Lemma 1 in~\cite{DBLP:conf/nips/VolpiNSDMS18}, we propose an iterative training procedure to solve the penalty problem (Eq.~\ref{eq:sup_max}).
\begin{theorem}\label{the1}
Let $\mathcal{L}: (\Theta \times \mathbb{R}^d) \times (\mathcal{X} \times \mathcal{Y}) \rightarrow \mathbb{R}$ be the loss and let $\phi_\gamma$ denote the robust surrogate loss. Then, for any distribution $P_0$ and any $\gamma \geq 0$, we have that,
\begin{equation}
\begin{split}
& \sup_P \{ \mathbb{E}_P[ \mathcal{L}( \boldsymbol \theta, \mathbf{w}; \mathcal{D}^\text{ts}_i)- \gamma D_{\boldsymbol \theta}(P, P_0)] \} \\
& = \mathbb{E}_{(\mathbf{x}_0, \mathbf{y}_0 )\in \mathcal{D}^\text{ts}_i} [\phi_\gamma(\boldsymbol\theta, \mathbf{w}; \mathbf{x}_0, \mathbf{y}_0)],~~~~~ \text{where}
\end{split}
\end{equation}
\vspace{-3mm}
\begin{equation}\label{eq:su_loss}
\begin{split}
& \phi_\gamma(\boldsymbol\theta, \mathbf{w}; \mathbf{x}_0, \mathbf{y}_0) \\
& =\sup_{\mathbf{x} \in \mathcal{X}} \{ \mathcal{L}( \boldsymbol \theta, \mathbf{w}; \mathbf{x}, \mathbf{y}_0 )- \gamma c_{\boldsymbol \theta}( ( \mathbf{x}_0, \mathbf{y}_0), (\mathbf{x}, \mathbf{y}_0 ) ) \}.
\end{split}
\end{equation}
\end{theorem}
Our training procedure contains two phases: a maximization phase where FiSL learns how to simulate the domain shift by computing the maximization problem (Eq.~\ref{eq:su_loss}) and a minimization phase, where meta-parameter $\boldsymbol \theta$ can perform stochastic gradient descent procedures on the robust surrogate $\phi_\gamma$.
Note that $\mathbf{x}$ in Eq.~\ref{eq:su_loss} is generated by FiSL in our method.
\textbf{Maximization phase.} In the maximization phase, a new task $\mathcal{T}_j$ drawn from the source domain $P_0$ is given to help FiSL learn how to simulate the domain shift. This phase can be formulated as
\begin{equation}\label{eq:max}
\begin{split}
\boldsymbol \phi' = \boldsymbol \phi + \eta \nabla \{ & \mathbb{E}_{(\mathbf{x}_0, \mathbf{y}_0) \in \mathcal{D}^\text{ts}_j}\mathcal{L}(\boldsymbol \theta, \mathbf{w}^*_j; \mathbf{x}, \mathbf{y}_0) \\
& - \gamma c_{\boldsymbol \theta}(( \mathbf{x}_0, \mathbf{y}_0),( \mathbf{x}, \mathbf{y}_0) ) \},
\end{split}
\end{equation}
where $\mathbf{w}^*_j = \mathop{\arg\min}_\mathbf{w} \mathcal{L}(\mathbf{w}; \boldsymbol \theta, \mathcal{D}^\text{tr}_j)$ and $\mathbf{x}$ is the pseudo data generated by FiSL.
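To make the maximization phase concrete, the following toy sketch (entirely our own setup, not the paper's model) treats FiSL as a scalar affine map $x = g x_0 + b$, takes a toy task loss $\mathcal{L}(x) = x^2$, and performs gradient ascent on $\mathcal{L}(x) - \gamma c(x_0, x)$ with $c = \frac{1}{2}(x - x_0)^2$, mirroring the update of Eq.~\ref{eq:max}:

```python
# Toy 1-D illustration of the maximization phase: the penalized objective
# J(g, b) = L(x) - gamma * c(x0, x), with x = g * x0 + b, L(x) = x**2 and
# c = 0.5 * (x - x0)**2, is increased by gradient ascent on (g, b).
def ascent_step(g, b, x0, gamma=0.5, eta=0.1):
    x = g * x0 + b
    dJ_dx = 2.0 * x - gamma * (x - x0)            # analytic gradient w.r.t. x
    return g + eta * dJ_dx * x0, b + eta * dJ_dx  # chain rule through the map

def objective(g, b, x0, gamma=0.5):
    x = g * x0 + b
    return x ** 2 - gamma * 0.5 * (x - x0) ** 2

g, b = 1.0, 0.0                    # identity initialization of the shift map
j_start = objective(g, b, x0=1.0)  # J = 1.0 at the identity
for _ in range(5):
    g, b = ascent_step(g, b, x0=1.0)
j_end = objective(g, b, x0=1.0)    # a few ascent steps increase J
```

The transport penalty keeps the perturbed point tethered to $x_0$, which is exactly the role of $\gamma c_{\boldsymbol\theta}$ in Eq.~\ref{eq:max}.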
\textbf{Minimization phase.} With the learned FiSL, task $\mathcal{T}_i$ can be transformed into a pseudo task $\hat{\mathcal{T}}_i$ from another domain, which is used to optimize the meta-parameter $\boldsymbol \theta$:
\begin{equation}\label{eq:min}
\boldsymbol \theta = \boldsymbol \theta - \alpha \nabla_{\boldsymbol \theta} \mathcal{L} (\boldsymbol \theta, \mathbf{w}_i^*; \hat{\mathcal{D}}^\text{ts}_i),
\end{equation}
where $\mathbf{w}_i^* = \mathop{\arg\min}_{\mathbf{w}} \mathcal{L} (\mathbf{w}; \boldsymbol \theta, \hat{\mathcal{D}}^\text{tr}_{i} )$, $\hat{\mathcal{D}}^\text{tr}_i = \text{FiSL}_{\boldsymbol \phi'}(\mathcal{D}^\text{tr}_i)$ and $\hat{\mathcal{D}}^\text{ts}_i = \text{FiSL}_{\boldsymbol \phi'}(\mathcal{D}^\text{ts}_i)$.
\subsubsection{How to Learn Domain-invariant Knowledge}
For learning domain-invariant meta-knowledge, inspired by multi-task learning~\cite{zhang2017survey}, besides optimizing $\boldsymbol \theta$ with the pseudo task $\hat{\mathcal{T}}_i$ in Eq.~\ref{eq:min}, we sample an additional task $\mathcal{T}_k$ from the source domain to jointly optimize $\boldsymbol \theta$:
\begin{equation}\label{eq:overall-loss}
\min_{\boldsymbol \theta} \mathcal{L} (\boldsymbol \theta, \mathbf{w}^*_k; \mathcal{D}^\text{ts}_k)
+ \mathcal{L} ( \boldsymbol \theta, \mathbf{w}^*_{i} ; \hat{\mathcal{D}}^\text{ts}_{i}) ,
\end{equation}
where $\mathbf{w}^*_{k} = \mathop{\arg\min}_{\mathbf{w}} \mathcal{L} (\mathbf{w}; \boldsymbol \theta, \mathcal{D}^\text{tr}_{k})$ and $\mathbf{w}_{i}^* = \mathop{\arg\min}_{\mathbf{w}} \mathcal{L} (\mathbf{w}; \boldsymbol \theta, \hat{\mathcal{D}}^\text{tr}_{i})$. $\hat{\mathcal{D}}^\text{tr}_{i}$ and $\hat{\mathcal{D}}^\text{ts}_{i}$ are transformed by the FiSL learned via Eq.~\ref{eq:max}.
Moreover, for learning cross-domain meta-knowledge, FiSL is expected to dynamically generate various unseen domains based on different tasks. Hence, similar to MAML, we learn a good initialization for FiSL. From this initialization, an unseen domain can be simulated with a few gradient descent steps via Eq.~\ref{eq:max}. In particular, a good initialization is appropriate for simulating many unseen domains, in the sense that it can be regarded as encoding domain-invariant knowledge. In the meta-test stage, we can transform the data from unseen domains with this initialization to achieve better generalization. The full algorithm is summarized in Algorithm~\ref{alg1}. An overview of our method is shown in Figure~\ref{fig:1}.
\begin{algorithm}\label{alg1}
\caption{Learning-to-learn Adversarial Shift}
\SetAlgoLined
\KwIn{ Sample three sets of training tasks $\mathcal{S}_1 = \{\mathcal{T}_{3i-2}\}_{i = 1}^N, \mathcal{S}_2 = \{\mathcal{T}_{3i-1}\}_{i = 1}^{N}$ and $\mathcal{S}_3 = \{\mathcal{T}_{3i}\}_{i = 1}^{N}$ }
\KwOut {Learned weights $\boldsymbol \theta, \boldsymbol \phi$}
Initialize $\boldsymbol \theta, \boldsymbol \phi$ \\
\While{not converged}{
\For { i = 1, \ldots, N}
{
Train a base-learner $\mathbf{w}^*_{3i-2}$ for $\mathcal{T}_{3i-2}$ with $\mathcal{D}^\text{tr}_{3i-2}$ \\
Use $\mathcal{T}_{3i-2}$ to obtain $\boldsymbol \phi'$ via Eq.~\ref{eq:max} \label{step:5} \\
Generate a pseudo task $\hat{\mathcal{T}}_{3i-1}$ based on $\mathcal{T}_{3i-1}$, $ \hat{\mathcal{D}}^\text{tr}_{3i-1} = \text{FiSL}_{\boldsymbol \phi'} (\mathcal{D}^\text{tr}_{3i-1})$ and $ \hat{\mathcal{D}}^\text{ts}_{3i-1} = \text{FiSL}_{\boldsymbol \phi'} (\mathcal{D}^\text{ts}_{3i-1})$ \\
Train base-learners for pseudo task $\hat{\mathcal{T}}_{3i-1}$ and task $\mathcal{T}_{3i}$, respectively.\\
Update $\boldsymbol \theta, \boldsymbol \phi$ via multi-task loss in Eq.~\ref{eq:overall-loss}
\vspace{-3mm}
\begin{flalign}
& \boldsymbol \theta \leftarrow \boldsymbol \theta -\alpha \nabla_{\boldsymbol \theta} [ \mathcal{L} (\boldsymbol \theta, \mathbf{w}^*_{3i} ; \mathcal{D}^\text{ts}_{3i}) & \nonumber \\
& ~~~~~~+ \mathcal{L} (\boldsymbol \theta, \mathbf{w}^*_{3i-1} ; \hat{\mathcal{D}}^\text{ts}_{3i-1})] & \nonumber \\
& \boldsymbol \phi \leftarrow \boldsymbol \phi -\alpha \nabla_{\boldsymbol \phi} [ \mathcal{L} (\boldsymbol \theta, \mathbf{w}^*_{3i} ; \mathcal{D}^\text{ts}_{3i}) & \nonumber \\
& ~~~~~~+ \mathcal{L} (\boldsymbol \theta, \mathbf{w}^*_{3i-1} ; \hat{\mathcal{D}}^\text{ts}_{3i-1})] & \nonumber
\end{flalign}
\vspace{-7mm}
}
}
\end{algorithm}
\textbf{Note:} When Algorithm~\ref{alg1} is applied to a metric-based meta-learning method, it is not necessary to train a base learner for each task, \ie, steps 4 and 7.
In the meta-test stage, the learned $\boldsymbol \theta^*$ and $\boldsymbol \phi^*$ help the meta-learning method achieve good generalization on a new task $\mathcal{T}_l$ from an unseen domain:
\begin{equation}\label{eq:meta-test}
\mathbf{w}_l = \mathop{\arg\min}_{\mathbf{w}} \mathcal{L} (\mathbf{w}; \boldsymbol \theta^*, \hat{\mathcal{D}}^\text{tr}_l), \quad \hat{\mathcal{D}}^\text{tr}_l = \text{FiSL}_{\boldsymbol \phi^*}(\mathcal{D}^\text{tr}_l).
\vspace{-2mm}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fig_2.pdf}
\caption{Results of cross-domain few-shot regression. Top: ANIL and ANIL-FiSL (ours). Bottom: MAML and MAML-FiSL (ours). The red line is the ground truth. Our method helps different meta-learning models achieve better performance on the unseen domain. }\label{fig:2}
\vspace{-2mm}
\end{figure}
\section{Experiments}
In this section, we evaluate our method on different learning problems, including regression and classification, to verify the adaptability of the proposed method. In each setting, our method is used to improve the generalization of different metric-based and gradient-based meta-learning models on unseen domains, demonstrating that our method is model-agnostic and can indeed help learn robust meta-knowledge.
\subsection{Cross-domain Regression}
\textbf{Experimental Setup.} We start with a regression problem of fitting sine curves following~\cite{DBLP:conf/icml/FinnAL17}. Each task involves regressing from the input to the output of a sine wave.
The amplitude and phase of the training tasks are uniformly sampled from $[0.1, 3.0]$ and $[0, \frac{3}{4}\pi]$, respectively, while the amplitude and phase of the test tasks are uniformly drawn from $[3.0, 5.0]$ and $[\frac{3}{4}\pi, \pi]$, respectively. For training, five labeled datapoints are given for each task as $\mathcal{D}^\text{tr}$, and twenty labeled datapoints are sampled as $\mathcal{D}^\text{ts}$; both are uniformly sampled from $[-5, 5]$. We use a neural network with two hidden layers of 40 nodes each as the feature encoder and mean-squared error (MSE) as the loss $\mathcal{L}$.
Our method is applied to two gradient-based methods: MAML~\cite{DBLP:conf/icml/FinnAL17} and ANIL~\cite{DBLP:conf/iclr/RaghuRBV20}. Because our method is trained with a multi-task mechanism, for fairness the task batch size for training MAML and ANIL is set to 2. All models are trained for $20,000$ iterations by Adam with a learning rate of 0.001. MAML and ANIL use one inner gradient step with a learning rate of 0.01 and 0.1, respectively. During testing, we present the model with $2,000$ newly sampled tasks from the disjoint domain and measure the mean squared error over 100 test points on each task. $\gamma$ and $\eta$ in our method are $0.5$ and $0.01$, respectively.
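As an illustration of this setup, a training or test task can be sampled as follows (a NumPy sketch under our own naming; the support/query split matches the 5/20-point description above):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sine_task(train=True, n_support=5, n_query=20):
    """Sample one sine regression task: amplitude and phase are drawn from
    disjoint ranges for training vs. test tasks, inputs from [-5, 5]."""
    if train:
        amp, phase = rng.uniform(0.1, 3.0), rng.uniform(0.0, 0.75 * np.pi)
    else:
        amp, phase = rng.uniform(3.0, 5.0), rng.uniform(0.75 * np.pi, np.pi)
    xs = rng.uniform(-5.0, 5.0, size=n_support + n_query)
    ys = amp * np.sin(xs + phase)
    return (xs[:n_support], ys[:n_support]), (xs[n_support:], ys[n_support:])

(sup_x, sup_y), (qry_x, qry_y) = sample_sine_task(train=False)
```

The disjoint amplitude/phase ranges are what make the test tasks come from a genuinely unseen domain.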
\textbf{Results.} Table~\ref{tab:reg} shows the results of different models on cross-domain few-shot regression. We can see that our method effectively improves the generalization of different gradient-based models on the unseen domain. The results also verify that our method works well for regression problems. Figure~\ref{fig:2} shows results of these models on new tasks from the unseen domain.
\begin{table}
\begin{center}
\scalebox{0.8}{
\begin{tabular}{c|c|cc}
\toprule
Methods & FiSL& 5-shot & 10-shot \\
\midrule
\multirow{2}{4em}{ANIL~\cite{DBLP:conf/iclr/RaghuRBV20}} & - & 4.256 $\pm$ 0.127 & 3.080 $\pm$ 0.075 \\
& $\surd$ & \textbf{1.889 $\pm$ 0.075} & \textbf{0.961 $\pm$ 0.034} \\
\midrule
\multirow{2}{5em}{MAML~\cite{DBLP:conf/icml/FinnAL17}} &- & 3.558 $\pm$ 0.087 & 2.168 $\pm$ 0.060 \\
& $\surd$ & \textbf{1.712 $\pm$ 0.075} & \textbf{0.935 $\pm$ 0.035} \\
\bottomrule
\end{tabular}}
\end{center}
\caption{Mean squared error (MSE) of cross-domain few-shot regression, lower is better. FiSL indicates that we apply the shift layer with the proposed adversarial mechanism to train the model.}
\label{tab:reg}
\vspace{-2mm}
\end{table}
\subsection{Cross-domain Classification}
\begin{table*}
\begin{center}
\scalebox{0.85}{
\begin{tabular}{c|c|cccc}
\toprule
5-way 1-shot & FiSL & CUB & Car & ISIC & ChestX \\
\midrule
\multirow{2}{4em}{ProNet} & - & 40.03 $\pm$ 0.58 & 30.60 $\pm$ 0.48 & 30.63 $\pm$ 0.47 & 22.20 $\pm$ 0.33 \\
& $\surd$ & \textbf{40.94 $\pm$ 0.58} & \textbf{31.11 $\pm$ 0.48} & 30.52 $\pm$ 0.47 & \textbf{22.43 $\pm$ 0.35}\\
\midrule
\multirow{2}{4em}{RelationNet} & - & 39.30 $\pm$ 0.56 & 28.34 $\pm$ 0.43 & 29.64 $\pm$ 0.46 & 22.12 $\pm$ 0.33\\
& $\surd$ & \textbf{40.29 $\pm$ 0.57} & \textbf{29.00 $\pm$ 0.46} & 28.75 $\pm$ 0.44 & 21.92 $\pm$ 0.32\\
\midrule
\multirow{2}{4em}{ANIL} & - & 32.11 $\pm$ 0.55 & 26.58 $\pm$ 0.41 & 24.45 $\pm$ 0.34 & 21.19 $\pm$ 0.25\\
& $\surd$ & \textbf{39.49 $\pm$ 0.59} & \textbf{30.21 $\pm$ 0.50} & \textbf{31.09 $\pm$ 0.47} & \textbf{21.88 $\pm$ 0.32} \\
\midrule
\multirow{2}{4em}{MetaOpt} & - &42.33 $\pm$ 0.60 & 32.04 $\pm$ 0.47 & 29.61 $\pm$ 0.45 & 22.10 $\pm$ 0.31\\
& $\surd$ & \textbf{45.53 $\pm$ 0.61} & \textbf{34.67 $\pm$ 0.51} & \textbf{33.82 $\pm$ 0.51} & \textbf{22.91 $\pm$ 0.33} \\
\midrule
\multirow{2}{4em}{R2D2} & - & 42.81 $\pm$ 0.62 & 33.15 $\pm$ 0.49 & 30.04 $\pm$ 0.46 & 22.35 $\pm$ 0.32 \\
& $\surd$ & \textbf{44.14 $\pm$ 0.60} & \textbf{34.18 $\pm$ 0.51} & \textbf{32.47 $\pm$ 0.49} & \textbf{23.02 $\pm$ 0.34}\\
\midrule
\midrule
5-way 5-shot & FiSL & CUB & Car & ISIC & ChestX \\
\midrule
\multirow{2}{4em}{ProNet} & - & 57.26$\pm$ 0.57 & 42.83 $\pm$ 0.53 & 39.89 $\pm$ 0.43 & 25.03 $\pm$ 0.34 \\
& $\surd$ & \textbf{57.84 $\pm$ 0.56} & \textbf{43.11 $\pm$ 0.54} & \textbf{41.47 $\pm$ 0.43} & \textbf{25.40 $\pm$0.35} \\
\midrule
\multirow{2}{4em}{RelationNet} & - & 55.71 $\pm$ 0.56 & 37.86 $\pm$ 0.51 & 38.10 $\pm$ 0.43 & 24.11 $\pm$ 0.32 \\
& $\surd$ & \textbf{56.11 $\pm$ 0.53} & \textbf{39.66 $\pm$ 0.55} & \textbf{38.39 $\pm$ 0.45} & 23.73 $\pm$ 0.32\\
\midrule
\multirow{2}{4em}{ANIL} & - & 37.24 $\pm$ 0.57 & 28.79 $\pm$ 0.40 & 27.90 $\pm$ 0.38 & 20.93 $\pm$ 0.17\\
& $\surd$ & \textbf{57.56 $\pm$ 0.58} & \textbf{44.34 $\pm$ 0.57} & \textbf{41.82 $\pm$ 0.45} & \textbf{25.00 $\pm$ 0.34}\\
\midrule
\multirow{2}{4em}{MetaOpt} & - & 61.66 $\pm$ 0.60 & 50.55 $\pm$ 0.56 & 44.10 $\pm$ 0.47 & 26.36 $\pm$ 0.35\\
& $\surd$ & \textbf{63.24 $\pm$ 0.57} & \textbf{50.82 $\pm$ 0.58} & \textbf{46.39 $\pm$ 0.46} & \textbf{26.46 $\pm$ 0.34} \\
\midrule
\multirow{2}{4em}{R2D2} & - & 62.31 $\pm$ 0.59 & 49.49 $\pm$ 0.57 & 42.81 $\pm$ 0.44 & 25.92 $\pm$ 0.34\\
& $\surd$ & \textbf{64.62 $\pm$ 0.58} & \textbf{51.62 $\pm$ 0.60} & \textbf{47.62 $\pm$ 0.47} & \textbf{26.48 $\pm$ 0.36}\\
\bottomrule
\end{tabular}}
\end{center}
\caption{Few-shot classification results on unseen domains. We train the model on the mini-ImageNet domain and evaluate the trained model on other domains. FiSL is our method.}
\label{tab:results_metamodels}
\vspace{-2mm}
\end{table*}
In cross-domain few-shot classification, we validate the efficacy of the proposed method with two categories of meta-learning frameworks, \ie, metric-based and gradient-based frameworks. For metric-based methods, we choose ProNet~\cite{DBLP:conf/nips/SnellSZ17} and RelationNet~\cite{DBLP:conf/cvpr/SungYZXTH18}. For gradient-based methods, ANIL, R2D2~\cite{DBLP:conf/iclr/BertinettoHTV19}, and MetaOptNet~\cite{DBLP:conf/cvpr/LeeMRS19} are chosen. In order to evaluate the performance on unseen domains, we
train the few-shot classification model on the mini-ImageNet~\cite{DBLP:conf/nips/VinyalsBLKW16} domain and evaluate the trained model on four different domains: CUB~\cite{wah2011caltech}, Cars~\cite{krause20133d}, ISIC~\cite{tschandl2018ham10000}, and ChestX~\cite{wang2017chestx}. CUB and Car are two well-established benchmarks for cross-domain few-shot classification.
Evaluating on these two benchmarks provides a fair comparison to previous methods. However, the images in these two datasets are natural images that still retain a high degree of visual similarity to the source domain. Moreover, according to~\cite{DBLP:conf/eccv/GuoCKCSSRF20}, some previous state-of-the-art methods, \eg~\cite{DBLP:conf/iclr/TsengLH020}, are not robust to large domain shift. Therefore, following~\cite{DBLP:conf/eccv/GuoCKCSSRF20}, we adopt ISIC and ChestX as the other two benchmarks. Sample images from these datasets are shown in Figure~\ref{fig:3}.
\textbf{Datasets.} We conduct experiments using five datasets: mini-ImageNet,
CUB, Cars, ISIC, and ChestX. We follow the same dataset processing as in~\cite{DBLP:conf/eccv/GuoCKCSSRF20,DBLP:conf/iclr/TsengLH020}. Compared with the natural images in CUB and Car, ISIC and ChestX contain dermoscopic images of skin lesions and X-ray images, respectively, which are largely different from mini-ImageNet. Similar to~\cite{DBLP:conf/iclr/TsengLH020}, we select the training iteration with the best accuracy on the validation set of mini-ImageNet for evaluation.
\begin{figure}
\centering
\includegraphics[width=7cm]{fig_3.pdf}
\caption{Images from different benchmarks. mini-ImageNet is used as the source domain, and domains of varying dissimilarity from natural images are used for target evaluation.}\label{fig:3}
\vspace{-3mm}
\end{figure}
\begin{table*}
\begin{center}
\scalebox{0.8}{
\begin{tabular}{c|cc|cccc}
\toprule
Method& Pre-trained& FiSL & CUB & Car & ISIC & ChestX \\
\midrule
\multirow{3}{4em}{ProNet} & & & 51.82 $\pm$ 0.58 & 42.12 $\pm$ 0.56 & 39.41 $\pm$ 0.43 & 25.11 $\pm$ 0.35 \\
& $\surd$ & & 57.26$\pm$ 0.57 & 42.83 $\pm$ 0.53 & 39.89 $\pm$ 0.43 & 25.03 $\pm$ 0.34 \\
& $\surd$ & $\surd$ & \textbf{57.84 $\pm$ 0.56} & \textbf{43.11 $\pm$ 0.54} & \textbf{41.47 $\pm$ 0.43} & \textbf{25.40 $\pm$0.35} \\
\midrule
\multirow{3}{4em}{R2D2}& & & 61.19 $\pm$ 0.60 & 43.91 $\pm$ 0.59 & 40.57 $\pm$ 0.43 & 25.10 $\pm$ 0.32 \\
& $\surd$ & & 62.31 $\pm$ 0.59 & 49.49 $\pm$ 0.57 & 42.81 $\pm$ 0.44 & 25.92 $\pm$ 0.34 \\
& $\surd$ & $\surd$ & \textbf{64.62 $\pm$ 0.58} & \textbf{51.62 $\pm$ 0.60} & \textbf{47.62 $\pm$ 0.47} & \textbf{26.48 $\pm$ 0.36} \\
\bottomrule
\end{tabular}}
\end{center}
\caption{$5$-way $5$-shot classification accuracy on different unseen domains. This result shows the influence of pre-training on generalization of different meta-learning methods. }
\label{tab:pretrain}
\vspace{-2mm}
\end{table*}
\textbf{Implementation details.}
All experiments are performed with ResNet-10~\cite{DBLP:conf/cvpr/HeZRS16} for a fair comparison. Same as~\cite{DBLP:conf/eccv/GuoCKCSSRF20,DBLP:conf/iclr/TsengLH020}, we first pre-train ResNet-10 by minimizing the standard cross-entropy classification loss on the $64$ training categories of the mini-ImageNet dataset. All models are then trained for $30,000$ iterations by Adam~\cite{DBLP:journals/corr/KingmaB14} with a learning rate of $0.001$. Because our method uses multi-task training, the batch size is set to $2$ for training ProNet, RelationNet, ANIL, R2D2, and MetaOpt. The inner learning rate and number of update steps are $0.1$ and $5$ for ANIL. $\gamma$ and $\eta$ in our method are $0.5$ and $0.1$, respectively.
We present the average results over $1,000$ trials for all the experiments, and report the average accuracy and $95\%$ confidence interval. In each trial, the query set $\mathcal{D}^\text{ts}$ contains $15$ images.
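For reference, the reported interval is the usual normal-approximation confidence interval over trial accuracies; the exact formula below is our assumption about how such numbers are typically computed in few-shot papers:

```python
import numpy as np

def mean_and_ci95(accs):
    """Average accuracy and 95% confidence half-width over evaluation
    trials, using the normal approximation 1.96 * s / sqrt(n)."""
    accs = np.asarray(accs, dtype=float)
    half_width = 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))
    return accs.mean(), half_width

mean, ci = mean_and_ci95([0.4, 0.6])   # mean 0.5, half-width 0.196
```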
\begin{table}
\begin{center}
\scalebox{0.8}{
\begin{tabular}{c|c|cc}
\toprule
Method& Shot & CUB & Car \\
\midrule
$\text{GNN}^\dag$~\cite{DBLP:conf/iclr/SatorrasE18} & 1 & 45.69 $\pm$ 0.68 & 31.79 $\pm$ 0.51\\
GNN-FWT~\cite{DBLP:conf/iclr/TsengLH020} & 1 & \textbf{47.47 $\pm$ 0.75} & 31.61 $\pm$ 0.53 \\
LRP-CAN~\cite{sun2020explain} & 1 & 46.23 $\pm$ 0.42 & 32.66 $\pm$ 0.46 \\
\midrule
\textbf{R2D2-FiSL}& 1 & 44.14 $\pm$ 0.60 & 34.18 $\pm$ 0.51 \\
\textbf{MetaOpt-FiSL} & 1 & 45.53 $\pm$ 0.60 & \textbf{34.67 $\pm$ 0.51} \\
\midrule
\midrule
$\text{GNN}^\dag$~\cite{DBLP:conf/iclr/SatorrasE18} & 5 & 62.25 $\pm$ 0.65 & 44.28 $\pm$ 0.63\\
GNN-FWT~\cite{DBLP:conf/iclr/TsengLH020} & 5 & \textbf{66.98 $\pm$ 0.68} & 44.90 $\pm$ 0.64 \\
LRP-CAN~\cite{sun2020explain} & 5 & 66.58 $\pm$ 0.39 & 43.86 $\pm$ 0.38 \\
\midrule
\textbf{MetaOpt-FiSL} & 5 & 63.24 $\pm$ 0.57 & 50.82 $\pm$ 0.58 \\
\textbf{R2D2-FiSL} & 5 & 64.62 $\pm$ 0.58 & \textbf{51.62 $\pm$ 0.60} \\
\bottomrule
\end{tabular}}
\end{center}
\caption{$5$-way $K$-shot classification accuracy on CUB and Car. $^\dag$ Results reported in~\cite{DBLP:conf/eccv/GuoCKCSSRF20}.}
\label{tab:results_SOTA}
\vspace{-2.5mm}
\end{table}
\textbf{Generalization with FiSL.} Table~\ref{tab:results_metamodels} shows the results of five meta-learning models with and without FiSL on $5$-way $1$-shot and $5$-way $5$-shot cross-domain few-shot classification. Both the metric-based and gradient-based models trained with our method perform favorably against their individual baselines. This observation demonstrates that our method is model-agnostic and robust to unseen domains of varying dissimilarity from the source domain. We attribute the improvement in generalization to the use of FiSL, which forces the meta-learner to truly learn domain-invariant meta-knowledge.
We also observe that gradient-based methods achieve better generalization than metric-based methods. This might be because gradient-based methods learn a base learner for each new task with the provided labeled data, whereas the metric space learned in the source domain by metric-based methods is less flexible than adapting a learner on unseen domains.
\begin{table}
\begin{center}
\scalebox{0.8}{
\begin{tabular}{c|c|cc}
\toprule
Method& Shot & ISIC & ChestX \\
\midrule
$\text{ProNet}^\dag$ & 5 & 39.57 $\pm$ 0.57 & 24.05 $\pm$ 1.01\\
$\text{ProNet-FWT}^\dag$~\cite{DBLP:conf/iclr/TsengLH020} & 5 & 38.87 $\pm$ 0.52 & 23.77 $\pm$ 0.42 \\
$\text{RN}^\dag$ & 5 & 39.41 $\pm$ 0.58 & 22.96 $\pm$ 0.88 \\
$\text{RN-FWT}^\dag$~\cite{DBLP:conf/iclr/TsengLH020} & 5 & 35.54 $\pm$ 0.55 & 22.74 $\pm$ 0.40 \\
$\text{MAML}^\dag$ & 5 & 40.13 $\pm$ 0.58 & 23.48 $\pm$ 0.96\\
CHEF~\cite{adler2020cross} & 5 & 41.26 $\pm$ 0.34 & 24.72 $\pm$ 0.14\\
$\text{Fixed}^\dag$~\cite{DBLP:conf/eccv/GuoCKCSSRF20} & 5& 43.56 $\pm$ 0.60 & 25.35 $\pm$ 0.96\\
\midrule
\textbf{MetaOpt-FiSL} & 5 & 46.39 $\pm$ 0.46 & 26.46 $\pm$ 0.34 \\
\textbf{R2D2-FiSL}& 5 & \textbf{47.62 $\pm$ 0.47} & \textbf{26.48 $\pm$ 0.36} \\
\midrule
\midrule
$\text{ProNet-FWT}^\dag$~\cite{DBLP:conf/iclr/TsengLH020} & 20 & 43.78 $\pm$ 0.47 & 26.87 $\pm$ 0.43\\
$\text{RN-FWT}^\dag$~\cite{DBLP:conf/iclr/TsengLH020} & 20 & 43.31 $\pm$ 0.51 & 26.75 $\pm$ 0.41 \\
CHEF~\cite{adler2020cross} & 20 & 54.30 $\pm$ 0.34 & 29.71 $\pm$ 0.27 \\
$\text{Fixed}^\dag$~\cite{DBLP:conf/eccv/GuoCKCSSRF20} & 20 & 52.78 $\pm$ 0.39 & 30.83 $\pm$ 1.05 \\
\midrule
\textbf{MetaOpt-FiSL} & 20 & 55.34 $\pm$ 0.44 & 30.59 $\pm$ 0.35 \\
\textbf{R2D2-FiSL} & 20 & \textbf{58.74 $\pm$ 0.46} & \textbf{31.51 $\pm$ 0.36} \\
\bottomrule
\end{tabular}}
\end{center}
\caption{$5$-way $K$-shot classification accuracy on ISIC and ChestX. RN indicates RelationNet. Fixed (Fixed feature extractor) in~\cite{DBLP:conf/eccv/GuoCKCSSRF20} is a strong baseline that leverages the pre-trained model as a fixed
feature extractor and a linear model as the classifier. Many meta-learning models cannot outperform it when a large domain shift exists. $^\dag$ Results reported in~\cite{DBLP:conf/eccv/GuoCKCSSRF20}.}
\label{tab:results_SOTA_2}
\vspace{-3mm}
\end{table}
\textbf{Comparison to previous state of the art.} Table~\ref{tab:results_SOTA} and
Table~\ref{tab:results_SOTA_2} show the results. All models are trained on mini-ImageNet and evaluated on the other four benchmarks. First, we observe that GNN-FWT achieves the best performance on CUB but degrades on Car. Meanwhile, our method achieves competitive performance on CUB and outperforms GNN-FWT by $3.06\%$ and $6.72\%$ on 1-shot and 5-shot Car, respectively. We attribute this to the fact that CUB is a fine-grained bird dataset, which has the highest similarity to mini-ImageNet in semantic content and color style among the four datasets, as shown in Figure~\ref{fig:3}. Some recent few-shot learning methods~\cite{DBLP:conf/eccv/AfrasiyabiLG20} can even handle such small domain shifts. From this point of view, learning well enough on mini-ImageNet can already provide satisfying performance on CUB.
On ISIC and ChestX, our method achieves state-of-the-art performance and largely outperforms previous methods. We also observe that FWT cannot improve the performance of ProNet and RelationNet; similar results for our method appear in Table~\ref{tab:results_metamodels}, although the decline of our method is smaller than that of FWT. In this regard, improving the generalization of metric-based meta-learning on domains with a large shift remains challenging.
Moreover, some recent works~\cite{tian2020rethinking,chen2019closer} point out that meta-learning-based few-shot learning algorithms underperform traditional
pre-training models when there is a large shift between the base and novel class domains. Comparing the performance of MAML and ProNet with Fixed, we find the same pattern in Table~\ref{tab:results_SOTA_2}. However, with our method, some meta-learning methods can largely outperform Fixed.
\subsection{Influence of Pre-training on the Generalization}
According to several recent methods~\cite{DBLP:conf/cvpr/YeHZS20,DBLP:conf/iclr/RusuRSVPOH19}, pre-training can significantly improve the performance of meta-learning frameworks on the typical few-shot learning scenario. In this section, we investigate the influence of pre-training on the generalization on unseen domains.
As shown in Table~\ref{tab:pretrain}, pre-training the feature encoder substantially improves the performance of ProNet and R2D2 on the four unseen benchmarks. However, the effect of pre-training on ProNet is less obvious when there is a large domain shift to the target domains.
\section{Conclusion}
We propose a model-agnostic method to effectively enhance the generalization of different kinds of meta-learning frameworks under
domain shift, which can be applied to many learning problems. The core idea of our method lies in using the feature-wise shift layer to
simulate various distributions present in unseen domains. To learn how to simulate these distributions and to learn domain-invariant knowledge,
we develop a learning-to-learn approach for jointly optimizing the proposed feature-wise shift
layer and the meta-learning model. Extensive
experiments demonstrate that our method is applicable to different meta-learning frameworks, shows consistent improvement over the baselines,
and is robust to different unseen domains.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
In this paper, we consider a family of non-convex non-smooth optimization problems that can be written in the following form:
\begin{align}\label{eqn:P1}
\min_{\mathbf{x}\in\mathbb{R}^d} g(\mathbf{x}) + r(\mathbf{x}) - h(\mathbf{x}),
\end{align}
where $g(\cdot)$ and $h(\cdot)$ are real-valued lower-semicontinuous convex functions, and $r(\cdot)$ is a proper lower-semicontinuous function. We include the component $r$ in order to capture non-differentiable functions that usually play the role of regularization, e.g., the indicator function of a convex set $\mathcal{X}$, where $r(\mathbf{x}) = \delta_{\mathcal{X}}(\mathbf{x})$ is zero if $\mathbf{x}\in\mathcal{X}$ and infinity otherwise, or a non-differentiable regularizer such as the convex $\ell_1$ norm $\|\mathbf{x}\|_1$, the non-convex $\ell_0$ norm, or the $\ell_p$ norm $\|\mathbf{x}\|^p_p$ with $p\in(0,1)$. We do not necessarily impose a smoothness condition on $g(\mathbf{x})$ or $h(\mathbf{x})$, nor a convexity condition on $r(\mathbf{x})$.
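For intuition on the role of $r$, the proximal operators of two of the regularizers mentioned above have simple closed forms: soft-thresholding for the convex $\ell_1$ norm and hard-thresholding for the non-convex $\ell_0$ norm. A short NumPy sketch of these standard formulas (not specific to this paper's algorithms):

```python
import numpy as np

def prox_l1(v, lam):
    """prox of lam * ||x||_1: soft-thresholding (convex r)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l0(v, lam):
    """prox of lam * ||x||_0: hard-thresholding, keep v_i iff v_i**2 > 2*lam
    (non-convex r, yet the prox is still cheap to evaluate)."""
    return np.where(v ** 2 > 2.0 * lam, v, 0.0)

v = np.array([-2.0, -0.5, 0.0, 0.3, 1.5])
soft = prox_l1(v, 1.0)   # [-1., 0., 0., 0., 0.5]
hard = prox_l0(v, 0.5)   # keeps entries with |v_i| > 1: [-2., 0., 0., 0., 1.5]
```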
A special class of the problem~(\ref{eqn:P1}) is the one with $r(\mathbf{x})$ being a convex function, also known as difference of convex (DC) functions. We would like to mention that even the family of DC functions is broad enough to cover many interesting non-convex problems that are well studied, including an additive composition of a smooth non-convex function and a non-smooth convex function, weakly convex functions, etc. We postpone this discussion to Section~\ref{sec:pre}, after we formally introduce the definitions of smooth functions and weakly convex functions.
In the literature, deterministic algorithms for DC problems have been studied extensively since their introduction by Pham Dinh Tao in 1985 and are continuously receiving attention from the community~\citep{pmlr-v80-khamaru18a,Wen2018}. Please refer to~\citep{LeThi2018} for a survey on this subject. Although stochastic optimization (SO) algorithms for the special cases of DC functions mentioned above (smooth non-convex functions, weakly convex functions) have been well studied recently~\citep{davis2017proximally,sgdweakly18,modelweakly18,Drusvyatskiy2018,chen18stagewise,DBLP:journals/corr/abs/1805.05411,DBLP:conf/icml/Allen-Zhu17,chen18stagewisekatyusha,DBLP:conf/icml/ZhuH16,DBLP:conf/cdc/ReddiSPS16,Reddi:2016:SVR:3045390.3045425,zhang2018convergence}, a comprehensive study of SO algorithms with broader applicability to DC functions and to the problem~(\ref{eqn:P1}) with a non-smooth non-convex regularizer $r(\mathbf{x})$ still remains rare. The papers by~\cite{pmlr-v54-nitanda17a} and~\cite{pmlr-v70-thi17a} are the most related works dedicated to the stochastic optimization of special DC functions. \cite{pmlr-v70-thi17a} considered a special class of DC problems whose objective function consists of a large sum of non-convex smooth functions and a regularization term that can be written as a DC function. They reformulated the problem into~(\ref{eqn:P1}) such that $h$ is a sum of $n$ convex functions, $g$ is a quadratic function, and $r$ is the first component of the DC decomposition of the regularizer. Regarding the algorithm and convergence, they proposed a stochastic variant of the classical DCA (Difference-of-Convex Algorithm) and established an asymptotic convergence result for finding a critical point.
To our knowledge, the paper by \cite{pmlr-v54-nitanda17a} is probably the first to give a non-asymptotic convergence result for finding an approximate critical point of a special class of DC problems, in which both $g$ and $h$ can be stochastic functions and $r=0$.
Their algorithm consists of multiple stages of solving a convex objective that is constructed by linearizing $h(\mathbf{x})$ and adding a quadratic regularization.
However, their algorithm and convergence theory have the following drawbacks. First, at each stage, they need to compute an unbiased stochastic gradient, denoted by $\mathbf{v}(\mathbf{x})$, of $\nabla h(\mathbf{x})$ such that $\mathrm{E}[\|\mathbf{v}(\mathbf{x}) - \nabla h(\mathbf{x})\|^2]\leq \epsilon^2$, where $\epsilon$ is the accuracy level imposed on the returned solution in terms of the gradient's norm. In practice, one has to resort to the mini-batching technique with a large number of samples to ensure this condition, which is impractical and not user-friendly: a user has to worry about the size of the mini-batch needed to find a sufficiently accurate solution while keeping the computational costs minimal. Second, for each constructed convex subproblem, their theory requires running a stochastic algorithm that solves the subproblem to the accuracy level of $\epsilon$, which could waste a lot of computation at earlier stages. Third, their convergence analysis requires that $r(\mathbf{x})=0$ and that $g(\mathbf{x})$ is a smooth function with a Lipschitz continuous gradient.
\paragraph {Our Contributions - I.} In Section~\ref{sec:DC}, we propose new stochastic optimization algorithms and establish their convergence results for solving the DC class of the problem~(\ref{eqn:P1}), improving the algorithm and theory of~\cite{pmlr-v54-nitanda17a} from several perspectives. It is our intention to address the aforementioned drawbacks of their algorithm and theory. In particular, (i) our algorithm only requires unbiased stochastic (sub)-gradients of $g(\mathbf{x})$ and $h(\mathbf{x})$, without any requirement on the small variance of the used stochastic (sub)-gradients; (ii) we do not need to solve each constructed subproblem to the accuracy level of $\epsilon$; instead, we allow the accuracy for solving each constructed subproblem to grow slowly without sacrificing the overall convergence rate; (iii) we improve the convergence theory significantly. First, our convergence analysis does not require $g(\mathbf{x})$ to be smooth with a Lipschitz continuous gradient. Instead, we only require either $g(\mathbf{x})+r(\mathbf{x})$ or $h(\mathbf{x})$ to be differentiable with a H\"{o}lder continuous gradient; under the former condition $h(\mathbf{x})$ can be a non-smooth non-differentiable function, and under the latter condition $r(\mathbf{x})$ and $g(\mathbf{x})$ can be non-smooth non-differentiable functions. Second, the convergence rate automatically adapts to the H\"{o}lder continuity of the involved function, without requiring knowledge of the H\"{o}lder continuity parameters to run the algorithm. Third, when an adaptive stochastic gradient method is employed to solve each subproblem, we establish an adaptive convergence result similar to the existing theory of AdaGrad for convex problems~\citep{duchi2011adaptive,SadaGrad18} and weakly convex problems~\citep{chen18stagewise}.
\paragraph{Our Contributions - II. } Moreover, in Section~\ref{sec:nsncr} we extend our algorithm and theory to the more general class of non-convex non-smooth problems~(\ref{eqn:P1}), in which $r(\mathbf{x})$ is a general non-convex non-differentiable regularizer that enjoys an efficient proximal mapping. Although this kind of non-smooth non-convex regularization has been considered in the literature~\citep{Attouch2013,Bolte:2014:PAL:2650160.2650169,Bot2016,Li:2015:APG:2969239.2969282,YuZMX15,leiyangpg18,Liu2018,doi:10.1080/02331934.2016.1253694,DBLP:conf/aaai/ZhongK14}, existing results are restricted to deterministic optimization and asymptotic or local convergence analysis. In addition, most of them consider a special case of our problem with $g - h$ being a smooth non-convex function. To the best of our knowledge, this is the first work on stochastic optimization with a non-asymptotic first-order convergence result for tackling the non-convex objective~(\ref{eqn:P1}) with a non-convex non-differentiable regularizer, a smooth function $g$, and a possibly non-smooth function $h$ with a H\"{o}lder continuous gradient. Our algorithm and theory are based on the Moreau envelope of $r(\mathbf{x})$, which can be written as a DC function; this reduces the problem to the one studied in Section~\ref{sec:DC}. By using the algorithms and convergence results established in Section~\ref{sec:DC} and carefully controlling the approximation parameter, we establish the first non-asymptotic convergence of stochastic optimization for solving the original non-convex problem with a non-convex non-differentiable regularizer. This non-asymptotic convergence result can also be easily extended to deterministic optimization, which is itself novel and could be interesting to a broader community. A summary of our results is presented in Table~\ref{tab:2}.
\begin{table*}[t]
\caption{Summary of results presented in this paper for finding a (nearly) $\epsilon$-critical point of the problem~(\ref{eqn:P1}), where $g$ and $h$ are assumed to be convex.
HC refers to the H\"{o}lder continuous gradient condition; SM refers to the smoothness condition; CX means convex; NC means non-convex and NS means non-smooth; LP denotes Lipschitz continuous function; LB means lower bounded over $\mathbb{R}^d$; FV means finite-valued over $\mathbb{R}^d$; FVC means finite-valued over a compact set. $\nu\in(0,1]$ denotes the power constant of the involved function's H\"{o}lder continuity. $n$ denotes the total number of components in a finite-sum problem. SPG denotes the stochastic proximal gradient algorithm. SVRG denotes the stochastic variance reduced gradient algorithm. AdaGrad denotes the adaptive stochastic gradient method. AG denotes accelerated gradient methods. Complexity for SPG and AdaGrad means iteration complexity, and for SVRG and AG means gradient complexity.
}
\centering
\label{tab:2}
{\begin{tabular}{l|l|l|ll}
\toprule
$g$ & $h$ & $r$ &Algorithms for subproblems&Complexity\\
\midrule
-& HC &CX&SPG, AdaGrad&$O(1/\epsilon^{4/\nu})$\\
SM& HC &CX&SVRG&$\widetilde O(n/\epsilon^{2/\nu})$\\
HC& - &CX, HC&SPG, AdaGrad&$O(1/\epsilon^{4/\nu})$\\
SM& - &CX, HC&SVRG&$\widetilde O(n/\epsilon^{2/\nu})$\\
\midrule
SM&HC&NC, NS, LP&SPG&$O(1/\epsilon^{5(1+1/\nu)/2})$\\
SM&HC&NC, NS, FV, LB&SPG&$O(1/\epsilon^{2(1+2/\nu)})$\\
SM&HC&NC, NS, LP& SVRG, AG&$\widetilde O(n/\epsilon^{3(1+1/\nu)/2})$\\
SM&HC&NC, NS, FV, LB&SVRG, AG&$\widetilde O(n/\epsilon^{4(1+2/\nu)/3})$\\
SM&HC&NC, NS, FVC &SVRG, AG&$\widetilde O(n/\epsilon^{4(1+2/\nu)/3})$\\
\bottomrule
\end{tabular}}
\end{table*}
\begin{table*}[t]
\caption{Summary of improved complexities when $\nu$ is known
}
\centering
\label{tab:new}
{\begin{tabular}{l|l|l|ll}
\toprule
$g$ & $h$ & $r$ &Algorithms for subproblems&Complexity\\
\midrule
-& HC &CX&SPG, AdaGrad&$O(1/\epsilon^{(1+3\nu)/\nu})$\\
SM& HC &CX&SVRG&$\widetilde O(n/\epsilon^{(1+\nu)/\nu})$\\
HC& - &CX, HC&SPG, AdaGrad&$O(1/\epsilon^{(1+3\nu)/\nu})$\\
SM& - &CX, HC&SVRG&$\widetilde O(n/\epsilon^{(1+\nu)/\nu})$\\
\midrule
SM&HC&NC, NS, LP&SPG&$O(1/\epsilon^{4+1/\nu})$\\
SM&HC&NC, NS, FV, LB&SPG&$O(1/\epsilon^{4+2/\nu})$\\
SM&HC&NC, NS, LP& SVRG, AG&$\widetilde O(n/\epsilon^{2+1/\nu})$\\
SM&HC&NC, NS, FV, LB&SVRG, AG&$\widetilde O(n/\epsilon^{2+2/\nu})$\\
SM&HC&NC, NS, FVC &SVRG, AG&$\widetilde O(n/\epsilon^{2+2/\nu})$\\
\bottomrule
\end{tabular}}
\end{table*}
\section{Preliminaries}\label{sec:pre}
In this section, we present some preliminaries. Let $\|\cdot\|_p$ denote the standard $p$-norm with $p\geq 0$.
For a non-convex function $f(\mathbf{x}): \mathbb{R}^d\rightarrow \mathbb{R}$, let $\hat\partial f(\mathbf{x})$ denote the Fr\'{e}chet subgradient and $\partial f(\mathbf{x})$ denote the limiting subgradient, i.e.,
\begin{align*}
\hat\partial f(\bar\mathbf{x}) & = \left\{\mathbf{v}\in\mathbb{R}^d: \liminf_{\mathbf{x}\rightarrow\bar \mathbf{x}} \frac{f(\mathbf{x}) - f(\bar \mathbf{x}) - \mathbf{v}^{\top}(\mathbf{x} - \bar \mathbf{x})}{\|\mathbf{x} - \bar\mathbf{x}\|}\geq 0\right\},\\
\partial f(\bar\mathbf{x}) & = \{\mathbf{v}\in\mathbb{R}^d: \exists \mathbf{x}_k \xrightarrow[]{f} \bar\mathbf{x}, \mathbf{v}_k\in\hat\partial f(\mathbf{x}_k), \mathbf{v}_k\rightarrow \mathbf{v}\},
\end{align*}
where the notation $\mathbf{x}\xrightarrow[]{f} \bar \mathbf{x}$ means that $\mathbf{x}\rightarrow \bar\mathbf{x}$ and $f(\mathbf{x})\rightarrow f(\bar\mathbf{x})$. It is known that $\hat\partial f(\mathbf{x})\subset\partial f(\mathbf{x})$. If $f(\cdot)$ is differentiable at $\mathbf{x}$, then $\hat\partial f(\mathbf{x}) = \{\nabla f(\mathbf{x})\}$. Moreover, if $f(\mathbf{x})$ is continuously differentiable on a neighborhood of $\mathbf{x}$, then $\partial f(\mathbf{x}) = \{\nabla f(\mathbf{x})\}$. When $f$ is convex, the Fr\'{e}chet and the limiting subgradient reduce to the subgradient in the sense of convex analysis: $\partial f(\mathbf{x}) = \{\mathbf{v}\in\mathbb{R}^d: f(\mathbf{y})\geq f(\mathbf{x}) + \mathbf{v}^{\top}(\mathbf{y} - \mathbf{x}), \forall\mathbf{y}\in\mathbb{R}^d\}$. For simplicity, we use $\|\cdot\|$ to denote the Euclidean norm (aka the $2$-norm) of a vector. Let $\text{dist}(\mathcal{S}_1, \mathcal{S}_2)$ denote the distance between two sets.
A function $f(\mathbf{x})$ is smooth with an $L$-Lipschitz continuous gradient if it is differentiable and the following inequality holds:
\begin{align*}
\|\nabla f(\mathbf{x}) - \nabla f(\mathbf{y})\|\leq L\|\mathbf{x} - \mathbf{y}\| ,\forall\mathbf{x}, \mathbf{y}.
\end{align*}
A differentiable function $f(\mathbf{x})$ has $(L, \nu)$-H\"{o}lder continuous gradient if there exists $\nu\in(0,1]$ such that
\begin{align*}
\|\nabla f(\mathbf{x}) - \nabla f(\mathbf{y})\|\leq L\|\mathbf{x} - \mathbf{y}\|^\nu, \forall \mathbf{x}, \mathbf{y}.
\end{align*}
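As a quick illustration of this definition (a toy numerical check, not part of the paper's analysis), the scalar function $f(x) = |x|^{1+\nu}/(1+\nu)$ has gradient $f'(x) = \mathrm{sign}(x)|x|^{\nu}$, which is $(L,\nu)$-H\"{o}lder continuous with $L = 2^{1-\nu}$; the values of $\nu$ and the test points below are assumptions of the sketch:

```python
import numpy as np

# Toy check: f(x) = |x|^(1+nu)/(1+nu) has gradient f'(x) = sign(x)*|x|^nu,
# which is (L, nu)-Holder continuous with L = 2^(1-nu)
# (via |a^nu - b^nu| <= |a-b|^nu for same signs and the power-mean
# inequality for opposite signs).
nu = 0.5
L = 2.0 ** (1.0 - nu)

def grad_f(x):
    return np.sign(x) * np.abs(x) ** nu

rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, 1000)
y = rng.uniform(-5.0, 5.0, 1000)
assert np.all(np.abs(grad_f(x) - grad_f(y)) <= L * np.abs(x - y) ** nu + 1e-12)
```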
Next, let us characterize the critical points of the considered problem~(\ref{eqn:P1}) that are standard in the literature~\citep{10.1007/978-3-642-45610-7_3,Horst1999,LeThi2018,doi:10.1080/02331934.2016.1253694}, and introduce the convergence measure for an iterative optimization algorithm. First, let us consider the DC problem:
\begin{align}\label{eqn:GP}
\min_{\mathbf{x}\in\mathbb{R}^d} f(\mathbf{x}):=g(\mathbf{x}) - h(\mathbf{x})
\end{align}
where $g: \mathbb{R}^d\rightarrow \mathbb{R}\cup\{\infty\}$ is a proper lower semicontinuous convex function and $h:\mathbb{R}^d\rightarrow \mathbb{R}$ is convex. If $\bar\mathbf{x}$ is a local minimizer of $f(\mathbf{x})$, then $\partial h(\bar\mathbf{x})\subset\partial g(\bar\mathbf{x})$. Any point $\bar\mathbf{x}$ that satisfies the condition $\partial h(\bar\mathbf{x})\subset\partial g(\bar\mathbf{x})$ is called a stationary point of~(\ref{eqn:GP}), and any point $\bar\mathbf{x}$ such that $\partial h(\bar\mathbf{x})\cap\partial g(\bar\mathbf{x})\neq\emptyset$ is called a critical point of~(\ref{eqn:GP}). If $h(\mathbf{x})$ is further differentiable, the stationary points and the critical points coincide. For an iterative optimization algorithm, it is hard to find an exact critical point in a finite number of iterations. Therefore, one is usually concerned with finding an $\epsilon$-critical point $\mathbf{x}$ that satisfies
\begin{align}
\text{dist}(\partial h(\mathbf{x}), \partial g(\mathbf{x}))\leq \epsilon.
\end{align}
Similarly, we can extend the above definitions of stationary and critical points to the general problem~(\ref{eqn:P1}) with $r(\mathbf{x})$ being a proper and lower semi-continuous (possibly non-convex) function~\citep{doi:10.1080/02331934.2016.1253694}. In particular, $\bar\mathbf{x}$ is called a stationary point of the considered problem~(\ref{eqn:P1}) if it satisfies $\partial h(\bar\mathbf{x})\subset \hat\partial (g+r)(\bar\mathbf{x})$, and any point $\bar \mathbf{x}$ such that $\partial h(\bar\mathbf{x})\cap \hat\partial (g+r)(\bar\mathbf{x})\neq\emptyset$ is called a critical point of~(\ref{eqn:P1}). When $g$ is differentiable, $\hat\partial (g+r)(\bar\mathbf{x}) = \nabla g(\bar\mathbf{x}) + \hat\partial r(\bar\mathbf{x})$~\citep[Exercise 8.8]{RockWets98}, and when both $g$ and $r$ are convex and their domains cannot be separated, $\hat\partial (g+r)(\bar\mathbf{x}) = \partial g(\bar\mathbf{x}) + \partial r(\bar\mathbf{x})$~\citep[Corollary 10.9]{RockWets98}.
An $\epsilon$-critical point of~(\ref{eqn:P1}) is a point $\mathbf{x}$ that satisfies $ \text{dist}( \partial h(\mathbf{x}), \hat\partial (g+ r)(\mathbf{x}))\leq \epsilon$. It is notable that when $g + r$ is non-differentiable, finding an $\epsilon$-critical point could become a challenging task for an iterative algorithm even under the condition that $r$ is a convex function. Let us consider the example of $g=h=0, r=|x|$. As long as $x\neq 0$, we have $\text{dist}(0, \partial|x|) = 1$. To address this challenge when $g + r$ is non-differentiable, we introduce the notion of nearly $\epsilon$-critical points. In particular, a point $\mathbf{x}$ is called a nearly $\epsilon$-critical point of the problem~(\ref{eqn:P1}) if there exists $\bar\mathbf{x}$ such that
\begin{align}
\|\mathbf{x} - \bar\mathbf{x}\|\leq O(\epsilon), \quad \text{dist}( \partial h(\bar\mathbf{x}), \hat\partial(g+ r)(\bar\mathbf{x}))\leq \epsilon.
\end{align}
A similar notion of nearly critical points for non-smooth and non-convex optimization problems has been utilized in several recent works~\citep{davis2017proximally,sgdweakly18,modelweakly18,chen18stagewise}.
\paragraph{Examples and Applications of DC functions.} Before ending this section, we present some examples of DC functions and their applications in machine learning and statistics.
{\it Example 1: Additive composition of a smooth loss function and a non-smooth convex regularizer.} Let us consider
\begin{align*}
\min_{\mathbf{x}\in\mathbb{R}^d}g(\mathbf{x}) + r(\mathbf{x}),
\end{align*}
where $r(\mathbf{x})$ is a convex function and $g(\mathbf{x})$ is an $L$-smooth function. For an $L$-smooth function, it is clear that $\hat g(\mathbf{x}) = g(\mathbf{x}) + \frac{L}{2}\|\mathbf{x}\|^2$ is a convex function. Therefore, the above objective function can be written as $\hat g(\mathbf{x}) + r(\mathbf{x}) - \frac{L}{2}\|\mathbf{x}\|^2$, which is a DC function.
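A minimal numerical sketch of this decomposition (a toy example of our own; the paper does not commit to a specific $g$): take $g(x)=\cos(x)$, which is $1$-smooth but non-convex, and verify that $\hat g(x) = g(x) + \frac{L}{2}x^2$ with $L=1$ is convex.

```python
import numpy as np

# Toy example (ours): g(x) = cos(x) is 1-smooth but non-convex, and
# g_hat(x) = g(x) + (L/2)*x^2 with L = 1 is convex, since
# g_hat''(x) = 1 - cos(x) >= 0.  Hence
# cos(x) + r(x) = g_hat(x) + r(x) - (L/2)*x^2 is a DC decomposition.
L = 1.0

def g_hat(x):
    return np.cos(x) + 0.5 * L * x ** 2

rng = np.random.default_rng(1)
x = rng.uniform(-10.0, 10.0, 1000)
y = rng.uniform(-10.0, 10.0, 1000)
# midpoint convexity check on random pairs of points
assert np.all(g_hat((x + y) / 2) <= (g_hat(x) + g_hat(y)) / 2 + 1e-9)
```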
{\it Example 2: Weakly convex functions.} Weakly convex functions have recently been studied in numerous papers~\citep{davis2017proximally,sgdweakly18,modelweakly18,chen18stagewise,zhang2018convergence}. A function $f(\mathbf{x})$ is called $\rho$-weakly convex if $f(\mathbf{x}) + \frac{\rho}{2}\|\mathbf{x}\|^2$ is a convex function. More generally, $f(\mathbf{x})$ is called $\rho$-relatively convex with respect to a strongly convex function $\omega(\mathbf{x})$ if $f(\mathbf{x}) + \rho\omega(\mathbf{x})$ is convex~\citep{zhang2018convergence}. It is obvious that a weakly convex function $f(\mathbf{x})$ is a DC function. Examples of weakly convex functions can be found in deep neural networks with a smooth activation function and a non-smooth loss function~\citep{chen18stagewise}, robust learning~\citep{DBLP:journals/corr/abs-1805-07880}, robust phase retrieval~\citep{modelweakly18}, etc.
{\it Example 3: Non-Convex Sparsity-Promoting Regularizers.}
Many non-convex sparsity-promoting regularizers in statistics can be written as DC functions, including the log-sum penalty (LSP)~\citep{Candades2008}, the minimax concave penalty (MCP)~\citep{cunzhang10}, the smoothly clipped absolute deviation (SCAD)~\citep{CIS-172933}, the capped $\ell_1$ penalty~\citep{Zhang:2010:AMC:1756006.1756041}, and the transformed $\ell_1$ norm~\citep{DBLP:journals/corr/ZhangX14}. For the detailed DC decompositions of these regularizers, please refer to~\citep{Wen2018,DBLP:conf/icml/GongZLHY13}. It is notable that for LSP, MCP and SCAD, the second function in their DC decomposition can be a smooth function. In particular, if one considers regression or classification with an LSP, MCP, SCAD or transformed $\ell_1$ norm regularizer, the problem is a special case of~(\ref{eqn:P1}) with $r(\mathbf{x})$ being a convex function and $h(\mathbf{x})$ being a smooth convex function. Here we give one example by considering learning with MCP as the regularizer, where the problem is
\begin{align}
\min_{\mathbf{x}\in\mathbb{R}^d}\frac{1}{n}\sum_{i=1}^n\ell(\mathbf{x}^{\top}\mathbf{a}_i, b_i) +\underbrace{ \lambda\sum_{i=1}^d\int^{|x_i|}_0\left[1 - \frac{x}{\theta\lambda}\right]_+dx}_{P(\mathbf{x})},
\end{align}
where $(\mathbf{a}_i, b_i), i=1,\ldots, n$ denote a set of data points (feature vector and label pairs), $\ell(\cdot, \cdot)$ is a convex loss function with respect to its first argument, $\theta>0$ is a constant and $\lambda>0$ is a regularization parameter. We can write $P(\mathbf{x})$ as a difference of two convex functions
\begin{align*}
P(\mathbf{x}) = \lambda\|\mathbf{x}\|_1 - \underbrace{\lambda\sum_{i=1}^d\int_{0}^{|x_i|}\min\left\{1, \frac{x}{\theta\lambda}\right\} dx}_{h(\mathbf{x})},
\end{align*}
where $h(\mathbf{x})$ is continuously differentiable with a $\frac{1}{\theta}$-Lipschitz continuous gradient.
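The DC split $P(x) = \lambda|x| - h(x)$ above can be sanity-checked numerically; the sketch below compares the two integral definitions for scalar inputs (the test values of $\lambda$ and $\theta$ are arbitrary choices of ours):

```python
import numpy as np

# Sanity check of the DC split P(x) = lam*|x| - h(x) for the scalar MCP
# penalty, comparing the two integral definitions numerically.
# lam and theta are arbitrary positive test values.
lam, theta = 0.7, 2.0

def trap(fn, upper, n=20001):
    # simple trapezoidal rule on [0, upper]
    if upper == 0.0:
        return 0.0
    t = np.linspace(0.0, upper, n)
    v = fn(t)
    return float(np.sum(v[:-1] + v[1:]) * 0.5 * (t[1] - t[0]))

def mcp(x):   # P(x) = lam * int_0^{|x|} [1 - t/(theta*lam)]_+ dt
    return lam * trap(lambda t: np.maximum(1.0 - t / (theta * lam), 0.0), abs(x))

def h(x):     # h(x) = lam * int_0^{|x|} min(1, t/(theta*lam)) dt
    return lam * trap(lambda t: np.minimum(1.0, t / (theta * lam)), abs(x))

# the two integrands sum to 1 pointwise, so P(x) + h(x) = lam*|x| exactly
for x in np.linspace(-5.0, 5.0, 41):
    assert abs(mcp(x) - (lam * abs(x) - h(x))) < 1e-8
```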
{\it Example 4: Least-squares Regression with $\ell_{1-2}$ Regularization}. Recently, a non-convex regularization in the form of $\lambda(\|\mathbf{x}\|_1 - \|\mathbf{x}\|_2)$ was proposed for least-squares regression or compressive sensing~\citep{Yin2015MinimizationO}, which is naturally a DC function.
{\it Example 5: Positive-Unlabeled (PU) Learning}. A standard learning task is to find a model denoted by $\mathbf{x}$ that minimizes the expected risk based on a convex surrogate loss $\ell$, i.e.,
\begin{align*}
\min_{\mathbf{x}\in\mathbb{R}^d}\mathrm{E}_{\mathbf{z}, y}[\ell(\mathbf{x}; \mathbf{z}, y)],
\end{align*}
where $\mathbf{z}\in\mathbb{R}^m$ denotes the feature vector of a random data point and $y\in\{1, -1\}$ denotes its corresponding label. In practice one observes a finite set of i.i.d. training data $\{\mathbf{z}_i, y_i\}, i=1,\ldots, n$, which leads to the well-known empirical risk minimization (ERM) problem, i.e., $\min_{\mathbf{x}\in\mathbb{R}^d}\frac{1}{n}\sum_{i=1}^n\ell(\mathbf{x}; \mathbf{z}_i, y_i)$. However, if only positive data $\{\mathbf{z}_i, +1\}, i=1, \ldots, n_+,$ are observed, ERM becomes problematic. A remedy to address this challenge is to use unlabeled data to compute an unbiased estimate of $\mathrm{E}_{\mathbf{z}, y}[\ell(\mathbf{x}; \mathbf{z}, y)]$. In particular, the objective in the following problem is an unbiased risk~\citep{NIPS2017_6765}:
\begin{align*}
\min_{\mathbf{x}\in\mathbb{R}^d}\frac{\pi_p}{n_+}\sum_{i=1}^{n_+}\left(\ell(\mathbf{x}; \mathbf{z}_i, +1) - \ell(\mathbf{x}; \mathbf{z}_i, -1)\right) + \frac{1}{n_u}\sum_{j=1}^{n_u}\ell(\mathbf{x}; \mathbf{z}^u_j, -1),
\end{align*}
where $\{\mathbf{z}_i^u, i=1, \ldots, n_u\}$ is a set of unlabeled data, and $\pi_p = \Pr(y=1)$ is the prior probability of the positive class. It is obvious that if $\ell(\mathbf{x}; \cdot)$ is a convex loss function in terms of $\mathbf{x}$, the above objective function is a DC function. In practice, an estimation of $\pi_p$ is used.
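As an illustration, a minimal sketch of this unbiased PU risk with a logistic surrogate loss on synthetic data (the data, the choice of loss, and the value of $\pi_p$ are assumptions of the sketch; in practice $\pi_p$ is estimated):

```python
import numpy as np

# Toy sketch of the unbiased PU risk with the logistic surrogate
# ell(x; z, y) = log(1 + exp(-y * z^T x)); data and pi_p are synthetic
# assumptions of this sketch.
def logistic_loss(w, Z, y):
    return np.log1p(np.exp(-y * (Z @ w)))

def pu_risk(w, Z_pos, Z_unl, pi_p):
    pos_term = pi_p * np.mean(logistic_loss(w, Z_pos, +1.0)
                              - logistic_loss(w, Z_pos, -1.0))
    unl_term = np.mean(logistic_loss(w, Z_unl, -1.0))
    return pos_term + unl_term

rng = np.random.default_rng(2)
Z_pos = rng.normal(1.0, 1.0, size=(50, 3))   # positive examples
Z_unl = rng.normal(0.0, 1.0, size=(200, 3))  # unlabeled examples
w = rng.normal(size=3)
risk = pu_risk(w, Z_pos, Z_unl, pi_p=0.3)
assert np.isfinite(risk)
```

Since the loss is convex in $w$, the first (positive) term is a difference of convex functions and the whole objective is DC, matching the discussion above.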
\paragraph{Examples of Non-Convex Non-Smooth Regularizers.} Finally, we present some examples of non-convex non-smooth regularizers $r(\mathbf{x})$ that cannot be written as a DC function or whose DC decomposition is unknown. Thus, the algorithms and theories presented in Section~\ref{sec:DC} are not directly applicable, but the algorithms discussed in Section~\ref{sec:nsncr} are applicable when the proximal mapping of each component of $r(\mathbf{x})$ is efficient to compute. Examples include the $\ell_0$ norm (i.e., the number of non-zero elements of a vector) and $\ell_p$ norm regularization for $p\in(0,1)$ (i.e., $\sum_{i=1}^d|x_i|^p$), whose proximal mappings can be efficiently computed~\citep{Attouch2013,Bolte:2014:PAL:2650160.2650169}. For another example, let us consider a penalization approach for tackling non-convex constraints. Consider a non-convex optimization problem with domain constraint $\mathbf{x}\in\mathcal{C}$, where $\mathcal{C}$ is a non-convex set. Directly handling a non-convex constrained problem could be difficult. An alternative solution is to convert the domain constraint into a penalization $\frac{\lambda}{2}\|\mathbf{x} - \mathbb{P}_\mathcal{C}(\mathbf{x})\|^2$ with $\lambda>0$ in the objective, where $\mathbb{P}_\mathcal{C}(\cdot)$ denotes the projection of a point onto the set $\mathcal{C}$. Note that when $\mathcal{C}$ is a non-convex set, $r(\mathbf{x})=\frac{\lambda}{2}\|\mathbf{x} - \mathbb{P}_\mathcal{C}(\mathbf{x})\|^2$ is a non-convex non-smooth function in general, and its proximal mapping enjoys a closed-form solution~\citep{DBLP:journals/mp/LiP16}.
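For instance, the proximal mapping of $r(\mathbf{x}) = \lambda\|\mathbf{x}\|_0$ reduces to hard thresholding; the following sketch (an illustration, with the threshold obtained by comparing the two candidate values per coordinate) shows the closed form:

```python
import numpy as np

# Closed-form proximal mapping of r(x) = lam * ||x||_0 (hard thresholding):
#   argmin_y (1/(2*eta)) * ||y - x||^2 + lam * ||y||_0.
# Per coordinate, keeping x_i costs lam, zeroing it costs x_i^2/(2*eta),
# so x_i is kept iff |x_i| > sqrt(2*eta*lam).
def prox_l0(x, eta, lam):
    y = x.copy()
    y[np.abs(x) <= np.sqrt(2.0 * eta * lam)] = 0.0
    return y

x = np.array([3.0, -0.5, 0.1, -2.0])
y = prox_l0(x, eta=0.5, lam=1.0)  # threshold = sqrt(2 * 0.5 * 1.0) = 1
assert np.allclose(y, [3.0, 0.0, 0.0, -2.0])
```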
As a final remark, it is worth mentioning that even if $r(\mathbf{x})$ can be written as a DC function such that the two components in its DC decomposition are both non-smooth and non-differentiable (e.g., $\ell_{1-2}$ regularization, the capped $\ell_1$ norm $\sum_{i=1}^d\min(|x_i|, \theta)$), the theory presented in Section~\ref{sec:nsncr} can still be useful to derive a non-asymptotic first-order convergence result in terms of finding a close critical point, while the theory in Section~\ref{sec:DC} is not directly applicable.
\section{New Stochastic Algorithms of DC functions}\label{sec:DC}
In this section, we present new stochastic algorithms for solving the problem~(\ref{eqn:P1}) when $r(\mathbf{x})$ is a convex function and their convergence results. We assume both $g(\mathbf{x})$ and $h(\mathbf{x})$ have a large number of components such that computing a stochastic gradient is much more efficient than computing a deterministic gradient. Without loss of generality, we assume $g(\mathbf{x}) = \mathrm{E}_\xi[g(\mathbf{x}; \xi)]$ and $h(\mathbf{x}) = \mathrm{E}_{\varsigma}[h(\mathbf{x}; \varsigma)]$, and consider the following problem:
\begin{align}\label{eqn:PS}
\min_{\mathbf{x}\in\mathbb{R}^d} F(\mathbf{x}) : = \mathrm{E}_\xi[g(\mathbf{x}; \xi)] + r(\mathbf{x}) - \mathrm{E}_{\varsigma}[h(\mathbf{x}; \varsigma)].
\end{align}
where $g, h: \mathbb{R}^d\rightarrow\mathbb{R}$ are real-valued lower-semicontinuous convex functions and $r$ is a proper lower-semicontinuous convex function.
It is notable that a special case of this problem is the finite-sum form:
\begin{align}
\min_{\mathbf{x}\in\mathbb{R}^d} F(\mathbf{x}):=\frac{1}{n_1}\sum_{i=1}^{n_1} g_i(\mathbf{x}) + r(\mathbf{x}) - \frac{1}{n_2}\sum_{j=1}^{n_2}h_j(\mathbf{x}),
\end{align}
which allows us to develop faster algorithms for smooth functions by using variance reduction techniques.
Since we do not necessarily impose any smoothness assumption on $g(\mathbf{x})$ and $h(\mathbf{x})$, we postpone the particular assumptions on these functions to the statements of the later theorems. For all algorithms presented below, we assume that the proximal mapping of $r(\mathbf{x})$ can be efficiently computed, i.e., the solution to the following problem can be easily computed for any $\eta>0$:
\begin{align*}
\min_{\mathbf{y}\in\mathbb{R}^d} \frac{1}{2\eta}\|\mathbf{y} - \mathbf{x}\|^2 + r(\mathbf{y}).
\end{align*}
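For example, when $r$ is (a multiple of) the $\ell_1$ norm, this proximal problem has the well-known closed-form soft-thresholding solution; a minimal sketch:

```python
import numpy as np

# Example of an efficiently computable proximal mapping: for
# r(y) = lam * ||y||_1, the minimizer of (1/(2*eta))*||y - x||^2 + r(y)
# is soft thresholding, applied componentwise.
def prox_l1(x, eta, lam):
    return np.sign(x) * np.maximum(np.abs(x) - eta * lam, 0.0)

x = np.array([2.0, -0.3, 0.7])
assert np.allclose(prox_l1(x, eta=1.0, lam=0.5), [1.5, 0.0, 0.2])
```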
A basic assumption that will be used in the analysis is the following.
\begin{ass}\label{ass:0}
For a given initial solution $\mathbf{x}_1\in\text{dom}(r)$, assume that there exists $\Delta>0$ such that $F(\mathbf{x}_1) - \inf_{\mathbf{x}\in\mathbb{R}^d}F(\mathbf{x})\leq \Delta$.
\end{ass}
The basic idea of the proposed algorithm is similar to that of the stochastic algorithm proposed in~\citep{pmlr-v54-nitanda17a}. The algorithm consists of multiple stages of solving convex problems. At the $k$-th stage ($k\geq 1$), given a point $\mathbf{x}_k$, a convex majorant function $F^\gamma_{\mathbf{x}_k}(\mathbf{x})$ is constructed as follows, such that $F^{\gamma}_{\mathbf{x}_k}(\mathbf{x}) \geq F(\mathbf{x}), \forall \mathbf{x}$ and $F^{\gamma}_{\mathbf{x}_k}(\mathbf{x}_k) = F(\mathbf{x}_k)$:
\begin{align}
F^{\gamma}_{\mathbf{x}_k}(\mathbf{x}) &= g(\mathbf{x}) + r(\mathbf{x}) - (h(\mathbf{x}_k) + \partial h(\mathbf{x}_k)^{\top}(\mathbf{x} - \mathbf{x}_k)) + \frac{\gamma}{2}\|\mathbf{x} - \mathbf{x}_k\|^2,
\end{align}
where $\gamma>0$ is a constant parameter. Then a stochastic algorithm is employed to optimize the convex majorant function. The key difference from the previous work lies at how to solve each convex majorant function. An important change introduced to our design is to make the proposed algorithms more efficient and more practical. Roughly speaking, we only require solving each function $F^\gamma_{\mathbf{x}_k}(\mathbf{x})$ up to an accuracy level of $c/k$ for some constant $c>0$, i.e., finding a solution $\mathbf{x}_{k+1}$ such that
\begin{align}\label{eqn:acc}
\mathrm{E}[F^{\gamma}_{\mathbf{x}_k}(\mathbf{x}_{k+1}) - \min_{\mathbf{x}\in\mathbb{R}^d} F^{\gamma}_{\mathbf{x}_k}(\mathbf{x}) ]\leq \frac{c}{k}.
\end{align}
In contrast, the algorithm and analysis presented in~\citep{pmlr-v54-nitanda17a} require solving each convex problem up to an accuracy level of $\epsilon$, which is the expected accuracy level of the final solution. This change not only makes our algorithms more efficient by saving a lot of unnecessary computation but also more practical, since $\epsilon$ is not required to run the algorithm.
\begin{algorithm}[t]
\caption{A Stagewise Stochastic DC Algorithm: SSDC-$\mathcal{A}$}\label{alg:meta}
\begin{algorithmic}[1]
\STATE \textbf{Initialize:} $\mathbf{x}_1\in\text{dom}(r)$
\FOR {$k = 1,\ldots, K$}
\STATE Let $F_{k}(\mathbf{x})=F^\gamma_{\mathbf{x}_k}(\mathbf{x})=g(\mathbf{x}) + r(\mathbf{x}) - h(\mathbf{x}_k)- \partial h(\mathbf{x}_k)^{\top}(\mathbf{x} - \mathbf{x}_k)+\frac{\gamma}{2}\| \mathbf{x}-\mathbf{x}_{k}\|^2$
\STATE $\mathbf{x}_{k+1} = \mathcal{A}(F^\gamma_{\mathbf{x}_k}, \Theta_k)$ \hfill $\diamond$ $\Theta_k$ denotes algorithm dependent parameters
\ENDFOR
\end{algorithmic}
\end{algorithm}
We present a meta algorithm in Algorithm~\ref{alg:meta}, in which $\mathcal A$ refers to an appropriate stochastic algorithm for solving each convex majorant function. Step 4 means that $\mathcal A$ is employed to find $\mathbf{x}_{k+1}$ such that~(\ref{eqn:acc}) is satisfied (or a more fine-grained condition is satisfied for a particular algorithm, as discussed later), where $\Theta_k$ denotes the algorithm-dependent parameters (e.g., the number of iterations).
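To make the stagewise scheme concrete, here is a toy instantiation on a scalar DC objective (an illustrative example of our own; the inner stochastic solver $\mathcal A$ is replaced by the exact minimizer of the quadratic majorant, which is available in closed form only in this toy case):

```python
import numpy as np

# Toy instantiation (ours) of the stagewise scheme for the scalar DC
# objective F(x) = 0.5*x^2 - |x|, i.e. g(x) = 0.5*x^2, r = 0, h(x) = |x|.
# Each majorant F_k is a simple quadratic here, so the inner solver A is
# replaced by its exact minimizer; in general a stochastic method solving
# F_k to accuracy c/k would be used instead.
gamma = 1.0
x = 0.3                                 # initial point x_1
for k in range(1, 50):
    s = np.sign(x) if x != 0 else 1.0   # a subgradient of h at x_k
    # argmin_x 0.5*x^2 - s*(x - x_k) + (gamma/2)*(x - x_k)^2
    x = (s + gamma * x) / (1.0 + gamma)

assert abs(x - 1.0) < 1e-8              # x converges to the critical point 1
```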
There are three issues that deserve further discussion in order to fully understand the proposed algorithm. First, how many outer stages $K$ are needed to ensure finding a (nearly) $\epsilon$-stationary point of the original problem under the condition that~(\ref{eqn:acc}) is satisfied for each subproblem. Second, how to ensure that the condition~(\ref{eqn:acc}) is satisfied for a stochastic algorithm. Third, what the overall complexity (iteration complexity or gradient complexity) is, taking into account the complexity of the stochastic algorithm $\mathcal A$ for solving each convex majorant function. Note that the last two issues are closely related to the particular algorithm employed. We emphasize that the last two issues are important not only in theory but also in practice. Related factors, such as how to initialize the algorithm $\mathcal A$, how to set the step size, and how many iterations suffice for each call of $\mathcal A$, have a dramatic effect on the practical performance. Next, we first present a general convergence analysis of Algorithm~\ref{alg:meta} under the condition that~(\ref{eqn:acc}) is satisfied for each subproblem. Then, we present several representative stochastic algorithms for solving each convex majorant function and derive their overall iteration complexities.
Our convergence analysis also has its merits compared with the previous work~\citep{pmlr-v54-nitanda17a}. We will divide our convergence analysis into three parts. First, in subsection~\ref{sec:general} we introduce a general convergence measure without requiring any smoothness assumptions of involved functions and conduct a convergence analysis of the proposed algorithm. Second, we analyze different stochastic algorithms and their convergence results in subsection~\ref{sec:algs}, including an adaptive convergence result for using AdaGrad. Finally, we discuss the implications of these convergence results for solving the original problem in terms of finding a (nearly) $\epsilon$-stationary point in subsection~\ref{sec:stationary}.
\subsection{A General Convergence Result}\label{sec:general}
For any $\gamma>0$, define
\begin{align*}
P_\gamma(\mathbf{z}) & = \arg\min_{\mathbf{x}\in\mathbb{R}^d} g(\mathbf{x}) + r(\mathbf{x}) -(h(\mathbf{z}) + \partial h(\mathbf{z})^{\top}(\mathbf{x} - \mathbf{z})) + \frac{\gamma}{2}\|\mathbf{x} - \mathbf{z}\|^2,\\
G_\gamma(\mathbf{z}) & = \gamma (\mathbf{z} - P_\gamma(\mathbf{z})).
\end{align*}
It is notable that $P_\gamma(\mathbf{z})$ is well defined since the above problem is strongly convex. The following proposition shows that if $\mathbf{z} = P_\gamma(\mathbf{z})$, then $\mathbf{z}$ is a critical point of the original problem.
\begin{prop}\label{prop:1}
If $\mathbf{z} =P_\gamma(\mathbf{z})$, then $\mathbf{z}$ is a critical point of the problem $\min_{\mathbf{x}\in\mathbb{R}^d}g(\mathbf{x}) + r(\mathbf{x}) - h(\mathbf{x})$.
\end{prop}
\begin{proof}
According to the first-order optimality condition, we have
\begin{align*}
0\in \partial (g(P_\gamma(\mathbf{z}))+ r(P_\gamma(\mathbf{z}))) -\partial h(\mathbf{z}) + \gamma(P_\gamma(\mathbf{z}) - \mathbf{z}).
\end{align*}
Since $\mathbf{z} =P_\gamma(\mathbf{z})$, we have
\begin{align*}
0\in \partial (g+r)(\mathbf{z}) -\partial h(\mathbf{z}),
\end{align*}
which implies that $\mathbf{z}$ is a critical point of the original minimization problem.
\end{proof}
The above proposition implies that $\|G_\gamma(\mathbf{z})\|= \gamma\|P_\gamma(\mathbf{z}) - \mathbf{z}\|$ can serve as a measure of convergence of an algorithm for solving the considered minimization problem. In subsection~\ref{sec:stationary}, we will discuss how convergence in terms of $ \gamma\|P_\gamma(\mathbf{z}) - \mathbf{z}\|$ implies the standard convergence measure in terms of the (sub)gradient norm of the original problem. The following theorems are the main results of this subsection.
\begin{thm}\label{thm:2}
Suppose Assumption~\ref{ass:0} holds and there exists a stochastic algorithm $\mathcal A$ that, when applied to $F^\gamma_{\mathbf{x}_k}(\mathbf{x})$, can find a solution $\mathbf{x}_{k+1}$ satisfying~(\ref{eqn:acc}). Then with a total of $K$ stages we have
\begin{align*}
\mathrm{E}\bigg[\|G_\gamma(\mathbf{x}_{\tau})\|^2\bigg]&\leq \frac{2\gamma\Delta}{K} + \frac{2\gamma c(1+\log (K))}{K},
\end{align*}
where
$\tau\in\{1,\ldots, K\}$ is uniformly sampled.
\end{thm}
{\bf Remark:} It is clear that when $K\rightarrow\infty$, $\gamma\|\mathbf{x}_{\tau} - P_\gamma(\mathbf{x}_{\tau})\|\rightarrow 0$ in expectation, implying convergence to a critical point. Note that the $\log (K)$ factor leads to an iteration complexity of $O(\log(1/\epsilon)/\epsilon^4)$ when using the stochastic (sub)gradient method. This seems to be slightly worse, by a logarithmic factor, than the complexity presented in~\citep{pmlr-v54-nitanda17a}. However, our algorithms can perform much better in practice. The reason is that if we simply run a stochastic algorithm $\mathcal A$ at each stage to find a solution $\mathbf{x}_{k+1}$ such that $\mathrm{E}[F^{\gamma}_{\mathbf{x}_k}(\mathbf{x}_{k+1}) - \min_{\mathbf{x}\in\mathbb{R}^d} F^{\gamma}_{\mathbf{x}_k}(\mathbf{x}) ]\leq \frac{c}{K}$, one can obtain a convergence upper bound of $O(1/K)$ without a logarithmic factor; however, the stochastic algorithm $\mathcal A$ then needs many more iterations at each stage, leading to worse performance in practice.
A simple way to get rid of such a logarithmic factor without sacrificing the practical performance is by exploiting non-uniform sampling under a slightly stronger condition of the problem.
\begin{thm}\label{thm:3}
Suppose there exists a stochastic algorithm $\mathcal A$ that, when applied to $F^\gamma_{\mathbf{x}_k}(\mathbf{x})$, can find a solution $\mathbf{x}_{k+1}$ satisfying~(\ref{eqn:acc}), and that there exists $\Delta>0$ such that $\mathrm{E}[F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta$ for all $k\in\{1,\ldots, K\}$. Then with a total of $K$ stages we have
\begin{align*}
\mathrm{E}\bigg[\|G_\gamma(\mathbf{x}_{\tau})\|^2\bigg]&\leq \frac{2\gamma(\Delta+c)(\alpha+1)}{K},
\end{align*}
where
$\tau\in\{1,\ldots, K\}$ is sampled according to probabilities $p(\tau=k) = \frac{k^\alpha}{\sum_{k=1}^Kk^\alpha}$ with $\alpha\geq 1$.
\end{thm}
{\bf Remark: } Compared to Theorem~\ref{thm:2}, the condition $\mathrm{E}[F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta$ for all $k\in\{1,\ldots, K\}$ is slightly stronger than Assumption~\ref{ass:0}. However, it can be easily satisfied if $\mathbf{x}_k\in\text{dom}(r)$ resides in a bounded set (e.g., when $r(\mathbf{x})$ is the indicator function of a bounded set), or if $\mathrm{E}[F(\mathbf{x}_k)]$ is non-increasing (e.g., when using variance-reduction methods for the case that $g(\mathbf{x})$ is smooth).
\begin{proof}[of Theorem~\ref{thm:3}] The proof of Theorem~\ref{thm:2} can be obtained by a slight modification of the following proof.
We define the following notation:
\begin{align*}
\mathbf{z}_k = P_\gamma(\mathbf{x}_k)=\arg\min_{\mathbf{x}\in\mathbb{R}^d} F_k(\mathbf{x}) := \underbrace{g(\mathbf{x}) + r(\mathbf{x}) - \partial h(\mathbf{x}_k)^{\top}(\mathbf{x} - \mathbf{x}_k)}_{f_k(\mathbf{x})} + \frac{\gamma}{2}\|\mathbf{x} - \mathbf{x}_k\|^2.
\end{align*}
By the assumption of~(\ref{eqn:acc}), we have $\mathrm{E}[F_k(\mathbf{x}_{k+1}) -F_k(\mathbf{z}_k)]\leq \epsilon_k = c/k$. By the strong convexity of $F_k$, we have $F_k(\mathbf{x}_k)\geq F_k(\mathbf{z}_k) + \frac{\gamma}{2}\|\mathbf{x}_k - \mathbf{z}_k\|^2$. Thus we have
\begin{align}\label{eqn:c1}
\mathrm{E}[f_k(\mathbf{x}_{k+1}) + \frac{\gamma}{2}\|\mathbf{x}_{k+1} - \mathbf{x}_k\|^2]&\leq F_k(\mathbf{x}_k) - \frac{\gamma}{2}\|\mathbf{x}_k - \mathbf{z}_k\|^2 + \epsilon_k\nonumber\\
& = g(\mathbf{x}_k) + r(\mathbf{x}_k)- \frac{\gamma}{2}\|\mathbf{x}_k - \mathbf{z}_k\|^2 + \epsilon_k.
\end{align}
Rearranging the terms, we have
\begin{align*}
\mathrm{E}\bigg[\frac{\gamma}{2}\|\mathbf{z}_{k} - \mathbf{x}_k\|^2\bigg]&\leq \mathrm{E}[g(\mathbf{x}_k) + r(\mathbf{x}_k) - f_k(\mathbf{x}_{k+1})] + \epsilon_k\\
&\leq \mathrm{E}[g(\mathbf{x}_k) + r(\mathbf{x}_k) - g(\mathbf{x}_{k+1}) - r(\mathbf{x}_{k+1}) + \partial h(\mathbf{x}_k)^{\top}(\mathbf{x}_{k+1} - \mathbf{x}_k)] + \epsilon_k\\
&\leq \mathrm{E}[g(\mathbf{x}_k) + r(\mathbf{x}_k )- g(\mathbf{x}_{k+1}) -r(\mathbf{x}_{k+1}) + h(\mathbf{x}_{k+1}) - h(\mathbf{x}_k)] + \epsilon_k\\
&= \mathrm{E}[F(\mathbf{x}_k) - F(\mathbf{x}_{k+1})] + \epsilon_k,
\end{align*}
where the last inequality follows from the convexity of $h(\cdot)$. Multiplying both sides by $w_k = k^\alpha$ and summing over $k=1,\ldots, K$, we have
\begin{align}\label{eqn:c2}
\mathrm{E}\bigg[\frac{\gamma}{2}\sum_{k=1}^Kw_k\|\mathbf{z}_{k} - \mathbf{x}_k\|^2\bigg]&\leq \mathrm{E}\bigg[ \sum_{k=1}^Kw_k(F(\mathbf{x}_k) - F(\mathbf{x}_{k+1}))\bigg] +\sum_{k=1}^Kw_k \epsilon_k.
\end{align}
The second term on the right-hand side of the above inequality can be easily bounded using simple calculus. For the first term, we use a similar analysis to that in the proof of Theorem 1 in~\citep{chen18stagewise}:
\begin{align*}
&\sum_{k=1}^{K} w_k (F(\mathbf{x}_k) - F(\mathbf{x}_{k+1}))= \sum_{k=1}^{K} (w_{k-1}F(\mathbf{x}_{k}) - w_kF(\mathbf{x}_{k+1})) + \sum_{k=1}^{K}(w_k - w_{k-1})F(\mathbf{x}_{k})\\
&= w_0 F(\mathbf{x}_1) - w_{K}F(\mathbf{x}_{K+1}) +\sum_{k=1}^{K}(w_k - w_{k-1})F(\mathbf{x}_{k})\\
& =\sum_{k=1}^{K}(w_k - w_{k-1})(F(\mathbf{x}_{k}) - F(\mathbf{x}_{K+1}))\leq \sum_{k=1}^{K}(w_k - w_{k-1})(F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})),
\end{align*}
where we use $w_0=0$.
Taking expectation on both sides, we have
\begin{align*}
\mathrm{E}\bigg[ \sum_{k=1}^Kw_k(F(\mathbf{x}_k) - F(\mathbf{x}_{k+1}))\bigg] \leq \sum_{k=1}^{K}(w_k - w_{k-1})\mathrm{E}[F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta w_{K}.
\end{align*}
Then, we have
\begin{align*}
\mathrm{E}\bigg[\frac{\gamma}{2}\|\mathbf{z}_{\tau} - \mathbf{x}_\tau\|^2\bigg]&\leq \frac{\Delta (\alpha+1)}{K} + \frac{c(\alpha+1)}{K},
\end{align*}
which completes the proof after multiplying both sides by $2\gamma$.
The result in Theorem~\ref{thm:2} for uniform sampling can be easily derived from the inequality~(\ref{eqn:c2}) by using the fact that $\sum_{k=1}^K1/k\leq 1+\log K$.
\end{proof}
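The two elementary facts invoked in the proofs above, namely $\sum_{k=1}^K 1/k\leq 1+\log K$ and, for $w_k=k^\alpha$, $w_K/\sum_{k=1}^K w_k\leq (\alpha+1)/K$ (since $\sum_{k=1}^K k^\alpha\geq K^{\alpha+1}/(\alpha+1)$), can also be checked numerically. The sketch below is only a sanity check of these bounds, not part of the analysis:

```python
import math

def harmonic_bound_holds(K):
    """Check sum_{k=1}^K 1/k <= 1 + log K."""
    return sum(1.0 / k for k in range(1, K + 1)) <= 1.0 + math.log(K)

def weight_ratio_bound_holds(K, alpha):
    """Check w_K / sum_k w_k <= (alpha + 1) / K for w_k = k**alpha."""
    weights = [k ** alpha for k in range(1, K + 1)]
    return weights[-1] / sum(weights) <= (alpha + 1) / K
```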
\subsection{Convergence Results of Different Stochastic Algorithms}\label{sec:algs}
In this subsection, we present the convergence results of Algorithm~\ref{alg:meta} when employing different stochastic algorithms for minimizing $F_k(\mathbf{x})$ at each stage. In particular, we consider three representative algorithms, namely the stochastic proximal subgradient (SPG) method~\citep{DBLP:conf/colt/DuchiSST10,DBLP:conf/icml/ZhaoZ15}, the adaptive stochastic gradient (AdaGrad) method~\citep{duchi2011adaptive,SadaGrad18}, and the proximal stochastic gradient method with variance reduction (SVRG)~\citep{DBLP:journals/siamjo/Xiao014}. SPG is a simple stochastic method, AdaGrad allows us to derive convergence that adapts to the history of the observed stochastic gradients, and SVRG allows us to leverage the finite-sum structure and the smoothness of the problem to improve the convergence rate.
\paragraph{Stochastic Proximal Subgradient Method.}
We make the following additional assumptions about the problem for developing SPG.
\begin{ass}\label{ass:1}
Assume one of the following conditions hold:
\begin{itemize}
\item[(i)] $g(\mathbf{x})$ is $L$-smooth and there exists $G>0$ such that $\mathrm{E}[\|(\nabla g(\mathbf{x}; \xi) - \partial h(\mathbf{x}; \varsigma)) - \mathrm{E}[\nabla g(\mathbf{x}; \xi) - \partial h(\mathbf{x}; \varsigma)]\|^2]\leq G^2$, where $\partial h(\mathbf{x})$ denotes a subgradient such that $ \mathrm{E}_{\varsigma}[\partial h(\mathbf{x}; \varsigma)] = \partial h(\mathbf{x})$.
\item[(ii)] there exists $G>0$ such that $\mathrm{E}[\|\partial g(\mathbf{x}; \xi)\|^2]\leq G^2$ and $\mathrm{E}[\|\partial h(\mathbf{x}; \varsigma)\|^2]\leq G^2$ for $\mathbf{x}\in\text{dom}(r)$, and either $r = \delta_{\mathcal{X}}(\mathbf{x})$ for a closed convex set $\mathcal{X}$ or $\|\partial r(\mathbf{x})\|\leq G$ for $\mathbf{x}\in\text{dom}(r)$.
\end{itemize}
\end{ass}
{\bf Remark:} The first assumption is typically used in the analysis of stochastic gradient methods when the involved function is smooth~\citep{DBLP:conf/icml/ZhaoZ15}, and the second assumption is typically used when the involved function is non-smooth~\citep{DBLP:conf/colt/DuchiSST10}. Note that the condition $\partial r(\mathbf{x})^{\top}(\mathbf{x} - \mathbf{y})\geq 0, \forall \mathbf{x}, \mathbf{y}\in\text{dom}(r)$ is meant to capture the indicator function of a convex set: when $r(\mathbf{x})$ is the indicator function of a convex set $\mathcal{X}$, we have $\text{dom}(r)= \mathcal{X}$ and $\partial r(\mathbf{x})$ corresponds to the normal cone of $\mathcal{X}$, implying $\partial r(\mathbf{x})^{\top}(\mathbf{x} - \mathbf{y})\geq 0, \forall \mathbf{x}, \mathbf{y}\in\mathcal{X}$.
Denote by $F^\gamma_{\mathbf{x}_1}(\mathbf{x}) = g(\mathbf{x}) + r(\mathbf{x}) - \partial h(\mathbf{x}_1)^{\top}(\mathbf{x} - \mathbf{x}_1) + \frac{\gamma}{2}\|\mathbf{x} - \mathbf{x}_1\|^2$. We present the SPG algorithm in Algorithm~\ref{alg:sgd} with two options that handle smooth and non-smooth $g$ separately. The constraint $\|\mathbf{x} - \mathbf{x}_1\|\leq 3G/\gamma$ at Step 5 is added to accommodate the proximal mapping of $r(\mathbf{x})$ when $g(\mathbf{x})$ is non-smooth. When the subgradient of $r(\mathbf{x})$ is used instead of its proximal mapping in the update, or when $r(\mathbf{x})$ is the indicator function of a bounded convex set, the constraint $\|\mathbf{x} - \mathbf{x}_1\|\leq 3G/\gamma$ can be removed.
\begin{algorithm}[t]
\caption{{SPG}$(F^\gamma_{\mathbf{x}_1}, \mathbf{x}_1, T)$}\label{alg:sgd}
\begin{algorithmic}[1]
\STATE Set step size $\eta_t$ according to Proposition~\ref{prop:sgd}, $\Omega=\{\mathbf{x}\in\text{dom}(r): \|\mathbf{x} - \mathbf{x}_1\|\leq 3G/\gamma\}$
\FOR{$t=1,\ldots, T$}
\STATE Compute stochastic subgradients $\partial g(\mathbf{x}_t; \xi_t)$ and $\partial h(\mathbf{x}_1; \varsigma_t)$
\STATE Option 1: $$\mathbf{x}_{t+1} =\arg\min_{\mathbf{x}} \mathbf{x}^{\top}(\partial g(\mathbf{x}_t; \xi_t) - \partial h(\mathbf{x}_1; \varsigma_t)) +r(\mathbf{x}) + \frac{\gamma}{2}\|\mathbf{x} - \mathbf{x}_1\|^2 + \frac{1}{2\eta_t}\|\mathbf{x}- \mathbf{x}_{t}\|^2$$
\STATE Option 2: $$\mathbf{x}_{t+1} =\arg\min_{\mathbf{x}\in\Omega} \mathbf{x}^{\top}(\partial g(\mathbf{x}_t; \xi_t) - \partial h(\mathbf{x}_1; \varsigma_t) ) + r(\mathbf{x}) + \frac{\gamma}{2}\|\mathbf{x} - \mathbf{x}_1\|^2+ \frac{1}{2\eta_t}\|\mathbf{x}- \mathbf{x}_{t}\|^2$$
\ENDFOR
\STATE \textbf{Output}: $\widehat{\mathbf{x}}_T = \sum_{t=2}^{T+1} t\mathbf{x}_t/\sum_{t=2}^{T+1}t$ (Option 1) or $\widehat{\mathbf{x}}_T = \sum_{t=1}^{T} t\mathbf{x}_t/\sum_{t=1}^{T}t$ (Option 2)
\end{algorithmic}
\end{algorithm}
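For concreteness, the update in Option 1 admits a closed-form solution for common choices of $r$; for instance, with $r(\mathbf{x})=\lambda\|\mathbf{x}\|_1$ it reduces to a soft-thresholding step. The sketch below is illustrative only (the function names are ours, and the vector \texttt{v} stands for the stochastic gradient $\partial g(\mathbf{x}_t;\xi_t)-\partial h(\mathbf{x}_1;\varsigma_t)$):

```python
import numpy as np

def soft_threshold(m, thresh):
    """Elementwise soft-thresholding: prox of thresh * ||.||_1."""
    return np.sign(m) * np.maximum(np.abs(m) - thresh, 0.0)

def spg_option1_step(v, x_t, x_1, gamma, eta, lam):
    """One SPG (Option 1) update for r(x) = lam * ||x||_1.

    Minimizes  v^T x + lam*||x||_1 + (gamma/2)*||x - x_1||^2
               + (1/(2*eta))*||x - x_t||^2  in closed form:
    combine the two quadratics into (rho/2)*||x - m||^2, then
    apply the l1 prox with threshold lam / rho.
    """
    rho = gamma + 1.0 / eta
    m = (gamma * x_1 + x_t / eta - v) / rho
    return soft_threshold(m, lam / rho)
```

Since the subproblem objective is strongly convex, the returned point is its unique global minimizer.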
\begin{prop}\label{prop:sgd} Suppose Assumption~\ref{ass:1}(i) holds. Then by setting $\eta_t = 3/(\gamma(t+1))$ and $\gamma\geq 3L$, Algorithm~\ref{alg:sgd} with Option 1 guarantees that
\begin{align*}
\mathrm{E}[F^\gamma_{\mathbf{x}_1}(\widehat{\x}_T) - F^\gamma_{\mathbf{x}_1}(\mathbf{x}_*)]\leq\frac{4\gamma\|\mathbf{x}_* - \mathbf{x}_1\|^2}{3T(T+3)} + \frac{6G^2}{(T+3)\gamma}.
\end{align*}
Suppose Assumption~\ref{ass:1}(ii) holds. Then by setting $\eta_t = 4/(\gamma t)$, Algorithm~\ref{alg:sgd} with Option 2 guarantees that
\begin{align*}
\mathrm{E}\bigg[F^\gamma_{\mathbf{x}_1}(\widehat{\x}_T) - F^\gamma_{\mathbf{x}_1}(\mathbf{x}_*)\bigg]\leq \frac{\gamma\|\mathbf{x}_* - \mathbf{x}_1\|^2}{4T(T+1)} + \frac{28G^2}{\gamma (T+1)},
\end{align*}
where $\mathbf{x}_* = \arg\min_{\mathbf{x}}F^\gamma_{\mathbf{x}_1}(\mathbf{x})$.
\end{prop}
We present a proof of the above proposition in the Appendix; it mostly follows existing analyses of SPG or related algorithms. By applying the above results (e.g., the second result in Proposition~\ref{prop:sgd}) to the $k$-th stage, we have
\[
\mathrm{E}\bigg[F_k(\mathbf{x}_{k+1}) - F_k(\mathbf{z}_k)\bigg]\leq \frac{\gamma\|\mathbf{z}_k - \mathbf{x}_k\|^2}{4T_k(T_k+1)} + \frac{28G^2}{\gamma (T_k+1)},
\]
where $T_k$ denotes the number of iterations used by SPG for the $k$-th stage. One might directly use the above result to argue that the condition~(\ref{eqn:acc}) holds by assuming that $\|\mathbf{x}_k - \mathbf{z}_k\|$ is bounded, which is true in the non-smooth case due to the domain constraint $\mathbf{x}\in\Omega$ in the update. In the smooth case, the upper bound is not directly available for setting $T_k$ such that the condition~(\ref{eqn:acc}) holds. Fortunately, when we apply the above result in the convergence analysis of Algorithm~\ref{alg:meta}, we can utilize the strong convexity of $F_k$ to cancel the term $O(\frac{\gamma\|\mathbf{z}_k - \mathbf{x}_k\|^2}{T_k(T_k+1)})$ by setting $T_k$ to be larger than a constant.
Let us summarize the convergence of Algorithm~\ref{alg:meta} when using SPG to solve each subproblem.
\begin{thm}\label{thm:sgd}
Suppose Assumption~\ref{ass:1} (i) holds and Algorithm~\ref{alg:sgd} is employed for solving $F_k$ with parameters given in Proposition~\ref{prop:sgd} and with $\gamma\geq 3L$ and $T_k = 3Lk/\gamma+3$, and there exists $\Delta>0$ such that $\mathrm{E}[F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta$ for all $k\in\{1,\ldots, K\}$; then with a total of $K$ stages Algorithm~\ref{alg:meta} guarantees
\begin{align*}
\mathrm{E}\bigg[\|G_\gamma(\mathbf{x}_\tau)\|^2\bigg]&\leq \frac{8\gamma\Delta(\alpha+1)}{K} + \frac{32G^2\gamma(\alpha+1)}{LK}.
\end{align*}
Similarly, suppose Assumption~\ref{ass:1} (ii) holds and Algorithm~\ref{alg:sgd} is employed for solving $F_k$ with parameters given in Proposition~\ref{prop:sgd} and with $T_k = k/\gamma +1$, and there exists $\Delta>0$ such that $\mathrm{E}[F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta$ for all $k\in\{1,\ldots, K\}$; then with a total of $K$ stages Algorithm~\ref{alg:meta} guarantees
\begin{align*}
\mathrm{E}\bigg[\|G_\gamma(\mathbf{x}_{\tau})\|^2\bigg]&\leq \frac{8\gamma\Delta(\alpha+1)}{K} + \frac{448G^2 \gamma (\alpha+1)}{K},
\end{align*}
where
$\tau\in\{1,\ldots, K\}$ is sampled according to probabilities $p(\tau=k) = \frac{k^\alpha}{\sum_{k=1}^Kk^\alpha}$ with $\alpha\geq 1$.
\end{thm}
{\bf Remark:} Let us consider the iteration complexity of using SPG for finding a solution that satisfies $\mathrm{E}\bigg[\|G_\gamma(\mathbf{x}_{\tau})\|^2\bigg]\leq \epsilon^2$. In both cases, we need a total number of stages $K = O(\gamma/\epsilon^2)$ and total iteration complexity $\sum_{k=1}^KT_k = \sum_{k=1}^KO(k/\gamma +1 )= O(\gamma/\epsilon^4)$.
One can also derive similar results for using the uniform sampling under Assumption~\ref{ass:0}, which are worse by a logarithmic factor.
\paragraph{AdaGrad.}
AdaGrad~\citep{duchi2011adaptive} is an important algorithm in the literature on stochastic optimization, which uses an adaptive step size for each coordinate. It has the potential benefit of speeding up convergence when the cumulative growth of the stochastic gradients is slow. Next, we show that AdaGrad can be leveraged to solve each convex majorant and yield adaptive convergence for the original problem. Similar to previous analyses of AdaGrad~\citep{duchi2011adaptive,SadaGrad18}, we make the following assumption.
\begin{ass}\label{ass:new}
For any $\mathbf{x}\in\text{dom}(r)$, there exists $G>0$ such that $\|\partial g(\mathbf{x}; \xi)\|_\infty\leq G$ and $\|\partial h(\mathbf{x}; \varsigma)\|_\infty\leq G$, and either $\partial r(\mathbf{x})^{\top}(\mathbf{x} - \mathbf{y})\geq 0, \forall \mathbf{x}, \mathbf{y}\in\text{dom}(r)$ or $\|\partial r(\mathbf{x})\|\leq G_r$ for $\mathbf{x}\in\text{dom}(r)$.
\end{ass}
\begin{algorithm}[t]
\caption{\textsc{AdaGrad}($F^\gamma_{\mathbf{x}_1}, \mathbf{x}_1, \eta$)} \label{alg:adagrad}
\begin{algorithmic}[1]
\STATE \textbf{Initialize:} $t=1$, $\mathbf{g}_{1:0}=[]$, $H_0\in\mathbb{R}^{d\times d}$, $\Omega=\{\mathbf{x}\in\text{dom}(r): \|\mathbf{x} -\mathbf{x}_1\|\leq \frac{2\sqrt{d}G + G_r}{\gamma}\}$
\WHILE{{$T$ does not satisfy the condition in Proposition~\ref{lem:adagrad}}}
\STATE Compute a stochastic subgradient $\mathbf{g}_t $ for $g(\mathbf{x}_t) - \partial h(\mathbf{x}_1)^{\top}\mathbf{x}_t$
\STATE Update $g_{1:t} = [g_{1:t-1}, \mathbf{g}_t]$, $s_{t,i} = \|g_{1:t,i}\|_2$
\STATE Set $H_t = H_0 + \mbox{diag}(\mathbf{s}_t)$ and $\psi_t(\mathbf{x}) = \frac{1}{2}(\mathbf{x}-\mathbf{x}_1)^{\top}H_t(\mathbf{x}-\mathbf{x}_1)$
\STATE Let $\mathbf{x}_{t+1}=\arg\min\limits_{\mathbf{x}\in\Omega} \mathbf{x}^{\top}\left(\frac{1}{t}\sum_{\tau=1}^t\mathbf{g}_\tau\right) + r(\mathbf{x})+ \frac{\gamma}{2}\|\mathbf{x} - \mathbf{x}_1\|^2 + \frac{1}{t\eta}\psi_t(\mathbf{x})
\ENDWHILE
\STATE \textbf{Output}: $\widehat{\x}_T=\sum_{t=1}^{T}\mathbf{x}_t/T$
\end{algorithmic}
\end{algorithm}
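To illustrate Steps 4--5 of Algorithm~\ref{alg:adagrad}, the per-coordinate quantity $s_{t,i}=\|g_{1:t,i}\|_2$ and the diagonal of $H_t=H_0+\mbox{diag}(\mathbf{s}_t)$ can be maintained incrementally in $O(d)$ memory, since only the running sum of squared gradients per coordinate is needed. A minimal sketch (the class name and interface are ours, not from the paper):

```python
import numpy as np

class AdaGradAccumulator:
    """Maintains s_{t,i} = ||g_{1:t,i}||_2 incrementally and returns
    the diagonal of H_t = H_0 + diag(s_t) after each stochastic gradient."""

    def __init__(self, dim, h0):
        self.sq_sum = np.zeros(dim)  # running sum of squared gradients per coordinate
        self.h0 = float(h0)          # diagonal entry of H_0 (e.g., 2G)

    def update(self, g_t):
        g_t = np.asarray(g_t, dtype=float)
        self.sq_sum += g_t ** 2
        return self.h0 + np.sqrt(self.sq_sum)  # diagonal of H_t
```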
The convergence analysis when using AdaGrad is built on the following proposition about the convergence of AdaGrad for minimizing $F^\gamma_{\mathbf{x}_1}$, which is attributed to~\cite{SadaGrad18}.
\begin{prop}\label{lem:adagrad}
Let $H_0=2G I$ with $2G\geq\max_t \|\mathbf{g}_t\|_\infty$, and iteration number $T$ be the smallest integer satisfying $T\geq M\max\{a(2G+ \max_i\|g_{1:T,i}\|), \sum_{i=1}^d\|g_{1:T,i}\|/a, G_r\|\mathbf{x}_1 - \mathbf{x}_{T+1}\|/\eta\}$ for any $a>0$.
Algorithm~\ref{alg:adagrad} returns an averaged solution $\widehat{\x}_T$ such that
\begin{align}\label{eqn:Es}
\mathrm{E}[ F^\gamma_{\mathbf{x}_1}(\widehat{\x}_T)-F^\gamma_{\mathbf{x}_1}(\mathbf{x}_*)] \leq \frac{1}{2aM\eta}\|\mathbf{x}_1-\mathbf{x}_*\|^2 +\frac{(a+1)\eta}{M},
\end{align}
where $\mathbf{x}_*=\arg\min_{\mathbf{x}}F^\gamma_{\mathbf{x}_1}(\mathbf{x})$, $g_{1:t}=(\mathbf{g}_1,\ldots, \mathbf{g}_t)$ and $g_{1:t,i}$ denotes the $i$-th row of $g_{1:t}$.
\end{prop}
The convergence result of Algorithm~\ref{alg:meta} when using AdaGrad to solve each subproblem is described by the following theorem.
\begin{thm} \label{thm:nadagrad}
Suppose Assumption~\ref{ass:new} holds and Algorithm~\ref{alg:adagrad} is employed for solving $F_k$ with $\eta_{k} = c/ \sqrt{k}$, $T_k=\lceil M_k \max\{a(2G+ \max_i\|g^k_{1:T_k,i}\|), \sum_{i=1}^d\|g^k_{1:T_k,i}\|/a, G_r\|\mathbf{x}_1^k - \mathbf{x}_{T_k+1}^k\|/\eta_k\}\rceil$ and $M_k\eta_k \geq 4/(a\gamma)$, and there exists $\Delta>0$ such that $\mathrm{E}[F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta$ for all $k\in\{1,\ldots, K\}$. Then with a total of $K$ stages Algorithm~\ref{alg:meta} guarantees
\begin{align*}
\mathrm{E}[\|G_\gamma(\mathbf{x}_\tau)\|^2] \leq& \frac{8\gamma\Delta(\alpha+1)}{K} + \frac{4\gamma^2c^2a(a+1)(\alpha+1)}{K},
\end{align*}
where $g^k_{1:t, i}$ denotes the cumulative stochastic gradient of the $i$-th coordinate at the $k$-th stage, and $\tau\in\{1,\ldots, K\}$ is sampled according to probabilities $p(\tau=k) = \frac{k^\alpha}{\sum_{k=1}^Kk^\alpha}$ with $\alpha\geq 1$.
\end{thm}
{\bf Remark:} It is obvious that the total number of iterations $\sum_{k=1}^KT_k$ is adaptive to the data. Next, let us discuss the iteration complexity in more detail. Note that $M_k = O(\sqrt{k})$. By the boundedness of the stochastic gradients, $\|g^k_{1:T_k, i}\|\leq O(\sqrt{T_k})$, so $T_k$ of order $O(k)$ will satisfy the condition in Theorem~\ref{thm:nadagrad}. Thus in the worst case, the iteration complexity for finding $\mathrm{E}[\|G_\gamma(\mathbf{x}_\tau)\|^2]\leq \epsilon^2$ is of order $\sum_{k=1}^K O(k)\leq O(1/\epsilon^4)$. We can show the potential advantage of adaptiveness similar to that in~\citep{chen18stagewise}. In particular, let us consider $r=\delta_{\mathcal{X}}$ and thus $G_r=0$ in the above result. When the cumulative growth of the stochastic gradients is slow, e.g., assuming $\|g^k_{1:T_k,i}\|\leq O({T_k}^{\beta})$ with $\beta<1/2$, then $T_k = O(k^{1/(2(1-\beta))})$ will suffice, and the total number of iterations is $\sum_{k=1}^K T_k \leq K^{1+1/(2(1-\beta))}\leq O(1/\epsilon^{2+1/(1-\beta)})$, which is better than $O(1/\epsilon^4)$.
\paragraph{SVRG.} Next, we present SVRG for solving each subproblem when it has a finite-sum form and $g$ is a smooth function. In particular, we consider the following problem:
\begin{align}\label{eqn:fs}
\min_{\mathbf{x}\in\mathbb{R}^d} F(\mathbf{x}):=\frac{1}{n_1}\sum_{i=1}^{n_1} g_i(\mathbf{x}) + r(\mathbf{x}) - \frac{1}{n_2}\sum_{j=1}^{n_2}h_j(\mathbf{x}),
\end{align}
and make the following assumption.
\begin{ass}\label{ass:4}
Assume each $g_i(\mathbf{x})$ is an $L$-smooth function.
\end{ass}
It is notable that the smoothness of $h$ is not necessary for developing the SVRG algorithm since at each stage we linearize $h(\mathbf{x})$. Each subproblem is of the following form:
\begin{align*}
\min_{\mathbf{x}\in\mathbb{R}^d} F^\gamma_{\mathbf{x}_1}(\mathbf{x}):=\frac{1}{n_1}\sum_{i=1}^{n_1} g_i(\mathbf{x}) + r(\mathbf{x}) - h(\mathbf{x}_1)- \frac{1}{n_2}\sum_{j=1}^{n_2}\partial h_j(\mathbf{x}_1)^{\top}(\mathbf{x} - \mathbf{x}_1) + \frac{\gamma}{2}\|\mathbf{x} - \mathbf{x}_1\|^2.
\end{align*}
\begin{algorithm}[t]
\caption{SVRG($F^\gamma_{\mathbf{x}_1}, \mathbf{x}_1$, $T$, $S$)}\label{alg:SVRG}
\begin{algorithmic}[1]
\STATE \textbf{Input}: $\mathbf{x}_1\in\text{dom}(r)$, the number of inner iterations $T$, and the number of outer loops $S$.
\STATE $\bar\mathbf{x}^{(0)}=\mathbf{x}_1$
\FOR{$s=1,2,\ldots,S$}
\STATE $\bar\mathbf{g}_s=\nabla g(\bar\mathbf{x}^{(s-1)}) - \partial h(\mathbf{x}_1)$, $\mathbf{x}_0^{(s)} =\bar\mathbf{x}^{(s-1)}$
\FOR{$t=1,2,\ldots,T$}
\STATE Choose $i_t\in\{1,\ldots,n_1\}$ uniformly at random
\STATE $\nabla_t^{(s)}=\nabla g_{i_t}(\mathbf{x}_{t-1}^{(s)})-\nabla g_{i_t}(\bar\mathbf{x}^{(s-1)})+\bar\mathbf{g}_s.$
\STATE
$\mathbf{x}_t^{(s)} = \arg\min_{\mathbf{x}} \langle \nabla_t^{(s)}, \mathbf{x}-\mathbf{x}_{t-1}^{(s)}\rangle+\frac{1}{2\eta}\|\mathbf{x}-\mathbf{x}_{t-1}^{(s)}\|_2^2 +r(\mathbf{x}) + \frac{\gamma}{2}\|\mathbf{x} - \mathbf{x}_1\|^2.$
\ENDFOR
\STATE $\bar\mathbf{x}^{(s)}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{x}_{t}^{(s)}$
\ENDFOR
\STATE \textbf{Output: } $\bar\mathbf{x}^{(S)}$
\end{algorithmic}
\end{algorithm}
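The key ingredient of Algorithm~\ref{alg:SVRG} is the variance-reduced estimate in Step 7: taking expectation over the uniformly sampled index $i_t$ recovers the full gradient at the current iterate, since the two snapshot terms cancel in expectation. A minimal sketch of this estimator and an unbiasedness check on a toy finite sum (the function names are ours):

```python
def svrg_gradient(grad_fns, x, x_bar, g_bar, i):
    """Step 7: variance-reduced stochastic gradient
    grad_i(x) - grad_i(x_bar) + g_bar, where g_bar is the
    full gradient stored at the snapshot point x_bar."""
    return grad_fns[i](x) - grad_fns[i](x_bar) + g_bar
```

Averaging the estimator over all components $i$ equals the full gradient at \texttt{x}, which is the unbiasedness property exploited in Proposition~\ref{prop:svrg}.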
We use the proximal SVRG proposed in~\citep{DBLP:journals/siamjo/Xiao014} to solve the above problem, which is presented in Algorithm~\ref{alg:SVRG}. Its convergence result for solving $F^\gamma_{\mathbf{x}_1}$ is given below.
\begin{prop}\label{prop:svrg}
By setting $\eta<1/(4L)$ and choosing $T$ large enough such that $\rho = \frac{1}{\gamma\eta (1 - 4L\eta)T} + \frac{4L\eta(T+1)}{(1-4L\eta)T}<1$, we have
\begin{align*}
\mathrm{E}[F^\gamma_{\mathbf{x}_1}(\bar\mathbf{x}^{(S)}) - F^\gamma_{\mathbf{x}_1}(\mathbf{x}_*)]\leq \rho^S[F^\gamma_{\mathbf{x}_1}(\mathbf{x}_1) - F^\gamma_{\mathbf{x}_1}(\mathbf{x}_*)].
\end{align*}
In particular, if we set $\eta=0.05/L$, $T \geq \max(2, 200L/\gamma)$, we have
\begin{align*}
\mathrm{E}[F^\gamma_{\mathbf{x}_1}(\bar\mathbf{x}^{(S)}) - F^\gamma_{\mathbf{x}_1}(\mathbf{x}_*)]\leq 0.5^S[F^\gamma_{\mathbf{x}_1}(\mathbf{x}_1) - F^\gamma_{\mathbf{x}_1}(\mathbf{x}_*)].
\end{align*}
\end{prop}
{\bf Remark:} The gradient complexity of SVRG is $(n + T)S$, where $n = n_1 + n_2$.
\begin{thm} \label{thm:svrg}
Suppose Assumption~\ref{ass:0} and Assumption~\ref{ass:4} hold, and Algorithm~\ref{alg:SVRG} is employed for solving $F_k$ with $\eta_{k} = 0.05/L$, $T_k\geq \max(2, 200L/\gamma)$, and $S_k =\lceil \log_2(k)\rceil$. Then with a total of $K$ stages Algorithm~\ref{alg:meta} guarantees
\begin{align*}
\mathrm{E}[\|G_\gamma(\mathbf{x}_\tau)\|^2] \leq& \frac{12\gamma\Delta(\alpha+1)}{K},
\end{align*}
where $\tau\in\{1,\ldots, K\}$ is sampled according to probabilities $p(\tau=k) = \frac{k^\alpha}{\sum_{k=1}^Kk^\alpha}$ with $\alpha\geq 1$.
\end{thm}
{\bf Remark:} For finding a solution such that $\mathrm{E}[\|G_\gamma(\mathbf{x}_\tau)\|^2]\leq \epsilon^2$, the total number of stages is $K= O(\gamma/\epsilon^2)$ and the total gradient complexity is $\widetilde O((n\gamma + L)/\epsilon^2)$.
\subsection{Convergence results for finding a (nearly) $\epsilon$-critical point}\label{sec:stationary}
In this subsection, we discuss the convergence results of the proposed algorithms for finding a (nearly) $\epsilon$-critical point. The key is to connect convergence in terms of $\|G_\gamma(\mathbf{x})\|$ to convergence in terms of the (sub)gradient. To this end, we present the following result.
\begin{prop}\label{prop:5}
If $g(\mathbf{x})+r(\mathbf{x})$ is differentiable and has an $L_{g+r}$-H\"{o}lder continuous gradient with exponent $\nu\in(0,1]$, we have
\begin{align*}
\text{dist}(\partial h(\mathbf{x}), \nabla g(\mathbf{x}) + \nabla r(\mathbf{x})) &\leq \frac{L_{g+r}}{\gamma^\nu}\|G_\gamma(\mathbf{x})\|^\nu + \|G_\gamma(\mathbf{x})\|.
\end{align*}
If $h(\mathbf{x})$ is differentiable and has an $L_h$-H\"{o}lder continuous gradient with exponent $\nu\in(0,1]$, we have
\begin{align*}
\text{dist}(\nabla h(\mathbf{x}_+), \partial g(\mathbf{x}_+) + \partial r(\mathbf{x}_+)) &\leq \frac{L_h}{\gamma^\nu}\|G_\gamma(\mathbf{x})\|^\nu + \|G_\gamma(\mathbf{x})\|,
\end{align*}
where $\mathbf{x}_+ = P_\gamma(\mathbf{x})$ and $G_\gamma(\mathbf{x}) = \gamma(\mathbf{x} - \mathbf{x}_+)$.
\end{prop}
\begin{proof}From the proof of Proposition~\ref{prop:1}, we have
\begin{align*}
0\in \partial g(\mathbf{x}_+) + \partial r(\mathbf{x}_+)-\partial h(\mathbf{x}) + \gamma(\mathbf{x}_+ - \mathbf{x}).
\end{align*}
When $g(\mathbf{x})+r(\mathbf{x})$ is differentiable and has an $L_{g+r}$-H\"{o}lder continuous gradient, there exists $\mathbf{v}\in\partial h(\mathbf{x})$ such that
\begin{align*}
\|\nabla g(\mathbf{x}) + \nabla r(\mathbf{x}) - \mathbf{v}\| &\leq \|\nabla g(\mathbf{x}_+) +\nabla r(\mathbf{x}_+) - \nabla g(\mathbf{x}) - \nabla r(\mathbf{x})\| + \gamma\|\mathbf{x} - \mathbf{x}_+\|\\
&\leq L_{g+r}\|\mathbf{x} - \mathbf{x}_+\|^\nu + \|G_\gamma(\mathbf{x})\| = \frac{L_{g+r}}{\gamma^\nu}\|G_\gamma(\mathbf{x})\|^\nu + \|G_\gamma(\mathbf{x})\|.
\end{align*}
Similarly, we can prove the case when $h$ is differentiable and has a H\"{o}lder continuous gradient.
\end{proof}
Next, we present a general result that can be used to derive the convergence rate for the proposed algorithms.
\begin{thm}\label{thm:6}
Assume Algorithm~\ref{alg:meta} returns a solution $\mathbf{x}_\tau$ such that
\begin{align*}
\mathrm{E}[\|G_\gamma(\mathbf{x}_\tau)\|^2] \leq& O\left(\frac{1}{K}\right)
\end{align*}
under appropriate conditions. Then if $g(\mathbf{x})+r(\mathbf{x})$ is differentiable and has $L$-H\"{o}lder continuous gradient with $\nu\in(0,1]$, we have
\begin{align*}
\mathrm{E}[\text{dist}(\partial h(\mathbf{x}_\tau), \nabla g(\mathbf{x}_\tau) + \nabla r(\mathbf{x}_\tau))] &\leq O\left(\frac{1}{K^{\nu/2}} + \frac{1}{K^{1/2}}\right).
\end{align*}
If $h(\mathbf{x})$ is differentiable and has $L$-H\"{o}lder continuous gradient with $\nu\in(0,1]$, we have
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq O\left(\frac{1}{K^{1/2}}\right), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{z}_\tau), \partial g(\mathbf{z}_\tau) + \partial r(\mathbf{z}_\tau))] &\leq O\left(\frac{1}{K^{\nu/2}} + \frac{1}{K^{1/2}}\right),
\end{align*}
where $\mathbf{z}_\tau = P_\gamma(\mathbf{x}_\tau)$.
\end{thm}
{\bf Remark:} In the above results, the value of $\gamma$ is set to be a constant. We can see that the convergence of SPG and AdaGrad is automatically adaptive to the H\"{o}lder continuity of the involved functions without requiring the values of $L$ and $\nu$ to run the algorithm. Both algorithms have a worst-case iteration complexity of $O(1/\epsilon^{4/\nu})$ for finding a (nearly) $\epsilon$-critical point. When the problem has a finite-sum structure~(\ref{eqn:fs}) and $g(\mathbf{x})$ is smooth, SVRG has a gradient complexity of $O(n/\epsilon^{2/\nu})$ for finding a (nearly) $\epsilon$-critical point.
\subsection{Improved Convergence results when $\nu$ is known}\label{sec:DC:new:improve}
In this subsection, we present some improved convergence results when $\nu$ is known. The key idea is to use an increasing sequence of regularization parameters $\gamma_k$ that depend on $\nu$.
\begin{lemma}\label{lem:DC:new1}
Suppose Assumption~\ref{ass:1} (i) holds and Algorithm~\ref{alg:sgd} is employed for solving $F_k$ with parameters given in Proposition~\ref{prop:sgd} and with $\gamma_k= 3Lk^{\frac{1-\nu}{ 1+ \nu}}$ and $T_k = 3Lk/\gamma_k+3$, and there exists $\Delta>0$ such that $\mathrm{E}[F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta$ for all $k\in\{1,\ldots, K\}$; then with a total of $K$ stages Algorithm~\ref{alg:meta} guarantees
\begin{align*}
\mathrm{E}\bigg[\|\mathbf{z}_{\tau} - \mathbf{x}_\tau\|^2\bigg]&\leq \frac{8 \Delta (\alpha +1)}{3LK^{\frac{2}{1+\nu}}} + \frac{32G^2(\alpha +1) }{3K^{\frac{2}{1+\nu}}L^2},
\end{align*}
where $\tau\in\{1,\ldots, K\}$ is sampled according to probabilities $p(\tau=k) = \frac{k^\alpha}{\sum_{k=1}^Kk^\alpha}$ with $\alpha\geq 1$.
Similarly, suppose Assumption~\ref{ass:1} (ii) holds and Algorithm~\ref{alg:sgd} is employed for solving $F_k$ with parameters given in Proposition~\ref{prop:sgd} and with $\gamma_k= ck^{\frac{1-\nu}{ 1+ \nu}}$ and $T_k = k/\gamma_k +1$, and there exists $\Delta>0$ such that $\mathrm{E}[F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta$ for all $k\in\{1,\ldots, K\}$; then with a total of $K$ stages Algorithm~\ref{alg:meta} guarantees
\begin{align*}
\mathrm{E}\bigg[\|\mathbf{z}_{\tau} - \mathbf{x}_\tau\|^2\bigg]&\leq \frac{8\Delta(\alpha+1)}{cK^{\frac{2}{1+\nu}}} + \frac{448G^2 (\alpha+1)}{cK^{\frac{2}{1+\nu}}},
\end{align*}
where
$\tau\in\{1,\ldots, K\}$ is sampled according to probabilities $p(\tau=k) = \frac{k^\alpha}{\sum_{k=1}^Kk^\alpha}$ with $\alpha\geq 1$.
\end{lemma}
\begin{cor}\label{cor:7}
Suppose the same conditions as in Lemma~\ref{lem:DC:new1} hold. If $g(\mathbf{x})+r(\mathbf{x})$ is differentiable and has $L$-H\"{o}lder continuous gradient with $\nu\in(0,1]$, by setting $K=O(1/\epsilon^{\frac{1+\nu}{\nu}})$ we have
\begin{align*}
\mathrm{E}[\text{dist}(\partial h(\mathbf{x}_\tau), \nabla g(\mathbf{x}_\tau) + \nabla r(\mathbf{x}_\tau))] &\leq O(\epsilon),
\end{align*}
and the total iteration complexity is $\sum_{k=1}^KT_k =O(1/\epsilon^{\frac{1+3\nu}{\nu}})$.
If $h(\mathbf{x})$ is differentiable and has $L$-H\"{o}lder continuous gradient with $\nu\in(0,1]$, by setting $K=O(1/\epsilon^{\frac{1+\nu}{\nu}})$ we have
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq O\left(\epsilon^{\frac{1}{\nu}}\right), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{z}_\tau), \partial g(\mathbf{z}_\tau) + \partial r(\mathbf{z}_\tau))] &\leq O(\epsilon),
\end{align*}
where $\mathbf{z}_\tau = P_\gamma(\mathbf{x}_\tau)$. The total iteration complexity is $\sum_{k=1}^KT_k =O(1/\epsilon^{\frac{1+3\nu}{\nu}})$.
\end{cor}
{\bf Remark:} The iteration complexity $O(1/\epsilon^{\frac{1+3\nu}{\nu}})$ in the above corollary improves upon the $O(1/\epsilon^{4/\nu})$ complexity derived from Theorem~\ref{thm:6} for SPG.
\begin{proof}
By the analysis of Proposition~\ref{prop:5}, if $g(\mathbf{x})+r(\mathbf{x})$ is differentiable and has $L$-H\"{o}lder continuous gradient with $\nu\in(0,1]$, we have
\begin{align*}
\mathrm{E}[\text{dist}(\partial h(\mathbf{x}_\tau), \nabla g(\mathbf{x}_\tau) + \nabla r(\mathbf{x}_\tau))] &\leq \mathrm{E}[L_{g+r}\|\mathbf{z}_\tau - \mathbf{x}_\tau\|^\nu + \gamma_\tau \|\mathbf{z}_\tau - \mathbf{x}_\tau\| ]\\
&\leq L_{g+r}\mathrm{E}[\|\mathbf{z}_\tau - \mathbf{x}_\tau\|^\nu] + \gamma_K \mathrm{E}[\|\mathbf{z}_\tau - \mathbf{x}_\tau\|]\\
&\leq L_{g+r}(\mathrm{E}[\|\mathbf{z}_\tau - \mathbf{x}_\tau\|^2])^{\nu/2}+ \gamma_K (\mathrm{E}[\|\mathbf{z}_\tau - \mathbf{x}_\tau\|^2])^{1/2},
\end{align*}
where we use the concavity of $x^{s}$ for $s<1$ and $x\geq 0$. By setting $K=O(1/\epsilon^{\frac{1+\nu}{\nu}})$ and $\gamma_K = O(K^{\frac{1-\nu}{ 1+ \nu}})$, and using the results in Lemma~\ref{lem:DC:new1}, we have
\begin{align*}
\mathrm{E}[\text{dist}(\partial h(\mathbf{x}_\tau), \nabla g(\mathbf{x}_\tau) + \nabla r(\mathbf{x}_\tau))] &\leq O(\epsilon).
\end{align*}
Similarly, if $h(\mathbf{x})$ is differentiable and has $L$-H\"{o}lder continuous gradient with $\nu\in(0,1]$, by the analysis of Proposition~\ref{prop:5} we have
\begin{align*}
\mathrm{E}[\text{dist}(\nabla h(\mathbf{z}_\tau), \partial g(\mathbf{z}_\tau) + \partial r(\mathbf{z}_\tau))] &\leq \mathrm{E}[L_{h}\|\mathbf{z}_\tau - \mathbf{x}_\tau\|^\nu + \gamma_\tau \|\mathbf{z}_\tau - \mathbf{x}_\tau\| ]\\
&\leq L_{h}\mathrm{E}[\|\mathbf{z}_\tau - \mathbf{x}_\tau\|^\nu] + \gamma_K \mathrm{E}[\|\mathbf{z}_\tau - \mathbf{x}_\tau\|].
\end{align*}
We can finish the proof using similar analysis.
\end{proof}
Next, we develop the results for AdaGrad by using the following lemma.
\begin{lemma}\label{lem:DC:new2}
Suppose Assumption~\ref{ass:new} holds and Algorithm~\ref{alg:adagrad} is employed for solving $F_k$ with $\gamma_k= k^{\frac{1-\nu}{ 1+ \nu}}$, $\eta_{k} = c/ \sqrt{\gamma_k k}$, $T_k=\lceil M_k \max\{a(2G+ \max_i\|g^k_{1:T_k,i}\|), \sum_{i=1}^d\|g^k_{1:T_k,i}\|/a, G_r\|\mathbf{x}_1^k - \mathbf{x}_{T_k+1}^k\|/\eta_k\}\rceil$ and $M_k\eta_k \geq 4/(a\gamma_k)$ with $c$ and $a$ being constants, and there exists $\Delta>0$ such that $\mathrm{E}[F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta$ for all $k\in\{1,\ldots, K\}$. Then with a total of $K$ stages Algorithm~\ref{alg:meta} guarantees
\begin{align*}
\mathrm{E}\bigg[\|\mathbf{z}_{\tau} - \mathbf{x}_\tau\|^2\bigg]&\leq \frac{8\Delta (\alpha +1)}{K^{\frac{2}{ 1+ \nu}}} + \frac{4a(a+1)c^2(\alpha +1) }{K^{\frac{2}{ 1+ \nu}}},
\end{align*}
where $g^k_{1:t, i}$ denotes the cumulative stochastic gradient of the $i$-th coordinate at the $k$-th stage, and $\tau\in\{1,\ldots, K\}$ is sampled according to probabilities $p(\tau=k) = \frac{k^\alpha}{\sum_{k=1}^Kk^\alpha}$ with $\alpha\geq 1$.
\end{lemma}
\begin{cor}\label{cor:8}
Suppose the same conditions as in Lemma~\ref{lem:DC:new2} hold. If $g(\mathbf{x})+r(\mathbf{x})$ is differentiable and has $L$-H\"{o}lder continuous gradient with $\nu\in(0,1]$, by setting $K=O(1/\epsilon^{\frac{1+\nu}{\nu}})$ we have
\begin{align*}
\mathrm{E}[\text{dist}(\partial h(\mathbf{x}_\tau), \nabla g(\mathbf{x}_\tau) + \nabla r(\mathbf{x}_\tau))] &\leq O(\epsilon),
\end{align*}
and the total iteration complexity is $\sum_{k=1}^KT_k =O(1/\epsilon^{\frac{1+3\nu}{\nu}})$.
If $h(\mathbf{x})$ is differentiable and has $L$-H\"{o}lder continuous gradient with $\nu\in(0,1]$, by setting $K=O(1/\epsilon^{\frac{1+\nu}{\nu}})$ we have
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq O\left(\epsilon^{\frac{1}{\nu}}\right), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{z}_\tau), \partial g(\mathbf{z}_\tau) + \partial r(\mathbf{z}_\tau))] &\leq O(\epsilon),
\end{align*}
where $\mathbf{z}_\tau = P_\gamma(\mathbf{x}_\tau)$. The total iteration complexity is $\sum_{k=1}^KT_k =O(1/\epsilon^{\frac{1+3\nu}{\nu}})$.
\end{cor}
{\bf Remark:} The worst-case iteration complexity can be derived as follows. According to the setting, $M_k = O(\sqrt{k/\gamma_k}) = O(k^{\frac{\nu}{1+\nu}})$. Since $\max\{a(2G+ \max_i\|g^k_{1:T_k,i}\|), \sum_{i=1}^d\|g^k_{1:T_k,i}\|/a\}=O(\sqrt{T_k})$ and $G_r\|\mathbf{x}_1^k - \mathbf{x}_{T_k+1}^k\|/\eta_k =O(\sqrt{k/\gamma_k})$, we have that $T_k = O(k^{\frac{2\nu}{1+\nu}})$. Then $\sum_k T_k \leq O(1/\epsilon^{\frac{1+3\nu}{\nu}})$. Nevertheless, the convergence of SSDC-AdaGrad is adaptive to the data.
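For concreteness, the total complexity in this remark follows from the elementary bound $\sum_{k=1}^K k^{p} = O(K^{p+1})$ for $p\geq 0$:
\begin{align*}
\sum_{k=1}^K T_k = O\left(\sum_{k=1}^K k^{\frac{2\nu}{1+\nu}}\right) = O\left(K^{\frac{1+3\nu}{1+\nu}}\right) = O\left(\left(\epsilon^{-\frac{1+\nu}{\nu}}\right)^{\frac{1+3\nu}{1+\nu}}\right) = O\left(1/\epsilon^{\frac{1+3\nu}{\nu}}\right),
\end{align*}
where the last step plugs in $K=O(1/\epsilon^{\frac{1+\nu}{\nu}})$.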
\begin{lemma}\label{lem:DC:new3}
Suppose Assumptions~\ref{ass:0} and \ref{ass:4} hold, and Algorithm~\ref{alg:SVRG} is employed for solving $F_k$ with $\gamma_k= ck^{\frac{1-\nu}{ 1+ \nu}}$ for a constant $c>0$, $\eta_{k} = 0.05/L$, $T_k\geq \max(2, 200L/\gamma_k)$, and $S_k =\lceil \log_2(k)\rceil$, then with a total of $K$ stages Algorithm~\ref{alg:meta} guarantees
\begin{align*}
\mathrm{E}\bigg[\|\mathbf{z}_{\tau} - \mathbf{x}_\tau\|^2\bigg]&\leq \frac{12\Delta (\alpha +1)}{cK^{\frac{2}{ 1+ \nu}}},
\end{align*}
where $\tau\in\{1,\ldots, K\}$ is sampled according to probabilities $p(\tau=k) = \frac{k^\alpha}{\sum_{k=1}^Kk^\alpha}$ with $\alpha\geq 1$.
\end{lemma}
\begin{cor}\label{cor:9}
Suppose the same conditions as in Lemma~\ref{lem:DC:new3} hold. If $g(\mathbf{x})+r(\mathbf{x})$ is differentiable and has $L$-H\"{o}lder continuous gradient with $\nu\in(0,1]$, by setting $K=O(1/\epsilon^{\frac{1+\nu}{\nu}})$ we have
\begin{align*}
\mathrm{E}[\text{dist}(\partial h(\mathbf{x}_\tau), \nabla g(\mathbf{x}_\tau) + \nabla r(\mathbf{x}_\tau))] &\leq O(\epsilon),
\end{align*}
and the total gradient complexity is $O(n/\epsilon^{\frac{1+\nu}{\nu}})$.
If $h(\mathbf{x})$ is differentiable and has $L$-H\"{o}lder continuous gradient with $\nu\in(0,1]$, by setting $K=O(1/\epsilon^{\frac{1+\nu}{\nu}})$ we have
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq O\left(\epsilon^{\frac{1}{\nu}}\right), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{z}_\tau), \partial g(\mathbf{z}_\tau) + \partial r(\mathbf{z}_\tau))] &\leq O(\epsilon),
\end{align*}
where $\mathbf{z}_\tau = P_\gamma(\mathbf{x}_\tau)$. The total gradient complexity is $O(n/\epsilon^{\frac{1+\nu}{\nu}})$.
\end{cor}
The proofs of Corollaries~\ref{cor:8} and \ref{cor:9} are similar to that of Corollary~\ref{cor:7} and hence are omitted. The proofs of Lemmas~\ref{lem:DC:new2} and \ref{lem:DC:new3} are presented in the Appendix.
\section{Tackling Non-Smooth Non-Convex Regularization}\label{sec:nsncr}
In this section, we consider the more general class of problems where $r(\mathbf{x})$ is non-smooth and non-convex and is not necessarily a DC function (e.g., the $\ell_0$ norm). Even if $r(\mathbf{x})$ is a DC function such that both components in its DC decomposition $r(\mathbf{x}) = r_1(\mathbf{x}) - r_2(\mathbf{x})$ are non-differentiable functions without H\"{o}lder continuous gradients (e.g., $\ell_{1-2}$ regularization, the capped $\ell_1$ norm), the theories presented in this section are useful for deriving non-asymptotic convergence results in terms of finding an $\epsilon$-critical point. Note that in this case the results presented in Section~\ref{sec:stationary} are not applicable.
In particular, we consider the following class of problems:
\begin{align}\label{eqn:Pf}
\min_{\mathbf{x}\in\mathbb{R}^d} \underbrace{g(\mathbf{x}) - h(\mathbf{x})}\limits_{f(\mathbf{x})}+ r(\mathbf{x}),
\end{align}
where $g(\mathbf{x})$ and $h(\mathbf{x})$ are real-valued lower-semicontinuous convex functions, $g$ has a Lipschitz continuous gradient, and $r(\mathbf{x})$ is a proper non-smooth and non-convex lower-semicontinuous function. Both $g$ and $h$ can be stochastic functions as in~(\ref{eqn:PS}). We assume $r(\mathbf{x})$ is simple such that its proximal mapping exists and can be efficiently computed, i.e.,
\begin{align*}
\text{prox}_{\mu r}(\mathbf{x})=\arg\min_{\mathbf{y}\in\mathbb{R}^d}\frac{1}{2\mu}\|\mathbf{y} - \mathbf{x}\|^2 + r(\mathbf{y}).
\end{align*}
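To make this assumption concrete, the following Python sketch evaluates two standard closed-form proximal mappings that fit this setting: soft thresholding for the (convex) $\ell_1$ norm and hard thresholding for the non-convex $\ell_0$ norm. The function names and parameter choices are illustrative only.

```python
import numpy as np

def prox_l1(x, mu, lam=1.0):
    """Proximal mapping of r(y) = lam * ||y||_1: soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - mu * lam, 0.0)

def prox_l0(x, mu, lam=1.0):
    """Proximal mapping of r(y) = lam * ||y||_0: hard thresholding.

    A coordinate is kept exactly when keeping it is cheaper than zeroing it,
    i.e. when x_i**2 / (2 * mu) > lam  <=>  |x_i| > sqrt(2 * mu * lam).
    """
    return np.where(np.abs(x) > np.sqrt(2.0 * mu * lam), x, 0.0)

# Quick sanity checks with mu = 0.5, lam = 1 (l0 threshold sqrt(2*mu*lam) = 1):
mu = 0.5
assert np.isclose(prox_l1(np.array([1.7]), mu)[0], 1.2)  # shrunk by mu*lam
assert prox_l1(np.array([0.3]), mu)[0] == 0.0            # below mu*lam
assert prox_l0(np.array([1.7]), mu)[0] == 1.7            # |x| > 1, kept
assert prox_l0(np.array([0.9]), mu)[0] == 0.0            # |x| < 1, zeroed
```

At the hard-thresholding boundary $|x_i| = \sqrt{2\mu\lambda}$ both keeping and zeroing the coordinate are minimizers, so the proximal mapping is set-valued there.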
The problem is challenging due to the presence of the non-smooth non-convex function $r$. To tackle this function, we introduce the Moreau envelope of $r$:
\begin{align*}
r_{\mu}(\mathbf{x}) = \min_{\mathbf{y}\in\mathbb{R}^d}\frac{1}{2\mu}\|\mathbf{y} - \mathbf{x}\|^2 + r(\mathbf{y}),
\end{align*}
where $\mu>0$. A nice property of the Moreau envelope function is that it can be written as a DC function:
\begin{align*}
r_{\mu}(\mathbf{x})= \frac{1}{2\mu}\|\mathbf{x}\|^2 - \underbrace{\max_{\mathbf{y}\in\mathbb{R}^d}\frac{1}{\mu}\mathbf{y}^{\top}\mathbf{x} - \frac{1}{2\mu}\|\mathbf{y}\|^2 - r(\mathbf{y})}\limits_{R_\mu(\mathbf{x})},
\end{align*}
where $R_\mu(\mathbf{x})$ is a convex function because it is the max of convex functions of $\mathbf{x}$. We also have several nice properties about the Moreau envelope function that will be useful for our analysis.
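As a quick numerical sanity check of this decomposition (a sketch, not part of the analysis), one can verify the identity $r_\mu(\mathbf{x}) = \frac{1}{2\mu}\|\mathbf{x}\|^2 - R_\mu(\mathbf{x})$ for the $\ell_1$ norm, where $r_\mu$ has the known Huber closed form and both extremal problems are solved by the soft-thresholding prox:

```python
import numpy as np

mu, lam = 0.5, 1.0
rng = np.random.default_rng(0)

def prox(x):
    # prox_{mu r} for r(y) = lam * ||y||_1: soft thresholding
    return np.sign(x) * np.maximum(np.abs(x) - mu * lam, 0.0)

def r(y):
    return lam * np.sum(np.abs(y))

def moreau_env(x):
    # r_mu(x) = min_y ||y - x||^2 / (2 mu) + r(y), attained at y = prox(x)
    v = prox(x)
    return np.sum((v - x) ** 2) / (2 * mu) + r(v)

def R_mu(x):
    # max_y (1/mu) y^T x - ||y||^2 / (2 mu) - r(y), also attained at y = prox(x)
    v = prox(x)
    return v @ x / mu - np.sum(v ** 2) / (2 * mu) - r(v)

def huber_sum(x):
    # known closed form of the Moreau envelope of lam * |.| (Huber function)
    a = np.abs(x)
    return np.sum(np.where(a <= mu * lam, a ** 2 / (2 * mu),
                           lam * a - mu * lam ** 2 / 2))

for _ in range(10):
    x = rng.normal(size=6)
    # the envelope matches its Huber closed form ...
    assert np.isclose(moreau_env(x), huber_sum(x))
    # ... and the DC identity r_mu(x) = ||x||^2/(2 mu) - R_mu(x) holds
    assert np.isclose(moreau_env(x), np.sum(x ** 2) / (2 * mu) - R_mu(x))
```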
\begin{lemma}\label{lem:prox}
\begin{align}
&\frac{1}{\mu}\text{prox}_{\mu r}(\mathbf{x})\subseteq \partial R_\mu(\mathbf{x})\\
&\forall\mathbf{v}\in \text{prox}_{\mu r}(\mathbf{x}),~\frac{1}{\mu}(\mathbf{x} - \mathbf{v})\in \hat \partial r(\mathbf{v}),
\end{align}
where $\hat\partial$ denotes the Fr\'{e}chet subdifferential.
\end{lemma}
The proof of the first fact can be found in~\citep[Eqn.~7]{Liu2018}, and the second fact follows from~\citep[Theorem~10.1]{rockafellar-1970a}. Given the Moreau envelope of $r$, the key idea is to solve the following DC problem:
\begin{align}\label{eqn:New}
\min_{\mathbf{x}\in\mathbb{R}^d} g(\mathbf{x}) - h(\mathbf{x})+ \frac{1}{2\mu}\|\mathbf{x}\|^2 - R_{\mu}(\mathbf{x}) = \underbrace{g(\mathbf{x}) + \frac{1}{2\mu}\|\mathbf{x}\|^2}\limits_{\hat g(\mathbf{x})} - \underbrace{(h(\mathbf{x}) + R_{\mu}(\mathbf{x}))}\limits_{\hat h(\mathbf{x})}.
\end{align}
By carefully controlling the value of $\mu$ and combining the results presented in the previous section, we are able to derive non-asymptotic convergence results for the original problem. It is worth mentioning that using the Moreau envelope of $r$ and its DC decomposition for handling non-smooth non-convex functions was first proposed in~\citep{Liu2018}. However, their algorithms are deterministic and their convergence results are only asymptotic. To formally state our non-asymptotic convergence results, we make the following assumptions.
\begin{ass}\label{ass:last}
Assume one of the following conditions holds:
\begin{itemize}
\item[(i)] $r$ is Lipschitz continuous.
\item[(ii)] $r$ is lower bounded and finite-valued over $\mathbb{R}^d$.
\item[(iii)] $f(\mathbf{x}) + r_{\mu}(\mathbf{x})$ is level bounded for a small $\mu<1$, and $r$ is finite-valued on a compact set and lower bounded over $\mathbb{R}^d$.
\end{itemize}
\end{ass}
{\bf Remark:} The above assumptions on $r$ capture many interesting non-convex non-smooth regularizers. For example, $\ell_{1-2}$ regularization and the capped $\ell_1$ norm satisfy Assumption~\ref{ass:last} (i). The $\ell_0$ norm satisfies Assumption~\ref{ass:last} (ii). A coercive function $r$ usually satisfies Assumption~\ref{ass:last} (iii), e.g., the $\ell_p$ norm $r(\mathbf{x}) = \sum_{i=1}^d|x_i|^p$ for $p\in(0,1)$.
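As a concrete illustration of these classes (a sketch with crude, illustrative constants), the following Python snippet evaluates the regularizers named above and numerically checks that $\ell_{1-2}$ and the capped $\ell_1$ norm are Lipschitz continuous with constants $\sqrt{d}+1$ and $\sqrt{d}$ respectively, while the $\ell_0$ norm is merely lower bounded by $0$:

```python
import numpy as np

theta = 1.0  # cap level for the capped l1 norm (illustrative)

def l1_minus_l2(x):          # l_{1-2} regularization
    return np.sum(np.abs(x)) - np.linalg.norm(x)

def capped_l1(x):            # capped l1 norm
    return np.sum(np.minimum(np.abs(x), theta))

def l0(x):                   # l0 "norm"
    return float(np.count_nonzero(x))

rng = np.random.default_rng(1)
d = 4
for _ in range(200):
    x, y = rng.normal(size=d), rng.normal(size=d)
    dist = np.linalg.norm(x - y)
    # Lipschitz bounds: |l1| changes by at most sqrt(d)*dist, |l2| by dist
    assert abs(l1_minus_l2(x) - l1_minus_l2(y)) <= (np.sqrt(d) + 1) * dist + 1e-12
    assert abs(capped_l1(x) - capped_l1(y)) <= np.sqrt(d) * dist + 1e-12
    # l0 is not Lipschitz but is lower bounded by 0
    assert l0(x) >= 0.0
```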
When employing the algorithms presented in the last section to solve problem~(\ref{eqn:New}), we let $r(\mathbf{x}) = \frac{1}{2\mu}\|\mathbf{x}\|^2$ play the role of the regularizer, keep $g$ unchanged, and set $h= \hat h$. It is also notable that the new component $R_{\mu}(\mathbf{x})$ in $\hat h$ is deterministic, and its subgradient can be computed according to Lemma~\ref{lem:prox}. Thus the condition in Assumption~\ref{ass:1} (i) is sufficient for running SPG (option 1), and the condition in Assumption~\ref{ass:4} is sufficient for running SVRG.
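To illustrate the overall reduction, the following Python sketch runs the stagewise procedure on a toy least-squares problem with an $\ell_0$ regularizer handled through its Moreau envelope. For simplicity it takes $h=0$ and solves each strongly convex subproblem exactly in closed form instead of with SPG or SVRG, so it is a minimal sketch of the construction rather than the analyzed algorithm; all problem data and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5
A = rng.normal(size=(n, d))          # toy least-squares data
b = rng.normal(size=n)
lam, mu, gamma = 0.5, 0.1, 1.0       # illustrative constants

def prox_l0(x):
    # prox_{mu * lam ||.||_0}: hard thresholding
    return np.where(np.abs(x) > np.sqrt(2.0 * mu * lam), x, 0.0)

def F_mu(x):
    # smoothed objective g(x) + r_mu(x), with g(x) = 0.5 ||Ax - b||^2 and h = 0
    v = prox_l0(x)
    return (0.5 * np.sum((A @ x - b) ** 2)
            + np.sum((x - v) ** 2) / (2 * mu)
            + lam * np.count_nonzero(v))

# Hessian of each stage's strongly convex subproblem (constant here)
H = A.T @ A + (1.0 / mu + gamma) * np.eye(d)
x = np.zeros(d)
vals = [F_mu(x)]
for k in range(50):
    v = prox_l0(x)                       # v / mu is a subgradient of R_mu at x
    rhs = A.T @ b + gamma * x + v / mu   # stationarity of the linearized stage
    x = np.linalg.solve(H, rhs)          # exact stage solve (stand-in for SPG/SVRG)
    vals.append(F_mu(x))

# each stage minimizes a convex majorant of F_mu, so F_mu never increases
assert all(vals[i + 1] <= vals[i] + 1e-9 for i in range(len(vals) - 1))
```

The monotone decrease holds because the stage objective, by convexity of $R_\mu$ and $h$, majorizes $g + r_\mu$ up to an additive constant; with inexact stochastic solves one only gets the in-expectation guarantees stated in the theorems below.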
\subsection{Basic Results with constant $\gamma$}
In this subsection, we present the basic results obtained by using a constant regularization parameter $\gamma$. In the next subsection, we present improved complexities obtained by using an increasing sequence of regularization parameters, which will exhibit how an increasing sequence of $\gamma$ reduces the complexities.
We first present the results under a smoothness condition on $h$, i.e., when the following condition holds.
\begin{ass}\label{ass:ghsm}
Assume $h$ is $L$-smooth.
\end{ass}
\begin{thm}\label{thm:ncns:reg}
Regarding the problem~(\ref{eqn:Pf}), we have the following results:
\begin{itemize}
\item [a.]
If Assumption~\ref{ass:ghsm}, Assumption~\ref{ass:last} (i) and Assumption~\ref{ass:1} (i) hold, then we can use Algorithm~\ref{alg:meta} with a constant $\gamma$ and with Algorithm~\ref{alg:sgd} (option 1) to solve~(\ref{eqn:New}) with $\mu = \epsilon$, which returns a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^4)$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$.
\item [b.] If Assumption~\ref{ass:ghsm}, Assumption~\ref{ass:last} (ii) and Assumption~\ref{ass:1} (i) hold, then we can use Algorithm~\ref{alg:meta} with a constant $\gamma$ and with Algorithm~\ref{alg:sgd} (option 1) to solve~(\ref{eqn:New}) with $\mu = \epsilon^2$, which returns a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^6)$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$.
\item[c.]
If $g$ and $h$ have a finite-sum form, Assumption~\ref{ass:ghsm} and Assumption~\ref{ass:4} hold, then we can use Algorithm~\ref{alg:meta} with a constant $\gamma$ and with Algorithm~\ref{alg:SVRG} to solve~(\ref{eqn:New}). We can set $\mu = \epsilon$ if Assumption~\ref{ass:last} (i) holds or $\mu=\epsilon^2$ if Assumption~\ref{ass:last} (ii) or (iii) holds. The algorithm will return a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^4)$ (corresponding to Assumption~\ref{ass:last} (i)) or $K=O(1/\epsilon^6)$ (corresponding to Assumption~\ref{ass:last} (ii) or (iii)) stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$.
\end{itemize}
\end{thm}
{\bf Remark}: The above result establishes that $\mathbf{w}_\tau$ is an $\epsilon$-critical point. Under Assumption~\ref{ass:last} (i), the iteration complexity of using SPG is $O(1/\epsilon^8)$ and the gradient complexity of using SVRG is $\widetilde O(n/\epsilon^4)$; and under Assumption~\ref{ass:last} (ii), the iteration complexity of using SPG is $O(1/\epsilon^{12})$ and the gradient complexity of using SVRG is $\widetilde O(n/\epsilon^6)$. Under Assumption~\ref{ass:last} (iii), the only provable interesting result is by using SVRG, which gives a gradient complexity of $\widetilde O(n/\epsilon^6)$.
\begin{proof} In the following proof, writing $\hat\partial r(\mathbf{x})$ means that there exists $\mathbf{v}\in\hat\partial r(\mathbf{x})$ for which the stated relation holds.
By applying the stochastic algorithms for DC functions from the last section, at each stage the following problem is solved approximately:
\begin{align*}
\mathbf{z}_k = \arg\min_{\mathbf{x}\in\mathbb{R}^d} \hat g(\mathbf{x}) + \frac{\gamma}{2}\|\mathbf{x} - \mathbf{x}_k\|^2 - (\nabla h(\mathbf{x}_k) + \frac{1}{\mu}\text{prox}_{\mu r}(\mathbf{x}_k))^{\top}(\mathbf{x} - \mathbf{x}_k).
\end{align*}
Then we have
\begin{align*}
\mathrm{E}[\|\nabla \hat g(\mathbf{x}_\tau) -\nabla h(\mathbf{x}_\tau) - \frac{1}{\mu}\text{prox}_{\mu r}(\mathbf{x}_\tau)\|] &\leq \mathrm{E}[ \|\nabla \hat g(\mathbf{z}_\tau) - \nabla\hat g(\mathbf{x}_\tau)\| + \gamma\|\mathbf{z}_\tau - \mathbf{x}_\tau\|]\\
&\leq \mathrm{E}[L\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\|+ \gamma \|\mathbf{z}_\tau - \mathbf{x}_\tau\|],
\end{align*}
for any $\tau\in\{1,\ldots, K\}$. Denote by $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_\tau)$.
It is notable that $\frac{1}{\mu}\left(\mathbf{x}_\tau - \text{prox}_{\mu r}(\mathbf{x}_\tau)\right)\in\hat \partial r(\mathbf{w}_\tau) $. Then we have
$\nabla \hat g(\mathbf{x}_\tau) -\nabla h(\mathbf{x}_\tau) - \frac{1}{\mu}\text{prox}_{\mu r}(\mathbf{x}_\tau)= \nabla g(\mathbf{x}_\tau) - \nabla h(\mathbf{x}_\tau) + \hat\partial r(\mathbf{w}_\tau)$ and
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{x}_\tau) - \nabla h(\mathbf{x}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[L\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\|+ \gamma \|\mathbf{z}_\tau - \mathbf{x}_\tau\|],
\end{align*}
which implies that
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(L+\gamma)\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|],
\end{align*}
where the inequality uses the facts that $g$ and $h$ are $L$-smooth.
Next, we need to show that $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_\tau\|]$ is small. The argument will be different for part (a), part (b) and part (c). For part (a), using
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2 + r(\mathbf{w}_\tau) \leq r(\mathbf{x}_\tau)
\end{align*}
we have
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2\leq r(\mathbf{x}_\tau) - r(\mathbf{w}_\tau)\leq G\|\mathbf{x}_\tau - \mathbf{w}_\tau\|\Rightarrow \|\mathbf{x}_\tau - \mathbf{w}_\tau\|\leq 2G\mu,
\end{align*}
where the second inequality with an appropriate $G>0$ follows from the Lipschitz continuity of $r$.
Then
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(\gamma + L) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 4GL\mu ].
\end{align*}
By setting $\mu = \epsilon$ and $K= O(1/\epsilon^4)$ and $\tau$ randomly sampled, we have $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq \epsilon^2$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq O(\epsilon).
\end{align*}
For part (b), using
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2 + r(\mathbf{w}_\tau) \leq r(\mathbf{x}_\tau)
\end{align*}
we have
\begin{align*}
\|\mathbf{x}_\tau - \mathbf{w}_\tau\|\leq \sqrt{2\mu \left(r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x})\right)}\leq \sqrt{2\mu M},
\end{align*}
where $M>0$ exists due to Assumption~\ref{ass:last}(ii).
Then
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(\gamma + L) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2L\sqrt{2\mu M}].
\end{align*}
By setting $\mu = \epsilon^2$ and $K= O(1/\epsilon^6)$ and $\tau$ randomly sampled, we have $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq \epsilon^3$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq O(\epsilon).
\end{align*}
For part (c) under Assumption~\ref{ass:last} (iii), taking expectation over the above inequality gives
\begin{align*}
\mathrm{E}[ \|\mathbf{x}_\tau - \mathbf{w}_\tau\|]\leq \sqrt{ 2\mu \left(\mathrm{E}[r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x})]\right)}.
\end{align*}
Since SVRG is used, we can show that $\mathrm{E}[f(\mathbf{x}_\tau) + r_\mu(\mathbf{x}_\tau)]$ is bounded above, i.e., $\mathbf{x}_\tau$ lies in a bounded set (in expectation), which together with the assumption that $r$ is lower bounded implies that there exists a constant $M>0$ such that $\mathrm{E}[r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x}) ]\leq M$ for $\tau=1,\ldots, K$. Then
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(\gamma + 3L) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2L\sqrt{2\mu M} ].
\end{align*}
By setting $\mu = \epsilon^2$ and $K=O(1/\epsilon^6)$ and $\tau$ randomly sampled, we have $\mathrm{E} \|\mathbf{x}_\tau - \mathbf{z}_\tau\| \leq \epsilon^3$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq O(\epsilon).
\end{align*}
For part (c) under Assumption~\ref{ass:last} (i) and Assumption~\ref{ass:last} (ii), we can use a similar analysis as for parts (a) and (b).
\end{proof}
Next, we extend the results to the case in which $h$ has a H\"{o}lder continuous gradient, i.e., the following condition holds.
\begin{ass}\label{ass:ghsm2}
Assume $h$ is differentiable and has $(L, \nu)$-H\"{o}lder continuous gradient for some $\nu\in(0,1]$.
\end{ass}
\begin{thm}\label{thm:ncns:reg:Appendix}
Regarding the problem~(\ref{eqn:Pf}), we have the following results:
\begin{itemize}
\item [a.]
If Assumption~\ref{ass:ghsm2}, Assumption~\ref{ass:last} (i) and Assumption~\ref{ass:1} (i) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:sgd} (option 1) to solve~(\ref{eqn:New}) with $\mu = \epsilon$, which returns a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{2(1+\nu)})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$.
\item [b.] If Assumption~\ref{ass:ghsm2}, Assumption~\ref{ass:last} (ii) and Assumption~\ref{ass:1} (i) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:sgd} (option 1) to solve~(\ref{eqn:New}) with $\mu = \epsilon^2$, which returns a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{2(2+\nu)})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$.
\item[c.]
If $g$ and $h$ have a finite-sum form, Assumption~\ref{ass:ghsm2} and Assumption~\ref{ass:4} hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:SVRG} to solve~(\ref{eqn:New}). We can set $\mu = \epsilon$ if Assumption~\ref{ass:last} (i) holds or $\mu=\epsilon^2$ if Assumption~\ref{ass:last} (ii) or (iii) holds. The algorithm will return a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{2(1+\nu)})$ (corresponding to Assumption~\ref{ass:last} (i)) or $K=O(1/\epsilon^{2(2+\nu)})$ (corresponding to Assumption~\ref{ass:last} (ii) or (iii)) stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$.
\end{itemize}
\end{thm}
\begin{proof}
In the following proof, writing $\hat\partial r(\mathbf{x})$ means that there exists $\mathbf{v}\in\hat\partial r(\mathbf{x})$ for which the stated relation holds.
By applying the stochastic algorithms for DC functions from the last section, at each stage the following problem is solved approximately:
\begin{align*}
\mathbf{z}_k = \arg\min_{\mathbf{x}\in\mathbb{R}^d} \hat g(\mathbf{x}) + \frac{\gamma}{2}\|\mathbf{x} - \mathbf{x}_k\|^2 - (\nabla h(\mathbf{x}_k) + \frac{1}{\mu}\text{prox}_{\mu r}(\mathbf{x}_k))^{\top}(\mathbf{x} - \mathbf{x}_k).
\end{align*}
Then we have
\begin{align*}
\mathrm{E}[\|\nabla \hat g(\mathbf{x}_\tau) -\nabla h(\mathbf{x}_\tau) - \frac{1}{\mu}\text{prox}_{\mu r}(\mathbf{x}_\tau)\|] &\leq \mathrm{E}[ \|\nabla \hat g(\mathbf{z}_\tau) - \nabla\hat g(\mathbf{x}_\tau)\| + \gamma\|\mathbf{z}_\tau - \mathbf{x}_\tau\|]\\
&\leq \mathrm{E}[L\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\|+ \gamma \|\mathbf{z}_\tau - \mathbf{x}_\tau\|],
\end{align*}
for any $\tau\in\{1,\ldots, K\}$. Denote by $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_\tau)$.
It is notable that $\frac{1}{\mu}\left(\mathbf{x}_\tau - \text{prox}_{\mu r}(\mathbf{x}_\tau)\right)\in\hat \partial r(\mathbf{w}_\tau) $. Then we have $\nabla \hat g(\mathbf{x}_\tau) -\nabla h(\mathbf{x}_\tau) - \frac{1}{\mu}\text{prox}_{\mu r}(\mathbf{x}_\tau) = \nabla g(\mathbf{x}_\tau) - \nabla h(\mathbf{x}_\tau) + \hat\partial r(\mathbf{w}_\tau)$ and
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{x}_\tau) - \nabla h(\mathbf{x}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[L\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\|+ \gamma \|\mathbf{z}_\tau - \mathbf{x}_\tau\|],
\end{align*}
which implies that
\begin{align*}
&\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) -\nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] \\
\leq& \mathrm{E}[(L+\gamma)\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + L\|\mathbf{x}_\tau - \mathbf{w}_\tau\| + L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^\nu ],
\end{align*}
where the inequality uses the facts that $g$ is $L$-smooth and $h$ has an $L$-H\"{o}lder continuous gradient with parameter $\nu\in(0,1]$.
Next, we need to show that $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_\tau\|]$ is small. The argument will be different for part (a), part (b) and part (c).
For part (a), using
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2 + r(\mathbf{w}_\tau) \leq r(\mathbf{x}_\tau)
\end{align*}
we have
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2\leq r(\mathbf{x}_\tau) - r(\mathbf{w}_\tau)\leq G\|\mathbf{x}_\tau - \mathbf{w}_\tau\|\Rightarrow \|\mathbf{x}_\tau - \mathbf{w}_\tau\|\leq 2G\mu,
\end{align*}
where the second inequality with an appropriate $G>0$ follows from the Lipschitz continuity of $r$.
Then
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(\gamma + L) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2GL\mu + L(2G\mu)^\nu ].
\end{align*}
By setting $\mu = \epsilon$ and $K= O(1/\epsilon^{2(1+\nu)})$ and $\tau$ randomly sampled, we have $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq \epsilon^{1+\nu}$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq O(\epsilon^\nu).
\end{align*}
For part (b), using
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2 + r(\mathbf{w}_\tau) \leq r(\mathbf{x}_\tau)
\end{align*}
we have
\begin{align*}
\|\mathbf{x}_\tau - \mathbf{w}_\tau\|\leq \sqrt{2\mu \left(r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x})\right)}\leq \sqrt{2\mu M},
\end{align*}
where $M>0$ exists due to Assumption~\ref{ass:last}(ii).
Then
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(\gamma + L) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + L\sqrt{2\mu M} + L(2\mu M)^{\nu/2} ].
\end{align*}
By setting $\mu = \epsilon^2$ and $K= O(1/\epsilon^{2(2+\nu)})$ and $\tau$ randomly sampled, we have $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq \epsilon^{2+\nu}$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq O(\epsilon^\nu).
\end{align*}
For part (c) under Assumption~\ref{ass:last} (iii), taking expectation over the above inequality gives
\begin{align*}
\mathrm{E}[ \|\mathbf{x}_\tau - \mathbf{w}_\tau\|]\leq \sqrt{ 2\mu \left(\mathrm{E}[r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x})]\right)}.
\end{align*}
Since SVRG is used, we can show that $\mathrm{E}[f(\mathbf{x}_\tau) + r_\mu(\mathbf{x}_\tau)]$ is bounded above, i.e., $\mathbf{x}_\tau$ lies in a bounded set (in expectation), which together with the assumption that $r$ is lower bounded implies that there exists a constant $M>0$ such that $\mathrm{E}[r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x}) ]\leq M$ for $\tau=1,\ldots, K$. Then
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(\gamma + L) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + L\sqrt{2\mu M} + L(2\mu M)^{\nu/2} ].
\end{align*}
By setting $\mu = \epsilon^2$ and $K=O(1/\epsilon^{2(2+\nu)})$ and $\tau$ randomly sampled, we have $\mathrm{E} \|\mathbf{x}_\tau - \mathbf{z}_\tau\| \leq \epsilon^{2+\nu}$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq O(\epsilon^\nu).
\end{align*}
For part (c) under Assumption~\ref{ass:last} (i) and Assumption~\ref{ass:last} (ii), we can use a similar analysis as for parts (a) and (b).
\end{proof}
\subsection{Improved Complexities with Increasing $\gamma$}\label{sec:new:improve}
In this subsection, we present improved complexities obtained by simply changing the value of $\gamma$ across stages. The key idea is to use an increasing sequence of $\gamma$: in particular, at the $k$-th stage, we use $\gamma_k = O(k^\beta)$ with $0 <\beta < 1$. Abusing notation, we redefine $F$ and $F_k$ by
\begin{align*}
F & =g(\mathbf{x}) + \hat r(\mathbf{x}) - h(\mathbf{x}) - R_{\mu}(\mathbf{x}) = g(\mathbf{x}) - h(\mathbf{x}) + r_\mu(\mathbf{x})\\
F_k &= g(\mathbf{x}) +\hat r(\mathbf{x})+ \frac{\gamma_k}{2}\|\mathbf{x} - \mathbf{x}_k\|^2 - (\nabla h(\mathbf{x}_k) + \frac{1}{\mu}\text{prox}_{\mu r}(\mathbf{x}_k))^{\top}(\mathbf{x} - \mathbf{x}_k),
\end{align*}
where $\hat r(\mathbf{x}) = \frac{1}{2\mu}\|\mathbf{x}\|^2$. Similar to Lemma~\ref{lem:DC:new1}, we have the following result for the convergence of $\|\mathbf{x}_\tau - \mathbf{z}_{\tau}\|$ when SPG (option 1) is employed at each stage.
\begin{lemma}\label{lem:new1}
Suppose Assumption~\ref{ass:1} (i) holds and Algorithm~\ref{alg:sgd} is employed for solving $F_k$ with parameters given in Proposition~\ref{prop:sgd} and with $\gamma_k= 3Lk^{\beta}$ with $0\leq \beta < 1$ and $T_k = 3Lk/\gamma_k+3$, and there exists $\Delta>0$ such that $\mathrm{E}[F(\mathbf{x}_k) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta$ for all $k\in\{1,\ldots, K\}$, then with a total of $K$ stages Algorithm~\ref{alg:meta} guarantees
\begin{align*}
\mathrm{E}\bigg[\|\mathbf{z}_{\tau} - \mathbf{x}_\tau\|^2\bigg]&\leq \frac{8 \Delta (\alpha +1)}{3LK^{1+\beta}} + \frac{32G^2(\alpha +1) }{3K^{1+\beta}L^2},
\end{align*}
where $\tau\in\{1,\ldots, K\}$ is sampled according to probabilities $p(\tau=k) = \frac{k^\alpha}{\sum_{k=1}^Kk^\alpha}$ with $\alpha\geq 1$.
\end{lemma}
\begin{thm}(Improved Complexities of SSDC-SPG)\label{thm:new1}
Regarding the problem~(\ref{eqn:Pf}), we have the following results:
\begin{itemize}
\item [a.]
If Assumption~\ref{ass:ghsm}, Assumption~\ref{ass:last} (i) and Assumption~\ref{ass:1} (i) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:sgd} (option 1) to solve~(\ref{eqn:New}) with $\mu = \epsilon$, $\gamma_k= 3Lk^{1/3}$ and $T_k = 3Lk/\gamma_k+3$, which returns a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^3)$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total iteration complexity is $\sum_{k=1}^KT_k \le O(K^{5/3}) = O(1/\epsilon^5)$.
\item [b.] If Assumption~\ref{ass:ghsm}, Assumption~\ref{ass:last} (ii) and Assumption~\ref{ass:1} (i) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:sgd} (option 1) to solve~(\ref{eqn:New}) with $\mu = \epsilon^2$, $\gamma_k= 3Lk^{1/2}$ and $T_k = 3Lk/\gamma_k+3$, which returns a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{4})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total iteration complexity is $\sum_{k=1}^KT_k \le O(K^{3/2}) = O(1/\epsilon^{6})$.
\end{itemize}
\end{thm}
{\bf Remark:} We note that after the first version of this paper was posted online, \cite{DBLP:journals/corr/abs/1901.08369} also considered a special setting of our problem~(\ref{eqn:Pf}) where $g$ is smooth and non-convex, $h=0$, and $r$ is non-smooth, non-convex and Lipschitz continuous, which is covered in part (a) of the above theorem by noting that a smooth function $g$ can be written as a DC decomposition of two smooth convex functions. In terms of iteration complexity, they obtained the same complexity of $O(1/\epsilon^5)$ but with a large mini-batch size equal to $O(1/\epsilon)$. When using a mini-batch size of $1$, their algorithm has a worse complexity of $O(1/\epsilon^6)$. In contrast, our algorithm does not need a large mini-batch size. In addition, the step size in their algorithm is small, on the order of $O(\epsilon^2)$ or $O(\epsilon^3)$, for finding an $\epsilon$-critical point, while the step size in our algorithm decreases in a stagewise manner.
\begin{proof}
This proof is similar to the proof of Theorem~\ref{thm:ncns:reg}. Following the analysis of Theorem~\ref{thm:ncns:reg} we know
for any $\tau\in\{1,\ldots, K\}$,
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(L+\gamma_\tau)\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|].
\end{align*}
For part (a), by the setting of $\gamma_\tau= 3L\tau^{1/3}$, we know $\gamma_\tau \leq 3LK^{1/3}$ so that
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(L+3LK^{1/3})\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|].
\end{align*}
By using
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2 + r(\mathbf{w}_\tau) \leq r(\mathbf{x}_\tau)
\end{align*}
we have
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2\leq r(\mathbf{x}_\tau) - r(\mathbf{w}_\tau)\leq G\|\mathbf{x}_\tau - \mathbf{w}_\tau\|\Rightarrow \|\mathbf{x}_\tau - \mathbf{w}_\tau\|\leq 2G\mu,
\end{align*}
where the second inequality, with an appropriate $G>0$, follows from the Lipschitz continuity of $r$.
Then
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(L+3LK^{1/3}) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 4GL\mu ].
\end{align*}
By setting $\mu = \epsilon$ and $K= O(1/\epsilon^3)$ and $\tau$ randomly sampled, we have $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq \epsilon^2$ by Lemma~\ref{lem:new1} with $\beta=1/3$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] \leq O(\epsilon), \quad \mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon).
\end{align*}
For part (b), by the setting of $\gamma_\tau= 3L\tau^{1/2}$, we know $\gamma_\tau \leq 3LK^{1/2}$ so that
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(L+3LK^{1/2})\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|].
\end{align*}
By using
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2 + r(\mathbf{w}_\tau) \leq r(\mathbf{x}_\tau)
\end{align*}
we have
\begin{align*}
\|\mathbf{x}_\tau - \mathbf{w}_\tau\|\leq \sqrt{2\mu \left(r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x})\right)}\leq \sqrt{2\mu M},
\end{align*}
where $M>0$ exists due to Assumption~\ref{ass:last}(ii).
Then
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(L+3LK^{1/2}) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2L\sqrt{2\mu M}].
\end{align*}
By setting $\mu = \epsilon^2$ and $K= O(1/\epsilon^{4})$ and $\tau$ randomly sampled, we have $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq \epsilon^3$ by Lemma~\ref{lem:new1} with $\beta=1/2$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq O(\epsilon), \quad \mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon).
\end{align*}
\end{proof}
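As an illustrative numerical sanity check (not part of the formal argument), one can verify the two prox-distance bounds used above, namely $\|\mathbf{x}-\mathbf{w}\|\leq 2G\mu$ for a $G$-Lipschitz $r$ and $\|\mathbf{x}-\mathbf{w}\|\leq\sqrt{2\mu(r(\mathbf{x})-\min r)}$ for a lower-bounded $r$, on two simple one-dimensional choices of $r$ whose proximal mappings are available in closed form:

```python
import numpy as np

# Sanity checks for the prox-distance bounds: w = prox_{mu r}(x) satisfies
#   ||x - w|| <= 2*G*mu                        when r is G-Lipschitz, and
#   ||x - w|| <= sqrt(2*mu*(r(x) - min r))     when r is lower bounded.

def prox_abs(x, mu, G):
    # prox of r(w) = G|w| (G-Lipschitz): soft-thresholding at level G*mu
    return np.sign(x) * max(abs(x) - G * mu, 0.0)

def prox_quad(x, mu):
    # prox of r(w) = w^2 (lower bounded by 0), in closed form
    return x / (1.0 + 2.0 * mu)

G, mu = 2.0, 0.1
for x in np.linspace(-5.0, 5.0, 101):
    w1 = prox_abs(x, mu, G)
    assert abs(x - w1) <= 2.0 * G * mu + 1e-12               # Lipschitz case
    w2 = prox_quad(x, mu)
    assert abs(x - w2) <= np.sqrt(2.0 * mu * x * x) + 1e-12  # lower-bounded case
print("prox distance bounds verified")
```

Here the function names and the two test regularizers are our illustrative choices; they are not the regularizers analyzed in the theorem.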
We can also solve the subproblem by using SVRG (Algorithm~\ref{alg:SVRG}) when $g$ and $h$ are of finite-sum form. Similar to Lemma~\ref{lem:DC:new3}, we have the following result for the convergence of $\|\mathbf{x}_\tau - \mathbf{z}_{\tau}\|$.
\begin{lemma}\label{lem:new2}
Suppose Assumption~\ref{ass:4} holds and there exists $\Delta>0$ such that $\mathrm{E}[F(\mathbf{x}_1) - \min_{\mathbf{x}}F(\mathbf{x})]\leq \Delta$, and Algorithm~\ref{alg:SVRG} is employed for solving $F_k$ with $\gamma_k= ck^{\beta}$, where $0\leq \beta < 1$ and $c>0$, $\eta_{k} = 0.05/L$, $T_k\geq \max(2, 200L/\gamma_k)$, $S_k =\lceil \log_2(k)\rceil$, then with a total of $K$ stages Algorithm~\ref{alg:meta} guarantees
\begin{align*}
\mathrm{E}\bigg[\|\mathbf{z}_{\tau} - \mathbf{x}_\tau\|^2\bigg]&\leq \frac{12\Delta (\alpha +1)}{cK^{1+\beta}},
\end{align*}
where $\tau\in\{1,\ldots, K\}$ is sampled according to probabilities $p(\tau=k) = \frac{k^\alpha}{\sum_{k=1}^Kk^\alpha}$ with $\alpha\geq 1$.
\end{lemma}
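The stage-sampling rule used in the lemma can be sketched as follows (an illustrative implementation; the function name is ours):

```python
import numpy as np

def sample_stage(K, alpha, rng):
    # p(tau = k) proportional to k**alpha for k = 1, ..., K;
    # alpha >= 1 puts more probability mass on later stages
    k = np.arange(1, K + 1)
    p = k.astype(float) ** alpha
    p /= p.sum()
    return int(rng.choice(k, p=p)), p

rng = np.random.default_rng(0)
tau, p = sample_stage(10, 1.0, rng)
assert 1 <= tau <= 10 and abs(p.sum() - 1.0) < 1e-12 and p[-1] > p[0]
```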
\begin{thm}(Improved Complexities of SSDC-SVRG)\label{thm:new2}
Regarding the problem~(\ref{eqn:Pf}), we have the following results:
\begin{itemize}
\item [a.]
If $g$ and $h$ have a finite-sum form, Assumption~\ref{ass:ghsm} and Assumption~\ref{ass:4} hold, and Assumption~\ref{ass:last} (i) holds, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:SVRG} to solve~(\ref{eqn:New}). We can set $\mu = \epsilon$, $\gamma_k= ck^{1/3}$, $T_k\geq \max(2, 200L/\gamma_k)$, $S_k =\lceil \log_2(k)\rceil$. The algorithm will return a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^3)$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total gradient complexity is $\widetilde O(n/\epsilon^3)$.
\item [b.]
If $g$ and $h$ have a finite-sum form, Assumption~\ref{ass:ghsm} and Assumption~\ref{ass:4} hold, and Assumption~\ref{ass:last} (ii) or (iii) holds, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:SVRG} to solve~(\ref{eqn:New}). We can set $\mu=\epsilon^2$, $\gamma_k= ck^{1/2}$, $T_k\geq \max(2, 200L/\gamma_k)$, $S_k =\lceil \log_2(k)\rceil$. The algorithm will return a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{4})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total gradient complexity is $\widetilde O(n/\epsilon^{4})$.
\end{itemize}
\end{thm}
{\bf Remark:} \cite{DBLP:journals/corr/abs/1901.08369} also considered a special setting of our problem~(\ref{eqn:Pf}) where $g$ is a finite-sum smooth and non-convex function, $h=0$, and $r$ is non-smooth, non-convex and Lipschitz continuous, which is covered by part (a) of the above theorem. They obtained a gradient complexity of $O(n^{2/3}/\epsilon^3)$, but with a large mini-batch size equal to $n^{2/3}$. Their gradient complexity is better than the result of part (a) in the above theorem; however, their algorithm needs a large mini-batch size and a small step size.
In contrast, our algorithm does not need a large mini-batch size, and the step size in SVRG is a constant. In addition, we also emphasize that our algorithm SSDC is a general framework, which allows one to employ any suitable stochastic algorithms for smooth and strongly convex functions. It therefore gives us much more flexibility in practice. However, it is an interesting question whether one can obtain a better dependence on $n$ in our framework.
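For concreteness, the per-stage parameter schedule of part (a) can be computed as below (a sketch with the illustrative values $L=1$ and $c=3$; these constants are our choices for demonstration, not prescribed by the theorem):

```python
import math

def ssdc_svrg_schedule(k, L=1.0, c=3.0):
    # Stage-k parameters from part (a): gamma_k = c*k^{1/3},
    # T_k = max(2, ceil(200*L/gamma_k)), S_k = ceil(log2(k))
    gamma = c * k ** (1.0 / 3.0)
    T = max(2, math.ceil(200.0 * L / gamma))
    S = math.ceil(math.log2(k)) if k > 1 else 1  # guard the first stage
    return gamma, T, S

# gamma_k grows slowly with k, the inner iteration number T_k shrinks,
# and the epoch number S_k grows only logarithmically
print(ssdc_svrg_schedule(8))
```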
\begin{proof}
Similar to the proof of Theorem~\ref{thm:new1}, we know
for any $\tau\in\{1,\ldots, K\}$,
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(L+\gamma_K)\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|].
\end{align*}
For part (a), using
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2 + r(\mathbf{w}_\tau) \leq r(\mathbf{x}_\tau)
\end{align*}
we have
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2\leq r(\mathbf{x}_\tau) - r(\mathbf{w}_\tau)\leq G\|\mathbf{x}_\tau - \mathbf{w}_\tau\|\Rightarrow \|\mathbf{x}_\tau - \mathbf{w}_\tau\|\leq 2G\mu,
\end{align*}
where the second inequality, with an appropriate $G>0$, follows from the Lipschitz continuity of $r$.
Then by the setting of $\gamma_K = 3LK^{1/3}$,
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(L+3LK^{1/3}) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 4GL\mu ].
\end{align*}
By setting $\mu = \epsilon$ and $K= O(1/\epsilon^3)$ and $\tau$ randomly sampled, we have $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq \epsilon^2$ by Lemma~\ref{lem:new2} with $\beta=1/3$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] \leq O(\epsilon), \quad \mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon).
\end{align*}
For part (b), if Assumption~\ref{ass:last}(ii) holds, then using
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2 + r(\mathbf{w}_\tau) \leq r(\mathbf{x}_\tau)
\end{align*}
we have
\begin{align*}
\|\mathbf{x}_\tau - \mathbf{w}_\tau\|\leq \sqrt{2\mu \left(r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x})\right)}\leq \sqrt{2\mu M}.
\end{align*}
If Assumption~\ref{ass:last}(iii) holds, we take expectation over the above inequality giving
\begin{align*}
\mathrm{E}[ \|\mathbf{x}_\tau - \mathbf{w}_\tau\|]\leq \sqrt{ 2\mu \left(\mathrm{E}[r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x})]\right)}.
\end{align*}
Using SVRG, we can show that $\mathrm{E}[f(\mathbf{x}_\tau) + r_\mu(\mathbf{x}_\tau)]$ is bounded above, i.e., $\mathbf{x}_\tau$ lies in a bounded set (in expectation), which together with the assumption that $r$ is lower bounded implies that there exists a constant $M>0$ such that $\mathrm{E}[r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x}) ]\leq M$ for $\tau=1,\ldots, K$.
For both cases, by the setting of $\gamma_K = 3LK^{1/2}$ we have
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(3L+3LK^{1/2}) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2L\sqrt{2\mu M} ].
\end{align*}
By setting $\mu = \epsilon^2$ and $K=O(1/\epsilon^{4})$ and $\tau$ randomly sampled, we have $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|] \leq \epsilon^3$ by Lemma~\ref{lem:new2} with $\beta=1/2$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq O(\epsilon), \quad \mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon).
\end{align*}
\end{proof}
Next, we present the results under the H\"{o}lder continuous gradient condition of $h$. These results are simple extensions of those in Theorems~\ref{thm:new1} and~\ref{thm:new2}.
\begin{thm}\label{thm:new1:Appendix}
Regarding the problem~(\ref{eqn:Pf}), we have the following results:
\begin{itemize}
\item [a.]
If Assumption~\ref{ass:ghsm2}, Assumption~\ref{ass:last} (i) and Assumption~\ref{ass:1} (i) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:sgd} (option 1) to solve~(\ref{eqn:New}) with $\mu = \epsilon$, $\gamma_k= 3Lk^{1/3}$ and $T_k = 3Lk/\gamma_k+3$, which returns a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{\frac{3(1+\nu)}{2}})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total iteration complexity for such convergence guarantee is $\sum_{k=1}^KT_k \le O(K^{5/3}) = O(1/\epsilon^{\frac{5(1+\nu)}{2}})$.
\item [b.] If Assumption~\ref{ass:ghsm2}, Assumption~\ref{ass:last} (ii) and Assumption~\ref{ass:1} (i) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:sgd} (option 1) to solve~(\ref{eqn:New}) with $\mu = \epsilon^2$, $\gamma_k= 3Lk^{1/2}$ and $T_k = 3Lk/\gamma_k+3$, which returns a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{\frac{4(2+\nu)}{3}})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total iteration complexity for such convergence guarantee is $\sum_{k=1}^KT_k \le O(K^{3/2}) = O(1/\epsilon^{ 2(2+\nu)})$.
\item [c.]
If $g$ and $h$ have a finite-sum form, Assumption~\ref{ass:ghsm2}, Assumption~\ref{ass:4} and Assumption~\ref{ass:last} (i) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:SVRG} to solve~(\ref{eqn:New}). We can set $\mu = \epsilon$, $\gamma_k= ck^{1/3}$, $T_k\geq \max(2, 200L/\gamma_k)$, $S_k =\lceil \log_2(k)\rceil$. The algorithm will return a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{\frac{3(1+\nu)}{2}})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total gradient complexity for such convergence guarantee is $\widetilde O(n/\epsilon^{\frac{3(1+\nu)}{2}})$.
\item [d.]
If $g$ and $h$ have a finite-sum form, Assumption~\ref{ass:ghsm2}, Assumption~\ref{ass:4} and Assumption~\ref{ass:last} (ii) or (iii) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:SVRG} to solve~(\ref{eqn:New}). We can set $\mu=\epsilon^2$, $\gamma_k= ck^{1/2}$, $T_k\geq \max(2, 200L/\gamma_k)$, $S_k =\lceil \log_2(k)\rceil$. The algorithm will return a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{\frac{4(2+\nu)}{3}})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total gradient complexity for such convergence guarantee is $\widetilde O(n/\epsilon^{\frac{4(2+\nu)}{3}})$.
\end{itemize}
\end{thm}
{\bf Remark:} When deriving the total gradient complexity for ensuring $ \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon)$ we can simply replace $\epsilon$ by $\epsilon^{1/\nu}$ in the above results.
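As a concrete illustration of this substitution for part (a) (the function name is ours), the stage-count exponent changes as follows:

```python
def part_a_stage_exponents(nu):
    # K = O(1/eps^{3(1+nu)/2}) stages give an O(eps^nu)-accurate point;
    # replacing eps by eps^{1/nu} yields an O(eps)-accurate one
    exp_direct = 3.0 * (1.0 + nu) / 2.0
    exp_substituted = exp_direct / nu
    return exp_direct, exp_substituted

assert part_a_stage_exponents(1.0) == (3.0, 3.0)   # nu = 1 recovers the smooth case
assert part_a_stage_exponents(0.5) == (2.25, 4.5)  # a rougher h is more expensive
```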
\begin{proof}
This proof is similar to the proof of Theorem~\ref{thm:ncns:reg:Appendix} and Theorems~\ref{thm:new1} and~\ref{thm:new2}. Following the analysis of Theorem~\ref{thm:ncns:reg:Appendix} we know
for any $\tau\in\{1,\ldots, K\}$,
\begin{align*}
&\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \partial h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] \\
\leq &\mathrm{E}[(L+\gamma_\tau)\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|+ L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^\nu].
\end{align*}
For part (a), by the setting of $\gamma_\tau= 3L\tau^{1/3}$, we know $\gamma_\tau \leq 3LK^{1/3}$ so that
\begin{align*}
&\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \partial h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] \\
\leq& \mathrm{E}[(L+3LK^{1/3})\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|+ L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^\nu].
\end{align*}
By using
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2 + r(\mathbf{w}_\tau) \leq r(\mathbf{x}_\tau)
\end{align*}
we have
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2\leq r(\mathbf{x}_\tau) - r(\mathbf{w}_\tau)\leq G\|\mathbf{x}_\tau - \mathbf{w}_\tau\|\Rightarrow \|\mathbf{x}_\tau - \mathbf{w}_\tau\|\leq 2G\mu,
\end{align*}
where the second inequality, with an appropriate $G>0$, follows from the Lipschitz continuity of $r$.
Then
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(L+3LK^{1/3}) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2GL\mu + L (2G\mu)^\nu].
\end{align*}
By setting $\mu = \epsilon$ and $K= O(1/\epsilon^{\frac{3(1+\nu)}{2}})$ and $\tau$ randomly sampled, we have $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq \epsilon^{(1+\nu)}$ by Lemma~\ref{lem:new1} with $\beta=1/3$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] \leq O(\epsilon^\nu), \quad \mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon).
\end{align*}
For part (b),
by the setting of $\gamma_\tau= 3L\tau^{1/2}$, we know $\gamma_\tau \leq 3LK^{1/2}$ so that
\begin{align*}
&\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \partial h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] \\
\leq& \mathrm{E}[(L+3LK^{1/2})\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|+ L\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^\nu].
\end{align*}
By using
\begin{align*}
\frac{1}{2\mu}\|\mathbf{x}_\tau - \mathbf{w}_\tau\|^2 + r(\mathbf{w}_\tau) \leq r(\mathbf{x}_\tau)
\end{align*}
we have
\begin{align*}
\|\mathbf{x}_\tau - \mathbf{w}_\tau\|\leq \sqrt{2\mu \left(r(\mathbf{x}_\tau) - \min_{\mathbf{x}\in\mathbb{R}^d}r(\mathbf{x})\right)}\leq \sqrt{2\mu M},
\end{align*}
where $M>0$ exists due to Assumption~\ref{ass:last}(ii).
Then
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq \mathrm{E}[(L+3LK^{1/2}) \|\mathbf{x}_\tau - \mathbf{z}_\tau\| + \frac{1}{\mu}\|\mathbf{x}_\tau - \mathbf{z}_\tau\| + 2L(2\mu M)^{\nu/2}].
\end{align*}
By setting $\mu = \epsilon^2$ and $K= O(1/\epsilon^{\frac{4(2+\nu)}{3}})$ and $\tau$ randomly sampled, we have $\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{z}_\tau\|]\leq \epsilon^{2+\nu}$ by Lemma~\ref{lem:new1} with $\beta=1/2$ and hence
\begin{align*}
\mathrm{E}[\|\nabla g(\mathbf{w}_\tau) - \nabla h(\mathbf{w}_\tau) + \hat\partial r(\mathbf{w}_\tau)\|] &\leq O(\epsilon^\nu), \quad \mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon).
\end{align*}
Parts (c) and (d) can be proved similarly to Theorem~\ref{thm:new2}.
\end{proof}
\subsection{Improved Complexities when $\nu$ of $h$ is known}\label{sec:new:improve}
Similar to the case when $r$ is convex, we can also improve the complexity for solving problems with non-smooth non-convex $r$ and $h$ with a H\"{o}lder continuous gradient under the condition that $\nu$ is known.
\begin{thm}\label{thm:new1:Appendix2}
Regarding the problem~(\ref{eqn:Pf}), we have the following results:
\begin{itemize}
\item [a.]
If Assumption~\ref{ass:ghsm2}, Assumption~\ref{ass:last} (i) and Assumption~\ref{ass:1} (i) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:sgd} (option 1) to solve~(\ref{eqn:New}) with $\mu = \epsilon$, $\gamma_k= 3Lk^{1/(1+2\nu)}$ and $T_k = 3Lk/\gamma_k+3$, which returns a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{1+2\nu})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total iteration complexity is $\sum_{k=1}^KT_k= O(1/\epsilon^{1+4\nu})$.
\item [b.] If Assumption~\ref{ass:ghsm2}, Assumption~\ref{ass:last} (ii) and Assumption~\ref{ass:1} (i) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:sgd} (option 1) to solve~(\ref{eqn:New}) with $\mu = \epsilon^2$, $\gamma_k= 3Lk^{1/(1+\nu)}$ and $T_k = 3Lk/\gamma_k+3$, which returns a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{2(1+\nu)})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total iteration complexity for such convergence guarantee is $\sum_{k=1}^KT_k = O(1/\epsilon^{ 2(1+2\nu)})$.
\item [c.]
If $g$ and $h$ have a finite-sum form, Assumption~\ref{ass:ghsm2}, Assumption~\ref{ass:4} and Assumption~\ref{ass:last} (i) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:SVRG} to solve~(\ref{eqn:New}). We can set $\mu = \epsilon$, $\gamma_k= ck^{1/(1+2\nu)}$, $T_k\geq \max(2, 200L/\gamma_k)$, $S_k =\lceil \log_2(k)\rceil$. The algorithm will return a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{1+2\nu})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total gradient complexity is $\widetilde O(n/\epsilon^{1+2\nu})$.
\item [d.]
If $g$ and $h$ have a finite-sum form, Assumption~\ref{ass:ghsm2}, Assumption~\ref{ass:4} and Assumption~\ref{ass:last} (ii) or (iii) hold, then we can use Algorithm~\ref{alg:meta} with Algorithm~\ref{alg:SVRG} to solve~(\ref{eqn:New}). We can set $\mu=\epsilon^2$, $\gamma_k= ck^{1/(1+\nu)}$, $T_k\geq \max(2, 200L/\gamma_k)$, $S_k =\lceil \log_2(k)\rceil$. The algorithm will return a solution $\mathbf{x}_{\tau}$ after $K=O(1/\epsilon^{2(1+\nu)})$ stages satisfying
\begin{align*}
\mathrm{E}[\|\mathbf{x}_\tau - \mathbf{w}_{\tau}\|]\leq O(\epsilon), \quad \mathrm{E}[\text{dist}(\nabla h(\mathbf{w}_\tau), \nabla g(\mathbf{w}_{\tau}) + \hat \partial r(\mathbf{w}_{\tau}))]\leq O(\epsilon^\nu),
\end{align*}
where $\mathbf{w}_\tau = \text{prox}_{\mu r}(\mathbf{x}_{\tau})$. The total gradient complexity is $\widetilde O(n/\epsilon^{2(1+\nu)})$.
\end{itemize}
\end{thm}
{\bf Remark:} The above results can be proved in the same way as Theorem~\ref{thm:new1:Appendix}.
\paragraph{Deterministic Methods for~(\ref{eqn:Pf}) with a Non-smooth Non-convex Regularizer.}
Finally, we note that for the problem~(\ref{eqn:Pf}) with a non-smooth non-convex regularizer, a non-asymptotic convergence result for a deterministic optimization method is novel and interesting in its own right. In particular, SVRG can be replaced by deterministic gradient-based methods (e.g., accelerated gradient methods~\citep{citeulike:9501961,Beck:2009:FIS:1658360.1658364,Composite}) to enjoy a similar non-asymptotic convergence in terms of $\epsilon$. From this perspective, we can also generalize the results to the case when both $g$ and $h$ have H\"{o}lder continuous gradients by using a universal gradient method~\citep{Nesterov2015}, which adapts to the actual level of smoothness of the objective function, for solving each subproblem.
\section{Numerical Experiments}
In this section, we perform experiments on different tasks to demonstrate the effectiveness of the proposed algorithms by comparing them with different baselines.
We use large-scale datasets from the LIBSVM website in the experiments, including real-sim ($n = 72309$) and rcv1 ($n = 20242$) for classification, and million songs ($n = 463715$) for regression.
For all algorithms, the initial stepsizes are tuned in the range of $\{10^{-6:1:4}\}$, and the same initial solution with all zero entries is used.
The initial iteration number $T_0$ of SSDC-SPG is tuned in $ \{10^{1:1:4}\}$. For SSDC algorithms, we use a constant parameter $\gamma$ and also tune its value in $\{10^{-7:1:3}\}$.
First, we compare SSDC algorithms with SDCA~\citep{pmlr-v70-thi17a}, SGD~\citep{sgdweakly18}, SVRG~\citep{reddi2016proximal}, GIST~\citep{DBLP:conf/icml/GongZLHY13} and GPPA~\citep{doi:10.1080/02331934.2016.1253694} for learning with a DC regularizer: minimizing the logistic loss with a SCAD regularizer for classification, and the Huber loss with an MCP regularizer for regression. The parameter of the Huber loss is set to $1$, and the regularization parameter is set to $10^{-4}$. Note that since these regularizers are weakly convex, SGD with step size $\eta_0/\sqrt{t}$ is applicable~\citep{sgdweakly18}. We set the inner iteration number of SVRG to $n$ following~\citep{reddi2016proximal}, and the same value is used as the inner iteration number $T$ of SSDC-SVRG. We set the parameters of GIST with the suggested BB rule~\citep{DBLP:conf/icml/GongZLHY13}.
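For reference, the two DC regularizers can be evaluated as below (an illustrative implementation; the default parameters $a=3.7$ for SCAD and $b=3$ for MCP are conventional choices from the literature and are not necessarily the ones used in our experiments):

```python
def scad(t, lam, a=3.7):
    # SCAD penalty of Fan and Li: linear near 0, quadratic transition, then flat
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2.0 * a * lam * t - t * t - lam * lam) / (2.0 * (a - 1.0))
    return lam * lam * (a + 1.0) / 2.0

def mcp(t, lam, b=3.0):
    # Minimax concave penalty of Zhang: linear minus a quadratic, then flat
    t = abs(t)
    if t <= b * lam:
        return lam * t - t * t / (2.0 * b)
    return b * lam * lam / 2.0

# both penalties are continuous and level off far from 0, hence weakly convex (DC)
assert abs(scad(1.0, 1.0) - 1.0) < 1e-12 and abs(scad(10.0, 1.0) - 2.35) < 1e-12
assert abs(mcp(10.0, 1.0) - 1.5) < 1e-12
```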
Similar to~\citep{pmlr-v70-thi17a}, we tune the batch size of SDCA in a wide range and choose the one with the best performance.
GIST and GPPA are two deterministic algorithms that use all data points in each iteration. For fairness of comparison, we plot the objective in log scale versus the number of gradient computations in Figure~\ref{fig01}.
\begin{figure}[t]
\centering
\centering
{\includegraphics[scale=.33]{fig_logistic_scad_realsim.pdf}}
{\includegraphics[scale=.33]{fig_huber_mcp_yearprediction.pdf}}
\caption{Learning with DC regularizers on different datasets for classification and regression.}\label{fig01}
\end{figure}
\begin{figure}[t]
\centering
{\includegraphics[scale=.33]{fig_nlls_l0_realsim.pdf}}
{\includegraphics[scale=.33]{fig_truncationLS_l0_yearprediction.pdf}}
\caption{Learning with non-DC regularizers on different datasets for classification and regression.}\label{fig02}
\end{figure}
\begin{figure}[t]
\centering
{\includegraphics[scale=.33]{fig_last_lss7_realsim_1e-04.pdf}}
{\includegraphics[scale=.33]{fig_last_lss6_realsim_1e-04.pdf}}
{\includegraphics[scale=.33]{fig_last_lss7_rcv1_1e-04.pdf}}
{\includegraphics[scale=.33]{fig_last_lss6_rcv1_1e-04.pdf}}
\caption{PU learning with different non-smooth loss functions on different datasets.}
\label{fig03}
\end{figure}
Second, we consider minimizing the $\ell_0$-regularized non-linear least square loss $\frac{1}{n}\sum_{i=1}^{n} (y_i - \sigma(\mathbf{w}^\top\mathbf{x}_i))^2 + \lambda\|\mathbf{w}\|_0$ with the sigmoid function $\sigma(s) = \frac{1}{1+e^{-s}}$ for classification, and the $\ell_0$-regularized truncated least square loss $\frac{1}{2n}\sum_{i=1}^{n} \alpha \log(1+(y_i - \mathbf{w}^\top\mathbf{x}_i)^2/\alpha) + \lambda\|\mathbf{w}\|_0$~\citep{DBLP:journals/corr/abs-1805-07880} for regression. We compare the proposed algorithms with GPPA, APG~\citep{Li:2015:APG:2969239.2969282} and the proximal version of SGD (proxSGD), where GPPA and APG are deterministic algorithms. We fix the truncation value as $\alpha = \sqrt{10n}$. The loss functions in these two tasks are smooth and non-convex. The regularization parameter is fixed at $10^{-6}$. For APG, we implement both the monotone and non-monotone versions following~\citep{Li:2015:APG:2969239.2969282}, and report the better one. Although the convergence guarantee of proxSGD remains unclear for the considered problems, we still include it for comparison. The results on the two datasets are plotted in Figure~\ref{fig02}.
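The two objectives of this experiment can be written down directly (a straightforward reference implementation of the formulas above, vectorized over the $n$ samples; function names are ours):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def nlls_l0(w, X, y, lam):
    # (1/n) * sum_i (y_i - sigmoid(w^T x_i))^2 + lam * ||w||_0
    res = y - sigmoid(X @ w)
    return np.mean(res ** 2) + lam * np.count_nonzero(w)

def truncated_ls_l0(w, X, y, lam):
    # (1/(2n)) * sum_i alpha*log(1 + (y_i - w^T x_i)^2/alpha) + lam*||w||_0,
    # with the truncation value alpha = sqrt(10*n) used in the experiments
    n = len(y)
    alpha = np.sqrt(10.0 * n)
    res = y - X @ w
    return 0.5 * np.mean(alpha * np.log1p(res ** 2 / alpha)) + lam * np.count_nonzero(w)

X, y, w = np.zeros((4, 3)), np.ones(4), np.zeros(3)
assert abs(nlls_l0(w, X, y, 0.01) - 0.25) < 1e-12  # sigmoid(0) = 0.5
assert truncated_ls_l0(w, X, y, 0.0) > 0.0
```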
The results of these two experiments indicate that the proposed stochastic algorithms outperform all deterministic baselines (GIST, GPPA, APG) on all tasks, which verifies the necessity of using stochastic algorithms on large datasets. In addition, our algorithms, especially SSDC-AdaGrad and SSDC-SPG, also converge faster than the stochastic algorithms SGD, SDCA, and non-convex SVRG, verifying that our stochastic algorithms are more practical for the considered problems.
We also see that in most cases SSDC-AdaGrad is more effective than SSDC-SPG and SSDC-SVRG.
Finally, we compare SSDC algorithms with two baselines, AdaSPD~\citep{pmlr-v54-nitanda17a} and SGD~\citep{DBLP:journals/corr/abs-1804-07795}, for solving two $\ell_2$-regularized positive-unlabeled (PU) learning problems~\citep{du2015convex} with non-smooth losses, i.e., the hinge loss and the absolute loss. The $\ell_2$ regularization parameter is set to $10^{-4}$. For SGD, we use the standard step size $\eta = \eta_0/\sqrt{t}$~\citep{DBLP:journals/siamjo/GhadimiL13a} with $\eta_0$ tuned. The mini-batch size and the number of iterations of each stage of AdaSPD are simply set to $10^4$. The results on two classification datasets are plotted in Figure~\ref{fig03}, which show that SSDC-SPG and SSDC-AdaGrad outperform SGD and AdaSPD.
\section{Conclusions}
In this paper, we have presented stochastic optimization algorithms for solving a broad family of DC functions with a convex or non-convex non-smooth regularizer. We considered several stochastic algorithms and presented their convergence results in terms of finding an approximate critical point. The proposed stochastic optimization algorithms and their analysis improve over an existing work by making the algorithms more efficient and practical, and the theory broader and more general. For the first time, we provide a non-asymptotic convergence guarantee for solving non-convex problems with a non-smooth non-convex regularizer.
During the past decades, new experimental facilities with radioactive beams have extended our knowledge of nuclear chart to the very limits of nuclear binding, in particular to the unstable neutron-rich nuclei.
Many novel and striking features have been found in the structure of neutron-rich nuclei, such as the halo phenomenon, the disappearance of traditional magic numbers, and the occurrence of new ones~\cite{Tanihata2013PPNP}.
These new observations not only provide new insights into nuclear systems, but also challenge the established nuclear theory.
Enormous efforts have been made to understand the physics of nuclear many-body systems based on microscopic approaches.
The nuclear density functional theory (DFT) is one of the most popular approaches in this context~\cite{Bender2003Self, meng2016relativistic}.
Starting from a universal energy density functional, the complicated nuclear many-body problem can be simplified as a one-body problem~\cite{Kohn1965DFT}.
In this way, the DFT can provide a global description for almost all nuclei in the nuclear chart including very neutron-rich nuclei, and a fairly good accuracy has been achieved with only a few parameters in the energy density functional.
By taking into account the Lorentz symmetry, the covariant density functional theory (CDFT) has attracted a lot of attention in nuclear physics~\cite{RING1996PPNP,Vretenar2005PhysicsReport,meng2006PPNP,NIKSIC2011PPNP,meng2016relativistic}.
In this framework, the nucleons are treated as Dirac particles moving in large scalar and vector fields with the order of a few hundred MeV~\cite{Volum16}.
This brings many advantages to describe the nuclear systems with the CDFT, such as the new saturation mechanism of nuclear matter~\cite{walecka1974theory}, the natural inclusion of spin-orbit interactions~\cite{Sharma1995Pb_shift} and, thus, the relativistic spin and pseudospin symmetries~\cite{liang2015hidden}.
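Schematically, and suppressing the isovector and Coulomb channels for brevity, the stationary motion of a nucleon in this framework is governed by a Dirac equation containing the attractive scalar field $S(\boldsymbol{r})$ and the repulsive vector field $V(\boldsymbol{r})$,
\begin{align*}
\left[\boldsymbol{\alpha}\cdot\boldsymbol{p} + V(\boldsymbol{r}) + \beta\left(m + S(\boldsymbol{r})\right)\right]\psi_k(\boldsymbol{r}) = \epsilon_k\psi_k(\boldsymbol{r}),
\end{align*}
where $m$ is the nucleon mass and $\epsilon_k$ the single-particle energy; the sum $V+S$ sets the scale of the central potential, while the difference $V-S$ determines the strength of the spin-orbit coupling.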
Another important advantage of the CDFT is the self-consistent treatment of the time-odd fields, which share the same coupling constants as the time-even ones thanks to the Lorentz invariance~\cite{Vretenar2005PhysicsReport, Meng2013FT_TAC}.
With these advantages, CDFT has been successfully used to investigate the ground-state properties of many exotic nuclei~\cite{meng1996relativistic, meng1998giant, zhou2010neutron, xia2018ADNDT} and also various nuclear excitation phenomena including rotations~\cite{Peng2008maganetic_roration, Zhao2011PRL_AMR, Zhao2015Rod-shaped, Zhao2017ChiralRotation} and vibrations~\cite{Niksic2002DDME1_QRPA, Paar2007RPP, Paar2009QRPA, Niu2009FTQRPA}.
The time-dependent DFT (TDDFT) is a dynamical extension of DFT~\cite{Rung1984TDDFT} for describing dynamical processes of many-body systems.
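Schematically, in the TDDFT the occupied single-particle orbitals are propagated with the mean-field Hamiltonian $\hat{h}$ built from the instantaneous density,
\begin{align*}
i\hbar\frac{\partial}{\partial t}\psi_k(\boldsymbol{r},t) = \hat{h}[\rho(t)]\,\psi_k(\boldsymbol{r},t),
\end{align*}
and the density $\rho(t)$ is rebuilt self-consistently from the evolved orbitals at every time step.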
In nuclear physics, the development of TDDFT can be traced back to the mid 1970s~\cite{ENGEL1975215NPA, Bonche19761DTDHF, KOONIN1976TDHF_O16,Cusson1976TDHFO16, Koonin1977TDHFO16, Flocard1978TDHFO16, Bonche1978TDHFO16, Davies1978TDHFO16}, which are known under the notation of the time-dependent Hartree-Fock method~\cite{dirac1930TDHF}.
However, the early applications of the nuclear TDDFT suffered from simplified effective interactions and/or restricted geometric symmetries~\cite{Negele1982TDDFT}.
With the ever-improving computational capabilities, the TDDFT experienced a revival during the last twenty years, and unrestricted three-dimensional (3D) calculations with modern nuclear density functionals became available~\cite{Simene2012PEPJA, NakatsukasaRMP2016, SIMENEL2018TDHF_PPNP, STEVENSON2019PPNP}.
Up to now, the TDDFT in 3D lattice space has been widely applied to many nuclear dynamical processes, such as the multinucleon transfer process~\cite{Simenel2010MNT, Sekizawa2013TDHFMNT, Sekizawa2016TDHF_Ni_U, Wu2019MNT},
fission~\cite{Goddard2015TDHFfission, Bulgac2016Pu240_fission, Tanimura2017fission, scamps2018impact},
fusion~\cite{Guo2012fusion,Umar2015SHE_TDHF, Yu2017TDHF3D, Guo2018TDHF_fusion, Guo2018tensor_fusion},
collective vibration~\cite{maruhn2005TDHF_GDR, Reinhard2007TDHF_GR, Schuetrumpf2016TABC},
cluster scattering~\cite{Umar2010TDHF_C12}, etc.
The dynamical extension of the CDFT, i.e., the time-dependent CDFT (TDCDFT), can be traced back to the early 1980s,
when time-dependent versions of the Walecka model were adopted to describe the dynamics of colliding nuclear slabs~\cite{MULLER1981TDRMF} and relativistic heavy-ion collisions~\cite{Cusson1985TDCDFT, Bai1987TDCDFT}.
Later on, the time-dependent relativistic mean-field theory was used to describe the dynamics of Coulomb excitations of nuclei by assuming axial symmetry~\cite{Vretenar1993TDRMF, VRETENAR1995TDRMF}.
In the present work, TDCDFT with the successful density functional PC-PK1 is developed in a three-dimensional coordinate space without any symmetry restrictions.
This would be helpful to clarify the ambiguity of the spin-orbit fields and time-odd fields in the nonrelativistic TDDFTs and, thus, provide a new framework to investigate the dynamical processes of nuclei.
However, such a development is by no means simple because of the longstanding difficulties in solving the CDFT in a 3D lattice~\cite{zhang2009first, ZhangIJMPE2010}.
Recently, the CDFT has been solved in 3D lattice space with the inverse Hamiltonian~\cite{hagino2010iterative, tanimura20153d} and Fourier spectral~\cite{REN2017Dirac3D} methods, and its successful applications include the studies of nuclear linear-chain~\cite{Ren2019C12LCS} and toroidal structures~\cite{REN2020Toroidal}.
This paves the way to develop the corresponding time-dependent approaches in a full 3D lattice space without assuming any symmetries.
In our very recent work~\cite{Ren2020HeBeTDCDFT}, the TDCDFT was developed in a 3D lattice space with relativistic density functionals and applied to investigate the microscopic dynamics of the linear-chain cluster states.
Following that work, a systematic investigation of the ${}^{16}{\rm O}+{}^{16}{\rm O}$ reaction is reported here, together with the detailed formalism of the TDCDFT in 3D lattice space.
In Sec.~\ref{sec_theo}, the theoretical framework is introduced.
The numerical details are given in Sec.~\ref{sec_nume}.
Section~\ref{sec_numericaltest} is devoted to the numerical tests.
Two primary applications, including the dissipation dynamics and above-barrier fusion cross sections, are presented in Secs.~\ref{sec_DisDy} and \ref{sec_fusion}, respectively.
Finally, a summary is given in Sec.~\ref{sec_summ}.
\section{Theoretical framework}\label{sec_theo}
\subsection{Covariant density functional theory}
The starting point of the CDFT is a standard Lagrangian density which, in the point-coupling form, can be written as~\cite{ZhaoPC-PK1},
\begin{equation}
\begin{split}
\mathcal{L}=\,&\mathcal{L}^{\rm free}+\mathcal{L}^{\rm 4f}+\mathcal{L}^{\rm hot}+\mathcal{L}^{\rm der}+\mathcal{L}^{\rm em}\\
=\,&\bar{\psi}(i\gamma^\mu\partial_\mu-m_N)\psi-\frac{1}{2}\alpha_S(\bar{\psi}\psi)(\bar{\psi}\psi)-\frac{1}{2}\alpha_V(\bar{\psi}\gamma^\mu\psi)(\bar{\psi}\gamma_\mu\psi)-\frac{1}{2}\alpha_{TV}(\bar{\psi}\vec{\tau}\gamma^\mu\psi)\cdot(\bar{\psi}\vec{\tau}\gamma_\mu\psi)\\
&-\frac{1}{3}\beta_S(\bar{\psi}\psi)^3-\frac{1}{4}\gamma_S(\bar{\psi}\psi)^4-\frac{1}{4}\gamma_V[(\bar{\psi}\gamma^\mu\psi)(\bar{\psi}\gamma_\mu\psi)]^2-\frac{1}{2}\delta_S\partial^\nu(\bar{\psi}\psi)\partial_\nu(\bar{\psi}\psi)\\
&-\frac{1}{2}\delta_V\partial^\nu(\bar{\psi}\gamma^\mu\psi)\partial_\nu(\bar{\psi}\gamma_\mu\psi)-\frac{1}{2}\delta_{TV}\partial^\nu(\bar{\psi}\vec{\tau}\gamma^\mu\psi)\cdot\partial_\nu(\bar{\psi}\vec{\tau}\gamma_\mu\psi)\\
&-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-e\frac{1-\tau_3}{2}(\bar{\psi}\gamma^\mu\psi)A_\mu.
\end{split}
\end{equation}
It includes the Lagrangian density for free nucleons $\mathcal{L}^{\rm free}$,
the four-fermion point-coupling terms $\mathcal{L}^{\rm 4f}$, the higher-order terms $\mathcal{L}^{\rm hot}$ accounting for the medium effects,
the derivative terms $\mathcal{L}^{\rm der}$ to simulate the finite-range effects that are crucial for a quantitative description of nuclear density distributions,
and the electromagnetic interaction terms $\mathcal{L}^{\rm em}$.
Thus, one can build the energy density functional for a nuclear system,
\begin{equation}\label{Eq_energy_functional}
\begin{split}
E_{\rm tot}=\,&E_{\rm kin}+E_{\rm int}+E_{\rm em}\\
=\,&\int d^3r~\left\{\sum_{k=1}^A\psi_k^\dag(\bm{\alpha}\cdot\hat{\bm{p}}+\beta m_N)\psi_k+\frac{1}{2}\alpha_S\rho_S^2+\frac{1}{3}\beta_S\rho_S^3+\frac{1}{4}\gamma_S\rho_S^4+\frac{1}{2}\delta_S\rho_S\Delta\rho_S\right.\\
&+\frac{1}{2}\alpha_Vj^\mu j_\mu+\frac{1}{4}\gamma_V(j^\mu j_\mu)^2+\frac{1}{2}\delta_V j^\mu\Delta j_\mu+\frac{1}{2}\alpha_{TV}j^\mu_{TV}(j_{TV})_\mu+\frac{1}{2}\delta_{TV}j_{TV}^\mu\Delta(j_{TV})_\mu\\
&+\left.ej_c^\mu A_\mu+\frac{1}{2}A_\mu\Delta A^\mu\right\},
\end{split}
\end{equation}
where $E_{\rm kin}$, $E_{\rm int}$, and $E_{\rm em}$ are the kinetic, interaction, and electromagnetic energies, respectively.
The local densities and currents $\rho_S$, $j^\mu$, $j_{TV}^\mu$, and $j_c^\mu$ are given by,
\begin{subequations}\label{Eq_density_current}
\begin{align}
&\rho_S=\sum_{k=1}^A\bar{\psi}_k\psi_k,\\
&j^\mu=\sum_{k=1}^A\bar{\psi}_k\gamma^\mu\psi_k,\\
&j_{TV}^\mu=\sum_{k=1}^A\bar{\psi}_k\gamma^\mu\tau_3\psi_k,\\
&j_c^\mu=\sum_{k=1}^A\bar{\psi}_k\gamma^\mu\frac{1-\tau_3}{2}\psi_k,
\end{align}
\end{subequations}
where $\tau_3$ is the isospin Pauli matrix with the eigenvalues $+1$ for neutrons and $-1$ for protons.
The time component $j^0$ is usually denoted as the vector density $\rho_v$.
In the static case, the densities and currents in Eq.~\eqref{Eq_density_current} are time-independent.
By means of the variation of energy density functional Eq.~\eqref{Eq_energy_functional} with respect to the densities and currents, one obtains the Kohn-Sham equation for nucleons,
\begin{equation}\label{Eq_static_Dirac_eq}
\hat{h}(\bm{r})\psi_k(\bm{r})=\varepsilon_k\psi_k(\bm{r}),
\end{equation}
where $\varepsilon_k$ is the single-particle energy and $\hat{h}$ is the single-particle Dirac Hamiltonian,
\begin{equation}\label{Eq_dirac_hamiltonian}
\hat{h}(\bm{r})=\bm{\alpha}\cdot(\hat{\bm{p}}-\bm{V})+V^0+\beta(m_N+S).
\end{equation}
The scalar $S(\bm{r})$ and four-vector $V^\mu(\bm{r})$ potentials read
\begin{subequations}
\begin{align}
S(\bm{r})=\,&\alpha_S\rho_S+\beta_S\rho_S^2+\gamma_S\rho_S^3+\delta_S\Delta\rho_S,\\
V^\mu(\bm{r})=\,&\alpha_Vj^\mu+\gamma_V(j^\mu j_\mu)j^\mu+\delta_V\Delta j^\mu+\tau_3\alpha_{TV}j_{TV}^\mu+\tau_3\delta_{TV}\Delta j_{TV}^\mu+e\frac{1-\tau_3}{2}A^\mu, \label{Eq_vecpot}
\end{align}
\end{subequations}
where the electromagnetic field $A^\mu$ is determined by Poisson's equation,
\begin{equation}
-\Delta A^\mu=ej_c^\mu.
\end{equation}
By solving the Dirac equation Eq.~\eqref{Eq_static_Dirac_eq} self-consistently, one can obtain the single-nucleon wavefunctions for a nucleus in its ground state.
\subsection{Time-dependent covariant density functional theory}
In the dynamical case, the evolution of single-nucleon wavefunctions $\psi_k$ should fulfill the time-dependent Kohn-Sham equation~\cite{Rung1984TDDFT,Leeuwen1999TDDFT},
\begin{equation}\label{Eq_td_Dirac_eq}
i\frac{\partial}{\partial t}\psi_k(\bm{r},t)=\hat{h}(\bm{r},t)\psi_k(\bm{r},t).
\end{equation}
The time-dependent $\hat{h}(\bm{r},t)$ is purely determined by the time-dependent densities and currents~\cite{Rung1984TDDFT}.
With the \textit{adiabatic approximation}~\cite{NakatsukasaRMP2016}, the time-dependent single-particle Hamiltonian $\hat{h}(\bm{r},t)$ in Eq.~\eqref{Eq_td_Dirac_eq} takes the form of the Dirac Hamiltonian in Eq.~\eqref{Eq_dirac_hamiltonian}, in which the densities and currents of Eqs.~\eqref{Eq_density_current} are evaluated with the instantaneous wavefunctions $\psi_k(\bm{r},t)$ at time $t$.
This obviously lacks the memory effect, i.e., $\hat{h}(\bm{r},t)$ does not depend on the history of the system.
The time-dependent Dirac equation \eqref{Eq_td_Dirac_eq} has the formal solution,
\begin{equation}\label{Eq_formal_sol}
\psi_k(\bm{r},t)=\hat{\mathcal{T}}\exp\left[-i\int_{t_0}^tdt'~\hat{h}(\bm{r},t')\right]\psi_k(\bm{r},t_0),
\end{equation}
where $\hat{\mathcal{T}}$ represents the time-ordering operation and $t_0$ is the initial time.
For nuclear collisions, the initial wavefunctions $\psi_k(\bm{r},t_0)$ are composed of the single-particle wavefunctions of the two nuclei, which are usually in their ground states, and are obtained from two separate static CDFT calculations.
Subsequently, the two nuclei are placed on the mesh of a 3D lattice space with a large enough distance between them, so that the overlap between their wavefunctions is negligible at the initial time.
Moreover, the nuclei are boosted to set them in motion.
As the Dirac equation is Lorentz covariant, the boost of nuclei can be realized by using the inhomogeneous Lorentz transformation~\cite{greiner2013relativistic}.
Starting from the ground-state single-particle wavefunctions $\psi_k^{(\rm g.s.)}(\bm{r})$, the Lorentz boosted ones $\psi_k'(\bm{r})$ with velocity $\bm{v}$ read,
\begin{equation}\label{Eq_Lorentz_transform}
\psi_k'(\bm{r})=\hat{S}(\bm{v})\psi_k^{\rm (g.s.)}(\bm{r}')e^{i\varepsilon_k\bm{v}\cdot\bm{r}/\sqrt{1-v^2}},
\end{equation}
where $\hat{S}(\bm{v})$ denotes the transformation on the four components of a Dirac spinor,
\begin{equation}\label{Eq_Lorentz_transform_factor}
\hat{S}(\bm{v})=\sqrt{\frac{1+\sqrt{1-v^2}}{2\sqrt{1-v^2}}}+[\bm{\alpha}\cdot(\bm{v}/v)]\sqrt{\frac{1-\sqrt{1-v^2}}{2\sqrt{1-v^2}}},
\end{equation}
and $\bm{r}'$ represents the transformed coordinate,
\begin{equation}
\bm{r}'=\bm{r}+\left(\frac{1}{\sqrt{1-v^2}}-1\right)\frac{(\bm{r}\cdot\bm{v})\bm{v}}{v^2}.
\end{equation}
Note that here the single-particle energy $\varepsilon_k$ is not shifted by the nucleon mass $m_N$.
The Lorentz boost in Eq.~\eqref{Eq_Lorentz_transform} can be connected with the Galilean boost used in nonrelativistic TDDFT by taking the nonrelativistic limit [$v/c\approx0$ and $(\varepsilon_k-m_N)/m_N\approx0$], in which the Lorentz boosted wavefunctions in Eq.~\eqref{Eq_Lorentz_transform} become
\begin{equation}
\psi_k'(\bm{r})\approx\psi_k^{\rm(g.s.)}(\bm{r})e^{im_N\bm{v}\cdot\bm{r}}.
\end{equation}
They are identical to the Galilean boosted wavefunctions~\cite{Maruhn2014CPC}.
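As an illustrative cross-check of Eq.~\eqref{Eq_Lorentz_transform_factor}, the following minimal Python sketch (not part of the TDCDFT code; the matrices follow the standard Dirac representation, and the test identity is the usual half-angle relation) builds $\hat{S}(\bm{v})$ for a boost along the $z$ axis and verifies $\hat{S}(\bm{v})^2=\gamma(1+\bm{\alpha}\cdot\bm{v})$:

```python
import numpy as np

# Dirac alpha_z in the standard representation: off-diagonal Pauli blocks
sz = np.array([[1, 0], [0, -1]], dtype=complex)
zero2 = np.zeros((2, 2), dtype=complex)
az = np.block([[zero2, sz], [sz, zero2]])

def S_boost(v):
    """Spinor transformation S(v) of Eq. (Lorentz_transform_factor)
    for a boost with velocity v along the z axis (c = 1)."""
    g = 1.0 / np.sqrt(1.0 - v**2)            # Lorentz factor gamma
    a = np.sqrt((1.0 + 1.0 / g) * g / 2.0)   # = cosh(chi/2), tanh(chi) = v
    b = np.sqrt((1.0 - 1.0 / g) * g / 2.0)   # = sinh(chi/2)
    return a * np.eye(4) + b * az

v = 0.3
S = S_boost(v)
g = 1.0 / np.sqrt(1.0 - v**2)
# half-angle identity: S(v)^2 = gamma * (1 + v alpha_z)
print(np.allclose(S @ S, g * (np.eye(4) + v * az)))   # True
```

The identity follows from writing $\hat{S}=\cosh(\chi/2)+\bm{\alpha}\cdot\hat{\bm{v}}\sinh(\chi/2)$ in terms of the rapidity $\tanh\chi=v$.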
Finally, it should be mentioned that the spatial components of the electromagnetic vector potential $\bm{A}(\bm{r})$ are neglected in the calculations, since their contributions are extremely small.
Although the center-of-mass correction energy is usually included a posteriori in the self-consistent static CDFT calculations,
this strategy is disputable in the time-dependent case.
For instance, it involves only the total mass number and does not account for the masses of the fragments.
Therefore, the center-of-mass correction is neglected in the present TDCDFT calculations.
\section{Numerical details}\label{sec_nume}
In the present work, the density functional PC-PK1~\cite{ZhaoPC-PK1} is employed to study the ${}^{16}{\rm O}+{}^{16}{\rm O}$ reaction.
The Dirac spinors of the nucleons and the potentials are represented in 3D lattice space without any symmetry restriction.
The mesh sizes along the $x$, $y$, and $z$ axes are identical and chosen as $d=0.8$ fm.
The ground state of ${}^{16}{\rm O}$ is calculated in a box with $24\times24\times24$ grid points, while for the time-dependent calculations, a larger box with $30\times30\times50$ grid points is used.
For the initial states of the time-dependent calculations, the centers of the two ${}^{16}{\rm O}$ nuclei are placed on the $z$ axis with a separation distance of $16$ fm.
The Poisson equation for the Coulomb potential is solved by Hockney's method with the isolated boundary condition~\cite{eastwood1979remarks}.
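The essence of Hockney's method can be illustrated with a minimal Python sketch (the grid size and charge placement are illustrative, not the production setup): the source is zero-padded into a doubled box and convolved via FFT with the free-space kernel $1/(4\pi r)$, which removes the periodic images; for a unit point charge on a grid point, the discrete convolution reproduces the kernel exactly.

```python
import numpy as np

def hockney_solve(rho, d=1.0):
    """Solve -Laplacian(phi) = rho with isolated (vacuum) boundary conditions:
    zero-pad the source into a doubled box and convolve it via FFT with the
    free-space kernel G(r) = 1/(4 pi r), eliminating periodic images."""
    n = rho.shape[0]                              # cubic n^3 box assumed
    m = 2 * n
    # signed coordinates on the doubled, periodic grid
    c = d * np.where(np.arange(m) < n, np.arange(m), np.arange(m) - m)
    X, Y, Z = np.meshgrid(c, c, c, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + Z**2)
    G = np.zeros_like(r)
    G[r > 0] = 1.0 / (4.0 * np.pi * r[r > 0])     # G(0) only affects the source point
    pad = np.zeros((m, m, m))
    pad[:n, :n, :n] = rho
    phi = np.fft.ifftn(np.fft.fftn(G) * np.fft.fftn(pad)).real
    return phi[:n, :n, :n] * d**3                 # discretized convolution integral

n = 16
rho = np.zeros((n, n, n))
rho[4, 4, 4] = 1.0            # unit point charge (d = 1) at a grid point
phi = hockney_solve(rho)
print(phi[12, 4, 4], 1.0 / (4.0 * np.pi * 8.0))   # equal up to FFT roundoff
```

Because the zero padding makes the circular FFT convolution coincide with the linear one inside the original box, the point-charge potential eight grid spacings away matches $1/(4\pi\cdot 8d)$ to machine precision.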
For the numerical implementation of the formal solution \eqref{Eq_formal_sol}, the predictor-corrector strategy~\cite{Maruhn2014CPC} is adopted, in which the evolution time is cut into a series of small time steps $\Delta t$.
Over each time interval $[t,t+\Delta t]$, the single-particle Hamiltonian in Eq.~\eqref{Eq_formal_sol} is approximated as the one at the mid-time $\hat{h}(t+\Delta t/2)$.
Thus, the evolution of the single-particle wavefunction from $t$ to $t+\Delta t$ is obtained as,
\begin{equation}\label{Eq_pc_solution}
\psi_k(\bm{r},t+\Delta t)\approx\exp\left[-i\hat{h}(\bm{r},t+\Delta t/2)\Delta t\right]\psi_k(\bm{r},t),
\end{equation}
which also provides the initial condition for the evolution over $[t+\Delta t,t+2\Delta t]$.
In this work, the single-particle Hamiltonian $\hat{h}(t+\Delta t/2)$ is determined with a two-step recipe: it is first constructed roughly and then corrected to a better estimate.
In the first step, the densities and currents at time $t+\Delta t$, denoted generally as $\tilde{\rho}^{(1)}(t+\Delta t)$, are estimated from $\tilde{\psi}_k^{(1)}(\bm{r},t+\Delta t)$,
\begin{equation}
\tilde{\psi}_k^{(1)}(\bm{r},t+\Delta t) = \exp\left[-i\hat{h}(\bm{r},t)\Delta t\right]\psi_k(\bm{r},t).
\end{equation}
The Hamiltonian $\hat{h}^{(1)}(\bm{r},t+\Delta t/2)$ is roughly constructed using the average densities and currents $[\rho(\bm{r},t)+\tilde{\rho}^{(1)}(\bm{r},t+\Delta t)]/2$.
In the second step, the obtained $\hat{h}^{(1)}(\bm{r},t+\Delta t/2)$ is used to update the wavefunctions
\begin{equation}
\tilde{\psi}_k^{(2)}(\bm{r},t+\Delta t) = \exp\left[-i\hat{h}^{(1)}(\bm{r},t+\Delta t/2)\Delta t\right]\psi_k(\bm{r},t),
\end{equation}
which provide a new estimation for the densities and currents $\tilde{\rho}^{(2)}(t+\Delta t)$ at time $t+\Delta t$.
The Hamiltonian $\hat{h}(\bm{r},t+\Delta t/2)$ in Eq.~\eqref{Eq_pc_solution} is then constructed from the average densities and currents $[\rho(\bm{r},t)+\tilde{\rho}^{(2)}(\bm{r},t+\Delta t)]/2$.
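The two-step recipe above can be sketched for a toy model with a density-dependent diagonal potential (the model, grid size, and coupling are illustrative stand-ins for the full mean-field Hamiltonian; the exact matrix exponential is used here in place of the Taylor expansion of Eq.~\eqref{Eq_taylor}):

```python
import numpy as np

def u_exact(h, dt):
    """Exact propagator exp(-i h dt) of a Hermitian matrix h via eigendecomposition."""
    e, U = np.linalg.eigh(h)
    return (U * np.exp(-1j * e * dt)) @ U.conj().T

def pc_step(psi, T, g, dt):
    """One predictor-corrector step for a toy mean-field model whose
    potential is g * |psi|^2 (density dependent, mimicking h[rho])."""
    rho0 = np.abs(psi)**2
    h = T + g * np.diag(rho0)                  # h(t), built from rho(t)
    for _ in range(2):                         # predictor, then corrector
        psi_try = u_exact(h, dt) @ psi         # trial evolution to t + dt
        rho_mid = 0.5 * (rho0 + np.abs(psi_try)**2)
        h = T + g * np.diag(rho_mid)           # estimate of h(t + dt/2)
    return u_exact(h, dt) @ psi                # final evolution, Eq. (pc_solution)

# toy 1D lattice: nearest-neighbor hopping as the kinetic term
n = 8
T = -0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
rng = np.random.default_rng(0)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
for _ in range(100):
    psi = pc_step(psi, T, g=1.0, dt=0.05)
print(abs(np.linalg.norm(psi) - 1.0))   # tiny: each step is unitary
```

The first pass of the loop plays the role of $\hat{h}^{(1)}(t+\Delta t/2)$ and the second its correction; the norm stays conserved because each applied propagator is unitary.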
The exponential function of the Hamiltonian operator is evaluated by the Taylor expansion up to order $m$,
\begin{equation}\label{Eq_taylor}
\exp\left(-i\hat{h}\Delta t\right)\psi\approx\sum_{n = 0}^m\frac{(-i\Delta t)^n}{n!}\hat{h}^n\psi.
\end{equation}
The values of $\Delta t=0.1$~fm/$c$ and $m=4$ are adopted in the following calculations if not specified.
A truncation of the Taylor expansion would violate the strict unitarity of $\exp(-i\hat{h}\Delta t)$ and energy conservation, so the conservation of particle number and energy should be monitored carefully to guarantee the quality of the time evolution.
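The effect of the truncation on unitarity can be illustrated with a small Hermitian matrix as a stand-in for $\hat{h}$ (the dimension and random values are illustrative): the norm drift after one step of Eq.~\eqref{Eq_taylor} shrinks rapidly with the expansion order $m$.

```python
import numpy as np

def taylor_step(h, psi, dt, m):
    """Apply the order-m Taylor expansion of exp(-i h dt), Eq. (taylor), to psi."""
    out = psi.copy()
    term = psi.copy()
    for n in range(1, m + 1):
        term = (-1j * dt / n) * (h @ term)     # accumulates (-i dt)^n h^n / n!
        out += term
    return out

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
h = (A + A.conj().T) / 2                       # Hermitian stand-in for the Hamiltonian
psi = rng.normal(size=6) + 1j * rng.normal(size=6)
psi /= np.linalg.norm(psi)

# the truncation breaks strict unitarity; the norm drift shrinks rapidly with m
drifts = {}
for m in (2, 4, 8):
    drifts[m] = abs(np.linalg.norm(taylor_step(h, psi, 0.1, m)) - 1.0)
    print(m, drifts[m])
```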
\section{Numerical tests}\label{sec_numericaltest}
In this section, the TDCDFT benchmark calculations for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ reaction are performed in 3D lattice space.
Numerical tests, including the excitation energy as a function of boost velocity, the conservation of momentum, total energy, and particle number, as well as the time reversal invariance, are carefully examined.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig1}\\
\caption{(Color online) The excitation energy of a boosted ${}^{16}{\rm O}$ as a function of the boost velocity $v$.
The open circles represent the excitation energies obtained by TDCDFT.
The solid and dashed lines denote the relativistic kinetic energy $M/(1-v^2)^{1/2}-M$ and the nonrelativistic kinetic energy $Mv^2/2$, respectively (see text for the mass $M$).
The inset shows the results after subtracting the nonrelativistic kinetic energies.
}\label{fig1}
\end{figure}
The examinations are first focused on the tests involving a single ${}^{16}{\rm O}$.
In Fig.~\ref{fig1}, the excitation energy of a boosted ${}^{16}{\rm O}$ is shown as a function of the boost velocity $v$, whose direction is set along the $z$ axis.
For comparison, the results of relativistic and nonrelativistic kinetic energies, i.e., $M/(1-v^2)^{1/2}-M$ and $Mv^2/2$, are also shown, where the mass $M$ of ${}^{16}{\rm O}$ is evaluated from the ground-state total energy $E_{\rm tot}$ in Eq.~\eqref{Eq_energy_functional}.
The TDCDFT results coincide very well with the relativistic kinetic energies, which is seen more clearly after subtracting the nonrelativistic kinetic energies (see the inset of Fig.~\ref{fig1}).
This shows that the adiabatic approximation for $\hat{h}(\bm{r},t)$ in Eq.~\eqref{Eq_td_Dirac_eq} is quite reasonable.
The nonrelativistic kinetic energies deviate dramatically from the relativistic ones for velocities above $0.3c$.
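For reference, the two kinetic-energy formulas are easily compared numerically (the mass below is a rough illustrative estimate for ${}^{16}{\rm O}$, not the self-consistent $E_{\rm tot}$ used for Fig.~\ref{fig1}):

```python
import numpy as np

M = 16 * 938.9    # rough mass of 16O in MeV (illustrative; not the CDFT E_tot)

def e_rel(v):     # relativistic kinetic energy M / sqrt(1 - v^2) - M  (c = 1)
    return M / np.sqrt(1.0 - v**2) - M

def e_nonrel(v):  # nonrelativistic kinetic energy M v^2 / 2
    return 0.5 * M * v**2

for v in (0.1, 0.2, 0.3, 0.4):
    rel, nr = e_rel(v), e_nonrel(v)
    print(f"v = {v}: E_rel = {rel:8.1f} MeV, "
          f"E_nonrel = {nr:8.1f} MeV, rel. difference {(rel - nr) / rel:6.2%}")
```

Already at $v=0.3$ the relative difference exceeds five percent, consistent with the deviation visible in Fig.~\ref{fig1}.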
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig2}\\
\caption{(Color online) The relative momentum deviation $|(p_z(t)-p_{\rm avg.})/p_{\rm avg.}|$ with respect to the average momentum $p_{\rm avg.}$ of a boosted ${}^{16}{\rm O}$ as a function of the center-of-mass position $z_{\rm c.m.}$.
The abscissa is scaled by the mesh size $d$.
The collective kinetic energy $E_{\rm coll.\,kin.}$ for the boosted ${}^{16}{\rm O}$ is set to $50$ MeV.
Panel (a) shows the results with the Taylor expansion orders $m = 4$, $6$, $8$ and the time evolution step $\Delta t = 0.10$ fm/$c$.
Panel (b) shows the results with $\Delta t = 0.05$, $0.10$, $0.20$ fm/$c$ and $m = 4$.
}\label{fig2}
\end{figure}
A boosted ${}^{16}{\rm O}$ moves with a constant momentum.
In TDCDFT, the momentum $\bm{p}(t)$ is represented by the expectation value of the momentum operator $\hat{\bm{p}}$.
To examine the conservation of momentum, the ${}^{16}{\rm O}$ is placed at the origin and then boosted with a collective kinetic energy $E_{\rm coll.\,kin.}=50$ MeV along the $z$ axis.
The system is evolved for $T = 100$ fm/$c$.
The average momentum along the $z$ axis is estimated as
\begin{equation}
p_{\rm avg.}=\frac{\int_0^{T}dt~p_z(t)}{\int_0^{T}dt}.
\end{equation}
Figure \ref{fig2} shows the relative momentum deviation $|(p_z(t)-p_{\rm avg.})/p_{\rm avg.}|$ for different Taylor expansion orders $m$ and time evolution steps $\Delta t$ as a function of the center-of-mass position $z_{\rm c.m.}$,
which is evaluated by
\begin{equation}
z_{\rm c.m.}=\frac{\int d^3r~z\rho_v(\bm{r},t)}{\int d^3r~\rho_v(\bm{r},t)}.
\end{equation}
The relative momentum deviation is reduced with larger $m$ and smaller $\Delta t$.
In the case of $\Delta t = 0.1$ fm/$c$ and $m = 4$, the relative momentum deviations are as small as $10^{-5}$, which reveals the accuracy of the momentum conservation.
Even so, it is interesting to note that the relative momentum deviations oscillate with $z_{\rm c.m.}$, because the discretized lattice is not exactly translationally invariant.
In fact, the oscillation period is approximately the mesh size $d$.
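As a minimal consistency check of the $z_{\rm c.m.}$ formula, a one-dimensional Gaussian density on an illustrative grid recovers its center (for a factorized density, the transverse directions integrate out of the ratio):

```python
import numpy as np

d = 0.8                            # mesh size in fm, as in the text
z = d * (np.arange(50) - 24.5)     # 50 grid points along z, centered on zero
z0, sigma = 3.0, 2.0               # illustrative Gaussian density parameters
rho = np.exp(-0.5 * ((z - z0) / sigma)**2)

# discrete version of z_cm = int z rho d^3r / int rho d^3r; the common
# volume element d^3 cancels in the ratio
z_cm = np.sum(z * rho) / np.sum(rho)
print(z_cm)   # recovers z0 = 3.0 to high accuracy
```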
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig3}\\
\caption{(Color online) The relative energy deviation $|(E_{\rm tot}(t)-E_{\rm init.})/E_{\rm init.}|$ with respect to the initial energy $E_{\rm init.}$ for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ head-on collision at the center-of-mass energy $E_{\rm c.m.} = 50$ MeV.
The rest mass $m_N$ for nucleons has been subtracted from the total energy $E_{\rm tot}$.
Panel (a) shows the results with the Taylor expansion orders $m = 4$, $6$, $8$ and the time evolution step $\Delta t = 0.10$ fm/$c$.
Panel (b) shows the results with $\Delta t = 0.05$, $0.10$, $0.20$ fm/$c$ and $m = 4$.
}\label{fig3}
\end{figure}
Next, the conservation of total energy and particle number, as well as the time reversal invariance for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ reaction are investigated.
The head-on collision with a center-of-mass energy $E_{\rm c.m.}=50$ MeV is taken as an example.
In Fig.~\ref{fig3}, the time evolutions of the relative energy deviation $|(E_{\rm tot}(t)-E_{\rm init.})/E_{\rm init.}|$ with different $\Delta t$ and $m$ values are shown.
For $\Delta t=0.1$ fm/$c$, the relative energy deviations are around $10^{-4}$ and $10^{-5}$ for $m=4$ and $8$, respectively.
However, the evolution of the relative energy deviation for $m=6$ is not stable, in particular at longer times.
The reason is not clear at the moment, but a similar phenomenon is also found in nonrelativistic TDDFT calculations~\cite{Maruhn2014CPC}.
Moreover, it is found that this unstable behavior for $m=6$ disappears in the calculations with a smaller $\Delta t$, such as $\Delta t=0.05$ fm/$c$.
For $m=4$, the smaller the time evolution step $\Delta t$, the better the total energy is conserved.
This can be understood because the approximations in Eqs.~\eqref{Eq_pc_solution} and \eqref{Eq_taylor} are better for smaller $\Delta t$ values.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig4}\\
\caption{(Color online) Time evolution of the total energy and its constituents including the interaction energy $E_{\rm int}$, the electromagnetic energy $E_{\rm em}$, and the kinetic energy $E_{\rm kin}$, for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ head-on collision at the center-of-mass energy $E_{\rm c.m.}=50$ MeV.
The rest mass $m_N$ for nucleons has been subtracted from the total and kinetic energies.
}\label{fig4}
\end{figure}
In Fig.~\ref{fig4}, the evolution of the total energy is shown as a function of time, where $\Delta t = 0.1$ fm/$c$ and $m = 4$ are adopted.
The total energy is conserved along the time evolution at a precision about $10^{-4}$.
The three energy constituents, including the interaction energy $E_{\rm int}$, the electromagnetic energy $E_{\rm em}$, and the kinetic energy $E_{\rm kin}$ [see Eq.~\eqref{Eq_energy_functional}], are also shown in Fig.~\ref{fig4}.
There are obvious fluctuations up to 70 MeV in these energy constituents, in particular in the interaction and kinetic energies, which correspond to the oscillations of the compound system.
Note that in the present covariant framework, the interaction energy $E_{\rm int}$ is determined by the densities and/or currents in the scalar and vector channels.
The energy fluctuations in each channel are large and even beyond $1000$ MeV.
This reveals that the conservation of the total energy is in fact achieved by a delicate balance between two large energies in the scalar and vector channels.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig5}\\
\caption{(Color online) Same as Fig.~\ref{fig3} but for the relative particle number deviation $|(N(t)-N_{\rm init.})/N_{\rm init.}|$ with respect to the initial particle number $N_{\rm init.}$.
}\label{fig5}
\end{figure}
Another important examination associated with the approximation in Eq.~\eqref{Eq_taylor} is the conservation of the total particle number $N(t)$, defined as,
\begin{equation}
N(t)=\int d^3r~\rho_v(\bm{r},t).
\end{equation}
It reveals the influence of the Taylor expansion on the strict unitarity of $\exp(-i\hat{h}\Delta t)$.
In Fig.~\ref{fig5}, the time evolution of the relative particle number deviation $|(N(t)-N_{\rm init.})/N_{\rm init.}|$ is shown with different $\Delta t$ and $m$ values.
Similar to the conservation of the total energy (see Fig.~\ref{fig3}), the particle number is better conserved with smaller $\Delta t$ and larger $m$ values, except for the unstable evolution with $\Delta t=0.1$ fm/$c$ and $m=6$.
The particle number is conserved quite well for all stable evolutions, and the relative particle number deviation is around $10^{-7}$ at 1000~fm/$c$ in the case of $\Delta t = 0.1$ fm/$c$ and $m = 4$.
All in all, the momentum, total energy, and particle number are conserved with high precision in the present TDCDFT calculations with $\Delta t = 0.1$ fm/$c$ and $m = 4$; these values are therefore adopted in the following investigations.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig6}\\
\caption{(Color online) Time evolution of the quadrupole deformation $\beta_{20}$ for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ head-on collision at $E_{\rm c.m.}=50$ MeV.
The single-particle wavefunctions at time $t=1000$ fm/$c$ are replaced by their time-reversal conjugates.
}\label{fig6}
\end{figure}
Apart from the conservation laws, another severe test of the TDCDFT is provided by the time-reversal invariance, which means that the whole system has microscopic reversibility~\cite{Bonche19761DTDHF, ring2004nuclear}.
To examine this property in the ${}^{16}{\rm O}+{}^{16}{\rm O}$ head-on collision at $E_{\rm c.m.}=50$ MeV, the single-particle wavefunctions $\psi_k(\bm{r},t)$ at $t=1000$ fm/$c$ are replaced by their time-reversal conjugates,
\begin{equation}
\hat{T}\psi_k(\bm{r},t)=-i\alpha_x\alpha_z\psi^*_k(\bm{r},t),
\end{equation}
where $\alpha_x$ and $\alpha_z$ are Dirac matrices.
As time goes on, the system should return to its state at the initial time.
In Fig.~\ref{fig6}, the time evolution of the quadrupole deformation $\beta_{20}$ is shown.
It is clearly seen that $\beta_{20}$ evolves back precisely after replacing $\psi_k(\bm{r},t)$ with $\hat{T}\psi_k(\bm{r},t)$ at $1000$~fm/$c$.
Moreover, the nucleon density at $t=2000$ fm/$c$ is also found to agree quite well with the initial one.
These results demonstrate that the time-reversal invariance is fulfilled in the present TDCDFT calculations.
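The antiunitary operator $\hat{T}=-i\alpha_x\alpha_z\hat{K}$ (with $\hat{K}$ the complex conjugation) satisfies $\hat{T}^2=-1$ for Dirac spinors, which can be checked directly in a minimal Python sketch (standard Dirac representation assumed; the test vector is illustrative):

```python
import numpy as np

# Dirac alpha matrices in the standard representation (real off-diagonal blocks)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
zero2 = np.zeros((2, 2), dtype=complex)
ax = np.block([[zero2, sx], [sx, zero2]])
az = np.block([[zero2, sz], [sz, zero2]])

def time_reverse(psi):
    """T psi = -i alpha_x alpha_z psi*, the antiunitary time reversal."""
    return -1j * ax @ az @ psi.conj()

rng = np.random.default_rng(2)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)

# T^2 = -1 for fermions: a double reversal restores the state up to a sign,
# so the reversed evolution retraces the original trajectory
print(np.allclose(time_reverse(time_reverse(psi)), -psi))   # True
```

The overall sign is an unobservable phase, so densities and deformations evolve back exactly, as seen in Fig.~\ref{fig6}.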
\section{Dissipation dynamics}\label{sec_DisDy}
The dissipation dynamics plays an important role in heavy-ion collisions.
It is responsible for the irreversible conversion of the initial collective kinetic energy into intrinsic nuclear excitations.
To study the dissipation dynamics in deep-inelastic collisions, the ${}^{16}{\rm O}+{}^{16}{\rm O}$ head-on collisions with the center-of-mass energy $E_{\rm c.m.}$ above the upper threshold of fusion are calculated.
A measure of the dissipation is given by the percentage of energy dissipation $P_{\rm dis}=1-E_{\rm fin}/E_{\rm c.m.}$, where $E_{\rm c.m.}$ and $E_{\rm fin}$ represent the initial and final collective kinetic energies, respectively.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig7}\\
\caption{(Color online) Percentage of energy dissipation for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ head-on collisions as a function of the center-of-mass energy $E_{\rm c.m.}$.
For comparison, the nonrelativistic TDDFT results (circle) and the ones with further including the time-odd spin-orbit terms (triangle), taken from Ref.~\cite{Dai2014Dissipation}, are also shown.
}\label{fig7}
\end{figure}
In Fig.~\ref{fig7}, the percentage of energy dissipation $P_{\rm dis}$ calculated with the TDCDFT is depicted as a function of $E_{\rm c.m.}$ in comparison with the nonrelativistic TDDFT results, which are taken from Ref.~\cite{Dai2014Dissipation}.
The spin-orbit interaction has significant effects on the dissipation, since it couples the spatial motion of the nucleons with the spin degree of freedom, and gives a mechanism for the collective kinetic energy to excite the internal spin degrees of freedom~\cite{STEVENSON2019PPNP}.
It is well known that the spin-orbit interaction has a relativistic origin, and it is naturally taken into account in a covariant density functional.
One can see from Fig.~\ref{fig7} that the energy dissipations $P_{\rm dis}$ in nonrelativistic TDDFT are much lower than the relativistic ones.
The discrepancies are significantly reduced with further including the time-odd spin-orbit terms in the nonrelativistic TDDFT calculations.
This reveals the fact that a covariant density functional automatically contains both time-even and time-odd spin-orbit interactions.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig8}\\
\caption{(Color online) Density distributions of the separating ions at a given relative distance $R=8.3$ fm for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ head-on collisions with the center-of-mass energies $E_{\rm c.m.}=90$ MeV (top), $130$ MeV (middle), and $170$ MeV (bottom).
The isolines correspond to multiples of $0.02$~fm$^{-3}$.
}\label{fig8}
\end{figure}
The features of energy dissipation can be seen more clearly through the density distributions.
Figure \ref{fig8} shows the density distributions of the separating ions at a given relative distance $R=8.3$ fm for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ head-on collisions with three center-of-mass energies, i.e., $E_{\rm c.m.}=90$ MeV, 130 MeV, and 170 MeV.
With increasing $E_{\rm c.m.}$, the density distribution becomes less diffuse.
This is due to the fact that the collective motion becomes faster for larger $E_{\rm c.m.}$ and, thus, the mean field has less time to rearrange itself and is more likely to keep its identity as the incident nucleus.
This is also consistent with the decreasing trend of the percentage of energy dissipation $P_{\rm dis}$ in Fig.~\ref{fig7}; for the three center-of-mass energies considered here, the corresponding $P_{\rm dis}$ is $84.5\%$, $70.9\%$, and $54.2\%$, respectively, in the TDCDFT calculations.
Similar features were also obtained in the nonrelativistic TDDFT calculations with the time-odd spin-orbit terms~\cite{Dai2014Dissipation}, while here the density distributions are more diffuse in the TDCDFT due to the slightly larger energy dissipation $P_{\rm dis}$ (see Fig.~\ref{fig7}).
\section{Above-barrier fusion cross section }\label{sec_fusion}
The fusion of ${}^{16}{\rm O}+{}^{16}{\rm O}$ at above Coulomb barrier energies is one of the most important benchmarks for the early applications of TDDFT~\cite{KOONIN1976TDHF_O16,Cusson1976TDHFO16, Koonin1977TDHFO16, Flocard1978TDHFO16, Bonche1978TDHFO16, Davies1978TDHFO16}.
The primary reason is that ${}^{16}{\rm O}$ is a light double-magic nucleus, and there are abundant data for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ fusion cross section~\cite{Fernandez1978O16fusion, Tserruya1978O16fusion, Kolata1979O16fusion, Wu1984O16fusion, Thomas1986O16O16fusion}.
The early TDDFT calculations gave a conspicuous transparency for collisions with low angular momenta, which was, however, not observed in experiment.
This problem is known as the ``fusion window anomaly'' and was later resolved by the inclusion of spin-orbit interactions~\cite{Umar1986TDHFLS, Reinhard1988TDHFLS}.
Here, the above-barrier fusion cross section of ${}^{16}{\rm O}+{}^{16}{\rm O}$ is investigated with the newly developed TDCDFT in 3D lattice space.
In the present work, the fusion cross section is calculated by
\begin{equation}\label{Eq_fusion}
\sigma_{\rm fus}(E_{\rm c.m.})=\frac{\pi}{2\mu E_{\rm c.m.}}\sum_{L=0}^{\infty}(2L+1)P_{\rm fus}(L,E_{\rm c.m.}),
\end{equation}
where $\mu$ is the reduced mass of the system, and $P_{\rm fus}(L,E_{\rm c.m.})$ is the fusion probability for the partial wave with orbital angular momentum $L$ at the center-of-mass energy $E_{\rm c.m.}$.
Since ${}^{16}{\rm O}+{}^{16}{\rm O}$ is a system composed of two identical spin-zero nuclei, the cross section must be multiplied by a factor of 2, and the sum over angular momenta in Eq.~\eqref{Eq_fusion} is restricted to even values of $L$.
Due to the mean-field approximation in TDCDFT, the sub-barrier tunneling of the many-body wavefunction is not included, i.e., $P_{\rm fus}=0$ or $1$.
Such a sharp change can be smoothed by the well-known Hill-Wheeler formula~\cite{Hill1953PhysicalReview} with a Fermi function,
\begin{equation}\label{Eq_HillWheeler}
P_{\rm fus}(L,E_{\rm c.m.})=\frac{\exp(x_L)}{1+\exp(x_L)},
\end{equation}
with $x_L=[E_{\rm c.m.}-B(L)]/\varepsilon_0$.
Here, the decay constant $\varepsilon_0$ is chosen as $0.4$ MeV~\cite{Esbensen2012HighEdata}, and $B(L)$ is the position of the angular-momentum-dependent barrier.
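Equations \eqref{Eq_fusion} and \eqref{Eq_HillWheeler} can be combined into a short numerical sketch (the reduced mass, barrier height $B_0$, and radius $R$ below are illustrative placeholders, not the TDCDFT-extracted barriers $B(L)$):

```python
import numpy as np

hbarc = 197.327               # MeV fm
mu = 16.0 * 938.9 / 2.0       # reduced mass of 16O + 16O in MeV (illustrative)

def sigma_fus(E_cm, B, eps0=0.4, L_max=40):
    """Eq. (fusion) with Hill-Wheeler smoothing, Eq. (HillWheeler).
    B maps L to the barrier B(L) in MeV; identical spin-zero nuclei:
    overall factor 2 and even partial waves only."""
    s = 0.0
    for L in range(0, L_max + 1, 2):
        x = (E_cm - B(L)) / eps0
        s += (2 * L + 1) / (1.0 + np.exp(-x))    # exp(x) / (1 + exp(x))
    # pi / (2 mu E) with hbar = c = 1 restored via (hbar c)^2; 1 fm^2 = 10 mb
    return 2.0 * np.pi * hbarc**2 / (2.0 * mu * E_cm) * s * 10.0

# illustrative rotational barrier B(L) = B0 + L(L+1) hbar^2 / (2 mu R^2)
B0, R = 10.0, 8.0
barrier = lambda L: B0 + L * (L + 1) * hbarc**2 / (2.0 * mu * R**2)
print(f"sigma_fus(26.75 MeV) = {sigma_fus(26.75, barrier):.0f} mb")
```

With these placeholder parameters the cross section comes out at the scale of $10^3$ mb near $E_{\rm c.m.}\sim 27$ MeV, i.e., the right order of magnitude for light-ion fusion above the barrier.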
\begin{figure*}[!htbp]
\centering
\includegraphics[width=0.7\textwidth]{fig9}\\
\caption{(Color online) Total density evolutions for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ reactions with the orbital angular momentum $L=20\hbar$.
The first and second rows depict the results at the center-of-mass energy $E_{\rm c.m.}=26.7$ MeV and $26.8$ MeV, respectively.
The isolines correspond to multiples of $0.02$~fm$^{-3}$.
}\label{fig9}
\end{figure*}
To obtain the barriers $B(L)$ with the TDCDFT, the fusion dynamics are examined in terms of semiclassical trajectories.
As an example, the total density evolutions for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ reactions with $L=20\hbar$ are shown in Fig.~\ref{fig9}.
The first and second rows depict the total density evolutions at the center-of-mass energy $E_{\rm c.m.}=26.7$ MeV and $26.8$ MeV, respectively.
For both energies, the two incident nuclei first form a compound system with a neck [see Figs.~\ref{fig9}(b), (c), (f), and (g)].
The compound system then reseparates in a short time at $E_{\rm c.m.}=26.7$ MeV [see Fig.~\ref{fig9}(d)], while it fuses to a more compact system at $E_{\rm c.m.}=26.8$ MeV [see Fig.~\ref{fig9}(h)].
This indicates that the barrier $B(L=20\hbar)$ lies between $26.7$ and $26.8$ MeV; it is thus taken as $26.75$ MeV in this work.
The barriers $B(L)$ for other $L$ values are obtained in the same way: for a given angular momentum $L$, the center-of-mass energy $E_{\rm c.m.}$ is increased in steps of $0.1$ MeV until the transition from non-fusion to fusion is found.
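The energy scan just described can be sketched as follows, where the predicate \texttt{fuses(E, L)} is a hypothetical stand-in for one full TDCDFT trajectory calculation at the given energy and angular momentum:

```python
def find_barrier(fuses, L, e_start, e_step=0.1, e_max=60.0):
    """Raise E_cm in steps of e_step (MeV) from e_start until the
    non-fusion -> fusion transition is bracketed, then return the
    midpoint of the bracketing interval as the barrier B(L).

    `fuses(E, L)` stands in for a semiclassical TDCDFT trajectory run
    (replaced by a cheap mock in the test below)."""
    e = e_start
    if fuses(e, L):
        raise ValueError("e_start must lie below the barrier")
    while not fuses(e + e_step, L):
        e += e_step
        if e > e_max:
            raise ValueError("no transition found below e_max")
    return e + 0.5 * e_step
```
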
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.4\textwidth]{fig10}\\
\caption{(Color online) Above-barrier fusion cross sections as a function of the center-of-mass energy $E_{\rm c.m.}$ for $^{16}{\rm O}+^{16}{\rm O}$ reactions.
The nonrelativistic TDDFT results with the time-odd spin-orbit terms are taken from Ref.~\cite{Simenel2013O16O16}, and the experimental data are taken from Refs.~\cite{Fernandez1978O16fusion, Tserruya1978O16fusion, Kolata1979O16fusion, Wu1984O16fusion, Thomas1986O16O16fusion}.
}\label{fig10}
\end{figure}
With the obtained barriers $B(L)$, the fusion probability $P_{\rm fus}(L,E_{\rm c.m.})$ can be further calculated via the Hill-Wheeler formula Eq.~\eqref{Eq_HillWheeler}.
The above-barrier fusion cross sections $\sigma_{\rm fus}$ thus obtained are shown in Fig.~\ref{fig10}, in comparison with the experimental data~\cite{Fernandez1978O16fusion, Tserruya1978O16fusion, Kolata1979O16fusion, Wu1984O16fusion, Thomas1986O16O16fusion} and the nonrelativistic results.
The calculations overestimate the data of Fernandez \textit{et al.}~\cite{Fernandez1978O16fusion} by around $16\%$ overall.
Note that the TDCDFT calculations are based on a universal functional fitted to the bulk properties of finite nuclei, with no free parameters tied to the reaction mechanism, so this systematic discrepancy is acceptably small.
Due to the quantization of the angular momentum $L$, the cross sections of the TDCDFT calculations exhibit oscillations with respect to $E_{\rm c.m.}$.
Similar oscillations can also be found in the data.
Therefore, one can conclude that the newly developed TDCDFT in 3D lattice space is an effective approach to investigate the nuclear fusion processes.
For comparison, the nonrelativistic TDDFT results with the time-odd spin-orbit terms~\cite{Simenel2013O16O16} are also shown in Fig.~\ref{fig10}, and they are very close to the TDCDFT ones.
Since the spin-orbit interactions are automatically included in the TDCDFT, here the problem of the fusion window anomaly is resolved naturally; otherwise the fusion cross section would be suppressed significantly~\cite{STEVENSON2019PPNP}.
\section{Summary}\label{sec_summ}
In summary, time-dependent covariant density functional theory with the successful density functional PC-PK1 has been developed in a three-dimensional coordinate space without any symmetry restrictions, and benchmark calculations for the ${}^{16}{\rm O}+{}^{16}{\rm O}$ reaction have been performed systematically.
Numerical tests and two primary applications, namely the dissipation dynamics and the above-barrier fusion cross sections, have been carried out.
For a boosted ${}^{16}{\rm O}$, the excitation energy with respect to the boost velocity agrees well with the relativistic kinetic energy, and the total momentum is conserved with a relative deviation around $10^{-5}$ during the time evolution.
For the ${}^{16}{\rm O}+{}^{16}{\rm O}$ head-on collision with the center-of-mass energy $E_{\rm c.m.}=50$ MeV, the total energy and particle number are conserved precisely with the relative deviations respectively around $10^{-4}$ and $10^{-7}$ within a time evolution of 1000 fm/$c$, and the time-reversal invariance is fulfilled quite well.
The dissipation dynamics have been investigated for the deep-inelastic head-on collisions of the ${}^{16}{\rm O}+{}^{16}{\rm O}$ system.
It is revealed that the obtained percentages of the energy dissipation are reasonable and similar to the nonrelativistic TDDFT results with the time-odd spin-orbit terms.
The above-barrier fusion cross section of ${}^{16}{\rm O}+{}^{16}{\rm O}$ is taken as another benchmark, and the experimental data are well reproduced.
These systematic investigations demonstrate that the TDCDFT in 3D lattice space is an effective approach for future studies of nuclear dynamical processes.
\begin{acknowledgments}
This work was partly supported by the National Key R\&D Program of China (Contracts No. 2018YFA0404400 and 2017YFE0116700), the National Natural Science Foundation of China (Grants No. 11621131001, 11875075, 11935003, and 11975031), the State Key Laboratory of Nuclear Physics and Technology, Peking University (No. NPT2020ZZ01), and the China Postdoctoral Science Foundation under Grant No. 2020M670013.
\end{acknowledgments}
\section{Introduction}
Geometric approaches to associating uncertainty with poses, notably for mobile robot localization, have been quite successful in the robotics community over the past decade. Since the discovery that the dispersion of mobile robots under the effect of sensor noise resembles a ``banana'' more than a standard Gaussian ellipse, which can be traced back to \cite{thrun2000real}, studies have evidenced the fact that the Lie group structure of the configuration space $SE(3)$ plays a prominent role in probabilistic robotics, see \cite{barfoot2016state,chirikjian2014gaussian,barfoot2014associating,BayesianLieGroups2011,diemer2015invariant,barrau2013intrinsicp,hertzberg2013integrating,park2008kinematic}. In particular, Gaussian distributions in (Lie) exponential coordinates provide accurate approximations of banana distributions, as first advocated by \cite{long2012banana}. The reader is also referred to the recent monographs \cite{chirikjian2011stochastic,barfoot2016state}.
When inertial sensors (gyrometers and accelerometers) embedded in an Inertial Measurement Unit (IMU) are utilized, one needs to manipulate extended poses, which are 9 dimensional elements that cannot be modeled as elements of $SE(3)$. Moreover, the IMU propagation equations are not amenable to left multiplications on Lie groups of the form ${\mathbf T}_{k+1}={\mathbf T}_k\bm{\Gamma}_k$. This has two consequences. First the results about uncertainty propagation of \cite{barfoot2014associating} are not easily transposed in an IMU context. Then, note that if IMU propagation equations were of the form above we would readily have ${\mathbf T}_{k+N}={\mathbf T}_k(\Pi_k^{N-1}\bm{\Gamma}_i)$, and preintegrating IMU measurements as in \cite{forster2017manifold} would be trivial, which is not the case. As a result, the theory of preintegration on manifolds \cite{forster2017manifold} is more subtle and relies on smart algebraic tricks, see also \cite{martinelli2014closed,eckenhoff2019closed}.
In \cite{barrau2015non,barrau2017invariant} the introduction of the group $SE_2(3)$, along with the discovery of the associated group affine property and hence log-linearity of IMU equations using $SE_2(3)$, proves a major step to overcome these obstacles. It has already led to more robust EKFs for data fusion with IMU, has prompted an industrial product \cite{barrau2018invariant}, has improved EKF-based visual inertial consistency \cite{wu_invariant-ekf_2017, heo_consistent_2018,brossard2017unscented, caruso_2018, heo2018consistent} and robot state estimation \cite{Hartley-RSS-18,hartley2019contact}.
In this paper, we show the group $SE_2(3)$ allows transposing the recent results about estimation of poses using wheel speeds of \cite{barfoot2014associating,wang2006error} to the context of IMUs. More precisely, our main contributions are as follows:
\begin{itemize}
\item We provide a nontrival extension of the approach and results of \cite{barfoot2014associating} (see also \cite{wang2006error,long2012banana}) which deals with position and orientation (i.e. pose) for wheel sensors based robotics, to position, orientation plus velocity (i.e. extended pose) when using IMUs, leveraging the log-linear property of IMU equations of \cite{barrau2017invariant};
\item We provide an explanation of why the method of IMU on-manifold preintegration of \cite{forster2017manifold} exists: it is in fact rooted in the group affine property introduced in \cite{barrau2017invariant}, and the result can be purely expressed by the Lie group $SE_2(3)$. This extends preliminary results of \cite{barrau2018linear} regarding bias and noise free IMU equations;
\item We provide a more rigorous treatment on the Coriolis effect than \cite{indelman2012factor,indelman2013information}, using the introduced mathematical framework and a nontrivial trick, see eq. \eqref{eq:trick}.
\end{itemize}\color{black}
Secondary contributions are as follows. First we rederive the log-linear property of IMU equations \cite{barrau2017invariant} in discrete time using elementary computations. Then, regarding preintegration, we come up with a novel first order development with respect to noise and bias based on Lie exponential coordinates that proves more accurate than the classical Taylor expansion of \cite{forster2017manifold}. We derive additional ``exact'' preintegration formulas when IMUs are either noise free, or bias free.
The paper is organized as follows. Section \ref{sec1} is a summary of our preliminary results about noise free and bias free preintegration recently published in \cite{barrau2018linear}. Section \ref{sec2} proves the unexpected and novel result that IMU based navigation equations where Earth rotation is taken into account have the log-linearity property, and hence allow derivation of IMU preintegration formulas in this context. Section \ref{sec3} presents our theory for associating uncertainty with extended poses, with applications to IMU noise propagation. Finally Section \ref{sec4} deals with IMU biases.
\section{A matrix lie Group approach to IMU preintegration}\label{sec1}
We suggest the reader first familiarize with classical Lie groups of robotics, referring to \cite{barfoot2014associating} and ideally to \cite{barfoot2016state,chirikjian2011stochastic}.
\subsection{Mathematical Preliminary: the Group $SE_2(3)$}
The special orthogonal group $SO(3)$ that encodes orientation ${\bf R}$ of a rigid body in space may be modeled as:
$$
SO(3):=\{{\bf R}\in{\mathbb R}^{3\times 3}\mid {\bf R}^T{\bf R}=I_3,~\det {\bf R}=1\}.
$$
In turn, the set of poses, i.e., position ${\mathbf X}$ and orientation ${\bf R}$, may be modeled using the matrix representation of the special Euclidean group
$$
SE(3):=\{{\mathbf T}=\begin{pmatrix}{\bf R} &{\mathbf X}\\0_{1,2}&1\end{pmatrix}\in{\mathbb R}^{4\times 4}\mid ({\bf R},{\mathbf X})\in SO(3)\times{\mathbb R}^3\}.
$$
Finally to describe \emph{extended poses}, i.e. position ${\mathbf X}$, velocity ${\mathbf V}$ and orientation ${\bf R}$, we introduced the following group
$$SE_2(3):=\{{\mathbf T}=
\begin{pmatrix}{\bf R}&{\mathbf V}&{\mathbf X}\\0_{1, 3} &1&0\\0_{1, 3} &0&1\end{pmatrix}\mid ({\bf R},{\mathbf V},{\mathbf X})\in SO(3)\times{\mathbb R}^3\times{\mathbb R}^3 \},
$$which in \cite{barrau2017invariant} (see also \cite{barrau2015non}) we called the group of ``double direct spatial isometries''. The latter are all matrix Lie groups, embedded respectively in ${\mathbb R}^{3\times 3}$, ${\mathbb R}^{4\times 4}$, and ${\mathbb R}^{5\times 5}$. Matrix multiplication then provides group composition of two elements of $SE_2(3)$. We see the obtained composition is a natural extension of poses composition as elementary computations show $({\bf R}_1,{\mathbf V}_1,{\mathbf X}_1)\cdot ({\bf R}_2,{\mathbf V}_2,{\mathbf X}_2)=({\bf R}_1{\bf R}_2,{\bf R}_1{\mathbf V}_2+{\mathbf V}_1,{\bf R}_1{\mathbf X}_2+{\mathbf X}_1)$.
As in classical Lie group theory, small perturbations of extended poses may be described by elements of the Lie algebra $ \frak{se}_2(3)$. The operator $^\wedge$ turns elements $\bm{\xi}:=(\bm{\omega}^T,{\mathbf v}^T,{\mathbf x}^T)^T\in{\mathbb R}^9$ into elements of the Lie algebra:
$$ \bm{\xi}^\wedge:= \begin{pmatrix} \bm{\omega} \\{\mathbf v}\\{\mathbf x} \end{pmatrix}^\wedge= \begin{pmatrix} (\bm{\omega})_{\times} & {\mathbf v} & {\mathbf x} \\ 0_{1,3} & 0 & 0 \\ 0_{1,3} & 0 & 0 \end{pmatrix}$$where $(\bm{\omega})_{\times}\in{\mathbb R}^{3\times 3}$ denotes the skew symmetric matrix associated with cross product with $\bm{\omega}\in{\mathbb R}^3$. The exponential map conveniently maps small perturbations encoded in ${\mathbb R}^9$ to $SE_2(3)$. For matrix Lie groups it is defined as
$$
\exp(\bm{\xi}):=\exp_m(\bm{\xi}^\wedge),
$$where
$\exp_m$ denotes the classical matrix exponential. The following closed form expression may be shown \cite{barrau2017invariant,barrau2015non}:\begin{align}
\exp(\begin{pmatrix} \bm{\omega} \\{\mathbf v}\\{\mathbf x} \end{pmatrix})=\begin{pmatrix} \exp_m((\bm{\omega})_\times)&N(\bm{\omega}){\mathbf v}&N(\bm{\omega}){\mathbf x}\\0_{1, 3} &1&0\\0_{1, 3} &0&1\end{pmatrix}\label{exp:map}
\end{align}
with $N(\bm{\omega})=I_3 + \frac{1 - \cos(||\bm{\omega}||)}{||\bm{\omega}||^2} (\bm{\omega})_\times + \frac{ ||\bm{\omega}|| -\sin(||\bm{\omega}||)}{||\bm{\omega}||^3}( (\bm{\omega})_\times)^2$, the left Jacobian of $SO(3)$. The Baker-Campbell-Hausdorff (BCH) formula stipulates that for $ \bm{\xi}_1,\bm{\xi}_2\in{\mathbb R}^9$ we have $\exp(\bm{\xi}_1)\exp(\bm{\xi}_2)\approx\exp(\bm{\xi}_1+\bm{\xi}_2)$ up to second order terms in $ \bm{\xi}_1,\bm{\xi}_2$.
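The closed form \eqref{exp:map} can be checked numerically against a generic matrix exponential; below is a minimal NumPy/SciPy sketch (the function names are ours, and the test inputs are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """3x3 skew-symmetric matrix (w)_x such that (w)_x v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def wedge(xi):
    """Map xi = (omega, v, x) in R^9 to the 5x5 matrix xi^ in se_2(3)."""
    M = np.zeros((5, 5))
    M[:3, :3] = skew(xi[:3])
    M[:3, 3] = xi[3:6]
    M[:3, 4] = xi[6:9]
    return M

def N_so3(w):
    """N(omega): the left Jacobian of SO(3)."""
    t = np.linalg.norm(w)
    W = skew(w)
    if t < 1e-8:                       # small-angle fallback
        return np.eye(3) + 0.5 * W
    return (np.eye(3) + (1 - np.cos(t)) / t**2 * W
            + (t - np.sin(t)) / t**3 * W @ W)

def exp_se23(xi):
    """Closed-form exponential on SE_2(3): R = exp(omega_x), columns N(w)v, N(w)x."""
    w = xi[:3]
    T = np.eye(5)
    T[:3, :3] = expm(skew(w))
    T[:3, 3] = N_so3(w) @ xi[3:6]
    T[:3, 4] = N_so3(w) @ xi[6:9]
    return T
```
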
Finally, the so-called adjoint operator is defined by analogy to $SE(3)$ as:
\begin{align}
Ad_{{\mathbf T}}= \begin{pmatrix}{\bf R} & 0_{3,3} & 0_{3,3} \\ ({\mathbf V})_\times {\bf R} & {\bf R} & 0_{3,3}\\ ({\mathbf X})_\times {\bf R} & 0_{3,3} & {\bf R} \end{pmatrix}\in{\mathbb R}^{9\times 9},\label{ad:eq}
\end{align}
where we conveniently describe it as an operator acting directly on ${\mathbb R}^9$ instead of on the Lie algebra $\frak{se}_2(3)$. We have the useful relation that can be considered as a definition:
\begin{align}
{\mathbf T} \exp(\bm{\xi}) {\mathbf T}^{-1}=\exp(Ad_{{\mathbf T}}\bm{\xi}).
\end{align}
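A quick numerical sanity check of \eqref{ad:eq} and the conjugation identity, with arbitrary test values:

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def wedge(xi):
    """xi = (omega, v, x) in R^9 -> 5x5 element of se_2(3)."""
    M = np.zeros((5, 5))
    M[:3, :3] = skew(xi[:3]); M[:3, 3] = xi[3:6]; M[:3, 4] = xi[6:9]
    return M

def adjoint(T):
    """Ad_T acting on R^9, for T in SE_2(3), block structure as in the text."""
    R, V, X = T[:3, :3], T[:3, 3], T[:3, 4]
    Ad = np.zeros((9, 9))
    Ad[:3, :3] = R
    Ad[3:6, :3] = skew(V) @ R; Ad[3:6, 3:6] = R
    Ad[6:9, :3] = skew(X) @ R; Ad[6:9, 6:9] = R
    return Ad
```
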
\subsection{IMU Equations Revisited}
We now summarize recent results \cite{barrau2018linear}.
Let ${\bf R}_t$ denote the rotation matrix encoding the orientation of the IMU, and let ${\mathbf X}_t$ and ${\mathbf V}_t$ denote position and velocity of the IMU. Let ${\mathbf a}_t$ denote the specific acceleration, that is, true acceleration minus gravity vector ${\mathbf g}$ expressed in the body frame, and $\bm{\omega}_t$ the angular velocity expressed in the body frame. The dynamical motion equations on flat Earth write (see \cite{forster2017manifold}):
\begin{align}
{\frac{d}{dt}} {\bf R}_t={\bf R}_t(\bm{\omega}_t)_\times,\quad{\frac{d}{dt}} {\mathbf V}_t={\mathbf g}+{\bf R}_t {\mathbf a}_t,\quad{\frac{d}{dt}}{\mathbf X}_t={\mathbf V}_t\label{nav:eq}
\end{align}
If we associate a matrix ${\mathbf T}_t\in SE_2(3)$ to the extended pose $({\bf R}_t,{\mathbf V}_t,{\mathbf X}_t)$, we noticed in \cite{barrau2017invariant} that \eqref{nav:eq} may be rewritten as
\begin{align}
\frac{d}{dt} {\mathbf T}_t= {\mathbf W}_t {\mathbf T}_t + f ({\mathbf T}_t) + {\mathbf T}_t {\mathbf U}_t,\label{f(ab):eq}
\end{align}
where the various matrices at play write\begin{equation}\label{various:eq1}\begin{aligned}
&{\mathbf W}_t =
\begin{pmatrix}
0_3 & {\mathbf g} & 0 \\
0_{2,3}& 0_{2,1}&0_{2,1}
\end{pmatrix}, \quad
{\mathbf U}_t=
\begin{pmatrix}
(\bm{\omega}_t)_{\times} & {\mathbf a}_t & 0 \\
0_{2,3}& 0_{2,1}&0_{2,1}
\end{pmatrix},\\
& f ({\mathbf T}_t)=
\begin{pmatrix}
0_3 & 0_{3,1} & {\mathbf V}_t \\
0_{2,3}& 0_{2,1}&0_{2,1}
\end{pmatrix}.
\end{aligned}\end{equation}
It can be easily checked to verify the group affine property:
\begin{dfn}[Group affine dynamics \cite{barrau2017invariant}]
Let $G$ be a Lie group. Dynamics ${\frac{d}{dt}} {\mathbf T}_t=g({\mathbf T}_t)$ on $G$ is \emph{group affine} if it verifies for any couple ${\mathbf T},\tilde {\mathbf T} \in G$ the relation:
\begin{align}
g({\mathbf T}\tilde {\mathbf T}) = g({\mathbf T})\tilde {\mathbf T} + {\mathbf T} g(\tilde{\mathbf T}) -{\mathbf T} g(Id)\tilde {\mathbf T}
\label{gpaff}\end{align}
\end{dfn}
This is the key property opening the door to invariant filtering, autonomous error variables, log-linearity and EKF stability leveraged in e.g. \cite{hartley2019contact,barrau2018invariant}.
The next section summarizes the links between the latter formulation of inertial navigation and the theory of preintegration of \cite{lupton2012visual,forster2017manifold}. Notably we have shown in \cite{barrau2018linear} any equation of the form \eqref{f(ab):eq} on a matrix Lie group may in fact be preintegrated.
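The group affine property \eqref{gpaff} of the IMU dynamics \eqref{f(ab):eq} is easy to verify numerically; the gravity, gyro, and accelerometer inputs below are arbitrary test values:

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

# frozen gravity, gyro, and accelerometer inputs (hypothetical values)
W_mat = np.zeros((5, 5)); W_mat[:3, 3] = [0.0, 0.0, -9.81]
U_mat = np.zeros((5, 5))
U_mat[:3, :3] = skew([0.1, 0.2, -0.3]); U_mat[:3, 3] = [1.0, -0.5, 0.2]

def f(T):
    """The f of the dynamics: the position derivative receives the velocity column."""
    M = np.zeros((5, 5)); M[:3, 4] = T[:3, 3]
    return M

def g_dyn(T):
    """Right-hand side dT/dt = W T + f(T) + T U."""
    return W_mat @ T + f(T) + T @ U_mat

def random_se23(rng):
    """Draw a generic element of SE_2(3)."""
    T = np.eye(5)
    T[:3, :3] = expm(skew(rng.standard_normal(3)))
    T[:3, 3] = rng.standard_normal(3)
    T[:3, 4] = rng.standard_normal(3)
    return T
```
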
\subsection{Preintegration of Group Affine Dynamics}
\begin{prop}If $f$ satisfies $f({\mathbf T}\tilde {\mathbf T}) = f({\mathbf T})\tilde {\mathbf T} + {\mathbf T} f(\tilde{\mathbf T})$, then dynamics $\frac{d}{dt} {\mathbf T}_t= g({\mathbf T}_t):={\mathbf W}_t {\mathbf T}_t + f ({\mathbf T}_t) + {\mathbf T}_t {\mathbf U}_t$ is easily verified to define group affine dynamics.
\end{prop}
\color{black}
\begin{prop}[\cite{barrau2018linear} Corollary 9]\label{prop::pre-integration}Assuming $f({\mathbf T}\tilde {\mathbf T}) = f({\mathbf T})\tilde {\mathbf T} + {\mathbf T} f(\tilde{\mathbf T})$, which is obviously the case with $f$ as in \eqref{various:eq1}, the solution ${\mathbf T}_t$ at arbitrary $t$ of equation \eqref{f(ab):eq} can be written as a function of the initial value ${\mathbf T}_0$ as:
\begin{align}
{\mathbf T}_t=\bm{\Gamma}_t\Phi_t({\mathbf T}_0)\bm{\Upsilon}_t\label{discrete}
\end{align}where $\bm{\Gamma}_t,\bm{\Upsilon}_t$ are solution to differential equations involving \emph{only} ${\mathbf W}_t,{\mathbf U}_t$, and where $\Phi_t$ only depends on $t$. Solving the corresponding equations (see \cite{barrau2018linear}) in the particular case of equations \eqref{nav:eq} on $SE_2(3)$ with values given by \eqref{various:eq1} yields
\begin{align}
\Phi_t:
\begin{pmatrix} {\bf R}&{\mathbf V}&{\mathbf X}\\0_{1, 3} &1&0\\0_{1, 3} &0&1\end{pmatrix}\mapsto\begin{pmatrix} {\bf R}&{\mathbf V}&{\mathbf X}+t{\mathbf V}\\0_{1, 3} &1&0\\0_{1, 3} &0&1\end{pmatrix}, \label{phi:eq}\\
\bm{\Gamma}_t= \begin{pmatrix}I_3&t{\mathbf g}&\frac{1}{2}{\mathbf g} t^2\\0_{1, 3} &1&0\\0_{1, 3} &0&1\end{pmatrix}, ~\bm{\Upsilon}_t=\begin{pmatrix} {\bf R}_t^\upsilon&{\mathbf V}_t^\upsilon&{\mathbf X}_t^\upsilon\\0_{1, 3} &1&0\\0_{1, 3} &0&1\end{pmatrix}, \label{phi2:eq}
\end{align}
where the latter quantities are defined by
\begin{equation}
\begin{aligned}
{\bf R}_0^\upsilon&=I_3, ~{\frac{d}{dt}} {\bf R}_t^\upsilon={\bf R}_t^\upsilon(\bm{\omega}_t)_\times,~~ {\mathbf V}_0^\upsilon&=0,~{\frac{d}{dt}} {\mathbf V}_t^\upsilon={\bf R}_t^\upsilon{\mathbf a}_t,\\{\mathbf X}_0^\upsilon&=0,~{\frac{d}{dt}} {\mathbf X}_t^\upsilon={\mathbf V}_t^\upsilon.
\end{aligned}\label{ODE}\end{equation}
\end{prop}
In \cite{forster2017manifold}, the quantities ${\bf R}_t^\upsilon, {\mathbf V}_t^\upsilon, {\mathbf X}_t^\upsilon$ are referred to as the Delta preintegrated measurements: they are based solely on the inertial measurements and do \emph{not} depend on the initial state ${\mathbf T}_0$.
This allows one to define constraints between extended poses at temporally distant key frames based on a \emph{unique} (pre)integration of IMU outputs \eqref{ODE}, no matter how many relinearizations are then used in the optimization scheme.
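As a numerical illustration of Proposition \ref{prop::pre-integration}, one can integrate \eqref{f(ab):eq} and \eqref{ODE} side by side with constant inputs and a hypothetical initial extended pose, and check the factorization \eqref{discrete}:

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

g = np.array([0.0, 0.0, -9.81])
W = np.zeros((5, 5)); W[:3, 3] = g                      # gravity feeds velocity
U = np.zeros((5, 5))
U[:3, :3] = skew([0.3, -0.2, 0.1]); U[:3, 3] = [1.0, 0.5, -0.3]  # gyro / accel

def f(T):
    M = np.zeros((5, 5)); M[:3, 4] = T[:3, 3]           # dX/dt = V
    return M

def rk4(rhs, Y, h, n):
    """Classical RK4 on a matrix-valued ODE."""
    for _ in range(n):
        k1 = rhs(Y); k2 = rhs(Y + 0.5 * h * k1)
        k3 = rhs(Y + 0.5 * h * k2); k4 = rhs(Y + h * k3)
        Y = Y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return Y

h, n = 1e-3, 1000
t = h * n
T0 = np.eye(5)                                          # hypothetical initial state
T0[:3, :3] = expm(skew([0.2, -0.1, 0.4]))
T0[:3, 3] = [0.1, -0.2, 0.3]                            # V0
T0[:3, 4] = [1.0, 2.0, 0.5]                             # X0

# full dynamics vs. state-independent preintegrated factor Upsilon
T_direct = rk4(lambda T: W @ T + f(T) + T @ U, T0, h, n)
Ups = rk4(lambda Y: f(Y) + Y @ U, np.eye(5), h, n)

Gamma = np.eye(5); Gamma[:3, 3] = t * g; Gamma[:3, 4] = 0.5 * g * t**2
PhiT0 = T0.copy(); PhiT0[:3, 4] = T0[:3, 4] + t * T0[:3, 3]   # Phi_t: X -> X + t V
```
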
\section{IMU preintegration with rotating Earth}\label{sec2}
Many applications require accurate localization over long time scales based on accurate inertial sensors.
Applying factor based optimization techniques to accurate inertial navigation systems requires taking into account Earth rotation and the Coriolis effect. To date, and to the best of our knowledge, the theory of on-manifold preintegration cannot handle the corresponding equations exactly, as the work of \cite{forster2017manifold} is based on the non-rotating Earth equations \eqref{nav:eq}.
\subsection{IMU Equations with Rotating Earth are Group Affine}Accounting for Earth rotation, \eqref{nav:eq} becomes (see e.g. \cite{farrell2008aided}):
\begin{equation}
\begin{aligned}
{\frac{d}{dt}} {\bf R}_t & = -\Omega_{\times} {\bf R}_t + {\bf R}_t(\bm{\omega}_t)_\times, \\
{\frac{d}{dt}} {\mathbf V}_t & = {\mathbf g}+{\bf R}_t{\mathbf a}_t - 2 \Omega_{\times} {\mathbf V}_t - \Omega_{\times}^2 {\mathbf X}_t, \\
{\frac{d}{dt}} {\mathbf X}_t & = {\mathbf V}_t
\end{aligned}\label{nav:eq_Coriolis}
\end{equation}
where $\Omega$ is the Earth rotation vector written in the local (geographic) reference frame. The term $- 2 \Omega_{\times} {\mathbf V}_t$ is called the Coriolis force while the term $- \Omega_{\times}^2 {\mathbf X}_t$ is called the centrifugal force\footnote{To be perfectly accurate, this second term is the varying part of the centrifugal force, which actually writes $-\Omega_{\times}^2 ({\mathbf X}_t-p_0)$ with $p_0$ a point on the Earth rotation axis. But expanding the parenthesis we obtain a constant term $\Omega_{\times}^2 p_0$ which can simply be added to $g$. And this is already the case: the $g$ we are familiar with (with approximate magnitude $9.81~\mathrm{m\,s^{-2}}$) is actually the sum of the Newton gravitation force and the centrifugal force due to Earth rotation. Hence the residual term $- \Omega_{\times}^2 {\mathbf X}_t$.}. Eq. \eqref{nav:eq_Coriolis} seemingly does not lend itself to the application of Prop. \ref{prop::pre-integration}. However, if we introduce an auxiliary variable:
\begin{align}
{\mathbf V}_t' = {\mathbf V}_t + \Omega_{\times} {\mathbf X}_t\label{eq:trick}
\end{align}
replacing velocity ${\mathbf V}_t$, \eqref{nav:eq_Coriolis} unexpectedly simplifies to:
\begin{align*}
{\frac{d}{dt}} {\bf R}_t & = -\Omega_{\times} {\bf R}_t + {\bf R}_t(\omega)_\times, \\
{\frac{d}{dt}} {\mathbf V}_t' & = {\mathbf g}+{\bf R}_t {\mathbf a}_t - \Omega_{\times} {\mathbf V}_t', \quad
{\frac{d}{dt}} {\mathbf X}_t = {\mathbf V}_t' - \Omega_{\times} {\mathbf X}_t
\end{align*}
This trick allows embedding the state into a matrix Lie group that fits into the framework of Eq. \eqref{f(ab):eq}:
\begin{align}
\frac{d}{dt} {\mathbf T}_t'= {\mathbf W}_t' {\mathbf T}_t' + f ({\mathbf T}_t') + {\mathbf T}_t' {\mathbf U}_t,
\end{align}
where $f(\cdot)$ and ${\mathbf U}_t$ are unchanged, and ${\mathbf T}_t',{\mathbf W}_t'$ write:
\begin{equation}
{\mathbf T}_t'=\begin{pmatrix}
{\bf R}_t&{\mathbf V}_t'&{\mathbf X}_t \\
0_{1, 3} &1&0\\
0_{1, 3} &0&1
\end{pmatrix},
\qquad
{\mathbf W}_t' =
\begin{pmatrix}
\Omega_{\times} & {\mathbf g} & 0 \\
0_{2,3}& 0_{2,1}&0_{2,1}
\end{pmatrix},
\end{equation}
proving \eqref{nav:eq_Coriolis} are group affine, which is a novel result.
\subsection{Preintegration with Coriolis Effect}
Using Proposition \ref{prop::pre-integration} we know the equations can be preintegrated. The explicit formulae, given in \eqref{eq::pre-integration-coriolis} below, may thus be derived along the lines of Section \ref{sec1}.
\begin{prop}[Preintegration with Coriolis effect]
\label{prop::coriolis}
The IMU equations \eqref{nav:eq_Coriolis} accounting for Coriolis and centrifugal force with initial state ${\bf R}_0, {\mathbf V}_0, {\mathbf X}_0$ write exactly (no approximation is made):
\begin{equation}
\label{eq::pre-integration-coriolis}
\begin{aligned}
{\bf R}_t & = \bm{\Gamma}_t^R {\bf R}_0 {\bf R}_t^\upsilon ,\\
{\mathbf X}_t & = \bm{\Gamma}_t^x + \bm{\Gamma}_t^R {\bf R}_0 {\mathbf X}_t^\upsilon + t \bm{\Gamma}^R_t {\mathbf V}_0' + \bm{\Gamma}^R_t {\mathbf X}_0, \\
{\mathbf V}_t & = \bm{\Gamma}_t^v + \bm{\Gamma}_t^R {\bf R}_0 {\mathbf V}_t^\upsilon + \bm{\Gamma}^R_t {\mathbf V}_0' - \Omega_{\times} {\mathbf X}_t,
\end{aligned}
\end{equation}
with ${\mathbf V}_0'={\mathbf V}_0+\Omega_{\times} {\mathbf X}_0$, and where ${\bf R}_t^\upsilon,{\mathbf V}_t^\upsilon,{\mathbf X}_t^\upsilon$ are the same as in \eqref{ODE} while $\bm{\Gamma}^R_t, \bm{\Gamma}^v_t, \bm{\Gamma}^x_t$ are defined through the following equations that do \emph{not} involve the state:\begin{equation}
\begin{aligned}
&\bm{\Gamma}_0^R = I_3, \quad \bm{\Gamma}_0^v = 0_{3,1}, \quad \bm{\Gamma}_0^x = 0_{3,1}
;\quad
{\frac{d}{dt}} \bm{\Gamma}_t^R = -\Omega_{\times} \bm{\Gamma}_t^R , \\&\quad {\frac{d}{dt}} \bm{\Gamma}_t^v = {\mathbf g} - \Omega_{\times} \bm{\Gamma}^v_t, \quad {\frac{d}{dt}} \bm{\Gamma}_t^x = \bm{\Gamma}_t^v - \Omega_{\times} \bm{\Gamma}_t^x.
\end{aligned}\label{ODE21}\end{equation}
\end{prop}
\paragraph{Proof}
After having checked that the initial conditions match, we need to check that the quantities defined by Eq. \eqref{eq::pre-integration-coriolis} verify Eq. \eqref{nav:eq_Coriolis}. The relation ${\frac{d}{dt}} {\bf R}_t = -\Omega_{\times} {\bf R}_t + {\bf R}_t (\bm{\omega}_t)_{\times}$ follows easily, while, using matrix product differentiation rules and the definitions \eqref{ODE} and \eqref{ODE21}, then rearranging terms, we have:
\begin{align*}
{\frac{d}{dt}} {\mathbf X}_t & = (\bm{\Gamma}^v_t + \bm{\Gamma}_t^R {\bf R}_0 {\mathbf V}_t^\upsilon + \bm{\Gamma}^R_t {\mathbf V}_0') \\
& - \Omega_{\times} (\bm{\Gamma}^x_t + \bm{\Gamma}_t^R {\bf R}_0 {\mathbf X}_t^\upsilon+ t \bm{\Gamma}^R_t {\mathbf V}_0'+ \bm{\Gamma}^R_t {\mathbf X}_0)
\end{align*}
which we recognize as ${\mathbf V}_t$. Now, differentiating ${\mathbf V}_t$ the same way and using the relation ${\frac{d}{dt}} {\mathbf X}_t = {\mathbf V}_t$ we obtain:
\begin{align*}
{\frac{d}{dt}} {\mathbf V}_t = {\mathbf g} + {\bf R}_t {\mathbf a}_t -\Omega_{\times} ( \bm{\Gamma}_t^v +\bm{\Gamma}_t^R {\bf R}_0 {\mathbf V}_t^\upsilon + \bm{\Gamma}^R_t {\mathbf V}_0' + {\mathbf V}_t).
\end{align*}
But using the last equality of Eq. \eqref{eq::pre-integration-coriolis} we have $\bm{\Gamma}_t^v +\bm{\Gamma}_t^R {\bf R}_0 {\mathbf V}_t^\upsilon + \bm{\Gamma}^R_t {\mathbf V}_0' = {\mathbf V}_t + \Omega_{\times} {\mathbf X}_t$ and we end up with:
$$
{\frac{d}{dt}} {\mathbf V}_t = {\mathbf g} + {\bf R}_t {\mathbf a}_t -2 \Omega_{\times}{\mathbf V}_t - \Omega_{\times}^2 {\mathbf X}_t.
$$
$\blacksquare$
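Proposition \ref{prop::coriolis} can likewise be checked numerically by integrating \eqref{nav:eq_Coriolis}, \eqref{ODE}, and \eqref{ODE21} side by side; all numeric inputs below are hypothetical (the Earth rate is exaggerated to make the effect visible):

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

Ox = skew([0.0, 5e-3, 6e-3])                 # exaggerated Earth-rate (rad/s)
g = np.array([0.0, 0.0, -9.81])
wb = skew([0.2, -0.1, 0.3])                  # constant body rates
ab = np.array([0.5, 1.0, -0.2])              # constant specific acceleration

def pack(R, V, X): return np.concatenate([R.ravel(), V, X])
def unpack(y): return y[:9].reshape(3, 3), y[9:12], y[12:15]

def rhs_direct(y):                           # full dynamics with Coriolis terms
    R, V, X = unpack(y)
    return pack(-Ox @ R + R @ wb, g + R @ ab - 2 * Ox @ V - Ox @ Ox @ X, V)

def rhs_ups(y):                              # state-independent Deltas
    R, V, X = unpack(y)
    return pack(R @ wb, R @ ab, V)

def rhs_gam(y):                              # Gamma^R, Gamma^v, Gamma^x ODEs
    GR, Gv, Gx = unpack(y)
    return pack(-Ox @ GR, g - Ox @ Gv, Gv - Ox @ Gx)

def rk4(rhs, y, h, n):
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2); k4 = rhs(y + h * k3)
        y = y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

h, n = 1e-3, 2000
t = h * n
R0 = expm(skew([0.1, 0.2, -0.1]))            # hypothetical initial state
V0, X0 = np.array([1.0, 0.0, 0.5]), np.array([10.0, -5.0, 2.0])

R_t, V_t, X_t = unpack(rk4(rhs_direct, pack(R0, V0, X0), h, n))
Ru, Vu, Xu = unpack(rk4(rhs_ups, pack(np.eye(3), np.zeros(3), np.zeros(3)), h, n))
GR, Gv, Gx = unpack(rk4(rhs_gam, pack(np.eye(3), np.zeros(3), np.zeros(3)), h, n))

V0p = V0 + Ox @ X0                           # the auxiliary variable V0'
X_pred = Gx + GR @ R0 @ Xu + t * (GR @ V0p) + GR @ X0
```
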
The latter novel mathematical result opens avenues for the application of factor based optimization methods such as
GTSAM \cite{dellaertFactor2012} or $g^2o$ \cite{kummerleG2o2011} to real time high performance localization and SLAM based on the use of high-grade IMUs, along the lines of
\cite{forster2017manifold}.
\begin{rem}
The works \cite{indelman2012factor,indelman2013information} attacked factor graph based accurate navigation. The formulas for preintegration with Coriolis effect in the appendix of \cite{indelman2013information}, based on the early approach to preintegration \cite{lupton2012visual}, are not presented as exact, and indeed are not, as can be checked for instance by propagating Eq. (35) of \cite{indelman2013information} for one time step, which yields:
$$
v_{j+1}^{L_{j+1}} = R_{L_j}^{L_{j+1}} \left(v_j^{L_j} + R_{b_j}^{L_i} a_j \Delta t + \left[g^{L_i}- 2 \left[\omega_{iL_i}^{L_i} \right]_\times v_i^{L_i}\right] \Delta t \right).
$$
We obtain a term $- 2 \left[\omega_{iL_i}^{L_i} \right]_\times v_i^{L_i}$ in place of the expected $- 2\left[ \omega_{iL_i}^{L_i} \right]_\times v_j^{L_j}$ (index $i$ of $v$ should be $j$): we see the Coriolis term is actually approximated by its value at initial time $t_i$.
\end{rem}
\section{Associating uncertainty with extended poses}\label{sec3}
The goal of the present section is twofold. First, it shows how to account for noise in our approach to preintegration. Then, and more importantly, it provides a generalization of various methods and results of \cite{barfoot2014associating}, devoted to $SE(3)$, to the case of extended poses of $SE_2(3)$. This extension is independent of the theory of preintegration and is a contribution in itself. It is not trivial since, even using the recently introduced $SE_2(3)$ group, IMU propagation is not amenable to Lie group compounding ${\mathbf T}_{k+1}\exp(\bm{\xi}_{k+1})={\mathbf T}_{k} \exp(\bm{\xi}_{k})\bar {\mathbf T}\exp(\bm{\xi})$ as considered in \cite{barfoot2014associating}. The unexpected log-linearity property of \cite{barrau2017invariant}, which we rederive more simply below, plays a key role.
\subsection{Associating uncertainty with elements of $SE_2(3)$}
Using the exponential map of $SE(3)$ to describe statistical dispersion of poses has been often advocated. In the robotics community, early attempts date back to \cite{smith1986representation}, and references \cite{chirikjian2011stochastic,barfoot2016state,chirikjian2014gaussian,barfoot2014associating,BayesianLieGroups2011,diemer2015invariant,barrau2013intrinsicp,hertzberg2013integrating,park2008kinematic,long2012banana,wang2006error} revolve around those ideas. Gaussians in exponential coordinates are also referred to as concentrated Gaussians \cite{bourmaud2013discrete}.
We define a (concentrated) Gaussian on $SE_2(3)$ as
\begin{align}
{\mathbf T} =\bar{\mathbf T} \exp(\bm{\xi})\label{error:rep1},
\end{align}
where $\bar {\mathbf T}$ is a noise free ``mean'' of the distribution and $\bm{\xi}\sim\mathcal N(0,\bm{\Sigma})$ is a multivariate Gaussian in ${\mathbb R}^9$. Each component of $\bm{\xi}=(\bm{\omega}^T,{\mathbf v}^T,{\mathbf x}^T)^T$ corresponds to a degree of freedom.
\subsection{Propagation of Errors through Noise Free IMU Model}
We now come back to the widespread flat Earth model \eqref{nav:eq} and consider discrete time, leveraging formula \eqref{discrete}. In discrete time with time step $\Delta t$, denoting $\bm{\Gamma}_k:=\bm{\Gamma}_{\Delta t}$, $\Phi:=\Phi_{\Delta t}$ and $\bm{\Upsilon}_k:=\bm{\Upsilon}_{\Delta t}$, we use \eqref{discrete} to get \begin{align}{\mathbf T}_{k+1}=\bm{\Gamma}_k\Phi({\mathbf T}_k)\bm{\Upsilon}_k.\label{discrete2}\end{align}
\begin{rem}
Contrary to \cite{forster2017manifold} our matrix group based preintegration formula \eqref{discrete} is an exact discretization of \eqref{nav:eq}. However it involves \eqref{ODE} that needs to be numerically solved (but the beauty of preintegration is that it needs be solved only once, and then the same formula \eqref{discrete} may be used over and over for various initial conditions). \color{black}
As IMU measurements come in discrete time, albeit at a high rate, we may call $\Delta t$ the discretization step and assume ${\mathbf a}_t$, $\bm{\omega}_t$ to be constant over time intervals of small size $\Delta t$, along the lines of \cite{forster2017manifold} (in IMUs this assumption is inevitable, and its negative impact is mitigated by the high frequency of measurements).\color{black} The solution to \eqref{discrete} is based on \eqref{ODE}, whose solution then writes $ {\bf R}_{t+\Delta t}^\upsilon= {\bf R}_{t}^\upsilon\exp_m((\bm{\omega}_t)_\times \Delta t)$, ${\mathbf V}_{t+\Delta t}^\upsilon={\mathbf V}_{t}^\upsilon+ {\bf R}_{t}^\upsilon{\mathbf a}_t\Delta t$, and ${\mathbf X}_{t+\Delta t}^\upsilon={\mathbf X}_{t}^\upsilon+ {\mathbf V}_{t}^\upsilon\Delta t+ \frac{1}{2}{\bf R}_{t}^\upsilon{\mathbf a}_t\Delta t^2$ under the approximation that $ {\bf R}_{t}^\upsilon$ is also constant over $\Delta t$ in the second equation.
\end{rem}
We are now in a position to derive a preliminary yet remarkable result regarding propagation through noise free IMU equations of a concentrated Gaussian \eqref{error:rep1}.
\begin{prop}[Extended pose error propagation]Let ${\mathbf T}_k =\bar{\mathbf T}_k \exp(\bm{\xi}_k)$ where both ${\mathbf T}_k $ and $\bar{\mathbf T}_k$ evolve through noise free model \eqref{discrete2}. The propagation of discrepancy $\bm{\xi}_k\in{\mathbb R}^9$ between $\bar{\mathbf T}_k$ and ${\mathbf T}_k$ writes $\bm{\xi}_{k+1}=Ad_{\bm{\Upsilon}^{-1} } F\bm{\xi}_k$ with $F(\bm{\omega},{\mathbf v},{\mathbf x})^T:=(\bm{\omega},{\mathbf v},{\mathbf x}+\Delta t {\mathbf v})^T$, i.e., \begin{align}
{\mathbf T}_{k+1}:=\bm{\Gamma}_k \Phi (\bar {\mathbf T}_k \exp(\bm{\xi}_k))\bm{\Upsilon}_k =\bar {\mathbf T}_{k+1}\exp(Ad_{\bm{\Upsilon}_k^{-1} } F\bm{\xi}_k)
\label{magic:formula}\end{align}\label{bananaprop}
\end{prop}
\paragraph{Proof}We have
${\mathbf T}_{k+1}=\bm{\Gamma} \Phi (\bar {\mathbf T}_k \exp(\bm{\xi}_k))\bm{\Upsilon} \\
=\bm{\Gamma} \Phi (\bar {\mathbf T}_k)\Phi ( \exp(\bm{\xi}_k))\bm{\Upsilon} $ where we used $\Phi ( {\mathbf T}_1 {\mathbf T}_2)=\Phi ( {\mathbf T}_1 )\Phi ( {\mathbf T}_2)$ as easily shown from \eqref{phi:eq}. Thus
$
{\mathbf T}_{k+1} =\bm{\Gamma} \Phi (\bar {\mathbf T}_k)\bm{\Upsilon}\bUpsilon^{-1} \Phi ( \exp(\bm{\xi}_k))\bm{\Upsilon}
=\bar {\mathbf T}_{k+1}Ad_{\bm{\Upsilon}^{-1} }(\Phi ( \exp(\bm{\xi}_k))).
$
Using the expression \eqref{exp:map} and \eqref{phi:eq} we see that
\begin{align}
\Phi(\exp(\bm{\xi}))=\exp(F\bm{\xi})\label{line16}
\end{align}
where $F(\bm{\omega},{\mathbf v},{\mathbf x})^T=(\bm{\omega},{\mathbf v},{\mathbf x}+\Delta t {\mathbf v})^T$. Using that matrix exponential commutes with conjugation we get $Ad_{\bm{\Upsilon}^{-1} } ( \exp(F\bm{\xi}))=\exp(Ad_{\bm{\Upsilon}^{-1} } F\bm{\xi})$, proving the result. $\blacksquare$
We have just proved, in discrete time and using elementary means, the log-linear property of \cite{barrau2017invariant}, which deals with continuous time. This demonstrates the interest of the error representation \eqref{error:rep1} based on exponential coordinates in $SE_2(3)$: \eqref{magic:formula} shows that the errors $\bm{\xi}_k$, expressed in those coordinates in
${\mathbb R}^9$, propagate \emph{linearly} through the IMU equations \eqref{discrete} and hence \eqref{nav:eq}, as \eqref{ad:eq} ensures that $Ad_{\bm{\Upsilon}^{-1} } F\in{\mathbb R}^{9\times 9}$.
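The key identity \eqref{line16}, $\Phi(\exp(\bm{\xi}))=\exp(F\bm{\xi})$, can be checked numerically. The sketch below (Python; helper names are illustrative) assumes, as in the flat-Earth equations, that $\Phi$ maps an extended pose $({\bf R},{\mathbf V},{\mathbf X})$ to $({\bf R},{\mathbf V},{\mathbf X}+\Delta t\,{\mathbf V})$ on its $5\times 5$ matrix representation, matching $F(\bm{\omega},{\mathbf v},{\mathbf x})=(\bm{\omega},{\mathbf v},{\mathbf x}+\Delta t\,{\mathbf v})$.

```python
import numpy as np
from scipy.linalg import expm

dt = 0.1

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def hat(xi):
    """Embed xi = (omega, v, x) in R^9 as a 5x5 matrix of the Lie algebra se_2(3)."""
    M = np.zeros((5, 5))
    M[:3, :3] = skew(xi[0:3])
    M[:3, 3] = xi[3:6]
    M[:3, 4] = xi[6:9]
    return M

def Phi(T):
    """Assumed flat-Earth flow: (R, V, X) -> (R, V, X + dt*V) on SE_2(3) matrices."""
    T2 = T.copy()
    T2[:3, 4] += dt * T2[:3, 3]
    return T2

F = np.eye(9)
F[6:9, 3:6] = dt * np.eye(3)  # F(omega, v, x) = (omega, v, x + dt*v)

xi = np.random.default_rng(0).standard_normal(9)
assert np.allclose(Phi(expm(hat(xi))), expm(hat(F @ xi)))  # Phi(exp(xi)) = exp(F xi)
```

The identity holds exactly (not just to first order), because the left Jacobian factor in the $SE_2(3)$ exponential acts linearly on both vector parts.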
\begin{rem}
\cite{barfoot2014associating} considers propagation by compounding and proves that noise-free propagation $ {\mathbf T}_{k+1} = \bm{\Gamma}_k{\mathbf T}_{k} \bm{\Upsilon}_k$, in place of the more sophisticated dynamics \eqref{discrete2}, preserves statistical distributions of the form \eqref{error:rep1}, see Eq. (26) in Reference \cite{barfoot2014associating}. It also explains why the dispersion of wheeled robots is banana shaped \cite{long2012banana,chirikjian2011stochastic,barfoot2016state,chirikjian2014gaussian}. We recover this result in a more complex case, as $\Phi$ does not boil down to simple multiplications. What saves the day, though, is that $\Phi$ is a group automorphism, i.e. $\Phi ( {\mathbf T}_1 {\mathbf T}_2)=\Phi ( {\mathbf T}_1 )\Phi ( {\mathbf T}_2)$, see also \cite{barrau2018linear}.\color{black}\label{rem1}
\end{rem}
\subsection{IMU Noise Model}\label{justify:sec}
In practice, IMUs actually measure $\tilde {\mathbf a}_k:={\mathbf a}_k-{\mathbf b}_k^a-\bm{\eta}_k^a$ and $\tilde \bm{\omega}_k:=\bm{\omega}_k-{\mathbf b}_k^g-\bm{\eta}_k^g$, with ${\mathbf b}_k=({\mathbf b}_k^g,{\mathbf b}_k^a)\in{\mathbb R}^6$ the gyroscope and accelerometer biases, and where $\bm{\eta}$ represents sensor noises. For now, let us assume ${\mathbf b}=0$, as biases will be the focus of Section \ref{sec4}. Integrating \eqref{ODE} for one time step yields, to first order in $\Delta t$:
\begin{align}
\bm{\Upsilon}=\begin{pmatrix} \exp_m((\tilde\bm{\omega})_\times\Delta t) &\tilde{\mathbf a}\Delta t&0_{3, 1}\\0_{1, 3} &1&0\\0_{1, 3} &0&1\end{pmatrix}.\label{backtofuture}
\end{align}A simple matrix multiplication shows that this implies the following first-order relation between the factor $\bm{\Upsilon}$ associated with noisy inertial increments and the factor $\bar\bm{\Upsilon}$ associated with noise-free ones\begin{align}
\bm{\Upsilon}\approx\bar\bm{\Upsilon}\exp(\bm{\eta}^g\Delta t,\bm{\eta}^a\Delta t,0):=\bar\bm{\Upsilon}\exp(\bm{\eta}_k),
\end{align}
and we let $\bm{\eta}_k:=(\bm{\eta}_k^g\Delta t,\bm{\eta}_k^a\Delta t,0)\in{\mathbb R}^9$. Recalling \eqref{discrete2}, we then have in the presence of IMU noise the motion model:
\begin{align}{\mathbf T}_{k+1}=\bm{\Gamma}_k\Phi({\mathbf T}_k) \bar\bm{\Upsilon}_k\exp(\bm{\eta}_k),\label{discrete3}\end{align}
and we now seek to describe how the distribution \eqref{error:rep1}, i.e., its associated parameters $\bar{\mathbf T},\bm{\Sigma}$, propagate through \eqref{discrete3}.
\subsection{Propagation with Noisy IMU: an Exact Formula}
As shown by \eqref{discrete3}, the noisy version of \eqref{discrete2} has the following form:
\begin{align}
{\mathbf T}_{k+1} = \bm{\Gamma}_k \Phi({\mathbf T}_k) \bm{\Upsilon}_k \exp(\bm{\eta}_k),\label{noisy:eqq}
\end{align}
with the $\bm{\eta}_i$'s independent centered Gaussian noises. The group-affine property of the $SE_{2}(3)$ embedding allows deriving a novel result describing error accumulation over time.
\begin{prop}[Extended pose error accumulation]
\label{prop::acc}Referring to \eqref{error:rep1} let us write ${\mathbf T}_k$ as ${\mathbf T}_k=\bar{\mathbf T}_k\exp(\bm{\xi}_k)$ where $\bar{\mathbf T}_k$ is propagated through noise free equations \eqref{discrete2} i.e., $\bar{\mathbf T}_{k+1}=\bm{\Gamma}_k \Phi(\bar{\mathbf T}_k ) \bm{\Upsilon}_k$.
Let ${\mathbf F}_k:=Ad_{\bm{\Upsilon}_k^{-1} } F\in{\mathbb R}^{9\times 9}$ and ${\mathbf F}_i^k:=\Pi_{j=i}^{k-1} {\mathbf F}_j$. We have the recursive formula\begin{equation}
\label{eq::one-step}
\exp(\bm{\xi}_{k+1}) = \exp({\mathbf F}_k \bm{\xi}_k) \exp(\bm{\eta}_k)
\end{equation}leading to the following exact formula:
\begin{equation}
\label{eq::prod}\boxed{
\exp(\bm{\xi}_{k}) = \exp({\mathbf F}_0^{k-1} \bm{\xi}_0) \cdot \prod_{i=0}^{k-1} \exp({\mathbf F}_{i+1}^{k-1} \bm{\eta}_i) . }
\end{equation}
\end{prop}
\paragraph{Proof}
Just before \eqref{line16} we proved
$
\exp(\bm{\xi}_{k+1})
= Ad_{\bm{\Upsilon}_k^{-1} }(\Phi ( \exp(\bm{\xi}_k)))
$ for noise free model \eqref{discrete2}. With noisy model \eqref{noisy:eqq} we have along the same lines
\begin{align}\label{line122}
\exp(\bm{\xi}_{k+1}) = Ad_{\bm{\Upsilon}_k^{-1} }(\Phi ( \exp(\bm{\xi}_k)))\exp(\bm{\eta}_k).
\end{align}
Let $\Psi_k$ denote $Ad_{\bm{\Upsilon}_k^{-1} }\circ\Phi$. At \eqref{line16} and just after we proved \begin{align}\label{line123}\Psi_k(\exp(\bm{\xi})):=Ad_{\bm{\Upsilon}_k^{-1} }\circ \Phi(\exp(\bm{\xi}))= \exp(Ad_{\bm{\Upsilon}_k^{-1} }F\bm{\xi})\end{align}hence $\Psi_k(\exp(\bm{\xi}))=\exp({\mathbf F}_k\bm{\xi})$
readily proving \eqref{eq::one-step}. Moreover we proved $\Phi({\mathbf T}_1{\mathbf T}_2)=\Phi({\mathbf T}_1) \Phi({\mathbf T}_2)$, i.e. $\Phi$ is an automorphism. The adjoint is well-known to be an automorphism also. As the composition of automorphisms is an automorphism, $\Psi_k$ satisfies the same property. We have
\begin{align*}
\exp(\bm{\xi}_{2}) & = Ad_{\bm{\Upsilon}_1^{-1}} \circ \Phi\left(Ad_{\bm{\Upsilon}_0^{-1}} \circ \Phi(\exp(\bm{\xi}_0))\exp(\bm{\eta}_0) \right) \exp(\bm{\eta}_1) \\
& = \Psi_1\bigg( \Psi_0(\exp(\bm{\xi}_0))\exp(\bm{\eta}_0) \bigg) \exp(\bm{\eta}_1)
\\
& = \Psi_1(\Psi_0(\exp(\bm{\xi}_0)))\Psi_1(\exp(\bm{\eta}_0)) \exp(\bm{\eta}_1)
\\
& = \exp({\mathbf F}_1{\mathbf F}_0\bm{\xi}_0) \exp({\mathbf F}_1\bm{\eta}_0) \exp(\bm{\eta}_1)
\end{align*}and \eqref{eq::prod} is proved by induction along the same lines. $\blacksquare$
This result is remarkable: having a simple closed-form expression for the error propagation is unusual in nonlinear state estimation. A first application is that if the initial error $\bm{\xi}_0$ and the noises $\bm{\eta}_i$ are centered at zero, then the propagated error is centered up to third order w.r.t. the standard deviations of $\bm{\xi}_0$ and the $\bm{\eta}_i$'s. This may be proved along the lines of \cite{barfoot2014associating}, which deals with the simpler case of compounding.
Although it is unusual to obtain exact formulas such as \eqref{eq::prod}, the formula is nonetheless nonlinear. The evolution of the covariance of $\bm{\xi}_k$ may be approximated as follows.
\begin{prop}[IMU noise propagation]Consider a sequence of uncertain extended poses, modeled as ${\mathbf T}_k=\bar{\mathbf T}_k\exp(\bm{\xi}_k)$ with $\bm{\xi}_k\sim\mathcal N(0,\bm{\Sigma}_k)$. Using the BCH formula to the first order in \eqref{eq::one-step} readily provides an approximation of uncertainty propagation through noisy IMU model \eqref{discrete3} as \begin{align}\boxed{ \bar{\mathbf T}_{k+1}=\bm{\Gamma}_k \Phi(\bar{\mathbf T}_k ) \bm{\Upsilon}_k,\quad
\bm{\Sigma}_{k+1}={\mathbf F}_k\bm{\Sigma}_k{\mathbf F}_k^T+\bm{\Sigma}_{\bm{\eta}}.}\label{riccati}
\end{align}
\end{prop}
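A minimal implementation of \eqref{riccati} is sketched below (Python; helper names are illustrative). The adjoint is computed numerically via $Ad_{\mathbf T}\,\bm{\xi} = ({\mathbf T}\,\bm{\xi}^\wedge\,{\mathbf T}^{-1})^\vee$ rather than in closed form, $\bm{\Upsilon}_k$ is built from \eqref{backtofuture}, and the inertial samples and noise levels are arbitrary illustrative values.

```python
import numpy as np
from scipy.linalg import expm

dt = 0.01

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def hat(xi):
    M = np.zeros((5, 5))
    M[:3, :3] = skew(xi[0:3])
    M[:3, 3] = xi[3:6]
    M[:3, 4] = xi[6:9]
    return M

def vee(M):
    return np.array([M[2, 1], M[0, 2], M[1, 0],
                     M[0, 3], M[1, 3], M[2, 3],
                     M[0, 4], M[1, 4], M[2, 4]])

def Ad(T):
    """9x9 adjoint of T in SE_2(3), column by column: Ad_T e_i = vee(T hat(e_i) T^-1)."""
    Tinv = np.linalg.inv(T)
    return np.column_stack([vee(T @ hat(e) @ Tinv) for e in np.eye(9)])

def Upsilon(omega, a):
    """Preintegration factor of the first-order model above for one IMU sample."""
    U = np.eye(5)
    U[:3, :3] = expm(skew(omega) * dt)
    U[:3, 3] = a * dt
    return U

F = np.eye(9)
F[6:9, 3:6] = dt * np.eye(3)

Sigma = np.zeros((9, 9))                          # pose initially perfectly known
Q = np.diag([1e-6] * 3 + [1e-4] * 3 + [0.0] * 3)  # cov of (eta^g dt, eta^a dt, 0)
rng = np.random.default_rng(1)
for _ in range(200):
    Fk = Ad(np.linalg.inv(Upsilon(rng.standard_normal(3), rng.standard_normal(3)))) @ F
    Sigma = Fk @ Sigma @ Fk.T + Q   # Riccati update of the covariance
```

Note the zero block in $Q$: the noise vector $\bm{\eta}_k$ has no direct position component, yet position uncertainty still grows through the coupling in ${\mathbf F}_k$.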
To first order, the obtained Riccati equation agrees with the results of \cite{forster2017manifold}, see the appendix therein. However, higher-order formulas differ, and shall be explored in future work in a similar way to \cite{barfoot2014associating,chirikjian2011stochastic}, but on $SE_2(3)$.
The Riccati equation \eqref{riccati} only provides an approximation to the exact formula \eqref{eq::prod}. However, Proposition \ref{bananaprop} shows that the propagation of concentrated Gaussians is exact when the sensors are noise-free, which is a good indication of accuracy in the case where noise is present but moderate. This is also supported by the simple numerical experiment of Figure \ref{MC:fig}, where we see that the true and computed dispersions indeed match.
{\center{
\begin{figure}
\includegraphics[width=\textwidth]{unnamed}
\caption{Numerical experiment to support uncertainty propagation model \eqref{riccati} coupled with uncertainty representation \eqref{error:rep1}. The initial extended pose is null and known, and the IMU moves nominally to the right at constant translational speed (blue line). Noisy IMU measurements generate a dispersion of the belief at the endpoint. We generate point clouds at the trajectory endpoint based on Monte-Carlo simulations. Black dots represent true dispersion under noisy equations \eqref{nav:eq}. Red dots are generated through our exponential uncertainty model \eqref{error:rep1} for extended pose propagation with parameters computed via \eqref{riccati}. Finally green dots are generated using the endpoint distribution computed by a standard (multiplicative) EKF: we see linearization implies the assumed dispersion lies within a horizontal plane. However, the true distribution (black) is ``banana'' shaped in 3D, as already observed mainly in the case of poses in 2D for wheeled robots \cite{long2012banana,chirikjian2011stochastic,barfoot2016state,chirikjian2014gaussian,barfoot2014associating}, and \eqref{riccati} captures this effect and agrees with ground truth. }\label{MC:fig}
\end{figure}
}}
\section{Impact of biases in exponential coordinates for preintegration}\label{sec4}
In this section, we compute the first-order bias correction using our representation of errors based on exponential coordinates on $SE_2(3)$. First, our matrix formalism allows for more elementary computations than the first-order expansions that can be found in the Appendix of \cite{forster2017manifold}. Second, our theory yields slightly more accurate Jacobians for first-order bias correction in the theory of preintegration.
\subsection{Theory}
Consider full IMU measurements $\tilde {\mathbf a}_k:={\mathbf a}_k-{\mathbf b}_k^a-\bm{\eta}_k^a$ and $\tilde \bm{\omega}_k:=\bm{\omega}_k-{\mathbf b}_k^g-\bm{\eta}_k^g$, and let us ignore the noise and focus only on the biases. In the context of smoothing, given a bias update $\bar {\mathbf b}\leftarrow \bar {\mathbf b}+\bm{\delta b}$, one needs to compute how the preintegrated quantities change. Assume we have computed the extended pose ${\mathbf T}_k(\bar {\mathbf b})$ at time $k$ corresponding to the bias $ \bar {\mathbf b}$, and let ${\mathbf T}_k(\hat {\mathbf b})$ denote the extended pose associated with the new bias estimate $\hat {\mathbf b}:=\bar {\mathbf b}+\bm{\delta b}$, with $\bm{\delta b}=(\bm{\delta b}^g,\bm{\delta b}^a)\in{\mathbb R}^6$. Building upon our exponential representation of errors on $SE_2(3)$ \eqref{error:rep1} in a stochastic context, we define the discrepancy between the associated extended poses as ${\bf d}_k\in{\mathbb R}^9$, i.e.,
\begin{align}{\mathbf T}_k(\hat {\mathbf b})={\mathbf T}_k(\bar {\mathbf b})\exp({\bf d}_k),\label{bias:discrep}
\end{align}
and we seek how this correction evolves with time.
Denote by $\bm{\Upsilon}({\mathbf b})$ the quantity obtained by replacing $(\tilde\bm{\omega},\tilde{\mathbf a})$ with $(\tilde\bm{\omega}+{\mathbf b}^g,\tilde{\mathbf a}+{\mathbf b}^a)$ in \eqref{backtofuture}. Neglecting $O(\Delta t^2)$ terms yields
\begin{align}
\bm{\Upsilon}(\hat {\mathbf b})\approx\bm{\Upsilon}(\bar {\mathbf b})\exp(\bm{\delta b}^g\Delta t,\bm{\delta b}^a\Delta t,0).
\label{approx2:eq}
\end{align}Thus using \eqref{discrete2}, \eqref{bias:discrep} and \eqref{approx2:eq} we get
\begin{align}
{\mathbf T}_{k+1}(\hat {\mathbf b}) &=\bm{\Gamma}_k\Phi ( {\mathbf T}_{k}(\hat {\mathbf b}))\bm{\Upsilon}_k(\hat {\mathbf b}) \\&=\bm{\Gamma}_k \Phi ( {\mathbf T}_k(\bar {\mathbf b}) \exp({\bf d}_k))\bm{\Upsilon}_k(\bar {\mathbf b}) \exp(\bar{\bm{\delta b}} ).
\end{align}
where $\bar{\bm{\delta b}}:=(\bm{\delta b}^g\Delta t,\bm{\delta b}^a\Delta t,0)\in{\mathbb R}^9$. Using \eqref{magic:formula} we get
\begin{align}
{\mathbf T}_{k+1}(\hat {\mathbf b} )&= {\mathbf T}_{k+1}(\bar {\mathbf b})\exp(Ad_{\bm{\Upsilon}_k(\bar {\mathbf b})^{-1} } F{\bf d}_k)\exp(\bar{\bm{\delta b}})\\
&\approx {\mathbf T}_{k+1}(\bar {\mathbf b})\exp(Ad_{\bm{\Upsilon}_k (\bar {\mathbf b})^{-1}} F{\bf d}_k+\bar{\bm{\delta b}}) ,\label{approx:eq}
\end{align}
where we used the BCH formula. We have thus proved that the discrepancy ${\bf d}_k$, in the sense of \eqref{bias:discrep}, between the extended poses respectively associated with biases $\bar{\mathbf b}$ and $\hat{\mathbf b}=\bar{\mathbf b}+\bm{\delta b}$ satisfies
\begin{align}\boxed{{\bf d}_{k+1}=Ad_{\bm{\Upsilon}_k (\bar {\mathbf b})^{-1}} F{\bf d}_k+\bar{\bm{\delta b}}}\label{biascor}
\end{align}
We see the only approximation\footnote{besides the Euler approximation \eqref{approx2:eq}, justified by small $\Delta t$.} comes in at line \eqref{approx:eq}. Note that \eqref{biascor} may be rewritten as
$
{\bf d}_k=J_k\bar{\bm{\delta b}},~\text{where}~J_{k+1}=Ad_{\bm{\Upsilon}_k(\bar {\mathbf b})^{-1} } FJ_k+I_{9,6}$ and where $I_{9,6}$ is a $6\times 6$ identity matrix concatenated with a $3 \times 6$ matrix of zeros, and we let $J_0$ be the identity.
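The recursion \eqref{biascor} is linear in $\bm{\delta b}$ and can be accumulated alongside preintegration. A sketch (Python; the adjoint is computed numerically, $\bm{\Upsilon}$ is built from \eqref{backtofuture}, and the helper names and sample values are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def hat(xi):
    M = np.zeros((5, 5))
    M[:3, :3] = skew(xi[0:3])
    M[:3, 3] = xi[3:6]
    M[:3, 4] = xi[6:9]
    return M

def vee(M):
    return np.array([M[2, 1], M[0, 2], M[1, 0],
                     M[0, 3], M[1, 3], M[2, 3],
                     M[0, 4], M[1, 4], M[2, 4]])

def Ad(T):
    Tinv = np.linalg.inv(T)
    return np.column_stack([vee(T @ hat(e) @ Tinv) for e in np.eye(9)])

def Upsilon(omega, a, dt):
    U = np.eye(5)
    U[:3, :3] = expm(skew(omega) * dt)
    U[:3, 3] = a * dt
    return U

def bias_discrepancy(samples, db, dt):
    """Accumulate d_k of eq. (biascor): d_{k+1} = Ad_{Upsilon_k^{-1}} F d_k + dbbar,
    for a bias update db = (db_gyro, db_accel) in R^6, starting from d_0 = 0."""
    F = np.eye(9)
    F[6:9, 3:6] = dt * np.eye(3)
    dbbar = np.concatenate([db[0:3] * dt, db[3:6] * dt, np.zeros(3)])
    d = np.zeros(9)
    for omega, a in samples:
        d = Ad(np.linalg.inv(Upsilon(omega, a, dt))) @ F @ d + dbbar
    return d

rng = np.random.default_rng(2)
samples = [(rng.standard_normal(3), rng.standard_normal(3)) for _ in range(50)]
db = 1e-3 * np.ones(6)
d = bias_discrepancy(samples, db, 0.01)
# Linearity check: doubling the bias update doubles the first-order correction.
assert np.allclose(bias_discrepancy(samples, 2 * db, 0.01), 2 * d)
```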
\begin{rem}
Neglecting terms in $\Delta t^2$ in \eqref{approx2:eq} alleviates computations but is not fully accurate. For a correct expansion w.r.t. $ \bar{\bm{\delta b}}$, the diagonal identity blocks $1:3 \times 1:3$ and $4:6 \times 4:6$ of the matrix $I_{9,6}$ should be replaced with $\Delta t D$ and $\Delta t \exp_m((\bm{\omega}_t)_{\times} \Delta t)$ respectively, where $D$ is defined by the expansion $\exp_m((\bm{\omega}+u)_{\times})=\exp_m((\bm{\omega})_{\times})[I_3+(Du)_{\times}+o(u)]$.
\end{rem}The following result already holds for the classical Taylor expansion of \cite{forster2017manifold}, although it seems to have been noticed only in \cite{martinelli2014closed}.
\begin{prop}
In the absence of gyro bias, both formulas \eqref{approx2:eq} and \eqref{approx:eq} are exact, meaning we have exact preintegration in which the accelerometer bias is fully modeled.
\end{prop}
\subsection{Numerical comparison}
We showed that the exponential mapping on $SE_2(3)$ more closely reflects uncertainty when using noisy IMUs. In turn, one may wonder whether using the exponential to model the bias correction \eqref{bias:discrep} improves accuracy. The answer turns out to be positive, although the improvement is rather slight.
We set up a simple simulation where a UAV follows a 3D trajectory (see Fig. \ref{fig::traj}) while recording IMU measurements, and storing preintegrated factors, each covering a duration $T$. The original sampling frequency is 100\,Hz, so $T=0.01$\,s means all measurements are stored. Then we sample values of the gyro and accelerometer biases and compute the difference between the preintegrated factors obtained by re-integrating the IMU increments and the preintegrated factors obtained through the two first-order expansions: the classical one of standard preintegration theory \cite{forster2017manifold}, and the exponential one of \eqref{bias:discrep}, \eqref{biascor}. Results for the velocity and position components of the preintegrated factors are displayed in Table \ref{tab::res} for a bias corresponding to a low-cost IMU. We see that the exponential mapping of our group-theoretic approach tends to improve the velocity accuracy of the first-order expansion. As the errors between the preintegrated factors and their first-order approximations are very small for a standard preintegration time of $T=1$\,s, we conclude that, regarding bias correction, the exponential mapping may prove useful in specific situations such as long-term preintegration.
\begin{figure}
\includegraphics[width=.8\columnwidth]{traj-eps-converted-to.pdf}
\caption{Simulated trajectory}
\label{fig::traj}
\end{figure}
\begin{table}
\centering
\caption{Higher-order errors for a low-cost IMU ($1^\circ/s - 100 mg$)}
\label{tab::res}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
IMU & \multicolumn{3}{c|}{Velocity RMS (m/s)} & \multicolumn{3}{c|}{Position RMS (m)} \\
\hline
$T$ & $1s$ & $10s$ & $60s$& $1s$ & $10s$ & $60s$ \\
\hline
Classical & $6.9\times 10^{-3}$ & $1.25$ & $151.5$ & $0.0016$ & $3.66$ & $2405.8$ \\
\hline
Proposed & $9.375\times 10^{-4}$ & $0.37$ & $70.5$ & $0.0015$ & $2.57$ & $2126.5$ \\
\hline
\end{tabular}
\end{table}
\section{Conclusion}
We showed that the properties of $SE_2(3)$ allow transposing recent results about the estimation of poses using wheel speeds to the context of IMUs.
Moreover, the framework provides an elegant mathematical approach that brings further maturity to the theory of preintegration on manifolds. It unifies flat-Earth and rotating-Earth IMU equations within a single framework, hence extending the theory of preintegration to the context of high-grade IMUs and opening the door to novel implementations of factor-graph-based methods for high-precision (visual) inertial navigation systems.
The study of localizations of triangulated categories has a rich and
varied heritage arising from the work of Adams, Bousfield, Brown,
Thomason, and others. Neeman further developed these theories, and
showed that to bring the full power of such arguments to bear one
needs a compactly generated category.
Localization techniques were brought to the attention of the
representation theory world in Rickard's \cite{R}, where they were
applied to the stable module category which is easily shown to be
compactly ge\-ne\-ra\-ted.
The stable module category is not the only triangulated quotient of
the module category that one meets in representation theory. In
\cite{CPW} Carlson, Peng, and Wheeler note that one can adapt
Rickard's work to relative stable module categories. However, not much
is known about the structure of these categories. In particular no
non-trivial examples have been given which are known to be compactly
generated.
In this note we prove the following theorem on the relative stable
mo\-du\-le category $\operatorname{StMod}_H(kG)$:
\label{pag:1}
{\bf Theorem.} {\em Let $k$ be an algebraic closure of $\mathbb{Z}/p$. Let
$G$ be a finite group, $H$ a subgroup of $G$. If $kH$ has finite
representation type, then $\operatorname{StMod}_H(kG)$ is compactly generated.}
We will keep the assumptions on $k$, $G$, and $H$ for the rest of the
paper. All modules will be left-modules. Recall that $kH$ has finite
representation type precisely if the Sylow $p$-subgroups of $H$ are
cyclic, see \cite[thm.\ VI.3.3]{ARS}. Let us remind the reader of the
construction of $\operatorname{StMod}_H(kG)$.
We will denote the class of $H$-projective $kG$-modules by $\mbox{\rm $H$-Proj}$;
this is the class of all summands of modules induced up from $kH$. It
is an {\em additive} subcategory of $\operatorname{Mod}(kG)$. We use $\mbox{\rm $H$-Proj}$ to
define the triangulated categories $\operatorname{StMod}_H(kG)$ and $\operatorname{K}(\mbox{\rm $H$-Proj})$.
The relative stable module category $\operatorname{StMod}_H(kG)$ is $\operatorname{Mod}(kG)$ modulo
the morphisms that factor through objects of $\mbox{\rm $H$-Proj}$. The category
$\operatorname{K}(\mbox{\rm $H$-Proj})$ is the homotopy category of complexes of objects of
$\mbox{\rm $H$-Proj}$.
By $\operatorname{Tate}_H(kG)$ we denote the collection of complexes $Q$ of
$\mbox{\rm $H$-Proj}$-modules for which the restriction $\operatorname{Res}_H^G(Q)$ to $H$ is
split exact. We may think of $\operatorname{Tate}_H(kG)$ either as a triangulated
subcategory of $\operatorname{K}(\mbox{\rm $H$-Proj})$, or as a full subcategory of $\operatorname{C}(\mbox{\rm $H$-Proj})$,
the category of complexes of objects of $\mbox{\rm $H$-Proj}$ and chain maps.
\begin{Remark}
\label{rmk:Tate}
If $X$ is in $\operatorname{Tate}_H(kG)$ then $X$ is exact and splits into
short exact sequences
\[
0 \rightarrow \operatorname{Z}^n(X) \rightarrow X^n \rightarrow \operatorname{Z}^{n+1}(X) \rightarrow 0
\]
which become split exact upon restriction to $H$. In particular, it
is easy to show that $\operatorname{Z}^n(X) \rightarrow X^n$ is an
$\mbox{\rm $H$-Proj}$-preenvelope and $X^n \rightarrow \operatorname{Z}^{n+1}(X)$ is an
$\mbox{\rm $H$-Proj}$-precover.
\end{Remark}
Several variants of the following result are well known, see for
instance \cite[thm.\ 2.3]{BG} and \cite[thm.\ 9.6.4]{HPS}.
\begin{Proposition}
\label{pro:equiv}
View $\operatorname{Tate}_H(kG)$ as a full subcategory of $\operatorname{K}(\mbox{\rm $H$-Proj})$.
There is a triangulated equivalence of categories
\[
\operatorname{Tate}_H(kG) \simeq \operatorname{StMod}_H(kG)
\]
given by $X \mapsto \operatorname{Z}^0(X)$.
\end{Proposition}
\begin{proof}
We can clearly view $\operatorname{Z}^0$ as a functor $\operatorname{C}(\mbox{\rm $H$-Proj}) \rightarrow
\operatorname{Mod}(kG)$. Viewing $\operatorname{Tate}_H(kG)$ as a full subcategory of
$\operatorname{C}(\mbox{\rm $H$-Proj})$, we hence have a functor
\begin{equation}
\label{equ:a}
\operatorname{Z}^0 : \operatorname{Tate}_H(kG) \rightarrow \operatorname{Mod}(kG).
\end{equation}
Let $X \rightarrow Y$ be a chain map of complexes from $\operatorname{Tate}_H(kG)$.
There is an induced homomorphism $\operatorname{Z}^0(X) \rightarrow \operatorname{Z}^0(Y)$. The
chain map is null homotopic if and only if the induced homomorphism
factors through a module from $\mbox{\rm $H$-Proj}$, that is, if and only if the
induced homomorphism becomes $0$ in $\operatorname{StMod}_H(kG)$. This holds by a
lifting argument using Remark \ref{rmk:Tate}; cf.\ \cite[proof of
lem.\ 2.2]{BG}.
Hence the functor from equation \eqref{equ:a} induces a faithful
functor
\begin{equation}
\label{equ:b}
\operatorname{Z}^0 : \operatorname{Tate}_H(kG) \rightarrow \operatorname{StMod}_H(kG)
\end{equation}
where $\operatorname{Tate}_H(kG)$ is now viewed as a full subcategory of
$\operatorname{K}(\mbox{\rm $H$-Proj})$.
Observe that $X$ can be viewed as a relative Tate resolution of
$\operatorname{Z}^0(X)$. Hence the functor from \eqref{equ:b} is also full, since
any homomorphism of modules can be lifted to the relative Tate
resolutions; this is again a lifting argument using Remark
\ref{rmk:Tate}.
To conclude that the functor from \eqref{equ:b} is an equivalence of
categories, all that is needed is to see that it is essentially
surjective. But each $kG$-module $m$ has a relative Tate resolution
$X$, so indeed, $m \cong \operatorname{Z}^0(X)$ for some $X$. Note that we can
construct such an $X$ by splicing a left-$\mbox{\rm $H$-Proj}$-resolution and a
right-$\mbox{\rm $H$-Proj}$-resolution of $m$. These resolutions become split
exact upon restriction to $H$ because this is true for
$\mbox{\rm $H$-Proj}$-precovers and -preenvelopes.
\end{proof}
\begin{Definition}
If $kH$ has finite representation type, then $y$ will be the
direct sum of its indecomposable finitely generated modules, and
$x = \operatorname{Ind}_H^G(y)$ the induced module over $kG$.
\end{Definition}
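To make the definition concrete in the simplest case (a standard fact, stated here without proof): if $H$ is cyclic of order $p$, then $kH \cong k[u]/(u^p)$ via $h \mapsto 1+u$ for a generator $h$ of $H$, and the indecomposable finitely generated $kH$-modules are exactly the $k[u]/(u^{i})$ for $1 \le i \le p$, so that

```latex
y \;=\; \bigoplus_{i=1}^{p} k[u]/(u^{i}),
\qquad
x \;=\; \operatorname{Ind}_H^G(y) \;=\; kG \otimes_{kH} y .
```

In particular, $y$ contains both the trivial module ($i=1$) and the free module $kH$ ($i=p$).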
\begin{Remark}
\label{rmk:x}
In the case of the definition, note that $\mbox{\rm $H$-Proj} = \operatorname{Add}(x)$. Note
also that $x$ can be viewed as a complex concentrated in degree zero.
As such, it is in $\operatorname{K}(\mbox{\rm $H$-Proj})$.
\end{Remark}
\begin{Lemma}
\label{lem:perp}
We have
{\rm
\begin{align*}
\lefteqn{\operatorname{Tate}_H(kG) = x^{\perp}} & \\
& \;\;\;
= \{\, Q \in \operatorname{K}(\mbox{\rm $H$-Proj}) \,|\,
\operatorname{Hom}_{\operatorname{K}(\mbox{\tiny $H$-Proj})}(\Sigma^n x,Q) = 0 \mbox{ for each } n \,\}
\end{align*}
}
\!\!in $\operatorname{K}(\mbox{\rm $H$-Proj})$.
\end{Lemma}
\begin{proof}
Let $Q$ be in $\operatorname{K}(\mbox{\rm $H$-Proj})$. Then
\begin{align*}
\operatorname{Hom}_{\operatorname{K}(\mbox{\tiny $H$-Proj})}(\Sigma^n x,Q)
& = \operatorname{Hom}_{\operatorname{K}(kG)}(\Sigma^n \operatorname{Ind}_H^G(y),Q) \\
& \cong \operatorname{Hom}_{\operatorname{K}(kH)}(\Sigma^n y,\operatorname{Res}_H^G(Q)) \\
& = (*)
\end{align*}
by adjointness, since $\operatorname{Ind}_H^G(y) = kG \otimes_{kH} y$ while
$\operatorname{Res}_H^G$ restricts $kG$-modules to $kH$-modules. If $(*)$ is $0$
then so is
\[
\operatorname{Hom}_{\operatorname{K}(kH)}(\Sigma^n m,\operatorname{Res}_H^G(Q))
\]
for each $m$ in $\operatorname{Mod}(kH)$, since $\operatorname{Mod}(kH)$ equals $\operatorname{Add}(y)$ by
\cite[cor.\ 4.8]{A} because $kH$ has finite representation type. But
if this expression is $0$ for each $m$ and each $n$, then
$\operatorname{Res}_H^G(Q)$ is null homotopic by an easy argument; that is, $Q$ is
in $\operatorname{Tate}_H(kG)$.
\end{proof}
\begin{Proposition}
\label{pro:comp}
If $kH$ has finite representation type, then $\operatorname{K}(\mbox{\rm $H$-Proj})$ is
compactly generated.
\end{Proposition}
\begin{proof}
Since $k$ is countable, $kG$ has pure global dimension $\leq 1$ by
\cite[thm.\ 11.21]{JL}. The finite dimensional algebra $kG$ is
certainly coherent, and $x$ is a finitely generated $kG$-module.
By \cite[sec.\ 4, (1)]{HJ}, the category $\operatorname{K}(\operatorname{Add} x)$ is compactly
generated. But this category is $\operatorname{K}(\mbox{\rm $H$-Proj})$ by Remark \ref{rmk:x}.
\end{proof}
\begin{Corollary}
\label{cor:comp}
If $kH$ has finite representation type, then $\operatorname{Tate}_H(kG)$ is
compactly generated.
\end{Corollary}
\begin{proof}
The category $\operatorname{K}(\mbox{\rm $H$-Proj})$ is compactly generated by Proposition
\ref{pro:comp}, and $\operatorname{Tate}_H(kG) = x^{\perp}$ by Lemma
\ref{lem:perp}.
But $x$ is a compact object of $\operatorname{K}(\mbox{\rm $H$-Proj})$, as follows for instance
from the formula
\[
\operatorname{Hom}_{\operatorname{K}(\mbox{\tiny $H$-Proj})}(x,-) \simeq \operatorname{H}^0 \operatorname{Hom}_{kG}(x,-)
\]
since $x$ is finitely generated over $kG$.
So $\operatorname{Tate}_H(kG)$ is the right perpendicular category of a compact
object, so it is compactly generated by \cite[prop.\ 1.7(1)]{IK}.
\end{proof}
Finally, the theorem from page \pageref{pag:1} follows.
\begin{Theorem}
\label{thm:comp}
If $kH$ has finite representation type, then $\operatorname{StMod}_H(kG)$ is
compactly generated.
\end{Theorem}
\begin{proof}
Combine Proposition \ref{pro:equiv} with Corollary \ref{cor:comp}.
\end{proof}
\begin{Remark}
It is not clear that our methods can be used to compute a set of
compact generators of $\operatorname{StMod}_H(kG)$.
To do so, we would need to find a set of compact generators of the
category $\operatorname{Tate}_H(kG)$ and then use the equivalence $\operatorname{Z}^0$. By
unravelling the proof of \cite[prop.\ 1.7(1)]{IK}, it can be seen that
the compact generators of $\operatorname{Tate}_H(kG)$ would come by taking a set of
compact generators of $\operatorname{K}(\mbox{\rm $H$-Proj})$ and applying the left adjoint to
the inclusion of $\operatorname{Tate}_H(kG)$ into $\operatorname{K}(\mbox{\rm $H$-Proj})$. This left adjoint
is constructed by Neeman in \cite{N}, but the construction is infinite
and does not obviously lend itself to concrete computations.
It would be interesting to find a procedure whereby a set of compact
generators of $\operatorname{StMod}_H(kG)$ could be computed.
\end{Remark}
\bigskip
\noindent
{\bf Acknowledgements.}
The first author is (partly) supported by the Heilbronn Institute for
Mathematical Research.
\section{Introduction}
HERA-B was designed as a fixed target experiment for studying CP
violation in $B$-meson systems using an internal wire target in the
proton beam of HERA \cite{herab}. To reach the necessary production
rate of $b$ quarks, an average of four interactions per bunch crossing at
a frequency of about 10 MHz (96 ns bunch separation) has to be
generated. This leads to a very high particle flux density in the
detector.
The main detector components (fig.\,\ref{figotr}) are a silicon vertex
detector, a dipole magnet with a field integral of 2.13\,Tm, a main
tracker with an Inner Tracker composed of microstrip gas chambers
and an Outer Tracker composed of drift tubes, High-p$_T$
Chambers, a ring imaging Cherenkov counter (RICH),
an electromagnetic calorimeter (ECAL), and a Muon
System with drift tubes. The detector covers a forward angular
range of 220 mrad in the bending plane of the magnet and 160 mrad
vertically.
In the following we describe the front-end electronics of the Outer
Tracker drift tubes, i.\ e.\ the outer part of the main tracking
system, which is based on the amplifier-shaper-discriminator chip
ASD-8 \cite{asdpub} and a TDC (time-to-digital-converter) chip
customized for HERA-B.
The paper is organized as follows: in the next section the Outer
Tracker detector is briefly described followed in section 3 by a
discussion of the design considerations for the front-end
electronics. The sections 4 to 6 contain the description of the main
components of the front-end electronics: the high-voltage system, the
ASD8 board and the TDC board. Section 7 describes the installation and
commissioning of the electronics and gives a first evaluation of the
performance. The paper finishes with a summary.
\section{The Outer Tracker System}
\label{secotr}
%
\begin{figure}
\begin{center} \leavevmode
\epsfxsize=\textwidth \epsfbox{pict/det2.eps}
\end{center}
\caption{Top view of the HERA-B detector. The Outer Tracker
superlayers, arranged as magnet, pattern and trigger chambers, are
indicated by the black areas (with the Inner Tracker modules
attached to them near the beam pipe, white areas).}
\label{figotr}
\end{figure}
\paragraph*{Detector Description:}
The Outer Tracker of HERA-B \cite{herab,otr_det} is composed of 13
planar superlayers (fig.\,\ref{figotr}) of drift tube modules
comprising 112\,674 readout channels. Each superlayer consists of two
independent chambers. For each chamber the modules are contained in a
gas-tight box which is suspended from a rigid frame (`outer
frame'). Of the 13 superlayers 7 are placed in the magnet (`magnet
chambers' = MC), 4 in the field-free region between the magnet and the
RICH serving for pattern recognition (`pattern chambers'= PC) and 2
between the RICH and the ECAL (`trigger chambers' = TC). The two TC
superlayers and the first and last PC superlayers deliver hit signals
to the First Level Trigger. The main tracking system allows for
momentum measurement and provides track recognition on the first
trigger level.
The honeycomb modules of hexagonal drift tubes are built from folded
gold-coated, carbon-loaded polycarbonate foil. The tubes are oriented
at $0^{\circ}$ and $\pm 5^{\circ}$ relative to the perpendicular on
the bending plane. In the bending plane the inner diameter of the
cells changes from 5\,mm near the beam to 10 mm above a certain
distance from the beam (about 1\,m in the PC area) to account for the
radial dependence of the particle flux.
As counting gas the fast mixture Ar/CF$_4$/CO$_2$ (65/30/5) is chosen.
Operating at a gain of $3\cdot 10^4$, the drift velocity is about
80\,$\mu$m/ns which allows the particle signals to be collected within
the bunch separation time of 96\,ns, even within the magnetic field. In
this gas the ionisation density of a minimum ionising track is
estimated to be about 100/cm with an average cluster size of 2.5
electrons. However, due to the loss of electrons by attachment
only about 30\% of the electrons reach the anode, leading
to a mean cluster size of about 1 electron.
\paragraph*{Radiation load and occupancies:}
The Outer Tracker has been designed for particle densities and
radiation levels comparable to those in similar detectors currently
developed for LHC. At an interaction rate of 40\,MHz the radial
distribution of the charged particle flux density is roughly given by:
$$
\phi \approx \frac{10^{8}}{(R/{\rm cm})^2}\,{\rm cm}^{-2}\,{\rm s}^{-1}
$$
where $R$ is the distance from the beam in cm. Since the
Outer Tracker acceptance starts at a radial distance of about 19~cm from
the beam, this leads to a particle flux of about $2\cdot 10^{5}\,{\rm
cm}^{-2}\,{\rm s}^{-1}$ in the hottest area.
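As a quick numerical cross-check of the flux formula above, the following sketch (Python, purely illustrative and not part of the detector software) evaluates the flux density at the inner acceptance edge:

```python
def charged_particle_flux(r_cm: float) -> float:
    """Approximate charged particle flux density in Hz/cm^2 at a 40 MHz
    interaction rate: phi ~ 1e8 / R^2 with R in cm (formula from the text)."""
    return 1e8 / r_cm ** 2

# Inner edge of the Outer Tracker acceptance, about 19 cm from the beam:
print(f"{charged_particle_flux(19.0):.1e} Hz/cm^2")  # about 2.8e5, the quoted 2e5 scale
```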
The drift tubes are longitudinally subdivided into sensitive and
insensitive segments to limit the single channel occupancy to about
20\% at an interaction rate of 40\,MHz (the shortest segmentation near
the beam is 20 cm). The subdivision is achieved by installing a sense
wire only in the sensitive parts of a cell \cite{otr_det}. If a
sensitive part is not at an end of the cell the signal is carried to
the upper or lower end of the module via a thicker wire which does
not generate an avalanche. The sensitive length of the drift cells,
$L_{cell}$, varies between 0.2\,m and 2.8\,m yielding different cell
capacities. The capacitive load at the input of the amplifier is about
$15\,{\rm pF} + L_{cell} \cdot 10\,{\rm pF/m}$, resulting in loads
between 17 and 43\,pF.
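The quoted range of input loads follows directly from this relation; a minimal sketch (Python, our own illustration):

```python
def amplifier_input_load_pF(cell_length_m: float) -> float:
    """Capacitive load at the amplifier input: a fixed 15 pF plus
    10 pF per metre of sensitive cell length (relation from the text)."""
    return 15.0 + 10.0 * cell_length_m

# Shortest (0.2 m) and longest (2.8 m) sensitive cell lengths:
print(amplifier_input_load_pF(0.2), amplifier_input_load_pF(2.8))  # 17.0 43.0
```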
The amplifiers are placed at the upper or lower end of the chambers,
away from the beam pipe at a location where the radiation load is
below 50 Gy per year. The TDC electronics is installed on the
outer frames, at an even lower radiation level.
\section{Design Criteria for the Front-End Electronics}
\label{secthree}
The performance of the front-end electronics influences the drift chamber
resolution, the detection efficiency and the rate of noise hits. In
the following we discuss the considerations which led to the definition
of requirements for the Outer Tracker electronics.
\subsection{Requirements on the Front-End Electronics}
\label{secthreeone}
A good position resolution of the drift chamber hits is not only needed
for a precise momentum measurement, but in a dense particle
environment it also facilitates the pattern recognition. For the Outer
Tracker it was estimated that the resolution should be about 200
$\mu$m \cite{herab}.
In order to achieve this for the given cylindrical drift tube geometry
and the ionisation density of the chosen gas the electronics should be
able to trigger on the first cluster of electrons arriving at the
anode. The charge signal generated by the electrons at the amplifier
input depends on the gas gain. For safety reasons, to avoid
high-voltage break-down and accelerated aging effects, it was decided
to limit the gas gain to $3\cdot 10^{4}$. With this gain an average
cluster (including the attachment loss in CF$_4$) results in a charge
of about 2\,fC after fast shaping with a charge collection efficiency
of about 20\%, which is typical for drift chamber electronics. With a
threshold corresponding to a single electron cluster, the First Level
Trigger requirement of a high single-cell efficiency (design value
$> 98\%$) can also be fulfilled. The efficiency should stay high for
counting rates of up to about 2\,MHz per cell, corresponding to the
maximal allowed occupancy.
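The 2\,MHz rate limit matches the occupancy target quoted earlier; a sketch of the arithmetic (Python; the Poisson assumption is ours, the text quotes only the rate--window product):

```python
import math

def occupancy(rate_hz: float, window_s: float = 96e-9) -> float:
    """Probability of at least one hit in the readout window, assuming
    Poisson-distributed hits (assumption; the text quotes only the
    rate x window product as the occupancy scale)."""
    return 1.0 - math.exp(-rate_hz * window_s)

# 2 MHz per cell in a 96 ns window: rate x window = 0.19, Poisson ~ 0.17,
# both close to the ~20% design occupancy.
print(f"{occupancy(2e6):.1%}")
```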
Processing such small signals requires low noise and low crosstalk in
the system. The noise occupancy per channel, i.~e.\ the probability
of finding a noise hit in a channel within the readout window of 96~ns,
should not exceed 1\% to limit the amount of false data.
The crosstalk of a charge signal into neighbouring channels depends on
its total charge which is roughly 40 times larger than the desired
threshold charge. Therefore the analog crosstalk should not exceed
1\%.
The strongest demands on the amplifier bandwidth and the signal
shaping come from the First Level Trigger which requires a fast signal
collection within the bunch separation time of 96\,ns. Fast signal
shaping also reduces pile-up in the following bunches.
Because of the large number of channels a high integration density of
the amplifiers at low power consumption is necessary. The requirements
on radiation tolerance are moderate, as explained in the previous section.
For the digitization of the drift time 1\,ns bins are sufficient,
because in this case the uncertainty of the time measurement adds only
about 25\,$\mu$m to the position resolution, which is negligible when
added in quadrature to the intended 200\,$\mu$m. To allow for a
deadtime-free trigger and readout scheme the data have to be stored in
a pipeline 128 bunch crossings deep \cite{daqpaper}.
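The quadrature argument can be made explicit (a sketch; the 80\,$\mu$m/ns drift velocity is taken from section 2, and the uniform-error model over one bin is our own assumption):

```python
import math

DRIFT_VELOCITY_UM_PER_NS = 80.0  # from the gas properties quoted above

def binning_position_error_um(bin_ns: float) -> float:
    """RMS position error from time digitization: a uniform error over
    one bin has RMS bin/sqrt(12), converted with the drift velocity."""
    return DRIFT_VELOCITY_UM_PER_NS * bin_ns / math.sqrt(12.0)

sigma_bin = binning_position_error_um(1.0)   # about 23 um, i.e. the quoted ~25 um
total = math.hypot(200.0, sigma_bin)         # quadrature sum with the 200 um target
print(f"binning: {sigma_bin:.0f} um, total: {total:.0f} um")
```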
In the design phase of the electronics, which started in 1995, the
evaluation of available electronics led to the selection of the
amplifier-shaper-discriminator chip ASD-8 and the decision to develop
a TDC chip as the basic components of the front-end electronics.
\subsection{Overview of the system}
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=13cm \epsfbox{pict/otr_fe_new.eps} \\[1cm] %
\epsfxsize=13cm \epsfbox{pict/fee_setup2.eps}%
\end{center}
\caption{The front-end electronics of the Outer Tracker
(top: schematics for one channel, bottom: photograph of a demonstration
assembly cabled for 16 channels).}
\label{figchain}
\end{figure}
The front-end electronics of the Outer Tracker (fig.~\ref{figchain})
consists of
\begin{itemize}
\item[-] a high-voltage (HV) board with coupling capacitors for the
chamber signals and resistors for the current protection,
\item[-] a twisted pair cable, routing the signals to a feed-through
board in the wall of the gas box,
\item[-] a feed-through board which transfers the signal through the
wall of the gas box and which serves as a mount for the ASD-8 board,
\item[-] an ASD-8 board,
\item[-] a low voltage distribution board to supply the ASD-8 board
with power, threshold voltages, and test pulses,
\item[-] a shielded twisted pair cable from the ASD-8 board to the TDC
board,
\item[-] a TDC board.
\end{itemize}
The HV boards, distributing the HV to each cell, are mounted on the
modules inside the counting-gas volume where the dry gas serves as
discharge protection. The other readout electronics components are
mounted outside the gas box for better accessibility. The HV part
includes a coupling capacitor for each channel which AC-couples the
anode signals to the amplifier since the anode wires are on positive
high voltage while the cathode foils are connected to the ground. The
TDC boards are housed in crates on the outer frames of the superlayers
which carry the cables between the ASD-8 and the TDC boards. The TDC
boards are connected via electrical cables (signal standard: LVDS =
Low Voltage Differential Signaling) to digital signal processors which
build the front-end of the DAQ system. In addition the hit
information of 4 selected superlayers (trigger layers) is transferred
to trigger link boards for use in the First Level Trigger system. More
information on the HERA-B trigger and data acquisition system can be
found in \cite{daqpaper}.
In the following section we discuss the functionality of the listed
components. More details, including the electronic schematics of the
developed boards can be found in \cite{schematics}.
\subsection{Grounding and Shielding}
\label{sec_ground}
A concept for grounding and shielding, which is essential for the
performance and stability of the whole system, has been developed to
minimize noise, pickup, crosstalk and signal feedback.
\begin{figure}
\begin{center} \leavevmode
\includegraphics[clip,width=\textwidth,bb=35 105 594 491]{pict/grounding_new.eps}
\end{center}
\caption{Grounding scheme of the Outer Tracker front-end electronics.}
\label{figground}
\end{figure}
In the following we explain the scheme referring
to fig.\,\ref{figground}. For each superlayer half one single
``ground reference point'' (GND RP) is defined which is chosen to be
fixed to the potential of the gas box. The ground
potentials of all components have to be connected to GND RP
(star-like). There is only one connection to the reference potential
of the mains (Net0) via the HV power supply which, for safety reasons,
had to be connected to the gas box. All other lines coming from Net0
connect only to otherwise electrically insulated parts, like the
enclosures of electronics (LV, TDC), the outer frame, and the gas pipe.
The most critical points are the grounding of the analog and digital
sections on the ASD-8 board, and the signal connections from the
chamber to the ASD-8 board and from the ASD-8 board to the TDC. On the
ASD-8 board, the digital ground (DGND) and the analog ground (AGND)
are separated (see section \ref{secasd_board}). A good connection of
the AGND to the cathodes is mediated by the gas box enclosing each
chamber. From the inside, the cathodes are connected to the box, and
from the outside Cu-Be brackets, carrying the ASD-8 boards, connect to
the AGND of the board. Being designed as a Faraday cage, the gas box
also serves as an RF shield. It is insulated from the outer frame and
from the gas pipe (using insulating pipe connections).
Shielding of the signal cables going to the TDC turned out to be
absolutely necessary to minimize the feedback of the digital output
signals to the amplifier inputs. The shielding is connected to DGND on
the ASD-8 board without any direct ground connection from the ASD-8
board to the TDC. The ground potential on the TDC board is connected
to the ground reference point GND RP.
The final tuning of the grounding and shielding of the complete system during
commissioning is described in section \ref{secinstall}.
\section{High Voltage System}
\subsection{High Voltage and Signal Routing in the Gas Box}
\label{sec_hv_routing}
The high voltage is supplied to the anode wires via HV boards, which
are mounted on the module base plates \cite{otr_det}. The schematic
of the boards is shown in fig.~\ref{fighvboards}. Besides supplying
high voltage to the individual anode wires via a 1\,M$\Omega$ protection
resistor, the board also routes the signals from the anode wires through
coupling capacitors (330\,pF, 4\,kV)\footnote{ceramic chip capacitors
330\,pF, 4\,kV, X7R dielectric, Johanson Dielectrics (art.nr.~402 S43
W 331 KV4)} to the cable connectors leading to the ASD-8 boards. The
high voltage enters the board through an RC filter (100 k$\Omega$,
330\,pF).
To adapt the HV board to the chamber structure
(fig.~\ref{fighvboards}), a combination of two boards, a main
board with a piggy-back board on top of it, is used
(fig.~\ref{fighvboardp}). The main board supplies the channels 1A to
16A, the piggy-back board the remaining channels 1B to 16B. For the
chambers which are not used in the First Level Trigger the signal
routing is such that the channels on the main board end up in one
cable and the channels from the piggy-back board in another. For
trigger chambers the cabling is such that back-to-back drift cells nB
and (n+2)A end up in neighbouring channels of the output cable. This
routing facilitates performing a logical OR of hits from these cells
in order to increase the trigger efficiency.
The HV boards are double-sided printed circuit boards with surface mounted
components (SMD). For the 5\,mm cells the matching to the wire pitch
limits the board width to 67\,mm requiring a tight spacing between the
capacitors. In order to guarantee the high-voltage integrity, two of
the 17 HV capacitors are placed on the back side of the board (because
of a different soldering method these two capacitors caused major
problems in the first running period, see section \ref{secinstall}).
The signal cables (16 twisted pairs, lengths between 25 and 50\,cm) from the HV
boards are plugged, on the inside of the gas box, into the feed-through
boards which carry the ASD-8 boards on their outside. The feed-through
boards also provide the feed-through for the high voltage and the
possibility to disable problematic HV boards individually.
\begin{figure}
\begin{center}
\leavevmode
\epsfxsize=9cm \epsfbox{pict/hv-board-scheme.eps} %
\end{center}
\caption{ High voltage distribution board: schematics (top)
and board-to-chamber mapping (bottom).}
\label{fighvboards}
\vspace{1cm}
\begin{center}
\leavevmode
\epsfxsize=9cm \epsfbox{pict/hv_board03.eps} %
\end{center}
\caption{ High voltage distribution board:
a photograph of the connected main and piggy-back boards for the
5~mm chambers.}
\label{fighvboardp}
\end{figure}
\subsection{The High Voltage Distribution and Protection System}
The Outer Tracker High Voltage System has a cascaded distribution
scheme which balances the number of dead channels in case of high
voltage problems due to unstable or broken wires against the
cost of the system. It provides positive voltage for the 112\,674
anode wires, nominally 1700\,V for 5 mm drift cells and 1950\,V for
10\,mm cells.
The HV grouping follows the structure of the Outer Tracker which is
described in \cite{otr_det}: The largest individual detector units are
the superlayer halves, being composed of stereo layers (6 for PC and 3
for TC) which are then subdivided into sectors (12 per PC, 10 per TC
stereo layer).
The HV supply system\footnote{CAEN SY527 Multichannel HV System: One
crate with 10 HV Supply Boards A734P, each board with 16 HV channels
(max. 3\,kV/3\,mA), connected to the Slow Control system via V288
HS-CAENET-VME Interface.} has 160 individually controllable HV
sources, each of which feeds up to six sectors with
up to 1500 wires in total. Such a
HV source is the smallest unit which can be monitored and controlled
by the Slow Control software. However, single failing sectors can be
disconnected individually on a patch board in the electronics
trailer. Beyond that, the groups of 16 wires belonging to an HV
board (fig.\,\ref{fighvboards}) can also be switched off at the
feed-through board on the gas box frame. This action requires access
to the detector, which is regularly scheduled once per month. In this
way the number of disabled channels per faulty wire can be limited
to 16.
For each of the 160 HV sources the current is continuously monitored
and in case of over-currents a protection scheme first reduces the
voltage and eventually switches it off. An interlock makes sure that
the HV can only be switched on, if the gas system works properly. To
provide a constant gas gain the HV is adjusted automatically within
defined limits by a gas monitor system. A description of the control
system is given in \cite{otr_perf}.
\section{The ASD-8 Electronics}
\subsection{The ASD-8 Chip}
The amplifier-shaper-discriminator chip ASD-8 \cite{asdpub}, developed
by the University of Pennsylvania for drift chamber applications in a
high rate environment (originally for SSC), is used in different
detector systems of the HERA-B experiment (Outer Tracker, RICH, Muon
System, High-p$_T$ Chambers) with a total of about 200\,000 channels.
\begin{table}
\begin{center}
\caption{Characteristics of the ASD-8 chip.}
\vspace{1mm}
\label{tab01}
\begin{tabular}{|l|l|}
\hline
integration density & 8 ch. on $2.7\times 4.3$\,mm$^2$ die \rule{0.0mm}{0mm} \\
power consumption & 0.3 W\,/\,chip (incl.\,output)\\
signal shaping time & about 10\,ns \\
tail cancellation & $t_0 \approx 1.5$\,ns\\
double pulse resol. & 25\,ns \\
intrinsic noise & (900 + 70/pF) electrons\\
on-chip crosstalk & analog: $\sim$ 0.1\% \\
threshold & $\sim 2\,$fC \\
baseline shift & 0.5 - 1.0 fC/MHz \\
\hline
\end{tabular}
\end{center}
\end{table}
The 8-channel chip is designed in
\begin{figure}
\begin{center} \leavevmode%
\epsfxsize=10cm \epsfbox{pict/asd8_principle.eps}
\end{center}
\caption{Principal functions of the ASD-8 chip. The analog outputs
are only used for test purposes and can be selected by an on-chip
multiplexer.}
\label{figasdprinzip}
\end{figure}
a bipolar technology. The process used by the producer Maxim combines
high speed with a low noise level and relatively low power consumption
(Table \ref{tab01}).
A block diagram of the ASD-8 chip is shown in fig.\,\ref{figasdprinzip}.
The input stage is a preamplifier with a sensitivity of 2.5\,mV/fC, a
bandwidth of 100 MHz and an input impedance of 115\,$\Omega$. The input
is differential and symmetric for positive and negative pulses.
The two-stage shaper with tail cancellation yields a double pulse
resolution of 25\,ns. The tail cancellation compensates the ion tail of
the drift chamber pulses. Analog outputs are provided for 3 channels
per chip, selectable after the second shaper or after the tail
cancellation.
The discriminator is a two-stage differential amplifier with positive
feedback. The threshold is voltage programmable for each channel. The
external voltage scales approximately as 250\,mV\,/\,fC up to about
1.4\,V, where the threshold saturates. The baseline shift given in Table
\ref{tab01} is not negligible at the highest occupancies and has to be
compensated by a corresponding threshold shift. The differential,
open collector output is current programmable to adjust the signal
swing.
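Combining the intrinsic noise figure of Table \ref{tab01} with the input loads of 17--43\,pF quoted in section 2 illustrates the margin to the 2\,fC threshold (a sketch; the comparison and the electrons-per-fC conversion are our own arithmetic):

```python
E_PER_FC = 6250  # electrons per fC (1e-15 C / 1.602e-19 C, rounded)

def enc_electrons(load_pF: float) -> float:
    """Equivalent noise charge of the ASD-8, (900 + 70 per pF) electrons
    (Table 1), evaluated for a given input capacitance."""
    return 900.0 + 70.0 * load_pF

for load in (17.0, 43.0):  # shortest and longest drift cells
    noise_fc = enc_electrons(load) / E_PER_FC
    print(f"{load:4.0f} pF: {enc_electrons(load):5.0f} e- = {noise_fc:.2f} fC")
# Both values stay well below the ~2 fC threshold charge.
```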
\subsection{The ASD-8 Board}
\label{secasd_board}
\paragraph*{Design considerations:}
The analog inputs of the ASD-8 chips have a very high sensitivity
which makes them susceptible to noise and RF
pickup. Because of the combination of analog inputs and digital
outputs on the chip a particular worry is the feedback from
\begin{table}
\begin{center}
\caption{Technical data of the ASD-8 board}
\vspace{1mm}
\label{tab03}
\begin{tabular}{|l|l|}
\hline
channels & 16 (2 chips)\\
dimensions & 4 layers, 67 $\times$ 56 mm$^2$ \\
pickup suppression & all supply voltages RC filtered \\
spark protection & diode protection $\geq$3 kV \\
crosstalk (analog) & $<0.5\%$\\
grounding & analog - digital separated \\
gain uniformity & $\pm 15\%$ per board \\
output & 2\,mA into 62\,$\Omega$ \\
voltage supply & $+3$ V, 100 mA \\
& $-3$ V, 100 mA \\
power & 600 mW \\
\hline
\end{tabular}
\end{center}
\end{table}
the output to the input. In the design of the printed circuit boards
carrying the ASD-8 chips special care was taken for a good grounding
scheme, decoupling of the analog and digital parts, noise rejection
from power sources, and crosstalk separation of different channels in a
densely packed environment.
Since only one board type for both the 5\,mm and 10\,mm cells should
be produced, the geometrical constraints are defined by the dimensions
of the 5\,mm cells. The width of the boards was adjusted to the cell
pitch; a compact assembly of connectors and electronic components was
achieved using SMD technology.
\paragraph*{Board layout:} Two ASD-8 chips with 8 channels each are
mounted on a multilayer board (Table \ref{tab03},
fig.\,\ref{figasdphoto}). A block diagram of the board is shown in
fig.\,\ref{figschematic}, more details can be found in \cite{schematics}.
\begin{figure}
\begin{center} \leavevmode
\epsfxsize=8cm \epsfbox{pict/asd8_photo_greyscale.eps}
\end{center}
\caption{Picture of the ASD-8 board of the HERA-B Outer Tracker.}
\label{figasdphoto}
\end{figure}
\begin{figure}
\begin{center} \leavevmode
\epsfig{file=pict/ampprinc.eps,width=8cm,angle=-90,}
\end{center}
\caption{Block diagram of the ASD-8 board.}
\label{figschematic}
\end{figure}
The chip has a differential input, whereas the chamber
signals are single-ended. With the anode signal fed into the
negative input, different options for the positive input were
tested. The best common mode rejection
was obtained by connecting the positive input via 10\,pF to the
chamber ground, i.~e.\ the cathode (fig.\,\ref{figchain}). This
quasi-differential use of the inputs was clearly superior to the
option of leaving the positive input open.
All supply voltages ($\pm 3$\,V) are separated for the analog, digital
and output drive circuits and have RC filters at the input. The ground
plane is separated for the analog and digital part of the chip. Both
grounds are connected via an inductance of $10\,\mu$H. The analog ground
is extended to rails along the sides of the board which slide into the
holding brackets on the chambers made of Cu-Be springs. Thus the
brackets together with the ground planes provide a shielding mesh
between the boards
in the densely packed front-end electronics on the chambers (see
fig.\,\ref{figfeephoto}).
\begin{figure}
\begin{center} \leavevmode
\epsfxsize=12cm \epsfbox{pict/pc4_fee.eps}
\end{center}
\caption{Photograph of the front-end electronics on an Outer
Tracker chamber: Shown is the lower part of a chamber with the ASD-8
cards plugged onto the feed-throughs at the gas box, the cables
connecting the ASD-8 outputs with the TDCs and the cable frame routing
the cables to the TDC crates (on the vertical frame part to the left,
not visible here).}
\label{figfeephoto}
\end{figure}
Since it was found that the ASD-8 inputs survive voltage spikes only
up to about 300\,V, a diode protection circuit was added to protect
against high voltage breakdowns in the drift cells. The protection
circuit consists of an input resistor of $50\,\Omega$ and two diodes
connected in parallel with opposite polarity (type BAV79), clamping
the ASD-8 input to a 0.7\,V level \cite{schematics}. Tests have
shown that in this way the input transistors could be protected
against discharges of 3000\,V fed into the input via a 1\,nF
capacitor.
Only one voltage level for the thresholds is provided per board. The
two chips on each board were therefore selected in order to be in the
same gain and noise quality classes (defined in
section \ref{secchiptests}).
The maximum difference of the reference thresholds $U_{ref}$ of two
channels of a single board, as defined below, should be less than
300\,mV. In some cases Schottky diodes (forward voltage 200 or
380\,mV) were used to shift the
thresholds of chips or of single channels to avoid more categories or
to reduce the number of rejects.
The current of the open collector, differential output of the ASD-8 is
adjustable. It was chosen to yield a swing of 120\,mV for the given
pull-up resistors on the TDC boards (fig.~\ref{figasdlevel} and
Sec.~\ref{tdc_circuit}). An offset of 1.25\,V is added by the
receivers on the TDC board. The analog outputs of the ASD-8 chips are
not used.
Test pulses are capacitively coupled (1\,pF) into the positive input
via a 1:10 voltage divider to reduce noise pickup via the test pulse
system. The pulses fire all channels on a board at the same time.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\epsfxsize=8cm \leavevmode\epsfbox{pict/asdout.eps}
\end{tabular}
\end{center}
\caption{ Schematics of the open
collector, differential output of the ASD-8; the current is adjusted
by the resistor $R_{Id}$. The output current together with the pull-up
resistors $R_P = 62\,\Omega$ and the voltage $U_P = 1.25$\,V
determines the signal swing and the reference level of the signals.}
\label{figasdlevel}
\end{figure}
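The numbers in fig.\,\ref{figasdlevel} are mutually consistent; a minimal sketch of the arithmetic (Python, our own illustration):

```python
def gtl_swing_mV(i_out_mA: float, r_pullup_ohm: float) -> float:
    """Differential swing of the open-collector ASD-8 output: the
    programmed output current times the pull-up resistance."""
    return i_out_mA * r_pullup_ohm  # mA x Ohm = mV

# 2 mA into the 62 Ohm pull-ups on the TDC board:
print(gtl_swing_mV(2.0, 62.0))  # 124.0 mV, i.e. the quoted ~120 mV swing
```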
\subsection{Low Voltage Distribution Board}
\label{secpdb}
The low voltage distribution board supplies 6 groups of 8 ASD-8 boards
each with power, thresholds and test pulses
(fig.\,\ref{figchain}). These boards are controlled by a SLIO
processor (Serial Linked I/O; Philips P82C150) and are connected to
the HERA-B Slow Control System via a CAN (Controller-Area-Network)
bus.
On each board, two different thresholds can be set using 10\,bit
DACs. For each of the 6 groups one of these thresholds has to be
preselected by setting a jumper. The board has an RS422 input for test
pulses which can be enabled for each group separately by the CAN
controller. The pulses have a length of about 30\,ns; the amplitude
can be adjusted by a potentiometer to a common value for each
group. An internal ADC of the SLIO measures the real thresholds,
supply voltages and currents using a multiplexer. The analog and
digital supply voltages for each ASD-8 board are filtered by RC
circuits.
\subsection{Test and selection of ASD-8 chips and boards}
\subsubsection{Series Chip Tests and Selection}
\label{secchiptests}
\paragraph*{Definition of chip quality: }
HERA-B ordered a total of 90 wafers, each with about 335 chips. The
yield per wafer was on average 77\%. Each chip was tested according to
a scheme which evaluated the general functioning, the threshold
behaviour, and the noise level.
For each channel a threshold reference voltage, $U_{ref}$, was defined
as the threshold at which a standard 4\,fC test pulse is recorded with
50\% efficiency. The noise behaviour of a chip was characterized by
the threshold $U_{noise}$ for which the noise rate exceeded 2~kHz. The
difference $U_{ref} - U_{noise}$ is a measure of the signal-to-noise
distance and thus of the quality of a channel. The parameters of the
test (4\,fC, 2\,kHz) were chosen to yield a high test sensitivity in
the relevant range,
and to yield stable and reproducible results.
\paragraph*{Results of the chip test: }
The chip tests revealed an appreciable range for the $U_{ref}$
threshold (fig.\,\ref{figcorr}). However,
within one chip the variation of thresholds was found to be mostly much
less than the $\pm 30\%$ specified in the purchase order.
The chip-to-chip variations prohibit the definition of a common
threshold for the whole system while keeping the overall
efficiency high.
On the other hand, to keep the front-end electronics simple and
compact, individual threshold settings for each channel have to be
avoided. As a compromise 4 categories of threshold ranges were defined:
$$
U_{ref} = 850 \ldots 950,\, 950 \ldots 1050,\, 1050 \ldots 1150,\, >
1150\,{\rm mV}.
$$
To account for the variation of the signal-to-noise distance at a given
$U_{ref}$ (fig.~\ref{figcorr}), 3 noise categories corresponding to
noise distances were defined within each of the 4 threshold categories:
$$
\overline{U_{ref}}-U_{noise} =
350 \ldots 450,\, 450 \ldots 550, \, > 550\,{\rm mV},
$$
where $ \overline{U_{ref}}$ are the central values of the threshold
categories ($\overline{U_{ref}}$ = 900, 1000, 1100, 1200\,mV). A chip
was assigned to one of the 12 categories according to its minimal
$U_{ref}$ and maximal $U_{noise}$ values. The assignment was used to
mount two similar chips on a board and to combine similar boards to
groups which are supplied with a common threshold.
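The sorting rule can be summarized in code (a sketch; the category edges are those quoted above, while the tuple encoding of the 12 categories is our own):

```python
THR_EDGES = (850, 950, 1050, 1150)     # mV, lower edges of the U_ref categories
THR_CENTERS = (900, 1000, 1100, 1200)  # mV, central values U_ref_bar
NOISE_EDGES = (350, 450, 550)          # mV, edges of the noise-distance categories

def chip_category(u_ref_min_mV: float, u_noise_max_mV: float):
    """Assign a chip to one of the 4 x 3 = 12 categories from its minimal
    U_ref and maximal U_noise (classification scheme from the text)."""
    t = sum(u_ref_min_mV >= e for e in THR_EDGES) - 1
    if t < 0:
        raise ValueError("U_ref below the lowest category")
    distance = THR_CENTERS[t] - u_noise_max_mV
    n = sum(distance >= e for e in NOISE_EDGES) - 1
    if n < 0:
        raise ValueError("noise distance below the lowest category")
    return t, n  # (threshold category 0..3, noise category 0..2)

# A chip with minimal U_ref = 980 mV and maximal U_noise = 400 mV:
print(chip_category(980.0, 400.0))  # (1, 2): second threshold, best noise category
```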
\begin{figure}
\begin{center} \leavevmode
\begin{tabular}{cc}
\epsfxsize=6.5cm \epsfbox{pict/thr_corr.eps} &
\epsfxsize=6.5cm \epsfbox{pict/diff_u50_unoise.eps}%
\end{tabular}
\end{center}
\caption{Left: Threshold $U_{ref}$ plotted against the noise
threshold $U_{noise}$ for each channel of the tested chips. Right:
Distribution of the signal-noise difference $U_{ref}- U_{noise}$ for
each channel of the tested chips and for the boards where the
signal-noise difference is defined by the channels with minimal
$U_{ref}$ and maximal $U_{noise}$. }
\label{figcorr}
\vspace{5mm}
\begin{center} \leavevmode
\epsfxsize=6cm \epsfbox{pict/diff_u50.eps}%
\end{center}
\caption{Threshold uniformity on the ASD-8 boards: distribution of the
difference between maximal and minimal reference threshold $U_{ref}$
on a board ($\sim 250\,$mV/fC).
Differences above 300\,mV have been reduced by a diode circuit (see text).
}
\label{figdiffuthr}
\end{figure}
\subsubsection{Board Tests}
The 11\,000 produced boards had to undergo quality tests and were then
sorted into 12 categories, as was done for the single chips: 4
categories according to the threshold reference voltage, $U_{ref}$,
and 3 categories of signal-to-noise distance $
\overline{U_{ref}}-U_{noise}$ (see fig.\,\ref{figcorr} right). A board
enters into a category according to the minimal $U_{ref}$
and maximal $U_{noise}$ of the channels.
Details about the distribution of the boards in different categories
are given in \cite{lhce}. For each board the two chips belong to the
same category. The remaining variations of $U_{ref}$ within the 16
channels can be seen in fig.\,\ref{figdiffuthr} which shows the
difference between the maximal and minimal $U_{ref}$ on the boards.
Differences larger than 300 mV have been decreased by Schottky diodes
as described in section \ref{secasd_board}. With this
procedure the threshold uniformity on the boards is about $\pm 15\%$.
The variations between different boards are accounted for by the
threshold settings on the distribution boards (section \ref{secpdb}).
The board quality is mainly determined by the noise
category; more than 80\% are in the upper two categories with
$\overline{U_{ref}}-U_{noise} > 450 \,{\rm mV}$ for both chips.
\renewcommand{\textfraction}{.3}
\renewcommand{\floatpagefraction}{.7}
\section{The Time Measurement System}
\subsection{Introduction}
The HERA-B TDC system has been developed for the readout of the Outer
Tracker, the RICH, the Muon System and the High-p$_T$ Chambers.
Except for the Outer Tracker the other systems use the TDC chip only as
a hit register. The system is highly integrated at low power
consumption and reasonable costs (Table \ref{tab02}).
\begin{table}
\begin{center}
\caption{Characteristics of the TDC chip.}
\vspace{1mm}
\label{tab02}
\begin{tabular}{|l|l|}
\hline
integration density & 8 ch. on $9.4\times 9.4$\,mm$^2$ die \rule{0.0mm}{0mm} \\
power consumption & 0.06 W\,/\,chip\\
operating voltage & 5 V\\
time bin & about 0.5\,ns \\
time resolution & about 0.2\,ns\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\epsfig{file=pict/tdcchip_new.eps,width=\textwidth}
\caption{\label{chip} Block diagram of the TDC chip.}
\end{center}
\end{figure}
The TDC was designed to digitize the time in 0.5\,ns bins with an 8
bit output. After digitization the data of each channel are stored in
a digital pipeline 128 cells deep which is read out by a digital signal
processor (SHARC processor ADSP-21060 from Analog Devices). With the
integrated channel buffering the requirement of a dead-time free
readout after the First Level Trigger decision is fulfilled.
The chips are mounted on boards with 16 chips each, corresponding to 128
TDC channels per board. The crates housing the TDC boards are
mounted on the cable frame of the detector superlayers, except for the
magnet chambers for which the TDC crates are placed outside the
magnet.
The maximal cable length between the ASD-8 boards and the TDC crates
is 5 to 10 m, depending on the superlayer.
%
\subsection{\label{tdc_circuit}The TDC Chip}
\subsubsection{Overview of the Layout}
Figure \ref{chip} shows the structure of the TDC chip which was
designed by the company MSC\footnote{MSC, Industriestr.\,16, D-76297
Stutensee, Germany} and produced by the company NEC (Japan). The chip was
developed as an ASIC (Application Specific Integrated Circuit) in
0.8~$\mu$m CMOS technology
using VLSI (Very Large Scale Integration) design techniques. The
9.4~$\times$~9.4~mm$^2$ die is housed in a 160-pin package. The TDC
chip includes the following features:
\begin{itemize}
\item[-] 8 differential inputs for the ASD-8 outputs,
\item[-] 8 hit output registers (with optional OR for neighbouring
channels) for the First Level Trigger,
\item[-] 8 x 8 bit time-to-digital converter,
\item[-] calibration unit (with high speed multiplier),
\item[-] buffer memory pipeline,
\item[-] derandomizer FIFO memory,
\item[-] readout buffer with addressing unit,
\item[-] chip and channel select units,
\item[-] test device for all input signals,
\item[-] hit input register for 64 channels.
\end{itemize}
\subsubsection{Input Circuit}
The differential input signals from the ASD-8 chips are received by a
{GTL-I/O}--Interface (GTL = Gunning Transceiver Logic).
The GTL standard, characterized as a low-level, high-speed,
noise-immune digital logic, requires the differential swing to be
$\ge$50~mV; in the Outer Tracker system it was set to 120~mV. The
levels are defined by the circuit on the TDC board as shown in
fig.~\ref{figasdlevel}. All other input/output signal levels of the
chip are TTL compatible to simplify interfacing.
After passing the GTL-I/O--Interface the drift chamber signals are
latched with the BX clock. Thus the hit information is available in
the hit output register in the next BX clock cycle and
can be used by the First Level Trigger. If required, the hit
information of two neighbouring channels can be merged by an
OR--function.
\subsubsection{TDC Circuit and Internal Calibration}
The heart of the TDC chip is an 8 bit {TDC} circuit which converts
time differences into digitized values using a delay line chain made
of logic gates. The method\footnote{German patent nr.~41\,11\,350}
\cite{tdc_meas} is fully digital with the major advantage that no
conversion time is required and the measurements can be made nearly
without deadtime (section \ref{tdcperform}).
The time measurement bin of the circuit follows from the basic gate
delay and was measured to be about 0.48\,ns, very close to the
targeted 0.5\,ns.
Each chip has nine channels, eight for time measurements and one for
calibration (not shown in fig.\,\ref{chip}). At the startup of the
system (after a Reset) all nine channels are calibrated assuming a
linear relation between the measured time interval and the TDC output
(see section \ref{tdcperform}). For each channel, slope and offset of a
straight line are determined by measuring time intervals of 100\,ns
and 200\,ns, both derived from a 10 MHz gauge clock, with a 10\,bit
resolution.
The straight line parameters include a mapping of a 100\,ns interval
onto the 256 output counts of the TDC. The two parameters for each
channel are written into the memory of the calibration unit.
To compensate for temperature and voltage variations, the chip features a
self-calibration procedure. During data taking, every 0.8 seconds the
slope parameter of the calibration channel is re-measured in the same
way as for the initial calibration and a correction factor is
calculated which is also written into the memory of the calibration
unit. The time for the other 8 channels on the chip is multiplied by
this
factor. The mapping of the 100\,ns interval onto 8 bits including the
multiplication by the correction factor is done during the transfer
from the pipeline to the derandomizer by the High Speed Multiplier
unit.
The time is measured in common stop mode. The pulse from the GTL input
or the test device yields the START while the STOP is derived from the
external HERA BX clock which is synchronized with the
bunch crossing signal (BX). Because of the mapping of 100~ns
onto the 8 bits (256 counts)
the least significant bit (LSB) of the time measurement (1 TDC count)
corresponds to 0.39~ns in HERA-B. Note that the bins for the time
measurement remain fixed at about 0.48\,ns which determines the time
resolution (section \ref{tdcperform}).
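As a quick numeric cross-check (ours, not part of the original text), the two time scales quoted above are consistent: the output LSB follows from mapping 100\,ns onto 256 counts, while the physical quantization bin stays at the gate delay of about 0.48\,ns:

```python
lsb_ns = 100.0 / 256     # time per 8-bit TDC output count
bin_ns = 0.48            # measured delay-line gate delay
print(round(lsb_ns, 4))  # -> 0.3906
# Condition for covering the full 100 ns range with 8 bits: LSB <= bin
print(lsb_ns <= bin_ns)  # -> True
```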
\subsubsection{Buffer Circuits}
The {pipeline} of each channel is a ring-buffer memory with 8 bits per
cell
and a depth of 128 events. The pipeline depth corresponds to a time
interval of about 12~$\mu$s available for the First Level Trigger
decision. Writing and reading the data is controlled by two different
pointers using the BX number as address. On an Accept signal from the
First Level Trigger (FLT Accept) the {F}ast {C}ontrol {S}ystem (FCS)
generates the read pointer (FLT Number) and pushes the event data to
the derandomizer FIFO. The design mean trigger rate is 50~kHz although
the TDC system is capable of running at more than 100~kHz.
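A back-of-envelope check (ours, not from the text) of the latency budget: 128 pipeline cells at the 96\,ns HERA bunch-crossing period give the quoted value of about 12~$\mu$s.

```python
bx_period_ns = 96     # HERA bunch-crossing period
pipeline_depth = 128  # ring-buffer cells per channel
latency_us = pipeline_depth * bx_period_ns / 1000.0
print(latency_us)     # -> 12.288
```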
The {derandomizer FIFO} with a depth of 16 events serves for each TDC
channel as buffer memory for peak rates. The events in the FIFO are
read out into the {S}econd {L}evel {B}uffer (system of SHARC processors).
The readout is organized by the {Chip Select Unit} which addresses the
TDC chip and the {Channel Select Unit} which addresses the buffer of
each channel. If the readout cannot follow the trigger rate, the FIFO
is filled up and the next event could be lost. In this case a FIFO
Overflow bit is set which is used in HERA-B to stop the data
acquisition until the buffer is available again.
\subsubsection{Control and Test Functions}
The chip can be tested using an internal pattern generator which is
controlled by a 3~bit input for setting the test functions. Two bits
control the mode (all, even, odd channels on) and one bit starts the
measurement of a preselected time interval provided by the FCS system.
In this way the proper functioning of the chip can be tested.
Instead of being used as an 8-channel TDC, the chip can also be used
as a hit register with 64 channels.
The hit register mode is set by a function bit (FUNC) which enables the 64
hit inputs and switches off the TDC circuits. The hit inputs are
grouped by 8, so that the same data format (8 times 8~bits) can be
used. The buffer management is the same as for the TDC application.
\subsection{The TDC board}
The TDC board (fig.\,\ref{board}), built in
9U VME format, comprises 16 TDC chips
together with a DAQ interface (SHARC Link).
Except for the GTL inputs the signals on the board are TTL levels.
On the boards to be used as a hit register, an auxiliary chip converts
the differential amplifier outputs into single-ended lines (`64 TTL
HIT INPUTS' in fig.\,\ref{chip}). All channels can be initialized in
parallel by a general Reset.
The main components on the TDC board are:
\begin{itemize}
\item[-] 16 TDC chips,
\item[-] SHARC Link interface,
\item[-] system clock + gauge clock (reference clock for calibration),
\item[-] Protocol Control Unit (PCU) + multiplexer (MUX),
\item[-] address switches (board and chip addresses).
\end{itemize}
The high-speed SHARC Link interface
on the board (mainly fast drivers and transfer logic) transfers the
data for all channels from the FIFO on the TDC chip via its
point-to-point connection to a host system which, in HERA-B, is a
SHARC processor.
\begin{figure
\begin{center}
\includegraphics[scale=0.5,clip,bb=0 0 590 437]{pict/board1.eps}
\caption{\label{board} Schematics of the TDC board structure for
time measurement.}
\end{center}
\end{figure}
The 6-bit link port used for the transfer has four data lines, a clock
line and an acknowledge line. The link ports send data packed into 48 bit
words in direct communication with the SHARC memory. A fixed
clock/acknowledge handshake has been designed to control transfers
(only transmit cycles) to a SHARC-compatible receiver.
The Protocol Control Unit (PCU), which is a programmable logical
device (Lattice Semiconductors ispLSI 1048E), was employed to address
all TDC channels sequentially by using the Chip and Channel Select
units and to perform a packet-oriented protocol transmission
consisting of
144~bytes. This Event Format Block Protocol is organized in three
major sections: the header field, the data field and the trailer field
(for details see \cite{zimmermann}).
The system clock determines the execution and transmission rates. The
TDC board can utilize clock rates from 10~MHz to 30~MHz. Thus the
maximal transfer rate to the SHARC board is 15~MByte/s (4 bit data
line at 30~MHz). In the Outer Tracker system a 27~MHz clock is
used. A small clock circuitry generates a gauge clock rate of 10~MHz
to be used for the TDC online calibration.
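The quoted bandwidth can be verified with a one-line estimate (our cross-check): the 4-bit data path clocked at 30~MHz carries 4 bits per cycle.

```python
bits_per_cycle = 4   # data lines of the link port
clock_mhz = 30       # maximal system clock
rate_mbyte_s = bits_per_cycle * clock_mhz / 8
print(rate_mbyte_s)  # -> 15.0
```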
The status of the TDC board is displayed by four LEDs indicating a
System Reset, a possible overflow situation, a Trigger Accept and a
fault condition. The overflow signal is a logical OR signal of all TDC
overflow flags.
To limit noise on the ground plane the TDC board provides three
separate, independently grounded power systems: for the digital
control, for the TDC chips and for the FCS signals and the SHARC
outputs. The user can select the optimal power scheme by bridges
connecting planes at several locations on the board. For the HERA-B
application one common power supply turned out to be sufficient. The
major environmental and electrical specifications of the board are
summarized in Table \ref{specs}.
\begin{table}
\begin{center}
\caption{\label{specs}Environmental and electrical specifications
of the TDC board.}
\vspace{1mm}
\begin{tabular}{|lll|}\hline
Operation temperature range&0 to 55 $^{\circ}$C&\\
Operation humidity range&0 to 90 \%& \\
Height&366.8~mm&(9 VME HU)\\
Depth&220.0~mm&\\
Width&1.9~mm& \\
Weight&736~g&\\\hline\hline
{\bf Voltage}&{\bf Regulation}&{\bf Max. Current}\\\hline
+5~V (digital)&$\pm$5\%&3.02~A\\
+5~V TDC&$\pm$5\%&1.18~A\\\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Calibration and Performance of the TDC system}
\label{tdcperform}
The TDC system was designed to have a time measurement binning of
about 0.5~ns with a differential non-linearity of less than 3\%. Due
to tolerances in the production process the actual value can differ
from the design value. The time bin size was measured for an
engineering sample of 20 TDC chips with a chip tester HP 82000
yielding ($0.48~\pm~0.05$)\,ns at the working
temperature of 35~$^{\circ}$C with a temperature dependence of
($0.0015~\pm~0.0007$)\,ns/K. The quoted errors include the uncertainty
of the measurement device and the dispersion of the sample. For all
delivered batches of chips, random test samples have verified that
these values remained stable. Because of the mapping of the required
maximal time interval onto 8 bits by the calibration procedure
described in section \ref{tdc_circuit}, the actual size of the time
measurement bin is uncritical. It has only to be assured that the
maximal time interval to be measured can be covered with 8 bits
(i.~e.~the time corresponding to the LSB of the TDC output cannot be
larger than the time measurement bin).
All TDC boards were tested for linearity, dead time and time
resolution. A dead time was measured to arise between sequential BX
clock cycles with typical values between 3\,ns and 5\,ns. This
only slightly decreases the maximal measurable time interval.
The {linearity} of the time measurement was verified for the whole
time range of 100\,ns. A straight line fitted to the measured TDC
values over a range of 100~ns yields a slope of
($2.560~\pm~0.002$)\,counts/ns reflecting the mapping of the time
range onto 256 bins. The error is determined from fitting time
measurements of a single channel.
The resolution for a single TDC channel was measured to be 0.14~ns (1
standard deviation),
in agreement with the statistically expected value for a time
measurement in steps of 0.48~ns.
The corresponding uncertainty of the hit position of 11\,$\mu$m
contributes negligibly to the position resolution of the detector. The
long-term stability of the resolution
is assured by the periodic calibration every 0.8\,s. Variations on a
shorter time scale could be caused by ripples from the TDC power
supplies. The measured resolution confirms that the chosen power
supplies are appropriate.
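The quoted single-channel resolution agrees with the standard deviation of a uniform quantization error over a 0.48\,ns bin, i.e.\ $\mathrm{bin}/\sqrt{12}$ (our cross-check, not part of the original text):

```python
import math

bin_ns = 0.48
sigma_ns = bin_ns / math.sqrt(12)  # rms of a uniform quantization error
print(round(sigma_ns, 2))          # -> 0.14
```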
More details on the calibration of the TDC channels and the
performance of the TDC system can be found in \cite{zimmermann}.
\section{Electronics Installation and Commissioning}
\label{secinstall}
\subsection{Installation and System Integration}
The ASD-8 boards of the different categories were installed such that
amplifiers with a large signal-noise difference ($> 550\,$mV) were
chosen for the innermost detector regions with the highest
occupancies. The outer detector parts, contributing less to the
acceptance, were equipped with boards with a small signal-noise
difference.
The CAN-bus-controlled distribution boards allow setting individual
thresholds for groups of 8 ASD-8 boards (see section \ref{secpdb}).
\begin{figure}
\begin{center} \leavevmode
\epsfxsize=8cm \epsfbox{pict/noise_paper_one.eps}%
\end{center}
\caption{Noise occupancies for two different sectors of a superlayer
after installation in HERA-B: The 5\,mm cells are equipped with ASD-8
boards from the best noise category, labelled as ``green'' (circles),
while the 10~mm cells are connected to boards of the worst noise
category, labelled as ``red'' (boxes).}
\label{fignoise}
\end{figure}
The grounding and shielding scheme of the front-end electronics has
been implemented as described in section \ref{sec_ground}. After the
initial installation some rework had to be done to improve the system
stability. For example, bad ground connections between either
amplifier and gas box or amplifier and cable shielding led to
oscillations of the amplifiers with their characteristic frequencies
of 40 to 80~MHz.
Another problem was a high noise level generated by the signal
connections between the TDC boards and the trigger link boards which
prepare the tracker hits for the First Level Trigger. This forced an
increase of the thresholds for the superlayers providing the trigger
input. To overcome this problem, high frequency filters were added and
the current transfer over these lines was reduced by removing the line
driver chips from the affected TDC boards. In this way the thresholds
could be set to similar values as in the other superlayers.
\subsection{Commissioning and Performance}
The described front-end electronics has been used in HERA-B since the
beginning of prototype tests in 1997; since January 2000 all tracker
channels have been fully equipped. The electronics ran until the end
of data taking of the HERA-B experiment in 2003 with a very low
failure rate, with the exception of intermittent problems with the HV
boards as described below. For example, only about 0.1\% of the TDC
boards and even less of the ASD-8 boards had to be replaced.
The noise occupancies (probability to find a noise hit in
the readout time window) for two sectors of a superlayer after
installation in HERA-B are shown in fig.\,\ref{fignoise}. The plot
shows a sector with 5 mm cells equipped with ASD-8 boards with low
noise thresholds and a sector with 10 mm cells and ASD-8 boards from
the worst noise class. For the best ASD-8 boards the threshold can be
set as low as 2~fC, while for the worst category the threshold has to
be increased to about 3~fC to achieve a similar noise occupancy around
1\,\%.
In the beginning of 2000 the HERA-B detector was
completed and took first data from April to August. Unfortunately the
expected performance of the Outer Tracker system could not be reached
during this run. The most severe problem was a continuous loss
of channels due to high voltage breakdowns until finally 15\% of the
channels were lost.
During the shutdown of HERA from autumn 2000 to summer 2001 major
repair and improvement work on the Outer Tracker solved all
remaining problems. In particular, most of the high voltage breakdowns
could be identified to be due to two specific capacitors on the HV
board. These two were soldered with a different technique than the
rest, since they were placed on the opposite side of the HV boards
(section \ref{sec_hv_routing}). While 15 capacitors were glued prior
to the soldering process, the remaining two were positioned directly
on the board and soldered. It seems that the remaining soldering paste
under these two capacitors increased the probability of high voltage
breakdowns across them. About 12000 of these capacitors had to be
replaced. This required access to every module and hence a complete
disassembly of the detector, which was quite manpower- and
time-consuming.
After these repairs the Outer Tracker reached its final status. In the
data taking period 2002/2003 the detector was operated without any
specific problems. The design goals of the readout electronics have
been mostly reached. A slight deterioration of performance was
introduced by the necessity to raise the thresholds in some parts of
the detector as explained above. This deficit is mainly caused by the
relatively large channel-to-channel variations of the ASD-8
sensitivity and noise performance and could probably have been
overcome by applying individual thresholds for each channel. In
retrospect, the drawbacks of the grouping of thresholds seem to
outweigh the advantages of simplicity.
It was demonstrated that the electronics can handle high rates, up to
the design value of 40 MHz interaction rate with an occupancy in the
hottest channels of more than 20\%. In the end, however, the experiment
was mostly run at rates not exceeding 5~MHz.
The analysis of the 2002/2003 data yielded an overall good performance
of the Outer Tracker, with a high tracking efficiency of 96\% and a
track hit resolution of 370\,$\mu$m for tracks above 5\,GeV. The
resolution is much worse than the design value. This is partly due
to the non-optimal threshold settings, as described above, but also to
deficiencies in the calibration and alignment procedures. Details of the
Outer Tracker performance are described in a separate paper
\cite{otr_perf}.
\section{Summary}
In this paper we have described the front-end readout system for the
112\,674 drift chamber channels of the HERA-B Outer Tracker
detector. The basic components are the amplifier-shaper-discriminator
chip ASD-8 and a customized TDC chip which provide the required high
integration density, low noise, high sensitivity, rate tolerance, and
low per-channel cost.
The high sensitivity of the ASD-8 amplifiers together with a large
chip-to-chip variation of the thresholds was a major challenge for the
implementation of the amplifiers. By grouping the chips according to
threshold and noise categories, an economic system of threshold
setting was developed, leading to an acceptable noise performance for the whole
system. An improvement is still possible by an individual
adjustment of the thresholds for each channel.
The TDC system is based on an ASIC
which digitizes times in bins of about 0.5\,ns within a full scale of 256
bins. The time measurement is very stable due to an internal automatic
calibration procedure. In HERA-B the drift times are measured within
every bunch crossing period of 96~ns with respect to the external HERA
clock. An integrated pipeline stores data for 128 events
satisfying the requirement for a dead-time free trigger and data
acquisition system at the design trigger rate.
The prototype tests and the analysis of data taken with the full
detector show that the front-end electronics of the Outer Tracker
fulfills the requirements posed on the detector for running in a high
rate environment.
\section*{Acknowledgements}
We thank our colleagues of the HERA-B Collaboration whose common
effort made the running of the detector possible. The HERA-B
experiment would not have been possible without the enormous effort
and commitment of our technical and administrative staff. It is a
pleasure to thank all of them.
We are grateful to Mitchell Newcomer for discussions and much
technical advice and to Karl-Tasso Kn\"opfle for carefully reading
the manuscript and for useful comments.
We express our gratitude to the DESY laboratory for the strong support
in setting up and running the HERA-B experiment. We are also indebted
to the DESY accelerator group for the continuous efforts to provide
good beam conditions.
\vspace{2cm}
\section{Experimental setup}
We here detail the setup presented in Fig.~\ref{fig:setup}(a) of the main text, see Refs.~\cite{Ortiz_2019,Ferreira_2020} for further details. The scattering medium is produced by loading a magneto-optical trap from a vapor of $N\approx 10^8$ $^{85}$Rb atoms, with a low atomic density $\rho\approx 0.005/\lambda^3$ ($\lambda=2\pi/k$ the optical wavelength). After a $2$~ms time of flight, the cloud is illuminated by a flattop intensity laser beam with a frequency $\omega_L$ locked on the $\lvert 3\rangle \rightarrow \lvert 4'\rangle$ hyperfine transition of the D2 line. The beam diameter at the atoms position is $14.7$~mm, which is much larger than the cloud radius ($\sim 0.4$ and $0.8$~mm in the two transverse directions). Hence, the intensity incident on the atoms is uniform (within 10$\,\%$), with Rabi frequency $\Omega$. We use $\lambda/2$ and $\lambda/4$ plates to obtain a circularly polarized light, and the intensity is changed to tune the saturation parameter $s=2\Omega^2/\Gamma^2$ between 0.004 and 60. To maintain similar heating effects over the different regimes, the duration of the laser pulse, always at resonance, is adjusted to get a constant number of photons scattered per atom of $\sim 400$.
The scattered light is collected at $\theta=90^\circ$ from the probe beam axis, using a polarization-maintaining (PM) single-mode fiber. The polarization is selected before the fiber with a $\lambda/2$ plate and a polarization beam splitter (PBS) to maximize the amount of collected photons as well as to adjust the incident polarization along the PM fiber axis. This PM fiber is then connected to a fibered beam splitter (FBS) whose outputs illuminate two single-photon counting detectors (avalanche photodiodes, APDs) connected to a time-to-digital converter (TDC). The latter device allows time-tagging the arrival of each photon. The second input of the fibered beam splitter is used to add a local oscillator (LO) derived from the laser which delivers the probe beam. The LO is frequency-shifted by $\omega_\mathrm{BN}=220$~MHz with an acousto-optical modulator (AOM), and its polarization is adjusted before the entrance of the fiber to correspond to the PM fiber axis.
\section{Second-order correlation function for non-interacting atoms}
We derive here the analytical expression for the second-order correlation function at zero delay $g^{(2)}(0)$ for an ensemble of $N$ two-level non-interacting atoms. The detected electric field is assumed to be measured in the far-field and along a direction $\hat n$, so it reads:
\begin{equation}
\hat E^{+} = E_0 \sum_{a=1}^{N} e^{-ik\hat n\cdot \mathbf{r}_a} \hat\sigma^{-}_a,
\end{equation}
with $\mathbf{r}_a$ the position vector of atom $a$, $\hat\sigma^{\mp}_a$ the two-level lowering/raising spin operators, and $E_0$ a normalization prefactor. Without loss of generality, we hereafter set $E_0=1$, resulting in a normalized electric field intensity which peaks at unity for a single atom. We also assume that the atomic cloud is dilute, thus interactions between the atoms can be disregarded and the steady state of the system is separable. We can then write the state of the system as a direct product as follows
\begin{equation}\label{Eq:SeparableState}
\hat\rho=\bigotimes_{a=1}^{N}\hat\rho_a,
\end{equation}
where $\hat\rho_a$ is the single-particle density matrix.
\subsection{Scattered field intensity}
Let us first calculate the intensity of the field scattered by the atomic ensemble, which reads
\begin{equation}\label{Eq:IntensityDefition}
\begin{aligned}
I&=\langle \hat E^{-} \hat E^+\rangle=\sum_{ab}\Tr{e^{ik\hat n\cdot \mathbf{r}_a}\hat\sigma_a^{+}e^{-ik\hat n\cdot \mathbf{r}_b}\hat\sigma^{-}_b\hat\rho}\\
&=\sum_{a}\Tr{\hat\sigma_a^{+}\hat\sigma^{-}_a\hat\rho}+\sideset{}{'}\sum_{ab}\Tr{e^{ik\hat n\cdot \mathbf{r}_a}\hat\sigma_a^{+}e^{-ik\hat n\cdot \mathbf{r}_b}\hat\sigma^{-}_b\hat\rho},
\end{aligned}
\end{equation}
where we have introduced the notation $\sideset{}{'}\sum\limits_{a,b\ldots n}\equiv \sideset{}{}\sum\limits_a\sideset{}{}\sum\limits_{b\neq a}\quad\ldots\sideset{}{}\sum\limits_{n\neq a,b\ldots n-1}$.
Using now the separability of the atomic state as in Eq.~\eqref{Eq:SeparableState} and conveniently introducing the excited population
\begin{equation}\label{Eq:PopulationDefinition}
n_a \equiv \Tr{\hat\sigma_a^{+}\hat\sigma^{-}_a\hat\rho_a}
\end{equation}
and the coherence
\begin{equation}\label{Eq:CoherenceDefinition}
\beta_a \equiv e^{-ik\hat n\cdot \mathbf{r}_a}\Tr{\hat\sigma^{-}_a\hat\rho_a}\implies \beta_a^{*} \equiv e^{ik\hat n\cdot \mathbf{r}_a}\Tr{\hat\sigma^{+}_a\hat\rho_a},
\end{equation}
we can rewrite Eq.~\eqref{Eq:IntensityDefition} as
\begin{equation}
\begin{aligned}
I&=\sum_{a}n_a+\sideset{}{'}\sum_{ab}\beta_a^*\beta_b,
\\ &=\sum_{a}n_a+\Big|\sum_{a}\beta_a\Big|^2-\sum_{a}|\beta_a|^2.
\end{aligned}
\end{equation}
\subsection{Unnormalized second-order correlation function}
Similarly, following the separability of the atomic state, the second-order correlation reads
\begin{equation}\label{Eq:SecondOrderDefinition}
\begin{aligned}
G^{(2)}&(0)=\langle \hat E^{-} \hat E^{-}\hat E^+ \hat E^+\rangle\\
&= 2\sideset{}{'}\sum_{ab}\Tr{\hat\sigma_a^{+}\hat\sigma^{-}_a\hat\rho_a}\Tr{\hat\sigma_b^{+}\hat\sigma^{-}_b\hat\rho_b}\\
&+4\sideset{}{'}\sum_{abc}\Tr{\hat\sigma_a^{+}\hat\sigma^{-}_a\hat\rho_a}\Tr{e^{ik\hat n\cdot \mathbf{r}_b}\hat\sigma^{+}_b\hat\rho_b}\\
&\quad\quad\quad\,\,\,\,\Tr{e^{-ik\hat n\cdot \mathbf{r}_c}\hat\sigma^{-}_c\hat\rho_c}\\
&+\sideset{}{'}\sum_{abcd}\Tr{e^{ik\hat n\cdot \mathbf{r}_a}\hat\sigma_a^{+}\hat\rho_a}\Tr{e^{ik\hat n\cdot \mathbf{r}_b}\hat\sigma^{+}_b\hat\rho_b}\\
&\quad\quad\quad\,\Tr{e^{-ik\hat n\cdot \mathbf{r}_c}\hat\sigma_c^{-}\hat\rho_c}\Tr{e^{-ik\hat n\cdot \mathbf{r}_d}\hat\sigma^{-}_d\hat\rho_d}.
\end{aligned}
\end{equation}
Using the definitions in Eqs.~\eqref{Eq:PopulationDefinition} and \eqref{Eq:CoherenceDefinition}, we are left with
\begin{equation}
G^{(2)}(0)=2\sideset{}{'}\sum_{ab}n_an_b+4\sideset{}{'}\sum_{abc}n_a\beta_b^*\beta_c+\sideset{}{'}\sum_{abcd}\beta_a^{*}\beta_b^{*}\beta_c\beta_d.
\end{equation}
Reorganizing in terms of sums without index exclusion, one can expand the expression above as
\begin{equation}\label{Eq:G2Separable}
\begin{aligned}
G^{(2)}(0)=&2\Big(\sum_a n_a\Big)^2-2\sum_a n_a^2\\
+&4\Big(\sum_a n_a\Big)\Big(\Big|\sum_b\beta_b\Big|^2-\sum_b|\beta_b|^2\Big)\\
-&8\mathrm{Re}\Big\{\Big(\sum_a n_a\beta^*_a\Big)\Big(\sum_b \beta_b\Big)\Big\}+8\sum_a n_a |\beta_a|^2\\
+&\Big|\sum_a \beta_a\Big|^4-6\sum_a |\beta_a|^4 - 4\Big|\sum_a \beta_a\Big|^2\Big(\sum_b |\beta_b|^2\Big)\\
+&8\mathrm{Re}\Big\{\Big(\sum_a \beta_a\Big)\Big(\sum_b |\beta_b|^2\beta_b^*\Big)\Big\}+2\Big(\sum_a |\beta_a|^2\Big)^2\\
-&2\mathrm{Re}\Big\{\Big(\sum_a \beta_a\Big)^2\Big(\sum_b (\beta^*_b)^2\Big)\Big\}+\Big|\sum_a \beta^2_a\Big|^2.
\end{aligned}
\end{equation}
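As a sanity check (ours, not part of the derivation), the expansion above can be verified numerically against the index-excluded sums of Eq.~\eqref{Eq:SecondOrderDefinition} for random populations $n_a$ and coherences $\beta_a$:

```python
import itertools
import random

random.seed(1)
N = 5
n = [random.random() for _ in range(N)]                  # populations n_a
beta = [complex(random.gauss(0, 1), random.gauss(0, 1))  # coherences beta_a
        for _ in range(N)]

# Direct evaluation with explicit index exclusion (primed sums).
G2 = 0j
for a, b in itertools.permutations(range(N), 2):
    G2 += 2 * n[a] * n[b]
for a, b, c in itertools.permutations(range(N), 3):
    G2 += 4 * n[a] * beta[b].conjugate() * beta[c]
for a, b, c, d in itertools.permutations(range(N), 4):
    G2 += (beta[a].conjugate() * beta[b].conjugate()
           * beta[c] * beta[d])

# Expanded form in terms of unrestricted sums.
Sn = sum(n)
Sn2 = sum(x ** 2 for x in n)
Sb = sum(beta)
R = sum(abs(x) ** 2 for x in beta)
Snb = sum(n[i] * beta[i].conjugate() for i in range(N))
SnR = sum(n[i] * abs(beta[i]) ** 2 for i in range(N))
F = sum(abs(x) ** 4 for x in beta)
Sb2 = sum(x ** 2 for x in beta)
A = sum(abs(x) ** 2 * x.conjugate() for x in beta)

G2_exp = (2 * Sn ** 2 - 2 * Sn2
          + 4 * Sn * (abs(Sb) ** 2 - R)
          - 8 * (Snb * Sb).real + 8 * SnR
          + abs(Sb) ** 4 - 6 * F - 4 * abs(Sb) ** 2 * R
          + 8 * (Sb * A).real + 2 * R ** 2
          - 2 * (Sb ** 2 * Sb2.conjugate()).real + abs(Sb2) ** 2)

print(abs(G2 - G2_exp) < 1e-9)  # -> True
```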
\subsection{Separable steady-state as a function of the saturation parameter}
Considering a laser with wave-vector $\mathbf{k}_L$ driving a two-level atom at resonance with Rabi frequency $\Omega$, the single-atom density matrix in the steady state is given by
\begin{equation}
\begin{aligned}
\hat\rho_a=&\rho^{(a)}_{ee}|e\rangle\langle e|+\rho^{(a)}_{eg}|e\rangle\langle g|\\ +&\rho^{(a)}_{ge}|g\rangle\langle e|+\rho^{(a)}_{gg}|g\rangle\langle g|,
\end{aligned}
\end{equation}
with
\begin{equation}
\begin{aligned}
\rho^{(a)}_{ge}&=\Big(\rho^{(a)}_{eg}\Big)^*=-i\frac{e^{-i\mathbf{k}_L\cdot\mathbf{r}_a}}{1+s}\sqrt{\frac{s}{2}},\\ \rho^{(a)}_{ee}&=\frac{s}{2(1+s)},\\
\rho^{(a)}_{gg}&=\frac{2+s}{2(1+s)},
\end{aligned}
\end{equation}
where $s\equiv 2\Omega^2/(\Gamma^2+4\Delta^2)$ is the saturation parameter, which on resonance ($\Delta=0$) reduces to the ratio $s=2\Omega^2/\Gamma^2=P_\mathrm{SE}/P_\mathrm{EL}$, as discussed in the main text.
Substituting the elements of the single-particle density matrix in the definition of $g^{(2)}(0)=G^{(2)}(0)/I^2$, one is left with
\begin{widetext}
\begin{equation}
\begin{aligned}
g^{(2)}(0)=&\frac{1}{(N s + |\Phi_{1}|^2)^2}\Bigg(2Ns[2+(N-1)s]+4s(N-2)|\Phi_1|^2+|\Phi_1^2-\Phi_2|^2\Bigg),
\end{aligned}
\end{equation}
\end{widetext}
where we have defined
\begin{equation}
\begin{aligned}
\Phi_1&=\sum_a e^{i(k\hat n-\mathbf{k}_L)\cdot \mathbf{r}_a},\\
\Phi_2&=\sum_a e^{i2(k\hat n -\mathbf{k}_L)\cdot \mathbf{r}_a}.
\end{aligned}
\end{equation}
For $s\to\infty$, one recovers the formula $g^{(2)}(0)=2(1-1/N)$. Under the destructive interference condition $\Phi_1=0$, $g^{(2)}(0)$ acquires a term scaling as $(|\Phi_2|/sN)^2$, which diverges for $s\to 0$ at fixed $N$.
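These limits can be checked numerically (our illustration, not from the paper; the atomic positions are encoded through the phases $(k\hat n-\mathbf{k}_L)\cdot\mathbf{r}_a$):

```python
import cmath
import random

def g2_zero(s, phases):
    """Closed-form g2(0) for saturation s and one phase per atom."""
    N = len(phases)
    Phi1 = sum(cmath.exp(1j * p) for p in phases)
    Phi2 = sum(cmath.exp(2j * p) for p in phases)
    num = (2 * N * s * (2 + (N - 1) * s)
           + 4 * s * (N - 2) * abs(Phi1) ** 2
           + abs(Phi1 ** 2 - Phi2) ** 2)
    return num / (N * s + abs(Phi1) ** 2) ** 2

random.seed(0)
N = 50
phases = [random.uniform(0, 2 * cmath.pi) for _ in range(N)]
print(round(g2_zero(1e6, phases), 3))  # -> 1.96, i.e. 2(1 - 1/N)

# Two atoms half a wavelength apart along the observation direction
# give Phi1 = 0, Phi2 = 2, hence g2(0) = ((1 + s)/s)^2:
print(round(g2_zero(0.01, [0.0, cmath.pi])))  # -> 10201
```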
\section{Siegert relation for two independent fields}
Here we show in a condensed manner that the sum of two fields, each satisfying the Siegert relation and the associated conditions described in the main text, also satisfies the relation, provided that the fields are uncorrelated. The derivation is provided for two arbitrary electric fields $\hat E^{+}_e$ and $\hat E^{+}_i$ corresponding, for example, to the elastically and inelastically (that is, spontaneously emitted) electric fields of the main text. The total electric field is $\hat{E}^+ = \hat{E}_e^+ + \hat{E}_i^+$, and it presents the following second-order correlation function in the steady state:
\begin{equation}
\begin{aligned}
G^{(2)}(\tau)= & \langle [\hat{E}^{-}_e(t) + \hat{E}^{-}_i(t)] [\hat{E}^{-}_e(t+\tau) + \hat{E}^{-}_i(t+\tau)]\\
\times&[\hat{E}^{+}_e(t+\tau) + \hat{E}^{+}_i(t+\tau)] [\hat{E}^{+}_e(t) + \hat{E}^{+}_i(t)]\rangle.
\end{aligned}
\end{equation}
Both elastic and inelastic terms of the electric field have a zero average: $\langle \hat{E}_{e,i}\rangle=0$, so their sum does as well: $\langle \hat{E}\rangle=0$. Furthermore, the absence of correlation between the fields means that the contributions from the elastic and inelastic terms can be factorized in the above expression of $G^{(2)}$, which in turn leads to the cancellation of several terms:
\begin{equation}
\begin{aligned}
G^{(2)}(\tau)= & \langle \hat{E}^{-}_e(t) \hat{E}^{-}_e(t+\tau) \hat{E}^{+}_e(t+\tau) \hat{E}^{+}_e(t) \rangle \\
+&\langle \hat{E}^{-}_i(t) \hat{E}^{-}_i(t+\tau) \hat{E}^{+}_i(t+\tau) \hat{E}^{+}_i(t) \rangle\\
+&2\langle \hat{E}^{-}_e(t) \hat{E}^{+}_e(t)\rangle\langle \hat{E}^{-}_i(t) \hat{E}^{+}_i(t)\rangle\\
+&2\mathrm{Re}\left[\langle \hat{E}^{-}_e(t+\tau) \hat{E}^{+}_e(t)\rangle\langle \hat{E}^{-}_i(t) \hat{E}^{+}_i(t+\tau) \rangle\right].
\end{aligned}
\end{equation}
Considering now that each scatterer presents the same single-particle correlation functions $G^{s(1)}_{e,i}(\tau)$ and contributes equally to the total field, the single-emitter correlation function of the field reads $G^{s(1)}(\tau)=G^{s(1)}_{e}(\tau)+G^{s(1)}_{i}(\tau)$, where we have used that the fields are uncorrelated and have zero average. The first-order correlation function of the total field is then equal to $\langle \hat{E}^{-}(t)\hat{E}^{+}(t+\tau)\rangle=NG^{s(1)}(\tau)$, with $N$ the number of scatterers. Calling $G^{(2)}_{e,i}(\tau)$ the non-normalized second-order correlation function of the total elastic or inelastic terms, respectively, we obtain:
\begin{equation}
\begin{aligned}
G^{(2)}(\tau) =& G^{(2)}_{e}(\tau)+G^{(2)}_{i}(\tau)+2N^2G^{s(1)}_{e}(0)G^{s(1)}_{i}(0)\\
+& 2N^2\mathrm{Re}\left[G^{s(1)}_{e}(\tau)G^{s(1)*}_{i}(\tau)\right].\label{eq:G2a}
\end{aligned}
\end{equation}
Assuming that the spectrum emitted by a single atom is symmetric, its Fourier transform $G^{s(1)}_{e,i}(\tau)$ is real. We also assumed that both the elastic and inelastic fields satisfy the Siegert relation, so \eqref{eq:G2a} can be rewritten as:
\begin{equation}
\begin{aligned}
G^{(2)}(\tau)=& N^2\left( \left[G^{s(1)}_{e}(0)\right]^2 +\left[G^{s(1)}_{e}(\tau)\right]^2 \right)\\
+& N^2\left( \left[G^{s(1)}_{i}(0)\right]^2 +\left[G^{s(1)}_{i}(\tau)\right]^2 \right)\\
+& 2N^2G^{s(1)}_{e}(0)G^{s(1)}_{i}(0)+ 2N^2G^{s(1)}_{e}(\tau)G^{s(1)}_{i}(\tau).
\end{aligned}
\end{equation}
This can be simplified to:
\begin{equation}
\begin{aligned}
g^{(2)}(\tau)&= \frac{\left[G^{s(1)}_{e}(0)+G^{s(1)}_{i}(0)\right]^2 + \left[G^{s(1)}_{e}(\tau)+G^{s(1)}_{i}(\tau)\right]^2} {\left[G^{s(1)}_{e}(0)+G^{s(1)}_{i}(0)\right]^2} \\ &= 1 + |g^{(1)}(\tau)|^2.
\end{aligned}
\end{equation}
Thus, the sum of two uncorrelated Siegert-satisfying fields still satisfies the Siegert relation. For our particular system, it implies that in the intermediate saturation regime, where both elastic and inelastic scattering occur, the Siegert relation is verified for a large number of scatterers.
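The last simplification can be confirmed numerically (our check, not part of the derivation): with real single-emitter correlations $G^{s(1)}_{e}$ and $G^{s(1)}_{i}$, the four contributions combine exactly into the Siegert form.

```python
import random

random.seed(2)
a0, at = random.random() + 1, random.uniform(-1, 1)  # G1_e(0), G1_e(tau)
b0, bt = random.random() + 1, random.uniform(-1, 1)  # G1_i(0), G1_i(tau)

# Sum of the four contributions (the common factor N^2 cancels):
num = (a0 ** 2 + at ** 2) + (b0 ** 2 + bt ** 2) + 2 * a0 * b0 + 2 * at * bt
g2 = num / (a0 + b0) ** 2
g1 = (at + bt) / (a0 + b0)
print(abs(g2 - (1 + g1 ** 2)) < 1e-12)  # -> True
```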
\end{document}
\section{Introduction}
Grafakos and Sansing \cite{grafakos} developed the concept of a direction-sensitive variant of the short-time Fourier transform (STFT) by introducing the Gabor ridge functions that can be viewed as a time-frequency analysis in a certain domain. A slightly different version of their concept was considered by Giv in \cite{Giv}, where he defined the directional short-time Fourier transform (DSTFT). In \cite{KS} the authors analyzed the DSTFT on Schwartz test function spaces, proving the continuity theorems and extending it to the spaces of tempered distributions. Moreover, in \cite{APS}, the results of Giv are extended through the investigations of the STFT in the direction of $u\in\mathbb S^{n-1}$ on the distributions of exponential type and through the analysis of the (multi)directional wave front sets for tempered distributions.
In the past several decades, there has been sustained interest in integral transforms on spaces of ultradistributions, such as the wavelet transform, the STFT, and the Laplace and Hilbert transforms \cite{CKP, DP, KPP, Pil1, T1}, as natural generalizations of these transforms from the spaces of distributions. In the first part of this paper, in Section \ref{se2}, we introduce on Gelfand-Shilov spaces (of Roumieu and Beurling type) \cite{GS} the multidimensional STFT in the direction of $\bold u^k=(u_1,...,u_k)$, where $u_i, i=1,\ldots,k,$ are independent vectors of $\mathbb S^{n-1}$. Moreover, we analyze the corresponding synthesis operator. By a linear transformation of coordinates, we simplify our exposition, considering the direction $\bold e^k=(e_1,...,e_k)$. By the continuity results presented in Theorems \ref{ridir} and \ref{synthcont}, we show in Subsection \ref{temperedultra} that both transforms, defined as transposed mappings and as actions on appropriate window functions, coincide on Gelfand-Shilov (GS) spaces of ultradistributions.
In the second part, in Section \ref{se3}, we analyze the regularity properties of a GS ultradistribution $f\in\mathcal S^{\prime\alpha}(\mathbb R^n)=\mathcal S^{\prime\alpha}_\alpha(\mathbb R^n)$ by introducing the $k$-directional regular sets and the wave front sets via the $k$-DSTFT of $f$. The main result is presented in Theorem \ref{nwr}, where we show that the $k$-directional wave front does not depend on the window function. We also consider the partial wave front in the sense of \cite{hor} and show that this notion is equivalent to the $k$-directional wave front. To our knowledge, the partial wave front has not previously been considered, either for distribution spaces or for spaces of GS type.
More on micro-local analysis in spaces of ultradistributions can be found in \cite{JPTT, JPTT1}, as well as in \cite{Coriasco1, Coriasco2} and the references therein.
Let us note that we follow \cite{APS} (related to distributions) and give here new details which are important when developing the analysis on GS spaces. Similarly, in the last subsection, when we compare directional wave fronts, we follow \cite{PP}, but again with details related to ultradistributions which make the proof more complex.
\subsection{Notation}
We employ the notation $\mathbb N_0$, $\mathbb{R}$ and $\mathbb C$ for the sets of natural (including zero), real and complex numbers, respectively; $\mathbb{S}^{n-1}$ stands for the unit sphere of $\mathbb{R}^n$.
For given multi-indexes $p=(p_1,...,p_n),v=(v_1,\ldots,v_n)\in\mathbb N_0^n$ and $t=(t_1,...,t_n)\in\mathbb{R}^n$, we write $$t^p=t_1^{p_1}\cdot...\cdot t_n^{p_n}, \,
(-i)^{|p|}D^p=\partial^p_t=\frac{\partial^{|p|}}{\partial t_{1}^{p_1}\cdots \partial t_{n}^{p_n}}, \, \, |p|=p_1+...+p_n,
$$
and
for $v\leq p, \, \binom{p}{v}=\prod_{i=1}^n\binom{p_i}{v_i}$, where $ v\leq p \ \text{means} \ v_i\leq p_i, \ i=1,...,n.$
Points in $\mathbb R^k$ are denoted by $\tilde x, \tilde y,...,$ while points in $\mathbb R^{n-k}$ are denoted by $\tilde{\tilde x}, \tilde{\tilde y},\ldots$. So $$ x=(x_1,...,x_n)=(\tilde x,\tilde{\tilde x}), \mbox{ where } \tilde x=(x_1,...,x_k),\; \tilde{\tilde x}=(x_{k+1},...,x_n).$$
Elements of $\mathbb N_0^k$ and $\mathbb N_0^{n-k}$ are denoted by tilde and double tilde as in multidimensional real cases.
For an open set $\Omega\subseteq\mathbb{R}^n$ the symbol $K\subset\subset\Omega$ means that $K$ is a compact set contained in $\Omega$. The support of
a given function (or ultradistribution) $f$ is denoted by $\mathop{\mathrm{supp}} f$.
We say that a function $f$ is compactly supported if there exists
a $K\subset\subset\mathbb{R}^n$ such that $\mathop{\mathrm{supp}} f\subset K$. The Fourier transform of a function $f$ is defined as $\hat f(\xi)=\int_{\mathbb{R}^n}f(x)e^{-2\pi i\xi\cdot x}dx$, $\xi\in\mathbb{R}^n$. For the dual pairing we use $\langle f,g\rangle$; for the $L^2$ inner product of $f$ and $g$ we use the notation $(f,g)$.
\subsection{Ultradistribution spaces}
Following the approach of \cite{Kom73}, we introduce test spaces defined by the Gevrey sequences $M_p=p!^\alpha$, $\alpha>1$. All the results can be given for general sequences $(M_p)_{p\in\mathbb N}$ satisfying appropriate conditions. We only consider spaces of Roumieu type \cite{Kom73}. The results of this paper also hold for the Beurling type spaces, whose topology is handled in the same way as for the Schwartz space of rapidly decreasing functions; because of that, we omit this part of the analysis.
Let $K\subset\subset\Omega$ and $h>0$. We recall the definitions
of some spaces of test functions \cite{Kom73} ({of the Roumieu type}):
\begin{align*}
\mathcal{E}^{\alpha}_h(K)\, &:=\, \{\varphi\in\mathcal{C}^{\infty} \left(\Omega\right)\!:\
\sup_{{t\in K, p\in\mathbb N_0^n}}
\frac{h^{ |p|}}{p!^\alpha}|D^p\varphi(t)|<\infty\};
\\
\mathcal{D}^{\alpha}_h(K)\, &:=\,
\mathcal{E}^{\alpha}_h(K)\cap\{\varphi\in\mathcal{C}^{\infty}\left(\Omega\right)\!:\
\mathop{\mathrm{supp}}\varphi\subset K\};\\
\\
\mathcal{E}^{\alpha} (K)\, &:=\,
\varinjlim_{{h\rightarrow0}}\mathcal{E}^{\alpha}_h(K); \qquad
\mathcal{E}^{\alpha}(\Omega)\, :=\, \varprojlim_{K\subset\subset\Omega} \mathcal{E}^{\alpha} (K);\\
\mathcal{D}^{\alpha} (K)\, &:=\,
\varinjlim_{{h\rightarrow0}}\mathcal{D}^{\alpha}_h(K); \qquad
\mathcal{D}^{\alpha}(\Omega)\, :=\, \varinjlim_{K\subset\subset\Omega} \mathcal{D}^{\alpha} (K).
\end{align*}
The space of {ultradistributions} $ \mathcal{D}^{\prime\alpha}(\Omega)$ is the strong dual of $\mathcal{D}^{\alpha}(\Omega)$; its subspace $ \mathcal{E}^{\prime\alpha}(\Omega)$ consists of
compactly supported ultradistributions. All these spaces are complete, bornological, Montel and Schwartz (\cite{Kom73}).
\subsubsection{Gelfand-Shilov type spaces}
Let $\alpha,\beta,a>0$. By $(\mathcal{S}_a)^{\alpha}_{\beta}(\mathbb{R}^n)$ is denoted the Banach space of all smooth functions $\varphi$ on $\mathbb{R}^n$ such that the norm
\begin{equation}\label{ngs}
\sigma^{\alpha,\beta}_a(\varphi)=\sup_{t\in\mathbb{R}^n,p,q\in\mathbb N_0^n}\frac{a^{|p|+|q|}}{p!^\beta q!^\alpha}|t^p \varphi^{(q)}(t)|
\end{equation}
is finite.
The space $\mathcal{S}^\alpha_\beta(\mathbb{R}^n)$ is defined as an inductive limit of the space $(\mathcal{S}_a)^{\alpha}_{\beta}(\mathbb{R}^n)$:
$$ \mathcal{S}^\alpha_\beta(\mathbb{R}^n)=\varinjlim_{{a\rightarrow0}}(\mathcal{S}_a)^{\alpha}_{\beta} (\mathbb{R}^n).$$
This is a $DFS$ (dual of Fr\'echet-Schwartz) nuclear space. Its strong dual is called the Gelfand-Shilov space of ultradistributions.
The space $\mathcal{S}^\alpha_\beta(\mathbb{R}^n)$ is nontrivial if and only if $\alpha+\beta\geq{1}$. The Fourier transform is a topological isomorphism between $\mathcal{S}^\alpha_\beta(\mathbb{R}^n)$ and $\mathcal{S}^\beta_\alpha(\mathbb{R}^n)$, which extends to a continuous linear transform from $\mathcal{S}^{\prime\alpha}_{\beta}(\mathbb{R}^n)$ onto $\mathcal{S}^{\prime\beta}_{\alpha}(\mathbb{R}^n)$. If $\alpha=\beta$, the space $\mathcal{S}^\alpha_\alpha(\mathbb{R}^n)$ is denoted by $\mathcal{S}^{\alpha}(\mathbb{R}^n)$.
\begin{remark}\label{re1}We will often use equivalent families of norms in which $t^p \varphi^{(q)}(t)$ in \eqref{ngs} is replaced by $(t^p\varphi)^{(q)}$, $t^q \hat{\varphi}^{(p)}$ or $(t^q\hat\varphi)^{(p)}$ (see \cite{CKP}, \cite{Pil} and \cite{Pil1}).
\end{remark}
\subsubsection{Ultradifferential operators}\label{ultrap}
A
formal expression $P(D)=\sum_{p\in\mathbb N_0^n}
a_p D^p$ ($a_p\in\mathbb{R}$), corresponds to the
ultrapolynomial $P(\xi)=\sum_{p\in\mathbb N_0^n} a_p
\xi^{p}$ ($\xi\in\mathbb{R}^n$), \cite{Kom73}. It is called an
\emph{ultradifferential operator
of the Roumieu type} $\alpha$
if for every $a>0$ there exists a constant $C=C(a)>0$ such that the coefficients $a_p$ satisfy the estimate
\begin{equation}\label{RP}
|a_p|\, \leq\,
\frac{C a^{|p|}}{p!^{\alpha}}, \quad \forall p\in\mathbb N_0^n.
\end{equation}
Under \eqref{RP}, $P(\xi)$ is a $C^\infty$ function on $\mathbb{R}^n$.
We will use the following representation theorem for elements of
$\mathcal{S}^{\prime\alpha}_\beta(\mathbb{R}^n)$:
for any $f\in\mathcal{S}^{\prime \alpha}_\beta(\mathbb{R}^n)$ there exist an ultradifferential operator $P_1(D)$ of Roumieu type $\alpha$, an ultrapolynomial $P_2(x)$ of Roumieu type $\beta$ and an $F\in L^2(\mathbb{R}^n)$ such that
\begin{equation}\label{rep11}
f(x)=P_1(D)(P_2(x)F(x)).
\end{equation}
We note that $\phi\mapsto P_1(D)\phi$ and $\phi\mapsto P_2(x)\phi$ are continuous mappings of $\mathcal S^\alpha_\beta(\mathbb{R}^n)$ into itself.
We will consider elliptic operators of this type. For them one has that the function $P(\xi)$
satisfies (\cite[Proposition 4.5]{Kom73})
\begin{equation}\label{ulpobound}
\left( \forall a>0\right)\left(
\exists C>0 \right)\;\, \forall \xi\in \mathbb{R}^n \quad |P(\xi )|\leq Ce^{a|\xi|^{1/\alpha}}.
\end{equation}
We point out that $P(D)$ defines a continuous mapping on $\mathcal{S}^{\alpha}_{\beta}(\mathbb{R}^n)$. Moreover,
$$P(D)\varphi=\lim_{n\rightarrow \infty}\sum_{|p|<n}a_p D^p\varphi \ \mbox{ in } \ \mathcal{S}^{\prime\alpha}_{\beta}(\mathbb{R}^n)\ \mbox{ for every } \ \varphi\in\mathcal S^{\alpha}_\beta(\mathbb{R}^n).$$
\subsection{STFT and the synthesis operator on $L^2$} Let $g\in L^2(\mathbb{R}^n)\setminus\{0\}$ be a window function. Then the STFT is defined by
\begin{equation*}\label{stft}
V_gf(y,\xi)=\int_{\mathbb{R}^n}f(t)\overline{g(t-y)}e^{-2\pi i \xi\cdot t}dt,\quad y,\xi\in\mathbb{R}^n, f\in L^2(\mathbb{R}^n).
\end{equation*}
The synthesis operator $V^{*}_g$ is defined on $L^2(\mathbb{R}^{2n})$ by
\[ V^{*}_gF(t)=\int\int_{\mathbb{R}^{2n}}F(y,\xi)g_{y,\xi}(t) dyd\xi, \ \ t\in\mathbb{R}^n\]
where $g_{y,\xi}(t)={g(t-y)}e^{2\pi i \xi\cdot t}$. Let $\varphi\in L^2(\mathbb{R}^n)$ be a synthesis window for $g$ ($(g,\varphi)\neq 0$). Then for any $f\in L^2(\mathbb{R}^n)$
\begin{equation}\label{as}
f(t)=\frac{1}{(g,\varphi)}\int \int_{\mathbb{R}^{2n}}V_gf(y,\xi)\varphi_{y,\xi}(t) dyd\xi,
\end{equation}
where $\varphi_{y,\xi}(t)=\varphi(t-y)e^{2\pi i \xi\cdot t}$. It is well-known that if $g\in \mathcal{S}^\alpha_\beta(\mathbb{R}^n)\setminus\{0\}$ is a fixed window, then $V_g:\mathcal{S}^\alpha_\beta(\mathbb{R}^n)\rightarrow \mathcal{S}^{\alpha}_{\beta}(\mathbb{R}^{2n})$
is a continuous mapping. Moreover, for $f\in\mathcal{S}^\alpha_\beta(\mathbb{R}^n)$, equation \eqref{as} holds pointwise (see \cite{T1}).
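The inversion formula \eqref{as} can be checked numerically. The following one-dimensional sketch (our illustration, not part of the paper) evaluates $V_gf$ for a Gaussian signal and window (with $\varphi=g$) and reconstructs $f$ by Riemann sums on truncated grids; all grid parameters are ad hoc choices.

```python
import numpy as np

# grids: the Gaussians used below are negligible outside [-6, 6]
d = 0.1
t = np.arange(-6, 6 + d/2, d)   # signal grid
y = t.copy()                    # window-shift grid
xi = t.copy()                   # frequency grid

f = np.exp(-np.pi * t**2)                  # signal
g = lambda s: np.exp(-np.pi * s**2)        # window; synthesis window phi = g

# V_g f(y_j, xi_m) = int f(t) conj(g(t - y_j)) e^{-2 pi i xi_m t} dt
G = g(t[None, :] - y[:, None])             # (Ny, Nt); g is real here
E = np.exp(-2j * np.pi * np.outer(t, xi))  # (Nt, Nxi)
V = (f[None, :] * G) @ E * d               # (Ny, Nxi)

# reconstruction: f(t_i) = (g,g)^{-1} iint V(y,xi) g(t_i - y) e^{2 pi i xi t_i} dy dxi
gg = np.sum(g(t)**2) * d                   # (g, g)
Gi = g(t[:, None] - y[None, :])            # (Nt, Ny): g(t_i - y_j)
Ei = np.exp(2j * np.pi * np.outer(t, xi))  # (Nt, Nxi)
f_rec = np.einsum('jm,ij,im->i', V, Gi, Ei) * d * d / gg

err = np.max(np.abs(f_rec - f))
print(err)  # tiny truncation/discretization error
```

Since the integrands decay like Gaussians in all variables, the Riemann sums converge spectrally and the reconstruction error is far below the grid spacing.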
\section{$k$-directional STFT and the $k$-directional synthesis operator}\label{se2}
We define the $k$-DSTFT and the $k$-directional synthesis operator (DSO) for a fixed direction, over
$\mathcal{S}^\alpha_{\beta}(\mathbb{R}^n)$ and its dual.
\begin{definition}\label{analsyntopp} Let $\bold u^k=(u_1,\ldots,u_k),$ where $u_i, i=1,\ldots,k$ are independent vectors of $\mathbb S^{n-1}$. Let $\tilde y=(y_1,\ldots,y_k)\in\mathbb R^k$ and $g\in \mathcal{S}^\alpha_{\beta}(\mathbb{R}^k)\setminus\{0\}$. The $k$-DSTFT of $f\in L^2(\mathbb{R}^n)$ is defined by
\begin{equation}\label{dd2}
DS_{g,\bold u^k} f(\tilde y,\xi): = \int_{\mathbb R^n} f(t)\overline{g((u_1\cdot t,...,u_k\cdot t)-(y_1,...,y_k))} e^{-2\pi it\cdot \xi}dt
\end{equation}
and the $k$-DSO of $F\in L^2(\mathbb{R}^{k}\times\mathbb{R}^n)$ is defined by
\begin{equation}\label{ds}
DS^\ast_{g,\bold u^k} F(t)=\int_{\mathbb{R}^n}\int_{\mathbb{R}^k}F(\tilde y,\xi){g_{\bold u^k, \tilde y,\xi}}(t)d\tilde yd\xi, \quad t\in\mathbb{R}^n,
\end{equation}
where $g_{\bold u^k,\tilde y,\xi}(t)=g((u_1\cdot t,...,u_k\cdot t)-(y_1,\cdots,y_k))e^{2\pi i\xi\cdot t}, t\in\mathbb{R}^n.$
\end{definition}
Let $\varphi\in\mathcal{S}^\alpha_\beta(\mathbb{R}^k)$ be a synthesis window for $g\in \mathcal{S}^\alpha_\beta(\mathbb{R}^k)\setminus\{0\}$, which means $(g,\varphi)_{L^2(\mathbb{R}^k)}\neq 0$.
We will show in Proposition \ref{propg} that for $f\in \mathcal{S}^\alpha_\beta(\mathbb{R}^n)$ the following reconstruction formula holds pointwise:
\begin{equation}\label{1rfdstft}
f(t)=\frac{1}{(g,\varphi)}\int_{\mathbb{R}^n}\int_{\mathbb{R}^k}DS_{g,\bold u^k}f(\tilde y,\xi)\varphi_{\bold u^k,\tilde y,\xi}(t) d\tilde yd\xi,
\end{equation}
where $\varphi_{\bold u^k,\tilde y,\xi}(t)=\varphi((u_1\cdot t,...,u_k\cdot t)-(y_1,\ldots, y_k))e^{2\pi i\xi\cdot t}, \ t\in\mathbb{R}^n.$
Relation \eqref{1rfdstft} takes the form
\[ (DS^\ast_{\varphi,\bold u^k}\circ DS_{g,\bold u^k})f=(g,\varphi)f.\]
\subsection{Coordinate transformation}
Let $A_{k,n}=[u_{i,j}]$ be the $k\times n$ matrix with rows $u_i, i=1,...,k$, and let $I_{n-k,n-k}$ be the $(n-k)\times(n-k)$ identity matrix. Let $B$ be the $n\times n$ matrix
determined by $A_{k,n}$ and $I_{n-k,n-k}$ so that $Bt=s$, where $$s_1=u_{1,1}t_1+\cdots+u_{1,n}t_n,\; ... ,\; s_k=u_{k,1}t_1+\cdots+u_{k,n}t_n,$$
$s_{k+1}=t_{k+1}, ..., s_n=t_n$. Clearly, this matrix is invertible (regular). Put $C=B^{-1}$ and $\bold e^k=(e_1,...,e_k),$ where $e_1=(1,0,...,0),..., e_k=(0,...,1)$ are
unit vectors of the coordinate system of $\mathbb R^k$. Then, with the change of variables
$t=Cs$, and $\eta=C^T\xi$ ($C^T$ is the transposed matrix for $C$), one obtains, for $f\in L^2(\mathbb R^n),$
that (\ref{dd2}) is transformed into:
\begin{equation}\label{dd22}
DS_{g,\bold u^k}f(\tilde y,\xi)=(DS_{g,\bold e^k}h(s))(\tilde y,\eta)=\int_{\mathbb R^n}h(s)\overline{g(\tilde s-\tilde y)}e^{-2\pi i s\cdot\eta}ds,
\end{equation}
where $h(s)=|C|f(Cs)$, $|C|$ is the determinant of $C$, and \eqref{ds} is transformed, for $F\in L^2(\mathbb R^{k}\times\mathbb{R}^n)$, into:
\begin{equation}\label{2ds}
DS^\ast_{g,\bold e^k} F(s)=\int_{\mathbb{R}^n}\int_{\mathbb{R}^k}F(\tilde y,\eta){g(\tilde s-\tilde y)} e^{2\pi is\cdot\eta}d\tilde yd\eta, \quad s\in\mathbb{R}^n.
\end{equation}
\begin{remark}
1. Let $f\in\mathcal{S}^{\alpha}_{\beta}(\mathbb R^n).$ Then
$h(s)=|C|f(Cs)\in\mathcal{S}^{\alpha}_{\beta}(\mathbb R^n).$
2. If $g(s_1,...,s_k)=g_1(s_1)\cdots g_k(s_k)\in (\mathcal{S}^{\alpha}_{\beta}(\mathbb R))^k, (\mathcal{S}^{\alpha}_{\beta}(\mathbb R))^k=\mathcal{S}^{\alpha}_{\beta}(\mathbb R)\times\ldots\times \mathcal{S}^{\alpha}_{\beta}(\mathbb R)$, then
\begin{equation*}\label{1dd2}
DS_{g,\bold u^k} f(\tilde y,\xi): = \int_{\mathbb R^n} f(t)\overline{g_1(u_1\cdot t-y_1)}\cdots \overline{g_k(u_k\cdot t-y_k)} e^{-2\pi it\cdot \xi}dt=
\end{equation*}
$$\int_{\mathbb R^n} h(s)\overline{g_1(s_1-y_1)}\cdots \overline{g_k(s_k-y_k)} e^{-2\pi is\cdot \eta}ds,
$$
and we call it the partial short-time Fourier transform.
\end{remark}
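The coordinate transformation \eqref{dd22} can be verified numerically. Below is a sketch (our illustration, not part of the paper) for $n=2$, $k=1$, with Gaussian $f$ and $g$; the direction $u$, the point $(y_0,\xi)$ and the grids are arbitrary choices.

```python
import numpy as np

theta = np.pi / 6
u = np.array([np.cos(theta), np.sin(theta)])  # direction u in S^1 (k = 1, n = 2)
xi = np.array([0.4, -0.3])                    # frequency point
y0 = 0.5                                      # shift along u

B = np.array([[u[0], u[1]], [0.0, 1.0]])      # s = B t
C = np.linalg.inv(B)                          # t = C s
eta = C.T @ xi                                # transformed frequency
detC = abs(np.linalg.det(C))

d = 0.1
x = np.arange(-8, 8 + d/2, d)
X1, X2 = np.meshgrid(x, x, indexing='ij')

f = lambda t1, t2: np.exp(-np.pi * (t1**2 + t2**2))  # signal on R^2
g = lambda s: np.exp(-np.pi * s**2)                  # real window on R

# left-hand side: DS_{g,u} f(y0, xi)
lhs = np.sum(f(X1, X2) * g(u[0]*X1 + u[1]*X2 - y0)
             * np.exp(-2j*np.pi*(X1*xi[0] + X2*xi[1]))) * d * d

# right-hand side: DS_{g,e} h(y0, eta) with h(s) = |det C| f(Cs)
T1 = C[0, 0]*X1 + C[0, 1]*X2
T2 = C[1, 0]*X1 + C[1, 1]*X2
rhs = np.sum(detC * f(T1, T2) * g(X1 - y0)
             * np.exp(-2j*np.pi*(X1*eta[0] + X2*eta[1]))) * d * d

print(abs(lhs - rhs))  # ~0: the two parametrizations agree
```

Both sides are Riemann sums of the same integral in different coordinates, so they agree up to a negligible discretization error.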
\subsection{Continuity properties}
\begin{theorem}\label{ridir}
The map
$$(h,g)\mapsto H=DS_{g,\bold e^k}h, \qquad H(\tilde y,\eta)=
\int_{\mathbb R^n}h(s)\overline{g(\tilde s-\tilde y)}e^{-2\pi i s\cdot\eta}ds,$$
is a continuous bilinear mapping
$$\mathcal{S}^{\alpha}_{\beta}(\mathbb R^n)\times \mathcal{S}^{\alpha}_{\beta}(\mathbb R^k)\rightarrow
{\mathcal{S}}^{\alpha}_{\beta}(\mathbb{R}^{k+n}).$$
\end{theorem}
\begin{proof}
Let $v,p\in\mathbb N_0^n, \tilde w,\tilde \gamma\in\mathbb N_0^k$,
$\eta\in\mathbb R^n, \tilde y\in\mathbb R^k$. Using \eqref{dd22}, we have
\begin{align*}
J&=\eta^v\tilde y^{\tilde w}\partial_\eta^p\partial_{\tilde y}^{\tilde \gamma} DS_{g,\bold e^k}h(\tilde y,\eta)\\
&=\eta^v\tilde y^{\tilde w}(-2\pi i)^{|p|}\int_{\mathbb{R}^n}(-1)^{|\tilde\gamma|}s^p h(s)\overline{g^{(\tilde \gamma)}(\tilde s-\tilde y)}e^{-2\pi is\cdot\eta}ds\\
&=\eta^v(-2\pi i)^{|p|}\int_{\mathbb{R}^n}(-1)^{|\tilde \gamma|}s^p h(s)\tilde y^{\tilde w}\overline{g^{(\tilde \gamma)}(\tilde s-\tilde y)}e^{-2\pi is\cdot\eta}ds
\\
&=(-2\pi i)^{|p|-|v|}(-1)^{|\tilde \gamma|}
\int_{\mathbb{R}^n}\frac{\partial^{v}}{\partial s^v}\left(s^p h(s)\tilde y^{\tilde w}\overline{g^{(\tilde \gamma)}(\tilde s-\tilde y)}\right)e^{-2\pi is\cdot\eta}ds\\
&=(-2\pi i)^{|p|-|v|}(-1)^{|\tilde \gamma|}\sum_{\tilde j\leq \tilde v}{{v}\choose{\tilde j}}
\int_{\mathbb{R}^n}
\frac{\partial^{v-\tilde j}}{\partial s^{v-\tilde j}}
(s^p h(s))(\tilde y^{\tilde w}\overline{g^{(\tilde\gamma+\tilde j)}(\tilde s-\tilde y)})e^{-2\pi is\cdot\eta}ds.
\end{align*}
Above, $v-\tilde j=(v_1-j_1,...,v_k-j_k,\tilde{\tilde v})$. In the sequel we will use
\begin{equation*}
(v-\tilde j)!^{\alpha}\tilde j!^\alpha\leq v!^{\alpha},\;
(\tilde j+\tilde\gamma)!^\alpha\leq 2^{\alpha|\tilde j+\tilde \gamma|}\tilde j!^\alpha\tilde \gamma!^\alpha.
\end{equation*}
We split the integral as $\int_{\mathbb R^n}=\int_{|s|\leq1}+\int_{|s|>1}$, denote the corresponding expressions by $J=J_1+J_2$, and estimate the second one, $J_2$, inserting the factor $s^{(2,...,2)}$ into the numerator and denominator.
This ensures the convergence of the second integral and simplifies the estimates inside it.
We put
$J_2=J_{2,1}+J_{2,2}$ and $r_{p}^{v,\tilde \gamma}=(-2\pi i)^{|p|-|v|}(-1)^{|\tilde \gamma|}$. Then,
$$J_{2,1}
=r_{p}^{v,\tilde \gamma}\sum_{\tilde j\leq\tilde v}{{v}\choose{\tilde j}}\int_{|s|>1}
(s^{(2,...,2)}\frac{\partial^{v-\tilde j}}{\partial s^{v-\tilde j}}
(s^p h(s))\overline{g^{(\tilde\gamma+\tilde j)}(\tilde s-\tilde y)})
(\tilde y^{\tilde w}-\tilde s^{\tilde w})e^{-2\pi is\cdot\eta}
\frac{ds}{s^{(2,...,2)}}.
$$
$$J_{2,2}=r_{p}^{v,\tilde \gamma}\sum_{\tilde j\leq\tilde v}{{v}\choose{\tilde j}}\int_{|s|>1}
s^{\tilde w}s^{(2,...,2)}
\frac{\partial^{v-\tilde j}}{\partial s^{v-\tilde j}}
(s^p h(s))\overline{g^{(\tilde\gamma+\tilde j)}(\tilde s-\tilde y)}
e^{-2\pi is\cdot\eta}\frac{ds}{s^{(2,...,2)}}.
$$
The use of the positive constants $c_1, c_2, C_1, C_2$ below will be clear from the context.
With suitable $c_1, C_1$ we have
$$\frac{c_1^{|v|+|p|+|\tilde w|+{|\tilde \gamma|}}}{v!^\alpha p!^{\beta}\tilde \gamma!^\alpha \tilde w!^\beta}
|J_{2,1}|/|r_{p}^{v,\tilde \gamma}|\leq
$$
$$
\leq\frac{c_1^{|v|+|p|+|\tilde w|+{|\tilde \gamma|}}}{v!^\alpha p!^{\beta}\tilde \gamma!^\alpha \tilde w!^\beta}\left|\sum_{\tilde j\leq\tilde v}{{v}\choose{\tilde j}}\int_{|s|>1}
(s^{(2,...,2)}\frac{\partial^{v-\tilde j}}{\partial s^{v-\tilde j}}
(s^p h(s))\overline{g^{(\tilde\gamma+\tilde j)}(\tilde s-\tilde y)})
(\tilde y^{\tilde w}-\tilde s^{\tilde w})e^{-2\pi is\cdot\eta}
\frac{ds}{s^{(2,...,2)}}\right|
$$
$$
\leq\sum_{\tilde j\leq\tilde v}{{v}\choose{\tilde j}}\int_{|s|>1}\frac{c_1^{|p|+|v-\tilde j|}}{(v-\tilde{j})!^\alpha p!^\beta}|s^{(2,...,2)}
\frac{\partial^{v-\tilde j}}{\partial s^{v-\tilde j}}
(s^p h(s))|
\frac{c_1^{|\tilde w|+|\tilde\gamma+\tilde j|}|\tilde y-\tilde s|^{\tilde w}|\overline{g^{(\tilde\gamma+\tilde j)}(\tilde s-\tilde y)}|}{\tilde w!^\beta(\tilde \gamma+\tilde j)!^\alpha}\frac{ds}{s^{(2,...,2)}}\leq C_{1}.$$
Next, with suitable $c_2, C_2$,
$$\frac{c_2^{|v|+|p|+|\tilde w|+|{\tilde \gamma|}}}{v!^\alpha p!^{\beta}\tilde \gamma!^\alpha \tilde w!^\beta}
|J_{2,2}|/|r_{p}^{v,\tilde \gamma}|
$$
$$
\leq\frac{c_2^{|v|+|p|+|\tilde w|+|{\tilde \gamma|}}}{v!^\alpha p!^{\beta}\tilde \gamma!^\alpha \tilde w!^\beta}\left|\sum_{\tilde j\leq\tilde v}{{v}\choose{\tilde j}}\int_{|s|>1}
s^{\tilde w}s^{(2,...,2)}
\frac{\partial^{v-\tilde j}}{\partial s^{v-\tilde j}}
(s^p h(s))\overline{g^{(\tilde\gamma+\tilde j)}(\tilde s-\tilde y)}
e^{-2\pi is\cdot\eta}\frac{ds}{s^{(2,...,2)}}\right|
$$
$$
\leq\sum_{\tilde j\leq\tilde v}{{v}\choose{\tilde j}}\int_{|s|>1}
\frac{c_2^{|p|+|v-\tilde j|+|\tilde w|}}{(v-\tilde{j})!^\alpha p!^\beta\tilde w!^\beta}|s^{(2,...,2)}\tilde s^{\tilde w}
\frac{\partial^{v-\tilde j}}{\partial s^{v-\tilde j}}
(s^p h(s))|
\frac{c_2^{|\tilde \gamma+\tilde j|}|\overline{g^{(\tilde\gamma+\tilde j)}(\tilde s-\tilde y)}|}{(\tilde \gamma+\tilde j)!^\alpha}
\frac{ds}{s^{(2,...,2)}}\leq C_2.
$$
We use \eqref{ngs} and Remark \ref{re1}, that is
\begin{equation*}
\sup_{s\in\mathbb{R}^n, l,q\in\mathbb N_0^n, \tilde w\in\mathbb N_0^k}\frac{a^{|q|+|l|+|\tilde w|}|s^{(2,...,2)}\tilde s^{\tilde w}(s^l h(s))^{(q)}|}{q!^\alpha l!^\beta\tilde w!^\beta}<\infty, \mbox{ for some } a>0.
\end{equation*}
\noindent So, with $\sum_{\tilde j\leq \tilde v}{{v}\choose{\tilde j}}\leq 2^{|v|},$ and new constants $c$ and $C$ (which include the value $\int_{|s|>1}{ds}/s^{(2,...,2)}$), we have
$$|J_{2}|\leq C \sigma_{c}^{\alpha,\beta}(h)\sigma_{c}^{\alpha,\beta}(g).
$$
The estimate on $J_1$ can be done in an analogous fashion, and this completes the proof of the theorem.
\end{proof}
\begin{proposition}\label{propg}
Let $f\in \mathcal{S}_\beta^\alpha(\mathbb{R}^n)$ and let $g, \varphi\in \mathcal{S}^\alpha_\beta(\mathbb{R}^k)$, where $\varphi$ is a synthesis window for $g$. Then the reconstruction formula \eqref{1rfdstft} holds pointwise.
\end{proposition}
\begin{proof} Indeed, using the Parseval identity for given $f_1,f_2\in L^2(\mathbb R^n)$ and $g,\varphi\in \mathcal{S}_\beta^\alpha(\mathbb{R}^k)$, and the change of variables from the representation (\ref{dd22}), that is, $h_i(\cdot)=|C|f_i(C\cdot)$, $i=1,2$, we have
\begin{eqnarray}\label{ddd12}&&
(DS_{g,\bold{u}^k}f_1(\tilde y,\xi),DS_{\varphi,\bold {u}^k}f_2(\tilde y,\xi))_{L^2(\mathbb R^k \times \mathbb R^n)}
\nonumber\\
&=&(DS_{g,\bold{e}^k} h_1(\tilde y,\eta),DS_{\varphi,\bold{e}^k}h_2(\tilde y,\eta))_{L^2(\mathbb R^k \times \mathbb R^n)}\nonumber\\
&=&( h_1,h_2)_{L^2(\mathbb R^n)}( \overline{g},\overline{\varphi})_{L^2(\mathbb R^k)}.
\end{eqnarray}
We obtain the reconstruction formula
(\ref{1rfdstft}) as a consequence of (\ref{ddd12}), as in \cite[Theorem 3.2.1 and Corollary 3.2.3]{Gr}.
\end{proof}
Now we will consider the continuity properties of \eqref{2ds}.
We fix $g\in\mathcal S^\alpha_\beta(\mathbb{R}^k)$.
\begin{theorem}\label{synthcont}
The map
$$H\mapsto h=DS^{\ast}_{g,\bold e^k}H,$$
where $h(s)=DS^\ast_{g,\bold e^k}H(s)$, $s\in\mathbb{R}^n$, is given by (\ref{2ds}), is a continuous linear mapping
$$\mathcal{S}^{\alpha}_{\beta}(\mathbb{R}^k\times\mathbb R^n)\rightarrow
{\mathcal{S}}^{\alpha}_{\beta}(\mathbb{R}^{n}).$$
\end{theorem}
\begin{proof}
We will estimate $
\dfrac{c^{|p+v|}}{p!^\alpha v!^\beta}s^v\partial^p_sh(s)$.
There holds
\begin{align*}
|s^v\partial^p_sh(s)|
&=|s^v\sum_{\tilde j\leq \tilde p}{{ p}\choose{\tilde j}}(2\pi i)^{|p-\tilde j|}\int_{\mathbb{R}^n}\int_{\mathbb{R}^k}H(\tilde y,\eta)
\eta^{p-\tilde j}{g^{(\tilde j)}(\tilde s-\tilde y)} e^{2\pi i s\cdot\eta}d\tilde{y}d\eta|\\
&=|\sum_{\tilde j\leq \tilde p}{{ p}\choose{\tilde j}}(2\pi i)^{|p-\tilde j|-|v|}\int_{\mathbb{R}^n}\int_{\mathbb{R}^k}
\partial_\eta^v(H(\tilde y,\eta)
\eta^{p-\tilde j}){g^{(\tilde j)}(\tilde s-\tilde y)}e^{2\pi {i}s\cdot\eta}d\tilde{y}d\eta|.
\end{align*}
Now it is easy to finish the proof.
\end{proof}
\begin{remark}\label{vind}
This proof shows that one can impose less restrictive conditions on $g$, since we only differentiate $g$. For example, it suffices to assume that $g\in \mathcal S^\alpha_0(\mathbb{R}^k)$.
\end{remark}
\begin{corollary}
$DS^\ast_{g,\bold e^k}(H)(s)=h(s)$, $s\in\mathbb{R}^n$,
defines a continuous bilinear mapping
$$\mathcal{S}^{\alpha}_{\beta}(\mathbb{R}^k\times\mathbb R^n)\times\mathcal{S}^\alpha_0(\mathbb{R}^k)\rightarrow
{\mathcal{S}}^{\alpha}_{\beta}(\mathbb{R}^{n}),$$
$$(H,g)\mapsto h=DS^{\ast}_{g,\bold e^k}H. $$
\end{corollary}
\subsection{$k$-DSTFT and $k$-DSO on $\mathcal{S}^{\prime\alpha}_\beta$}\label{temperedultra}
Let $\bold u^k=(u_1,\ldots,u_k),$ where $u_i, i=1,\ldots,k$ are independent vectors of $\mathbb S^{n-1}$, and $g\in\mathcal{S}^{\alpha}_\beta(\mathbb{R}^k)$. The continuity results allow us to define $k$-DSTFT of $f\in \mathcal{S}^{\prime\alpha}_\beta(\mathbb{R}^n)$ as an element $DS_{g,\bold u^k}f\in \mathcal{S}^{\prime\alpha}_{\beta}(\mathbb{R}^{k}\times\mathbb{R}^n)$ whose action on test functions is given as a transposed mapping
$$\langle DS_{g,\bold u^k} f,\Phi\rangle=\langle f, DS^\ast_{\overline g, \bold u^k}\Phi\rangle, \qquad \Phi\in \mathcal{S}^{\alpha}_{\beta}(\mathbb{R}^{k}\times\mathbb{R}^n).$$
We use the notation $\mathbb{R}^{k}\times\mathbb{R}^n=\mathbb{R}^{k+n}$ just to emphasize the domain of the above mapping.
Since $g\in\mathcal{S}^{\alpha}_\beta(\mathbb{R}^k)$, one can also define
the $k$-DSTFT of an $f\in\mathcal S^{\prime\alpha}_\beta(\mathbb{R}^n)$ as
$$DS_{g,\bold u^k}f(\tilde y,\xi)=\langle f(t),\overline{g((t\cdot u_1,...,t\cdot u_k)-\tilde y)}e^{-2\pi i t\cdot\xi}\rangle, \, \tilde y\in \mathbb{R}^k, \, \xi\in\mathbb{R}^n.
$$
This is a direct method of the definition of an integral transform.
We have
\begin{proposition}\label{nova1}
The two definitions of the $k$-DSTFT of an $f\in\mathcal S^{\prime\alpha}_\beta(\mathbb{R}^n)$ coincide.
\end{proposition}
\begin{proof}
One has to use the representation formula (\ref{rep11}), the continuity of
$P_1(-D)$ and of multiplication by $P_2(x)$ on $\mathcal S^\alpha_\beta(\mathbb{R}^n)$, and the Fubini theorem.
\end{proof}
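As a simple illustration of the direct definition, consider $f=\delta_{t_0}$, $t_0\in\mathbb{R}^n$. Then
$$DS_{g,\bold u^k}\delta_{t_0}(\tilde y,\xi)=\overline{g((t_0\cdot u_1,\ldots,t_0\cdot u_k)-\tilde y)}\,e^{-2\pi i t_0\cdot\xi}, \quad \tilde y\in\mathbb{R}^k,\ \xi\in\mathbb{R}^n,$$
so the transform is concentrated, in the variable $\tilde y$, near the point $(t_0\cdot u_1,\ldots,t_0\cdot u_k)$, while it exhibits no decay in $\xi$, in accordance with the singularity of $\delta_{t_0}$ at $t_0$.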
Next, the $k$-DSO $DS^*_{g,\bold u^k}:\mathcal{S}^{\prime\alpha}_{\beta}(\mathbb{R}^{k}\times\mathbb{R}^n)\rightarrow \mathcal{S}^{\prime\alpha}_{\beta}(\mathbb{R}^n)$ can be defined as
\[ \langle DS^*_{g,\bold u^k} F, \varphi\rangle=\langle F, DS_{\overline g, \bold u^k}\varphi\rangle, \quad F\in\mathcal{S}^{\prime\alpha}_{\beta}(\mathbb{R}^{k}\times\mathbb{R}^n), \varphi \in \mathcal{S}_\beta^\alpha(\mathbb{R}^n).\]
We repeat the arguments given above.
Let $F\in\mathcal S^{\prime\alpha}_\beta(\mathbb R^k\times\mathbb{R}^n)$ be of the form
$$F(\tilde y,\xi)=P_1(D_{\tilde y,\xi})(P_2(\tilde y,\xi)F_0(\tilde y,\xi)) \mbox{ (cf.\ \eqref{rep11})},
$$
where $P_1(D_{\tilde y,\xi})$ is an ultradifferential operator over $\mathbb{R}^{k+n}$ of Roumieu class $\alpha$ and $P_2(\tilde y,\xi)$ is an ultrapolynomial of Roumieu class $\beta$.
Again, we define $ DS^*_{g,\bold u^k}F$ by a direct method
$$ DS^*_{g,\bold u^k}F(t)=\langle F,g((u_1\cdot t,...,u_k\cdot t)-\tilde y)e^{2\pi i\xi\cdot t}\rangle, \quad t\in\mathbb{R}^n.
$$
We have
\begin{proposition}\label{nova2}
The two definitions of the $k$-DSO
$DS^*_{g,\bold u^k}f$
of an $f\in\mathcal S^{\prime\alpha}_\beta(\mathbb{R}^k\times\mathbb{R}^n)$ coincide.
\end{proposition}
We immediately obtain:
\begin{proposition} \label{pr1} Let $g\in\mathcal{S}^{\alpha}_0(\mathbb{R}^k)$. The $k$-directional short-time Fourier transform, $DS_{g,\bold u^k}:\mathcal{S}^{\prime\alpha}_\beta(\mathbb{R}^n)\to \mathcal{S}^{\prime\alpha}_\beta(\mathbb{R}^k\times\mathbb{R}^n)$ and the synthesis operator $DS^{*}_{g,\bold u^k}:$ $ \mathcal{S}^{\prime\alpha}_\beta(\mathbb{R}^k\times\mathbb{R}^n)\to\mathcal{S}^{\prime\alpha}_\beta(\mathbb{R}^n)$ are continuous linear maps. \end{proposition}
The following theorem connects the $k$-DSTFTs with respect to different windows.
\begin{theorem}\label{d444} Let $\bold u^k=(u_1,\ldots,u_k),$ where $u_i, i=1,\ldots,k$ are independent vectors of $\mathbb S^{n-1}$.
Let $\varphi, g, \gamma\in\mathcal S^\alpha_0(\mathbb R^k)$, where $\gamma$ is a synthesis
window for $g$, and let $f\in\mathcal S^{\prime\alpha}_\beta(\mathbb R^n)$. Then
$$DS_{\varphi,\bold u^k}f(\tilde x,\eta)=(DS_{g,\bold u^k}f(\tilde s,\zeta))*(DS_{\varphi,\bold u^k}\gamma(\tilde s,\zeta))(\tilde x,\eta), \quad \tilde x,\tilde s\in\mathbb{R}^k, \ \eta, \zeta\in\mathbb{R}^n.
$$
\end{theorem}
\proof We follow the proof in \cite{APS}. By \eqref{dd22}, it is enough to prove the assertion for $\bold e^k$. Let $F\in\mathcal{S}^{\prime\alpha}_{\beta}(\mathbb R^{k}\times\mathbb{R}^n)$. By continuity, we may assume that $F\in L^2(\mathbb{R}^k\times\mathbb{R}^n).$
Then
\begin{eqnarray*} DS_{\varphi,\bold e^k}(DS_{\gamma,\bold e^k}^{*}F)(\tilde x,\eta) &=& \int_{\mathbb R^n}\big(\int_{\mathbb R
^{n}}\int_{\mathbb R^k}F(\tilde y, \xi)
\gamma(\tilde t-\tilde y) e^{2\pi i\xi \cdot t}d\tilde y d\xi\big)
\overline {\varphi(\tilde t-\tilde x)} e^{-2\pi it\cdot\eta}dt
\\&=& \int_{\mathbb R^n}\int_{\mathbb R
^{k}}(\int_{\mathbb R^n}\gamma(\tilde t)
\overline{\varphi(\tilde t-(\tilde x-\tilde y))}
e^{-2\pi it\cdot(\eta-\xi)}dt)F(\tilde y,\xi)d\tilde y d\xi\\&=&\int_{\mathbb R^n}\int_{\mathbb R^{k}}F(\tilde y,\xi)DS_{\varphi,\bold e^k}\gamma(\tilde x-\tilde y,\eta-\xi)d\tilde yd\xi.
\end{eqnarray*}
Now, we put $F=DS_{g,\bold e^k}f$ and obtain
\begin{equation}\label{dd333}
DS_{\varphi,\bold e^k}f(\tilde x,\eta)=(DS_{g,\bold e^k}f(\tilde s,\zeta))*(DS_{\varphi,\bold e^k}\gamma(\tilde s,\zeta))(\tilde x,\eta).
\end{equation}
\qed
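Results of this type go back to the classical change-of-window identity for the STFT. As a numerical sketch (our illustration, not taken from the paper), the following checks, for $k=n=1$ and real Gaussian windows, the pointwise identity
$(g,\gamma)\,V_\varphi f(x,\eta)=\iint V_gf(y,\xi)\,e^{-2\pi i(\eta-\xi)y}\,V_\varphi\gamma(x-y,\eta-\xi)\,dy\,d\xi$,
in which the inner-product normalization and the phase factor are written out explicitly; taking moduli yields a convolution estimate of the kind used for the wave front analysis below. All numerical parameters are ad hoc choices.

```python
import numpy as np

d = 0.1
t = np.arange(-6, 6 + d/2, d)
y = t.copy()
xi = t.copy()
x0, eta0 = 0.3, 0.5                            # evaluation point (x, eta)

f = np.exp(-np.pi * t**2)                      # signal
gf = lambda s: np.exp(-np.pi * s**2)           # analysis window g
gam = np.exp(-2 * np.pi * t**2)                # synthesis window gamma for g
phf = lambda s: np.exp(-np.pi * s**2 / 2)      # second analysis window phi

def stft(sig, win, ys, xis):
    """V_win sig(y_j, xi_m) by Riemann sums; windows are real here."""
    W = win(t[None, :] - ys[:, None])            # (Ny, Nt)
    E = np.exp(-2j * np.pi * np.outer(t, xis))   # (Nt, Nxi)
    return (sig[None, :] * W) @ E * d

Vgf = stft(f, gf, y, xi)                       # V_g f on the (y, xi) grid
Vpg = stft(gam, phf, x0 - y, eta0 - xi)        # V_phi gamma at (x0-y_j, eta0-xi_m)
twist = np.exp(-2j * np.pi * np.outer(y, eta0 - xi))  # phase e^{-2 pi i y (eta0-xi)}

rhs = np.sum(Vgf * twist * Vpg) * d * d        # twisted convolution at (x0, eta0)
c = np.sum(gf(t) * gam) * d                    # (g, gamma), real here
lhs = c * stft(f, phf, np.array([x0]), np.array([eta0]))[0, 0]

print(abs(lhs - rhs))  # ~0 up to truncation/discretization error
```

All integrands decay like Gaussians, so the truncated Riemann sums reproduce the identity essentially to machine precision.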
\section{Directional wave fronts}\label{se3}
In order to detect singularities determined by the hyperplanes orthogonal to the vectors $ u_1,..., u_k$ using the $k$-DSTFT, we introduce $k$-directional regular sets and wave front sets for GS ultradistributions. Theorem \ref{d444} guarantees that the wave front set does not depend on the chosen window. Again, we simplify our exposition by the use of \eqref{dd22}
and transfer the STFT in $\bold u^k$ direction to STFT in $\bold e^k$ direction.
Let $k=1$ and $y_0=y_{0,1}\in\mathbb R$. Put
$
\Pi_{e_1,y_0,\varepsilon}=\{t\in \mathbb R^n: |t_1- y_0|<\varepsilon\}.
$
It is the region of $\mathbb R^n$ between two hyperplanes orthogonal to $e_1$,
$$
\Pi_{e_1,y_0,\varepsilon}= \bigcup_{y\in (y_0-\varepsilon,y_0+\varepsilon)} P_{e_1,y},
\;\; \; (y_0=(y_0, 0,\ldots,0), y=(y,0,...,0)),$$
and
$P_{e_1,y}$ denotes the hyperplane orthogonal to $e_1$ passing through $y$.
We keep the notation of Section \ref{se2}.
Put
$$\Pi_{\bold e^k,\tilde y,\varepsilon}=\Pi_{e_1,y_1,\varepsilon}\cap\ldots\cap\Pi_{e_k,y_k,\varepsilon}, \quad \Pi_{\bold e^k,\tilde y}=P_{e_1,y_1}\cap\ldots
\cap P_{e_k,y_k}.
$$
The projection of the first set onto $\mathbb R^k$ is a parallelepiped, so in $\mathbb R^n$ the set is determined by $2k$ finite edges, while the remaining edges are infinite. The set
$\Pi_{\bold e^k,\tilde y}$ equals $\mathbb R^{n-k}$ translated by the vectors
$\vec y_1,\ldots,\vec y_k.$ We will call it an $(n-k)$-dimensional element of
$\mathbb R^n$ and denote it by $P_{\bold e^k,\tilde y}\subset\mathbb R^n.$
If $k=n$, then this is just the point $y=(y_1,\ldots,y_n).$
In the sequel, $\alpha>1.$
\begin{definition}\label{wp}
Let $f\in \mathcal S^{\prime\alpha}(\mathbb R^n)$. It is said that $f$ is $k$-directionally microlocally regular at $(P_{\bold e^k,\tilde y_0},\xi_0)\in
\mathbb R^n\times (\mathbb R^n\setminus \{0\})$, that is, at every point of the form $((\widetilde y_0, \cdot),\xi_0)$ ($\cdot$ denotes an arbitrary point of $\mathbb{R}^{n-k}$)
if there exist $g\in \mathcal D^{\alpha}(\mathbb R^k)$ with $g(\tilde 0)\neq 0$, a product of
open balls $L_r(\tilde y_0)=L_r(y_{0,1})\times...\times L_r(y_{0,k})\subset\mathbb R^k$, a cone $\Gamma_{\xi_0}$,
an $N\in\mathbb N$ and a $C_{N}>0$
such that
\begin{equation}\label{rhh}
\sup_{\tilde y \in L_r(\tilde y_0),\,\xi \in\Gamma_{\xi_0}}|DS_{g, \bold e^k}f((\tilde y,\cdot),\xi)|
=\sup_{\tilde y\in L_r(\tilde y_0),\,\xi \in\Gamma_{\xi_0}}|\mathcal F
(f(t)\overline{g(\tilde t-\tilde y)})(\xi)|\leq C_{N} e^{-N|\xi|^{1/\alpha}}.
\end{equation}
\end{definition}
Note that for $k=n$ our definition is the classical H\" ormander's definition of regularity, \cite[Section 8]{hor}.
\begin{remark}\label{wfud}
a) If $f$ is $k$--directionally microlocally regular at $(P_{\bold e^k,\tilde y_0},\xi_0)$, then there exists an open ball $ L_r(\tilde y_0)$ and an open cone $\Gamma\subset\Gamma _{\xi_0}$
so that $f$ is $k$--directionally microlocally regular at $(P_{\bold e^k,\tilde z_0},\theta_0)$ for any $\tilde z_0\in L_{r}(\tilde y_0)$ and $\theta_0 \in \Gamma.$ This implies that the union of all $k$--directionally microlocally regular points $(P_{\bold e^k,\tilde z_0},\theta_0)$, $((\tilde z_0,\cdot),\theta_0)\in (L_{r}(\tilde y_0)\times\mathbb R^{n-k})\times\Gamma$, is an open subset of $\mathbb R^n\times(\mathbb R^n\setminus\{0\})$.
b) Denote by $Pr_{k}$ the projection of $\mathbb R^n$ onto $\mathbb R^k$. Then, the $k$--directionally microlocally regular
point $(P_{\bold e^k,\tilde y_0},\xi_0)$, considered in $\mathbb R^n\times(\mathbb R^n\setminus\{0\})$ with respect to the first $k$ variables, equals $(Pr_k^{-1}\times I_\xi)(P_{\bold e^k,\tilde y_0},\xi_0)$ ($I_\xi $ is the identity matrix on $\mathbb R^n$).
As standard, the $k$-directional wave front is defined as the complement in
$\mathbb R^k\times(\mathbb R^n\setminus\{0\})$ of the set of all $k$--directionally microlocally regular points $(P_{\bold e^k,\tilde y_0},\xi_0)$; it is denoted by
$WF_{\bold e^k}(f).$
\end{remark}
\begin{proposition}
The set
$WF_{\bold e^k}(f)$
is closed in $\mathbb R^k\times(\mathbb R^n\setminus\{0\})$ (and $\mathbb R^n\times (\mathbb R^n\setminus \{0\})$).
\end{proposition}
$B_r(\tilde 0)$ denotes the closed ball in $\mathbb R^k$ with center at the origin and radius $r>0.$
The following theorem relates the sets of $k$--directionally microlocally regular points for two $k$-DSTFTs of GS ultradistributions.
\begin{theorem} \label{nwr} If (\ref{rhh}) holds for some $g\in\mathcal D^{\alpha}(\mathbb R^k)$, then it holds for every $h\in\mathcal D^{\alpha}(\mathbb R^k)$ with $h(\tilde 0)\neq 0$ supported by a ball
$B_\rho(\tilde 0)$, where $\rho\leq\rho_0$ and $\rho_0$ depends on $r$ in (\ref{rhh}).
\end{theorem}
\begin{proof}
We follow the idea of our proof in \cite{APS}. The compact supports of $g$ and $h$ simplify the integration. Moreover, by the structural theorem, we know that $f=P_0(D)F$, where $F$ is a continuous function of sub-exponential growth:
\begin{equation}\label{gr1}
(\forall a>0, \quad \exists C_a>0) \quad |F(x)|\leq C_ae^{a|x|^{1/\alpha}}
\end{equation}
and $P_0(D)$ satisfies (\ref{RP}) and \eqref{ulpobound}. So, we can use the technique of oscillatory integrals, transfer the differentiation from $f$ onto the other factors in the integral expressions and, from the beginning, assume that $f$ is a continuous function which satisfies \eqref{ulpobound}.
We use Theorem \ref{d444}, that is, the form (\ref{dd333}). Assume that (\ref{rhh}) holds. We repeat the construction of balls from \cite{APS}; without it, one cannot follow our new estimates.
So, $\gamma$ is chosen so that
$\mbox{ supp }\gamma\subset B_{\rho_1}(\tilde 0)$ and $\rho_1<r-r_0$.
Let $h\in\mathcal D^{\alpha}(\mathbb R^k)$ and $\mbox{ supp }h\subset B_{\rho}(\tilde 0)$.
Our aim is to find $\rho_0$ such that (\ref{rhh}) holds for $DS_{h,\bold e^k}f(\tilde x,\eta)$, with
$\tilde x\in B_{r_0}(\tilde y_0), \eta\in\Gamma_1\subset\subset \Gamma_{\xi_0},$
for $ \rho\leq \rho_0$
($\Gamma_1\subset\subset \Gamma_{\xi_0}$ implies that $\Gamma_1\cap \mathbb S^{n-1}$
is a compact subset of $\Gamma_{\xi_0}\cap \mathbb S^{n-1}$).
Next,
$$
|\tilde p|\leq \rho_1,\;\; |\tilde x-\tilde y_0|\leq r_0 \mbox{ and }\;\; |\tilde p-((\tilde x-\tilde y_0)-(\tilde y-\tilde y_0))|\leq \rho
$$
\begin{equation}\label{suporti}
\Rightarrow |\tilde y-\tilde y_0|\leq \rho+\rho_1+r_0.
\end{equation}
So, we choose $\rho_0$ such that
$\rho_0+\rho_1<r-r_0$
and
\begin{equation}\label{suporti2}\rho+\rho_1+r_0<r \;\mbox{ holds for }\;
\rho\leq\rho_0.
\end{equation}
Let $\Gamma_1\subset\subset \Gamma_{\xi_0}$.
Then, there exists $c\in (0,1)$ such that
\begin{equation}\label{gam}
\eta\in \Gamma_1, |\eta|>1 \mbox{ and } |\eta-\xi|\leq c|\eta|\Rightarrow \xi \in\Gamma_{\xi_0}; \;\;
|\eta-\xi|\leq c|\eta|\Rightarrow |\eta|\leq (1-c)^{-1}|\xi|.
\end{equation}
Let $\tilde x\in B_{r_0}(\tilde y_0), \eta \in\Gamma_1$.
Then
$$
|DS_{h,\bold e^k}f((\tilde x,\cdot),\eta)|
=\left|\int_{\mathbb R^k}\int_{\mathbb R^n}DS_{g,\bold e^k}f((\tilde y,\cdot),\xi)DS_{h,\bold e^k}\gamma(\tilde x-\tilde y,\eta-\xi)d\xi d\tilde y\right|.
$$
Consider $$ J_1=\int_{\mathbb R^n}DS_{g,\bold e^k}f((\tilde y,\cdot),\eta-\xi)d\xi \mbox{ and }\; J_2=\int_{\mathbb R^n}DS_{h,\bold e^k}\gamma(\tilde x-\tilde y,\xi)d\xi.$$
We choose an elliptic ultradifferential operator $P(D)$ so that
$P(\xi)\geq e^{(a+1)|\xi|^{1/\alpha}},$ where $a$ is from (\ref{gr1}). Then
$$J_1=\int_{\mathbb R^n}\int_{\mathbb R^n}
\frac{f(t)}{P(2\pi t)}\overline{g(\tilde t-\tilde y)}P(D_\xi)(e^{-2\pi i t\cdot(\eta-\xi)})dtd\xi.
$$
This integral diverges with respect to $\xi$, while $J_2$ converges because
\begin{eqnarray*}J_2&=&\int_{\mathbb R^n}\int_{B_{\rho_1}(\tilde 0)}
\frac{\gamma(\tilde p)\overline{h(\tilde p-(\tilde x-\tilde y))}}{P(-2\pi\xi)}P(D_p)(e^{-2\pi i p\cdot\xi})dpd\xi
\\&=&
\int_{\mathbb R^n}\int_{B_{\rho_1}(\tilde 0)}
P(D_p)(\frac{\gamma(\tilde p)\overline{h(\tilde p-(\tilde x-\tilde y))}}{P(-2\pi\xi)})e^{-2\pi i p\cdot\xi}dpd\xi.
\end{eqnarray*}
We split the integration over $\xi$ and estimate
$$|DS_{h,\bold e^k}f((\tilde x,\cdot),\eta)|\leq\int_{\mathbb R^k}\Big|\int_{|\eta-\xi|\leq c|\eta|}(...)d\xi\Big|d\tilde y+\int_{\mathbb R^k}\Big|\int_{|\eta-\xi|\geq c|\eta|}(...)d\xi\Big|d\tilde y=I_1+I_2.
$$
Then,
$$I_1\leq \int_{\mathbb R^k}\left(\sup_{|\eta-\xi|\leq c|\eta|}
|DS_{g,\bold e^k}f((\tilde y,\cdot),\eta-\xi)|\int_{|\eta-\xi|\leq c|\eta|}
|DS_{h,\bold e^k}\gamma(\tilde x-\tilde y,\xi)|d\xi\right)d\tilde y.
$$
Using (\ref{suporti}), (\ref{suporti2}) and \eqref{gam} we obtain
\begin{equation}\label{dod1}
\sup_{\tilde x\in B_{r_0}(\tilde y_0),\,\eta\in\Gamma_1}e^{N|\eta|^{1/\alpha}}I_1\leq \int_{B_r(\tilde y_0)}\left (\sup_{\xi\in \Gamma_{\xi_0}}
|DS_{g,\bold e^k}f((\tilde y,\cdot),\xi)|e^{N(1-c)^{-1}|\xi|^{1/\alpha}}\right .
\end{equation}
$$\left .\times \int_{|\xi|\geq (1-c)|\eta|}
|DS_{h,\bold e^k}\gamma(\tilde x-\tilde y,\xi)|d\xi\right )d\tilde y.
$$
Now, by the finiteness of $J_2$, we obtain that $I_1$ satisfies the necessary estimate of (\ref{rhh}).
Next we consider $I_2$, for which new explanations are needed.
$$I_2\leq \int_{\mathbb R^k}\left|\int_{|\eta-\xi|\geq c|\eta|}
DS_{g,\bold e^k}f((\tilde y,\cdot),\eta-\xi)DS_{h,\bold e^k}\gamma(\tilde x-\tilde y,\xi)d\xi\right| d\tilde y.
$$
Let $K=\{\xi: |\eta-\xi|\geq c|\eta|\}$.
Denote by $\kappa^0_d, 0<d<1,$ the characteristic function of
$K_{d}=\bigcup_{\xi\in K}L_d(\xi)$, that is, $K_d$ is the open $d$-neighborhood of $K.$
Then, put $$\kappa_d={\kappa}^0_{d}*\varphi_{d},$$
where $\varphi_d=\frac{1}{d^n}\varphi(\cdot/d)$,
$\varphi\in\mathcal D^{\alpha}(\mathbb R^n)$ is non-negative, supported by the ball $B_1(0)$ and equals
$1$ on $B_{1/2}(0).$ This construction implies that $\kappa_d$ equals one on $K$ and is supported by $K_{2d}.$
Moreover, all the derivatives of $\kappa_d$ are bounded.
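As a side illustration (not needed for the proof), the defining properties of $\kappa_d=\kappa^0_d*\varphi_d$ are easy to check numerically in one dimension. The set $K=[0,1]$, the value $d=0.1$ and all grid parameters below are illustrative choices, not part of the construction above.

```python
import numpy as np

# 1-D check of the mollified cutoff kappa_d = chi_{K_d} * phi_d
# for K = [0, 1] and d = 0.1 (illustrative parameters only).
dx = 0.001
d = 0.1
x = np.arange(-1.0, 2.0 + dx/2, dx)                 # sampling grid
chi = ((x >= -d) & (x <= 1.0 + d)).astype(float)    # indicator of K_d = [-d, 1+d]

s = np.arange(-d, d + dx/2, dx)                     # kernel grid (odd length)
u = s / d
phi = np.zeros_like(s)
inside = np.abs(u) < 1.0
phi[inside] = np.exp(-1.0/(1.0 - u[inside]**2))     # standard smooth bump
phi /= phi.sum() * dx                               # normalize: integral = 1

kappa = np.convolve(chi, phi, mode='same') * dx     # kappa_d = chi_{K_d} * phi_d

i_in, i_out = np.searchsorted(x, 0.5), np.searchsorted(x, 1.5)
print(kappa[i_in], kappa[i_out])                    # ~1 on K, 0 outside K_{2d}
```

The convolution equals one on $K$ (the bump integrates to one over a region where the indicator is identically one) and vanishes outside the $2d$-neighborhood of $K$.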
We note that
$$\Big|\int_K...\,d\xi\Big|\leq\Big|\int_{K_{2d}}\kappa_d(\xi)...\,d\xi\Big| +\Big|\int_{K_{2d}\cap\{\xi:\, |\eta-\xi|\leq c|\eta|\}}\kappa_d(\xi)...\,d\xi\Big|.
$$
Then,
$$\sup_{\tilde x\in B_{r/2}(\tilde y_0),\,
\eta\in \Gamma_1}I_2\leq
\int_{\mathbb R^k}
|\int_{\mathbb{R}^n}\kappa_d(\xi)
DS_{g,\bold e^k}f((\tilde y,\cdot),\eta-\xi)
DS_{h,\bold e^k}\gamma(\tilde x-\tilde y,\xi)d\xi|d\tilde y
$$
$$
+ \int_{\mathbb R^k}|
\int_{K_{2d}\cap\{\xi:\, |\eta-\xi|\leq c|\eta|\}}\kappa_d(\xi)
DS_{g,\bold e^k}f((\tilde y,\cdot),\eta-\xi)
DS_{h,\bold e^k}\gamma(\tilde x-\tilde y,\xi)d\xi| d\tilde y=I_{2,1}+I_{2,2}.
$$
We continue with $I_{2,1}$.
For every $\tilde x\in B_{r_0}(\tilde y_0)$ and $\eta\in\Gamma_1$, using \eqref{ulpobound}, we see that all integrals on the right-hand side of
$$\sup_{\tilde x\in B_{r/2}(\tilde y_0),\,
\eta\in \Gamma_1}e^{N|\eta|^{1/\alpha}}I_{2,1}
$$
$$\leq
\int_{\mathbb R^k}|\int_{\mathbb R^n_\xi}
(\int_{\mathbb R^n_t}\frac{|f(t)|}{P(2\pi t)}|\overline{g(\tilde t-\tilde y)}|dt
\big(\frac{e^{N|\xi|^{1/\alpha}}}{e^{N|\eta-\xi|^{1/\alpha}}}P(D_\xi)
\frac{\kappa_d(\xi)}{P(-2\pi\xi)}\big)
$$
$$ \int_{\mathbb R^n_p}|P(D_p)\big(\gamma(\tilde p)\overline{h(\tilde p-(\tilde x-\tilde y))}\big)|dp \big)d\xi| d\tilde y
$$
are finite.
Now we treat $I_{2,2}$ in the same way as $I_{1}$ since integration goes over a subset of
$\{\xi: |\eta-\xi|\leq c|\eta|\}$. Only a bounded factor $\kappa_d$ appears in
the last integral in (\ref{dod1}):
$$\int_{|\xi|\geq (1-c)|\eta|}\kappa_d(\xi)
|DS_{h,\bold e^k}\gamma(\tilde x-\tilde y,\xi)|d\xi.
$$
This gives
$$\sup_{\tilde x\in B_{r/2}(\tilde y_0),\,
\eta\in \Gamma_1}e^{N|\eta|^{1/\alpha}}I_{2,2}<\infty.
$$
This completes the proof of the theorem.
\end{proof}
The next corollary is a modification of the one in the distribution theory (see \cite{APS}).
\begin{corollary}
\label{suz}
Let $g\in\mathcal D^{\alpha}(\mathbb R^k)$, with $\mbox{supp }g \subset B_a(\tilde 0)$, have a synthesis window $\gamma$ with $ \mbox{supp } \gamma\subset B_{\rho_1}(\tilde 0)$ and
$\rho_1\leq a.$ Then
\begin{equation}\label{2rh}
\sup_{\tilde y\in L_{2r}(\tilde y_0),\,\xi \in\Gamma_{\xi_0}}|DS_{g, \bold e^k}f((\tilde y,\cdot),\xi)|\leq C_Ne^{-N|\xi|^{1/\alpha}}.
\end{equation}
Moreover, assume that $a<r.$ Then, for any $h\in\mathcal D^{\alpha}(\mathbb R^k)$ with support $B_{\rho}(\tilde 0)$, $\rho<a$, there exists $r_0$ and $\Gamma_1\subset\subset \Gamma_{\xi_0}$ such that (\ref{2rh}) holds for $DS_{h,\bold e^k}f((\tilde x,\cdot),\eta)$ with the supremum over
$\tilde x\in B_{r_0}(\tilde y_0)$ and $ \eta \in \Gamma_1$.
\end{corollary}
\begin{proof}
Similarly
as in (\ref{suporti}),
$$|\tilde y -\tilde y_0|\leq \rho+\rho_1+r_0<\rho+a-r_0+r_0=a+\rho<2r.
$$
This implies $|\tilde y -\tilde y_0|<2r,$ so that the supremum in the estimate of $I_1$ holds. The proof can now be carried out in the same way as in Theorem \ref{nwr}.
\end{proof}
With the standard proof we have
\begin{corollary}\label{w1}
If
$(P_{\bold e^k,\tilde y},\xi)$ is a $k$-directionally microlocally regular point of $f\in\mathcal S'^{\alpha}(\mathbb R^n)$ for every $\xi\in\mathbb R^n\setminus \{0\}$,
then $ f\in\mathcal E^{\alpha}(\mathbb R^n).$
\end{corollary}
\subsection{Relations with the partial wave front}
Recall that the partial wave front of an $f\in\mathcal D^{\prime}(\mathbb R^n)$ has not been considered in the literature; in particular, it has not been considered for ultradistribution spaces and, specifically, for GS spaces of ultradistributions. For the purpose of this investigation, we will consider
the set of $k$-microlocally regular points (distinguishing them, for the moment, from the $k$-directionally microlocally regular points) for an $f\in\mathcal S^{\prime\alpha}$:
The point $((\widetilde y_0,\widetilde{\widetilde{y}}_0),\xi_0)\in(\mathbb R^k\times\mathbb R^{n-k})\times (\mathbb R^n\setminus\{0\})$ is $k$-microlocally regular for $f$ if
there exists $ \chi\in\mathcal D^{\alpha}(\mathbb R^k)$ so that $ \chi(\tilde y_0)\neq 0$ and a cone $\Gamma_{\xi_0}$ around $\xi_0$ so that there exist $N\in\mathbb N$ and $C_{N,\chi}>0$ such that
\begin{equation}\label{sftkrn137}
|\mathcal{F}(\chi(\tilde y) f(y))(\xi)|\leq C_{N,\chi}e^{-N|\xi|^{1/\alpha}}, \quad \xi\in\Gamma_{\xi_0}, \, y=(\tilde y, \tilde{\tilde y})\in\mathbb{R}^k\times \mathbb{R}^{n-k}.
\end{equation}
Since, in this definition, $\chi$ does not depend on $\widetilde{\widetilde y},$
we will write in the sequel that $f$ is $k$--microlocally regular at
$((\widetilde y_0,\cdot),\xi_0)$.
\begin{remark} The implication (\ref{rhh}) $\Rightarrow$ (ii) is clear. We will prove, as a part of the next theorem ($(ii)\Rightarrow {(i)}$), that the opposite implication also holds, which means that the two notions coincide.
\end{remark}
\begin{theorem}
Let $f\in\mathcal S^{\prime\alpha}(\mathbb R^n)$ and $((\widetilde y_0,\cdot),\xi_0)\in(\mathbb R^k\times\mathbb R^{n-k})\times(\mathbb R^n\backslash\{0\})$. The following conditions are equivalent.
(i) $((\widetilde y_0,\cdot),\xi_0)\not\in WF_{\bold e^k}(f)$.
(ii) There exist a compact neighbourhood $\widetilde K$ of $\widetilde y_0$ and a cone neighbourhood $ \Gamma$ of $\xi_0$ such that there exist $N\in\mathbb N$ so that for every
${\chi}\in\mathcal D^{\alpha}({\widetilde K})$ there exists $C_{N, \chi}>0$ such that (\ref{sftkrn137}) is valid.
(iii) There exist a compact neighbourhood $\widetilde K$ of $\widetilde y_0$ and a cone neighbourhood $\Gamma$ of $\xi_0$ such that there exist $N\in\mathbb N$, $h>0$ and $C_{N,h}>0$ such that
\[
|DS_{\chi, \bold e^k} f((\widetilde y,\cdot),\xi)|\leq C_{N,h}
\sup_{p\in\mathbb N_0^n}\frac{h^{|p|}}{p!^\alpha}\|D^{p}\chi\|_{L^{\infty}(\mathbb R^k)}e^{-N|\xi|^{1/\alpha}},
\]
\[\forall \widetilde y\in \widetilde K,\,\, \forall \xi\in\Gamma,\,\, \forall \chi\in \mathcal D^{\alpha}(\widetilde K-\{\widetilde y_0\}),
\]
where $\widetilde K-\{\widetilde y_0\}=\{\widetilde y\in\mathbb R^k|\,
\widetilde y+\widetilde y_0\in \widetilde K\}$.
(iv) There exist a compact neighborhood $\widetilde K$ of $\widetilde y_0$, a cone neighborhood $\Gamma$ of $\xi_0$ and $\chi\in\mathcal D^{\alpha}(\mathbb R^k)$ with $\chi(\widetilde 0)\neq 0$, such that there exist $N\in\mathbb N$ and $C_{N, \chi}>0$ such that
\[
|DS_{\chi,\bold e^k}f((\widetilde y,\cdot),\xi)|\leq C_{N,\chi}e^{-N|\xi|^{1/\alpha}},\,\, \forall \widetilde y\in \widetilde K,\, \forall \xi\in\Gamma.
\]
\end{theorem}
\begin{proof} The proof follows the steps of the one in \cite{PP}.
$(i)\Rightarrow (ii)$
The fact that $((\widetilde y_0,\cdot),\xi_0)\not\in WF_{\bold e^k}(f)$ implies the existence
of $\chi\in\mathcal D^{\alpha}(\mathbb R^k)$ ($\chi(\cdot)=g(\cdot-\widetilde y_0)$) with $\chi(\tilde y_0)\neq 0$ and a cone neighborhood $\Gamma_{\xi_0}$ of $\xi_0$ for which (\ref{sftkrn137}) is valid for
$\xi\in\Gamma_{\xi_0}$. There exists a compact neighborhood
$\widetilde K$ of $\tilde y_0$ where $\chi$ never vanishes. Fix a cone neighborhood $\Gamma$ of $\xi_0$ such that $\overline{\Gamma}\subseteq \Gamma_{\xi_0}\cup\{0\}$.
Following the proof of Lemma 8.1.1 in \cite{hor} one can show that there exist $N\in\mathbb N$ and $\psi\in\mathcal D^{\alpha}(\widetilde K)$ such that $|\mathcal{F}(\psi\chi f)(\xi)|\leq C_{N,\psi,\chi}e^{-N|\xi|^{1/\alpha}}$,
$\forall \xi\in\Gamma$. Then $(ii)$ follows since
$\psi f=(\psi/\chi)\chi f$ where $\psi/\chi\in\mathcal D^{\alpha}(\widetilde K)$.
$(ii)\Rightarrow (iii)$ By $(ii)$, (\ref{sftkrn137}) implies that there exists $N\in\mathbb N$ such that the set
$H_N=\{e^{-N|\xi|^{1/\alpha}} e^{-i\xi \cdot}f|\, \xi\in \Gamma\}$ is weakly bounded in
$\mathcal D'^{\alpha}(\widetilde B)$. Hence it is equicontinuous, since
$\mathcal D^{\alpha}(\widetilde B)$ is barrelled. Let $\widetilde K=\widetilde B_{\widetilde y_0}(r/2)$. For each
$\chi\in\mathcal D^{\alpha}(\widetilde K-\{\widetilde y_0\})$ and $\widetilde y\in \widetilde K$ the function
$$\widetilde t\mapsto \chi(\widetilde t-\widetilde y)$$ is in $\mathcal D^{\alpha}(\widetilde B)$ and the equicontinuity of $H_N$ implies the existence of $C_N>0$ and $h>0$
such that
\begin{eqnarray*}
|\langle e^{-i\xi t} f(t),\overline{\chi(\widetilde t-\widetilde y)}\rangle|&\leq& C_N
e^{-N|\xi|^{1/\alpha}}\sup_{p\in\mathbb N_0^n,\,\widetilde t,\widetilde y\in\widetilde K}
\frac{h^{|p|}}{p!^{\alpha}}|D^{p}\chi(\widetilde t-\widetilde y)|\\
&\leq&C_N\sup_{p\in\mathbb N_0^n}\frac{h^{|p|}}{p!^\alpha}\|D^{p}\chi\|_{L^{\infty}(\mathbb R^k)}
e^{-N|\xi|^{1/\alpha}},\,\, \forall \xi\in\Gamma,\, \forall \widetilde y\in \widetilde K,
\end{eqnarray*}
which implies the validity of $(iii)$.
$(iii)\Rightarrow (iv)$ is simple and is skipped.
Using the estimate in $(iv)$ with $\widetilde y=\widetilde y_0$, $(iv)\Rightarrow (i)$ follows immediately. This completes the proof.
\end{proof}
\textbf{Acknowledgements}: This paper was supported by the project ``\textit{Time-frequency methods}", No. 174024, financed by the Ministry of Science, Republic of Serbia, by the project ``\textit{Localization in phase space: theoretical and numerical aspects}", No. 19.032/961-103/19, funded by MNRVOID, Republic of Srpska, and by the bilateral project ``\textit{Microlocal analysis and applications}" between the Macedonian and Serbian academies of sciences and arts.
\section{Introduction}\label{sec:a}
Collision of relativistic heavy ions produces hot nuclear matter that can be described using relativistic hydrodynamics \cite{Landau:1953gs,Belenkij:1956cd}. I will refer to this matter as the Quark-Gluon Plasma (QGP), leaving aside the issues of its equilibration and thermalization. Valence electric charges of the colliding ions are not a part of the plasma, as they continue on the incident trajectory along the beam directions with very little deflection \cite{Itakura:2003jp}. However, they create a strong electromagnetic field (EMF) that influences the plasma behavior \cite{Kharzeev:2007jp,Tuchin:2013ie,Voronyuk:2011jd,Skokov:2009qp,Deng:2012pc,Bzdak:2011yy}. The electrically conducting plasma responds by generating an induced EMF. The resulting EMF is a solution to a complicated magneto-hydrodynamic problem. As a first approximation, one can rely on the slow time-dependence of the relevant kinetic coefficients to decouple the Maxwell equations from the time evolution of the QGP. Analytical solution to these equations shows that the EMF decreases with time much more slowly than in vacuum and is approximately collision-energy independent; rather, it depends only on the impact parameter and the electrical conductivity of the QGP \cite{Tuchin:2010vs,Tuchin:2013apa,Tuchin:2013ie,Gursoy:2014aka}. Numerical simulations that take into account the QGP expansion \cite{Zakharov:2014dia} qualitatively agree with this conclusion.\footnote{The different strength of the EMF in \cite{Zakharov:2014dia} and \cite{Tuchin:2013apa} is due to the different initial time at which the plasma evolution starts.}
It has been recently realized that kinetic properties of the QGP reflect the nontrivial topological structure of the QCD. In particular, the QGP responds to the chirality imbalance by generating metastable parity-odd domains. In the presence of external magnetic field such a metastable domain induces a parallel to it electric field, which is known as the Chiral Magnetic Effect (CME) \cite{Kharzeev:2004ey,Kharzeev:2007jp,Kharzeev:2007tn,Fukushima:2008xe,Kharzeev:2009fn}. Electric current generated by the CME is proportional to the external magnetic field, with the chiral conductivity $\sigma_\chi$ being the proportionality coefficient. In this paper, I study the electromagnetic field generated by valence charges at finite chiral conductivity and determine the role of the Chiral Magnetic Effect (CME) in the electromagnetic field dynamics in the QGP.
I found a two-fold effect of the CME on the electromagnetic field evolution. Firstly, the field becomes unstable because soft modes with $k<\sigma_\chi$ grow exponentially with time. For the QGP this effect is of little importance, since the largest wavelength $1/k$ that is allowed in the QGP is much smaller than $1/\sigma_\chi$. However, in non-Abelian plasmas of large spatial extent this is an important phenomenon that may lead to a breakdown of the electromagnetic field into a set of knots with non-trivial topology.\footnote{A different type of ``chiral plasma instabilities" has been recently discussed in \cite{Joyce:1997uy,Boyarsky:2011uy,Tashiro:2012mf,Grabowska:2014efa,Akamatsu:2013pjd,Akamatsu:2014yza}.} Secondly, due to the finite chiral conductivity, the magnetic field produced by valence electric charges oscillates at early times after a heavy-ion collision. These oscillations may result in partial cancellation of the magnetic field effects when averaged over time.
The paper is structured as follows: In \sec{sec:b} I describe the Maxwell-Chern-Simons (MCS) theory, which is an elegant way to incorporate the topological effects in QED. In the MCS theory the chiral conductivity arises from the time-dependent $\theta$-angle. Following \cite{McLerran:2013hla} I consider the simplest model with constant $\sigma_\chi$. In \sec{sec:c} I solve the MCS equations away from charges and show that the dispersion relation of an electromagnetic wave contains an unstable mode at $k<\sigma_\chi$. In \sec{sec:d} I derive expressions for the electromagnetic field of a relativistic point charge and discuss its properties. Explicit analytical expressions for the magnetic field of a point charge are derived in \sec{sec:g} in the diffusion approximation, which is appropriate for relativistic heavy-ion collisions. The main result, shown in \fig{fig2}, indicates that at finite chiral conductivity the magnetic field components oscillate at early times. I discuss these results and conclude in \sec{sec:i}.
\section{ Maxwell-Chern-Simons equations}\label{sec:b}
The Lagrangian of electrodynamics coupled to the topological charge carried by the gluon field, the so-called Maxwell-Chern-Simons theory, reads \cite{Wilczek:1987mv,Carroll:1989vb, Sikivie:1984yz,Kharzeev:2009fn}
\ball{b10}
L= -\frac{1}{4}F^{\mu\nu}F_{\mu\nu}-A_\mu j^{\mu}-\frac{c}{4}\theta\tilde F^{\mu\nu}F_{\mu\nu}\,,
\gal
where $c=N_c\sum_fq_f^2e^2/2\pi^2$. An external pseudo-scalar field $\theta$ depends on the medium properties and originates in the QCD Lagrangian. The corresponding field equations are given by \footnote{ The correct signs in front of the anomalous terms were derived in \cite{Ozonder:2010zy}. }
\bal
&\b \nabla\cdot \b B=0\,, \label{b11}\\
& \b \nabla\cdot \b E= \rho- c\, \b\nabla\theta\cdot \b B\,, \label{b12}\\
& \b \nabla \times \b E= -\partial_t \b B\,,\label{b13}\\
& \b \nabla \times \b B= \partial_t \b E+ \b j + c(\partial_t\theta\, \b B+ \b\nabla \theta\times \b E)\,.\label{b14}
\gal
The time derivative $\dot \theta$ can be identified with the axial chemical potential $\mu_5$ \cite{Fukushima:2008xe,Kharzeev:2009fn}. The part of the anomalous current density proportional to the magnetic field can be written as $\b j= \sigma_\chi \b B$, where
\ball{b25}
\sigma_\chi = \mu_5 \frac{e^2}{2\pi^2}N_c\sum_fq_f^2\,
\gal
is the chiral conductivity induced by the QED anomaly \cite{Kharzeev:2009pj}. The $\theta$-angle is believed to be finite inside metastable regions of size $\sim 1/g^2T$. On average it must vanish, $\aver{\theta}=0$, to preserve the global $\mathcal{CP}$-invariance of QCD. Its space and time dynamics is complicated: shortly after a heavy-ion collision it is determined by the colored fields of the glasma \cite{Kharzeev:2001ev,Lappi:2006fp,Hirono:2014oda}, while at later times by the sphaleron transition dynamics \cite{Joyce:1997uy,Boyarsky:2011uy,Tashiro:2012mf,Grabowska:2014efa}.
Since the detailed structure of the inhomogeneous field $\theta$ is unknown, one has to resort to phenomenological models in order to study its effect on the electromagnetic field dynamics (see e.g.\ \cite{Hirono:2014oda}). The simplest model that captures the essential dynamics of the CME, and that we adopt in the present study, is to neglect the space variation of $\theta$ and approximate $\sigma_\chi$ by a constant. In other words, we set $\b\nabla\theta=0$ and $\sigma_\chi=\text{const}$. This model was used in \cite{Chernodub:2010ye} to discuss non-trivial static topological solutions of \eq{b11}--\eq{b14} (see below) and in \cite{McLerran:2013hla} to numerically investigate the time evolution of magnetic field. The main advantage of this model is that it can be solved analytically and thus provides important insights into the dynamics of the electromagnetic fields in the presence of the chiral anomaly. Moreover, it is argued in \cite{Zhitnitsky:2014ria,Zhitnitsky:2014dra} that $\theta$ may actually be a slow function of $x$ that permits the expansion $\theta\approx \theta_0+ \mu_5 t + c^{-1}\b P\cdot \b r$ with constant $\mu_5$ and $\b P$.
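For orientation, the coefficient in \eq{b25} can be evaluated numerically. The value $\mu_5=100$~MeV below is an assumed, illustrative input (with three light flavors), not a result of this paper.

```python
import math

# Numerical size of the chiral conductivity, cf. eq. (b25):
# sigma_chi = mu5 * e^2/(2*pi^2) * Nc * sum_f q_f^2
alpha = 1/137.036                      # fine-structure constant
e2 = 4*math.pi*alpha                   # e^2 in natural (Gaussian) units
Nc = 3
qf2 = (2/3)**2 + (1/3)**2 + (1/3)**2   # u, d, s quark charges squared

coeff = e2/(2*math.pi**2) * Nc * qf2   # sigma_chi / mu5 (dimensionless)
mu5 = 100.0                            # MeV, assumed illustrative value
print(coeff, coeff*mu5)                # ~0.0093, i.e. sigma_chi ~ 0.93 MeV
```

With this input the chiral conductivity is of order 1~MeV, comparable to the value used in \fig{fig1} below.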
Consider now the system of equations \eq{b11}--\eq{b14} in the absence of electric charges, under the assumptions discussed in the previous paragraph. It has non-trivial stationary solutions with finite magnetic field and vanishing electric field that satisfy the following equations \cite{Chandra,RB,CDGT}:
\bal
&\b \nabla\cdot \b B=0\,, \label{z11}\\
&\b \nabla \times \b B= \sigma_\chi \b B\,. \label{z12}
\gal
It is argued in \cite{Chernodub:2010ye} that since the anomalous current $\b j =\sigma_\chi \b B$ exists only in the deconfined phase occupying a domain of finite volume $D$, there is no outward current on its boundary. This implies the boundary condition
\ball{z14}
\unit r \cdot \b B\big|_{\partial D} =0\,.
\gal
The solution to \eq{z11}-\eq{z14} is a system of magnetized knots of different sizes. In the simplest case of a spherical boundary the possible values of its radius are
\ball{z16}
R_n= \frac{\kappa_n}{\sigma_\chi}\,,\quad n=0,1,2,\ldots\,,
\gal
where $n$ enumerates the zeros $\kappa_n$ of the spherical Bessel functions. The smallest of the $\kappa$'s is $\kappa_0\approx 4.5$, which for a realistic $\sigma_\chi$ yields $R_0\approx 200$~fm. $R_0$ is much larger than the characteristic transverse size of the QGP, $R_A\sim 6-10$~fm, and thus has no effect on the QGP phenomenology. It is possible that magnetic knots are artifacts of our model for the $\theta$-angle. It is far from clear whether any static topological solutions survive in a more realistic model.
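The quoted numbers can be reproduced with a short numerical check, assuming the relevant zeros are those of the spherical Bessel function $j_1$ (consistent with $\kappa_0\approx 4.5$ quoted above), whose zeros satisfy $\tan x = x$; the value of $\sigma_\chi$ used below is purely illustrative.

```python
import math

def j1_zero_equation(x):
    # Zeros of the spherical Bessel function j_1 satisfy tan(x) = x,
    # i.e. f(x) = sin(x) - x*cos(x) = 0 (away from x = 0).
    return math.sin(x) - x*math.cos(x)

# Bisection for the first positive root in (pi, 3*pi/2),
# where f changes sign.
a, b = math.pi, 1.5*math.pi
for _ in range(80):
    m = 0.5*(a + b)
    if j1_zero_equation(a)*j1_zero_equation(m) <= 0:
        b = m
    else:
        a = m
kappa0 = 0.5*(a + b)            # ~4.4934, the kappa_0 ~ 4.5 of the text

hbar_c = 197.327                # MeV*fm conversion constant
sigma_chi = 4.5                 # MeV, illustrative value
R0 = kappa0*hbar_c/sigma_chi    # smallest knot radius in fm
print(kappa0, R0)               # ~4.4934, ~197 fm
```

For $\sigma_\chi$ of a few MeV this gives $R_0$ of order $200$~fm, as stated in the text.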
\section{Instability of Electromagnetic waves in infinite plasma}\label{sec:c}
Consider electromagnetic waves propagating in plasma far from any sources. In a conducting medium the Maxwell equations for the electromagnetic field read
\bal
&\b \nabla\cdot \b B=0\,, \label{c11}\\
& \b \nabla\cdot \b D= 0\,, \label{c12}\\
& \b \nabla \times \b E= -\partial_t \b B\,,\label{c13}\\
& \b \nabla \times \b H= \partial_t \b D + \sigma_\chi \b B\,.\label{c14}
\gal
$\b D$ is the electric displacement vector. We will assume that $\mu =1$. The Fourier transformation
\ball{c19}
\b E(\b r,t)= \int \frac{d^4k}{(2\pi)^4}e^{-ik\cdot x}\b E_{\omega,\b k}\,,\quad \b B(\b r,t)= \int \frac{d^4k}{(2\pi)^4}e^{-ik\cdot x}\b B_{\omega,\b k}
\gal
where $x=(t,\b r)$, $k= (\omega, \b k)$, yields the Maxwell equations in momentum space
\bal
& \b k\cdot \b B_{\omega,\b k}= 0\,, \label{c21}\\
&\epsilon \b k\cdot \b E_{\omega,\b k}= 0\,, \label{c22}\\
& \b k\times \b E_{\omega,\b k}= \omega \b B_{\omega,\b k}\,, \label{c23}\\
& \b k\times \b B_{\omega,\b k}= -\omega \epsilon \b E_{\omega,\b k}-i\sigma_\chi \b B_{\omega,\b k}\,, \label{c24}
\gal
where $\b D_{\omega,\b k}= \epsilon \b E_{\omega,\b k}$. In an electrically conducting medium with the Ohmic conductivity $\sigma$ the permittivity is $\epsilon= 1+i\sigma/\omega$.
Taking vector product of \eq{c24} with $\b k$ and using \eq{c21} and \eq{c23} we get
\ball{c27}
\b B_{\omega,\b k}[\omega(\omega+i\sigma)-\b k^2]= -i\sigma_\chi \b k\times \b B_{\omega,\b k}\,.
\gal
Taking another vector product with $\b k$ gives
\ball{c29}
(\b k\times \b B_{\omega,\b k})[\omega(\omega+i\sigma)-\b k^2]= i\sigma_\chi \b k^2 \b B_{\omega,\b k}\,.
\gal
Equations \eq{c27} and \eq{c29} have a non-trivial solution only if the following dispersion relation is satisfied
\ball{c31}
[\omega(\omega+i\sigma)-\b k^2]^2=\sigma_\chi^2\b k^2\,.
\gal
It has four solutions
\ball{c33}
\omega_{\lambda_1,\lambda_2}= -\frac{i\sigma}{2}+\lambda_1\sqrt{ k^2+\lambda_2\sigma_\chi k- \sigma^2/4}\,,
\gal
where $\lambda_1,\, \lambda_2= \pm 1$ and $k=\sqrt{\b k^2}\ge 0$. These solutions determine the time dependence of the electromagnetic wave as $\sim e^{-i\omega_{\lambda_1,\lambda_2}t}$.
Let $\kappa^2= k^2+\lambda_2\sigma_\chi k- \sigma^2/4$.
When $\kappa^2>0$ the electromagnetic wave oscillates with frequency $\kappa$ and is damped on the time scale $\sim 1/\sigma$. This corresponds to momenta
\ball{c35}
k>k_0\equiv \frac{1}{2}\sqrt{\sigma_\chi^2+\sigma^2}-\frac{\lambda_2\sigma_\chi}{2}\,.
\gal
For $k<k_0$, $\kappa^2<0$, and all $\omega_{\lambda_1,\lambda_2}$'s become imaginary, implying that the electromagnetic wave is a monotonic function of time. At $\kappa^2=-\sigma^2/4$, which occurs at $k=\sigma_\chi$, $\lambda_2=-1$, and $\lambda_1=+1$, $\omega_{+-}$ vanishes, indicating a stationary mode. Finally, when $\kappa^2<-\sigma^2/4$, i.e.
$k<\sigma_\chi$, $\lambda_2=-1$, $\lambda_1=+1$, there is an unstable mode with $\,\mathrm{Im}\, \omega_{+-}>0$, which corresponds to an exponentially increasing magnetic field. $\,\mathrm{Im}\, \omega_{+-}$ vanishes at $k=0$ and $k=\sigma_\chi$ and has a maximum value of $\left(\sqrt{\sigma^2+\sigma_\chi^2}-\sigma\right)/2$ at $k=\sigma_\chi/2$.
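These properties of the spectrum \eq{c33} are easy to verify numerically; the sketch below uses the illustrative values $\sigma=5.8$~MeV and $\sigma_\chi=1$~MeV quoted in the caption of \fig{fig1}.

```python
import numpy as np

def omega_branches(k, sigma, sigma_chi):
    """Four solutions (c33) of the dispersion relation (c31)."""
    roots = []
    for l2 in (+1, -1):
        # branch of the square root in (c33) for lambda_2 = l2
        disc = np.sqrt(complex(k**2 + l2*sigma_chi*k - sigma**2/4))
        for l1 in (+1, -1):
            roots.append(-0.5j*sigma + l1*disc)
    return roots

sigma, sigma_chi = 5.8, 1.0     # MeV, values used in Fig. 1
k = sigma_chi/2                 # fastest-growing mode
growth = max(w.imag for w in omega_branches(k, sigma, sigma_chi))
expected = 0.5*(np.sqrt(sigma**2 + sigma_chi**2) - sigma)
print(growth, expected)         # both ~0.043 MeV
```

The maximal growth rate at $k=\sigma_\chi/2$ agrees with $\left(\sqrt{\sigma^2+\sigma_\chi^2}-\sigma\right)/2$, and the growth rate vanishes at $k=0$ and $k=\sigma_\chi$.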
An electromagnetic wave which at some initial time contains modes extending to the region $k<\sigma_\chi$ is unstable. This is a usual situation in an infinite plasma. However, in a plasma of spatial size $R$ there are only modes $k\gtrsim 1/R$. Therefore, the instability affects the field evolution only if $R\gtrsim1/\sigma_\chi$. In the QGP this condition is not satisfied, except, perhaps, in very rare fluctuations of the $\theta$-angle, and hence the instability can be ignored.
\section{Electromagnetic field of a point charge}\label{sec:d}
In an electrically conducting medium the Maxwell equations for the electromagnetic field of a point charge moving along the straight line $z=vt$ read
\bal
&\b \nabla\cdot \b B=0\,, \label{d11}\\
& \b \nabla\cdot \b D= e\delta (z-vt)\delta (\b b)\,, \label{d12}\\
& \b \nabla \times \b E= -\partial_t \b B\,,\label{d13}\\
& \b \nabla \times \b H= \partial_t \b D + \sigma_\chi \b B+ ev\unit z \delta (z-vt)\delta (\b b)\,.\label{d14}
\gal
These equations in momentum space are
\bal
& \b k\cdot \b B_{\omega,\b k}= 0\,, \label{d21}\\
&\epsilon \b k\cdot \b E_{\omega,\b k}= -2\pi i e\delta (\omega - k_z v)\,, \label{d22}\\
& \b k\times \b E_{\omega,\b k}= \omega \b B_{\omega,\b k}\,, \label{d23}\\
& \b k\times \b B_{\omega,\b k}= -\omega \epsilon \b E_{\omega,\b k}-i\sigma_\chi \b B_{\omega,\b k}-2\pi i ev\unit z \delta(\omega-k_zv)\,. \label{d24}
\gal
We repeat the algebraic manipulations of the previous section. Firstly, taking the vector product of \eq{d24} with $\b k$ and using \eq{d21} and \eq{d23} we arrive at
\ball{d27}
\b B_{\omega,\b k}[\omega(\omega+i\sigma)-\b k^2]= -i\sigma_\chi \b k\times \b B_{\omega,\b k}-2\pi i ev\b k \times \unit z \delta(\omega-k_zv)\,.
\gal
Secondly, we take another vector product with $\b k$ to obtain
\ball{d29}
(\b k\times \b B_{\omega,\b k})[\omega(\omega+i\sigma)-\b k^2]= i\sigma_\chi \b k^2 \b B_{\omega,\b k}-2\pi i e v \b k\times (\b k\times \unit z)\delta(\omega-k_zv)\,.
\gal
We are interested in a particular solution to equations \eq{d27},\eq{d29}, namely the one that is generated by the electric charge $e$. Solving \eq{d27} and \eq{d29} yields
\ball{d31}
\b B_{\omega,\b k}= \frac{(\b k\times \unit z) [\omega(\omega+i\sigma)-\b k^2]-i\sigma_\chi \b k\times (\b k\times \unit z)}{[\omega(\omega+i\sigma)-\b k^2]^2-\sigma_\chi^2\b k^2}(-2\pi i) ev\delta(\omega - k_z v)\,.
\gal
Electric field follows from the Faraday law \eq{d23} upon taking its vector product with $\b k$:
\ball{d33}
\b k (\b k\cdot \b E_{\omega,\b k})-\b k^2 \b E_{\omega,\b k}= \omega (\b k\times \b B_{\omega,\b k})\,.
\gal
Substituting \eq{d22} and \eq{d24} we find
\ball{d35}
\b E_{\omega,\b k}= \frac{2\pi ie\delta(\omega - k_z v)[\b k/\epsilon-v\omega \unit z]-i\omega \sigma_\chi \b B_{\omega,\b k}}{\omega(\omega+i\sigma)-\b k^2}\,,
\gal
with $\b B_{\omega,\b k}$ given by \eq{d31}.
It will be convenient to write the cross products in \eq{d31} in cylindrical coordinates. Let $\psi$ be the angle between the vector $\b k_\bot$ and the $x$-axis; the corresponding unit vector is $\unit \psi=-\unit x \sin\psi+\unit y \cos\psi$. Then
\bal
&\b k\times \unit z= -k_\bot \unit \psi\,, \label{d37}\\
& \b k\times(\b k\times \unit z)= k_z \b k_\bot -k_\bot^2\unit z\,. \label{d38}
\gal
Using identities \eq{d37},\eq{d38} in \eq{d31}, substituting the result into \eq{c19} and taking the integral over $k_z$ we find
\ball{d40}
\b B= ie\int_{-\infty}^{+\infty}\frac{d\omega}{2\pi}\int \frac{d^2k_\bot}{(2\pi)^2}
\frac{k_\bot \unit \psi [\omega(\omega+i\sigma)- k_\bot^2-\frac{\omega^2}{v^2}]+i\sigma_\chi (\b k_\bot \frac{\omega}{v}-k_\bot^2\unit z)}{[\omega(\omega+i\sigma)- k_\bot^2-\frac{\omega^2}{v^2}]^2-\sigma_\chi^2(k_\bot^2+\frac{\omega^2}{v^2})}e^{-i\omega x_-+i \b k_\bot \cdot \b b}\,,
\gal
where $x_-= t-z/v$.
The time dependence of the magnetic field is determined by the poles of \eq{d31} in the complex $\omega$ plane. These poles are solutions of the following quartic equation
\ball{d57}
\left[\omega(\omega+i\sigma)- k_\bot^2-\frac{\omega^2}{v^2}\right]^2-\sigma_\chi^2\left(k_\bot^2+\frac{\omega^2}{v^2}\right)=0\,.
\gal
Eq.~\eq{d57} can be obtained from the dispersion relation \eq{c31} of a free wave by restricting it to the particle equation of motion $k_z= \omega/v$. Introducing $\gamma = (1-v^2)^{-1/2}$ allows us to cast \eq{d57} in a more convenient form
\ball{d59}
\left( -\frac{\omega^2}{v^2\gamma^2}+i\omega \sigma - k_\bot^2\right)^2-\sigma_\chi^2\left( \frac{\omega^2}{v^2}+k_\bot^2\right)=0\,.
\gal
Four solutions to this equation can be found using standard algebraic methods. However, they are quite bulky, so I am not reproducing them here. Instead, I find it more illuminating to plot them at fixed $\sigma$, $\sigma_\chi$ and $\gamma$ for different values of $k_\bot$, as shown in \fig{fig1}.
\begin{figure}[ht]
\includegraphics[height=5cm]{fig1a.pdf}
\caption{Four solutions of \eq{d59} at $\sigma=5.8$~MeV, $\sigma_\chi=1$~MeV, $\gamma=100$. Horizontal and vertical axes are in units of GeV. Each line is a unique function of $k_\bot$. Squares, circles and triangles indicate the positions of the poles at $k_\bot=0.1,0.6,1.1$~GeV respectively. }
\label{fig1}
\end{figure}
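The four branches in \fig{fig1} can be cross-checked by solving the quartic \eq{d59} numerically; the sketch below expands the quartic into polynomial coefficients and uses a standard root finder, with parameter values taken from the figure caption.

```python
import numpy as np

def pole_positions(kperp, sigma, sigma_chi, gamma):
    """Numerically find the four omega-roots of eq. (d59)."""
    v = np.sqrt(1 - 1/gamma**2)
    a = 1/(v**2*gamma**2)
    # expansion of (-a*w^2 + i*sigma*w - kperp^2)^2
    #            - sigma_chi^2*(w^2/v^2 + kperp^2) in powers of w
    coeffs = [a**2,
              -2j*a*sigma,
              2*a*kperp**2 - sigma**2 - sigma_chi**2/v**2,
              -2j*sigma*kperp**2,
              kperp**4 - sigma_chi**2*kperp**2]
    return np.roots(coeffs)

# parameters of Fig. 1 (GeV units)
sigma, sigma_chi, gamma = 5.8e-3, 1.0e-3, 100.0
for kperp in (0.1, 0.6, 1.1):
    print(kperp, np.sort_complex(pole_positions(kperp, sigma, sigma_chi, gamma)))
```

At $k_\bot\to 0$ the numerical roots reproduce the double root at $\omega=0$ together with the two gapped upper-branch poles.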
Position of the four poles at $k_\bot\to 0$ can be found by expanding \eq{d59}, which gives three distinct solutions, $\omega=0$ and $\omega=v^2\gamma^2(i\sigma\pm \sigma_\chi/v)$. The former corresponds to the minimum value of the lower branches, while the latter to the minimum values of the upper branches.
Thus, the upper branches are separated from the real axis by a gap $v^2\gamma^2\sigma$. The absolute value of the real part of the upper branches decreases monotonically with $k_\bot$. At $k_\bot\to \infty$
\ball{d60}
\omega\approx\pm iv\gamma k_\bot \pm\frac{1}{2}v\gamma\sigma_\chi\sqrt{\gamma^2-1}\,.
\gal
Thus, the real part of $\omega$ on the upper branches approaches a constant at large $k_\bot$, which indicates that a gap of size $\sim \gamma^2 \sigma_\chi$ also exists between the upper branches and the imaginary axis. In the ultra-relativistic limit $v\to 1$, i.e.\ $\gamma\to \infty$, the upper branches move to infinity. Since the poles in the upper half-plane determine the electromagnetic field at $x_-<0$, the field is exponentially suppressed at $\gamma\gg 1$.
The behavior of the electromagnetic field at $x_->0$ is determined by the two poles in the lower half-plane. Unlike the poles in the upper half-plane, they remain finite in the ultra-relativistic limit. One of the lower branches exhibits a peculiar behavior: it crosses the real axis and acquires a positive $\,\mathrm{Im}\, \omega$ when $k_\bot<\sigma_\chi$. This is how the field instability discussed in the previous section manifests itself in this case. (This feature is not readily seen in \fig{fig1} due to the small value of $\sigma_\chi$.) The existence of a pole in the upper half-plane implies that the field of a point charge moving along $x_-=0$ receives an acausal contribution, viz.\ a term that is finite at $x_-<0$ when $\gamma\to \infty$. Fortunately, transverse momenta as small as $k_\bot\sim \sigma_\chi$ are not relevant in relativistic heavy-ion phenomenology, allowing me to neglect the acausal contribution. This, however, does not resolve the theoretical problem that the acausal term presents.\footnote{A solution to this problem might be related to the existence of the magnetic knots discussed in \sec{sec:b}, which also appear at $k\sim \sigma_\chi$. }
\section{Diffusion approximation}\label{sec:g}
At a given light-cone time $x_->0$, the contribution to the $\omega$-integral in \eq{d40} from $\omega\gg 1/x_-$ vanishes due to the rapid oscillation of the integrand. Therefore, at later times the terms in \eq{d59} that are quadratic in $\omega$ are suppressed. This corresponds to the following ``diffusion'' approximation:
\ball{g11}
\omega\ll \sigma v^2\gamma^2\,,\quad \omega\ll v\gamma k_\bot\,,
\gal
which is tantamount to
\ball{g13}
x_-\gg \frac{1}{\sigma v^2\gamma^2}\,,\quad x_-\gg \frac{b}{v\gamma}\,,
\gal
where we estimated $k_\bot\sim 1/b$. The electrical conductivity of the quark-gluon plasma at the critical temperature is $\sigma = 5.8$~MeV \cite{Ding:2010ga,Aarts:2007wj,Amato:2013oja,Cassing:2013iz}. For a heavy-ion collision at $\gamma=100$ we estimate $1/\sigma v^2 \gamma^2\sim 3\cdot 10^{-3}$~fm, and for $b\sim 10$~fm, $b/\gamma \sim 0.1$~fm. Taking into account that it takes about $1/Q_s\sim 0.2$~fm to release the color charges from the nuclear wave functions, it follows that the approximation \eq{g11} applies during the entire lifetime of the QGP. The precise initial conditions therefore do not play an important role in the electromagnetic field evolution.
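These scales follow from converting inverse energies to lengths with $\hbar c \approx 197.3$~MeV\,fm; a quick arithmetic check (mine, for orientation only):

```python
import math

# Quick numerical check of the scales quoted above, converting inverse
# energies to lengths with hbar*c = 197.327 MeV fm.
HBARC = 197.327                 # MeV * fm
sigma = 5.8                     # MeV, QGP electrical conductivity near T_c
gamma = 100.0
v2 = 1.0 - 1.0 / gamma**2

t_sigma = HBARC / (sigma * v2 * gamma**2)    # 1/(sigma v^2 gamma^2), in fm
b = 10.0                                     # impact parameter, fm
t_b = b / (math.sqrt(v2) * gamma)            # b/(v gamma), in fm

print(f"1/(sigma v^2 gamma^2) ~ {t_sigma:.1e} fm")   # ~ 3e-3 fm
print(f"b/(v gamma)           ~ {t_b:.2f} fm")       # ~ 0.1 fm
```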
Since the valence quarks are ultra-relativistic, i.e.\ $\gamma\gg 1$, we will approximate their velocity as $v\approx 1-1/(2\gamma^2)$. Then the dispersion relation \eq{d59} in the diffusion approximation takes the form
\ball{g15}
\left( i\omega \sigma - k_\bot^2\right)^2-\sigma_\chi^2\left( \omega^2+k_\bot^2\right)=0\,.
\gal
The two solutions of \eq{g15}, describing the two lower poles in \fig{fig1}, are
\ball{g17}
\omega_{1,2}= \frac{-i\sigma k_\bot^2\pm k_\bot \sigma_\chi \sqrt{k_\bot^2-\sigma^2-\sigma_\chi^2}}{\sigma^2+\sigma_\chi^2}\,.
\gal
These are the only poles of the Fourier component of the magnetic field $\b B_{\omega,\b k}$ in the complex $\omega$-plane because the upper poles in \fig{fig1} disappear in the limit $v\to 1$. If $k_\bot> \sqrt{\sigma^2+\sigma_\chi^2}$, the two poles form a mirror pair with respect to the imaginary axis and both lie in the lower half-plane. If $\sigma_\chi<k_\bot< \sqrt{\sigma^2+\sigma_\chi^2}$, there are two poles on the imaginary axis in the lower half-plane. Finally, if $k_\bot<\sigma_\chi$, both poles lie on the imaginary axis, but $\omega_1$ moves to the upper half-plane, while $\omega_2$ stays in the lower one.
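The three regimes can be verified directly from \eq{g17}; a short self-contained check (parameter and $k_\bot$ values are my own illustrative choices, in GeV):

```python
import cmath

# Quick check (my own sketch) of the pole structure of Eq. (g17)
# in the three k_perp regimes discussed in the text; units of GeV.
def poles(k, s=5.8e-3, sx=1.0e-3):
    disc = cmath.sqrt(k**2 - s**2 - sx**2)
    w1 = (-1j * s * k**2 + k * sx * disc) / (s**2 + sx**2)
    w2 = (-1j * s * k**2 - k * sx * disc) / (s**2 + sx**2)
    return w1, w2

# below sx; between sx and sqrt(s^2+sx^2); far above sqrt(s^2+sx^2)
for k in (0.5e-3, 3.0e-3, 0.1):
    w1, w2 = poles(k)
    print(f"k = {k:.1e} GeV:  w1 = {w1:.3e},  w2 = {w2:.3e}")
```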
In the diffusion approximation \eq{d40} reads
\bal
\b B&= -ie\int\frac{d\omega}{2\pi}\int \frac{d^2k_\bot}{(2\pi)^2}
\frac{k_\bot \unit \psi (i\omega \sigma - k_\bot^2)+i\sigma_\chi (\b k_\bot \omega-k_\bot^2\unit z)}{(\sigma^2+\sigma_\chi^2)(\omega-\omega_1)(\omega-\omega_2)}e^{-i\omega x_-+i \b k_\bot \cdot \b b}\label{g18}\\
&=\int\frac{d^2k_\bot}{(2\pi)^2} e^{i \b k_\bot \cdot \b b}\int_{-\infty}^{+\infty} \frac{d\omega}{2\pi} \frac{\b f(\omega)}{(\omega-\omega_1)(\omega-\omega_2)}e^{-i\omega x_-} \label{g19}\,,
\gal
where I denoted
\ball{g20}
\b f(\omega)= -\frac{ie}{\sigma^2+\sigma_\chi^2}
\left[k_\bot \unit \psi (i\omega \sigma - k_\bot^2)+i\sigma_\chi (\b k_\bot \omega-k_\bot^2\unit z)\right]\,.
\gal
Closing the integration contour in \eq{g19} by an infinite semi-circle in the lower half-plane we find
at $x_->0$
\bal
\b B =& \int\frac{d^2k_\bot}{(2\pi)^2} e^{i \b k_\bot \cdot \b b}\frac{i}{\omega_2-\omega_1}\left[ e^{-i\omega_1 x_-}\b f(\omega_1)\theta(k_\bot-\sigma_\chi)- e^{-i\omega_2 x_-}\b f(\omega_2)\right]\theta(x_-)\,. \label{g21}
\gal
The value of $\sigma_\chi$ probably does not exceed a few MeV, while typical $k_\bot$ is in the range $20-200$~MeV, corresponding to $b$ in the range $1-10$~fm. Therefore, only the case $k_\bot^2\gg \sigma^2+\sigma_\chi^2$ has practical significance. This allows us to approximate the poles \eq{g17} as follows:
\ball{g23}
\omega_{1,2}\approx \frac{k_\bot^2(-i\sigma\pm \sigma_\chi)}{\sigma^2+\sigma_\chi^2}=\frac{k_\bot^2}{i\sigma\pm \sigma_\chi}\,.
\gal
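The accuracy of this approximation is easy to gauge against the exact poles \eq{g17} for a typical transverse momentum; a minimal sketch (illustrative values, in GeV):

```python
import cmath

# Accuracy of the approximation (g23) against the exact pole (g17)
# in the physically relevant regime k_perp^2 >> sigma^2 + sigma_chi^2.
s, sx = 5.8e-3, 1.0e-3
k = 0.1                                   # corresponds to b ~ 2 fm

exact = (-1j * s * k**2 + k * sx * cmath.sqrt(k**2 - s**2 - sx**2)) / (s**2 + sx**2)
approx = k**2 / (1j * s + sx)             # Eq. (g23) with the upper sign
rel = abs(exact - approx) / abs(exact)
print(f"relative difference: {rel:.1e}")  # well below one percent
```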
The magnetic field at $x_->0$ becomes
\ball{g25}
\b B\approx \int\frac{d^2k_\bot}{(2\pi)^2} e^{i \b k_\bot \cdot \b b}\frac{i}{\omega_2-\omega_1}\left[ e^{-i\omega_1 x_-}\b f(\omega_1)- e^{-i\omega_2 x_-}\b f(\omega_2)\right]\,.
\gal
Its polar component is given by
\ball{g27}
B_\phi = \int\frac{d^2k_\bot}{(2\pi)^2} e^{i \b k_\bot \cdot \b b}\frac{i}{\omega_2-\omega_1} \hat {\b \psi}\cdot\left[ e^{-i\omega_1 x_-}\b f(\omega_1)- e^{-i\omega_2 x_-} \b f(\omega_2)\right]\,,
\gal
where $\phi$ is the angle between the impact parameter $\b b$ and the $x$-axis. Integration over the directions of $\b k_\bot$ given by the polar angle $\psi$ is done as follows:
\ball{g28}
&\int_0^{2\pi} e^{i\b k_\bot \cdot \b b }\unit \psi \,d\psi=\int_0^{2\pi} e^{ik_\bot b \cos(\psi-\phi)}(-\unit x \sin\psi+\unit y \cos\psi) \,d\psi= 2\pi i J_1(k_\bot b)\unit \phi\,.
\gal
Using \eq{g28} in \eq{g27} and substituting \eq{g20} and \eq{g23}, we obtain:
\ball{g29}
B_\phi&=-\int_0^\infty \frac{dk_\bot k_\bot}{2\pi}iJ_1(k_\bot b)\frac{ek_\bot}{2(\sigma^2+\sigma_\chi^2)}\left[
(i\sigma-\sigma_\chi)e^{-i \frac{k_\bot^2x_-}{i\sigma+\sigma_\chi}}+(i\sigma +\sigma_\chi)e^{-i \frac{k_\bot^2x_-}{i\sigma-\sigma_\chi}}\right]\,.
\gal
The remaining integral can be done analytically yielding
\ball{g31}
B_\phi=\frac{eb}{8\pi x_-^2}e^{-\frac{b^2\sigma}{4x_-}}\left[
\sigma \cos\left( \frac{b^2\sigma_\chi}{4x_-}\right)+\sigma_\chi\sin \left(\frac{b^2\sigma_\chi}{4x_-}\right)\right]\,.
\gal
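The $k_\bot$-integral behind \eq{g31} is of the tabulated form $\int_0^\infty k^2 e^{-pk^2}J_1(kb)\,dk = \frac{b}{4p^2}\,e^{-b^2/4p}$, valid for $\mathrm{Re}\,p>0$. The following self-contained NumPy sketch (my own; parameter values are chosen only for illustration) checks this identity for the complex $p=x_-(\sigma+i\sigma_\chi)/(\sigma^2+\sigma_\chi^2)$ that appears here, building $J_1$ from its integral representation.

```python
import numpy as np

trapz = getattr(np, "trapezoid", None) or getattr(np, "trapz", None)

def j1(x):
    """Bessel J_1(x) from J_1(x) = (1/pi) int_0^pi cos(t - x sin t) dt (x is a 1-D array)."""
    t = np.linspace(0.0, np.pi, 801)
    return trapz(np.cos(t[None, :] - np.outer(x, np.sin(t))), t, axis=1) / np.pi

def I_numeric(p, b):
    """Left-hand side on a fine grid; the Gaussian factor kills the tail."""
    k = np.linspace(0.0, 10.0 / np.sqrt(p.real), 8001)
    return trapz(k**2 * np.exp(-p * k**2) * j1(k * b), k)

def I_closed(p, b):
    """Right-hand side of the tabulated integral."""
    return b / (4.0 * p**2) * np.exp(-b**2 / (4.0 * p))

s, sx = 5.8e-3, 1.5e-3            # sigma, sigma_chi in GeV (illustrative)
x_minus, b = 5.0, 25.0            # in GeV^-1, roughly 1 fm and 5 fm
p = x_minus * (s + 1j * sx) / (s**2 + sx**2)

print(I_numeric(p, b))
print(I_closed(p, b))             # the two should agree closely
```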
Turning to the component of the magnetic field aligned along the $\b b$-direction, we obtain:
\ball{g33}
B_r= \int\frac{d^2k_\bot}{(2\pi)^2} e^{i \b k_\bot \cdot \b b}\frac{i}{\omega_2-\omega_1} \unit k_\bot\cdot\left[ e^{-i\omega_1 x_-}\b f(\omega_1)- e^{-i\omega_2 x_-} \b f(\omega_2)\right]\,.
\gal
Angular integration is done using
\ball{g32}
&\int_0^{2\pi} e^{i\b k_\bot \cdot \b b }\unit k_\bot \,d\psi= \int_0^{2\pi} e^{ik_\bot b \cos(\psi-\phi)}(\unit x \cos\psi+\unit y \sin\psi) \,d\psi= 2\pi i J_1(k_\bot b)\unit b\,.
\gal
Plugging in the $k_\bot$-component of $\b f$ from \eq{g20} and integrating over $k_\bot$, we derive
\ball{g34}
B_r=\frac{eb}{8\pi x_-^2}e^{-\frac{b^2\sigma}{4x_-}}\left[
\sigma \sin\left( \frac{b^2\sigma_\chi}{4x_-}\right)-\sigma_\chi\cos \left(\frac{b^2\sigma_\chi}{4x_-}\right)\right]\,.
\gal
Finally, repeating the by now familiar procedure and using the integral
\ball{g36}
&\int_0^{2\pi}e^{i\b k_\bot \cdot \b b }\unit z\,d\psi = 2\pi J_0(k_\bot b)\unit z
\gal
we find for the longitudinal field component:
\ball{g38}
B_z= -\frac{e}{4\pi x_-}e^{-\frac{b^2\sigma}{4x_-}}\left[
\sigma \sin\left( \frac{b^2\sigma_\chi}{4x_-}\right)-\sigma_\chi\cos \left(\frac{b^2\sigma_\chi}{4x_-}\right)\right]\,.
\gal
It is seen in \eq{g34} and \eq{g38} that the field components $B_r$ and $B_z$ are generated only at a finite chiral conductivity $\sigma_\chi$.
\begin{figure}[ht]
\begin{tabular}{cc}
\includegraphics[height=5cm]{eB1.pdf} &
\includegraphics[height=5cm]{eB2.pdf}
\end{tabular}
\caption{Magnetic field of a point charge as a function of time $t$ at $z=0$ (the free-space contribution is not shown). The electrical conductivity is $\sigma= 5.8$~MeV. The solid line on both panels corresponds to $B=B_\phi$ at $\sigma_\chi=0$. The broken lines correspond to $B_\phi$ (dashed), $B_r$ (dashed-dotted) and $B_z$ (dotted), with $\sigma_\chi = 15$~MeV on the left panel and $\sigma_\chi=1.5$~MeV on the right panel. Note that the vertical scale on the two panels is different.
}
\label{fig2}
\end{figure}
Eqs.~\eq{g31}, \eq{g34} and \eq{g38} are the main result of this paper. They show that at finite $\sigma_\chi$ the magnetic field of a point charge acquires two components that are absent in a chirally neutral medium: the radial and the longitudinal ones. All field components oscillate at early times, as is clearly seen in \fig{fig2}. The $B_z$ and $B_r$ components change sign at the light-cone times
\ball{g43}
x_{-}^{(n)}=\frac{b^2\sigma_\chi}{4[\arctan\frac{\sigma_\chi}{\sigma}+\pi n]}\,, \quad n=0,1,\ldots\,,
\gal
while the $B_\phi$ component changes sign at
\ball{g44}
\tilde x_-^{(n)}=\frac{b^2\sigma_\chi}{4[-\arctan\frac{\sigma}{\sigma_\chi}+\pi n]}\,, \quad n=0,1,\ldots\,.
\gal
The latest sign change corresponds to $n=0$; the corresponding time increases with $\sigma_\chi$.
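As a consistency check, the times \eq{g43} and \eq{g44} can be substituted back into the bracketed factors of \eq{g34}/\eq{g38} and \eq{g31}; a short sketch (mine, with illustrative parameter values):

```python
import math

# Consistency check: the times (g43)/(g44) are zeros of the bracketed
# factors in Eqs. (g34)/(g38) and Eq. (g31).  Illustrative values.
s, sx, b = 5.8e-3, 1.5e-3, 50.0     # sigma, sigma_chi in GeV; b in GeV^-1

def Br_factor(x):                    # bracket of B_r and B_z
    a = b**2 * sx / (4.0 * x)
    return s * math.sin(a) - sx * math.cos(a)

def Bphi_factor(x):                  # bracket of B_phi
    a = b**2 * sx / (4.0 * x)
    return s * math.cos(a) + sx * math.sin(a)

x_n = [b**2 * sx / (4.0 * (math.atan(sx / s) + math.pi * n)) for n in range(3)]
# for B_phi, n = 0 gives a negative time for these parameter values, so start at n = 1
xt_n = [b**2 * sx / (4.0 * (-math.atan(s / sx) + math.pi * n)) for n in range(1, 4)]

print([f"{Br_factor(x):.1e}" for x in x_n])    # all compatible with zero
print([f"{Bphi_factor(x):.1e}" for x in xt_n])
```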
\section{Discussion and summary}\label{sec:i}
We have discussed the chiral topological effect on the electromagnetic field in the quark-gluon plasma. In our model the anomalous current density is given by $\b j= \sigma_\chi\b B$ with a constant chiral conductivity $\sigma_\chi$. For the energy and time scales of the QGP this model gives a reasonable physical picture of the space-time evolution of the electromagnetic field. There are two major results presented in this paper.
(i) I showed that solutions to the Maxwell equations are not stable in the presence of a chirality imbalance. It is possible that the electromagnetic field collapses into a set of magnetic knots. This problem certainly deserves a dedicated study and may be important in cosmology. However, as far as heavy-ion collisions are concerned, this instability has a negligible impact on the QGP because it originates from soft modes $k<\sigma_\chi$ that do not exist in a QGP of realistic dimensions. The maximal growth rate of unstable modes is $\left(\sqrt{\sigma^2+\sigma_\chi^2}-\sigma\right)/2$.
(ii) I derived an analytical expression for the magnetic field produced by valence charges in the quark-gluon plasma at finite chiral conductivity $\sigma_\chi$. Its components are given by equations \eq{g31}, \eq{g34} and \eq{g38}, which indicate the emergence of radial $B_r$ and longitudinal $B_z$ components of the magnetic field (as compared to the $\sigma_\chi=0$ case). If $\sigma_\chi$ is not much smaller than $\sigma$, then all components oscillate at early times after the collision. Since the magnetic field is strongest at early times, these oscillations should have an important impact on heavy-ion phenomenology. In particular, they may weaken effects that depend on the magnetic field direction, such as the $B$-dependent elliptic flow \cite{Tuchin:2011jw,Mohapatra:2011ku} and the charge separation effect \cite{Kharzeev:2007jp}. This is especially true for the charge separation effect, which requires a sufficiently large $\sigma_\chi$.
In this paper, I considered the simplest model that incorporates the chiral anomaly in electrodynamics. Its main advantages are that it describes the experimentally observable charge separation in heavy-ion collisions and can be solved analytically. However, it has serious drawbacks as well: the chiral conductivity of a realistic plasma is a complicated function of space and time. Thus, the main outstanding problem is to find a more realistic model for the chiral anomaly and verify which of the above results, and to what extent, survive in an improved formulation. This can serve as a benchmark for the full magneto-hydrodynamical treatment of the problem.
\acknowledgments
I am grateful to Dmitri Kharzeev for an informative discussion and comments on a draft version of this manuscript. I would like to thank Qun Wang for pointing out a mistake in Eq.~(62) in an earlier version of this paper.
This work was supported in part by the U.S. Department of Energy under Grant No.\ DE-FG02-87ER40371.
\section*{Preamble}
\noindent
This white paper summarizes the workshop ``U.S. Cosmic Visions: New Ideas in Dark Matter'' held at the University of Maryland from March 23-25. The flagships of the US Dark Matter search program are the G2 experiments ADMX, LZ, and SuperCDMS, which will cover well-motivated axion and WIMP dark matter over a range of masses. The workshop assumes that a complete exploration of this parameter space remains the highest priority of the dark matter community, and focuses instead on the science case for additional new small-scale projects in dark matter science that complement the G2 program (and other ongoing projects worldwide). It therefore concentrates on exploring distinct, well-motivated parameter space that will not be covered by the existing program; on surveying ideas for such projects (i.e. projects costing $\sim$\$10M or less); and on placing these ideas in a global context. The workshop included over 100 presentations of new ideas, proposals and recent science and R\&D results from the US and international scientific community.
\section*{Acknowledgements}
\section{Conclusions}
This whitepaper has summarized the science opportunities and experimental ideas presented at the workshop ``US Cosmic Visions: New Ideas in Dark Matter''. The dark matter candidates accessible to small projects span a wide range of possibilities, extending from the observational lower bound of $10^{-22} \,\mathrm{eV}$ particles up to $30 M_\odot$ black holes. Two particular areas of focus are \emph{ultralight (sub-keV) dark matter}, which behaves as a coherent (bosonic) field, and of which the QCD axion is a particularly motivated and well-known example; and \emph{hidden-sector dark matter}, neutral under Standard Model forces but interacting through a new force, with testable sharp targets in parameter space motivated by DM production mechanisms and anomalies in data. These two broad scenarios stand out as simultaneously well-motivated and accessible to small-scale experiments.
There is a broad and active community of physicists pursuing several new experimental directions: new direct detection experiments, ultralight (sub-eV) DM searches, accelerator-based searches for DM production, and small-scale experiments exploring anomalies in existing data that suggest new forces. Each of these techniques covers broad parameter regions with great sensitivity and decisively explores high-priority science targets; any one of them could revolutionize our understanding of Nature's dark sector. In addition, theory has played an essential role in developing new small-scale experiments and the connections among sub-fields on which many of these experimental techniques are based. The experimental approaches presented at the workshop are highly complementary --- each of the working groups has identified models for which particular techniques are uniquely sensitive, while in many other cases a combination of different experimental approaches is required to move from discovery to a physical understanding of dark matter. The breadth and importance of dark matter science therefore strongly motivate a portfolio of small experiments spanning all of the above techniques, as well as continued investment in theory.
\section*{Executive Summary}
Deciphering the fundamental nature of dark matter---its cosmological origin, its constituents, and their interactions---is one of the foremost open questions in fundamental science today with tremendous potential to deepen our understanding of the laws of Nature. The existing dark matter experimental program is focused primarily on weakly-interacting massive particles (WIMPs), which remain of great interest. At the same time, given the importance of dark matter, there is strong motivation to explore a broader set of dark matter candidates. Indeed, the 2014 P5 report calls out the importance of ``search[ing] for dark matter along every feasible avenue.''
In recent years, the field of dark matter has been characterized by the blossoming of many innovative ideas. New dark matter candidates have emerged that, like previous candidates, are highly motivated by beautiful theoretical results or experimental data, but are qualitatively different in their experimental implications. Most notably, some of these candidates can be explored by small experiments with short timescales, where a modest investment can have an outsize impact.
Two broad classes of dark matter models stand out as ripe for exploration:
\noindent {\em Hidden-Sector Dark Matter} candidates are completely neutral under Standard Model forces, but interact through a new force. The low-mass (sub-GeV) parameter space for hidden-sector dark matter is both important and beyond the reach of the existing program. Particularly well-motivated milestones in parameter space are derived from general production mechanisms, theory, and current experimental anomalies. Novel small-scale direct detection experiments, fixed-target experiments, and even astrophysical, nuclear, and atomic probes each have unique sensitivities to fully explore these milestones.
\noindent {\em Ultra-Light Dark Matter} candidates have masses from $10^{-22}$ eV to about a keV and can be produced during inflation or phase transitions in the very early Universe. A particularly motivated case is QCD axion dark matter, predicted by the axion solution to the strong CP problem, which defines an important milestone in coupling sensitivity as a function of mass. Much of this parameter space, including low-mass QCD axions, was thought to be completely inaccessible several years ago, but can now be explored by a suite of new experimental approaches.
The community has presented a diverse and innovative set of experimental proposals, including potential game-changers in the search for their respective dark matter candidates. Many exploit unique US-based facilities and/or expertise, and represent natural opportunities for US leadership in the field.
The proposals described in this document demonstrate the vibrancy of the dark matter community in universities and labs in the United States and around the world. Many of the new ideas presented here were spawned not by a programmatic approach to the dark matter problem, but by small groups developing ideas and technologies to tackle a variety of fundamental questions. In many cases, these proposals are the result of close collaboration between experimentalists and theorists and include researchers from disciplines outside of high energy physics, such as nuclear, atomic, and condensed matter physics.
We highlight five important directions (\emph{not} ordered by priority) for a small experiments program and continued innovation in dark matter (DM) physics:
\begin{itemize}
\item Low-threshold direct detection is an active field with a wealth of low-cost new and innovative ideas that can probe a variety of highly motivated Hidden-Sector and Ultralight DM candidates, and affords the only prospect to begin testing the tiny couplings associated with hidden-sector freeze-in through an ultralight mediator. It is important to pursue both DM-electron and DM-nuclear scattering experiments, as they have complementary sensitivities. Some proposals are ready for small-project-scale funding now, while several will be ready for small-project-scale funding within the next 1 to 2 years. In addition, the potential to lower energy thresholds by orders of magnitude motivates continued technology R\&D.
\item A suite of experiments using multiple technologies are required to explore the wide parameter space of light new-force carriers, and in particular the full mass range for QCD axions. The ADMX G2 experiment is currently exploring an exciting region of QCD axion mass range, and many new experimental approaches are in pilot phases now. Together with ADMX, next-generation experiments at the small-project scale can explore much of the highly motivated QCD axion parameter space over the next decade.
\item Accelerator experiments can both produce and detect new particles, such as dark matter and the particles mediating new interactions. This unique ability has enabled beam dump, missing mass/energy, and visible mediator search experiments to achieve world-leading sensitivity to highly sought-after dark matter scenarios. Building on these proven techniques and exploiting existing US accelerator facilities, a small number of fixed-target experiments can broadly explore sub-GeV dark matter and associated forces with sufficient sensitivity to test all predictive thermal DM scenarios. This focused effort is based on established detector technology, with a number of modest-cost proposals ready for funding now to achieve significant science in the next few years.
\item Existing data may already be pointing to dark sector physics. Anomalies in $(g-2)$ of the muon and in the properties of beryllium-8 nuclei provide tentative evidence for a new boson at the 10 MeV-scale that can be tested by nuclear and atomic spectroscopy experiments. The small-scale structure of dark matter halo distributions may be explained by dark matter self-interactions with 1-100 MeV mediators. LIGO's discovery of colliding black holes motivates micro-lensing probes of solar mass black hole dark matter. These puzzles each define sharp, highly-motivated targets that can be resolved by small investments in experiment, simulations, and theory. Typical timescales are 1 to 2 years and budgets are a small fraction of the small projects threshold.
\item Progress in theory has been the driving force behind recent developments in dark matter, particularly proposals for small-scale experiments and innovative connections to other subfields. Additional investments in theory are essential to exploit cosmological and astrophysical data to improve measurements of dark matter's particle properties and to develop the novel connections to nuclear, atomic, and condensed matter physics that have already been identified.
\end{itemize}
All of these directions are scientifically important, and they motivate a portfolio of multiple small experiments in dark matter, including experiments in direct detection, accelerator-based searches, searches for coherent-field dark matter, and broad investigations of dark-sector properties, as well as targeted investments in theory. A healthy dark matter research program should include both large- and small-scale efforts. This document illustrates both the breadth and the promise of small-scale opportunities in dark matter science, any one of which may lead to a breakthrough and transform our understanding of the cosmos.
\subsubsection{Thermal Relic Targets} \label{sssec:HS-thermal}
\input{HS-thermal}
\subsubsection{Targets from quasi-thermal DM production}
\input{HS-quasithermal}
\subsubsection{Light mediators and Freeze-in}\label{sssec:HS-freezein}
\input{HS-freezein}
\subsubsection{Further Opportunities in hidden-sector physics}\label{sssec:HS-opportunities}
\input{HS-opportunities}
\subsubsection{Benchmark Models of Hidden-Sector DM}
The observable signatures of hidden sector DM are dictated by the type of new force coupling the DM to familiar matter, and the nature of the DM coupling to this force.
\paragraph{Mediators and their SM Couplings}
A new force can be mediated by a vector or scalar boson, which may couple to the SM in a variety of ways. These interactions are usefully characterized by the following simplified models:
\begin{eqnarray}
{\cal L}_V & \supset & V_\mu \bar f (g^V_f \gamma^\mu + a^V_f \gamma^\mu \gamma^5) f \label{vectorSimplified} \\
{\cal L}_S & \supset & \bar f (g^S_f + \gamma_5 a^S_f) f \phi \label{scalarSimplified}
\end{eqnarray}
for (axial) vector mediator $V_\mu$ or (pseudo)-scalar mediator $\phi$.
The structure of the couplings $g_f$ and $a_f$ depends on how the mediator coupling to familiar matter arises. Two important special cases are the ``portals'' --- the unique renormalizable interactions of an SM-neutral boson compatible with all SM symmetries \cite{Galison:1983pa,Holdom:1985ag,Patt:2006fw}:
\begin{eqnarray}
\mathcal{L} \supset
\begin{cases}
-\tfrac{\epsilon}{2\cos\theta_W}\, B_{\mu\nu} F'^{\mu\nu} & \textrm{ vector portal} \quad \Rightarrow \quad g^V_f \approx \epsilon e q_f \label{vectorportal} \\
(\mu \phi + \lambda \phi^2) H^\dagger H & \textrm{ Higgs portal}\quad \Rightarrow \quad g^S_f = \mu m_f/m_h^2, \label{higgsportal}
\end{cases}
\end{eqnarray}
where $B_{\mu \nu},~F'_{\mu \nu}\equiv \partial_\mu A'_{\nu}- \partial_\nu A'_{\mu}$ are the hypercharge and dark $U(1)_D$ vector boson field strengths, $e q_f$ the electric charge of each SM particle,
$H$ is the Higgs doublet, $m_{f}$ the mass of fundamental fermion $f$, and $m_h$ the SM Higgs mass.
While these are justifiably emphasized as benchmark models, high-energy extensions of the Standard Model readily open up the more general parameter space of \eqref{vectorSimplified} and \eqref{scalarSimplified} --- for example, vector couplings to anomalous global symmetries of the SM like baryon or lepton number; chiral couplings with non-zero $a^V$ from $Z$-mixing or ``effective $Z^\prime$'' models; and pseudo-scalar couplings or enhanced first-generation scalar couplings from an extended Higgs sector.
It is natural for any of these couplings to be small enough to have escaped detection thus far, yet large enough to explain the primordial generation of dark matter. For example, loops of heavy particles of mass $M$ charged under both $U(1)_Y$ and the new $U(1)_D$ gauge group generate mixing at the level of $\epsilon \sim g' g_D/16\pi^2 \log(M/\Lambda)$, where $g'$ and $g_D$ are the $U(1)_Y$ and $U(1)_D$ charges respectively of the heavy particle, and $\Lambda$ is an ultraviolet cutoff. Assuming an $O(1)$ log and $g_D \sim g$ suggests $\epsilon \sim 10^{-3}-10^{-2}$. Enhanced symmetry of the fundamental theory (e.g. grand unification of SM forces) leads to an approximate cancellation so that the effective log is itself loop-suppressed, suggesting $\epsilon \sim 10^{-5}-10^{-3}$. Such couplings are in the natural ballpark suggested by thermal or quasi-thermal DM generation mechanisms. Even smaller couplings, as needed for DM freeze-in, can easily be generated, for example if $U(1)_D$ is also embedded in a non-Abelian group or is weakly coupled, if the coupling to ordinary matter is suppressed by an additional small mixing angle, or by non-perturbative effects.
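For orientation, the loop estimate can be evaluated numerically; the sketch below uses an illustrative weak-scale value $g'\approx 0.36$ and assumes $g_D\sim g'$ with an $O(1)$ logarithm (both assumptions are mine, for illustration only).

```python
import math

# Rough numbers behind the loop-induced kinetic mixing estimate
# eps ~ g' g_D/(16 pi^2) log(M/Lambda).  The values of g_D and the log
# are illustrative assumptions, not taken from the text.
g_prime = 0.36              # hypercharge coupling at the weak scale
g_D = 0.36                  # assume g_D ~ g'
log_factor = 1.0            # O(1) logarithm

eps = g_prime * g_D / (16.0 * math.pi**2) * log_factor
print(f"eps ~ {eps:.1e}")   # of order 1e-3, consistent with the estimate above
```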
We comment briefly on the status of model-independent constraints on the portal couplings:
\begin{itemize}
\item The \emph{Vector portal} is most constrained by muon and electron magnetic dipole moments for sub-GeV mediators \cite{Endo:2012hp,Davoudiasl:2012ig}, and by precision electroweak physics \cite{Hook:2010tw} for heavier mediators. These model-independent constraints are generally (and especially at low mediator masses) surpassed by those arising from searches for visible or invisible mediator decays, or from DM physics.
\item The proportionality of \emph{Higgs portal} couplings to particle masses implies strong constraints on these models from heavy meson decays, although some new territory can nonetheless be explored by proposed dark matter experiments (see e.g. \cite{Krnjaic:2015mbs}). It is also worth emphasizing that these constraints are very specific to the minimal model --- scalar portal mixing with a minimal SM Higgs --- and constraints directly on the first-generation couplings of \eqref{scalarSimplified} are many orders of magnitude weaker.
\item Another simple benchmark is the coupling to an SM global symmetry like baryon or lepton number. The resulting interactions of electrically neutral particles lead to additional constraints --- in particular, limits on $e-\nu$ scattering \cite{Bellini:2011rx,Izaguirre:2014dua} and low-energy neutron scattering data \cite{Barbieri:1975xy,Tulin:2013teo} set the tightest constraints on new bosons coupled to lepton and baryon number, respectively. Even so, searches for DM-electron and DM-hadron interactions explore regions allowed by these constraints over most of the relevant DM mass range.
\end{itemize}
In summary, the next generation of searches for DM interactions will probe viable and motivated parameter space for all the portal interactions. The viability of global symmetry couplings underscores the importance of separately exploring DM couplings to leptons and hadrons.
\paragraph{Coupling to the Dark Sector: a vector portal case-study}
Turning our attention to the dark sector, it is once again useful to introduce a simplified model. Focusing for concreteness on vector mediators (though analogous phenomenology arises for scalar mediators), we consider dark sector matter with mass structure
\begin{eqnarray}
\hspace{-0.2cm }-{\cal L} \supset m_D\eta \xi +\frac{m_\eta}{2} \eta \eta + \frac{m_\xi}{2} \xi\xi + \mathrm{h.c.}\quad \text{(fermion)},\\
\hspace{-0.2cm }-{\cal L} \supset \mu^2 \varphi^* \varphi + \frac{1}{2} \rho^2 \varphi \varphi +\mathrm{h.c.}\quad \text{(scalar)}.
\end{eqnarray}
where $\eta$ and $\xi$ are Weyl fermions with $U(1)_D$ charge $\pm g_D$ and $\varphi$ a complex scalar with $U(1)_D$ charge $g_D$.
While dark sectors can have much richer structure --- including for example confined or Higgsed non-Abelian gauge groups, or multiple kinematically accessible matter species (see Section \ref{WG4sec:models}) --- these simplified models encapsulate much of the phenomenology of the DM state itself.
In the above, $m_D$ and $\mu$ are $U(1)_D$-preserving mass terms and $m_\eta$, $m_\xi$, and $\rho$ are $U(1)_D$ breaking mass terms. Since $m_{A^{\prime}} \neq 0$ breaks the $U(1)_D$ symmetry, it is reasonable for all mass terms to be present giving rise to two dark-sector mass eigenstates with the $A^{\prime}$ primarily mediating an \emph{inelastic} transition between them. Alternately, residual discrete symmetries can lead to symmetry limits where the $A^{\prime}$-mediated transition is mass-diagonal: Dirac fermions ($m_{\eta,\xi}=0$) or complex scalars ($\rho=0$) charged under $U(1)_D$ or an axially coupled Majorana fermion ($m_D=0$). These distinctions significantly affect the DM phenomenology, especially at the low velocities relevant for direct and indirect detection --- for example, Majorana fermions have $p$-wave annihilation and direct detection cross-sections suppressed by a factor of $(q/m_\chi)^2$, where $q$ is the momentum transfer, relative to Dirac fermions or elastically scattering scalars. A detailed classification, including scalar mediators, can be found in e.g. \cite{Berlin:2015ymu}.
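The statement that the $A^{\prime}$ primarily mediates an inelastic transition can be made concrete with a small numerical example (my own illustration, with arbitrary masses, not taken from the whitepaper): diagonalizing the Majorana mass matrix in the $(\eta,\xi)$ basis and rotating the $U(1)_D$ charge matrix into the mass eigenbasis yields a dominantly off-diagonal coupling and a mass splitting set by the $U(1)_D$-breaking masses.

```python
import numpy as np

# Illustration (not from the whitepaper; masses are arbitrary): diagonalize the
# Majorana mass matrix of the (eta, xi) system and rotate the U(1)_D charge
# matrix Q = diag(+1, -1) into the mass basis.  For m_eta, m_xi << m_D the
# A' coupling is dominantly off-diagonal, i.e. it mediates chi_1 <-> chi_2.
m_D, m_eta, m_xi = 1.0, 0.05, 0.02

M = np.array([[m_eta, m_D], [m_D, m_xi]])
Q = np.diag([1.0, -1.0])

vals, O = np.linalg.eigh(M)                   # real symmetric -> orthogonal O
Q_mass = O.T @ Q @ O

# physical Majorana masses are |eigenvalues|; the splitting is m_eta + m_xi
splitting = abs(vals[1]) - abs(vals[0])
print("mass splitting:", splitting)
print("diagonal couplings:", np.diag(Q_mass))
print("off-diagonal coupling:", Q_mass[0, 1])
```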
Accelerator experiments searching for DM production are particularly robust probes of models with significant $U(1)_D$-breaking masses, which generally suppress direct detection cross-sections by either velocity factors or higher powers of $\epsilon$. On the other hand, models with small $U(1)_D$-breaking so that $m_{A^{\prime}} \ll m_{DM}$ can have dramatically enhanced direct detection cross-sections due to low momentum transfer. Therefore, a broad experimental program is required to search for hidden-sector DM.
\section{Introduction}
The evidence for dark matter comes from cosmological and astrophysical measurements in many different contexts and over a wide range of scales --- from the shape of the cosmic microwave background (CMB) power spectrum to cluster and galactic rotation curves and gravitational lensing. Yet all of these data are essentially gravitational, and therefore tell us little directly about the particle nature of dark matter. In particular, constituents of dark matter could be as light as $10^{-22} \,\mathrm{eV}$ or as heavy as $100 M_\odot$, and still be consistent with these observations. Deciphering the fundamental nature of the dark matter --- its cosmological origin, its constituents, and their interactions --- is one of the foremost open questions in basic science today. Answering this question involves synergies across multiple levels: between experimentalists and theorists, and between high energy physics and other disciplines, such as nuclear, atomic, and condensed matter physics.
The search for dark matter can be focused by putting it in the context of known cosmology and particle physics: how our Universe's cosmic history gives rise to the dark matter abundance, and how the Standard Model both informs and restricts the possibilities for dark matter particles' interactions. That these guiding questions have many possible answers is part of the reason why uncovering the particle nature of dark matter is so important, and so challenging; it also necessitates a multi-faceted program with different techniques optimized for different dark matter mass ranges and interactions.
The 2014 P5 report has called out the importance of a broad dark matter search program: ``It is imperative to search for dark matter along every feasible avenue,'' and the breadth of ``well-motivated ideas for what dark matter could be, [which] include weakly interacting massive particles (WIMPs), gravitinos, axions, sterile neutrinos, asymmetric dark matter, and hidden sector dark matter'' \cite{P5}. Some of these scenarios --- including (with some notable exceptions) WIMP, gravitino, and sterile neutrino DM --- are the purview of larger experiments, as reviewed for example in \cite{Feng:2014uja}.
But much of the well-motivated parameter space for dark matter \textbf{can be explored by small experiments in the near future}.
Two broad classes of dark matter model stand out as strongly motivated possibilities where small experiments can have an outsized impact:
\begin{itemize}
\item Dark matter in the vicinity of Standard Model scales includes WIMPs, which interact through SM forces, and also {\bf hidden-sector DM} --- dark matter that interacts through a new force (sometimes called a ``dark sector''). They share common motivations --- in both cases, the thermal history of the Universe and the couplings to familiar matter play key roles in generating the observed DM abundance. Hidden-sector DM is viable over a wider mass range than WIMPs. The parameter space below the GeV scale is largely invisible to WIMP searches, but presents multiple opportunities for new experiments. For example, low-threshold direct detection experiments can explore well-motivated parameter space even with gram-scale target masses, and exploit a unique kinematic enhancement at low mediator masses to start exploring the very weakly coupled ``freeze-in'' scenarios. Accelerator-based experiments offer a robust probe of sub-GeV hidden-sector DM produced through thermal freeze-out, fully exploring the most predictive models. Moreover, the DM physics in these hidden-sector models also encompasses other effects of the new force, such as DM self-interactions and new reactions of ordinary matter.
\item {\bf Ultra-light dark matter}, in the mass range from $10^{-22}$~eV to about a keV, includes scalar, pseudo-scalar, and vector boson DM produced during inflation or a high-temperature phase transition. In most of this parameter space, the DM acts as an oscillating classical field, whose coupling to matter can be detected by a variety of precision experiments. The QCD axion solution to the strong CP problem motivates axion dark matter, and provides a well-defined target in the plane of coupling sensitivity versus dark matter mass. Much of the axion parameter space was believed to be inaccessible several years ago, but the development of several new experimental techniques and advances in detector capabilities open the possibility of searching a large portion of this space in the near future.
\end{itemize}
These two frameworks, as well as specific production mechanisms within each framework, experimental anomalies, and search techniques, are shown in Fig.~\ref{fig:SummaryPlot}.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.98\textwidth]{figs/SummaryPlot.pdf}
\caption{Mass ranges for dark matter and mediator particle candidates, experimental anomalies, and search techniques described in this document. All mass ranges are merely representative; for details, see the text. The QCD axion mass upper bound is set by supernova constraints, and may be significantly raised by astrophysical uncertainties. Axion-like dark matter may also have lower masses than depicted. Ultralight Dark Matter and Hidden Sector Dark Matter are broad frameworks. Mass ranges corresponding to various production mechanisms within each framework are shown and are discussed in Sec.~\ref{sec:sciencecase}. The Beryllium-8, muon $(g-2)$, and small-scale structure anomalies are described in Sec.~\ref{WG4sec:WG4}. The search techniques of Coherent Field Searches, Direct Detection, and Accelerators are described in Secs.~\ref{sec:WG2experiments}, \ref{sec:WG1experiments}, and \ref{sec:WG3experiments}, respectively, and Nuclear and Atomic Physics and Microlensing searches are described in Sec.~\ref{WG4sec:WG4}.}
\label{fig:SummaryPlot}
\end{center}
\end{figure}
This white paper summarizes the workshop ``U.S. Cosmic Visions: New Ideas in Dark Matter'' held at the University of Maryland on March 23--25. The workshop focused on the science case for new small-scale projects in dark matter science, on surveying ideas for such projects (i.e.~projects costing $\sim$ \$10M or less) in the U.S. Dark Matter search program, and on placing these ideas in a global context. The workshop included over 100 presentations of new proposals and recent results from the US and international scientific community.
This report parallels the structure of the workshop. Section \ref{sec:sciencecase} summarizes the science case for exploring dark matter parameter space, as well as high-value parameter-space targets that were highlighted at the workshop and the prospects for testing them. The following sections are organized, parallel to the structure of the workshop, around four working groups:
\begin{itemize}
\item {\bf New Avenues in Direct Detection} (Section \ref{sec:WG1experiments}), including spin-dependent WIMP scattering and many new ideas aimed at direct detection of dark matter lighter than traditional WIMPs, from several GeV down to the meV scale,
\item {\bf Detection of Ultra-Light (sub-eV) Dark Matter} (Section \ref{sec:WG2experiments}), exploiting several physical effects to search for coherent-field effects of dark matter ranging from $10^{-22}$ to 1 eV, including QCD axions,
\item {\bf Dark Matter Production at Fixed Target and Collider Experiments} (Section \ref{sec:WG3experiments}), searching for sub-GeV dark matter (and related new forces) with small-scale fixed-target experiments to a level of sensitivity motivated by models of light thermal Dark Matter, and
\item {\bf New Candidates, Targets, and Complementarity} (Section \ref{WG4sec:WG4}), surveying new theoretical models, high-value regions of parameter space motivated by existing experimental anomalies and theoretical ideas, the interplay of dark matter searches from different subfields and the complementarity of proposed small-scale experiments with the data expected from the existing program.
\end{itemize}
There is a broad and active community of physicists pursuing these new directions, and developing experiments that, taken together, cover broad parameter regions with great sensitivity and decisively explore several high-priority targets.
The experimental approaches presented at the workshop are highly complementary --- each of the working groups has identified models for which particular techniques are uniquely sensitive, while in many other cases a combination of different experimental approaches is required to move from discovery to a physical understanding of dark matter.
\subsubsection{The QCD Axion}
The best known example of light bosonic DM is the QCD axion, a well-motivated candidate because it can solve the long-standing strong CP problem~\cite{Peccei:1977hh, Peccei:1977ur,Weinberg:1977ma, Wilczek:1977pj}, explaining the puzzle of the vanishing neutron electric dipole moment. It also simultaneously guarantees the production of dark matter at some abundance through a natural production mechanism of vacuum relaxation~\cite{Preskill:1982cy, Dine:1982ah, Abbott:1982af}. Axions and axion-like particles are generic in many UV theories (see for example \cite{Svrcek:2006yi}) and they may also be related to the electroweak hierarchy problem \cite{Graham:2015cka}.
The QCD axion model is quite economical, requiring only a single parameter -- a high mass scale $f_a> 10^9$~GeV at which a postulated new global U(1) ``Peccei-Quinn" symmetry is broken, resulting in a massless Nambu-Goldstone boson -- the axion -- living in the trough of a Mexican Hat potential. During the QCD phase transition, the defining axion-gluon coupling causes the trough of the potential to become tilted by an amount of potential energy density of approximately $\Lambda_{QCD}^4$ when the QCD instantons condense to define the QCD vacuum. The axion field rolls to the bottom of the tilted potential and zeroes out any pre-existing QCD CP-violating ``theta" angle. Simultaneously, the initial potential energy density is released as ultracold dark matter -- excitations about the new potential minimum whose second derivative determines the tiny axion mass $m_a \approx \Lambda_{QCD}^2/f_a$. Meanwhile, all couplings to standard model particles are suppressed by $f_a$ and are determined up to a constant factor of order unity. Axion search experiments typically use the axion mass $m_a$ as the single free model parameter, and aim to cover a range between the ``KSVZ" coupling strength \cite{Kim:1979if, Shifman:1979if} and the ``DFSZ" coupling strength~\cite{Dine:1981rt, Zhitnitsky:1980he} which is around a factor of 3 weaker.
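The parametrics above can be checked numerically. The following sketch is illustrative only: the choice $\Lambda_{QCD} \sim 0.2$~GeV and the dropped $\mathcal{O}(1)$ prefactor are rough assumptions, not the lattice-calibrated relation, so the results are order-of-magnitude estimates.

```python
# Order-of-magnitude sketch of the axion mass relation m_a ~ Lambda_QCD^2 / f_a
# (natural units). The choice Lambda_QCD = 0.2 GeV and the dropped O(1)
# prefactor are illustrative assumptions, not the lattice-calibrated relation.

LAMBDA_QCD_GEV = 0.2  # assumed QCD scale in GeV

def axion_mass_eV(f_a_GeV):
    """Rough axion mass in eV for a Peccei-Quinn scale f_a given in GeV."""
    m_a_GeV = LAMBDA_QCD_GEV**2 / f_a_GeV
    return m_a_GeV * 1e9  # convert GeV to eV

# Scan from the astrophysical lower bound on f_a up to the Planck scale:
for f_a in (1e9, 1e12, 1e16, 1e19):
    print(f"f_a = {f_a:.0e} GeV  ->  m_a ~ {axion_mass_eV(f_a):.0e} eV")
```

For $f_a$ between $10^9$~GeV and the Planck scale, this reproduces the roughly $10^{-2}$--$10^{-12}$~eV mass window discussed in the text.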
The QCD axion is allowed to lie in the mass range of roughly $10^{-12}$~eV to $10^{-2}$~eV (corresponding to experimental frequencies 250~Hz - 2.5~THz). The lower bound arises from requiring that $f_a$ not exceed the Planck scale. The upper bound comes from the neutrino pulse observed from SN1987A having a duration consistent with supernova cooling primarily via neutrino emission, thus placing a bound on the axion-nucleon coupling and immediately constraining all phenomenological features of the single-parameter QCD axion model. However, given astrophysical uncertainties as well as the limited statistics of a single supernova event, one may also obtain a more conservative upper bound of 1~eV axion mass from hot dark matter limits.
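The quoted frequency window follows directly from the relation $\nu = m_a c^2/h$. A minimal conversion sketch (CODATA constants; the helper name is an illustrative choice):

```python
# Sketch: convert a boson mass (eV) to the oscillation frequency nu = m c^2 / h
# of the corresponding coherent dark matter field. Constants are CODATA values.

E_CHARGE = 1.602176634e-19  # C (so 1 eV = 1.602...e-19 J)
H_PLANCK = 6.62607015e-34   # J s

def mass_eV_to_freq_Hz(m_eV):
    """Oscillation frequency in Hz for a bosonic DM candidate of mass m_eV."""
    return m_eV * E_CHARGE / H_PLANCK

# The quoted QCD axion window, 1e-12 eV to 1e-2 eV:
print(f"{mass_eV_to_freq_Hz(1e-12):.3g} Hz")  # ~242 Hz (quoted as ~250 Hz)
print(f"{mass_eV_to_freq_Hz(1e-2):.3g} Hz")   # ~2.4e12 Hz (~2.5 THz)
```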
The QCD axion model has an intricate interplay with models of cosmic inflation, and discoveries in either field immediately inform the physics of the other. For example, the amount of initial potential energy density $\Lambda_{QCD}^4 \sin^2{(\theta_0/2)}$ to be released as axion dark matter depends on the random initial value $\theta_0$ of the strong CP-violating angle to be zeroed out by the rolling axion field. In models in which the Peccei-Quinn phase transition occurs after cosmic inflation, many topological domains of different $\theta_0$ form and are contained within our cosmological horizon. The energy density released as dark matter, averaged over all domains, is then well determined -- $\Lambda_{QCD}^4 \times 1/2$. The axion vacuum relaxes to its minimum at the cosmological time when $1/(3H) \approx 1/m_a$, during the radiation-dominated era when the photon density is rapidly red-shifting away. Since vacuum energy does not red-shift, small values of $m_a$ would delay the release of this energy as dark matter for too long, giving too large an axion/photon ratio and thus overproducing dark matter. Another complication is that topological features such as domain walls and cosmic strings can form, thus temporarily stabilizing the vacuum energy density and further delaying its release. Assuming equal contributions to dark matter production from vacuum relaxation and from topological defect decay, recent lattice calculations estimate that $m_a \lesssim 10$--$50 \ \mu$eV would not be compatible with this post-inflationary axion scenario~\cite{Dine:2017swf,Berkowitz:2015aua,diCortona:2015ldu,Borsanyi:2016ksw}.
The alternative pre-inflationary scenario is one in which the Peccei-Quinn symmetry is broken prior to inflation so that the initial $\theta_0$ is single-valued throughout our cosmological horizon and nothing then disallows a small value of $\sin^2{(\theta_0/2)} \ll 1/2$ to occur by chance. The much smaller amount of initial vacuum energy could then be released later in cosmic time without overproducing dark matter, thus allowing lower axion masses. This scenario includes the parameter space at large $f_a$ near the GUT or Planck scale which is preferred by string theory~\cite{Svrcek:2006yi}. Cosmic inflation also sources a spectrum of axionic excitations resulting in a potentially observable CMB isocurvature power spectral density scaling as $(H_I/f_a)^2$ where $H_I$ is the Hubble scale during inflation. Constraints on isocurvature then constrain this ratio of inflation and Peccei-Quinn scales~\cite{Fox:2004kb,Hertzberg:2008wr,Hamann:2009yf}.
If a low mass axion with $m_a \lesssim 10$--$50 \ \mu$eV is discovered, then this immediately implies the pre-inflationary axion scenario. The upper bounds on isocurvature then constrain $H_I$ to a scale too low to produce any potentially observable CMB B-modes (with primordial gravitational wave spectral density $(H_I/M_p)^2$) and dark matter axion studies would become the primary tool to probe cosmic inflation. Conversely, if CMB B-modes are discovered first, then only the post-inflationary axion scenario remains viable, the low axion mass window is closed, and dark matter axion searches should be focused on higher masses.
\subsubsection{General phenomenology of sub-meV mass bosonic dark matter (including axions)}
Since we do not know the nature of dark matter, it is important to look broadly to cover all candidates in this entire mass range from $10^{-22}$ eV to 1 keV.
While non-relativistic cold dark matter of any form has very small kinetic broadening, low mass bosonic dark matter particles act collectively as a coherently oscillating semiclassical wave with high mode occupation number. For masses less than a few milli-eV corresponding to signal frequencies less than THz, this property can be used to detect bosonic dark matter via novel experimental techniques targeting continuous wave signals rather than impulse detection. In many cases, these experimental techniques have been well-developed in other fields of physics and had not previously been applied to the problem of dark matter detection. Cost-effective experiments are therefore possible which can quickly explore new parameter space.
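The claim of high mode occupation can be made quantitative with a back-of-the-envelope estimate, $N \sim (\rho_{DM}/m) \times \lambda_{dB}^3$. The sketch below assumes the standard local halo density $\rho_{DM} \approx 0.4$~GeV/cm$^3$ and virial velocity $v \sim 10^{-3}c$; both inputs and the helper name are illustrative.

```python
# Rough estimate of the mode occupation number of bosonic dark matter:
# N ~ (local number density) x (de Broglie volume). The local halo density
# ~0.4 GeV/cm^3 and virial velocity ~1e-3 c are illustrative assumptions.

EV_TO_KG = 1.78266192e-36   # mass of 1 eV/c^2 in kg
H_PLANCK = 6.62607015e-34   # J s
C = 2.99792458e8            # speed of light, m/s
RHO_DM = 0.4e9 * 1e6        # local DM density in eV per m^3 (0.4 GeV/cm^3)

def occupation_number(m_eV, v_over_c=1e-3):
    """Approximate phase-space occupation for bosonic DM of mass m_eV."""
    n = RHO_DM / m_eV                   # number density, m^-3
    p = m_eV * EV_TO_KG * v_over_c * C  # momentum, kg m/s
    lam = H_PLANCK / p                  # de Broglie wavelength, m
    return n * lam**3

print(occupation_number(1e-6))  # micro-eV axion: of order 1e30
```

The steep $m^{-4}$ scaling is why the coherent-field description holds for sub-meV masses but fails toward the keV end of the bosonic range.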
These direct detection experiments rely on coupling the coherently oscillating dark matter field to Standard Model (SM) particles via four basic types of operators:
\begin{enumerate}
\item {\bf Electromagnetism}: This coupling allows dark matter to transfer energy into electromagnetic fields to be detected via photon, voltage, or flux sensors. For example, the well-established haloscope technique~\cite{Sikivie:1983ip} uses a microwave cavity to resonantly enhance the transfer of power from the incoming axion or hidden photon dark matter beam into electromagnetic modes.
\item{\bf QCD}: This gluon coupling gives a time-oscillating electric dipole moment (EDM) for nucleons which can be detected via nuclear magnetic resonance (NMR) techniques.
\item {\bf Spins of fermions} (either electrons or nucleons): These couplings cause the spins of electrons or nucleons to precess which can again be detected via NMR or electron spin resonance.
\item {\bf Scalar couplings}: These couplings can give a force directly on SM particles, or can affect fundamental constants such as SM particle masses or charges. For example these are couplings to a fermion's mass (without a $\gamma_5$) or a coupling to a gauge boson's kinetic term. Any precision measurement sensitive to small forces can potentially be modified to search for anomalous AC signal modulations.
\end{enumerate}
It is desirable to have a variety of experiments to probe all of these possible couplings. First, using different couplings of the dark matter leads to very different and highly complementary detection techniques that can together allow searches through much more of parameter space than would otherwise be possible. Second, we do not know what couplings the dark matter has, so it is important to probe all possibilities as broadly as possible. Third, if dark matter is discovered in one of these experiments it will be extremely important to detect it with a different technique, both for confirmation and because it is crucial to measure as many couplings as possible in order to learn as much as possible about the dark matter model. Such light dark matter often arises from physics at very high energies. Measuring the mass and couplings of this particle would be in many cases the only way to study such high energy scales experimentally; interesting scales such as the Planck, GUT, or string scales are far beyond what can be accessed in a collider. Finally, as with any type of dark matter, it is of critical importance to follow up the direct detection signal with an accelerator or fifth-force type experiment to directly measure the couplings in order to disentangle them from the uncertainty in the intensity of the dark matter flux. These laboratory experiments would presumably be easier to design, once armed with knowledge of the dark matter mass and coupling scale.
The high temporal and spatial coherence of the collectively oscillating modes of bosonic dark matter also leads immediately to some interesting follow-up studies. Because the experiments often rely on narrowband resonant detectors which must be tuned to the signal, they are usually designed to be able to reproduce the signal on very short time scales of minutes to hours. So blind analyses need not be used, since a new, independent dataset can be immediately acquired. Moreover, by simply integrating longer before Fourier transforming the time-series signal, the energy spectrum of the dark matter can be measured with higher frequency resolution. This allows the substructure of the dark matter velocity distribution to also be quickly measured with the same detector, as well as its annual modulation. Finally, the high spatial coherence of the bosonic wave on scales of order the de~Broglie wavelength allows the use of multiple identical but spatially separated detectors to map out the local wavefront of the dark matter and hence to determine its local phase space distribution. These studies can indicate whether the dark matter is fully virialized or if there is substructure due to recent galaxy merger activity.
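The frequency-resolution argument reduces to a simple rule of thumb: a record of length $T$ yields Fourier bins of width $\sim 1/T$, while a virialized signal at frequency $f_0$ has a fractional linewidth $\sim v^2/c^2 \sim 10^{-6}$. A toy sketch (all numbers and helper names are illustrative, not a detector model):

```python
# Toy sketch of why longer integration sharpens the DM energy spectrum:
# a discrete Fourier transform of a record of length T has frequency
# resolution ~1/T, while a virialized DM signal at frequency f0 has a
# fractional linewidth ~v^2/c^2 ~ 1e-6. All numbers are illustrative.

def resolution_Hz(T_seconds):
    """Frequency resolution of a Fourier transform over integration time T."""
    return 1.0 / T_seconds

def resolves_dm_lineshape(f0_Hz, T_seconds, frac_width=1e-6):
    """True if the DM linewidth ~frac_width*f0 exceeds the bin width."""
    return frac_width * f0_Hz > resolution_Hz(T_seconds)

f0 = 1.0e6  # a ~1 MHz signal (order of magnitude for a ~few-neV boson)
print(resolves_dm_lineshape(f0, T_seconds=0.1))    # 0.1 s: 10 Hz bins, too coarse
print(resolves_dm_lineshape(f0, T_seconds=100.0))  # 100 s: 0.01 Hz bins, resolved
```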
Furthermore, all these light fields are produced or influenced by cosmic inflation, and their discovery can provide valuable information on the inflationary sector and hence the earliest times in the universe. As discussed above, a measurement of the axion mass can provide critical information on the scale of inflation. As another example, vector dark matter (hidden or dark photons) is directly produced through quantum fluctuations during inflation and, for high-scale inflation, would naturally be predicted to have a mass in the range that can be searched for in many of these experiments~\cite{Graham:2015rva}. If this vector dark matter is detected, then its power spectrum can be measured, giving a confirmation of this production mechanism and a determination of the scale of cosmic inflation.
Detectors for this ultralight dark matter often rely on very high precision experimental techniques that have a wide range of broader impacts. On the fundamental physics side other applications for these sensors include searching for new forces of nature, violations of the equivalence principle, and detecting gravitational waves. There are also more practical applications including geological mapping, inertial navigation, and a connection with quantum information.
These high-precision sensor technologies are complementary, probing different couplings and covering complementary mass ranges. Excitingly, experiments now appear able to cover this entire mass range, as discussed in Section~\ref{sec:WG2experiments}.
\subsubsection{Bosonic dark matter from meV-keV}
The same considerations as above apply to meV--keV bosonic dark matter, but in this mass range even the fastest THz electronics cannot resolve the collectively oscillating signal, and micro-calorimetric techniques must be used for detection of individual particle scattering processes. For this ultra-low-threshold impulse detection, it has been shown that coherent modes in the detector target material ({\em i.e.} phonons) can be utilized. Bosonic dark matter may be absorbed on a target electron in a superconductor through single-phonon emission \cite{Hochberg:2016ajh}, or in a semiconductor through single \cite{Hochberg:2016sqx,Bloch:2016sjj} or multiple \cite{Hochberg:2016sqx} phonon emission (see Section~\ref{sec:WG1experiments}).
The advantage here is that the bosonic dark matter particle can be absorbed onto a fermion line and transfer energy equal to its entire mass, whereas the same microcalorimeter detecting fermionic dark matter can only absorb the recoil kinetic energy which is at most $10^{-6}$ of the dark matter rest mass. The experiments capable of detecting dark matter through absorption over the meV - keV mass range are the same as those searching for keV - GeV mass dark matter via scattering discussed in Section~\ref{sec:WG1experiments}.
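This kinematic contrast is simple arithmetic: absorption deposits the full rest-mass energy, while elastic scattering deposits at most the halo kinetic energy $\sim \frac{1}{2} m v^2$ with $v \sim 10^{-3}c$. A hedged sketch (the helper names are illustrative):

```python
# Sketch of the energy deposited in a micro-calorimeter by absorption versus
# elastic scattering of a DM particle of mass m (eV units; the halo velocity
# v ~ 1e-3 c and the helper names are illustrative assumptions).

def absorbed_energy_eV(m_eV):
    """Absorption deposits essentially the full rest-mass energy."""
    return m_eV

def recoil_energy_eV(m_eV, v_over_c=1e-3):
    """Elastic scattering deposits at most the kinetic energy ~ (1/2) m v^2."""
    return 0.5 * m_eV * v_over_c**2

m = 1.0  # a 1 eV bosonic DM candidate
print(absorbed_energy_eV(m))  # full rest mass: 1 eV
print(recoil_energy_eV(m))    # kinetic energy only, ~5e-7 of the rest mass
```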
\subsubsection{Ultralight DM}
In the case of ultra-light bosonic dark matter, the QCD axion remains one of the best motivated dark matter candidates. While axions provide perhaps the simplest solution to the strong-CP problem, these models also inevitably produce dark matter via release of their initial potential energy density. Axion dark matter searches and cosmic microwave background experiments provide complementary probes of inflation; a measurement of the axion mass by detection of the dark matter beam would also immediately determine or constrain the energy scale of cosmic inflation. Recent phenomenology has also indicated a possibly rich interplay between QCD axions and the electroweak hierarchy problem.
\begin{figure}[htbp]
\includegraphics[width=\textwidth]{figs/sketchUL.pdf}
\caption{\label{fig:ULcomplementarity}Schematic illustration of the complementarity of different types of experiments in exploring QCD axion DM and ultralight DM more generally. The horizontal axis illustrates the observationally allowed mass range for ultralight DM, with an arrow highlighting the viable mass range for the QCD axion specifically. Indicative ranges of sensitivity for different techniques are illustrated by dark blue arrows for coherent field, new-force, and X-ray helioscope techniques (see Sec.~\ref{sec:WG2experiments}), while a red arrow indicates the range of DM masses that can be explored by absorption in direct-detection experiments (see Sec.~\ref{sec:WG1experiments}).}
\end{figure}
Several distinct techniques are proposed to search for the QCD axion (see Sec.~\ref{sec:WG2experiments}), many of which search for a coherent field induced by the axion DM, which oscillates at a frequency $\omega = m_a c^2/\hbar$ for axion mass $m_a$. The large range of viable QCD axion masses, from $10^{-12}$ to $10^{-2}$ eV, implies a correspondingly large range of frequencies for possible DM signals. No one technique can cover this wide range of frequencies. Searching for QCD axion DM over the full parameter space requires a suite of techniques, such as cavity resonators (including ADMX G2) at high frequencies, lumped-element resonators at medium frequencies, and nuclear magnetic resonance at low frequencies.
Searches for the QCD axion, including continuation of the current ADMX generation 2 experiment, should be given high priority in any future dark matter program. However, it should be noted that more general scalar, pseudoscalar, or vector dark matter models over an even wider range of masses (from $10^{-22}$ eV to $10^3$ eV) are also well motivated. Similarly cost-effective experimental techniques have been identified that would cover these possibilities. Many (see Sec.~\ref{sec:WG2experiments}) import non-traditional detector technology that has nonetheless been well developed in other fields of physics, including atom interferometry, nuclear magnetic resonance, and fifth force measurements. Large improvements in sensitivity to low mass bosonic dark matter can and should be quickly obtained by engaging these other communities in cross-disciplinary collaborations. Meanwhile, the same direct detection experiments that can search for sub-GeV hidden-sector DM (see Sec.~\ref{sec:WG1experiments}) can also be used to search for absorption of ultralight DM particles in the heavier part of its allowed mass range, from meV to keV scales, where THz-scale frequencies for the oscillating DM field make field coherence harder to exploit. The importance of a multi-experiment program to explore these models comprehensively is illustrated in Figure \ref{fig:ULcomplementarity}.
\subsubsection{Hidden-Sector DM}
In contrast, searches for hidden-sector DM are primarily exploring a more focused DM mass range from a keV to several GeV. However, there are very important differences between the sensitivities of different experiments. These differences can be loosely classified along three directions: the difference between relativistic and non-relativistic probes of DM interactions, the importance of probing DM-DM and SM-SM (in addition to DM-SM) interactions, and experimental signals' dependence on the precise nature of mediator interactions with familiar matter. We discuss each of these points in turn below and illustrate them graphically in Figure \ref{fig:hiddenComplementarity}.
\begin{figure}[htbp]
\includegraphics[width=0.8\textwidth]{figs/sketchHS3.png}
\caption{\label{fig:hiddenComplementarity}Schematic illustration of the complementarity of different types of experiments in exploring sharp targets and general regions of interest for hidden-sector DM. Anomalies in data (see Section \ref{sssec:HS-opportunities}) highlight regions of interest in mediator mass and/or coupling to visible or dark matter; the red arrows highlight the suggested regions of mediator mass. Blue horizontal arrows for production mechanisms (see Sections \ref{sssec:HS-thermal}-\ref{sssec:HS-freezein}) indicate the parameter regions over which they are viable (dashed), regions in which they motivate a sharp parameter-space target (solid arrow), and, in the case of asymmetric DM, a ``natural'' range where the DM and baryon number densities are comparable (thick band). Blue and red vertical arrows highlight directions in ``theory space'' that have significant impact on detection strategies, while the green vertical arrows indicate the models to which different experimental approaches are most sensitive. Direct detection is discussed in Section \ref{sec:WG1experiments}, accelerator-based experiments in Section \ref{sec:WG3experiments}, and cosmology and nuclear and atomic physics probes in Section \ref{WG4sec:WG4}.}
\end{figure}
In general, accelerator searches (Sec.~\ref{sec:WG3experiments}) explore the relativistic production and/or interactions of DM candidates, while direct detection experiments (Sec.~\ref{sec:WG1experiments}) search for the scattering of DM in the Milky Way halo off matter, with relative velocity $\sim 10^{-3} c$. The effect of this kinematic difference is that well-motivated scenarios 10--20 orders of magnitude beyond the reach of one technique are accessible to the other. For example, thermal freeze-out of hidden sector DM via a mediator coupled to familiar matter (the ``direct'' channel) represents a precise target of interest. For elastically scattering scalar DM charged under a new force, most of the sub-GeV parameter space for this scenario can be explored by the next generation of both accelerator and direct detection experiments. If instead the DM is axially coupled (as a Majorana fermion must be) or scatters inelastically, then direct detection rates are suppressed by anywhere from 6 to 18 orders of magnitude, while accelerator production rates are within one to two decades. Therefore, while both techniques can explore this possibility, only accelerators are able to do so robustly. The converse is true if the mediator of DM-SM scattering is much lighter than the DM itself. In this case, direct detection rates are parametrically enhanced by up to 12 orders of magnitude, because of their low momentum transfer. This opens the possibility of testing the idea that the DM abundance ``freezes in'' through DM and SM interactions with a very light mediator, which would be too weakly coupled to be seen at accelerators.
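The parametric enhancement for light mediators can be traced to the mediator propagator in the non-relativistic scattering cross-section (a schematic scaling argument, not a full rate computation):
\begin{equation}
\sigma \propto \frac{1}{\left(q^2 + m_{A^{\prime}}^2\right)^2} \;\rightarrow\;
\begin{cases}
1/m_{A^{\prime}}^4, & m_{A^{\prime}} \gg q, \\[2pt]
1/q^4, & m_{A^{\prime}} \ll q,
\end{cases}
\qquad q \sim \mu_{\chi T}\, v \sim 10^{-3}\,\mu_{\chi T},
\end{equation}
where $\mu_{\chi T}$ is the DM--target reduced mass. For $m_{A^{\prime}} \ll q$, the low momentum transfer of halo scattering enhances direct detection rates, while the correspondingly small coupling suppresses production at accelerators.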
It may be that the new force at the heart of hidden sector DM is most readily explored, not through DM-SM interactions as discussed above, but through the new physics it induces within the DM or visible sector. These possibilities imply considerable synergy with disciplines including astrophysics, cosmology, and nuclear, atomic, and condensed matter physics. Indeed, there are existing experimental anomalies that may be pointing towards dark sector physics; testing these is a crucial piece of the search for dark matter and one of the recurring themes of Sec.~\ref{WG4sec:WG4}. For example, dark matter self-interactions have been suggested as an explanation of puzzles in small-scale cosmological structure. Small investments in simulations and astroparticle theory can leverage the enormous amount of cosmological data already being collected to shed light on these puzzles, providing not just {\em constraints} on dark matter candidates, but {\em measurements} of the properties of dark matter. Another sharp motivation for small experiments is the $^8$Be anomaly, a possible signal of a new force interacting with nuclei and electrons. The $^8$Be anomaly strongly motivates proposed followup nuclear experiments that are fast (under 2 years) and cheap (a small fraction of the small projects threshold), as well as isotope shift spectroscopy experiments and accelerator searches for new bosons with masses $\sim 10$ MeV and electron couplings $\varepsilon \sim 10^{-4} - 10^{-3}$. It is quite intriguing that new models, astrophysical observations, and existing experimental anomalies point to the 1 to 100 MeV mass scale as a high-value target region for dark matter and dark mediator searches.
We also highlight the value of exploring both electron and nucleon couplings to DM and dark forces. While the most widely used benchmark model --- a kinetically mixed dark photon --- couples equally to electrons and protons, other possibilities would have one of these couplings much larger than the other (for example, a scalar mediator with mass-proportional couplings or a vector mediator coupled primarily to baryon number, lepton number, or another combination of charges). This provides motivation for a program of small direct-detection experiments that includes both proton and electron recoil experiments, and for an accelerator-based program that includes both electron- and proton-beam experiments to maximize sensitivity. Similarly, ultralight DM can have a variety of couplings and there is good motivation to search for all of them, even if experiments cover overlapping mass ranges.
It is just as important to emphasize that, even when they probe the same DM candidates, different kinds of experiments offer complementary information about the DM. For example, a discovery of a new particle at an underground direct-detection experiment would constitute strong evidence that such a particle constitutes all or at least part of the DM, but cannot disentangle the particle's couplings from its abundance. In contrast, accelerator-based experiments provide a clean probe of the particle's properties, but cannot establish its stability on cosmological timescales.
\subsubsection{Further Motivations and Opportunities}
While the science case for many of these opportunities falls nicely into either ultralight or hidden-sector DM, these are certainly not the limits of well-motivated possibilities. For example, the LIGO discovery of gravitational waves from colliding black holes has renewed interest in multi-solar-mass primordial black hole DM. The LIGO observation sharply motivates a proposed microlensing search (see Sec.~\ref{WG4sec:WG4}) that can confirm or exclude the possibility of intermediate mass black hole dark matter using existing facilities with minimal funding.
The hunt for dark matter now crosses multiple frontiers and benefits from vibrant communication between many subdisciplines of physics, including astrophysics, cosmology, and nuclear, atomic, and condensed matter physics. Innovations springing from these collaborations have created a wealth of new ideas that can be explored by inexpensive experiments.
Healthy support for theory is essential to maintaining the flow of creative and cross-disciplinary ideas that have been seen in recent years, and which may finally unmask the particle identity of dark matter.
\section{Science Case for a Program of Small Experiments}\label{sec:sciencecase}
Given the wide range of possible dark matter candidates, it is useful to focus the search for dark matter by putting it in the context of what is known about our cosmological history and the interactions of the Standard Model, by posing questions like: What is the (particle physics) origin of the dark matter particles' mass? What is the (cosmological) origin of the abundance of dark matter seen today? How do dark matter particles interact, both with one another and with the constituents of familiar matter? And what other observable consequences might we expect from this physics, in addition to the existence of dark matter? Might \emph{existing} observations or theoretical puzzles be closely tied to the physics of dark matter? These questions have many possible answers --- indeed, this is one reason why understanding the particle nature of dark matter is so important --- with each case pointing to a different range of dark matter properties and hence motivating different techniques to search for dark matter particles.
The WIMP hypothesis that has dominated the DM search program to date offers a good example of many of these motivations: new, weakly interacting particles at or above the weak scale are predicted in many models that address the electroweak hierarchy problem. In this mass range, the weak interactions with familiar matter can naturally explain their abundance. While WIMP dark matter remains highly motivated, several factors motivate a broader approach to the dark matter question. Significant parameter space for both WIMPs and the supersymmetric models that realize WIMP dark matter has been explored by recent advances in direct detection, indirect detection, and LHC searches. New experimental ideas have opened prospects to explore the full viable mass range for axion DM, another long-standing DM candidate motivated by the strong CP problem. Meanwhile, experimental anomalies are suggestive of substantial new interactions between DM particles and of new forces very weakly coupled to ordinary matter. In parallel with (and often spurred by) these experimental developments, theoretical progress has underscored that both WIMP and axion dark matter are special cases of broader theoretical frameworks that have many of the same attractive features.
That these general frameworks can be very effectively explored by small experiments makes them particularly exciting.
\subsection{Broad Frameworks for Dark Matter Motivating Small Experiments}
The dark matter candidates motivating small experiments can be organized into two broad classes, within which we highlight sharp, well-motivated targets that can be explored or robustly tested by a program of small experiments:
{\bf Hidden-sector Dark Matter} is a natural generalization of the WIMP idea to include interactions through a new force rather than just SM forces. These two scenarios are closely related: both suggest DM near Standard Model mass-scales, and in both cases the thermal history of the Universe and coupling to familiar matter play key roles in generating the observed DM abundance --- whether or not the DM ever reached thermal equilibrium with familiar matter. However, the hidden-sector case --- in particular, the parameter space with GeV-scale or lighter DM and/or mediators --- opens up qualitatively new directions for experiment. Cosmological DM production mechanisms, theoretical ideas, and observations point to parameter-space milestones of particular interest. {\bf Proposed small experiments are poised to conclusively test the most predictive possibility for thermal freeze-out of hidden-sector DM, where DM annihilates directly into SM particles, over most of the sub-GeV mass range. They will also explore parameter space for many other production mechanisms, including thermal DM with ``secluded'' annihilation, asymmetric DM, and very weakly coupled DM that ``freezes in'' without reaching equilibrium.} Any new interaction between dark and familiar matter necessarily has consequences in the self-interactions of DM and of familiar matter, as well. Intriguingly, several anomalies in data point to possible new physics, weakly coupled to familiar matter, at the 1--100 MeV scale, while a suppression of cosmological small-scale structure may be explained by DM self-scattering through a mediator in the same mass range.
{\bf Ultralight dark matter} comprises bosonic particles with sub-keV mass, including the QCD axion and generic light scalar, pseudo-scalar, and vector bosons coupled linearly to familiar matter. Such particles can arise from string theory or other high-energy phenomena, with their masses protected by symmetries that generically also lead to exponentially small couplings to matter. The observed DM abundance can be produced during an initial inflationary phase of cosmology or a high-temperature phase transition. A distinctive feature of these models is that for sub-meV mass, the DM mode occupation numbers are high, so that ultralight DM can lead to classical oscillating field signals that in much of the parameter space offer the most promising path to detection. The ``invisible'' QCD axion, long proposed as a solution to the strong CP problem, is an attractive dark matter candidate; while the viable mass range spans eight orders of magnitude, sharp predictions can be made for QCD axion couplings as a function of mass. {\bf Small experiments can explore an enormous amount of ultralight boson dark matter parameter space, with several techniques capable of reaching sensitivity to the QCD axion.}
\subsection{The Need for a Multi-Experiment Program}
\input{multiExp}
\newpage
\section{Theory Overview and Motivations}\label{sec:theory-overview}
This section further explains the motivations for DM candidates noted in the Science Case, and defines several sharp parameter-space targets and broad regions of interest within their parameter spaces.
\subsection{WIMPs}
WIMP dark matter --- composed of particles that interact through the SM weak interactions, and usually assumed to be produced through thermal freeze-out --- has long been an important benchmark model. Indeed, most of the effort in direct and indirect DM detection, including the G2 program in the US, is motivated by the WIMP hypothesis. There is scientific value to exploring WIMP parameter space beyond the G2 program, but this area is typically the purview of large-scale experiments and is beyond the scope of this report. We do note, however, that there are some aspects of WIMP physics complementary to the G2 program where small experiments have an important role to play. In particular, the workshop included discussions of small experiments searching for spin-dependent WIMP interactions and the ``low-mass WIMP'' parameter space from $\sim1-10$ GeV. The latter is, in fact, mostly below the mass range for conventional thermal WIMPs, and scientifically motivated by hidden-sector dark matter, discussed below.
Both top-down and bottom-up considerations motivate multi-GeV to TeV-scale WIMP masses. These are the natural mass scales for any particle involved in solving the hierarchy problem, or for a particle whose mass shares a common origin with the Standard Model Higgs. A similar mass range is singled out for annihilation through weak interactions to give rise to the observed DM abundance -- at masses much higher than a TeV or lower than several GeV (the Lee-Weinberg bound), the DM annihilation cross-sections are too small and therefore an overabundance of thermal DM would be expected.
\subsection{Hidden Sector DM}
\input{hiddenSector}
\subsection{Ultralight Dark Matter}\label{sec:Science-ultralight}
\input{lightBoson}
\section{New Avenues in Direct Detection }
\label{sec:WG1experiments}
\subsection{Introduction}
Dark matter (DM) direct-detection experiments are an essential laboratory tool in our quest to identify DM. Their goal is to search for DM particles in our Milky-Way halo that scatter in, or are absorbed by, a detector target material.
The last few decades have seen enormous advances in designing and building direct-detection experiments, which have led to many orders of magnitude of improvement in searches for the $\sim$10 keV-scale nuclear recoils that are characteristic of spin-independent scattering of Weakly Interacting Massive Particles (WIMPs) with masses $> 10$~GeV.
The next generation ``G2'' LZ experiment is poised to probe a large fraction of the remaining theoretically well-motivated parameter space
for this mass range over the next few years.
Another exciting possibility is that DM has a mass in the $\mathcal{O}$(GeV)
range, and
SuperCDMS, the second ``G2'' direct-detection experiment, is poised to probe this mass range with unprecedented sensitivity.
As described in Part I of this white paper and summarized below, there are several scientifically well-motivated DM candidates
that will not be probed by either LZ or SuperCDMS.
The ``New Avenues In Direct Detection'' working group has identified the following four additional areas in which novel theoretical ideas and impressive experimental advances enable new small projects that can probe orders of magnitude of previously unexplored DM parameter space:
\begin{enumerate}
\item {\bf Sub-GeV Dark Matter (Electron Interactions)}
\item {\bf Sub-GeV Dark Matter (Nucleon Interactions)}
\item {\bf Searches down to the Neutrino Floor for $\mathcal{O}$(GeV) Dark Matter}
\item {\bf WIMP Spin-Dependent Interactions (Proton)}
\end{enumerate}
A fifth area of parameter space --- high-mass WIMPs ($m_{\rm DM}\gtrsim 10$~GeV) --- was also identified as
scientifically well-motivated. However, to probe this region beyond the projected LZ sensitivity will require
experiments with very large target masses and significant funds ($\gtrsim$ 10 million dollars). Consequently, this parameter space falls outside of the scope of the workshop and will not be discussed further in this white paper.
\subsection{Summary of Science Case for New Small-Scale Direct-Detection Experiments}\label{subsec:science-DD}
Direct-detection experiments play a unique and essential role in our quest to identify the DM. Several proposals and ideas exist for new experiments that present a low-cost opportunity --- well within the ``small-project'' scale --- to {\bf probe DM with masses between the meV to GeV scale}, many orders of magnitude in mass below the planned searches by the G2 experiments LZ and SuperCDMS (see Fig.~\ref{fig:overview} for a schematic overview).
In fact, the working group recognizes that recent advances in theory and experiment mean that {\it now} is the right time for targeted investments to develop several recent ideas and proposals into real experiments.
\begin{figure}[t]
\includegraphics[width=0.8\textwidth]{figs/overview}
\caption{Ideas to probe low-mass DM via scattering off, or absorption by, nuclei (NR) or electrons (ER).
}
\label{fig:overview}
\end{figure}
Several well-motivated DM candidates can be probed.
In several cases, {\it sharp theory targets} in parameter spaces can be identified, which can be probed by first-generation, low-cost experiments with target exposures of as little as 100 gram-days.
These sharp targets have been discussed in Section~III. They assume that the basic interaction between the DM
and SM particles is through a dark photon, which allows the DM to couple to all electrically charged particles:
\begin{itemize}
\item {\bf Elastic Scalar} -- a (complex) scalar particle, $\chi$, can obtain the observed relic abundance from thermal freeze-out of
the ``direct-annihilation'' process $\chi + \chi^* \leftrightarrow A'^* \to {\rm SM}+{\rm SM}$, where $A'$ is the dark photon~\cite{Boehm:2003hm}.
The annihilation cross section, $\sigma_{\rm ann}$, is proportional to $\alpha_D \epsilon^2 m_\chi^2/m_{A'}^4$, and has precisely the same
dependence as the direct-detection cross section, $\sigma_{\rm DD}$, on the fundamental parameters $m_{A'}$ (the dark-photon mass),
$\epsilon$ (the kinetic mixing), and $\alpha_D$ (the ``fine-structure constant'' of the dark U(1))~\cite{Essig:2015cda}
($\mu_{\chi,e}$ denotes the DM-electron reduced mass).
In fact, since the final DM relic abundance, $n_\chi$, is proportional to $1/\sigma_{\rm ann}$, the direct-detection rate
is proportional to $n_\chi \sigma_{\rm DD} \sim \sigma_{\rm DD}/\sigma_{\rm ann}$, which is a constant for a given $m_\chi$.
So even if $\chi$ constitutes only a subdominant component of the entire DM, the ``target'' cross section on the $\sigma_{\rm DD}-m_\chi$ plane is a fixed line.
\item {\bf Asymmetric Fermion} -- a Dirac fermion can obtain the correct relic abundance from an initial asymmetry and provides an ``asymmetric'' DM candidate~\cite{Kaplan:2009ag}. However,
freeze-out of the direct-annihilation process $\chi + \bar\chi \leftrightarrow A'^* \to {\rm SM}+{\rm SM}$
also leaves a residual symmetric component, whose abundance is smaller for larger annihilation cross sections~\cite{Lin:2011gj}.
The symmetric component can annihilate and, if its abundance is too large, distort the Cosmic Microwave Background power spectrum.
The CMB thus sets a lower bound on the annihilation cross section and, therefore, on $\sigma_{\rm DD}$~\cite{Essig:2015cda}.
\item {\bf ELDER} -- An ``elastically decoupling relic'' (ELDER) has its relic abundance set by its elastic scattering off
SM particles through $A'$ exchange (as opposed to annihilation into SM particles as in the thermal freeze-out scenario)~\cite{Kuflik:2015isi}.
This again predicts a specific line in the $\sigma_{\rm DD}-m_\chi$ plane.
\item {\bf SIMP} --
A strongly interacting massive particle (SIMP) obtains the correct relic abundance from $3\to2$ DM to DM annihilations
while remaining at the same temperature as the SM sector due to its elastic scattering off SM particles~\cite{Hochberg:2014dra,Hochberg:2014kqa}. This defines a region in the $\sigma_{\rm DD}-m_\chi$ plane, with the lower boundary set by the ELDER line mentioned previously and the upper boundary set by the $2\to2$ DM-to-SM thermal-relic line (the elastic scalar line mentioned above).
\item {\bf Majorana} -- A Majorana fermion can obtain the observed
relic abundance through thermal freeze-out, but has a velocity suppressed scattering cross section off SM particles.
This scenario again predicts a line in the $\sigma_{\rm DD}-m_\chi$ plane, but lies at lower cross sections than the targets mentioned above, due to the low velocity of the DM in the Milky-Way halo.
The Majorana line is given in terms of the elastic scalar freeze-out line mentioned above, but multiplied by a factor of
$2 (\mu_{\chi, e/n}^2/m_\chi^2) v_\chi^2$, where $\mu_{\chi, e/n}$ is the DM-electron or DM-nucleon reduced mass (as applicable) and
$v_\chi$ is the DM halo velocity.
\item {\bf Freeze-in} -- An initially empty hidden sector, which remains thermally decoupled from the SM sector,
can be populated by SM particles annihilating to hidden-sector
DM particles through the process ${\rm SM}+{\rm SM} \to A'^*\to \chi + \bar\chi$ (we assume $m_{A'}\ll$~keV).
We say that the abundance is obtained through ``freeze-in''~\cite{Hall:2009bx}.
This again fixes the model parameters and predicts a line in the
$\sigma_{\rm DD}-m_\chi$ plane~\cite{Essig:2011nj,Chu:2011be,Essig:2015cda}.
\end{itemize}
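The Majorana suppression factor quoted above is straightforward to evaluate numerically. The short Python sketch below (an illustration only, not part of the report; the benchmark masses and the halo velocity are assumed values) shows how strongly the velocity suppression pushes the Majorana target line below the elastic-scalar line for DM-electron scattering:

```python
# Velocity-suppression factor relating the Majorana target line to the
# elastic-scalar freeze-out line: 2 (mu_{chi,e}/m_chi)^2 v_chi^2.
# Assumptions (illustrative): halo velocity ~232 km/s; energies in eV,
# velocities in units of c.
C_KM_S = 299792.458
v_chi = 232.0 / C_KM_S           # typical DM halo velocity (assumed)
m_e = 511e3                      # electron mass in eV

def majorana_suppression(m_chi_eV):
    mu = m_chi_eV * m_e / (m_chi_eV + m_e)   # DM-electron reduced mass
    return 2.0 * (mu / m_chi_eV) ** 2 * v_chi ** 2

for m in (1e6, 10e6, 100e6):     # 1, 10, 100 MeV benchmark masses
    print(f"m_chi = {m/1e6:>5.0f} MeV: suppression ~ {majorana_suppression(m):.1e}")
```

The suppression is many orders of magnitude, which is why the Majorana target lies at far lower direct-detection cross sections than the other targets above.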
We note that other sharp targets exist, so the above list is not complete; for example,
the DM could also interact only with baryons, or only with leptons.
This emphasizes the need for experiments that probe DM couplings to electrons as well as experiments that probe
DM couplings to nuclei.
Beyond the above sharp theory targets, new direct-detection experiments can probe orders of magnitude of DM parameter space that is well-motivated but does not have a sharp target.
This includes the scenario in which DM obtains its relic abundance from thermal freeze-out by annihilating into a hidden
sector (the ``secluded annihilation DM scenario''), from the misalignment mechanism, and others.
It includes well-motivated bosonic DM candidates such as axion-like particles and dark photon DM.
We emphasize that the {\it same} experiment can probe {\it many} DM candidates.
Indeed, several of the above theory targets lie close to each other in the direct-detection parameter space.
Moreover, an experiment sensitive, for example, to keV DM scattering off electrons will simultaneously be sensitive to the absorption of a
meV DM particle by an electron (see Fig.~\ref{fig:overview}).
We emphasize that direct detection provides the {\it only} possibility to test the above freeze-in scenario (with $m_{A'}\ll$~keV).
This is perhaps surprising:
since the DM is never in thermal equilibrium with ordinary matter in the early Universe, the interactions between DM and SM particles
are necessarily tiny. Nevertheless, if the mediator is ultralight ($\ll$~keV), there is a large enhancement of the direct-detection cross section at low momentum transfers, which for the above freeze-in scenario is given by
\begin{equation}
\sigma_{\rm DD} \sim 4 \pi \alpha_D \epsilon^2 \alpha \frac{\mu_{\chi,e}^2}{q^4}\,,
\end{equation}
where the momentum transfer $q$ is at most $q_{\rm max} \sim \mu_{\chi,e} v_\chi$. Because this momentum transfer is so small, direct-detection experiments receive a parametric enhancement relative to higher-energy experiments, allowing
new low-threshold experiments to probe couplings much smaller than accessible through other types of experiments.
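The size of this low-momentum-transfer enhancement can be illustrated numerically. In the Python sketch below, the DM mass and the accelerator-scale momentum transfer are hypothetical benchmark values chosen only to exhibit the parametric $1/q^4$ scaling; they are not taken from the text:

```python
# Illustration of the q^-4 enhancement for an ultralight mediator:
# sigma ~ 1/q^4, so compare the direct-detection momentum transfer,
# q_DD ~ mu_{chi,e} v_chi, against an assumed accelerator-scale q.
C_KM_S = 299792.458
v_chi = (544.0 + 220.0) / C_KM_S   # escape + Earth velocity, in units of c

m_e = 511e3                        # electron mass in eV
m_chi = 1e6                        # 1 MeV DM mass (illustrative assumption)
mu = m_chi * m_e / (m_chi + m_e)   # DM-electron reduced mass
q_dd = mu * v_chi                  # typical direct-detection momentum transfer

q_acc = 10e6                       # ~10 MeV accelerator-scale q (illustrative)
enhancement = (q_acc / q_dd) ** 4
print(f"q_DD ~ {q_dd:.0f} eV; relative 1/q^4 enhancement ~ {enhancement:.1e}")
```

For these benchmark numbers the direct-detection momentum transfer is sub-keV, and the relative $1/q^4$ enhancement exceeds fifteen orders of magnitude, which is the parametric advantage referred to above.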
A discovery of a new particle at an underground direct-detection experiment would constitute strong evidence that such a
particle constitutes all or at least part of the DM.
For some models of DM, new direct-detection and accelerator-based experiments can cover overlapping parameter space.
This is very exciting, as it allows for testing a potential DM signal by using entirely different approaches.
However, we note that there are also models that can be probed either by accelerators alone or by direct detection alone.
For example, models in which the DM scatters inelastically, or a Majorana DM particle that has a velocity-suppressed scattering cross section
off SM particles, are best probed with accelerator-based experiments
(due to the DM's non-relativistic velocity in the Milky-Way halo), while models for which the mediator is ultralight (e.g.~axion-like or dark-photon DM) or some models of freeze-in are best probed by direct-detection experiments.
This emphasizes that a small-scale program will be most successful if it contains a multitude of approaches to probe DM.
A key point emphasized by the working group is that by leveraging new theoretical ideas together with technological advances that allow
for the detection of low-threshold signals, vast regions of DM parameter space can be explored by small detectors that are only a fraction
of the cost of the G2 experiments.
The close collaboration between theorists and experimentalists has been essential in developing these new ideas, which are
now ripe for implementation.
\begin{figure}[t]
\includegraphics[width=0.175\textwidth]{figs/DM-N-scattering}
~~~~~~\includegraphics[width=0.185\textwidth]{figs/DM-e-scattering}
~~~~~~\includegraphics[width=0.225\textwidth]{figs/DM-N-brems}
\\ \vskip 1.0cm
\includegraphics[width=0.23\textwidth]{figs/DM-absorb}
~~~~~~\includegraphics[width=0.25\textwidth]{figs/DM-absorb-phonon}
~~~~~~\includegraphics[width=0.27\textwidth,height=0.23\textwidth]{figs/multiphonon}
\caption{Sample processes considered in this section to detect DM, $\chi$.
{\it Top left:} DM-nucleus scattering.
{\it Top middle:} DM-electron scattering.
{\it Top right:} DM-nucleus scattering with emission of a photon.
{\it Bottom left:} Absorption by an electron of a bosonic DM particle (a vector $A'$, scalar $\phi$, or pseudoscalar $a$).
{\it Bottom middle:} Absorption by an electron of a bosonic DM particle, made possible by emission of a phonon $\Phi$.
{\it Bottom right:} Emission of multiple phonons in DM scattering off helium.
}
\label{fig:feynman}
\end{figure}
\subsection{New Directions for Low-Mass Dark Matter Searches}
\subsubsection{Energy Threshold}
The fundamental technical challenge in searching for sub-GeV DM is simply the size of the detectable signal. This is because the velocity of bound DM within the Milky Way galaxy, $v_{\chi}$, is non-relativistic and limited by the galactic escape velocity ($\sim 10^{-3}c$), and thus the maximum possible energy transfer to the detector decreases as the DM mass, $m_{\chi}$, is lowered.
For the traditional nuclear recoil signals from DM scattering elastically off nuclei (Fig.~\ref{fig:feynman}, top left), the need to conserve both momentum and energy suppresses the recoil energy even further for sub-GeV masses.
In particular, the nuclear recoil energy is given by
\begin{equation}
E_{ \rm NR}=\frac{q^2}{2 m_N} \le \frac{2\mu_{\chi N}^2 v_\chi^2}{m_N} \lesssim 190 \mathrm{~eV} \times\left(\frac{m_\chi}{500\mathrm{~MeV}}\right)^2\left(\frac{16\mathrm{~GeV}}{m_N}\right) \,.
\label{eq:E_NR}
\end{equation}
In the latter inequality we take the DM speed to be the galactic escape velocity plus the Earth velocity, $v_\chi \simeq (544+220)$~km/s,
to estimate the maximum nuclear recoil energy.
We see that the energy transfer to a nucleus from an elastic DM scatter is inefficient, decreasing as $m_\chi^2$ as the DM mass is lowered
below the GeV scale, and quickly falls below the threshold of the most sensitive current generation DM experiments.
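The scaling in Eq.~(\ref{eq:E_NR}) is simple to verify numerically. The short Python sketch below (an illustration only; it just re-evaluates the kinematic formula with the benchmark numbers quoted above) reproduces the $\sim$190 eV maximum recoil energy:

```python
# Numerical cross-check of the maximum elastic nuclear-recoil energy,
# E_NR <= 2 mu^2 v^2 / m_N, with the benchmark numbers quoted in the text.
# Conventions: energies in eV, velocities in units of c.
C_KM_S = 299792.458
v_chi = (544.0 + 220.0) / C_KM_S   # escape velocity + Earth velocity

def max_recoil_eV(m_chi_eV, m_N_eV, v=v_chi):
    """Maximum nuclear recoil energy for elastic DM-nucleus scattering."""
    mu = m_chi_eV * m_N_eV / (m_chi_eV + m_N_eV)  # DM-nucleus reduced mass
    return 2.0 * mu**2 * v**2 / m_N_eV

# Benchmark: m_chi = 500 MeV on an oxygen-like nucleus, m_N = 16 GeV
print(f"E_NR,max = {max_recoil_eV(500e6, 16e9):.0f} eV")   # ~190 eV, as quoted
```

Halving the DM mass reduces the maximum recoil energy by roughly a factor of four, the $m_\chi^2$ scaling noted in the text.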
\subsubsection{Ideas to Probe Low-Mass Dark Matter}
Over the past decade, several strategies have been proposed that maximize the energy transfer to the target. In some cases this comes at the expense of a modest rate suppression, but that is at least
partially offset by the larger DM particle flux expected as $m_\chi$ is lowered. These strategies include:
\begin{itemize}
\item {\bf DM-Electron Scattering (1~keV -- 1~GeV):}
For low-mass DM elastic scattering (Fig.~\ref{fig:feynman}, top middle), the DM energy is transferred far more efficiently to an electron than to a nucleus~\cite{Essig:2011nj}. If the DM is heavier than the electron, the maximum energy
transfer is equal to the DM kinetic energy,
\begin{equation}
\label{eq:maxkin}
E_e \le \frac{1}{2} m_\chi v_\chi^2 \lesssim 3~{\rm eV} \left(\frac{m_\chi}{{\rm MeV}}\right)\,.
\end{equation}
Bound electrons with binding energy $\Delta E_B$ can thus in principle produce a measurable signal for
\begin{equation}
m_\chi \gtrsim 0.3~{\rm MeV} \times \frac{\Delta E_B}{1~{\rm eV}}\,.
\end{equation}
This allows low-mass DM to produce ionized excitations in drift chambers ($\Delta E_B \sim 10$~eV) for $m_\chi \gtrsim 3$~MeV~\cite{Essig:2011nj,Essig:2012yx,Essig:2017kqs}, to promote electrons from the valence band
to the conduction band of semiconductors producing ionized excitations (Ge, Si)~\cite{Essig:2011nj,Graham:2012su,Lee:2015qva,Essig:2015cda} or scintillation photons (GaAs, NaI, CsI)~\cite{Derenzo:2016fse} ($\Delta E_B \sim 1-5$~eV) for $m_\chi \gtrsim$~0.3~MeV, and to eject an electron from a two-dimensional material such as graphene~\cite{Hochberg:2016ntt}.
DM-electron scattering searches have already illustrated their potential, probing down to $m_\chi \sim 5$~MeV~\cite{Essig:2012yx,Essig:2017kqs} using XENON10
data~\cite{Angle:2011th} sensitive to single electrons and down to $m_\chi \sim 35$~MeV~\cite{Essig:2017kqs} using XENON100 data~\cite{Aprile:2016wwo}.
We note that the {\it typical} recoil energy of an electron in a crystal or in the outer atomic shells of an atom is a few eV,
and while larger recoils are possible, they are suppressed by an atomic or crystal form factor~\cite{Essig:2015cda}.
However, this also implies that as new experiments lower their thresholds, the enormous increase in the accessible DM-electron scattering rate will lead to
much greater sensitivity than might naively be expected.
Several proposals summarized below are thus expected to significantly improve on both the mass threshold and the cross-section sensitivity
beyond the current constraints.
If the DM is lighter than the electron, the target electron velocity, $v_T$, becomes essential for extracting all of the available DM kinetic energy \cite{Hochberg:2015fth}:
\begin{equation}
E_e \simeq \frac{1}{2}\frac{{\bf q}^2}{m_e} + {\bf q} \cdot {\bf v}_T + \Delta E_B,
\end{equation}
where ${\bf q}$ is the momentum transfer in the scattering.
The target electron velocity is important in a proposal to utilize superconductors to detect DM \cite{Hochberg:2015pha,Hochberg:2015fth}, which is possible as long as the DM kinetic energy is larger than the quasi-particle binding energy ($\Delta E_B \sim $~few meV), allowing superconductors to probe DM as light as $m_\chi \gtrsim$~1~keV.
\item {\bf DM Absorption on Electrons (1 meV -- 1 keV):} Besides DM {\it scattering} off electrons, bosonic DM can also be {\it absorbed}
by an electron in an atom (e.g.~in xenon~\cite{An:2014twa}) (Fig.~\ref{fig:feynman}, bottom left), in a
superconductor through single-phonon emission~\cite{Hochberg:2016ajh}, or in a semiconductor through emission of one~\cite{Hochberg:2016sqx,Bloch:2016sjj} or more~\cite{Hochberg:2016sqx} phonons (Fig.~\ref{fig:feynman}, bottom middle) (the phonon emission ensures momentum conservation).
Both the recoiling electron and the emitted phonons from the absorption can be detected in principle.
The resulting signal is the same in both cases, but in the case of absorption the electron recoil energy is simply given by the DM mass.
This means that bound electrons can produce a measurable signal for
\begin{equation}
m_\chi \ge 1~{\rm eV} \times \frac{\Delta E_B}{1~{\rm eV}}\,.
\end{equation}
Using the same target materials as described above, this allows bosonic DM to be probed down to $m_\chi \gtrsim 1\ $meV.
\item {\bf DM-low-$Z$ elastic nucleus interactions (1 MeV -- 10 GeV):} By switching to smaller nuclei (H, He, O), kinematic matching is improved and the characteristic nuclear recoil energy is boosted by an order of magnitude relative to the larger nuclei used in traditional WIMP searches (Fig.~\ref{fig:feynman}, top left). If paired with roton (He) or athermal-phonon (O) excitation readout with a 100 meV energy threshold, such experiments can probe DM masses as low as $\sim 10-100$~MeV, while in the ultimate limit of single-roton ($\sim$2 meV) sensitivity they would reach DM masses down to 1 MeV~\cite{Schutz:2016tid}.
Experiments based on ionization readout of low-$Z$ nuclear recoils in gaseous drift chambers have also been proposed. Due to ionization production thresholds, such experiments would be sensitive to DM throughout the $1-10$~GeV mass range.
\item {\bf DM-off-shell nuclear interactions (1 keV -- 1 MeV):}
In 3-body scatters, all kinematic constraints disappear and thus the entire DM kinetic energy can be transferred to the target. Specifically, a scatter that produces 2 nearly back-to-back nuclear excitations can transfer all the kinetic energy of the DM to the target nuclei while conserving total momentum \cite{Schutz:2016tid,Knapen:2016cue}. Of course, one must pay a penalty factor in the expected rate since the process is
off-shell, but even so, a He detector sensitive to two rotons ($\sim$4 meV recoils) would probe decades of unexplored DM parameter space down to the warm-DM limit of $\mathcal{O}$(keV).
\item {\bf Bremsstrahlung in inelastic DM-nucleus scattering (10 MeV -- 1 GeV):}
The emission of a photon when DM scatters off a nucleus (Fig.~\ref{fig:feynman}, top right) can produce a measurable signal in a detector well below the threshold for detecting an elastic DM-nucleus scattering event~\cite{Kouvaris:2016afs}.
Since the emitted photon will typically produce an ionization signal, this signal is similar to an electron recoil signal, but originates from a DM interaction with a nucleus. Constraints from XENON10, XENON100, and LUX already exist, and improvements are expected from upcoming experiments~\cite{Kouvaris:2016afs,McCabe:2017rln}.
\item {\bf DM-induced Chemical-Bond Breaking (10 MeV -- 10 GeV):}
DM scattering off nuclei can break chemical bonds between atoms, which includes the dissociation of molecules and the creation of defects in a lattice such as color centers~\cite{Essig:2016crl,Budnik:2017sbu}. With thresholds of a few to 10's of eV, such an experiment could probe the nuclear couplings of DM particles as light as a $\mathcal{O}$(MeV). This requires the ability to detect single defects in a macroscopic bulk of material.
\item {\bf DM-induced Spin-Flip Avalanches (10 keV -- 10 MeV):}
Single-molecule magnets are crystals in which the molecular spins act as independent nano-magnets. A crystal can be prepared with spins in a meta-stable state, such that localized heat generated by DM-nucleus inelastic scattering can cause the spins in that region to flip and release their stored (Zeeman) energy~\cite{Bunting:2017net}. This constitutes a positive feedback loop, which results in a spin-flip ``magnetic bubble'' avalanche that generates a measurable magnetic flux change. The avalanche threshold can be tuned analogously to the tuning of a conventional bubble chamber, and can range from a few tens of eV down to a few meV.
\end{itemize}
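Several of the kinematic thresholds quoted in the list above follow from the same simple relation $E_e \le \frac{1}{2} m_\chi v_\chi^2$. As a numerical illustration (a sketch only; the velocity assumption follows the text, and the binding energies are the benchmark values quoted above), one can recover the quoted numbers:

```python
# Electron-target kinematics from the text: the maximum energy transfer
# equals the DM kinetic energy when m_chi > m_e, and inverting this gives
# the minimum detectable DM mass for a given binding energy Delta E_B.
# Conventions: energies in eV, velocities in units of c.
C_KM_S = 299792.458
v_chi = (544.0 + 220.0) / C_KM_S   # escape velocity + Earth velocity

def max_electron_energy_eV(m_chi_eV, v=v_chi):
    """Maximum energy transfer to an electron: E_e <= (1/2) m_chi v^2."""
    return 0.5 * m_chi_eV * v**2

def min_mass_eV(delta_E_B_eV, v=v_chi):
    """Smallest DM mass whose kinetic energy can overcome binding Delta E_B."""
    return 2.0 * delta_E_B_eV / v**2

print(max_electron_energy_eV(1e6))   # ~3 eV for m_chi = 1 MeV
print(min_mass_eV(1.0) / 1e6)        # ~0.3 MeV for Delta E_B = 1 eV (semiconductors)
print(min_mass_eV(10.0) / 1e6)       # ~3 MeV for Delta E_B = 10 eV (drift chambers)
```

These are the $m_\chi \gtrsim 0.3$~MeV and $m_\chi \gtrsim 3$~MeV thresholds quoted for DM-electron scattering above.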
\subsubsection{Backgrounds and Exposure}
For high-mass WIMPs ($>$10 GeV), the rarity of the expected interactions requires that the active mass of experiments be quite large, $\mathcal{O}$(10 tons), and that any backgrounds indistinguishable from the DM signal be strictly controlled. Thus, the experiment must be located underground to suppress cosmogenic backgrounds and be constructed from materials with excellent radiopurity. Furthermore, since common radioactive backgrounds such as beta decays and Compton scattering produce electronic recoils with characteristic energies of $\mathcal{O}$(100 keV) that significantly overlap the expected WIMP-nucleus recoil spectrum, the capability to distinguish between nuclear and electron recoils has proven essential.
Coherent scattering of solar neutrinos off nuclei will also soon become an important background that mimics the
DM signal. For example, LZ will likely be sensitive to $^8$B solar neutrinos, which will limit its sensitivity at 5--10~GeV WIMP masses, and will approach the atmospheric neutrino floor at higher masses.
Due to the dearth of experimental constraints on sub-GeV DM, and because the DM number density, and hence the flux, scales inversely with $m_{\chi}$, the requirements on active target mass and background rejection needed to probe unexplored parameter space are significantly relaxed for a low-mass DM search (provided, of course, that the detector has a sufficiently low energy threshold to see the DM signal in the first place).
A sub-GeV DM experiment, for example, need only have a mass of $\sim$100~g and run for $\mathcal{O}$(minutes) to probe unexplored parameter space, while experiments in the 1--6~GeV range require $\mathcal{O}$(50~kg) of active mass to reach the $^{8}$B neutrino floor.
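As a rough illustration of this inverse mass scaling (our own estimate, assuming the standard local halo values $\rho_\chi \simeq 0.3~\mathrm{GeV/cm^3}$ and $\bar v \sim 220~\mathrm{km/s}$, which are not quoted in this section), the local number density and flux for a 100~MeV candidate are

```latex
% Illustrative estimate only; halo parameters are assumed, not taken from this report.
\begin{align}
n_\chi &= \frac{\rho_\chi}{m_\chi}
\simeq \frac{0.3~\mathrm{GeV/cm^3}}{0.1~\mathrm{GeV}} = 3~\mathrm{cm^{-3}}, \\
\Phi_\chi &\simeq n_\chi\,\bar v
\simeq 3~\mathrm{cm^{-3}} \times 2.2\times 10^{7}~\mathrm{cm/s}
\approx 7\times 10^{7}~\mathrm{cm^{-2}\,s^{-1}},
\end{align}
```

about a factor of $10^{3}$ larger than for a 100~GeV WIMP, which is why gram-scale exposures can already reach new parameter space.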
Searches for lighter mass DM also have to contend with radioactive and neutrino backgrounds, in addition to controlling new backgrounds that exist only at low energy.
However, the small recoil energies of these interactions mean that there is very little overlap with the flat Compton and beta backgrounds, whose characteristic energies are $\mathcal{O}$(100 keV). Thus, underground operation and the radiopure materials developed for high-mass WIMP searches should largely suffice to guarantee subdominant radioactive backgrounds for ``first-generation'' sub-GeV searches, while 1--10 GeV searches that reach the neutrino floor will still require either some level of electron/nuclear-recoil discrimination or a significant reduction in the total radioactive-background rate.
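The limited spectral overlap can be made quantitative with standard elastic-scattering kinematics (a back-of-the-envelope estimate with illustrative numbers for a silicon target, $m_N \simeq 26$~GeV, and a typical halo velocity $v \sim 10^{-3}c$):

```latex
% Maximum nuclear-recoil energy in the limit m_chi << m_N; numbers are illustrative.
\begin{equation}
E_R^{\max} = \frac{2\,\mu_{\chi N}^2 v^2}{m_N}
\;\xrightarrow{\;m_\chi \ll m_N\;}\; \frac{2\,m_\chi^2 v^2}{m_N}
\simeq \frac{2\,(0.1~\mathrm{GeV})^2\,(10^{-3})^2}{26~\mathrm{GeV}}
\sim 1~\mathrm{eV},
\end{equation}
```

many orders of magnitude below the $\mathcal{O}$(100 keV) energies characteristic of Compton and beta backgrounds.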
Here we list and expand on the possible backgrounds:
\begin{itemize}
\item {\bf Radioactivity.} Unlike traditional WIMP searches, in the search for MeV-GeV mass DM, radioactive backgrounds are not expected to be important for exposures $\lesssim 1\ $kg-year, given shielding and radioactivity levels comparable to those in existing experiments. Experiments typically achieve radioactive background rates of
$\lesssim 1~{\rm dru} \simeq 0.4$~event/kg/year/eV through
a combination of high target-material purity and shielding of the detectors.
These backgrounds have been measured down to 50~eV~\cite{Agnese:2015nto,Aguilar-Arevalo:2016ndq,Ramanathan:2017dfn},
and are expected to be approximately flat at lower energies.
Electron recoils from sub-GeV DM scattering off electrons in a semiconductor or scintillator target
have typical recoil energies of a few eV.
\item {\bf Dark Counts.} Thermal fluctuations or other detector-specific processes can mimic the DM signal and constitute perhaps
the most significant background challenge in probing sub-GeV DM.
For example, the current XENON10 constraint for $m_\chi \gtrsim 5$~MeV is set by a dark-count rate,
a significant fraction of which is likely due to ionized electrons, originally created by highly ionizing background events outside of
the DM scattering region of interest, that become trapped at the liquid-gas interface and are released spontaneously at a later
time~\cite{Sorensen:2017ymt}. In general, systems that are maintained out of equilibrium with respect to a signal of interest can be expected to have dark count rates. For example, photomultipliers have dark current because their photocathodes are subject to electric fields, and cathodic surfaces under high field can be expected to emit electrons.
\item {\bf Vibrations.} The energy sensitivity of these detectors can be significantly degraded by environmental noise induced by vibrations, for example from cryocoolers \cite{Agnese:2014aze}. For experiments using thermal readout, vibration noise can arise from frictional slipping between mechanical support structures and the detector.
\item {\bf Electromagnetic Interference.} Spurious low frequency noise can be induced by external electronics, if there is not sufficient filtering at electrical feedthroughs. For experiments seeking very low energy thresholds, protections must be taken to minimize electromagnetic environmental interference from coupling to the detector.
\item {\bf Solar neutrinos.} For electron-recoil searches for sub-GeV DM, coherent solar neutrino-nucleus scattering will only
be a background for exposures of $\gtrsim 1\ $kg-year~\cite{Essig:2011nj,Hochberg:2015fth}.
For nuclear recoil searches for sub-GeV DM, coherent neutrino-nucleus scattering is a less significant background than in the 1--10 GeV range. This is because the DM signal becomes concentrated in a smaller and smaller energy range with decreasing mass, the solar neutrinos have characteristic energies in the hundreds of keV to several MeV with much less flux below these energies, and because the neutrino scattering cross-section scales as $E^2$ and thus decreases significantly at low energy.
\item {\bf Coherent photon background for sub-MeV DM searches.} It is common for DM direct detection experiments to have a significant background from Compton scattering of gamma rays, included above under {\bf Radioactivity}. But for experiments designed to reach extremely low energy thresholds, one must also take into account the coherent scattering of these gamma rays from atoms in the target material. The cross-section for coherent scattering is large, scaling as $Z^2$ of the target material, and this scattering can be significant in the energy regime below $\sim$100 meV. Consequently, active Compton vetoes must be considered for beyond-pathfinder
experiments~\cite{Robinson:2017prd}.
\end{itemize}
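For reference, the rate-unit conversion quoted under {\bf Radioactivity} above is simply

```latex
% 1 dru is defined per kg per day per keV; the conversion below is straightforward arithmetic.
\begin{equation}
1~\mathrm{dru} \equiv 1~\frac{\mathrm{event}}{\mathrm{kg\,day\,keV}}
= 365~\frac{\mathrm{events}}{\mathrm{kg\,yr\,keV}}
\simeq 0.4~\frac{\mathrm{events}}{\mathrm{kg\,yr\,eV}},
\end{equation}
```

so, as an illustrative example, a 100-gram detector at the 1-dru level expects only $\sim$0.04 events per eV of signal window per year.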
\subsection{New Directions for Spin-Dependent (Proton) Interaction Searches}
Xenon contains two isotopes with an unpaired neutron, $^{129}$Xe (spin-1/2) and $^{131}$Xe (spin-3/2), each with a natural abundance
of about 20--25\%. XENON100 and LUX have thus set the best constraints on spin-dependent DM-neutron
interactions~\cite{Aprile:2013doa,Akerib:2016lao}, and LZ is expected to provide the best constraint in the future.
However, the G2 experiments will not probe spin-dependent DM-proton couplings as effectively as an experiment using a target material with unpaired protons. The strongest constraint currently comes from PICO-60, using C$_3$F$_8$ in which $^{19}$F contains an unpaired
proton~\cite{Amole:2017dex}.
Additional experiments using C$_3$F$_8$ or other appropriate target nuclei have been proposed and will be summarized below.
\subsection{Brief Descriptions of Experimental Efforts}
In this subsection, we summarize various R\&D and other experimental efforts.
One-page summaries for some of these efforts can be found in~\cite{WG1-one-page-summaries}.
A summary of these efforts appears in Table~\ref{tab:all-experiments}, together with an estimated cost and timescale.
{\it We emphasize that several proposals can probe more than one science target, but we have grouped each idea into only one (primary)
science target.}
\subsubsection{Sub-GeV Dark Matter (Electron Interactions)}
\begin{itemize}
\item
{\bf SENSEI:}
SENSEI will use a recently demonstrated technological breakthrough to search for DM-electron scattering interactions to explore a wide range
of currently unconstrained DM candidates with masses in the 1~eV -- few~GeV range.
This project would use a thick, fully depleted silicon CCD operating in the far sub-electron noise regime ($\sim 0.05\,e^{-}$~rms/pix), using a new generation of Skipper-CCDs designed by the LBL Micro Systems Lab.
For the first time, it has been demonstrated that the charge in each pixel of a CCD --- in a detector consisting of millions of pixels --- can be measured with sub-electron noise~\cite{Tiffenberg:2017aac}.
A 1-gram detector is already operating in the NUMI access tunnel.
A larger project (100 grams) can be deployed at a deeper site on a timescale of $\sim 1-2$ years
if funding is obtained (the required funding is well within the small-project scale).
An $\mathcal{O}(100)$-gram detector running for one year is expected to be essentially free of radioactive backgrounds, assuming a
background level of $\approx 5$~dru, which has already been demonstrated by the current DAMIC detector operating at SNOLAB.
Moreover, dark counts are expected to be negligible for a threshold of two or three electrons, allowing SENSEI
to achieve unprecedented sensitivity to Hidden-Sector and Ultralight DM.
\item
{\bf DAMIC-1K:} DAMIC-1K is a low-background ($\approx$ 0.1 dru), low-threshold
($2 e^-$) experiment with a detector mass of $\approx 1$~kg. It builds on
the success of the DAMIC experiment at SNOLAB, which employs
high-resistivity, thick CCDs to detect sub-keV energy deposits in the bulk
silicon. The technology to fabricate DAMIC-1K CCDs is already proven, requiring
only a modest increase in the area and thickness of the current DAMIC detectors. The Skipper
design --- developed, tested, and implemented by the SENSEI collaboration ---
will be used to reach sub-electron noise, combined with digital
filtering for fast readout. Improvements in the design of the shielding,
in the selection of materials, and in handling procedures will be
implemented to reach a radiogenic background of $\approx 0.1$~dru.
DAMIC-1K will search for low-mass DM in a broad range from 1~eV -- few GeV
with unprecedented sensitivity to DM-electron scattering and hidden-photon
DM. Also, DAMIC-1K will demonstrate the rejection of cosmogenic $^{32}$Si
--- the dominant background for SuperCDMS Si-HV --- through spatial
correlation of candidate events with the decay of the $^{32}$P daughter,
providing a path to the exploration of low-mass DM interactions down to
the Neutrino Floor.
\item
{\bf UA$'$(1):} Direct detection of dark-sector DM via counting single- to few-electron ionization events in a liquid xenon target. A primary goal of this experiment will be to understand and mitigate the electron backgrounds in a two-phase xenon detector. Such mitigation R\&D needs to happen in a small (10 kg scale), flexible test bed, and studies ultimately need to be carried out underground due to the long lifetime of electrons trapped at the liquid xenon surface. While this experiment is expected to be sensitive to new parameter space, successful mitigation of electron backgrounds would be valuable in its own right, because it could enable much larger detectors (e.g.~LZ) to perform far more sensitive searches for this class of DM.
\item
{\bf Cryogenic GaAs(Si,B) scintillator for transition edge sensor readout:}
In~\cite{Derenzo:2016fse}, n-type GaAs was suggested as a promising target material for sub-GeV DM detection due to its commercial availability in high purity and large sizes (15~cm), and its known fluorescent properties at cryogenic temperatures. GaAs has a direct gap of 1.52~eV, and thus a DM particle can scatter off a valence-band electron exciting it into the conduction band. Doping with Si (n-type donor) and boron (p-type) creates trapping sites that scintillate, producing 1.33~eV photons with a measured scintillation yield of 30~photons/keV in crystals with non-optimized dopant densities.
The production of scintillation photons at long time scales after a particle interaction (``afterglow'') has been seen in e.g.~NaI and CsI, and is a primary background concern. Recent measurements by the scintillation research group at LBNL, however, have seen no thermally stimulated emission after cryogenic x-ray bombardment. This suggests that highly-doped n-type GaAs has no afterglow, probably because it has delocalized electrons that can easily annihilate any metastable radiative states. Radioactive backgrounds are also expected to be non-limiting: $^{3}$H and other cosmogenic spallation contamination can be minimized by limiting surface exposure following crystal production (no sensor fabrication occurs on the GaAs crystal itself, unlike with SuperCDMS), and U/Th-chain radioactive backgrounds are expected to be sub-dominant since commercial GaAs is highly purified.
This project plans to find dopant concentrations that optimize scintillation performance to hopefully approach the theoretical limit of 200~photons/keV, optimize surface roughness / use of anti-reflective coatings to improve transmission, as well as to develop large-area detectors sensitive to single optical photons within the next 2 years. A 10~kg pathfinder experiment could then be run in 2019 at the CUTE facility.
\item
{\bf NICE:} The intrinsic light yields of pure NaI/CsI at 77~K have been found to be about twice as high as those of thallium-doped NaI/CsI at room temperature. Integrated with light sensors working at cryogenic temperatures, these pure crystals can be used for various rare-event searches. In a phased approach, the first step would be to use cylindrical crystals (about 1~kg) wrapped with PTFE tape, viewed by 2 photomultipliers from the ends, cooled by liquid nitrogen or argon, with a background measurement down to 0.2~keV$_{ee}$. In a second step, the system could be switched to SiPM readout for higher quantum efficiency and to explore an active veto with liquid argon and neon. Finally, the project could move to transition edge sensor readout for 100\% quantum efficiency, a single-photon trigger, and an accompanying phonon signal.
\item
{\bf Germanium Detector with Avalanche Ionization Amplification:}
We propose to develop ionization amplification technology for Ge, in which very large localized E-fields are used to accelerate ionized excitations produced by a particle interaction to kinetic energies larger than the Ge bandgap, at which point they can create additional
$e^{-}/h^{+}$ pairs, producing intrinsic amplification. This amplified charge signal could then be read out with standard high-impedance JFET- or HEMT-based charge amplifiers. Such a system would potentially be sensitive to single ionized excitations produced by DM interactions with both nuclei and electrons. In addition, purposeful doping of the Ge could lower the ionization threshold by a factor of $\sim$10 (to $\sim$100~meV), making the detector sensitive to 100~keV DM via electron recoils.
A 3 year R\&D program could develop both the avalanche ionization amplification and impurity ionization technology, after which a 10~kg pathfinder experiment could be constructed in 2 years.
\item
{\bf PTOLEMY-G$^{3}$:} In the PTOLEMY-G$^{3}$ experiment, graphene field-effect transistors (G-FETs) arranged into a fiducialized volume of stacked planar arrays, called a graphene
cube (G$^{3}$), would be used to search for MeV DM scattering events that liberate an electron from the graphene target. A narrow, vacuum-separated front-gate of
the G-FET imposes a kinematic discrimination on the maximum electron recoil energy, and the FET-to-FET hopping trajectory of an ejected electron indicates the scattering direction, shown to be correlated with the DM wind. High-radiopurity wafer-level fabrication, ultra-low $^{14}$C/C ratio graphene growth, a cryogenic fiducialized volume, and the coincidence of the FET-to-FET trajectories of electron recoils would provide the conditions for a low-background observatory of MeV DM interactions. The evaluation of the G$^{3}$
active target and low-background methods is an important step for the PTOLEMY project, whose long-term goal is the direct detection of the cosmic neutrino background. PTOLEMY-G$^{3}$ is the only proposed experiment with direct directional detection capability for MeV DM.
\item
{\bf Superconducting aluminum:} Superconducting detectors can be sensitive to $\mathcal{O}$(meV) electron recoils from DM-electron scattering, by exploiting the small superconducting gap (e.g.~0.6 meV in aluminum). Such devices could detect DM as light as a few keV. The use of superconductors as DM targets would be a natural extension of the TES-based DM detection program, as TES resolution reaches the meV scale.
\end{itemize}
\subsubsection{Sub-GeV Dark Matter (Nucleon Interactions)}
\begin{itemize}
\item
{\bf Superfluid helium with transition edge sensor readout:} Superfluid helium is an extremely pure material with no intrinsic radioactivity and little coupling to vibrations from surrounding solid materials, allowing substantial background suppression. In addition, while superfluid helium produces electronic excitations like the other noble liquids, it is also amenable to calorimetric readout since it remains a liquid at extremely low temperatures. In this detector concept, superfluid helium scintillation light and triplet excimers are detected using athermal, cryogenic sensors with transition edge sensor readout \cite{Guo:2013,Car:2017}. Rotons and phonons are detected via quantum evaporation, using a bolometer array suspended above the superfluid helium. Background rejection efficiency has been estimated using the ratio of scintillation light to heat in the form of phonons and rotons. Such detectors should enable DM searches sensitive to extremely low energy depositions, as the phonon/roton signal is amplified through the helium atom desorption/adsorption process. Very low-mass DM candidates might be detected using multi-excitation processes, in which back-to-back phonons or rotons are produced, enabling extraction of the DM kinetic energy while conserving energy and momentum. Assuming gamma-ray backgrounds comparable to those projected for SuperCDMS SNOLAB, existing transition edge sensor technology, coupled to a $\sim 1$~kg superfluid helium target, would allow sensitivity to DM candidates with masses as low as 10--30 MeV.
\item
{\bf Evaporation and detection of helium atoms by field ionization:} In this variation on a superfluid-helium-based DM detector, nuclear recoils would be detected using a scheme based on quantum evaporation of helium atoms followed by field ionization~\cite{Maris:2017xvi}. DM scattering events off nuclei with recoil energies of the order of 1 meV produce quasiparticle excitations (phonons and rotons) which can desorb a helium atom from the surface of superfluid helium or other crystalline target materials. The ability to detect single helium atoms in the gas by field ionization thus yields a threshold energy low enough to search for $\sim$1 MeV DM particles. A helium atom becomes field-ionized when one of its electrons tunnels into a positively charged metal tip through a field-distorted barrier. The helium ion is then accelerated through a potential of typically several kV onto a cathode, which can be a calorimeter, a channeltron, or a microchannel plate; the impact of a single ion is easy to detect. Given that the field-ionization approach could be applied to a range of target materials, from superfluid helium to solids with a long phonon mean free path, a dedicated research effort extending over several years is required to develop a scalable fabrication process, establish the quantum detection efficiency, and investigate the possibility of dark counts.
\item
{\bf Color centers:} This experimental initiative involves using defects in crystals created by nuclear recoils with energies of the order of 10 eV. This probe of light-DM elastic scattering is in principle sensitive down to DM masses of $\sim$100 MeV. In addition, sensitivity to solar neutrinos is reached with exposures of about 100 kg-year. The defects live practically forever, and in many cases are spectroscopically active. The concept is to monitor a bulk sample of such defects and count additional ones as they form. The challenges are many; the most important are identifying an optimal signal handle, rejecting backgrounds, removing pre-existing defects (production, annealing), and calculating rates, branching ratios, and the detector response.
\item
{\bf Magnetic bubble chambers:} A proof-of-concept magnetic bubble chamber~\cite{Bunting:2017net} with a $\sim\,$eV energy threshold is currently under development. This prototype will aim to demonstrate stability of the proposed detector; a neutron beam will further be used to demonstrate and calibrate the spin-avalanche mechanism. The initial design is based on a (1\,cm$^2)\times(\sim\,$few mm) powder sample of compound 3 of~\cite{doi:10.1021/ja068961m} placed in a 50 mK fridge, and shielded with four inches of low-activity, cadmium-lined lead. The use of single molecule magnet crystals with lower ($\sim\,$meV) energy thresholds would follow successful experimental demonstration.
\end{itemize}
\subsubsection{Searches down to the Neutrino Floor for $\mathcal{O}$(GeV) Dark Matter}
\begin{itemize}
\item
{\bf SuperCDMS SNOLAB G2+:} The currently funded G2 experiment SuperCDMS SNOLAB can probe large areas of unexplored DM parameter space in the DM mass range 0.5 -- 6 GeV/c$^{2}$
with an ultimate sensitivity, at $\sim$20 times above the neutrino floor, that is expected to be limited by $^{3}$H $\beta$ decays produced by cosmogenic spallation during detector fabrication. Thus, developing new detector technology with 1:20 electronic-recoil/nuclear-recoil background discrimination for sub-keV recoils would allow a subsequent upgrade to reach the neutrino floor. The athermal phonon sensor technology and Luke-Neganov phonon amplification techniques developed by SuperCDMS lead to two natural detector-concept evolutions that could achieve this capability:
\begin{enumerate}
\item Encoding 3D position information and ionization yield into the Luke-Neganov phonon signal: SuperCDMS HV detectors currently use Luke-Neganov phonons produced during the drifting of e$^{-}$/h$^{+}$ across a planar electro-static potential in a semi-conductor (Luke-Neganov gain), to lower the energy threshold so as to be sensitive to recoils from very light mass DM.
We propose to develop a high voltage detector with 2 interdigitated phonon sensors that replicate the E-field pattern found in the High Mass SuperCDMS iZIP detector design, thereby regaining the $z$-position and electronic/nuclear recoil discrimination capabilities seen in the phonon pulse shape and energy partition.
\item Encoding ionization yield into Luke-Neganov quantization offsets: If the sensor resolution of the SuperCDMS detectors can be decreased significantly below the drift voltage across the crystal, then the total phonon energy spectrum will resolve into quantized peaks depending upon the number of e$^{-}$/h pairs generated by the interaction. Since the average recoil energy to produce a given number of e$^{-}$/h is vastly different for electronic and nuclear recoils, the quantized peaks for nuclear recoils will be offset from that for electronic recoils, and consequently recoil type discrimination should be possible.
\end{enumerate}
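The quantization idea in concept 2 can be sketched as follows (illustrative numbers, not a SuperCDMS design specification): for an event creating $n$ $e^{-}/h^{+}$ pairs drifted across a bias voltage $\Delta V$, the total measured phonon energy is

```latex
% Total phonon energy = recoil energy + Luke-Neganov phonons; Delta V = 100 V is illustrative.
\begin{equation}
E_{\mathrm{phonon}} = E_R + n\,e\,\Delta V ,
\end{equation}
```

so with, e.g., $\Delta V = 100$~V the spectrum resolves into peaks spaced by 100~eV. Because an electronic recoil requires only a few eV of recoil energy per pair while a nuclear recoil requires substantially more, the residual $E_R$ shifts the peak positions differently for the two event classes, provided the phonon resolution is well below $e\,\Delta V$.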
\item
{\bf NEWS-G:} The goal of the NEWS-G (New Experiments with Spheres - filled with Gas) collaboration is to search for galactic DM particles in the 0.1 to few GeV mass region. Detectors are constituted of spherical metallic vessels, each equipped with a small ball sensor set at high voltage at the center of the sphere. Each sphere is filled with a noble gas mixture (Ar, Ne, He, H), operated in proportional mode at pressure up to 10 bar.
\item
{\bf NEWS-dm:} NEWSdm is meant to be the first experiment with a solid target for directional DM searches: the use of a nuclear-emulsion-based detector, acting both as target and tracking device, would make it possible to extend DM searches beyond the neutrino floor and provide an unambiguous signature of the detection of Galactic DM. The novel emulsion technology, based on nuclear emulsion films with nanometric AgBr crystals (NIT), makes it possible to record the sub-micrometric tracks produced by WIMP scattering off a target nucleus. In March 2017 the NEWSdm Collaboration installed an experimental setup for the exposure of a $\sim$10~g detector at the Gran Sasso INFN Underground Laboratories. This test aims to measure the detectable background from environmental and intrinsic sources and to validate estimates from measurements and simulations. The confirmation of a negligible background will pave the way for the construction of a pilot experiment with an exposure on the $\sim 10$~kg-year scale. This pilot experiment will act as a demonstrator to further extend the sensitivity towards the neutrino floor.
\item
{\bf CYGNUS HD-10:} This directional DM experiment would be a 10~m$^{3}$ gas target time projection chamber with a He:SF$_6$ gas mixture. The SF$_6$ component enables negative ion drift (for reduced diffusion) and 3D fiducialization via minority carriers.
High resolution charge readout, via resistive Micromegas, will be used to image ionization from nuclear recoils in 3D (``charge cloud tomography'').
This is expected to enable excellent electron-event rejection, fiducialization techniques via transverse diffusion of drift charge, 3D-directionality for unambiguous WIMP discovery, and penetration of the neutrino floor.
A first, 10~m$^3$ CYGNUS HD-10 detector is expected to have sensitivity competitive with the G2 experiments, to
both SD and SI interactions, with improved electron rejection for low WIMP masses. The proposed He:SF$_6$ gas mixture is a starting point, and could be optimized to target primarily SD or (at low masses) SI interactions with further improvements in sensitivity. Detailed imaging of ionization allows sensitivity to DM models with multiple-particle final states. CYGNUS HD-10 would be a first step towards a large-scale CYGNUS directional detector capable of unambiguously demonstrating the cosmic origin of a WIMP signal, penetrating the neutrino floor, and, eventually, enabling WIMP astronomy.
\item
{\bf Scintillating bubble chambers:} These detectors combine the extremely effective electron rejection and simple instrumentation of a bubble chamber with the event-by-event energy resolution of a liquid scintillator. Recently, simultaneous scintillation and bubble nucleation by low-energy nuclear recoils in superheated xenon has been demonstrated. Superheated-water bubble chambers are also being pursued to take advantage of advances in water-based scintillators. The goal of the scintillating bubble chamber effort is the development of detectors with the scalability, target flexibility, and background discrimination needed to push WIMP sensitivity towards the neutrino floor (or follow up a new signal) after the G2 program. The energy information provided through scintillation will allow reduction of backgrounds from high-light-output alpha particles, as well as non-scintillating backgrounds like dust particulates. In addition, scintillating bubble chambers will have even higher rejection of minimum-ionizing backgrounds than non-scintillating bubble chambers, by virtue of having an additional energy-loss mechanism for particle interactions. The scintillating bubble chamber technique is now established in liquid xenon at the 30-gram scale, and the key next step is the construction of an $\mathcal{O}(10)$-kg scale xenon bubble chamber to demonstrate the scalability of these detectors~\cite{Baxter:2017ozv}.
\end{itemize}
\subsubsection{Spin-Dependent (Proton) Interactions}
\begin{itemize}
\item
{\bf PICO:} The PICO bubble chamber detectors can be made very large, have extremely low backgrounds, and work with diverse target nuclei. Most important recent scientific impacts have come from $\rm C_{3}F_{8}$ targets, where the $\rm ^{19}F$ nucleus gives unique sensitivity to spin-dependent WIMP couplings to the proton. Due to coherent enhancement of the background neutrino rate, the ultimate background from atmospheric and solar neutrinos is expected to be two orders of magnitude lower for $\rm C_{3}F_{8}$ than for xenon, when cast in terms of spin-dependent sensitivity. In addition to the $\rm C_{3}F_{8}$ program, the PICO Collaboration is investigating alternative targets for future searches in PICO-40L, PICO-500, or an array of PICO-500 detectors. These include hydrocarbons for low-mass WIMP searches, $\rm CF_{3}I$ to search for coupling to proton orbital angular momentum in iodine or as follow-up to a spin-independent signal in xenon, and superheated nobles (argon, xenon) to take advantage of the extra discrimination and event-by-event energy information provided by the scintillation signal.
\end{itemize}
\begin{table}[htbp]
\footnotesize
\begin{center}
\begin{tabular}{ | p{3.3cm} | p{2.9cm} | p{2.8cm} | p{2.5cm} | p{4.1cm} |}
\hline
Main Science Goal & Experiment & Target & Readout & Estimated Timeline \\
\hline
\hline
\multirow{ 7}{3cm}[-1cm]{Sub-GeV Dark Matter (Electron Interactions)}
& SENSEI \newline & Si & charge & ready to start project \newline (2 yr to deploy 100g) \\
\cline{2-5}
& DAMIC-1K \newline & Si & charge & ongoing R\&D \newline 2018 ready to start project \newline (2 yr to deploy 1 kg) \\
\cline{2-5}
& UA$'$(1) \newline liquid Xe TPC & Xe & charge & ready to start project \newline (2 yr to deploy 10kg) \\
\cline{2-5}
& Scintillator w/ TES readout & GaAs(Si,B) & light & 2 yr R\&D \newline 2020 in sCDMS cryostat \\
\cline{2-5}
& NICE; NaI/CsI cooled crystals & NaI \newline CsI & light & 3 yr R\&D \newline 2020 ready to start project \\
\cline{2-5}
& Ge Detector w/ Avalanche Ionization Amplification & Ge & charge & 3 yr R\&D \newline 1 yr 10kg detector \newline 1 yr 100kg detector \\
\cline{2-5}
& PTOLEMY-G$^3$, 2D graphene & graphene & charge \newline directionality & 1 yr fab prototype \newline 1 yr data \\
\cline{2-5}
& supercond.~Al cube & Al & heat & 10+ yr program \\
\hline
\hline
\multirow{ 5}{3cm}[-0.7cm]{Sub-GeV Dark Matter (Nucleon Interactions)}
& Superfluid helium with TES readout & He & heat, light & 1 yr R\&D; 2018 ready to start project; 2022 run \\
\cline{2-5}
& Evaporation \& detection of He-atoms by field ionization & superfluid helium, crystals with long phonon mean free path (e.g. Si, Ge) & heat & 3 yr R\&D; 2020 ready to start project R\&D \\
\cline{2-5}
& color centers & crystals (CaF) & light & R\&D effort ongoing \\
\cline{2-5}
& Magnetic bubble chamber & Single molecule magnet crystals & Spin-avalanche (Magnetic flux) & R\&D effort ongoing \\
\hline
\hline
\multirow{ 5}{3cm}[-0.7cm]{Searches down to Neutrino Floor for $\mathcal{O}$(GeV) Dark Matter}
& SuperCDMS-G2+ & Ge & heat, ionization & 3 yr R\&D; 1 yr fabrication; 2022 start running \\
\cline{2-5}
& NEWS-G \newline & H, He & charge & 140cm sphere installed at SNOLAB in 2018 \\
\cline{2-5}
& NEWS-dm \newline emulsions & Si, Br, I, C, O, N, H, S & charge \newline directionality & R\&D phase complete. \newline Now technical test \\
\cline{2-5}
& CYGNUS HD-10 & SF$_6$, He \newline flexible & charge \newline directionality & 1 yr R\&D; 1 yr 1 m$^3$; \newline 2 yr 10 m$^3$ \\
\cline{2-5}
& Scintillating bubble chamber & Xe, Ar \newline C$_6$F$_6$, H$_2$O & light \newline heat(bubble) & 2 yr program; test 10kg Xe chamber with CENNS \\
\hline
\hline
Spin-Dependent \newline (Proton) Interactions & PICO \newline bubble chambers & wide range & heat(bubble) & 40 l chamber now \newline PICO 500 l next \\
\hline
\end{tabular}
\caption{\label{tab:all-experiments}
{\footnotesize Proposals and ideas for new experiments, grouped according to their main science target as identified in Working Group 1: 1) Sub-GeV DM (Electron Interactions), 2) Sub-GeV DM (Nucleon Interactions), 3) Searches down to the Neutrino Floor for $\mathcal{O}$(GeV) Dark Matter, and 4) Spin-dependent (Proton) Interactions. {\it Note that several proposals can probe more than one science target.}
Within each category, the proposal/idea is ordered roughly according to the timescale needed to start the project. The target material and main readout channel are also listed.}
}
\end{center}
\end{table}
\subsection{Facilities}
The capability of the direct DM search community to develop the next generation of detectors
depends to a great extent on the availability of facilities for detector R\&D, calibration, and early science
tests. Some of the facilities recently developed, or planned for the near future, are discussed here. The existence
of these facilities significantly reduces the time and cost from detector concept to calibration and early science.
\subsubsection{Nuclear Recoil Calibration Facility at TUNL}
The community needs precision measurements of the quenching factor for
several detector technologies in order to perform direct DM searches
with nuclear recoils. A facility to perform these measurements has been established at
TUNL (Triangle Universities Nuclear Laboratory). This facility is set up to produce pulsed,
tunable, and quasi-mono-energetic neutron beams, with very flexible beam configurations.
Because of the dedicated space available (3 target areas), it is possible to do calibrations
requiring long setup times.
The TUNL facility uses a 10 MV Tandem accelerator, with bunching and chopping
capabilities. The facility can operate with various ion sources ($^1$H,$^2$H,$^3$He,$^4$He),
a maximum current of 1 $\mu$A, and 70~keV to 15~MeV neutron energies.
The backing neutron detectors needed for scattering experiments are available at the facility. Several
experiments have already used, or are planning to use, TUNL for their quenching factor
measurements.
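The kinematics underlying such calibrations can be sketched numerically. For elastic neutron scattering, the maximum nuclear recoil energy (backscatter limit) is $E_R^{\rm max}=4\,m_n m_N E_n/(m_n+m_N)^2$. A minimal Python sketch, with illustrative numbers rather than actual TUNL beam specifications:

```python
# Maximum nuclear recoil energy from elastic neutron scattering
# (backscatter limit): E_R^max = 4 m_n m_N / (m_n + m_N)^2 * E_n.
# Illustrative sketch; masses in atomic mass units suffice here.

def max_recoil_energy_kev(e_neutron_kev, a_target, a_neutron=1.0087):
    """Maximum recoil energy (keV) of a nucleus with mass number
    a_target struck elastically by a neutron of energy e_neutron_kev."""
    return 4 * a_neutron * a_target / (a_neutron + a_target) ** 2 * e_neutron_kev

# A 100 keV neutron on silicon (A = 28) can deposit at most ~13 keV,
# while a light target such as helium (A = 4) accepts far more energy:
print(round(max_recoil_energy_kev(100.0, 28), 1))
print(round(max_recoil_energy_kev(100.0, 4), 1))
```

This is why tunable beams down to 70~keV are valuable: probing sub-keV recoil thresholds in heavy targets requires correspondingly low neutron energies.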
\subsubsection{Northwestern Experimental Underground Site at Fermilab (NEXUS)}
A clean, low-background testing facility with convenient access for prototyping and testing the next generation of
cryogenic detectors is being set up in the NuMI access tunnel at Fermilab (300 m.w.e.). The facility is being established
by the Northwestern SuperCDMS group, and will also be available for other users 30\% of the time. This depth
gives a muon rate of 3.4 muons/cm$^2$/day, ideal for long term detector testing without
the risk of cosmogenic activation. A dilution fridge with a 10~mK base temperature will be available at this facility
with large experimental volume (33~cm diameter $\times$ 53~cm height), and 150~$\mu$W of cooling power at 100~mK.
Background is expected to be 100~dru, and a D-D neutron generator will be available for
calibration purposes. This facility is expected to be online early in 2018.
The University of Minnesota group is developing a very low energy nuclear recoil calibration technique
that could be implemented at NEXUS. This technique is based on thermal neutron capture and
would provide calibrations for recoils with energies in the 100~eV -- 400~eV range. The first measurement
of these recoils in solids is expected to take place at the University of Minnesota in the Summer of 2017. Improvements to the
technique should allow the method to be employed at the planned calibration facilities discussed here (NEXUS, CUTE, TUNL).
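The scale of these capture-induced recoils follows from simple kinematics: a nucleus emitting a single capture gamma of energy $E_\gamma$ recoils with $E_R = E_\gamma^2/(2Mc^2)$. A back-of-the-envelope sketch (the gamma energy and target below are illustrative choices, not the Minnesota group's actual configuration, and real capture cascades involve multiple gammas):

```python
# Nuclear recoil from single-gamma emission after thermal neutron
# capture: E_R = E_gamma^2 / (2 M c^2) (non-relativistic recoil).
# Illustrative values only; actual capture cascades are more complex.

AMU_KEV = 931494.0  # atomic mass unit in keV/c^2

def capture_recoil_ev(e_gamma_mev, a_nucleus):
    """Recoil energy (eV) of a nucleus of mass number a_nucleus
    emitting a single gamma of energy e_gamma_mev after capture."""
    e_gamma_kev = e_gamma_mev * 1e3
    m_c2_kev = a_nucleus * AMU_KEV
    return e_gamma_kev ** 2 / (2.0 * m_c2_kev) * 1e3  # keV -> eV

# A ~6.6 MeV capture gamma on a germanium-mass nucleus (A ~ 73)
# gives a recoil of roughly 300 eV, consistent in scale with the
# quoted 100 eV - 400 eV calibration range:
print(round(capture_recoil_ev(6.6, 73)))
```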
\subsubsection{Cryogenic Test Underground Facility (CUTE) at SNOLAB }
This facility, operated by the Queen's SuperCDMS group, provides a very low background cryogenic test stand
at SNOLAB. CUTE is set up to perform low background studies, and can also be used as a science platform. A dilution fridge
similar to the one at NEXUS (same vendor) will be available, with a lower background of 3--30~dru. The fridge will be
installed inside a water tank for shielding.
\subsubsection{SuperCDMS cryogenic facility at SNOLAB}
The SuperCDMS project is building the SNOBOX facility at SNOLAB. This is the dilution fridge to be used for the G2 program with
5~$\mu$W cooling power at 15~mK.
SNOBOX will have a very large volume available, capable of holding 31 SuperCDMS towers, with an expected
background 30 times lower than at CUTE. The SuperCDMS G2 program is scheduled to start science operations
in 2020 with a 4-tower payload. The additional space available in SNOBOX is the ultimate location for either a
larger payload or a very low background measurement, and offers an opportunity for operating other low
background cryogenic detectors.
\subsection{Projected Sensitivities and Yield Estimates}
In this section, we summarize the sensitivities for the experimental ideas mentioned above.
We also show a few benchmark targets that were discussed in Sec.~III and summarized in Sec.~\ref{subsec:science-DD}.
These benchmark targets assume that the DM scattering is mediated through a dark photon.
For this case, we define the DM-electron scattering cross section $\bar\sigma_e$ and the DM form factor $F_{\rm DM}(q)$
as~\cite{Essig:2011nj,Essig:2015cda}
\begin{eqnarray}
\overline\sigma_e = \frac{16\pi\mu^2_{\chi e} \alpha \epsilon^2\alpha_D}{(m_{A'}^2+\alpha^2 m_e^2)^2}
\simeq
\begin{cases}
\frac{16 \pi \mu_{\chi e}^2 \alpha \epsilon^2 \alpha_D}{m_{A'}^4}\,, & m_{A'} \gg \alpha m_e \\
\frac{16 \pi \mu_{\chi e}^2 \alpha \epsilon^2 \alpha_D}{(\alpha \, m_e)^4}\,, & m_{A'} \ll \alpha m_e
\end{cases}\,, \\
F_{DM}(q) = \frac{m_{A'}^2+\alpha^2m_e^2}{m_{A'}^2+q^2} \simeq
\begin{cases}
1\,, & m_{A'} \gg \alpha m_e \\
\frac{\alpha^2 m_e^2}{q^2}\,, & m_{A'} \ll \alpha m_e
\end{cases}\,.
\end{eqnarray}
Here $\alpha_D\equiv g_D^2/4\pi$, $\mu_{\chi e}$ is the DM-electron reduced mass, and $q$ is the momentum transfer between the
DM and electron.
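As a concrete numerical illustration of these formulas, the heavy-mediator limit of $\overline\sigma_e$ can be evaluated directly; the parameter values below are arbitrary, chosen only to exercise the expression, not benchmark points from the figures:

```python
# Reference cross section sigma_e-bar for DM-electron scattering via a
# dark photon, in the heavy-mediator limit m_A' >> alpha*m_e, following
# the expression in the text. Natural units (eV), converted to cm^2.
# All parameter values in the example are illustrative only.
import math

ALPHA = 1 / 137.036              # fine-structure constant
M_E = 511e3                      # electron mass (eV)
EV2_TO_CM2 = (1.9733e-5) ** 2    # (hbar*c)^2 in eV^2 cm^2

def sigma_e_bar_cm2(m_chi_ev, m_aprime_ev, eps, alpha_d=0.5):
    """sigma_e-bar = 16 pi mu_{chi,e}^2 alpha eps^2 alpha_D / m_A'^4,
    returned in cm^2."""
    mu = m_chi_ev * M_E / (m_chi_ev + M_E)  # DM-electron reduced mass
    sigma_nat = (16 * math.pi * mu**2 * ALPHA * eps**2 * alpha_d
                 / m_aprime_ev**4)
    return sigma_nat * EV2_TO_CM2

# Example: a 10 MeV DM particle with m_A' = 3 m_chi, eps = 1e-4,
# alpha_D = 0.5 gives sigma_e-bar of order 1e-37 cm^2:
print(sigma_e_bar_cm2(10e6, 30e6, 1e-4))
```

Note the expected scalings are manifest: $\overline\sigma_e \propto \epsilon^2$ and $\propto m_{A'}^{-4}$ in this limit.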
\begin{figure}[!t]
\includegraphics[width=0.48\textwidth]{figs/CVWhitePaperReviewGenericElectronFDM1Plotv2}
~~
\includegraphics[width=0.48\textwidth]{figs/CVWhitePaperReviewGenericElectronFDMq2Plotb} \\
\includegraphics[width=0.48\textwidth]{figs/CVWhitePaperReviewDDLDMHeavyDarkPhotonmApEq3mXPlotNRprojections}
~~
\includegraphics[width=0.48\textwidth]{figs/CVWhitePaperReviewDDLDMLightDarkPhotonPlotv2}
\caption{
\footnotesize{Constraints and projections for the {\bf DM-electron scattering} cross section $\bar{\sigma_e}$.
The left (right) plots assume a momentum-independent (dependent) interaction, $F_{\rm DM}=1$ ($F_{\rm DM}=(\alpha m_e/q)^2$).
Existing constraints from XENON10 (XENON100)~\cite{Essig:2012yx,Essig:2017kqs} are shown in the blue (red) shaded regions.
Projections show 3 events for a 1-year exposure~\cite{Essig:2015cda,Essig:2012yx,Hochberg:2016ntt,Hochberg:2015pha,Hochberg:2015fth,Derenzo:2016fse}; the label includes the threshold (in terms of number of electrons, photons, or the electron recoil energy)
and target mass.
Solid/dashed/dotted lines indicate an estimate of the time to start taking data, corresponding roughly to a
short/medium/long timescale, respectively.
A solid line indicates a mature technology: data taking can begin in $\lesssim 2$ years
and a zero background (radioactivity or dark currents) is reasonable for the indicated thresholds.
A dashed line indicates more R\&D is required and, if successful, data taking could start in $\sim 2-5$~years;
the projected sensitivity assumes that backgrounds can be controlled.
A dotted line indicates longer-term R\&D efforts.
{\bf Bottom left} plot assumes {\bf DM scatters through an $A'$ with $m_{A'}= 3 m_\chi$}.
Five theory targets are shown as explained in Section~\ref{subsec:science-DD}.
In addition to electron-recoil experiments, we show projections from nuclear-recoil experiments (from Fig.~\ref{fig:scattering-NR}).
Gray shaded regions are constraints from LSND, E137, BaBar, and current WIMP nuclear-recoil searches~\cite{Essig:2015cda}.
{\bf Bottom right} plot assumes {\bf DM scatters through an $A'$ with $m_{A'} \ll\ $keV}; a freeze-in target is shown.
Shaded gray regions are bounds from WIMP nuclear-recoil searches, stellar, and BBN
constraints~\cite{Essig:2015cda}.
The superconductor projections in the bottom plots include in-medium effects for an $A'$ and assume a dynamic range of 10~meV--10~eV.}
}
\label{fig:scattering-ER}
\end{figure}
The following pages contain several representative figures, showcasing that orders of magnitude of new parameter space can
be probed beyond existing constraints with first-generation small-scale experiments.
{\it We emphasize that these plots are meant to be illustrative of the enormous parameter space
that could be covered in the next few years by several small projects. Not all theoretical ideas for experiments, and proposed
experimental projects, appear on a plot. Moreover, additional motivated DM candidates exist that are not represented with a plot in this
white paper.}
\begin{itemize}
\item Fig.~\ref{fig:scattering-ER}: Four plots show projections for DM scattering off an electron through a mediator
with mass $\gg\ $keV ({\bf left two plots}) and mass $\ll\ $keV ({\bf right two plots}),
leading to a momentum-independent scattering ($F_{\rm DM}(q)=1$) and momentum-dependent scattering ($F_{\rm DM}=(\alpha m_e/q)^2$),
respectively.
The {\bf bottom left} plot shows several model scenarios in which the scattering occurs through a dark photon with $m_{A'}= 3 m_\chi$.
Five theory targets are shown, four of which can be probed by first-generation small-scale experiments sensitive to
electron recoils.
Additional projections from experiments sensitive to nuclear recoils are also included. This assumes that the mediator interacts with
both electrons {\it and} nuclei, as is the case for a dark photon mediator but not necessarily the case for other types of mediators.
The nuclear-recoil projections have been converted to electron-recoil
projections using
\begin{equation}\label{eq:NR-to-ER}
\bar\sigma_e = 4 \ \frac{\mu_{\chi,e}^2}{\mu_{\chi,N}^2}\ \sigma_{\rm N}\,.
\end{equation}
We show two Majorana targets, since the Majorana target for DM-nucleus scattering differs by a factor of $\mu_{\chi,n}^2/\mu_{\chi,e}^2$ from the
Majorana target for DM-electron scattering.
It is exciting to note that new accelerator-based probes (not shown) can cover similar parameter space, and a detection of
a new particle with both probes would provide compelling evidence for the properties of the DM particle.
The {\bf bottom right} plot shows a model scenario in which the scattering occurs through an ultralight dark-photon mediator ($m_{A'}\ll\ $keV).
A benchmark target (freeze-in) is shown, which can be probed by first-generation small-scale experiments sensitive to electron recoils.
Direct-detection experiments are essential to probe this benchmark target, since they are sensitive to low momentum transfers (accelerator-based probes have large momentum transfers).
Note that proposed nuclear-recoil searches, shown on bottom left plot, have sensitivity to this dark-photon mediator scenario;
these projections are left to future work.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{figs/CosmicVisions_axionDM_3evtskgyr}
\includegraphics[width=0.48\textwidth]{figs/CosmicVisions_aprimeDM_3evtskgyr}
\caption{
{\bf Event rates} for the {\bf absorption by an electron of axion-like particle (ALP) DM} ({\it left})
and {\bf dark-photon ($A'$) DM}
({\it right}), assuming that the ALP/$A'$ constitutes all the DM~\cite{Bloch:2016sjj,Hochberg:2016sqx,Hochberg:2016ajh}.
The {\it solid colored lines} show the ALP-electron coupling $g_{aee}$ or the kinetic-mixing parameter $\epsilon$
needed to produce 3 events for an exposure of 1 kg-year.
{\it Blue regions} show constraints from WIMP direct-detection experiments
\cite{Aprile:2014eoa,Armengaud:2013rta,Ahmed:2009ht,Aalseth:2008rx,An:2014twa,Bloch:2016sjj,Aguilar-Arevalo:2016zop}.
Gray regions show stellar cooling constraints.
In-medium effects are included for all $A'$ constraints and projections.
Shaded orange region in left plot is consistent with an ALP possibly explaining the white dwarf luminosity function.
}
\label{fig:absorption-WG1}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=0.48\textwidth]{figs/CVWP_WIMP_limit}
~~\includegraphics[width=0.48\textwidth]{figs/CVWP_WIMP_limit_zoom}
\caption{
{\bf Left:}
Constraints and projections (90\% c.l.) for the {\bf DM-nucleon scattering} cross section.
Thick gray lines are current world-leading constraints~\cite{Aprile:2017iyp,Amole:2017dex,Agnese:2015nto,Angloher:2015ewa}.
Projections are shown with solid/dashed/dotted lines indicating a short/medium/long timescale, respectively,
with the same meaning as in Fig.~\ref{fig:scattering-ER}.
Blue lines denote the DoE G2 experiment projections.
Yellow region denotes the WIMP-discovery limit from~\cite{Ruppin:2014bra}
extended to lower masses for He-based experiments.
{\bf Right:}
As in left plot, but focused on the 100~MeV to 10~GeV DM mass range.
}
\label{fig:scattering-NR}
\end{figure}
\begin{figure}[b!]
\includegraphics[width=0.48\textwidth]{figs/SD}
\caption{
Constraints from direct-detection experiments (solid lines), colliders and indirect detection (labelled, dashed), and projections for new experiments (labelled, dashed/dotted lines) for the {\bf spin-dependent scattering cross section for protons or neutrons off nuclei}.
Constraints are shown from PICO-60~\cite{Amole:2017dex}, LUX~\cite{Akerib:2017kat},
PICO-2L~\cite{PhysRevD.93.061101}, PICO-60 ${\mathrm{CF}}_{3}\mathrm{I}$~\cite{PhysRevD.93.052014},
and IceCube~\cite{Aartsen2017}. Projections from PICO (proton) and LZ (neutron) are also shown~\cite{Akerib:2016lao}.
The expected background from atmospheric, supernova and solar neutrinos in both xenon and C$_3$F$_8$
is shown by the shaded regions~\cite{Ruppin:2014bra}.
}
\label{fig:SD}
\end{figure}
\item Fig.~\ref{fig:absorption-WG1}: Event rates for an electron absorbing bosonic DM, such as an axion-like particle ({\bf left}) or
a dark photon ({\bf right}).
\item Fig.~\ref{fig:scattering-NR} shows projections for DM scattering off nuclei for a wide mass range ({\bf left}),
and focused on the 100~MeV to 10~GeV mass range ({\bf right}).
We note that the neutrino floor for low DM masses was calculated by assuming a liquid He-4 detector with 100\% recoil energy acceptance across the entire energy range, with coherent neutrino-nucleus scattering as the only background, and no nuisance parameters. Four combinations of exposure and energy threshold, which were chosen to represent an expected background rate of 40 events, were calculated and combined by choosing the lowest cross-section at each WIMP mass: (1 meV, 100 kg-yr), (90 eV, 350 kg-yr), (380 eV, 2000 kg-yr), (1500 eV, 3500 kg-yr).
\item Fig.~\ref{fig:SD}: Projections for DM scattering off nuclei through spin-dependent interactions.
\end{itemize}
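The nuclear-recoil-to-electron-recoil conversion of Eq.~\ref{eq:NR-to-ER} quoted above is straightforward to evaluate; a minimal sketch with illustrative inputs (the cross-section value is not taken from any particular experiment):

```python
# Convert a DM-nucleon cross-section limit into the equivalent
# DM-electron cross section, sigma_e-bar = 4 (mu_{chi,e}/mu_{chi,N})^2
# * sigma_N, as in Eq. (NR-to-ER) of the text. Masses in GeV.

M_E = 0.000511    # electron mass (GeV)
M_N = 0.938       # nucleon mass (GeV)

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

def sigma_e_from_sigma_n(sigma_n_cm2, m_chi_gev):
    """Electron-scattering cross section (cm^2) equivalent to a
    nucleon-scattering cross section sigma_n_cm2."""
    ratio = reduced_mass(m_chi_gev, M_E) / reduced_mass(m_chi_gev, M_N)
    return 4 * ratio ** 2 * sigma_n_cm2

# For a 1 GeV DM candidate, a nuclear-recoil limit of 1e-40 cm^2 maps
# to an electron-scattering cross section of about 4.5e-46 cm^2:
print(sigma_e_from_sigma_n(1e-40, 1.0))
```

The strong suppression by $(\mu_{\chi,e}/\mu_{\chi,N})^2$ is why the Majorana targets for DM-electron and DM-nucleus scattering sit at such different cross sections.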
These figures contain various solid, dashed, or dotted lines, which show an estimate of the time to start taking data,
corresponding approximately to a short, medium, and long timescale, respectively.
A solid line indicates that the technology exists and that data taking has either already started or can begin in $\lesssim 2$ years.
Moreover, the assumption of zero backgrounds (radioactive or dark currents) is reasonable for the indicated thresholds.
A dashed line indicates experiments that require more R\&D, and if the R\&D is successful, data taking could start in $\sim 2-5$~years and
potentially reach the sensitivity shown for the indicated target mass.
A dotted line indicates longer-term R\&D efforts.
We note that all exposures are approximate; experiments may run for more or less than 1-year, and may be deployed
in stages with increasing target mass.
\subsection{Summary of Key Points}
\begin{itemize}
\item The direct detection of DM is a crucial experimental avenue to identify the nature of the DM particle.
\item The direct-detection community is healthy and active, with several clear ideas to go beyond the funded G2 experiments.
\item There are numerous science targets for searches for WIMPs and sub-GeV DM.
These include {\it sharp targets} in parameter space from simple, predictive, and motivated DM candidates, as well as several general
{\it regions of interest} in parameter space in which DM could hide. In most cases, the proposed experimental ideas and experiments
probe {\it several} sharp targets {\it and} general regions of interest.
\item {\bf Several small projects, each with a cost of less than a few million dollars, can probe {\it orders of magnitude of new parameter space}, covering both sharp targets and general regions of interest for WIMPs as well as sub-GeV DM down to $\mathcal{O}$(MeV) masses,
with project start-dates of FY19 and even earlier.}
\item Research and Development (R\&D) funding, in parallel to funds for small-scale projects,
allows future projects to push below MeV masses and improve cross section sensitivities on a few-year timescale.
\end{itemize}
Small-scale experiments with a few-million dollar price tag can explore vast areas of new parameter space beyond G2, since they use
novel detection techniques and/or new target materials, and in many cases make use of advances in detector technology that
allow for lower thresholds.
Similarly to the Large Hadron Collider, which probed unexplored parameter space immediately when turning on due to its higher center-of-mass
energy, new small-scale experiments can probe unexplored parameter space immediately when turning on due to lower thresholds.
In this way, detectors with target masses as small as $\sim 1$~gram to $\sim 1$~kg can make enormous improvements over existing
sensitivities in a parameter region complementary to that probed by the G2 experiments and for only a fraction of their cost.
\clearpage
\section{Detection of Ultra-Light (sub-milli-eV) Dark Matter}
\label{sec:WG2experiments}
The axion and hidden photon are well-motivated dark matter candidates with models providing both viable production mechanisms and testable phenomenology. To date, only a tiny fraction of the parameter space for such ultralight dark matter (as discussed in Section \ref{sec:Science-ultralight}) has been probed by existing experiments. Excitingly, thanks to significant growth in interest in this area recently, there are now experiments or proposals which cover the entire viable mass range down to $10^{-22}$ eV. These experiments are highly complementary in their mass reach as well as coupling type; together they search for all four different possible types of couplings the dark matter can have (discussed in Section \ref{sec:Science-ultralight}). Figure \ref{fig:massrange} is a rough cartoon of the complementary nature of these experiments, both in mass and coupling. In particular, it now seems likely that a combination of these experiments can reach sensitivity to the QCD axion over a broad range of axion masses.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\textwidth]{figs/massrangefig}
\caption{\label{fig:massrange} Mass range for ultralight dark matter. Very rough optimal frequency ranges are shown for each experimental technique discussed in WG2. Names of particular experiments and proposals discussed in this section are shown below their corresponding technique. The names are color-coded by the DM coupling being searched for. This is only meant as a cartoon -- for details of each experiment's sensitivity see the relevant discussion below.}
\end{center}
\end{figure}
Searches for dark matter in this mass range use techniques which are very different from those used in traditional particle physics experiments. In this range the dark matter can more usefully be thought of as a field (or wave) oscillating at a frequency equal to its mass. Unlike a traditional particle detector (e.g. WIMP detection experiments) which looks for the energy deposited by a single hard collision, detectors searching for such light dark matter must look for the collective effect of all the dark matter particles in the wave. This is analogous to gravitational wave detectors which search not for individual graviton scattering but for the semiclassical effect induced by the entire wave. Thus, experiments searching for low mass bosonic dark matter utilize high precision sensors of continuous wave signals as opposed to the traditional impulse detectors used for single particle scattering. Such sensors come from many areas of physics including condensed matter and atomic physics and are based on a wide range of techniques such as high-precision magnetometry, nuclear magnetic resonance (NMR), electromagnetic resonators, atomic clocks, and laser interferometry.
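The mass-frequency correspondence that organizes all of these experiments is simply $f = mc^2/h$; a minimal conversion sketch:

```python
# An ultralight bosonic DM field oscillates at frequency f = m c^2 / h.
# Conversions between DM mass (eV) and oscillation frequency (Hz):

H_EV_S = 4.135667e-15  # Planck constant in eV*s

def mass_ev_to_freq_hz(m_ev):
    return m_ev / H_EV_S

def freq_hz_to_mass_ev(f_hz):
    return f_hz * H_EV_S

# A 1 micro-eV axion oscillates near 242 MHz, squarely in the
# microwave-cavity haloscope band:
print(round(mass_ev_to_freq_hz(1e-6) / 1e6, 1))
```

This is why the experiments below map so directly onto frequency bands: microwave cavities cover $\mu$eV-scale masses, lumped-element circuits reach down to neV scales, and precision clocks and interferometers probe the lightest masses.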
Indeed, the ability to reach pristine sensitivity to very weakly coupled bosonic dark matter with low cost experiments relies on the cross-disciplinary transfer of detector technology originally developed for other applications. For example, some of these dark matter experiments rely on the high precision now achievable in magnetometers or atomic clocks developed by the quantum electronics and atomic physics communities. Looking beyond the quick gains obtained from the initial technology transfer, the new dark matter application also provides vital and immediate motivation for further improvement of the sensitivities of these detector technologies which would also benefit science beyond just dark matter. For example, torsion balances are also one of the best ways to search for new forces and equivalence principle violation. Atomic interferometers allow searches for gravitational waves as well as sensitive equivalence principle tests and measurements of the fine structure constant, and have practical applications in geological mapping and inertial navigation. Several of these technologies also have connections with work in quantum information and may be valuable for both fields. As the above examples illustrate, improvement of these continuous wave detector technologies would have many broader impacts beyond just the bosonic dark matter search and cross-disciplinary collaborations should be encouraged.
The new direct dark matter detection projects discussed below are either already operating or constructing pathfinder experiments, or are in an advanced stage of hardware prototyping. Future support would enable construction of full-sized detectors with significant reach into uncovered dark matter parameter space. The timeline for these future experiments is around a few years in most cases. Roughly ordered from high mass coverage to low mass coverage, these experiments include:
\begin{enumerate}
\item ADMX
\item HAYSTAC
\item LC Circuit
\item DM Radio
\item ABRACADABRA
\item CASPEr-electric
\item Torsion Balances
\item Atom interferometry
\end{enumerate}
Note that while ADMX is in fact a current DOE Generation 2 dark matter project, future funding would enable upgrades to the existing project which would expand the axion frequency range covered. The first five experiments are electromagnetic detectors, with the first two using cavity resonators and the next three using lumped element (`LC circuit') resonators. The first five experiments are all designed to reach sensitivity to the QCD axion as well as to simultaneously provide world-leading sensitivity to hidden photons. CASPEr-electric searches specifically for the model-defining axion-gluon coupling, and the last two techniques are sensitive to ultralight scalar dark matter.
There are also two experiments which are not directly searching for dark matter, but can cover interesting axion dark matter parameter space:
\begin{enumerate}
\item Mini-IAXO
\item ARIADNE
\end{enumerate}
And then additionally there are several areas of R\&D work which would enable significant future dark matter experiments. These include work on high frequency electromagnetic resonators to allow detectors to push above cavity frequencies, high field magnet development for many axion experiments, and a full-scale IAXO project.
\subsection{Dark Matter Direct Detection Experiments}
\subsubsection{ADMX}
\noindent {\bf Ongoing search for QCD axions from 500~MHz-2~GHz and 2-10~GHz extension:}
The second generation Axion Dark Matter Experiment aims to discover or exclude QCD axion dark matter with sensitivity even to the most pessimistic axion-photon couplings predicted by the Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) models at axion masses in the well-motivated 500~MHz-10~GHz range. ADMX is an axion haloscope experiment that relies on dark matter axions converting into microwave photons in a strong magnetic field \cite{Sikivie:1983ip}. This conversion is enhanced by a tuned high-Q cavity resonator, and the resulting photons are detected in a radiofrequency receiver. ADMX at present uses a 50 cm bore 7.6 T magnet containing a single cylindrical microwave cavity, frequency-tuned by moving rods from the edge to the center of the cavity. Previous versions of ADMX have already demonstrated sensitivity to the optimistically-coupled KSVZ axions. Key to the sensitivity of ADMX G2 is the low thermal background obtained with a dilution refrigerator that cools the experiment to below 150 mK, and the quantum-limited SQUID and Josephson parametric amplifiers that are the first stage of the receiver chain. At higher frequencies, the single cavity will be replaced with multiple power-combined cavities with higher resonant frequencies, but tuned in a similar manner.
Sited at the University of Washington, ADMX G2 is an approved and funded DOE Generation 2 project for its first stage of operations, which will cover axion masses up to 2~GHz. The experimental collaboration includes around 30 members at 10 institutions. The experiment is currently taking data with DFSZ sensitivity in the 600~MHz range. The hardware to explore up to 1~GHz is constructed, and the hardware to reach 2~GHz is currently being built. Operations in the 500~MHz-1~GHz range will be completed in 2017, with 1-2~GHz covered the following year. Designs are being prepared for the 2-10~GHz resonators and a proposal to continue the project in this higher frequency range will be submitted shortly. Covering the entire frequency range under the current strategy is estimated to take 6 years.
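The connection between cavity size and search frequency can be sketched with the TM$_{010}$ mode of an empty cylindrical cavity; the tuning rods shift the resonance considerably, so these numbers are only indicative:

```python
# TM010 resonant frequency of an empty cylindrical cavity:
# f = x01 * c / (2 pi R), where x01 = 2.405 is the first zero of
# the Bessel function J0. Tuning rods (not modeled) shift this.
import math

C = 2.99792458e8   # speed of light (m/s)

def tm010_freq_hz(radius_m):
    """TM010 resonant frequency (Hz) of an empty cylindrical cavity."""
    return 2.405 * C / (2 * math.pi * radius_m)

# A cavity filling a ~50 cm bore (radius 0.25 m) resonates near
# 460 MHz, at the bottom of the ADMX G2 search range; reaching
# 10 GHz requires a radius of only ~1 cm, hence the need for
# power-combined multi-cavity arrays at high frequency:
print(round(tm010_freq_hz(0.25) / 1e6))
print(round(tm010_freq_hz(0.0115) / 1e9, 1))
```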
\bigskip
\noindent {\bf Longer-term R\&D for axion mass \textgreater 10~GHz:}
A subset of ADMX collaborators are engaged in longer term detector R\&D to enable dark matter axion searches at even higher masses. This work includes high frequency resonators being developed at the University of Washington and LLNL, and novel single photon detectors being developed at FNAL and LLNL.
The approach of using cavity resonators with size matched to the photon wavelength will work up to 10 GHz, but at higher frequencies a number of issues need to be addressed. Because of the smaller size of the individual cavities, the large number of combined resonators required to maintain a large detection volume becomes intractable at higher frequencies. Furthermore, the quality factor of the high frequency cavities decreases, and the quantum-limited noise of microwave amplifiers increases with frequency. The ADMX R\&D addresses these issues with two separate thrusts. The first is to develop sophisticated, tunable multiwavelength resonators that have good coupling to axions while maintaining large volume and high Q. One example of these is an open Fabry-Perot type resonator with strategically placed dielectrics to allow good coupling to the axion. The second thrust is to develop single microwave photon detectors that evade the quantum-limited measurement noise by putting the backreaction into the unobserved phase quadrature. The most promising direction for this thrust is to use the single photon manipulation hardware developed by the quantum computing community.
The strategy is to develop the cavity and detector technologies separately in prototype experiments that probe previously unexplored axion-photon couplings, and when they are mature, combine them to build an experiment sensitive to the weaker couplings predicted by QCD axion models. Prototype tunable resonators have been built and operated at room temperature, and work is being done to develop higher-Q cryogenic prototypes. Hardware for single-frequency photon counters has been constructed and a prototype detector is under development. This R\&D is primarily funded by the Heising-Simons Foundation. In order to continue the US axion program at these frequencies, the right timescale to transform this R\&D into a full experimental proposal is 6 years, so that it coincides with the end of the ADMX G2 program.
\subsubsection{HAYSTAC}
Another cavity-based haloscope experiment, the HAYSTAC (Haloscope At Yale Sensitive To Axion CDM) axion search \cite{Sikivie:1983ip}, supported by the NSF and the Heising-Simons Foundation, is both an innovation testbed and a data pathfinder in the 2.5-12 GHz ($\sim$10-50 $\mu$eV) mass range. A collaboration of Yale (S. Lamoreaux, PI), UC Berkeley (K. van Bibber, PI) and the University of Colorado (K. Lehnert, PI) began design work in 2011, commissioned the experiment in 2015, and transitioned to data-taking in 2016. First science results have recently been published \cite{Brubaker:2016ktl}, accompanied by an instrumentation paper \cite{Kenany:2016tta}.
The experiment as currently configured utilizes a superconducting 9 T magnet, a dilution refrigerator, and Josephson Parametric Amplifiers (JPAs); it is the first experiment to do so. Also for the first time, the experiment has achieved an operational system noise temperature only a factor of 2 above the Standard Quantum Limit, providing the best limits to date on the axion-photon coupling at these higher frequencies with a cavity volume of only 1.5 L.
In the coming year, HAYSTAC will be validating in operations two critical technologies: (i) A squeezed-state receiver developed at Colorado, which by evading the Standard Quantum Limit will dramatically improve the sensitivity and scan speed of the experiment. The switch-over to this system will occur in Summer 2017. (ii) A new cavity design that will enable the use of high harmonic TM$_{0n0}$ modes for higher frequency searches, and with the spectrum cleansed of interfering TE modes. This will occur in early 2018. Should both these technologies meet spec in operation, HAYSTAC will have validated a microwave cavity experiment as a full system with sensitivity between KSVZ and DFSZ models, up to 50 $\mu$eV in mass. It should be noted that this development and demonstration requires no new resources; it is already funded and in progress.
An ultimate experiment that could reach up to 100 $\mu$eV, or beyond, with sensitivity better than DFSZ would require some additional R\&D for the development of a resonator based on Photonic Band Gap (PBG) concepts and metamaterials; this work has already begun at Berkeley. It will also require new JPA geometries to be developed at Colorado. To integrate and commission an operational experiment based on these capabilities, Yale will also need to procure a higher field Nb$_3$Sn magnet of order 10 L volume. Nonetheless, including student/postdoc support during the R\&D phase, this project would fall into the small experiments category. If the R\&D to purge the spectrum of TE mode-crossings at all frequencies is fully successful, the run time to cover up to 25 GHz (100 $\mu$eV) could be very short, of order 5 years.
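The Standard Quantum Limit referenced above sets a frequency-dependent noise floor, $T_{\rm SQL} = hf/k_B$, for phase-insensitive linear amplifiers; a quick sketch of its size in the HAYSTAC band:

```python
# Standard-quantum-limit noise temperature for a linear amplifier:
# T_SQL = h * f / k_B. Grows linearly with frequency, which is one
# reason high-frequency haloscopes pursue squeezing or photon counting.

H = 6.62607015e-34    # Planck constant (J*s)
KB = 1.380649e-23     # Boltzmann constant (J/K)

def t_sql_kelvin(freq_hz):
    """SQL noise temperature (K) at frequency freq_hz."""
    return H * freq_hz / KB

# At 5 GHz the SQL is ~0.24 K; a system running a factor of 2 above
# the SQL, as reported by HAYSTAC, then sits near 0.5 K:
print(round(t_sql_kelvin(5e9), 2))
```

A squeezed-state receiver improves on this limit by redistributing quantum noise between quadratures, which is the motivation for the Colorado upgrade described above.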
\subsubsection{LC Circuit}
Another method to search for low-mass dark-matter axions is by using a lumped element $LC$ resonator instead of a cavity resonator in the strong magnet~\cite{sikivie2014proposal}. The premise of the method is that the axion field alters Maxwell's equations; in the presence of an external magnetic field $B_0$ there is an effective current parallel to the external field, $\vec j = -g\vec B_0{\partial a \over \partial t}$ where $a$ is the axion field and $g$ the axion coupling to two photons. The current oscillates at a frequency $\omega = (m_a c^2 +K)/\hbar$ where $m_a$ is the axion mass and $K\approx m_a c^2/10^6$ is the kinetic energy due to the axion orbital motion in our Galaxy. The current $\vec j$ produces by Ampere's law an AC magnetic field $\vec B_a$. A loop antenna occupying half the magnet bore with its plane perpendicular to $\vec B_a$ will have an emf induced in it by the time-varying flux associated with the field.
As in the cavity detector exemplified by ADMX, the signal is enhanced by making the antenna circuit resonant at frequency $\omega$. The circuit is tuned by a variable capacitor and the output of the $LC$ circuit is brought to a low noise amplifier, mixed to audio frequencies, and detected.
The $LC$ circuit is sensitive to QCD axions and also to low-mass axion-like particles~\cite{sikivie2014proposal}. Using the ADMX magnet and operated at millikelvin temperatures, the circuit would be able to scan the $g_{a\gamma\gamma}$ vs. $m_a$ region shown in grey in Figure \ref{fig:LCCircuit}. Because this search involves a reuse of the ADMX magnet, the cost is not large. Capital costs are estimated to be under \$1M, mainly to add another stage of cooling; the experiment should be operated at 1 mK or better, reachable with adiabatic demagnetization. Operating costs will be similar to the operating costs of ADMX.
The University of Florida is planning a ``pilot experiment,'' building an LC circuit detector as a PhD student thesis project. It will employ a NbTi loop in series with a parallel-plate capacitor. The target $Q$ is 10,000 and a goal of the pilot experiment is to investigate the challenges of achieving this performance. It will use an 8 T magnet (15 cm diameter, 45 cm length). The magnet volume is about 4\% of the ADMX magnet. The experiment will scan 12 to 100 neV (3-30 MHz). The sensitivity goal of the pilot experiment, with the loop at 0.4 K, is shown as the blue-outlined region in Figure \ref{fig:LCCircuit}.
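The tuning requirement for such a circuit follows from the standard resonance condition $f = 1/(2\pi\sqrt{LC})$; a sketch with a hypothetical pickup inductance (the 1~$\mu$H value is an assumption for illustration, not the Florida design value):

```python
# Lumped-element LC resonance: f = 1 / (2 pi sqrt(L C)).
# The inductance value used below is hypothetical.
import math

def lc_resonance_hz(l_henry, c_farad):
    """Resonant frequency (Hz) of a lumped-element LC circuit."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

def capacitance_for(f_hz, l_henry):
    """Capacitance (F) needed to tune an inductance L to frequency f."""
    return 1.0 / ((2 * math.pi * f_hz) ** 2 * l_henry)

# With a hypothetical 1 uH pickup loop, covering the pilot
# experiment's 3-30 MHz band requires sweeping the variable
# capacitor over roughly two decades, ~2.8 nF down to ~28 pF:
print(capacitance_for(3e6, 1e-6), capacitance_for(30e6, 1e-6))
```

The two-decade capacitance sweep illustrates why a continuously variable capacitor is the natural tuning element in this frequency range.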
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{figs/LCCircuitsfig.pdf}
\caption{\label{fig:LCCircuit} (Left) Sensitivity of the $LC$ circuit in the ADMX solenoid magnet. (Right) The SNR=1 sensitivity curves in the axion-photon coupling $g_{a\gamma\gamma}$ vs axion mass
$m_a$ for ABRACADABRA in broadband-only and resonant-only readout
modes after 1 year of data taking. The assumed aspect ratio for the toroidal magnet is
$R_{\rm in}=\frac12R_{\rm out}=\frac13 h$. The QCD axion region is
in shaded red.}
\end{figure}
\subsubsection{DM Radio}
Dark Matter Radio (DM Radio) is a well-shielded, tunable lumped-element $LC$ resonator for the detection of sub-eV hidden photon and axion dark matter\cite{dmradio2015prd}. A superconducting shield surrounding the resonator blocks ordinary electromagnetic signals, but is easily penetrated by hidden photons and axions. Hidden photons/axions generate an effective background current density within the shield that couples to the inductor. If the resonator is matched to the frequency (mass) of the hidden photon/axion field, it will ring up and generate a measurable voltage that may be sensed by a SQUID amplifier. The expected reach of the liter-scale DM Radio Pathfinder detector, a 30 L Stage 2 detector, and a 1 m$^3$ Stage 3 detector are shown in Fig.~\ref{fig:dmradioscience}. The Pathfinder experiment, funded through the SLAC LDRD program\cite{silva2017design}, is currently under construction at Stanford University and is expected to begin taking data in summer 2017. The development of the $\sim$\$1.3M Stage 2 experiment has received initial support from the Heising-Simons Foundation. A multi-site, multi-orientation implementation of the Stage 3 experiment would be within the purview of a new small experiments program. The $\sim$15-person collaboration is a mixture of scientists from SLAC, Stanford, KIPAC, UC Berkeley, UC Davis, and Princeton.
\begin{figure}
\begin{center}
\includegraphics[width=3.1in]{figs/dmradio-hp.pdf} \ \ \ \includegraphics[width=3.1in]{figs/dmradio-axion.pdf}
\caption{\label{fig:dmradioscience}Left: DM Radio sensitivity to hidden photons: upper-level constraints on photon-hidden-photon coupling $\varepsilon$ as a function of hidden photon mass $m_{\gamma'}$. Right: DM Radio sensitivity to axions: upper-level constraints on axion-photon coupling $g_{a\gamma\gamma}$ as a function of axion mass $m_a$ and applied DC magnetic field $B$.}
\end{center}
\end{figure}
\subsubsection{ABRACADABRA}
In contrast to these resonant approaches, ABRACADABRA is a 1\,m scale {\it broadband} axion search designed to search
for axion and axion-like dark matter over the mass range
$10^{-12}\lesssim m_a\lesssim 10^{-6}$\,eV. The detector itself
consists of a toroidal magnet with a large magnetic field (of order a
tesla), with a superconducting pickup loop inside which is read out by a
SQUID current sensor. The mass range to which ABRACADABRA is
sensitive corresponds to oscillation frequencies
$\nu = m_a/2\pi$ of roughly $10^2$-$10^8$\,Hz. In this range, the axion
wavelength is very long compared to the size of the detector, and the
oscillating axion DM field generates \emph{effective currents} in the
toroid which in turn generate an oscillating magnetic field through
the center of the toroid. A sensitive magnetometer should be able to
detect this oscillating field \cite{Kahn:2016aff}.
The integrated flux through the center of the toroid is given by
\begin{equation}
\Phi_a(t)=g_{a\gamma\gamma}B_{\rm max}\sqrt{2\rho_{\rm DM}}\cos(m_at)\,\mathcal{G}_V V,
\end{equation}
where $B_{\rm max}$ is the maximum field in the toroid, $V$ is its
volume, and $\mathcal{G}_V$ is a geometric factor that depends on the
aspect ratio and is typically $\sim$0.1.
The key advantage of this approach is that the field in the central
region of the toroid should ordinarily be zero, so with sufficient
shielding ABRACADABRA will be searching for a small signal on top of
a nearly zero background.
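To get a feel for the scale of the signal, the flux formula above can be evaluated numerically. The benchmark values below ($B_{\rm max}=1$\,T, $V=1$\,m$^3$, $\mathcal{G}_V=0.1$, $g_{a\gamma\gamma}=10^{-16}$\,GeV$^{-1}$, $\rho_{\rm DM}=0.3$\,GeV/cm$^3$) are illustrative assumptions, not the collaboration's design parameters:

```python
import math

# Evaluate Phi_a = g * B_max * sqrt(2 rho_DM) * G_V * V in natural
# units (hbar = c = 1, Heaviside-Lorentz) and convert to webers.
HBARC = 1.973269804e-7          # eV m
TESLA = 195.35                  # eV^2 per tesla
ALPHA = 1 / 137.035999
E_NAT = math.sqrt(4 * math.pi * ALPHA)   # electron charge, natural units
M_INV = 1.0 / HBARC             # eV^-1 per metre
WEBER = TESLA * M_INV**2        # natural flux units per weber

def flux_wb(g_inv_gev, b_tesla, rho_gev_cm3, gv, vol_m3):
    """Flux amplitude in Wb for the toroid formula above."""
    g = g_inv_gev * 1e-9                          # eV^-1
    b = b_tesla * TESLA                           # eV^2
    rho = rho_gev_cm3 * 1e9 / (1e-2 * M_INV)**3   # eV^4
    vol = vol_m3 * M_INV**3                       # eV^-3
    return g * b * math.sqrt(2 * rho) * gv * vol / WEBER

phi0 = (2 * math.pi) / (2 * E_NAT) / WEBER   # flux quantum h/2e, cross-check
phi = flux_wb(1e-16, 1.0, 0.3, 0.1, 1.0)
print(phi, phi / phi0)   # ~1e-22 Wb, i.e. ~5e-8 flux quanta
```

Even for a tesla-scale, cubic-meter toroid the flux amplitude is only of order $10^{-7}$ flux quanta, which is why a SQUID current sensor and careful shielding are essential; recovering the known flux quantum from the same constants serves as a check on the unit conversions.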
Depending on the geometry, ABRACADABRA could be sensitive to the QCD
axion regime within a few years of continuous running. A $\sim$10\,cm prototype is being built at MIT and is expected to have data before the end of
2017. This prototype will not only give a better idea of the
challenges facing a larger 1\,m version, but will itself be sensitive
to unexplored regions of parameter space after only 1 month of data
taking.
\subsubsection{CASPEr}
The Cosmic Axion Spin Precession Experiment (CASPEr) searches for axion and axion-like dark matter via its interaction with nuclear spins. CASPEr-electric, the US-based experiment located at Boston University, searches for the axion-gluon coupling which gives rise to an oscillating nucleon electric dipole moment. The collaboration consists of two experimental groups and two theory groups. The primary physics goal is to reach experimental sensitivity that allows searching for axion and axion-like dark matter with couplings at the QCD axion level in a wide range of axion masses: approximately $10^{-12}$~eV to $10^{-6}$~eV, corresponding to frequencies $\sim$200~Hz to 200~MHz~\cite{Budker:2013hfa}.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=6cm]{figs/CASPErfig}
\caption{(Left) CASPEr experimental scheme for measuring an AC nucleon electric dipole moment $d_N$. (Right) CASPEr projected sensitivity to the EDM coupling $g_d$ where $d_N = g_d \cdot a$ and $a$ is the local amplitude of the coherently oscillating dark matter wave.}
\label{fig:CASPEr}
\end{center}
\end{figure}
The experimental approach utilizes the existing technology of magnetic resonance and precision sensors. Briefly, the nuclear spins ($I=1/2$) in a solid ferroelectric sample experience an oscillating torque due to interaction with the axion dark matter field. If the constant bias magnetic field $B_0$ is such that the frequency of this torque (i.e., the axion Compton frequency $\omega_a$) matches the nuclear spin Larmor frequency, the spins tilt and undergo Larmor precession, creating a transverse oscillating magnetization that is detected by a sensitive magnetometer, such as a SQUID (see Fig.~\ref{fig:CASPEr}). The search strategy is to sweep the value of the bias magnetic field, thus sweeping through a range of axion masses.
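The resonance-matching condition can be illustrated with a short estimate of the bias field required to bring the Larmor frequency onto a given axion frequency, $2\pi\nu_a = \gamma B_0$. The proton gyromagnetic ratio is used here purely as an illustrative stand-in for the sample nucleus:

```python
# Bias field B0 that tunes the nuclear Larmor frequency to a given
# axion Compton frequency: nu_a = (gamma / 2 pi) * B0.  The proton
# gyromagnetic ratio is an illustrative stand-in for the sample nucleus.
GAMMA_P = 42.577478e6    # proton gamma / 2 pi, Hz per tesla

def bias_field_tesla(nu_axion_hz, gamma_hz_per_t=GAMMA_P):
    """Bias field that puts the Larmor frequency on the axion frequency."""
    return nu_axion_hz / gamma_hz_per_t

# The quoted ~200 Hz - 200 MHz band corresponds to sweeping B0 from
# the microtesla regime up to a few tesla.
print(bias_field_tesla(200.0))    # ~5e-6 T
print(bias_field_tesla(200e6))    # ~4.7 T
```

The upper end of the band thus requires a tesla-scale magnet, while the lower end demands precise control of very small fields.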
The pathfinder experiment (CASPEr-now) is currently under construction; it will prove feasibility and place limits on axion-like dark matter beyond current astrophysical constraints (see Fig.~\ref{fig:CASPEr}). A full experiment would be similarly cost-effective and reach couplings at the QCD axion level over a wide range of masses on a 3-5 year time scale.
\subsubsection{Torsion Balances}
The sensitivity range for ultra-light dark matter can be dramatically extended by building two new state-of-the-art torsion balances. One device will measure the time-dependent differential-acceleration signature of directly-coupled (hidden photon) dark matter \cite{Graham:2015ifn} and the other will probe the time-dependent spin-precession signature of spin-coupled (axion) dark matter \cite{Graham:2013gfa}. Together these detectors will probe both classes of ultra-light dark matter over the lightest $\sim$30\% of their possible log mass-ranges, from 10$^{-22}$ eV to 10$^{-15}$ eV, including the very well-motivated target of 10$^{-22}$ eV that would resolve two outstanding discrepancies of conventional cold-dark-matter simulations \cite{Hu2000,Hui2017}.
These devices will improve upon the very-high-sensitivity rotating torsion-balance techniques developed by the Eot-Wash group that set the tightest bounds on DC long-range forces \cite{Wagner2012,Heckel2008}. The dark matter-induced twist on the pendulum will be triply modulated, at the product of the turntable, Earth's rotation, and DM Compton frequencies \cite{Graham:2015ifn}. This distinctive signal will allow highly effective suppression of systematics and $1/f$ noise, allowing the new DM experiment to exploit fully the raw sensitivity of the torsion balance.
The following innovations will increase the raw sensitivity relative to previous torsion balances:
\vspace{-0.6\topsep}
\begin{enumerate}
\itemsep-0.2em
\item ultra-low-noise, high-stability fused-silica suspension fibers (40-80 times reduction in the low-frequency side of the noise)
\item longer optical levers and interferometric twist-angle readout (10-100 times reduction in the high-frequency side of the noise)
\item beryllium-polypropylene test bodies (4 times increase in sensitivity to $B-L$ coupled DM) \cite{Adelberger2009}
\item very high stability turntables (to allow higher-frequency rotation)
\end{enumerate}
\vspace{-0.6\topsep}
Combined, these improvements will give a factor of 100 or greater increase in sensitivity to the DM coupling strength.
The R\&D and the construction of the two new balances can be accomplished in 2 years, by two FTE researchers collaborating with the Eot-Wash group. Once operational and tested at the DOE lab at the Center for Experimental Nuclear Physics and Astrophysics at the University of Washington, the apparatus would be moved to the Sanford Underground Research Facility where environmental noise is 10-100 times smaller \cite{Harms2010}. It should be noted that the sensitivity improvements and development-time estimates given above are conservative, being based on the decades of torsion balance experience of the Eot-Wash group.
\subsubsection{MAGIS-100: Atom interferometry for dark matter and gravitational waves}
\begin{figure}
\begin{center}
\includegraphics[width=2.9in]{figs/AI-NuMI-DM.pdf} \ \ \ \includegraphics[width=3.3in]{figs/DoE_Figure_HMv2.pdf}
\caption{Left: Sensitivity of the MAGIS-100 atom interferometer to scalar DM-electron coupling $d_{m_e}$ measured relative to gravitational strength~\cite{Arvanitaki:2016fyj}. Right: An initial BTBAI interferometer (dashed red) will reach coupling sensitivity of $10^{-15}\,g/$Hz$^{1/2}$ for $B-L$ dark matter \cite{Graham:2015ifn}.
The green curves give the range probed by the EPTA and future SKA pulsar timing arrays.
\label{fig:MAGIS-sensitivity} }
\end{center}
\end{figure}
MAGIS (Mid-band Atomic Gravitational wave Interferometric Sensor) is a new kind of atom interferometric sensor that aims to search for ultralight dark matter as well as gravitational waves. Ultralight dark matter candidates with mass in the range $10^{-13}~\text{eV} - 10^{-16}~\text{eV}$ can cause time-varying atomic energy levels in the $0.1~\text{Hz}-10~\text{Hz}$ frequency range that can be detected with the proposed sensor \cite{Arvanitaki:2016fyj}. The MAGIS detector would also provide unique sensitivity to gravitational waves in this mid-band frequency range -- between the frequency bands targeted by LIGO and LISA~\cite{graham2013new,hogan2015heterodyne}. The discovery potential in this frequency band appears exciting, ranging from observation of new astrophysical sources (e.g. black hole and neutron star binaries) to searches for cosmological sources of stochastic gravitational radiation in addition to the searches for dark matter \cite{ResonantGW}.
The detector is based on a new atomic sensor concept that is a hybrid between an atomic clock and an atom interferometer. Gravitational radiation is sensed through precise measurement of the light flight time between two distantly separated (atomic) inertial references \cite{graham2013new}. Time is recorded by the accumulation of phase by these atoms, which also serve as precise differential clocks. This same configuration is also sensitive to time-variations in the atomic energy levels caused by couplings to ultralight dark matter, since such energy level shifts change the phase accumulation by the separated atomic clocks \cite{Arvanitaki:2016fyj}. Current work is focused on building a small-scale (10-meter) prototype detector to demonstrate required detector performance characteristics, including laser noise suppression. Longer detector baselines are required to reach scientifically interesting strain sensitivity and dark matter couplings.
The MAGIS-100 proposal is to build a 100-meter long detector to be located at Fermilab in an existing 100-meter vertical access shaft at the NuMI neutrino beam facility. One atomic source would be located at the top of the shaft and one midway down, allowing for over 3 seconds of free-fall time and hence measurements at frequencies $<1~\text{Hz}$. The initial detector would use state-of-the-art atom interferometry \cite{kovachy2015quantum,Asenbaum2016curvature} including $100 \hbar k$ enhanced atom optics and an atom flux of $10^6~\text{atoms/s}$ (``AI-Initial'' in Fig.~\ref{fig:MAGIS-sensitivity}). Planned upgrades include larger atom optics ($1000 \hbar k$) and a larger atom flux of $10^8~\text{atoms/s}$ (``AI-future'' in Fig.~\ref{fig:MAGIS-sensitivity}). This small experiment could be conducted over the course of 3 years.
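The quoted free-fall time can be checked with elementary kinematics: an atom released from rest at the midpoint of the 100-meter shaft falls for $t=\sqrt{2h/g}$. This simple drop estimate (launched trajectories would extend it) is sketched below:

```python
import math

# Free-fall time from rest over a drop height h: t = sqrt(2 h / g).
G = 9.81  # m/s^2

def free_fall_time(height_m):
    return math.sqrt(2 * height_m / G)

# An atom source midway down a 100 m shaft gives a ~50 m drop,
# consistent with the quoted >3 s of free-fall time; launching atoms
# upward would extend the interrogation time further.
print(free_fall_time(50.0))   # ~3.2 s
```

Seconds-long free fall is what gives the detector access to frequencies below 1 Hz, the band between LIGO and LISA.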
\subsubsection{Berkeley Thick-Beam Atom Interferometer}
The Berkeley thick-beam atom interferometer (BTBAI) will be located at UC Berkeley's New Campbell Hall in a room with heavy acoustic and electromagnetic shielding. Control over systematic effects will be taken to the extreme by using a thick ($\sim 30\,$cm diameter), high-power (kW) laser beam that enables efficient atomic beam splitters, large atomic samples, and increases accuracy \cite{PhysRevLett.115.083002} and beam-splitter fidelity hundred- to several thousand-fold relative to atom interferometers with cm-sized beams. The experiment will look for dark matter and more general dark-sector constituents in two ways. First, it will search for $B-L$, topological, and Higgs-portal dark matter by looking for oscillating accelerations on rubidium atoms in a differential measurement between rubidium isotopes at $10^{-15}\,g/$Hz$^{1/2}$ sensitivity ($g=9.8\,$m/s$^2$), see Figure~\ref{fig:MAGIS-sensitivity} \cite{Graham:2015ifn}. The high efficiency of large-momentum transfer beam splitters in BTBAI will be instrumental in reaching this goal. Second, it will search for dark photons in the MeV-GeV mass range
by measuring the fine structure constant at better than $10^{-11}$ accuracy and comparing with measurements of the electron's gyromagnetic anomaly $g_e-2$ \cite{PhysRevLett.100.120801} which now reach $2.2\times 10^{-10}$ accuracy but are expected to improve by an order of magnitude.
The estimated time for this small experiment and R\&D effort is five years. BTBAI will enable broad progress in atom interferometry and laser technology, contributing to the development of gravitational wave detectors as well as to tools for geophysics.
\subsection{Non-Direct Detection Experiments}
\subsubsection{Mini-IAXO}
\begin{figure}[b!]
\centering
\includegraphics[width=.5\textwidth]{figs/miniIAXO_sensitivity.pdf}
\caption{\label{fig:i} Preliminary estimate of the mini-IAXO sensitivity in terms of axion coupling $g_{a \gamma\gamma}$ versus mass, indicating its complementarity to existing haloscope data.
Mini-IAXO would be able to test models of axion-like particles favored by certain astrophysical observations.}
\end{figure}
The International Axion Observatory (IAXO)\cite{Irastorza:2011gs,Armengaud:2014gea,Irastorza:1567109} will be a fourth-generation axion helioscope with the primary physics goal of looking for axions or axion-like particles (ALPs) originating in the Sun via the Primakoff effect~\cite{Primakoff:1951}. Mini-IAXO is proposed as a small pilot experiment that will improve on the sensitivity of the currently most powerful axion helioscope, CAST~\cite{Aune:2011rx, Arik:2008mq, Andriamonje:2007ew,Zioutas:1998cc,Arik:2013nya, Barth:2013sma}, reaching sensitivity to axion-photon couplings $g_{a \gamma\gamma}$ down to a few $\times 10^{-11}$~GeV$^{-1}$ and thus probing new axion and ALP parameter space as shown in Fig.~\ref{fig:i}.
The preliminary design for mini-IAXO includes a single-bore, $10$~m superconducting magnet that will be instrumented with a focusing X-ray telescope and low-background X-ray detector to explore the above mentioned axion parameter space. Mini-IAXO will utilize existing infrastructure to produce the optics, consisting of multi-layer coated slumped-glass substrates~\cite{jakobsen2013}, and the low-background X-ray detector~\cite{Aune:2014}, consisting of gaseous Micromegas detectors. Such technologies have recently been developed for and demonstrated on CAST. This approach of combining very low-background detectors and efficient X-ray optics has led to a record-setting experimental sensitivity, resulting in an upper limit on the axion-photon coupling $g_{a \gamma\gamma}$ comparable to limits obtained from astrophysical constraints~\cite{anastassopoulos:2017ftl}.
\subsubsection{ARIADNE}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\columnwidth]{figs/fig1v_halfpage.pdf}
\caption{\small{ (left) Setup: a sprocket-shaped source mass is rotated so its ``teeth'' pass near an NMR sample at its resonant frequency. (right) Projected reach for monopole-dipole axion mediated interactions. The band bounded by the red (dark) solid line and dashed line denotes the limit set by transverse magnetization noise, depending on achieved coherence time $T_2$. Current constraints and expectations for the QCD axion also are shown, adapted from Ref. \cite{Arvanitaki:2014dfa} \label{setup}.}}
\end{center}
\end{figure}
ARIADNE aims to detect axion-mediated spin-dependent interactions between an unpolarized source mass and a spin-polarized $^3$He low-temperature gas \cite{Arvanitaki:2014dfa}.
The axion can mediate a monopole-dipole (mass-spin) interaction between fermions (e.g. nucleons) with the scalar and dipole coupling constants $g_s^N$ and $g_p^N$, respectively. Since it couples to the spin $\hat{\sigma}$, which is proportional to the nuclear magnetic moment, the axion coupling can be treated as a fictitious magnetic field $B_{\rm{eff}}$. This fictitious field is used to resonantly drive spin precession in a laser-polarized cold $^3$He gas. This is accomplished by spinning an unpolarized tungsten sprocket near the $^3$He vessel. As the teeth of the sprocket pass by the sample at the nuclear Larmor precession frequency, the magnetization in the longitudinally polarized $^3$He gas begins to precess about the axis of an applied field. This precessing transverse magnetization is detected with a superconducting quantum interference device (SQUID).
The experiment sources the axion in the lab, and can explore all mass ranges in its sensitivity band simultaneously (Fig. \ref{setup}).
A time scale of 2-3 years for construction followed by a 3-year operating phase is envisioned.
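The resonance condition driving the measurement is that the tooth-passing rate match the $^3$He Larmor frequency, $n_{\rm teeth}\, f_{\rm rot} = \gamma_{^3\rm He} B_0/2\pi$. The tooth count and bias field in the sketch below are illustrative assumptions, not the experiment's design values:

```python
# Sprocket rotation frequency that puts the teeth-passing rate on the
# 3He Larmor frequency: n_teeth * f_rot = (gamma_He3 / 2 pi) * B0.
# The tooth count and bias field are illustrative assumptions.
GAMMA_HE3 = 32.434e6   # 3He gamma / 2 pi (magnitude), Hz per tesla

def rotation_freq_hz(b0_tesla, n_teeth):
    return GAMMA_HE3 * b0_tesla / n_teeth

# A ~3 microtesla bias field with an 11-tooth sprocket puts the
# required rotation rate near 9 Hz, a comfortable mechanical speed.
print(rotation_freq_hz(3e-6, 11))
```

Tuning the small bias field $B_0$, rather than the rotation speed alone, is what allows the Larmor frequency to track the drive.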
\section{Dark matter production at fixed target and collider experiments}\label{sec:WG3experiments}
Accelerator experiments are increasingly recognized as essential tools in probing the particle nature of dark
matter (DM), and are especially suited to probing DM in the vicinity of known SM mass scales, roughly MeV-TeV. Indeed, a new
generation of fixed target and collider experiments
would be strongly positioned to test light thermal DM. The thermal paradigm is arguably one of the
most compelling possibilities, and has driven much of the experimental DM program during the last decades. Among the
thermal DM parameter space, light (sub-GeV) hidden sector thermal DM annihilating directly into Standard Model particles
the ``thermal relic target'') stands out for its predictiveness and testability. The virtue of an accelerator program rests on
the fact that the rate of relativistic DM production is largely independent of the details of the dark sector and is predicted from
freeze-out, whereas the rate of non-relativistic DM scattering is highly sensitive to the DM particle nature. As a consequence, accelerators
can probe {\it all} predictive direct annihilation scenarios, while the majority of these targets remain beyond the current capabilities
of proposed direct
detection experiments.
While a broad international program of accelerator experiments is currently focused on exploring light dark
matter and any associated new forces, many of the theoretical milestones are only beginning to be probed. New, small-scale
projects present an opportunity for the US DM program to play the leading role in light DM and dark sector
physics during the next decade. By leveraging existing technologies and facilities, a high-impact program could
be quickly deployed to achieve significant science in the next few years.
In addition to light, directly annihilating thermal DM, many of these proposals would also explore a wide parameter
space for secluded thermal DM, as well as DM with quasi-thermal origins ({\it e.g.} asymmetric DM, SIMP/ELDER
scenarios), and freeze-in models of light DM with a ``heavy'' mediator. More generally, they would offer
sensitivity to any long-lived neutral particles below the GeV-scale, and provide a unique gateway to explore generic low-mass
hidden sectors.
In the following, we briefly review the phenomenology of thermal DM, and underline the importance of the direct annihilation
target. We discuss the need for an accelerator-based program, and provide a detailed discussion of future proposals to robustly
test this scenario.
\subsection{The Thermal Target at Accelerators}
The theoretical framework of hidden-sector thermal DM is summarized in Section~\ref{ssec:thermal}, including the definition
of ``secluded'' and ``direct'' annihilation. We briefly review the science case for accelerators. The secluded annihilation rate
is dictated by DM-mediator interaction strengths within the hidden sector~\cite{Finkbeiner:2007kk,Pospelov:2007mp}. Since arbitrarily
small values of the SM-mediator coupling can be compatible with thermal light DM, this scenario is less predictive, although viable
models, such as DM annihilation into two scalar or pseudo-scalar mediators, can still be probed with laboratory
experiments~\cite{Dolan:2014ska, Krnjaic:2015mbs}. Direct annihilation, on the other hand, is controlled by the same couplings
that are relevant for direct DM scattering, leading to well-defined predictions. In the case of scalar DM annihilating into leptons
through the vector portal, the annihilation rate is given by:
\begin{eqnarray}
\langle \sigma v \rangle \approx \frac{1}{6\pi}\frac{g_{D}^2 \, g_{\rm SM}^2 \, m_{\rm DM}^2 \, v^2 }{(m_{\rm MED}^2 - 4 m_{\rm DM}^2)^2 + m_{\rm MED}^2 \Gamma_{\rm MED}^2 \!\!},
\end{eqnarray}
for $v \ll c$ and neglecting $m_e/m_{\rm DM}$ corrections. Sufficiently far away from the resonance region ($m_{\rm MED} = 2 m_{\rm DM}$) and
assuming $m_{\rm MED} \gg \Gamma_{\rm MED}$, this cross-section depends on the dark sector parameters only through the DM mass $m_{\rm DM}$
and the dimensionless parameter
\begin{eqnarray}\label{eq:generic-thermal-target}
y\equiv\frac{g_D^2 \, g_{\rm SM}^2 }{16\pi^2} \left(\frac{m_{\rm DM}}{m_{\rm MED}}\right)^4.
\end{eqnarray}
The observed DM abundance imposes a minimum bound on this cross-section, requiring $\langle \sigma v \rangle > \langle \sigma v \rangle_{\rm relic}$.
Perturbativity and constraints on the mass ratio $m_{{\rm DM}}/m_{\rm MED}$, at most ${\cal O}(1)$ in this regime, imply in turn a {\it lower} bound
on the magnitude of the SM-mediator and DM-mediator couplings compatible with a thermal history; in other words, a
lower bound on the couplings of the direct annihilation scenario. This constraint can be translated into a minimum value of $y$, which is qualitatively valid for
every DM/mediator particle nature variation provided that $m_{\rm DM} < m_{\rm MED}$. Larger values of $y$, which correspond to models where direct annihilation is not the dominant process that determines the DM abundance, could also be probed at accelerators. One caveat to the arguments above is the case where the DM mass is near the mediator's resonance region, {\it i.e.,} when $m_{\rm MED} \approx 2 m_{\rm DM}$. In this case, DM annihilation becomes extremely efficient, and thus freeze-out can be achieved with smaller couplings \cite{Feng:2017drg}.
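For orientation, the dimensionless combination $y$ defined above can be evaluated for the kinetic-mixing benchmark, writing $g_{\rm SM}=\epsilon e$. The values $\alpha_D=0.5$ and $m_{\rm MED}=3\,m_{\rm DM}$ match the benchmark used in Fig.~\ref{fig:accDDcomp}, while $\epsilon=10^{-4}$ is an arbitrary illustrative choice:

```python
import math

# Illustrative evaluation of
#   y = gD^2 gSM^2 / (16 pi^2) * (mDM / mMED)^4
# for the kinetic-mixing benchmark, with gSM = epsilon * e.
ALPHA_EM = 1 / 137.035999

def y_param(alpha_d, epsilon, mass_ratio):
    """mass_ratio = mDM / mMED."""
    g_d_sq = 4 * math.pi * alpha_d              # gD^2 from alpha_D
    g_sm_sq = epsilon**2 * 4 * math.pi * ALPHA_EM  # (epsilon e)^2
    return g_d_sq * g_sm_sq / (16 * math.pi**2) * mass_ratio**4

# alpha_D = 0.5, epsilon = 1e-4, mMED = 3 mDM (illustrative only)
print(y_param(0.5, 1e-4, 1 / 3))   # ~4.5e-13
```

The strong $(m_{\rm DM}/m_{\rm MED})^4$ suppression is why the conservative mass-ratio choice in Fig.~\ref{fig:accDDcomp} matters when comparing experimental reach to the thermal target.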
The argument presented above is generic and equally applicable to any of the minimal portals between the SM and the DS. However,
the vector/kinetic mixing portal is by far the most viable~\cite{Dolan:2014ska, Krnjaic:2015mbs,Alexander:2016aln} among the renormalizable operators.
This portal should be viewed as both a concrete UV complete benchmark, as well as a simplified model, since it is representative of
models where the mediator couples preferentially to baryonic (leptophobic DM), leptonic (leptophilic DM), or $(B - L)$ currents. In what
follows, we denote the dark mediator by $A'$, its mixing with the SM by $\epsilon e$, and its coupling to DM by $g_D$. The terms
mediator, dark photon or dark vector will be used interchangeably. Interestingly, the bottom-up values of the $\epsilon$ parameter in the kinetic
mixing benchmark that are needed for a thermal target are well aligned with the top-down $\epsilon$ range motivated for all hidden
sectors (Section~\ref{ssec:thermal}).
While we focus the remainder of the discussion on directly annihilating light thermal DM, since the scientific impact of reaching this sharp
milestone is substantial and the opportunity to do so is timely, the scope of the accelerator-based program is much more extensive.
The experimental approaches discussed below directly apply to many other important models, since analogous mappings allow constraints on
the CMB and DM-SM scattering cross sections to be translated onto the $y$ parameter space. These models include asymmetric DM~\cite{Zurek:2013wia},
in which the DM abundance arises from a primordial asymmetry instead of from thermal freeze-out; SIMP DM~\cite{Hochberg:2015vrg}, which encompasses
production of new DS resonances that can decay back to the SM directly or through hidden-valley-like topologies~\cite{Strassler:2006im}; models
with different cosmological histories, {\it e.g.} ELDER DM~\cite{Kuflik:2015isi}; freeze-in models of sub-MeV DM with a ``heavy'' (GeV-scale) mediator (see Refs.~\cite{Giudice:2000dp, Gelmini:2004ah} for aspects on the cosmology of similar models); new force carriers decaying to SM particles~\cite{Alexander:2016aln} or searches for millicharged DS particles, either through missing energy signatures or through minimum
ionizing signals~\cite{Prinz:1998ua,Haas:2014dda}.
We finally emphasize that a comprehensive program, including both accelerator and direct detection experiments, would be most
successful in exploring light dark matter. While accelerator-based experiments offer key advantages in probing several DM
scenarios, some possibilities could only be explored with direct detection techniques, such as freeze-in models with an
ultralight mediator and models of ultralight bosonic DM. Other cases could be accessible to both accelerators and direct detection
experiments, opening the exciting prospect of testing a potential DM signal by different approaches.
\subsection{The Scientific Need for an Accelerator-based Program}
While important progress has been achieved from dedicated searches at current facilities or re-interpretation
of previous results, new experiments are needed to cover decisive levels of sensitivity to the thermal-target
parameter space. Compared to other approaches, accelerator experiments offer key advantages to robustly probe
direct annihilation scenarios:
\begin{itemize}
\item {\bf Reduced dependence on DM particle nature:} Accelerator-based experiments are much less sensitive to
the details of the DM particle nature than direct detection experiments, as illustrated in Fig.~\ref{fig:accDDcomp}. In some
models, e.g. Majorana fermion DM, the direct detection cross-section is drastically reduced through its dependence
on the halo DM velocity, well below current detection capabilities. On the other hand, DM is produced relativistically
at accelerators, and the DM scattering cross section is only weakly dependent on the velocity. In missing energy/missing
momentum experiments, the DM presence is inferred through energy/momentum imbalance, almost entirely insensitive to
the DM velocity.
\item {\bf Kinematic thresholds in the DS.} This effect can occur, for example, if the DM particle is part of a pseudo-Dirac
fermion pair. In this scenario, DM (which we now label as $\chi_1$) is accompanied by a heavier excited state $\chi_2$. DM
annihilation or scattering through the light mediator can feature dominantly off-diagonal couplings between the light
mediator and the $\chi_1$ and $\chi_2$ particles, as opposed to diagonal mediator-$\chi_1$-$\chi_1$ couplings. The
direct detection tree-level scattering can be reduced or altogether turned off whenever the DM's kinetic energy is
insufficient to produce the excited state $\chi_2$~\cite{TuckerSmith:2001hy}. Instead, the dominant contribution at
direct detection experiments could arise from exchange of two virtual light mediators. At accelerators, in contrast,
the ground state can efficiently up-scatter into the excited state $\chi_2$ when detected through scattering off a
SM target. The yields of missing energy/momentum experiments are likewise unaffected by kinematic thresholds. The
heavier state $\chi_2$ may even exhibit macroscopic lifetimes that could be searched for at accelerator
probes~\cite{Morrissey:2014yma,Izaguirre:2017bqb}.
\item {\bf Sensitivity to dark sector structure.} The mass of the mediator is not only accessible in scenarios where
it decays dominantly into SM particles, but also in specific types of measurements for invisible decays. In the kinetic
mixing portal, the dark photon can be resonantly produced and subsequently
reconstructed as a dileptonic resonance or a peak in the $e^+e^- \rightarrow \gamma A'$ missing mass spectrum. The nature
of the mediator-SM coupling, another fundamental property, can be investigated as well. In proton beam dump experiments, the
mediator can be emitted by the incoming proton, or if kinematically allowed, from rare SM meson decays, while detection could
proceed through DM-nucleon scattering. Thus, proton beam-dump experiments are uniquely sensitive to the coupling to quarks. On
the other hand, leptonic couplings can be studied in electron beam-dump and fixed target experiments, where the
mediator is radiated off the incoming electron beam. The DM is identified through its scattering off electrons at a
downstream detector, or its presence is inferred as missing energy/momentum.
\end{itemize}
\begin{figure}[t!]
\center
\includegraphics[height=8.3cm]{figs/WG3_collapsing_DD.pdf} \hspace{0.5cm}
\includegraphics[height=6.3cm]{figs/WG3_collapsing_acc.pdf}
\caption{Direct annihilation thermal freeze-out targets and asymmetric DM target for (left) non-relativistic e-DM scattering
probed by direct-detection experiments and (right) relativistic accelerator-based probes. The thermal targets include scalar, Majorana,
inelastic, and pseudo-Dirac DM annihilating through the vector portal. Current constraints are displayed as shaded areas.
Both panels assume $m_{\rm{MED}} = 3 m_{\rm{DM}}$ and the dark fine structure constant $\alpha_D\equiv g_D^2/4\pi=0.5$. These choices
correspond to a conservative presentation of the parameter space for accelerator-based experiments (see section~\ref{accproj}).}
\label{fig:accDDcomp}
\end{figure}
\subsection{Experimental approaches and future opportunities}
The light DM paradigm has motivated extensive developments during the last few years, based on a combination of
theoretical and proposed experimental work. As a broad organizing principle, these approaches can be grouped into
the following generic categories:
\begin{itemize}
\item {\bf Missing mass:} The DM is produced in exclusive reactions, such as $e^+e^- \rightarrow \gamma (A' \rightarrow \chi \bar\chi)$
or $e^-p \rightarrow e^- p (A' \rightarrow \chi \bar\chi)$, and identified as a narrow resonance over a smooth background in the recoil
mass distribution. This approach requires a well-known initial state and the reconstruction of all particles besides
the DM. A large background usually arises from reactions in which particle(s) escape undetected, and detectors with good
hermeticity are needed to limit their impact.
\item {\bf Missing momentum/energy:} The DM is produced in the fixed-target reaction $eZ \rightarrow eZ(A'\rightarrow \chi \bar\chi)$
and identified through the missing energy/momentum carried away by the escaping DM particles. This approach relies heavily on
detector hermeticity, which is critical for achieving excellent background rejection. In some implementations, the ability to measure
the incoming electrons individually is also required. This method typically offers a better signal yield than beam dump experiments
for a similar luminosity, as the DM particles are not required to scatter in the detector.
\item {\bf Electron and proton beam dump:} The DM is produced in $\pi^0/\eta^{(')} \rightarrow \gamma
(A' \rightarrow \chi \bar\chi)$ decays, $p Z \rightarrow p Z A', A' \rightarrow \chi \bar\chi$ events
(proton beam dump) or $e Z \rightarrow e Z (A'\rightarrow \chi \bar\chi)$ events (electron beam dump) and
typically detected via $e \chi \rightarrow e \chi$ or $N \chi \rightarrow N \chi$ scattering in a
downstream detector. This approach has the advantage of probing the DM interaction twice, providing
sensitivity to the dark sector-mediator coupling, but requires a large proton/electron flux to compensate
for the reduced yields. However, the signature is similar to that of neutrino interactions, which
often constitutes the limiting factor on the sensitivity. Beam-dump experiments are also sensitive to
the decay of excited states in the DS~\cite{Morrissey:2014yma,Izaguirre:2017bqb}, which can naturally
occur in models where the DM is pseudo-Dirac.
\item {\bf Direct dark photon searches:} these searches focus on identifying the mediator through its decay into SM particles. This
approach is of particular importance when $m_{A'} < 2m_\chi$, in which case the mediator decays visibly. The
production mechanisms include among others $e^+e^- \rightarrow \gamma A'$, $eZ \rightarrow eZ A'$ or neutral
meson decays, and the mediator is usually reconstructed through its leptonic decays $A' \rightarrow e^+e^-,
\mu^+\mu^-$ as a narrow resonance over a wide background. The sensitivity is often limited by irreducible backgrounds,
requiring large luminosities to extend the experimental reach.
\end{itemize}
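As a concrete illustration of the missing-mass technique above, the short sketch below is purely illustrative: the $550~\,\mathrm{MeV}$ positron beam and $20~\,\mathrm{MeV}$ $A'$ are placeholder values in a PADME-like configuration, and \texttt{boost\_z}/\texttt{missing\_mass} are ad hoc helpers, not any experiment's software. It generates an $e^+e^- \rightarrow \gamma A'$ event and recovers the $A'$ mass from the measured photon alone via $m^2_{\rm miss} = (p_{\rm beam} + p_{\rm target} - p_\gamma)^2$:

```python
import math

m_e = 0.000511  # electron mass [GeV]

def boost_z(p, beta):
    """Boost a 4-vector p = (E, px, py, pz) along z with velocity beta."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    E, px, py, pz = p
    return (gamma * (E + beta * pz), px, py, gamma * (pz + beta * E))

def missing_mass(p_beam, p_target, p_gamma):
    """Recoil (missing) mass computed from the measured photon kinematics."""
    E = p_beam[0] + p_target[0] - p_gamma[0]
    px = p_beam[1] + p_target[1] - p_gamma[1]
    py = p_beam[2] + p_target[2] - p_gamma[2]
    pz = p_beam[3] + p_target[3] - p_gamma[3]
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# PADME-like setup: 0.550 GeV positrons on at-rest target electrons
E_beam = 0.550
p_beam = (E_beam, 0.0, 0.0, math.sqrt(E_beam**2 - m_e**2))
p_target = (m_e, 0.0, 0.0, 0.0)

# Generate e+ e- -> gamma A' for a 20 MeV A', then check the reconstruction
m_Ap = 0.020
s = 2.0 * m_e * E_beam + 2.0 * m_e**2          # CM energy squared
E_gamma_cm = (s - m_Ap**2) / (2.0 * math.sqrt(s))
beta_cm = p_beam[3] / (p_beam[0] + p_target[0])  # CM velocity in the lab
# photon emitted backwards in the CM frame (any angle works for the check)
p_gamma = boost_z((E_gamma_cm, 0.0, 0.0, -E_gamma_cm), beta_cm)

print(missing_mass(p_beam, p_target, p_gamma))  # ~ 0.020, the injected A' mass
```

In a real detector, events with additional undetected particles populate the same missing-mass spectrum, which is why hermeticity drives the achievable background level.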
\subsection{Current constraints and on-going efforts}
Before discussing future experimental opportunities for DM searches, we briefly review the current constraints on the direct
annihilation scenario. While the LHC experiments can search for invisible final states by looking for mediator
or DM production in association with one or more visible objects, their sensitivities are limited to SM-mediator couplings
$\epsilon$ of roughly $10^{-1}$~\cite{Izaguirre:2015yja}. Stronger bounds in the $\,\mathrm{MeV}-\,\mathrm{GeV}$ range are provided by the NA64
experiment~\cite{Banerjee:2016tad}, mono-photon
searches at \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}~\cite{Lees:2017lec}, and from a dedicated search for DM-nucleon scattering at MiniBooNE~\cite{Aguilar-Arevalo:2017mqx}.
Estimated bounds from previous fixed target experiments in the low-mass range have been derived by reinterpreting prior measurements
from LSND~\cite{deNiverville:2011it} and the E137 experiment~\cite{Batell:2014mga}. Finally, a
re-analysis of electron-scattering data from direct detection experiments has led to constraints in the sub-$\,\mathrm{GeV}$ DM
region~\cite{Essig:2012yx,Essig:2017kqs}. Combinations of these bounds are displayed in Figs.~\ref{DMAProj1}-\ref{DMAProj3}.
Constraints on visible decays are driven by dileptonic resonance searches~\cite{Lees:2014xha,Batley:2015lha,Merkel:2014avp}
and re-interpretations of previous fixed target measurements in the low-mass region~\cite{Riordan:1987aw,Bjorken:1988as,Bross:1989mp}.
A detailed discussion can be found in Ref.~\cite{Alexander:2016aln}.
On-going experimental efforts are summarized in the following (non-exhaustive) list:
\begin{itemize}
\item {\bf APEX at JLab} (direct mediator search): search for prompt visible dark photon decays with a fixed-target experiment in
Hall A at JLab using the CEBAF electron beam. Dark vectors are produced on a high-Z target and reconstructed with an existing
high-resolution dual-arm spectrometer.\\
Timeline: engineering run in 2010 to demonstrate the method; a one-month physics run is expected in 2018-2019. \\
Sensitivity: $\epsilon^2 \gtrsim 10^{-7}$ for $60 < m_{A'} < 550~\,\mathrm{MeV}$. References:~\cite{Essig:2010xa,Abrahamyan:2011gv}.
\item {\bf HPS at JLab} (direct mediator search): visible dark vector decay searches with a fixed-target experiment installed in
Hall B at JLab using the CEBAF electron beam. Dark vectors are produced in a thin tungsten target and detected by a forward silicon
tracker and a calorimeter. Sensitive to both prompt and displaced dark photon decays.\\
Timeline: engineering runs in 2015 and 2016, physics run taking place 2018-2020. \\
Sensitivity: $\epsilon^2 \gtrsim 10^{-6}$ for prompt decays in the range $18 < m_{A'} < 500~\,\mathrm{MeV}$. Vertex reach still under evaluation.
\item {\bf MiniBooNE at FNAL} (proton beam-dump): DM scattering in a neutrino detector at the $8~\,\mathrm{GeV}$ Booster Neutrino
Beamline at FNAL. MiniBooNE is an 800 ton mineral oil Cherenkov detector situated 490 m downstream of the beam dump. The DM production
and detection mechanisms are similar to LSND. First results based on $1.8\times10^{20}$ POT have been published for DM-nucleon
scattering, while analyses of electron elastic scattering ($\chi e \rightarrow \chi e$) and of inelastic production of the
$\Delta$ resonance are on-going. \\
Timeline: on-going analyses. Sensitivity: $y \gtrsim 10^{-9}$ for $m_\chi < 400~\,\mathrm{MeV}$. Reference:~\cite{Aguilar-Arevalo:2017mqx}.
\item {\bf NA64 at CERN} (missing energy): missing energy experiment with a $100~\,\mathrm{GeV}$ secondary electron beam from the SPS beam
line at CERN. The detector consists of a magnetic spectrometer (tracker + bending magnet), followed by a calorimeter system composed
of an ECAL, a charged track VETO, and a highly hermetic HCAL. The dark mediator is directly produced in the ECAL, and the signal is defined as
a reconstructed track, an energy deposition in the ECAL below a certain threshold, and no activity in the VETO or the HCAL.
The sensitivity is currently limited by the beam luminosity. By using a $\sim 150~\,\mathrm{GeV}$ muon beam instead of an electron beam, NA64
could also search for a new mediator $Z'$ coupled to the $L_\mu-L_\tau$ current or for leptophilic dark scalars.\\
Timeline: taking data, expect to collect $10^{11}$ EOT in 2017. Sensitivity: $\epsilon^2 \gtrsim 10^{-10}$ for $m_{A'} < 1~\,\mathrm{GeV}$.
Reference:~\cite{Banerjee:2016tad,Gninenko:2014pea,Chen:2017awl}.
\item {\bf TREK at J-PARC} (direct mediator search): visible dark vector decay searches at kaon decay experiments
(TREK/E36 and TREK/E06) at J-PARC. A dark vector could be detectable in kaon and in pion decays,
{\it e.g.} $K^+\rightarrow \mu^+ \nu (A' \rightarrow e^+e^-)$, $K^+\rightarrow \pi^+ (A' \rightarrow e^+e^-)$ or
$\pi^0\rightarrow \gamma (A' \rightarrow e^+e^-)$. The experiment is currently analyzing its data, and TREK/E06 is
planned upon realization of the Hadron Hall extension at J-PARC.\\
Timeline: E36: data currently being analyzed; E06: planned. Sensitivity: N/A. Reference:~\cite{Albrow:2016ibs,TREKWeb}.
\end{itemize}
\subsection{Future experimental initiatives}
Future opportunities for DM searches are summarized in the following sections. We start by surveying international efforts before
discussing new US-based proposals. While there has been a growing interest abroad to cover the thermal targets outlined in the
introduction, relying on current constraints and international efforts only is {\it not} enough to robustly test the scientific
goals outlined earlier in this chapter. The next generation of US-based experiments, such as BDX, LDMX, COHERENT
and SBN, is needed to decisively test the direct annihilation scenarios. A key feature of these proposals is the ability
to leverage existing technologies, enabling their rapid deployment.
\subsubsection{Future international initiatives}
\begin{itemize}
\item {\bf Belle-II at KEK:} missing mass and visible decay searches at the electron-positron
collider at KEK. Belle-II is a multi-purpose detector with sensitivity to invisible $A'$ decays
via the mono-photon final state in the range $m_{A'} < 9.5~\,\mathrm{GeV}$. The sensitivity is limited by the
calorimeter hermeticity and the tracker coverage, as well as the total luminosity. Belle-II can
also search for visible $A'$ decays and more complex dark sector signatures ({\it e.g.} dark
Higgs boson $h'$ in $e^+e^- \rightarrow A' (h' \rightarrow A'A')$). The large coupling between the
SM Higgs boson and the $b$-quark also offers the opportunity to probe the scalar portal in
$\Upsilon(nS) \rightarrow \chi \bar\chi (\gamma)$ decays. \\
Timeline: First data expected in 2018, and about 50 ab$^{-1}$ of data around 2025. Sensitivity:
$\epsilon^2 \gtrsim 10^{-9}$ for $m_{A'} < 9.5~\,\mathrm{GeV}$ with full data set. Reference:~\cite{HeartyBelleII,Seong}.
\item {\bf MAGIX at MESA:} visible dark photon decay searches with a dipole spectrometer at the $105~\,\mathrm{MeV}$ polarized
electron beam at MESA (Mainz). The detector is a twin-arm dipole spectrometer placed around a gas target. The production mechanism is
similar to HPS, with identification through a di-electron resonance. The possibility of a beam-dump setup similar to BDX is
under study.\\
Timeline: Proposal in 2017 with targeted operations in 2021-2022.
Sensitivity: $\epsilon^2 \gtrsim 10^{-9}$ for $10 < m_{A'} < 60~\,\mathrm{MeV}$. Reference:~\cite{Denig:2016dqo}
\item {\bf PADME at LNF:} missing mass searches at the Beam Test Facility (BTF) at LNF. The principle is similar to the MMAPS
experiment, using a $550~\,\mathrm{MeV}$ positron beam on a diamond target. In addition to invisible $A'$ decays, PADME
is studying its sensitivity to diphoton decays of axion-like particles and dark Higgs decays.\\
Timeline: Expected to collect $10^{13}$ positrons on target by the end of 2018. A proposal exists to move PADME to Cornell
if a new positron beamline is approved. Sensitivity: $\epsilon^2 \gtrsim 10^{-7}$
in the range $m_{A'} < 24~\,\mathrm{MeV}$. Reference:~\cite{Raggi:2014zpa, Raggi:2015gza}.
\item {\bf SHiP at CERN:} DM scattering in a neutrino detector at the $400~\,\mathrm{GeV}$ SPS beamline
at CERN. The detector consists of OPERA-like bricks of laminated lead and emulsions placed in a magnetic
field. The DM production mode is similar to that of MiniBooNE, and the detection occurs via electron elastic
scattering ($\chi e \rightarrow \chi e$). The dominant backgrounds are expected to come from elastic,
quasi-elastic, deep-inelastic and resonant neutrino scattering processes, and can be reduced using several
topological and kinematical variables.\\
Timeline: after 2026; expected to be able to deliver $10^{20}$ POT.
Sensitivity: $y \gtrsim 10^{-12}$ for $m_\chi$ up to a few $\,\mathrm{GeV}$. Reference:~\cite{Alekhin:2015byh,Anelli:2015pba}.
\item {\bf VEPP3 at BINP:} missing mass and visible decay searches at BINP at Novosibirsk. Dark photons are
produced by colliding a $500~\,\mathrm{MeV}$ positron beam on an internal gaseous hydrogen target, and both visible
and invisible (via the missing mass mode) final states are identified. Elastic scattering will be used for a $17~\,\mathrm{MeV}$
signal search.\\
Timeline: First run is anticipated for 2019-2020.
Sensitivity: $\epsilon^2 \gtrsim 10^{-8}$ in the range $5 < m_{A'} < 22 ~\,\mathrm{MeV}$.
Reference:~\cite{Wojtsekhowski:2012zq}.
\end{itemize}
\subsubsection{Future US-based initiatives}
\begin{itemize}
\item {\bf BDX at JLab} (electron beam-dump): DM scattering in a scintillating crystal detector at the CEBAF (Hall A) beam dump
at JLab. The detector consists of 0.5 $\rm m^3$ of CsI(Tl) scintillating crystals situated 20 m downstream
of the beam dump. The experiment is sensitive to elastic DM scattering $e \chi \rightarrow e \chi$ in the
detector after production in $eZ \rightarrow eZ(A' \rightarrow \chi \bar\chi)$, as well as inelastic or pseudo-Dirac
DM scattering $\chi_1 (e/Z/N) \rightarrow \chi_2 (e/Z/N)$ or excited state decay-in-detector $\chi_2 \rightarrow \chi_1 e^+e^-$ following
$eZ \rightarrow eZ(A' \rightarrow \chi_1 \chi_2)$ production. It seeks to improve upon E137 sensitivity by benefiting from the
high intensity JLab beam, and by positioning the detector closer to the dump.
The sensitivity is ultimately limited by the irreducible neutrino background, expected at the level of $\mathcal{O}(10)$ events for $10^{22}$
electrons on target. A different detection technique with directional capabilities, based on a large drift chamber
(BDX-DRIFT), is also being explored.\\
Timeline: conditional approval at JLab (PAC 44 in 2016). BDX-DRIFT
at proposal stage. Sensitivity: $y \gtrsim 10^{-13}$ for $1 < m_\chi < 100~\,\mathrm{MeV}$ with $10^{22}$
EOT per year. Project cost within small-scale experiment guideline. Reference:~\cite{Battaglieri:2016ggd,Bondi:2017gul}.
\item {\bf COHERENT at ORNL} (proton beam-dump): DM scattering in scintillating crystals and liquid argon detectors
at the Spallation Neutron Source at ORNL. The primary goal of the COHERENT experiment is to measure the coherent
elastic neutrino-nucleus scattering process. The current experimental setup includes ${\cal O}(10~\mathrm{kg})$ NaI(Tl)
and CsI(Tl) detectors, and a 35 kg single-phase LAr scintillation detector. Possible upgrades to 1-ton LAr
or NaI detectors are envisioned. The dark matter is mainly produced via $\pi^0/\eta \rightarrow \gamma A'$ decays
out of collisions from the primary proton beam, and identified through coherent scattering leading to a detectable
nuclear recoil. The experiment seeks to exploit the large neutrino flux produced in the nearby target. Its sensitivity
is limited by the DM flux, uncertainties on the neutrino-nucleon cross sections, and beam-unrelated backgrounds.\\
Timeline: currently taking data, upgrade after 2019. Sensitivity: $y \gtrsim 10^{-13}$
for $m_\chi < 60~\,\mathrm{MeV}$. Project cost within small-scale experiment guideline. Reference:~\cite{deNiverville:2015mwa,CoherentWeb}.
\item {\bf DarkLight at JLab} (missing mass): missing mass and visible decay searches at the Low Energy
Recirculating Facility (LERF) at Jefferson Lab. Dark photons are produced in the reaction
$e^-p \rightarrow e^-pA'$ colliding the $100~\,\mathrm{MeV}$ electron beam on a gaseous hydrogen target.
The main advantage of this setup is the possibility to detect the scattered electron and recoil
proton, enabling the reconstruction of invisible $A'$ decays via the missing mass technique, and
providing a robust signature of visible $A'\rightarrow e^+e^-$ decays thanks to the fully reconstructible
final state. The sensitivity is limited by the very large continuum QED background generated from the high-intensity
beam. DarkLight is pursued in several stages to demonstrate the feasibility of the approach.\\
Timeline: phase I is currently
taking data; on-going design studies for phase II. Sensitivity: $\epsilon^2 \gtrsim 10^{-6}$
in the range $10 < m_{A'} < 80~\,\mathrm{MeV}$. Project cost within small-scale experiment guideline. Reference:~\cite{Balewski:2014pxa}.
\item {\bf LDMX at SLAC or JLab} (missing momentum): missing momentum experiment at the DASEL beamline at SLAC or
at the CEBAF facility at JLab. LDMX proposes to use a low-current, high-repetition-rate electron beam with an energy in the
few $\,\mathrm{GeV}$ range to achieve high statistics. DM is produced
in interactions of the electron beam with a thin target via $eZ \rightarrow eZ (A' \rightarrow \chi \bar\chi)$. The
experimental signature consists of a soft, wide-angle scattered electron, characteristic of DM production in an electron fixed-target
reaction, together with missing energy. The detector is composed of a tracker surrounding the target to measure each incoming and outgoing
electron individually, and a fast hermetic calorimeter system capable of sustaining an $\mathcal{O}(100)$ MHz rate while vetoing
low-multiplicity SM reactions that can mimic the DM signal.\\
Timeline: $>$ 2020. Sensitivity: $\epsilon^2 \gtrsim 10^{-12}$ (phase I) and
$\epsilon^2 \gtrsim 10^{-14}$ (phase II) for $m_\chi < 400~\,\mathrm{MeV}$. Project cost within small-scale experiment guideline.
Reference:~\cite{LDMXWeb}.
\item {\bf MMAPS at Cornell} (missing mass): the principle of MMAPS consists of producing a dark vector in
$e^+e^- \rightarrow \gamma A'$ reactions with a $5.3~\,\mathrm{GeV}$ positron beam on a fixed Be target. The beam is extracted
in a slow spill from the Cornell synchrotron. The $A'$ mass is inferred by measuring the outgoing photon kinematics
with a CsI calorimeter. This $A'$ search method provides sensitivity to all possible decay modes, limited only by the detector
resolution and the QED background from large-angle photon production, such as $e^+e^- \rightarrow \gamma \gamma$ or
$e^+e^- \rightarrow \gamma e^+e^-$, where charged final-state particle(s) sometimes escape undetected.\\
Timeline: proposal stage, no starting date ($>$2020).
Sensitivity: $\epsilon^2 \gtrsim 10^{-8}$ in the range $20 < m_{A'} < 75~\,\mathrm{MeV}$. Project cost within small-scale
experiment guideline.
Reference:~\cite{cornell}.
\item {\bf SBN at FNAL} (proton beam-dump): DM scattering in liquid argon TPC detectors at the $8~\,\mathrm{GeV}$ Booster
Neutrino Beamline at FNAL. The SBN program consists of three detectors of 112 ton (SBND), 89 ton (MicroBooNE),
and 476 ton (ICARUS-T600) situated 110 m, 470 m and 600 m downstream of the beam dump, respectively. The
dark matter beam is primarily produced via pion decays out of collisions from the
primary proton beam, and identified via DM-nucleon or DM-electron elastic scattering in the detector. The neutrino-induced
background could be significantly reduced by steering the proton beam around the production target in dedicated
dark matter running modes. SBND is expected to yield the most sensitive results and could improve upon MiniBooNE
by more than an order of magnitude with $6\times10^{20}$ POT. Further improvement could be achieved by replacing
the neutrino horn with an iron target or building a new target station to allow simultaneous neutrino
and dark matter running modes.\\
Timeline: Detector commissioning and running in 2018.
Sensitivity: $y \gtrsim 10^{-12}$ for $m_\chi < 400~\,\mathrm{MeV}$. Project cost within small-scale experiment
guideline. Reference:~\cite{Antonello:2015lea,SBNRVW}.
\item {\bf SeaQuest} (direct mediator search): visible dark photon decay searches at the muon spectrometer at
the $120~\,\mathrm{GeV}$ Main Injector beamline at FNAL. Parasitic run with the SeaQuest/E1039 polarized target
experiment. Sensitive to prompt and long-lived dark photon dimuon decays, as well as more complex dark sector
signatures ({\it e.g.} dark Higgs bosons, SIMPs). \\
Timeline: Run with SeaQuest in 2017 and E1039 in 2018-2020 if funded. Potential upgrade to E1067
in 2020-2025. Sensitivity: $\epsilon^2 \gtrsim 10^{-8}$ for $2m_\mu < m_{A'} < 9~\,\mathrm{GeV}$ (prompt decays),
$\epsilon^2 \sim 10^{-14} - 10^{-8}$ for $m_{A'} < 2~\,\mathrm{GeV}$ (displaced decays). Project cost within small-scale
experiment guideline. Reference:~\cite{doi:10.1142/S0217732317300087,Gardner:2015wea}.
\end{itemize}
\subsection{Facilities}
The facilities required to operate these experiments constitute an important part of the accelerator
program. A key aspect of several proposals is the possibility to leverage facilities which have already
been operating successfully, or are close to starting operations, thus drastically reducing development
time. A few proposals would require new beamlines at different US laboratories, which could be
developed in the near future. We stress that these facilities would be used in parasitic/symbiotic mode,
and current scientific activities would not be impacted by the proposed experiments.
\begin{itemize}
\item {\bf CEBAF and LERF at JLAB:} The Continuous Electron Beam Accelerator Facility (CEBAF) provides an electron beam
to four experimental areas with energies up to $12~\,\mathrm{GeV}$. The electron beam polarization is near 90\% and the
Hall-D beamline includes a linearly polarized photon beam. Both the HPS and APEX experiments are approved to
run at CEBAF. LERF is a one-pass energy recovery linac with a maximum electron beam energy of $170~\,\mathrm{MeV}$, currently
hosting the DarkLight experiment. The BDX experiment has been recently conditionally approved at JLab. The M{\o}ller
experiment, which will perform a precise measurement of the Weinberg angle $\theta_W$ and offer sensitivity to light
DM, has received DOE approval.
Reference:~\cite{Dudek:2012vr,Freyberger:2015rfv}.
\item {\bf CESR at Cornell:} Cornell University operates a high intensity positron source for the CESR storage ring
which, with the addition of an extraction beam line, could provide a $6~\,\mathrm{GeV}$ extracted positron beam for the DM search. Such
a beam line would operate parasitically to the CESR synchrotron-radiation program. This is the only place in the world where a
$\,\mathrm{GeV}$-energy, large-duty-factor positron beam could be arranged at low cost. Reference:~\cite{cornell}.
\item {\bf DASEL at SLAC:} The Dark Sector Experiments at LCLS-II (DASEL) will deliver an almost CW beam of $4~\,\mathrm{GeV}$
electrons using the LCLS-II superconducting RF (SCRF) linac in a parasitic mode. LCLS-II
is under construction at SLAC as part of the photon science FEL program. The approach consists of
filling unused RF buckets with a low current and diverting them to an experimental area without
impacting the FEL program. LCLS-II is expected to operate more than 5000 hours per year,
and an upgrade to increase the beam energy to $8~\,\mathrm{GeV}$ has received CD-0 from the DOE.
Timeline: 2020+. Reference:~\cite{DASEL1,DASEL2}.
\item {\bf SBN at FNAL:} The SBN facility features 8 GeV protons at the Booster Neutrino Beamline. Three Liquid Argon TPC
detectors (LArTPC) of 112 ton, 89 ton, and 476 ton are situated 110 m, 470 m, and 600 m downstream of the beam dump,
respectively. Production and detection channels similar to MiniBooNE. The current plan is to collect 6$\times$10$^{20}$ POT,
beginning in 2018, in on-target mode. Can be configured to collect 2$\times$10$^{20}$ POT in beam-dump mode after
on-target run. Upgrades to BNB in 2016 will enable simultaneous on/off-target running. It would be possible to replace
the neutrino horn with an iron target to improve sensitivity of future DM searches. Reference:~\cite{Antonello:2015lea,SBNRVW}.
\item {\bf Very asymmetric collider:} A design of a high-luminosity, very-asymmetric collider has been recently proposed.
New advances in accelerator technology including the nano-beam scheme, high-current Energy Recovery Linacs,
and magnetized beams have led to the proposal of a very asymmetric collider capable of reaching luminosities
greater than $10^{34}\rm \, cm^{-2}s^{-1}$ at a center-of-mass energy below $1~\,\mathrm{GeV}$. Such a machine could be deployed
at any facility with a positron storage ring. Timeline: N/A. Project scale unknown. Reference:~\cite{Wojtsekhowski:2017szs}.
\item{\bf AWAKE at CERN:} The AWAKE experiment at CERN aims to demonstrate proton-driven plasma wakefield acceleration as a viable
scheme to accelerate electrons to high energy. By the late 2020s, it may be possible to construct a facility where electrons are
accelerated up to about 100 GeV in about 100 m of plasma. Such an accelerator could improve the luminosity
delivered to the NA64 experiment by a factor of 1000. This unique, high-energy electron beam may have other applications. Timeline: late 2020s.
Project scale unknown. Reference:~\cite{Caldwell:2015rkk, Adli:2016wah}.
\end{itemize}
\begin{sidewaystable}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
Experiment & Machine & Type & $\rm E_{beam}$ ($\,\mathrm{GeV}$) & Detection & Mass range ($\,\mathrm{GeV}$) & Sensitivity & First beam & Ref. \\\hline
\hline\multicolumn{9}{|c|}{\vspace{-0.2cm}}\\\multicolumn{9}{|c|}{\bf Future US initiatives}\\\multicolumn{9}{|c|}{\vspace{-0.2cm}}\\\hline
BDX & CEBAF @ JLab & electron BD & 2.1-11 & DM scatter & $0.001 < m_\chi < 0.1$ & $y \gtrsim 10^{-13}$ & 2019+ &~\cite{Battaglieri:2016ggd,Bondi:2017gul}\\
COHERENT & SNS @ ORNL & proton BD & 1 & DM scatter & $m_\chi < 0.06$ & $y \gtrsim 10^{-13}$ & started &~\cite{deNiverville:2015mwa,CoherentWeb}\\
DarkLight & LERF @ JLab & electron FT & 0.17 & MMass (\& vis.)& $0.01 < m_{A'} < 0.08$ & $\epsilon^2 \gtrsim 10^{-6}$ & started &~\cite{Balewski:2014pxa} \\
LDMX & DASEL @ SLAC & electron FT & 4 (8)* & MMomentum & $m_\chi < 0.4$ & $\epsilon^2 \gtrsim 10^{-14}$ & 2020+ &~\cite{LDMXWeb} \\
MMAPS & Synchr @ Cornell & positron FT & 6 & MMass & $0.02 < m_{A'} < 0.075$ & $\epsilon^2 \gtrsim 10^{-8} $ & 2020+ &~\cite{cornell} \\
SBN & BNB @ FNAL & proton BD & 8 & DM scatter & $m_\chi < 0.4$ & $y \sim 10^{-12} $ & 2018+ &~\cite{Antonello:2015lea,SBNRVW} \\
SeaQuest & MI @ FNAL & proton FT & 120 & vis. prompt & $0.22 <m_{A'} < 9$ & $\epsilon^2 \gtrsim 10^{-8} $ & 2017
&~\cite{doi:10.1142/S0217732317300087} \\
& & & & vis. disp. & $ m_{A'} < 2$ & $\epsilon^2 \sim 10^{-14} - 10^{-8}$& & \\
\hline\multicolumn{9}{|c|}{\vspace{-0.2cm}}\\ \multicolumn{9}{|c|}{\bf Future international initiatives}\\\multicolumn{9}{|c|}{\vspace{-0.2cm}}\\\hline
Belle II & SuperKEKB @ KEK & $e^+e^-$ collider & $\sim 5.3$ & MMass (\& vis.)& $0< m_\chi < 10$ & $\epsilon^2 \gtrsim 10^{-9}$ & 2018 &~\cite{HeartyBelleII} \\
MAGIX & MESA @ Mainz & electron FT & 0.105 & vis. & $0.01 <m_{A'} < 0.060$ & $\epsilon^2 \gtrsim 10^{-9} $ & 2021-2022 &~\cite{Denig:2016dqo} \\
PADME & DA$\Phi$NE @ Frascati& positron FT & 0.550 & MMass & $m_{A'} < 0.024$ & $\epsilon^2 \gtrsim 10^{-7}$ & 2018 &~\cite{Raggi:2014zpa, Raggi:2015gza} \\
SHiP & SPS @ CERN & proton BD & 400 & DM scatter & $m_\chi < 0.4$ & $y \gtrsim 10^{-12}$ & 2026+ &~\cite{Alekhin:2015byh,Anelli:2015pba} \\
VEPP3 & VEPP3 @ BINP & positron FT & 0.500 & MMass & $0.005 < m_{A'} <0.022$& $\epsilon^2 \gtrsim 10^{-8}$ & 2019-2020 &~\cite{Wojtsekhowski:2012zq} \\
\hline\multicolumn{9}{|c|}{\vspace{-0.2cm}}\\\multicolumn{9}{|c|}{\bf Current and completed initiatives}\\\multicolumn{9}{|c|}{\vspace{-0.2cm}}\\\hline
APEX & CEBAF @ JLab & electron FT & 1.1-4.5 & vis. & $0.06 < m_{A'} < 0.55$ & $\epsilon^2 \gtrsim 10^{-7}$ & 2018-2019 &~\cite{Essig:2010xa,Abrahamyan:2011gv}\\
\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ & PEP-II @ SLAC & $e^+e^-$ collider & $\sim 5.3$ & vis. & $0.02 < m_{A'} < 10$ & $\epsilon^2 \gtrsim 10^{-7}$ & done &~\cite{Lees:2012ra,Lees:2014xha,TheBABAR:2016rlg}\\
Belle & KEKB @ KEK & $e^+e^-$ collider & $\sim 5.3$ & vis. & $0.1 < m_{A'} < 10.5$ & $\epsilon^2 \gtrsim 10^{-7}$ & done &~\cite{TheBelle:2015mwa}\\
HPS & CEBAF @ JLab & electron FT & 1.1-4.5 & vis. & $0.015 <m_{A'} < 0.5$ & $\epsilon^2 \sim 10^{-7}$** & 2018-2020 &~\cite{Battaglieri:2014hga} \\
NA64 & SPS @ CERN & electron FT & 100 & MEnergy & $m_{A'} < 1$ & $\epsilon^2 \gtrsim 10^{-10} $ & started &~\cite{Banerjee:2016tad} \\
MiniBooNE & BNB @ FNAL & proton BD & 8 & DM scatter & $m_\chi < 0.4$ & $y \gtrsim 10^{-9} $ & done &~\cite{Aguilar-Arevalo:2017mqx} \\
TREK & $K^+$ beam @ J-PARC & $K$ decays & 0.240 & vis. & N/A & N/A & done &~\cite{Albrow:2016ibs,TREKWeb} \\
\hline
\end{tabular}
\caption{Summary table of current light DM experiments and future proposals. The sensitivities are quoted either in terms of the kinetic mixing $\epsilon^2$ or the variable $y$, whichever
is most relevant (see the text and the corresponding figures for more detailed predictions). The range quoted for experiments sensitive to both
visible and invisible decays refers to the invisible case. Starting dates are subject to variations. {\it Legend:} beam dump (BD), fixed target (FT), dark
matter scattering (DM scatter), missing mass (MMass), missing momentum (MMomentum), missing energy (MEnergy), prompt/displaced visible decays (vis.).
{\it Notes:} *LDMX beam energy is $4~\,\mathrm{GeV}$ for phase I, and could be upgraded to $8~\,\mathrm{GeV}$ for phase II. **Sensitivity to displaced vertices under study.}
\label{accTable}
\end{sidewaystable}
\subsection{Projections}
\label{accproj}
In order to compare the reach of the different proposals, a few assumptions have to be made. When presented in the $m_\chi~\rm{vs}~y$ parameter space,
the thermal target is an {\it invariant}, while the sensitivity of different experiments is usually not~\cite{Izaguirre:2015zva}. In particular, the DM
signal yields at accelerator experiments are primarily sensitive to $\epsilon^2$ (missing mass and energy), or to $\epsilon^4 \alpha_D$ (beam dump) for
a fixed $m_\chi/m_{A'}$ ratio, where $\alpha_D\equiv g_D^2 / 4\pi$ is the analogous dark sector fine structure constant in the vector portal. To express the
reach of missing mass/energy/momentum experiments in the $m_\chi$~vs~$y$ plane, one must choose a specific value of $\alpha_D$. In the following, we
purposely adopt {\it a conservative approach} by setting $\alpha_D$ near unity and choosing an $\mathcal{O}(1)$ value of the ratio $m_\chi/m_{A'}$
when required. This choice is conservative in the sense that smaller values of these parameters correspond to stronger experimental sensitivities, i.e. the
quoted reach is the least constraining. One must also note that fixing $\alpha_D$ near unity
ensures perturbativity up to the weak scale~\cite{Davoudiasl:2015hxa}.
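To make the rescaling between the two presentations explicit, the following sketch is illustrative only: \texttt{y\_of} is an ad hoc helper, and the $\epsilon^2 = 10^{-12}$ reach is a placeholder value rather than an official projection. It evaluates $y = \epsilon^2 \alpha_D (m_\chi/m_{A'})^4$ for the benchmark $m_{A'} = 3 m_\chi$, $\alpha_D = 0.5$, and shows that a smaller $\alpha_D$ pushes an $\epsilon^2$-limited curve downward in $y$ while the thermal target remains fixed:

```python
def y_of(eps2, alpha_d, mass_ratio=1.0 / 3.0):
    """Dimensionless interaction strength y = eps^2 * alpha_D * (m_chi/m_A')^4."""
    return eps2 * alpha_d * mass_ratio**4

# Placeholder eps^2 reach for a missing-momentum-style experiment
eps2_reach = 1e-12

y_benchmark = y_of(eps2_reach, alpha_d=0.5)   # conservative benchmark choice
y_weaker_aD = y_of(eps2_reach, alpha_d=0.05)  # smaller dark coupling

print(y_benchmark)                 # ~ 6.2e-15
assert y_weaker_aD < y_benchmark   # reach curve shifts down in y; target unchanged
```

Beam-dump yields instead scale as $\epsilon^4 \alpha_D$, so their curves rescale differently under the same change of $\alpha_D$.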
Current constraints and sensitivity estimates on the benchmark model of a dark photon kinetically mixed with hypercharge are shown in Fig.~\ref{DMAProj1} for
various light DM experiments based on the missing mass, missing energy and missing momentum approaches. The corresponding curves on the parameter $y$ are
also shown, assuming $m_{A'} = 3 m_\chi$ and $\alpha_D = 0.5$. Decreasing $\alpha_D$ pushes missing
mass/energy/momentum curves downward. The analogous figures for electron and proton beam dump experiments are displayed in Fig.~\ref{DMAProj2}, assuming
only electron (leptophilic) and nucleon (leptophobic) couplings, respectively. The combined projections and constraints are shown in Fig.~\ref{DMAProj3} as
a function of the DM particle nature.
For much smaller values of $\alpha_D$, the direct annihilation thermal scenarios are largely ruled out by previous experiments, namely
LSND, \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\, and E137. Fig.~\ref{fig:alphadvsmass} depicts the region of the $(m_\chi,\alpha_D)$ parameter space in which the thermal target remains viable. Again, we
fix the ratio $m_{A'} = 3 m_\chi$. Each point in the $(m_\chi,\alpha_D)$ plane corresponds to a SM-mediator coupling $\epsilon$ which has been fixed to
the value needed to obtain the correct DM abundance. By and large, the open parameter space in these models corresponds to DM-mediator coupling strengths
that are SM-like.
It is worth noting that the dimensionless variable $y$ is no longer a suitable parameter for presenting results when
$m_\chi > m_{A'}$, as the DM annihilation proceeds through $\chi \bar\chi \rightarrow A' A'$, independent of the kinetic mixing strength. However,
accelerators can still probe interesting parameter space through off-shell DM production and through direct mediator searches, where the mediator
decays back to Standard Model final states. The present status and prospects for visibly-decaying $A'$ searches are shown in
Fig.~\ref{fig:visible-decay}. These searches are set to continue testing the top-down motivated values of $\epsilon$ in the near future.
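In this secluded regime, the tree-level annihilation rate for a Dirac fermion $\chi$ scales, up to $\mathcal{O}(1)$ phase-space factors, as
\[
\langle\sigma v\rangle_{\chi\bar\chi\rightarrow A'A'} \sim \frac{\pi\alpha_D^2}{m_\chi^2}\left(1-\frac{m_{A'}^2}{m_\chi^2}\right)^{3/2}\left(1-\frac{m_{A'}^2}{2m_\chi^2}\right)^{-2},
\]
which contains no factor of $\epsilon$. This makes explicit why the relic abundance, and hence the thermal target, decouples from the kinetic mixing when $m_\chi > m_{A'}$.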
\begin{figure}[htb]
\center
\includegraphics[height=6.4cm]{figs/WG3_eps2_mA.pdf} \hspace{0.5cm}
\includegraphics[height=6.4cm]{figs/WG3_yplot_MMEonly.pdf}
\caption{Current constraints (shaded regions) and sensitivity estimates (dashed lines) on the SM-mediator coupling
$\epsilon = g_{\rm{SM}}/e$, for various experiments based on the missing mass, missing energy and missing
momentum approaches. The green band shows the values required to explain the (g-2)$_\mu$
anomaly~\cite{Pospelov:2008zw}. Right: corresponding curves in terms of the parameter $y$, plotted alongside various
thermal relic targets. These curves assume $m_{A'} = 3 m_\chi$ and $\alpha_D = 0.5$. For larger mass ratios or
smaller values of $\alpha_D$, the experimental curves shift downward, but the thermal relic target remains
invariant. The asymmetric DM and ELDER targets (see text) are also shown as solid orange and magenta lines,
respectively. Courtesy G. Krnjaic.}
\label{DMAProj1}
\end{figure}
\begin{figure}[htb]
\center
\includegraphics[height=6.4cm]{figs/WG3_Leptophilic.pdf} \hspace{0.5cm}
\includegraphics[height=6.4cm]{figs/WG3_Leptophobic.pdf}
\caption{Current constraints (shaded regions) and sensitivity estimates (dashed/solid lines) on the parameter $y$
for (left) leptophilic and (right) leptophobic dark forces coupled to dark matter for beam-dump experiments. The
prescription $m_{A'} = 3 m_\chi$ and $\alpha_D = 0.5$ is adopted where applicable. The asymmetric DM and ELDER
targets (see text) are also shown as solid orange and magenta lines, respectively. Courtesy G. Krnjaic, P. deNiverville.}
\label{DMAProj2}
\end{figure}
\begin{figure}[htb]
\center
\includegraphics[width=7.9cm]{figs/WG3_ScalarElastic.pdf}\hspace{0.5cm}
\includegraphics[width=7.9cm]{figs/WG3_ScalarInelastic.pdf} \vspace{0.0cm} \\
\includegraphics[width=7.9cm]{figs/WG3_Majorana.pdf}\hspace{0.5cm}
\includegraphics[width=7.9cm]{figs/WG3_PseudoDirac.pdf} \\
\caption{Combined constraints (shaded regions) and sensitivity estimates (dashed/solid lines) on the parameter $y$ for scalar elastic,
scalar inelastic, Majorana and pseudo-Dirac DM. The prescription $m_{A'} = 3 m_\chi$ and $\alpha_D = 0.5$ is adopted where applicable. For larger
mass ratios or smaller values of $\alpha_D$, the accelerator-based experimental curves shift downward, but the thermal relic
target remains invariant. See section~\ref{sec:WG2experiments} for sensitivity estimates for direct detection experiments.
Courtesy G. Krnjaic.}
\label{DMAProj3}
\end{figure}
\begin{figure}[t!]
\center
\includegraphics[width=16.5cm]{figs/WG3_alphaD.pdf}
\caption{Thermal ``area'' target in the $m_{\rm DM}$ vs $\alpha_D\equiv g_d^2/4\pi$ plane, for $m_{A'}/m_{\rm DM} = 3$, for
pseudo-Dirac (left) and Majorana DM (right). The area in the white background is compatible with a thermal origin. Current
constraints have largely ruled out the parameter space where the DM-mediator coupling constant is much smaller than $\mathcal{O}(1)$. The
new proposals surveyed in this chapter aim to cover the region where the DM-mediator coupling is SM-like in strength. Courtesy N. Toro.}
\label{fig:alphadvsmass}
\end{figure}
\begin{figure}[htb]
\center
\includegraphics[width=11cm]{figs/WG3_VisibleDecay.pdf}
\caption{Constraints on visibly-decaying mediators (shaded regions) and projected sensitivities of currently running or upcoming
probes (solid lines). Visible decays of the mediator dominate in the $m_\chi > m_{A'}$ secluded annihilation regime. Courtesy R. Essig.}
\label{fig:visible-decay}
\end{figure}
\subsection{Summary and key points}
This chapter has reviewed the science case for an accelerator-based program and outlined a path forward to reach
decisive milestones in the paradigm of thermal light DM. The key points of the discussion can be summarized
as follows:
\begin{itemize}
\item The scenario in which DM directly annihilates to the SM defines a series of {\bf predictive, well-motivated and bounded targets.}
Exploring this possibility is an {\bf important scientific priority.}
\item A new generation of small-scale collider and fixed-target experiments {\bf is needed to robustly test this scenario}. The
accelerator-based approach has the attractive feature of offering considerable model-independence in its sensitivity to the details
of the dark sector, and can {\bf uniquely probe all predictive models.}
\item Most experimental proposals are based on {\bf existing, proven technology}, and {\bf could be operational in the near future.}
\item {\bf A complementary approach is required to fully explore light, directly annihilating thermal DM.} Experiments relying on missing
energy and missing momentum approaches generally offer the most favorable sensitivity. On the other hand, beam dump proposals enable access
to the DM-mediator couplings and are especially well suited to probe the pseudo-Dirac DM scenario by looking for decays of the excited
state inside the detector. Moreover, leptophobic DM models are best tested with proton beam-dump proposals. Finally, missing mass
experiments offer a robust signature and a clean method to precisely determine the mass scale of the mediator in a largely model
independent way.
\item The strategies discussed in this chapter can be readily applied {\bf to study other well-motivated scenarios
with quasi-thermal production mechanisms}, such as asymmetric DM; as well as models in which the cosmological abundance is set by processes
other than DM annihilation into SM particles ({\it e.g.} SIMPs and ELDERs); freeze-in models where the mediator is comparatively heavy
with respect to DM; new light force carriers; and particles millicharged under electromagnetism.
\end{itemize}
In summary, small-scale accelerator-based experiments could test important milestones of light DM parameter space. Among them, light
thermal DM annihilating into SM final states, an important and well-motivated target, could be uniquely and robustly probed. By exploiting
established detector technology and existing facilities, many proposals are ready for funding now and could achieve significant science
in the next few years. Through a strong, vibrant contribution, the US dark matter program has the opportunity to play the leading role in
light dark matter and dark sector physics during the next decade.
\section{New Candidates, Targets, and Complementarity}
\label{WG4sec:WG4}
\subsection{Introduction}
\label{WG4sec:introduction}
For many years, research on the particle nature of dark matter focused on three classic candidates: WIMPs, the QCD axion, and sterile neutrinos. These remain highly motivated particle candidates, and ongoing searches for them continue to be of great interest. In the last few years, however, many new dark matter candidates have emerged. This progress is striking for at least two reasons. First, the motivations for these candidates are extremely varied, with some inspired by experimental discrepancies, others by theoretical considerations, and others representing ``lamppost physics,'' viable ideas that in many cases highlight broad swathes of parameter
space that have not been experimentally investigated, but can be. Second, these developments have strikingly diverse implications for experiments and observations. Some have strengthened the well-known, but still remarkable, synergy between particle physics and astrophysics, and others have generated completely new connections between dark matter and other subfields, including nuclear, atomic, and condensed matter physics.
The recent progress in dark matter makes it difficult to form a coherent picture of the field and to chart future directions. In this section, we give an overview of recent developments with the goal of providing some of the necessary background for the future prioritization of proposed experiments. Our subjects include ``New Candidates,'' novel particle physics models and frameworks for dark matter, and ``Targets,'' particle candidates and regions of parameter space (for example, dark matter masses and couplings) that are of special interest, given compelling experimental puzzles or theoretical ideas. We also discuss ``Complementarity,'' a breathtakingly broad catch-all term that includes the complementarity of proposed (small-scale) experiments with existing (large-scale) experiments; the complementarity of proposed experiments with each other; the complementarity of dark matter probes from the many relevant subfields of physics and astronomy; the complementarity of different experiments in their potential to discover dark matter; and the complementarity of experiments to precisely determine the properties of dark matter after the initial discovery.
The number of recent developments and the complex inter-relationships between them make it impossible to comprehensively summarize all of them in neatly disjoint categories. Below we organize our discussion into four broad and overlapping areas: Experimental Anomalies and Hints in~\secref{anomalies}, Cosmology and Astrophysics in~\secref{astrophysics}, Models and Relic Abundances in~\secref{models}, and Complementarity in~\secref{complementarity}. We close in~\secref{conclusions} with some conclusions, including a few targets that seem especially ripe for experimental searches at this time.
\subsection{Experimental Anomalies and Hints}
\label{WG4sec:anomalies}
Discrepancies between experimental results and SM predictions have been, and should be, among the prime motivations to search for new particles and forces. Examples include the GeV excess seen from the direction of the Galactic Center~\cite{Hooper:2010mq,TheFermi-LAT:2017vmf}, and the 3.5 keV X-ray line seen from galaxies and galactic clusters~\cite{Bulbul:2014sua,Boyarsky:2014jta,Abazajian:2017tcc}. In this section we discuss three leading experimental anomalies with relevance for the experiments discussed in this document: the anomalous magnetic moment of the muon, the proton radius, and the Beryllium-8 anomaly.
\ssection{Anomalous Magnetic Moment of the Muon} Among the most longstanding puzzles is the $3.5\sigma$ discrepancy between experiment and theory in $(g-2)_{\mu}$, the anomalous magnetic moment of the muon~\cite{Miller:2012opa}. This may be resolved by weakly-interacting particles with milli-charged couplings to muons and 1 to 100 MeV masses~\cite{Pospelov:2008zw}. Although dark photons with these properties are now excluded~\cite{Alexander:2016aln}, other light bosons remain viable solutions, as discussed below. In the near future, the Muon $(g-2)$ Experiment at Fermilab is expected to reduce the experimental uncertainty in $(g-2)_{\mu}$ by a factor of four~\cite{Grange:2015fou}.
\ssection{Proton Radius}
Another muon-related anomaly is the proton radius puzzle, the $5.6 \sigma$ discrepancy between the proton electric charge radius $r_E^p = 0.8751(61)$~fm measured from a combination of electron scattering and (regular, electronic) hydrogen spectroscopy~\cite{Mohr:2015ccw}, and the radius $r_E^p=0.84087(26)(29)$~fm measured from muonic hydrogen spectroscopy~\cite{Antognini:1900ns}. (These numbers correspond to the CODATA 2014 adjustment of constants, and the updated 2013 CREMA analysis.)
The large size of this discrepancy and its surprising appearance in seemingly well-known systems have motivated numerous theoretical and experimental efforts across particle, nuclear and atomic physics. The puzzle is likely to lead to a dramatic revision of the fundamental constants $r_E^p$ and the Rydberg constant, and it has forced a reexamination of lepton-nucleon scattering methodology, impacting in particular the long-baseline neutrino program~\cite{Hill:2017wzi}.
\ssection{Proton Radius: New Particle Physics}
Taking the data at face value, it is also interesting to consider potential implications for physics beyond the SM. In its original form, the puzzle is a discrepancy between electron-based and muon-based measurements of the proton charge radius, $r_{\mu {\rm H}} < r_{e{\rm H}} \sim r_{e-p}$. This hierarchy has been accommodated in phenomenological models involving $\sim$MeV force carriers with muon-specific couplings. New preliminary results for the hydrogen 2S-4P splitting have been reported by Beyer et al.~\cite{BeyerTalk}, with a ``small'' radius and an error comparable to the existing hydrogen average. A revision of the electronic hydrogen results into agreement with muonic hydrogen would leave a discrepancy between bound-state and electron-proton scattering determinations of the radius: $r_{e{\rm H}} \sim r_{\mu {\rm H}} < r_{e-p}$. Such a hierarchy would be predicted by an attractive Yukawa force mediated by a force carrier with a mass between the atomic Bohr momentum, $\sim m_\mu \alpha$, and the momentum transfers probed in scattering experiments, $\sim 50\,{\rm MeV}$~\cite{Bernauer:2013tpr}. One such interpretation is a dark photon model, with preferred parameter region $\kappa/m_{A'} \sim \Delta r/\sqrt{6} \sim 10^{-4}~\text{MeV}^{-1}$, where $\kappa$ is the kinetic mixing parameter and $m_{A'}$ is the dark photon mass.
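The quoted parameter region can be checked at the back-of-envelope level: in the heavy-mediator limit the Yukawa contribution to the squared charge radius is $\delta\langle r^2\rangle = -6\kappa^2/m_{A'}^2$, so $\kappa/m_{A'} \sim \sqrt{|\Delta\langle r^2\rangle|/6}$. A sketch of the arithmetic, whose only inputs are this leading-order relation, the two radius values above, and $\hbar c$ for unit conversion:

```python
import math

# Back-of-envelope check of kappa/m_A' ~ sqrt(|Delta<r^2>|/6), assuming the
# leading-order Yukawa shift delta<r^2> = -6 kappa^2 / m_A'^2 (heavy mediator).
HBARC_MEV_FM = 197.327   # hbar*c in MeV fm

r_ep = 0.8751    # fm: e-p scattering + electronic hydrogen (CODATA 2014)
r_muH = 0.84087  # fm: muonic hydrogen

delta_r2_fm2 = r_ep**2 - r_muH**2                 # ~0.059 fm^2
delta_r2_mev2 = delta_r2_fm2 / HBARC_MEV_FM**2    # convert fm^2 -> MeV^-2

kappa_over_m = math.sqrt(delta_r2_mev2 / 6.0)     # MeV^-1
print(f"kappa/m_A' ~ {kappa_over_m:.1e} MeV^-1")  # ~5e-4 MeV^-1
```

which reproduces the $\mathcal{O}(10^{-4})~\text{MeV}^{-1}$ scale quoted above.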
\ssection{$^8$Be Anomaly} The $^8$Be anomaly is a 6.8$\sigma$ discrepancy reported by the ATOMKI group in their observations of the decays of excited $^8\rm{Be}$ nuclei to their ground state and an electron-positron pair, $^8\text{Be}^* \to {}^8\text{Be} \, e^+ e^-$~\cite{Krasznahorkay:2015iga}. A bump-shaped excess above the SM internal pair creation background appears in the distribution of $e^+e^-$ opening angles with a very high statistical significance. Its interpretation as a cascade decay of $^8\text{Be}^* \to {}^8\text{Be} \, X $ followed by $X \to e^+ e^-$, where $X$ is a new boson, fits with a $\chi^2/\text{dof} =1.07$ for milli-charged couplings and $m_X\approx 17~\text{MeV}$. In contrast to the previous two anomalies, where new physics solutions involve virtual particles that can be as heavy as the weak scale, the $^8$Be anomaly, if taken as evidence for new particles, requires real particle production, and can only be resolved by light, weakly-coupled particles.
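The location of the excess follows from two-body kinematics alone: a boson of mass $m_X$ carrying essentially the full transition energy $E \approx 18.15$ MeV and decaying to a symmetric $e^+e^-$ pair gives an opening angle $\cos\theta \approx 1 - 2m_X^2/E^2$ (electron mass neglected). A quick kinematic sketch using only these inputs:

```python
import math

# Opening angle of a symmetric e+ e- pair from X -> e+ e-, using
# m_X^2 = 2 E+ E- (1 - cos theta) with E+ = E- = E_X/2 and m_e neglected.
def symmetric_opening_angle_deg(m_X, E_X):
    cos_theta = 1.0 - 2.0 * m_X**2 / E_X**2
    return math.degrees(math.acos(cos_theta))

# A ~17 MeV boson carrying the ~18.15 MeV transition energy:
theta = symmetric_opening_angle_deg(17.0, 18.15)
print(f"{theta:.0f} deg")  # ~139 deg, near the reported bump at ~140 deg
```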
\ssection{$^8$Be Anomaly: New Particle Physics} One may consider several spin-parity assignments for $X$ candidates that could account for the observed decay rate~\cite{Feng:2016jff,Feng:2016ysn}. Scalar candidates are forbidden by parity conservation in nuclear decays~\cite{Feng:2016jff}, but pseudoscalars are a possibility~\cite{Ellwanger:2016wfe}. Spin-1 bosons are also possible, but are constrained by null results from searches for $\pi^0\to \gamma X$~\cite{Batley:2015lha}; such constraints exclude, for example, dark photons as an explanation. However, such $\pi^0$ decays are axial-anomaly driven~\cite{Sutherland:1967vf,Veltamn1967}, and so any particles that decouple from this decay, including protophobic gauge bosons~\cite{Feng:2016jff,Feng:2016ysn} and axial vectors~\cite{Kozaczuk:2016nma} are possible solutions. We discuss these in turn.
A viable protophobic vector candidate has milli-charged couplings to neutrons and electrons, and suppressed couplings to protons~\cite{Feng:2016jff}. Such a particle can arise naturally as the force carrier of a spontaneously broken U(1)$_B$ or U(1)$_{B-L}$ symmetry that kinetically mixes with the photon~\cite{Feng:2016ysn}. In this case, the predicted leptonic couplings can be large enough to simultaneously ameliorate the discrepancy in $(g-2)_\mu$, providing a viable alternative to the now-excluded dark photon explanation. These scenarios could be directly tested by repeating the experiment with $^8\rm{Be}$ or looking for similar decays in other nuclei (see below), or by testing the required electron couplings at $e^\pm$-beam-based experiments. A number of accelerator experiments may probe the relevant couplings in the near future (Fig.~\ref{fig:Be8searchregion}).
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=0.50\textwidth]{figs/8Befuture.pdf}
\caption{The $^8$Be signal region in the $(m_X, \varepsilon_e)$ plane, along with current constraints (gray) and projected sensitivities of the indicated future experiments. From Ref.~\cite{Feng:2016ysn}.}
\label{fig:Be8searchregion}
\end{center}
\end{figure}
An alternative explanation
is a light gauge boson that couples predominantly axially to quarks~\cite{Kozaczuk:2016nma}. In this case, the vector does not have to be protophobic, since the decay $\pi^0 \rightarrow \gamma X$ is forbidden in the chiral limit if $X$ has purely axial couplings, and so the constraints from NA48/2 on light vectors~\cite{Batley:2015lha} do not apply. A light axial vector with mass $m_X\approx 17$ MeV can explain the ATOMKI result without violating existing experimental constraints, and such a particle can also arise from a self-consistent UV complete theory~\cite{Kozaczuk:2016nma}. (For a related discussion of existing constraints and model-building, see Ref.~\cite{Kahn:2016vjr}.) The strongest constraints on the axial vector quark couplings in this scenario are from the non-observation of a corresponding bump in the predominantly isovector 17.64 MeV $^8$Be transition to the ground state. This illustrates the potential for nuclear decay experiments to provide experimental probes of light vectors that are complementary to those afforded by existing experiments. (Note that the potential for nuclear decay experiments to search for light, weakly coupled particles was pointed out some time ago~\cite{Donnelly:1978ty, Treiman:1978ge}.) Furthermore, both the axial- and protophobic vector interpretations of the $^8$Be anomaly highlight the importance of experimentally targeting both the leptonic and quark couplings of light hidden particles, since the relationship between them is model-dependent.
\ssection{$^8$Be Anomaly: Nuclear Theory} In investigating the $^8$Be anomaly, it is, of course, critical to know if the SM nuclear theory prediction is accurate. The theoretical predictions for internal pair creation referred to in Ref.~\cite{Krasznahorkay:2015iga} are based on the classic work of Ref.~\cite{Rose:1949zz}. This work is known to be incomplete, as it does not include pair emission anisotropy and interference between the relevant EM multipole transitions. The nuclear theory predictions have been improved recently~\cite{Zhang:2017zap} in work inspired by the nuclear-cluster-based effective field theory framework~\cite{Hammer:2017tjm, Zhang:2015ajn}. In this work, the multipole interferences have been included, and the relative weights of the $\ensuremath{e^+e^-}\xspace$ production have been constrained by photon production data, allowing a direct cross check with the weights extracted in (future) $\ensuremath{e^+e^-}\xspace$ experiments. This work puts the nuclear theory expectations on solid footing. The refined predictions cannot explain the observed $^8$Be anomaly, and the discrepancy with experiment remains. Furthermore, the possible form factor associated with the M1 transition that would be needed to explain the anomaly requires an unrealistically large length scale (on the order of tens of fm) associated with the $^8\rm{Be}$ nucleus. The refined model can be used for analyzing future experiments of this type and can also be adapted to study the interplay between the virtual-photon and new-boson decay mechanisms.
\ssection{$^8$Be Anomaly: Nuclear Experiments} Given the statistically significant discrepancy between experiment and refined nuclear theory predictions, as well as the existence of viable new physics explanations, it is clear that the original $^8$Be results should be followed up with dedicated, optimized experiments. In searching for $X$, advantage can be taken of its long lifetime, $\sim 10^{-13}$~s, which means the spectrometer mass resolution dominates the measured mass width. In one proposed approach from Purdue~\cite{LangUSCosmicVisions}, seven HPGe detectors with energy resolution $\delta E/E \sim 0.1\%$ are used to accurately measure the energy of the electron-positron pair. In addition, a magnetic field will be used to measure charge, reducing backgrounds due to Compton-scattered electron pairs. Additional particle ID will disentangle the proposed signal from various instrumental backgrounds. Tracks will be measured using two layers of silicon pixel detectors with a position resolution of 25~$\mu$m, and each track will be constrained by the production vertex. Because silicon detectors can be operated in vacuum, the spectrometer will not require a vacuum pipe between the detectors and the production target. This configuration will further reduce the energy loss of the electron-positron pair in passing through the material of the spectrometer and will greatly reduce Compton-produced backgrounds within the structural elements supporting the charged particle detectors. With these improvements, the overall mass resolution should improve from 1.5 MeV, as observed in the ATOMKI experiment, to $<70$~keV, improving the $X$ signal-to-noise ratio by more than an order of magnitude.
Most of the equipment for this proposed spectrometer is already in hand at Purdue and can be installed at the Purdue Tandem facility, PRIME Lab, making the required funds~\cite{LangUSCosmicVisions} a small fraction of the small project threshold and allowing data taking to start rapidly.
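The quoted sensitivity gain can be estimated with a simple scaling argument: if the internal pair creation background is smooth (roughly flat) across the signal window, the background under a bump scales with the window width, so the signal-to-background ratio improves by the ratio of mass resolutions. A sketch using the resolution values quoted above:

```python
# Simple scaling estimate for the bump search: for a smooth (roughly flat)
# background across the signal window, background counts scale with the
# window width, so S/B scales inversely with the mass resolution.
sigma_atomki_keV = 1500.0   # ATOMKI mass resolution (~1.5 MeV)
sigma_new_keV = 70.0        # proposed spectrometer (<70 keV)

sb_gain = sigma_atomki_keV / sigma_new_keV   # signal-to-background ratio
significance_gain = sb_gain ** 0.5           # S/sqrt(B) at fixed signal yield

print(f"S/B gain ~x{sb_gain:.0f}, significance gain ~x{significance_gain:.1f}")
# S/B gain ~x21: "more than an order of magnitude"
```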
A similar proposal~\cite{LeachUSCosmicVisions} to address the $^{8}$Be anomaly, again with higher resolution and statistics, uses a different experimental technique: radiative proton capture on a $^7$Li target at the University of Notre Dame Nuclear Structure Laboratory (NSL). The NSL has a 5 MV single-end 5U accelerator, which is typically dedicated to similar radiative proton capture experiments and can provide proton beams with intensities of up to 200~$\mu$A, a factor of 200 increase relative to the original ATOMKI measurement~\cite{Krasznahorkay:2015iga}. To provide the improvement in $e^+ e^-$ correlation detection, a simple array of silicon strip detectors in a cubic configuration is used to provide high position resolution, followed by thick Si wafers to provide total energy information for the emitted leptons. The Si strip detectors have a very high granularity and will be configured to increase the angular resolution by up to an order of magnitude. If $\gamma$-ray tagging is required, there is access to high-resolution and high-efficiency HPGe clover detectors through the DOE clover-share program. The proposed setup is almost entirely achievable with existing equipment at the NSL, and the required funds are a small fraction of the small project threshold; details are available in Ref.~\cite{LeachUSCosmicVisions}. This project is currently in the design stage. However, given the availability of equipment and facility (no program advisory committee is used at the NSL), we estimate that the equipment could be constructed, commissioned, and ready for the first physics run in less than two years after funding becomes available. If this work is successful and the measurement is confirmed, additional light nuclei will be studied with a more sophisticated setup.
\ssection{Isotope Shift Spectroscopy}
There exist alternative ways to test for light gauge bosons. One new frontier is to use precise tabletop atomic physics experiments to test for the existence of new light degrees of freedom. To convert the high precision of atomic and molecular spectroscopy measurements into sensitivity to fundamental new physics, one must either calculate atomic structure to high accuracy or find observables that are insensitive to theoretical uncertainties. Recently Refs.~\cite{Delaunay:2016brc,Frugiuele:2016rii,Delaunay:2016zmu} have shown that precision isotope shift (IS) spectroscopy provides a probe of spin-independent couplings of light bosonic fields to electrons and neutrons that does not rely on precise theoretical predictions.
Being data driven, this proposal has the advantage of not relying on any theoretical prediction for the background. On the other hand, new regions of the parameter space can be explored if and only if the SM background fits a particular (linear) shape in a so-called ``King plot''~\cite{ISKing}, a comparison of isotope shifts of two narrow transitions. This proposal is particularly interesting for the reported $^8$Be anomaly~\cite{Krasznahorkay:2015iga}, because it probes the coupling to neutrons and electrons for a spin-independent interaction, which are precisely the couplings predicted by the protophobic gauge boson interpretation of the data~\cite{Feng:2016ysn}. In the future, by looking at Yb$^+$ transitions with 1 Hz precision, IS measurements will probe all the couplings that could explain the $^8$Be anomaly~\cite{Berengut:2017zuo}, provided the data is compatible with King linearity.
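The linearity test itself is straightforward to state: for each isotope pair one forms the modified shifts $\delta\tilde\nu_i = \delta\nu_i/\delta\mu$ (with $\delta\mu$ the inverse-mass difference) for two transitions and checks whether the points lie on a straight line. A schematic sketch of the procedure, with all numerical values invented purely for illustration:

```python
import numpy as np

# Schematic King-plot linearity test. For a set of isotope pairs, the
# modified isotope shifts of two narrow transitions,
#   m_i = delta_nu_i / delta_mu   (delta_mu = inverse-mass difference),
# are collinear in the SM at leading order; a new boson coupling to
# electrons and neutrons generically breaks this linearity.
# All numbers below are invented, purely to illustrate the procedure.

delta_mu = np.array([1.00e-5, 1.10e-5, 1.21e-5])     # hypothetical inverse-mass differences
delta_nu_1 = np.array([2.000, 2.210, 2.445]) * 1e9   # Hz, transition 1 (hypothetical)
delta_nu_2 = np.array([1.500, 1.655, 1.826]) * 1e9   # Hz, transition 2 (hypothetical)

m1 = delta_nu_1 / delta_mu   # modified shifts, transition 1
m2 = delta_nu_2 / delta_mu   # modified shifts, transition 2

# Fit the King line m1 = F*m2 + K and inspect the residuals; a residual
# significantly beyond the measurement precision would signal nonlinearity.
F, K = np.polyfit(m2, m1, 1)
residuals = m1 - (F * m2 + K)
print(residuals)
```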
\subsection{Cosmology and Astrophysics}
\label{WG4sec:astrophysics}
At present, all evidence for dark matter is from the impact of its gravitational interactions on cosmological and astrophysical observations. The astounding progress in cosmology in the past two decades has led to increasingly strong and varied evidence for dark matter and has now precisely determined the amount of dark matter in the Universe. In recent years, however, advances in cosmology have begun to stringently constrain dark matter's particle properties and production mechanisms and even to motivate new ideas in particle physics, with the field on the threshold of even greater insights in the near future. In this section, we illustrate the complementarity of astrophysics and cosmological probes with three topics: small scale structure, the cosmic microwave background, and supernovae.
\ssection{Small Scale Structure} The microphysical properties of dark matter not only determine its detectability in the laboratory, but also dark matter's cosmological clustering. Thus, astronomical probes of dark matter constitute a \emph{measurement} of dark matter's microphysical properties. The key characteristics of the thermal WIMP dark matter candidate---its relative heaviness and electroweak-scale couplings with SM particles---lead to non-relativistic (``cold'') freeze-out with only minimal kinetic coupling to the SM after thermal decoupling, leaving the primordial inflationary perturbation spectrum untouched by free-streaming or interactions down to tiny scales. In this ``cold dark matter'' (CDM) paradigm, the non-linear evolution is determined only by gravity. On non-linear scales, the thermal WIMP/CDM paradigm makes its most striking prediction: the existence of a hierarchy of dense dark-matter halos down to free-streaming and kinetic decoupling ($\sim$Earth-mass) scales~\cite{diemand2005}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.65\textwidth]{figs/darkphotonmodel.pdf}
\caption{The 95\% CL regions of dark matter-mediator parameter space preferred by lower-than-expected central dark matter density in dwarf galaxies (red), low surface brightness spiral galaxies (blue), and clusters (green). The combined 95\% CL (99\% CL) region is enclosed by the solid (dashed) contours. The dark matter self-coupling strength is taken to be $\alpha'=\alpha$. The regions below the dot-dashed and long-dashed contours are excluded by the Bullet Cluster and the ensemble of merging clusters, respectively. From Ref.~\cite{Kaplinghat:2015aga}.}
\label{fig:DMselfinteractions}
\end{center}
\end{figure}
The resulting hierarchical structure formation is an excellent description of the distribution and properties of galaxies on large scales. However, there are unexplained puzzles on scales much smaller than the virial radius of galaxies, such as the core-cusp problem, the missing satellites problem, and the too-big-to-fail problem~\cite{Tulin:2017ara}. Dark matter with significant self-interactions has been argued to retain all the successes of CDM on large scales, while providing an economical solution to the small-scale puzzles~\cite{Tulin:2017ara}. The required ratio of scattering cross section to mass is a few barns per GeV, comparable to the strength for neutron-proton scattering. To be compatible with the measured dark matter densities in relaxed clusters of galaxies, this cross section must decrease with velocity, providing strong constraints on model building~\cite{Kaplinghat:2015aga,Kamada:2016euw}. For example, for a simple model with dark matter interactions mediated by a single particle, the constraints on the dark matter and mediator masses are given in Fig.~\ref{fig:DMselfinteractions}; the favored mediator mass is $\sim 1 - 100$ MeV. For direct searches, the implied momentum dependence in scattering off nucleons could be an important discriminant between WIMPs and SIDM candidates.
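As a point of reference, the quoted cross section scale translates directly into the $\mathrm{cm}^2/\mathrm{g}$ units common in halo-structure studies; a minimal unit-conversion sketch:

```python
# Convert the quoted self-interaction scale, a few barn/GeV, into the
# cm^2/g units common in halo-structure studies.
BARN_CM2 = 1.0e-24       # 1 barn in cm^2
GEV_GRAMS = 1.7827e-24   # 1 GeV/c^2 in grams

def barn_per_gev_to_cm2_per_g(x):
    return x * BARN_CM2 / GEV_GRAMS

print(barn_per_gev_to_cm2_per_g(1.0))  # ~0.56 cm^2/g
print(barn_per_gev_to_cm2_per_g(2.0))  # ~1.1 cm^2/g, the canonical SIDM scale
```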
There are significant opportunities to leverage existing DOE investments in dark energy science for an enormous return on dark matter science. The first step is to measure the halo mass function on small scales. DES and LSST together can measure the galaxy luminosity function and halo mass function to unprecedented scales. Halo masses $\lesssim 10^8$--$10^9 \, M_\odot$ are the domain of substructure lensing searches, which are on the cusp of realizing their full potential. DES is playing a major role in greatly increasing the sample of lens systems suitable for substructure studies~\cite{lin2017}. DES and LSST will also open the door for novel astronomical probes of dark-matter physics~\cite{Bovy:2016irg,Kim:2016ujt}. Using astronomy to measure dark-matter physics relies on a strong theory program, to make an accurate map between particle theory space and astronomical observables and to marginalize over uncertainties in the effects of galaxy formation. A number of the tools required for this exercise are in place or are nearly so (high-resolution hydrodynamic simulations, semi-analytic codes), but require a modest increase in person-power to knit the tools together~\cite{vogelsberger2016,wetzel2016}.
\ssection{Cosmic Microwave Background} Measurements of the cosmic microwave background (CMB) can be used to set strong, robust, and largely model-independent constraints on annihilating or decaying dark matter. These constraints arise from the production of extra free electrons during the cosmic dark ages, by the cooling of electromagnetically interacting particles produced by dark matter annihilation/decay. Consequently, they can be evaded if annihilation is suppressed at late times or at low dark matter velocities (e.g., in the case of $p$-wave annihilation), or if the dark matter annihilates entirely to neutrinos or invisible particles; conversely, they become even stronger if the annihilation rate is enhanced at late times or low dark matter velocities (e.g., in the case of Sommerfeld enhancement due to a light dark mediator). Because the observable effect, to first order, depends only on the total ionizing energy liberated by dark matter annihilations/decays, the constraints only depend on details of the dark matter model via an overall efficiency factor (computed for general scenarios in Refs.~\cite{Slatyer:2015jla,Slatyer:2016qyl}) and are quite insensitive to the spectrum of annihilation/decay products. These bounds are particularly competitive for light dark matter with sub-GeV masses, where many direct dark matter searches lose sensitivity and other indirect searches may have difficulty detecting the products of annihilation. In particular, these limits exclude thermal relic dark matter annihilating to SM particles via $s$-wave processes for dark matter masses below $\sim 10$ GeV~\cite{Ade:2015xua,Slatyer:2015jla}, and decaying dark matter in the keV--TeV mass range with a lifetime shorter than $\sim 10^{23}$--$10^{25}$ seconds~\cite{Slatyer:2016qyl}, with the limit depending on the dark matter mass and decay channels.
The proposed CMB-S4 experiment~\cite{Abazajian:2016yjj} is expected to extend these constraints on the DM annihilation cross section by approximately a factor of 2 beyond the Planck results~\cite{Madhavacheril:2013cna}.
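The structure of the annihilation bound can be made concrete: the CMB constrains the combination $p_{\rm ann} = f_{\rm eff}\langle\sigma v\rangle/m_\chi$, so for the canonical $s$-wave thermal cross section the excluded mass range follows immediately. A rough sketch, where the Planck limit value and the efficiency factor $f_{\rm eff}$ are approximate numbers taken from the literature rather than from this document:

```python
# Rough reconstruction of the "thermal s-wave DM excluded below ~10 GeV"
# statement. The CMB bounds p_ann = f_eff * <sigma v> / m_chi; both reference
# numbers below are approximate literature values, used only for illustration.
SIGMA_V_THERMAL = 3e-26   # cm^3/s, canonical s-wave thermal relic cross section
P_ANN_LIMIT = 4e-28       # cm^3 s^-1 GeV^-1, approximate Planck 95% CL bound

def min_allowed_mass_gev(f_eff):
    """Smallest m_chi (GeV) consistent with the bound for s-wave annihilation."""
    return f_eff * SIGMA_V_THERMAL / P_ANN_LIMIT

for f_eff in (0.1, 0.2):
    print(f"f_eff={f_eff}: m_chi > {min_allowed_mass_gev(f_eff):.0f} GeV")
# f_eff ~ 0.1-0.2 gives m_chi > ~8-15 GeV, i.e. the ~10 GeV scale quoted above
```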
\ssection{Supernovae} Supernova 1987A, the core collapse of a massive star in the Large Magellanic Cloud, created an environment of extremely high temperatures and nucleon densities. The rough agreement between the predictions of core collapse models and the observed ``neutrino burst'' lasting $\sim 10$~s has provided an opportunity to set bounds on a wide range of new physics models. In Ref.~\cite{Chang:2016ntp}, updated bounds on a dark sector model involving only a dark photon were presented. Among other novelties, these updates include finite-temperature effects on the production and trapping of the new particles for the first time. They utilize a more realistic treatment of the high-mixing parameter space by including a fully energy-dependent differential optical depth, and they investigate systematic uncertainties inherited from the wide range of progenitor models. Additional improvements include an exact calculation of the lifetime of dark photons below twice the electron mass, where derivative corrections to the Euler-Heisenberg Lagrangian are qualitatively important~\cite{McDermott:2017qcg}, and an investigation of the impact of invisible decays on the core collapse explosion~\cite{ChangEssigMcDermott:upcoming}.
\subsection{Models and Relic Abundance}
\label{WG4sec:models}
Precise cosmological observations have not only provided overwhelming evidence for dark matter, but they have also determined the amount of dark matter in the Universe to the percent level. The relic abundance of dark matter provides yet another criterion for selecting high-value targets: dark matter candidates and parameter regions that have thermal relic abundances in accord with observations merit special attention. For this reason, axions with $\mu$eV masses have traditionally been viewed as the ideal target for axion searches, and WIMPs with TeV-scale masses have been a prime target of the worldwide effort to find dark matter.
In this subsection, we consider new particle candidates and the masses favored by their relic abundances. If dark matter is required to have SM interactions, the weak force is the only viable possibility, and TeV-scale particles are favored by relic density considerations; this is the coincidence known as the WIMP miracle. Once one considers dark sectors, however, other mass scales may be preferred. For example, dark matter can be lighter and more weakly interacting and still have the correct relic density, an alternative coincidence known as the WIMPless miracle~\cite{Feng:2008ya}. In the following sections, we review recent progress that has motivated still other mass scales, and even models in which the mass of the dark matter is not well defined.
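The ``WIMP miracle'' coincidence can be made concrete with the textbook freeze-out estimate $\Omega h^2 \approx 3\times 10^{-27}\,\mathrm{cm^3\,s^{-1}}/\langle\sigma v\rangle$. This order-of-magnitude relation, which hides $\mathcal{O}(1)$ factors from the freeze-out temperature and relativistic degrees of freedom, is a standard approximation rather than a result of this report:

```python
# Back-of-the-envelope thermal freeze-out estimate ("WIMP miracle").
# Uses the textbook approximation Omega h^2 ~ 3e-27 cm^3/s / <sigma v>,
# which hides O(1) factors from the freeze-out temperature and g_*.

def omega_h2(sigma_v_cm3_s):
    """Approximate relic density parameter for an s-wave thermal relic."""
    return 3e-27 / sigma_v_cm3_s

# A weak-scale annihilation cross section <sigma v> ~ 3e-26 cm^3/s
# lands near the observed Omega h^2 ~ 0.12 -- the numerical coincidence
# behind the WIMP miracle.
print(omega_h2(3e-26))  # -> 0.1
```

Any mechanism that reproduces an effective annihilation rate of this size, whatever the underlying mass scale, inherits the same coincidence.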
\ssection{Non-Abelian Dark Sectors and Strongly Interacting Dark Matter} Dark sectors with Abelian symmetries, leading to dark photons, have become a standard reference model for light, weakly-interacting particles. However, dark sectors with dark matter charged under a \emph{non-Abelian} SU($N$) gauge group are also perfectly viable. Non-Abelian models are particularly interesting, because they have the potential to undergo confinement at a scale $\Lambda$. This naturally leads to strongly self-interacting dark matter, as may be indicated by the small scale structure issues discussed above, and more generally, allows dark matter to exhibit substantially different phenomenological features between the early and late universe.
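To see why a confinement scale near $100$ MeV is interesting for self-interactions, one can convert a geometric cross section $\sigma\sim 4\pi/\Lambda^2$ with $m\sim\Lambda$ into the astrophysical units $\mathrm{cm^2/g}$ usually quoted for small-scale structure. The geometric estimate and the choice $\Lambda\sim 100$ MeV below are illustrative assumptions, not specific to any model in the text:

```python
import math

# Natural-units conversions: 1 GeV^-2 = 3.894e-28 cm^2; 1 GeV = 1.783e-24 g.
GEV_M2_TO_CM2 = 3.894e-28
GEV_TO_G = 1.783e-24

def sigma_over_m(lambda_gev):
    """sigma/m in cm^2/g for a geometric cross section sigma ~ 4*pi/Lambda^2
    and dark matter mass m ~ Lambda (both set by the confinement scale).
    This is a dimensional estimate, not a lattice or model computation."""
    sigma_cm2 = 4 * math.pi / lambda_gev**2 * GEV_M2_TO_CM2
    m_g = lambda_gev * GEV_TO_G
    return sigma_cm2 / m_g

# Lambda ~ 100 MeV gives sigma/m of order 1 cm^2/g, the scale invoked
# in self-interacting dark matter solutions to small-scale structure issues.
print(sigma_over_m(0.1))
```

Since $\sigma/m \propto \Lambda^{-3}$, even modestly larger confinement scales fall quickly below the astrophysically relevant range.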
In a pure gauge theory, the hidden sector consists only of hidden gluons, which form into hidden glueballs below the confinement scale. There are $3\to 2$ scattering processes that keep the hidden glueballs in kinetic equilibrium and deplete their number density; thus, freeze-out is driven by this ``cannibalization'' mechanism rather than standard $2\to 2$ annihilations~\cite{Carlson:1992fn}. If the hidden sector is secluded, the lightest hidden glueball state is dominant and stable with a mass $\sim\Lambda$, so cannibalization has little effect on the relic abundance~\cite{Boddy:2014yra}.
Alternatively, if the lightest state can efficiently decay into SM particles, the relic abundance of the heavier and presumably longer-lived states is dictated by the cannibalization process~\cite{Forestell:2016qhc}. In a supersymmetric framework, the hidden sector also contains hidden gluinos.
Under certain SUSY-breaking scenarios, the standard freeze-out process of gluinos can naturally produce a weak-scale dark matter relic abundance~\cite{Feng:2011ik}. Post-confinement, the self-interaction of hidden glueballinos can address small-scale structure anomalies~\cite{Boddy:2014yra}. As an additional consequence, the hidden glueballino spectrum has a hyperfine splitting of the right order to address the unexplained 3.5~keV line observed in the Perseus cluster~\cite{Boddy:2014qxa}.
\ssection{SIMPs and ELDERs} The possibility of significant $3 \to 2$ processes modifies freeze-out and provides an alternative to the ``WIMP miracle'' that has motivated so much of dark matter research to date. For example, dark matter may appear with a mass not at the weak scale but near the QCD confinement scale, $\Lambda_{\rm QCD}\sim 100$ MeV. Such a dark matter particle could be a meson or a baryon of a ``mirror copy'' of the familiar QCD in the hidden sector, e.g., in twin Higgs models. In such a scenario, cannibalization processes naturally lead to a thermal relic abundance consistent with observations. This apparent coincidence, similar in spirit to the well-known ``WIMP miracle,'' was noted in Refs.~\cite{Hochberg:2014dra,Hochberg:2014kqa,Hochberg:2015vrg}, which dubbed such dark matter candidates Strongly Interacting Massive Particles (SIMPs).
A viable SIMP model requires elastic scattering between the SIMP and SM particles to keep the two sectors in kinetic equilibrium until the $3\to 2$ scattering freezes out. If the kinetic decoupling of the two sectors occurs before freeze-out, the dark matter sector will enter the cannibalization regime. In this case, the relic density is determined by the elastic scattering cross section, leading to the Elastically Decoupling Relic (ELDER) scenario~\cite{Kuflik:2015isi}. Elastic scattering between SIMP/ELDER and SM can be mediated by a dark photon. These scenarios make predictions for dark photon masses and couplings that are shown in Fig.~\ref{fig:wimpsimpelders} and will be probed by next-generation dark photon searches. The ELDER scenario also makes a robust prediction for the cross section of elastic scattering between $\chi$ and electrons, since it is precisely this process that sets the $\chi$ relic density. This prediction will be tested in future direct detection experiments that will search for electron recoils from interactions with ambient dark matter.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.65\textwidth]{figs/WIMP_SIMP_ELDER.pdf}
\caption{Regions of parameter space corresponding to the observed relic density, showing the smooth connection between the WIMP, SIMP, and ELDER scenarios described in the text. Dark matter elastic scattering and the cannibalization process scale as $\langle \sigma_{\mathrm{el}} v\rangle=\epsilon^2/m_\chi^2$ and $\langle \sigma_{3\rightarrow 2} v^2\rangle=\alpha^3/m_\chi^5$, respectively. From Ref.~\cite{Kuflik:2015isi}.}
\label{fig:wimpsimpelders}
\end{center}
\end{figure}
\ssection{Non-Abelian Dark Sectors at Fixed Target Experiments} There are a number of ways to realize a dark matter scenario in which the dark matter is kept in kinetic equilibrium with the SM through a dark photon $A'$, but freezes out through the cannibalization process discussed above. For example, the $3\rightarrow 2$ dark matter depletion mechanism can be realized in QCD-like confining dark sectors, where dark matter is composed of stable pions $\pi$ associated with the spontaneous breaking of a global chiral symmetry. In this case, the correct relic abundance is obtained in models where the vector mesons $V$ typically have $m_V \sim 2m_\pi$. When $m_V < 2m_\pi$, these vector mesons must decay to the SM via $V^0\rightarrow \bar f f$ or $V^\pm \rightarrow \pi^\pm \bar f f$. These dark sector states can be produced in fixed target collisions through the decays $A^\prime \to V\pi, \pi\pi, VV$. The vector mesons are unstable, but naturally long-lived. This gives rise to a displaced vertex signature accessible to present and future fixed target experiments. Different experimental baselines provide complementary coverage of the model parameter space. For example, the SLAC beam dump experiment E137 probed very long-lived vector mesons corresponding to small kinetic mixing parameter $\varepsilon$~\cite{Bjorken:1988as}. The currently operating HPS experiment can search for these displaced decays at larger $\varepsilon$~\cite{Battaglieri:2014hga}. In particular, a near-future HPS run can probe theories where the hidden sector pions make up all of dark matter. Future experiments, such as LDMX and long-baseline proton beam dumps like SeaQuest and SHiP are also sensitive to these signals.
\ssection{Co-annihilating Light Dark Matter} Dark sectors with coannihilating thermal relics are well motivated, easy to engineer, and arise in many extensions of the SM, yet they can be difficult to probe with traditional searches. In a representative class of such models, the dark matter abundance arises via $\chi_1 \chi_2 \to$ SM coannihilation, where $\chi_1$ is the stable dark matter candidate, and $\chi_2$ is a heavier unstable dark sector state. After freeze-out, $\chi_2$ is depleted and annihilation shuts off, so these scenarios are safe from CMB bounds~\cite{Izaguirre:2014dua}, but the absence of $\chi_2$ in the halo also eliminates indirect detection observables. Furthermore, upscattering at direct detection experiments requires $\chi_1 \to \chi_2$ transitions, which are kinematically forbidden for $m_2 - m_1 \gtrsim 100\,\mathrm{keV}$, so this mechanism can only be tested with accelerator probes.
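The origin of the $\sim 100$ keV threshold can be sketched with a simple kinematic estimate: the splitting must be paid out of the dark matter-nucleus relative kinetic energy, roughly $\frac{1}{2}\mu v^2$ with $v\sim 10^{-3}c$. The masses and velocity below are illustrative choices, not values from the text:

```python
# Why inelastic upscattering chi_1 N -> chi_2 N shuts off in direct detection:
# the mass splitting must come from the relative kinetic energy, roughly
# (1/2) mu v^2, with mu the DM-nucleus reduced mass and v ~ 10^-3 c.
# The masses and velocity below are illustrative, not from the text.

def max_splitting_kev(m_chi_gev, m_nucleus_gev, v_over_c=1.7e-3):
    """Approximate maximum accessible mass splitting in keV."""
    mu = m_chi_gev * m_nucleus_gev / (m_chi_gev + m_nucleus_gev)  # reduced mass
    return 0.5 * mu * v_over_c**2 * 1e6  # GeV -> keV

# A 100 GeV DM particle scattering off a xenon nucleus (~122 GeV):
print(max_splitting_kev(100.0, 122.0))  # ~80 keV
```

Halo collisions thus supply at most tens of keV, so splittings above $\sim 100$ keV leave direct detection blind and push the burden of proof onto accelerators, as stated above.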
For dark matter masses in the few GeV to TeV range, the most powerful probes involve searches for $\chi_2 \to \chi_1 + \text{SM}$, where the final state SM particles yield displaced vertices at BaBar and the LHC~\cite{Izaguirre:2015zva}. For masses in the MeV to few GeV range, it was shown in Ref.~\cite{Izaguirre:2017bqb} that powerful searches are possible through $\chi_{i} N \to \chi_j N$ scattering or $\chi_2 \to \chi_1+ \text{SM}$ processes observable in detectors positioned downstream of proton and electron beam dumps at MiniBooNE and BDX~\cite{Aguilar-Arevalo:2017mqx,Izaguirre:2013uxa,Battaglieri:2016ggd}. Similarly powerful probes involve missing energy and momentum at LDMX and NA64~\cite{Izaguirre:2014dua,Banerjee:2016tad}. The combined reach of these efforts can comprehensively test the thermal relic parameter space for $\chi_1\chi_2 \to$ SM coannihilation.
\ssection{Sexaquark Dark Matter} Without considering dark sectors, there exists a natural non-Abelian gauge group that could be involved in dark matter dynamics---QCD of the SM. It has recently been pointed out that there may be an as-yet-undiscovered \emph{stable} particle in the SM, which would be an excellent dark matter candidate~\cite{fSDM17}. The particle, called $S$, is a neutral, scalar sexaquark ($uuddss$) with baryon number $\text{B}=2$ and strangeness $\text{S}=-2$~\cite{fSableS17}. Baryon number conservation implies that the $S$ is absolutely stable if its mass is $ \le 2 \, (m_p + m_e) = 1877.6$ MeV. Its relic abundance can naturally be of the right order, and during nucleosynthesis it acts as inert relic dark matter, so nucleosynthesis constraints on baryons do not apply. Simulations indicate that hadronic interactions with gas in the galaxy can bring the dark matter in the solar neighborhood into co-rotation, so it has too little energy to have been detected so far~\cite{fSDM17,wfCoRotation17}.
Two accelerator experiments are proposed to discover the $S$, if it exists, through the processes $K^- p \rightarrow S \bar{\Lambda}$ and $\Upsilon [\rightarrow {\rm gluons}] \rightarrow S \, \bar{\Lambda} \, \bar{\Lambda}$ (or charge conjugate). The $\Lambda \, (\bar{\Lambda})$'s can be reconstructed with high efficiency, and their 4-momenta well measured. If all final particles but the $S$ are detected, missing mass gives the mass of the $S$; B and S conservation establish its distinctive quantum numbers. The $K^- p \rightarrow S \bar{\Lambda}$ experiment can be done with the NA61 detector and beam; all that is needed are simulations to optimize background rejection and a dedicated run with appropriate beam and target. Rate estimates for $\Upsilon$ decay suggest that events with $S$ or $\bar{S}$ in the final state have already been collected; only the resources for their analysis are needed. The two approaches are complementary and can be completed quickly and at very low cost.
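The missing-mass technique invoked here is generic: from the measured 4-momenta of all other particles, the invariant mass of the undetected $S$ follows directly. A minimal sketch with toy numbers, not the experiments' actual reconstruction code:

```python
import math

def missing_mass(p_initial, detected):
    """Invariant mass of the undetected system from 4-momenta (E, px, py, pz).
    All quantities in GeV; a generic missing-mass calculation, not tied to
    any particular detector."""
    E, px, py, pz = p_initial
    for (e, x, y, z) in detected:
        E -= e
        px -= x
        py -= y
        pz -= z
    m2 = E**2 - px**2 - py**2 - pz**2
    return math.sqrt(max(m2, 0.0))

# Toy example: a 3 GeV system at rest with one detected particle
# (E = 1 GeV, |p| = 0.6 GeV); the remainder is the missing mass.
print(missing_mass((3.0, 0.0, 0.0, 0.0), [(1.0, 0.0, 0.0, 0.6)]))
```

Because the missing 4-momentum is fully reconstructed, the method yields the mass of the $S$ event by event rather than only as a kinematic endpoint.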
\ssection{Dynamical Dark Matter} Although many non-standard dark matter scenarios transcend the traditional WIMP or axion frameworks and involve new regions of dark matter parameter space, perhaps none do so as dramatically as those that arise within the Dynamical Dark Matter (DDM) framework~\cite{DDM1,DDM2}. In DDM scenarios, the requirement of dark matter stability is replaced by a balancing of lifetimes against cosmological abundances across a large ensemble of individual dark matter species that exhibit a broad spectrum of masses, lifetimes, and abundances. Thus, in such scenarios, it is the entire DDM ensemble that collectively serves as the dark matter ``candidate,'' albeit one which cannot be characterized by a single mass or cross section.
This change in the nature of the dark-matter candidate has numerous consequences for experimental dark matter searches. First, there are fundamental differences in the search strategies best suited for discovering DDM ensembles at traditional dark matter experiments. For example, in collider experiments, the distributions of relevant kinematic variables can be significantly modified~\cite{DDMLHC}, with major changes for standard experimental handles (such as the ``mass edge'' often apparent in the invariant-mass distributions of particles produced alongside the dark matter constituents). Likewise, at direct detection experiments, dark-matter recoil energy distributions can also be modified in dramatic ways~\cite{DDMDD}. Second, and perhaps even more interestingly, entirely new experimental techniques may also now become relevant for probing the DDM dark sector. For example, a proposed experiment such as MATHUSLA~\cite{MATHUSLA}---a surface detector designed to detect long-lived (but not cosmologically stable) particles at the LHC by searching for displaced vertices on $\mathcal{O}(10^2\mathrm{~m})$ length scales---can serve as a probe of certain otherwise inaccessible regions within the DDM ensemble. This leads to additional complementarities between existing and new probes of the dark sector.
\subsection{Complementarity}
\label{WG4sec:complementarity}
As noted in \secref{introduction}, classic dark matter candidates and existing experiments remain interesting: ongoing and planned searches continue to probe highly motivated regions of parameter space; these programs will continue to dominate the funding profile in dark matter for the foreseeable future; and they provide the context in which new experiments and complementarity are to be evaluated. There is also continual progress in both theory and experiment in these areas. In this section, we discuss recent developments, as well as a few ``exotic'' dark matter candidates for which novel experimental searches have been proposed.
\ssection{Mixed WIMP/Axion Dark Matter}
Models with supersymmetry yield neutralino WIMP dark matter, and models with Peccei-Quinn symmetry~\cite{Peccei:1977hh} yield axion dark matter. An attractive possibility is to consider models with both supersymmetry and PQ symmetry that simultaneously solve the gauge hierarchy and strong CP problems~\cite{Bae:2013hma}. Natural models of supersymmetry have a light Higgsino ($\sim 100-200\,\mathrm{GeV}$), and the Higgsino may be the only supersymmetric state at the weak scale. Such a Higgsino-like WIMP would be the LSP and, if thermally produced, would contribute only a sub-dominant part of the dark matter. However, in supersymmetric models with PQ symmetry, the axion can contribute the remainder of the dark matter abundance. In the SUSY DFSZ solution~\cite{Kim:1983dt}, the breaking of PQ symmetry generates the Higgsino $\mu$-term, $\mu\sim f_a^2/M_p$, and the SUSY spectrum admits a natural little hierarchy $\mu\sim f_a^2/M_p \sim 100-200 \,\mathrm{GeV} \ll m_{\text{SUSY}}\sim m_{3/2} \sim 1-20 \,\mathrm{TeV}$. Some experimental consequences of this setup are that Higgsino dark matter could be detectable by multi-ton-scale noble liquid detectors, the axion mass lies in the range $3\times 10^{-7}\,\mathrm{eV} \lesssim m_a \lesssim 3\times 10^{-4}\,\mathrm{eV}$, and the presence of Higgsinos in the loop generating the $g_{a\gamma\gamma}$ coupling may further suppress this coupling, requiring deeper axion probes~\cite{Bae:2017hlp}.
\ssection{Future Argon Direct Detection Experiments} In addition to exploring new regions of dark matter parameter space, we must continue to search for ``conventional'' dark matter using ever more precise techniques.
Researchers from four collaborations who have pioneered the development of argon dark matter searches are in the process of forming a joint collaboration towards a coordinated global future dark matter program. Argon is unique in that it allows excellent pulse-shape discrimination using scintillation light in a single-phase detector, and a TPC exploiting the ratio of primary scintillation and ionization (S1/S2) can be used to increase background rejection. The collaboration, numbering over 350 researchers, brings together complementary expertise from miniCLEAN, DEAP-3600, ArDM and DarkSide.
The new collaboration will develop and operate DarkSide-20k (DS-20k) at LNGS. The DS-20k program will enhance our sensitivity to WIMPs, particularly at high WIMP mass, using 20 tonnes of UAr, and will also be the first large-scale detector to make use of Silicon Photomultipliers (SiPMs) for light readout. DS-20k has a dark matter sensitivity competitive with that of future searches using xenon, to which it is complementary as a ``background-free'' technology (detection in both targets allows better determination of mass and cross section). The DS-20k program also complements LHC searches, with the direct argon search sensitive to a higher mass range than is accessible with colliders. DS-20k is designed to collect an exposure of 100 tonne-years completely free of neutron-induced nuclear recoil background and all electron recoil background. DS-20k is set to start operating by 2021 and, for 1 TeV WIMPs, will probe WIMP-nucleon spin-independent cross sections of $1.2\times 10^{-47}$ cm$^2$ ($7.4\times 10^{-48}$ cm$^2$) after 5 (10) years. The collaboration is also targeting a longer-term multi-hundred-tonne LAr detector to follow DS-20k, which will reach down to the neutrino floor and is immune to the solar $pp$ elastic scattering neutrino background, a background that remains a concern for xenon detectors at the level of 1/2 event per tonne-year even after recoil discrimination. The program includes further development of underground argon, SiPM photosensors, and low-background materials screening. DOE support in these areas will be extremely valuable to this effort.
\ssection{Future Two-phase Xenon Experiments} As well as proposed new targets and technologies, existing approaches are being strengthened and extended. For WIMP masses above 4 GeV, searches for spin-independent and neutron-spin-dependent dark matter interactions are being led by two-phase xenon experiments, such as XENON~\cite{Aprile:2015uzo}, LUX~\cite{Akerib:2016vxi}, and PandaX~\cite{Tan:2016zwf}. At a WIMP mass of 50 GeV$/c^2$, WIMP-nucleon spin-independent cross sections above $1.1\times 10^{-46}\,\mathrm{cm}^2$ are excluded by LUX, which had a 250 kg active xenon mass. PandaX-II is currently operational with a 500 kg active xenon mass, and the PandaX collaboration is designing a new experiment at CJPL, PandaX-nT. XENON1T is beginning operations with a 2000 kg active mass, has a projected sensitivity of $1.6\times 10^{-47}\,\mathrm{cm}^2$ at 50 GeV$/c^2$, and is planned to be upgraded to XENONnT, which will use the same infrastructure at LNGS. Currently under construction, the LUX-ZEPLIN (LZ) experiment~\cite{Mount:2017qzi} will use a 7000 kg dual-phase xenon time projection chamber for the direct detection of dark matter. Suppression of backgrounds is achieved through fiducialization and a veto strategy involving anti-coincidence between the main time projection chamber and outer detectors (an instrumented xenon ``skin'' and a liquid scintillator detector). LZ is projected to have a baseline sensitivity of $2.3\times 10^{-48}\,\mathrm{cm}^2$ for a 40 GeV/$c^2$ WIMP mass. Operation of LZ will begin in 2020 at the Sanford Underground Research Facility (SURF).
\ssection{Cherenkov Telescope Array} The Cherenkov Telescope Array (CTA)~\cite{2013APh....43....3A} will provide an order-of-magnitude improvement in sensitivity over current imaging air Cherenkov telescopes in the 100 GeV -- 10 TeV energy range, along with new sensitivity from 20 GeV up to 300 TeV. An 8$^\circ$ field of view combined with 2--3 arcmin angular resolution enables efficient surveys and studies of extended and diffuse emission. Better than 10\% energy resolution above 100 GeV enables resolution of spectral features. For deep observations ($\sim$500 hrs) of the Galactic Center, CTA will have the sensitivity to reach the thermal relic cross section for a broad range of WIMP particle masses and annihilation channels~\cite{2015PhRvD..91l2003L}. CTA is especially powerful in searching for WIMPs with masses at the TeV scale and higher, making it a necessary complement to other techniques to span the full dark matter discovery parameter space~\cite{2013arXiv1305.0302W,2015PhRvD..91e5011C}. CTA will contribute more broadly to dark matter science via measurements of the cosmic-ray electron spectrum to several tens of TeV or higher, depending on whether local sources or more exotic production mechanisms (such as dark matter) contribute, and via searches for axion-like particles through studies of the $\gamma$-ray opacity of the universe~\cite{2013APh....43..189D}.
\ssection{MeV Gamma-Ray Detectors} Gamma-ray observations in the MeV energy range with future telescopes, such as e-ASTROGAM~\cite{DeAngelis:2016slk}, offer opportunities to search for signals from the annihilation or decay of particle dark matter. Intriguingly, MeV ``excesses'' have been identified, both in the MeV diffuse extragalactic gamma-ray background, as well as in the Galactic MeV emission compared to expected astrophysical background~\cite{Strong:2004ry,Strong:2004de,Lacki:2012si}. Such excesses could be associated, for example, with dark matter decay~\cite{Cembranos:2006gt}. Dark matter particles with masses in the MeV range can generically produce detectable MeV gamma-ray signals that are compatible with existing constraints from BBN and CMB, as recently studied in, for example, Refs.~\cite{Boddy:2015efa, Bartels:2017dpb, Gonzalez-Morales:2017jkx} and in Ref.~\cite{Boddy:2016fds} in the context of dynamical dark matter models.
\ssection{ATLAS and CMS Searches} The LHC provides stringent limits on dark matter production via spin-0 and spin-1 mediators~\cite{Abercrombie:2015wmb,CMS:DP2016057,CMS:2017xrr,ATLAS:PUBPAGE}. Reinterpretations of CMS and ATLAS results in terms of dark matter-nucleon cross sections demonstrate the complementarity between collider and direct detection measurements, and the assumptions involved in both approaches. In particular, collider limits typically become the most stringent for spin-dependent cross sections and for light dark matter masses $\lesssim$ 100 GeV, assuming the mediators involved in DM production are not light.
The High-Luminosity LHC (HL-LHC) will provide ATLAS and CMS with 3000 fb$^{-1}$, a factor of 100 more data than currently collected. Projections for dark matter searches at the HL-LHC indicate that the reach of collider experiments will extend below the coherent neutrino scattering limit~\cite{Buchmueller:2014yoa}, where direct detection experiments have little sensitivity. Thanks to the large amount of available data and plans to broaden the dark matter program beyond the currently explored signatures, the HL-LHC will also extend its discovery reach to weaker coupling scenarios. For this program to be successful, it is imperative that the performance of the ATLAS and CMS detectors be improved beyond their present levels. One of the primary challenges for the HL-LHC experiments will be the extremely large number of interactions per beam crossing (pile-up). Pile-up mitigation can be achieved by introducing tracking information in the hardware triggers of the experiments, allowing information from fully reconstructed events to be leveraged early in the process of data selection. This will deliver a substantial reduction in background rates while maintaining good efficiency and measurement resolution for dark matter signals.
\ssection{LHCb Searches} Many light dark sector candidates, e.g., the dark photon, are expected to be produced in rare meson decays. The high-luminosity environment of the LHC creates copious numbers of mesons, which provide a complementary and already existing avenue for dark sector searches. However, data acquisition of rare meson decays in high pile-up LHC events is experimentally challenging. The LHC beauty experiment (LHCb) was purpose-built for detecting rare $B$-hadron decays and, consequently, is ideally suited for rare meson decay searches. LHCb has a flexible trigger system where real-time detector calibration, low transverse-momentum thresholds, and full event reconstruction at trigger level allow for the acquisition of high-statistics data samples of rare meson decays. During Run 3 of LHC operation, the LHCb data acquisition system will be upgraded to a triggerless readout with full software reconstruction, enabling even more efficient data collection.
LHCb searches have already been performed for dark bosons produced in $B^0(B^+) \to K^{*0}(K^+) \mu^+ \mu^-$ decays~\cite{Aaij:2015tna,Aaij:2016qsm} and Majorana neutrinos in $B^- \to \pi^+ \mu^- \mu^-$ decays~\cite{Aaij:2014aba}. Two dark photon searches using inclusive di-muon production~\cite{Ilten:2016tkc} and $D^{*0} \to D^0 \, e^+ e^-$ decays~\cite{Ilten:2015hya} were recently proposed. Because LHCb has an excellent lifetime resolution of $\approx 50~\mathrm{fs}$, prompt and displaced searches can be performed simultaneously, allowing much of the open parameter space in the kinetic mixing parameter $\varepsilon$ between prompt and beam-dump limits to be covered with the full Run 3 LHCb dataset. The $D^{*0}$ search will cover dark photon masses from the di-electron mass threshold up to $1.9~{\rm GeV}$, and the inclusive di-muon search will cover masses from the di-muon threshold upwards. Further channels are being considered to cover the gap between these two channels. Although the $D^{*0}$ search requires Run 3 triggers, the inclusive di-muon search is already possible with the current Run 2 LHCb data.
\ssection{Light Dark Matter at Neutrino Facilities} Neutrino facilities can probe light dark matter-nucleon couplings in a fashion complementary to present and future direct detection experiments~\cite{Batell:2009di}. If dark matter interacts with quarks via a light mediator, a dark matter beam is produced at these facilities along with the neutrino beam; the dark matter particles then enter the near detector and scatter off the nucleons inside, just as neutrinos do. Hence, the challenge of this program is the suppression of the neutrino background.
The MiniBooNE (MB) collaboration has already carried out the first dedicated search for light dark matter at a neutrino facility~\cite{Aguilar-Arevalo:2017mqx}. They placed strong bounds on the 1 MeV--1 GeV mediator mass region, covering a significant part of previously unexplored territory. Building on the success of MB, the authors of Refs.~\cite{Dobrescu:2014ita,Coloma:2015pih,Frugiuele:2017zvx} investigated the reach of facilities near the 120 GeV proton beam at the Main Injector facility and the future LBNF facility, both at Fermilab. These higher energy proton beams offer the possibility to extend the reach up to 7--8 GeV mediator masses. The signal in this case is given by deep inelastic scattering events. The neutrino background, which presents a problem for quasi-on-axis detectors like MINOS or NOvA, can be sufficiently suppressed by going far off-axis. The ideal position to maximize signal over background is $6.5^\circ$ off-axis and 200 m away~\cite{Coloma:2015pih}, which is, coincidentally, very close to the location of the MB detector relative to the Main Injector ($6.5^\circ$ and 750 m). Therefore, by analyzing existing data coming from the Main Injector, MB can extend its reach for light dark matter and can also achieve similar sensitivity for sub-GeV mediators~\cite{Frugiuele:2017zvx}. An advantage of this proposal is that it is completely symbiotic with the neutrino program.
\ssection{Trapped Atom Search for Sterile Neutrino Dark Matter} An alternative, and perhaps equally well-motivated, candidate to WIMP dark matter is the sterile neutrino~\cite{Shi:1999}. The HUNTER experiment (Heavy Unseen Neutrinos from Total Energy-momentum Reconstruction)~\cite{Smith:2016vku} would search for sterile neutrino dark matter in a non-accelerator experiment using a medical isotope and existing technologies. High-precision, kinematically complete measurements of K-capture decays are made, using a sample of $>10^8$ radioactive $^{131}$Cs atoms contained in a laser atom trap. Any sterile neutrino would be produced in the decay as an additional component, beyond the $\nu_1$, $\nu_2$, $\nu_3$ admixture, of the emitted electron neutrino. Momenta of the recoiling $^{131}$Xe atom and the X-ray and Auger electron(s) produced in the decay are all measured with the requisite accuracy using the MOTRIMS spectroscopic method~\cite{Ullrich:2003}, and the neutrino 4-momentum and mass are reconstructed. A sterile neutrino signal appears as a separated population of events at nonzero neutrino mass. The initial implementation would probe sterile neutrino masses in the range of ten to a few hundred keV and coupling constants in the high $10^{-5}$ range, requiring a three-year program and funding at a small fraction of the small projects portfolio. Subsequent upgrades of the trap and the detection system would probe coupling constants down to the $10^{-11}$ range.
\ssection{Mirror Neutron Searches}
A novel approach to probing the nature of dark matter is to search for neutron oscillations into a hidden dark sector. Neutron oscillations are predicted by theories that postulate a parallel sector with particles and interactions identical to those of the SM, such as mirror matter~\cite{Berezhiani:2005hv}. Big Bang nucleosynthesis and cosmological limits imply that mirror matter should be colder than ordinary matter, and therefore helium-dominated with a faster cosmological evolution. Mirror matter could explain baryogenesis and some or all of dark matter~\cite{Berezhiani:2003xm}.
Ultracold neutron storage measurements place limits on the oscillation time of $\tau>448$~s, assuming no mirror magnetic field $B'$~\cite{Serebrov:2008hw}. When a nonzero $B'$ is considered, some conflicting results have been reported for oscillation times of order $\tau\sim 10$~s~\cite{Berezhiani:2012rq, Altarev:2009tg}. To clarify the situation, a disappearance-regeneration ``beam-dump'' type experiment has been proposed~\cite{Berezhiani:2017azg}. This type of experiment is uniquely sensitive to a certain class of dark matter. Existing cold neutron beamlines, such as those at the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory or the National Institute of Standards and Technology Center for Neutron Research (NIST NCNR), could produce results in a few years for a small fraction of the small project threshold~\cite{BroussardRyboltUSCosmicVisions}, which will either exclude those controversial results or discover a new phenomenon that will inform us about the nature of dark matter.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.55\textwidth]{figs/PBH.pdf}
\caption{Existing microlensing constraints on the fraction of the mass in the Milky Way halo that can be composed of intermediate mass MACHO dark matter. The masses of detected black holes \cite{Abbott:2016blz,Wyrzykowski:2015ppa,LoebNature} are indicated in orange. From Ref.~\cite{DawsonUSCosmicVisions}.}
\label{fig:PBHchimney}
\end{center}
\end{figure}
\ssection{Microlensing Searches for Black Hole Dark Matter} LIGO's recent discovery of $30M_\odot$ black holes has renewed interest in the possibility that dark matter consists entirely of intermediate mass black holes formed less than one second after the Big Bang.
Massive compact halo object (MACHO) searches from the 1990's constrained the MACHO content of the universe for MACHO masses below $15M_\odot$. While the original CMB and wide-binary constraints from the 2000's appeared to exclude the intermediate mass (IM) MACHO parameter space above $2M_\odot$, recent studies have shown that the complex and poorly constrained astrophysical assumptions on which the CMB and wide-binary constraints relied were incorrect, once again opening the parameter space above $15M_\odot$, where black holes like those detected by LIGO may account for the majority of dark matter (Fig.~\ref{fig:PBHchimney}). Rather than rely on probes with complex astrophysical assumptions and associated systematics, one can exploit direct detection microlensing probes of the IM MACHO population to determine if they comprise all of the dark matter.
With a 5-year, 700-square-degree, multi-band survey of the Galactic bulge with DECam (4 nights/month, 8 months/year, resulting in $\sim 60$ measurements/year of $\sim 500$ million stars) one expects $\sim 100$ microlensing events by intermediate mass MACHOs in the $15-10^4 M_\odot$ range. LSST also has potential to be the ideal survey for this science (with $\sim 1000$ expected microlensing events); however, microlensing is currently outside its purview. Additionally, current survey plan options, which only observe the Milky Way in the first year, will unnecessarily preclude microlensing dark matter science. It is pressing that we begin this effort now while there is still time to influence the LSST survey strategy. By leveraging existing DOE investments in DECam and LSST, LLNL and FNAL computing, and LLNL LDRD staff support, this survey can be carried out for a small fraction of the small project threshold~\cite{DawsonUSCosmicVisions}.
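The long survey baseline is driven by the Einstein-crossing timescale of such massive lenses. A back-of-envelope estimate follows; the lens mass, source/lens distances, and transverse velocity below are illustrative assumptions, not survey-design numbers:

```python
import math

# Einstein radius and event timescale for an intermediate-mass MACHO lens.
G, c  = 6.674e-11, 2.998e8        # SI units
M_sun = 1.989e30                  # kg
kpc   = 3.086e19                  # m
M     = 30 * M_sun                # LIGO-like lens mass
D_s, D_l = 8 * kpc, 4 * kpc       # bulge source, lens halfway (illustrative)
v_perp   = 2.0e5                  # m/s, ~200 km/s transverse velocity

# R_E = sqrt( (4 G M / c^2) * D_l (D_s - D_l) / D_s )
R_E = math.sqrt(4 * G * M / c**2 * D_l * (D_s - D_l) / D_s)
t_E = R_E / v_perp / 86400.0      # Einstein-crossing time in days
print(t_E)                        # ~190 days for these parameters
```

Timescales of order months explain both the multi-year baseline and why a modest cadence of $\sim 60$ measurements per year suffices for this lens mass range.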
\subsection{New Candidates, Targets, and Complementarity--Summary}
\label{WG4sec:conclusions}
Dark matter has long been one of the leading scientific open questions of our time, but the field has entered a new era. Simply put, dark matter has been transformed in recent years by innovative cross talk across many fields of physics. Previously confined to the cosmic frontier of particle physics, it now spans the cosmic, energy, and intensity frontiers, has become an incredibly fertile field for creative ideas about new particles and forces, and is the source of fascinating new connections between particle physics, astrophysics, and many other subfields, including nuclear, atomic, and condensed matter physics. Most notably, these innovative ideas have opened up completely new directions that can be explored by inexpensive experiments, creating opportunities not seen since the early days of particle searches for dark matter in the 1980's.
In this chapter we have briefly reviewed the new candidates, target regions of parameter space, and complementarity between different approaches that have emerged. There are many exciting opportunities for high-value investments in dark matter research. We highlight a few of them here:
\begin{itemize}
\item {\em Importance of Investment in Theory.} Theory has played a particularly important role in these recent developments, motivating new models and regions of parameter space, suggesting new experiments, and drawing connections between disparate phenomena. {\bf Healthy support for theory is essential to maintaining the flow of creative and cross-disciplinary ideas that have been seen in recent years, and which may finally unmask the particle identity of dark matter.}
\item {\em Nuclear and Accelerator Tests of the $^8$Be Anomaly.} The $6.8\sigma$ anomaly in $^8$Be nuclear decays may be evidence for a new 17 MeV boson and provides a well-defined target for future experiments. The experimental results disagree with new and refined nuclear theory predictions, and several viable new particle explanations exist. Additional experiments are needed to confirm or exclude the anomaly. {\bf The $^8$Be anomaly strongly motivates proposed followup nuclear experiments that are fast (under 2 years) and cheap (a small fraction of the small projects threshold), as well as isotope shift spectroscopy experiments and accelerator searches for new bosons with masses $\sim 10$ MeV and electron couplings $\varepsilon \sim 10^{-4} - 10^{-3}$.}
\item {\em Synergy with Cosmology and Astrophysics.} Precision cosmology now probes the microscopic particle properties of dark matter. Observations of the CMB and supernovae constrain regions of parameter space inaccessible to particle experiments, and small-scale structure has motivated self-interacting dark matter models with implications for particle experiments. {\bf Small investments in simulations and astroparticle theory can leverage the enormous amount of cosmological data already being collected and are certain to provide not just {\em constraints} on dark matter {\em candidates}, but {\em measurements} of the properties of {\em dark matter}.}
\item {\em Importance of the 1 to 100 MeV Mass Scale.} The WIMP miracle has motivated searches for weak-scale dark matter, but recent developments have broadened the range of interesting masses for both dark matter particles and the particles mediating their interactions with the standard model. For example, motivations for sub-GeV dark matter from hidden thermal relics and asymmetric dark matter are discussed above. In the work discussed in this section, a diverse set of new considerations motivate the 1 to 100 MeV mass scale for dark sector particles. These considerations include thermal relic abundances in SIMP and ELDER models with significant $3\to2$ interactions, small-scale structure puzzles that motivate QCD-like self-interactions induced by 1 to 100 MeV mediators, and existing anomalies, such as the muon $(g-2)$, the proton radius and $^8$Be anomalies. {\bf New models, astrophysical observations, and existing experimental anomalies point to the 1 to 100 MeV mass scale as a high-value target region for dark matter and dark mediator searches.}
\item {\em Microlensing Searches for Solar Mass Black Hole Dark Matter.} The LIGO observation of colliding $\sim 30 M_{\odot}$ mass black holes has renewed interest in the possibility that such black holes make up some or all of the dark matter. {\bf The LIGO discovery of gravitational waves from colliding black holes strongly motivates a proposed microlensing search that can confirm or exclude the possibility of intermediate mass black hole dark matter using existing facilities with minimal funding.}
\end{itemize}
\subsubsection{WG1}
Direct-detection experiments play a unique and essential role in our quest to identify the DM. Several proposals and ideas exist for new experiments that present a low-cost opportunity --- well within the ``small-project'' scale --- to probe DM with masses between the meV to GeV scale, many orders of magnitude in mass below the planned searches by the G2 experiments LZ and SuperCDMS. In fact, the working group recognizes that recent advances in theory and experiment mean that {\it now} is the right time for targeted investments to bring to fruition several recent new ideas and proposals and develop them into real experiments.
Several well-motivated DM candidates can be probed. This includes DM that obtains its relic abundance from thermal freeze-out, from an initial asymmetry between DM particles and anti-particles, from freeze-in, through $3\to 2$ annihilations in models of Strongly Interacting Massive Particles (SIMPs), from its elastic scattering off Standard Model particles in the ``Elastically Decoupling Relics'' (ELDERs) scenario, or from the misalignment mechanism. In several cases, {\it sharp theory targets} in parameter spaces can be identified, which can be probed by first-generation, low-cost experiments with target exposures of as little as 100 gram-days.
Direct detection also provides the {\it only} possibility to test several freeze-in scenarios: in these scenarios, DM is never in thermal equilibrium with ordinary matter in the early Universe --- necessitating tiny interactions between DM and Standard Model particles --- but a small residual annihilation of Standard Model particles to DM through an ultralight mediator allows the correct relic abundance to be obtained. Although the couplings are small, the ultralight mediator leads to an enhancement of the direct-detection cross section at low momentum transfers, allowing this scenario to be probed with new direct-detection experiments but not with other experiments.
We emphasize that the {\it same} experiment can probe {\it many} DM candidates. At the same time, there is a need for experiments that probe DM couplings to electrons as well as experiments that probe DM couplings to nuclei.
A discovery of a new particle at an underground direct-detection experiment would constitute strong evidence that such a particle constitutes all or at least part of the DM.
This emphasizes that a small-scale program will be most successful if it contains a multitude of approaches to probe DM.
\subsubsection{WG2}
In the case of ultra-light bosonic dark matter, the QCD axion remains one of the best motivated dark matter candidates. While axions provide perhaps the simplest solution to the strong-CP problem, these models also inevitably produce dark matter via release of their initial potential energy density. Axion dark matter searches and cosmic microwave background experiments provide complementary probes of inflation: a measurement of the axion mass by detection of the dark matter beam would also immediately determine or constrain the energy scale of cosmic inflation. Recent phenomenology has also indicated a possibly rich interplay between QCD axions and the electroweak hierarchy problem.
Searches for the QCD axion, including continuation of the current ADMX generation 2 experiment, should be highly prioritized in any future dark matter program. However, it should be noted that similarly cost-effective experimental techniques have been identified that would cover a broad range of masses for well-motivated and more general scalar, pseudoscalar, or vector dark matter models. These experiments would import non-traditional detector technology that has nonetheless been well developed in other fields of physics: atom interferometry, nuclear magnetic resonance, and fifth-force measurements. Large improvements in sensitivity to low-mass bosonic dark matter can and should be quickly obtained by engaging these other communities in cross-disciplinary collaborations.
\subsubsection{WG3}
The next generation of small-scale accelerator experiments offers an opportunity for the US DM program to play {\it the} leading role in light DM and dark sector physics over the next decade. Accelerator experiments have become essential tools in probing DM in the vicinity of known Standard Model mass scales, roughly MeV - TeV. A thermal origin is a well-motivated paradigm that has received much attention during the last decades, and it provides sharp theoretical guidance for new experimental probes. Among the new DM parameter space highlighted in this report, sub-GeV hidden sector thermal DM that can annihilate directly into Standard Model final states stands out both for its predictiveness and testability in laboratory experiments. Accelerators can cover {\bf all} of the direct annihilation thermal targets as well as much of the natural parameter space for the secluded annihilation DM scenario.
New precision fixed-target and collider experiments are uniquely capable of decisively testing this possibility,
benefiting from the fact that the rate of relativistic DM production at accelerators is predicted from thermal freeze-out,
whereas the rate of non-relativistic DM scattering is highly sensitive to the details of the dark sector particles.
The discussion in this section is focused on sub-GeV directly annihilating thermal DM, because of the dramatic scientific impact of reaching this milestone as well as the opportunity to do so in a timely fashion. At the same time, we emphasize that the reach of an accelerator-based program is broad.
The proposed experiments will explore a wide parameter space for secluded thermal DM, despite the absence of a sharp sensitivity milestone for this scenario, as well as models of DM with a quasi-thermal origin such as asymmetric DM and the SIMP/ELDER scenarios.
Finally, a key feature of the next generation accelerator program highlighted in this working group is that the various experimental methods we survey leverage proven techniques and existing technology, thus enabling the deployment of the majority of the proposals in the very near future.
\subsubsection{WG4}
The hunt for dark matter now crosses multiple frontiers and benefits from vibrant communication between many subdisciplines of physics, including astrophysics, cosmology, and nuclear, atomic, and condensed matter physics. Innovations springing from these collaborations have created a wealth of new ideas that can be explored by inexpensive experiments. In addition, there are various existing experimental anomalies that may be pointing towards dark sector physics. We highlight a few areas where research support is particularly timely and will lead to great improvements in our understanding of dark matter and the dark sector.
\begin{itemize}
\item {\em Importance of Investment in Theory.} Healthy support for theory is essential to maintaining the flow of creative and cross-disciplinary ideas that have been seen in recent years, and which may finally unmask the particle identity of dark matter.
\item {\em Nuclear and Accelerator Tests of the $^8$Be Anomaly.} The $^8$Be anomaly strongly motivates proposed followup nuclear experiments that are fast (under 2 years) and cheap (a small fraction of the small projects threshold), as well as isotope shift spectroscopy experiments and accelerator searches for new bosons with masses $\sim 10$ MeV and electron couplings $\varepsilon \sim 10^{-4} - 10^{-3}$.
\item {\em Synergy with Cosmology and Astrophysics.} Small investments in simulations and astroparticle theory can leverage the enormous amount of cosmological data already being collected and are certain to provide not just {\em constraints} on dark matter {\em candidates}, but {\em measurements} of the properties of {\em dark matter}.
\item {\em Prominence of the 1 to 100 MeV Mass Scale.} New models, astrophysical observations, and existing experimental anomalies point to the 1 to 100 MeV mass scale as a high-value target region for dark matter and dark mediator searches.
\item {\em Microlensing Searches for Solar Mass Black Hole Dark Matter.} The LIGO discovery of gravitational waves from colliding black holes strongly motivates a proposed microlensing search that can confirm or exclude the possibility of intermediate mass black hole dark matter using existing facilities with minimal funding.
\end{itemize}
\section{Introduction}
Phases of matter with intrinsic topological order are characterized by a pattern of long-range quantum entanglement~\cite{Wenbook,Chen_Gu_Wen,Kitaev_2003}.
This entanglement structure endows such states with a rich phenomenology.
Indeed, they exhibit exotic properties such as a topological ground state degeneracy, ``fractionalized'' bulk excitations with novel quantum statistics, and quantized response properties~\cite{Wen_90, Wilczek82, Wen89, Read89, Kivelson89}.
While such phases were first discovered in the context of the fractional quantum Hall effect~\cite{Tsui82,Laughlin83}, they were later theoretically generalized to quantum spin systems and connected with Anderson's resonating valence bond liquid~\cite{ANDERSON,Anderson87,Wen_90,ReadSachdev} to establish the notion of a quantum spin liquid (QSL).
Since then, QSLs have been the subject of decades of sustained interest within the context of condensed matter physics~\cite{SpinLiquidsReviewBalents,Knolle19,WenRMP,moessner2021topological,Broholmeaay0668}.
In addition to their intriguing material properties, QSLs can also be utilized as a platform for fault-tolerant quantum computation \cite{Kitaev_2003,Kitaev06,Nayak_RMP}.
Namely, the novel quantum statistics of bulk excitations above a QSL can be used to implement logical gates upon it.
Since the statistics of these excitations are topologically protected and information is stored non-locally in the state, such states are intrinsically fault-tolerant at the hardware level.
Accordingly, the prospect of building a ``topological quantum computer'' has stimulated large-scale investigations of such phases in the context of quantum information science.
As a consequence, there have been persistent efforts toward realizing QSLs in solid-state materials \cite{SpinLiquidsReviewBalents,Knolle19,WenRMP,moessner2021topological,Broholmeaay0668}.
The key challenge here is that the requirements for realizing topological order in equilibrium are very restrictive.
In a broad set of circumstances, there are two essential ingredients.
The first is that, at low energies, the system is described by an \textit{emergent gauge theory}---its low-energy states satisfy a local energetic constraint typically due to either geometric or interaction frustration.
Such constraints define the notion of a local Gauss law and lead to an extensive number of energetically low-lying Gauss law-satisfying states.
The second ingredient is the existence of strong quantum resonances connecting these low-lying states which stabilize a thermodynamically extensive quantum superposition of them.
Such a superposition ensures that the excitations of the emergent gauge theory are deconfined, leading to the celebrated notion of \textit{anyons} \cite{Leinaas77,Wilczek82}.
While local constraints are routinely found in frustrated magnetic systems (see e.g. Refs.~\onlinecite{liebmann1986statistical,Bramwell_2001,Castelnovo2008}), strong quantum resonances between states satisfying these constraints often require many local rearrangements of the state.
Because naturally occurring Hamiltonians typically contain only few-body terms, such resonances must usually be generated perturbatively.
As a consequence, they are typically very weak, leading to a small energy gap above a putative topologically ordered phase and rendering such phases unstable.
Recently, however, there have been a number of pioneering experiments that see signatures of QSLs in programmable quantum simulators~\cite{Semeghini21, Satzinger21}.
Notably, building on a theory proposal \cite{Verresen21}, a recent experiment \cite{Semeghini21} on Rydberg atom tweezer arrays \cite{Browaeys_2020} found QSL-like signatures by placing atoms on the bonds of a kagome lattice (i.e. the atoms live on the so-called ``ruby lattice'') which interact via the Rydberg blockade \cite{Lukin01,Jaksch00,Gaetan09,Urban09}.
Intriguingly, the experiment was able to find QSL signatures in a regime of parameter space where a careful numerical study predicted QSL order would not be stable in the ground state.
The supplementary material of the aforementioned experimental paper \cite{Semeghini21}, along with follow-up numerical and variational studies \cite{Giudici22dyn,Cheng_2021}, provided strong evidence that the origin of the QSL signatures could be traced back to the dynamical state preparation protocol used to explore the ground state phase diagram of the experiment.
In particular, using small system size numerics, these two studies found that the process of preparing the quantum simulator in a trivial phase and then dynamically tuning the Hamiltonian to a parameter regime predicted to be in the confined phase of the system's emergent gauge theory led to QSL signatures consistent with those observed in the experiment.
Despite strong numerical evidence, the Rydberg atom experiment and the subsequent numerical study leave open an intriguing theoretical question regarding the precise mechanism underlying the non-equilibrium preparation of a QSL-like state.
This open question inspires the present work.
The overarching goal of this work is to identify whether and when unitary quantum dynamics can approximately prepare exotic states of matter, even when these are not the ground state.
In answering this question, we pinpoint the precise dynamic regime where a parameter sweep can prepare spin liquids of restricted sizes---which we christen \emph{quantum spin lakes}. These findings are of considerable interest for at least three complementary reasons.
First, our results provide insight into how unitary quantum dynamics can give rise to surprisingly \emph{structured} entangled states, even in non-equilibrium regimes where one might not have expected them.
Notably, we study a regime of dynamics that is not contained within the two typically studied paradigms: we are neither close to equilibrium where adiabatic approximations and universal scaling theories directly hold \cite{AdiabaticTheorem_DeRoeck, Zurek_1985, Zurek_Review, ZUREK_1996, Kibble_1976, KIBBLE_1980, Chandran_2013, Chandran_2012}, nor are we so far out of equilibrium that our final state lacks any of the characteristics of the emergent low-energy physics \cite{Abanin_Review, Else_Review, Vedika_review, srednicki1994chaos, fisher2022random}.
Indeed, we combine elements from these two approaches by studying systems with two emergent degrees of freedom and working in a dynamical regime which is adiabatic relative to the first and sudden relative to the other.
Second, in evincing the mechanism underlying the dynamical preparation of QSL-like states, we gain an understanding of how the preparation procedure scales as a function of system size.
This serves as both an important theoretical question to answer but also practically addresses the applicability of the mechanism in future quantum simulation experiments with potentially larger numbers of qubits.
In particular, we identify the tuneable lattice features which inevitably restrict the size of the resulting spin lake.
Consequently, we conclude that preparing a thermodynamically large spin liquid still requires realizing ground state physics.
Finally, our results make it possible to prepare a wide range of topological states in analog noisy-intermediate scale quantum (NISQ) devices~\cite{Preskill2018NISQ} where probing quantum dynamics is more natural than cooling to a many-body ground state~\cite{QuantumSimulators}.
Notably, this goes well beyond the case of a $\mathbb{Z}_2$ spin lake which had been sighted in the Rydberg tweezer array context \cite{Semeghini21}.
In particular, we illustrate how non-equilibrium dynamics can even prepare certain states which do not appear as stable ground states in generic two-dimensional systems. For instance, we describe the preparation of a $U(1)$ spin liquid in a dimer model on a bipartite lattice, which appears in equilibrium as a fine-tuned Rokhsar-Kivelson point \cite{SpinLiquidsReviewBalents,RK,RKhoneycomb, ReadSachdev1989PRL,READSACHDEV_1989_Nuc, ReadSachdev_PRB,SachdevED,PolyakovBook}.
Motivation in hand, in the next section we will provide an overview of the ideas underlying the dynamical preparation of the QSL-like state and will provide an outline for the rest of this work in Section~\ref{subsec-Outline}.
\section{Intuitive Overview and Key Ideas} \label{sec-Key}
In this section, we will provide an intuitive picture of the mechanism that underlies the dynamical preparation of the QSL-like state, which will be made more precise and supported numerically in subsequent sections.
In particular, in Section~\ref{subsec-SLinEq} we will start by recounting the basic physics of spin liquids in equilibrium, using the toric code as a paradigmatic example.
Subsequently, in Section~\ref{subsec-Slakedynamics} we will provide a physical picture for the mechanism underlying the dynamical preparation of a QSL-like state.
We will conclude in Section~\ref{subsec-Outline} with an outline of the remainder of this paper.
\subsection{Spin Liquids in Equilibrium} \label{subsec-SLinEq}
\begin{figure}[t]
\centering
\includegraphics[width = 247pt]{Figure_1_v6.pdf}
\caption{\textbf{Equilibrium Quantum Spin Liquids and Out-of-Equilibrium Quantum Spin Lakes.} (a) In equilibrium, QSLs are characterized by a lack of condensed $e$ and $m$-anyons (middle panel).
%
When $K$ (defined in Eq.~\eqref{eq-KitaevTC}) is small, violations of the Gauss law ($e$-anyons) condense leading to a Higgs phase (left panel).
%
Alternatively when $K$ is too large, perturbatively generated resonances are small relative to confining fields leading to the condensation of $m$-anyons (right panel).
%
The confined phase and Higgs phase are known to be adiabatically connected to one another but can occasionally be separated via a first-order phase transition.
(b) During a dynamical sweep, it is possible to remain in equilibrium relative to $e$-anyons but out-of-equilibrium relative to $m$-anyons.
%
As a result, $e$-anyons are mostly equilibrated out during the sweep while $m$-anyons fail to nucleate because they effectively experience a `sudden' approximation.
%
This leads to a state that is nearly defectless over a large length scale, which we brand a quantum spin lake.
}
\label{fig:intuitivepicture}
\end{figure}
We start by recounting the physics of spin liquids in equilibrium.
Readers familiar with spin liquids may choose to skip this subsection and move to Section~\ref{subsec-Slakedynamics}.
As outlined in the introduction, the requirements for a broad class of spin liquids in equilibrium are twofold: (1) the presence of an energetic constraint on spin configurations that appear at low-energies and (2) the presence of terms in the Hamiltonian that connect such states.
More precisely, the first condition gives us an effective constrained Hilbert space---typically no longer having a tensor product structure---where the local constraint can be interpreted as the Gauss law of an emergent gauge theory. The second condition introduces quantum fluctuations within this constrained space; if these fluctuations are large enough this can give rise to a ``deconfined'' or topological phase in the ground state \cite{Kitaev03}.
These two ingredients are manifest in Kitaev's famous toric code model \cite{Kitaev_2003}:
\begin{equation} \label{eq-KitaevTC}
H_{\text{TC}} = -K\sum_v \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (1.5,0) -- (-1.5, 0);
\draw[gray] (0,1.5) -- (0,-1.5);
\node at (0.75, 0) {\normalsize $Z$};
\node at (-0.75, 0) {\normalsize $Z$};
\node at (0, 0.75) {\normalsize $Z$};
\node at (0, -0.75) {\normalsize $Z$};
\end{tikzpicture} - J \sum_p
\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (-1, -1) -- (-1, 1) -- (1, 1) -- (1, -1) -- cycle;
\node at (0.0, -1) {\normalsize $X$};
\node at (0.0, 1) {\normalsize $X$};
\node at (-1, 0.0) {\normalsize $X$};
\node at (1, 0.0) {\normalsize $X$};
\end{tikzpicture}
\end{equation}
where qubits are placed at the links of the square lattice and the sum over $v$ and over $p$ indicate a sum over vertices and plaquettes of the square lattice respectively.
To be explicit, the first term in the Hamiltonian enforces a constraint on low-energy spin configurations that:
\begin{equation} \label{eq-TCGaussLaw}
G_v = \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (1.5,0) -- (-1.5, 0);
\draw[gray] (0,1.5) -- (0,-1.5);
\node at (0.75, 0) {\normalsize $Z$};
\node at (-0.75, 0) {\normalsize $Z$};
\node at (0, 0.75) {\normalsize $Z$};
\node at (0, -0.75) {\normalsize $Z$};
\end{tikzpicture} = +1
\end{equation}
This means that low-energy spin configurations satisfy the property that the number of down spins surrounding each vertex must be even.
If we treat our spins as $\mathbb{Z}_2$-valued electric fields $E$
$\left\{\ket{ \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (1, 0);
\node at (0.5, 0) {\normalsize $\downarrow$};
\end{tikzpicture}}= \ket{\frac{}{}\begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[red,line width=0.5mm] (0.0, 0) -- (1, 0);
\end{tikzpicture}} \quad
\ket{ \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (1, 0);
\node at (0.5, 0) {\normalsize $\uparrow$};
\end{tikzpicture}} = \ket{\frac{}{} \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (1, 0);
\end{tikzpicture}}\right\}$, then Eq.~\eqref{eq-TCGaussLaw} defines a local Gauss law constraint $(\nabla \cdot E) = 0 \text{ mod } 2$ and the low-energy manifold of states defines an emergent $\mathbb{Z}_2$ gauge theory consisting of all closed loops of electric fields.
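The size of this constrained manifold is easy to check by brute force. The following sketch (a small illustration of our own, not taken from the references) enumerates all $2^8$ edge configurations of a $2\times2$ periodic square lattice and keeps those satisfying the Gauss law of Eq.~\eqref{eq-TCGaussLaw}:

```python
from itertools import product

# Edges of a 2x2 periodic square lattice (8 edges): each vertex (x, y) owns
# one horizontal edge (index 2*(2x+y)) and one vertical edge (index +1).
# A configuration assigns 0 (no string) or 1 (string) to each edge; the Gauss
# law demands an even number of occupied edges at every vertex.
def vertex_parity(cfg, x, y):
    h = lambda a, b: cfg[2 * (2 * (a % 2) + (b % 2))]      # horizontal edge
    v = lambda a, b: cfg[2 * (2 * (a % 2) + (b % 2)) + 1]  # vertical edge
    return (h(x, y) + h(x - 1, y) + v(x, y) + v(x, y - 1)) % 2

closed = [cfg for cfg in product((0, 1), repeat=8)
          if all(vertex_parity(cfg, x, y) == 0
                 for x in range(2) for y in range(2))]
print(len(closed))   # 32 closed-loop configurations out of 256
```

Only three of the four vertex parities are independent (their product is automatically trivial), so $2^{8-3}=32$ closed-loop configurations survive, an extensive fraction in system size.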
The second term of the Hamiltonian commutes with the first and resonates between states that satisfy the Gauss law.
Consequently, the ground state of this Hamiltonian is an equal weight and equal phase superposition of all closed electric field loops:
\begin{equation}\label{eq-TCWF}
\ket{\psi_\textrm{TC}} = \ket{\frac{}{} \quad } + \ket{\frac{}{} \hspace{-0.5mm} \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[red,line width=0.5mm] (-0.5, -0.5) .. controls (-0.4, -0.7) and (-0.2, -0.9) .. (0.0, -0.5) -- (0.0, -0.5) .. controls (0.1, -0.425) and (0.5, -0.2) .. (0.25, 0.1) -- (0.25, 0.1) .. controls (0, 0.25) and (-0.3, 0.4) .. (-0.5, 0.1) -- (-0.5, 0.1) .. controls (-0.7, -0.2) and (-0.7, -0.4) .. (-0.5, -0.5);
\draw[red,line width=0.5mm]
(0.3, 0.5) ellipse (0.3 and 0.15) -- cycle;
\end{tikzpicture}
} + \ket{\hspace{-2mm}\frac{}{} \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[red,line width=0.5mm] (-0.5, -0.5) .. controls (-1, -0.2) and (-0.1, 0.2) .. (-0.5, 0.5) -- (-0.5, 0.5) .. controls (-0.9, 1) and (0.4, 1) .. (0.5, 0.5) -- (0.5, 0.5) .. controls (0.6, 0) and (0.9, -0.3) .. (0.5, -0.5) -- (0.5, -0.5) .. controls (0.1, -0.7) and (-0.1, -0.3) .. (-0.5, -0.5) -- cycle;
\end{tikzpicture}\hspace{-1mm}} + \cdots
\end{equation}
The excitations above this TC state are so-called anyons. In fact, violations of the first (second) term in Eq.~\eqref{eq-KitaevTC} are called $e$-anyons ($m$-anyons), which are created at the ends of string operators composed of products of Pauli-$X$ ($Z$) operators \cite{Kitaev_2003}.
As such, $X$ or $Z$ fields locally create anyon pairs, such that introducing strong fields gives a way of driving a transition out of the topological phase, which can be interpreted as ``condensing'' either of these anyons into the ground state.
This is captured by the minimal model \cite{FradkinShenker,tupitsyn_topological_2010, Nahum_Z2_2021}:
\begin{equation} \label{eq-KitaevTC-in-F}
H_{\text{TC} + \text{f}} = H_{\text{TC}} - h_x \sum_\ell X_{\ell} - h_z \sum_\ell Z_{\ell}
\end{equation}
where $h_x$ drives $e$-condensation (where loops are broken into open strings) and $h_z$ drives $m$-condensation (where loops are no longer in a massive superposition).
Since $e$-anyons are the electric charges of this emergent $\mathbb Z_2$ gauge theory, one can also refer to the $e$-condensate as the ``Higgs phase''.
Due to the non-trivial braiding between $e$ and $m$, condensing the latter implies that the former is no longer deconfined, such that the $m$-condensate is also called the ``confined phase'' \cite{FradkinShenker}.
Although it is known that these two condensates form a single trivial phase \cite{FradkinShenker}, there can be an unnecessary (first-order) transition between them \cite{tupitsyn_topological_2010}. See Fig.~\ref{fig:intuitivepicture}(a) for two generic scenarios occurring in the parameter regimes that we will be exploring in this work; a detailed analysis of the phase diagram is given in Sec.~\ref{sec-DTC}.
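These statements are simple to verify numerically at $h_x=h_z=0$. The following sketch (our own minimal illustration, using exact diagonalization on a $2\times2$ torus with $K=J=1$) builds the vertex and plaquette operators of Eq.~\eqref{eq-KitaevTC}, checks that they commute, and exhibits the four-fold topological ground state degeneracy:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def pauli_on(edges, op, n=8):
    """Tensor product acting with `op` on the listed edge indices."""
    return reduce(np.kron, [op if q in edges else I2 for q in range(n)])

# Edge labels on a 2x2 periodic square lattice (8 edges):
# h[x, y] = horizontal edge leaving vertex (x, y); v[x, y] = vertical edge.
h = {(x, y): 2 * (2 * x + y) for x in range(2) for y in range(2)}
v = {(x, y): 2 * (2 * x + y) + 1 for x in range(2) for y in range(2)}

stars = [pauli_on({h[x, y], h[(x-1) % 2, y], v[x, y], v[x, (y-1) % 2]}, Z)
         for x in range(2) for y in range(2)]
plaqs = [pauli_on({h[x, y], h[x, (y+1) % 2], v[x, y], v[(x+1) % 2, y]}, X)
         for x in range(2) for y in range(2)]

# Every star commutes with every plaquette (they share 0 or 2 edges).
for A in stars:
    for B in plaqs:
        assert np.allclose(A @ B, B @ A)

H = -sum(stars) - sum(plaqs)          # K = J = 1
evals = np.linalg.eigvalsh(H)
print(evals[:5])   # four levels at -8 (topological degeneracy), then -4
```

Each violated stabilizer costs energy 2, and anyons can only be created in pairs, so the gap above the four-fold degenerate ground space is 4 in these units.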
\subsection{Spin Lakes from Quantum Dynamics}\label{subsec-Slakedynamics}
We now turn to understanding how a QSL-like state can be produced via dynamics even when the ground state of the system does not resemble a QSL.
More precisely, we envision the case where the ground state is still in a constrained Hilbert space, imposed by an energetic Gauss law, but we might not be in the deconfined phase (e.g., the $m$-condensate discussed above).
To be concrete, we will consider the model of Eq.~\eqref{eq-KitaevTC-in-F} without any plaquette resonances ($J = 0$), though the discussion below is quite general.
When $J = 0$, the ground state of the aforementioned model will fail to be a QSL aside from a small portion of the phase diagram (see Sec.~\ref{sec-DTC}).
Nevertheless, we will show in this subsection that short-time quantum dynamics (such as those available in analog NISQ devices) can create a state which has QSL-like signatures over large but inevitably finite patches of the system.
An intuitive picture for the dynamical preparation of a QSL-like state can be understood as follows and is depicted in Fig.~\ref{fig:intuitivepicture}(b).
Envision starting with a finite size system and initializing it in the ground state of the $e$-condensate, i.e., Higgs phase [$h_x \gg K, h_z$; left-most panel of Fig.~\ref{fig:intuitivepicture}(b)].
We then ramp up the value of $K$ (energetically enforcing the Gauss law) slowly enough to be \emph{adiabatic} with respect to the $e$-anyons, such that the density of $e$-anyons goes to zero at the end of the sweep.
At the same time, we will show that it is possible to guarantee that this parameter sweep is much faster than the $m$-anyon energy scale, allowing for a \emph{sudden approximation} in which $m$-anyon dynamics is frozen.
In effect, we equilibrate out the fast $e$-anyons present in the initial state while preventing the nucleation of the relatively slow $m$-anyons.
As such, at the end of the sweep, the state prepared will be characterized by a lack of condensation of any anyons and the final state will be the deconfined phase of the emergent gauge theory.
We can use this effective picture to develop a prediction for the final state of the system following a sweep:
\begin{equation} \label{eq-sweeping-proj}
\ket{\psi(T)} \; \propto \; \mathcal{P}_{G} \ket{\psi(0)}
\end{equation}
where $\ket{\psi(0)}$ is the state at the beginning of the sweep, $\ket{\psi(T)}$ is the state at the end of the sweep, and $\mathcal{P}_{G}$ is the operator that projects out violations of the Gauss law [$\mathcal{P}_G = \prod_v (1 + G_v)/2$ for the toric code example; see Eq.~\eqref{eq-TCGaussLaw}].
To illustrate the above in the simplest possible setting, consider the initial state $\ket{\psi(0)} = \ket{+}^{\otimes N}$, which is the ground state at $h_z=K=0$.
Observe that by expanding this product state in the diagonal ($Z$) basis and using the above visual representation, it is the sum of all \emph{closed and open} string states:
\begin{equation}\label{eq-plus_state_strings}
\ket{\psi(0)} = \ket{\frac{}{} \quad } + \ket{\frac{}{} \hspace{-0.5mm} \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[red,line width=0.5mm] (-0.5, -0.5) .. controls (-0.4, -0.7) and (-0.2, -0.9) .. (0.0, -0.5) -- (0.0, -0.5) .. controls (0.1, -0.425) and (0.5, -0.2) .. (0.25, 0.1) -- (0.25, 0.1);
\draw[red,line width=0.5mm] (-0.1, 0.4) ellipse (0.3 and 0.15);
\end{tikzpicture}
} + \ket{\hspace{-2mm}\frac{}{} \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[red,line width=0.5mm] (-0.5, -0.5) .. controls (-1, -0.2) and (-0.1,
0.2) .. (-0.5, 0.5) -- (-0.5, 0.5) .. controls (-0.9, 1) and (0.4, 1) .. (0.5, 0.5) -- (0.5, 0.5) .. controls (0.6, 0) and (0.9, -0.3) .. (0.5, -0.5) -- (0.5, -0.5) .. controls (0.1, -0.7) and (-0.1, -0.3) .. (-0.5, -0.5) -- cycle;
\end{tikzpicture}\hspace{-1mm}} + \ket{\hspace{-2mm}\frac{}{} \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[red,line width=0.5mm] (-0.5, -0.5) .. controls (-1, -0.2) and (-0.1,
0.2) .. (-0.5, 0.5) -- (-0.5, 0.5) .. controls (-0.9, 1) and (0.4, 1) .. (0.5, 0.5) -- (0.5, 0.5);
\end{tikzpicture}\hspace{-1mm}} + \cdots
\end{equation}
Hence, if we can project out all states containing open strings, we obtain the topological state in Eq.~\eqref{eq-TCWF}, i.e., $\ket{\psi_\textrm{TC}} \propto \mathcal P_G \ket{\psi(0)}$.
We claim that this projection can (approximately) be achieved by the aforementioned non-equilibrium parameter sweep, where we attempt to be adiabatic with respect to $e$-anyons (which gradually enforces $G_v=1$) and sudden with respect to $m$-anyons (i.e., keeping the coefficients in Eq.~\eqref{eq-TCWF} approximately constant).
In fact, in the fine-tuned case of $h_z=0$, quantum numbers prevent any $m$-anyon dynamics, but we will explore the more interesting and generic\footnote{While this example might suggest that our mechanism requires a nearby flux(plaquette)-conserving model, this is not the case; our ruby lattice example in Sec.~\ref{sec-Experiment} will illustrate this.} case of $h_z\neq 0$.
However, in the thermodynamic limit, we will argue that it will not be possible\footnote{Here, we consider the generic case, i.e., $h_z\neq 0$, such that the plaquette resonance is not a conserved quantity.} to sweep the Hamiltonian at a rate that is both slow relative to the energy scale of the initially condensed defects (such as the $e$-anyons above) and fast relative to energy scale within the constrained space (such as the $m$-anyons above). In those cases, the final state will not be a perfect QSL.
Nevertheless, we will argue that correlations in the final state will be similar to those found in a QSL in any large patch of the system [right-most panel of Fig.~\ref{fig:intuitivepicture}(b)].
As a consequence, it will be appropriate to brand the final state as either a \textit{quantum spin puddle} or a \textit{quantum spin lake} (depending on one's philosophical bent).
Since the authors are glass-half-full, we will henceforth refer to such states somewhat optimistically as quantum spin lakes which, though not thermodynamic QSLs, are states that enable studying QSL physics in finite-size quantum simulation experiments available in the NISQ era.
More generally, the effective picture presented above provides a route to applying projection operators on quantum states by using non-equilibrium unitary dynamics!
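The closed-loop structure underlying the projection $\mathcal P_G \ket{+}^{\otimes N}$ can be checked combinatorially on the smallest nontrivial geometry. The sketch below (ours; a $2\times 2$ torus with an ad hoc edge-labeling convention, not taken from the text) enumerates the $Z$-basis configurations that survive the projector, i.e., the closed-string terms of Eq.~\eqref{eq-plus_state_strings}:

```python
import itertools

# 2x2 toric code on a torus: 8 edges. Our labeling (an illustrative choice):
# kind 0 = horizontal edge leaving vertex (i, j) rightwards,
# kind 1 = vertical edge leaving vertex (i, j) downwards.
def edge(kind, i, j):
    return kind * 4 + (i % 2) * 2 + (j % 2)

def star(i, j):
    """The four edges meeting at vertex (i, j); G_v is the product of Z on these."""
    return [edge(0, i, j), edge(0, i, j - 1), edge(1, i, j), edge(1, i - 1, j)]

def plaquette(i, j):
    """The four edges bounding the face with upper-left corner (i, j)."""
    return [edge(0, i, j), edge(0, i + 1, j), edge(1, i, j), edge(1, i, j + 1)]

# In the Z basis, |+>^N is the equal-weight sum over all 2^8 edge configurations;
# P_G keeps those with an even number of flipped edges at every vertex (G_v = +1),
# i.e. exactly the closed-loop configurations.
def is_closed(bits):
    return all(sum(bits[e] for e in star(i, j)) % 2 == 0
               for i in range(2) for j in range(2))

closed = {b for b in itertools.product((0, 1), repeat=8) if is_closed(b)}

def flip(bits, edges):
    """Apply a plaquette resonance (product of X) to a Z-basis configuration."""
    return tuple(b ^ 1 if e in edges else b for e, b in enumerate(bits))
```

One finds $32 = 2^{E - V + 1}$ closed configurations, and the set is invariant under flipping the four edges of any plaquette; the equal-weight state $\mathcal P_G \ket{+}^{\otimes N}$ is therefore a $+1$ eigenstate of every plaquette resonance, i.e., the toric code wavefunction.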
\subsection{Outline of the Paper}\label{subsec-Outline}
The remainder of this work will be focused on fleshing out the above intuitive idea, providing numerical confirmation, identifying its limitations, building a bridge to existing experimental data, and finally providing generalizations.
First, Section~\ref{sec-Qutrit} makes the above picture more precise in the simplest possible context: a single qutrit model that has the essential ingredients of the setup above.
Subsequently, in Section~\ref{sec-DTC} we will provide numerical support for this picture by performing large-scale matrix product state numerics on Eq.~\eqref{eq-KitaevTC-in-F} without explicit plaquette resonances ($J = 0$).
Equipped with the numerical evidence backing the intuitive picture presented above, we then turn to considering the validity of this picture for thermodynamically large systems in Section~\ref{sec-Summary} where we will make precise the notion of a quantum spin lake.
We use this notion to make comments on the relevance of these ideas for explaining the recent Rydberg atom experiment (Section~\ref{sec-Experiment}).
Driving the above intuition to its logical conclusion suggests that dynamically preparing QSL-like states works best in models with vanishing $m$-anyon dynamics, which we exemplify by simulating a model on a tree lattice in Sec.~\ref{sec-Tree}.
Remarkably, we find that such tree numerics can even be used as a tool to accurately describe experimental data within the timescales used to prepare the quantum spin lake.
Although the bulk of the paper focuses on the preparation of $\mathbb{Z}_2$ spin lakes, we conclude by highlighting the generality of the mechanism by demonstrating the preparation of a $U(1)$ spin lake (Section~\ref{sec-U1}).
\section{Single Qutrit Toy Model for Dynamical QSL Preparation}\label{sec-Qutrit}
\begin{figure*}
\centering
\includegraphics[width = 485pt]{Figure_2_v4.pdf}
\caption{\textbf{Level Structure and Dynamics of the Single Qutrit Model.} Our dynamical protocol for implementing the projection operator by combining adiabatic and sudden approximations can already be illustrated in a three-state toy model. (a) Representative level structure of the single qutrit model as a function of $K/h_x$ (shown for parameters $h_z = 1/3$ and $\Delta = 0$). Here, the ground state is depicted in black, the first excited state in blue, and the second excited state in orange. Note that $\Delta$ controls the gap between the ground state and the orange level and $h_z$ controls the gap between the ground state and the blue level when $K > 0$.
In panels (b, c), we start with an initial ground state $\ket{\psi(0)}$ at $K/h_x = -20$ (Eq.~\eqref{eq-qutritpsi0}) and dynamically sweep to $K/h_x = 20$ over a total time $T$: we plot the overlap between the final state $\ket{\psi(T)}$ and the initial state with constraint violations projected out, $\mathcal{P}_G\ket{\psi(0)}$ (Eq.~\eqref{eq-qutritpsiproj}).
%
We find an intermediate regime $[T_e, T_m]$ where this overlap is maximized, which qualitatively corresponds to sweep rates which are \emph{below} the energy scale set by the orange curve (in (a)) but \emph{above} the splitting $\sim h_z$ of the lowest two states for $K>0$; we are thus approximately adiabatic (sudden) with respect to the former (latter) branch.
%
In (b), by plotting this projected overlap as a function of $T$ for $\Delta = 0, 7, 30$ (corresponding to the lightest to darkest orange lines) and fixed $h_z = 1/3$, we find that we can move $T_e$ to shorter times.
%
In (c), by plotting this overlap as a function of $T$ for $\Delta = 7$ and $h_z = 1/15, 1/3, 2/3$ (lightest to darkest lines), we find that we can tune $T_m$ to shorter times as well.}
\label{fig-SingleQutrit}
\end{figure*}
In Section~\ref{subsec-Slakedynamics}, we discussed an effective picture for how a QSL-like state is created during a dynamical sweep.
This picture suggested a natural but striking prediction for the final state of the dynamics (Eq.~\eqref{eq-sweeping-proj}): the final state is the initial state of the sweep but with Gauss law violations projected out.
Here, we show how this picture emerges and confirm this prediction in a truly minimal setting: a single qutrit model that mimics the setup of the last section.
In particular, consider the following Hamiltonian for a single qutrit:
\begin{equation} \label{eq-qutritHam}
H_{\text{qutrit}} = -K \mathcal{Z}^2 - h_x \mathcal{X} - h_z\mathcal{Z}
\end{equation}
where $\mathcal{X}, \mathcal{Z}$ are spin-one Pauli matrices:
\begin{equation}
\mathcal{Z} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix},\quad \mathcal{X} = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.
\end{equation}
Such a Hamiltonian is a nice $(0 + 1)$D analogue of the Hamiltonian of Eq.~\eqref{eq-KitaevTC-in-F} with $J = 0$.
In particular, the feature we would like to focus on is that for large positive $K$, we have a ``constrained low-energy Hilbert space'' where $\mathcal Z^2 = 1$, i.e., $\{ \ket{1}, \ket{-1}\}$, where we define this basis such that $\mathcal{Z} \ket{\alpha} = \alpha \ket{\alpha}$, $\alpha = -1, 0, 1$.
\subsection{Projection and Superpositions via Dynamics}
We will start at large negative $K$, where the ground state is nearly classical:
\begin{equation}
\ket{\psi(0)} \approx \ket{0} + \varepsilon \left( \ket{1} + \ket{-1} \right) \label{eq-qutritpsi0}
\end{equation}
provided that $|h_z|\ll |h_x| \ll |K|$, such that $\varepsilon \sim \frac{|h_x|}{|K|} \ll 1$.
Our claim is that one can use a non-equilibrium sweep towards large positive $K$ to effectively project our initial state into the constrained space defined by $\mathcal{Z}^2 = +1$, i.e., we obtain
\begin{equation}
\mathcal{P}_G\ket{\psi(0)} \propto \ket{1} + \ket{-1} . \label{eq-qutritpsiproj}
\end{equation}
Note that this superposition of constrained states is in stark contrast to what would be the \emph{ground state} in this parameter regime: for any $h_z > 0$ and positive $K \gg h_x$, the ground state is approximately $\ket{1}$.
To justify this claim, it is useful to examine the spectrum of this three-level model as a function of $K/h_x$ at fixed but small $h_z$ which is shown in Fig.~\ref{fig-SingleQutrit}(a).
For large values of $K$, we see two low-energy states (the black and blue lines) corresponding to the ``constrained'' space spanned by $\ket{1}$ and $\ket{-1}$. Well-separated above this, we see $\ket{0}$ (orange line).
As such, starting with the state in Eq.~\eqref{eq-qutritpsi0} and sweeping from negative to positive $K$, we should throughout remain adiabatic with respect to the third (orange) curve; as a result, the final wavefunction will be in the constrained subspace.
If at the same time we remain faster than the splitting of the blue and black lines in this constrained space, we can use the sudden approximation indicating that the portion of the initial wavefunction \eqref{eq-qutritpsi0} that was within this space does not time-evolve, achieving the projection in Eq.~\eqref{eq-qutritpsiproj}.
The above discussion highlighted the two necessary ingredients for dynamics to produce the desired projected state: the sweep rate should be slow relative to the orange curve and fast relative to the blue curve (for $K \geq 0$).
The validity and applicability of these conditions are in principle set by the following two parameters.
First, $h_z$ determines the splitting between the two constrained states [as indicated in Fig.~\ref{fig-SingleQutrit}(a)].
Second, by pushing up the third level by an amount $\Delta$\footnote{We can do this by making our Hamiltonian time-dependent, adding a term $\Delta \cdot \mathcal{P}_3(t)$ which pushes the highest level up in energy at each instance by $\Delta$.}, we can tune the gap at the ``transition'' into the constrained space.
Hence, we expect the projection in Eq.~\eqref{eq-qutritpsiproj} to become a better approximation for the non-equilibrium time-evolution when $h_z$ is small and $\Delta$ is large.
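These conditions can be made semi-quantitative with a back-of-the-envelope Landau--Zener estimate (ours, for a linear ramp with $\Delta = 0$): the avoided crossing between the ground and orange levels has gap $2 h_x$ [since $\bra{0} \mathcal{X} \left( \ket{1} + \ket{-1} \right)/\sqrt{2} = 1$], so a ramp covering a window $W$ in $K$ over a time $T$ excites the orange level with probability $\sim e^{-2\pi h_x^2 T / W}$, while the relative phase accumulated within the constrained doublet is $\sim h_z T$. This suggests
\begin{equation}
T_e \sim \frac{W}{2\pi h_x^2}, \qquad T_m \sim \frac{1}{h_z},
\end{equation}
so the window $[T_e, T_m]$ is parametrically wide whenever $h_z W \ll h_x^2$; increasing $\Delta$ enlarges the minimal gap and shrinks $T_e$ further.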
We now test and confirm these expectations quantitatively.
\subsection{Numerical Confirmation and Timescales}
We numerically confirm the expectations above by exactly simulating the dynamics of the qutrit.
In particular, we initialize the qutrit in its ground state at large negative $K = -20$ with $h_x = 1$ and $h_z$ fixed at three representative values, and then linearly increase $K$ to $K = 20$ over a total time $T$.
We subsequently plot the overlap of the normalized~\footnote{Throughout this work, we will use $\tilde {\mathcal P}$ to denote a projector followed by normalization.} projected state defined in Eq.~\eqref{eq-qutritpsiproj} with the final state of the sweep $\ket{\psi(T)}$ as a function of $T$ [see Figs.~\ref{fig-SingleQutrit}(b, c)].
We find, for any fixed value of $\Delta$ and $h_z$ [any of the curves in Fig.~\ref{fig-SingleQutrit}(b, c)], that the overlap with the projected state displays three distinct regimes as a function of total time $T$, demarcated by two time scales which we will call $T_{e}$ and $T_{m}$ ($T_e < T_m$).
Here, $1/T_e$ is the rate below which our ground state energy level is adiabatic with respect to the orange level throughout the evolution and $1/T_m$ is the rate above which the ground state level is sudden with respect to the blue level around $K = 0$. Within $T_e < T < T_m$, we find that the projected state \eqref{eq-qutritpsiproj} is a good approximation to the result of the non-equilibrium sweep.
As stated in the previous subsection, by increasing the value of $\Delta$ and fixing the value of $h_z$ in Fig.~\ref{fig-SingleQutrit}(b), we find that the time scale $T_m$ remains fixed and $T_e$ shifts to smaller times, because it is possible to be adiabatic relative to the orange level while sweeping faster when $\Delta$ is large.
Additionally, as $\Delta$ is increased, the approximation that the final state tends to the projected state becomes more exact as predicted.
If instead we fix the value of $\Delta$ and increase the value of $h_z$ [Fig.~\ref{fig-SingleQutrit}(c)], we find that $T_e$ remains fixed and $T_m$ shifts to smaller times.
This is because one needs to sweep faster in order to be sudden relative to the splitting between the ground state and blue level which confirms our expectations.
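The sweep protocol above is simple enough to reproduce with a few lines of exact numerics. The following sketch (ours; parameter values chosen for illustration, $h_x = 1$, $h_z = 1/15$, $\Delta = 7$, rather than taken from the figure) integrates the Schr\"odinger equation for Eq.~\eqref{eq-qutritHam} under a linear ramp of $K$ with the $\Delta \cdot \mathcal P_3(t)$ deformation, and evaluates the overlap with the projected state of Eq.~\eqref{eq-qutritpsiproj} for a short, an intermediate, and a long sweep:

```python
import numpy as np

# Spin-1 matrices in the (|1>, |0>, |-1>) basis of the qutrit Hamiltonian
Z = np.diag([1.0, 0.0, -1.0])
X = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)

def sweep(T, hx=1.0, hz=1 / 15, delta=7.0, K0=-20.0, K1=20.0, dt=0.01):
    """Ramp K linearly from K0 to K1 over time T, starting from the ground
    state at K0; delta * P3(t) pushes the instantaneous top level up."""
    n = max(int(round(T / dt)), 1)
    dt = T / n
    _, v = np.linalg.eigh(-K0 * Z @ Z - hx * X - hz * Z)
    psi = v[:, 0].astype(complex)                   # ground state at K = K0
    for i in range(n):
        K = K0 + (K1 - K0) * (i + 0.5) / n
        H = -K * Z @ Z - hx * X - hz * Z
        w, v = np.linalg.eigh(H)
        H = H + delta * np.outer(v[:, 2], v[:, 2])  # the Delta * P3(t) term
        w, v = np.linalg.eigh(H)                    # exact one-step propagator
        psi = (v * np.exp(-1j * w * dt)) @ (v.conj().T @ psi)
    return psi

# Projected target state, Eq. (eq-qutritpsiproj): (|1> + |-1>)/sqrt(2)
proj = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
overlaps = {T: abs(np.vdot(proj, sweep(T))) ** 2 for T in (0.05, 3.0, 300.0)}
```

For these parameters one recovers the three regimes of Fig.~\ref{fig-SingleQutrit}: the short sweep ($T \ll T_e$) retains almost no overlap with the projected state, the long sweep ($T \gg T_m$) relaxes toward the instantaneous ground state, and only the intermediate sweep lands close to $\mathcal P_G \ket{\psi(0)}$.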
\subsection{Analogy with Toric Code}
We can reframe the results for the single qutrit model in a language that is closer to the one used to discuss the toric code.
A full dictionary between the two is enumerated in Table~\ref{table:single-qutrit}.
Notably, the constraint $\mathcal{Z}^2 = +1$ which holds when $K$ is large and positive can be reinterpreted as a ``Gauss law'' similar to the Gauss law of the toric code [Eq.~\eqref{eq-TCGaussLaw}].
Then, the orange level for $K > 0$ can be thought of as the ``$e$-anyon'' as it represents a violation of the Gauss law $\mathcal{Z}^2 = + 1$.
As such, when $K$ is large and negative, we can interpret the ground state as though this $e$-anyon has ``condensed'' (gained an expectation value in the ground state) corresponding to the Higgs phase of the toric code.
Similarly, the splitting between the ground state and the blue level can be thought of as the energy scale associated with the ``$m$-anyon'' as it respects the Gauss law.
At any finite $h_z$, the ground state can be interpreted as being the analogue of the toric code's ``confined phase.''
In this language, we can reinterpret the results of the dynamical sweep.
Namely, we prepare the projected state [which is equivalent to the deconfined phase due to being a superposition of constrained states (see Table~\ref{table:single-qutrit})] when we remain adiabatic relative to the energy level connected to the qutrit's $e$-anyon and sudden relative to the energy scale associated with the qutrit's $m$-anyon.
In being adiabatic relative to the $e$-anyon, it is equilibrated out as we exit the Higgs phase.
Moreover, in being sudden relative to the $m$-anyon, it fails to nucleate as we enter the confined phase.
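The anyon entries of this dictionary can be verified directly (a trivial but reassuring check; the states and operators are those of Table~\ref{table:single-qutrit}): acting with $\mathcal X$ on $\ket{\Omega}$ lands exactly on the Gauss-law-violating state $\ket{0}$, while $\mathcal Z \ket{\Omega}$ remains inside the constrained space.

```python
import numpy as np

# Spin-1 matrices and the qutrit "QSL" state of the single-qutrit dictionary
Z = np.diag([1.0, 0.0, -1.0])
X = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
omega = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)   # |Omega> = (|1> + |-1>)/sqrt(2)

e_state = X @ omega   # "e-anyon": exactly |0>, which violates the Gauss law Z^2 = 1
m_state = Z @ omega   # "m-anyon": (|1> - |-1>)/sqrt(2), still obeys Z^2 = 1
```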
Having tested and verified our intuition in this toy model, we study the analogous effect in a truly many-body system, namely the toric code model.
\begin{table}
\begin{center}
\begin{tabular}{||c c||}
\hline
Toric Code & Single Qutrit\\ [0.5ex]
\hline\hline
Gauss Law & $G = \mathcal{Z}^2 = +1$ \\
\hline
$h_x X_{\ell}, h_z Z_{\ell}$ & $h_x \mathcal{X}, h_z \mathcal{Z}$ \\
\hline
Higgs & $\ket{0}$ \\
\hline
QSL & $\ket{\Omega} = \frac{1}{\sqrt{2}} (\ket{+1} + \ket{-1})$ \\
\hline
Confined & $\ket{1}, \ket{-1}$ \\
\hline
$e$-anyon & $e^{\dagger} \ket{\Omega} = \mathcal{X} \ket{\Omega}$ \\
\hline
$m$-anyon & $m^{\dagger} \ket{\Omega} = \mathcal{Z} \ket{\Omega}$ \\
\hline
\end{tabular}
\caption{\textbf{Conceptual dictionary between Toric Code and Single Qutrit Model.}\label{table:single-qutrit} }
\end{center}
\end{table}
\section{Deformed Toric Code Model} \label{sec-DTC}
We now investigate how the effective picture of Section~\ref{sec-Key} and the conjecture of Eq.~\eqref{eq-sweeping-proj} appear in a many-body context.
In particular, let us consider the model of Eq.~\eqref{eq-KitaevTC-in-F} without explicit plaquette resonances ($J = 0$):
\begin{equation} \label{eq-TCf_2}
H_{\text{TC + f}} = -K\sum_v \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (1.5,0) -- (-1.5, 0);
\draw[gray] (0,1.5) -- (0,-1.5);
\node at (0.75, 0) {\normalsize $Z$};
\node at (-0.75, 0) {\normalsize $Z$};
\node at (0, 0.75) {\normalsize $Z$};
\node at (0, -0.75) {\normalsize $Z$};
\end{tikzpicture} - h_x \sum_{\ell} X_{\ell} - h_z \sum_{\ell} Z_{\ell}
\end{equation}
Before exploring the dynamical preparation of QSL-like states in this model, in Subsection~\ref{subsec-TCGS}, we first consider its ground state physics, where we will find a thin sliver of topological order.
Subsequently, in Subsection~\ref{subsec-TCSpinLakes}, we test the prediction of Eq.~\eqref{eq-sweeping-proj} by calculating the local overlap of the time-evolved state and the projected state.
This indeed suggests a spin liquid-like state which is vastly more extended in the phase diagram compared to the ground state physics.
Here we focus on detecting these spin liquid-like properties in finite regions; we postpone the discussion of scaling and the thermodynamic limit to Section~\ref{sec-Summary}.
\subsection{Ground State Phase Diagram} \label{subsec-TCGS}
Using the density matrix renormalization group (DMRG) \cite{White92, White93, Stoudenmire12, Hauschild18} on an infinite cylinder of circumference $L_y = 4$ (the qualitative properties of interest do not sensitively depend on this choice; see Appendix~\ref{app-Ly5}), we map out the ground state phase diagram of the model of Eq.~\eqref{eq-TCf_2} in Fig.~\ref{fig-dTC}(a), identifying the three distinct phases which we schematically discussed in Section~\ref{sec-Key}. (See Appendix~\ref{app-Ly5} for $L_y=5$ and further numerical details.)
The first phase we observe is the $e$-condensed (or Higgs) phase which occurs when $K, h_z \ll h_x$, its fixed-point limit being the product state in the $X$-basis, $\ket{+}^{\otimes N}$.
The other two phases---the toric code (TC) phase and the confined phase---can be understood as follows.
When $K \gg h_x, h_z$, the ground state manifold will be nearly degenerate, consisting of all states that satisfy the Gauss law $G_v=1$ [see Eq.~\eqref{eq-TCGaussLaw}].
The effect of $h_z$ and $h_x$ can then be treated perturbatively.
Namely, using degenerate perturbation theory in $h_z$ and $h_x$, the effective Hamiltonian governing these states will contain the plaquette term of Eq.~\eqref{eq-KitaevTC} with $J_{\text{eff}} \sim \mathcal{O}(h_x^4/K^3)$.
As a consequence, when $h_z \ll J_{\text{eff}}$, the model will be in the $\mathbb{Z}_2$ QSL phase of the toric code model of Eq.~\eqref{eq-KitaevTC}.
However, when $h_z \gg J_{\text{eff}}$, $m$-anyons in the system will condense corresponding to the confined phase, whose fixed-point limit is $\ket{\uparrow}^{\otimes N}$ as $h_z \to +\infty$.
As a final remark, we note that throughout the phase diagram of Fig.~\ref{fig-dTC}, the energy scale associated with $m$-anyon excitations is set by $h_z$ and potentially a plaquette term generated at fourth order in perturbation theory, both of which are small relative to $h_x$ and $K$ which set the $e$-anyon dynamics.
These small energy scales naturally signal that the dynamics of $m$-anyons will be slow relative to the $e$-anyons.
\begin{figure}
\centering
\includegraphics[width = 247pt]{Figure_3_v8.pdf}
\caption{\textbf{Creation of a Quantum Spin Lake in a Deformed Toric Code Model.} We consider a version of the toric code model in a field \emph{without} plaquette resonances: Eq.~\eqref{eq-TCf_2}. (a) Ground state phase diagram as a function of $h_z/h_x$ and $K/h_x$. There is only a tiny sliver of Toric Code (TC) topological order.
%
%
(b) In contrast, projecting a product state (i.e., ground state at $K=0$ for different $h_z/h_x$) into the Gauss-law preserving states $G_v=1$ yields a robust spin liquid phase (see Sec.~\ref{subsec-TCproof} for an analytic derivation).
%
In panels (c, d, e), we study the dynamical sweeps along the three colored arrows in panel (a) corresponding to $h_z=0.01,0.05,0.1$; we numerically confirm that non-equilibrium dynamics can effectively implement the aforementioned projection.
%
(c) By increasing the total time of the sweep, $e$-anyons are pushed out of the final state as detected by the expectation value of the Gauss Law $\langle G_v \rangle \approx 1$ [defined in Eq.~\eqref{eq-TCGaussLaw}]. This (quasi-)adiabatic approximation for $e$-anyons holds for all three sweeps as evidenced by the overlapping curves.
%
(d) Having established that we dynamically sweep into the constrained subspace, we test whether the resulting state $\ket{\psi(T)}$ has a large overlap with the corresponding projected state (the latter is in a topological phase as shown in (b)).
%
We observe a window $T_e < T <T_m$ where the overlap density between the final state of the sweep and the projected state is large; here we are approximately adiabatic (sudden) w.r.t. $e$-($m$-)anyons.
%
(e) We make explicit the presence of three dynamical regimes by plotting the return probability for an echo experiment that sweeps back and forth through the transition.
}
\label{fig-dTC}
\end{figure}
\subsection{Quantum Dynamics and Spin Lakes} \label{subsec-TCSpinLakes}
Given the equilibrium phase diagram, let us now contrast it with the state prepared via a dynamical sweep simulated using MPO methods \cite{Zaletel15}.
Due to the separation in energy scales between the $e$- and $m$-anyons, we anticipate that we will be able to prepare a QSL-like state [or quantum spin lakes (see Section~\ref{sec-Summary} for more details)], extending well beyond the spin liquids in the ground state phase diagram.
In particular, the discussion in Section~\ref{sec-Key} suggests that sweep rates which are slow with respect to $e$ and fast with respect to $m$ should approximately project out Gauss law violating states from the initial state [see Eq.~\eqref{eq-sweeping-proj}].
If we take initial states at $K=0$ (where the ground state is a product state), then projecting these into $G_v=1$ gives the phase diagram in Fig.~\ref{fig-dTC}(b) (which will be derived in the next subsection).
Crucially, we see that the topological (i.e., deconfined) phase extends over a broad range of parameter space, up to $h_z / h_x \lessapprox 0.46$. This is in contrast to the tiny sliver of toric code phase found in the ground state [Fig.~\ref{fig-dTC}(a)].
We now numerically test the prediction that appropriate sweeping rates can approximately prepare this projected wavefunction through dynamics.
We initialize the system in the product state ground state at $K = 0$ and small values of $h_z$ (in the `Higgs phase' of Fig.~\ref{fig-dTC}(a)).
Subsequently, we ramp $K$ linearly at a rate $1/T$ and investigate the nature of the final state.
By simulating Eq.~\eqref{eq-TCf_2} on an infinite cylinder using matrix product state techniques, we are able to investigate properties of the final state numerically as a function of the total time $T$.
First, we verify that as we increase the total time $T$ (thereby decreasing the sweeping rate), there is a time-scale $T_e$ above which our dynamics are nearly in equilibrium relative to $e$-anyons.
In particular, above $T_e$, we expect that the density of $e$-anyons will be nearly zero, similar to the ground state at large positive $K$ (though, for any finite $T$, the prepared state will have a non-zero density of $e$-anyon defects; see Sec.~\ref{sec-Summary} for a discussion of finite sizes and scaling).
To verify this, in Fig.~\ref{fig-dTC}(c), we plot the expectation value of the Gauss law operator [Eq.~\eqref{eq-TCGaussLaw}] $\langle G_v \rangle$ as a function of the total time of the sweep and three values of $h_z$.
We find that above a characteristic value of $T_e \sim 0.5 h_x^{-1}$, the value of $\langle G_v \rangle$ rapidly increases and saturates to a near maximal value (consistent with the equilibrium value) independent of the value of $h_z$.
As $h_z$ controls the energetics of the $m$-anyon, this is to be expected.
Next, we confirm that beyond $T_e$, we enter a regime where the dynamics is simultaneously fast relative to $m$-anyons and slow relative to $e$-anyons.
Here, we expect that the final state will have a high overlap density with the normalized projected state $\tilde{\mathcal{P}}_G \ket{\psi(0)}$, which is a spin liquid for the parameters chosen [see Fig.~\ref{fig-dTC}(b) for the phase diagram of the projected state, derived in the next subsection].
By plotting the overlap density per site between $\ket{\psi(T)}$ and $\tilde{\mathcal{P}}_G \ket{\psi(0)}$ in Fig.~\ref{fig-dTC}(d) (where the tilde simply denotes that we have normalized the state), we find that there is a window $[T_e, T_m]$ where indeed this occurs, in agreement with the prediction of Eq.~\eqref{eq-sweeping-proj}.
Furthermore, we find that as we increase $h_z$, the coupling responsible for nucleating $m$-anyons, $T_m$ decreases and hence the window shrinks.
This is consistent with our expectations that, as we increase $h_z$, the time-scale on which $m$-anyons are nucleated decreases, and hence our dynamics can be slow relative to both $e$ and $m$ (i.e., quasi-adiabatic) at faster rates (shorter total times).
We confirm in Appendix~\ref{app-GSrecovery} that indeed, beyond $T_m$, the system recovers the ground state.
This provides strong numerical evidence for our effective picture wherein $e$-anyons are in equilibrium and $m$-anyons are frozen for intermediate time sweeps.
We can independently verify the existence of the two time-scales $T_e$ and $T_m$, as well as the intermediate regime, via the following numerical ``echo'' experiment.
Namely, we consider sweeping $K$ linearly from $K = 0$ to a maximal value of $K$ for a total time $T$ and subsequently sweeping $K$ linearly backwards back to zero for the same amount of time.
Generically, the state that one recovers will not be the initial state of the sweep.
Nevertheless, we expect that when the dynamics is purely adiabatic or purely sudden, the initial state will be recovered.
Moreover, since our proposed mechanism involves the dynamics relative to $e$ being quasi-adiabatic and the dynamics relative to $m$ being quasi-sudden, a non-trivial prediction of our effective picture is that, in the intermediate regime, the initial state will also be recovered.
We numerically simulate this experiment and plot the overlap per site of the final state $\ket{\psi(2T)}$ with the initial state in Fig.~\ref{fig-dTC}(e).
Let us first observe that when we are fully out-of-equilibrium relative to $e$ and $m$ ($T \ll T_e$) or nearly in-equilibrium relative to both ($T \gg T_m$), we indeed find that this overlap is near maximal.
More interestingly, when we are deep within the regime $T_e \ll T \ll T_m$, we also see a very large revival of the initial state, consistent with certain degrees of freedom being quasi-adiabatic and others being frozen.
\begin{figure}
\centering
\includegraphics[width = 247pt]{FM_Plot_v3.pdf}
\caption{\textbf{Fredenhagen-Marcu Order Parameter for Final State of Dynamics.} We consider the same dynamical sweeps in the deformed toric code model as introduced in Fig.~\ref{fig-dTC}. (a) Here, we report the maximum value of the $\langle X \rangle_{\text{FM}}$ order parameter (for three different string lengths) obtained during a dynamical sweep that takes place over a total time $T$. The particular sweep we take linearly ramps the value of $K/h_x$ from $0$ to $4$ at $h_z/h_x = 0.1$ (this corresponds to the red arrow in Fig.~\ref{fig-dTC}). We find that during the window $[T_e, T_m]$, its value flows downwards with increasing length. (b) In this panel, we plot the value of $\langle Z \rangle_{\text{FM}}$ versus total time $T$ at the same time in the sweep where $\langle X \rangle_{\text{FM}}$ was maximized. Here, for total times $T \in [T_e, T_m]$, the value of $\langle Z \rangle_{\text{FM}}$ is nearly zero and is confirmed to flow downwards. The decreasing flow of these two order parameters during the window $[T_e, T_m]$ cements the fact that in this window, the system exhibits QSL-like properties, consistent with the large overlap between the time-evolved state and the topologically ordered projected wavefunction in Fig.~\ref{fig-dTC}(d). For the numerics performed, we used a bond dimension of $\chi = 256$ and a Trotter step size of $dt = 0.0025$ (with convergence in both parameters verified). Moreover, see Appendix~\ref{app-FM} for FM order parameters in equilibrium. }
\label{fig-FMdTC}
\end{figure}
Until now, we have only verified that states produced through dynamics in the intermediate regime $[T_e, T_m]$ look like QSLs by utilizing the overlap density with the projected state (which is provably a QSL; see next subsection).
We conclude this subsection by independently verifying this through the use of the so-called Fredenhagen-Marcu (FM) order parameter~\cite{Fredenhagen83,Fredenhagen86, Marcu86,Gregor11,Chandran_2013,Verresen21}.
To define the FM order parameter, we first introduce the following string operators:
\begin{equation}
\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1} {
\draw[gray] (\i, -0.5) -- (\i, 1.5);
\draw[gray] (-0.5, \i) -- (1.5, \i);
}
\draw[orange(ryb), dashed, line width = 0.3 mm] (-0.5, 0.5) -- (1.5, 0.5);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1} {
\draw[gray] (\i, -0.5) -- (\i, 1.5);
\draw[gray] (-0.5, \i) -- (1.5, \i);
}
\node at (0, 0.5) {\normalsize $Z$};
\node at (1, 0.5) {\normalsize $Z$};
\end{tikzpicture}
\quad \quad
\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1} {
\draw[gray] (\i, -0.5) -- (\i, 1.5);
\draw[gray] (-0.5, \i) -- (1.5, \i);
}
\draw[dodgerblue, decorate, decoration={snake, segment length=1.2mm, amplitude =.4mm}, line width = .25mm, opacity = 1] (-0.5, 0) -- (1.5, 0);
\end{tikzpicture}
\ =\
\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1} {
\draw[gray] (\i, -0.5) -- (\i, 1.5);
\draw[gray] (-0.5, \i) -- (1.5, \i);
}
\node at (-0.5, 0) {\normalsize $X$};
\node at (0.5, 0) {\normalsize $X$};
\node at (1.5, 0) {\normalsize $X$};
\end{tikzpicture}
\end{equation}
the first (second) of which is called the 't Hooft (Wilson) line operator and creates $m$($e$)-anyons at its endpoints.
String operators in hand, the FM order parameter is defined as:
\begin{align} \label{eq-FMOP}
\langle Z \rangle_{\text{FM}} = \frac{\left \langle\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[orange(ryb), dashed, line width = 0.4 mm] (2,0) arc(0:-180:1);
\node at (0, 0) {\normalsize $m$};
\node at (2, 0) {\normalsize $m$};
\end{tikzpicture} \right \rangle}{\sqrt{\left \langle\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[orange(ryb), dashed, line width = 0.4 mm] (0,0) circle (1);
\end{tikzpicture} \right \rangle}\ } \quad \
\langle X \rangle_{\text{FM}} = \frac{\left \langle\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[dodgerblue, decorate, decoration={snake, segment length=1.2mm, amplitude =.4mm}, line width = .25mm, opacity = 1] (2,0) arc(0:-180:1);
\node at (0, 0) {\normalsize $e$};
\node at (2, 0) {\normalsize $e$};
\end{tikzpicture} \right \rangle}{\sqrt{\left \langle\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[dodgerblue, decorate, decoration={snake, segment length=1.2mm, amplitude =.4mm}, line width = .25mm, opacity = 1] (0,0) circle (1);
\end{tikzpicture} \right \rangle}\ }.
\end{align}
where the string in the denominator is twice the length of the string in the numerator, and we have schematically drawn the $e$ and $m$ anyon excitations at the endpoints of the strings.
Broadly speaking, the FM order parameter detects the lack of condensation of anyons in a topologically ordered phase.
In particular, the numerator is similar to a two-point function for either the $e$ or $m$-anyon, with the endpoints connected by a string.
Since any string operator will generically have some line tension causing it to decay exponentially regardless of whether the anyons are condensed or not, the denominator is chosen to cancel the contribution of this line tension.
The expectation is then that in a topologically ordered phase, the values of both $\langle Z \rangle_{\text{FM}}$ and $\langle X \rangle_{\text{FM}}$ will go to zero with increased string length.
Meanwhile, in either the Higgs or confined phase, at least one of them will tend to a non-zero value.
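To make the cancellation of the line tension concrete, the following is a minimal numerical sketch (not the actual lattice computation): we model the open-string expectation value as a perimeter-law decay times a per-endpoint condensate amplitude $c$, a toy parameterization we assume for illustration. The FM ratio then cancels the decay and flows to zero precisely when the anyon is uncondensed ($c = 0$).

```python
import numpy as np

# Toy model of the FM ratio (illustrative assumption, not the lattice
# calculation): an open string of length L has expectation value
#   <open string of length L>  = c**2 * exp(-alpha * L)
# (c = condensate amplitude per endpoint, alpha = line tension), while
# the closed loop of length 2L in the denominator gives
#   <closed loop of length 2L> = exp(-alpha * 2L),
# so <open>/sqrt(<closed>) cancels the line tension and flows to c**2.

def fm_ratio(L, alpha, c):
    open_string = c**2 * np.exp(-alpha * L)
    closed_loop = np.exp(-alpha * 2 * L)
    return open_string / np.sqrt(closed_loop)

for L in (2, 4, 8):
    print(fm_ratio(L, alpha=0.3, c=0.0),   # uncondensed: stays at 0
          fm_ratio(L, alpha=0.3, c=0.7))   # condensed: constant c**2
```

In this toy setting the ratio is exactly length-independent; in the genuine many-body state it only flows to these limits with increasing string length.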
In Fig.~\ref{fig-FMdTC}, we use the FM order parameter to diagnose the presence of QSL-like order in the final state of the sweep performed at $h_z = 0.1$ (see Appendix~\ref{app-FM} for the other values of $h_z$).
In particular, in Fig.~\ref{fig-FMdTC}(a), we show the minimum value of $\langle X \rangle_{\text{FM}}$ (for three different string lengths) obtained during a sweep of total time $T$ \footnote{The minimum value is reported because the FM order parameter oscillates towards the end of the sweep. The time trace for this oscillation is shown in Appendix~\ref{app-FM}.}.
Similarly, in Fig.~\ref{fig-FMdTC}(b), we show the value of $\langle Z \rangle_{\text{FM}}$ obtained at the same time that $\langle X \rangle_{\text{FM}}$ was minimized.
In doing so, we find that, indeed, the intermediate regime $[T_e, T_m]$ is characterized by both order parameters decaying to zero with increased string length.
This confirms that the intermediate regime $[T_e, T_m]$ displays QSL-like signatures.
\subsection{When is the Projected State a Spin Liquid?}\label{subsec-TCproof}
So far, we have presented strong numerical evidence that, for a window of sweep rates, the final state of a dynamical sweep will be the initial state with Gauss law violations projected out.
Nevertheless, we have yet to discuss when such a projected state will be a quantum spin liquid.
This can be answered analytically, but first we provide an intuitive explanation.
Note that deep in the Higgs phase, if $h_z = 0$, then the ground state has no $m$-anyons because the expectation value of the plaquette term will be $+1$ everywhere.
As such, when we project out Gauss law violations, the resulting state now has no $e$-anyons while remaining free of $m$-anyons, and is consequently a quantum spin liquid.
More precisely, the ground state deep in the Higgs phase when $h_z = 0$ is the state where all the qubits are in the $\ket{+}$ state.
When this state is expanded in the $Z$ basis, it looks like the equal-weight superposition of all open and closed string states [in the electric field representation of our spins (see Sec.~\ref{subsec-SLinEq})].
As such, when we project out all Gauss law violations (equivalently, states with open strings), the resulting state is the sum of all closed string configurations, which is precisely the toric code ground state of Eq.~\eqref{eq-TCWF}.
If we instead start with a product state for a nonzero $h_z$, there will be a small number of virtual $m$-anyon fluctuations in the initial state (or equivalently, the strings in the wavefunction will have a small line tension).
Hence, as we increase $h_x$ beyond some threshold value, we will eventually fail to create a QSL under projection.
To make the above arguments mathematically precise, note that the initial state of the dynamical sweep, when $K = 0$, is a product state of the form:
\begin{equation}
\ket{\psi(0)} = \bigotimes_{\ell} \left[ \cos(\theta) \ket{+} + \sin(\theta) \ket{-} \right]_{\ell} = \frac{e^{\frac{\beta}{2} \sum_{\ell} Z_{\ell}} }{\mathcal{Z}}\ket{+}^{\otimes N}
\end{equation}
where $\theta = \frac{1}{2}\arctan\left( \frac{h_z}{h_x}\right)$ and the second equality is an exact reparameterization in terms of $\tanh(\beta/2) = \tan(\theta)$.
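This reparameterization can be checked directly on a single link; below is a quick numerical sketch (the value $h_z/h_x = 0.1$ is an arbitrary choice for illustration).

```python
import numpy as np

# Check that for theta = arctan(h_z/h_x)/2 and tanh(beta/2) = tan(theta),
# the single-link state cos(theta)|+> + sin(theta)|-> is proportional
# to exp(beta Z / 2) |+>.
h_z, h_x = 0.1, 1.0
theta = 0.5 * np.arctan(h_z / h_x)
beta = 2.0 * np.arctanh(np.tan(theta))

plus = np.array([1.0, 1.0]) / np.sqrt(2.0)    # Z basis: (|0> + |1>)/sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2.0)  # Z basis: (|0> - |1>)/sqrt(2)

lhs = np.cos(theta) * plus + np.sin(theta) * minus
rhs = np.diag([np.exp(beta / 2), np.exp(-beta / 2)]) @ plus
rhs /= np.linalg.norm(rhs)   # divide out the normalization factor

print(np.allclose(lhs, rhs))  # True
```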
We want to know when:
\begin{equation}\label{eq-proj_state_pre_KW}
\mathcal{P}_{G}\ket{\psi(0)}
\propto
e^{\frac{\beta}{2} \sum_{\ell} Z_{\ell}} \ket{\psi_{TC}}
\end{equation}
is a quantum spin liquid.
In the above equation, we used the fact that $\mathcal{P}_{G}$ commutes with $e^{\frac{\beta}{2} Z_{\ell}}$ and $\mathcal{P}_{G} \ket{+}^{\otimes N} \propto \ket{\psi_{TC}}$ where $\ket{\psi_{TC}}$ is the toric code wavefunction defined in Eq.~\eqref{eq-TCWF}.
To do so, we first map the state in Eq.~\eqref{eq-proj_state_pre_KW} to its dual under the Kramers-Wannier map, defined in this case as:
\begin{equation}
\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, -1) -- (0, 1);
\draw[gray] (-1.5, 1) -- (1.5, 1);
\draw[gray] (-1.5, -1) -- (1.5, -1);
\node at (0,0){\normalsize $Z$};
\end{tikzpicture} \Longleftrightarrow \begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, -1) -- (0, 1);
\draw[gray] (-1.5, 1) -- (1.5, 1);
\draw[gray] (-1.5, -1) -- (1.5, -1);
\node at (-1,0){\normalsize $Z$};
\node at (1,0){\normalsize $Z$};
\end{tikzpicture}
\end{equation}
and
\begin{equation}
\begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (-1, -1) -- (-1, 1) -- (1, 1) -- (1, -1) -- cycle;
\node at (0.0, -1) {\normalsize $X$};
\node at (0.0, 1) {\normalsize $X$};
\node at (-1, 0.0) {\normalsize $X$};
\node at (1, 0.0) {\normalsize $X$};
\end{tikzpicture} \Longleftrightarrow \begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (-1, -1) -- (-1, 1) -- (1, 1) -- (1, -1) -- cycle;
\node at (0.0, 0.0) {\normalsize $X$};
\end{tikzpicture}
\end{equation}
which maps our model defined on the links of the square lattice to a model defined on the plaquettes of the square lattice.
Note that under this duality, $G_v$ is restricted to be $+1$ and hence, the toric code wavefunction, which is stabilized by the $G_v$ operator and the plaquette resonance operator, is mapped to the $+1$ eigenstate of the $X_p$ operator.
Hence,
\begin{equation}
e^{\frac{\beta}{2} \sum_{\ell} Z_{\ell}} \ket{\psi_{TC}} \Longleftrightarrow e^{\frac{\beta}{2}\sum_{\langle p, p' \rangle } Z_p Z_{p'}} \ket{+}^{\otimes N}.
\end{equation}
We thus obtain a state whose diagonal correlations are set by the classical Ising model at inverse temperature $\beta$.
It is known that the disordered phase of the classical Ising model maps to the QSL phase of the toric code model under Kramers-Wannier.
Thus, the projected state is a quantum spin liquid when the initial state is such that $\beta > \beta_c$ where $\beta_c =\frac{\ln(1+\sqrt{2})}{2}$ is the critical inverse temperature of the classical Ising model \cite{kramersandwannier}.
Translating this to our parameters, we obtain that there is a transition out of the topological phase at:
\begin{align}
\frac{h_z}{h_x} &= \tan \left( 2 \arctan \left( \tanh \left( \frac{\ln(1+\sqrt{2})}{4} \right) \right) \right) \\
&= \sqrt{ \frac{\sqrt{2}-1}{2} } \approx 0.4551,
\end{align}
as shown in Fig.~\ref{fig-dTC}(b).
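The closed form above can be verified numerically in a few lines:

```python
import numpy as np

# Evaluate the critical coupling ratio h_z/h_x at which the projected
# state ceases to be a QSL, and check it against the closed form
# sqrt((sqrt(2) - 1)/2).
beta_c = np.log(1 + np.sqrt(2)) / 2            # 2D Ising critical point
ratio = np.tan(2 * np.arctan(np.tanh(beta_c / 2)))
print(ratio)                                   # ~0.4551
print(np.isclose(ratio, np.sqrt((np.sqrt(2) - 1) / 2)))  # True
```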
We should remark that before our dynamical protocol reaches such values of $h_z/h_x$, our system will fail to satisfy our dynamical requirements of being slow with respect to the dynamics of $e$-anyons and fast with respect to the dynamics of $m$-anyons.
As a consequence, for all values of $h_x$ wherein our mechanism applies, we see a QSL-like state.
In the next section, we remark on what happens as the system fails to satisfy the aforementioned dynamical requirements. In particular, we will argue that as we increase our system size, the requirements are eventually bound to fail, leading to a finite-size `quantum spin lake'.
\section{Scaling and Limitations of the Mechanism}
\label{sec-Summary}
In the previous section, we saw QSL-like properties emerge in the non-equilibrium dynamics of the toric code model of Eq.~\eqref{eq-TCf_2}, even when the ground state was not topologically ordered.
This could be understood in a nice effective picture.
By dynamically sweeping the value of $K$ in the Hamiltonian at a rate that was slow relative to $e$-anyons, we gradually pushed them out of the initial state.
If this rate was simultaneously fast relative to the dynamics of $m$-anyons, they were effectively not created during the sweep.
This led to a toric-code-like state as evidenced by the overlap density with the projected state and FM order parameters.
In this section, we will make this picture more precise by elucidating both what is meant by slow and fast, and investigate the fate of this mechanism as we scale the system to the thermodynamic limit.
We will argue that, as we increase the system size, it will not be possible to globally remain in equilibrium relative to $e$-anyons and out-of-equilibrium relative to $m$-anyons due to (1) the presence of a phase transition as we exit the Higgs phase and (2) a finite $m$-anyon energy scale.
Nevertheless, the local correlations of the system will present QSL-like signatures defining the quantum spin lake.
In what follows, we will first consider the case where our dynamical sweep crosses the second order transition (Subsection~\ref{subsec-secondorder}) and then consider what happens when we cross the first order transition (Subsection~\ref{subsec-firstorder}).
In both cases, we argue that a finite-size spin lake is created and explain the dependence on sweeping rate and total time.
We conclude by numerically testing and confirming the scaling of this mechanism (Subsection~\ref{subsec-limitationsnumerics}).
\subsection{Crossing the Second Order Phase Transitions} \label{subsec-secondorder}
In this subsection, we discuss the dynamics of both $e$ and $m$-anyons during a dynamical sweep that crosses a second order phase transition [e.g. the sweep shown with the yellow arrow on Fig.~\ref{fig-dTC}(a)].
To get a better understanding of what will happen across the transition, it will be useful to recall lessons learned from studying the single qutrit model.
In that model, our system started off in the qutrit's ground state and adiabatically followed it until we approached the parameter regime around $K/h_x = 0$.
In particular, here, for a sufficiently fast rate, our system fell out of equilibrium relative to both the orange level and the blue level [see Fig.~\ref{fig-SingleQutrit}(a)].
By pushing up the orange level through $\Delta$, we found that we were able to avoid this issue and always remain in equilibrium relative to the orange level.
Nevertheless, after entering the regime close to $K/h_x = 0$, we were always out-of-equilibrium relative to the blue level.
It was for this reason that, after $K/h_x = 0$, our wavefunction remained orthogonal to the orange level (thereby being within the constrained subspace), but its dynamics were slow within the constrained subspace.
Under an assumption that these dynamics were perfectly slow, the final wavefunction would just be the wavefunction at the instance when it fell out of equilibrium relative to the blue level, but with constraint violations projected out.
In the rest of this subsection, we will argue that a similar picture arises in the many-body context by leveraging universal properties of the transition out of the Higgs phase.
In the many-body context, far before the transition, a similar story plays out: while the system is deep in the Higgs phase, it has a large gap and the dynamics are adiabatic, largely tracking the many-body ground state \cite{AdiabaticTheorem_DeRoeck}.
However, as we approach the critical point of the transition, this will no longer be the case.
In particular, in the vicinity of the transition, there will be an emergent notion of $e-$ and $m$-anyons.
The former will uncondense across the transition and as such will have a gap at the critical point scaling as $\sim 1/L$~\cite{cardy1996scaling}.
On the other hand, the $m$-anyon excitation at the critical point will not generically be gapless.
Nevertheless, we know that deep on the other side of the transition, the gap and bandwidth of single $m$-anyon excitations are small because they are set by small microscopic energy scales and perturbatively generated resonances.
As such, we can generically assume that the bandwidth of $m$-anyons remains small both close to the transition and as we move far past it.
This makes precise what is meant by the phrase ``fast with respect to $m$-anyons'': the rate at which one crosses the transition is faster than the time-scale associated with the bandwidth of the \textit{emergent} $m$-anyon excitation in the critical regime around the transition, which is presumed to not drastically change as we move past the transition (and hence, can be estimated through microscopics).
The small gap to both $e$ and $m$-anyons across the transition, implies that prior to the transition the system will fall out of equilibrium and the long-distance dynamics of $e$ and $m$-anyons will be slow near the transition.
Indeed, in the parlance of the Kibble-Zurek mechanism, this corresponds to the so-called Kibble-Zurek ``freeze-out'' regime \cite{Kibble_1976, KIBBLE_1980, Zurek_1985, ZUREK_1996, Zurek_Review, Chandran_2013, Chandran_2012}.
After exiting the Kibble-Zurek regime, the gap to $e$-anyon excitations will rapidly increase and the ``frozen'' system will go back into equilibrium relative to the $e$-anyon (similar to how the dynamics in the single qutrit went back into equilibrium with the orange level).
This approach to equilibrium is believed to occur through a process called coarsening, wherein ordered (in this context, constraint-satisfying) regions grow, with the final state exhibiting a dilute density of $e$-anyons, $n_e$.
Kibble-Zurek makes a sharp prediction (tested in Subsection~\ref{subsec-limitationsnumerics}) that this density will be determined by the rate that one crosses the transition; given a fixed rate, the density of $e$-anyon defects is predicted to remain roughly constant provided that one has spent sufficiently long after the transition to have ``coarsened.''
This density further defines a length scale $L_e = 1/\sqrt{n_e}$ over which our system will ``look'' like it obeys the Gauss law.
This length scale will be the size over which our system will exhibit QSL-like signatures and defines the size of our spin lake.
Moreover, this discussion clarifies the phrase ``slow relative to $e$-anyons'': it means that one sweeps at a sufficiently slow rate that $L_e$ is of appreciable size.
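As a rough illustration of the Kibble-Zurek estimate invoked above, the sketch below evaluates the standard KZ defect-density scaling $n_e \sim (\text{rate})^{d\nu/(1+z\nu)}$ and the resulting lake size $L_e = 1/\sqrt{n_e}$. The exponent values used (3D Ising universality for a (2+1)D transition with $z = 1$) are our illustrative assumptions, not values quoted in the text, and the nonuniversal prefactor is set to one.

```python
# Kibble-Zurek estimate of the e-anyon defect density set by the sweep
# rate: n_e ~ rate^{d*nu/(1 + z*nu)}, up to a nonuniversal prefactor.
# The default exponents below (3D Ising universality, z = 1) are
# illustrative assumptions.

def kz_defect_density(rate, d=2, nu=0.6299, z=1.0):
    """Defect density for a given sweep rate, prefactor set to one."""
    return rate ** (d * nu / (1.0 + z * nu))

for rate in (0.01, 0.1, 1.0):
    n_e = kz_defect_density(rate)
    L_e = n_e ** -0.5      # spin-lake size L_e = 1/sqrt(n_e)
    print(rate, n_e, L_e)  # slower rate -> fewer defects -> larger lake
```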
Apart from the dynamics of $e$-anyons, after the Kibble-Zurek ``freeze-out regime'', our dynamics will continue to be sudden relative to dynamics of single $m$-anyon excitations.
Similar to the single qutrit case, under the approximation that these dynamics were perfectly sudden, the only dynamics of the system across the transition would be to equilibrate out $e$-anyons.
Assuming that the wavefunction when the system fell out of equilibrium is similar (up to a short-depth unitary) to the initial wavefunction of the sweep, this motivates the ansatz that the final wavefunction is the initial wavefunction with Gauss law violations projected out.
Of course, since the energy scale associated with $m$-anyons is finite, there is a time-scale above which we will start to nucleate $m$-anyon defects on top of our state.
As such, we can predict that the density of $m$-anyon defects will increase as we sweep for longer times around and past the transition.
This is in contrast to the case of the $e$-anyons where the total time swept is irrelevant as long as the rate is kept constant.
This prediction will also be tested in Subsection~\ref{subsec-limitationsnumerics}.
Given the discussion above, a few remarks are in order.
First and most importantly, the above considerations on the density of $e$ and $m$-anyons imply that it will be impossible to produce a full thermodynamic QSL via our dynamical mechanism: the divergence of the $e$-anyon timescale and the finiteness of the $m$-anyon timescale imply that it is impossible to be slow with respect to the former and fast with respect to the latter.
Nevertheless, depending on the local energy scale of the $m$-anyons, as mentioned earlier, it will be possible to respect the time-scale conditions and create a QSL-like state over a length scale of size $L_e$, which precisely defines the quantum spin lake.
Second, we remark that the discussion in the previous paragraphs is quite general; we expect that dynamical sweeps into constrained subspaces with multiple emergent excitations, some of which are fast and others of which are slow, can be used to prepare exotic finite-size orders.
In the examples that we discuss in this paper, these excitations have a nice microscopic description which enables us to make predictions as to the final state of the sweep [i.e. via the projection formula of Eq.~\eqref{eq-sweeping-proj}].
Nevertheless, in general, this need not be the case; dynamical sweeps could project out emergent degrees of freedom.
\subsection{Crossing the First Order Phase Transition} \label{subsec-firstorder}
We now turn to the case where we cross a first order transition during our dynamical sweep.
In the model of Eq.~\eqref{eq-TCf_2}, this first order transition is between the Higgs phase and the confined phase and arises from a level crossing between the two phases.
While such a level crossing leads to a sharp and discontinuous change in the nature of the ground state, it does not impact the dynamics in the vicinity of the transition.
This is because the Higgs ground state and confined ground state are macroscopically distinct from one another and hence the ground state transition cannot be detected by local dynamics.
Deep enough beyond a first order transition, we expect to become sensitive to false vacuum decay~\cite{Vidal_False_Vacuum, Coleman_1977, devoto2022false}, but this process is mediated by $m$-anyon dynamics, which we have assumed to be slow relative to the total time of our dynamical sweep.
Hence, our local dynamics will instead effectively encounter the second order transition discussed above, and the considerations of the previous subsection will follow.
We now test this prediction.
\subsection{Numerical Confirmation} \label{subsec-limitationsnumerics}
\begin{figure}
\centering
\includegraphics[width = 247 pt]{Limitations_Figure_v3.pdf}
\caption{ \textbf{Numerical Confirmation of Rate and Evolution Time Scaling of $e$- and $m$-Anyon Generation.} Based on Kibble-Zurek type arguments, we expect that the density of $e$-anyons in the final state is controlled by the rate with which one traverses the transition.
%
In panel (a), we check this expectation numerically by simulating a dynamical sweep which ramps $K$ linearly from $0$ to $7$ at two rates and fixed $h_z/h_x = 0.1$ (red arrow in Fig.~\ref{fig-dTC}(a)).
%
We find that after crossing the transition, the expectation value of $\langle G_v \rangle$, which measures the Gauss law, is largely insensitive to the final time provided that one allows for the system to sufficiently coarsen after crossing the transition but depends sensitively on the rate, as predicted.
%
We remark that, at $h_z = 0.1$, the sweep strictly crosses a first order phase transition (shown with the dotted line) but the dynamics are insensitive to its presence (see discussion in main text).
%
However, the value of $\langle G_v \rangle$ starts to change dramatically shortly after a putative or `hidden' second order transition (dashed line).
%
In panel (b), we check the dynamics of $m$-anyons by plotting the overlap density of our instantaneous wavefunction $\ket{\psi(t)}$ with the projected state as a function of evolution time $t$.
%
%
We find that following the putative transition (shown in the dashed lines) and coarsening, when $e$-anyons are back in equilibrium, $m$-anyons decrease the projected overlap density linearly with evolution time as expected.
%
In our numerics, we utilized a bond dimension of $\chi = 256$ and a Trotter step size of $dt = 0.0025$. \label{fig:scaling}}
\end{figure}
We now seek to numerically verify the predictions of the last two subsections.
To summarize, our predictions are three-fold.
First, we predict that the local dynamics of our system are insensitive to the presence of the first-order transition and instead effectively see a second-order transition.
Second, we predict that the density of $e$-anyons (as detected by $\langle G_v\rangle$) at the end of the sweep is set by the rate $(dK/dt)/h_x$ that one crosses the effective second-order transition as opposed to the total time of the sweep.
Finally, we predict that the density of $m$-anyons produced during the sweep is determined by the total time spent after the transition as opposed to the rate that we sweep across the transition.
To test these, we start by plotting the expectation value of the Gauss law operator as a function of location $K(t)/h_x$ along the sweep, for sweeps done at two different rates [see Fig.~\ref{fig:scaling}(a)].
We first find that the expectation value of the Gauss law operator shows no signature as we cross the point where the ground state undergoes the first-order transition, indicating that indeed our system is insensitive to its presence.
Moreover, we find that after crossing the putative location of the effective second order phase transition, the expectation value of the Gauss law operator approximately saturates to a constant value.
This constant value appears set by the rate with the value of $\langle G_v \rangle$ increasing with decreasing (slower) rate.
This is consistent with the predictions of Kibble-Zurek.
We remark that there is a slight increase in the value of $\langle G_v \rangle$ as the value of $K(t)$ increases.
This is due to the fact that as we increase $K$, the emergent Gauss law of the low-energy constrained subspace becomes closer to the bare Gauss law: $G_v$.
This effect similarly occurs within the ground state.
Lastly, we want to confirm whether the total $m$-anyon density is set by the total time that the system evolves past the transition (as opposed to the rate).
To do so, we plot the value of the overlap of the instantaneous wavefunction $\ket{\psi(t)}$ with the projected wavefunction $\mathcal{N} \cdot \mathcal{P}_{G} \ket{\psi(0)}$ as a function of the evolution time $t$.
We do so for two different rates that we cross the transition.
Since we have confirmed that the rate determines the $e$-anyon density, a decrease in the projected overlap with time signals the nucleation of $m$-anyons.
Our prediction implies that the slope with which the projected overlap decreases should be independent of the rate.
Remarkably, we find that this is indeed the case in Fig.~\ref{fig:scaling}(b)!
This is strong evidence in support of our predictions.
\section{Experimental Relevance: Rydberg Atom Ruby Dimer Liquid} \label{sec-Experiment}
Thus far we have carefully studied the mechanism for creating a quantum spin lake, first in the qutrit toy model (Section~\ref{sec-Qutrit}) and subsequently in a genuine many-body toric code model (Section~\ref{sec-DTC}), which we then also used to study and display how it does (not) scale (Section~\ref{sec-Summary}). In this section, we present a concrete application of our theory. In particular, we consider how it applies to the Rydberg atom quantum simulator experiment of Ref.~\onlinecite{Semeghini21} based on the proposal by Ref.~\onlinecite{Verresen21}.
Therein, Rubidium-87 atoms are placed at the links of the kagome lattice (equivalently the sites of the ruby lattice):
\vspace{-2mm}
\begin{equation*}
\begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1}{
\foreach \j in {0,...,1}{
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2*1/2}, {\j * 0.866025404 + 1/2*0.866025404}) -- cycle;
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2*1/2}, {\j * 0.866025404 - 1/2*0.866025404}) -- cycle;
\filldraw ({\i + \j * 1/2 + 1/4}, {\j * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/4}, {\j * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 + 1/8 }, {\j * 0.866025404 + 1/4 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/8 }, {\j * 0.866025404 - 1/4 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 + 1/8 + 1/4 }, {\j * 0.866025404 + 1/4 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/8 -1/4}, {\j * 0.866025404 - 1/4 * 0.866025404}) circle (1 pt);
}
}
\draw[red, fill=red, fill opacity=0.1, dashed, thick](1/4,0) circle (0.51);
\filldraw[red] (1/4, 0) circle (1 pt);
\draw [-stealth, thick] (1/4,0) -- (1/4 - 0.44, 0.25);
\draw [stealth-stealth, line width = 0.1 mm] (5/4 + 0.1 + 0.04,-0.1 + 0.01 ) -- (5/4 + 0.25 + 0.025, 0.25 - 0.1 + 0.025 );
\node at ({-1/4 -1/6}, 0.4) {\normalsize $R_b$};
\node at (5/4 + 0.25 + 0.2, 0.25 - 0.15 ) {\normalsize $a$};
\end{tikzpicture}
\end{equation*}
Each atom encodes a qubit (or hardcore boson) using a hyperfine atomic ground state $\ket{\downarrow}$ and Rydberg state $\ket{\uparrow}$ of the atom.
These atoms then interact via the following Hamiltonian~\cite{Fendley04}:
\begin{equation} \label{eq-HPXP}
H = \frac{\Omega}{2} \sum_{i} X_i - \delta \sum_i n_i + \frac{1}{2}\sum_{i,j} V_{i, j} n_{i} n_{j}
\end{equation}
where $i, j$ runs over all qubits on the lattice and $n_i = \frac{1}{2}(1 + Z_i)$.
In the experiment, $\Omega$ and $\delta$ correspond to the Rabi frequency and frequency detuning of the laser that addresses the ground-Rydberg transition, and $V_{i, j}$ is the van der Waals interaction potential of two atoms in their Rydberg states, $V_{i,j} \approx \Omega \times \left( \frac{R_b}{|r_i-r_j|} \right)^6$ [see Fig.~\ref{fig-RydbergPotential}(a)], where $R_b$ is called the blockade radius (shown above to be at least $2a$) and $r_i$ is the position of atom $i$.
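As a minimal sketch of the Hamiltonian of Eq.~\eqref{eq-HPXP}, the code below builds it as a dense matrix for a toy three-atom chain. The positions and parameter values are arbitrary illustrative choices (not the experimental values), and we adopt the convention that $\ket{1}$ is the Rydberg state, so $n = \mathrm{diag}(0, 1)$ in our basis ordering.

```python
import itertools
import numpy as np

# Dense-matrix sketch of the Rydberg Hamiltonian
#   H = (Omega/2) sum_i X_i - delta sum_i n_i + (1/2) sum_{i,j} V_ij n_i n_j
# for a small chain of atoms; parameters are illustrative assumptions.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
n = np.array([[0.0, 0.0], [0.0, 1.0]])  # |1> taken as the Rydberg state
I2 = np.eye(2)

def op_at(op, site, N):
    """Embed a single-qubit operator at `site` in an N-qubit chain."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, op if k == site else I2)
    return out

def rydberg_hamiltonian(positions, Omega, delta, Rb):
    N = len(positions)
    H = sum(0.5 * Omega * op_at(X, i, N) - delta * op_at(n, i, N)
            for i in range(N))
    # van der Waals tail: V_ij = Omega * (Rb / |r_i - r_j|)^6
    for i, j in itertools.combinations(range(N), 2):
        Vij = Omega * (Rb / abs(positions[i] - positions[j])) ** 6
        H = H + Vij * op_at(n, i, N) @ op_at(n, j, N)
    return H

H = rydberg_hamiltonian(positions=[0.0, 1.0, 2.0], Omega=1.0, delta=2.0, Rb=1.2)
print(H.shape, np.allclose(H, H.T))  # (8, 8) True
```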
\subsection{Rydberg Model in Equilibrium} \label{subsec-RydEq}
\begin{figure}
\centering
\includegraphics[width = 247 pt]{Rydberg_Potential2.pdf}
\caption{\textbf{Interaction Potentials Utilized to Study Rydberg Atom Systems.} (a) Typically, the potential between Rydberg atoms is treated as a standard van der Waals potential which falls off with distance $r$ as $\sim (R_b/r)^6$ where $R_b$ is a characteristic radius called the blockade radius \cite{Lukin01, Jaksch00}. (b) Since the van der Waals interaction potential is much larger within the blockade radius than outside it, it is typically approximated to be nearly infinite in the blockade radius and zero outside the blockade radius. In this limit, the effective Hamiltonian governing the dynamics of Rydberg atoms is the so-called PXP model of Eq.~\eqref{eq-actualHPXP}. (c) To incorporate the effects of the long-range tails either numerically or analytically, it is convenient to study a truncated version of the full van der Waals potential. Here, the full $(R_b/r)^6$ of the van der Waals model is taken into account for $r \leq R_{\text{trunc}}$ but the interaction potential is sharply truncated for $r > R_{\text{trunc}}$. }
\label{fig-RydbergPotential}
\end{figure}
Before discussing the dynamics in the experiment, we briefly review the equilibrium physics of the Rydberg model, following Ref.~\onlinecite{Verresen21}.
One of its key features is that, since the energy cost of exciting two atoms within the blockade radius is very large, low-energy states satisfy the ``blockade constraint'' wherein two nearby atoms cannot be simultaneously excited.
If we represent the states of our Rydberg qubits with dimers, $\ket{ \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (1, 0);
\node at (0.5, 0) {\normalsize $\uparrow$};
\end{tikzpicture}}= \ket{\frac{}{}\begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (1, 0);
\draw[red, fill = red] (0.5,0) ellipse (0.45 and 0.1);
\end{tikzpicture}}$
and $\ket{ \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (1, 0);
\node at (0.5, 0) {\normalsize $\downarrow$};
\end{tikzpicture}} = \ket{\frac{}{} \begin{tikzpicture}[scale = 0.5, baseline = {([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (1, 0);
\end{tikzpicture}}$, then the blockade constraint, defined such that $R_b$ contains the six nearest neighbors (i.e., $2a < R_b < \sqrt{7} a$), implies that two dimers cannot share the same vertex.
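Combinatorially, blockade-allowed configurations are the independent sets of the graph whose edges connect atoms within $R_b$ of one another. The sketch below enumerates them for a toy 6-cycle blockade graph (an arbitrary illustrative choice, not the ruby-lattice geometry).

```python
import itertools

# Blockade constraint: no two atoms connected by an edge of the
# "blockade graph" may be simultaneously excited, so allowed
# configurations are exactly the independent sets of that graph.
def blockaded_configs(num_atoms, blocked_pairs):
    configs = []
    for bits in itertools.product([0, 1], repeat=num_atoms):
        if all(not (bits[i] and bits[j]) for i, j in blocked_pairs):
            configs.append(bits)
    return configs

# Toy example: six atoms on a ring, each blockading its two neighbors.
edges = [(i, (i + 1) % 6) for i in range(6)]
print(len(blockaded_configs(6, edges)))  # 18 allowed configurations
```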
If the detuning $\delta$ in Eq.~\eqref{eq-HPXP} is large, low-energy states will have as many atoms in their Rydberg states as possible while still obeying the blockade constraint.
As such, in such a regime, the system will behave like a dimer model \cite{Verresen21} characterized by the Gauss law:
\begin{equation}\label{eq-RydbergGauss}
G_v = \begin{tikzpicture}[scale = 1.2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-0.5 * 1/2}, {0.5 * 0.866025404}) -- ({0.5 * 1/2}, {0.5 * 0.866025404}) -- cycle;
\draw[gray] (0,0) -- ({-0.5 * 1/2}, {-0.5 * 0.866025404}) -- ({0.5 * 1/2}, {-0.5 * 0.866025404}) -- cycle;
\draw[orange(ryb), dashed, line width = 0.4 mm] (0,0) circle (7 pt);
\end{tikzpicture} = -1 \quad \quad
\begin{tikzpicture}[scale = 1.2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-1.5*(0.5) * 1/2}, {-(1.5*0.5) * 0.866025404}) -- ({(1.5*0.5) * 1/2}, {-(1.5*0.5) * 0.866025404}) -- cycle;
\draw[orange(ryb), dashed, line width = 0.4 mm] (-1.5*0.3, 1.5*-0.25) -- (1.5*0.3, 1.5*-0.25);
\end{tikzpicture}\ \ =\ \
\begin{tikzpicture}[scale = 1.2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-1.5*(0.5) * 1/2}, {-(1.5*0.5) * 0.866025404}) -- ({(1.5*0.5) * 1/2}, {-(1.5*0.5) * 0.866025404}) -- cycle;
\node at ({-1.5*(0.5) * 1/4}, {-(1.5*0.5) * 0.866025404/2}) {\normalsize $Z$};
\node at ({-1.5*(0.5) * 1/4 + 0.75/2}, {-(1.5*0.5) * 0.866025404/2}) {\normalsize $Z$};
\end{tikzpicture}
\end{equation}
where we have defined the 't Hooft loop operator shown in orange and $v$ refers to a particular vertex of the kagome lattice.
The presence of this local Gauss law implies that, at low energies, the Rydberg model is an emergent $\mathbb{Z}_2$ gauge theory.
The deconfined phase of this gauge theory can be characterized by its fixed-point wavefunction \cite{RK,Moessner_2001,Misguich02}:
\begin{equation} \label{eq-RydRVB}
\ket{\text{RVB}} = \ket{ \includegraphics[scale = 0.3, valign = c]{dimer_config1.pdf} } + \ket{ \includegraphics[scale = 0.3, valign = c]{dimer_config2.pdf} } + \ket{ \includegraphics[scale = 0.3, valign = c]{dimer_config3.pdf} } + \cdots
\end{equation}
which is the equal-weight, equal-phase superposition of all full-packing dimer configurations (dimer configurations that have no untouched vertices) and is the dimer analogue of Anderson's resonating valence bond (RVB) state of singlets \cite{ANDERSON}. Such a superposition of dimer states represents a $\mathbb Z_2$ spin liquid owing to the kagome being a nonbipartite lattice \cite{Read91,Sachdev92,Sachdev_Triangle,Moessner01,Misguich02} (for approaches to emergent dimer models on other lattices see Refs.~\onlinecite{Glaetzle_2014,Samajdar_2021,SamajdarJoshi22,Yan22}).
The above state is the unique state that is stabilized by the 't Hooft loop of Eq.~\eqref{eq-RydbergGauss} and the following Wilson loop operator \cite{unifying}:
\begin{equation} \label{eq-RydbergWilson}
W_{\varhexagon} = \begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-0.5 * 1/2}, {0.5 * 0.866025404}) -- ({0}, {0.866025404}) -- ({0.5}, {0.866025404}) -- ({ 0.5 +0.5 * 1/2}, {0.5 * 0.866025404}) -- (0.5, 0) -- cycle;
\draw[dodgerblue,decorate, decoration={snake, segment length=1.5mm}, line width = .35mm] (0,0) -- ({-0.5 * 1/2}, {0.5 * 0.866025404});
\draw[dodgerblue,decorate, decoration={snake, segment length=1.5mm}, line width = .35mm] ({-0.5 * 1/2}, {0.5 * 0.866025404}) -- ({0}, {0.866025404});
\draw[dodgerblue,decorate, decoration={snake, segment length=1.5mm}, line width = .35mm] ({0}, {0.866025404}) -- ({0.5}, {0.866025404});
\draw[dodgerblue,decorate, decoration={snake, segment length=1.5mm}, line width = .35mm] ({0.5}, {0.866025404}) -- ({ 0.5 +0.5 * 1/2}, {0.5 * 0.866025404});
\draw[dodgerblue,decorate, decoration={snake, segment length=1.5mm}, line width = .35mm] ({ 0.5 +0.5 * 1/2}, {0.5 * 0.866025404}) -- (0.5, 0);
\draw[dodgerblue,decorate, decoration={snake, segment length=1.5mm}, line width = .35mm] (0.5, 0) -- (0,0);
\end{tikzpicture} = +1
\quad \quad
\begin{tikzpicture}[scale = 1.2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-1.5*(0.5) * 1/2}, {-(1.5*0.5) * 0.866025404}) -- ({(1.5*0.5) * 1/2}, {-(1.5*0.5) * 0.866025404}) -- cycle;
\draw[dodgerblue,decorate, decoration={snake, segment length=1.5mm}, line width = .35mm] ({-1.5*(0.5) * 1/2}, {-(1.5*0.5) * 0.866025404}) -- ({(1.5*0.5) * 1/2}, {-(1.5*0.5) * 0.866025404});
\end{tikzpicture} = \begin{cases}
\begin{tikzpicture}[scale = 1.2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-.7*(0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- ({(.7*0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- cycle;
\draw[red, fill = red, rotate around={60:({-.35*(0.5) * 1/2}, {-(.35*0.5) * 0.866025404})}] ({-.35*(0.5) * 1/2}, {-(.35*0.5) * 0.866025404}) ellipse (0.175 and 0.03);
\end{tikzpicture} \leftrightarrow
\begin{tikzpicture}[scale = 1.2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-.7*(0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- ({(.7*0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- cycle;
\draw[red, fill = red, rotate around={-60:({(.35*0.5) * 1/2}, {-(.35*0.5) * 0.866025404})}] ({(.35*0.5) * 1/2}, {-(.35*0.5) * 0.866025404}) ellipse (0.175 and 0.03);
\end{tikzpicture}
\\
\begin{tikzpicture}[scale = 1.2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-.7*(0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- ({(.7*0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- cycle;
\end{tikzpicture}
\leftrightarrow
\begin{tikzpicture}[scale = 1.2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-.7*(0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- ({(.7*0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- cycle;
\draw[red, fill = red] ({0}, {-(.7*0.5) * 0.866025404}) ellipse (0.175 and 0.03);
\end{tikzpicture}
\end{cases}
\end{equation}
where $\varhexagon$ refers to a particular hexagon on the kagome lattice.
Similar to the case of the toric code, we can define $e$-anyons above this state to be violations of the Gauss law of Eq.~\eqref{eq-RydbergGauss} and $m$-anyons to be violations of the equal phase condition of Eq.~\eqref{eq-RydRVB} (equivalently violations of the Wilson loop stabilizer Eq.~\eqref{eq-RydbergWilson}).
\subsubsection{Phase Diagram in the Absence of Long-Range Tails} \label{subsec-PXP_PD}
Since the blockade is the dominant effect of the interactions $V$, a first approximation is to replace the interaction by $V(r \leq R_b) = + \infty$ and $V(r > R_b) = 0$, neglecting the effect of the long-range $1/R^6$ tails [See Fig.~\ref{fig-RydbergPotential}(b)].
In this limit, the Rydberg model is traditionally called a ``PXP model'' \cite{Fendley04,Bernien17,Turner18a,Turner18b,Choi19,Lin_2019,Shiraishi19,Mark20,Moudgalya20,Iadecola20,Surace20,Lin20,Michailidis20}.
This is because the effective Hamiltonian in this limit is simply the Pauli-$X$ operator projected into the subspace satisfying the blockade constraint, along with the detuning term:
\begin{equation} \label{eq-actualHPXP}
H_{\text{PXP}} = \frac{\Omega}{2} \sum_i \mathcal{P}_{\text{blockade}} X_i \mathcal{P}_{\text{blockade}} - \delta \sum_i n_i
\end{equation}
where $\mathcal{P}_{\text{blockade}} = \prod_{i, j: |r_i - r_j| \leq R_b} \left( 1 - n_i n_j \right)$ removes configurations that violate the blockade constraint.
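As a purely illustrative numerical sketch (not the ruby-lattice geometry of the experiment), a Hamiltonian of the form of Eq.~\eqref{eq-actualHPXP} can be written directly in the blockade-satisfying basis; here we use a hypothetical one-dimensional chain with nearest-neighbor blockade, for which the constrained Hilbert space dimension famously follows the Fibonacci sequence:

```python
import itertools
import numpy as np

def blockade_states(n_sites):
    """All occupation patterns with no two adjacent excited atoms
    (nearest-neighbor blockade on an open 1D chain)."""
    return [s for s in itertools.product([0, 1], repeat=n_sites)
            if all(not (a and b) for a, b in zip(s, s[1:]))]

def h_pxp(n_sites, omega=1.0, delta=0.0):
    """H = (Omega/2) sum_i P X_i P - delta sum_i n_i, written directly
    in the blockade-satisfying basis so P never appears explicitly."""
    basis = blockade_states(n_sites)
    index = {s: k for k, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for k, s in enumerate(basis):
        H[k, k] = -delta * sum(s)          # detuning term
        for i in range(n_sites):
            flipped = list(s)
            flipped[i] ^= 1                 # Pauli-X on site i
            j = index.get(tuple(flipped))
            if j is not None:               # flip stays inside the constrained space
                H[j, k] += omega / 2
    return H, basis

H, basis = h_pxp(5)
print(len(basis))   # 13 = Fibonacci F(7) for a 5-site open chain
```

The same bookkeeping (enumerate constraint-satisfying configurations, then matrix elements within that subspace) is what exact-diagonalization studies of the ruby-lattice model use, just with the two-dimensional blockade graph in place of the chain.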
The ground state phase diagram of this PXP model on the ruby lattice was found in Ref.~\onlinecite{Verresen21} to be:
\begin{equation*}
\hspace{7 mm}\begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[-stealth, line width=0.4mm] (0, 0) -- (5, 0);
\node at (5.5,0) {\normalsize $\delta/\Omega$};
\draw[line width=0.4mm] (1.2, 0) -- (1.2, 0.2);
\node at (0.6,0.3) {\small Higgs};
\draw[line width=0.4mm] (3.3, 0) -- (3.3, 0.2);
\node at (4.2,0.3) {\small Confined};
\node at (2.3,0.3) {\small $\mathbb{Z}_2$ QSL};
\node at (1.2, -0.2) {\small $\approx 1.4$};
\node at (3.3, -0.2) {\small $\approx 2.1$};
\end{tikzpicture}
\end{equation*}
which contains three phases.
In particular, when $\delta/\Omega$ is small or negative, $e$-anyon excitations condense, yielding the Higgs phase which is adiabatically connected to the state with no dimers (equivalently no excited Rydberg atoms).
Moreover, in an intermediate regime of $\delta/\Omega$ we get the deconfined phase which shares the properties of the RVB state of Eq.~\eqref{eq-RydRVB}.
Finally, when $\delta/\Omega$ is large, we can treat the effect of $\Omega$, which generates violations of the Gauss law (analogous to the $h_x$ term in Eq.~\eqref{eq-KitaevTC-in-F}), perturbatively.
This will generate a resonance between dimer states given by \cite{Verresen21}:
\begin{equation}\label{eq-Rydberg-Resonances}
H_{\text{eff}} = - \frac{3\Omega^6}{32\delta^5} \sum_{\varhexagon}
\ket{\begin{tikzpicture}[scale = 0.7, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-0.5 * 1/2}, {0.5 * 0.866025404}) -- ({0}, {0.866025404}) -- ({0.5}, {0.866025404}) -- ({ 0.5 +0.5 * 1/2}, {0.5 * 0.866025404}) -- (0.5, 0) -- cycle;
\draw[red, fill = red] (0.25,0) ellipse (0.25 and 0.05);
\draw[red, fill = red, rotate around={60:({-0.25 * 1/2}, {0.75 * 0.866025404})}] ({-0.25 * 1/2}, {0.75 * 0.866025404}) ellipse (0.25 and 0.05);
\draw[red, fill = red, rotate around={-60:({1.25 * 1/2}, {0.75 * 0.866025404})}] ({1.25 * 1/2}, {0.75 * 0.866025404}) ellipse (0.25 and 0.05);
\end{tikzpicture}}
\bra{\begin{tikzpicture}[scale = 0.7, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-0.5 * 1/2}, {0.5 * 0.866025404}) -- ({0}, {0.866025404}) -- ({0.5}, {0.866025404}) -- ({ 0.5 +0.5 * 1/2}, {0.5 * 0.866025404}) -- (0.5, 0) -- cycle;
\draw[red, fill = red] (0.25,{0.866025404}) ellipse (0.25 and 0.05);
\draw[red, fill = red, rotate around={-60:({-0.25 * 1/2}, {0.25 * 0.866025404})}] ({-0.25 * 1/2}, {0.25 * 0.866025404}) ellipse (0.25 and 0.05);
\draw[red, fill = red, rotate around={60:({1.25 * 1/2}, {0.25 * 0.866025404})}] ({1.25 * 1/2}, {0.25 * 0.866025404}) ellipse (0.25 and 0.05);
\end{tikzpicture}}
\end{equation}
This resonance alone (as well as an eighth-order perturbatively generated term) yields a confined ``valence bond solid'' phase \cite{Nikolic03}.
Such a phase is a condensate of $m$-anyons and the ground state wavefunction corresponds to a localized superposition of a subset of full-packing dimer configurations.
\subsubsection{Effect of Long-Range Tails} \label{subsubsec-EffectofLRtails}
Having reviewed the basic physics of the Rydberg PXP model in the absence of long-range tails, we now analyze the effect of the long-range tails beyond the blockade radius.
Since the long-range density-density interactions such as $n_i n_j$ commute with the Gauss law of Eq.~\eqref{eq-RydbergGauss} but fail to commute with the Wilson loop of Eq.~\eqref{eq-RydbergWilson}, they contribute to the energy-scale associated with the creation of $m$-anyons.
Concretely, in the absence of long-range tails and local resonances, the Rydberg blockade treats all dimer configurations on equal footing, inducing no splittings between such states; our goal is to estimate how much these tails lead to energy density splittings between dimer configurations.
One might expect that the leading order contribution of these tails is due to the first interaction outside of the blockade radius, namely at $r = \sqrt{7} a$.
However, we demonstrate a new result: this contribution instead arises from the \emph{second} interaction outside of the blockade and occurs with a much smaller coefficient than one would expect; roughly speaking, this decreases the apparent effect by at least an order of magnitude.
To estimate the effect of the long-range tails, we consider the six leading contributions of the Rydberg interaction outside of the blockade radius:
\begin{equation} \label{eq-RydbergDistances}
\begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1}{
\foreach \j in {0,...,1}{
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2*1/2}, {\j * 0.866025404 + 1/2*0.866025404}) -- cycle;
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2*1/2}, {\j * 0.866025404 - 1/2*0.866025404}) -- cycle;
\filldraw ({\i + \j * 1/2 + 1/4}, {\j * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/4}, {\j * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 + 1/8 }, {\j * 0.866025404 + 1/4 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/8 }, {\j * 0.866025404 - 1/4 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 + 1/8 + 1/4 }, {\j * 0.866025404 + 1/4 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/8 -1/4}, {\j * 0.866025404 - 1/4 * 0.866025404}) circle (1 pt);
}
}
\draw [-stealth, thick, dodgerblue] (1/4,0) -- (1/2- 1/8 -1/4, 1* 0.866025404 - 1/4 * 0.866025404);
\draw [-stealth, thick, red, dashed] (1/4,0) -- (1/2 - 1/4, 1* 0.866025404);
\draw [-stealth, thick, red] (1/4 + 1/2,0) -- (1/2 + 1/2 - 1/4, 1* 0.866025404);
\draw [-stealth, thick, forest] (1/4 + 1/2,0) -- (1/2 + 1/2 - 1/4 + 1/8, 1* 0.866025404 + 0.866025404/4);
\draw [-stealth, thick, orange(ryb)] (1/4 + 1/8,0.866025404/4) -- (1/2 + 1/4, 1* 0.866025404);
\draw [-stealth, thick, blue] (1/4 + 1/8, 0.866025404/4) -- (1/2 + 1/8, 5/4* 0.866025404 );
\draw [-stealth, thick, magenta] (1/4 + 1/8 + 1, 0.866025404/4) -- (1/2 + 3/8 + 1, 5/4* 0.866025404 );
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale = 0.4, baseline={([yshift=-.5ex]current bounding box.center)}]
\node at (0, 2.5) {\footnotesize $\textcolor{dodgerblue}{R_1 = \sqrt{7} a}$};
\node at (0, 1.5) {\footnotesize $\textcolor{orange(ryb)}{R_2 = 3 a}$};
\node at (0, 0.5) {\footnotesize $\textcolor{red}{R_3 = 2\sqrt{3} a}$};
\node at (0, -0.5) {\footnotesize $\textcolor{blue}{R_4 = \sqrt{13} a}$};
\node at (0, -1.5) {\footnotesize $\textcolor{magenta}{R_5 = 4 a}$};
\node at (0, -2.5) {\footnotesize $\textcolor{forest}{R_6 = \sqrt{19} a}$};
\end{tikzpicture}
%
\quad
\begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1}{
\foreach \j in {0,...,1}{
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2*1/2}, {\j * 0.866025404 + 1/2*0.866025404}) -- cycle;
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2*1/2}, {\j * 0.866025404 - 1/2*0.866025404}) -- cycle;
}
}
\node at (1/4,0) {\tiny $1$};
\node at (1/4 + 1/8, 0.866025404/4) {\tiny $2$};
\node at (1/4 - 1/8, 0.866025404/4) {\tiny $3$};
\node at (-3/8 + 1/2, 0.866025404*3/4 ) {\tiny $4$};
\node at (-1/8 + 1/2, 0.866025404*3/4 ) {\tiny $5$};
\node at (-1/4 + 1/2, 0.866025404 ) {\tiny $6$};
\node at (1/4 + 1/2, 0.866025404 ) {\tiny $7$};
\node at (1/8 + 1/2, 5/4*0.866025404 ) {\tiny $9$};
\node at (3/8 + 1/2, 5/4*0.866025404 ) {\tiny $8$};
\node at (5/8 - 1/16, -1/4*0.866025404 ) {\tiny $10$};
\node at (7/8 + 1/16, -1/4*0.866025404 ) {\tiny $11$};
\node at (3/4, 0) {\tiny $12$};
\node at (1 + 1/4,0) {\tiny $13$};
\node at (1 + 1/16 + 1/4 + 1/8, 0.866025404/4) {\tiny $14$};
\node at (1 - 1/16 + 1/4 - 1/8, 0.866025404/4) {\tiny $15$};
\node at (1 - 1/16 -3/8 + 1/2, 0.866025404*3/4 ) {\tiny $16$};
\node at (1 + 1/16 -1/8 + 1/2, 0.866025404*3/4 ) {\tiny $17$};
\node at (1 -1/4 + 1/2, 0.866025404 ) {\tiny $18$};
\end{tikzpicture}
\end{equation}
where the colors refer to different distance couplings \footnote{We remark that the distances $R_n$ are the square roots of the so-called ``Loeschian numbers.''} and we will utilize the numeric labeling of the sites on the right in the following discussion.
For convenience, we will denote the distance-$R_n$ coupling as $V_n = \Omega (R_b/R_n)^6 \sum_{|r_i - r_j| = R_n} n_i n_j$.
Naively, the leading effect of the long-range tails will be due to the distance $R_1$ coupling $V_1$.
For practical experimental values of $R_b$ and $\Omega$, this coupling can be quite large (See Subsubsection~\ref{subsubsec-NumericalEstimates} for more details), which would suggest that the $m$-anyons would proliferate and strongly confine the QSL.
However, one can prove that such a coupling must be constant across all full-packing dimer configurations (i.e. $V_1 \ket{\psi} = c\ket{\psi}$ if $\ket{\psi}$ is a full-packing dimer configuration, where $c$ does not depend on $\ket{\psi}$).
To see that this is the case, we remark that any full-packing dimer configuration must obey the Gauss law and as such $\sum_{\alpha \in v} n_{\alpha} = 1$ (where $\alpha$ labels sites neighboring vertex $v$).
Using this fact, the distance-$R_1$ coupling, $V_1$, acts on full-packing dimer configurations $\ket{\psi}$ as:
\begin{align}
n_1(n_4 + n_5) \ket{\psi} &= n_1 (1 - n_2 - n_3) \ket{\psi} \nonumber\\
&= [n_1 - n_1 (n_2 + n_3)]\ket{\psi} = n_1\ket{\psi}
\end{align}
where in the first line we used the Gauss law and in the second line we noted that $n_1 (n_2 + n_3) \ket{\psi} = 0$ by the blockade constraint.
As such, the effect of the distance $R_1$ coupling on $\ket{\psi}$ is $V_1 \ket{\psi} = \Omega (R_b/R_1)^6\sum_i n_i \ket{\psi}$ and hence it simply renormalizes the detuning of the model.
Since the number of dimers in any full-packing dimer configurations is the same, $V_1 \ket{\psi} = c \ket{\psi}$ as claimed and hence $V_1$ does not split dimer configurations (i.e. does not contribute to the $m$-anyon energy scale).
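The algebraic step above can also be verified exhaustively: over all occupations of the five sites involved, imposing the Gauss law at the shared vertex together with the blockade constraint forces $n_1(n_4 + n_5) = n_1$. A minimal brute-force check (a sketch; site labels follow Eq.~\eqref{eq-RydbergDistances}):

```python
import itertools

# Brute-force check of n1*(n4 + n5) == n1 on all configurations obeying
# the vertex Gauss law n2 + n3 + n4 + n5 == 1 and the blockade
# constraints n1*n2 == n1*n3 == 0 (sites 2 and 3 are within R_b of site 1).
for n1, n2, n3, n4, n5 in itertools.product([0, 1], repeat=5):
    if n2 + n3 + n4 + n5 != 1:      # Gauss law at the shared vertex
        continue
    if n1 and (n2 or n3):           # blockade constraint
        continue
    assert n1 * (n4 + n5) == n1
print("identity holds on all allowed configurations")
```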
As a consequence of the above, the leading order effect of the long-range tail is due to the distance-$R_2$ coupling $V_2$.
Once again, although a naive estimate of the energy density splitting due to these terms is $\Omega (R_b/R_2)^6$, this turns out not to be the case.
To see why, first note that this coupling pairs atoms that are within a hexagon:
\begin{equation} \label{eq-Hexagon1}
\begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1}{
\foreach \j in {0,...,1}{
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2*1/2}, {\j * 0.866025404 + 1/2*0.866025404}) -- cycle;
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2*1/2}, {\j * 0.866025404 - 1/2*0.866025404}) -- cycle;
}
}
\draw [stealth-stealth, thick, orange(ryb)] (1/4 + 1/8,0.866025404/4) -- (1/2 + 1/4, 1* 0.866025404);
\draw [stealth-stealth, thick, orange(ryb)] (3/4 + 1/4 + 1/8,0.866025404/4) -- (1/2 + 1/4, 1* 0.866025404);
\draw [stealth-stealth, thick, orange(ryb)] (1/4 + 1/8,0.866025404/4) -- (3/4 + 1/4 + 1/8,0.866025404/4);
\draw [stealth-stealth, thick, orange(ryb)] (-1/8 + 1/2, 0.866025404*3/4 ) -- (3/4, 0);
\draw [stealth-stealth, thick, orange(ryb)] (-1/8 + 1/2 + 3/4, 0.866025404*3/4 ) -- (3/4, 0);
\draw [stealth-stealth, thick, orange(ryb)](-1/8 + 1/2 + 3/4, 0.866025404*3/4 )-- (-1/8 + 1/2, 0.866025404*3/4 );
\node at (1/4 + 1/8, 0.866025404/4) {\tiny $2$};
\node at (1/4 + 1/2, 0.866025404 ) {\tiny $7$};
\node at (1 - 1/16 + 1/4 - 1/8, 0.866025404/4) {\tiny $15$};
\node at (-1/8 + 1/2, 0.866025404*3/4 ) {\tiny $5$};
\node at (3/4, 0) {\tiny $12$};
\node at (1 - 1/16 -3/8 + 1/2, 0.866025404*3/4 ) {\tiny $16$};
\end{tikzpicture}
\end{equation}
As such, $V_2$ can be written as the sum of terms localized to hexagons.
By exploiting the Gauss law once again, we can rewrite the action of one of these terms on a full-packing dimer configuration $\ket{\psi}$ as:
\begin{align}
n_2 n_7 \ket{\psi} = \frac{1}{2} &\left[\left(n_2 - n_2 n_5 - n_2 n_6 - n_2 n_9 \right) \right. \nonumber \\
&\left. + \left(n_7 - n_5 n_7 - n_4 n_7 - n_3 n_7 \right) \right]\ket{\psi}
\end{align}
Note that the first terms in both parentheses will renormalize the detuning.
The second terms in both parentheses vanish when acting on any full-packing dimer configuration due to the blockade constraint.
The third terms look like a distance-$R_1$ density-density interaction and as such, when we sum over hexagons, they simply renormalize the detuning as per the discussion in the previous paragraph.
Therefore, the only remaining non-trivial terms are the fourth ones, which look like distance-$R_4$ density-density interactions.
Put succinctly, we have found that $n_2 n_7 = \frac{1}{2} \left(-n_2 n_9 - n_3 n_7 + \cdots \right)$ where the ``$\cdots$'' indicates terms that renormalize the detuning.
This fact implies that the distance-$R_4$ couplings will partially cancel out the distance-$R_2$ couplings as:
\begin{align}\label{eq-numericalestimate_prelim}
&\Omega R_b^6 \left[ \frac{n_2 n_7}{R_2^6} + \frac{ n_2 n_9}{R_4^6} +\frac{ n_3 n_7}{R_4^6} \right] \ket{\psi} \nonumber \\
&= \Omega R_b^6 \left[ \frac{1}{R_2^6} - \frac{2}{R_4^6} \right] n_2 n_7 \ket{\psi} = \alpha \Omega\left(\frac{R_b}{a}\right)^6 n_2 n_7 \ket{\psi}
\end{align}
where $\alpha
\approx 5 \cdot 10^{-4}$---which is reduced from the naive estimate by a factor of $3$.
As a remark, since there are two distance-$R_4$ couplings per distance-$R_2$ coupling, the above implies that the effect of $V_4$ has been completely canceled.
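The numbers quoted above are straightforward to reproduce numerically (a quick consistency check, using $R_2 = 3a$ and $R_4 = \sqrt{13}\,a$):

```python
# Residual coefficient of the distance-R2 coupling after the
# distance-R4 cancellation: alpha = (a/R2)^6 - 2*(a/R4)^6.
R2_6 = 3.0 ** 6          # (R2/a)^6 with R2 = 3a
R4_6 = 13.0 ** 3         # (R4/a)^6 with R4 = sqrt(13) a
alpha = 1 / R2_6 - 2 / R4_6
print(alpha)                      # ~4.6e-4, i.e. roughly 5e-4
print((1 / R2_6) / alpha)         # ~3: reduction relative to the naive estimate
```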
Finally, we can show that the distance-$R_3$ couplings $V_3$ also help cancel the effects of the distance-$R_2$ coupling.
To see why, first note that there are two types of distance-$R_3$ couplings.
The first is shown with a dashed red arrow in Eq.~\eqref{eq-RydbergDistances} and occurs between diametric ends of triangle pairs.
This coupling is identically zero on the space of full-packing dimer configurations: if it were not, there would be no dimer touching the vertex shared by the triangles, violating the Gauss law.
The second type couples atoms at diametrically opposite ends of the hexagons:
\begin{equation}
\begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1}{
\foreach \j in {0,...,1}{
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2*1/2}, {\j * 0.866025404 + 1/2*0.866025404}) -- cycle;
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2*1/2}, {\j * 0.866025404 - 1/2*0.866025404}) -- cycle;
}
}
\draw[stealth-stealth, thick, red] (1/4 + 1/2, 0.866025404 ) -- (3/4, 0);
\draw[stealth-stealth, thick, red] (-1/8 + 1/2, 0.866025404*3/4 ) -- (1 + 1/4 - 1/8, 0.866025404/4);
\draw[stealth-stealth, thick, red] (1/4 + 1/8, 0.866025404/4) -- (1 -3/8 + 1/2, 0.866025404*3/4 );
\node at (1/4 + 1/8, 0.866025404/4) {\tiny $2$};
\node at (1/4 + 1/2, 0.866025404 ) {\tiny $7$};
\node at (1 - 1/16 + 1/4 - 1/8, 0.866025404/4) {\tiny $15$};
\node at (-1/8 + 1/2, 0.866025404*3/4 ) {\tiny $5$};
\node at (3/4, 0) {\tiny $12$};
\node at (1 - 1/16 -3/8 + 1/2, 0.866025404*3/4 ) {\tiny $16$};
\end{tikzpicture}
\end{equation}
As such, $V_3$ can also be written as the sum of terms localized to the hexagons on the ruby lattice.
Using the Gauss law again, one of these terms can be rewritten as:
\begin{align} \label{eq-n2n16}
n_2 n_{16} \ket{\psi} &= \frac{1}{4} \left[ n_2 (2 - n_7 - n_8 - n_{18} - n_{17} - n_{15} - n_{14}) \right. \nonumber \\
&+ \left.(2 - n_5 - n_4 - n_3 - n_1 - n_{12} - n_{10}) n_{16}\right] \ket{\psi}
\end{align}
where the terms in parenthesis lead to renormalization of the detuning as well as a distance-$R_2$, distance-$R_5$, distance-$R_6$, distance-$R_6$, distance-$R_2$, and distance-$R_5$ coupling in that order.
First, note that for every distance-$R_3$ coupling, there are exactly four distance-$R_6$ couplings.
Consequently, the above will eliminate the distance-$R_6$ coupling, $V_6$, and we can discard the $n_i n_j$ terms in Eq.~\eqref{eq-n2n16} with $|r_i - r_j| = R_6$ due to near perfect cancellation with $V_6$: $1/4 (a/R_3)^6 - (a/R_6)^6 \approx -10^{-6}$.
Next, the $n_i n_j$ terms in Eq.~\eqref{eq-n2n16} with $|r_i - r_j| = R_2$ will further cancel the magnitude of the distance-$R_2$ coupling, $V_2$.
In particular, the coefficient $\alpha$ in Eq.~\eqref{eq-numericalestimate_prelim} will be lowered to $\alpha \to \beta = \alpha - \frac{1}{4} \left(\frac{a}{R_3}\right)^6
\approx 3 \cdot 10^{-4} $.
Finally, terms such as $n_2 n_8$ destructively interfere with the distance-$R_5$ couplings, $V_5$, which then appear with magnitude $(a/R_5)^6 - \frac{1}{4} (a/R_3)^6 \approx 10^{-4} \approx \frac{1}{2} (a/R_5)^6$.
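The cancellation coefficients quoted in this paragraph can be checked directly (a quick numerical aside, with $R_3 = 2\sqrt{3}\,a$, $R_5 = 4a$, $R_6 = \sqrt{19}\,a$):

```python
R3_6 = 12.0 ** 3   # (R3/a)^6 with R3 = 2*sqrt(3) a
R5_6 = 4.0 ** 6    # (R5/a)^6 with R5 = 4a
R6_6 = 19.0 ** 3   # (R6/a)^6 with R6 = sqrt(19) a

alpha = 1 / 3.0 ** 6 - 2 / 13.0 ** 3   # from the R2/R4 analysis
beta = alpha - 0.25 / R3_6             # further reduced by V3

print(0.25 / R3_6 - 1 / R6_6)   # ~ -1e-6: near-perfect cancellation of V6
print(beta)                     # ~3e-4
print(1 / R5_6 - 0.25 / R3_6)   # ~1e-4, roughly half of (a/R5)^6
```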
Note that unlike the case with $V_6$ and $V_4$, not all terms in $V_5$ are cancelled using $V_3$.
In particular, terms that couple atoms within a ``line'' of the ruby lattice, such as:
\begin{equation}
\begin{tikzpicture}[scale = 0.8, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,0}{
\foreach \j in {0,...,1}{
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2*1/2}, {\j * 0.866025404 + 1/2*0.866025404}) -- cycle;
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2*1/2}, {\j * 0.866025404 - 1/2*0.866025404}) -- cycle;
\filldraw ({\i + \j * 1/2 + 1/4}, {\j * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/4}, {\j * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 + 1/8 }, {\j * 0.866025404 + 1/4 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/8 }, {\j * 0.866025404 - 1/4 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 + 1/8 + 1/4 }, {\j * 0.866025404 + 1/4 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/8 -1/4}, {\j * 0.866025404 - 1/4 * 0.866025404}) circle (1 pt);
}
}
\draw [-stealth, thick, magenta] (1/8, 0.866025404/4) -- (1/4 + 3/8, 5/4* 0.866025404 );
\end{tikzpicture}
\end{equation}
are not canceled out.
Hence, the remaining Hamiltonian projected into the space of full-packing dimer configurations will be:
\begin{equation} \label{eq-projected_ham}
H_{LR} \approx \Omega \left( \frac{R_b}{a} \right)^{\hspace{-1mm} 6} \left[ \beta \hspace{-3mm} \sum_{|r_i - r_j| = R_2 } n_i n_j + \hspace{-3mm} \sum_{|r_i - r_j| = R_5} \frac{ \gamma_{i, j} n_i n_j}{(R_5/a)^6} \right]
\end{equation}
where $\gamma_{i, j}$ equals $1/2$ or $1$ depending on whether or not the term was partially cancelled by $V_3$.
We will estimate the $m$-anyon energy scale by summing an estimate for the maximum possible energy density (per qubit) from the first and second terms independently.
Since the $m$-anyon energy scale corresponds to energy density splittings between dimer configurations and both terms in Eq.~\eqref{eq-projected_ham} are positive semi-definite, this will provide an upper bound on these splittings.
Note that the first term couples qubits in the manner illustrated in Eq.~\eqref{eq-Hexagon1}.
First and foremost, the eigenstate with maximum eigenvalue of $\sum_{|r_i - r_j| = R_2} n_i n_j$ is a valence bond solid configuration on the kagome lattice analyzed in Ref.~\onlinecite{Nikolic03}.
Such a configuration has a twelve hexagon unit cell with the value of $\sum_{|r_i - r_j| = R_2} n_i n_j$ on the unit cell being six (corresponding to two ``perfect'' hexagons).
As such, the maximum energy density per hexagon of the first term will be $\Omega \left( \frac{R_b}{a} \right)^6 \frac{\beta}{2}$ which per qubit is $\Omega \left( \frac{R_b}{a} \right)^6 \frac{\beta}{12}$.
For the second term, it is harder to precisely determine the maximum energy density per qubit.
To gain an estimate for the scale of this term, we note that since the distance-$R_5$ coupling pairs qubits that are far apart relative to the blockade radius $2a < R_b < \sqrt{7} a$, one might expect the maximum eigenvalue of $n_i n_j$ (with $|r_i - r_j| = R_5$) to be close, on average, to the uncorrelated value for a dimer configuration, $\langle n_i \rangle \langle n_j \rangle = 1/16$.
Hence, since there are three distance-$R_5$ couplings per qubit, with two occurring with strength $\Omega \left( \frac{R_b}{a} \right)^6 \times \frac{1}{2(R_5/a)^6}$ and one with strength $\Omega \left( \frac{R_b}{a} \right)^6 \cdot \frac{1}{(R_5/a)^6}$, an estimate for the rough scale of the maximum energy density per qubit will be $\Omega \left( \frac{R_b}{a} \right)^6 \frac{2}{ (R_5/a)^6} \times \frac{1}{16}$.
(In fact, for the distance considered in the previous paragraph, which is considerably shorter, the analysis gave an effective $1/12$ which is already close to the uncorrelated value of $1/16$).
Thus, our estimate for the $m$-anyon energy scale is:
\begin{align} \label{eq-numericalestimate}
E_m \lesssim \Omega \left( \frac{R_b}{a} \right)^6 \left[ \frac{\beta}{12} + \frac{a^6}{8 R_5^6} \right]
\end{align}
where the term in brackets is approximately $5.6 \times 10^{-5}$, which is nearly $25$ times smaller than the naive expectation.
We note that the above gives a soft upper bound on the energy scale of splittings induced by the Rydberg interaction (See Subsubsection~\ref{subsubsec-NumericalEstimates} for discussion of this energy scale compared to experimental time scales).
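The figure of merit in Eq.~\eqref{eq-numericalestimate} follows directly from the coefficients computed above (a quick numerical check):

```python
# beta combines the R2/R4 cancellation with the V3 reduction.
beta = (1 / 3.0 ** 6 - 2 / 13.0 ** 3) - 0.25 / 12.0 ** 3
bracket = beta / 12 + 1 / (8 * 4.0 ** 6)   # beta/12 + a^6/(8 R5^6)
naive = 1 / 3.0 ** 6                        # bare (a/R2)^6 expectation

print(bracket)          # ~5.6e-5
print(naive / bracket)  # ~24: "nearly 25 times smaller"
```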
\subsubsection{Phase Diagram with Long-Range Tails}
Since the long-range tails of the Rydberg interaction can generate $m$-anyon fluctuations, they could potentially confine the QSL phase of the model in the absence of these tails.
As a consequence, Refs.~\onlinecite{Verresen21,Semeghini21} studied the ground state phase diagram of the Rydberg model in the presence of the long-range tails, in addition to the PXP model phase diagram. In particular, we consider the truncated van der Waals model in Fig.~\ref{fig-RydbergPotential}, where we keep the van der Waals interactions within a distance $r<R_\textrm{trunc}$.
Let us first consider the particular instance of the ruby lattice defined by the qubits on the bonds of the kagome lattice. In this case, Ref.~\onlinecite{Verresen21} found that upon including the effects of the long-range Rydberg interaction around $R_\textrm{trunc} \approx 6a$, the QSL eventually disappears and the phase diagram is:
\begin{equation*}
\hspace{7 mm}\begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[-stealth, line width=0.4mm] (0, 0) -- (5, 0);
\node at (5.5,0) {\normalsize $\delta/\Omega$};
\draw[line width=0.4mm] (2.5, 0) -- (2.5, 0.2);
\node at (1.25,0.3) {\small Higgs};
\node at (3.75,0.3) {\small Confined};
\node at (2.5, -0.2) {\small $\approx 3.5$};
\end{tikzpicture}
\end{equation*}
where the Higgs and confined phase are separated by a first-order phase transition.
As a consequence, for the lattice geometry simulated in the Rydberg atom experiment \cite{Semeghini21}, the system did not have a QSL phase in its ground state phase diagram.
We note that Ref.~\onlinecite{Verresen21} showed that the ground state spin liquid can persist by considering an elongated ruby lattice, where the triangles are placed further apart. In particular, while for the bonds of the kagome lattice the aspect ratio of the rectangles of the ruby lattice is $\rho = \sqrt{3}$, increasing this to $\rho = 3$ stabilizes a spin liquid in the ground state, even as one arbitrarily increases $R_\textrm{trunc}$.
\subsection{Dynamical Preparation of Quantum Spin Lake with Rydberg Atoms}
With the equilibrium physics in hand, we can see that the experiment is precisely in the regime where one would expect to dynamically prepare the quantum spin lake.
In particular, as in the toric code case, the dynamics of the $e$-particle will be fast, as it is set by the strong Rydberg interaction and the Rabi oscillation scale, while the dynamics of the $m$-particle will be slow, as it is set by terms generated at high orders in perturbation theory and by the small energy scales arising from the long-range tails of the Rydberg interaction.
In this subsection, we address why we would expect that dynamics that are slow relative to $e$-anyons and fast relative to $m$-anyons would produce a quantum spin lake in the Rydberg system.
Subsequently, we will make numerical estimates for the $m$-anyon scale in the experiment and demonstrate that the time and energy scales used in the Rydberg atom experiment place us in the regime for producing a quantum spin lake.
\subsubsection{Quantum Spin Lakes from Rydberg Atoms}
We aim to show that the ground state of the Higgs phase in the Rydberg model yields a quantum spin liquid when we project out Gauss law violations.
By translation invariance and the low-entangled nature of the Higgs phase, a mean-field ansatz for the initial state of the sweep can be expressed as:
\begin{equation}\label{eq-RydbergMFansatz}
\ket{\psi(0)} \propto \mathcal{P}_{\text{blockade}} \bigotimes_{i} \left( \ket{\downarrow} + \varepsilon \ket{\uparrow} \right)
\end{equation}
where $\mathcal{P}_{\text{blockade}}$ is a projector onto blockade satisfying states (defined below Eq.~\eqref{eq-actualHPXP}).
Then, by Eq.~\eqref{eq-sweeping-proj}, the final state under the dynamics will be:
\begin{equation} \label{eq-RydbergProj}
\ket{\psi(T)} \propto \mathcal{P}_{G} \ket{\psi(0)} = \ket{\text{RVB}}
\end{equation}
where $\mathcal{P}_{G} = \prod_v (1 - G_v)/2$, $G_v$ is defined in Eq.~\eqref{eq-RydbergGauss}, and $\ket{\text{RVB}}$ is defined in Eq.~\eqref{eq-RydRVB}.
Crucially, the above follows from the fact that each dimer configuration has the same number of dimers and thus each enters with the same amplitude in Eq.~\eqref{eq-RydbergMFansatz}.
Consequently, the state prepared in dynamics will resemble a QSL.
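Schematically, writing $\mathcal{D}$ for the blockade- and Gauss-law-satisfying dimer configurations and $|\mathcal{D}|$ for the number of dimers in $\mathcal{D}$, the projection acts as
\begin{equation*}
\mathcal{P}_{G}\, \mathcal{P}_{\text{blockade}} \bigotimes_{i} \left( \ket{\downarrow} + \varepsilon \ket{\uparrow} \right) = \sum_{\mathcal{D}} \varepsilon^{|\mathcal{D}|} \ket{\mathcal{D}} \propto \sum_{\mathcal{D}} \ket{\mathcal{D}},
\end{equation*}
and the equal-weight superposition emerges precisely because $|\mathcal{D}|$ is the same for every dimer covering.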
\subsubsection{Numerical Estimates for Regime of the Experiment} \label{subsubsec-NumericalEstimates}
We conclude by numerically estimating what dynamical regime the Rydberg atom experiment of Ref.~\onlinecite{Semeghini21} was in.
Since the Rabi frequency and the Gauss law (enforced by the detuning and Rydberg blockade) are both large energy scales in the problem, the energy scales governing the equilibration of $e$-anyons are large, as required.
As such, here, we aim to estimate a figure of merit for the density of $m$-anyons produced during the sweep of the experiment.
In particular, we aim to compute $E_m T_{\text{exp}}$ where $E_m$ is the energy scale associated with the dynamics of $m$-anyons and $T_{\text{exp}}$ is the amount of time that the experiment spends in the regime of parameter space with a constrained low-energy subspace.
To do so, we remark that in the Rydberg atom experiment, the Rabi frequency and blockade radius were reported to be $\Omega = 2\pi \times 1.4\text{ MHz}$ and $R_b/a = 2.4$.
Ignoring the sixth order plaquette resonance term, we use Eq.~\eqref{eq-numericalestimate} to get an estimate for $E_m = 0.011 \cdot \Omega = 0.096 \text{ MHz}$ which is two orders of magnitude smaller than the characteristic energy scale for $e$-anyons!
Moreover, a rough estimate of $T_{\text{exp}}$, which we take to be the amount of time the experiment spends past $\Delta/\Omega \approx 3.5$, is $T_{\text{exp}} = 0.5\ \mu\text{s}$.
As such, the dimensionless figure of merit $E_m T_{\text{exp}} \approx 0.048 \ll 1$.
As a consequence, the density of $m$-anyons nucleated during the dynamical sweep in the Rydberg experiment of Ref.~\onlinecite{Semeghini21} is expected to be low, putting the experiment in the regime for preparation of the quantum spin lake.
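As a sanity check on the arithmetic above, the figure of merit can be reproduced in a few lines (a minimal sketch; we assume the quoted $\Omega$ is an angular frequency, so all rates are in $\mathrm{rad}/\mu\mathrm{s}$):

```python
import math

# Rough reproduction of the figure-of-merit estimate quoted in the text.
# Assumption: Omega is an angular frequency (2*pi x 1.4 MHz -> rad/us).
Omega = 2 * math.pi * 1.4     # Rabi frequency, rad/us
E_m = 0.011 * Omega           # perturbative m-anyon energy scale, rad/us
T_exp = 0.5                   # time spent in the constrained regime, us

figure_of_merit = E_m * T_exp  # dimensionless
print(f"E_m = {E_m:.3f} rad/us, E_m * T_exp = {figure_of_merit:.3f}")
```

The product comes out to roughly $0.05 \ll 1$, matching the estimate in the text.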
We conclude by remarking that the large separation between the energy scales controlling the dynamics of $e$-anyons and $m$-anyons suggests that it should be possible to ignore the effects of $m$-anyons when numerically and analytically studying experimental settings such as the Rydberg atom experiment.
This will be explored and confirmed in further detail in the following section.
\section{Resonating without Resonances: Spin Lakes on Trees} \label{sec-Tree}
The discussions of the previous two sections concluded with two findings.
First, in Section~\ref{sec-Summary}, we found that the preparation of a quantum spin lake was limited by the energy scale of $m$-anyon excitations, which is set by any confining fields in the problem and a perturbatively generated resonance term: the larger the energy scale of $m$-anyons, the smaller the spin lake one can prepare.
A natural conclusion of this is that, in the absence of confining fields, the presence of perturbatively generated resonance terms is what limits the preparation of a quantum spin lake on the ruby lattice!
This is a striking reversal of logic relative to the equilibrium case where a proper combination of resonances are precisely what stabilize the QSL.
Second, in Section~\ref{sec-Experiment}, we found that, in experimentally relevant settings, the aforementioned perturbative resonances and confining terms are quite small and hence are predicted to not influence the short-time dynamics accessible in experiments.
As a consequence, as alluded to in the previous section, it should be possible within this time-frame to study the dynamics numerically and analytically by ignoring the effects of $m$-anyons.
In this section, we bring these two observations together by studying the Rydberg model on a tree lattice version of the ruby lattice.
In particular, we envision putting qubits on the links of the so-called Husimi cactus lattice: a version of the kagome lattice with no hexagonal loops [See Figure~\ref{fig:Z2Tree}(a) and Subsection~\ref{subsec-Z2TreeModel} for more detail].
The motivation to do so is due to a unique feature of this tree lattice.
Namely, resonances generated through the $\Omega$ term of the Rydberg model do not occur at any finite order in perturbation theory---the Rydberg model on this lattice has no resonances!
By using infinite tree tensor network methods (described in Subsection~\ref{subsec-Z2TTNMethod}), we numerically demonstrate the preparation of a quantum spin lake for the PXP model [defined by Eq.~\eqref{eq-HPXP} with $V_{ij}$ taken to be that of Fig.~\ref{fig-RydbergPotential}(b)] of the Husimi cactus in Subsection~\ref{subsec-Z2TreePXPNumerics}, thereby confirming the aforementioned reversal of logic in the most extreme setting.
The complete absence of $m$-anyon dynamics allows one to prepare an arbitrarily large quantum spin lake.
In addition to its conceptual value, we show that the Rydberg model on the tree can correctly approximate the experimental setup within time-scales over which one does not resolve the $m$-anyon dynamics.
Indeed, in Subsection~\ref{subsec-Z2TreeVdWNumerics}, we show that tree tensor network simulations of the Rydberg model with the more experimentally faithful truncated VdW potential [given by Fig.~\ref{fig-RydbergPotential}(c) with $R_{\text{trunc}} = 2 \sqrt{3} a$] on the tree lattice are able to match the experimental data from the Rydberg experiment just as well as cylinder matrix product state simulations of the true ruby lattice.
Moreover, we find that such simulations are roughly two orders of magnitude faster than the cylinder matrix product state simulations that are traditionally used to study dynamics of such systems.
As such, this identifies tree tensor network methods as an ideal numerical tool for studying the dynamical preparation of QSL-like order in analog NISQ devices.
\begin{figure}
\centering
\includegraphics[width = 247pt]{Figure_4_v3.pdf}
\caption{\textbf{$\mathbb{Z}_2$ Spin Lakes on the Husimi Cactus.}
(a) The Husimi cactus lattice is a tree version of the kagome lattice. Here, we simulate the PXP model for Rydberg atoms living on the links of the Husimi cactus by using a two-tensor infinite tree tensor network ansatz, with the first tensor encoding the state of the $A$ triangles and the second encoding the state of the $B$ triangles. (b) In our simulations, we dynamically sweep the values of the detuning $\delta$ and Rabi drive $\Omega$ in such a way that our sweep adiabatically prepares the ground state for $\Omega = 1$ and $\delta$ large and negative. Subsequently, $\Omega$ remains constant and set to $1$ as $\delta$ sweeps into a Gauss law satisfying regime. At the end of our dynamical sweep, the resulting state displays QSL-like signatures. In particular, in panel (c), we show that the state has the same entanglement as the fixed-point RVB state on the Husimi cactus, and in panel (d) we show that the final state is approximately stabilized by the stabilizers of the RVB (defined in Eq.~\eqref{eq-RydbergGauss} and Eq.~\eqref{eq-treewilson}). For both the entanglement and the stabilizers, the value approaches the fixed-point value more closely with increased total time of the sweep. In our numerics, we use a bond dimension of $\chi_{\alpha} = 7$ ($\alpha = a, b, c$) and a Trotter step size of $dt = 0.005$.}
\label{fig:Z2Tree}
\end{figure}
\subsection{Rydberg Models on the Husimi Cactus} \label{subsec-Z2TreeModel}
As stated earlier, we want to study a version of the Rydberg model with qubits on the links of the Husimi cactus lattice, a tree version of the kagome lattice [See Fig.~\ref{fig:Z2Tree}(a)].
While the global structure of the Husimi cactus differs from the kagome lattice, the local structure and connectivity of the lattice is identical.
As such, we can consider a version of the Rydberg model on links of the Husimi cactus.
In particular, the PXP model Eq.~\eqref{eq-HPXP} can be directly carried over to this tree geometry, where we understand the blockade interactions to project out any two neighboring bonds from both being occupied with a dimer.
Later in this section we will also consider a slightly modified version: while we will not include longer-range $\sim 1/r^6$ interactions (which admittedly require care to define on a tree geometry), we will make the interactions within the blockade radius spatially dependent, choosing the strengths we had on the planar lattice for the experimental choice of blockade radius $R_b = 2.4a$. In particular, while the shortest intra-triangle interactions are still infinitely strong (i.e., there is never more than one dimer per triangle), we set the second-nearest-neighbor interaction to $\Omega \left( \frac{2.4}{\sqrt{3}} \right)^6 \approx 7.08 \Omega$ and the third to $\Omega \left( \frac{2.4}{2} \right)^6 \approx 2.99 \Omega$.
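The quoted couplings follow directly from the van der Waals form $V(r) = \Omega (R_b/r)^6$ with $R_b = 2.4a$; a quick numerical check (a sketch, with distances in units of the kagome spacing $a$ and couplings in units of $\Omega$):

```python
import math

# Blockade couplings V(r)/Omega = (R_b / r)**6 with R_b = 2.4 a.
# Distances within the blockade radius: a (intra-triangle), sqrt(3) a, 2 a.
R_b = 2.4
V_intra = (R_b / 1.0) ** 6            # nearest (intra-triangle) pair, ~191
V_second = (R_b / math.sqrt(3)) ** 6  # second-nearest neighbor, ~7.08
V_third = (R_b / 2.0) ** 6            # third-nearest neighbor, ~2.99
print(V_intra, V_second, V_third)
```

The intra-triangle value $\approx 191\,\Omega$ is the one invoked below for the truncated van der Waals model; in the tree model it is replaced by a hard constraint.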
\subsection{Tree Tensor Network Numerical Method}\label{subsec-Z2TTNMethod}
To analyze either Rydberg model, our approach will be to numerically simulate the dynamics using an infinite tree tensor network approach \cite{Vidal_Tree, Vidal_Tree_2, Murg_Tree}.
In particular, we will make the following translationally invariant ansatz for the wavefunction defined on the lattice of Fig.~\ref{fig:Z2Tree}(a):
\begin{equation} \label{eq-TTN}
\includegraphics[width = 70pt, valign = c]{TreeTN.pdf}
\end{equation}
where $A$ and $B$ are $4 \times \chi_a \times \chi_b \times \chi_c$ tensors that encode the state of the two triangles forming the unit cell of the lattice, and the $\{\chi_{\alpha}\}$, called the bond dimensions, are the dimensions of the non-dangling legs of the tensors.
The physical legs of the $A$ and $B$ tensors have dimension four, corresponding to the following four states:
\begin{equation} \label{eq-Atriangle}
d =
\quad
\begin{tikzpicture}[scale = 2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-.7*(0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- ({(.7*0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- cycle;
\node at (0, {-(.7*0.25) * 0.866025404 -0.03}) {\small $0$};
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale = 2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-.7*(0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- ({(.7*0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- cycle;
\draw[red, fill = red] ({0}, {-(.7*0.5) * 0.866025404}) ellipse (0.175 and 0.03);
\node at (0, {-(.7*0.25) * 0.866025404 -0.03}) {\small $1$};
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale = 2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-.7*(0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- ({(.7*0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- cycle;
\draw[red, fill = red, rotate around={60:({-.35*(0.5) * 1/2}, {-(.35*0.5) * 0.866025404})}] ({-.35*(0.5) * 1/2}, {-(.35*0.5) * 0.866025404}) ellipse (0.175 and 0.03);
\node at (0, {-(.7*0.25) * 0.866025404 -0.03}) {\small $2$};
\end{tikzpicture} \quad
\begin{tikzpicture}[scale = 2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-.7*(0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- ({(.7*0.5) * 1/2}, {-(.7*0.5) * 0.866025404}) -- cycle;
\draw[red, fill = red, rotate around={-60:({(.35*0.5) * 1/2}, {-(.35*0.5) * 0.866025404})}] ({(.35*0.5) * 1/2}, {-(.35*0.5) * 0.866025404}) ellipse (0.175 and 0.03);
\node at (0, {-(.7*0.25) * 0.866025404 -0.03}) {\small $3$};
\end{tikzpicture}
\end{equation}
for $A$ and:
\begin{equation}\label{eq-Btriangle}
d =\quad \begin{tikzpicture}[scale = 2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-0.7*(0.5) * 1/2}, {0.7*0.5 * 0.866025404}) -- ({0.7*0.5 * 1/2}, {0.7*0.5 * 0.866025404}) -- cycle;
\node at (0, {(.7*0.25) * 0.866025404 +0.03}) {\small $0$};
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale = 2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-0.7*(0.5) * 1/2}, {0.7*0.5 * 0.866025404}) -- ({0.7*0.5 * 1/2}, {0.7*0.5 * 0.866025404}) -- cycle;
\node at (0, {(.7*0.25) * 0.866025404 +0.03}) {\small $1$};
\draw[red, fill = red, rotate around={-60:({-0.35*(0.5) * 1/2}, {0.35*0.5 * 0.866025404})}] ({-0.35*(0.5) * 1/2}, {0.35*0.5 * 0.866025404}) ellipse (0.175 and 0.03);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale = 2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-0.7*(0.5) * 1/2}, {0.7*0.5 * 0.866025404}) -- ({0.7*0.5 * 1/2}, {0.7*0.5 * 0.866025404}) -- cycle;
\node at (0, {(.7*0.25) * 0.866025404 +0.03}) {\small $2$};
\draw[red, fill = red, rotate around={60:({0.35*(0.5) * 1/2}, {0.35*0.5 * 0.866025404})}] ({0.35*(0.5) * 1/2}, {0.35*0.5 * 0.866025404}) ellipse (0.175 and 0.03);
\end{tikzpicture}
\quad
\begin{tikzpicture}[scale = 2, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0,0) -- ({-0.7*(0.5) * 1/2}, {0.7*0.5 * 0.866025404}) -- ({0.7*0.5 * 1/2}, {0.7*0.5 * 0.866025404}) -- cycle;
\node at (0, {(.7*0.25) * 0.866025404 +0.03}) {\small $3$};
\draw[red, fill = red] ({0}, {0.7*0.5 * 0.866025404}) ellipse (0.175 and 0.03);
\end{tikzpicture}
\end{equation}
for $B$.
By encoding the local Hilbert space of the triangles of the lattice in the manner above, we explicitly enforce the blockade constraint inside the triangles, which amounts to assuming that the Rydberg interaction is effectively infinite for qubits within the same triangle.
This explicit enforcement is exact for the PXP model and is a good approximation for the truncated van der Waals model with $1/R^6$ tails, since for $R_b = 2.4 a$ the interaction within the triangles is $\approx 191\, \Omega$ (two orders of magnitude larger than every other coupling in the system).
As such, our ansatz enables studying both models.
Our ansatz contains three additional tensors $a, b,$ and $c$ that are diagonal matrices which live on the bonds between the $A$ and $B$ tensors.
These tensors encode the Schmidt values of the tree tensor network state under bipartitioning, similar to the diagonal tensors in the mixed canonical form of the matrix product state \cite{Li_TTN, Hauschild18}.
Using such an ansatz, we can efficiently simulate Trotterized dynamics on this system \cite{Li_TTN}.
\subsection{Large Spin Lakes in the PXP Model on a Tree}\label{subsec-Z2TreePXPNumerics}
\begin{figure}
\centering
\includegraphics[width = 247 pt]{Experimental_Trees_Figure.pdf}
\caption{ \textbf{Comparing Tree Simulations, Matrix Product State Cylinder Simulations, and Experimental Data for the Rydberg Atom System.} We simulate the dynamical sweep performed in the Rydberg atom experiment of Ref.~\onlinecite{Semeghini21} on a tree lattice instead of the usual ruby lattice. The dynamical sweep is pulled directly from the experimental paper and is shown in the inset of panel (a). In panel (a), we compare the time trace of the density $\langle n(t) \rangle$ between tree tensor network simulations (dark orange), cylinder matrix product state simulations (light orange), and experiment (black, dashed). We find excellent agreement between the three, with the two numerical methods being virtually indistinguishable. In panel (b), we compare the probability of observing monomers (empty vertices), dimers, and double dimers in the wavefunction as a function of $\delta(t)/\Omega(t)$. We once again find near-perfect agreement between conventional matrix product state simulations and tree lattice simulations. Both simulations qualitatively reproduce the results of the experiment for all times and quantitatively reproduce the results of the experiment at early times. At late times, the experiment finds a higher probability of observing monomers and double dimers.
In our tree lattice numerics, we use a bond dimension of $\chi_{\alpha} = 7$ ($\alpha = a, b, c$) and a Trotter step size of $dt = 0.01$.}
\label{fig:experimental_trees}
\end{figure}
Numerical method in hand, we now simulate the dynamics of the PXP model defined on the links of the Husimi cactus.
Since the dynamics of $m$-anyons are infinitely slow in this model, our goal is to demonstrate the emergence of a quantum spin lake whose fidelity increases as we decrease the sweep rate.
We start by initializing our state in the ground state of the Higgs phase ($\delta/\Omega = -14$).
To ensure we initialize the state properly, we start by setting $\Omega = 0$ and initializing the state with no dimers, the exact ground state.
Subsequently, we ramp $\delta$ and $\Omega$ in the fashion shown in Fig.~\ref{fig:Z2Tree}(b) to prepare the ground state of the Higgs phase adiabatically and then sweep to the Gauss law satisfying phase ($\delta/\Omega = 14$) \footnote{The exact nature of the ground state at $\delta/\Omega$ large and positive is not important to the discussion in this section. It is sufficient that, at low energies, the system satisfies Eq.~\eqref{eq-RydbergGauss}.}.
To diagnose the onset of the quantum spin lake, we use two approaches.
First, we compute the entanglement across a bond of the tree tensor network (equivalently, a vertex of the original Husimi cactus lattice), and compare to the expected value of the fixed point $\ket{\text{RVB}}$ state on the tree lattice which we find to be $\log(2)$.
Indeed, by plotting the entanglement entropy as a function of time in Fig.~\ref{fig:Z2Tree}(c) (in units of the total time of the sweep, which we vary), we find that the entanglement entropy saturates to $\log(2)$ after crossing the transition, with convergence improving as a function of total time.
This is consistent with the emergence of the quantum spin lake.
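Heuristically, the $\log(2)$ value can be traced to the Gauss law: exactly one dimer touches the cut vertex, so the Schmidt decomposition across the cut has two sectors, corresponding to that dimer residing on either side. Assuming, as our numerics indicate, that the two sectors carry equal weight at the fixed point,
\begin{equation*}
S = -\tfrac{1}{2}\log\tfrac{1}{2} - \tfrac{1}{2}\log\tfrac{1}{2} = \log(2).
\end{equation*}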
Our second approach will be to measure the stabilizers of the original ruby lattice Rydberg model.
While the Gauss law ('t Hooft) loop of Eq.~\eqref{eq-RydbergGauss} will remain unchanged, the resonance (Wilson) loop of Eq.~\eqref{eq-Rydberg-Resonances} becomes an infinitely long line on the Husimi cactus:
\begin{equation*}
\begin{tikzpicture} [scale = 0.9, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (1, 0) -- (1/2, 0.866025404) -- cycle;
\draw[gray] (0, 0) -- ( -1/4, -0.866025404/2) -- (-1/2, 0) -- cycle;
\draw[gray] (1, 0) -- ( 1 + 1/4, -0.866025404/2) -- (1 + 1/2, 0) -- cycle;
\draw[gray] (1/2, 0.866025404) -- (1/2 - 1/4, 0.866025404 * 3/2) -- (1/2 + 1/4, 0.866025404 * 3/2) -- cycle;
\draw[dodgerblue, decorate, decoration={snake, segment length=3.6mm}, line width = .35mm] ( -1/4 * 1.4, -0.866025404/2 * 1.4) -- ({(1/2 + 1/4)*1.2}, {(0.866025404 * 3/2) * 1.2});
\draw[-stealth, dodgerblue, line width = 0.5mm] ({(1/2 + 1/4)*1.1}, {(0.866025404 * 3/2) * 1.1 - 0.5}) -- ({(1/2 + 1/4)*1.5}, {(0.866025404 * 3/2) * 1.5 - 0.5});
\draw[-stealth, dodgerblue, line width = 0.5mm] ({(1/2 + 1/4)*(0.2)}, {(0.866025404 * 3/2) * (0.2) - 0.5}) -- ({(1/2 + 1/4)*(-0.2)}, {(0.866025404 * 3/2) * (-0.2) - 0.5});
\end{tikzpicture}
\end{equation*}
Since even the smallest perturbation of the RVB away from its fixed point will generically endow the Wilson line with a line tension, rendering its expectation value zero, we instead compute the expectation value of the Wilson line per unit length.
This can be done using the tree tensor network ansatz by constructing the following mixed transfer matrix:
\begin{equation} \label{eq-treewilson}
T = \includegraphics[width = 75pt, valign = c]{Transfer_Matrix_Wilson5.pdf}
\end{equation}
where the blue squares represent the action of the Wilson line (left of Eq.~\eqref{eq-RydbergWilson}).
Subsequently, we compute the square root of the largest eigenvalue of $T$, which encodes the expectation value of the Wilson line per unit length.
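The extraction step is generic once $T$ is in hand; a toy illustration with a placeholder $2\times 2$ matrix (not the actual network data, and assuming the network is canonicalized so the norm transfer matrix has unit leading eigenvalue):

```python
import numpy as np

# Toy example: the expectation value per unit length carried by a (mixed)
# transfer matrix is governed by its dominant eigenvalue.  T below is a
# placeholder standing in for the contraction in the text.
T = np.array([[0.81, 0.05],
              [0.05, 0.25]])

lam_max = np.abs(np.linalg.eigvals(T)).max()
w = np.sqrt(lam_max)  # square root, as prescribed in the text
print(f"Wilson line per unit length: {w:.4f}")
```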
With these two stabilizers, we can plot the expectation value of the Gauss loop $\langle G_v\rangle$ and the Wilson line per unit length $\langle w \rangle$ as a function of time across the sweep [Fig.~\ref{fig:Z2Tree}(d)] in units of the total time.
We find that both stabilizers approach and saturate their fixed point value with the deviation from the fixed point decreasing with increased sweep time, once again signaling the onset of the quantum spin lake!
Having numerically demonstrated the preparation of a quantum spin lake for the PXP model on the links of the Husimi cactus lattice---a model with no dimer resonances---we now raise the question: where did the resonances in the final state come from?
The answer is elucidated by the projection formula of Eq.~\eqref{eq-RydbergProj}.
In particular, during the dynamics, when the initially condensed $e$-anyons are equilibrated (projected) out, onsite quantum fluctuations of the initial state get elevated to many-body fluctuations in the final state.
\subsection{ Tree Simulations as Numerical Tools for Experiments} \label{subsec-Z2TreeVdWNumerics}
So far, we have examined a model of Rydberg atoms on the links of the Husimi cactus to highlight a conceptual point about the dynamical preparation of quantum spin lakes---preparation benefits from the perfectly slow $m$-anyon dynamics of Rydberg atoms on a tree.
While the dynamics of $m$-anyons on normal lattices are never perfectly slow, they are often significantly slower than the dynamics of $e$-anyons.
Indeed, in Section~\ref{subsubsec-NumericalEstimates}, we showed that the energy scale controlling the dynamics of the $m$-anyons is two orders of magnitude smaller than the one for $e$-anyons for the Rydberg atom experiment in Ref.~\onlinecite{Semeghini21}.
As a consequence, one might postulate that the tree model studied in this section may be able to approximately capture the dynamics of the aforementioned experiment.
Here, we show that, indeed, this is the case.
By numerically simulating the truncated van der Waals Rydberg model on the Husimi cactus lattice (defined in Section~\ref{subsec-Z2TreeModel}), we show that the tree model is able to capture the results of the Rydberg atom experiment nearly as well as matrix product state simulations done for the regular ruby lattice.
Moreover, the tree tensor network numerics are roughly two orders of magnitude faster than the matrix product state simulations performed.
As such, we propose that tree tensor network simulations of tree lattices can actually be used as a practical experimental aid to study the dynamical preparation of QSL-like order in NISQ devices.
We now numerically simulate the dynamical sweep performed in the experiment for the truncated van der Waals model on the tree lattice.
To accurately reproduce the dynamics of the experiment, we use the same sweep profiles of $\Omega(t)$ and $\delta(t)$ as used in the experiment (inset of Fig.~\ref{fig:experimental_trees}(a)) and the same value of the blockade radius $R_b = 2.4 a$ \cite{Semeghini21}.
We now compare the results from our tree tensor network simulations of Rydberg atoms on the Husimi cactus, the matrix product state simulations of Rydberg atoms on the ruby lattice \cite{zenodoQSL} (from the supplemental information of Ref.~\onlinecite{Semeghini21}), and the experimental data from the Rydberg atom experiment \cite{semeghinidata} (from the main text of Ref.~\onlinecite{Semeghini21}).
In particular, first, we plot the density $\langle n \rangle$ of Rydberg atoms as a function of $\delta/\Omega$ for each of these three methods in Fig.~\ref{fig:experimental_trees}(a).
We find excellent agreement between the three for all values of $\delta/\Omega$.
We note that our tree tensor network simulations very slightly deviate from the experimental value towards the end of the sweep but match the matrix product state results throughout.
Beyond comparing the density, we additionally compare the probability of dimerless vertices, vertices touching a single dimer, and vertices touching two dimers between the experiment, tree tensor network simulations, and matrix product state simulations in Fig.~\ref{fig:experimental_trees}(b).
We find near-perfect agreement between the matrix product state simulations and the tree tensor network simulations.
Moreover, both types of simulations quantitatively reproduce the results of the experiment near the beginning of the sweep but only qualitatively capture the experiment towards the end.
Generally, the experiment shows a higher density of dimerless vertices and vertices touching two dimers.
The fact that the tree numerics and cylinder DMRG---which rely on entirely different approximations---give virtually identical results suggests that for these timescales and parameter values, we roughly obtain the true result for the 2D lattice. It would be interesting for future work to pinpoint the source of the experimental deviation, which nevertheless qualitatively agrees.
\section{Generalizations: U(1) Spin Lake} \label{sec-U1}
\begin{figure}
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[width = 247pt]{Figure_5_v3.pdf}};
\node at (-2.2,2.15) {\includegraphics[scale=0.3]{Bethelattice2.pdf}};
\end{tikzpicture}
\caption{\textbf{U(1) Spin Lakes on the Bethe Lattice.} (a) We simulate the PXP model on the links of the Bethe lattice---a tree version of the honeycomb lattice. Since the vertices of the lattice are bipartite, the RVB wavefunction on this lattice is a $U(1)$ QSL and is analogous to the fine-tuned Rokhsar-Kivelson point wavefunction. To simulate dynamics, we use the same dynamical sweep as Fig.~\ref{fig:Z2Tree}(b) but with $\delta$ going from $\delta = -10$ to $10$. In panel (b), we show that the density $\langle n_{\alpha} \rangle$ is isotropic, indicating that the state is not a VBS that breaks the discrete rotation symmetries of the model. Since our tree tensor network ansatz explicitly preserves lattice translation symmetry, any candidate VBS state would be a cat state with $\log(2)$ bipartite entanglement, the same as the RVB. In panel (c), we show that the entanglement we observe is consistent with either the VBS cat state or the RVB. However, in panel (d), we plot density-density correlations as a function of distance [using a rolling average of $3$ sites for easier visualization (see Appendix~\ref{app-U1} for raw data)] for different total sweep times (from light to dark, $T \in \{2.5, 3.5, 4.5, 4.75, 5, 7.5\}$) on the tree lattice.
%
We find that they fall off as $2^{-x}$ which is the predicted fall off of gapless states on tree lattices \cite{Laumann09}.
%
This is inconsistent with a VBS cat state and hence, the above provides strong numerical evidence for the emergence of a $U(1)$ spin lake on the Bethe lattice. For our numerics, we use a bond dimension of $\chi_{\alpha} = 10$ ($\alpha = a, b, c$) and $dt = 0.005$. }
\label{fig:U1QSL}
\end{figure}
Thus far, we have focused on the dynamical preparation of $\mathbb{Z}_2$ quantum spin lakes.
In this section, we generalize our results to a broader class of spin liquid states by demonstrating that non-equilibrium dynamics can also prepare a $U(1)$ quantum spin lake.
This is particularly surprising because as ground states, U(1) quantum spin liquids are unstable, being described by a compact U(1) gauge theory that is known to be typically confining \cite{PolyakovBook}.
To see the emergence of a $U(1)$ quantum spin lake, we will first introduce a model \textit{capable} of hosting an (unstable) $U(1)$ QSL.
For reasons that will be reviewed in Section~\ref{subsec-U1equil}, a clear option will be to place Rydberg atoms on the bonds of the (bipartite) honeycomb lattice.
In the subsections that follow, we will consider an extremal version of the Rydberg model on the honeycomb lattice with infinitely large plaquettes.
This will once again exemplify the reversal of logic from the last section that the absence of dimer resonances \textit{helps} in the dynamical preparation of QSLs.
Moreover, as we evidenced in the previous section, such a tree geometry offers a good approximation of the sweep dynamics on the true physical planar lattice, suggesting that this indeed offers a realistic route to a $U(1)$ spin lake accessible with current Rydberg atom tweezer array platforms.
\subsection{$U(1)$ QSLs from Rydberg Atoms}\label{subsec-U1equil}
Let us consider the `PXP' Rydberg model of Section~\ref{sec-Experiment} (Eq.~\eqref{eq-HPXP} without long-range tails) placed on the links of the honeycomb lattice, or equivalently, the vertices of the kagome lattice:
\begin{equation*}
\begin{tikzpicture}[scale = 1, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1}{
\foreach \j in {0,...,1}{
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 + 1/2*1/2}, {\j * 0.866025404 + 1/2*0.866025404}) -- cycle;
\draw[gray] ({\i + \j * 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2}, {\j * 0.866025404}) -- ({\i + \j * 1/2 - 1/2*1/2}, {\j * 0.866025404 - 1/2*0.866025404}) -- cycle;
\filldraw ({\i + \j * 1/2}, {\j * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/2}, {\j * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 + 1/4 }, {\j * 0.866025404 + 1/2 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 - 1/4 }, {\j * 0.866025404 - 1/2 * 0.866025404}) circle (1 pt);
\filldraw ({\i + \j * 1/2 + 1/2}, {\j * 0.866025404}) circle (1 pt);
}
}
\draw[red, fill=red, fill opacity=0.1, dashed, thick](1/2,0) circle (0.6);
\filldraw[red] (0.5, 0) circle (1 pt);
\draw [-stealth, thick] (1/2,0) -- (1/2 - 0.44/0.5 * 0.6, 0.25/0.5 * 0.6);
\node at ({-1/4 - 1/16}, 0.4) {\normalsize $R_b$};
\draw [stealth-stealth, line width = 0.1 mm] (0,0.866025404 + 0.15) -- (1/2 ,0.866025404 + 0.15);
\node at (0.25, 0.866025404 + 0.15 + 0.15) {\normalsize $a$};
\end{tikzpicture}
\end{equation*}
If we choose the blockade radius $R_b$ such that it encloses only a qubit's four nearest neighbors (as shown in the above schematic), i.e.:
\begin{equation} \label{eq-honeycombRb}
1 < R_b /a < \sqrt{3},
\end{equation}
then we forbid any two neighboring bonds from both being occupied. Hence, by appropriately tuning the chemical potential $\delta/\Omega$ (to achieve a density $\langle n_i \rangle \approx 1/3$), we will approximately realize a dimer model on the honeycomb lattice.
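The window of Eq.~\eqref{eq-honeycombRb} can be verified with a short geometric check (a sketch, setting $a = 1$ and building the kagome sites as bond midpoints of a triangular lattice): the nearest-neighbor shell sits at distance $a$ with four sites, and the next shell at $\sqrt{3}\,a$, so any $1 < R_b/a < \sqrt{3}$ blockades exactly the four nearest neighbors.

```python
import itertools, math

# Kagome sites = midpoints of nearest-neighbor bonds of a triangular lattice.
# Triangular spacing 2 makes the kagome nearest-neighbor distance a = 1.
v1, v2 = (2.0, 0.0), (1.0, math.sqrt(3))
tri = [(n * v1[0] + m * v2[0], n * v1[1] + m * v2[1])
       for n in range(-4, 5) for m in range(-4, 5)]

sites = set()
for p, q in itertools.combinations(tri, 2):
    if abs(math.dist(p, q) - 2.0) < 1e-9:                  # triangular nn bond...
        sites.add(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))  # ...gives a kagome site

center = min(sites, key=lambda s: s[0] ** 2 + s[1] ** 2)   # a bulk site
dists = sorted(math.dist(center, s) for s in sites if s != center)

R_b = 1.2  # any value with 1 < R_b/a < sqrt(3) gives the same count
n_blockaded = sum(d < R_b for d in dists)
print(dists[:5], n_blockaded)
```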
Unlike the case studied above for the ruby lattice (where we obtained a kagome dimer model), by putting atoms on the kagome lattice, we obtain an effective dimer model on the honeycomb lattice which is \emph{bipartite}. If we label the two sublattices $A$ and $B$, we can assign each dimer an orientation pointing from the $A$ sublattice to the $B$ sublattice:
\begin{equation}
\begin{tikzpicture}[scale = 0.7, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (0, 1) -- (0.86602540378, 1 + 1/2);
\draw[gray] (0, 1) -- (-0.86602540378, 1 + 1/2);
\draw[gray] (0.86602540378, -1/2) -- (0, 0) -- (-0.86602540378, -1/2);
\node at (0,0) {\normalsize $A$};
\node at (0,1) {\normalsize $B$};
\draw[red, fill = red] (0,0.5) circle (0.075);
\draw[black, fill = black] (0.86602540378/2, -1/4) circle (0.075);
\draw[black, fill = black] (-0.86602540378/2, -1/4) circle (0.075);
\draw[black, fill = black] (0.86602540378/2, 1+1/4) circle (0.075);
\draw[black, fill = black] (-0.86602540378/2, 1+1/4) circle (0.075);
\end{tikzpicture} \quad \longrightarrow \quad
\begin{tikzpicture}[scale = 0.7, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (0, 1) -- (0.86602540378, 1 + 1/2);
\draw[gray] (0, 1) -- (-0.86602540378, 1 + 1/2);
\draw[gray] (0.86602540378, -1/2) -- (0, 0) -- (-0.86602540378, -1/2);
\draw[-stealth, red, line width = 0.75 mm] (0,0) -- (0,1);
\end{tikzpicture}
\end{equation}
The ability to orient dimers on a bipartite lattice has a striking consequence.
In particular, it implies that the $\mathbb{Z}_2$ Gauss law of the ruby lattice (Eq.~\eqref{eq-RydbergGauss}) gets promoted to:
\begin{equation}\label{eq-U1RydbergGauss}
G_v = \begin{cases} \ \begin{tikzpicture}[scale = 0.4, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0.86602540378, -1/2)--(0, 0) -- (-0.86602540378, -1/2);
\draw[gray] (0, 0) -- (0, 1);
\draw[orange(ryb), dashed, line width = 0.5mm] (0, 0) circle (0.5);
\end{tikzpicture} \\ \\
\ \begin{tikzpicture}[scale = 0.4, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0.86602540378, 1 + 1/2)--(0, 1) -- (-0.86602540378, 1+1/2);
\draw[gray] (0, 0) -- (0, 1);
\draw[orange(ryb), dashed, line width = 0.5mm] (0, 1) circle (0.5);
\end{tikzpicture}
\end{cases} = \begin{cases}
+1 \text{ if } v \in A \text{ sublattice} \\
-1 \text{ if } v \in B \text{ sublattice}
\end{cases}
\end{equation}
where the small orange loop operator counts the number of outgoing arrows minus the number of incoming arrows.
As a consequence, the number of outgoing arrows minus the number of incoming arrows through any closed loop equals the number of enclosed $A$-sublattice sites minus the number of enclosed $B$-sublattice sites, which can be any integer.
Consequently, the emergent gauge theory is a compact $U(1)$ gauge theory \cite{Baskaran88,ReadSachdev_PRB,ReadSachdev1989PRL,READSACHDEV_1989_Nuc,Fradkin90}.
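A quick consistency check of this modified Gauss law (a sketch of ours, using an arbitrary labeling of sites) can be run in Python on a single hexagonal plaquette: orienting every dimer from $A$ to $B$, each perfect matching indeed yields divergence $+1$ on $A$ sites and $-1$ on $B$ sites.

```python
from itertools import combinations

# Single hexagon: vertices 0..5, with even vertices on the A sublattice and
# odd vertices on the B sublattice; edges connect consecutive vertices.
edges = [(i, (i + 1) % 6) for i in range(6)]

def perfect_matchings():
    # Subsets of three edges covering all six vertices exactly once.
    return [m for m in combinations(edges, 3)
            if len({v for e in m for v in e}) == 6]

def divergence(matching, v):
    # Outgoing minus incoming arrows at vertex v, with arrows oriented A -> B.
    div = 0
    for i, j in matching:
        a, b = (i, j) if i % 2 == 0 else (j, i)
        if a == v:
            div += 1
        if b == v:
            div -= 1
    return div

matchings = perfect_matchings()
print(len(matchings))  # the hexagon has exactly 2 perfect matchings
for m in matchings:
    for v in range(6):
        assert divergence(m, v) == (+1 if v % 2 == 0 else -1)
```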
We will again be interested in the resonating valence bond state given by the equal weight and equal phase superposition of all dimer configurations:
\begin{comment}
\begin{equation*}
\begin{tikzpicture}[scale = 0.7, baseline={([yshift=-.5ex]current bounding box.center)}]
\foreach \i in {0,...,1}{
\foreach \j in {0,...,1}{
\draw[color = gray] ({0 + 1.5*\i + 1.5*\j} , {0 + 0.866025404*\i - 0.866025404*\j}) -- ({0 + 1.5*\i + 1.5*\j + 0.5} , {0 + 0.866025404*\i - 0.866025404*\j + 0.866025404});
\draw[color = gray] ({0 + 1.5*\i + 1.5*\j} , {0 + 0.866025404*\i - 0.866025404*\j}) -- ({0 + 1.5*\i + 1.5*\j + 0.5} , {0 + 0.866025404*\i - 0.866025404*\j - 0.866025404});
\draw[color = gray] ({0 + 1.5*\i + 1.5*\j + 0.5} , {0 + 0.866025404*\i - 0.866025404*\j - 0.866025404}) -- ({0 + 1.5*\i + 1.5*\j + 0.5 + 1} , {0 + 0.866025404*\i - 0.866025404*\j - 0.866025404});
\draw[color = gray] ({0 + 1.5*\i + 1.5*\j + 0.5} , {0 + 0.866025404*\i - 0.866025404*\j + 0.866025404}) -- ({0 + 1.5*\i + 1.5*\j + 0.5 + 1} , {0 + 0.866025404*\i - 0.866025404*\j + 0.866025404});
\draw[color = gray] ({0 + 1.5*\i + 1.5*\j + 0.5 + 1} , {0 + 0.866025404*\i - 0.866025404*\j + 0.866025404}) -- ({0 + 1.5*\i + 1.5*\j + 2} , {0 + 0.866025404*\i - 0.866025404*\j});
\draw[color = gray] ({0 + 1.5*\i + 1.5*\j + 0.5 + 1} , {0 + 0.866025404*\i - 0.866025404*\j - 0.866025404}) -- ({0 + 1.5*\i + 1.5*\j + 2} , {0 + 0.866025404*\i - 0.866025404*\j});
}
\draw[color = gray] ({0 + 0.5} , {0 - 0.866025404}) -- ({0} , {0 - 2*0.866025404});
\draw[color = gray] ({0} , {0 - 2*0.866025404}) -- (0.5, - 3*0.866025404);
\draw[color = gray] (0.5, - 3*0.866025404) -- (1.5, - 3*0.866025404);
\draw[color = gray] (1.5, - 3*0.866025404) -- ({2} , {0 - 2*0.866025404});
\draw[color = gray] ({3 + 0.5} , {0 - 0.866025404}) -- ({3} , {0 - 2*0.866025404});
\draw[color = gray] ({3} , {0 - 2*0.866025404}) -- (3.5, - 3*0.866025404);
\draw[color = gray] (3.5, - 3*0.866025404) -- (4.5, - 3*0.866025404);
\draw[color = gray] (4.5, - 3*0.866025404) -- ({5} , {0 - 2*0.866025404});
\draw[color = gray] ({5} , {0 - 2*0.866025404}) -- ({4.5} , {0 - 0.866025404});
\draw[color = gray] ({1.5 + 0.5} , {0 - 2*0.866025404}) -- ({1.5} , {0 - 3*0.866025404});
\draw[color = gray] ({1.5} , {0 - 3*0.866025404}) -- (2, - 4*0.866025404);
\draw[color = gray] (2, - 4*0.866025404) -- (3, - 4*0.866025404);
\draw[color = gray] (3, - 4*0.866025404) -- ({3.5} , {0 - 3*0.866025404});
}
\end{tikzpicture}
\end{equation*}
\end{comment}
\begin{equation}
\ket{\text{RVB}} = \ket{\includegraphics[width = 40 pt, valign = c]{u1_rvb1.pdf}} + \ket{\includegraphics[width = 40 pt, valign = c]{u1_rvb2.pdf}} + \ket{\includegraphics[width = 40 pt, valign = c]{u1_rvb3.pdf}} + \cdots \label{eq-U1RVB}
\end{equation}
As before, this RVB state corresponds to the deconfined phase of this gauge theory, analogous to electromagnetism, and is a fixed point representative of the $U(1)$ QSL.
Unlike the $\mathbb Z_2$ QSL, this $U(1)$ QSL has algebraic correlations. In two spatial dimensions, it is known that these gapless excitations make this a fine-tuned point, which is unstable to generic deformations \cite{ReadSachdev1989PRL,READSACHDEV_1989_Nuc,ReadSachdev_PRB,SachdevED}. This is well-illustrated by the Rokhsar-Kivelson model \cite{RK}, which on both the (bipartite) square \cite{RK} and honeycomb \cite{RKhoneycomb} lattices admits an exactly solvable Rokhsar-Kivelson point where the ground state is this RVB state, surrounded by nearby gapped phases.
Hence, in 2+1d one does not expect to observe a pure $U(1)$ spin liquid without excessive fine-tuning. In particular, for the Rydberg model we would expect that the longer-range repulsive interactions push dimers onto opposite sides of a hexagon, giving rise to the staggered or columnar ground state of the honeycomb RK model \cite{RKhoneycomb} (i.e., the regime $t<V$ of the RK model). In fact, recent theoretical work has numerically explored the \emph{ground state} phase diagram of Rydberg atoms on the kagome lattice \cite{Samajdar_2021}. Its focus was on the possible emergence of an approximate dimer liquid on the (non-bipartite) triangular lattice at larger blockade radius and did not discuss the possible emergence of a $U(1)$ honeycomb dimer liquid. Nevertheless, Fig.~1(e) of Ref.~\onlinecite{Samajdar_2021} also reports the ground state phase diagram for a blockade radius of $R_b=1.2a$ (well within the range of Eq.~\eqref{eq-honeycombRb}), where they find a solid phase which---when drawn as a honeycomb dimer state---indeed agrees with the staggered phase mentioned above.
In contrast, we will now see that \emph{dynamics} can lead to a robust \emph{$U(1)$ quantum spin lake}.
\subsection{From Honeycomb to Bethe Lattice and Numerical Implementation}
To explore the dynamical preparation of a $U(1)$ quantum spin lake in the above model, we perform the same approximation as in Sec.~\ref{sec-Tree} by taking the plaquette size to infinity. This makes the problem more tractable numerically and moreover defines a limit where the energy scale of the magnetic particles (or visons or fluxons) is effectively zero. In addition to its conceptual value, we discussed in Sec.~\ref{sec-Tree} how this drastic approximation can, in fact, offer a good simulation for a realistic experiment on a genuine 2D lattice for timescales where we are fast with respect to the magnetic excitations. Indeed, for a dimer model on the honeycomb lattice, the dimer resonances occur at the same order as for the kagome dimer model studied in Eq.~\eqref{eq-Rydberg-Resonances}, i.e., at sixth order in perturbation theory. Moreover, similarly to our analysis in Section~\ref{subsubsec-EffectofLRtails} and Section~\ref{subsubsec-NumericalEstimates}, one can show that the $\sim 1/r^6$ tails of the Rydberg interactions only lead to small splittings in the dimer subspace.
For example, by repeating a version of the analysis performed in Section~\ref{subsubsec-EffectofLRtails}, we found that the induced splittings are on the order of $\sim 10^{-2} \times \Omega$ for $R_b = 1.2 a$.
More precisely, the tree version of the honeycomb lattice is called the Bethe lattice \cite{bethe1935statistical} and is shown in Fig.~\ref{fig:U1QSL}(a). In particular, this is for the coordination number $z=3$. It is known that the correlation length for physical states (i.e. cat states excluded) is restricted to be below $ \xi = 1/\log(z - 1)$ (owing to the $(z-1)$-fold branching rate of the tree) with gapless systems saturating this bound \cite{Laumann09, nagy2012simulating}. Hence, although the RVB dimer wavefunction on the honeycomb lattice \eqref{eq-U1RVB} has algebraic correlations (and thus an infinite correlation length), the deconfined phase of $U(1)$ gauge theory on the Bethe lattice will have $\xi = 1/\log 2$.
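The origin of this bound can be illustrated by simple counting (a sketch of ours, not a derivation from the cited references): the number of sites at graph distance $r$ grows as $z(z-1)^{r-1}$, so a correlator decaying more slowly than $(z-1)^{-r}$ would contribute a divergent weight per shell. In Python:

```python
import math

z = 3                          # coordination number of the Bethe lattice
xi_max = 1 / math.log(z - 1)   # maximal correlation length, 1/log(2) for z = 3

def shell_size(r):
    # Number of sites at graph distance r from a given site.
    return z * (z - 1) ** (r - 1)

# At xi = xi_max the per-shell weight exp(-r/xi) * shell_size(r) stays constant;
# any larger xi would make it grow without bound.
weights = [math.exp(-r / xi_max) * shell_size(r) for r in range(1, 10)]
print([shell_size(r) for r in (1, 2, 3)], round(xi_max, 4))
```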
To numerically simulate the Rydberg model on the Bethe lattice, we first start by doubling each qubit degree of freedom on the Bethe lattice:
\begin{equation}
\begin{tikzpicture}[scale = 0.7, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0, 0) -- (0, 1) -- (0.86602540378, 1 + 1/2);
\draw[gray] (0, 1) -- (-0.86602540378, 1 + 1/2);
\draw[gray] (0.86602540378, -1/2) -- (0, 0) -- (-0.86602540378, -1/2);
\draw[black, fill = black] (0,0.75) circle (0.075);
\draw[black, fill = black] (0,0.25) circle (0.075);
\draw[stealth-stealth, black] (0.3, 0.25) -- (0.3, 0.75);
\draw[black, fill = black] (0.86602540378 * 3/4, -1/2 * 3/4) circle (0.075);
\draw[black, fill = black] (0.86602540378 * 1/4, -1/2 * 1/4) circle (0.075);
\draw[black, fill = black] (-0.86602540378 * 3/4, -1/2 * 3/4) circle (0.075);
\draw[black, fill = black] (-0.86602540378 * 1/4, -1/2 * 1/4) circle (0.075);
\draw[black, fill = black] (0.86602540378 * 3/4, 1+1/2*3/4) circle (0.075);
\draw[black, fill = black] (0.86602540378 * 1/4, 1+1/2*1/4) circle (0.075);
\draw[black, fill = black] (-0.86602540378 * 3/4, 1+1/2*3/4) circle (0.075);
\draw[black, fill = black] (-0.86602540378 * 1/4, 1+1/2*1/4) circle (0.075);
\end{tikzpicture}
\end{equation}
Subsequently, we add to our model strong Ising ferromagnetic interactions between our doubled qubits so that at energies below that scale, the system behaves like the undoubled system.
With this doubling and the blockade constraint, we can utilize the tree tensor network ansatz of Eq.~\eqref{eq-TTN} to encode the wavefunction of our system.
The physical legs of the $A$ tensor will encode the following states:
\begin{equation}
d = \quad \begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0.86602540378, -1/2)--(0, 0) -- (-0.86602540378, -1/2);
\draw[gray] (0, 0) -- (0, 1);
\node at (0, 0) {\normalsize $0$};
\end{tikzpicture}
\quad \begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0.86602540378, -1/2)--(0, 0) -- (-0.86602540378, -1/2);
\draw[gray] (0, 0) -- (0, 1);
\draw[-stealth, red, line width = 0.75 mm] (0, 0) -- (-0.86602540378, -1/2);
\node at (0, 0) {\normalsize $1$};
\end{tikzpicture}
\quad \begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0.86602540378, -1/2)--(0, 0) -- (-0.86602540378, -1/2);
\draw[gray] (0, 0) -- (0, 1);
\draw[-stealth, red, line width = 0.75 mm] (0, 0) -- (0, 1);
\node at (0, 0) {\normalsize $2$};
\end{tikzpicture}
\quad \begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0.86602540378, -1/2)--(0, 0) -- (-0.86602540378, -1/2);
\draw[gray] (0, 0) -- (0, 1);
\draw[-stealth, red, line width = 0.75 mm] (0, 0) -- (0.86602540378, -1/2);
\node at (0, 0) {\normalsize $3$};
\end{tikzpicture}
\end{equation}
Similarly, the physical legs of the $B$ tensor will encode:
\begin{equation}
d = \quad \begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0.86602540378, 1 + 1/2)--(0, 1) -- (-0.86602540378, 1+1/2);
\draw[gray] (0, 0) -- (0, 1);
\node at (0, 1) {\normalsize $0$};
\end{tikzpicture}
\quad \begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0.86602540378, 1 + 1/2)--(0, 1) -- (-0.86602540378, 1+1/2);
\draw[gray] (0, 0) -- (0, 1);
\draw[stealth-, red, line width = 0.75 mm] (0, 1) -- (0.86602540378, 1+1/2);
\node at (0, 1) {\normalsize $1$};
\end{tikzpicture}
\quad \begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0.86602540378, 1 + 1/2)--(0, 1) -- (-0.86602540378, 1+1/2);
\draw[gray] (0, 0) -- (0, 1);
\draw[-stealth, red, line width = 0.75 mm] (0, 0) -- (0, 1);
\node at (0, 1) {\normalsize $2$};
\end{tikzpicture}
\quad \begin{tikzpicture}[scale = 0.5, baseline={([yshift=-.5ex]current bounding box.center)}]
\draw[gray] (0.86602540378, 1 + 1/2)--(0, 1) -- (-0.86602540378, 1+1/2);
\draw[gray] (0, 0) -- (0, 1);
\draw[stealth-, red, line width = 0.75 mm] (0, 1) -- (-0.86602540378, 1+1/2);
\node at (0, 1) {\normalsize $3$};
\end{tikzpicture}
\end{equation}
Numerical method in hand, we can now test the dynamical preparation of a $U(1)$ spin lake by simulating trotterized time-evolution as in the $\mathbb{Z}_2$ tree case.
\subsection{Dynamical Preparation of $U(1)$ Spin Lake}
We now once again initialize our system in the Higgs phase and start with $\delta= -10$ and $\Omega = 0$ (effectively starting with a product state on the Bethe lattice).
By linearly ramping up the value of $\delta$ to $\delta = 10$ and turning up the value of $\Omega$ in the same way as Fig.~\ref{fig:Z2Tree}(b), we sweep from the ground state of the Higgs phase into the region of the phase diagram with an emergent gauge theory.
Our expectation is that the final state that we prepare will be the initial state with Gauss law violations projected out following Eq.~\eqref{eq-sweeping-proj}.
Similar to the $\mathbb{Z}_2$ case of Sec.~\ref{sec-Summary}~and~\ref{sec-Tree}, we can make the mean-field ansatz for the initial state of the sweep after $\Omega$ has been ramped up to $1$ given by Eq.~\eqref{eq-RydbergMFansatz}.
Subsequently, when we project out Gauss law violations, the resulting state will be the RVB (for the same reasons as Eq.~\eqref{eq-RydbergProj}), which is the fixed point state of the $U(1)$ QSL.
On the infinite tree, at any finite rate, we will prepare a finite-size quantum spin lake with the size of the lake getting larger with decreasing rate.
We can diagnose the onset of the $U(1)$ quantum spin lake numerically through three probes.
First, to verify that our final state satisfies the dimer constraint, we plot the density $n_{\alpha} = (1 + Z_{\alpha})/2$ of the three qubits on site $A$ (equivalently $B$, as the qubits in $B$ are ferromagnetically locked to the qubits in $A$) as a function of time [Fig.~\ref{fig:U1QSL}(b)].
We find that the density of each qubit saturates to $1/3$ as we approach the end of the sweep implying a maximal packing of dimers.
As such, we know that the final state satisfies the Gauss law and can conclude that it is rotationally invariant.
This leaves open the possibility of either the $U(1)$ QSL or a rotationally invariant valence bond solid (VBS) state---a symmetry breaking state consisting of a finite superposition of dimer states.
Next, in Fig.~\ref{fig:U1QSL}(c), we plot the entanglement across the central bond of the tree tensor network state as a function of the time along the sweep.
We see that the entanglement saturates to around $\log(2)$ towards the end of the sweep.
This is consistent with the fixed point entanglement of the RVB state of the $U(1)$ QSL and also with a cat state of two VBS configurations.
To distinguish the VBS cat state from the $U(1)$ QSL, we now turn our attention to the behavior of correlation functions in the final state of the sweep.
The prediction is that, since the $U(1)$ QSL is gapless, on a tree lattice its correlations will decay with the maximal possible correlation length for physical (non-cat) states, $\xi = 1/\log(2)$.
On the other hand, the VBS cat state is predicted to have a correlation length $\xi > 1/\log(2)$.
To distinguish these two cases, we compute the following correlation function for the final state of the dynamical sweep, $C(x) = |\langle [\bar{n}(x + 1) - \bar{n}(x)] [\bar{n}(1) - \bar{n}(0)] \rangle|$, as a function of the total time of the sweep and distance, where $\bar{n}(x) = \sum_{\alpha} n_{\alpha}(x)/3$ is the average density of dimers at vertex $x$ of the Bethe lattice.
Such a correlation function is numerically convenient because it manifestly tends to a zero value at long distances (due to the translation invariance of our tree tensor network ansatz) and hence enables us to probe the correlation function at long distances without being mired by numerical errors in the value of the one-point function.
The results of this correlation function are shown in Fig.~\ref{fig:U1QSL}(d).
We find that, as we increase the time of our sweep, the correlation function goes from decaying with a correlation length shorter than $1/\log(2)$ to saturating at a correlation length of $1/\log(2)$ (see Appendix~\ref{app-U1}).
This is strong evidence signaling the onset of the gapless $U(1)$ quantum spin lake!
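For concreteness, the exponential decay rate can be extracted with a simple log-linear fit; the Python sketch below uses synthetic data standing in for the measured $C(x)$ (the prefactor and distances are arbitrary choices of ours):

```python
import math

xi_true = 1 / math.log(2)   # expected spin-lake correlation length on the tree

# Synthetic stand-in for the measured correlator: a pure exponential decay.
xs = list(range(1, 12))
C = [0.3 * math.exp(-x / xi_true) for x in xs]

# Least-squares line through log C(x); the slope equals -1/xi.
n = len(xs)
mx = sum(xs) / n
my = sum(math.log(c) for c in C) / n
slope = (sum((x - mx) * (math.log(c) - my) for x, c in zip(xs, C))
         / sum((x - mx) ** 2 for x in xs))
xi_fit = -1 / slope
print(round(xi_fit, 4))  # ~1.4427, i.e. 1/log(2)
```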
\section{Outlook}
We have considered systems containing two \textit{emergent} low-energy degrees of freedom---such as the $e$- and $m$-anyons of the $\mathbb Z_2$ spin liquid---whose dynamics are controlled by well-separated energy scales.
In such a setting, we first demonstrated that there exists a dynamical regime wherein one is nearly in equilibrium (quasi-adiabatic) relative to one degree of freedom while out-of-equilibrium relative to the other (sudden).
A remarkably clean observational signature of being in this regime is given by an ``echo'' experiment wherein one sweeps back and forth as in Fig.~\ref{fig-dTC}(e): this displays a revival that is distinct from the purely-adiabatic or purely-sudden regime.
While the existence of such a non-equilibrium regime is already of interest, it is even more interesting that this dynamical regime can be exploited to help realize exotic quantum states.
In particular, consider a protocol wherein one sweeps parameters in the Hamiltonian between two different phases, where the quasi-adiabatic degree of freedom goes from being condensed in the ground state to not being condensed.
Then, dynamics in the aforementioned regime can be used to effectively implement a projection operator (corresponding to adiabatically pushing out only one degree of freedom, whilst leaving the other unaffected).
We demonstrated that if one starts in an initial product state (where an anyonic degree of freedom was condensed), then this novel non-equilibrium regime can prepare a QSL-like state by ``projecting'' into a constrained subspace of the Hilbert space.
We extensively illustrated this for $\mathbb Z_2$ spin liquids, both in the toric code as well as the Rydberg ruby lattice settings (whose constrained spaces are loop and dimer states, respectively).
Equally important as recognizing the existence of the preparation scheme above is understanding its limitations.
In particular, since the preparation scheme involves sweeping parameters in the Hamiltonian between two phases, the separation of energy scales between the two degrees of freedom required for the foregoing non-equilibrium regime can only be guaranteed for a finite range of system sizes.
For larger systems, Kibble-Zurek-type considerations imply that the projection only takes place over a finite length scale, creating a finite-size ``quantum spin lake'' instead of a full QSL.
Despite this limitation, the current range of available system sizes and coherence times in near-term quantum devices make quantum spin lakes a very promising alternative to traditional ground state preparation.
This mechanism gives a variety of exciting new directions to explore.
We have already highlighted how our protocol even allows us to approximately realize a $U(1)$ spin liquid as a honeycomb dimer model, which is all the more remarkable given that this does not arise as a stable ground state in two spatial dimensions.
We have argued that the dynamical preparation is achievable in Rydberg atom tweezer arrays, using the same ingredients as already demonstrated in the ruby lattice experiment \cite{Semeghini21}.
Moreover, we argued and demonstrated how simulating the dynamical protocol on a tree gives a new handle on matching experimental data.
One striking aspect of our mechanism is that it suggests that \emph{larger plaquettes are better}.
Indeed, these lead to smaller energy scales for the magnetic excitations, making it easier to be sudden with respect to them.
For the two Rydberg-related models we discussed---namely a $\mathbb Z_2$ ($U(1)$) spin liquid for a kagome (honeycomb) dimer model---these plaquettes were already sizable, consisting of six bonds.
However, it would be interesting to explore lattices with larger plaquettes. E.g., instead of placing Rydberg atoms on the bonds of the kagome lattice, they can be placed on the bonds of the Fisher (or star) lattice, which is a decorated version of the former.
In addition, going to 3D naturally suggests the hyperkagome lattice. In fact, the tree geometry we studied is the limit of infinitely large plaquettes.
More generally, one could explore hyperbolic lattices, which allow for arbitrary plaquette size, as captured by the Schl\"afli symbol \cite{coxeter1973regular}.
With regard to further possible generalizations, we note that the broader context of our work is the non-equilibrium dynamics of gauge theories. A special feature here is the interplay of charge and flux dynamics, which can be decoupled to an extent in this non-equilibrium context. Thus we are led to our ``two-time'' criteria, which represent the independent equilibration times of the charge and of the flux. In this language, our mechanism should also apply to other gauge groups, such as $\mathbb Z_3$, for which there exist interesting ground state proposals \cite{Motrunich02,Tarabunga22}; our dynamical mechanism might provide a route toward their realization. Similarly, it would be interesting to explore the potential applicability of our mechanism in the context of deconfined gauge theories obtained by local two-body interactions using combinatorial gauge symmetry \cite{Chamon20,Wu21,Green22}.
A fascinating question for future study is how these results extend to non-Abelian gauge theories. Discrete non-Abelian gauge groups lead to excitations with non-Abelian statistics. Their classical counterparts---non-Abelian defects in ordered media---can lead to glassy dynamics \cite{NelsonDefects}. Are there analogs in the quantum dynamics of non-Abelian gauge theories? Looking further afield, the dynamics of Yang-Mills theories is clearly of prime importance in a variety of situations including the early universe \cite{mukhanov_2005}. Detailed studies of real time dynamics of gauge fields in concrete models that can be experimentally realized, are likely to make significant contributions to this important area.
Even beyond realizing gauge theories, there is likely a broad range of applications of our protocol for dynamically implementing a projection operator using a non-equilibrium sweep. One tantalizing option is to start with a free-fermion state and dynamically implement the Gutzwiller projection. Combining our non-equilibrium protocol with the technical ingredients introduced in Ref.~\cite{Kale22} could lead the way to implementing this idea.
In conclusion, quantum spin lakes present an exciting new interplay between non-equilibrium dynamics, topological states, and NISQ devices. While we have focused here on state preparation, it would be worthwhile to explore the potential quantum-information-theoretic applications of quantum spin lakes. Moreover, this new non-equilibrium regime might be interesting to explore in its own right, offering a new handle on the rich phenomenology of quantum dynamics.
\section{Acknowledgements}
The authors would like to thank Dominic Else, Manuel Endres, Ruihua Fan, Giuliano Giudici, David Huse, Marcin Kalinowski, Misha Lukin, Francisco Machado, Nishad Maskara, Dan Parker, Hannes Pichler, Drew Potter, Saran Prembabu, and Ryan Thorngren for illuminating discussions.
DMRG simulations were performed on the Harvard FASRC facility using the TeNPy Library \cite{Hauschild18}, which was
inspired by a previous library \cite{Kjaell13}.
RS acknowledges support by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific
Computing Research, Department of Energy Computational Science Graduate Fellowship under Award Number
DESC0022158.
RV is supported by the Harvard Quantum Initiative Postdoctoral Fellowship in Science and Engineering, and RV and AV by the
Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, AV).
A.V. further acknowledges support by NSF-DMR 2220703.
\section{Introduction}
Exact evaluations of physical quantities at finite temperatures
pose serious difficulties even for integrable models.
One has to go much beyond mere diagonalization of a Hamiltonian;
summation over the eigenspectra must be performed.
The string hypothesis \cite{YY,G,T,TS}
brought the first breakthrough and success.
It yields a systematic way to evaluate several bulk quantities
including specific heats, susceptibilities and so on.
More recently, the quantum transfer matrix (QTM)
method has been proposed to overcome some difficulties, to which
the standard approach is not applicable.
\cite{MSuzPB,InSuz,InSuz2,Koma,SAW,SNW,DdV,TakQT,Mizu,Klu,KZeit,JK,JKStJ,JKStp,JKSfusion,JKSHub,KSS,Klu2,FKM,FKM2,JSuz1,JSuz2,JSuz3,Sch}
One reduces the original problem
to finding the largest eigenvalue of the QTM which acts on a
fictitious system of size $N$
(referred to as the Trotter number), which must then be sent to infinity, $N\to\infty$ (the Trotter limit).
As this procedure is sometimes difficult, we combine it with another ingredient: the integrable structure of the underlying model.
It allows for introduction of
the commuting QTMs, which are labeled by a complex parameter $x$.\cite{Klu}
A set of auxiliary functions, including the QTM itself, satisfies certain functional relations.
We shall choose these functions
such that they
have a nice analytical property
called ANZC (Analytic,
NonZero, and Constant asymptotics, see Sec.~III)
in a certain strip on the complex $x$ plane.
This admits the transformation of the functional
relations into a closed set of the integral equations.
For all cases known up to now,
the Trotter limit $N \to \infty$
can be taken analytically in the integral equations.
We thus have seen a remarkable reduction
from the problem of combinatorics (summation over the
eigenspectra)
to the study of analytic structures of
suitably chosen auxiliary functions.
This novel scenario has been applied to many models
of physical interest.
\cite{Klu,KZeit,JK,JKStJ,JKStp,JKSfusion,JKSHub,KSS,Klu2,FKM,FKM2,JSuz1,JSuz2,JSuz3}
In particular, the correlation lengths, whose calculation has been one of the major difficulties within the string hypothesis, are explicitly evaluated in the spin models.
As an example of this success, we refer to the recent analysis of the quantum-classical crossover phenomena in the massless ${XXZ}$ model in the ``attractive'' regime.
\cite{FKM,FKM2}
We extend these studies to lattice Fermion systems.
Our formulation is fully general for the
1D Fermion systems which are
integrable in the sense of the Yang-Baxter (YB) equation.
As a concrete example, we take the spinless
Fermion model with repulsive interactions in
the gapless regime.
This simple example already manifests
some fundamental differences from the spin models,
and yields a sound basis for the future studies on more
realistic Fermion systems such as the Hubbard model.
As in Refs.~\onlinecite{JKStJ,JKStp,JKSfusion,JKSHub},
one may first apply the Jordan-Wigner (JW) transformation to the Fermion models and further convert the resultant quantum spin models into 2D classical vertex models.
These procedures have been successful in studies of the
bulk quantities.
In evaluating correlation lengths, however, this is no longer true.
As an example, which will be discussed in the main body of this paper,
let us take the Fermion one-particle Green's function $\langle c^{\dagger}_j c_i \rangle$ and its counterpart $\langle \sigma^{+}_j \sigma^{-}_i \rangle$ in the spin model.
Obviously they are related, but they differ by nonlocal string terms arising from the JW transformation.
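To make the role of Fermi statistics explicit, the following pure-Python sketch (our illustration; the site count and sign conventions are arbitrary choices) builds the JW operators $c_j = (\prod_{k<j}\sigma^z_k)\,\sigma^-_j$ for $L=3$ sites as explicit $8\times 8$ matrices and verifies the canonical anticommutation relations; the nonlocal $\sigma^z$ string is precisely what distinguishes $\langle c^{\dagger}_j c_i\rangle$ from $\langle \sigma^{+}_j \sigma^{-}_i\rangle$:

```python
I2 = [[1, 0], [0, 1]]
Z = [[1, 0], [0, -1]]
Sm = [[0, 0], [1, 0]]   # sigma^-

def kron(X, Y):
    ny = len(Y)
    n = len(X) * ny
    return [[X[i // ny][j // ny] * Y[i % ny][j % ny] for j in range(n)]
            for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    n = len(X)
    return [[X[j][i] for j in range(n)] for i in range(n)]

L = 3
def c(j):
    # Jordan-Wigner: a string of sigma^z on sites k < j, then sigma^- on site j.
    ops = [Z] * j + [Sm] + [I2] * (L - j - 1)
    M = ops[0]
    for op in ops[1:]:
        M = kron(M, op)
    return M

def anticommutator(X, Y):
    XY, YX = matmul(X, Y), matmul(Y, X)
    return [[XY[i][j] + YX[i][j] for j in range(len(X))] for i in range(len(X))]

# Verify {c_i, c_j^dag} = delta_ij and {c_i, c_j} = 0 (all matrices are real here).
ok = True
for i in range(L):
    for j in range(L):
        ac1 = anticommutator(c(i), transpose(c(j)))
        ac2 = anticommutator(c(i), c(j))
        ok = ok and all(ac1[r][s] == (1 if (i == j and r == s) else 0)
                        for r in range(8) for s in range(8))
        ok = ok and all(ac2[r][s] == 0 for r in range(8) for s in range(8))
print(ok)  # True: the JW Fermions obey genuine Fermi statistics
```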
At zero temperature ($T=0$),
using the conformal mapping, one
evaluates the scaling dimensions from the finite size
corrections to the energy spectra.
As the Hamiltonians are equivalent through the
JW transformations,
it is normally difficult to discriminate
between the energy spectra of
the Fermions and those of the spins.
The difference lies only
in the boundary conditions.
Nevertheless, even after JW transformation
one can explicitly calculate
the correct scaling dimensions only by
incorporating the proper Fermion statistics
at the very last stage (see Appendix B).
At finite temperature ($T>0$),
the QTM approach gives the correlation
function in the spectral decomposition form
as $\sum_k |A_k|^2 (\Lambda_k /\Lambda_1)^{x}$.
Here $\Lambda_k$ denotes the $k$-th largest eigenvalue of
the QTM and $A_k$ is a certain matrix element.
Once the JW transformation is performed,
it is difficult to trace
the difference in the statistics in this framework.
Then one hardly recognizes the difference in
the eigenvalues of the QTM
between the spin models and the Fermion models.
A simple prescription has not yet been found, in contrast to the above-mentioned case at $T=0$.
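In this framework, the leading correlation length and any oscillation wavevector follow directly from the ratio of the two largest QTM eigenvalues. A minimal Python sketch (the eigenvalues below are made-up numbers, purely for illustration):

```python
import cmath, math

# Two-point functions take the form sum_k |A_k|^2 (Lambda_k / Lambda_1)^x, so
# the slowest-decaying term is governed by the subleading eigenvalue Lambda_2.
Lambda1 = 2.0                        # largest QTM eigenvalue (real, positive)
Lambda2 = 0.9 * cmath.exp(0.7j)      # hypothetical complex subleading eigenvalue

ratio = Lambda2 / Lambda1
xi = -1 / math.log(abs(ratio))       # correlation length: 1/xi = -log|ratio|
k = cmath.phase(ratio)               # wavevector of the oscillating prefactor

# The corresponding term decays as exp(-x/xi), modulated by cos(k x + phi).
print(round(xi, 4), round(k, 4))
```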
Let us recall the quantum inverse scattering method
based on the graded YB relation, \cite{PZ,OWA} by which
the integrability and other algebraic
structures of the Fermion systems have been discussed successfully.
\cite{PZ,OWA,OW,MG,FM,SUW,MR}
The formulation, however, has a severe problem when applied to the finite-temperature case: we must treat the quantum and the auxiliary spaces on the same footing when constructing the QTM.
On the contrary, in the graded YB relation the quantum space is the
Fermion Fock space, while the auxiliary space is the (graded)
vector space.
To overcome these difficulties,
we adopt another approach to the Fermion systems,
which was invented quite recently. \cite{DS,USW1,USW2}
In this method, we consider an ${R}$-operator consisting of the Fermion operators alone, together with its ``super-transposition''.
This time both quantum and auxiliary spaces are Fermion Fock spaces.
Therefore we can, for instance, exchange their roles with no difficulty.
Actually, by careful introduction of the super-trace and interchange of
it with the normal trace to the partition function,
we can derive the commuting QTM for the Fermion systems.
The resultant QTM preserves genuine Fermion statistics.
In other words, the selection rule is already built-in algebraically.
This proper treatment for the statistics results in
a change of the analytic structure for the QTM.
In the ``physical strip'', the QTM has only one additional zero, which characterizes the ``excited free energy'' at finite $T$, while in the corresponding spin model
Consequently, one observes a $T$-dependent
oscillating behavior of the one-particle Green's function,
as well as
the difference in the correlation length between
the Fermion model and the corresponding spin model.
These are smoothly connected to the expected values at
the CFT limit,
$T \to 0$ (see Appendix B).
This paper is organized as follows.
In the next section, we will present the commuting QTM formulation
of the spinless Fermion model at $T>0$.
The Fermionic $R$-operator, together with its ``super-transposition'' $\tilde{R}$, plays a fundamental role.
The analytic structure of the QTM and the auxiliary functions are
discussed in Sec.~III, which leads to the
nonlinear integral equations (NLIE)
characterizing the correlation length.
The limit $T \to 0$ is treated analytically at
the ``half-filling'' ($n_{\rm e}=0.5$), which recovers the prediction from CFT.
We also perform numerical investigations on NLIE and the correlation
length for one-particle Green's function.
To our knowledge, this is the first exact computation of this correlation length for various interaction strengths, electron fillings, and a wide range of temperatures.
In Sec.~IV, we comment on
alternative forms of the NLIE derived from a different
choice of the auxiliary functions.
They are akin to the standard ``thermodynamic Bethe ansatz
(TBA) equations'' from the string hypothesis,
and thus may be of interest in their own right.
Details of the calculations and supplementary material on CFT are
summarized in the appendices.
\section{Commuting Quantum Transfer Matrix for the Spinless Fermion Model }
In this section we formulate the commuting QTM for the spinless Fermion model.
The formulation is based on the recent developments in the study of the
integrability of the lattice Fermion systems.\cite{DS,USW1,USW2}
The central role is played by an operator solution of the YB equation
called the Fermionic ${R}$-operator.
The ``transfer matrix" can be constructed from the ${R}$-operator,
which generates the left-shift operator,
the Fermionic Hamiltonian and other conserved operators.
Here and in Sec.~II A, we briefly describe the method.
To extend the method to the finite temperature case
utilizing the Trotter formula,
it is necessary to look for another transfer matrix
which generates the {\it right}-shift operator and the Hamiltonian.
In Sec.~II B, we shall argue how to construct the desired transfer matrix
by considering the super-transposition of the ${R}$-operator.
Based on these two kinds of transfer matrices,
we devise the QTM for the Fermion model in Sec.~II C.
The QTM constitutes a one-parameter commuting family,
which is a consequence of the global YB relation.
The YB relation also
enables us to diagonalize the QTM by means of the algebraic Bethe ansatz.
The free energy and the correlation length are expressed in terms of
the eigenvalues of the QTM.
\subsection{Fermionic ${R}$-Operator}
We define the spinless Fermion model by the Hamiltonian
\begin{eqnarray}
{\cal H} &:=& \sum_{j=1}^{L} {\cal H}_{j,j+1} \nonumber \\
{\cal H}_{j,j+1} &:=& \frac{t}{2}
\Big\{ c_{j}^{\dagger} c_{j+1} + c_{j+1}^{\dagger} c_{j}
\nonumber \\
& & \ \ \ \ \ + 2 \Delta \left(n_j - \frac{1}{2} \right)
\left(n_{j+1} - \frac{1}{2} \right) \Big\},
\label{eq.hamiltonian}
\end{eqnarray}
where ${c_{j}^{\dagger}}$ and ${c_{j}}$ are the Fermionic creation
and annihilation operators at the ${j}$-th site
satisfying the canonical anti-commutation relations
\begin{equation}
\{ c_{j}, c_{k} \}
= \{ c_{j}^{\dagger}, c_{k}^{\dagger} \} = 0,
\ \ \{ c_{j}^{\dagger}, c_{k} \} = \delta_{jk}. \label{eq.ACR}
\end{equation}
We assume the periodic boundary condition (PBC)
on the Fermion operators,
\begin{equation}
c_{L+1}^{\dagger} = c_{1}^{\dagger}, \ \ \ \ c_{L+1} = c_{1}. \label{eq.pbc}
\end{equation}
The parameters ${t,\Delta}$ are real coupling constants.
In the present paper we consider the repulsive critical
region ${0 \le \Delta <1, \ \ 0< t}$ and introduce the parametrization
\begin{equation}
\Delta:= \cos 2 \eta, \ \ 0 < 2 \eta \le \frac{\pi}{2}.
\end{equation}
In the subsequent sections, we shall also use the parameter ${p_0}$ defined by
\begin{equation}
p_0:= \frac{\pi}{2 \eta}.
\end{equation}
Hereafter we set ${t=1}$ for simplicity.
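As an illustrative aside (not part of the formulation in the text), the Hamiltonian (\ref{eq.hamiltonian}) for a small ring can be realized by Jordan--Wigner matrices, which represent the anti-commutation relations (\ref{eq.ACR}) faithfully, and its elementary properties checked numerically. The chain length $L=4$ and the value $\eta=\pi/6$ (so that $\Delta=1/2$) are sample choices:

```python
import numpy as np

# Jordan-Wigner matrices c_1, ..., c_L on the 2^L-dimensional Fock space
# (an illustrative representation; the text works with the operators directly)
def jw_modes(n_modes):
    a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    c = []
    for j in range(n_modes):
        m = np.eye(1)
        for k in range(n_modes):
            m = np.kron(m, Z if k < j else (a if k == j else I))
        c.append(m)
    return c

L, t, eta = 4, 1.0, np.pi / 6                 # sample values; Delta = cos(2 eta) = 1/2
Delta = np.cos(2 * eta)
c = jw_modes(L)
cd = [m.T for m in c]                         # real matrices, so dagger = transpose
n = [cd[j] @ c[j] for j in range(L)]
Id = np.eye(2 ** L)

# Hamiltonian (eq.hamiltonian) with the periodic boundary condition c_{L+1} = c_1
H = np.zeros((2 ** L, 2 ** L))
for j in range(L):
    k = (j + 1) % L
    H += (t / 2) * (cd[j] @ c[k] + cd[k] @ c[j]
                    + 2 * Delta * (n[j] - Id / 2) @ (n[k] - Id / 2))

N_op = sum(n)
print(np.allclose(H, H.T), np.allclose(H @ N_op, N_op @ H))   # True True
```

Since the matrices realize (\ref{eq.ACR}) exactly, any operator identity quoted in the text can be checked in the same way.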
The model (\ref{eq.hamiltonian}) is exactly solved by the Bethe ansatz method.
Since the Hamiltonian (\ref{eq.hamiltonian}) preserves the number of particles,
we can add a ``chemical potential'' term without breaking the integrability
\begin{equation}
{\cal H}_{\rm chemical} := \mu \sum_{j=1}^{L} \left(n_{j} -
\frac{1}{2}\right) \label{eq.chemical}.
\end{equation}
For the time being, however, we consider only the case ${\mu =0}$.
Several physical properties, including the integrability, of
the Fermion model (\ref{eq.hamiltonian}) have been discussed by
transforming it into the ${XXZ}$ model
\begin{eqnarray}
H = \frac{1}{4} \sum_{j=1}^{L}
\Big\{ \sigma_{j}^{x} \sigma_{j+1}^{x} + \sigma_{j}^{y} \sigma_{j+1}^{y}
+ \Delta \sigma_{j}^{z} \sigma_{j+1}^{z} \Big\},
\label{eq.spinhamiltonian}
\end{eqnarray}
through the JW transformation.
However it was recently discovered that we can treat
the Fermion model (\ref{eq.hamiltonian}) using the Fermion operators alone.
We shall summarize the method in what follows.
First let us consider a two-dimensional Fermion Fock space ${V_j}$,
a basis of which is given by
\begin{eqnarray}
& & | 0 \rangle_{j}, \ \ \ \ | 1 \rangle_{j} := c_j^{\dagger}| 0 \rangle_{j}, \nonumber \\
& & c_j | 0 \rangle_{j} = 0.
\end{eqnarray}
Define the Fermionic ${R}$-operator acting on the tensor product of
the Fermion Fock spaces ${V_j {\displaystyle \sotimes} V_k}$ by
\begin{eqnarray}
{\cal R}_{jk}(v) &:=&
a(v) \left\{ - n_j n_k
+ (1-n_j)(1-n_k) \right\} \nonumber \\
& & + b(v) \left\{ n_j (1-n_k) + (1-n_j) n_k \right\} \nonumber \\
& & + c(v) ( c_j^{\dagger} c_k - c_j c_k^{\dagger}),
\label{eq.fR}
\end{eqnarray}
where
\begin{equation}
a(v):= \frac{\sin \eta (v + 2)}{\sin 2 \eta}, \ \
b(v):= \frac{\sin \eta v}{\sin 2 \eta}, \ \ c(v):= 1.
\end{equation}
A basis of ${V_j {\displaystyle \sotimes} V_k}$ is given by
\begin{eqnarray}
& & | 0 \rangle_{j} \sotimes | 0 \rangle_{k} := | 0 \rangle,
\ \ | 1 \rangle_{j} \sotimes | 0 \rangle_{k} := c_j^{\dagger}| 0 \rangle, \nonumber \\
& & | 0 \rangle_{j} \sotimes | 1 \rangle_{k} := c_k^{\dagger}| 0 \rangle,
\ \ | 1 \rangle_{j} \sotimes | 1 \rangle_{k} := c_j^{\dagger} c_k^{\dagger}| 0 \rangle,
\end{eqnarray}
and we can calculate the matrix elements of (\ref{eq.fR}) if necessary.
However we keep the operator form (\ref{eq.fR}) as much
as possible and avoid the use of the matrix elements,
because the former is more transparent.
The ${R}$-operator (\ref{eq.fR}) satisfies the following YB equation \cite{USW1,USW2}
\begin{equation}
{\cal R}_{12}(u-v) {\cal R}_{13}(u) {\cal R}_{23}(v)
= {\cal R}_{23}(v) {\cal R}_{13}(u) {\cal R}_{12}(u-v).
\label{eq.YBE1}
\end{equation}
The equation (\ref{eq.YBE1}) is an operator identity and
one should carefully use the anti-commutation relations (\ref{eq.ACR})
to confirm its validity.
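Since (\ref{eq.YBE1}) is an operator identity, it can also be checked numerically in any faithful matrix representation of (\ref{eq.ACR}). The following minimal Python sketch (an illustrative aside; the Jordan--Wigner matrices and the sample values of $\eta$, $u$, $v$ are our own choices) compares both sides on three modes, labeled $0,1,2$ in place of $1,2,3$:

```python
import numpy as np

# Jordan-Wigner matrices for three fermionic modes (8-dimensional Fock space)
def jw_modes(n_modes):
    a = np.array([[0.0, 1.0], [0.0, 0.0]])
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    c = []
    for j in range(n_modes):
        m = np.eye(1)
        for k in range(n_modes):
            m = np.kron(m, Z if k < j else (a if k == j else I))
        c.append(m)
    return c

eta = 0.4                                    # sample value in 0 < 2*eta <= pi/2
c = jw_modes(3)
cd = [m.T for m in c]
n = [cd[j] @ c[j] for j in range(3)]
Id = np.eye(8)

# Fermionic R-operator (eq.fR) acting on modes j and k
def R(j, k, v):
    av = np.sin(eta * (v + 2)) / np.sin(2 * eta)
    bv = np.sin(eta * v) / np.sin(2 * eta)
    return (av * (-(n[j] @ n[k]) + (Id - n[j]) @ (Id - n[k]))
            + bv * (n[j] @ (Id - n[k]) + (Id - n[j]) @ n[k])
            + cd[j] @ c[k] - c[j] @ cd[k])

u, v = 0.31, -0.57
lhs = R(0, 1, u - v) @ R(0, 2, u) @ R(1, 2, v)
rhs = R(1, 2, v) @ R(0, 2, u) @ R(0, 1, u - v)
print(np.max(np.abs(lhs - rhs)) < 1e-12)     # True: the YB equation holds

P = R(0, 1, 0.0)                             # R(0) is the permutation (eq.PPc)
print(np.allclose(P @ c[0], c[1] @ P))       # True
```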
It is one of the fundamental properties of the
${R}$-operator ${{\cal R}_{ij}(v)}$ that
${{\cal R}_{ij}(0) = {\cal P}_{ij}}$ is the permutation operator
for the Fermion operators,
\begin{eqnarray}
& & {\cal P}_{jk} := (1-n_j)(1 - n_k) - n_j n_k + c_j^{\dagger}
c_k - c_j c_k^{\dagger}, \nonumber \\
& & {\cal P}_{jk} \ x_{j} = x_{k} {\cal P}_{jk}, \ \ \ \ ( x_{j} = c_j \
{\rm or} \ c_j^{\dagger}). \label{eq.PPc}
\end{eqnarray}
We can define an analog of the transfer matrix by
\begin{eqnarray}
T(v) &:=& {\rm Str}_a \left\{ {\cal R}_{aL}(v) \cdots {\cal R}_{a1}(v)
\right\}.
\label{eq.transfer1}
\end{eqnarray}
Here the super-trace of an arbitrary operator ${X}$ is defined by
\begin{equation}
{\rm Str}_a X:= {}_{a} \langle 0 | X | 0 \rangle_{a}
- {}_{a} \langle 1 | X | 1 \rangle_{a},
\label{eq.str}
\end{equation}
where the dual Fermion Fock space is spanned by
${ {}_{a} \langle 0 | }$ and ${ {}_{a} \langle 1 | }$ with
\begin{equation}
\hspace{5mm} {}_a \langle 0 |c_a^{\dagger} = 0,
\ \ \hspace{5mm} {}_{a} \langle 1 |:= {}_a \langle 0 |c_a.
\end{equation}
We also assume
\begin{equation}
{}_a \langle 0 | 0 \rangle_a
= {}_a \langle 1 | 1 \rangle_a=1.
\end{equation}
The super-trace (\ref{eq.str}) corresponds to
the PBC for the Fermion operators (\ref{eq.pbc}) and satisfies the property
\begin{eqnarray}
& & {\rm Str}_a \left\{ {\cal R}_{aL}(v) \cdots {\cal R}_{a1}(v)
\right\} \nonumber \\
& & = {\rm Str}_a \left\{ {\cal R}_{a1}(v) {\cal R}_{aL}(v) \cdots {\cal R}_{a2}(v) \right\}.
\end{eqnarray}
Hereafter we call (\ref{eq.transfer1}) the transfer matrix for simplicity.
As in the case with the integrable spin models,
the YB equation (\ref{eq.YBE1}) ensures
the commutativity of the transfer matrices (\ref{eq.transfer1})
\begin{equation}
\left[ T(v), T(v') \right] = 0.
\end{equation}
The expansion of the transfer matrix (\ref{eq.transfer1}) with respect to
the spectral parameter ${v}$ is given by
\begin{equation}
T(v) = T(0) \left\{ 1 + \frac{2 \eta}{\sin 2 \eta} \left( {\cal H} +
\frac{L}{4} \Delta \right) v + {\cal O}(v^2) \right\},
\label{eq.expansion1}
\end{equation}
which follows from the relationship
\begin{eqnarray}
& & \frac{{\rm d} {\cal R}_{aj}(v)}{{\rm d} v} \Big|_{v=0} {\cal P}_{a,j-1} \nonumber \\
& & = \frac{2 \eta}{\sin 2 \eta} {\cal P}_{aj} {\cal P}_{a,j-1} \left( {\cal H}_{j-1,j} + \frac{1}{4} \Delta \right).
\end{eqnarray}
Note that the operator
${T(0)={\rm Str}_a \{ {\cal P}_{aL} \cdots {\cal P}_{a1} \} }$
is the left-shift operator
\begin{equation}
T(0) x_{j} = x_{j+1} T(0),
\ \ \ \ \ \ ( x_{j} = c_j \ {\rm or} \ c_j^{\dagger}). \label{eq.left}
\end{equation}
One can easily prove the relation (\ref{eq.left})
utilizing the property of the permutation operator,
\begin{equation}
{\cal P}_{a,j+1} {\cal P}_{aj} \ x_{j} = x_{j+1} {\cal P}_{a,j+1} {\cal P}_{aj},
\ \ \ \ ( x_{j} = c_j \ {\rm or} \ c_j^{\dagger}).
\end{equation}
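The left-shift property (\ref{eq.left}) can be verified directly in a Jordan--Wigner matrix representation. In the following minimal sketch (an illustrative aside with sample chain length $L=3$), the auxiliary space is realized as one extra Fermion mode and the super-trace (\ref{eq.str}) is taken as the difference of the two diagonal blocks over that mode:

```python
import numpy as np

# Jordan-Wigner matrices for n fermionic modes (illustrative representation)
def jw_modes(n_modes):
    a = np.array([[0.0, 1.0], [0.0, 0.0]])
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    c = []
    for j in range(n_modes):
        m = np.eye(1)
        for k in range(n_modes):
            m = np.kron(m, Z if k < j else (a if k == j else I))
        c.append(m)
    return c

L = 3                                      # sample chain length
c = jw_modes(L + 1)                        # mode 0: auxiliary space a; 1..L: chain
cd = [m.T for m in c]
n = [cd[j] @ c[j] for j in range(L + 1)]
Id = np.eye(2 ** (L + 1))

def P(j, k):                               # permutation operator (eq.PPc)
    return ((Id - n[j]) @ (Id - n[k]) - n[j] @ n[k]
            + cd[j] @ c[k] - c[j] @ cd[k])

M = np.eye(2 ** (L + 1))
for j in range(L, 0, -1):                  # ordered product P_{aL} ... P_{a1}
    M = M @ P(0, j)

d = 2 ** L                                 # super-trace over the auxiliary mode,
T0 = M[:d, :d] - M[d:, d:]                 # i.e. <0|M|0> - <1|M|1> in block form

cp = jw_modes(L)                           # chain operators on the 2^L space
shift_ok = all(np.allclose(T0 @ cp[j], cp[(j + 1) % L] @ T0) for j in range(L))
print(shift_ok)                            # True: T(0) x_j = x_{j+1} T(0)
```

Note in particular that the resulting $T(0)$ carries the genuine Fermion statistics; e.g.\ for $L=2$ it maps the doubly occupied state to minus itself.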
\subsection{Super-Transposed Fermionic ${R}$-Operator}
In this section, we shall consider another transfer matrix
which generates the right-shift operator.
For this purpose we first define the
super-transposition ${{\rm st}_j}$ for
an arbitrary operator ${X_{j}(v)}$ in the form
\begin{equation}
X_j(v) = A(v)(1-n_j) + D(v) n_j + B(v) c_j + C(v) c_j^{\dagger},
\end{equation}
by
\begin{equation}
X_j^{{\rm st}_j}(v):= A(v)(1-n_j) + D(v) n_j + B(v) c_j^{\dagger} - C(v) c_j.
\end{equation}
Here ${A(v)}$ and ${D(v)}$ (${B(v)}$ and ${C(v)}$) are
assumed to be Grassmann even (odd) operators.
Now applying the super-transposition ${{\rm st}_1}$ to both sides of the YB equation
(\ref{eq.YBE1}), we obtain
\begin{equation}
{\cal R}_{13}^{{\rm st}_1}(u) {\cal R}_{12}^{{\rm st}_1}(u-v) {\cal R}_{23}(v)
= {\cal R}_{23}(v) {\cal R}_{12}^{{\rm st}_1}(u-v)
{\cal R}_{13}^{{\rm st}_1}(u),
\end{equation}
where we have used a property of the super-transposition
\begin{equation}
\left( {\cal R}_{jk}(u) {\cal R}_{jl}(v) \right)^{{\rm st}_j} =
{\cal R}_{jl}^{{\rm st}_{j}}(v) {\cal R}_{jk}^{{\rm st}_{j}}(u), \ \ \ \ \ \
(k \ne l).
\end{equation}
Then changing suffixes and spectral parameters as
\begin{eqnarray}
& & 1 \rightarrow 3, \ \ 2 \rightarrow 1, \ \ 3 \rightarrow 2, \nonumber \\
& & u \rightarrow -v, \ \ v \rightarrow u-v,
\end{eqnarray}
we get the following new type of the YB equation
\begin{equation}
{\cal R}_{12}(u-v) \widetilde{\cal R}_{13}(u)
\widetilde{\cal R}_{23}(v)
= \widetilde{\cal R}_{23}(v) \widetilde{\cal R}_{13}(u) {\cal R}_{12}(u-v),
\label{eq.YBE2}
\end{equation}
where
\begin{eqnarray}
\widetilde{\cal R}_{jk}(v) & := & {\cal R}_{kj}^{{\rm st}_k}(-v) \nonumber \\
& = & a(-v) \left\{ - n_j n_k
+ (1-n_j)(1-n_k) \right\} \nonumber \\
& & + b(-v) \left\{ n_j (1-n_k)
+ (1-n_j) n_k \right\} \nonumber \\
& & - c(-v) ( c_j^{\dagger} c_k^{\dagger} + c_j c_k ). \label{eq.tildefR}
\end{eqnarray}
Although the new ${R}$-operator ${\widetilde{\cal R}_{jk}(v)}$ is not
symmetric (${\widetilde{\cal R}_{jk}(v) \ne \widetilde{\cal R}_{kj}(v)}$),
it is still possible to prove the relation
\begin{equation}
\widetilde{\cal R}_{12}(u-v) \widetilde{\cal R}_{13}(u)
{\cal R}_{23}(v)
= {\cal R}_{23}(v) \widetilde{\cal R}_{13}(u) \widetilde{\cal R}_{12}(u-v).
\label{eq.YBE3}
\end{equation}
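Both (\ref{eq.YBE2}) and (\ref{eq.YBE3}) can again be checked numerically in a Jordan--Wigner matrix representation of three modes (an illustrative sanity check with arbitrary sample parameters; modes are labeled $0,1,2$ in place of $1,2,3$):

```python
import numpy as np

def jw_modes(n_modes):
    a = np.array([[0.0, 1.0], [0.0, 0.0]])
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    c = []
    for j in range(n_modes):
        m = np.eye(1)
        for k in range(n_modes):
            m = np.kron(m, Z if k < j else (a if k == j else I))
        c.append(m)
    return c

eta = 0.35                                   # sample value
c = jw_modes(3)
cd = [m.T for m in c]
n = [cd[j] @ c[j] for j in range(3)]
Id = np.eye(8)

def R(j, k, v):                              # (eq.fR)
    av = np.sin(eta * (v + 2)) / np.sin(2 * eta)
    bv = np.sin(eta * v) / np.sin(2 * eta)
    return (av * (-(n[j] @ n[k]) + (Id - n[j]) @ (Id - n[k]))
            + bv * (n[j] @ (Id - n[k]) + (Id - n[j]) @ n[k])
            + cd[j] @ c[k] - c[j] @ cd[k])

def Rt(j, k, v):                             # (eq.tildefR)
    av = np.sin(eta * (-v + 2)) / np.sin(2 * eta)
    bv = np.sin(eta * (-v)) / np.sin(2 * eta)
    return (av * (-(n[j] @ n[k]) + (Id - n[j]) @ (Id - n[k]))
            + bv * (n[j] @ (Id - n[k]) + (Id - n[j]) @ n[k])
            - (cd[j] @ cd[k] + c[j] @ c[k]))

u, v = 0.27, 0.68
ybe2 = np.allclose(R(0, 1, u - v) @ Rt(0, 2, u) @ Rt(1, 2, v),
                   Rt(1, 2, v) @ Rt(0, 2, u) @ R(0, 1, u - v))
ybe3 = np.allclose(Rt(0, 1, u - v) @ Rt(0, 2, u) @ R(1, 2, v),
                   R(1, 2, v) @ Rt(0, 2, u) @ Rt(0, 1, u - v))
print(ybe2, ybe3)                            # True True
```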
Using ${\widetilde{\cal R}_{aj}(v)}$, we define another transfer matrix by
\begin{eqnarray}
\widetilde{T}(v)&:=&
{\rm Str}_a \left\{ \widetilde{\cal R}_{aL}(v) \cdots \widetilde{\cal R}_{a1}(v)
\right\}.
\label{eq.transfer2}
\end{eqnarray}
Then the commutative properties of
the transfer matrices follow from the YB equations
(\ref{eq.YBE2}) and (\ref{eq.YBE3}),
\begin{equation}
\left[ T(v), \widetilde{T}(v') \right]
= \left[ \widetilde{T}(v), \widetilde{T}(v') \right] = 0. \label{eq.comm1}
\end{equation}
The following remarkable relations hold
\begin{equation}
\widetilde{\cal P}_{aj} \widetilde{\cal P}_{a,j-1} x_{j} =
x_{j-1} \widetilde{\cal P}_{aj} \widetilde{\cal P}_{a,j-1},
\ \ (x_j = c_j \ {\rm or} \ c_j^{\dagger}),
\label{eq.barPPC}
\end{equation}
where
\begin{eqnarray}
\widetilde{\cal P}_{jk} & := & \widetilde{\cal R}_{jk}(0) \nonumber \\
& = & (1-n_j)(1 -n_k) - n_j n_k - (c_j^{\dagger} c_k^{\dagger} + c_j c_k).
\end{eqnarray}
Using the relations (\ref{eq.barPPC}),
one can confirm that the operator ${\widetilde{T}(0)}$
provides the right-shift operator, i.e.,
\begin{equation}
\widetilde{T}(0) x_{j} = x_{j-1} \widetilde{T}(0),
\ \ \ \ (x_j = c_j \ {\rm or} \ c_j^{\dagger}).
\end{equation}
In other words,
${\widetilde{T}(0)}$ is the inverse of ${T(0)}$
\begin{equation}
T(0) \widetilde{T}(0) = 1. \label{eq.inverse}
\end{equation}
Furthermore, from the relationship
\begin{eqnarray}
& & \widetilde{\cal P}_{a,j+1} \frac{{\rm d}
\widetilde{\cal R}_{aj}(v)}{{\rm d} v} \Big|_{v=0} \nonumber \\
& & = - \frac{2 \eta}{\sin 2 \eta} \widetilde{\cal P}_{a,j+1}
\widetilde{\cal P}_{aj} \left( {\cal H}_{j+1,j} +
\frac{1}{4} \Delta \right),
\end{eqnarray}
the expansion of the transfer matrix ${\widetilde{T}(v)}$
with respect to the spectral parameter ${v}$ is given by
\begin{equation}
\widetilde{T}(v) =
\widetilde{T}(0) \left\{ 1 - \frac{2 \eta }{\sin 2 \eta}
\left( {\cal H} + \frac{L}{4} \Delta \right) v +
{\cal O}(v^2) \right\}. \label{eq.expansion2}
\end{equation}
\subsection{Commuting Quantum Transfer Matrix}
The expansions (\ref{eq.expansion1}) and (\ref{eq.expansion2})
with the relation (\ref{eq.inverse}) are combined into a formula
\begin{equation}
T(u) \widetilde{T}(-u) = 1 + \frac{4 \eta }{\sin 2 \eta}
\left( {\cal H} + \frac{L}{4} \Delta \right) u + {\cal O}(u^2).
\end{equation}
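This combined expansion, as well as (\ref{eq.inverse}), can be checked numerically for a small ring. The sketch below (an illustrative aside; the Jordan--Wigner representation and the sample values $L=3$, $\eta=\pi/6$, $u=10^{-6}$ are our own choices) compares $(T(u)\widetilde{T}(-u)-1)/u$ at small $u$ against $\frac{4\eta}{\sin 2\eta}({\cal H}+\frac{L}{4}\Delta)$:

```python
import numpy as np

def jw_modes(n_modes):
    a = np.array([[0.0, 1.0], [0.0, 0.0]])
    Z, I = np.diag([1.0, -1.0]), np.eye(2)
    c = []
    for j in range(n_modes):
        m = np.eye(1)
        for k in range(n_modes):
            m = np.kron(m, Z if k < j else (a if k == j else I))
        c.append(m)
    return c

L, eta = 3, np.pi / 6                        # sample values; Delta = 1/2, t = 1
Delta = np.cos(2 * eta)
c = jw_modes(L + 1)                          # mode 0: auxiliary; 1..L: chain
cd = [m.T for m in c]
n = [cd[j] @ c[j] for j in range(L + 1)]
Id = np.eye(2 ** (L + 1))
d = 2 ** L

def R(j, k, v, tilde=False):                 # (eq.fR) and (eq.tildefR)
    s = -1.0 if tilde else 1.0
    av = np.sin(eta * (s * v + 2)) / np.sin(2 * eta)
    bv = np.sin(eta * s * v) / np.sin(2 * eta)
    diag = (av * (-(n[j] @ n[k]) + (Id - n[j]) @ (Id - n[k]))
            + bv * (n[j] @ (Id - n[k]) + (Id - n[j]) @ n[k]))
    if tilde:
        return diag - (cd[j] @ cd[k] + c[j] @ c[k])
    return diag + cd[j] @ c[k] - c[j] @ cd[k]

def transfer(v, tilde=False):                # Str_a { R_{aL}(v) ... R_{a1}(v) }
    M = np.eye(2 ** (L + 1))
    for j in range(L, 0, -1):
        M = M @ R(0, j, v, tilde)
    return M[:d, :d] - M[d:, d:]             # super-trace over the auxiliary mode

cp = jw_modes(L)                             # chain-only operators and Hamiltonian
cpd = [m.T for m in cp]
num = [cpd[j] @ cp[j] for j in range(L)]
Ip = np.eye(d)
H = np.zeros((d, d))
for j in range(L):                           # (eq.hamiltonian) with t = 1
    k = (j + 1) % L
    H += 0.5 * (cpd[j] @ cp[k] + cpd[k] @ cp[j]
                + 2 * Delta * (num[j] - Ip / 2) @ (num[k] - Ip / 2))

print(np.allclose(transfer(0.0) @ transfer(0.0, tilde=True), Ip))  # True, (eq.inverse)

u = 1e-6
D = (transfer(u) @ transfer(-u, tilde=True) - Ip) / u
target = (4 * eta / np.sin(2 * eta)) * (H + (L / 4) * Delta * Ip)
print(np.max(np.abs(D - target)))            # O(u): first-order expansion holds
```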
This formula facilitates the investigation of
the finite temperature properties of the spinless Fermion model
(\ref{eq.hamiltonian}) via the Trotter formula,
\begin{eqnarray}
\exp \left( - \beta \left( {\cal H} +\frac{L}{4} \Delta \right) \right)
&=& \lim_{N \rightarrow \infty} \left( T(u_N) \widetilde{T}(-u_N) \right)^{N/2},
\nonumber \\
& & u_N = -\frac{\beta \sin 2 \eta}{2 \eta N}.
\end{eqnarray}
Here the (even) integer ${N}$, called the Trotter number,
represents the number of sites in the fictitious
Trotter direction, and ${\beta}$
is the inverse temperature ${\beta = 1/T}$.
The free energy per site, for instance, is given by
\begin {equation}
f = - \lim_{L \rightarrow \infty} \lim_{N \rightarrow \infty}
\frac{1}{L \beta} \ln {\rm Tr}
\left(T(u_N) \widetilde{T}(-u_N) \right)^{N/2} - \frac{1}{4} \Delta.
\label{eq.fenergy}
\end{equation}
However, as is the case with the corresponding spin model,
the eigenvalues of ${T(u_N) \widetilde{T}(-u_N)}$ are infinitely
degenerate in the limit ${N \rightarrow \infty}$.
Therefore it is a formidable task to take the trace in this limit.
To avoid this difficulty, we transform the term
${{\rm Tr} \left(T(u_N) \widetilde{T}(-u_N) \right)^{N/2}}$ in
(\ref{eq.fenergy}) as follows:
\begin{eqnarray}
& & {\rm Tr} \left(T(u_N) \widetilde{T}(-u_N) \right)^{N/2} \nonumber \\
& & = {\rm Tr} \prod_{m=1}^{N/2} {\rm Str}_{a_{2m},a_{2m-1}}
\Big[ {\cal R}_{a_{2m},L}(u_N) \cdots {\cal R}_{a_{2m},1}(u_N) \nonumber \\
& & \hspace{1cm}\times\widetilde{\cal R}_{a_{2m-1},L}(-u_N)
\cdots \widetilde{\cal R}_{a_{2m-1},1}(-u_N) \Big], \nonumber \\
& & = {\rm Str} \ \prod_{j=1}^{L} {\rm Tr}_{j} \prod_{m=1}^{N/2}
{\cal R}_{a_{2m},j}(u_N) \widetilde{\cal R}_{a_{2m-1},j}(-u_N).
\end{eqnarray}
We now introduce a fundamental object in the present approach
called the quantum transfer matrix (QTM)
\begin{equation}
T_{\rm QTM}(u_N,v):= {\rm Tr}_j {\cal T}_{j}(u_N,v), \label{eq.QTM}
\end{equation}
where the monodromy operator ${{\cal T}_{j}(u_N,v)}$ is defined by
\begin{equation}
{\cal T}_{j} (u_N,v):= \prod_{m=1}^{N/2} {\cal R}_{a_{2m},j}(v+u_N)
\widetilde{\cal R}_{a_{2m-1},j}(v-u_N). \label{eq.QTMmonodromy}
\end{equation}
Using the YB equations (\ref{eq.YBE1}) and (\ref{eq.YBE2}),
we can show that the monodromy operator satisfies the global YB relation
\begin{eqnarray}
& & {\cal R}_{21}(v-v') {\cal T}_{1}(u_N,v) {\cal T}_{2}(u_N,v') \nonumber \\
& & = {\cal T}_{2}(u_N,v') {\cal T}_{1}(u_N,v) {\cal R}_{21}(v-v').
\label{eq.QTMYBE}
\end{eqnarray}
Accordingly the QTM constitutes a commuting family
\begin{equation}
\left[ T_{\rm QTM}(u_N,v), T_{\rm QTM}(u_N,v') \right] = 0.
\end{equation}
We remark that the trace in the definition of the QTM (\ref{eq.QTM})
implies the anti-periodic boundary condition for
the Fermion operators in the Trotter direction, \cite{USW2}
i.e.,
\begin{equation}
c_{a_{N+1}} = - c_{a_1}, \ \ c_{a_{N+1}}^{\dagger} = - c_{a_1}^{\dagger}.
\end{equation}
The free energy per site (\ref{eq.fenergy}) is then
represented in terms of the QTM as
\begin{equation}
f = - \lim_{L \rightarrow \infty} \lim_{N \rightarrow \infty}
\frac{1}{L \beta} \ln {\rm Str} \left(T_{\rm QTM}(u_N,0)^{L} \right) -
\frac{1}{4} \Delta.
\label{eq.fenergy2}
\end{equation}
Since the two limits in (\ref{eq.fenergy2})
are exchangeable, \cite{MSuzPB,InSuz}
we take the limit ${L\rightarrow \infty}$ first.
Because there is a finite gap between the largest and
the second largest eigenvalues of the QTM at finite temperature,
we can write
\begin{equation}
f = - \frac{1}{\beta} \lim_{N \rightarrow \infty} \ln \Lambda_1 - \frac{1}{4} \Delta,
\end{equation}
where ${\Lambda_1}$ is the largest
eigenvalue of the QTM ${T_{\rm QTM} (u_N,0)}$.
From now on $\Lambda_k$ denotes the
$k$-th largest eigenvalue of the QTM.
The correlation length ${\xi}$ of the
correlation function ${\langle c_j^{\dagger} c_k \rangle}$
can also be represented in terms of the largest
and the second largest eigenvalues as
\begin{equation}
\xi^{-1} = - \lim_{N \rightarrow \infty}
\ln \Big| \frac{\Lambda_2}{\Lambda_1} \Big|. \label{eq.xi}
\end{equation}
In this way the calculation of certain thermal quantities
reduces to the evaluation of the eigenvalues of the QTM
in the Trotter limit (${N \rightarrow \infty}$).
For ${N}$ finite,
it is possible to diagonalize the QTM (\ref{eq.QTM})
by means of the algebraic Bethe ansatz \cite{KBI}
(see Appendix A).
The eigenvalue is then given by
\begin{eqnarray}
& & \Lambda(x) = \lambda_1(x) + \lambda_2(x), \nonumber \\
& & \lambda_1(x):=\phi_{+}(x) \phi_{-}(x - 2 i)
\frac{Q(x + 2 i)}{Q(x)}e^{ \beta \mu /2} , \nonumber \\
& & \lambda_2(x):=(-1)^{N/2 + N_{\rm e}}
\phi_{-}(x) \phi_{+}(x + 2 i) \frac{Q(x - 2 i)}{Q(x)}e^{ -\beta \mu /2},
\nonumber \\ \label{eq.eigen}
\end{eqnarray}
where
\begin{eqnarray}
\phi_{\pm}(x) &:=& \left(
\frac{\sinh \eta (x \pm i u_{N})}{\sin 2 \eta} \right)^{N/2}, \np
Q(x) &:=& \prod_{j=1}^{N_{\rm e}} \sinh \eta (x- x_j).
\label{qdef}
\end{eqnarray}
Here we have changed the spectral parameter from
${v}$ to ${x}$ defined by ${v = i x}$ for later convenience.
Note that we have also included the contribution
from the chemical potential term (\ref{eq.chemical})
in the expression (\ref{eq.eigen}).
The associated Bethe ansatz equation (BAE) is given by
\begin{eqnarray}
& & \frac{\phi_{+}(x_j) \, \phi_{-}(x_j - 2 i)}
{\phi_{-}(x_j) \, \phi_{+}(x_j + 2 i )} \nonumber \\
& & = - (-1)^{N/2 + N_{\rm e}} e^{-\beta \mu }
\frac{Q(x_j - 2 i)}{Q(x_j + 2 i)}, \ \ \ \ j = 1, \dots, N_{\rm e}. \label{eq.bae}
\end{eqnarray}
Compared with the ${XXZ}$ model,
we observe an extra factor ${(-1)^{N/2 + N_{\rm e}}}$
in (\ref{eq.eigen}) and (\ref{eq.bae}) which reflects
the Fermionic nature of the present system.
In particular,
if ${N/2 + N_{\rm e} \equiv 1 \ ({\rm mod} \ 2)}$,
Eqs.~(\ref{eq.eigen}) and (\ref{eq.bae}) are
clearly different from the corresponding ones for the ${XXZ}$ model.
In fact, the second largest eigenvalue lies in the sector ${N_{\rm e} = N/2-1}$,
while the largest one is in the sector ${N_{\rm e} = N/2}$.
Therefore the correlation length ${\xi}$ (\ref{eq.xi})
exhibits a manifest difference between the Fermion system
(\ref{eq.hamiltonian}) and the spin system (\ref{eq.spinhamiltonian}).
\section{NLIE and the Exact Evaluation of the Correlation Length}
\subsection{Analyticities of Auxiliary Functions and NLIE}
In order to proceed further, one needs to
clarify the analytic property of the QTM.
For this purpose, we perform numerical investigations
keeping the Trotter number $N$ finite.
First we describe the largest eigenvalue
sector, which is naturally identical to that of the corresponding
$XXZ$ model.
There are $N_{\rm e} = N/2$ BAE roots.
Only at ``half-filling'' do they lie exactly on the real axis,
symmetrically with respect to $x=0$, while for a general
particle density they bend into the complex $x$ plane.
The QTM has $N$ zeros in $\Im x\in[-2p_0,2p_0]$:
$N/2$ zeros lie on the smooth curve $ \Im x \sim 2$,
and the other $N/2$ zeros on the curve $ \Im x \sim -2$.
Thus there is a strip $ \Im x\in [ -1, 1 ]$ where
the QTM is analytic and nonzero, which we call the ``physical strip''.
Next consider the excited state relevant to the
second largest eigenvalue.
In contrast to the $XXZ$ model, we find that
two complex eigenvalues are degenerate in magnitude.
Both of them are characterized by
$N_{\rm e} = N/2-1$ BAE roots located on a smooth curve
near the real axis.
The BAE root distributions of the two eigenvalues are
mirror images of each other with respect to
the imaginary axis.
As for the zeros of the QTM, $N-2$ of them lie on the
smooth curves $ \Im x \sim \pm 2$.
The locations of the two
``missing zeros'' are vital in the evaluation of the
excited states.
For the $XXZ$ model, both of them enter the physical strip.
In particular, with vanishing external field $h$,
they lie on the real axis and are symmetric with respect
to the imaginary axis.
As $h$ increases, they move away from the real
axis, but still stay in
the physical strip, preserving the symmetry.
We find a different situation for the Fermion model.
At half-filling, corresponding to $h=0$ in the $XXZ$ model,
one of them is located at $\theta_0$ on the real axis,
while the other is at $\theta'_0+ i p_0$,
with $\theta_0\sim \theta'_0$.
Namely, only one zero appears in the physical strip.
Away from half-filling, the zero in the physical strip
(we call it $\theta$) moves upward while
the other ($\theta'$) moves downward.
Nevertheless, we find that $\theta$ remains in the physical strip
while $\theta'$ never enters it.
From now on we consider the case $\Re\theta>0$ ($\Re\theta'>0$).
Then the trajectories of $\theta'$, for example,
are depicted in FIG.~\ref{zero}.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{zero.eps}
\end{center}
\caption{The trajectories of the
additional zero $\theta'$
for $p_0=3$, $N=100$.
As $T$ decreases, $\theta'$ moves
downward but never
enters the physical strip.}
\label{zero}
\end{figure}
We assume all these features remain valid in the Trotter limit
$N \rightarrow \infty$.
Then a set of nonlinear integral equations (NLIE)
can be derived
as in the case of the $XXZ$ model.\cite{KZeit}
We define auxiliary functions
\begin{eqnarray}
{\mathfrak{a}}(x) &:=&
\frac{\lambda_1(x+i-i\gamma_1)}{\lambda_2(x+i-i\gamma_1)},\qquad
{\mathfrak{A}}(x):= 1+{\mathfrak{a}}(x), \np
\overline{{\mathfrak{a}}}(x) &:=&
\frac{\lambda_2(x-i+i\gamma_2)}{\lambda_{1}(x-i+i\gamma_2)}, \qquad
\overline{{\mathfrak{A}}}(x):= 1+\overline{{\mathfrak{a}}}(x),
\label{bdef}
\end{eqnarray}
where $\gamma_1, \gamma_2$ are small positive quantities introduced
for convenience in the numerical calculations.
Note that these functions have asymptotic values
\begin{subequations}
\bea
\mfa(x)&=&\begin{cases}
\exp((-\pi+4\eta)i+\beta\mu) & \text{for $x\to-\infty$,} \\
\exp((\pi-4\eta)i+\beta\mu) & \text{for $x\to\infty$,}
\end{cases} \label{asymptotics1} \\
\overline{\mfa}(x)&=&\begin{cases}
\exp((\pi-4\eta)i-\beta\mu) & \text{for $x\to-\infty$,} \\
\exp((-\pi+4\eta)i-\beta\mu) & \text{for $x\to\infty$.}
\end{cases}
\label{asymptotics2}
\eea
\end{subequations}
As is immediately seen from the above analyticity argument,
${\mathfrak{a}}(x),{\mathfrak{A}}(x)$
($\overline{{\mathfrak{a}}}(x),\overline{{\mathfrak{A}}}(x)$) are Analytic,
NonZero and have Constant asymptotic values (ANZC) in a certain
strip in the lower (upper) half plane including the real axis.
The above definitions, together with the
knowledge of the zeros of $\Lambda(x)$,
fix the NLIE among these auxiliary functions.
We defer the detailed derivation
to Appendix C.
The resultant expressions allow for taking the Trotter limit
analytically.
Thereby one arrives at final expressions totally independent
of the fictitious parameter $N$,
\begin{eqnarray}
\ln {\mathfrak{a}}(x) &=&
-\frac{\pi \beta \sin2\eta}{4\eta \cosh\frac{\pi}{2}(x-i\gamma_1)}+
F* \ln {\mathfrak{A}}(x) \np
&&- F* \ln \overline{{\mathfrak{A}}}(x+2i-i(\gamma_1+\gamma_2)) \np
&&+2\pi i {\cal F}(x-\theta+i(1-\gamma_1))
+\frac{\beta \mu p_0}{2(p_0-1)}, \np
\ln \overline{{\mathfrak{a}}}(x) &=&
-\frac{\pi \beta \sin2\eta}{4\eta \cosh\frac{\pi}{2}(x+i\gamma_2)}+
F* \ln \overline{{\mathfrak{A}}}(x) \np
&&-F* \ln {\mathfrak{A}}(x-2i+i(\gamma_1+\gamma_2)) \np
&&-2\pi i {\cal F}(x-\theta-i(1-\gamma_2))
-\frac{\beta \mu p_0}{2(p_0-1)},
\label{NLIE}
\end{eqnarray}
where
\begin{eqnarray}
&&A*B(x):=\int_{-\infty}^{\infty} A(x-y) B(y) dy, \np
&&F(x):= \frac{1}{2\pi} \int_{-\infty}^{\infty}
\frac{\sinh(p_0 -2) k}{2 \cosh k \sinh(p_0-1)k }
e^{-ikx} dk, \np
&&{\cal F} (x):=\frac{i}{2\pi} \int_{-\infty}^{\infty}
\frac{\sinh(p_0 -2) k}{2k \cosh k \sinh(p_0-1)k }
e^{-ikx}dk.
\label{DefF}
\end{eqnarray}
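For later use, we record how the asymptotic values of ${\cal F}$ follow from (\ref{DefF}); this is a standard principal-value argument, sketched here for convenience (the $k$-integral in ${\cal F}$ being understood as a principal value). The integrand of ${\cal F}$ has a simple pole at $k=0$,
\begin{equation}
\frac{\sinh(p_0 -2) k}{2 k \cosh k \sinh(p_0-1)k}
\sim \frac{p_0-2}{2(p_0-1)} \, \frac{1}{k} \qquad (k \to 0),
\end{equation}
while the remaining part decays for large $|x|$. Combined with
${\rm P} \int_{-\infty}^{\infty} e^{-ikx} \, {\rm d}k/k = -i\pi \, {\rm sgn}(x)$,
this gives
\begin{equation}
{\cal F}(\pm\infty) = \pm \frac{p_0-2}{4(p_0-1)}
= \pm \frac{\pi-4\eta}{4(\pi-2\eta)},
\end{equation}
whereas $F(x)$, whose integrand is regular at $k=0$, vanishes as $|x|\to\infty$.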
The location of the zero $\theta$ satisfies a subsidiary condition,
\begin{equation}
{\mathfrak{a}}(\theta-i+i\gamma_1) =-1.
\label{sub}
\end{equation}
Taking the Trotter limit $N\to\infty$ after setting $x=0$ in
(\ref{second}),
we find that the ``first excited free energy''
per site $f_2$ is
\begin{eqnarray}
f_2&&=-\frac{1}{\beta} \ln \Lambda_2 (0)-\frac{1}{4}{\Delta} \np
&&=\epsilon_0 -\frac{1}{\beta} K* \ln{\mathfrak{A}}(i\gamma_1) -
\frac{1}{\beta} K* \ln \overline{{\mathfrak{A}}}(-i\gamma_2) \np
&&-\frac{1}{\beta} \ln \tanh\frac{\pi \theta}{4}
-i\frac{\pi}{2\beta},
\label{sec}
\end{eqnarray}
where $\epsilon_0$ is the ground state
energy defined in (\ref{ground}) and
\be
K(x) := \frac{1}{4\cosh \frac{\pi x}{2} }.
\end{equation}
Together with the NLIE for the largest eigenvalue,
summarized in Appendix C,
these relations completely characterize the correlation length $\xi$ of
the one-particle Green's function $\langle c^{\dagger}_j c_i \rangle$
at $T>0$ (see Eq.~(\ref{eq.xi})).
We remark that in the derivation of the above relations
one does not need precise information such as the
root distributions of the BAE.
The ANZC properties of the QTM and the auxiliary functions
are sufficient.
Thus the structure is rather robust, and allows us to introduce the
small free parameters
$\gamma_1$ and $\gamma_2$.
In the next two subsections, we present analytical and numerical
studies on these equations and the correlation length
of the one-particle Green's function,
which are the main results of this paper.
\subsection{Low temperature property of NLIE ($\mu=0$)}
We study the low temperature behavior in the
half-filling case $\mu=0$ utilizing
the dilogarithm trick \cite{KP}, which enables us to
obtain the first low temperature correction without solving the NLIE.
As in the case of the largest eigenvalue sector,
$|\mfa (x)|$ and $|\overline{\mfa}(x)|$ exhibit a crossover behavior,
\be
\begin{cases}
|\mfa(x)|,|\overline{\mfa}(x)|
\ll 1 &\text{for $|x|<{\cal K}$,} \\
|\mfa(x)|,|\overline{\mfa}(x)|
\sim 1 &\text{for $|x|>{\cal K}$,}
\end{cases}
\label{crossover}
\end{equation}
where
\be
{\cal K}:=
\frac{2}{\pi}\ln \frac{\pi\beta\sin(2\eta)}{2\eta}.
\end{equation}
Thus one must carefully take into account the
contributions near the ``Fermi surfaces'' $\pm{\cal K}$.
For this purpose, we introduce the following
shifted variables and scaling functions,
\begin{eqnarray}
la_{\pm}(x) &:=& \ln {\mathfrak{a}}\left(\pm \frac{2}{\pi}x
\pm {\cal K}\right), \np
l\overline{a}_{\pm}(x) &:=& \ln \overline{{\mathfrak{a}}}\left(
\pm\frac{2}{\pi}x\pm {\cal K}\right), \np
\overline{\theta} &:=& \frac{\pi}{2}(\theta - {\cal K}),
\label{scaling}
\end{eqnarray}
and similarly for the capital functions
$\mfA$, $\overline{\mfA}$, $A_{\pm}$ and $\overline{A}_{\pm}$.
In the limit $T \rightarrow 0$, they satisfy the truncated equations,
\begin{subequations}
\begin{eqnarray}
la_{+}(x) &=& -e^{-x +\frac{\pi}{2}i\gamma_1}+
F_1*lA_{+}(x) -F_2*l\overline{A}_{+}(x) \np
& & +2\pi i {\cal F}\left(\frac{2}{\pi}(x - \overline{\theta}) +
i (1 - \gamma_1)\right),
\label{scaleNLIE1} \\
l\overline{a}_{+}(x) &=&-e^{-x-\frac{\pi}{2}i\gamma_2}+
F_1*l\overline{A}_{+}(x) -\overline{F}_2*lA_{+}(x) \np
& & - 2\pi i {\cal F}\left(
\frac{2}{\pi}(x-\overline{\theta})-i(1-\gamma_2)\right),
\label{scaleNLIE2} \\
la_{-}(x) &=& -e^{-x-\frac{\pi}{2}i\gamma_1} +
F_1*lA_{-}(x) - \overline{F}_2*l\overline{A}_{-}(x) \np
& & + 2\pi i {\cal F}(- \infty),
\label{scaleNLIE3} \\
l\overline{a}_{-}(x) &=& -e^{-x + \frac{\pi}{2}i\gamma_2} +
F_1*l\overline{A}_{-}(x) -F_2*lA_{-}(x) \np
& & - 2\pi i {\cal F}(- \infty),
\label{scaleNLIE4}
\end{eqnarray}
\end{subequations}
where
\bea
F_1(x)&&:=\frac{2}{\pi} F\left(\frac{2 x}{\pi}\right),\np
F_2(x)&&:=\frac{2}{\pi} F\left(\frac{2}{\pi}x+2i-
i(\gamma_1+\gamma_2)\right),
\eea
and $\overline{F}_1, \overline{F}_2$ are their complex conjugates.
In this limit, the finite-$T$ correction part $\ln\Lambda_{\rm fn}(x)$
(see (\ref{finiteT})) reads
\begin{eqnarray}
\ln&&\Lambda_{\rm fn}(x) \sim
\frac{\pi}{2}i+\frac{2\eta}{\pi^2 \beta \sin 2 \eta}
\Bigl (
-2\pi e^{\frac{\pi}{2}x - \overline{\theta}} \np
& & + e^{\frac{\pi}{2}x}\int_{-\infty}^{\infty} e^{-y}
\{ e^{\frac{\pi}{2}i\gamma_1} lA_{+}(y) +
e^{-\frac{\pi}{2}i\gamma_2} l{\overline A}_{+}(y)\} dy \np
& & +e^{-\frac{\pi}{2}x }
\int_{-\infty}^{\infty} e^{-y}
\{ e^{-\frac{\pi}{2}i\gamma_1} lA_{-}(y) +
e^{\frac{\pi}{2}i\gamma_2} l{\overline A}_{-}(y) \} dy
\Bigr). \np
\label{scaleLam}
\end{eqnarray}
Thanks to the subsidiary condition for the additional zero
$\theta$ (\ref{sub}), we have
\begin{eqnarray}
e^{-\overline{\theta}} &&=\pi - \frac{2i}{\pi} \biggl(
\int_{-\infty}^{\infty}
F\left(\frac{2}{\pi}(z-\overline{\theta})+
i(1-\gamma_1)\right)lA_{+}(z)dz \np
-&& \int_{-\infty}^{\infty}
F\left(\frac{2}{\pi}(z-\overline{\theta})-
i(1-\gamma_2)\right)l\overline{A}_{+}(z)dz
\biggr).
\label{scaletheta}
\end{eqnarray}
For further simplification, we define $D_{\pm}$ by,
\begin{eqnarray}
D_{\pm} &:= &\int_{-\infty}^{\infty}
\Bigl( lA_{\pm}(x) \frac{d}{dx}la_{\pm}(x)+l\overline{A}_{\pm}(x)
\frac{d}{dx}l\overline{a}_{\pm}(x) \np
& & -la_{\pm}(x)\frac{d}{dx}lA_{\pm}(x)-
l\overline{a}_{\pm}(x)\frac{d}{dx}l\overline{A}_{\pm}(x)\Bigr) dx \np
&=& \int_{a_{\pm}(-\infty)}^{a_{\pm}(\infty)}
\left( \frac{\ln(1+a)}{a}- \frac{\ln a}{1+a} \right) da \np
& & + \int_{\overline{a}_{\pm}(-\infty)}^{\overline{a}_{\pm}(\infty)}
\left(\frac{\ln(1+\overline{a})}{\overline{a}}-
\frac{\ln \overline{a}}{1+\overline{a}}\right) d\overline{a}.
\end{eqnarray}
Obviously, they are equal to special values of the Rogers dilogarithm ${\cal L}$,
\begin{eqnarray}
D_{\pm} &&=
2 {\cal L} \left( \frac{a_{\pm}(\infty)}{1+a_{\pm}(\infty)} \right)+
2 {\cal L} \left( \frac{\overline{a}_{\pm}(\infty)}{1+\overline{a}_{\pm} (\infty)} \right) \np
&& - 2 {\cal L} \left(\frac{a_{\pm}(-\infty)}{1+a_{\pm}(-\infty)} \right)-
2 {\cal L} \left(\frac{\overline{a}_{\pm}(-\infty)}
{1+\overline{a}_{\pm} (-\infty)} \right), \np
{\cal L}(x)&&:=-\frac{1}{2} \int_{0}^{x} dy \left[ \frac{\ln (1-y)}{y} +
\frac{\ln y}{1 - y} \right].
\eea
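As a quick numerical aside, the identity ${\cal L}(x)+{\cal L}(1-x)=\pi^2/6$ used below can be checked through the equivalent closed form ${\cal L}(x)={\rm Li}_2(x)+\frac{1}{2}\ln x \ln(1-x)$ for $0<x<1$, which follows from the integral definition above by partial integration (the sample point $x=0.3$ is arbitrary):

```python
import math

def li2(x, terms=400):
    # dilogarithm Li_2(x) = sum_{k>=1} x^k / k^2, convergent for |x| < 1
    return sum(x ** k / k ** 2 for k in range(1, terms + 1))

def rogers_L(x):
    # Rogers dilogarithm: L(x) = Li_2(x) + (1/2) ln(x) ln(1-x), 0 < x < 1
    return li2(x) + 0.5 * math.log(x) * math.log(1 - x)

x = 0.3
print(abs(rogers_L(x) + rogers_L(1 - x) - math.pi ** 2 / 6) < 1e-10)  # True
print(abs(rogers_L(0.5) - math.pi ** 2 / 12) < 1e-10)                 # True
```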
We then apply the dilogarithm trick to (\ref{scaleNLIE1})--(\ref{scaleNLIE4}).
For example, take the first two equations.
After differentiating,
we multiply them by $lA_+(x)$ and $l\overline{A}_+(x)$, respectively,
and take the sum.
We call the resultant equality (A).
Next we multiply (\ref{scaleNLIE1}) and
(\ref{scaleNLIE2}) by $(lA_+(x))'$ and
$(l\overline{A}_+(x))'$, respectively, and
take the sum.
Let us call the outcome (B).
Finally we subtract (B) from (A) and integrate over $x$.
The lhs of the resulting equality is nothing but $D_+$.
Remarkably, on the rhs the most complicated terms, such as
\begin{eqnarray}
& & - \int lA_+(x) \frac{d F_2(x-y)}{dx} l\overline{A}_+(y) dx dy \np
& & = - \int lA_+(x) F_2(x-y) \frac{d l\overline{A}_+(y)}{dy} dx dy,
\end{eqnarray}
and
\be
\int \frac{dl \overline{A}_+(x)}{dx} \overline{F}_2(x-y) l {A_+}(y) dx dy,
\end{equation}
cancel each other.
After rearrangement we obtain,
\begin{eqnarray}
& &D_+ + 2\pi i {\cal F}(\infty) \ln \frac{A_{+}(\infty)}{\overline{A}_{+}(\infty)}
\np
& & = \int_{-\infty}^{\infty} 2 e^{-y}
\left( e^{\frac{\pi}{2}i\gamma_1} lA_+(y) +
e^{- \frac{\pi}{2}i\gamma_2} l\overline{A}_+(y) \right) dy \np
& & + 8 i \int_{-\infty}^{\infty} F\left(\frac{2}{\pi}(x-\overline{\theta})+
i(1-\gamma_1)\right)lA_{+}(x) dx \np
& & - 8 i \int_{-\infty}^{\infty} F\left(\frac{2}{\pi}(x-\overline{\theta})-
i(1-\gamma_2)\right)l\overline{A}_{+}(x) dx,
\label{trick1}
\end{eqnarray}
where $a_+(-\infty)=\overline{a}_+(-\infty)=0$ is used.
Similarly, from (\ref{scaleNLIE3}) and (\ref{scaleNLIE4}), we have
\begin{eqnarray}
& & D_- + 2\pi i {\cal F}(-\infty) \ln \frac{A_{-}(\infty)}{\overline{A}_{-}(\infty)} \np
& & = \int_{-\infty}^{\infty} 2 e^{-x} \left( e^{- \frac{\pi}{2}i \gamma_1} lA_-(x) +
e^{\frac{\pi}{2}i\gamma_2} l\overline{A}_-(x) \right) dx. \np
\label{trick2}
\end{eqnarray}
Applying (\ref{trick1}) and (\ref{trick2}), together with
(\ref{scaletheta}), to (\ref{scaleLam}), we obtain
\begin{eqnarray}
& & \ln\Lambda_{\rm fn}(x) \sim \frac{\pi}{2}i+
\frac{2\eta}{2\pi^2 \beta \sin 2 \eta} \np
& & \times \Bigl\{ e^{\frac{\pi}{2} x} \left( - 4 \pi^2 + D_+ +
2\pi i {\cal F}(\infty) \ln \frac{A_{+}(\infty)}{\overline{A}_{+}(\infty)} \right) \np
& & \ \ \ \ + e^{-\frac{\pi x}{2}} \left( D_- + 2\pi i {\cal F}(-\infty)
\ln \frac{A_{-}(\infty)}{\overline{A}_{-}(\infty)} \right) \Bigr \}.
\label{trickLam}
\end{eqnarray}
Now that the asymptotic values are easily found,
\bea
{\cal F}(\infty)&&=-{\cal F}(-\infty) =
\frac{\pi-4\eta}{4(\pi-2\eta)}, \np
a_+(\infty)&&=\overline{a}_-(\infty)=e^{(\pi-4\eta)i}, \np
a_-(\infty)&&=\overline{a}_+(\infty)=e^{(-\pi+4\eta)i},
\eea
we can explicitly evaluate (\ref{trickLam})
at $x=0$,
\be
\ln\Lambda_{\rm fn}(x=0) =
\frac{\pi}{6\beta v_{\rm F}}
-\frac{\pi}{\beta v_{\rm F}}
\left( \frac{1}{\alpha}+\frac{\alpha}{4} \right)+\frac{\pi}{2}i,
\end{equation}
where ${\cal L}(x)+{\cal L}(1-x)=\pi^2/6$ is also applied.
Here $\alpha$ is introduced in (\ref{alpha}) and
the Fermi velocity $v_{{\rm F}}$ is also derived
in (\ref{fv2}) for $n_{\rm e}=0.5$.
The first term is identical to that in the largest eigenvalue sector, and it
reproduces the conformal anomaly term with $c=1$.
Comparing the two eigenvalues, one concludes
\be
\frac{\Lambda_2}{\Lambda_1} \sim e^{ i k_{{\rm F}} -1/\xi},
\label{eqkf}
\end{equation}
where $ k_{{\rm F}}$ denotes the ``Fermi momentum".
Note that $k_{\rm F}=\pi/2$ in the half-filling case.
Consequently, the inverse correlation length is given by
\begin{equation}
\xi^{-1} = \frac{\pi T}{v_{{\rm F}}}\left(\frac{1}{\alpha}+\frac{\alpha}{4}
\right).
\end{equation}
These are nothing but the expected results from CFT (see (\ref{xitcft})).
This confirms both the consistency of our result and
the validity of the CFT mapping for the finite-temperature problem at low temperatures.
\subsection{Numerical Analysis of the NLIE}
Having verified consistency at the specific limits, we now
perform numerical analyses on the NLIE for
a wide range of temperatures,
electron fillings and interaction strengths.
To keep the electron filling constant, we adopt a
temperature-dependent
chemical potential, which is determined by the condition
\be
\frac{d\langle n_{\rm e} (T, \mu(T))\rangle}{dT}=
\frac{d}{dT}\left(\frac{\partial f}{\partial\mu}\right)_T=0.
\label{chem}
\end{equation}
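In practice this amounts to a one-dimensional root search for $\mu$ at each temperature. The following sketch illustrates the procedure for a free tight-binding band $\varepsilon(k)=-\cos k$ (an illustration only, not the interacting model; by particle-hole symmetry $\mu\equiv 0$ at half filling):

```python
import math

def filling(mu, T, nk=2000):
    # n_e(mu, T) = (1/pi) * int_0^pi dk [1 + exp((eps(k)-mu)/T)]^{-1}, eps(k) = -cos k
    total = 0.0
    for i in range(nk):
        k = (i + 0.5) * math.pi / nk          # midpoint rule
        total += 1.0 / (1.0 + math.exp((-math.cos(k) - mu) / T))
    return total / nk

def mu_of_T(ne, T):
    # bisection for filling(mu, T) = ne; the filling increases monotonically in mu
    lo, hi = -5.0, 5.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if filling(mid, T) < ne:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# particle-hole symmetry pins mu(T) = 0 at half filling n_e = 1/2
assert abs(mu_of_T(0.5, T=0.3)) < 1e-6
# away from half filling, mu acquires a genuine T dependence
mus = [mu_of_T(0.3, T) for T in (0.05, 0.5)]
assert abs(filling(mus[0], 0.05) - 0.3) < 1e-6
assert abs(mus[1] - mus[0]) > 1e-2
```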
The NLIE are numerically solved by the iteration
method.
In each iteration step,
the convolution parts are treated by the fast Fourier
transform (FFT).
As a technical remark, we call attention to the
proper rescaling of the auxiliary functions for the FFT;
one needs to modify the integrands so that their
asymptotic values vanish.
{}From the asymptotics in
(\ref{asymptotics1}) and (\ref{asymptotics2}),
we introduce
\be
\mfB(x) :=\begin{cases}
\mfA(x)/\mfA(\infty)
& \text{for $x\ge 0$,} \\
\mfA(x)/\mfA(-\infty)
& \text{for $x<0$,}
\end{cases}
\end{equation}
and similarly for others.
We also rewrite NLIE in terms of ${\mathfrak{B}}(x)$,
which has now zero asymptotic values.
For example,
\begin{eqnarray}
\ln {\mathfrak{a}}&&(x)=
-\frac{\pi \beta \sin(2\eta)}{4\eta \cosh\frac{\pi}{2}(x-i\gamma_1)}+
F*\ln\mfB(x) \np
&&-F*\ln\overline{\mfB}(x+2i-i(\gamma_1+\gamma_2))
+{\cal F}(x) \ln \frac{\mfA(\infty)}{\mfA(-\infty)} \np
&&-{\cal F}(x+2i-i(\gamma_1+\gamma_2))
\ln \frac{\overline{\mfA}(\infty)}{\overline{\mfA}(-\infty)} \np
&&+2\pi i {\cal F}(x-\theta+i(1-\gamma_1))
+\beta\mu.
\eea
In addition, one must be careful with the branch cuts of the
logarithms.
In the above,
$\ln \mfA(\infty)/\mfA(-\infty)$ and so on
must be understood as
\be
\ln \frac{{\mathfrak{A}}(\infty)}{{\mathfrak{A}}(-\infty)}=
\ln \left( -\frac{\sinh(\beta \mu/2-2i\eta)}{\sinh(\beta \mu/2 +2i\eta)}
\right)
+ (\pi-4\eta) i.
\end{equation}
Under these arrangements, the iteration method works in a stable manner.
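The scheme can be illustrated on a toy TBA-type equation of the same structure (a sketch only; the kernel and driving term are chosen for simplicity and are not those of our NLIE). A zero-padded FFT evaluates the convolution without wrap-around, and the iteration converges geometrically since $\int K\,dx=1/2$:

```python
import numpy as np

# Toy fixed-point problem:  eps(x) = r cosh(x) - int K(x-y) ln(1+e^{-eps(y)}) dy,
# with K(x) = 1/(2 pi cosh x).  Solved by iteration; convolutions via zero-padded FFT.
L, N = 20.0, 4096
x = -L + 2.0 * L * np.arange(N) / N
dx = 2.0 * L / N
K = 1.0 / (2.0 * np.pi * np.cosh(x))

def conv(kern, f):
    # linear convolution  int K(x-y) f(y) dy  on the grid, zero-padded (no wrap-around)
    full = np.fft.irfft(np.fft.rfft(kern, 2 * N) * np.fft.rfft(f, 2 * N), 2 * N)
    return full[N // 2 : N // 2 + N] * dx

r = 0.1
eps = r * np.cosh(x)                       # driving term as the initial guess
for _ in range(100):
    new = r * np.cosh(x) - conv(K, np.log1p(np.exp(-eps)))
    if np.max(np.abs(new - eps)) < 1e-13:
        eps = new
        break
    eps = new

# converged solution: small residual, even in x, and below the bare driving term at x = 0
residual = np.max(np.abs(eps - (r * np.cosh(x) - conv(K, np.log1p(np.exp(-eps))))))
assert residual < 1e-10
assert np.max(np.abs(eps[1:] - eps[:0:-1])) < 1e-8
assert eps[N // 2] < r
```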
We plot the temperature dependence of the
correlation length $\xi T$ in FIG.~\ref{cl1} for various fillings,
keeping the interaction
strength fixed at $\Delta= \cos (\pi/6)$.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{sfmcl6.eps}
\end{center}
\caption{The temperature dependence of
the correlation length $\xi$ for $p_0$=6.}
\label{cl1}
\end{figure}
The values extrapolated to $T \rightarrow 0$ agree with the predictions from
CFT within a few percent, even far away from ``half-filling''
($n_{\rm e}=0.5$).
The curves go down gradually as the electron density $n_{\rm e}$ decreases.
In addition, the chemical potential $\mu(T)$ determined by
Eq.~(\ref{chem}) and
the location of the additional zero $\theta$
are depicted in FIG.~\ref{mu} and FIG.~\ref{sfmzero6},
respectively.
The zero $\theta$ moves on a smooth curve whose curvature
increases as $n_{\rm e}$ decreases.
In fact, we find that it approaches $\theta=i$
as $n_{\rm e}, T \rightarrow 0$.
(See also the analytic argument for the non-interacting Fermion case
in FIG.~\ref{fzeros} for $\mu=1.0$.)
We also calculate the ``Fermi momentum''
$k_{\rm{F}}=\Im \ln \Lambda_2/\Lambda_1$
(cf.\ Eq.~(\ref{eqkf})).
(Here the inverse period of the oscillatory behavior at
arbitrary $T$ is referred to as $k_{\rm F}$, as in the case of $T=0$.)
FIG.~\ref{kf} clearly shows the temperature dependence of $k_{\rm F}$.
In the low temperature limit
$T \rightarrow 0$, it converges to the expected value,
$k_{{\rm F}} = n_{\rm e} \pi$,
which indicates the significance of the Fermi surface for one-particle
excitations in the Luttinger liquid at $T=0$.\cite{KY}
With the increase of $T$, the auxiliary functions cease to
exhibit the sharp crossover behavior (\ref{crossover}),
which roughly corresponds
to the broadening of the Fermi distribution at $T>0$.
Particle excitations are enhanced over a wide range
near the Fermi surface, which yields the shift of $k_{\rm F}$.
We remark that such a $T$-dependent
oscillatory behavior has been reported for the longitudinal
correlation function of the ferromagnetic Heisenberg model.\cite{FKM2}
Although the physical origins are different in the two cases,
the explicit determination of the $T$ dependence is important.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{sfmmu6.eps}
\end{center}
\caption{The temperature dependence of
the chemical potential $\mu$ for $p_0$=6.}
\label{mu}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{sfmzero6.eps}
\end{center}
\caption{The trajectory of the
additional zero $\theta$ inside the physical strip.}
\label{sfmzero6}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{sfmkf6.eps}
\end{center}
\caption{The temperature dependence of
the Fermi momentum $k_{{\rm F}}$ for $p_0$=6.}
\label{kf}
\end{figure}
FIGs.~\ref{sfmcl01} and \ref{sfmcl04}
present the temperature dependence of the correlation length
for various interaction strengths
at fixed $n_{\rm e}$.
Naturally, in the limit $n_{\rm e}, T\rightarrow 0$,
$\xi T$ does not depend significantly on
the interaction strength;
it merely behaves as
$\xi T \sim v_{{\rm F}}/\pi \sim n_{\rm e}$
(see Appendix B).
This behavior is typical of non-interacting cases.
Although our model inherits strong correlations,
FIG.~\ref{sfmcl01} indicates that $n_{\rm e}=0.1$ is already well
described by the ``non-interacting approximation'',
and also shows that this approximation is applicable
over a wide range of $T$.
On the other hand, the data for $n_{\rm e}=0.4$ show a strong dependence on
$\Delta$; therefore this filling belongs to the proper ``interacting class''
(see FIG.~\ref{sfmcl04}).
It seems that this crossover occurs near $n_{\rm e} \sim 0.25$, but
this is not yet conclusive.
We hope to clarify this point in a future communication.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{sfmcl01.eps}
\end{center}
\caption{The temperature dependence of
the correlation length for $n_{\rm e}=0.1$.}
\label{sfmcl01}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{sfmcl04.eps}
\end{center}
\caption{The temperature dependence of
the correlation length for $n_{\rm e}$=0.4.}
\label{sfmcl04}
\end{figure}
Finally, we plot the correlation length of the
transverse spin-spin correlation
$\langle\sigma^{+}_j\sigma^{-}_i\rangle$ without
an external field (FIG.~\ref{clxxz}) for comparison with
the $n_{\rm e}=0.5$ case of the spinless Fermion model
(FIG.~\ref{sfmcl}).
Besides the difference between their limiting values
at $T \rightarrow 0$, one clearly sees the difference in
the dependence of $\xi T$ on $T$.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{xxzcl.eps}
\end{center}
\caption{The correlation length for $\langle
\sigma^{+}_j\sigma^{-}_i\rangle$ of the
corresponding ${XXZ}$ model with zero magnetic field.}
\label{clxxz}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{sfmcl.eps}
\end{center}
\caption{The temperature dependence of
the correlation length at half-filling.}
\label{sfmcl}
\end{figure}
\section{Summary and discussion}
We have proposed the QTM approach to integrable
lattice Fermion systems at arbitrary finite temperature.
The Fermionic $R$-operator,
together with its super-transposition
$\widetilde{R}$, in which
the Fermion statistics is embedded naturally,
plays the crucial role in
this approach.
Consequently, we have observed a
significant difference between the
Fermion model and
the corresponding spin model.
In principle,
we can apply this approach to
any integrable 1D Fermion system.
The application to
the Hubbard model is in progress.
Here we comment on the ``attractive regime''
$t>0$, $\Delta<0$ in (\ref{eq.hamiltonian}),
which we have not addressed in this
paper.
In the ${XXZ}$ model
without an external magnetic field,
one may recall the remarkable difference
between the repulsive (anti-ferromagnetic)
case and the attractive (ferromagnetic) one.
\cite{FKM,FKM2}
In the repulsive regime,
the eigenvalues related
to the correlations $\langle
\sigma^{+}_j\sigma^{-}_i\rangle$ and
$\langle\sigma^{z}_j\sigma^{z}_i\rangle$
are characterized by two real
additional zeros
which are symmetric with respect to
the imaginary axis.
This symmetry is never broken at any
temperature.
In the attractive regime, on the other hand,
``level crossings''
occur successively.
One may attribute them to
the change of the distribution patterns
of the additional zeros.
It will be interesting to see
whether similar phenomena occur for the
spinless Fermion model in the attractive
regime.
Finally, we refer to another formulation of the NLIE,
derived from a different choice of the auxiliary
functions.
These NLIE have a close connection
with the ``TBA'' or ``excited states TBA''
equations from the standard ``string
hypothesis''.
The idea is as follows.
First we embed the QTM itself
into a more general family called $T$-functions
and explore functional relations among them
(the $T$-system).
Then we define the $Y$-functions by
certain ratios of the $T$-functions
and also derive functional relations
for them (the $Y$-system).
The analytical properties of these functions
lead to NLIE which determine the
free energy and the correlation length.
As concerns the largest eigenvalue sector,
the $T$-functions coincide with those
in Ref.~\onlinecite{KSS}.
Therefore the derived NLIE for the free energy
are identical to
the TBA equations
of the ${XXZ}$ model.\cite{TS,KSS}
In contrast,
for the second largest eigenvalue sector
we find an essential difference between
the Fermion model and the corresponding
spin model.
For example, we write explicitly
the NLIE (excited state TBA equation)
for $p_0=5$, $\mu=0$ as
\bea
\ln\eta_1(x)&=&-\frac{5\beta\sin\frac{\pi}{5}x}
{2\cosh\frac{\pi}{2} x}+K*\ln(1+\eta_2)(x)+\pi i,\nn \\
\ln\eta_2(x)&=&K*\ln(1-\eta_1)(1-\eta_3)(x) \nn \\
&+&\ln\left(\tanh\frac{\pi}{4}(x-\theta_1)
\tanh\frac{\pi}{4}(x-\theta_2)
\right)+\pi i, \nn \\
\ln\eta_3(x)&=&K*\ln(1+\eta_2)(1-\kappa^{2})(x) \nn \\
\ln\kappa&=&K*\ln(1-\eta_2)(x)+
\ln\left(\tanh\frac{\pi}{4}(x-\theta_2)\right) \nn \\
&+&\frac{\pi}{2} i,
\eea
where $\theta_1$ and $\theta_2$ are
determined from
\bea
&&i\frac{5\beta\sin\frac{\pi}{5}\theta_1}
{2\sinh\frac{\pi}{2} \theta_1}+
K*\ln(1+\eta_2)(\theta_1+i)-\pi i=0, \nn \\
&&K*\ln(1+\eta_2)(1-\kappa^{2})(\theta_2+i)=0.
\eea
The meanings of the functions $\eta_j$ and of the quantities
$\theta_j$ are similar to those in Ref.~\onlinecite{KSS}.
Although the above expressions
are quite different from those in Sec.~III,
the numerical results show good agreement.
The detailed derivation of the above equations
will be described in a
separate communication.\cite{S}
\acknowledgments
The authors are grateful to A. Kuniba and M. Wadati
for helpful comments and continuous encouragement.
M. S. thanks H. Asakawa for discussions.
J. S. thanks
A. Kl{\"u}mper, R. Martinez, B. M. McCoy and C. Scheeren
for useful discussions.
This work is in part supported by Grant-in-Aid for JSPS Fellows from
the Ministry of Education, Science, Sports and Culture of Japan.
\section{Introduction}
\label{intro}
At low temperatures, phonons provide the heat bath to which electrons couple weakly in a mesoscopic electron circuit. Due to fast electron-electron relaxation \cite{pothier}, the electrons typically obey the Fermi-Dirac distribution even under non-equilibrium conditions, with a well-defined temperature $T_e$ that can differ from the temperature $T_p$ of the phonons. For clean metals in three-dimensional structures, the electron-phonon relaxation time scales as $T_e^{-3}$ \cite{gantmakher,roukes,wellstood}. This result is based on the electron-phonon coupling arising from the standard deformation potential model \cite{fetter}. For an arbitrary temperature difference (as long as the model is valid), the average heat current between electrons and phonons scales as $T_e^5 -T_p^5$ in the same model, and this result is widely observed in experiments at sub-kelvin temperatures.
\begin{figure}[!h]
\centering
\includegraphics[width=0.4\textwidth]{Fig1.png}
\caption{Normal metal with temperature $T_e$ coupled to phonon bath ($T_p$) via electron-phonon coupling with thermal conductance $G_{ep}$.}
\label{fig:1}
\end{figure}
Detection of single quanta of radiation by calorimetric means has become popular in recent years \cite{Eisaman,JP}. In this context, not only the average heat current but also its fluctuations are important. They set the fundamental bound on the minimum detectable energy of the radiation quantum. Here we revisit the standard results of heat current fluctuations under equilibrium and non-equilibrium conditions, which influence, e.g., single-photon detection in the microwave regime \cite{Dima,golwala}. As the main result we present the electron-phonon heat current noise at finite frequencies, and find that it is non-vanishing even in the zero temperature limit. This result has interesting implications for the fluctuation-dissipation theorem for heat in the quantum regime, a topic discussed over the past few years in the context of electron transmission in tunnel contacts and through general scatterers \cite{pekolaPRL}.
\section{Description of the system and heat current operators}
\label{sec:2}
The system is shown schematically in Fig.~\ref{fig:1}. The normal metal is thermally coupled to the local phonon bath at temperature $T_{\rm p}$ with thermal conductance $G_{ep}$. The total Hamiltonian describing the system and the environment is given by
\begin{equation} \label{hamiltonian}
H=H_e+H_p+H_{\rm ep},
\end{equation}
where $H_e, H_p$ are the Hamiltonians of the electrons and phonons, respectively, and $H_{\rm ep}$ is the coupling between them. The unperturbed Hamiltonian $H_0 = H_e+H_p$ can be written as
\begin{equation}\label{unperturbed hamiltonian}
H_0=\sum_k \epsilon_k a_k^\dagger a_k+\sum_q \hbar\omega_q c_q^\dagger c_q,
\end{equation}
where the first part describes electron states with energy $\epsilon_k$, momentum $k$, and $a_k^\dagger$ and $a_k$ are the corresponding creation and annihilation operators. With analogous notation, the second part shows the Hamiltonian of phonons with eigenenergies $\hbar \omega_q$, wavevector $q$, and bosonic creation and annihilation operators $c_q^\dagger$ and $c_q$. The coupling term as a perturbation of the system has the following form in a metal \cite{fetter}
\begin{equation} \label{perturbation}
H_{\rm ep}= \gamma \sum_{k,q} \omega_q^{1/2}(a_k^\dagger a_{k-q}c_q + a_{k-q}^\dagger a_{k}c_q^\dagger).
\end{equation}
Here, the magnitude of $\gamma$ depends on the material properties of the system. The operator of heat flux from the electron system to phonons due to ep coupling is
\begin{equation} \label{heatp2}
\dot H_p=\frac{i}{\hbar}[H_{\rm ep},H_p]=i\gamma \sum_{k,q} \omega_q^{3/2}( a_k^\dagger a_{k-q}c_{q} - a_{k-q}^\dagger a_{k}c_q^\dagger),
\end{equation}
where we used the commutation relations for the bosonic operators, $[c_q,c_q^\dagger c_q]=c_q$ and $[c_q^\dagger,c_q^\dagger c_q]=-c_q^\dagger$.
Similarly we find the operator for heat flux to electron system
\begin{equation} \label{heate}
\dot H_e= -\frac{i\gamma}{\hbar} \sum_{k,q} \omega_q^{1/2}(\epsilon_k - \epsilon_{k-q})(a_{k}^\dagger a_{k-q}c_{q}-a_{k-q}^\dagger a_{k}c_{q}^\dagger).
\end{equation}
\section{Fluctuations of heat current}
\label{sec:3}
In order to find the heat current fluctuations, we evaluate the correlator $\langle \mathfrak{I}(t) \mathfrak{I}(0)\rangle$, where $\mathfrak{I}\equiv \frac{1}{2}(\dot H_e-\dot H_p)$ is the symmetric heat current operator between electron and phonon baths. Using Eqs. (\ref{heatp2}) and (\ref{heate}) yields
\begin{equation} \label{hcurrent1}
\mathfrak I= -\frac{i\gamma}{2\hbar} \sum_{k,q} \omega_q^{1/2}(\hbar \omega_q+\epsilon_k - \epsilon_{k-q})(a_{k}^\dagger a_{k-q}c_{q}-a_{k-q}^\dagger a_{k}c_{q}^\dagger).
\end{equation}
Then we have
\begin{eqnarray} \label{corrfunc}
\langle \mathfrak I(t)\mathfrak I(0)\rangle && = \frac{\gamma^2}{4\hbar^2} \sum_{k,q} \omega_q(\hbar\omega_q+\epsilon_k-\epsilon_{k-q})^2 \Big[ \langle a_k^\dagger (t)a_k(0)a_{k-q}(t)a_{k-q}^\dagger(0)c_q(t)c_q^\dagger(0)\rangle \nonumber\\ && +
\langle a_{k-q}^\dagger (t)a_{k-q}(0)a_{k}(t)a_{k}^\dagger(0)c_q^\dagger(t)c_q(0)\rangle \Big].
\end{eqnarray}
We use the time dependence of the creation and annihilation operators, $a_k(t)=a_ke^{-i\epsilon_kt/\hbar}$ and $c_q(t)=c_qe^{-i\omega_qt}$, and take the expectation values of the products of the operators ($\langle a_{k}^\dagger a_k\rangle=f(\epsilon_k)$ and $\langle c_{q}^\dagger c_q\rangle=n(\omega_q)$), where $f(\epsilon)=(1+e^{\beta_e \epsilon})^{-1}$ and $n(\omega)=(e^{\beta_p \hbar \omega}-1)^{-1}$ are the Fermi and Bose distributions for electrons and phonons, respectively, with the inverse temperatures $\beta_e=(k_BT_e)^{-1}$ and $\beta_p=(k_BT_p)^{-1}$. Inserting these into the noise power $S_{\mathfrak I}(\omega)=\int dt \langle \mathfrak I(t)\mathfrak I(0)\rangle e^{i\omega t}$ leads to
\begin{eqnarray} \label{ItI0}
S_{\mathfrak I}(\omega) && = \frac{\gamma^2}{4\hbar^2} \int dt \sum_{k,q} \omega_q(\hbar\omega_q +\epsilon_k -\epsilon_{k-q})^2 \Big[ e^{i(\epsilon_k -\epsilon_{k-q}-\hbar \omega_q +\hbar\omega)t/\hbar} f(\epsilon_k)[1-f(\epsilon_{k-q})] \nonumber\\ && \times[1+n(\omega_q)]+e^{i(\epsilon_{k-q}-\epsilon_k+\hbar \omega_q+\hbar\omega)t/\hbar} f(\epsilon_{k-q})[1-f(\epsilon_k)]n(\omega_q)\Big].
\end{eqnarray}
Now, we integrate over time and replace $\sum_q \rightarrow D(q)\int d^3q = \frac{\mathcal{V}}{(2\pi)^2}\int_0^\infty dq\,q^2\int_{-1}^{1} d(\cos \theta)$ in spherical coordinates, where $\theta$ is the angle between $k$ and $q$, and $\sum_k \rightarrow N(0) \int d\epsilon_k$. Here $N(0)$ denotes the density of states of the electrons and $D(q)=\frac{\mathcal{V}}{(2\pi)^3}$ that of the phonons, with $\mathcal{V}$ the volume of the system. Further, $\epsilon_{k} =\frac{\hbar^2 k^2}{2m}$ and $\epsilon_{k \pm q}\simeq \epsilon_k \pm \frac{\hbar^2 k_F}{m}q\cos \theta$, where the last approximation holds because $k\simeq k_F$ and $q\ll k_F$; here $k_F$ is the Fermi wave vector and $m$ is the electron mass. Moreover, $\omega_q$ is replaced by $c_lq$, where $c_l$ is the speed of sound. Then Eq. (\ref{ItI0}) leads to
\begin{eqnarray} \label{sIw1}
&&S_{\mathfrak I}(\omega) = \frac{\gamma^2N(0)\mathcal V}{8\pi\hbar} \int_{-\infty}^{+\infty} d\epsilon_k \int_{0}^{+\infty} dq\,q^2\int_{-1}^{1} d(\cos \theta) \,\, \omega_q(\hbar \omega_q+\frac{\hbar^2 k_F}{m}q\cos \theta)^2\times \nonumber\\ &&\Big[ f(\epsilon_k)[1-f(\epsilon_{k}-\frac{\hbar^2 k_F}{m}q\cos \theta)][1+n(\omega_q)]\delta(\frac{\hbar^2 k_F}{m}q\cos \theta-\hbar\omega_q+\hbar\omega)\nonumber \\&& +
f(\epsilon_{k}-\frac{\hbar^2 k_F}{m}q\cos \theta)[1-f(\epsilon_{k})]n(\omega_q)\delta(-\frac{\hbar^2 k_F}{m}q\cos \theta+\hbar\omega_q+\hbar\omega) \Big].
\end{eqnarray}
Collecting the angle-dependent terms, integrating over $\cos \theta$, and using the notation $\epsilon\equiv \hbar \omega_q=\hbar c_lq$, we have
\begin{eqnarray} \label{sIw2}
S_{\mathfrak I}(\omega) =\frac{\Sigma \mathcal V}{96\zeta (5)k_B^5}\int_0^\infty d\epsilon \,\epsilon^2 &&\Big[(2\epsilon-\hbar\omega)^2\frac{1}{1-e^{-\beta_p \epsilon}}\,\,\frac{\epsilon-\hbar\omega}{e^{\beta_e(\epsilon-\hbar\omega)}-1}\nonumber\\&&+(2\epsilon+\hbar\omega)^2\frac{1}{e^{\beta_p \epsilon}-1}\,\,\frac{\epsilon+\hbar\omega}{1-e^{-\beta_e(\epsilon+\hbar\omega)}}\Big],
\end{eqnarray}
where we have defined the electron-phonon coupling constant \cite{wellstood,cleland} as $\Sigma=\frac{12\gamma^2N(0)m\zeta (5)k_B^5}{\pi k_Fc_l^2\hbar^6}$ and $\zeta (z)$ denotes the Riemann zeta function.
One can calculate the average heat flux into the phonon bath $\dot Q_{ep} =\langle \dot H_p\rangle$ by applying the Kubo formula in the interaction picture to Eq. (\ref{heatp2}) as
\begin{eqnarray} \label{Qep}
&&\dot Q_{ep}=-\frac{i}{\hbar}\int_0^\infty dt'\langle [\dot H_p(t),H_{ep}(t')]\rangle.
\end{eqnarray}
We then obtain the known result
\begin{eqnarray} \label{zeroT}
&&\dot Q_{ep} = \Sigma \mathcal V (T_e^5-T_p^5).
\end{eqnarray}
For the noise spectrum at equal temperatures, $T_e=T_p$, and $\omega=0$, we obtain
\begin{equation}\label{zeroTS}
S_{\mathfrak I}(0) = 10\Sigma \mathcal V k_BT^6,
\end{equation}
which is again the well known classical result \cite{Dima}. In this regime $\dot Q_{ep}$ yields the thermal conductance between electrons and phonons, $G_{ep}$, by differentiation with respect to $T_e$ at $T=T_e=T_p$. We have $G_{ep}=d\dot Q_{ep}/dT_e=5\Sigma\mathcal{V}T^4$, which satisfies the fluctuation-dissipation relation
\begin{equation}
S_{\mathfrak I}(0)=2 k_B T^2 G_{ep}.
\end{equation}
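This identity can be checked directly from Eq.~(\ref{sIw2}): at $\omega=0$ and $T_e=T_p=T$ the integrand reduces to $8\epsilon^5 e^{\beta\epsilon}/(e^{\beta\epsilon}-1)^2$, and $\int_0^\infty u^5 e^u/(e^u-1)^2\,du = 120\,\zeta(5)$, which yields the prefactor $10$ in Eq.~(\ref{zeroTS}). A numerical sketch of this check:

```python
import math

zeta5 = sum(1.0 / n**5 for n in range(1, 200001))   # Riemann zeta(5)

def integrand(u):
    # u^5 e^u / (e^u - 1)^2; regular at u -> 0, where it behaves as u^3
    return u**3 if u < 1e-8 else u**5 * math.exp(u) / math.expm1(u) ** 2

# composite Simpson rule on [0, 60]; the tail beyond u = 60 is negligible
a, b, n = 0.0, 60.0, 60000
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
integral = s * h / 3.0

assert abs(integral - 120.0 * zeta5) < 1e-6
# prefactor of Sigma*V*k_B*T^6 in S(0):  8 * 120 zeta(5) / (96 zeta(5)) = 10
assert abs(8.0 * integral / (96.0 * zeta5) - 10.0) < 1e-6
```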
\begin{figure}[!h]
\centering
\includegraphics[width=\textwidth]{Fig2.pdf}
\caption{Heat current noise $S_{\mathfrak{I}}(\Omega)$ normalized by its classical value $S_{\mathfrak{I}}(0)$, where $\Omega=\beta\hbar\omega$. Solid red line is the full expression based on Eq. (\ref{dimensionlesssIw2}); solid blue line shows the first order approximation $1+\frac{\Omega}{2}$ while dashed black line is the second order result based on Eq. (\ref{Sexpansion}). Top scale indicates the temperature $T$ with a detector of cut-off frequency $f_c$ for achieving the frequency on the bottom axis. Inset: The same data as in the main frame over a wider frequency range.}
\label{fig:2}
\end{figure}
For zero temperature, $\beta_p,\beta_e\rightarrow \infty$, only the first term in the integrand of Eq. (\ref{sIw2}) is non-zero, and only over the interval $0< \epsilon < \hbar \omega$. Then,
\begin{eqnarray} \label{nonzeroT}
S_{\mathfrak I}(\omega)= \frac{\Sigma \mathcal V}{96\zeta (5)k_B^5}\frac{(\hbar\omega)^6}{60}.
\end{eqnarray}
This result is in analogy with the expressions of zero-temperature noise of heat current in charge transport through a scatterer, for instance a tunnel junction \cite{pekolaPRL,sergi,hanggi}. In that case $S_{\mathfrak{I}}\propto \omega^3$, which is consistent with our result in the following way. The thermal conductance in a tunnel junction scales as $\propto T$. The fluctuation dissipation theorem then yields $S_{\mathfrak{I}}(0)\propto T^3$, to be compared to $T^6$ in the electron-phonon system (Eq. (\ref{nonzeroT})). Now, replacing $k_{\rm B}T$ by $\hbar\omega$ in the two cases, we get the corresponding zero temperature finite frequency noise $\propto \omega^3$ and $\propto \omega^6$, respectively.
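The numerical factor $1/60$ in Eq.~(\ref{nonzeroT}) follows from the substitution $\epsilon=\hbar\omega\,u$: at $T=0$ the surviving term of Eq.~(\ref{sIw2}) reduces to $(\hbar\omega)^6\int_0^1 u^2(2u-1)^2(1-u)\,du$. This elementary integral can be verified exactly (a sketch in rational arithmetic):

```python
from fractions import Fraction

def poly_mul(p, q):
    # multiply polynomials stored as coefficient lists (index = power of u)
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p = [Fraction(0), Fraction(0), Fraction(1)]                # u^2
p = poly_mul(p, [Fraction(1), Fraction(-4), Fraction(4)])  # (2u-1)^2 = 1 - 4u + 4u^2
p = poly_mul(p, [Fraction(1), Fraction(-1)])               # (1-u)

integral = sum(c / Fraction(k + 1) for k, c in enumerate(p))  # int_0^1 u^k du = 1/(k+1)
assert integral == Fraction(1, 60)
```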
\section{Discussion of fluctuations at finite frequency}
At equal temperatures and finite frequency, Eq. (\ref{sIw2}) can be written in the dimensionless form
\begin{eqnarray} \label{dimensionlesssIw2}
S_{\mathfrak I}(\Omega)=\frac{S_{\mathfrak I}(0)}{960 \zeta (5)} \int_0^\infty du \,u^2 &&\Big[(2u-\Omega)^2\frac{1}{1-e^{-u}}\,\,\frac{u-\Omega}{e^{u-\Omega}-1}\nonumber\\&&+(2u+\Omega)^2\frac{1}{e^{u}-1}\,\,\frac{u+\Omega}{1-e^{-(u+\Omega)}}\Big],
\end{eqnarray}
where $\Omega\equiv \beta \hbar \omega$. The expansion of $S_{\mathfrak I}(\Omega)$ in $\Omega$ up to second order yields
\begin{equation}\label{Sexpansion}
\frac{S_{\mathfrak I}(\Omega)}{S_{\mathfrak I}(0)}=1+\frac{\Omega}{2}+(\frac{1}{12}+\frac{7}{240}\frac{\zeta (3)}{\zeta (5)})\Omega^2.
\end{equation}
We present the result of Eq. (\ref{dimensionlesssIw2}) for the noise power in the case of equal temperatures as a function of $\Omega$ in Fig. \ref{fig:2}. The analytic approximations up to the first and second order in $\Omega$ are also shown, and we see that the second order result follows the exact result up to $\Omega \sim 1$.
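Note also that the two terms of Eq.~(\ref{dimensionlesssIw2}) map onto each other under $\Omega\to-\Omega$ up to a factor $e^{\Omega}$, so the spectrum obeys the detailed-balance-type relation $S_{\mathfrak I}(\Omega)=e^{\Omega}S_{\mathfrak I}(-\Omega)$, from which the linear coefficient $1/2$ in Eq.~(\ref{Sexpansion}) follows. A numerical sketch of Eq.~(\ref{dimensionlesssIw2}) confirming these statements:

```python
import math

ZETA5 = sum(1.0 / k**5 for k in range(1, 100001))

def nfac(v):
    # v/(e^v - 1), continued to 1 at v = 0
    return 1.0 if abs(v) < 1e-12 else v / math.expm1(v)

def pfac(v):
    # v/(1 - e^{-v}), continued to 1 at v = 0
    return 1.0 if abs(v) < 1e-12 else v / (-math.expm1(-v))

def S_ratio(Omega, umax=60.0, n=40000):
    # midpoint-rule evaluation of S(Omega)/S(0) from the dimensionless integral
    h = umax / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        t1 = (2.0 * u - Omega) ** 2 * nfac(u - Omega) / (1.0 - math.exp(-u))
        t2 = (2.0 * u + Omega) ** 2 * pfac(u + Omega) / math.expm1(u)
        total += u * u * (t1 + t2)
    return total * h / (960.0 * ZETA5)

assert abs(S_ratio(0.0) - 1.0) < 1e-4                     # classical limit
assert abs(S_ratio(1.0) - math.e * S_ratio(-1.0)) < 1e-3  # S(Omega) = e^Omega S(-Omega)
slope = (S_ratio(0.1) - S_ratio(-0.1)) / 0.2
assert abs(slope - 0.5) < 1e-2                            # linear term Omega/2
```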
We finally comment on the observability of the finite frequency corrections. Experimental techniques to measure temperature have traditionally focused on the low frequency regime. Yet RF techniques developed for charge transport \cite{schoelkopf} and circuit QED (Quantum Electro-Dynamics) experiments \cite{wallraff} have recently been adapted to thermometry to address the temporal evolution of temperature and noise in heat currents down to sub-$\mu$s time scales \cite{schmidt,simone,olli}. With these methods, measuring noise up to frequencies $f_c \sim 1$--$10$ MHz becomes experimentally feasible. The experimentally available $\Omega$ range $(0<\Omega< \Omega_{max})$ in Fig. \ref{fig:2} can be obtained by setting $\Omega_{max}=hf_c/k_{\rm B}T$, where $T$ is the temperature of the experiment. We indicate $T$ on the top axis of Fig. \ref{fig:2}, scaled by the cut-off frequency $f_c$ of the thermometer. As an example, for $f_c=10$ MHz one needs to measure at $T=5$ mK to achieve $\Omega_{max}=0.1$. This electronic temperature is within the range of present day experiments on nanostructures \cite{Iftikhar,anna}. In this case a correction of $\sim 5\%$ in $S_{\mathfrak I}$ can be observed. However, since even the classical heat current noise has so far eluded experimental observation, measuring the quantum heat current noise remains a challenge.
We acknowledge funding from the Academy of Finland under grants 272218 and 273827.
\section{Introduction}
\label{sec-introduction}
At first glance, the electronic structure of graphene is simple: due to the $sp^2$ bonding of the single-atom-thick two-dimensional (2D) carbon layer, it is easily modeled by a non-interacting tight-binding approach.\cite{Wal47,Sem84} For the 2D honeycomb lattice, this results at low energies in two non-equivalent Dirac cones at the $\mathbf{K}$ and $\mathbf{K}'$ points.
And indeed much of the tremendous progress\cite{CasGPN09,AbeABZ10} in understanding graphene's electronic properties\cite{KosVSL11} since its isolation\cite{NovGMJ04} in 2004 has been due to the exploitation of this fact.
However, electron-electron ($e$-$e$) interactions\cite{KotUPC10,NomM06} also play an important role in graphene, particularly in the high magnetic field regime.\cite{NovGMJ05,ZhaTSK05,DuSDL09,BolGSS09}
As well as the transport properties, $e$-$e$ interactions affect the optical excitations of the system. Previous studies have calculated the dispersion relation for particle-hole excitations for the case of clean graphene with integer Landau level
(LL) filling.\cite{IyeWFB07,BycM08,RolFG10b} This has been done for the bilayer system too.\cite{TahS08,Shi09} Furthermore, charged collective excitations have been predicted to exist as discrete states outside the continuum.\cite{FisRD10}
In this work, we investigate the effect of \emph{short-ranged} disorder on the collective excitations (CEs) of the monolayer system in the presence of a strong perpendicular magnetic field. Specifically, we calculate the energies and optical properties of CEs that become localized on a $\delta$-function scatterer situated at one of the graphene lattice sites. We shall refer to this scatterer as an impurity, but it can represent a vacancy as well.\cite{PerGLP06,Bas08} We explore how the bound states are influenced by the filling factor $\nu$, the sublattice containing the impurity, the light polarization and the impurity strength. The results are notably different from those for a long-range Coulomb impurity potential, studied previously.\cite{FisDR09a} In particular, the choice of a sublattice position for the impurity breaks the equivalence between $\mathbf{K}$ and $\mathbf{K}'$ points and thus the SU(4) symmetry. Furthermore, the short-range character of the impurity allows for significant inter-valley scattering.
Nevertheless, we will show that due to particle-hole symmetry at the Dirac point, the resulting optical excitations obey another form of sublattice symmetry between attractive and repulsive impurities. Our results show a symmetry structure which allows us to distinguish between long- and short-ranged impurities in the optical excitations of graphene.
\section{Theoretical approach}
\label{sec-model}
\subsection{Single particle problem}
\label{subsec-spp}
Let us consider a single electron in graphene in a perpendicular magnetic field with a $\delta$-function impurity located at the origin. For the case when the origin is at a $\mu = A, B$ sublattice site, the Hamiltonian has the form $\hat{H}_{\mu} = \hat{H}_0 + \hat{V}_{\mu}$, where $\hat{H}_0$ is the Hamiltonian for a free electron in graphene in a uniform magnetic field\cite{CasGPN09}
\begin{equation}
\label{eq-ham_free}
\hat{H}_0=v_\mathrm{F}
\left(
\begin{array}{cccc}
0 & \Pi_- & 0 & 0 \\
\Pi_+ & 0 & 0 & 0 \\
0 & 0 & 0 & \Pi_+ \\
0 & 0 & \Pi_- & 0
\end{array}
\right).
\end{equation}
Here $v_\mathrm{F}$ is the Fermi velocity and $\Pi_\pm=\Pi_x \pm i \Pi_y$,
with $\bm{\Pi} = \mathbf{p} + \frac{e}{c}\mathbf{A}$, the kinematic momentum operator.
Note that the $(A\mathbf{K}, B\mathbf{K}, A\mathbf{K}', B\mathbf{K}')$ ordering is used and that
$\hat{H}_0$ is diagonal with respect to the valley index. The contribution to the Hamiltonian
from the short-range $V(\mathbf{r})=V \delta \left(\mathbf{r}\right)$ impurity is
\begin{equation}
\label{eq-va}
\hat{V}_A=
V \delta \left(\mathbf{r}\right)
\left(
\begin{array}{cccc}
1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0
\end{array}
\right),
\end{equation}
for an impurity located on an $A$ site at the origin and
\begin{equation}
\label{eq-vb}
\hat{V}_B=
V \delta \left(\mathbf{r}\right)
\left(
\begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & 1 & 0 & -\alpha^\ast \\
0 & 0 & 0 & 0 \\
0 & -\alpha & 0 & 1
\end{array}
\right),
\end{equation}
for an impurity located on a $B$ site at the origin.
Here $\alpha=e^{2\pi i/3}$ and $V=\sqrt{3}Wa^2/2$, where $W$ is the onsite energy associated with the impurity,
see Eq.~(\ref{eq-tbhama}); $a$ is the distance between atoms on the \emph{same} sublattice, see Fig.~\ref{fig-lat}.
The off-diagonal terms in Eqs.~(\ref{eq-va}) and (\ref{eq-vb})
describe scattering between the valleys; such terms can be neglected for long-range potentials.
Note that in both cases $\mu = A, B$ the origin is chosen at the impurity. The derivation
of Eqs.~(\ref{eq-ham_free})--(\ref{eq-vb}) is given in Appendix \ref{app-ham}; see also Ref.~\onlinecite{AndN98}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.47\textwidth]{fig-lattice}
\caption {(a) Real space graphene lattice with nearest neighbor vectors defined for the two cases of having an $A$ or $B$ sublattice site at the origin. (b) Reciprocal lattice with the first Brillouin zone indicated by shading and definitions of $\mathbf{K}$, $\mathbf{K}'$ points.
}
\label{fig-lat}
\end{figure}
The eigenstates of the Hamiltonian $\hat{H}_0$ in the symmetric gauge ${\bf A} = \frac12 {\bf B} \times {\bf r}$ for an electron in, e.g.,
the ${\bf K}$ valley (pseudospin $\tau = \: \,\Uparrow$),
are
\begin{align}
\label{sp}
\Psi_{n s \Uparrow m}(\mathbf{r}) & =
\langle \mathbf{r} | c^{\dag}_{n s \Uparrow m} |0 \rangle \\ \nonumber
& =\Phi_{n \Uparrow m}(\mathbf{r}) \, \chi_s\\ \nonumber
& = a_n
(\mathcal{S}_n \phi_{|n|- 1 \, m}(\mathbf{r}),
\phi_{|n| \, m}(\mathbf{r}),
0,
0)^{\mathrm{T}} \, \chi_s \, .
\end{align}
Here $n$ is the integer LL number, $\phi_{|n| m}({\bf r})$ is a 2D Electron Gas (2DEG) wavefunction
with oscillator quantum number $m = 0, 1, \ldots$, $a_n=2^{\frac{1}{2}(\delta_{n,0} -1)}$,
$\mathcal{S}_n={\rm sign}(n)$ (with $\mathcal{S}_0=0$) and
$\chi_s$ is the spin part for the two spin projections
$s = \: \, \uparrow, \downarrow$. The corresponding wavefunction in the ${\bf K'}$ valley ($\tau = \: \, \Downarrow$) is obtained by reversing the order of the spinor components. Each level has the usual Landau degeneracy in the quantum number $m$,
so that defining the composite index $\mathcal{N} = \{ n s \tau \}$, the single particle energy is
\begin{equation}
\label{eq-en}
\epsilon_\mathcal{N} = \mathcal{S}_n \hbar\omega_c \sqrt{|n|} + \hbar\omega_s s_z + \hbar\omega_{v} \tau_z \, .
\end{equation}
Here $\hbar\omega_c = \sqrt{2} \, \hbar v_F/ \ell_B$ is the cyclotron energy in graphene, $\ell_B=\sqrt{\hbar c/eB}$ is the magnetic length
and $\hbar\omega_s, \hbar\omega_v$ the phenomenological spin and valley splittings, respectively.
We assume $\hbar\omega_s > \hbar\omega_v$ and that these splittings are small in comparison with the cyclotron energy,
$\hbar\omega_s, \hbar\omega_v \ll \hbar\omega_c$;
we set them equal to zero for the purposes of numerical calculations.
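As a simple numerical illustration of Eq.~(\ref{eq-en}), the relativistic $\sqrt{|n|}$ LL spacing can be tabulated directly. The sketch below (plain Python; the splittings $\hbar\omega_s$, $\hbar\omega_v$ are set to zero as in our calculations, energies are in units of $\hbar\omega_c$, and the function name is ours) merely transcribes the formula:

```python
import math

def landau_energy(n, hbar_wc=1.0):
    """Single-particle LL energy in graphene with the spin and valley
    splittings set to zero: eps_n = sign(n) * hbar_wc * sqrt(|n|)."""
    sign = 0 if n == 0 else math.copysign(1, n)
    return sign * hbar_wc * math.sqrt(abs(n))

# The n = 0 level sits exactly at the Dirac point, and the spectrum
# is particle-hole symmetric: eps_{-n} = -eps_n.
levels = {n: landau_energy(n) for n in range(-3, 4)}
```

The particle-hole symmetry of this spectrum about the Dirac point underlies the sublattice symmetry of the optical excitations discussed below.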
Note that the wavefunctions $\Psi_{\mathcal{N} m}(\mathbf{r})$ are not eigenfunctions of the orbital angular momentum projection operator
$\hat{l}_z = \left( \mathbf{r} \times \mathbf{p} \right)_z$.
Instead, the generalized angular momentum\cite{DivM84}
$\hat{j}_z=\hat{l}_z+\frac{1}{2} \sigma_z \otimes \hat{I}$ is conserved in graphene.
Here $\sigma_z$ is a Pauli matrix acting on the isospin (in the sublattice index space
$\mu = A, B$) while the unit matrix $\hat{I}$ acts on the pseudospin
(in the valley ${\bf K}, {\bf K'}$ space $\tau = \: \, \Uparrow, \Downarrow$).
The generalized angular momentum operator gives
$\hat{j}_z \Psi_{\mathcal{N} m}(\mathbf{r})=j_z\Psi_{\mathcal{N} m}(\mathbf{r})$,
with the half-integer eigenvalue $j_z=l_z - \frac{1}{2}$
and the integer orbital part $l_z = |n|-m$.
The calculation of impurity matrix elements involving $\hat{V}_{\mu}$ is straightforward, since
\begin{align}
\label{eq-impme}
{\mathcal{V}_{\mu}}_{\mathcal{N} m}^{\mathcal{N}' m'}& =
\int \! \mathrm{d} \mathbf{r} \, \Psi^\dag_{\mathcal{ N'}m'}(\mathbf{r})
\hat{V}_{\mu } \Psi^{\vphantom{\dagger}}_{ \mathcal{N}m}(\mathbf{r}) \\ \nonumber
& = \delta_{s, s'} \int \! \mathrm{d} \mathbf{r} \, \Phi^\dag_{n' \tau' m'}(\mathbf{r}) \hat{V}_{\mu } \Phi^{\vphantom{\dagger}}_{n \tau m}(\mathbf{r}) \\
& \sim \delta_{s, s'} V \, \phi_{k m'}^*\left( 0 \right)\phi^{\vphantom{\dagger}}_{l m}\left( 0 \right) \nonumber,
\end{align}
where $l \in \left\lbrace |n| ,|n|-1 \right\rbrace$
and $k \in \left\lbrace |n'|,|n'|-1 \right\rbrace$,
with exact values set by pseudospins (valley indices) $\tau$ and $\tau'$.
In addition, $\phi_{n m}\left(0 \right) \sim \delta_{n,m} $,
imposing the selection rule determining which states are affected by the $\delta$-impurity.
Indeed, only the $s$-orbitals with $l_z = |n| - m = 0$ have non-vanishing probability amplitudes at the origin, where the $\delta$-impurity is located.
Note that this selection rule allows mixing of states with \emph{different} generalized angular momentum projections $j_z$.
This is in contrast to a Coulomb potential, where $j_z$ is strictly conserved.
In Appendix \ref{app-high} we discuss when higher order corrections to the energies $\epsilon_\mathcal{N}$
due to the impurity interaction are small and may be ignored.
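To make this selection rule concrete, the bookkeeping for an $A$-site impurity can be sketched as follows (a minimal Python transcription assuming the spinor ordering of Eq.~(\ref{sp}); the helper names are ours):

```python
def a_site_orbital_index(n, valley):
    """Oscillator index carried by the A-sublattice component of the
    LL-n spinor in the given valley ('K' or "K'"): phi_{|n|-1, m} in K
    and phi_{|n|, m} in K'. Returns None when the component vanishes
    identically (n = 0 in valley K, where S_0 = 0)."""
    if valley == 'K':
        return None if n == 0 else abs(n) - 1
    return abs(n)

def couples_to_A_impurity(n, m, valley):
    """phi_{k m}(0) ~ delta_{k, m}: only s-orbitals (l_z = 0) have
    weight at the origin, so the state couples iff the A-component
    oscillator index equals m."""
    k = a_site_orbital_index(n, valley)
    return k is not None and k == m

# States in the n = 0, +-1 levels that feel a delta-impurity on an A site:
coupled = [(n, m, v) for n in (-1, 0, 1) for m in range(4)
           for v in ('K', "K'") if couples_to_A_impurity(n, m, v)]
```

For the levels $n = 0, \pm 1$ only the $m = 0, 1$ orbitals feel the impurity; note also that in valley ${\bf K}$ the $n = 0$ level has no weight on the $A$ sublattice at all.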
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{fig-allowed_trans}
\caption {(Color online) Particle-hole excitations for filling factor $\nu=-1$.
The thin arrows indicate two possible spin states $s= \: \, \uparrow, \downarrow$
and the thick arrows two possible pseudospin $\tau = \: \, \Uparrow, \Downarrow$
(valleys ${\bf K}, {\bf K'}$) states.
The dashed lines connect excitations mixed by the direct Coulomb interaction, see Fig.~\protect\ref{fig-Coul}a.
The dashed box contains four excitations resonantly mixed by the exchange Coulomb interaction, see Fig.~\protect\ref{fig-Coul}b.
The six leftmost (red-dotted) excitations are those which are mixed when there is an impurity on the $A$ sublattice and are optically-active in the left circular polarization $\sigma^+$ of light.
}
\label{fig-diag}
\end{figure}
\subsection{Collective excitations}
\label{subsec-ces}
We consider excitations in which an electron is promoted from one of the uppermost filled LLs $\mathcal{N}_2$ in the Dirac sea to an empty higher-lying LL $\mathcal{N}_1$, leaving behind a hole.
The creation operator for such an excitation is
\begin{equation}
\label{eq-Qdag}
\mbox{} \hspace{-5pt}
Q^{\dag}_{\mathcal{ N}_1 \mathcal{ N}_2 J_z } =
\sum_{m_1 , m_2 = 0}^{\infty}
A_{\mathcal{N}_1 \mathcal{N}_2 J_z }(m_1,m_2) \,
c^{\dag}_{\mathcal{N}_1 m_1} d^{\dag}_{\mathcal{N}_2 m_2} \, ,
\end{equation}
where the hole representation, $c_{\mathcal{N} m} \rightarrow d^{\dag}_{\mathcal{N} m}$
and $c^{\dag}_{\mathcal{N} m} \rightarrow d_{\mathcal{N} m}$, is used for all filled levels.
The excitation operator $Q^{\dag}_{\mathcal{ N}_1 \mathcal{ N}_2 J_z }$ acts on the ground state,
denoted by $|\nu \rangle$ for a system with filling factor $\nu$.
The expansion coefficients satisfy
${A_{\mathcal{N}_1 \mathcal{N}_2 J_z }(m_1,m_2) \sim \delta_{J_z, |n_1| - m_1 - |n_2| + m_2}}$.
The quantum number $J_z$ is the total generalized angular momentum projection given by summing over the $j_z$ values for individual particles $i$,
$j_{zi} = l_{zi} - \frac{1}{2}$, with $l_{zi} = |n_i| - m_i$.
In second quantized form,
\begin{equation}
\label{eq-jzop}
\hat{J_z} = \sum_{\mathcal{N}_1, m_1} j_{z1} \, c^{\dag}_{\mathcal{N}_1 m_1}c^{\vphantom{\dagger}}_{\mathcal{N}_1 m_1}
- \sum_{\mathcal{N}_2, m_2} j_{z2} \, d^{\dag}_{\mathcal{N}_2 m_2}d^{\vphantom{\dagger}}_{\mathcal{N}_2 m_2} \, .
\end{equation}
We have
$\hat{J_z}Q^{\dag}_{\mathcal{ N}_1 \mathcal{ N}_2 J_z }|\nu\rangle=J_zQ^{\dag}_{\mathcal{ N}_1 \mathcal{ N}_2 J_z }|\nu\rangle$.
Note that for the neutral particle-hole excitations
$J_z = l_{z1} - l_{z2} = |n_1| - m_1 - |n_2| + m_2$ only contains an orbital part, in contrast to the single particle states and charged
collective excitations\cite{FisRD10} in graphene.
The total Hamiltonian, $\hat{H}_{\mu} =\hat{H}_{e\mu}+\hat{H}_{h\mu}+\hat{H}_{\mathrm{int}}$,
including the free energies, the interaction with the impurity on the $\mu = A, B$ site,
and the electron-hole ($e$-$h$) interaction is
\begin{align}
\label{eq-bigham}
\hat{H}_{\mu} &=
\sum_{\substack{\mathcal{N}_1, \mathcal{N}_1' \\ m_1, m_1'}}
\left( \delta_{\mathcal{N}_1 \mathcal{N}_1'}\delta_{m_1 m_1'}
\tilde{\epsilon}_{\mathcal{N}_1}+{\mathcal{V}_{\mu}}_{\mathcal{N}_1 m_1}^{\mathcal{N}_1' m_1'} \right)
c^{\dag}_{\mathcal{N}_1' m_1'}c^{\vphantom{\dagger}}_{\mathcal{N}_1 m_1} \\ \nonumber
& -
\sum_{\substack{\mathcal{N}_2, \mathcal{N}_2' \\ m_2, m_2'}}
\left(\delta_{\mathcal{N}_2 \mathcal{N}_2'} \delta_{m_2 m_2'}
\tilde{\epsilon}_{\mathcal{N}_2}+{\mathcal{V}_{\mu}}_{\mathcal{N}_2 m_2}^{\mathcal{N}_2' m_2'} \right)
d^{\dag}_{\mathcal{N}_2' m_2'} d^{\vphantom{\dagger}}_{\mathcal{N}_2 m_2} \\ \nonumber
& + \sum_{\substack{\mathcal{N}^{\vphantom{A}}_1, \mathcal{N}^{\vphantom{A}}_2 \\ m^{\vphantom{a}}_1, m_2 } }
\sum_{\substack{\mathcal{N}_1', \mathcal{N}_2' \\ m_1', m_2'} }
\mathcal{W}_{\mathcal{N}_1 m_1 \mathcal{N}_2 m_2}^{\mathcal{N}_1' m_1' \mathcal{N}_2' m_2'} \,
c^{\dag}_{\mathcal{N}_1' m_1'}d^{\dag}_{\mathcal{N}_2' m_2'}
d^{\vphantom{\dagger}}_{\mathcal{N}_2 m_2}c^{\vphantom{\dagger}}_{\mathcal{N}_1 m_1}. \nonumber \,
\end{align}
The $\tilde{\epsilon}_\mathcal{N}$ are the single particle energies, which are renormalized
due to the exchange interaction with other electrons in the Dirac sea (see Appendix \ref{app-se}).
The last term gives the dynamical part of the $e$-$h$ Coulomb interaction, following from the pairwise Coulomb potential
of $e$-$e$ interactions in graphene,
$U\left( |\mathbf{r}_1-\mathbf{r}_2|\right) = \frac{e^2}{\varepsilon |\mathbf{r}_1-\mathbf{r}_2|}$.
Here $\varepsilon$ is an effective dielectric constant,
which depends for graphene on its environment.
The dynamical $e$-$h$ interaction is made up of the
$e$-$h$ direct attraction and the $e$-$h$ exchange repulsion
$\hat{H}_{\mathrm{int}} = \hat{H}_{eh}^{D} + \hat{H}_{eh}^{X}$ with the total vertex given by
\begin{equation}
\label{eq-w}
\mathcal{W}_{\mathcal{N}_1 m_1 \mathcal{N}_2 m_2 }^{\mathcal{N}_1' m_1' \mathcal{N}_2' m_2'}=
-\mathcal{U}_{\mathcal{N}_1 m_1 \mathcal{N}_2' m_2'}^{\mathcal{N}_1' m_1' \mathcal{N}_2 m_2 }
+\mathcal{U}_{\mathcal{N}_1 m_1 \mathcal{N}_2' m_2'}^{\mathcal{N}_2 m_2 \mathcal{N}_1' m_1'} \, ,
\end{equation}
see Fig.~\ref{fig-Coul}.
The matrix element $\mathcal{U}$ is defined in the \emph{electron} representation by
\begin{widetext}
\begin{equation}
\label{eq-u}
\mathcal{U}_{\mathcal{N}_1 m_1 \mathcal{N}_2 m_2}^{\mathcal{N}_1' m_1' \mathcal{N}_2' m_2'} =
\delta_{s_1, s_1'}\delta_{s_2, s_2'} \int \!\! \mathrm{d} \mathbf{r}_1 \!\! \int \!\! \mathrm{d} \mathbf{r}_2 \;
\Phi^\dag_{ n_1'\tau_1' m_1'}(\mathbf{r}_1)\otimes\Phi^\dag_{ n_2'\tau_2'm_2'}(\mathbf{r}_2)
U\left( |\mathbf{r}_1-\mathbf{r}_2|\right)
\Phi^{\vphantom{\dagger}}_{ n_1 \tau_1 m_1} (\mathbf{r}_1) \otimes \Phi^{\vphantom{\dagger}}_{ n_2 \tau_2 m_2 } (\mathbf{r}_2) \, ,
\end{equation}
\end{widetext}
where $\otimes$ denotes the direct (Kronecker) product.
\begin{figure}[bht]
\centering
\includegraphics[width=0.25\textwidth]{fig-diagrams}
\caption {Direct (a) and exchange (b) electron-hole interaction vertices, see Eq.~(\protect\ref{eq-w}).
}
\label{fig-Coul}
\end{figure}
Note that $\mathcal{U}$ may be expressed in terms of the corresponding matrix elements for the 2DEG\cite{FisDR09a}
and conserves spin, pseudospin (no intervalley scattering), and the generalized angular momentum:
\begin{equation}
\label{eq-udeltas}
\mathcal{U}_{\mathcal{N}_1 m_1 \mathcal{N}_2 m_2}^{\mathcal{N}_1' m_1' \mathcal{N}_2' m_2'}
\sim \delta_{s_1, s_1'}\delta_{s_2, s_2'}\delta_{\tau_1, \tau_1'}\delta_{\tau_2, \tau_2'} \delta_{J_z, J_z'}. \,
\end{equation}
It also possesses the SU(4) symmetry,\cite{AbeABZ10,GoeMD06,FisRD10} as described below.
\subsection{Symmetry properties of the Hamiltonian}
\label{subsec-hamsym}
In the absence of an impurity ($V=0$), the Hamiltonian $\hat{H} = \hat{H}_0 + \hat{H}_{\mathrm{int}}$
has several symmetries.
Firstly, there is the generalized axial symmetry\cite{DivM84,FisDR09a}
$[\hat{J_z},\hat{H}]=0$.
This symmetry is broken in the presence of a $\delta$-impurity, which scatters particles between the valleys.
Specifically for $V \ne 0$,
\begin{eqnarray}
\label{eq-jzcom}
[\hat{J_z},\hat{H}_{\mu}]
& = &
\sum_{\substack{\mathcal{N}_1, \mathcal{N}_1' \\ m_1, m_1'} }
{\mathcal{V}_{\mu}}_{\mathcal{N}_1 m_1}^{\mathcal{N}_1' m_1'} \,
\left( j_{z1}' - j_{z1} \right) c^{\dag}_{\mathcal{N}_1' m_1'}c^{\vphantom{\dagger}}_{\mathcal{N}_1 m_1} \\
\nonumber
& +&
\sum_{\substack{\mathcal{N}_2, \mathcal{N}_2' \\ m_2, m_2'} }
{\mathcal{V}_{\mu}}_{\mathcal{N}_2 m_2}^{\mathcal{N}_2' m_2'} \,
\left( j_{z2}' - j_{z2} \right) \, d^{\dag}_{\mathcal{N}_2' m_2'}d^{\vphantom{\dagger}}_{\mathcal{N}_2 m_2} \neq 0 \, ,
\end{eqnarray}
with $j_{zi}' - j_{zi} = |n_i'|-m_i'-|n_i|+m_i$.
This results in excitations with different $J_z$ quantum numbers being mixed by the impurity,
which we shall discuss in Sec.\ \ref{subsec-mix}. Therefore, $J_z$ ceases to be an exact quantum number.
We will still use it for classification of states to indicate their origin and predominant character.
Another kind of symmetry present in clean graphene with long-range Coulomb interactions is the SU(4) symmetry,
due to the equivalence of the four possible pseudospin-spin states,
$\Uparrow\uparrow$,$\Downarrow\uparrow$,$\Uparrow\downarrow$,$\Downarrow\downarrow$.
The generators of the SU(4) group may be expressed in our context in second quantized form as
$C_{jk} =\sum_N c^{\dagger}_{Nj} c_{Nk} - \sum_{N}d^{\dagger}_{Nk} d_{Nj}$,
where $N=\{n m\}$ is a set of \emph{orbital} quantum numbers
and $j,k= \, \Uparrow\uparrow$,$\Downarrow\uparrow$,$\Uparrow\downarrow$,$\Downarrow\downarrow$ are the four SU(4) \emph{flavors}.
In the absence of an impurity and for a fully filled LL, $[\hat{H},C_{jk}]=0$.
Upon introducing an impurity, the commutation relation becomes
\begin{eqnarray}
\label{eq-sufourcom}
[\hat{H}_{\mu},C_{jk}] & = & \sum_{N_1,N_1'} \sum_{f} \left(1+\delta_{N_1, N_1'}\right) \\ \nonumber
& & \times \left( {\mathcal{V}_{\mu}}_{N_1 j}^{N_1' f} \, c^{\dagger}_{N_1' f} c^{\vphantom{\dagger}}_{N^{\vphantom{.}}_1 k}
- {\mathcal{V}_{\mu}}_{N_1 f}^{N_1' k} \, c^{\dagger}_{N_1' j} c^{\vphantom{\dagger}}_{N^{\vphantom{.}}_1 f}
\right) \\ \nonumber
& & + \sum_{N_2,N_2'} \sum_{f} \left( 1 + \delta_{N_2, N_2'} \right) \\ \nonumber
& & \times \left( {\mathcal{V}_{\mu}}_{N_2 k}^{N_2' f} \, d^{\dagger}_{N_2' f}d^{\vphantom{\dagger}}_{N^{\vphantom{.}}_2 j}
-{\mathcal{V}_{\mu}}_{N_2 f}^{N_2' j} \, d^{\dagger}_{N_2' k}d^{\vphantom{\dagger}}_{N^{\vphantom{.}}_2 f}
\right).
\end{eqnarray}
For an impurity which does not change the flavor of a particle \emph{and} has scattering matrix elements which are flavor independent,
${\mathcal{V}_{\mu}}_{N k}^{N' f} = \delta_{k,f} {\mathcal{V}_{\mu}}_{N}^{N'}$,
one can see from Eq.~(\ref{eq-sufourcom}) that the SU(4) symmetry is still preserved, $[\hat{H}_{\mu},C_{jk}] = 0$.
This is the case for, e.g., a Coulomb impurity and explains the degeneracies of the corresponding states.\cite{FisRD11}
In contrast, the $\delta$-impurity considered here may scatter between the valleys,
thus changing a particle's flavor and breaking the SU(4) symmetry.
\subsection{Optical selection rules}
\label{subsec-op}
We work in the electric dipole approximation, ignoring the magnetic field component of the electromagnetic wave.
The Hamiltonian describing the electron-photon interaction for an incoming circularly $\sigma^\pm$ polarized beam of light is
\begin{equation}
\label{eq-delh}
\delta \hat{\mathcal{H}}_\pm = \frac{i e v _\mathrm{F} \mathcal{F}}{\omega c} e^{-i\omega t}
\left(\begin{array}{cc}
\sigma_\pm & 0 \\
0 & \sigma_\mp
\end{array}\right),
\end{equation}
where $\mathcal{F}$ represents the electric field strength, $\omega$ is the angular frequency and $\sigma_\pm =\sigma_x\pm i\sigma_y$
are combinations of the Pauli matrices acting on the isospin (sublattice $A, B$) components.
In the dipole approximation, the linear momentum transferred to the electron by a photon is negligible;
accordingly, no intervalley transitions can be induced. Besides, the electric field is not coupled directly to the electron spin,
so that $\delta \hat{\mathcal{H}}_\pm$ conserves spin, i.e., no spin flips occur in electric dipole optical transitions.
In addition, one can show that
\[
\langle \mathcal{N}' m' | \delta \hat{\mathcal{H}}_\pm | \mathcal{N} m \rangle \sim \delta_{|n'|\mp 1,|n|}\delta_{m, m'} \, ,
\]
where $|\mathcal{N} m \rangle\equiv c^\dagger_{\mathcal{N} m}|0\rangle$.
The single particle selection rules are thus $|n'|-|n|=\pm1$, $m=m'$, $\tau=\tau'$ and $s=s'$,
for the $\sigma^\pm$ polarizations, where the unprimed quantum numbers describe the initial state of the electron
and the primed quantum numbers its final state, see Refs.~\onlinecite{GusSC07,AbeF07,FisDR09a} and references therein.
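These single-particle rules can be transcribed directly; the following sketch (plain Python; the encoding of the polarization as \texttt{'+'}/\texttt{'-'} is ours) enumerates the dipole-allowed inter-LL transitions among the low-lying levels:

```python
def dipole_allowed(n_i, n_f, polarization):
    """Electric-dipole selection rule for inter-LL transitions in
    graphene: sigma^+ requires |n_f| - |n_i| = +1 and sigma^-
    requires |n_f| - |n_i| = -1; m, spin and valley are unchanged."""
    dn = abs(n_f) - abs(n_i)
    return dn == (1 if polarization == '+' else -1)

# Allowed sigma^+ transitions among the low-lying LLs:
sigma_plus = [(n_i, n_f) for n_i in range(-2, 3) for n_f in range(-2, 3)
              if dipole_allowed(n_i, n_f, '+')]
```

In particular, the lowest cyclotron transition $0 \to 1$ is $\sigma^+$ active, while $-1 \to 0$ is $\sigma^-$ active.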
In the second quantization form and in the $e$-$h$ representation, the interaction with the circularly $\sigma^\pm$ polarized light is
\begin{equation}
\label{eq-colldelH}
\delta \hat{\mathcal{H}}_\pm =
\sum_{\mathcal{N},\mathcal{N'}}
\langle n' | \delta \hat{\mathcal{H}}_\pm | n\rangle
\sum_{m} c^\dagger_{\mathcal{N}' m}d^\dagger_{\mathcal{N} m} \, .
\end{equation}
Here we made use of the fact that the matrix element
$\langle \mathcal{N}' m | \delta \hat{\mathcal{H}}_\pm | \mathcal{N} m \rangle \equiv
\delta_{s',s}\delta_{\tau',\tau}\delta_{m',m} \langle n' | \delta \hat{\mathcal{H}}_\pm |n \rangle$
is diagonal in $s, \tau, m$ and, besides, it does not depend on these quantum numbers.
The operator emerging in Eq.~(\ref{eq-colldelH}),
\begin{equation}
\label{Q_0}
\frac{1}{ \sqrt{N^{\vphantom{X}}_0} } \sum_{m} c^\dagger_{\mathcal{N}' m}d^\dagger_{\mathcal{N} m}
= Q^{\dag}_{\mathcal{N}' \mathcal{N} \bm{\kappa}=0 }\ ,
\end{equation}
is proportional to the composite boson creation operator $Q^{\dag}_{\mathcal{ N}_1 \mathcal{ N}_2 \bm{\kappa}=0}$
describing a magnetoexciton
of zero magnetic momentum $\bm{\kappa}=0$
with electron (hole) in the $\mathcal{N}'$ ($\mathcal{N}$) LL in graphene.
$N_0 = S/2\pi \ell_B^2$ is the LL degeneracy with $S$ the area of the graphene sheet and the operator of magnetic translations is
$\hat{\bm{\kappa}} = \sum_i (\bm{\Pi}_i - \frac{q_i}{c} \mathbf{r}_i \times \mathbf{B})$.
Moreover,
the electron-photon interaction is SU(4)-symmetric, $[\delta \hat{\mathcal{H}}_\pm,C_{jk}]=0$; it does not change the flavor of an electron
and has matrix elements which are flavor independent.
This leads to the coupling of the electric dipole photon only to the flavorless boson in the given LLs\cite{FisDR09a}
$\bar{Q}^{\dag}_{n' n } \equiv \sum_{s, \tau}
Q^{\dag}_{n's \tau \, n s\tau \, \bm{\kappa}=0 }$.
In the presence of an impurity, the magnetic momentum $\bm{\kappa}$ is no longer conserved,
$[\hat{H}_{\mu},\hat{\bm{\kappa}}] \neq 0$.
In addition, a short-range impurity induces the intervalley scattering and breaks the SU(4) symmetry, further relaxing
the selection rules.
Let $d \equiv \langle \mathcal{N}_1 \mathcal{N}_2 J_z |\delta \hat{\mathcal{H}}_\pm |\nu\rangle$
denote the optical dipole transition matrix element from the ground state $| \nu \rangle $ with filling factor $\nu$, to
the final state with one excitation
\begin{equation}
\label{Q}
Q^{\dag}_{\mathcal{ N}_1 \mathcal{ N}_2 J_z }| \nu \rangle \equiv |\mathcal{ N}_1 \mathcal{N}_2 J_z \rangle \, .
\end{equation}
The transition matrix element squared $|d|^2$ is proportional to the absorption intensity or oscillator strength for that state.
It can be shown that
\begin{equation}
\label{sel_Q}
\langle \mathcal{N}_1 \mathcal{N}_2 J_z |\delta \hat{\mathcal{H}}_\pm |\nu\rangle \sim \delta_{J_z, \pm 1} \, ,
\end{equation}
so the optical selection rule for a CE is $J_z=\pm 1$. Besides, only the excitations with the
total spin and pseudospin projections $S_z=s_{z1}-s_{z2}=0$, $T_z=\tau_{z1}-\tau_{z2}=0$ are optically active. The latter selection rule is strict.
For equal spin (pseudospin) filling of a given $n^\mathrm{th}$ LL,
the CE states can additionally be classified by a total spin $S=0, 1$ (pseudospin $T=0,1$).
Among those states only the spin $S=0$ and pseudospin $T=0$ singlets
have non-zero projections onto the state $\bar{Q}^{\dag}_{n' n }$, which is directly coupled to photons.
Therefore, only the singlets $S=0$, $T=0$ are optically active,
while all the triplet $S=1$ and $T=1$ states are optically dark.
\subsection{Mixing of excitations}
\label{subsec-mix}
The long range nature of the Coulomb potential means it cannot provide a large enough change in momentum
$|\Delta\mathbf{k}| \simeq |\mathbf{K} - \mathbf{K}'| \sim 1/a$
to scatter between the valleys.
It also mixes only excitations with the same total generalized angular momentum projection $J_z=|n_1|-m_1-|n_2|+m_2$. In addition, Eqs.~(\ref{eq-w}) and (\ref{eq-udeltas})
restrict the possible spin and pseudospin states of transitions mixed by the Coulomb interaction.
Generally, two excitations, $|\mathcal{N}_1 \mathcal{N}_2 J_z \rangle $ and $|\mathcal{N}_1' \mathcal{N}_2' J_z \rangle$,
are mixed by (i) the direct interaction if $s_1=s_1^{'}$, $\tau_1=\tau_1^{'}$, $s_2=s_2^{'}$ and $\tau_2=\tau_2^{'}$
(see Fig.~\ref{fig-Coul}a)
and (ii) the exchange interaction if $s_1=s_2$, $\tau_1=\tau_2$, $s_1^{'}=s_2^{'}$ and $\tau_1^{'}=\tau_2^{'}$ (see Fig.~\ref{fig-Coul}b).
Figure~\ref{fig-diag} shows the possible excitations for filling factor $\nu=-1$.
The four excitations with no spin or pseudospin flips contained within the dashed box are mixed by the exchange interaction.
Pairs of excitations connected by a dashed line are mixed by the direct interaction. The remaining excitations are left unmixed by the $e$-$h$ Coulomb interaction.
In our calculations we assume all LLs with $n<0$ are filled, all LLs with $n>0$ are empty and that the sublevels
of the zeroth LL become successively completely filled ($\nu=-1,0,1,2$).
We have seen that infinitely many excitations $|\mathcal{ N}_1 \mathcal{N}_2 J_z \rangle$ with the same $J_z$
and particular spin and pseudospin quantum numbers are mixed by the $e$-$h$ Coulomb interaction.
However, we may truncate the basis so as to obtain a tractable finite-size matrix representation
of Hamiltonian $\hat{H}_{\mu}$, Eq.~(\ref{eq-bigham}).
To this end we only consider excitations which are either in resonance (have the same energy) or very close to resonance.
In particular, we are interested in the lowest energy $\sim \hbar\omega_c$ inter-LL excitations
for the chosen filling factors.
Therefore, we need to consider the mixing of $0 \to 1$ and $-1 \to 0$ excitations.
Neglecting the mixing of non-resonant excitations amounts to ignoring terms whose amplitudes
depend on the ``fine structure constant'' for graphene, $\alpha =e^2/\hbar v_F \varepsilon$.
But even with this simplification, the Hamiltonian matrix remains infinite because of the macroscopic degeneracy of each LL
in the oscillator quantum number $m$. However, as long as we are interested in the states localized on the impurity, we may truncate the
summation over $m$ in Eq.~(\ref{eq-Qdag}) at a sufficiently large finite value $m_{\mathrm{max}}$. This is justified because the distance
from the origin (impurity) of single-particle orbitals in LLs increases with $m$
as
\begin{equation}
\label{dist}
\langle \mathbf{r}^2\rangle = (2|n|+2m+1+\delta_{n,0})\ell_B^2 \, .
\end{equation}
As a result, for $e$-$h$ states localized on the impurity we have a fast convergence with $m$.\cite{Dzy90,FisDR09a}
In our numerical calculations we choose
$m_{\mathrm{max}} =30$; the total number of states involved ranges from $180$ to $240$,
depending on $\nu$, the sublattice of the impurity and which light polarization the excitation is bright in.
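The convergence argument can be checked directly from Eq.~(\ref{dist}); the short sketch below (plain Python, lengths in units of $\ell_B$, function name ours) shows how far the highest retained orbital sits from the impurity:

```python
import math

def rms_radius(n, m, l_B=1.0):
    """Root-mean-square distance of the orbital (n, m) from the
    origin: <r^2> = (2|n| + 2m + 1 + delta_{n,0}) l_B^2."""
    delta = 1 if n == 0 else 0
    return l_B * math.sqrt(2 * abs(n) + 2 * m + 1 + delta)

# Orbitals recede from the impurity roughly as sqrt(m); at the
# truncation value m_max = 30 the n = 0 orbital already sits at
# about 7.9 magnetic lengths from the impurity.
radii = [rms_radius(0, m) for m in range(31)]
```

Excitations localized on the impurity have negligible weight at such distances, which is why the truncation at $m_{\mathrm{max}} = 30$ is adequate.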
We are primarily interested in optically active excitations, although the
$\delta$-impurity potential always mixes these with some excitations which are nominally dark.
This redistributes the oscillator strength, so that a number of states, instead of being strictly dark, become ``faint''.
In the previous Sec.~\ref{subsec-op}, we saw that bright states had no spin or pseudospin flips
and $J_z=\pm 1$ for the $\sigma^\pm$ polarizations. The four excitations with no spin or pseudospin flips are always mixed by the exchange
$e$-$h$ Coulomb interaction, although how many of them are bright depends on the filling factor and light polarization. Two or four additional excitations with pseudospin flips will be mixed to these by the $\delta$-impurity; generally, for a given $J_z$,
the admixtures will have $J_z$ and $J_z \pm 1$ values.
Thus the full CE comprises six or eight excitations with different spin/pseudospin characters and $J_z$ values.
As an example, Fig.~\ref{fig-diag} indicates the six (red) excitations which should be taken into account
for $\nu=-1$, the $\sigma^+$ light polarization and an impurity on the $A$ sublattice.
\begin{figure}[t!]
\centering
\includegraphics[width=0.47\textwidth]{fig-Aplusnu-1}
\caption {(Color online) Dimensionless energies $\mathcal{E}$ of localized CEs
vs dimensionless impurity coupling constant $g$ [Eqs.~(\protect\ref{eq-ener}), (\ref{eq-impcoupconst})],
for the ($A$, $\sigma^+$, $\nu=-1$) system (squares) and the conjugate ($B$, $\sigma^-$, $\nu=1$) system (circles).
Notice the symmetry of the spectra, Eq.~(\protect\ref{symmetry}).
The shaded region corresponds to the magnetoplasmon band with extended states.}
\label{fig-Amz1nu1}
\end{figure}
\section{Results and discussion}
\label{sec-results}
\subsection{Energies and optical strengths}
\label{subsec-en}
A representative selection of results is displayed in Figs.~\ref{fig-Amz1nu1}--\ref{fig-Amz-1nu1}.
In the absence of an impurity,\cite{BycM08,IyeWFB07,FisRD11} all CEs have \emph{extended} wavefunctions and fill a band
(indicated by the shaded regions) of width determined by the characteristic Coulomb energy
\begin{equation}
\label{E0}
E_0 = \sqrt{\dfrac{\pi}{2}}\dfrac{e^2}{\varepsilon\ell_B} \sim \sqrt{B} \, .
\end{equation}
More precisely, the band has width $E_0$ for odd filling factors $\nu = -1, 1$ and width $0.75 E_0$ for even filling factors
$\nu = 0, 2$.
The difference occurs because excitations which are mixed by the direct interaction,
such as the pair in Fig.~\ref{fig-diag} with $J_z=0$, partake in the CE for odd filling factors, but not for even ones (see Appendix \ref{app-se}). Upon introducing an impurity, discrete localized CEs with energies $E$ appear both below and \emph{above} the band of continuum states. We plot the dimensionless energy $\mathcal{E}$ as a function of the dimensionless impurity coupling constant $g$:
\begin{eqnarray}
\label{eq-ener}
\mathcal{E} & = & \frac{E}{E_0} \, , \\
\label{eq-impcoupconst}
g & = & \frac{V}{E_0 \ell_B^2}=\frac{\sqrt{3}}{2}\frac{W}{E_0} \left( \frac{a}{\ell_B} \right)^2 \sim W\sqrt{B} \, .
\end{eqnarray}
Thus the impurity strength $W$ and magnetic field $B$ may be simultaneously tuned so that the quantity $g$ is left unchanged. Note that in this case the dimensionless energies $\mathcal{E}$ remain unchanged, indicating a scaling property of our theory.
Note that all the plotted energies have been renormalized:
The lower continuum edge corresponds to the observable lowest cyclotron mode,
$\hbar\tilde{\omega}_c = \hbar\omega_c + \delta\hbar\omega_c$, which has been set to the zero energy reference level. Here
$\delta\hbar\omega_c = \Delta_{0 \uparrow \Uparrow}^{1 \uparrow \Uparrow}+\Gamma_{0 \uparrow \Uparrow}^{1 \uparrow \Uparrow}\simeq 0.7 E_0$
is the $e$-$e$ correction to the bare cyclotron energy, with $\Delta_{0 \uparrow \Uparrow}^{1 \uparrow \Uparrow}\approx 1.43E_0$
the total self energy correction and $\Gamma_ {0 \uparrow \Uparrow}^{1 \uparrow \Uparrow}=-0.75E_0$ the ``excitonic'' or vertex correction (see Appendix \ref{app-se}
and Ref.~\onlinecite{FisDR09a}).
The size of square data points in Figs.~\ref{fig-Amz1nu1}--\ref{fig-Amz-1nu1} is proportional to the optical dipole transition matrix element
squared $|d|^2$, see Sec.~\ref{subsec-op} above.
Notice that the majority of states are optically active, although some (marked as ``faint'') are considerably weaker than others,
by at least three orders of magnitude. Some branches become brighter as they approach the band, whereas others lose oscillator strength. The physical reason for this behavior is at present unclear.
The dark states shown in Fig.~\ref{fig-Amz1nu1} have a special character. Out of the six possible types of excitation in our basis (see Fig.~\ref{fig-diag}), only the two excitations,
$|{0 \! \downarrow \Downarrow} \; { -1 \! \downarrow \Downarrow} \; J_z =1 \rangle$ and
$|{0\! \uparrow \Downarrow} \; {-1 \! \uparrow \Downarrow } \; J_z=1\rangle$ contribute; see Eq.\ (\ref{Q}) for the definition of $|\mathcal{ N}_1 \mathcal{N}_2 J_z \rangle$.
They have no amplitude on $|{1 \!\!\uparrow \Downarrow} \; { 0 \!\!\uparrow \Downarrow} \; J_z=1\rangle$,
the only one of the six excitations which is optically active in $\sigma^+$; this explains why they are dark.
Explicitly they are created by the operator
\begin{equation}
\label{eq-rdag}
D^\dagger=\sum_m A(m)\left(c^\dagger_{0 \uparrow \Downarrow m} d^\dagger_{-1 \uparrow \Downarrow 2+m}-c^\dagger_{0 \downarrow \Downarrow m}d^\dagger_{-1 \downarrow \Downarrow 2+m}\right),
\end{equation}
with certain amplitudes $A(m)$ rapidly decreasing with $m$ [distance from impurity, Eq.~(\ref{dist})].
Although generally the total spin quantum number is not well defined for CEs,
excitations (\ref{eq-rdag}) have a higher symmetry and are in fact spin triplet states $S=1, S_z=0$.
Besides, this type of excitation is unusual in the following sense:
all other excitations shown in Fig.~\ref{fig-Amz1nu1} have contributions from all six of the excitations which can be mixed.
We note that the excitations with pseudospin flips are in general strongly mixed by the $\delta$-potential to those with no pseudospin flips.
The results indicate critical $g$ values for the formation of localized excitations, although within our approach it is difficult
to resolve what may be a barely bound low-energy state from the lower continuum edge. The critical value for which bound states appear varies slightly depending on the filling factor $\nu$, light polarization $\sigma^{\pm}$ and the sublattice position $A,B$ of the impurity.
For example, for a system with filling factor $\nu=-1$ illuminated by $\sigma^+$ circularly polarized light with an impurity on an $A$ sublattice
[such a system will henceforth be denoted by ($A$, $\sigma^+$, $\nu=-1$)], there are no bound states for $-2.3<g<7.4$. If we then take $B=15$\,T and $\varepsilon=5$ for example, this corresponds to an impurity strength of approximately $-100\mathrm{\,eV}<W<340\mathrm{\,eV}$. Notice that short range potentials with rather large amplitudes correspond to vacancies.\cite{Bas08} It takes a very strong $\delta$-function impurity potential to localize excitations, since the impurity only couples to basis states where one of the particles has $m=0,1$. Strictly speaking, such energies are larger than the width of the $\pi$ band, where the electrons can be reasonably treated as massless Dirac fermions. For such high values of the impurity potential, our results have only a qualitative nature.
In all cases, larger magnitudes of $g$ are required to push the states \emph{above} the band than to pull them below.
This is because the states below the band form when either the electron or the hole is nearer to the impurity and attracted to it; in this case the additional $e$-$h$ attraction also lowers the energy. States above the band form when one particle is held nearer to the impurity by magnetic confinement but is repelled by it, whilst the other particle is further away. In this case the $e$-$h$ attraction opposes the repulsion by lowering the energy, so that a larger impurity strength is required to overcome it.
The bound states move further from the band as the impurity coupling constant $g$ increases, as expected.
An interesting question is what happens to the bound states when they approach the band.
One possibility is they cease to exist, merging with the two-particle $e$-$h$ continuum.
The alternative is that some continue to exist within the band as quasibound states (resonances).
As a general rule,
the latter states have high probability amplitudes on the impurity and
long-range oscillating tails, which make them non-normalizable.\cite{BazZP69}
The existence of resonances is facilitated by the confining effect of the magnetic field.
Although we detect possible signatures of such resonant states, our method is not accurate enough to claim their existence.
Interestingly, in all cases, the number of branches below the band for positive (negative)
$g$ equals that of branches above the band for negative (positive) $g$.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig-Aplusnu0}
\caption{(Color online) Energies of localized CEs vs impurity strength as in Fig.~\ref{fig-Amz1nu1}, but for the symmetric systems ($A$, $\sigma^+$, $\nu=0$) and ($B$, $\sigma^+$, $\nu=0$). States marked as ``faint'' have values of $|d|^2$ at least three orders of magnitude smaller than the brightest states.}
\label{fig-Amz1nu2}
\end{figure}
In most cases for large enough $|g|$ values, bound states may be found both above and below the band for both $g>0$ and $g<0$.
One exception (see Fig.~\ref{fig-Amz1nu1}) is the ($A$, $\sigma^+$, $\nu=-1$) system.
In this case the impurity selection rule
forbids the hole to interact with the impurity,
so that all diagonal matrix elements have the same sign as $g$.
As a result, the states above the band appear only for large enough positive $g$, and the states below the band only for large enough negative $g$.
This behavior is mimicked by the ``conjugate''-symmetric (see Sec.~\ref{subsec-sym} below) system ($B$, $\sigma^-$, $\nu=1$), except that there it is the electron that is forbidden to interact with the impurity. As a result, states above the band appear only for large enough negative $g$ and states below the band only for large enough positive $g$. The other exception is the pair of ``sublattice''-symmetric systems
($A, \sigma^-, \nu=2$) and ($B, \sigma^-, \nu=2$), where in both cases only the hole interacts with the impurity so that states above (below) the band only exist for large enough negative (positive) values of $g$. These are in turn ``conjugate''-symmetric to the systems ($B, \sigma^+, \nu=-2$) and ($A, \sigma^+, \nu=-2$) respectively.
The number of branches due to the electron-impurity and the number of branches due to the hole-impurity interaction can be correctly predicted by studying the transitions involved and the impurity selection rules.
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{fig-Bplusnu-1}
\caption {(Color online) Energies of localized CEs vs impurity strength as in Fig.~\ref{fig-Amz1nu2}, but
for the ($B$, $\sigma^+$, $\nu=-1$) system.
\label{fig-Bmz1nu1}}
\end{figure}
\subsection{Symmetries of the spectra}
\label{subsec-sym}
The above results indicate that the CEs possess a high level of symmetry. Moving the impurity from one sublattice to the other,
$A \leftrightarrow B$, yields the same energies and oscillator strengths for filling factors $\nu=0,2$ (see Fig.~\ref{fig-Amz1nu2}).
However, the results are qualitatively different
for filling factors $\nu=\pm1$ (compare Figs.~\ref{fig-Amz1nu1} and \ref{fig-Bmz1nu1}).
This is because for odd filling factors such as $\nu=\pm1$, the valleys and thus sublattices are inequivalent.
This follows from the phenomenologically introduced valley splitting energy $\hbar\omega_{v}\tau_z$ [see Eq.~(\ref{eq-en})],
which leads to unequal occupancies of the two valleys. However,
for even filling factors $\nu=0,2,\ldots$, the occupancies are the same and effectively there is no valley splitting.
For the cases when the sublattices may be considered equivalent, the $A \leftrightarrow B$
correspondence of the energies and oscillator
strengths is understood formally by observing that the Hamiltonians $\hat{H}_A$ and $\hat{H}_B$ are connected by a unitary transformation
$\hat{H}_B=\hat{U}\hat{H}_A\hat{U}^\dagger$. This transformation interchanges both valley and sublattice indices and is given by
\begin{equation}
\label{eq-un}
\hat{U}=
\left(
\begin{array}{cccc}
0 & 0 & 0 & \alpha \\
0 & 0 &\alpha & 0 \\
0 & -\alpha^\ast & 0 & 0 \\
-\alpha^\ast & 0 & 0 & 0 \\
\end{array}
\right) \, ,
\end{equation}
with $\alpha=e^{2\pi i/3}$.
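As a quick numerical sanity check (ours, not part of the original derivation), one can verify that the matrix of Eq.~(\ref{eq-un}) is indeed unitary, $\hat{U}\hat{U}^\dagger=\mathbb{1}$, using plain Python:

```python
import cmath

# Phase factor from Eq. (eq-un): alpha = exp(2*pi*i/3)
alpha = cmath.exp(2j * cmath.pi / 3)
ac = alpha.conjugate()

# The unitary transformation interchanging valley and sublattice indices
U = [[0,    0,     0, alpha],
     [0,    0, alpha,     0],
     [0,  -ac,     0,     0],
     [-ac,  0,     0,     0]]

def dagger(M):
    """Conjugate transpose of a square matrix stored as nested lists."""
    n = len(M)
    return [[complex(M[j][i]).conjugate() for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Product of two square matrices stored as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# U U^dagger must equal the 4x4 identity matrix
prod = matmul(U, dagger(U))
for i in range(4):
    for j in range(4):
        target = 1.0 if i == j else 0.0
        assert abs(prod[i][j] - target) < 1e-12
```

Unitarity is immediate here because each row of $\hat{U}$ contains a single entry of unit modulus in a distinct column, so the rows are orthonormal.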
Upon moving the impurity between the sublattices $A \leftrightarrow B$, symmetry may be restored also for odd filling factors. This requires the valleys to be interchanged, which can be achieved by reflecting about $n=0$. This entails flipping the spin and pseudospin quantum numbers and transforming electrons into holes and vice versa ($n \to -n$), which means that $\nu \leftrightarrow -\nu$ and the sign of the impurity potential and also the light polarization should be changed. Note that the LL structure in graphene,
as well as its upper-lower cone dispersion in the absence of a magnetic field, is electron-hole symmetric. As a result, the energies and oscillator strengths of CEs induced by circularly polarized light in a system with filling factor $\nu$ and an impurity on the $A$ sublattice of strength $V$, correspond exactly to those of excitations induced by light circularly polarized in the opposite direction in a system with filling factor $-\nu$ and an impurity on the $B$ sublattice of strength $-V$. This symmetry may be expressed symbolically as
\begin{figure}[!t]
\centering
\includegraphics[width=0.47\textwidth]{fig-Aminusnu-1}
\caption {(Color online) Energies of localized CEs vs impurity strength as in Fig.~\ref{fig-Amz1nu2}, but
for the ($A$, $\sigma^-$, $\nu=-1$) system.
}
\label{fig-Amz-1nu1}
\end{figure}
\begin{equation}
\label{symmetry}
A, \nu, V, \sigma^\pm \longleftrightarrow B, -\nu, -V, \sigma^\mp \, .
\end{equation}
An example is shown in Fig.~\ref{fig-Amz1nu1}.
The above symmetry can be seen formally as follows. Switching the filling factor $\nu \to -\nu$
changes a pair
excitation\footnote{A coherent superposition of such pair
excitations constitutes collective excitations $|\mathcal{ N}_1 \mathcal{N}_2 J_z \rangle $,
see Eqs.~(\ref{eq-Qdag}) and (\ref{Q}).}
${|\mathcal{N}_1 m_1, \mathcal{N}_2 m_2 ; \nu \rangle \equiv c^{\dag}_{\mathcal{N}_1 m_1} d^{\dag}_{\mathcal{N}_2 m_2}|\nu\rangle}$
according to
\begin{equation}
\label{ket}
|\mathcal{N}_1 m_1, \mathcal{N}_2 m_2 ; \nu \rangle \to
|\widetilde{\mathcal{N}}_2 m_2, \widetilde{\mathcal{N}}_1 m_1 ; - \nu \rangle \, ,
\end{equation}
where $\mathcal{N} \equiv \{n,s,\tau \}$ and the conjugate
$\widetilde{\mathcal{N}} \equiv \{-n,\tilde{s},\tilde{\tau}\}$
with $\tilde{s},\tilde{\tau}$ representing flipped spin and pseudospin. From this it follows that $J_z \to -J_z$.
From the $e$-$h$ symmetry,
\begin{eqnarray}
\nonumber
\langle -\nu; \widetilde{\mathcal{N}}_2' m_2' , \widetilde{\mathcal{N}}_1' m_1' |
\hat{H}_\mathrm{int}
|\widetilde{\mathcal{N}}_2 m_2 , \widetilde{\mathcal{N}}_1 m_1 ; -\nu \rangle = \\
\label{equiv-Hm}
\mbox{}\hphantom{XX} = \langle \nu ; \mathcal{N}_1' m_1' , \mathcal{N}_2' m_2' | \hat{H}_\mathrm{int}
| \mathcal{N}_1 m_1, \mathcal{N}_2 m_2; \nu \rangle^\ast \: .
\end{eqnarray}
Hence the two matrices have the same energy eigenvalues and conjugate eigenvectors
leading to the same values of the dipole transition matrix elements $|d|^2$.
\section{Conclusions}
\label{sec-conclusions}
We have considered binding of neutral collective excitations
on a short-range impurity in graphene in strong magnetic fields.
The considered excitations are formed when an electron
is promoted to one of the empty Landau levels leaving behind a hole in a lower filled level.
This kind of problem requires treatment of interparticle (electron-hole) Coulomb,
electron-impurity, and hole-impurity interactions all on equal footing.
The scheme developed in this paper treats the impurity in the continuous approximation,
can be applied to arbitrary integer LL filling factors $\nu$
and gives the energies and optical strengths of excitations localized on the impurity in graphene.
The concrete results we have presented and discussed here are for the $n=0$ Landau level filling factors $\nu = -1, 0, 1, 2$.
A separate study\cite{FisDR09c} using the same approach has shown that the $\delta$-impurity localized states do not seem to evolve smoothly
as a function of filling factor $\nu$ in contrast to the case of the Coulomb impurity.\cite{FisDR09a} This is perhaps because for the Coulomb impurity there are always four transitions with the same $J_z$ value for any filling factor $\nu$, whereas for the short-range impurity, where the intervalley scattering is significant, changing $\nu$ may change (i) the number of transitions that are mixed and (ii) their $J_z$ values.
This apparently affects the number of branches of bound states that are formed. Another difference between the $\delta$-function and Coulomb impurities is that there are degenerate states for the Coulomb impurity
but not for the $\delta$-function. This is because the Coulomb impurity is sublattice symmetric,
whereas the $\delta$-function impurity introduces an asymmetry between sublattices and hence valleys,
thus breaking the SU(4) symmetry.
Notice also that the intervalley excitations $\mathbf{K} \leftrightarrows \mathbf{K}'$
$(\Uparrow \leftrightarrows \Downarrow)$
are admixed by a short-range potential
to optically active intravalley excitations, opening the possibility of inducing pseudospin flips with dipole photons.
We believe that the application of the continuous description to an extremely short-range $\delta$-impurity
systematically underestimates the impurity effect. Indeed, only a few impurity interaction channels are open in a given LL in graphene
as discussed above. Therefore, our predictions for the energies of bound states are mostly of a qualitative nature. It would be interesting to compare these with results of the tight-binding approach incorporating \emph{both} the impurity and the Coulomb interactions; this may be a subject of a separate study. We wish to stress that, on the other hand, the symmetries of the excitonic states and the optical selection rules leading to dark and bright states established in this paper should remain valid in other schemes. We believe that these qualitative predictions can be tested well in experiments.
A companion approach to studying the effect of short-range impurities would be the study of finite-range impurities, e.g., those which arise due to a disorder potential of Gaussian shape. Then the tuning of the height and width of the Gaussian would allow a continuous study of the transition from finite- to zero-range. Furthermore, one might then be able to quantitatively describe impurities induced by the presence of a suitably chosen substrate. We emphasize that the theoretical approach outlined in this work is equally applicable to such Gaussian disorder.
\section{Introduction}
We focus in this paper on Taylor-series expansion based collocation approaches for Partial Differential Equations (PDEs). In these methods, instead of writing the problem in an average sense, as in Galerkin methods, the strong form is written explicitly at a set of computational points distributed over the domain. Derivative operators are computed through the use of stencils of points, which can be built in different ways. A lot of work has been done in this field since the early 1900s, and collocation methods are regaining interest due to the advent of massively parallel computing, which lends itself very naturally to these methods. Our goal in this paper is to facilitate the understanding of newcomers to the field, to help choose optimal parameters, to benchmark the methods and, finally, to propose novel approaches to deal with non-smooth solutions. Specifically, we aim to:
\begin{itemize}
\item briefly review approaches to alleviate the mesh burden in computational mechanics;
\item provide a gradual, clear and detailed introduction to Taylor-series expansion based collocation approaches;
\item investigate the sensitivity of the above approaches to the parameters involved;
\item provide recommendations on the methods' optimal parameters;
\item propose and experiment on a computational approach to handle sharp corners and singularities;
\item provide a comprehensive investigation of the relative performance of some of the most popular such approaches;
\begin{itemize}
\item for smooth problems;
\item for rough and singular problems with low solution regularity;
\end{itemize}
\item facilitate future benchmarking by providing all data files, including geometries, point distributions, loading and boundary conditions, solution fields, to help benchmarking existing and new methods.
\end{itemize}
Numerical methods have been under development for over a century. The first methods to be developed were finite difference methods, which focus on the approximation of the differential operator. The first known reference is the inception of finite difference methods for partial differential equations in the work of C. Runge in 1908 \cite{Runge1908}. The idea was to use stencils of points in order to approximate differential operators using finite differences. In their initial form, finite difference methods were largely limited to Cartesian domains in space or to time approximations.
This limitation of finite difference methods to the union of Cartesian domains may have been the motivation for the development of alternative methods including the Ritz method \cite{Ritz1908} and the Galerkin finite element method \cite{Galerkin1915}. Contrary to finite difference schemes, finite element methods were able to handle arbitrarily complex geometries, at the cost of the generation of a mesh, i.e. a cover of the volume with simple shapes including tetrahedral, hexahedral and prismatic elements.
Shortly after the introduction of the concept of a mesh, in 1977, the creation of the smoothed particle hydrodynamics (SPH) method \cite{Monaghan1992} gave rise to what would later become known as mesh-free methods. SPH enabled the solution of problems which caused difficulties for finite elements, in particular those involving fluid flow, fragmentation and very large deformations.
The finite element concept of mesh, closely related to that of interpolation and approximation comes with at least five associated challenges:
\begin{itemize}
\item the mesh should conform to the potentially complex geometry of the domain and hence be regenerated, at least partially, for each change in the geometry of the component under consideration;
\item for moving boundaries, the mesh must be regenerated at each geometrical change in the boundary;
\item the aspect ratios of the elements should be controlled to ensure accuracy; in large deformations, this includes ensuring that the elements do not become excessively distorted or inverted;
\item locking problems have to be accounted for when small parameters appear within the PDE, e.g. for thin plates and shells or incompressible materials, warranting the development of new locking-free formulations;
\item stability of approximation schemes for coupled multi-field problems must be ensured, leading to the requirement of hybrid methods.
\end{itemize}
Some of these challenges may well have motivated the inception of alternative methods known at the time as meshless or meshfree methods \cite{Belytschko1994,Liu1995,Duarte1996,Atluri1998,De2000,Chen2000}. The original idea behind such methods was to decrease the burden posed by the generation and regeneration of a mesh. In particular, the Bubnov-Galerkin or Petrov-Galerkin methods relax some of the constraints associated with locating the points used to construct the approximation and thus simplify local refinement. Nonetheless, these methods rely on non-polynomial approximations which are usually non-interpolating, thus posing additional difficulties associated with enforcing boundary conditions and numerical integration. For many of these methods, numerical integration requires a background mesh or local integration rules on complex domains such as lenses. In their initial formulation, meshfree methods are computationally expensive, which somewhat limits their application to industrial problems. For further details on the implementation and recent advances of meshfree methods, the reader may consult the 2008 review of \cite{Nguyen2008}.
Contemporaneously with the birth of meshfree methods, partition of unity approaches saw the light of day \cite{Babuska1995,Babuska1997}. In their original form, they enable the introduction of known features of the solution within the finite element approximation. Either this known feature is computed numerically (as in the generalised finite element method \cite{Strouboulis2001}) or it is extracted from analytical knowledge about the solution, as in the extended finite element method (XFEM). These methods, born in parallel with meshfree methods, create an intermediate world between finite element methods and meshfree methods, and have similarities with both. For instance, methods such as XFEM enable the simulation of propagating discontinuities in the field variable or its derivative with minimal or no remeshing, whilst some versions of partition of unity methods require special treatment of boundary conditions.
Some of the most exciting applications of partition of unity methods include fracture mechanics either as enriched finite elements \cite{Mos1999,Sukumar2000,Dolbow2000,Dolbow2001,Sukumar2001,Mos2002,Ji2004,Duflot2008} or as enriched meshfree methods \cite{Rabczuk2007,Rabczuk2007Sec,Rabczuk2007Thi,Bordas2007,Bordas2008,Talebi2011,Natarajan2011}. Note that such partition of unity methods were also used to permit the implicit treatment of (evolving) discontinuities using level set methods, including an implicit description of the boundary of the computational domain \cite{Belytschko2002,Moumnassi2014}. Several recent reviews can be consulted for an overview on partition of unity methods \cite{Rabczuk2010}.
A decade after the appearance of Galerkin meshfree methods, isogeometric analysis (IGA) approaches saw the light of day \cite{Hughes2005}. Their primary goal is to facilitate the connection between computer aided design (CAD) and computer aided engineering (CAE) with numerical analysis by using the same functions used to describe the geometry of the object to also approximate the unknown field variables. In this way, the method is able to represent complex geometries exactly. During early stage design iterations, any change in the geometry is automatically inherited by the approximation scheme for the field variables, thereby simplifying the iterative design process. Isogeometric analysis boundary element methods (IGABEM) \cite{Simpson2012,Simpson2013,Scott2013,Lian2013,Peng2014,Atroshchenko2015,Lian2016,Peng2017,Lian2017,Atroshchenko2017} transcend the intrinsic limitations of IGA within a finite element context, in particular the requirement of 3D volume parameterisation, akin to hexahedral meshing \cite{Xu2011,Xu2013,Xu2018}. IGA shares many common points with meshfree methods, in particular its natural ability to deal with high order approximations, which makes it suitable to handle Kirchhoff-Love plates and shells and high-order PDEs. Various approaches combining enrichment with IGA were introduced \cite{Nguyen2015}. The reader can refer to the recent overview and computer implementation aspects of IGA presented in \cite{NguyenImplementation2015}.
To overcome the most negative aspects of IGA, i.e. the need for structured Cartesian parameterisation associated with the tensor product nature of the method as well as the consequential difficulties associated with local mesh refinement, the geometry-independent field approximation method (GIFT) was proposed by Atroshchenko and colleagues in a series of papers \cite{Atroshchenko2018}, which relaxes the strict requirement of using the same basis functions to represent the geometry and the field variables, and, hence enables the local refinement of the field approximation independently of the non-uniform rational B-splines (NURBS) representation of the boundary. This therefore maintains the tight coupling between the CAD and the analysis of a given component, without requiring the use of NURBS for field approximations, which has been shown to be suboptimal in certain situations, for example, for problems with corner singularities, or weakly regular solutions.
Contemporarily with IGA, methods based on implicit treatments of boundaries have continued to develop, thanks to the combined efforts of engineers and applied mathematicians \cite{Burman2010,Burman2012,Burman2014,Burman2014_2,Hansbo2014,Burman2015,Claus2015,Claus2017,Claus2018}, and \cite{BordasUnfitted2017}.
In light of the above summary, it may appear that collocation methods were abandoned by the computational mechanics community in favour of the finite element and meshfree methods. Collocation methods, however, have been continuously studied from the mid-1950s to date, and have recently been reintroduced into the literature thanks to the advent of advanced computing hardware such as graphical processing units and Xeon Phis, among others. Such hardware has memory architectures which are well suited to handling regular data layouts, such as the rows of the stiffness matrix produced by collocation approaches.
Now that we have painted an impressionist picture of the path towards mesh-burden reduction, subsequent to the birth of finite difference methods and finite element methods, we proceed to introduce collocation methods, which we classify broadly into two groups. The first group includes all methods which use an approximation of the differential operator to solve the Partial Differential Equation (PDE). In this paper, two methods of the first group, which use a Taylor's series expansion to approximate the field derivatives, are considered. These methods are the Generalized Finite Difference (GFD) method and the Discretization-Corrected Particle Strength Exchange (DC PSE) method, see the discussion below. The second group includes methods which are based on an approximation of the unknown field. The most prominent method in this second group is the Moving Least Squares (MLS) method \cite{Lancaster1981,Shepard1968} that is used in the Element Free Galerkin (EFG) method \cite{Belytschko1994}.
The idea of generalizing the Finite Difference Method (FDM) began in 1953 with MacNeal \cite{Macneal953}, and in 1960 with Forsythe and Wasow \cite{Forsythe1960}. They proposed a method to transform an irregular node distribution over the domain into a regular sub-domain on which the FDM can be applied. In 1972, Jensen \cite{Jensen1972} introduced the basis of the Generalized Finite Difference Method. The method, described for two-dimensional problems, uses a six-node star and a second order Taylor's series expansion to approximate the spatial derivatives up to the second order. In 1980, Liszka and Orkisz \cite{Liszka1980} presented a method based on an eight-node star which allows obtaining a more stable approximation of the derivatives. The method is based on some selected weights and a mean least square approximation of the derivatives. In 1998, Orkisz \cite{Orkisz1998} presented a more complete version of the GFD method covering various subjects, such as the application of the method to the Galerkin framework, and the use of a posteriori error estimators for model adaptivity.
The Particle Strength Exchange (PSE) method was introduced by Degond and Mas-Gallic in 1989 \cite{Degond1989}. Initially developed to approximate the diffusion operator of the convection-diffusion equations, the method was generalized by Eldredge et al. in 2002 \cite{Eldredge2002} in order to approximate derivatives of any order. The Discretization-Corrected Particle Strength Exchange (DC PSE) method was introduced by Schrader et al. \cite{Schrader2010} in 2010 in order to account for the discretization of the domain in the operator calculation. This allows removing the discretization error, hence the name ``Discretization-Corrected''.
The GFD and the DC PSE methods show many similarities, which are analyzed in this paper. Both methods are based on a set of parameters. In this paper, we study the sensitivity of these methods to these parameters. Some methods, aiming at improving the accuracy of the solution, are presented and analyzed in the paper. The two considered methods are compared to other well known collocation methods. These methods can be classified into two categories. The methods based on an approximation of the differential operator such as the GFD and the DC PSE methods form the first group, and the methods based on an approximation of the field form the second group. For each category, the following methods are considered:
\begin{itemize}
\item Differential Operator Approximation;
\begin{itemize}
\item Generalized Finite Difference Method (GFD);
\item Discretization-Corrected Particle Strength Exchange Method (DC PSE);
\item Radial Basis Function Finite Difference Method (RBF-FD).
\end{itemize}
\item Field Approximation;
\begin{itemize}
\item Moving Least Square Method (MLS);
\item Interpolating Moving Least Square Method (I-MLS).
\end{itemize}
\end{itemize}
A brief outline of the remainder of the paper is as follows. In Section \ref{MethodDescription}, we briefly describe each of the methods considered in this paper. In Section \ref{ProblemAndNorms}, three linear elastic problems, for which an analytical solution is known (i.e. a cylinder under internal pressure, a sphere under internal pressure, and an L-shape domain in mode I loading), are presented. Moreover, the error norms are also introduced in Section \ref{ProblemAndNorms}. The methods are compared for the $L_2$ norm and the $L_{\infty}$ norm of the error for the calculated stress components. In Section \ref{ParametricAnalysis}, we present a parametric sensitivity study of the methods. This includes a study of the weight function, of the correction function (for DC PSE), and of the number of support nodes. In Section \ref{ImprovementMethods_Section}, we present some improvement methods for the GFD and the DC PSE methods, such as Voronoi diagram, stabilization, and criteria for support node selection for singular problems. In Section \ref{Benchmarking}, we present some benchmarking results from the comparison of the various methods listed above. We also present some results on convergence rates and computational expenses of these methods. In Section \ref{3DResults}, we present the results of the GFD method for 3D problems. Moreover, we compare our results with finite elements results obtained using the commercial package ABAQUS \cite{Abaqus2017}. Some conclusions are drawn in Section \ref{Conclusions}. Finally, a detailed comparison of the GFD and DC PSE methods for 1D problems is provided in the Appendix.
The novelty of our work consists of two main components, namely (1) a detailed comparison of the GFD and DC PSE methods for 2D and 3D linear elastic problems (such as the pressurized cylinder and the L-shape domain in mode I loading), and (2) the assessment of the improvement methods as well as the identification and comparison of variations on DC PSE methods. To the best of the authors' knowledge, these studies are not found in the literature.
\section{Collocation Methods} \label{MethodDescription}
\subsection{Introduction} \label{General_Method}
Solving a problem by a collocation method consists in solving the set of PDEs only at the collocation centers. A number of nodes spread over the domain are used to estimate the derivatives at the collocation centers. In most collocation methods, the equations are solved at the nodes. Since the problem is solved locally, the strong form of the PDEs is considered.
In this paper, we primarily consider the GFD and DC PSE methods. These methods are compared to the MLS approximation method and to the RBF-FD method, which are among the most popular methods of approximation in the framework of collocation methods. In the remainder of the present section, we present the principles of each of these methods.
In order to facilitate the comprehension of the methods, the case of a two dimensional problem in a Cartesian coordinate system is considered. The GFD and DC PSE methods are also presented and compared for the case of a 1D problem in Appendix \ref{app:a}.
In the sections below, the spatial coordinates are denoted by $x$ and $y$. The coordinates of a node $\mathbf{X}$ are then $\mathbf{X}=[x,y]^T$. The subscripts $c$ and $p$ are used to identify, respectively, the collocation node and a particle ``$p$".
The first and second derivatives in the two spatial directions are denoted by: $\frac{\partial}{\partial x}$, $\frac{\partial}{\partial y}$, $\frac{\partial^{2}}{\partial x^{2}}$, $\frac{\partial^{2}}{\partial x \partial y}$, $\frac{\partial^{2}}{\partial y^{2}}$. In the general case, these derivatives are written as $D^{n_x,n_y}f(\mathbf{X_c})$, where $n_x$ and $n_y$ are, respectively, the derivation orders in the directions $x$ and $y$.
The derivatives at a collocation center are typically approximated based on a defined support. The support is the set of nodes located in the vicinity of the collocation node. Figure \ref{Node_Neighbors} below shows the nodes of the domain $\Omega$ included in the support $\Omega_c$ of a collocation node $\mathbf{X_c}$. In 2D, the support is limited by a circle of radius $R_{\text{sup}}$.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\definecolor{GreyColor}{rgb}{0.3,0.3,0.3}
\begin{axis}[height=7.5cm,width=7.5cm, xmin=1.80,xmax=2.26,scaled x ticks = false, ymin=1.80,ymax=2.26,scaled y ticks = false,axis line style={draw=none},tick style={draw=none},xticklabels={,,}, yticklabels={,,}, cells={anchor=west}, font=\footnotesize, rounded corners=1pt]
\addplot[blue,line width=1.5pt] table [x=X-Circle, y=Y-Circle, col sep=comma] {SupportDrawing.csv};
\addplot[only marks,red,mark=*,mark options={fill=red}] table [x=X-Ref, y=Y-Ref, col sep=comma] {SupportDrawing.csv};
\addplot[only marks,black,mark=*,mark options={fill=black}] table [x=X-Nodes, y=Y-Nodes, col sep=comma] {SupportDrawing.csv};
\end{axis}
\draw [black,-stealth, line width=1.0pt] (0,2.57) node [left] {Collocation Node $X_c$} -- (2.96,3.02);
\draw [black,-stealth, line width=1.0pt] (0,4.23) node [left] {Support Node $X_p$} -- (2.35,4.23);
\draw [black,-stealth, line width=1.0pt] (3.075,3.03) node [anchor=west, align=left] {\quad $R_{\text{sup}}$} -- (4.52,3.77);
\draw [black] (0.5,1) node [left] {$\Omega$};
\draw [blue] (5.3,3.3) node [left] {$\Omega_c$};
\end{tikzpicture}
\caption{Collocation Node Support}
\label{Node_Neighbors}
\end{figure}
\subsection{Generalized Finite Difference Method} \label{GFD_Method}
\subsubsection{Principle}
The FDM is the simplest and one of the oldest methods for derivative approximation. The major drawback of this method is that it requires the use of a regular mesh. In 1972, Jensen \cite{Jensen1972} presented a method to approximate two dimensional derivatives using the Taylor's series approximation on an irregular grid. This method is known as the Generalized Finite Difference (GFD) method.
For the GFD method, the derivatives are calculated at collocation nodes $\mathbf{X_c}=[x_c,y_c]^T$ using a Taylor's series expansion of the unknown field. The field derivatives at $\mathbf{X_c}$ are computed in order to reproduce the known field values $f(\mathbf{X_{pi}})$ for a number of points $\mathbf{X_{pi}}=[x_{pi},y_{pi}]^T$. The number of selected points depends on the approximated derivative order.
\subsubsection{Differential Operator Approximation}
Considering a function $f: \rm I\!R^2 \rightarrow \rm I\!R$, the Taylor's series expansion of this function at $\mathbf{X_{pi}}$ in the vicinity of a collocation node $\mathbf{X_c}$ is written:
\begin{equation} \label{Taylor2D_AllTerms}
f(\mathbf{X_{pi}})=\sum_{j=0}^{+\infty} \sum_{k=0}^{+\infty} \frac{\partial^{j+k}f (\mathbf{X_c})}{\partial x^j\partial y^k} \frac{(x_{pi} - x_c)^j}{j!} \frac{(y_{pi} - y_c)^k}{k!}.
\end{equation}
For ease of notation, we write the second order approximation of the function $f$ at the point $\mathbf{X_{pi}}$ near $\mathbf{X_c}$ as $f_h(\mathbf{X_{pi}})$. For $f_h(\mathbf{X_{pi}})$, Equation (\ref{Taylor2D_AllTerms}) becomes:
\begin{equation} \label{Taylor2D_SecondOrderApprox}
\begin{aligned}
f_h(\mathbf{X_{pi}})= &f(\mathbf{X_c}) +(x_{pi} - x_c)\frac{\partial f (\mathbf{X_c})}{\partial x} + (y_{pi} - y_c)\frac{\partial f (\mathbf{X_c})}{\partial y} \\
&+ \frac{(x_{pi} - x_c)^{2}}{2!}\frac{\partial^2 f (\mathbf{X_c})}{\partial x^2} + (x_{pi} - x_c)(y_{pi} - y_c)\frac{\partial^2 f (\mathbf{X_c})}{\partial x \partial y} + \frac{(y_{pi} - y_c)^{2}}{2!}\frac{\partial^2 f (\mathbf{X_c})}{\partial y^2}.
\end{aligned}
\end{equation}
Equation (\ref{Taylor2D_SecondOrderApprox}) can be cast in a matrix form:
\begin{equation} \label{Taylor2D_GFD}
\begin{aligned}
&\Matrix { x_{pi} - x_c, y_{pi} - y_c, \frac{(x_{pi} - x_c)^{2}}{2!}, (x_{pi} - x_c)(y_{pi} - y_c), \frac{(y_{pi} - y_c)^{2}}{2!}} \Matrix {\frac{\partial f (\mathbf{X_c})}{\partial x} ;\frac{\partial f (\mathbf{X_c})}{\partial y} ; \frac{\partial^2 f (\mathbf{X_c})}{\partial x^2}; \frac{\partial^2 f (\mathbf{X_c})}{\partial x \partial y}; \frac{\partial^2 f (\mathbf{X_c})}{\partial y^2}}
= f_h(\mathbf{X_{pi}}) - f(\mathbf{X_c}).
\end{aligned}
\end{equation}
In order to determine an approximation of the field derivatives $\mathbf{Df(X)}= \Big[ \frac{\partial f (\mathbf{X})}{\partial x}, \frac{\partial f (\mathbf{X})}{\partial y}, \frac{\partial^2 f (\mathbf{X})}{\partial x^2}, \frac{\partial^2 f (\mathbf{X})}{\partial x \partial y}, \frac{\partial^2 f (\mathbf{X})}{\partial y^2} \Big]^T $ (five unknowns), Equation (\ref{Taylor2D_SecondOrderApprox}) will be written for five nodes $\mathbf{X_{pi}}$ in the vicinity of $\mathbf{X_c}$ (see Figure \ref{GFD_FiveSupNodes}). Thereby, a linear system is obtained.
\begin{figure}[H]
\centering
\includegraphics{GFD_FiveSupNodes_2.pdf}
\caption{Five Nodes Support of a Collocation Node $X_c$}
\label{GFD_FiveSupNodes}
\end{figure}
\begin{equation} \label{System_GFD}
\begin{aligned}
&\Matrix { x_{p1} - x_c, y_{p1} - y_c, \frac{(x_{p1} - x_c)^{2}}{2!}, (x_{p1} - x_c)(y_{p1} - y_c), \frac{(y_{p1} - y_c)^{2}}{2!} ; x_{p2} - x_c, y_{p2} - y_c, \frac{(x_{p2} - x_c)^{2}}{2!}, (x_{p2} - x_c)(y_{p2} - y_c), \frac{(y_{p2} - y_c)^{2}}{2!} ; x_{p3} - x_c, y_{p3} - y_c, \frac{(x_{p3} - x_c)^{2}}{2!}, (x_{p3} - x_c)(y_{p3} - y_c), \frac{(y_{p3} - y_c)^{2}}{2!} ; x_{p4} - x_c, y_{p4} - y_c, \frac{(x_{p4} - x_c)^{2}}{2!}, (x_{p4} - x_c)(y_{p4} - y_c), \frac{(y_{p4} - y_c)^{2}}{2!} ; x_{p5} - x_c, y_{p5} - y_c, \frac{(x_{p5} - x_c)^{2}}{2!}, (x_{p5} - x_c)(y_{p5} - y_c), \frac{(y_{p5} - y_c)^{2}}{2!}} \Matrix {\frac{\partial f (\mathbf{X_c})}{\partial x} ;\frac{\partial f (\mathbf{X_c})}{\partial y} ; \frac{\partial^2 f (\mathbf{X_c})}{\partial x^2}; \frac{\partial^2 f (\mathbf{X_c})}{\partial x \partial y}; \frac{\partial^2 f (\mathbf{X_c})}{\partial y^2}}
= \Matrix {f_h(\mathbf{X_{p1}}) - f(\mathbf{X_c});f_h(\mathbf{X_{p2}}) - f(\mathbf{X_c});f_h(\mathbf{X_{p3}}) - f(\mathbf{X_c});f_h(\mathbf{X_{p4}}) - f(\mathbf{X_c});f_h(\mathbf{X_{p5}}) - f(\mathbf{X_c})}.
\end{aligned}
\end{equation}
Assuming that $f_h$ is close to $f$ in the vicinity of $\mathbf{X_c}$, the derivatives at the collocation node $\mathbf{X_c}$ can be approximated as a function of $f(\mathbf{X_c})$ and $f_h(\mathbf{X_{pi}})$ by solving the above system. If more than five points $\mathbf{X_{pi}}$ are chosen for solving Equation (\ref{Taylor2D_GFD}), the system is overdetermined. In that case, the derivatives at $\mathbf{X_c}$ leading to the minimum error can be determined using the least squares method.
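As an illustration, the five-node system (\ref{System_GFD}) can be assembled and solved in a few lines. The quadratic test field and the node layout below are illustrative assumptions; since the field is quadratic, its second order Taylor expansion is exact and the five derivatives are recovered exactly:

```python
import numpy as np

# quadratic test field (illustrative): its second order Taylor expansion
# is exact, so the five-node system recovers every derivative exactly
def f(x, y):
    return 1 + 2*x + 3*y + 0.5*x**2 + x*y + 0.25*y**2

xc, yc = 0.3, 0.7                               # collocation node X_c
h = 0.2                                         # assumed node spacing
offsets = np.array([[h, 0], [-h, 0], [0, h], [0, -h], [h, h]])
pts = np.array([xc, yc]) + offsets              # five support nodes X_pi

dx, dy = pts[:, 0] - xc, pts[:, 1] - yc
A = np.column_stack([dx, dy, dx**2/2, dx*dy, dy**2/2])  # Eq. (System_GFD)
b = f(pts[:, 0], pts[:, 1]) - f(xc, yc)
Df = np.linalg.solve(A, b)  # [df/dx, df/dy, d2f/dx2, d2f/dxdy, d2f/dy2]
```

Note that the fifth, diagonal node is required: with only the four axis-aligned nodes, the column multiplying the mixed derivative would vanish and the system would be singular.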
\subsubsection{Overdetermined Approximation}
If an arbitrary number of nodes $m$ is selected, the derivatives are determined using a weighted least squares method. The least squares functional $B$ is presented below for the two dimensional case, for both the general form (\ref{FunctionalB_AllTerms_GFD}) and the second order approximation (\ref{FunctionalB_SecondOrder_GFD}). A weight function $w$ is typically used to balance the contribution of each node in the approximation. While a wide range of functions can be used as weights, $3^{\text{rd}}$ and $4^{\text{th}}$ order splines are usually preferred.
\begin{align}
\begin{split}\label{FunctionalB_AllTerms_GFD}
B(\mathbf{X_c})= \sum_{i=1}^m { w(\mathbf{X_{pi}} - \mathbf{X_c}) \Big[\sum_{j=0}^{+\infty} \sum_{k=0}^{+\infty} \frac{\partial^{j+k} f (\mathbf{X_c})}{\partial x^j\partial y^k} \frac{(x_{pi} - x_c)^j}{j!} \frac{(y_{pi} - y_c)^k}{k!} - f(\mathbf{X_{pi}}) \Big]^2}. \\
\end{split}\\
\begin{split} \label{FunctionalB_SecondOrder_GFD}
B_h(\mathbf{X_c})=\sum_{i=1}^m { w(\mathbf{X_{pi}} - \mathbf{X_c}) \Big[ } & {f(\mathbf{X_c}) - f(\mathbf{X_{pi}}) + (x_{pi} - x_c)\frac{\partial f (\mathbf{X_c})}{\partial x} + (y_{pi} - y_c)\frac{\partial f (\mathbf{X_c})}{\partial y}}\\
& + {\frac{(x_{pi} - x_c)^{2}}{2!}\frac{\partial^2 f (\mathbf{X_c})}{\partial x^2} + (x_{pi} - x_c)(y_{pi} - y_c)\frac{\partial^2 f (\mathbf{X_c})}{\partial x \partial y}}\\
& + {\frac{(y_{pi} - y_c)^{2}}{2!}\frac{\partial^2 f (\mathbf{X_c})}{\partial y^2} \Big]^2}. \\
\end{split}
\end{align}
The derivatives $\mathbf{Df(X_c)}$ that best approximate the known field values through the Taylor's series expansion are those that minimize $B_h(\mathbf{X})$, i.e.:
\begin{equation} \label{DerivativeFunctionalB_SecondOrder_GFD}
\frac{\partial B_h(\mathbf{X})}{\partial \mathbf{Df(X)}}\biggr\rvert_{\mathbf{X}=\mathbf{X_c}}=0.
\end{equation}
Equation (\ref{DerivativeFunctionalB_SecondOrder_GFD}) can be written as a linear system of the form:
\begin{equation} \label{GFDLinearSystem}
\mathbf{A(X_c) Df(X_c) = E(X_c) F(X_c)}.
\end{equation}
For the two dimensional second order case, the matrices $\mathbf{A(X_c)}$, $\mathbf{E(X_c)}$ and $\mathbf{F(X_c)}$ are:
\begin{align}
\begin{split}\label{GFD_MatAx}
\mathbf{A(X_c)}=& \begin{bmatrix}
m_{11} & m_{12} & \dots & m_{15} \\
m_{21} & m_{22} & \dots & m_{25} \\
\vdots & & & \vdots \\
m_{51} & m_{52} & \dots & m_{55} \\
\end{bmatrix} \in \rm I\!R^{5 \times 5},
\end{split}\\[15pt]
\begin{split}\label{GFD_MatBx}
\mathbf{E(X_c)}=&\begin{bmatrix}
-m_{01} & m_{01,1} & \dots & m_{01,m} \\
-m_{02} & m_{02,1} & \dots & m_{02,m} \\
\vdots & & & \vdots \\
-m_{05} & m_{05,1} & \dots & m_{05,m} \\
\end{bmatrix} \in \rm I\!R^{5 \times (m+1)},
\end{split}\\[15pt]
\begin{split}\label{GFD_MatFx}
\mathbf{F(X_c)}=&\begin{bmatrix}
f(\mathbf{X_c}) & f(\mathbf{X_{p1}}) & f(\mathbf{X_{p2}}) & \dots & f(\mathbf{X_{pm}}) \\
\end{bmatrix}^T,
\end{split}
\end{align}
where the moments $m_{ij,k}$ and $m_{ij}$ correspond to:
\begin{equation}\label{Moments_GFD}
\begin{aligned}
m_{ij,k} &= w(\mathbf{X_{pk}} - \mathbf{X_c}) P_{(i+1),k}(\mathbf{X_c}) P_{(j+1),k}(\mathbf{X_c}), \\
m_{ij} &= \sum_{k=1}^m {m_{ij,k}}.
\end{aligned}
\end{equation}
The matrix $\mathbf{P(X_c)} \in \rm I\!R^{6 \times m}$ is written as follows:
\begin{equation}\label{GFD_MatP}
\mathbf{P(X_c)}=\begin{bmatrix}
1 &1 & \dots & 1 \\
(x_{p1} - x_c) & (x_{p2} - x_c) & \dots & (x_{pm} - x_c) \\
(y_{p1} - y_c) & (y_{p2} - y_c) & \dots & (y_{pm} - y_c) \\
\frac{(x_{p1} - x_c)^2}{2!} & \frac{(x_{p2} - x_c)^2}{2!} & \dots & \frac{(x_{pm} - x_c)^2}{2!} \\
(x_{p1} - x_c)(y_{p1} - y_c) & (x_{p2} - x_c)(y_{p2} - y_c) & \dots & (x_{pm} - x_c)(y_{pm} - y_c) \\
\frac{(y_{p1} - y_c)^2}{2!} & \frac{(y_{p2} - y_c)^2}{2!} & \dots & \frac{(y_{pm} - y_c)^2}{2!} \\
\end{bmatrix}.
\end{equation}
The derivative vector $\mathbf{Df(X_c)}$ can then be determined as a function of $\mathbf{F(X_c)}$:
\begin{equation}\label{FinalSystem_GFD}
\mathbf{Df(X_c)=A(X_c)^{-1} E(X_c) F(X_c)}.
\end{equation}
The approximated derivatives are determined by solving the linear system (\ref{FinalSystem_GFD}). These derivatives are, by definition, consistent with each other as they participate in reproducing the unknown field values based on a Taylor's series expansion.
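The weighted, overdetermined variant can be sketched as follows. In this sketch the $f(\mathbf{X_c})$ column of $\mathbf{E(X_c)}$ is folded into the right-hand side, which is algebraically equivalent to Equation (\ref{FinalSystem_GFD}); the test field and the circular support layout are illustrative assumptions:

```python
import numpy as np

def spline4(s):
    """4th order spline weight, Eq. (Spline4Weight)."""
    return np.where(s <= 1.0, 1 - 6*s**2 + 8*s**3 - 3*s**4, 0.0)

def gfd_derivatives(Xc, Xp, fc, fp, rc):
    """Weighted least squares GFD derivatives at Xc. The f(Xc) column of
    E(Xc) is folded into the right-hand side, which is equivalent to
    Eq. (FinalSystem_GFD)."""
    dx, dy = Xp[:, 0] - Xc[0], Xp[:, 1] - Xc[1]
    Q = np.column_stack([dx, dy, dx**2/2, dx*dy, dy**2/2])  # Taylor terms
    w = spline4(np.hypot(dx, dy) / rc)
    A = Q.T @ (w[:, None] * Q)                  # moment matrix A(Xc)
    rhs = Q.T @ (w * (fp - fc))                 # E(Xc) F(Xc)
    return np.linalg.solve(A, rhs)  # [df/dx, df/dy, d2f/dx2, d2f/dxdy, d2f/dy2]

# quadratic test field (illustrative): reproduced exactly at second order
f = lambda x, y: 1 + 2*x - y + x**2 + 0.5*x*y + 2*y**2
Xc = np.array([0.5, 0.5])
ang = np.linspace(0, 2*np.pi, 9, endpoint=False)
Xp = Xc + 0.2*np.column_stack([np.cos(ang), np.sin(ang)])  # 9 support nodes
Df = gfd_derivatives(Xc, Xp, f(*Xc), f(Xp[:, 0], Xp[:, 1]), rc=0.3)
```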
\subsection{Discretization-Corrected Particle Strength Exchange Method (DC PSE)}\label{DC-PSE_Method}
\subsubsection{General DC PSE Operator}
The DC PSE method is based on a Taylor's series expansion of the unknown field. A convolution function is used to select the approximated derivative term; all the other unknown terms of the expansion are canceled out by the convolution function. The Taylor's series expansions presented in Equation (\ref{Taylor2D_AllTerms}) and Equation (\ref{Taylor2D_SecondOrderApprox}) are convolved with a function $\eta$ over a domain $\Omega_c$:
\begin{equation} \label{Convolution_AllTerms}
\begin{aligned}
\int_{\Omega_c} {f(\mathbf{X_p})} \eta(\mathbf{X_p}-\mathbf{X_c}) d\mathbf{X_p} = &\sum_{i=0}^{+\infty} \sum_{j=0}^{+\infty} \int_{\Omega_c}{\frac{\partial^{i+j} f (\mathbf{X_c})}{\partial x^{i} \partial y^{j}} \frac{(x_p - x_c)^i}{i!} \frac{(y_p - y_c)^j}{j!} \eta(\mathbf{X_p}-\mathbf{X_c}) d\mathbf{X_p}}.
\end{aligned}
\end{equation}
The second order approximation of Equation (\ref{Convolution_AllTerms}) is written as follows:
\begin{equation} \label{Convolution_SecondOrderApprox}
\begin{aligned}
\int_{\Omega_c} {f_h(\mathbf{X_p})} \eta(\mathbf{X_p}-\mathbf{X_c}) d\mathbf{X_p} = &\int_{\Omega_c} {f(\mathbf{X_c})} \eta(\mathbf{X_p}-\mathbf{X_c}) d\mathbf{X_p} \\
&+ \int_{\Omega_c} {\frac{\partial f(\mathbf{X_c})}{\partial x}} (x_p - x_c) \eta(\mathbf{X_p}-\mathbf{X_c}) d\mathbf{X_p} \\
&+ \int_{\Omega_c} {\frac{\partial f(\mathbf{X_c})}{\partial y}} (y_p - y_c) \eta(\mathbf{X_p}-\mathbf{X_c}) d\mathbf{X_p} \\
&+ \int_{\Omega_c} {\frac{\partial^2 f(\mathbf{X_c})}{\partial x^2}} \frac{(x_p - x_c)^{2}}{2!} \eta(\mathbf{X_p}-\mathbf{X_c}) d\mathbf{X_p} \\
&+ \int_{\Omega_c} {\frac{\partial^2 f(\mathbf{X_c})}{\partial x \partial y}} (x_p - x_c) (y_p - y_c) \eta(\mathbf{X_p}-\mathbf{X_c}) d\mathbf{X_p} \\
&+ \int_{\Omega_c} {\frac{\partial^2 f(\mathbf{X_c})}{\partial y^2}} \frac{(y_p - y_c)^{2}}{2!} \eta(\mathbf{X_p}-\mathbf{X_c}) d\mathbf{X_p}.
\end{aligned}
\end{equation}
Equations (\ref{Convolution_AllTerms}) and (\ref{Convolution_SecondOrderApprox}) can be simplified by introducing the moments $M_{i,j}(\mathbf{X_c})$ which are defined as follows:
\begin{equation} \label{MomentEquation}
M_{i,j}(\mathbf{X_c})= \int_{\Omega_c} \frac{(x_p - x_c)^i}{i!} \frac{(y_p - y_c)^j}{j!} \eta(\mathbf{X_p}-\mathbf{X_c}) d\mathbf{X_p}.
\end{equation}
Considering that the field is relatively smooth in $\Omega_c$, the integration can be transformed into a discrete summation over the nodes of the domain. Constant values $V_p$ are associated with each node of the domain. The moments then become:
\begin{equation} \label{DiscreteMomentEquation}
M_{i,j}(\mathbf{X_c})= \sum_{p \in \Omega_c} V_p \frac{(x_p - x_c)^i}{i!} \frac{(y_p - y_c)^j}{j!} \eta(\mathbf{X_p}-\mathbf{X_c}).
\end{equation}
The values $V_p$ associated with the particles $p$ are hard to determine in the general case. Assuming a uniform distribution of the particles over the domain, these values are typically set to unity. Equation (\ref{DiscreteMomentEquation}) then becomes:
\begin{equation} \label{DiscreteMomentEquation_UnitVol}
M_{i,j}(\mathbf{X_c})= \sum_{p \in \Omega_c} \frac{(x_p - x_c)^i}{i!} \frac{(y_p - y_c)^j}{j!} \eta(\mathbf{X_p}-\mathbf{X_c}).
\end{equation}
Using these moments, Equation (\ref{Convolution_AllTerms}) and Equation (\ref{Convolution_SecondOrderApprox}), respectively, become:
\begin{equation} \label{ConvWithMoment_AllTerms}
\begin{aligned}
\sum_{p \in \Omega_c} {f_h(\mathbf{X_p})} \eta(\mathbf{X_p}-\mathbf{X_c}) = &\sum_{i=0}^{+\infty} \sum_{j=0}^{+\infty} {\frac{\partial^{i+j} f(\mathbf{X_c})}{\partial x^i \partial y^j} M_{i,j}(\mathbf{X_c})},
\end{aligned}
\end{equation}
\begin{equation} \label{ConvWithMoment_SecondOrderApprox}
\begin{aligned}
\sum_{p \in \Omega_c} {f_h(\mathbf{X_p})} \eta(\mathbf{X_p}-\mathbf{X_c}) = &f(\mathbf{X_c}) M_{0,0}(\mathbf{X_c}) + {\frac{\partial f(\mathbf{X_c})}{\partial x}} M_{1,0}(\mathbf{X_c}) + {\frac{\partial f(\mathbf{X_c})}{\partial y}} M_{0,1}(\mathbf{X_c}) \\
&+ {\frac{\partial^2 f(\mathbf{X_c})}{\partial x^2}} M_{2,0}(\mathbf{X_c}) + {\frac{\partial^2 f(\mathbf{X_c})}{\partial x \partial y}} M_{1,1}(\mathbf{X_c}) + {\frac{\partial^2 f(\mathbf{X_c})}{\partial y^2}} M_{0,2}(\mathbf{X_c}).
\end{aligned}
\end{equation}
The selection of an appropriate function $\eta$ allows approximating the desired derivative $D^{k,l}f(\mathbf{X_c})=\frac{\partial^{k+l} f (\mathbf{X_c})}{\partial x^k \partial y^l}$ by setting all the moments to zero except the one multiplying $D^{k,l}f(\mathbf{X_c})$, which is set to unity.
Equation (\ref{ConvWithMoment_AllTerms}) can then be written:
\begin{equation}\label{DC PSE Operator}
\left \{
\begin{aligned}
&D^{k,l}f(\mathbf{X_c}) = \sum_{p \in \Omega_c} {f_h(\mathbf{X_{p}})} \eta(\mathbf{X_{p}}-\mathbf{X_c})\\
&\begin{array}{ll}
\text{with} &M_{k,l}(\mathbf{X_c})=1\\
&M_{i,j}(\mathbf{X_c})=0 \text{ \quad } \text{if} \ (i,j) \ne (k,l) .\\
\end{array}\\
\end{aligned}
\right.
\end{equation}
\subsubsection{The Convolution Function}
In order to satisfy the moment conditions (\ref{DC PSE Operator}) at each node of the domain, the convolution function $\eta$ needs to be chosen carefully. Schrader et al. \cite{Schrader2012} performed a study of a wide range of functions. In general, the convolution function is the product of two functions: the correction function $K$ and the weight function $w$:
\begin{equation} \label{KernelForm}
\eta(\mathbf{X})=K(\mathbf{X}) w(\mathbf{X}).
\end{equation}
The correction function is typically derived from a polynomial or an exponential basis. For a two dimensional problem, the polynomial basis $\mathbf{P}=[1, x, y, x^2, xy, y^2]^T$ can be selected. The weight function returns a scalar based on the distance to a defined origin. It typically has a compact support: the weights vanish outside a defined perimeter. For isotropic weight functions (functions with the same behavior in every direction), the support of a collocation node $X_c$ is limited by a radius $r_c$. The normalized distance to the collocation node is written $s$. For a node $X_p$ within the support of $X_c$, $s$ is written $s_p$ and is equal to:
\begin{equation} \label{NormalizedDistance}
s_p=\frac{\lVert \mathbf{X_p}-\mathbf{X_c} \rVert_2}{r_c}.
\end{equation}
The shape of the weight function has a significant impact on the solution as it balances the contribution of each node of the support in the field derivative approximation. Three types of weight functions can be considered in particular. These are:
\begin{align}
\intertext{The exponential weight functions:}
\begin{split}\label{ExpWeight}
w(s)=
\begin{cases}
e^{-s^{\alpha}\epsilon^{-2}} & \text{ \quad if } s \leq 1 \\
0 & \text{ \quad if } s > 1, \\
\end{cases}\\
\end{split}\\
\intertext{where $\alpha$ is an exponent and $\epsilon$ is a shape parameter; the 3$^\text{rd}$ order spline weight functions:}
\begin{split}\label{Spline3Weight}
w(s) =
\begin{cases}
\frac{2}{3} - 4 s^2 + 4 s^3 & \text{ \quad if } s \leq 0.5 \\
\frac{4}{3} - 4 s + 4 s^2 - \frac{4}{3} s^3 & \text{ \quad if } 0.5 < s \leq 1 \\
0 & \text{ \quad if } s > 1,
\end{cases}\\
\end{split}\\
\intertext{4$^\text{th}$ order spline weight functions:}
\begin{split}\label{Spline4Weight}
w(s)=
\begin{cases}
1 - 6 s^2 + 8 s^3 - 3 s^4 & \text{ \quad if } s \leq 1 \\
0 & \text{ \quad if } s > 1. \\
\end{cases}\\
\end{split}
\end{align}
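As a quick sanity check of the spline weights (\ref{Spline3Weight}) and (\ref{Spline4Weight}), the snippet below verifies that the two pieces of the 3$^\text{rd}$ order spline join continuously at $s=0.5$ and that both splines vanish at the support boundary $s=1$:

```python
import numpy as np

def spline3(s):
    """3rd order spline weight, Eq. (Spline3Weight)."""
    s = np.asarray(s, dtype=float)
    return np.where(s <= 0.5, 2/3 - 4*s**2 + 4*s**3,
                    np.where(s <= 1.0, 4/3 - 4*s + 4*s**2 - 4/3*s**3, 0.0))

def spline4(s):
    """4th order spline weight, Eq. (Spline4Weight)."""
    s = np.asarray(s, dtype=float)
    return np.where(s <= 1.0, 1 - 6*s**2 + 8*s**3 - 3*s**4, 0.0)

# the two pieces of the 3rd order spline join at s = 0.5 and the
# supports are compact: both weights decay to zero at s = 1
jump = spline3(0.5 - 1e-7) - spline3(0.5 + 1e-7)   # ~0: continuous junction
edge3, edge4 = spline3(1.0), spline4(1.0)          # both ~0 at the boundary
```

Both splines are in fact $C^1$ at their junction and at $s=1$, which avoids spurious jumps in the derivative approximation as nodes enter or leave the support.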
A typical convolution function, composed of a polynomial correction function $\mathbf{P}$ and a vector of coefficients $\mathbf{a}$, is written as:
\begin{equation} \label{SimpleKernel}
\eta(\mathbf{X_p}-\mathbf{X_c}) ={\mathbf{P(X_p-X_c)}}^T \mathbf{a} \ w(s_p).
\end{equation}
In order for this convolution function to satisfy the moment conditions, the order of the polynomial basis must be at least the order of the approximated derivative.
\subsubsection{Correction Function Calculation}
The coefficient vector $\mathbf{a}$ is the solution of a linear system $\mathbf{A_M(X_c)a }=\mathbf{B_M}$, where the left side of the equation corresponds to the moments calculation for the unknown convolution function $\eta$. The vector $\mathbf{B_M}$ corresponds to the moment condition which needs to be satisfied to obtain the desired derivative approximation.
For instance, in order to approximate the derivative $D^{2,0}f(\mathbf{X_c})$, the system is:
\begin{equation} \label{DCPSE_LinerSyst}
\left \{
\begin{array}{ll}
\begin{aligned}
&M_{0,0}(\mathbf{X_c})=0 &\Leftrightarrow \quad &\sum_{p \in \Omega_c} {\mathbf{P(X_p-X_c)}}^T \mathbf{a} w(s_p) = 0 \\
&M_{1,0}(\mathbf{X_c})=0 &\Leftrightarrow \quad &\sum_{p \in \Omega_c} (x_p - x_c) {\mathbf{P(X_p-X_c)}}^T \mathbf{a} w(s_p) = 0 \\
&M_{0,1}(\mathbf{X_c})=0 &\Leftrightarrow \quad &\sum_{p \in \Omega_c} (y_p - y_c) {\mathbf{P(X_p-X_c)}}^T \mathbf{a} w(s_p) = 0 \\
&M_{2,0}(\mathbf{X_c})=1 &\Leftrightarrow \quad &\sum_{p \in \Omega_c} \frac{(x_p - x_c)^2}{2!} {\mathbf{P(X_p-X_c)}}^T \mathbf{a} w(s_p) = 1 \\
&M_{1,1}(\mathbf{X_c})=0 &\Leftrightarrow \quad &\sum_{p \in \Omega_c} (x_p - x_c)(y_p - y_c) {\mathbf{P(X_p-X_c)}}^T \mathbf{a} w(s_p) = 0 \\
&M_{0,2}(\mathbf{X_c})=0 &\Leftrightarrow \quad &\sum_{p \in \Omega_c} \frac{(y_p - y_c)^2}{2!} {\mathbf{P(X_p-X_c)}}^T \mathbf{a} w(s_p) = 0. \\
\end{aligned}
\end{array}
\right.
\end{equation}
Considering the vector $\mathbf{Q(X_c,X_p)}=[1,(x_p - x_c),(y_p - y_c),\frac{(x_p - x_c)^2}{2!},(x_p - x_c)(y_p - y_c),\frac{(y_p - y_c)^2}{2!}]^T$, the correction function basis $\mathbf{P}$ and the weight function $w$, the coefficients of the matrix $\mathbf{A_M} \in \rm I\!R^{6 \times 6}$ can be written:
\begin{equation}\label{DCPSE_ACoefficients}
A_{M(i,j)}(\mathbf{X_c})=\sum_{p \in \Omega_c}Q_i(\mathbf{X_c},\mathbf{X_p})P_j(\mathbf{X_p}-\mathbf{X_c})w(s_p).
\end{equation}
Having solved the system of equations ($\mathbf{a}=\mathbf{A_M^{-1}(X_c) B_M}$), the derivative $D^{2,0}f(\mathbf{X_c})$ can be approximated with Equation (\ref{DC PSE Operator}) as a function of $f_h(\mathbf{X_p}), \: p \in \Omega_c$. From a computational point of view, it should be noted that the inversion of the matrix $\mathbf{A_M(X_c)}$ only needs to be performed once per collocation node $X_c$. If the approximation of another derivative is required for the solution of the partial differential equation, only the moment condition set by the vector $\mathbf{B_M}$ needs to be updated.
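The correction computation can be sketched as follows for $D^{2,0}f(\mathbf{X_c})$. The compact weight function and the random support cloud are illustrative assumptions; since the test field is quadratic, the second order moment conditions make the result exact up to round-off:

```python
import numpy as np

def dcpse_kernel(Xc, nodes, rc, Bm):
    """Discrete DC PSE kernel eta at the support nodes of Xc. P is the
    polynomial basis, Q the Taylor monomials of Eq. (DCPSE_ACoefficients),
    and Bm the moment conditions selecting the target derivative. The
    compact weight below is an illustrative choice."""
    dx, dy = nodes[:, 0] - Xc[0], nodes[:, 1] - Xc[1]
    s = np.hypot(dx, dy) / rc
    w = np.maximum(1.0 - s**2, 0.0)**2   # assumed compact weight function
    P = np.column_stack([np.ones_like(dx), dx, dy, dx**2, dx*dy, dy**2])
    Q = np.column_stack([np.ones_like(dx), dx, dy, dx**2/2, dx*dy, dy**2/2])
    Am = Q.T @ (w[:, None] * P)          # A_M(Xc), Eq. (DCPSE_ACoefficients)
    a = np.linalg.solve(Am, Bm)          # correction coefficients
    return (P @ a) * w                   # eta(Xp - Xc), Eq. (SimpleKernel)

# approximate D^{2,0} f on an irregular node cloud
f = lambda x, y: 2 + x + 3*x**2 + x*y + y**2     # d2f/dx2 = 6 everywhere
rng = np.random.default_rng(0)
nodes = rng.uniform(-0.5, 0.5, size=(40, 2))     # support cloud around Xc
Xc = np.array([0.0, 0.0])
Bm = np.array([0., 0., 0., 1., 0., 0.])          # M20 = 1, all others 0
eta = dcpse_kernel(Xc, nodes, rc=0.8, Bm=Bm)
d2f = np.sum(f(nodes[:, 0], nodes[:, 1]) * eta)  # approx 6
```

Switching to the DCPSE1 or DCPSE2 variants of Section \ref{DCPEVariationsSec} only changes the first entry of $\mathbf{B_M}$ or drops the first column of the basis.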
\subsubsection{Identified Variations of DC PSE Method}\label{DCPEVariationsSec}
It can be observed from the DC PSE method presented above that setting the moment $M_{0,0}(\mathbf{X_c})$ to zero is not necessary. The value $f(\mathbf{X_c})$ is determined in the global problem, and does not need to be canceled by a null moment $M_{0,0}(\mathbf{X_c})$. Based on this observation, three approaches can be considered, labeled DCPSE0, DCPSE1 and DCPSE2, respectively.
\begin{itemize}
\item \underline{DCPSE0:} \quad $M_{0,0}(\mathbf{X_c})$ is set to zero (case presented above);
\item \underline{DCPSE1:} \quad $M_{0,0}(\mathbf{X_c})$ is set to a constant value (e.g. 1), thereby reducing the sparsity of the global matrix;
\item \underline{DCPSE2:} \quad The $M_{0,0}(\mathbf{X_c})$ moment is not introduced in the polynomial coefficient calculation. The dimension of the polynomial basis is then reduced by one. The matrix $\mathbf{A_M}$ belongs to $\rm I\!R^{5 \times 5}$.
\end{itemize}
All of these methods are compared in Section \ref{DCPSEVariationComp}.
\subsection{Radial Basis Function Finite Difference Method}
\subsubsection{Principle}
In \cite{Kansa1990a, Kansa1990b}, Kansa introduced the idea of using Radial Basis Functions for solving differential equations over a domain. Contributions to the RBF-FD method were made later by Driscoll and Fornberg \cite{Driscoll2002}, Shu \cite{Shu2003}, Fornberg \cite{Fornberg2011,Fornberg2013} and Davydov \cite{Davydov2011,Davydov2011a}. The principle of the RBF-FD method is to determine an approximation of the differential operator at a collocation node $\mathbf{X_c}$ based on a linear combination of the field values at the nodes $\mathbf{X_p}$ nearby. Considering $m$ nodes in the support of $\mathbf{X_c}$, the aim is to determine a set of weights $\lambda$, written in a vector form as $\mathbf{W(X_c)}=[\lambda_{p1}\ \dots \ \lambda_{pm}]$, so that:
\begin{equation} \label{DiffApprox_RBF-FD}
D^{n_x,n_y}f(\mathbf{X_c}) = \mathbf{W(X_c)} \begin{bmatrix}
f(\mathbf{X_{p1}}) \\
\vdots \\
f(\mathbf{X_{pm}})\\
\end{bmatrix}.
\end{equation}
The RBF-FD method assumes that Equation (\ref{DiffApprox_RBF-FD}) is exact for all radial basis functions $\varphi$ centered at each node of the support.
\subsubsection{Radial Basis Functions}
Various classes of RBFs can be used for the purpose of the RBF-FD method. Depending on the type of RBF, one or two shape parameters need to be selected. The selection of the shape parameter(s) is critical to the accuracy of the solution as it balances the contribution of the neighboring nodes in the derivative approximation. The main classes of RBFs are presented in Table \ref{RBF_Types} below.
\begin{table}[h]
\centering
\caption{Radial Basis Functions Types \cite{Kee2008}}
\label{RBF_Types}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|l|l|c|}
\hline
\multicolumn{1}{|c|}{\textbf{Type}} & \multicolumn{1}{c|}{\textbf{Expression}} & \textbf{Shape Parameters} \\
\hline
Multiquadric (MQ) & $\varphi(s_p)=({s_p}^2+c^2)^q$ & c, q \\
Gaussian (EXP) & $\varphi(s_p)=e^{-c {s_p}^2}$ & c \\
Thin plate spline (TPS) & $\varphi(s_p)={s_p}^\eta$ & $\eta$ \\
Logarithmic & $\varphi(s_p)={s_p}^\eta \log(s_p)$ & $\eta$ \\
3$^\text{rd}$ order spline & \multicolumn{1}{c|}{Equation (\ref{Spline3Weight})} & - \\
4$^\text{th}$ order spline &\multicolumn{1}{c|}{Equation (\ref{Spline4Weight})} & - \\
\hline
\end{tabular}
\end{table}
\FloatBarrier
In order for the overall system matrix to be sparse, the radial basis functions are chosen with a compact support.
\subsubsection{Obtaining the Differentiation Matrix}
Replacing the unknown field $f$ by the selected radial basis function $\varphi_{X_c}$ centered in $X_c$, Equation (\ref{DiffApprox_RBF-FD}) becomes:
\begin{equation} \label{DiffRBFApprox_RBF-FD}
D^{n_x,n_y}\varphi_{X_c}(\mathbf{X_c}) = \mathbf{W(X_c)} \begin{bmatrix}
\varphi_{X_c}(\mathbf{X_{p1}}) \\
\vdots \\
\varphi_{X_c}(\mathbf{X_{pm}})\\
\end{bmatrix}.
\end{equation}
The weights are determined so that the approximation is exact for every radial basis function centered at each node of the support. A linear system of equations is thus obtained:
\begin{equation} \label{0OrderApprox_RBF-FD}
\begin{bmatrix}
\varphi_{X_{p1}}(\mathbf{X_{p1}}) & \varphi_{X_{p1}}(\mathbf{X_{p2}}) & \dots & \varphi_{X_{p1}}(\mathbf{X_{pm}}) \\
\varphi_{X_{p2}}(\mathbf{X_{p1}}) & \varphi_{X_{p2}}(\mathbf{X_{p2}}) & \dots & \varphi_{X_{p2}}(\mathbf{X_{pm}}) \\
\vdots & \vdots & & \vdots \\
\varphi_{X_{pm}}(\mathbf{X_{p1}}) & \varphi_{X_{pm}}(\mathbf{X_{p2}}) & \dots & \varphi_{X_{pm}}(\mathbf{X_{pm}}) \\
\end{bmatrix} \begin{bmatrix}
\lambda_{p1} \\ \lambda_{p2} \\ \vdots \\ \lambda_{pm}\\
\end{bmatrix} = \begin{bmatrix}
D\varphi_{X_{p1}}(\mathbf{X_c}) \\ D\varphi_{X_{p2}}(\mathbf{X_c}) \\ \vdots \\ D\varphi_{X_{pm}}(\mathbf{X_c})\\
\end{bmatrix}.
\end{equation}
Additional constraints can be added to the system in order for the radial basis functions to reproduce exactly polynomials of at least the derivative order. This also ensures a certain regularity of the solution. For instance, for the case of a 2D problem and for a first order derivation in the $y$ direction, a first order polynomial basis can be added to the set of RBFs. Thereby, the system presented in Equation (\ref{0OrderApprox_RBF-FD}) becomes:
\begin{equation} \label{1OrderApprox_RBF-FD}
\begin{bmatrix}\begin{array}{ccc|ccc}
\varphi_{X_{p1}}(\mathbf{X_{p1}}) & \dots & \varphi_{X_{p1}}(\mathbf{X_{pm}}) & 1 & x_{p1} & y_{p1} \\
\varphi_{X_{p2}}(\mathbf{X_{p1}}) & \dots & \varphi_{X_{p2}}(\mathbf{X_{pm}}) & 1 & x_{p2} & y_{p2} \\
\vdots & & \vdots & \vdots & \vdots & \vdots \\
\varphi_{X_{pm}}(\mathbf{X_{p1}}) & \dots & \varphi_{X_{pm}}(\mathbf{X_{pm}}) & 1 & x_{pm} & y_{pm} \\ \hline
1 & \dots & 1 & 0 & 0 & 0 \\
x_{p1} & \dots & x_{pm} & 0 & 0 & 0 \\
y_{p1} & \dots & y_{pm} & 0 & 0 & 0 \\
\end{array}
\end{bmatrix}
\begin{bmatrix}\begin{array}{c}
\lambda_{p1} \\ \lambda_{p2} \\ \vdots \\ \lambda_{pm}\\ \hline
\lambda_{m+1} \\ \lambda_{m+2}\\ \lambda_{m+3}\\
\end{array}\end{bmatrix} =
\begin{bmatrix}
\begin{array}{c}
D^{0,1}\varphi_{X_{p1}}(\mathbf{X_c}) \\ D^{0,1}\varphi_{X_{p2}}(\mathbf{X_c}) \\ \vdots \\ D^{0,1}\varphi_{X_{pm}}(\mathbf{X_c})\\ \hline
D^{0,1}1=0\\ D^{0,1}x=0\\ D^{0,1}y=1\\
\end{array}\end{bmatrix}.
\end{equation}
Once the weights $\mathbf{W(X_c)}$ have been determined by solving Equation (\ref{0OrderApprox_RBF-FD}) or Equation (\ref{1OrderApprox_RBF-FD}), the derivative $D^{n_x,n_y}f(\mathbf{X_c})$ can be approximated.
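The construction of the augmented system (\ref{1OrderApprox_RBF-FD}) can be sketched as below for the derivative $D^{0,1}$. Here the Gaussian RBF is evaluated in plain rather than normalized distance, and the shape parameter and node layout are illustrative assumptions; the polynomial constraints guarantee that the resulting stencil is exact for linear fields:

```python
import numpy as np

def rbf_fd_weights_dy(Xc, Xp, eps=10.0):
    """RBF-FD weights for d/dy at Xc using Gaussian RBFs with linear
    polynomial augmentation, Eq. (1OrderApprox_RBF-FD). Plain (not
    normalized) distances and the shape parameter are assumptions."""
    m = len(Xp)
    d2 = ((Xp[:, None, :] - Xp[None, :, :])**2).sum(-1)
    A = np.exp(-eps**2 * d2)                      # phi_{Xpi}(Xpj)
    P = np.column_stack([np.ones(m), Xp])         # augmentation basis [1, x, y]
    M = np.block([[A, P], [P.T, np.zeros((3, 3))]])
    r2 = ((Xp - Xc)**2).sum(-1)
    dphi = -2*eps**2*(Xc[1] - Xp[:, 1])*np.exp(-eps**2*r2)  # d/dy phi_i at Xc
    rhs = np.concatenate([dphi, [0.0, 0.0, 1.0]])  # D(1)=0, D(x)=0, D(y)=1
    return np.linalg.solve(M, rhs)[:m]             # weights lambda_p1..pm

Xc = np.array([0.5, 0.5])
gx, gy = np.meshgrid(np.linspace(-0.15, 0.15, 4), np.linspace(-0.1, 0.1, 3))
Xp = Xc + np.column_stack([gx.ravel(), gy.ravel()])  # 12 support nodes
lam = rbf_fd_weights_dy(Xc, Xp)
# the polynomial constraints make the stencil exact for linear fields
fy = lam @ (2 + 3*Xp[:, 0] + 5*Xp[:, 1])             # d/dy of f, here 5
```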
\subsection{Moving Least Square Approximation}
\subsubsection{Field Approximation}
The MLS method was introduced by Lancaster and Salkauskas in 1981 \cite{Lancaster1981}; interpolation for the lowest order was introduced by Shepard in 1968 \cite{Shepard1968}. The method has been widely used in the context of the Element-Free Galerkin (EFG) method \cite{Belytschko1994} and in the framework of collocation for the Finite Point Method \cite{Onate1996}. The MLS method consists in approximating the unknown field using a function basis; the approximated field can then be differentiated and a partial differential equation solved. The unknown field can be approximated with various types of functions depending on the considered application. Polynomial functions are typically used for linear elasticity problems.
Considering a polynomial basis $P(\mathbf{X})$ and a coefficient vector $\mathbf{a(X_c)}$, an approximation of the field $f$ around a collocation node $\mathbf{X_c}$ can be written as follows:
\begin{equation} \label{FieldApprox_MLS}
f_h(\mathbf{X},\mathbf{X_c}) = \mathbf{P(X)^T a(X_c)}.
\end{equation}
The coefficients $\mathbf{a(X_c)}$ are determined by minimizing the error of the approximated field over a set of $m$ nodes around the collocation node. The error is weighted by a function $w$ centered in $X_c$. The minimization problem can be expressed by a functional $B(\mathbf{X_c})$:
\begin{equation} \label{FunctionalB_MLS}
B(\mathbf{X_c})= \sum_{i=1}^m {w(\mathbf{X_c} - \mathbf{X_{pi}}) \Big[\mathbf{P(X_{pi})}^T \mathbf{a(X_c)} - f(\mathbf{X_{pi}})} \Big]^2.
\end{equation}
The resulting error represented by the functional $B(\mathbf{X_c})$ is minimal when:
\begin{equation} \label{DerivativeFunctionalB_MLS}
\frac{\partial B(\mathbf{X_c})}{\partial \mathbf{a(X_c)}}=0.
\end{equation}
This problem is a linear system of the form:
\begin{equation} \label{DerivativeSystem_MLS}
\mathbf{A(X_c) a(X_c) = E(X_c) F(X_c)}.
\end{equation}
For a polynomial vector of size $n$, the matrices $\mathbf{A(X_c)}$, $\mathbf{E(X_c)}$ and $\mathbf{F(X_c)}$ correspond to:
\begin{align}
\begin{split}\label{MLS_MatAx}
\mathbf{A(X_c)}=\begin{bmatrix}
m_{11} & m_{12} & \dots & m_{1n} \\
m_{21} & m_{22} & \dots & m_{2n} \\
\vdots & & & \vdots \\
m_{n1} & m_{n2} & \dots & m_{nn} \\
\end{bmatrix} \ \in \rm I\!R^{n \times n},
\end{split}\\[15pt]
\begin{split}\label{MLS_MatBx}
\mathbf{E(X_c)}=\begin{bmatrix}
m_{01,1} & m_{01,2} & \dots & m_{01,m} \\
m_{02,1} & m_{02,2} & \dots & m_{02,m} \\
\vdots & & & \vdots \\
m_{0n,1} & m_{0n,2} & \dots & m_{0n,m} \\
\end{bmatrix} \ \in \rm I\!R^{n \times m},
\end{split}\\[15pt]
\begin{split}\label{MLS_MatFx}
\mathbf{F(X_c)}=\begin{bmatrix}
f\mathbf{(X_{p1}}) & f(\mathbf{X_{p2}}) & \dots & f(\mathbf{X_{pm}}) \\
\end{bmatrix}^T,
\end{split}\\
\intertext{where}
\begin{split}\label{MLS_Moments0}
m_{ij,k}= w(\mathbf{X_c} - \mathbf{X_{pk}}) P_i^{X_c}(\mathbf{X_{pk}})P_j^{X_c}(\mathbf{X_{pk}}),\\
\end{split} \\[15pt]
\begin{split}\label{MLS_Moments}
m_{ij}= \sum_{k=1}^m {m_{ij,k}}. \\
\end{split} \\[15pt]
\intertext{For a two dimensional second order case with a polynomial basis, $\mathbf{P}$ is chosen as:}
\begin{split}\label{MLS_P-Values}
\mathbf{P^{X_c}(X_{pk})}=\begin{bmatrix}
1 \\
(x_{pk} - x_c) \\ (y_{pk} - y_c) \\ (x_{pk} - x_c)^2 \\
(x_{pk} - x_c)(y_{pk} - y_c) \\ (y_{pk} - y_c)^2 \\
\end{bmatrix}. \\
\end{split}
\end{align}
As for the RBF-FD method, the dimension of the function basis $\mathbf{P}$ can be augmented in order to improve the regularity of the solution.
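A minimal sketch of the MLS fit with the shifted basis (\ref{MLS_P-Values}) is given below. The test field and support cloud are illustrative assumptions; since the field is quadratic and thus contained in the basis, it is reproduced exactly, and with the centered basis the first coefficients directly give $f_h(\mathbf{X_c})$ and its first derivatives:

```python
import numpy as np

def mls_coefficients(Xc, Xp, fp, rc):
    """MLS coefficients a(Xc) for the shifted 2nd order basis of
    Eq. (MLS_P-Values), using the 4th order spline weight."""
    dx, dy = Xp[:, 0] - Xc[0], Xp[:, 1] - Xc[1]
    s = np.hypot(dx, dy) / rc
    w = np.where(s <= 1, 1 - 6*s**2 + 8*s**3 - 3*s**4, 0.0)
    P = np.column_stack([np.ones_like(dx), dx, dy, dx**2, dx*dy, dy**2])
    A = P.T @ (w[:, None] * P)          # A(Xc), Eq. (MLS_MatAx)
    b = P.T @ (w * fp)                  # E(Xc) F(Xc)
    return np.linalg.solve(A, b)

# quadratic test field (illustrative): contained in the basis, hence
# reproduced exactly by the weighted least squares fit
f = lambda x, y: 1 + x - 2*y + 0.5*x**2 + x*y - y**2
Xc = np.array([0.25, 0.4])
ang = np.linspace(0, 2*np.pi, 10, endpoint=False)
rad = 0.1 + 0.05*np.cos(3*ang)                      # irregular support cloud
Xp = Xc + np.column_stack([rad*np.cos(ang), rad*np.sin(ang)])
a = mls_coefficients(Xc, Xp, f(Xp[:, 0], Xp[:, 1]), rc=0.25)
# with the centered basis, f_h(Xc) = a[0] and df/dx(Xc) = a[1]
```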
\subsubsection{Boundary Condition Enforcement}
The MLS method does not interpolate the field values, which impacts the enforcement of Dirichlet boundary conditions. Unlike the methods presented in the previous sections, the Dirichlet boundary condition is applied to the approximated field rather than directly to the degrees of freedom solved in the linear system. The shape functions of the approximated field need to be calculated at the Dirichlet boundary nodes and are used to set the boundary condition. This leads to a slightly denser linear system, as the rows of the matrix corresponding to the Dirichlet degrees of freedom are filled with the coefficients that approximate the field at the boundary condition location.
\subsubsection{Interpolating Moving Least Square Method}
The Interpolating Moving Least Square (IMLS) method is a variation of the Moving Least Square method that allows the approximated field to interpolate the solution. The method has been presented in \cite{Lancaster1986}, and analyzed in a number of papers \cite{Ishida1999}, \cite{Maisuradze2003}. Interpolation of the approximated field can be achieved by various means, one of which consists in choosing a near-singular weight function: the weight function assigns to the reference node a very large weight compared to the other nodes of the support. This makes the system (\ref{DerivativeSystem_MLS}) nearly singular but allows interpolation of the field. In this paper, the following weight function is considered:
\begin{equation} \label{IMLSWeight}
w(s)=
\begin{cases}
e^{-s^{2}}(s^n-\epsilon)^{-1} & \text{ \quad if } s \leq 1 \\
0 & \text{ \quad if } s > 1. \\
\end{cases}
\end{equation}
Here the parameters $n$ and $\epsilon$ control the singularity of the function. Based on the analysis of Maisuradze et al. \cite{Maisuradze2003}, the following parameters have been selected: $n=4$ and $\epsilon=10^{-15}$.
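The near-singular behavior of this weight can be checked numerically; the evaluation points below are illustrative:

```python
import numpy as np

def w_imls(s, n=4, eps=1e-15):
    """Near-singular IMLS weight, Eq. (IMLSWeight), with the selected
    parameters n = 4 and eps = 1e-15."""
    s = np.asarray(s, dtype=float)
    return np.where(s <= 1.0, np.exp(-s**2)/(s**n - eps), 0.0)

# at the reference node (s = 0) the weight magnitude is of order 1/eps,
# dwarfing every other support node and driving the fit to interpolation
ratio = abs(float(w_imls(0.0))) / float(w_imls(0.5))
```

In practice this enormous weight ratio is what forces the local fit to pass (numerically) through the reference node value, at the cost of a poorly conditioned moment matrix.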
\section{Problems and Error Norms}\label{ProblemAndNorms}
\subsection{Problems Considered}
\label{RefProblems}
For our comparisons, we have selected two two-dimensional and one three-dimensional linear elastic problems. These are:
\begin{itemize}
\item A cylinder under internal pressure (2D - plane stress model);
\item An L-shape domain in Mode I loading (2D - plane stress model);
\item A sphere under internal pressure (3D).
\end{itemize}
An analytical solution is known for each of these problems. The 2D and 3D problems are, respectively, presented in Figure \ref{2DProblemsConsidered} and Figure \ref{SphereDwg} below. A Cartesian coordinate system has been used. Due to the symmetries of the problems in the Cartesian coordinate system, we have only considered one quarter of the cylinder and one eighth of the sphere.
For all the problems presented in this section, a regular node discretization has been selected.
\pagebreak
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node at (0,0) {\includegraphics[scale=0.45]{CylinderModel.png}};
\node[color=red] at (-0.9,0) [right] {$\text{P}_{\text{int}}$};
\node[color=black] at (0.10,0.60) [right] {$\text{R}_{\text{i}}$};
\node[color=black] at (0.05,-0.4) [right] {$\text{R}_{\text{o}}$};
\node[color=black] at (-2.5,3) [right] {Stress Free Edge};
\node[anchor=east] at (-1.5,2.9) (Start2) {};
\node[anchor=west] at (-1.6,1.5) (HorizontalLine) {};
\draw (Start2) edge[out=-90,in=120,->] (HorizontalLine);
\end{tikzpicture}
\input{LShapedModel.tex}
\caption{Pressurized Cylinder (Left), L-Shape Domain in Mode I Loading (Right)}
\label{2DProblemsConsidered}
\end{figure}
\begin{figure}[H]
\centering
\begin{tikzpicture}
\node at (0,0){\includegraphics[scale=0.5]{SphereModel.png}};
\node[color=red] at (-0.3,0) [right] {$\text{P}_{\text{int}}$};
\node[color=black] at (-3.2,3) [right] {Stress Free Surface};
\node[anchor=east] at (-2.1,2.9) (Start2) {};
\node[anchor=west] at (-1.75,1.65) (HorizontalLine) {};
\draw (Start2) edge[out=-90,in=120,->] (HorizontalLine);
\end{tikzpicture}
\caption{Pressurized Sphere Model}
\label{SphereDwg}
\end{figure}
The stress solution in terms of $\sigma_{11}$, $\sigma_{12}$ and $\sigma_{22}$ for each problem is presented in Figure \ref{SolutionPipe}, Figure \ref{SolutionL-Shaped} and Figure \ref{SolutionSphere}, respectively, for the pressurized cylinder, the L-shape and the pressurized sphere. The equivalent von Mises stress (denoted $\sigma_{VM}$) is also presented.
\begin{figure}[H]
\centering
\begin{tabular}{c}
\text{Stress $\sigma_{11}$} \\
\includegraphics[scale=0.4]{CylinderExactSolution_S11.png}\\
\end{tabular}
\begin{tabular}{c}
\text{Stress $\sigma_{12}$} \\
\includegraphics[scale=0.4]{CylinderExactSolution_S12.png}\\
\end{tabular}
\phantomcaption
\end{figure}
\begin{figure}[H]
\centering
\ContinuedFloat
\begin{tabular}{c}
\text{Stress $\sigma_{22}$}\\
\includegraphics[scale=0.4]{CylinderExactSolution_S22.png} \\
\end{tabular}
\begin{tabular}{c}
\text{Stress $\sigma_{VM}$ } \\
\includegraphics[scale=0.4]{CylinderExactSolution_VMS.png}\\
\end{tabular}
\caption{Pressurized Cylinder - Stress Solution}
\label{SolutionPipe}
\end{figure}
\setcounter{subfigure}{0}
\begin{figure}[H]
\centering
\subfloat[][]{%
\begin{tabular}{c}
\text{Stress $\sigma_{11}$} \\
\includegraphics[scale=0.4]{L-ShapedExactSolution_S11.png}
\end{tabular}
}
\subfloat[][]{%
\begin{tabular}{c}
\text{Stress $\sigma_{12}$} \\
\includegraphics[scale=0.4]{L-ShapedExactSolution_S12.png}\\
\end{tabular}
}
\phantomcaption
\end{figure}
\begin{figure}[H]
\centering
\ContinuedFloat
\subfloat[][]{%
\begin{tabular}{c}
\text{Stress $\sigma_{22}$} \\
\includegraphics[scale=0.4]{L-ShapedExactSolution_S22.png}
\end{tabular}
} \qquad
\subfloat[][]{%
\begin{tabular}{c}
\text{Stress $\sigma_{VM}$} \\
\includegraphics[scale=0.4]{L-ShapedExactSolution_VMS.png}\\
\end{tabular}
}
\caption{L-Shape Plate in Mode I Loading - Stress Solution}
\label{SolutionL-Shaped}
\end{figure}
It should be noted that the stress solution tends to infinity at the interior corner of the L-shape domain. In order to represent the solution in Figure \ref{SolutionL-Shaped}, we have truncated the stress results around the singularity; truncated values are shown in the same color as the selected threshold values.
\setcounter{subfigure}{0}
\begin{figure}[H]
\centering
\subfloat[][]{%
\begin{tabular}{c}
\text{Stress $\sigma_{11}$} \\
\includegraphics[scale=0.4]{SphereExactSolution_S11.png} \\
\end{tabular}
}
\subfloat[][]{%
\begin{tabular}{c}
\text{Stress $\sigma_{12}$} \\
\includegraphics[scale=0.4]{SphereExactSolution_S12.png} \\
\end{tabular}
}
\phantomcaption
\end{figure}
\begin{figure}[H]
\centering
\ContinuedFloat
\subfloat[][]{%
\begin{tabular}{c}
\text{Stress $\sigma_{22}$} \\
\includegraphics[scale=0.4]{SphereExactSolution_S22.png} \\
\end{tabular}
}
\subfloat[][]{%
\begin{tabular}{c}
\text{Stress $\sigma_{VM}$} \\
\includegraphics[scale=0.4]{SphereExactSolution_VMS.png} \\
\end{tabular}
}
\caption{Pressurized Sphere - Stress Solution}
\label{SolutionSphere}
\end{figure}
\subsection{Error Estimation}\label{ErrorNorms}
To assess the influence of the considered parameters on the solution, and to compare the accuracy of the methods, a set of errors on the stress results has been calculated. For this purpose, a properly scaled $L_2$ error norm and an $L_{\infty}$ error norm have been selected. The $L_2$ error norm averages out the error over all the collocation nodes.
Considering a domain $\Omega$ discretized with $n$ collocation nodes, where $\sigma_{ij}^{e}(\mathbf{X_k})$ and $\sigma_{ij}^{h}(\mathbf{X_k})$, respectively, represent the exact and the approximated stress values at a node $\mathbf{X_k}$, the $L_2$ error norm is calculated as follows:
\begin{equation} \label{L_2Norm}
L_{2}(\sigma_{ij})= \frac{\sqrt{\sum_{k=1}^{n}{(\sigma_{ij}^{e}(\mathbf{X_k}) - \sigma_{ij}^{h}(\mathbf{X_k}))^2}}}{n}.
\end{equation}
For the L-shape problem, the singular point has not been included in the $L_2$ error norm as the analytical solution diverges at this point.
The $L_{\infty}$ error norm corresponds to the maximum absolute error observed over the considered domain:
\begin{equation} \label{L_InfinityNorm}
L_{\infty}(\sigma_{ij})= \max_{k \in \Omega} \big(|\sigma_{ij}^{e}(\mathbf{X_k}) - \sigma_{ij}^{h}(\mathbf{X_k}) | \big).
\end{equation}
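The two norms of Equation (\ref{L_2Norm}) and Equation (\ref{L_InfinityNorm}) translate directly into code; a minimal Python sketch (function names are ours):

```python
import numpy as np

def l2_error(exact, approx):
    """Scaled L2 error of Eq. (L_2Norm):
    sqrt(sum_k (sigma_e - sigma_h)**2) / n over the n collocation nodes."""
    e = np.asarray(exact, float) - np.asarray(approx, float)
    return np.sqrt(np.sum(e**2)) / e.size

def linf_error(exact, approx):
    """L_infinity error of Eq. (L_InfinityNorm):
    maximum absolute nodal error over the domain."""
    e = np.asarray(exact, float) - np.asarray(approx, float)
    return np.max(np.abs(e))
```

For the L-shape problem the singular node would simply be excluded from the arrays passed to `l2_error`, as described above.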
\section{Parametric Study}\label{ParametricAnalysis}
\subsection{General}
In order to better understand the GFD and the DC PSE methods, and to assess the impact of the various parameters on the solution, we performed a parametric study. The purpose of this study is to select a single set of parameters that can be applied to any problem without knowing a priori the solution type. The following items have been considered:
\begin{itemize}
\item The selected weight function;
\item The selected correction function basis for the DC PSE method;
\item The number of support nodes.
\end{itemize}
The 2D cylinder, the 2D L-shape and the 3D sphere models have been considered for this study. Regular distributions of 5,372 and 13,735 nodes have been selected for the 2D cylinder and L-shape problems, respectively. A regular distribution of 83,174 nodes has been selected for the 3D problem. The DCPSE1 variation of the DC PSE method has been selected for the purpose of this sensitivity study. The sensitivity analyses for the weight function and the DC PSE correction function have been performed for the 2D models only. The impact of the number of support nodes on the error has been assessed for both the 2D and 3D problems. Due to the symmetries of the models, results are only presented in terms of error on the $\sigma_{11}$ and $\sigma_{12}$ components of the stress tensor.
\subsection{Weight Function Sensitivity}
\paragraph{GFD}\
Various functions can be considered for the weight function introduced in Equation (\ref{FunctionalB_AllTerms_GFD}) and Equation (\ref{Moments_GFD}). The $3^\text{rd}$ and $4^\text{th}$ order splines are the preferred types (see Equation (\ref{Spline3Weight}) and Equation (\ref{Spline4Weight})). In order to vary the shape of the weight functions, we have composed the splines with the following power function
\begin{equation} \label{GFDWeightForm}
W(s)=(w(s))^{\gamma},
\end{equation}
where $w$ is the spline function and $W$ is the modified weight function. We have compared the results obtained for power parameters $\gamma$ between 0.4 and 1.2.
We have also considered a linear weight function for comparison purposes. The equation of this function is given by
\begin{equation} \label{GFDLinearWeight}
w(s)=
\begin{cases}
1-s & \text{ \quad if } s \leq 1 \\
0 & \text{ \quad if } s > 1. \\
\end{cases}\\
\end{equation}
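A sketch of the modified GFD weight of Equation (\ref{GFDWeightForm}) is given below in Python. The quartic spline form is an assumption (the paper's exact spline expressions are given in Equation (\ref{Spline3Weight}) and Equation (\ref{Spline4Weight}), not reproduced here); the standard form $w(s)=1-6s^2+8s^3-3s^4$ common in meshless methods is used for illustration, and the function names are ours:

```python
import numpy as np

def spline4(s):
    """Assumed standard quartic spline weight with unit support:
    w(s) = 1 - 6 s^2 + 8 s^3 - 3 s^4 for s <= 1, zero outside."""
    s = np.asarray(s, float)
    return np.where(s <= 1.0, 1 - 6*s**2 + 8*s**3 - 3*s**4, 0.0)

def linear(s):
    """Linear weight of Eq. (GFDLinearWeight): w(s) = 1 - s on [0, 1]."""
    s = np.asarray(s, float)
    return np.where(s <= 1.0, 1.0 - s, 0.0)

def gfd_weight(s, base=spline4, gamma=0.75):
    """Modified GFD weight of Eq. (GFDWeightForm): W(s) = w(s)**gamma.
    gamma = 0.75 with the 4th order spline is the setting retained later."""
    return base(s) ** gamma
```

Raising the spline to a power $\gamma<1$ flattens the weight near the reference node while keeping the compact support unchanged.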
The shapes of the considered functions are presented in Figure \ref{WeightFunctionsGFD}.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm, ymin=0,ymax=1.4,ytick={0,0.2,0.4,0.6,0.8,1,1.2},xmin=0,xmax=1.3, legend entries={($\text{3}^{\text{rd}}$ Order Spline)$^{0.4}$,($\text{3}^{\text{rd}}$ Order Spline)$^{0.6}$,($\text{3}^{\text{rd}}$ Order Spline)$^{0.8}$,($\text{3}^{\text{rd}}$ Order Spline)$^{1.0}$,($\text{3}^{\text{rd}}$ Order Spline)$^{1.2}$,Linear Function},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Normalized Distance to Ref. Node,ylabel=Weight Value]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=DIST-NORM, y=S3-0.4
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=DIST-NORM, y=S3-0.6
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=DIST-NORM, y=S3-0.8
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=DIST-NORM, y=S3-1.0
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=DIST-NORM, y=S3-1.2
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[black,mark=pentagon*,mark options={fill=black}] table [x=DIST-NORM, y=LINEAR-FUNC
, col sep=comma] {WeightFunctionsGFD.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm, ymin=0,ymax=1.4,ytick={0,0.2,0.4,0.6,0.8,1,1.2},xmin=0,xmax=1.3, legend entries={($\text{4}^{\text{th}}$ Order Spline)$^{0.4}$,($\text{4}^{\text{th}}$ Order Spline)$^{0.6}$,($\text{4}^{\text{th}}$ Order Spline)$^{0.8}$,($\text{4}^{\text{th}}$ Order Spline)$^{1.0}$,($\text{4}^{\text{th}}$ Order Spline)$^{1.2}$},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Normalized Distance to Ref. Node,ylabel=Weight Value]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=DIST-NORM, y=S4-0.4
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=DIST-NORM, y=S4-0.6
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=DIST-NORM, y=S4-0.8
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=DIST-NORM, y=S4-1.0
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=DIST-NORM, y=S4-1.2
, col sep=comma] {WeightFunctionsGFD.csv};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{GFD Weight Functions: 3$^\text{rd}$ Order Spline (Left) and 4$^\text{th}$ Order Spline (Right) for power parameters ranging from 0.4 to 1.2. The linear weight function is also given for reference purposes.}
\label{WeightFunctionsGFD}
\end{figure}
The $L_2$ and $L_{\infty}$ errors obtained with the considered weight functions are presented in Figure \ref{WeightFunctionsResultsGFD} and Figure \ref{WeightFunctionsResultsGFD_LShaped} for the 2D cylinder and for the 2D L-shape, respectively.
The error for the linear weight function is also presented in these figures for comparison purposes, even though a power parameter is not used in this function.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.000001,ymax=0.01,xmin=0.15,xmax=1.25, legend entries={$\text{3}^{\text{rd}}$ Order Spline,$\text{4}^{\text{th}}$ Order Spline,Linear Function},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Power Parameter,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=S3-L2-REL-S11
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=S4-L2-REL-S11
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent-Linear, y=LIN-L2-REL-S11
, col sep=comma] {WeightFunctionsGFD.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.0001,ymax=10,xmin=0.15,xmax=1.25, legend entries={$\text{3}^{\text{rd}}$ Order Spline,$\text{4}^{\text{th}}$ Order Spline,Linear Function},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Power Parameter,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=S3-L-INF-S11
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=S4-L-INF-S11
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent-Linear, y=LIN-L-INF-S11
, col sep=comma] {WeightFunctionsGFD.csv};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.000001,ymax=0.01,xmin=0.15,xmax=1.25, legend entries={$\text{3}^{\text{rd}}$ Order Spline,$\text{4}^{\text{th}}$ Order Spline,Linear Function},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Power Parameter,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=S3-L2-REL-S12
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=S4-L2-REL-S12
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent-Linear, y=LIN-L2-REL-S12
, col sep=comma] {WeightFunctionsGFD.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.0001,ymax=10,xmin=0.15,xmax=1.25, legend entries={$\text{3}^{\text{rd}}$ Order Spline,$\text{4}^{\text{th}}$ Order Spline,Linear Function},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Power Parameter,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=S3-L-INF-S12
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=S4-L-INF-S12
, col sep=comma] {WeightFunctionsGFD.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent-Linear, y=LIN-L-INF-S12
, col sep=comma] {WeightFunctionsGFD.csv};
\end{axis}
\end{tikzpicture}\\
\end{tabular}
\caption{GFD Weight Sensitivity - 2D Cylinder. $L_2$ (Left) and $L_{\infty}$ (Right) errors. Comparison for $\text{3}^{\text{rd}}$ and $\text{4}^{\text{th}}$ order spline weight functions composed with a power function of various exponents. The linear weight function is also given for reference purposes. The 4$^\text{th}$ order spline consistently leads to a low error.}
\label{WeightFunctionsResultsGFD}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.0001,ymax=0.1,xmin=0.15,xmax=1.25, legend entries={$\text{3}^{\text{rd}}$ Order Spline,$\text{4}^{\text{th}}$ Order Spline,Linear Function},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Power Parameter,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=S3-L2-REL-S11
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=S4-L2-REL-S11
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent-Linear, y=LIN-L2-REL-S11
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=1,ymax=100,xmin=0.15,xmax=1.25, legend entries={$\text{3}^{\text{rd}}$ Order Spline,$\text{4}^{\text{th}}$ Order Spline,Linear Function},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Power Parameter,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=S3-L-INF-S11
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=S4-L-INF-S11
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent-Linear, y=LIN-L-INF-S11
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.0001,ymax=0.1,xmin=0.15,xmax=1.25, legend entries={$\text{3}^{\text{rd}}$ Order Spline,$\text{4}^{\text{th}}$ Order Spline,Linear Function},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Power Parameter,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=S3-L2-REL-S12
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=S4-L2-REL-S12
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent-Linear, y=LIN-L2-REL-S12
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=1,ymax=100,xmin=0.15,xmax=1.25, legend entries={$\text{3}^{\text{rd}}$ Order Spline,$\text{4}^{\text{th}}$ Order Spline,Linear Function},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Power Parameter,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=S3-L-INF-S12
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=S4-L-INF-S12
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent-Linear, y=LIN-L-INF-S12
, col sep=comma] {WeightFunctionsGFD-LShaped.csv};
\end{axis}
\end{tikzpicture}\\
\end{tabular}
\caption{GFD Weight Sensitivity - 2D L-Shape. $L_2$ (Left) and $L_{\infty}$ (Right) errors. Comparison for $\text{3}^{\text{rd}}$ and $\text{4}^{\text{th}}$ order spline weight functions composed with a power function of various exponents. The linear weight function is also given for reference purposes. The $\text{4}^{\text{th}}$ order spline leads to a lower error than the $\text{3}^{\text{rd}}$ order spline for both stress components and both error norms. The linear function leads to the lowest error in terms of $L_{\infty}$ error norm.}
\label{WeightFunctionsResultsGFD_LShaped}
\end{figure}
We can see from Figure \ref{WeightFunctionsResultsGFD} that within the range [0.6; 0.9] the power parameter has little impact on the error. In this range, the type of spline used does not significantly impact the error either. From Figure \ref{WeightFunctionsResultsGFD_LShaped} we can see that the power parameter has little impact on the error in terms of $L_2$ norm. A linear weight function leads to results similar to those of the spline weight functions in the range of power parameters [0.3; 1.1]. In terms of $L_{\infty}$ error norm, the linear function leads to a lower error than the spline functions in the range of power parameters [0.4; 1.2].
A 4$^\text{th}$ order spline with a power parameter of 0.75 appears to be a reasonable choice as it leads to a minimum error for both considered problems. It leads to a low error for the 2D cylinder without being too close to the rapid error increase observed when the power parameter decreases below 0.6. It also leads to a reasonably low error for the singular problem, both in terms of $L_2$ and $L_{\infty}$ error norms. A single set of parameters has been selected for both problems so that it can be applied to a wide variety of problems in the domain of linear elasticity.
\paragraph{DC PSE} \
In this section, we assess the influence of the selected weight function on the error for the DC PSE method. The exponential weight function presented in Equation (\ref{ExpWeight}) is compared to the 3$^\text{rd}$ and 4$^\text{th}$ order splines of Equation (\ref{Spline3Weight}) and Equation (\ref{Spline4Weight}), respectively. We have considered various combinations of shape parameters ($\epsilon$) and exponents ($\alpha$). In Figure \ref{WindowFunctionShapes}, the profiles of three exponential weight functions, along with the two spline functions, are presented.
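For illustration, a sketch of an exponential window parametrized by an exponent $\alpha$ and a shape parameter $\epsilon$ is given below. The form $w(s)=\exp(-(s/\epsilon)^{\alpha})$ is an assumption for this sketch (the paper's exact expression is Equation (\ref{ExpWeight}), not reproduced here), and the function name is ours:

```python
import numpy as np

def exp_weight(s, alpha=2.0, shape=0.33):
    """Assumed exponential DC PSE window: w(s) = exp(-(s/shape)**alpha),
    with s the normalized distance to the reference node. alpha ('Exp.')
    and shape are the two parameters varied in this sensitivity study."""
    return np.exp(-(np.asarray(s, float) / shape) ** alpha)

# Larger exponents and smaller shape parameters concentrate the weight
# near the reference node, as visible in the profiles of the figure below.
```

With $\alpha=2$ and $\epsilon=0.33$ the window has essentially decayed to zero at the edge of the unit support, so truncating it there introduces little error.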
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[height=7cm,width=10cm, ymin=0,ymax=1,xmin=0,xmax=1, legend entries={Exp.=1.0 - Shape=0.30,Exp.=2.0 - Shape=0.33,Exp.=3.0 - Shape=0.40,$\text{3}^{\text{rd}}$ Order Spline,$\text{4}^{\text{th}}$ Order Spline},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Normalized Distance to Ref. Node,ylabel=Weight Value]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=DIST-NORM, y=EXP1-0.30-W
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=DIST-NORM, y=EXP2-0.33-W
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=DIST-NORM, y=EXP3-0.4-W
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=DIST-NORM, y=SPLINE3-W
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=DIST-NORM, y=SPLINE4-W
, col sep=comma] {WeightFunctionsDCPSE.csv};
\end{axis}
\end{tikzpicture}
\caption{DC PSE Weight Functions: Comparison of the profile of typical exponential and spline functions used as weight in the DC PSE approximation.}
\label{WindowFunctionShapes}
\end{figure}
In Figure \ref{WeightFunctionsResultsDCPSE} and Figure \ref{WeightFunctionsResultsDCPSE_LShaped}, the error in terms of $L_2$ and $L_{\infty}$ norms for the $\sigma_{11}$ and $\sigma_{12}$ stress components is presented for various combinations of exponents and shape parameters. The error obtained with the $3^\text{rd}$ and $4^\text{th}$ order splines is also presented on the graphs for comparison purposes, even though the shape parameter does not apply to these functions.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=6cm,ymode=log, ymin=0.000001,ymax=0.001,xmin=0.15,xmax=0.55, legend entries={Exp.=1.0,Exp.=1.5,Exp.=2.0,Exp.=2.5,Exp.=3.0,Spline3rd ,Spline4th},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=outer north east,xlabel=Shape Parameter,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=EXP-1.0-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=EXP-1.5-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent, y=EXP-2.0-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=Exponent, y=EXP-2.5-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=Exponent, y=EXP-3.0-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=Exponent-Spline, y=SPLINE3-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[magenta,mark=oplus] table [x=Exponent-Spline, y=SPLINE4-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=6cm,ymode=log, ymin=0.0001,ymax=1,xmin=0.15,xmax=0.55, legend entries={Exp.=1.0,Exp.=1.5,Exp.=2.0,Exp.=2.5,Exp.=3.0,Spline3rd ,Spline4th},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=outer north east,xlabel=Shape Parameter,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=EXP-1.0-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=EXP-1.5-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent, y=EXP-2.0-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=Exponent, y=EXP-2.5-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=Exponent, y=EXP-3.0-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=Exponent-Spline, y=SPLINE3-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[magenta,mark=oplus] table [x=Exponent-Spline, y=SPLINE4-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE.csv};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=6cm,ymode=log, ymin=0.000001,ymax=0.001,xmin=0.15,xmax=0.55, legend entries={Exp.=1.0,Exp.=1.5,Exp.=2.0,Exp.=2.5,Exp.=3.0,Spline3rd ,Spline4th},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=outer north east,xlabel=Shape Parameter,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=EXP-1.0-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=EXP-1.5-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent, y=EXP-2.0-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=Exponent, y=EXP-2.5-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=Exponent, y=EXP-3.0-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=Exponent-Spline, y=SPLINE3-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[magenta,mark=oplus] table [x=Exponent-Spline, y=SPLINE4-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=6cm,ymode=log, ymin=0.0001,ymax=1,xmin=0.15,xmax=0.55, legend entries={Exp.=1.0,Exp.=1.5,Exp.=2.0,Exp.=2.5,Exp.=3.0,Spline3rd ,Spline4th},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=outer north east,xlabel=Shape Parameter,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=EXP-1.0-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=EXP-1.5-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent, y=EXP-2.0-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=Exponent, y=EXP-2.5-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=Exponent, y=EXP-3.0-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=Exponent-Spline, y=SPLINE3-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\addplot+[magenta,mark=oplus] table [x=Exponent-Spline, y=SPLINE4-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE.csv};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{DC PSE Weight Function Sensitivity - 2D Cylinder. $L_2$ (Left) and $L_{\infty}$ (Right) errors as a function of the shape parameters for exponential functions of various exponents. Comparison to results obtained with $3^\text{rd}$ and $4^\text{th}$ order splines. A shape parameter of 0.3 leads to a low error for all the exponents considered.}
\label{WeightFunctionsResultsDCPSE}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=6cm,ymode=log, ymin=0.0001,ymax=0.1,xmin=0.15,xmax=0.55, legend entries={Exp.=1.0,Exp.=1.5,Exp.=2.0,Exp.=2.5,Exp.=3.0,Spline3rd ,Spline4th},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=outer north east,xlabel=Shape Parameter,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=EXP-1.0-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=EXP-1.5-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent, y=EXP-2.0-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=Exponent, y=EXP-2.5-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=Exponent, y=EXP-3.0-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=Exponent-Spline, y=SPLINE3-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[magenta,mark=oplus] table [x=Exponent-Spline, y=SPLINE4-L2-REL-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=6cm,ymode=log, ymin=1,ymax=1000,xmin=0.15,xmax=0.55, legend entries={Exp.=1.0,Exp.=1.5,Exp.=2.0,Exp.=2.5,Exp.=3.0,Spline3rd ,Spline4th},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=outer north east,xlabel=Shape Parameter,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=EXP-1.0-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=EXP-1.5-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent, y=EXP-2.0-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=Exponent, y=EXP-2.5-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=Exponent, y=EXP-3.0-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=Exponent-Spline, y=SPLINE3-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[magenta,mark=oplus] table [x=Exponent-Spline, y=SPLINE4-L-INF-S11
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=6cm,ymode=log, ymin=0.0001,ymax=0.1,xmin=0.15,xmax=0.55, legend entries={Exp.=1.0,Exp.=1.5,Exp.=2.0,Exp.=2.5,Exp.=3.0,Spline3rd ,Spline4th},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=outer north east,xlabel=Shape Parameter,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=EXP-1.0-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=EXP-1.5-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent, y=EXP-2.0-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=Exponent, y=EXP-2.5-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=Exponent, y=EXP-3.0-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=Exponent-Spline, y=SPLINE3-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[magenta,mark=oplus] table [x=Exponent-Spline, y=SPLINE4-L2-REL-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=6cm,ymode=log, ymin=1,ymax=1000,xmin=0.15,xmax=0.55, legend entries={Exp.=1.0,Exp.=1.5,Exp.=2.0,Exp.=2.5,Exp.=3.0,Spline3rd ,Spline4th},legend style={legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=outer north east,xlabel=Shape Parameter,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=Exponent, y=EXP-1.0-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=Exponent, y=EXP-1.5-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=Exponent, y=EXP-2.0-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=Exponent, y=EXP-2.5-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[olive,mark=star,mark options={fill=olive}] table [x=Exponent, y=EXP-3.0-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=Exponent-Spline, y=SPLINE3-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\addplot+[magenta,mark=oplus] table [x=Exponent-Spline, y=SPLINE4-L-INF-S12
, col sep=comma] {WeightFunctionsDCPSE-LShaped.csv};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{DC PSE Weight Function Sensitivity - 2D L-Shape. $L_2$ (Left) and $L_{\infty}$ (Right) errors as a function of the shape parameter for exponential weight functions with various exponents, compared with results obtained with $3^\text{rd}$ and $4^\text{th}$ order splines. An exponent of 2.0 and a shape parameter of 0.33 lead to the lowest error in terms of the $L_2$ norm.}
\label{WeightFunctionsResultsDCPSE_LShaped}
\end{figure}
The analysis of Figure \ref{WeightFunctionsResultsDCPSE} and Figure \ref{WeightFunctionsResultsDCPSE_LShaped} shows that, for both problems, the exponential weight functions lead to smaller errors than the spline functions for shape parameters between 0.25 and 0.45. The combination of an exponent of 1.0 and a shape parameter of 0.25 yields the smallest error for the 2D cylinder. A shape parameter of 0.30 combined with an exponent of 1.0 leads to similar results for this problem while staying safely away from the rapid error increase observed at larger shape parameters.
For the 2D L-shape, an exponent of 2.0 combined with a shape parameter of 0.33 consistently leads to the lowest error.
Since the type of solution is generally unknown a priori, a single set of parameters is selected for general use: a shape parameter of 0.30 and an exponent of 1.0. This combination yields a more significant error reduction than the set of parameters that minimizes the error for the 2D L-shape problem.
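To make the role of the two parameters concrete, the following is a minimal sketch of an exponential weight function of the kind studied here. The functional form is an assumption (a common DC PSE choice); the exact expression used in this work may differ.

```python
import numpy as np

def exponential_weight(r, r_c, shape=0.30, exponent=1.0):
    """Assumed exponential weight for a support node at distance r from the
    collocation node with support radius r_c:
        w(r) = exp(-(r / (shape * r_c))**(2 * exponent)).
    The defaults are the parameters selected above (shape 0.30, exponent 1.0)."""
    return np.exp(-(r / (shape * r_c)) ** (2.0 * exponent))
```

With the selected parameters, the weight decays smoothly from 1 at the collocation node to a negligible value at the edge of the support, which is the behaviour the shape parameter controls.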
\subsection{DC PSE Correction Function}
The basis functions used to build the correction function in the DC PSE method can be selected from various function types. The most common bases are the polynomial basis and the exponential basis. For a two-dimensional problem, the polynomial basis is $\mathbf{P}=[1, x, y, x^2, x y, y^2]^T$ and the exponential basis is $\mathbf{P}=[1, e^x, e^y, e^{2x}, e^{x+y}, e^{2y}]^T$. In order for these bases to be independent of the node density, the functions are scaled according to the support radius. For a node $\mathbf{X_{pi}}$ in the support of the collocation node $\mathbf{X_c}$, the scaling parameters are $Sx_i=\frac{x_c-x_{pi}}{r_c}$ and $Sy_i=\frac{y_c-y_{pi}}{r_c}$, where $r_c$ is the support radius of the collocation node $\mathbf{X_c}$. The polynomial basis becomes:
\begin{equation} \label{ExpBasisScaled}
\mathbf{P}=\Big[ 1, Sx_i, Sy_i, {Sx_i}^2, {Sx_i}{Sy_i}, {Sy_i}^2 \Big]^T \\
\end{equation}
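The scaled basis of the equation above can be sketched as follows (a minimal illustration; the function name is ours, not from the original implementation):

```python
import numpy as np

def scaled_polynomial_basis(x_c, y_c, x_pi, y_pi, r_c):
    """Second-order polynomial basis evaluated for a support node (x_pi, y_pi)
    of the collocation node (x_c, y_c), scaled by the support radius r_c."""
    sx = (x_c - x_pi) / r_c  # Sx_i: x-offset scaled by the support radius
    sy = (y_c - y_pi) / r_c  # Sy_i: y-offset scaled by the support radius
    return np.array([1.0, sx, sy, sx**2, sx * sy, sy**2])
```

Because offsets are divided by the support radius, refining the node cloud (shrinking both the offsets and $r_c$ proportionally) leaves the basis values unchanged, which is precisely the density independence sought above.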
The errors obtained for each correction function basis are presented in Figure \ref{DCPSE_CorrFunction_Comparison} below for the 2D cylinder and in Figure \ref{DCPSE_CorrFunction_Comparison_LShaped} for the 2D L-shape.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.0000002,ymax=0.003,ymode=log,xmin=1000,xmax=30000, xmode=log, legend entries={Polynomial Basis,Exponential Basis},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S11-DCPSE1, col sep=comma] {DCPSECorrFunctionCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S11-DCPSE1-EXP, col sep=comma] {DCPSECorrFunctionCylinder.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.13}{1.6}{green};
\logLogSlopeTriangle{0.85}{0.1}{0.48}{1.0}{red};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.0001,ymax=1,ymode=log,xmin=1000,xmax=30000, xmode=log, legend entries={Polynomial Basis,Exponential Basis},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L-INF-S11-DCPSE1, col sep=comma] {DCPSECorrFunctionCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L-INF-S11-DCPSE1-EXP, col sep=comma] {DCPSECorrFunctionCylinder.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.14}{1.1}{green};
\logLogSlopeTriangle{0.85}{0.1}{0.48}{0.5}{red};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm, width=7.5cm, ymin=0.0000002, ymax=0.003, ymode=log, xmin=1000,xmax=30000, xmode=log, legend entries={Polynomial Basis,Exponential Basis},legend style={ at={(0.5,-0.2)}, anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes, ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S12-DCPSE1, col sep=comma] {DCPSECorrFunctionCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S12-DCPSE1-EXP, col sep=comma] {DCPSECorrFunctionCylinder.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.11}{1.6}{green};
\logLogSlopeTriangle{0.85}{0.1}{0.42}{1.0}{red};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.0001,ymax=1,ymode=log,xmin=1000,xmax=30000, xmode=log, legend entries={Polynomial Basis,Exponential Basis},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L-INF-S12-DCPSE1, col sep=comma] {DCPSECorrFunctionCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L-INF-S12-DCPSE1-EXP, col sep=comma] {DCPSECorrFunctionCylinder.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.09}{0.9}{green};
\logLogSlopeTriangle{0.85}{0.1}{0.34}{0.5}{red};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{DC PSE Correction Function Basis Comparison - 2D Cylinder. $L_2$ (Left) and $L_{\infty}$ (Right) errors for polynomial and exponential basis functions as a function of the number of nodes in the model. The polynomial basis leads to a lower error and faster convergence.}
\label{DCPSE_CorrFunction_Comparison}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.00001,ymax=0.01,ymode=log,xmin=3000,xmax=1000000, xmode=log, legend entries={Polynomial Basis,Exponential Basis},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S11-DCPSE1, col sep=comma] {DCPSECorrFunctionLShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S11-DCPSE1-EXP, col sep=comma] {DCPSECorrFunctionLShaped.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.6}{0.6}{green};
\logLogSlopeTriangle{0.85}{0.1}{0.3}{0.6}{red};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=1,ymax=100,ymode=log,xmin=3000,xmax=1000000, xmode=log, legend entries={Polynomial Basis,Exponential Basis},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L-INF-S11-DCPSE1, col sep=comma] {DCPSECorrFunctionLShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L-INF-S11-DCPSE1-EXP, col sep=comma] {DCPSECorrFunctionLShaped.csv};
\logLogSlopeTriangleUp{0.85}{0.1}{0.67}{0.2}{red};
\logLogSlopeTriangleUp{0.85}{0.1}{0.45}{0.2}{green};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm, width=7.5cm, ymin=0.00001, ymax=0.01, ymode=log, xmin=3000,xmax=1000000, xmode=log, legend entries={Polynomial Basis,Exponential Basis},legend style={ at={(0.5,-0.2)}, anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes, ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S12-DCPSE1, col sep=comma] {DCPSECorrFunctionLShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S12-DCPSE1-EXP, col sep=comma] {DCPSECorrFunctionLShaped.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.5}{0.6}{green};
\logLogSlopeTriangle{0.85}{0.1}{0.15}{0.6}{red};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=1,ymax=100,ymode=log,xmin=3000,xmax=1000000, xmode=log, legend entries={Polynomial Basis,Exponential Basis},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L-INF-S12-DCPSE1, col sep=comma] {DCPSECorrFunctionLShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L-INF-S12-DCPSE1-EXP, col sep=comma] {DCPSECorrFunctionLShaped.csv};
\logLogSlopeTriangleUp{0.85}{0.1}{0.4}{0.2}{green};
\logLogSlopeTriangleUp{0.85}{0.1}{0.15}{0.2}{red};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{DC PSE Correction Function Basis Comparison - 2D L-Shape. $L_2$ (Left) and $L_{\infty}$ (Right) errors for polynomial and exponential basis functions as a function of the number of nodes in the model. Both bases lead to similar convergence rates. The exponential basis leads to a slightly lower error in terms of the $L_2$ norm. For the $L_{\infty}$ norm, the polynomial basis leads to the lowest error for the $\sigma_{11}$ stress component, while the exponential basis leads to the lowest error for the $\sigma_{12}$ component.}
\label{DCPSE_CorrFunction_Comparison_LShaped}
\end{figure}
It can be observed from Figure \ref{DCPSE_CorrFunction_Comparison} that the polynomial basis consistently leads to a much lower error than the exponential basis. Depending on the number of nodes, the error increases by a factor of between 5 and 30 when the exponential basis is used.
In Figure \ref{DCPSE_CorrFunction_Comparison_LShaped}, both correction function bases lead to similar results. In terms of the $L_2$ error norm, the exponential basis leads to a reduction of around 5\% compared to the results obtained with the polynomial basis.
Based on the results presented in this section, the polynomial basis is preferred, as it leads to a much larger error reduction than the exponential basis for the considered problems. This basis is expected to give a reasonably low error for most problems in linear elasticity.
\subsection{Number of Support Nodes}
\paragraph{Support Radius Selection} \
In this work, the support radius of each collocation node is selected based on the number of nodes within the support it defines. The number of nodes in the support of a collocation node must be at least equal to the number of approximated derivatives. In practice, a larger number of nodes is used in order to account for irregularities in the node distribution.
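This selection rule can be sketched as a $k$-nearest-neighbour search (a simplified illustration, assuming the collocation node itself belongs to the node set; a production code would typically use a spatial tree rather than a full sort):

```python
import numpy as np

def support_radius(nodes, x_c, k):
    """Support radius of collocation node x_c chosen so that the support
    contains exactly k other nodes: the distance to the k-th nearest
    neighbour, slightly enlarged to make the k-th node strictly interior."""
    d = np.sort(np.linalg.norm(nodes - x_c, axis=1))
    return 1.0001 * d[k]  # d[0] = 0 is the collocation node itself
```

Tying the radius to a node count, rather than fixing it geometrically, keeps the support size stable under local refinement of the node cloud.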
\paragraph{GFD} \
In this section, we study the impact of the number of nodes in the support of a collocation node for the GFD method. Various combinations of inner node and boundary node support sizes have been considered. Results are presented in Figure \ref{BoundaryNodeResultsGFD}, Figure \ref{BoundaryNodeResultsGFD_LShaped} and Figure \ref{BoundaryNode_3DResultsGFD} below, respectively, for the 2D cylinder, the 2D L-shape and the sphere under internal pressure. The $L_2$ and $L_{\infty}$ norms are presented as a function of the number of support nodes for collocation nodes located on the boundary. The results are shown for three inner node support sizes for the 2D problems and four for the 3D problem.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.000001,ymax=0.01,xmin=12,xmax=25, legend entries={Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-ALL, y=L2-REL-S11-11.0, col sep=comma] {SupportSizeGFD.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-ALL, y=L2-REL-S11-13.0, col sep=comma] {SupportSizeGFD.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-ALL, y=L2-REL-S11-15.0, col sep=comma] {SupportSizeGFD.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.001,ymax=10,xmin=12,xmax=25, legend entries={Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-ALL, y=L-INF-S11-11.0, col sep=comma] {SupportSizeGFD.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-ALL, y=L-INF-S11-13.0, col sep=comma] {SupportSizeGFD.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-ALL, y=L-INF-S11-15.0, col sep=comma] {SupportSizeGFD.csv};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.000001,ymax=0.01,xmin=12,xmax=25, legend entries={Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-ALL, y=L2-REL-S12-11.0, col sep=comma] {SupportSizeGFD.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-ALL, y=L2-REL-S12-13.0, col sep=comma] {SupportSizeGFD.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-ALL, y=L2-REL-S12-15.0, col sep=comma] {SupportSizeGFD.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.001,ymax=10,xmin=12,xmax=25, legend entries={Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-ALL, y=L-INF-S12-11.0, col sep=comma] {SupportSizeGFD.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-ALL, y=L-INF-S12-13.0, col sep=comma] {SupportSizeGFD.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-ALL, y=L-INF-S12-15.0, col sep=comma] {SupportSizeGFD.csv};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{GFD Support Node Number Sensitivity - 2D Cylinder. $L_2$ (Left) and $L_{\infty}$ (Right) errors for various combinations of inner node and boundary node support sizes. Inner collocation nodes with 11 support nodes lead to the lowest observed error. The error stops decreasing for boundary node supports larger than 18 nodes.}
\label{BoundaryNodeResultsGFD}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.0001,ymax=0.01,xmin=12,xmax=27, legend entries={Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-ALL, y=L2-REL-S11-11.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-ALL, y=L2-REL-S11-13.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-ALL, y=L2-REL-S11-15.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=1,ymax=100,xmin=12,xmax=27, legend entries={Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-ALL, y=L-INF-S11-11.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-ALL, y=L-INF-S11-13.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-ALL, y=L-INF-S11-15.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.0001,ymax=0.01,xmin=12,xmax=27, legend entries={Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-ALL, y=L2-REL-S12-11.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-ALL, y=L2-REL-S12-13.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-ALL, y=L2-REL-S12-15.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=1,ymax=100,xmin=12,xmax=27, legend entries={Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-ALL, y=L-INF-S12-11.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-ALL, y=L-INF-S12-13.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-ALL, y=L-INF-S12-15.0, col sep=comma] {SupportSizeGFD-LShaped.csv};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{GFD Support Node Number Sensitivity - 2D L-Shape. $L_2$ (Left) and $L_{\infty}$ (Right) errors for various combinations of inner node and boundary node support sizes. All combinations lead to similar errors, as the system is loaded via Dirichlet boundary conditions.}
\label{BoundaryNodeResultsGFD_LShaped}
\end{figure}
The results in terms of the $L_2$ and $L_{\infty}$ errors show a similar trend for the $\sigma_{11}$ and $\sigma_{12}$ stress components. Increasing the number of nodes in the inner node supports does not necessarily reduce the error: the loss of resolution is not compensated by the gain in solution smoothness. Increasing the number of support nodes for the boundary nodes steadily (and rapidly) reduces the error for the 2D cylinder problem. An error reduction of approximately a factor of one hundred is observed when the number of support nodes for boundary collocation nodes is increased from 13 to 18.
For the 2D L-shape problem, the number of support nodes on the boundary has little effect on the observed error, as the model is loaded via Dirichlet boundary conditions. The size of the inner node supports also has little impact on the error for this problem.
A combination of 11 support nodes for interior nodes and 19 support nodes for boundary nodes is selected, as it leads to a low error for both problems while keeping the fill of the system matrix reasonably low.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.000001,ymax=0.01,xmin=50,xmax=95, legend entries={Inn. Sup=35,Inn. Sup=37,Inn. Sup=39,Inn. Sup=41},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-3D, y=L2-REL-S11-35, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-3D, y=L2-REL-S11-37, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-3D, y=L2-REL-S11-39, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-3D, y=L2-REL-S11-41, col sep=comma] {SupportSizeGFD3D.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.001,ymax=10,xmin=50,xmax=95, legend entries={Inn. Sup=35,Inn. Sup=37,Inn. Sup=39,Inn. Sup=41},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-3D, y=L-INF-S11-35, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-3D, y=L-INF-S11-37, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-3D, y=L-INF-S11-39, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-3D, y=L-INF-S11-41, col sep=comma] {SupportSizeGFD3D.csv};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.000001,ymax=0.01,xmin=50,xmax=95, legend entries={Inn. Sup=35,Inn. Sup=37,Inn. Sup=39,Inn. Sup=41},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-3D, y=L2-REL-S12-35, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-3D, y=L2-REL-S12-37, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-3D, y=L2-REL-S12-39, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-3D, y=L2-REL-S12-41, col sep=comma] {SupportSizeGFD3D.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.001,ymax=10,xmin=50,xmax=95, legend entries={Inn. Sup=35,Inn. Sup=37,Inn. Sup=39,Inn. Sup=41},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-3D, y=L-INF-S12-35, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-3D, y=L-INF-S12-37, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-3D, y=L-INF-S12-39, col sep=comma] {SupportSizeGFD3D.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-3D, y=L-INF-S12-41, col sep=comma] {SupportSizeGFD3D.csv};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{GFD Support Node Number Sensitivity - 3D Sphere. $L_2$ (Left) and $L_{\infty}$ (Right) errors for various combinations of inner node and boundary node support sizes. Boundary collocation nodes with 75 support nodes lead to the lowest error. The size of the inner collocation node supports has little impact on the error.}
\label{BoundaryNode_3DResultsGFD}
\end{figure}
It can be observed from Figure \ref{BoundaryNode_3DResultsGFD} that, for the 3D sphere, a minimum error is obtained with 75 support nodes for the boundary collocation nodes. Increasing the boundary node support size from 55 to 75 reduces the observed error on average by a factor of 10, both in terms of the $L_2$ and $L_{\infty}$ norms. The number of support nodes for collocation nodes located in the interior of the domain has a smaller impact on the error. A support of 37 nodes appears to be a reasonable choice, as it leads to a low error while keeping the fill of the matrix reasonable.
The sparsity of the system matrix decreases as the number of support nodes increases. However, the impact of an increase in the number of support nodes on the boundary is limited, as it only affects a small fraction of the nodes of the domain.
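The limited cost of enlarging the boundary supports can be illustrated with a back-of-the-envelope fill estimate (an assumed simplified model, not the actual assembly code: one block row per node, with non-zeros proportional to its support size):

```python
def matrix_fill(n_inner, n_boundary, sup_inner, sup_boundary, dofs=2):
    """Rough fill fraction of the collocation system matrix: each node
    contributes dofs rows with (support size * dofs) non-zeros per row."""
    n = (n_inner + n_boundary) * dofs
    nnz = (n_inner * sup_inner + n_boundary * sup_boundary) * dofs**2
    return nnz / n**2
```

Since boundary nodes are a small fraction of the total, raising their support size changes the overall fill far less than the same increase applied to the interior nodes.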
\paragraph{DC PSE} \
As for the GFD method, the impact of the support size on the observed error is studied in this section for the DC PSE method. Results are presented in Figure \ref{BoundaryNodeResultsDCPSE}, Figure \ref{BoundaryNodeResultsDCPSE_LShaped} and Figure \ref{BoundaryNode_3DResultsDCPSE} below, respectively, for the 2D cylinder, the 2D L-shape and the sphere under internal pressure, for various combinations of inner node and boundary node support sizes.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.000001,ymax=0.0001,xmin=12,xmax=25, legend entries={Inn. Sup=9,Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-9.0, y=L2-REL-S11-9.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-11.0, y=L2-REL-S11-11.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-13.0, y=L2-REL-S11-13.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-15.0, y=L2-REL-S11-15.0, col sep=comma] {SupportSizeDCPSE.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.0001,ymax=0.06,xmin=12,xmax=25, legend entries={Inn. Sup=9,Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-9.0, y=L-INF-S11-9.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-11.0, y=L-INF-S11-11.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-13.0, y=L-INF-S11-13.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-15.0, y=L-INF-S11-15.0, col sep=comma] {SupportSizeDCPSE.csv};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.000001,ymax=0.0001,xmin=12,xmax=25, legend entries={Inn. Sup=9,Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-9.0, y=L2-REL-S12-9.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-11.0, y=L2-REL-S12-11.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-13.0, y=L2-REL-S12-13.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-15.0, y=L2-REL-S12-15.0, col sep=comma] {SupportSizeDCPSE.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.0001,ymax=0.06,xmin=12,xmax=25, legend entries={Inn. Sup=9,Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-9.0, y=L-INF-S12-9.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-11.0, y=L-INF-S12-11.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-13.0, y=L-INF-S12-13.0, col sep=comma] {SupportSizeDCPSE.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-15.0, y=L-INF-S12-15.0, col sep=comma] {SupportSizeDCPSE.csv};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{DC PSE Support Node Number Sensitivity - 2D Cylinder. $L_2$ (Left) and $L_{\infty}$ (Right) errors for various combinations of inner node and boundary node support sizes. Inner collocation nodes with 13 support nodes consistently lead to a low error. For this support size, the error starts to increase when the boundary node support exceeds 19 nodes.}
\label{BoundaryNodeResultsDCPSE}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.0001,ymax=0.05,xmin=12,xmax=27, legend entries={Inn. Sup=9,Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-9.0, y=L2-REL-S11-9.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-11.0, y=L2-REL-S11-11.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-13.0, y=L2-REL-S11-13.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-15.0, y=L2-REL-S11-15.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.5,ymax=100,xmin=12,xmax=27, legend entries={Inn. Sup=9,Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-9.0, y=L-INF-S11-9.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-11.0, y=L-INF-S11-11.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-13.0, y=L-INF-S11-13.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-15.0, y=L-INF-S11-15.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.0001,ymax=0.05,xmin=12,xmax=27, legend entries={Inn. Sup=9,Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-9.0, y=L2-REL-S12-9.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-11.0, y=L2-REL-S12-11.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-13.0, y=L2-REL-S12-13.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-15.0, y=L2-REL-S12-15.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,ymode=log, ymin=0.5,ymax=100,xmin=12,xmax=27, legend entries={Inn. Sup=9,Inn. Sup=11,Inn. Sup=13,Inn. Sup=15},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-9.0, y=L-INF-S12-9.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-11.0, y=L-INF-S12-11.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-13.0, y=L-INF-S12-13.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-15.0, y=L-INF-S12-15.0, col sep=comma] {SupportSizeDCPSE-LShaped.csv};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
%
\caption{DC PSE Support Node Number Sensitivity - 2D L-Shape. $L_2$ (Left) and $L_{\infty}$ (Right) errors for various combinations of inner node and boundary node support sizes. All combinations of inner and boundary support sizes lead to similar errors, because the system is loaded via Dirichlet boundary conditions.}
\label{BoundaryNodeResultsDCPSE_LShaped}
\end{figure}
Figure \ref{BoundaryNodeResultsDCPSE} shows that an inner support of 13 nodes almost always leads to the minimum error, and that increasing the number of support nodes of the boundary nodes steadily reduces the error for the 2D cylinder: for most inner support sizes, the error is reduced by a factor of two when the boundary support grows from 13 to 19 nodes.
Figure \ref{BoundaryNodeResultsDCPSE_LShaped} shows that, for the L-shape problem, the number of boundary support nodes has little effect on the error because the model is loaded via Dirichlet boundary conditions; the inner support size also has little impact.
Based on the results of Figure \ref{BoundaryNodeResultsDCPSE} and Figure \ref{BoundaryNodeResultsDCPSE_LShaped}, 13 support nodes for inner collocation nodes and 19 support nodes for boundary collocation nodes is a reasonable choice for most problems, as it leads to a near-minimal error.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.00003,ymax=0.00005,xmin=55,xmax=95, legend entries={Inn. Sup=35,Inn. Sup=37,Inn. Sup=39,Inn. Sup=41},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-3D, y=L2-REL-S11-35, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-3D, y=L2-REL-S11-37, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-3D, y=L2-REL-S11-39, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-3D, y=L2-REL-S11-41, col sep=comma] {SupportSizeDCPSE3D.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,xmin=55,xmax=95,ymin=0.03,ymax=0.05, legend entries={Inn. Sup=35,Inn. Sup=37,Inn. Sup=39,Inn. Sup=41},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-3D, y=L-INF-S11-35, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-3D, y=L-INF-S11-37, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-3D, y=L-INF-S11-39, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-3D, y=L-INF-S11-41, col sep=comma] {SupportSizeDCPSE3D.csv};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,xmin=55,xmax=95,ymin=0.000006,ymax=0.00001, legend entries={Inn. Sup=35,Inn. Sup=37,Inn. Sup=39,Inn. Sup=41},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-3D, y=L2-REL-S12-35, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-3D, y=L2-REL-S12-37, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-3D, y=L2-REL-S12-39, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-3D, y=L2-REL-S12-41, col sep=comma] {SupportSizeDCPSE3D.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm,xmin=55,xmax=95,ymin=0.008,ymax=0.013, legend entries={Inn. Sup=35,Inn. Sup=37,Inn. Sup=39,Inn. Sup=41},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Support Size on Boundary,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X-3D, y=L-INF-S12-35, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X-3D, y=L-INF-S12-37, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X-3D, y=L-INF-S12-39, col sep=comma] {SupportSizeDCPSE3D.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=X-3D, y=L-INF-S12-41, col sep=comma] {SupportSizeDCPSE3D.csv};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{DC PSE Support Node Number Sensitivity - 3D Sphere. $L_2$ (Left) and $L_{\infty}$ (Right) errors for various combinations of inner node and boundary node support sizes. Inner collocation nodes with 37 support nodes lead to the lowest observed error.}
\label{BoundaryNode_3DResultsDCPSE}
\end{figure}
Figure \ref{BoundaryNode_3DResultsDCPSE} shows that, for the 3D sphere, the minimum error is obtained with an inner support of 37 nodes. The number of boundary support nodes has little impact: increasing it from 60 to 90 reduces the observed error by only 8\% on average. As for the GFD method, 75 support nodes were selected for boundary collocation nodes, in order to keep the fill of the system matrix as low as possible while maintaining a low error.
\subsection{Results Summary}
Based on the results presented in the sections above, the parameters that lead to a minimum error at a reasonably low computational expense are summarized in Table \ref{ParametersSummary} below. These parameters are expected to yield low errors for a wide variety of linear elasticity problems, including singular ones. They are used as the base case for the studies presented in the remainder of the paper.
\begin{table}[h]
\centering
\caption{Summary of the results from the parametric study}
\label{ParametersSummary}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|l|c|c|}
\hline
\multicolumn{1}{|c|}{\textbf{Parameter}} & \multicolumn{1}{c|}{\textbf{GFD}} &
\multicolumn{1}{c|}{\textbf{DC PSE}} \\
\hline
Weight Function Type & 4$^\text{th}$ Order Spline & Exponential \\
Weight Function Parameter & $\gamma=0.75$ & $\alpha=1$, $\epsilon$=0.30\\
Correction Function & N/A & Polynomial \\
Size of Inner Nodes Support (2D/3D) & 11/37 & 13/37 \\
Size of Boundary Nodes Support (2D/3D) & 19/75 & 19/75 \\
\hline
\end{tabular}
\end{table}
\section{Improvement Methods} \label{ImprovementMethods_Section}
In this section, we present three methods which are expected to improve the accuracy of the GFD and DC PSE methods.
\subsection{Use of a Voronoi Diagram in Collocation} \label{Voronoi_Section}
\subsubsection{General}
A Voronoi diagram is a partition of a selected region over which nodes are distributed. A cell is associated with each node; its boundaries are defined such that every point it contains is closer to the cell's reference node than to any other node of the domain. Figure \ref{VoronoiCell} shows a typical 2D Voronoi diagram drawn on the support of an inner node of the domain: the boundary of the support is drawn in blue, the nodes in red, and the Voronoi cells are delimited by grey lines. Sukumar \cite{Sukumar2003} and Zhou et al. \cite{Zhou2007} used Voronoi diagrams for node selection and for body integration, respectively. The purpose of this section is to assess whether using a Voronoi diagram on the support of a collocation node helps reduce the error of the considered methods.
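The defining property of a Voronoi cell (every point of a cell is closer to the cell's reference node than to any other node) can be expressed as a nearest-node query. The following sketch, with a hypothetical helper name, illustrates this membership test:

```python
import numpy as np

def voronoi_cell_of(point, nodes):
    """Index of the Voronoi cell containing `point`: by definition,
    the cell of the node that is closest to the point."""
    d2 = ((np.asarray(nodes) - np.asarray(point)) ** 2).sum(axis=1)
    return int(d2.argmin())
```

Any off-the-shelf Voronoi library implements the same predicate implicitly when constructing the cell boundaries.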
\begin{figure}[H]
\centering
\begin{tikzpicture}
\definecolor{GreyColor}{rgb}{0.3,0.3,0.3}
\begin{axis}[height=7.5cm,width=7.5cm, xmin=-0.1,xmax=0.1,scaled x ticks = false, ymin=-0.1,ymax=0.1,scaled y ticks = false,anchor=south west,axis line style={draw=none},tick style={draw=none},xticklabels={,,}, yticklabels={,,}, cells={anchor=west}, font=\footnotesize, rounded corners=2pt]
\addplot[GreyColor,line width=2pt] table [x=X-1, y=Y-1, col sep=comma] {VoroCellX1.csv};
\addplot[GreyColor,line width=2pt] table [x=X-2, y=Y-2, col sep=comma] {VoroCellX1.csv};
\addplot[GreyColor,line width=2pt] table [x=X-3, y=Y-3, col sep=comma] {VoroCellX1.csv};
\addplot[GreyColor,line width=2pt] table [x=X-4, y=Y-4, col sep=comma] {VoroCellX1.csv};
\addplot[GreyColor,line width=2pt] table [x=X-5, y=Y-5, col sep=comma] {VoroCellX1.csv};
\addplot[GreyColor,line width=2pt] table [x=X-6, y=Y-6, col sep=comma] {VoroCellX6.csv};
\addplot[blue,line width=3pt] table [x=X-Circle, y=Y-Circle, col sep=comma] {VoroCellCircle.txt};
\addplot[only marks,red,mark=*,mark options={fill=red}] table [x=X-Scatt, y=Y-Scatt, col sep=comma] {VoroCellScatt.txt};
\end{axis}
\draw [black,-stealth, line width=1.0pt] (0,3.5) node [left] {Collocation Node} -- (2.9,3);
\draw [black,-stealth, line width=1.0pt] (0,5.5) node [left] {Support Node} -- (2.4,4.4);
\end{tikzpicture}
\caption{2D Voronoi Diagram on the disc support of a collocation node. The cells associated with each node are delimited by gray lines, while the boundary of the support is drawn in blue.}
\label{VoronoiCell}
\end{figure}
\subsubsection{Application to the GFD and DC PSE Methods} \label{VoroTheory}
\paragraph{GFD} \
The principle of the GFD method was presented in Section \ref{GFD_Method}. When the node support contains more nodes than unknown derivatives, a weighted least-squares approximation is used to determine the field derivatives that best fit the distribution.
The contribution of each node to the least-squares approximation is weighted by a function that depends only on the distance between the reference node and the support node. A Voronoi diagram can be used to determine an additional weight based on the spatial arrangement of the nodes: the area or volume $v$ of the node's Voronoi cell, which multiplies the distance-based weight $w$. Equation (\ref{Moments_GFD}) becomes:
\begin{equation} \label{Moments_GFD_Voro}
m_{ij}= \sum_{k=1}^m {w(\mathbf{X_{pk}} - \mathbf{X_c}) \, v(\mathbf{X_{pk}}) \, P_{ik}(\mathbf{X_c}) P_{jk}(\mathbf{X_c})}.
\end{equation}
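As a non-authoritative sketch of the assembly of Equation (\ref{Moments_GFD_Voro}), the moment matrix can be accumulated node by node; the helper names \texttt{weight\_fn} and \texttt{basis\_fn} are hypothetical stand-ins for the distance-based weight $w$ and the monomial basis terms $P_i$:

```python
import numpy as np

def moment_matrix(xc, support_pts, cell_volumes, weight_fn, basis_fn):
    """Assemble the Voronoi-weighted GFD moment matrix m_ij.

    Hypothetical helpers: weight_fn(r) is the distance-based weight w,
    basis_fn(d) returns the monomial terms P_i evaluated at offset d.
    """
    n_basis = len(basis_fn(support_pts[0] - xc))
    m = np.zeros((n_basis, n_basis))
    for xp, v in zip(support_pts, cell_volumes):
        w = weight_fn(np.linalg.norm(xp - xc))  # distance-based weight
        p = basis_fn(xp - xc)                   # monomial basis terms
        m += w * v * np.outer(p, p)             # Voronoi-weighted contribution
    return m
```

Setting all \texttt{cell\_volumes} to one recovers the standard distance-weighted moment matrix of Equation (\ref{Moments_GFD}).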
\paragraph{DC PSE} \
One of the key aspects of the DC PSE method presented in Section \ref{DC-PSE_Method} is the convolution of the Taylor series expansion with a correction function $\eta$. The domain integral is transformed into a discrete summation, with a volume $V_p$ associated with each particle $\mathbf{X_p}$ of the support. As a first approximation, all $V_p$ values are set to unity. To improve the accuracy of the method, a Voronoi diagram can instead be used to set $V_p$ equal to the volume of the Voronoi cell associated with each node $\mathbf{X_p}$.
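The cell volumes $V_p$ can be obtained from any Voronoi construction; as a self-contained illustration assuming a rectangular domain, the sketch below estimates them by Monte Carlo, assigning random sample points to their nearest node so that the fraction of samples owned by a node approximates its cell measure:

```python
import numpy as np

def voronoi_volumes_mc(nodes, bounds, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the Voronoi cell area/volume of each node
    inside a rectangular domain given by bounds = (lower, upper)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    samples = rng.uniform(lo, hi, size=(n_samples, nodes.shape[1]))
    # nearest node index for every sample point
    d2 = ((samples[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)
    counts = np.bincount(owner, minlength=len(nodes))
    domain_measure = np.prod(hi - lo)
    return counts / n_samples * domain_measure
```

The estimated volumes sum to the domain measure by construction; an exact geometric construction would of course be preferred in production code.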
\subsubsection{Results}
In this section, we present the results for the 2D cylinder and the 2D L-shape problems. The errors are compared between a model that uses Voronoi weights and one that does not. Two types of node distributions are considered: structured and free. The structured distribution is created using constant angle and radius increments for the 2D cylinder, and a grid-type arrangement for the 2D L-shape. The free distribution uses a Delaunay triangulation of the domain for both problems. The two arrangements are shown in Figure \ref{NodeDispositionVoro} for the 2D cylinder problem.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=8cm,width=8cm, xmin=-0.2,xmax=3.2, ymin=-0.2,ymax=3.2,mark size=0.5pt,legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=X,ylabel=Y]
\addplot+[black, only marks,mark=*,mark options={fill=black}] table [x=X-STRUCT, y=Y-STRUCT, col sep=comma] {CylStructFree.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=8cm,width=8cm, xmin=-0.2,xmax=3.2, ymin=-0.2,ymax=3.2,mark size=0.5pt,legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=X,ylabel=Y]
\addplot+[black, only marks,mark=*,mark options={fill=black}] table [x=X-FREE, y=Y-FREE, col sep=comma] {CylStructFree.csv};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{2D Cylinder Node Distribution - Structured consisting of 1680 Nodes (Left) and Free consisting of 1762 Nodes (Right). The structured node distribution is based on constant angle and radius increments while the free node distribution uses a Delaunay triangulation of the domain.}
\label{NodeDispositionVoro}
\end{figure}
\paragraph{GFD} \
The results obtained with the GFD method are presented in Figure \ref{ResultsVoroGFD} and Figure \ref{ResultsVoroGFD_LShape} for the 2D cylinder and the 2D L-shape problems, respectively, for both node distributions. A slight reduction of the $L_2$ and $L_{\infty}$ error norms can be observed when Voronoi weights are used, but not for all node densities. For both problems, an error reduction of around 2\% is observed when Voronoi-based weights are used with a regular discretization of the domain. A more significant reduction, of around 17\%, is observed for the 2D cylinder with a free discretization of the domain, whereas for the 2D L-shape an error increase of 3\% is observed when Voronoi weights are used with a free discretization.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log, ymode=log, ymin=0.00000033,ymax=0.0004, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S11-L2-REL-GFD-NoVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S11-L2-REL-GFD-WithVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S11-L2-REL-GFD-NoVoro-Free, col sep=comma] {VoroResults.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S11-L2-REL-GFD-WithVoro-Free, col sep=comma] {VoroResults.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.1}{1.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log, ymode=log, ymin=0.0001,ymax=0.1, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S11-LINF-GFD-NoVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S11-LINF-GFD-WithVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S11-LINF-GFD-NoVoro-Free, col sep=comma] {VoroResults.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S11-LINF-GFD-WithVoro-Free, col sep=comma] {VoroResults.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.15}{1.1}{black};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log, ymode=log, ymin=0.00000033,ymax=0.0004, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S12-L2-REL-GFD-NoVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S12-L2-REL-GFD-WithVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S12-L2-REL-GFD-NoVoro-Free, col sep=comma] {VoroResults.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S12-L2-REL-GFD-WithVoro-Free, col sep=comma] {VoroResults.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.1}{1.5}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log, ymode=log, ymin=0.0001,ymax=0.1, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S12-LINF-GFD-NoVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S12-LINF-GFD-WithVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S12-LINF-GFD-NoVoro-Free, col sep=comma] {VoroResults.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S12-LINF-GFD-WithVoro-Free, col sep=comma] {VoroResults.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.15}{0.9}{black};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{Impact of Voronoi based Weights on the Errors for the GFD Method - 2D Cylinder. $L_2$ (Left) and $L_{\infty}$ (Right) errors for structured and free node distributions. A reduction in the error is only observed for the free node distribution.}
\label{ResultsVoroGFD}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log,xmin=3000,xmax=1000000, ymode=log, ymin=0.00004,ymax=0.01, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S11-L2-REL-GFD-NoVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S11-L2-REL-GFD-WithVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S11-L2-REL-GFD-NoVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S11-L2-REL-GFD-WithVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.15}{0.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log,xmin=3000,xmax=1000000, ymode=log, ymin=1,ymax=200, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S11-LINF-GFD-NoVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S11-LINF-GFD-WithVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S11-LINF-GFD-NoVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S11-LINF-GFD-WithVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\logLogSlopeTriangleUp{0.9}{0.1}{0.35}{0.2}{black};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log,xmin=3000,xmax=1000000, ymode=log, ymin=0.00004,ymax=0.01, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S12-L2-REL-GFD-NoVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S12-L2-REL-GFD-WithVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S12-L2-REL-GFD-NoVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S12-L2-REL-GFD-WithVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\logLogSlopeTriangle{0.8}{0.1}{0.1}{0.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log,xmin=3000,xmax=1000000, ymode=log, ymin=1,ymax=200, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S12-LINF-GFD-NoVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S12-LINF-GFD-WithVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S12-LINF-GFD-NoVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S12-LINF-GFD-WithVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\logLogSlopeTriangleUp{0.9}{0.1}{0.15}{0.2}{black};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{Impact of Voronoi based Weights on the Errors for the GFD Method - 2D L-Shape. $L_2$ (Left) and $L_{\infty}$ (Right) errors for structured and free node distributions. A slight reduction in the error is observed for the structured node distribution.}
\label{ResultsVoroGFD_LShape}
\end{figure}
\paragraph{DC PSE} \
As for the GFD method, we now assess the impact of Voronoi-based volumes for the two node distributions on the 2D cylinder and the 2D L-shape problems. The results are presented in Figure \ref{ResultsVoroDCPSE} and Figure \ref{ResultsVoroDCPSE_LShape} below.
Figure \ref{ResultsVoroDCPSE} shows that, for the 2D cylinder, the use of Voronoi-based volumes leads to a large error increase for the structured node distribution, while an average error reduction of 10\% is observed for the free distribution.
Figure \ref{ResultsVoroDCPSE_LShape} shows that the trend for the 2D L-shape is the opposite of that for the 2D cylinder: an error reduction of around 5\% is observed when Voronoi-based volumes are used with the structured node distribution, while a slight error increase (less than 1\%) is observed for the free distribution.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log, ymode=log, ymin=0.00000003,ymax=0.007, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S11-L2-REL-DCPSE-NoVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S11-L2-REL-DCPSE-WithVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S11-L2-REL-DCPSE-NoVoro-Free, col sep=comma] {VoroResults.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S11-L2-REL-DCPSE-WithVoro-Free, col sep=comma] {VoroResults.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.20}{1.3}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log, ymode=log, ymin=0.00004,ymax=0.3, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S11-LINF-DCPSE-NoVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S11-LINF-DCPSE-WithVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S11-LINF-DCPSE-NoVoro-Free, col sep=comma] {VoroResults.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S11-LINF-DCPSE-WithVoro-Free, col sep=comma] {VoroResults.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.2}{0.8}{black};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log, ymode=log, ymin=0.00000003,ymax=0.007, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S12-L2-REL-DCPSE-NoVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S12-L2-REL-DCPSE-WithVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S12-L2-REL-DCPSE-NoVoro-Free, col sep=comma] {VoroResults.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S12-L2-REL-DCPSE-WithVoro-Free, col sep=comma] {VoroResults.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.12}{1.3}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log, ymode=log, ymin=0.00004,ymax=0.3, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S12-LINF-DCPSE-NoVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S12-LINF-DCPSE-WithVoro-Struc, col sep=comma] {VoroResults.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S12-LINF-DCPSE-NoVoro-Free, col sep=comma] {VoroResults.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S12-LINF-DCPSE-WithVoro-Free, col sep=comma] {VoroResults.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.12}{0.8}{black};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{Impact of Voronoi Integration on the Errors for the DC PSE Method - 2D Cylinder. $L_2$ (Left) and $L_{\infty}$ (Right) errors for structured and free node distributions. A slight error reduction is observed only for the free node distribution.}
\label{ResultsVoroDCPSE}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log,xmin=3000,xmax=1000000, ymode=log, ymin=0.00004,ymax=0.01, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S11-L2-REL-DCPSE-NoVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S11-L2-REL-DCPSE-WithVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S11-L2-REL-DCPSE-NoVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S11-L2-REL-DCPSE-WithVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.15}{0.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log,xmin=3000,xmax=1000000, ymode=log, ymin=1,ymax=200, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S11-LINF-DCPSE-NoVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S11-LINF-DCPSE-WithVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S11-LINF-DCPSE-NoVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S11-LINF-DCPSE-WithVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\logLogSlopeTriangleUp{0.9}{0.1}{0.35}{0.2}{black};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log,xmin=3000,xmax=1000000, ymode=log, ymin=0.00004,ymax=0.01, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S12-L2-REL-DCPSE-NoVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S12-L2-REL-DCPSE-WithVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S12-L2-REL-DCPSE-NoVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S12-L2-REL-DCPSE-WithVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\logLogSlopeTriangle{0.8}{0.1}{0.1}{0.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, xmode=log,xmin=3000,xmax=1000000, ymode=log, ymin=1,ymax=200, legend entries={Struct. No Voro.,Struct. With Voro.,Free No Voro.,Free With Voro.},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM-STRUC, y=S12-LINF-DCPSE-NoVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM-STRUC, y=S12-LINF-DCPSE-WithVoro-Struc, col sep=comma] {VoroResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM-FREE, y=S12-LINF-DCPSE-NoVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NODE-NUM-FREE, y=S12-LINF-DCPSE-WithVoro-Free, col sep=comma] {VoroResults-LShape.csv};
\logLogSlopeTriangleUp{0.9}{0.1}{0.15}{0.2}{black};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{Impact of Voronoi based Weights on the Errors for the DC PSE Method - 2D L-Shape. $L_2$ (Left) and $L_{\infty}$ (Right) errors for structured and free node distributions. A slight reduction in the error is observed for the structured node distribution.}
\label{ResultsVoroDCPSE_LShape}
\end{figure}
\paragraph{Discussion} \
From Figure \ref{ResultsVoroGFD} to Figure \ref{ResultsVoroDCPSE_LShape} we can see that an error reduction from the use of Voronoi based volumes is not guaranteed. For the 2D cylinder problem, the use of such volumes led to an error reduction of up to 17\%. However, for the 2D L-shape, an error increase of 3\% has been observed for the free node distribution. From this study, we conclude that Voronoi based weights should be used with care, as they may lead to a significant error increase.
\subsection{Collocation Method Stabilization}
\subsubsection{General}
Within the framework of collocation, the boundary conditions are applied at the nodes using the strong form of the partial differential equations. This may lead to ill-conditioning of the linear system of equations, since the boundary conditions and the equilibrium equation cannot both be enforced simultaneously at a boundary node. To overcome this issue, a stabilization method known as \textit{Finite Increment Calculus} (FIC) was introduced by E. O\~nate \cite{Oate1998} for structural problems solved with the Finite Point Method. The method is presented in this section.
Results for the 2D cylinder and the 2D L-shape problems are presented with and without stabilization. This stabilization approach is used for both methods, GFD and DC PSE.
\subsubsection{Stabilized Equations}
Considering an unknown field $f$, a partial differential problem is defined by a differential operator $\mathcal{A}$ applied to the interior domain $\Omega$, a field $\overline{f}$ prescribed on the boundary $\Gamma_u$, and a differential operator $\mathcal{B}$ applied to the boundary $\Gamma_t$ (see Figure \ref{FICDrawing}).
\begin{figure}[H]
\centering
\begin{tikzpicture}
\def10cm{8cm}
\node at (0,0) {\includegraphics{FICdrawing_2.pdf}};
\node[color=black] at (-0.3,0.5) [left] {$\Omega$};
\node[color=red] at (-4,1) [left] {$\Gamma_u$};
\node[color=blue] at (4.4,1) [left] {$\Gamma_t$};
\node[color=red] at (2.6,0.6) [left] {$X_c$};
\node[color=black] at (1.75,-0.6) [left] {$h_1$};
\node[color=black] at (1.25,0.7) [left] {$h_2$};
\node[color=black] at (-2.5,-2.5) [left] {$x_1$};
\node[color=black] at (-3.1,-1.85) [left] {$x_2$};
\end{tikzpicture}
\caption{2D domain $\Omega$ on which Dirichlet boundary conditions are applied to the boundary $\Gamma_u$ and Neumann boundary conditions to $\Gamma_t$. The characteristic lengths $h_1$ and $h_2$ are presented for the collocation node $\mathbf{X_c}$.}
\label{FICDrawing}
\end{figure}
\begin{equation} \label{ProblemEquations}
\begin{aligned}
\mathbf{\mathcal{A}(f)} &=0 \text{ \quad in \quad} \Omega, \\
\mathbf{f-\overline{f}}&=0 \text{ \quad on \quad} \Gamma_u, \\
\mathbf{\mathcal{B}(f)} &=0 \text{ \quad on \quad} \Gamma_t. \\
\end{aligned}
\end{equation}
Based on \cite{Oate1998} and \cite{Oate2001}, the stabilized system of equations is:
\begin{equation} \label{StabProblemEquations}
\begin{aligned}
\mathbf{\mathcal{A}(f)}- \frac{1}{2} \sum_{j=1}^m {h_j \frac{\mathbf{\partial \mathcal{A}(f)}}{\partial x_j}}&=0 \text{ \quad in \quad} \Omega, \\
\mathbf{f-\overline{f}}&=0 \text{ \quad on \quad} \Gamma_u, \\
\mathbf{\mathcal{B}(f)}- \sum_{j=1}^m {h_j n_j \mathbf{\mathcal{A}(f)}} &=0 \text{ \quad on \quad} \Gamma_t,
\end{aligned}
\end{equation}
where $m$ is the dimension of the domain, $h_j$ is the characteristic length of the domain in the direction $j$, and $n_j$ is the $j$-th component of the unit outward normal.
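As an illustration of the boundary equation in (\ref{StabProblemEquations}), consider a minimal 1D sketch. The operators chosen here, $\mathcal{A}(u)=E u'' + b$ and $\mathcal{B}(u)=E u' n - \overline{t}$, are assumptions for illustration only and are not taken from the GFD or DC PSE implementations; the stabilized Neumann residual at a boundary node then reads $\mathcal{B}(u) - h\, n\, \mathcal{A}(u)$:

```python
def stabilized_neumann_residual(E, du, d2u, b, t_bar, h, n):
    """FIC-stabilized Neumann residual for a 1D bar (illustrative sketch).

    A(u) = E*u'' + b is the interior equilibrium operator and
    B(u) = E*u'*n - t_bar the boundary traction operator; the
    stabilized boundary equation is B(u) - h*n*A(u) = 0.
    All arguments are scalars evaluated at the boundary node.
    """
    A = E * d2u + b        # interior (equilibrium) operator
    B = E * du * n - t_bar # boundary (traction) operator
    return B - h * n * A   # stabilized boundary residual
```

For the exact solution of an unloaded bar ($b=0$, $u=\overline{t}x/E$) both operators vanish, so the stabilization term adds nothing; it acts only on the truncation error of the discrete derivatives.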
For isotropic support weight functions, $h_j$ reduces to $h$ and can be expressed as follows:
\begin{equation} \label{CharctLenCalc}
\begin{aligned}
h=&R_{\text{Sup}}\left(\frac{\pi}{N_{\text{Sup}}}\right)^{\frac{1}{2}} & \text{ \quad for 2D problems, \quad} \\
h=&R_{\text{Sup}}\left(\frac{4\pi}{3N_{\text{Sup}}}\right)^{\frac{1}{3}} & \text{ \quad for 3D problems, \quad} \\
\end{aligned}
\end{equation}
where $R_{\text{Sup}}$ and $N_{\text{Sup}}$ represent, respectively, the radius of the node support and the number of nodes in the support.
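As a hedged sketch, Equation (\ref{CharctLenCalc}) translates directly into code; the function name and argument layout are illustrative only:

```python
import math

def characteristic_length(r_sup, n_sup, dim):
    """Isotropic characteristic length h from the support radius
    R_Sup and the number of support nodes N_Sup."""
    if dim == 2:
        # h = R_Sup * (pi / N_Sup)^(1/2)
        return r_sup * math.sqrt(math.pi / n_sup)
    if dim == 3:
        # h = R_Sup * (4*pi / (3*N_Sup))^(1/3)
        return r_sup * (4.0 * math.pi / (3.0 * n_sup)) ** (1.0 / 3.0)
    raise ValueError("dim must be 2 or 3")
```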
\subsubsection{Results}
Equation (\ref{StabProblemEquations}) has been applied only to the boundary nodes, where the maximum error is usually observed. Note that the stabilized equation on the boundary does not require the approximation of an additional derivative order. The results obtained are presented from Figure \ref{ResultsStab-Cyl-GFD} to Figure \ref{ResultsStab-LShape-DCPSE}. $L_{\infty}$ error results are not presented in this section, as this error depends strongly on the proximity of the closest node to the singularity.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00000002,ymax=0.01,xmin=1000, xmode=log, legend entries={No Stabilization,With Stabilization},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S11-L2-REL-GFD-NoStab, col sep=comma] {StabResults-Cyl.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S11-L2-REL-GFD-WithStab, col sep=comma] {StabResults-Cyl.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.23}{1.5}{green};
\logLogSlopeTriangle{0.9}{0.1}{0.5}{1.6}{red};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00000002,ymax=0.01,xmin=1000, xmode=log, legend entries={No Stabilization,With Stabilization},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S12-L2-REL-GFD-NoStab, col sep=comma] {StabResults-Cyl.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S12-L2-REL-GFD-WithStab, col sep=comma] {StabResults-Cyl.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.25}{1.6}{green};
\logLogSlopeTriangle{0.9}{0.1}{0.45}{1.5}{red};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{Stabilization results comparison - 2D Cylinder - GFD Method. $L_2$ error for stabilized and non-stabilized PDE for increasing node numbers. A lower error is observed for the non-stabilized PDE.}
\label{ResultsStab-Cyl-GFD}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00000002,ymax=0.01,xmin=1000, xmode=log, legend entries={No Stabilization,With Stabilization},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S11-L2-REL-DCPSE-NoStab, col sep=comma] {StabResults-Cyl.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S11-L2-REL-DCPSE-WithStab, col sep=comma] {StabResults-Cyl.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.23}{1.4}{green};
\logLogSlopeTriangle{0.9}{0.1}{0.45}{1.6}{red};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00000002,ymax=0.01,xmin=1000, xmode=log, legend entries={No Stabilization,With Stabilization},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S12-L2-REL-DCPSE-NoStab, col sep=comma] {StabResults-Cyl.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S12-L2-REL-DCPSE-WithStab, col sep=comma] {StabResults-Cyl.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.12}{2.0}{green};
\logLogSlopeTriangle{0.9}{0.1}{0.42}{1.5}{red};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{Stabilization results comparison - 2D Cylinder - DC PSE Method. $L_2$ error for stabilized and non-stabilized PDE for increasing node numbers. A lower error is observed for the non-stabilized PDE.}
\label{ResultsStab-Cyl-DCPSE}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00002,ymax=0.01,xmin=3000,xmax=1000000, xmode=log, legend entries={No Stabilization,With Stabilization},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S11-L2-REL-GFD-NoStab, col sep=comma] {StabResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S11-L2-REL-GFD-WithStab, col sep=comma] {StabResults-LShape.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.22}{0.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00002,ymax=0.01,xmin=3000,xmax=1000000, xmode=log, legend entries={No Stabilization,With Stabilization},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S12-L2-REL-GFD-NoStab, col sep=comma] {StabResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S12-L2-REL-GFD-WithStab, col sep=comma] {StabResults-LShape.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.12}{0.6}{black};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{Stabilization results comparison - 2D L-Shape - GFD Method. $L_2$ error for stabilized and non-stabilized PDE for increasing node numbers. A lower error is observed for the stabilized PDE.}
\label{ResultsStab-LShape-GFD}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00002,ymax=0.01,xmin=3000,xmax=1000000, xmode=log, legend entries={No Stabilization,With Stabilization},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S11-L2-REL-DCPSE-NoStab, col sep=comma] {StabResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S11-L2-REL-DCPSE-WithStab, col sep=comma] {StabResults-LShape.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.22}{0.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00002,ymax=0.01,xmin=3000,xmax=1000000, xmode=log, legend entries={No Stabilization,With Stabilization},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S12-L2-REL-DCPSE-NoStab, col sep=comma] {StabResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S12-L2-REL-DCPSE-WithStab, col sep=comma] {StabResults-LShape.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.12}{0.6}{black};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{Stabilization results comparison - 2D L-Shape - DC PSE Method. $L_2$ error for stabilized and non-stabilized PDE for increasing node numbers. A lower error is observed for the stabilized PDE.}
\label{ResultsStab-LShape-DCPSE}
\end{figure}
It can be observed from the results presented in Figure \ref{ResultsStab-Cyl-GFD} and Figure \ref{ResultsStab-Cyl-DCPSE} that the stabilization equations lead to an error increase for the 2D cylinder problem. Using the stabilization method increases the error by a factor of 30 for the GFD method and by a factor of 20 for the DC PSE method. We can see from Figure \ref{ResultsStab-LShape-GFD} and Figure \ref{ResultsStab-LShape-DCPSE} that the error is reduced by the use of the stabilization method for the L-shape problem. An average error reduction of 25\% is observed for the GFD method and of 35\% for the DC PSE method.
For the 2D cylinder, the loading is applied via Neumann boundary conditions, which represent the pressure loading. The L-shape problem, on the other hand, is loaded using Dirichlet boundary conditions. It can be concluded from this study that stabilization of the Neumann boundary conditions does not necessarily lead to a reduction of the observed error. Thus, stabilization of Neumann-loaded problems does not seem to be an effective solution for the considered methods.
\subsection{Support Node Selection for Singular Problems}\label{SupNodeSelection}
\subsubsection{General} \label{SupNodeSelectionGle}
For singular problems, such as the L-shape problem presented in Section \ref{RefProblems}, the selection of the support nodes in the vicinity of the singularity impacts the solution.
In 1994, with the Element-Free Galerkin (EFG) method, Belytschko et al. \cite{Belytschko1994} introduced the visibility criterion for support node selection. This criterion has been widely used in the context of EFG fracture mechanics, see e.g., Duflot \cite{Duflot2004}. The support of a collocation node $\mathbf{X_c}$ in the domain $\Omega$ is selected so that any point $\mathbf{X_p}$ of the support, defined by a radius $R_{Sup}$, can be connected to the collocation node by a segment which does not intersect the domain boundary $\Gamma$ (see Figure \ref{VisibilityDrawing}). The ``hidden'' zone is the zone within the support radius for which the segment between the collocation node and the support node intersects the boundary.
\begin{figure}[H]
\centering
\def10cm{7cm}
\includegraphics{DrawingVisibility_2.pdf}
\caption{Visibility criterion. Only the nodes that can be connected to the collocation node by a segment which does not intersect the boundary of the domain are included in the support.}
\label{VisibilityDrawing}
\end{figure}
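The visibility test can be sketched in a few lines, assuming 2D nodes stored as coordinate tuples and the boundary $\Gamma$ discretized into straight segments; the helper names are illustrative, not from a reference implementation:

```python
import math

def _orient(u, v, w):
    # Signed area of the triangle (u, v, w); its sign gives the orientation.
    return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])

def _blocked(p, q, a, b):
    # True if segment pq properly crosses boundary segment ab.
    d1, d2 = _orient(a, b, p), _orient(a, b, q)
    d3, d4 = _orient(p, q, a), _orient(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def visibility_support(x_c, nodes, r_sup, boundary_segments):
    """Keep the nodes within R_Sup of the collocation node x_c whose
    connecting segment crosses no boundary segment (visibility criterion)."""
    support = []
    for x_p in nodes:
        if math.hypot(x_p[0] - x_c[0], x_p[1] - x_c[1]) > r_sup:
            continue  # outside the support radius
        if any(_blocked(x_c, x_p, a, b) for a, b in boundary_segments):
            continue  # node lies in the "hidden" zone
        support.append(x_p)
    return support
```

With a vertical boundary segment between the collocation node and a candidate node, the candidate is excluded; nodes on the same side of the boundary remain in the support.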
The diffraction criterion, introduced by Organ in 1996 \cite{Organ1996}, is based on the same principle as the visibility criterion. The nodes in the zone ``hidden'' from the collocation node are included in the support only if the sum of the distance between the support node $\mathbf{X_p}$ and the singularity $\mathbf{X_s}$ and the distance between the singularity and the collocation node $\mathbf{X_c}$ is smaller than the support radius $R_{Sup}$. The weights associated with the nodes in this zone are based on this increased distance to the collocation node.
\begin{figure}[H]
\centering
\def10cm{7cm}
\includegraphics{DrawingDiffraction_2.pdf}
\caption{Diffraction criterion. The nodes in the ``hidden'' zone according to the visibility criterion are included in the support only if the sum of the distances [$\mathbf{X_p}$;$\mathbf{X_s}$] and [$\mathbf{X_s}$;$\mathbf{X_c}$] is smaller than the support radius.}
\label{DiffractionDrawing}
\end{figure}
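As a minimal sketch of the diffraction rule (assuming 2D coordinate tuples, a known singularity location $\mathbf{X_s}$, and a precomputed set of nodes hidden per the visibility test; all names are illustrative), hidden nodes are kept with an elongated weight distance:

```python
import math

def diffraction_support(x_c, nodes, r_sup, x_s, hidden):
    """Diffraction criterion (sketch): a node hidden from x_c is kept only
    if the path length |x_p - x_s| + |x_s - x_c| around the singularity x_s
    fits within R_Sup; that elongated path replaces the direct distance
    when computing the node weight."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    support = []
    for x_p in nodes:
        if x_p in hidden:
            d = dist(x_p, x_s) + dist(x_s, x_c)  # detour around singularity
        else:
            d = dist(x_p, x_c)                   # direct distance
        if d <= r_sup:
            support.append((x_p, d))             # node and its weight distance
    return support
```

A hidden node just behind the singularity is thus retained with a slightly larger weight distance, while hidden nodes whose detour exceeds $R_{Sup}$ drop out of the support.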
\subsubsection{Results}
In this section, we assess the impact of the support node selection criterion on the $L_2$ error for the L-shape problem. Results obtained with the visibility criterion and with the diffraction criterion are compared to results where no criterion is considered (i.e., every node within the support radius of a collocation node is included in the support).
Both the GFD and DC PSE methods are considered in this section.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00002,ymax=0.08,xmin=3000, xmode=log, legend entries={No Criterion,Visibility Criterion,Diffraction Criterion},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S11-L2-REL-GFD-NoVis, col sep=comma] {VisResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S11-L2-REL-GFD-Vis, col sep=comma] {VisResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM, y=S11-L2-REL-GFD-Diff, col sep=comma] {VisResults-LShape.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.15}{0.5}{red};
\logLogSlopeTriangle{0.9}{0.1}{0.52}{0.7}{blue};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00002,ymax=0.08,xmin=3000, xmode=log, legend entries={No Criterion,Visibility Criterion,Diffraction Criterion},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S12-L2-REL-GFD-NoVis, col sep=comma] {VisResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S12-L2-REL-GFD-Vis, col sep=comma] {VisResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM, y=S12-L2-REL-GFD-Diff, col sep=comma] {VisResults-LShape.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.10}{0.5}{red};
\logLogSlopeTriangle{0.9}{0.1}{0.45}{0.7}{blue};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{Support node selection results comparison - 2D L-Shape - GFD Method. $L_2$ error obtained with no node selection criterion, with the visibility criterion, and the diffraction criterion. The lowest error is observed for the visibility criterion.}
\label{ResultsVis-LShape-GFD}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00002,ymax=0.1,xmin=3000, xmode=log, legend entries={No Criterion,Visibility Criterion,Diffraction Criterion},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S11-L2-REL-DCPSE-NoVis, col sep=comma] {VisResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S11-L2-REL-DCPSE-Vis, col sep=comma] {VisResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM, y=S11-L2-REL-DCPSE-Diff, col sep=comma] {VisResults-LShape.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.15}{0.5}{red};
\logLogSlopeTriangle{0.9}{0.1}{0.52}{0.8}{blue};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymode=log, ymin=0.00002,ymax=0.1,xmin=3000, xmode=log, legend entries={No Criterion,Visibility Criterion,Diffraction Criterion},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NODE-NUM, y=S12-L2-REL-DCPSE-NoVis, col sep=comma] {VisResults-LShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NODE-NUM, y=S12-L2-REL-DCPSE-Vis, col sep=comma] {VisResults-LShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NODE-NUM, y=S12-L2-REL-DCPSE-Diff, col sep=comma] {VisResults-LShape.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.1}{0.5}{red};
\logLogSlopeTriangle{0.9}{0.1}{0.45}{0.8}{blue};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{Support node selection results comparison - 2D L-Shape - DC PSE Method. $L_2$ error obtained with no node selection criterion, with the visibility criterion, and the diffraction criterion. The lowest error is observed for the visibility criterion.}
\label{ResultsVis-LShape-DCPSE}
\end{figure}
It can be observed that, for both methods, the use of the visibility criterion leads to a significant error reduction compared to results where no criterion is applied. For the GFD method, the error reduction ranges from 50\% to 20\% depending on the node density. For the DC PSE method, the error reduction ranges from 55\% to 25\%. These results are expected, as the singularity of the domain is more accurately represented with the visibility criterion. The diffraction criterion leads to an error increase ranging from a factor of 2 to a factor of 10 for the GFD and DC PSE methods, although, for both methods, the convergence rate is higher with the diffraction criterion. The visibility criterion appears to be a sensible choice for non-convex, singular problems.
\section{Benchmarking}\label{Benchmarking}
In Section \ref{ParametricAnalysis}, we presented studies of the parameters influencing the GFD and DC PSE methods and selected the set of optimal parameters summarized in Table \ref{ParametersSummary}. Based on these parameters, we first compare in this section the three variations of the DC PSE method presented in Section \ref{DCPEVariationsSec}, and then the methods presented in Section \ref{MethodDescription}. The results are assessed in terms of the $L_2$ and $L_{\infty}$ error norms for the 2D cylinder and the 2D L-shape problems. None of the improvement methods presented in Section \ref{ImprovementMethods_Section} have been used to derive the results presented in this section.
\subsection{DC PSE Variations Comparison}\label{DCPSEVariationComp}
The three variations of the DC PSE method are studied in this section. The results in terms of $L_2$ error are compared in Figure \ref{DCPSE_Comparison} and in Figure \ref{DCPSE_Comparison_L-Shape}, respectively, for the 2D cylinder and the 2D L-shape problems.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.00000033,ymax=0.0001,ymode=log,xmin=1000,xmax=34000, xmode=log, legend entries={DCPSE0,DCPSE1,DCPSE2},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S11-DCPSE0, col sep=comma] {DCPSECompCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S11-DCPSE1, col sep=comma] {DCPSECompCylinder.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NodeNum, y=L2-REL-S11-DCPSE2, col sep=comma] {DCPSECompCylinder.csv};
\logLogSlopeTriangle{0.8}{0.1}{0.16}{1.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.00000033,ymax=0.0001,ymode=log,xmin=1000,xmax=34000, xmode=log, legend entries={DCPSE0,DCPSE1,DCPSE2},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S12-DCPSE0, col sep=comma] {DCPSECompCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S12-DCPSE1, col sep=comma] {DCPSECompCylinder.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NodeNum, y=L2-REL-S12-DCPSE2, col sep=comma] {DCPSECompCylinder.csv};
\logLogSlopeTriangle{0.8}{0.1}{0.16}{1.6}{black};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{DC PSE method variations comparison - 2D Cylinder. $L_2$ error as a function of the number of nodes in the model for the DCPSE0, DCPSE1 and DCPSE2 variations of the DC PSE method. No distinction can be observed between the different methods.}
\label{DCPSE_Comparison}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.000034,ymax=0.008,ymode=log, xmin=3000,xmax=1000000, xmode=log, legend entries={DCPSE0,DCPSE1,DCPSE2},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S11-DCPSE0, col sep=comma] {DCPSECompLShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S11-DCPSE1, col sep=comma] {DCPSECompLShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NodeNum, y=L2-REL-S11-DCPSE2, col sep=comma] {DCPSECompLShape.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.22}{0.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.000034,ymax=0.008,ymode=log,xmin=3000,xmax=1000000, xmode=log, legend entries={DCPSE0,DCPSE1,DCPSE2},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S12-DCPSE0, col sep=comma] {DCPSECompLShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S12-DCPSE1, col sep=comma] {DCPSECompLShape.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NodeNum, y=L2-REL-S12-DCPSE2, col sep=comma] {DCPSECompLShape.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.1}{0.6}{black};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{DC PSE method variations comparison - 2D L-Shape. $L_2$ error as a function of the number of nodes in the model for the DCPSE0, DCPSE1 and DCPSE2 variations of the DC PSE method. No distinction can be observed between the different methods.}
\label{DCPSE_Comparison_L-Shape}
\end{figure}
It can be observed from Figure \ref{DCPSE_Comparison} and Figure \ref{DCPSE_Comparison_L-Shape} that all three methods lead to very similar results. To quantify the difference, the relative differences of the DCPSE0 and DCPSE1 errors with respect to the DCPSE2 error are presented in Figure \ref{DCPSE_Comparison2} and Figure \ref{DCPSE_Comparison2_LShaped}.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.005,ymax=0.025,xmin=1000,xmax=34000, xmode=log, legend entries={DCPSE0,DCPSE1},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=south east,xlabel=Number of Nodes,ylabel=Rel. Diff. to DCPSE2 Error - $\sigma_{11}$, ytick={0,0.005,0.010,0.015,0.020,0.025}, scaled y ticks=false,yticklabel style={/pgf/number format/fixed, /pgf/number format/precision=5}]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S11-DCPSE0-FRAC, col sep=comma] {DCPSECompCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S11-DCPSE1-FRAC, col sep=comma] {DCPSECompCylinder.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=0.005,ymax=0.025,xmin=1000,xmax=34000, xmode=log, legend entries={DCPSE0,DCPSE1},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=south east,xlabel=Number of Nodes,ylabel=Rel. Diff. to DCPSE2 Error - $\sigma_{12}$, ytick={0,0.005,0.010,0.015,0.020,0.025}, scaled y ticks=false,yticklabel style={/pgf/number format/fixed, /pgf/number format/precision=5}]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S12-DCPSE0-FRAC, col sep=comma] {DCPSECompCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S12-DCPSE1-FRAC, col sep=comma] {DCPSECompCylinder.csv};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{DC PSE method variations comparison - 2D Cylinder. Relative difference to the DCPSE2 $L_2$ error for the DCPSE0 and DCPSE1 methods. The DCPSE2 method leads to the lowest error, followed by the DCPSE1 method.}
\label{DCPSE_Comparison2}
\end{figure}
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=-0.007,ymax=-0.0025,ytick={-0.007,-0.005,-0.003},xmin=3000,xmax=1000000, xmode=log, legend entries={DCPSE0,DCPSE1},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=south east,xlabel=Number of Nodes,ylabel=Rel. Diff. to DCPSE2 Error - $\sigma_{11}$, scaled y ticks=false,yticklabel style={/pgf/number format/fixed, /pgf/number format/precision=5}]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S11-DCPSE0-FRAC, col sep=comma] {DCPSECompLShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S11-DCPSE1-FRAC, col sep=comma] {DCPSECompLShape.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=6cm,width=7.5cm, ymin=-0.015,ymax=0.01,xmin=3000,xmax=1000000, xmode=log, legend entries={DCPSE0,DCPSE1},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=south east,xlabel=Number of Nodes,ylabel=Rel. Diff. to DCPSE2 Error - $\sigma_{12}$, scaled y ticks=false,yticklabel style={/pgf/number format/fixed, /pgf/number format/precision=5}]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S12-DCPSE0-FRAC, col sep=comma] {DCPSECompLShape.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S12-DCPSE1-FRAC, col sep=comma] {DCPSECompLShape.csv};
\end{axis}
\end{tikzpicture}
\end{tabular}
\caption{DC PSE method variations comparison - 2D L-Shape. Relative difference to the DCPSE2 $L_2$ error for the DCPSE0 and DCPSE1 methods. The DCPSE1 method leads to the lowest error, followed by the DCPSE0 method.}
\label{DCPSE_Comparison2_LShaped}
\end{figure}
It can be observed from Figure \ref{DCPSE_Comparison2} that the minimum error is obtained with the DCPSE2 method. The DCPSE1 method leads to a slightly lower error than the DCPSE0 method. The errors obtained with the DCPSE0 and DCPSE1 methods are between 1\% and 2.5\% larger than the error obtained with the DCPSE2 method. For the 2D L-shape problem, the results presented in Figure \ref{DCPSE_Comparison2_LShaped} show that the DCPSE2 method leads to a larger error for most node densities. The DCPSE0 and DCPSE1 methods lead to similar errors.
It can be concluded from this study that all variations of the DC PSE method lead to very similar results. The assembly of the linear problem is slightly faster with the DCPSE2 method, as the coefficients of the correction function are obtained by inverting a linear system of lower dimension. The DCPSE1 method generally leads to a lower error than the DCPSE0 method and has therefore been selected for the comparison with other collocation methods presented in Subsection \ref{MethodComparison}.
\subsection{GFD, DC PSE and Other Methods Comparison}\label{MethodComparison}
\subsubsection{General}
In this section, the GFD and the DC PSE methods are compared to the MLS, IMLS and RBF-FD collocation methods. The same number of support nodes has been chosen for all methods. A 3$^\text{rd}$ order spline weight function has been chosen for the MLS method, the weight function presented in Equation (\ref{IMLSWeight}) has been considered for the IMLS method, and a Gaussian radial basis function has been selected for the RBF-FD method. The GFD and the DC PSE methods are based on the parameters presented in Table \ref{ParametersSummary}. For reference purposes, results for the finite element method (FEM), obtained using the commercial software ABAQUS \cite{Abaqus2017}, are also included in the comparison.
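The approximation at the heart of these collocation methods can be illustrated with the GFD ansatz: the derivatives at a collocation node are recovered from a weighted least-squares fit of a second-order Taylor expansion over the support nodes. The following minimal sketch is illustrative only (the Gaussian-type weight and the function name are our own choices, not those of the implementation compared here):

```python
import numpy as np

def gfd_derivatives(x0, neighbors, u0, u_neighbors, c=1.0):
    """Approximate (u_x, u_y, u_xx, u_yy, u_xy) at x0 from scattered
    neighbor values via a weighted least-squares fit of a 2nd-order
    Taylor expansion (the core idea of GFD-type collocation)."""
    d = neighbors - x0                       # offsets (dx, dy) to the node
    dx, dy = d[:, 0], d[:, 1]
    # Second-order Taylor basis (constant term eliminated by subtracting u0)
    P = np.column_stack([dx, dy, 0.5 * dx**2, 0.5 * dy**2, dx * dy])
    r = np.hypot(dx, dy)
    w = np.exp(-(c * r / r.max())**2)        # Gaussian-type weight (one common choice)
    A = P.T @ (w[:, None] * P)               # weighted normal equations
    b = P.T @ (w * (u_neighbors - u0))
    return np.linalg.solve(A, b)             # [u_x, u_y, u_xx, u_yy, u_xy]
```

For any field that is at most quadratic, this fit reproduces the derivatives exactly, which is consistent with the second-order consistency of the stencils compared in this section.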
The results from the FEM are extrapolated to the nodes. This allows the FEM results to be compared, using the same error norms, with the results obtained with the collocation methods. The same discretization as for the collocation models has been selected for the FE models: the adjacent nodes of the regular distribution have been grouped into bilinear quadrilateral elements with four integration points.
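The nodal extrapolation step can be sketched for a single bilinear quadrilateral with 2$\times$2 Gauss points: a bilinear polynomial is fitted through the four integration-point values and evaluated at the corner nodes (an illustrative reconstruction under that assumption; the exact extrapolation scheme used internally by ABAQUS may differ in detail):

```python
import numpy as np

g = 1.0 / np.sqrt(3.0)                                   # 2x2 Gauss point coordinate
GP = np.array([[-g, -g], [g, -g], [g, g], [-g, g]])      # Gauss points (xi, eta)
NODES = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], float)  # element corners

def basis(pts):
    """Bilinear monomial basis [1, xi, eta, xi*eta] at the given points."""
    xi, eta = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones_like(xi), xi, eta, xi * eta])

# Extrapolation matrix: bilinear fit through the Gauss-point values,
# evaluated at the corner nodes.
E = basis(NODES) @ np.linalg.inv(basis(GP))

def extrapolate_to_nodes(gp_values):
    """Map the 4 Gauss-point values of one quadrilateral to its 4 corners."""
    return E @ gp_values
```

By construction the mapping is exact for any bilinear field; element-shared nodes would then receive the average of the contributions from the adjacent elements.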
\subsubsection{Results for the 2D Cylinder Under Internal Pressure}
From Figure \ref{MethodCompResultsCylinder1}, it can be observed that the DCPSE1 method leads to the lowest error in terms of $L_2$ and $L_{\infty}$ norms for the $\sigma_{11}$ and $\sigma_{12}$ stress components. The GFD method leads to a slightly higher error than the DC PSE method. The MLS and IMLS methods lead to very similar results, with the IMLS error consistently lower than the MLS error. Finally, the FEM and the RBF-FD method lead to the largest errors. The error obtained with the RBF-FD method does not decrease monotonically as the node density increases.
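For reference, the error measures reported in these figures can be written compactly (a sketch, assuming the relative $L_2$ error is the discrete 2-norm of the nodal error normalized by the norm of the exact nodal solution, and the $L_{\infty}$ error is the maximum nodal error):

```python
import numpy as np

def rel_l2_error(approx, exact):
    """Relative discrete L2 error over all nodes."""
    return np.linalg.norm(approx - exact) / np.linalg.norm(exact)

def linf_error(approx, exact):
    """Maximum absolute nodal error (L-infinity norm)."""
    return np.max(np.abs(approx - exact))
```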
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm,ymode=log, ymin=0.0000003,ymax=0.002, xmin=1000,xmode=log, legend entries={GFD,DCPSE1,MLS,IMLS,RBF-FD,FEA},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=3, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S11-GFD
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S11-DCPSE1
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NodeNum, y=L2-REL-S11-MLS1
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NodeNum, y=L2-REL-S11-IMLS
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[purple,mark=+,mark options={fill=purple}] table [x=NodeNum, y=L2-REL-S11-RBF-FD
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=NodeNum, y=L2-REL-S11-FEA
, col sep=comma] {MethodCompCylinder.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.09}{1.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm,ymode=log, ymin=0.0001,ymax=0.6, xmin=1000,xmode=log, legend entries={GFD,DCPSE1,MLS,IMLS,RBF-FD,FEA},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=3, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L-INF-S11-GFD
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L-INF-S11-DCPSE1
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NodeNum, y=L-INF-S11-MLS1
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NodeNum, y=L-INF-S11-IMLS
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[purple,mark=+,mark options={fill=purple}] table [x=NodeNum, y=L-INF-S11-RBF-FD
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=NodeNum, y=L-INF-S11-FEA
, col sep=comma] {MethodCompCylinder.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.13}{1.0}{black};
\end{axis}
\end{tikzpicture} \\
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm,ymode=log, ymin=0.0000003,ymax=0.002, xmin=1000,xmode=log, legend entries={GFD,DCPSE1,MLS,IMLS,RBF-FD,FEA},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=3, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S12-GFD
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S12-DCPSE1
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NodeNum, y=L2-REL-S12-MLS1
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NodeNum, y=L2-REL-S12-IMLS
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[purple,mark=+,mark options={fill=purple}] table [x=NodeNum, y=L2-REL-S12-RBF-FD
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=NodeNum, y=L2-REL-S12-FEA
, col sep=comma] {MethodCompCylinder.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.09}{1.6}{black};
\end{axis}
\end{tikzpicture}&
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm,ymode=log, ymin=0.0001,ymax=0.6, xmin=1000,xmode=log, legend entries={GFD,DCPSE1,MLS,IMLS,RBF-FD,FEA},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=3, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_{\infty}$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L-INF-S12-GFD
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L-INF-S12-DCPSE1
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NodeNum, y=L-INF-S12-MLS1
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NodeNum, y=L-INF-S12-IMLS
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[purple,mark=+,mark options={fill=purple}] table [x=NodeNum, y=L-INF-S12-RBF-FD
, col sep=comma] {MethodCompCylinder.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=NodeNum, y=L-INF-S12-FEA, col sep=comma] {MethodCompCylinder.csv};
\logLogSlopeTriangle{0.9}{0.1}{0.11}{1.0}{black};
\end{axis}
\end{tikzpicture}\\
\end{tabular}
\caption{Methods comparison - 2D Cylinder. $L_2$ (Left) and $L_{\infty}$ (Right) errors as a function of the number of nodes for various collocation methods (i.e. GFD, DCPSE1, MLS, IMLS and RBF-FD) and for the FEM. The lowest error is obtained with the DCPSE1 method.}
\label{MethodCompResultsCylinder1}
\end{figure}
\subsubsection{Results for the L-Shape Problem}
The different methods are also compared for the 2D L-shape problem. Only the $L_2$ error is presented for this problem, as the $L_{\infty}$ error diverges when the distance between the closest node and the singularity tends to zero. The results are presented in Figure \ref{MethodCompResultsLShaped} below.
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm,ymode=log, ymin=0.00002,ymax=0.03, xmin=3000,xmax=1000000,xmode=log, legend entries={GFD,DCPSE1,MLS,IMLS,RBF-FD,FEA},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=3, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{11}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S11-GFD
, col sep=comma] {MethodCompLShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S11-DCPSE1
, col sep=comma] {MethodCompLShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NodeNum, y=L2-REL-S11-MLS1
, col sep=comma] {MethodCompLShaped.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NodeNum, y=L2-REL-S11-IMLS
, col sep=comma] {MethodCompLShaped.csv};
\addplot+[purple,mark=+,mark options={fill=purple}] table [x=NodeNum, y=L2-REL-S11-RBF-FD
, col sep=comma] {MethodCompLShaped.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=NodeNum, y=L2-REL-S11-FEA
, col sep=comma] {MethodCompLShaped.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.2}{0.6}{black};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm,ymode=log, ymin=0.00002,ymax=0.03, xmin=3000,xmax=1000000,xmode=log, legend entries={GFD,DCPSE1,MLS,IMLS,RBF-FD,FEA},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=3, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=Number of Nodes,ylabel=$L_2$ Error - $\sigma_{12}$]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=NodeNum, y=L2-REL-S12-GFD
, col sep=comma] {MethodCompLShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=NodeNum, y=L2-REL-S12-DCPSE1
, col sep=comma] {MethodCompLShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=NodeNum, y=L2-REL-S12-MLS1
, col sep=comma] {MethodCompLShaped.csv};
\addplot+[yellow,mark=*,mark options={fill=yellow}] table [x=NodeNum, y=L2-REL-S12-IMLS
, col sep=comma] {MethodCompLShaped.csv};
\addplot+[purple,mark=+,mark options={fill=purple}] table [x=NodeNum, y=L2-REL-S12-RBF-FD
, col sep=comma] {MethodCompLShaped.csv};
\addplot+[orange,mark=pentagon*,mark options={fill=orange}] table [x=NodeNum, y=L2-REL-S12-FEA
, col sep=comma] {MethodCompLShaped.csv};
\logLogSlopeTriangle{0.85}{0.1}{0.1}{0.6}{black};
\end{axis}
\end{tikzpicture} \\
\end{tabular}
\caption{Methods comparison - 2D L-Shape. $L_2$ errors as a function of the number of nodes for various collocation methods (i.e. GFD, DCPSE1, MLS, IMLS and RBF-FD) and for the FEM. The lowest error is obtained with the FEM. The collocation method leading to the lowest error is the MLS method.}
\label{MethodCompResultsLShaped}
\end{figure}
We can see in Figure \ref{MethodCompResultsLShaped} that the results obtained with the FEM are the closest to the analytical solution. The MLS, IMLS, GFD and DC PSE methods lead to similar results. The trend is, however, opposite to the results presented in Figure \ref{MethodCompResultsCylinder1}, which shows how the methods are affected by a rapid change in the field solution. Among the collocation methods, the MLS method leads to the lowest error. Finally, the RBF-FD method leads to the highest error.
\subsubsection{Convergence Rate and Computational Expense}
The methods are also compared in terms of convergence rate and solution time. Results are summarized in Table \ref{MethodCompSummaryTable} below. It can be observed that the RBF-FD method has the lowest convergence rate for the 2D cylinder problem, and the largest convergence rate for the L-shape problem. The IMLS method shows the largest convergence rate for the 2D cylinder problem. The GFD method is the one having the lowest convergence rate for the L-shape problem.
The right column of Table \ref{MethodCompSummaryTable} shows that the computation time for the MLS and IMLS methods is significantly larger than for the other methods. This is due to the assembly step, which is more time-consuming for these methods, as the system presented in Equation (\ref{DerivativeSystem_MLS}) needs to be solved for each derivative.
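An average convergence rate of this kind can be estimated from the log-log slope of an error curve (a sketch, assuming a quasi-uniform node distribution so that the spacing scales as $h \sim N^{-1/d}$ in $d$ dimensions):

```python
import numpy as np

def avg_convergence_rate(n_nodes, errors, dim=2):
    """Average convergence rate with respect to nodal spacing h.
    For a quasi-uniform distribution h ~ N^(-1/dim), so the rate is the
    negated log-log slope of error vs. N, scaled by the dimension."""
    slope = np.polyfit(np.log(n_nodes), np.log(errors), 1)[0]
    return -dim * slope
```

For example, an error decaying as $N^{-0.75}$ on a 2D node set corresponds to a rate of $1.5$ in $h$, comparable to the values listed for the 2D cylinder problem.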
\begin{table}[h]
\centering
\caption{Method Comparison Summary}
\label{MethodCompSummaryTable}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\centering \textbf{Method}}} & \multicolumn{2}{c|}{\begin{tabular}[x]{@{}c@{}}\textbf{Average $L_2$}\\ \textbf{Convergence Rate} \end{tabular} }&
\multirow{2}{*}{\begin{tabular}[x]{@{}c@{}}\textbf{Computation Time $^\text{(1)}$} \end{tabular} }\\
& \text{2D Cylinder} & \text{2D L-Shape} & \\
\hline
\centering
GFD & 1.5109 & 0.6244 & 9.3s \\
DCPSE1 & 1.5592 & 0.6405 & 11.4s\\
MLS & 1.5064 & 0.6249 & 19.7s \\
IMLS & 1.5717 & 0.6279 & 20.5s \\
RBF-FD & 1.0952 & 0.7401 & 10.1s \\
\hline
\multicolumn{4}{@{}l}{\footnotesize (1) Based on a 12,087-node 2D cylinder model solved with a direct solver.}
\end{tabular}
\end{table}
\pagebreak
The computation times presented in Table \ref{MethodCompSummaryTable} can be split into four main steps. These steps are:
\begin{itemize}
\item Problem initialization;
\item Matrix assembly;
\item Solution of the linear problem;
\item Postprocessing and results output.
\end{itemize}
The initialization step consists of loading the problem from the input file and searching for the node neighbors (the nodes to be included in the support of each collocation node). During the assembly step, all the derivatives of the unknown field are approximated as a function of the field, and the linear problem is assembled. For the solution of the linear problem, direct solvers can be used for 2D problems of a reasonable size, while iterative solvers should be used for large 3D problems, as the matrix of the linear problem is significantly denser than for 2D problems. Finally, the postprocessing step consists of the computation of the quantities of interest (stress components based on the displacement field). In this work, the direct solver MUMPS \cite{MUMPS01,MUMPS02} and the iterative solver PETSc KSP \cite{petsc-user-ref,petsc-efficient} have been used. The analyses have been run using a C++ code developed in-house. The code was run on a machine equipped with an Intel Xeon E5-1650 processor at 3.2 GHz. A single process and a single thread have been used for the analyses.
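The neighbor search of the initialization step can be sketched as follows (a brute-force $O(N^2)$ illustration for clarity; a production code would typically rely on a spatial search structure such as a k-d tree or a cell list):

```python
import numpy as np

def find_supports(nodes, radius):
    """For every collocation node, return the indices of all nodes lying
    within the given support radius (the node itself excluded)."""
    diff = nodes[:, None, :] - nodes[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)      # full pairwise distance matrix
    n = len(nodes)
    supports = []
    for i in range(n):
        idx = np.where((dist[i] <= radius) & (np.arange(n) != i))[0]
        supports.append(idx)
    return supports
```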
For various node densities, the fractions of the total analysis time spent in the matrix assembly step and in the solution step are presented in Figure \ref{TimeSplitLShapedGFD} and Figure \ref{TimeSplitLShapedIMLS} for the L-shape problem. The duration of the problem initialization and postprocessing steps is negligible compared to the two other steps and is therefore not shown in these figures. The total analysis duration is also presented in these figures on a secondary axis. The results are presented for the GFD and IMLS methods as these are, respectively, the fastest and the slowest methods for the node density presented in Table \ref{MethodCompSummaryTable}.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[height=7cm,width=10cm,ymode=log, ymin=0.01,ymax=1, xmin=5000,xmax=1100000,xmode=log, legend entries={Frac. Assembly,Frac. Solution,Analysis Duration},legend style={ at={(0.5,-0.2)},anchor=south east,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, axis y line*=left, legend pos=south east, xlabel=Number of Nodes,ylabel=Fraction of analysis duration]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X, y=GFD-Build, col sep=comma] {TimeSplitLShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X, y=GFD-Solve, col sep=comma] {TimeSplitLShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X, y=X, col sep=comma] {TimeSplitLShaped.csv};
\end{axis}
\begin{axis}[height=7cm,width=10cm,ymode=log, ymin=0.1,ymax=10000, xmin=5000,xmax=1100000,xmode=log, legend entries={},legend style={ at={(0.5,-0.2)},anchor=south east,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, axis x line=none, legend pos=south west,axis y line*=right,ylabel=Analysis Duration (s)]
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X, y=GFD-Time, col sep=comma] {TimeSplitLShaped.csv};
\end{axis}
\end{tikzpicture}
\caption{Computation time split and analysis duration - GFD method. Impact of the number of nodes on the fraction of the analysis spent in each step and total analysis time.}
\label{TimeSplitLShapedGFD}
\end{figure}
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{axis}[height=7cm,width=10cm,ymode=log, ymin=0.01,ymax=1, xmin=5000,xmax=1100000,xmode=log, legend entries={Frac. Assembly,Frac. Solution,Analysis Duration},legend style={ at={(0.5,-0.2)},anchor=south east,legend columns=2, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, axis y line*=left, legend pos=south east, xlabel=Number of Nodes,ylabel=Fraction of analysis duration]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X, y=IMLS-Build, col sep=comma] {TimeSplitLShaped.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X, y=IMLS-Solve, col sep=comma] {TimeSplitLShaped.csv};
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X, y=X, col sep=comma] {TimeSplitLShaped.csv};
\end{axis}
\begin{axis}[height=7cm,width=10cm,ymode=log, ymin=0.1,ymax=10000, xmin=5000,xmax=1100000,xmode=log, legend entries={},legend style={ at={(0.5,-0.2)},anchor=south east,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, axis x line=none, legend pos=south west,axis y line*=right,ylabel=Analysis Duration (s)]
\addplot+[blue,mark=triangle*,mark options={fill=blue}] table [x=X, y=IMLS-Time, col sep=comma] {TimeSplitLShaped.csv};
\end{axis}
\end{tikzpicture}
\caption{Computation time split and analysis duration - IMLS method. Impact of the number of nodes on the fraction of the analysis spent in each step and total analysis time.}
\label{TimeSplitLShapedIMLS}
\end{figure}
It can be observed from Figure \ref{TimeSplitLShapedGFD} and Figure \ref{TimeSplitLShapedIMLS} that the trend of the results is similar for both methods. The assembly step represents the largest proportion of the total analysis time for small 2D problems. As the number of degrees of freedom increases, the fraction of the analysis time spent in the solution step (here with a direct solver) increases and becomes larger than the fraction spent in the assembly step. This result is expected, as the assembly time increases linearly with the number of nodes while the direct solution time grows superlinearly with the number of degrees of freedom. The increased computational effort required by the IMLS method during the assembly step is also visible: the fraction of the analysis spent assembling the matrix is larger than for the GFD method.
\section{Three Dimensional Problems} \label{3DResults}
The parametric study presented earlier allowed the selection of ``optimal'' weight functions and support sizes for solving problems from different fields of application. In this section, we present the results from the stress analysis of various three dimensional problems. The results in terms of von Mises stress obtained with the GFD method are compared to the results obtained with ABAQUS. For a consistent comparison, the same discretization has been used for both methods. The results obtained from the FEA are extrapolated to the nodes. This tends to overestimate the error for the FEM, as the stresses are less accurate at the nodes than at the integration points. It allows, however, a comparison, at each node, of the results derived with the FE method to the results derived with the GFD method.
In order to obtain convergence of the 3D problems with the collocation method, a relatively large number of nodes is used. Such a high density is required to capture the details of the geometry.
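The von Mises stress compared in the following figures is obtained from the six stress components computed from the displacement field, using the standard definition:

```python
import numpy as np

def von_mises(s11, s22, s33, s12, s23, s13):
    """Von Mises equivalent stress from the six stress components
    (the quantity plotted in the 3D comparisons)."""
    return np.sqrt(0.5 * ((s11 - s22)**2 + (s22 - s33)**2 + (s33 - s11)**2)
                   + 3.0 * (s12**2 + s23**2 + s13**2))
```

As sanity checks, a uniaxial stress state returns the applied stress itself, and a pure shear state $\sigma_{12}=\tau$ returns $\sqrt{3}\,\tau$.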
\paragraph{Flange Model} \
The first problem considered in this section is an ISO flange. Only a quarter of the flange has been modeled due to the symmetries of the domain. The model and the various surfaces on which boundary conditions are applied are presented in Figure \ref{FlangeBCs}. Two load cases have been considered for this model: an internal pressure loading and an axial traction imposed by the connected pipe. The boundary conditions associated with each load case are presented in Table \ref{FlangeLCsBCs}.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\def10cm{10cm}
\node at (0,0) {\includegraphics{FlangeBCs_2.pdf}};
\node[color=black] at (-4.3,2.1) [left] {Internal Surface};
\node[color=black] at (-4.3,1.1) [left] {XZ Sym. Plane};
\node[color=black] at (-4.3,0.35) [left] {YZ Sym. Plane};
\node[color=black] at (-4.3,-0.4) [left] {XY Sym. Plane};
\node[color=black] at (5,2.1) [right] {Top Face};
\node[color=black] at (5,0.7) [right] {External Surfaces};
\end{tikzpicture}
\caption{Flange Model and Boundary Conditions}
\label{FlangeBCs}
\end{figure}
\begin{table}[h]
\centering
\caption{Boundary conditions applied to the flange for the pressure and displacement load cases. The surfaces are highlighted in Figure \ref{FlangeBCs}.}
\label{FlangeLCsBCs}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\centering \textbf{Surface}}} & \multicolumn{2}{c|}{\textbf{Boundary Conditions}}\\
& \multicolumn{1}{c|}{\text{Pressure Loading}} & \multicolumn{1}{c|}{\text{Displacement Loading}} \\
\hline
\centering
XY Sym. Plane & Constrained in the Z direction & Constrained in the Z direction \\
XZ Sym. Plane & Constrained in the Y direction & Constrained in the Y direction\\
YZ Sym. Plane & Constrained in the X direction & Constrained in the X direction\\
Internal Surface & Constant pressure of 1.0 & Stress free\\
External Surface & Stress free & Stress free\\
Top Face & Constrained in the Z direction & Applied displacement of $6.2\times10^{-4}$ in the Z direction\\
\hline
\end{tabular}
\end{table}
Figure \ref{ISO_Flange_Pressure} and Figure \ref{ISO_Flange_Traction} show the von Mises stresses obtained with the GFD and the FE methods, respectively, for the internal pressure and traction load cases.
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale=0.35]{Flange_Pressure_Collocation.png}}
\qquad
\subfloat[]{\includegraphics[scale=0.35]{Flange_Pressure_FEA.png}}
\caption{Flange ISO PN50 DN25 subject to an internal pressure - von Mises stress results from the GFD method (a) and FEM (b) (548,648 nodes). The results from both models are very similar. The stress on the inner surface of the flange is larger for the GFD method.}
\label{ISO_Flange_Pressure}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale=0.35]{Flange_Traction_Collocation.png}}
\qquad
\subfloat[]{\includegraphics[scale=0.35]{Flange_Traction_FEA.png}}
\caption{Flange ISO PN50 DN25 under traction - von Mises stress results from the GFD method (a) and FEM (b) (548,648 nodes). The results from both models are very similar. The stress in the neck of the flange is slightly larger for the GFD method.}
\label{ISO_Flange_Traction}
\end{figure}
The results obtained from the GFD method and from the FEM are very close. In order to visually assess the difference between the two solutions, the difference between the von Mises stress results obtained from the GFD model and from the FE model are presented in Figure \ref{ISO_Flange_CollocFEA} for both load cases.
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale=0.35]{Flange_Pressure_CollocationFEA.png}}
\qquad
\subfloat[]{\includegraphics[scale=0.35]{Flange_Traction_CollocationFEA.png}}
\caption{Flange ISO PN50 DN25 - Difference between von Mises stress results obtained from the GFD method and FEM for the internal pressure load case (a) and the traction load case (b) (548,648 nodes). The von Mises stress results are larger for the GFD method on the inner surface for the flange under internal pressure, and in the neck and in the bottom of the conical section for the flange under traction.}
\label{ISO_Flange_CollocFEA}
\end{figure}
It can be observed from Figure \ref{ISO_Flange_CollocFEA}(a) that the stresses on the inner surface of the flange are larger for the GFD method. From Figure \ref{ISO_Flange_CollocFEA}(b), it can be observed that the stresses obtained in the neck and in the bottom of the conical section are larger for the GFD method.
\paragraph{Blade Model} \
Figure \ref{BladeBCs} presents a simplified model of a high pressure blade. The surfaces, on which the boundary conditions are applied, are presented in this figure. The nodes in the planes YZ, XZ and XY are, respectively, fixed in the X, Y and Z directions. A constant pressure resulting from a gas flow is applied on the pressurized surface. The remaining surfaces of the blade are considered stress free.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\def10cm{10cm}
\node at (0,0) {\includegraphics{BladeBCs_2.pdf}};
\node[color=black] at (-4.2,2.2) [left] {XZ Plane};
\node[color=black] at (-4.2,1.3) [left] {YZ Plane};
\node[color=black] at (-4.2,0.35) [left] {XY Plane};
\node[color=black] at (4.6,1.25) [right] {External Surfaces};
\node[color=black] at (4.6,0.0) [right] {Pressurized Surface};
\end{tikzpicture}
\caption{Blade model and boundary conditions}
\label{BladeBCs}
\end{figure}
Figure \ref{SmallBlade} shows the von Mises stress results for the GFD and FE methods. As for the flange problem, the difference between the two von Mises stress solutions is presented in Figure \ref{BladeResultsDifference}.
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale=0.6]{BladeCollocation.png}}
\subfloat[]{\includegraphics[scale=0.6]{BladeFEA.png}}
\caption{Simplified high pressure blade subjected to a uniform pressure on one face - The von Mises stress results from the GFD method (a) and FEM (b) (484,238 nodes).}
\label{SmallBlade}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale=0.4]{BladeCollocationFEA.png}}
\subfloat[]{\includegraphics[scale=0.4]{BladeCollocationFEA2.png}}
\caption{Simplified high pressure blade subjected to a uniform pressure on one face - Difference between von Mises stress results obtained from the GFD method and FEM (484,238 nodes). The stress concentration at the interface between the blade and the support is larger for the GFD model than for the FE model.}
\label{BladeResultsDifference}
\end{figure}
It can be observed from Figure \ref{SmallBlade} and Figure \ref{BladeResultsDifference} that the stress concentration in the zone between the blade and the support is larger for the GFD model.
\paragraph{Horseshoe Model} \
The horseshoe model was solved in 2005 by Hughes et al. \cite{Hughes2005} using the IGA method. This model has been reproduced and is presented in Figure \ref{HorseshoeBCs}. The nodes of the top left plane are fixed in the X and Z directions. The nodes of the top left plane and of the top right plane are subjected, respectively, to a positive and a negative displacement in the Y direction; these displacements are equal in absolute value. The external surfaces of the horseshoe are considered stress free.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\def10cm{8cm}
\node at (0,0) {\includegraphics{Horseshoe1_2.pdf}};
\node[color=black] at (-3.7,2.1) [left] {Top Left Plane};
\node[color=black] at (-3.7,1.05) [left] {External Surfaces};
\node[color=black] at (3.7,2) [right] {Top Right Plane};
\end{tikzpicture}
\caption{Horseshoe model and boundary conditions}
\label{HorseshoeBCs}
\end{figure}
Figure \ref{Horseshoe_CollocFEA} shows the von Mises stress results for the GFD and FE models. The two figures on the left show two different views of the solution of the problem solved with the GFD method. The two figures on the right show the solution of the problem solved with the FEM. It can be observed from Figure \ref{Horseshoe_CollocFEA} that the stress concentration in the bottom of the horseshoe is larger in the GFD solution. A higher stress concentration is also observed at the edges of the top left plane in the GFD solution.
It should be noted that the computation time was lower for the FE method than for the GFD method. This is due to the loading, which creates a singularity at the edges of the top left and top right planes. This singularity affects the GFD model more strongly, as the model is solved by collocation (strong form of the PDE).
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale=0.39]{HorseshoeGFD.png}}
\qquad
\subfloat[]{\includegraphics[scale=0.39]{HorseshoeFEA.png}}\\
\subfloat[]{\includegraphics[scale=0.39]{HorseshoeGFD_Top.png}}
\qquad
\subfloat[]{\includegraphics[scale=0.39]{HorseshoeFEA_Top.png}}
\caption{Horseshoe under shear loading - von Mises stress results from GFD method (a) (c) and FEM (b) (d) (521,326 nodes). The stress concentration in the inner surface of the horseshoe is slightly larger for the GFD method.}
\label{Horseshoe_CollocFEA}
\end{figure}
The difference between the two von Mises stress solutions is presented in Figure \ref{HorseshoeTop_CollocFEA}. This figure confirms that higher stress concentrations are observed for the GFD model at the edges of the top planes and in the bottom section of the horseshoe.
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale=0.39]{HorseshoeGFDFEA.png}}
\qquad
\subfloat[]{\includegraphics[scale=0.39]{HorseshoeGFDFEA_Top.png}}
\caption{Horseshoe under shear loading - Difference between the von Mises stress results obtained from the GFD method and FEM (521,326 nodes). The stress concentration in the inner surface of the horseshoe is slightly larger for the GFD method.}
\label{HorseshoeTop_CollocFEA}
\end{figure}
\paragraph{Fichera's Corner Model} \
Fichera's corner model, analyzed in \cite{Dimitrov2001,Rachowicz2006,Zander2016}, is presented in Figure \ref{FicheraBCs}. The characteristic planes, on which boundary conditions have been applied, are highlighted and labeled in this figure. The nodes in the planes YZ, XZ and XY are, respectively, fixed in the X, Y and Z directions. A uniform traction is applied on the front face of the truncated cube. The rest of the surfaces are considered stress free. When the model is solved using the GFD method, the internal corner nodes are excluded: the stress is infinite at these locations, and their inclusion leads to divergence of the solution. The visibility criterion presented in Section \ref{SupNodeSelectionGle} is applied to this problem.
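The line-of-sight test underlying the visibility criterion can be sketched in a few lines. The sketch below is an illustration under simplifying assumptions (2D geometry, boundary described by straight edges), not the implementation used in this work.

```python
def segments_intersect(p1, p2, q1, q2):
    """True if the open segments p1-p2 and q1-q2 properly cross."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible_support(center, candidates, boundary_edges):
    """Visibility criterion: keep only the candidate support nodes whose
    straight line of sight to the collocation node does not cross any
    boundary edge (e.g. the re-entrant edges of an L-shape or of
    Fichera's corner)."""
    return [node for node in candidates
            if not any(segments_intersect(center, node, a, b)
                       for a, b in boundary_edges)]
```

For a collocation node near a re-entrant corner, candidate nodes lying "around the corner" are discarded from the support, consistent with the improved behavior reported for concave problems.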
\begin{figure}[H]
\centering
\begin{tikzpicture}
\def10cm{10cm}
\node at (0,0) {\includegraphics{FicheraBCs_2.pdf}};
\node[color=black] at (-4.8,0.8) [left] {External Surfaces};
\node[color=black] at (-4.8,-0.9) [left] {Front Face};
\node[color=black] at (4.8,2.1) [right] {XY Plane};
\node[color=black] at (4.8,1.1) [right] {YZ Plane};
\node[color=black] at (4.8,-0.15) [right] {XZ Plane};
\end{tikzpicture}
\caption{Fichera's corner model}
\label{FicheraBCs}
\end{figure}
Figure \ref{FicheraCorner} shows the von Mises stress results for the GFD and FE methods. As for the three other problems, the difference between the two von Mises stress solutions is presented in Figure \ref{LShapeResultsDifference}.
\begin{figure}[H]
\centering
\def10cm{8.5cm}
\subfloat[]{\includegraphics{FicheraCollocationArrows_2.pdf}}
\subfloat[]{\includegraphics[scale=0.327]{L_Shape_FEA.png}}
\caption{Fichera corner subjected to a uniform traction on the front side - von Mises stress results from the GFD method (a) and FEM (b) (264,726 nodes).}
\label{FicheraCorner}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[scale=0.33]{L_Shape_CollocationFEA.png}}
\subfloat[]{\includegraphics[scale=0.33]{L_Shape_CollocationFEA2.png}}
\caption{Fichera corner subjected to a uniform traction on the front side - Difference between von Mises stress results obtained from the GFD method and FEM (264,726 nodes). The stress concentration in the internal corners is larger for the FE model.}
\label{LShapeResultsDifference}
\end{figure}
It can be observed from Figure \ref{FicheraCorner} and Figure \ref{LShapeResultsDifference} that the stress concentration near the internal edges is larger for the FE model than for the GFD model. The GFD model leads to larger stresses only in the center of the corner. To visualize the results of this analysis more precisely, the von Mises stress results are plotted in Figure \ref{ResultsCompFichera} along the axes D, D' and D" presented in Figure \ref{FicheraCorner}(a). The axes D and D' follow the edges of the model, while the axis D" is slightly offset from the internal corner, as results are not available at the corner nodes for the GFD method. The truncated cube has an edge length of 4 and the internal corner is located at (0,0,0).
\begin{figure}[H]
\centering
\begin{tabular}{c:c}
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm, ymin=0.4,ymax=2, xmin=-2,xmax=2, legend entries={GFD,FEA},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=x coordinate along axis D,ylabel=von Mises Stress]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X1-1, y=X1-1-Colloc
, col sep=comma] {LShapeCollocFEAComp.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X1-1, y=X1-1-FEA
, col sep=comma] {LShapeCollocFEAComp.csv};
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X1-2, y=X1-2-Colloc
, col sep=comma] {LShapeCollocFEAComp.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X1-2, y=X1-2-FEA
, col sep=comma] {LShapeCollocFEAComp.csv};
\end{axis}
\end{tikzpicture} &
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm, ymin=-0.10,ymax=2, xmin=-2,xmax=2, legend entries={GFD,FEA},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north east,xlabel=y coordinate along axis D',ylabel=von Mises Stress]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X2-1, y=X2-1-Colloc
, col sep=comma] {LShapeCollocFEAComp.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X2-1, y=X2-1-FEA
, col sep=comma] {LShapeCollocFEAComp.csv};
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X2-2, y=X2-2-Colloc
, col sep=comma] {LShapeCollocFEAComp.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X2-2, y=X2-2-FEA
, col sep=comma] {LShapeCollocFEAComp.csv};
\end{axis}
\end{tikzpicture} \\
(a) & (b) \\
\end{tabular}
\begin{tabular}{c}
\begin{tikzpicture}
\begin{axis}[height=7cm,width=7.5cm, ymin=0.7,ymax=2, xmin=-2,xmax=2, legend entries={GFD,FEA},legend style={ at={(0.5,-0.2)},anchor=south west,legend columns=1, cells={anchor=west}, font=\footnotesize, rounded corners=2pt,}, legend pos=north west,xlabel=z coordinate along axis D",ylabel=von Mises Stress]
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X3-1, y=X3-1-Colloc
, col sep=comma] {LShapeCollocFEAComp.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X3-1, y=X3-1-FEA
, col sep=comma] {LShapeCollocFEAComp.csv};
\addplot+[green,mark=square*,mark options={fill=green}] table [x=X3-2, y=X3-2-Colloc
, col sep=comma] {LShapeCollocFEAComp.csv};
\addplot+[red,mark=diamond*,mark options={fill=red}] table [x=X3-2, y=X3-2-FEA
, col sep=comma] {LShapeCollocFEAComp.csv};
\end{axis}
\end{tikzpicture}\\
(c) \\
\end{tabular}
\caption{Comparison of the von Mises stress results along the axes D, D' and D", as presented in Figure \ref{FicheraCorner}(a). Depending on the considered axis, either the GFD method or the FE method leads to the maximum observed stress. The largest stress is observed in subfigure (c) for the FE method.}
\label{ResultsCompFichera}
\end{figure}
Figure \ref{ResultsCompFichera} shows that, depending on the considered axis, either the GFD method or the FE method yields the largest stresses. For this model, the FE method is expected to give a higher stress concentration, as the corner nodes have been included in the FE model but not in the GFD model.
\paragraph{Discussion} \
From the figures presented in this section for the various problems considered, it can be observed that the results obtained with the GFD method are very close to those obtained with the FEM. The stress concentrations are slightly larger for the flange, the blade and the horseshoe problems when the GFD method is used. This might be due to the use of the strong form of the equations, which allows the loading conditions to be enforced directly on the boundary of the domain. In the FEM, the problem is solved in a weak form using an integration over the domain. For the Fichera's corner problem, the largest von Mises stress concentration is observed for the FE method. This is due to the internal corner nodes, which have not been included in the GFD method. For this problem, the FEM is thus deemed more accurate.
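As a toy illustration of this distinction (not one of the benchmarks above), the following one-dimensional sketch solves $-u''=1$ on $(0,1)$ by collocating the strong form at interior nodes; the boundary rows impose the data pointwise, just as loading equations are enforced on the boundary in a strong-form collocation method.

```python
import numpy as np

# Toy 1D strong-form collocation (illustrative, not from the benchmarks):
# solve -u'' = 1 on (0,1) with u(0) = u(1) = 0 using a central-difference
# stencil at the interior collocation nodes.
n = 51
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
A = np.zeros((n, n))
b = np.ones(n)
for i in range(1, n - 1):               # strong form at interior nodes
    A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
    A[i, i] = 2.0 / h**2
A[0, 0] = A[-1, -1] = 1.0               # Dirichlet rows, enforced pointwise
b[0] = b[-1] = 0.0
u = np.linalg.solve(A, b)
u_exact = 0.5 * x * (1.0 - x)           # exact solution of -u'' = 1
```

A Galerkin/FE discretization would instead assemble integrals of the weak form over elements; for this smooth problem both approaches recover the exact quadratic solution.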
\section{Conclusions} \label{Conclusions}
The aim of the paper was three-fold:
\begin{description}
\item[Brief review and primer] We briefly reviewed Taylor-series expansion based collocation/meshfree methods. Our aim here was not to be exhaustive, but to cover the main material available, to our knowledge. We also presented a detailed derivation of the system matrices, in order to facilitate the entry of newcomers into the field, and attempted to unify the generalised finite difference method and the discrete correction particle strength exchange method under one umbrella.
\item[Performance benchmarking] We provided a detailed benchmarking strategy for Taylor-series expansion based collocation methods as well as all data files including all input files, boundary conditions, point distribution and solution fields, so as to facilitate future benchmarking of new methods.
\item[New methods for non-smooth solutions] We proposed a few improvements to the original methods, both DC PSE and GFD, in order to treat problems with non-smooth solutions, including discontinuities, singularities or sharp gradients.
\end{description}
We noted that the various parameters involved in the methods have a significant impact on the solution, and that they should therefore be carefully chosen. In itself, this is a drawback compared to more parameter-robust methods, in particular the finite element method. Another main conclusion of this work is that common approaches used in practice to improve collocation methods must be used with caution as they do not always lead to the reduction of the overall error. We observed the following:
\begin{enumerate}[(1)]
\item For the GFD method, the weight function based on the 4$^{\text{th}}$ order spline leads to the minimum error for problems with a polynomial solution such as the pressurized cylinder. For singular problems, such as the L-shape in mode I loading, both linear and 4$^{\text{th}}$ order spline weight functions lead to a minimum error.
\item For the DC PSE method, the weight function based on the exponential functions leads to the minimum error for both polynomial and singular problems.
\item For the problems with a polynomial solution, a polynomial correction function basis leads to an error approximately fifteen times lower than an exponential basis. For the singular problems, an exponential correction function basis leads to an error approximately 5\% lower than a polynomial basis. A polynomial correction function is recommended for most problems, as the solution type is not known a priori.
\item Increasing the size of the node supports on the boundary helps decrease the overall error, while only slightly increasing the number of non-zero elements in the system matrix. For the polynomial problem considered, a reduction of the error by a factor of one hundred is observed between boundary supports composed of thirteen nodes and of eighteen nodes. For the singular problem, no significant error reduction is observed.
\item Voronoi diagrams can be used to give additional information to the collocation methods on the spatial arrangement of the nodes over the domain.
\begin{enumerate}[(a)]
\item For the GFD method, Voronoi diagrams allow the selection of weights which depend on the node placement over the collocation node support.
\item For the DC PSE method, Voronoi diagrams are expected to improve the accuracy of the convolution, but they may also lead to an increased error for some node distributions.
\end{enumerate}
The use of Voronoi diagrams helps in reducing the error for the 2D cylinder problem with a free node distribution (based on Delaunay triangulation). A reduction of up to 17\% is observed for the GFD method and of up to 10\% for the DC PSE method. For a regular node distribution, an error increase of 3\% is observed for the considered problems when Voronoi diagrams are used. For the L-shape problem, the use of Voronoi diagrams has no significant impact on the error. It can be concluded that Voronoi diagrams do not allow a significant error reduction for the considered node arrangements, and their use is not recommended in the general case.
\item The stabilization method reduces the error for the L-shape problem by 25\% and 35\% for the GFD and DC PSE methods, respectively. A large error increase (up to a factor of 30) is observed for both methods for the pressurized cylinder problem. This difference is due to the type of boundary conditions imposed: the stabilization method is more suitable for Dirichlet-loaded problems than for Neumann-loaded problems.
\item For problems with singularities and concave geometries, the visibility criterion improves the convergence of the solution when solved with iterative solvers. Moreover, it significantly reduces the observed error: reductions of up to 50\% and 55\% are observed for the GFD and DC PSE methods, respectively, when the visibility criterion is used. The use of this criterion for support node selection is recommended for all singular and concave problems.
\item Compared to other typical collocation methods (e.g., MLS, IMLS, RBF-FD), the GFD and DC PSE methods have shown good performance both in terms of observed error and computation time. The results obtained with the collocation methods are close to the results obtained using the FEM and more accurate for some problems. For large 3D problems, the GFD method leads to very similar results as those obtained using FEM.
\item A slightly larger stress concentration has been observed for the flange, the blade and the horseshoe problems when solved with the GFD method compared to the results from FEA. For the Fichera's corner problem, the stress concentration obtained with the FEM is slightly more pronounced than that obtained using the GFD method. This is due to the fact that the corner nodes have not been included in the GFD method, and the FEM results therefore represent the actual solution more accurately.
\end{enumerate}
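For reference, a 4$^{\text{th}}$ order spline weight function of the kind referred to in point (1) can be written compactly. The quartic form below is a common choice in the meshfree literature and is given here as an illustrative assumption, not necessarily the exact function used in our tests.

```python
import numpy as np

def quartic_spline_weight(r, r_supp):
    """4th-order (quartic) spline weight, a common meshfree choice:
    w = 1 - 6 q^2 + 8 q^3 - 3 q^4 for q = r / r_supp <= 1, else 0."""
    q = np.asarray(r, dtype=float) / r_supp
    return np.where(q <= 1.0, 1.0 - 6.0 * q**2 + 8.0 * q**3 - 3.0 * q**4, 0.0)
```

This weight equals one at the collocation node and vanishes, with zero slope, at the edge of the support, so distant support nodes are smoothly de-emphasized.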
To summarize, we have proposed for the GFD and DC PSE methods a set of optimal parameters that can be used to readily solve any linear elastic problem. We also showed that point collocation methods may be used effectively for problems with singularities and for 3D problems of industrial size. Using the visibility criterion for concave and singular problems improves the convergence of the methods and leads to a significant error reduction. A logical next step in collocation methods is to investigate the use of enriched weight functions and enriched stencils near singularities in order to improve the results obtained in regions of rapid field change. Similarly, a posteriori error estimation driven local refinement, vastly simplified in collocation methods, should be investigated; this is the topic of ongoing work in our teams. Finally, a massively parallel approach, possibly based on graphics processing units, should be investigated to accelerate the solution scheme.
\begin{acknowledgements}
St\'ephane P.A. Bordas and Satyendra Tomar thank partial funding for their time provided by the European Research Council Starting Independent Research Grant (ERC Stg grant agreement No. 279578) RealTCut ``Towards real time multiscale simulation of cutting in non-linear materials with applications to surgical simulation and computer guided surgery". The authors are also grateful for the funding from the Luxembourg National Research Fund (INTER/FWO/15/10318764). This is a pre-print of an article published in Archives of Computational Methods in Engineering. The final authenticated version is available online at: https://doi.org/10.1007/s11831-019-09357-5.
\end{acknowledgements}
\nocite{*}
\bibliographystyle{unsrt}
\section*{Supplemental Material}
\section{A. BdG equation and vortex solution}
We provide more details of the calculations for the emergence of a QAV and MZM at an isolated interstitial magnetic ion in s-wave superconductors with strong SOC in the absence of applied external magnetic fields. The Hamiltonian of the system has been discussed in the main text for both the bulk states and the TSS. For convenience, we write the total Hamiltonian as $H=H_{n}+H_p$, where $H_n$ is the normal part and $H_p$ the pairing part. They are given by,
\begin{eqnarray}
H_n&=& H_{\rm kin}+H_{\rm soc}+H_{\rm ex}
\label{normalh} \\
H_p&=&\int d\mathbf{r}\mathbf{\Delta(r)}\psi_\uparrow^{\dagger}(\mathbf{r})
\psi_\downarrow^{\dagger}(\mathbf{r})+h.c..
\label{pairingh}
\end{eqnarray}
The complex pairing potential $\mathbf{\Delta}(r)=g\langle\psi_\downarrow(\mathbf{r})
\psi_\uparrow(\mathbf{r})\rangle$ is determined self-consistently for an attraction $g$. In Eq.~(\ref{normalh}), $H_{\rm soc}$ and $H_{\rm ex}$ are the SOC and the exchange interaction produced by the magnetic ion, given in operator form in Eqs.~(2) and (3) of the main text, while $H_{\rm kin}$ describes the distinct kinetic energies of the bulk band and the surface states in our effective theory. Gauge invariance requires the use of canonical momentum operators in the Hamiltonian, i.e.\ ${\bf p}\to {\boldsymbol{\pi}}={\bf p}-{e\over c}{\bf A}$, where ${\bf A}(\mathbf{r})$ is the vector potential. In the spinor notation, $\psi(\mathbf{r})=(\psi_{\uparrow}(\mathbf{r}),
\psi_{\downarrow}(\mathbf{r}))^T$,
\begin{eqnarray}
H_{\rm kin}=\int d\mathbf{r}\psi^{\dagger}(\mathbf{r})
\left[-{1\over2m^*}({\bf p}-{e\over c}{\bf A})^2-\varepsilon_f\right]\psi(\mathbf{r})
\label{hkin-bulk}
\end{eqnarray}
describes the parabolic dispersion of a hole-like bulk band near ${\bf p}=0$ ($\Gamma$ point) in the continuum limit, and
\begin{eqnarray}
H_{\rm kin}^\prime=\int d\mathbf{r}\psi^{\dagger}(\mathbf{r}) \left[v_D({\boldsymbol{\sigma}}\times\boldsymbol{\pi})\cdot\mathbf{z}-
\varepsilon_f'\right]\psi (\mathbf{r})
\label{hkin-tss}
\end{eqnarray}
the helical Dirac fermion TSS. As in the main text, we will continue to use primed quantities for the TSS.
The total Hamiltonian $H$ can be solved conveniently using the Bogoliubov transformation
\begin{eqnarray}
\psi_{\sigma}(\mathbf{r})=\sum_{n}\bigl[u_{n\sigma}(\mathbf{r})\gamma_{n}
+v_{n\sigma}^{*}(\mathbf{r})\gamma_{n}^\dagger\bigr], \qquad
\psi_{\sigma}^\dagger(\mathbf{r})=\sum_{n}\bigl[u_{n\sigma}^{*}
(\mathbf{r})\gamma_{n}^\dagger+v_{n\sigma}(\mathbf{r})\gamma_{n}\bigr], \label{phi}
\end{eqnarray}
where $\gamma_n^\dagger$ and $\gamma_n$ are the creation and destruction operators of a Bogoliubov quasiparticle,
\begin{eqnarray}
\gamma_{n}^\dagger=\int d\mathbf{r}\sum_{\sigma}\bigl[u_{n\sigma}(\mathbf{r})\psi_{\sigma}^{\dagger}
(\mathbf{r})+v_{n\sigma}(\mathbf{r})\psi_{\sigma}(\mathbf{r})\bigr], \quad
\gamma_{n}=\int d\mathbf{r}\sum_{\sigma}\bigl[u_{n\sigma}^{*}(\mathbf{r})\psi_{\sigma}
(\mathbf{r})+v_{n\sigma}^{*}(\mathbf{r})
\psi_{\sigma}^\dagger(\mathbf{r})\bigr].
\label{gamma}
\end{eqnarray}
In terms of the Nambu spinors $\Phi_n(\mathbf{r})=(u_{n\uparrow}(\mathbf{r}),u_{n\downarrow}(\mathbf{r}),
v_{n\downarrow}(\mathbf{r}),-v_{n\uparrow}(\mathbf{r}))^T$, the Schr\"odinger equation can be written as a BdG equation,
\begin{eqnarray}
\left [\begin{array}{cc}
\widehat H_n({\bf A}) & \mathbf{\Delta(r)} \\
\mathbf{\Delta}^*(\bf r) & -\sigma_y \widehat H_n^{*}({\bf A})\sigma_y
\end{array} \right] \Phi_n(\mathbf{r})=E_n\Phi_n(\mathbf{r}),
\label{bdg}
\end{eqnarray}
where $\widehat H_n$ is the operator corresponding to the normal part of the Hamiltonian in Eq.~(\ref{normalh}).
We studied both vortex-free and vortex solutions with the pairing potential ${\mathbf\Delta}(\mathbf{r})=\Delta(r)e^{i\nu\theta}$, where the integer winding number $\nu$ is the vorticity. In each case, the BdG equation is diagonalized to obtain the quasiparticle energy spectrum $E_n$ and the eigenstate wavefunctions $\Phi_n(r)$. The gap function is then calculated as in the standard BCS theory,
\begin{eqnarray}
\mathbf{\Delta}(\mathbf{r})=\frac{g}{2}\sum_{E_n\le\omega_D}\bigl[u_{n\uparrow}
(\mathbf{r})v_{n\downarrow}^*(\mathbf{r})-u_{n\downarrow}
(\mathbf{r})v_{n\uparrow}^*(\mathbf{r})\bigr],
\label{gap}
\end{eqnarray}
where $g$ is the attraction and $\omega_D$ is the energy cutoff.
Concurrently, the spatially varying current density is determined using \cite{Sgygi}
\begin{eqnarray}
{\bf j}({\bf r})&&={e\hbar\over2m^*i}\sum_{n\sigma}\biggl[v_{n\sigma}(\mathbf{r})\bigl(\boldsymbol{\nabla}-{ie\over \hbar c}{\bf A}\bigr) v_{n\sigma}^*(\mathbf{r})-h.c.\biggr]+{e\over2}\sum_{n\sigma}
\biggl[v_{n\sigma}(\mathbf{r})\bigl(-r\lambda_{\rm so}(r)\bigr) v_{n\sigma}^*(\mathbf{r})+h.c.\biggr]{\hat{\boldsymbol\theta}}.
\label{current}
\end{eqnarray}
Note that although a QAV is obtained in the absence of the external magnetic field (i.e. without the external vector potential), the circulating current ${\bf j}(\mathbf{r})$ will generate a dynamic vector potential according to the Maxwell equation $\boldsymbol{\nabla}\times\boldsymbol{\nabla}\times{\bf A}(\mathbf{r})={4\pi\over c}{\bf j}(\mathbf{r})$, from which the profile of ${\bf A}(\mathbf{r})$ can be obtained. This procedure can be repeated by inserting the calculated $\mathbf{\Delta}(\mathbf{r})$ and ${\bf A}(\mathbf{r})$ back into the BdG equation (\ref{bdg}) until self-consistency is reached \cite{Sgygi}.
Let us first consider the case of the parabolic bulk band. The case of the Dirac fermion TSS will be discussed later. In order to determine the quantum numbers of the vortex states, it is convenient to transform to the London gauge \cite{Scdm}, in which the pairing potential ${\mathbf\Delta}(\mathbf{r})$ is real and the quasiparticle wavefunctions have well-defined properties under a $2\pi$ rotation \cite{SdGbook}. This is achieved by the transformation $\Phi_n(\mathbf{r})\to\Psi_n(\mathbf{r})=e^{-i{\nu\over2}\theta\tau_z}\Phi_n(\mathbf{r})$, where $\tau_z=\pm1$ acts in the particle-hole channel.
The covariant pairing potential $\mathbf{\Delta}(\mathbf{r})\to \mathbf{\Delta}^\prime(\mathbf{r})=\mathbf{\Delta}(\mathbf{r})e^{-i\nu\theta}=\Delta(r)$ is real and the BdG equation becomes,
\begin{eqnarray}
\left [\begin{array}{cc}
\widehat H_n({\bf A}^\prime) & {\Delta(r)} \\
{\Delta(r)} & -\sigma_y \widehat H_n^{*}({\bf A}^\prime)\sigma_y
\end{array} \right] \Psi_n(\mathbf{r})=E_n\Psi_n(\mathbf{r}),
\label{bdg-1}
\end{eqnarray}
where the transformed vector potential
\begin{equation}
{\bf A}^\prime(\mathbf{r})={\bf A}(\mathbf{r})-{\nu\hbar c\over2er}{\hat{\boldsymbol\theta}}.
\label{gaugefield}
\end{equation}
To utilize the rotational symmetry about the $z$-axis, we study a SC layer in the disc geometry shown in Fig.~2b, with an isolated magnetic ion located at the center in polar coordinates ${\bf r}=(r,\theta)$. Since our low-energy effective theory separately treats the bulk band and the TSS, we ignore the dispersion along the $z$-direction for simplicity \cite{Snote}. In this gauge, the quasiparticle wavefunction $\Psi_n$ acquires a multiplicative factor of $(-1)^\nu$ under a $2\pi$ rotation \cite{SdGbook}, since the vector potential in Eq.~(\ref{gaugefield}) produces a magnetic flux line through the center of the vortex that carries $\nu$ number of SC flux quantum. Consequently, when $\Psi_n(\mathbf{r})$ is expanded into partial waves, i.e.
$\Psi_n(\mathbf{r})=e^{i\mu\theta}\Psi_{n\mu}(r)$, we obtain $\mu=\ell-{\nu\over2}$, where $\ell$ is an integer. As a result, the quantum number $\mu=\pm{1\over2},\pm{3\over2},\dots$ is a half-odd integer for the vortex states when $\nu$ is odd, i.e. for vortices of odd vorticity. Note that in the original paper of Caroli et al. \cite{Scdm}, an error was made with respect to the property of $\mu$, which was corrected later by de Gennes \cite{SdGbook}. Substituting $\Psi_n(\mathbf{r})$ into the BdG equation (\ref{bdg-1}), the kinetic energy terms read
\begin{equation}
\tau_z\bigl[-i\hbar\boldsymbol{\nabla} - \tau_z{e\over c}{\bf A}^\prime(\mathbf{r})\bigr]^2e^{i\mu\theta}\Psi_{n\mu}(r)
=e^{i\mu\theta}\tau_z\bigl[-i\hbar\boldsymbol{\nabla}-\tau_z{e\over c}{\bf A}(\mathbf{r})+(\mu+\tau_z{\nu\over2}){1\over r}\bigr]^2\Psi_{n\mu}(r).
\end{equation}
Having determined the quantum number $\mu$ of the vortex states, it is clear that we could have started with the BdG equation (\ref{bdg}) and make the following change of variables
$$\Phi_n(\mathbf{r})=e^{i\mu\theta+i{\nu\over2}\tau_z\theta}\Psi_{n\mu}(r)$$
to arrive at the correct wavefunction \cite{Sbardeen,Sgygi}. Written out explicitly,
\begin{eqnarray}
\Phi_{n\mu}(r,\theta)=e^{i\mu\theta}[u_{n\mu+{\nu\over2}\uparrow}(r)
e^{i{\nu\over2}\theta},
u_{n\mu+{\nu\over2}\downarrow}(r)e^{i{\nu\over2}\theta},
v_{n\mu-{\nu\over2}\downarrow}(r)e^{-i{\nu\over2}\theta},
-v_{n\mu-{\nu\over2}\uparrow}(r)e^{-i{\nu\over2}\theta}
]^T,
\label{phi-mu}
\end{eqnarray}
where the principal quantum number $n$ is determined by solving the radial wavefunctions $u(r),v(r)$ in the resulting BdG equation. Nevertheless, Eq.~(\ref{gaugefield}) uncovers an important point.
In the regime $r\ll\lambda_p$ with $\lambda_p$ the penetration depth, it is known that the vector potential $A_\theta(r)\sim {1\over2}r h_{\rm eff}$ \cite{Scdm,Sbardeen}, where $h_{\rm eff}={\nu\phi_0\over2\pi\lambda_p^2}$ is the effective field along the $z$-direction and $\phi_0={hc\over2e}$ is the SC flux quantum. It reaches a maximum around $r\sim\xi$ where $\xi$ is the coherence length \cite{Sgygi}. Thus, the ratio of the vector potential to the gauge field is bounded by the order of $(\xi/\lambda_p)^2$. As a result, the effects of the vector potential ${\bf A}(\mathbf{r})$ and the magnetic field are very small and negligible for type-II superconductors where $\xi\ll\lambda_p$. As is common practice \cite{Scdm,Sgygi90,Smachida}, we ignore ${\bf A}(\mathbf{r})$ in our calculations for simplicity. In the regime $r\gg\lambda_p$, ${\bf A}^\prime(\mathbf{r})\to0$. The main effect of the self-consistent vector potential is to screen out the supercurrent outside the vortex and cut off the vortex line energy $\sim\rho_s\ln(\lambda_p/\xi)$ at the penetration depth $\lambda_p$, where $\rho_s$ is the superfluid density/stiffness. This is approximately accounted for by considering a disc of radius $R$ under the open boundary condition $\Delta(r)=0$ for $r>R\sim\lambda_p$. Thus, our calculated vortex binding energy and the transition from the vortex-free to the QAV phase are good estimates that can be considered as upper bounds.
{By fitting the ARPES data around the $\Gamma$ point in FeSe and Fe(Te,Se) superconductors \cite{Skunjiang,Spzhang,Spzhang-new}, we construct a parabolic hole-like band in the continuum limit with an effective mass $m^*\simeq4.08m_e$ and a Fermi energy $\varepsilon_f\simeq-4.52$meV. The BCS coherence length is $\xi=\hbar v_f/\pi\Delta\approx2.76$nm for the parabolic dispersion considered, which roughly agrees with the experimental value of $\sim2$nm for the coherence length \cite{Scdepth}.
The low superfluid density $\rho_s$ in Fe(Te,Se) superconductors \cite{Srhos}, which is considerably smaller than even the cuprate superconductors, ensures a small vortex line energy. The line tension of the QAV is estimated in the main text. Here we give an estimate of the magnitude of the magnetic field $H(0)$ at the center of the anomalous vortex. The latter is given by twice the value of $H_{c1}$ \cite{SdGbook}, i.e. $H(0)={\phi_0\over2\pi\lambda_p^2}\ln\kappa$. Using the experimental values of $\lambda_p$ and $\xi$, we find $H(0)\sim70$G. This value becomes much smaller ($\sim2$G) if the estimate is done using only the single hole-like band in the effective theory.
In either case, the magnetic field is very weak and its Zeeman energy can be ignored, especially compared to the exchange field $m_0$ already present due to the magnetic ion. For numerical convenience, we define a length $l_0$ such that $\frac{\hbar^2}{2m^*l_0^2}=10$\,meV, which gives $l_0=0.966$\,nm\,$\approx0.35\xi$. Setting $l_0\equiv1$, all lengths are dimensionless in unit of $l_0$. The numerical results reported here are obtained on discs of radius $R=250$. The magnetic ion induced SOC and exchange coupling are modeled with an exponential decay length $r_0=2$. We choose $g=11$meV and $\omega_D=4.7$meV such that the pairing gap approaches the bulk value $\Delta=1.5$meV far away from the magnetic ion.}
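The quoted numbers can be verified with elementary arithmetic. The sketch below uses $\hbar^2/2m_e=3.81$ eV\,\AA$^2$; the penetration depth $\lambda_p\approx500$ nm is an assumed representative value for Fe(Te,Se) (it is not specified above), so the field estimate should be read as order-of-magnitude only.

```python
import math

# Back-of-envelope checks of the quoted numbers.  lambda_p below is an
# ASSUMED representative value, so H0 is an order-of-magnitude estimate.
hbar2_2me = 3.81                    # hbar^2/(2 m_e), in eV*A^2
mstar = 4.08                        # effective mass, units of m_e
ef, gap = 4.52e-3, 1.5e-3           # |eps_f| and Delta, in eV

hbar2_2m = hbar2_2me / mstar        # hbar^2/(2 m*), eV*A^2
kf = math.sqrt(ef / hbar2_2m)       # Fermi momentum, 1/A
xi = 2.0 * hbar2_2m * kf / (math.pi * gap)   # xi = hbar v_f/(pi Delta), in A
l0 = math.sqrt(hbar2_2m / 10e-3)    # hbar^2/(2 m* l0^2) = 10 meV, in A

phi0 = 2.068e-7                     # SC flux quantum, G*cm^2
lam = 500e-7                        # assumed lambda_p = 500 nm, in cm
H0 = phi0 / (2.0 * math.pi * lam**2) * math.log(lam / (xi * 1e-8))  # ~2 H_c1
```

This reproduces $\xi\approx2.76$ nm and $l_0\approx0.966$ nm from the band parameters, and gives $H(0)$ of order $70$ G for the assumed $\lambda_p$.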
Following the pioneering works of Caroli, de Gennes, and Matricon \cite{Scdm}, and of Bardeen et al. \cite{Sbardeen}, the $r$-dependent radial functions can be conveniently expanded in the basis of the Bessel functions \cite{Sgygi,Smachida}. The self-consistent pairing function profiles $\Delta(r)$ are then plotted in Figs~2c,e,g for the three cases studied and the corresponding energy level spectra are shown in Figs~2d,f,h. To compare with the tunneling conductance measured by STM, we calculate the local density of states (LDOS) as a function of bias energy according to
\begin{eqnarray}
\frac{dI}{dV}(r,V)\propto\sum_{n\sigma}\left[|u_{n\sigma}(r)|^2f'(E_n-eV)
+|v_{n\sigma}(r)|^2f'(E_n+eV)\right], \nonumber
\end{eqnarray}
where $f'(E)$ is the derivative of the Fermi distribution function $f(E)$. In the calculated LDOS of the TSS at the magnetic ion site plotted in Figs.~3c and 3d, we include a thermal broadening at a temperature $T=1.5$K, the lowest temperature at which the STM tunneling conductance is measured.
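A minimal sketch of this thermally broadened LDOS, for a precomputed set of BdG levels and wavefunction amplitudes at a fixed site $r$ (the level energies and amplitudes below are illustrative inputs, not the actual eigendata of the solver):

```python
import numpy as np

# Thermally broadened LDOS at a fixed site: each level E_n contributes
# |u_n|^2 near eV = +E_n and |v_n|^2 near eV = -E_n, smeared by the
# Fermi-function derivative -f'(E) = (1/4kT) sech^2(E/2kT).
def ldos(bias, energies, u2, v2, kT):
    def mfp(E):                 # -f'(E), thermal broadening kernel
        return 1.0 / (4.0 * kT * np.cosh(E / (2.0 * kT))**2)
    g = np.zeros_like(bias)
    for En, u, v in zip(energies, u2, v2):
        g += u * mfp(En - bias) + v * mfp(En + bias)
    return g

bias = np.linspace(-1.0, 1.0, 2001)      # bias energy grid (meV, assumed)
kT = 0.129                               # k_B T at T = 1.5 K, in meV
g = ldos(bias, [0.5], [1.0], [0.3], kT)  # one particle-hole pair of peaks
```

Equal particle and hole amplitudes ($|u|^2=|v|^2$) produce a bias-symmetric spectrum, the fingerprint expected of a zero-energy Majorana level.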
\section{B. Vortex-Free and Vortex Solutions for Bulk States}
\paragraph*{Vortex-free solutions}
For the bulk vortex-free YSR state, $\nu=0$ and $\mathbf{\Delta(r)} =\Delta(r)$. The wavefunction in Eq.~(\ref{phi-mu}) becomes
\begin{eqnarray}
\Phi_{n\ell}(\mathbf{r})=e^{i\ell\theta}[u_{n\ell\uparrow}(r),
u_{n\ell\downarrow}(r),v_{n\ell\downarrow}(r),-v_{n\ell\uparrow}(r)]^T. \nonumber
\end{eqnarray}
The BdG equation (\ref{bdg}) is solved in the subspace of fixed angular momentum quantum number $\ell$ by projecting the radial wavefunctions $u_{n\ell}(r)$ and $v_{n\ell}(r)$ onto a set of Bessel functions normalized on the disc,
\begin{eqnarray}
\bigl[u_{n\ell\sigma}(r),v_{n\ell\sigma}(r)\bigr]
&=&\sum_{j=1}^N \bigl[u_{nj\ell\sigma},v_{nj\ell\sigma}\bigr]\phi_{j\ell}(r),
\end{eqnarray}
where
\begin{eqnarray}
\phi_{j\ell}(r)&=&\frac{\sqrt{2}}{RJ_{\ell+1}(\beta_{j\ell})}J_\ell
\left(\beta_{j\ell}\frac{r}{R}\right),\qquad j=1,...,N.
\label{bessel}
\end{eqnarray}
Here, the argument $\beta_{j\ell}$ is the $j$-th zero of the $\ell$-th order Bessel function of the first kind $J_\ell(x)$. Since there is an infinite number of zeros for $J_\ell(x)$, $N$ is introduced as a cutoff for $j$, which determines the dimension of the BdG equation in each $\ell$-channel. We also choose a cutoff $L_c$ for the highest angular momentum channel to be considered. The BdG equation thus reduces to a $4N\times4N$ matrix eigenvalue problem
\begin{eqnarray}
\begin{bmatrix}
T_\ell-L_\ell-M_\ell-\Lambda_\ell & 0& \Delta_\ell &0 \\
0 & T_{\ell}-L_\ell+M_\ell+\Lambda_\ell & 0 & \Delta_{\ell} \\
\Delta_\ell^T &0 & -T_{\ell}-L_\ell-M_\ell+\Lambda_\ell & 0 \\
0 & \Delta_{\ell}^T & 0 &-T_\ell-L_\ell+M_\ell-\Lambda_\ell
\end{bmatrix} \Psi_{n\ell}=E_n^\ell \Psi_{n\ell},
\label{bdg-vortexfree}
\end{eqnarray}
with the matrix elements given by
\begin{eqnarray}
(T_\ell)_{ij}&=&-\biggl[\frac{1}{2m^*}\biggl({\beta_{i\ell}^2\over R^2}\biggr)
+\varepsilon_f\biggr] \delta_{ij}
\label{kinetic} \\
\bigl[(\Delta_\ell)_{ij},(M_\ell)_{ij}, (L_\ell)_{ij}, (\Lambda_\ell)_{ij}\bigr] &=&\int_{0}^{R}rdr\bigl[\Delta(r),{1\over2}m(r),\ell m(r),\ell\lambda_{\rm so}(r)\bigr] \phi_{i\ell}(r)\phi_{j\ell}(r).
\label{elements}
\end{eqnarray}
From the obtained spinors $\Psi_{n\ell}^T=(u_{1\uparrow},...,u_{N\uparrow},u_{1\downarrow},...,
u_{N\downarrow},v_{1\downarrow},...,v_{N\downarrow},-v_{1\uparrow},...,
-v_{N\uparrow})$, where the indices $n$ and $\ell$ are omitted for simplicity,
we can construct the wavefunctions $\Phi_{n\ell}(\mathbf{r})$ and solve the gap function self-consistently. We typically work with $N=200$, which is sufficient for obtaining consistent numerical results.
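As an illustrative cross-check (with a much smaller basis than the $N=200$ used in the actual calculations), the orthonormality of the basis in Eq.~(\ref{bessel}) and a radial matrix element such as $(\Lambda_\ell)_{ij}$ from Eq.~(\ref{elements}) can be evaluated by simple quadrature; the grid parameters below are illustrative:

```python
import numpy as np
from scipy.special import jv, jn_zeros

R, ell, N = 250.0, 3, 6            # disc radius, angular momentum, basis cutoff (illustrative)
beta = jn_zeros(ell, N)            # first N zeros of J_ell

def phi(j, r):
    """Normalized basis function phi_{j ell}(r) on the disc, Eq. (bessel)."""
    return np.sqrt(2.0) / (R * jv(ell + 1, beta[j])) * jv(ell, beta[j] * r / R)

def rad_int(f, r):
    """Trapezoidal integral over the radial grid r."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

r = np.linspace(0.0, R, 20001)

# Orthonormality with respect to the measure r dr: int_0^R r phi_i phi_j dr = delta_ij
overlap = np.array([[rad_int(r * phi(i, r) * phi(j, r), r) for j in range(N)]
                    for i in range(N)])

# SOC matrix element (Lambda_ell)_{ij} = ell * int_0^R r lambda_so(r) phi_i phi_j dr
lam0, r0 = 20.0, 2.0               # meV and l_0 units, as in the text
lam_so = lam0 * np.exp(-r / r0)
Lambda = np.array([[ell * rad_int(r * lam_so * phi(i, r) * phi(j, r), r) for j in range(N)]
                   for i in range(N)])
```

The `overlap` matrix reproduces the identity to quadrature accuracy, and `Lambda` is symmetric by construction of the integrand.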
\begin{figure}
\begin{center}
\fig{5in}{disp.eps}\caption{(a) Free-particle dispersion $E(\beta_{j\ell})$ as a function of $\beta_{j\ell}/R$. (b) Splitting of partial wave dispersions (zoomed in near the Fermi level) by the SOC $\lambda_{\rm so}(r)=\lambda_0\exp{(-r/r_0)}$ with $\lambda_0=20$meV and $r_0=2$. Solid (dashed) lines are for the $\ell$-th partial wave carrying spin up (down). (c-d): Vortex-free SC state without magnetic ion. (c) Self-consistent pairing potential profile $\Delta(r)$. (d) Quasiparticle energy spectrum and tunneling density of states.}
\end{center}
\vskip-0.5cm
\end{figure}
It is instructive to note that the matrix element $T_\ell$ in Eq.~(\ref{kinetic}) describes the free-particle dispersion appearing on the diagonals of the BdG equation (\ref{bdg-vortexfree}) in the partial wave representation. Thus, the kinetic energy $E(k)=-\frac{1}{2m^*}k^2-\varepsilon_f$ has been transformed into
$E(\beta_{j\ell})=-{1\over 2m^*}({\beta_{j\ell}^2\over R^2})-\varepsilon_f$ on the disc, with $\beta_{j\ell}/R$ playing the role of $k$. In Fig.~S1a, the doubly spin-degenerate dispersion $E(\beta_{j\ell})$ is plotted versus $\beta_{j\ell}/R$, tracing out the hole-like band around the $\Gamma$ point as partial waves of different angular momentum $\ell$ populate different points on the curve. The effects of the SOC $\lambda_{\rm so}(r)$, which gives rise to $\Lambda_\ell$ in the BdG equation, can be understood as follows. The spin and angular momentum of the partial waves are locked to form $j_z=\ell\pm{1\over2}$ states and split off from the dispersion of the unaffected $\ell=0$ channel. If $\lambda_{\rm so}(r)$ had no spatial dependence, i.e., were a constant, the dispersions would become an infinite set of equally spaced spin-split Landau levels, very much like applying opposite magnetic fields to the two spin channels on a disc. However, the rapid decay of $\lambda_{\rm so}(r)$ away from the magnetic ion implies that the effects of the SOC are limited to small but nonzero $\ell$-channels with large probability densities within the decay length $r_0$. In Fig.~S1b, the calculated dispersions for $\lambda_0=20$meV are shown for the partial waves in different $\ell$ channels, zooming in close to the Fermi level. While the $\ell=1$ and $\ell=3$ states are split away by the SOC, the dispersions of the $\ell=8$ states collapse back onto the unaffected $\ell=0$ channel.
Numerically, the self-consistency process is time-consuming and limits the largest number of angular momentum channels (cutoff $L_c$) that can be included. In the absence of the magnetic ion, the self-consistently determined $\Delta(r)$ using $L_c=150$ is shown in Fig.~S1c, with the corresponding quasiparticle energy spectrum in Fig.~S1d for the vortex-free SC state. The pairing potential profile $\Delta(r)$ begins to drop below the uniform bulk value $\Delta$ for $r>170$, which is a consequence of the finite cutoff $L_c$ that amplifies the large-$r$ boundary effects under the disc geometry. As a result, states appear with energies just inside the expected gap energies of $\pm1.5$meV due to these ``soft boundary'' effects. This can also be seen in the rounding of the gap edge and the coherence peaks in the tunneling density of states at the center of the disc shown in Fig.~S1d. With increasing $L_c$, the boundary becomes sharper and is pushed closer to the physical boundary at $R=250$. Since the physics we are interested in concerns the local effects of the SOC and exchange coupling that are limited to the small region around the magnetic ion in both the vortex and vortex-free solutions, we use the following algorithm in our numerical calculations. We first perform a self-consistent calculation in the absence of the magnetic ion using a large angular momentum cutoff $L_c=150$. The pairing function $\Delta(r)$ at large distances with $r\ge50$ is then fixed, and a smaller cutoff $L_c=50$ is used in the self-consistent calculations in the presence of the magnetic ion with a matching $\Delta(r)$ profile for $r\ge50$. We verified that such an algorithm greatly improves the efficiency of the numerical computations and, at the same time, ensures that our results are not affected by the boundary effects at large distances away from the center of the disc.
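Structurally, the procedure above is a fixed-point iteration in which the large-$r$ tail of $\Delta(r)$ is held at a precomputed profile. The sketch below captures only this control flow: the `toy_update` stand-in for one diagonalization-plus-gap-equation cycle is hypothetical, chosen as a simple contraction so the snippet runs; in the real calculation, `update` would rebuild and diagonalize the BdG matrix and schematically re-evaluate $\Delta(r)$ from the gap equation.

```python
import numpy as np

def self_consistent(update, delta0, tol=1e-8, mix=0.5, max_iter=1000):
    """Fixed-point iteration Delta -> update(Delta) with linear mixing."""
    delta = np.array(delta0, dtype=float)
    for it in range(max_iter):
        new = update(delta)
        if np.max(np.abs(new - delta)) < tol:
            return new, it
        delta = (1.0 - mix) * delta + mix * new
    raise RuntimeError("gap equation did not converge")

def freeze_tail(update, delta_ref, frozen):
    """Pin Delta(r) to a precomputed profile on the frozen region (r >= 50 in
    the text), as done when switching from L_c = 150 (no ion) to L_c = 50."""
    def wrapped(delta):
        new = update(delta)
        new[frozen] = delta_ref[frozen]
        return new
    return wrapped

# toy stand-in for the diagonalization + gap-equation step (hypothetical)
r = np.linspace(0.0, 250.0, 501)
target = 1.5 * (1.0 - np.exp(-r / 2.0))        # a vortex-like profile, illustrative only
toy_update = lambda d: 0.5 * d + 0.5 * target  # contraction toward `target`

delta_ref = np.full_like(r, 1.5)
step = freeze_tail(toy_update, delta_ref, r >= 50.0)
delta, n_iter = self_consistent(step, np.zeros_like(r))
```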
The vortex-free solution in the presence of the magnetic ion is presented in Figs.~2g and 2h in the main text with the in-gap YSR bound states in the presence of the SOC.
\paragraph*{Vortex solutions}
Consider a single $\nu=-1$ vortex with $\mathbf{\Delta(r)}=\Delta(r)e^{-i\theta}$. The wavefunction in Eq.~(\ref{phi-mu}) becomes
\begin{eqnarray}
\Phi_{n\mu}(\mathbf{r})&=&e^{i\mu\theta}[u_{n\mu-{1\over2}\uparrow}(r)
e^{-i\frac{\theta}{2}},u_{n\mu-{1\over2}\downarrow}(r)e^{-i\frac{\theta}{2}},
v_{n\mu+{1\over2}\downarrow}(r)
e^{i\frac{\theta}{2}},-v_{n\mu+{1\over2}\uparrow}(r)e^{i\frac{\theta}{2}}]^T. \label{phi-nonrelativity}
\end{eqnarray}
Similar to the vortex-free case discussed before, the radial wave functions can be expanded using Bessel functions
\begin{eqnarray}
u_{n\mu-{1\over2}\sigma}(r)
&=&\sum_{j=1}^N u_{nj\mu-{1\over2}\sigma}
\phi_{j\mu-{1\over2}}(r), \qquad
v_{n\mu+{1\over2}\sigma}(r)=\sum_{j=1}^N
v_{nj\mu+{1\over2}\sigma}\phi_{j\mu+{1\over2}}(r),
\nonumber
\end{eqnarray}
where $\phi_{j\mu_\pm}$ is the $j$-th basis function in Eq.~(\ref{bessel}) with effective angular momenta $\mu_\pm=\mu\pm{1\over2}$, under the same cutoff $j=1,\dots, N$.
The BdG equation amounts to a $4N\times4N$ matrix eigenvalue problem
\begin{eqnarray}
\begin{bmatrix}
(T-L-M-\Lambda)_{\mu_-} & 0& \Delta_{\mu_-\mu_+} &0 \\
0 & (T-L+M+\Lambda)_{\mu_-} & 0 & \Delta_{\mu_-\mu_+} \\
\Delta_{\mu_-\mu_+}^T &0 & -(T+L+M-\Lambda)_{\mu_+}
& 0 \\
0 & \Delta_{\mu_-\mu_+}^T & 0 &-(T+L-M+\Lambda)_{\mu_+}
\end{bmatrix} \Psi_{n\mu}=E_n^{\mu} \Psi_{n\mu}
\nonumber
\end{eqnarray}
where $\Psi_{n\mu}^T=(u_{1\uparrow},...,u_{N\uparrow},u_{1\downarrow},...,
u_{N\downarrow},v_{1\downarrow},...,v_{N\downarrow},-v_{1\uparrow},...,
-v_{N\uparrow})$ with the indices $n$ and $\mu_\pm$ omitted for simplicity. Note that the BdG equation in the case of a vortex state involves both the $\mu_-$ and $\mu_+$ channels that are coupled by the pairing matrix element,
\begin{eqnarray} (\Delta_{\mu_-\mu_+})_{ij}&=&\int_{0}^{R}rdr\Delta(r)\phi_{i\mu_-}(r)
\phi_{j\mu_+}(r).
\end{eqnarray}
The rest of the matrix elements in the vortex BdG equation, i.e. $T_{\mu_\pm}$, $M_{\mu_\pm}$, $L_{\mu_\pm}$, and $\Lambda_{\mu_\pm}$, have the same expressions as in the vortex-free case given in Eqs.~(\ref{kinetic}) and (\ref{elements}). The self-consistency procedure is the same as in the vortex-free case. The vortex solutions in both the absence and presence of the SOC and exchange coupling induced by the magnetic ion are presented in Fig.~2 and discussed in detail together with the mid-gap CdGM states in the main text.
\section{C. Vortex Solution for the Electron Band}
Similar to the hole band around the $\Gamma$ point, the electron band around the $M$ point in FeSe and Fe(Te,Se) superconductors can be described approximately by an electron-like parabolic dispersion $E_e(k)={1\over2m_e^*}k^2-\varepsilon_f^e$, as shown in Fig.~S2(a) by the blue-solid line. By fitting the ARPES data \cite{Skunjiang,Spzhang,Spzhang-new}, the effective mass of the electron band is $m_e^*\simeq1.33m_e$, with $m_e$ the free electron mass, and the Fermi energy is $\varepsilon_f^e\simeq25$meV. The pairing gap of the electron band is $\Delta_e\simeq4$meV \cite{Smiaohu}, which can be imposed self-consistently using $g_e=64$meV and $\omega_D^e=6$meV in the BCS gap equation. Note that the impurity induced SOC is inversely proportional to the effective mass, i.e. $\lambda_{\rm so}(r)\propto {1\over m^*}$, in both sign and magnitude \cite{Selliot,Syafet}. Thus, when writing the SOC $\lambda_{\rm so}(r)=\lambda_0^e e^{-r/r_0}$ for the electron band, $\lambda_0^e$ should have an opposite sign and be scaled by the ratio of the effective masses in comparison to that for the hole band. Accordingly, we use $\lambda_0^e=-21$meV, while keeping $r_0$ unchanged.
\begin{figure*}
\begin{center}
\fig{5.0in}{electron.eps}\caption{(a) Effective bulk state electron band (blue line) near the $M$ point extracted from the ARPES data \cite{Spzhang}. The red dashed line is the effective hole band shifted from the $\Gamma$ point for comparison. (b) Electron band $\nu=1$ vortex binding energy as a function of the exchange field $m_0$ at $\lambda_0^e=-21$meV, showing the emergence of the QAV state beyond $m_0^c\simeq23.2$meV. (c) Low-energy CdGM vortex core states in the normal field-induced vortex in the absence of magnetic ion. (d) Low-energy vortex core states of the QAV induced by magnetic ion for $m_0=25$meV.}
\end{center}
\vskip-0.5cm
\end{figure*}
The solution of the vortex-free states for the electron band can be obtained using the same procedure discussed above for the hole band, with the corresponding substitutions of $m_e^*$, $\varepsilon_f^e$, and $\lambda_0^e$. For the vortex solution, as discussed in the main text, we need to preserve the chirality of the CdGM core states by considering a vortex in the electron band pairing order parameter $\mathbf{\Delta(r)}=\Delta(r)e^{i\nu\theta}$ with the vorticity $\nu=1$, which is opposite to that of the hole band. The wavefunction in Eq.~(\ref{phi-mu}) becomes
\begin{eqnarray}
\Phi_{n\mu}(\mathbf{r})&=&e^{i\mu\theta}[u_{n\mu+{1\over2}\uparrow}(r)
e^{i\frac{\theta}{2}},u_{n\mu+{1\over2}\downarrow}(r)e^{i\frac{\theta}{2}},
v_{n\mu-{1\over2}\downarrow}(r)
e^{-i\frac{\theta}{2}},-v_{n\mu-{1\over2}\uparrow}(r)e^{-i\frac{\theta}{2}}]^T. \label{phi-nonrelativity-e}
\end{eqnarray}
Similar to the case for the hole band, the radial wave functions can be expanded in the basis of Bessel functions.
The BdG equation amounts to a $4N\times4N$ matrix eigenvalue problem
\begin{eqnarray}
\begin{bmatrix}
(T-L-M-\Lambda)_{\mu_+} & 0& \Delta_{\mu_+\mu_-} &0 \\
0 & (T-L+M+\Lambda)_{\mu_+} & 0 & \Delta_{\mu_+\mu_-} \\
\Delta_{\mu_+\mu_-}^T &0 & -(T+L+M-\Lambda)_{\mu_-}
& 0 \\
0 & \Delta_{\mu_+\mu_-}^T & 0 &-(T+L-M+\Lambda)_{\mu_-}
\end{bmatrix} \Psi_{n\mu}=E_n^{\mu} \Psi_{n\mu}
\nonumber
\end{eqnarray}
where $\Psi_{n\mu}^T=(u_{1\uparrow},...,u_{N\uparrow},u_{1\downarrow},...,
u_{N\downarrow},v_{1\downarrow},...,v_{N\downarrow},-v_{1\uparrow},...,
-v_{N\uparrow})$ with the indices $n$ and $\mu_\pm$ omitted for simplicity. The BdG equation in the presence of a vortex involves both the $\mu_-$ and $\mu_+$ channels that are coupled by the pairing matrix element,
\begin{eqnarray} (\Delta_{\mu_+\mu_-})_{ij}&=&\int_{0}^{R}rdr\Delta(r)\phi_{i\mu_+}(r)
\phi_{j\mu_-}(r).
\end{eqnarray}
The kinetic-energy matrix elements are given by
\begin{equation} (T_{\mu_\pm})_{ij}=\biggl[\frac{1}{2m_e^*}\biggl({\beta_{i\mu_\pm}^2\over R^2}\biggr)
-\varepsilon_f^e\biggr] \delta_{ij}.
\end{equation}
The rest of the matrix elements in the vortex BdG equation, i.e. $M_{\mu_\pm}$, $L_{\mu_\pm}$, and $\Lambda_{\mu_\pm}$, have the same expressions as given in Eq.~(\ref{elements}).
In Fig.~S2b, the calculated vortex binding energy is shown as a function of the exchange field $m_0$ for the electron band.
The QAV from the electron band emerges beyond $m_0^{c,e}\simeq23.2$meV. Fig.~S2c shows the vortex core CdGM states in the normal field-induced vortex in the absence of the magnetic ion. These low-energy vortex core states are pushed into the continuum by the exchange field in the QAV nucleated at the magnetic ion, as shown in Fig.~S2d for $m_0=25$meV.
\section{D. Helical Dirac Fermion TSS Coupled to Quantum Anomalous Vortices}
Finally, we discuss the electron-doped TSS coupled to the QAV. The helical Dirac fermions carry an extra Berry phase \cite{Ssharlai99,Sniu-rmp10} since $\partial_x\pm i\partial_y=\exp{(\pm i\theta)}(\partial_r\pm i\partial_\theta/r)$ in the dispersion.
The corresponding wavefunction in Eq.~(\ref{phi-mu}) for a single $\nu=1$ vortex is therefore given by
\begin{eqnarray} \Phi_{n\mu}(\mathbf{r})=e^{i\mu\theta}[u_{n\mu\uparrow}(r),u_{n\mu+1\downarrow}(r)
e^{i\theta},v_{n\mu-1\downarrow}(r)e^{-i\theta},-v_{n\mu\uparrow}(r)]^T. \label{phi-relativity}
\end{eqnarray}
Due to the combination of the vorticity-induced phase and the Berry phase, the quantization condition reflected in Eq.~(\ref{phi-relativity}) now requires the quantum number $\mu$ to be an integer, i.e. $\mu=0,\pm1,\pm2,\dots$. This is the crucial difference compared to the vortex wavefunction for the parabolic bulk band discussed before, where $\mu$ is a half-integer. The integer $\mu$ allows the presence of a zero-energy mode in the CdGM states.
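Explicitly, the component phase windings in Eq.~(\ref{phi-relativity}) are $\mu$, $\mu+1$, $\mu-1$, and $\mu$, while those of the bulk vortex wavefunction are $\mu\mp{1\over2}$, so single-valuedness under $\theta\rightarrow\theta+2\pi$ requires
\begin{eqnarray}
e^{i2\pi\mu}=1 \quad \mbox{(Dirac TSS)}, \qquad
e^{i2\pi(\mu\mp{1\over2})}=1 \quad \mbox{(bulk bands)},
\nonumber
\end{eqnarray}
i.e., an integer $\mu$ for the former and a half-integer $\mu$ for the latter.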
The BdG equation (\ref{bdg}) contains the kinetic part $H_{\rm kin}^\prime$ given in Eq.~(\ref{hkin-tss}), $H_{\rm soc}^\prime=\lambda_{\rm so}^\prime(r)L_z\sigma_z$ and $H_{\rm ex}^\prime=-m^\prime(r)(L_z+{\sigma_z\over2})$. Moreover, the BdG equation for the Dirac fermions will mix $\mu$ and $\mu\pm1$ channels of the wavefunction in Eq.~(\ref{phi-relativity}). Expanding the radial parts using the Bessel functions as before,
\begin{eqnarray}
u_{n(\mu,\mu+1)\sigma}(r)&=&\sum_{j=1}^Nu_{nj(\mu,\mu+1)\sigma}
\phi_{j(\mu,\mu+1)}(r), \\
\qquad v_{n(\mu,\mu-1)\sigma}(r)&=&\sum_{j=1}^Nv_{nj(\mu,\mu-1)\sigma}
\phi_{j(\mu,\mu-1)}(r),
\end{eqnarray}
we obtain the BdG equation as a $4N\times4N$ matrix eigenvalue problem
\begin{eqnarray}
\begin{bmatrix}
-(L+M-\Lambda)_\mu-\varepsilon_f'& V_{\mu,\mu+1} & \Delta_{\mu,\mu-1} &0 \\
V_{\mu,\mu+1}^T& -(L-M+\Lambda)_{\mu+1}-\varepsilon_f' & 0 & \Delta_{\mu+1,\mu} \\
\Delta_{\mu,\mu-1}^T &0 & -(L+M+\Lambda)_{\mu-1}+\varepsilon_f'& -V_{\mu-1,\mu} \\
0 & \Delta_{\mu+1,\mu}^T & -V_{\mu-1,\mu}^T & -(L-M-\Lambda)_\mu+\varepsilon_f'
\end{bmatrix} \Psi_{n\mu}=E_n^\mu \Psi_{n\mu}.
\nonumber
\end{eqnarray}
The matrix elements in the above BdG equation are given by
\begin{eqnarray}
&&(V_{\mu,\mu^\prime})_{ij}=\frac{2v_D}{R}
\frac{\beta_{i\mu}\beta_{j\mu^\prime}}{\beta_{i\mu}^2-\beta_{j\mu^\prime}^2}, \qquad
(\Delta_{\mu,\mu^\prime})_{ij}=\int_{0}^{R}rdr\Delta_{\rm QAV}(r)
\phi_{i\mu}(r)\phi_{j\mu^\prime}(r),
\label{dirac-kinetic} \\
&&\bigl[(M_\mu)_{ij}, (L_\mu)_{ij}, (\Lambda_\mu)_{ij}\bigr] =\int_{0}^{R}rdr\bigl[{1\over2}m^\prime(r),\mu m^\prime (r),\mu\lambda_{\rm so}^\prime(r)\bigr] \phi_{i\mu}(r)\phi_{j\mu}(r),
\label{dirac-elements}
\end{eqnarray}
and $\Psi_{n\mu}^T=(u_{1\uparrow},...,u_{N\uparrow},
u_{1\downarrow},..,u_{N\downarrow},v_{1\downarrow},..,v_{N\downarrow},
-v_{1\uparrow},...,-v_{N\uparrow})$ with the indices $n,\mu,\mu\pm1$ omitted for simplicity. In Eq.~(\ref{dirac-kinetic}), $\Delta_{\rm QAV}(r)$ is the pairing profile of the QAV generated by the magnetic ion from the bulk states.
The energy spectrum of the TSS and the LDOS at the magnetic ion are shown in Fig.~3 and compared to the STM tunneling conductance measured at the excess ion sites in Fe(Te,Se) in the main text. The localized mode at zero energy corresponds to $\mu=0$. It is associated with the creation operator $\gamma_0^\dagger$. From Eq.~(\ref{gamma}),
\begin{eqnarray}
\gamma_{0}^\dagger&=&\int d^2\mathbf{r} \bigl [ u_{0\uparrow}(r)\psi_{\uparrow}^{\dagger}(\mathbf{r})
+v_{0\uparrow}(r)\psi_{\uparrow}(\mathbf{r}) +u_{0\downarrow}(r)e^{i\theta}
\psi_{\downarrow}^{\dagger}
(\mathbf{r})+v_{0\downarrow}(r)e^{-i\theta}
\psi_{\downarrow}(\mathbf{r})\bigr],
\end{eqnarray}
where the localized wavefunctions of the zero energy mode, $u_{0\uparrow}(r)$, $u_{0\downarrow}(r)$, $v_{0\uparrow}(r)$, and $v_{0\downarrow}(r)$, are plotted in Figs.~S3(a)-(d). Their behaviors clearly show that $u_{0\uparrow}(r)=v_{0\uparrow}(r)$ and $u_{0\downarrow}(r)=v_{0\downarrow}(r)$. As a consequence,
\begin{equation}
\gamma_{0}^\dagger=\gamma_{0},
\end{equation}
indicating that the localized zero energy mode is a charge neutral Majorana zero-mode.
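This can be made explicit by taking the Hermitian conjugate of $\gamma_0^\dagger$; since the radial functions are real,
\begin{eqnarray}
\gamma_{0}&=&\int d^2\mathbf{r} \bigl[ u_{0\uparrow}(r)\psi_{\uparrow}(\mathbf{r})
+v_{0\uparrow}(r)\psi_{\uparrow}^{\dagger}(\mathbf{r})
+u_{0\downarrow}(r)e^{-i\theta}\psi_{\downarrow}(\mathbf{r})
+v_{0\downarrow}(r)e^{i\theta}\psi_{\downarrow}^{\dagger}(\mathbf{r})\bigr],
\nonumber
\end{eqnarray}
which matches $\gamma_0^\dagger$ term by term precisely when $u_{0\uparrow}(r)=v_{0\uparrow}(r)$ and $u_{0\downarrow}(r)=v_{0\downarrow}(r)$.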
\begin{figure}
\begin{center}
\fig{5in}{uv.eps}\caption{The localized wavefunctions of the zero energy mode (a) $u_{0\uparrow}(r)$, (b) $u_{0\downarrow}(r)$, (c) $v_{0\uparrow}(r)$, (d) $v_{0\downarrow}(r)$.}
\end{center}
\vskip-0.5cm
\end{figure}
The nonmagnetic part of the impurity potential $U(r)$ has not been included in the present study since the effects of a time-reversal invariant potential in $s$-wave superconductors are known to be weak and do not lead to qualitatively new physics. We verified that including $U(r)$ only makes the mid-gap states more localized spatially, but does not change qualitatively the obtained results.
\section{Introduction}
\label{Section: Introduction}
\IEEEPARstart{A}{utomotive} vehicles have evolved significantly over the course of time \cite{Beiker2016}. The gradual transition from purely mechanical automobiles to those with greater incorporation of electrical, electronic and computer-controlled sub-systems occurred in phases over the past century, with each phase improving the performance, convenience and reliability of these systems. Modern vehicles are increasingly adopting electrical, electronic, computing and information sub-systems along with software algorithms for low-level control as well as high-level advanced driver assistance system (ADAS) or autonomous driving (AD) features \cite{Meissner2020}. This naturally brings in the interplay between different levels of mechanical, electrical, electronic, networking and software sub-systems within a single vehicle system, thereby transforming vehicles from the purely mechanical systems they were in the past into complex multidisciplinary systems \cite{Gumiel2022}. As such, while it may have been justifiable for earlier ADAS/AD feature developers to focus on core software development, the increasing complexity and interdisciplinary nature of modern automotive systems can benefit from synergistic hardware-software co-design complemented with integrated verification and validation by following the mechatronics principles.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figure_1.pdf}
\caption{Extended V-model fostering mechatronics approach of system design, verification and validation for autonomous vehicles. The model depicts evolution of a concept into a product through decomposition, design, development, integration and testing across component, sub-system, system and system-of-systems levels in a unified concurrent interdisciplinary engineering framework.}
\label{fig1}
\end{figure}
Mechatronics engineering \cite{Bolton1995, deSilva2004, Onwubolu2005} focuses on the concurrent and synergistic integration of mechanical, electrical and electronics engineering, computer science and information technology for the development and validation of complex interdisciplinary systems. This ideology is derived from the fact that various components of a ``mechatronic'' system, often belonging to a multitude of disciplines, influence each other and hence have a design impact at the component, sub-system, system and system-of-systems levels. The resulting ``mechatronic'' realization then builds on the capabilities endowed by the various constituent layers. In such a milieu, the system development approach has also evolved from relatively ad-hoc methods to the more formal V-model \cite{Gausemeier2002}, building on the modular software development and validation roadmap \cite{Brohl1995}. This model has evolved through several state-of-the-art progressions \cite{Eigner2017}, and our work seeks to further formalize the adoption of the mechatronics approach of system conceptualization, design, development, integration and testing for autonomous vehicles (refer Fig. \ref{fig1}).
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Figure_2.pdf}
\caption{AutoDRIVE ecosystem fosters mechatronics design principles at two levels: [A] primitive reconfigurability allows permutations and combinations of addition, removal or replacement of selective components and sub-assemblies of the vehicle to better suit the application; [B] advanced reconfigurability allows complete modification of existing hardware and software architectures, and provides an opportunity for introducing new features and functionalities to the ecosystem.}
\label{fig2}
\end{figure*}
A recent book \cite{Pathrose2022} highlights best practices for industrial design, development and validation of autonomous vehicles and notes the significant adoption of model-based design (MBD) for system integration and testing. However, similar adoption of such streamlined workflows by academia has lagged behind \cite{Pathrose2022}. This gap could be explained by virtue of standardization (e.g., ISO 26262 \cite{ISO26262}, ISO/IEC 33061 \cite{ISO33061}, VDI 2221 \cite{VDI2221}, VDI 2206 \cite{VDI2206}, AUTOSAR \cite{Furst2009}, etc.) in industries versus the fact that the majority of academic projects are deployed using fragmented hardware-software ecosystems (e.g. hobby platforms) with a key focus on developing low-cost initial proof-of-concept implementations. Additionally, such an opportunistic and potentially uninformed selection of hardware \cite{MITRacecar2017, F1TENTH2019, MuSHR2019} and software \cite{Gazebo, CARLA, Cognata} toolchains hinders the adoption of co-design and concurrent engineering thinking to its full extent.
In this paper, we discuss the design philosophy and one of the key motivation factors behind AutoDRIVE ecosystem\footnote{Webpage: \texttt{\url{https://autodrive-ecosystem.github.io}}} \cite{AutoDRIVEEcosystem2022, AutoDRIVEReport2021} – adopting and promoting mechatronics approach of system design, verification and validation for autonomous vehicles, with an emphasis on creating a streamlined pathway for seamless transition to ultimate industrial practice. This paper also describes a detailed case-study which demonstrates the methodical adoption of mechatronics approach for designing, developing and validating a scaled vehicle in the context of autonomous parking\footnote{Video: \texttt{\url{https://youtu.be/piCyvTM2dek}}} application using a modular probabilistic framework.
\section{Multidisciplinary Design}
\label{Section: Multidisciplinary Design}
AutoDRIVE offers an open-access, open-interface and flexible ecosystem for scaled autonomous vehicle development by permitting access to and alteration of hardware as well as software aspects of the multidisciplinary autonomous vehicle design, thereby making it an apt framework for demonstrating the claims and contributions of this work. Particularly, AutoDRIVE ecosystem offers the following two levels of reconfigurability, thereby promoting hardware-software co-design (refer Fig. \ref{fig2}).
\begin{itemize}
\item \textbf{Primitive Reconfigurability:} The native vehicle of AutoDRIVE ecosystem, called ``Nigel'', is modular enough to support out-of-the-box hardware reconfigurability in terms of swapping and replacing selective components and sub-assemblies of the vehicle, in addition to flexibly updating the vehicle firmware and/or autonomous driving software stack (ADSS) to better suit the target application.
\item \textbf{Advanced Reconfigurability:} The completely open-hardware, open-software architecture of AutoDRIVE ecosystem allows modification of vehicle chassis parameters (different form factors and aspect ratios), powertrain configuration (variable driving performance), component mounting profiles (relocation/replacement of components), as well as firmware and ADSS architecture (software flexibility).
\end{itemize}
The fundamental step in system design is requirement specification, without which a design cannot be validated as right or wrong; it can only be surprising \cite{Young1985}. Since AutoDRIVE was intended to be a generic ecosystem for rapidly prototyping autonomous driving solutions, the requirement elicitation resulted in a superset of the requirements demanded by the application case study discussed in this paper. Furthermore, with AutoDRIVE, there is always scope for updating the designs of various components, sub-systems and systems to expand the ecosystem. That being said, the following is a summary of the functional requirement specifications for Nigel as of this version of the ecosystem.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Figure_3.pdf}
\caption{Conceptualization and design of scaled autonomous vehicle: [A] hardware-software architecture; [B] firmware design specifications; [C] modular perception, planning and control architecture for autonomous parking application.}
\label{fig3}
\end{figure*}
\begin{itemize}
\item General design guidelines:
\begin{itemize}
\item{Open-source hardware and software}
\item{Inexpensive and user-friendly architecture}
\item{Manufacturing technology agnostic designs}
\item{Modularly reconfigurable components/sub-systems}
\item{Integrated and comprehensive resources and tools}
\end{itemize}
\item Perception sub-system shall offer:
\begin{itemize}
\item{Ranging measurements (preferably 360$^\circ$)}
\item{RGB visual feed (preferably front as well as rear)}
\item{Positional measurements/estimation}
\item{Inertial measurements/estimation}
\item{Actuation feedback measurements/estimation}
\end{itemize}
\item Computation and communication sub-systems shall offer:
\begin{itemize}
\item{Hierarchical computation topology}
\item{GPU-enabled high-level edge computation platform}
\item{Embedded low-level computation platform}
\item{Vehicle-to-everything communication interface}
\end{itemize}
\item Locomotion and signaling sub-systems shall offer:
\begin{itemize}
\item{Kinodynamically constrained drivetrain and steering}
\item{Standard automotive lighting and signaling}
\end{itemize}
\end{itemize}
The functional system requirements were decomposed into mechanical, electronics, firmware and ADSS design specifications and carefully studied to analyze any potential trade-offs so as to finalize the components and ultimately come up with a refined system architecture design (refer Fig. \ref{fig3}).
The proposed hardware-software architecture of the scaled autonomous vehicle system is divided into eight sub-systems viz. chassis, power, computation, communication, software, sensors, actuators and lights, each with its own share of components (refer Fig. \hyperref[fig3]{\ref*{fig3}-A}). The embedded firmware architecture for low-level data acquisition and control is depicted in Fig. \hyperref[fig3]{\ref*{fig3}-B}, which links the data sources to the respective data sinks after processing the information.
Finally, Fig. \hyperref[fig3]{\ref*{fig3}-C} depicts the high-level architecture of the autonomous parking solution described in this paper. Particularly, it is shown how this candidate autonomy solution uses modular algorithms for simultaneous localization and mapping (SLAM) \cite{HectorSLAM2011}, odometry estimation \cite{RF2O2016}, localization \cite{AMCL2001}, global \cite{AStar1968} and local \cite{TEBPlanner2017} path planning, and motion control. Implementation descriptions are necessarily brief due to space limitations; however, further details can be found in the accompanying technical report \cite{AutoDRIVEReport2021}.
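To make the planning stage concrete, the global planner \cite{AStar1968} amounts to a graph search over the occupancy-grid map. The snippet below is a minimal, self-contained sketch of 4-connected A* written for exposition only; it is not the planner code shipped with the ecosystem.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Minimal 4-connected A* on an occupancy grid (0 = free, 1 = occupied),
    with the Manhattan distance as an admissible heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    tie = itertools.count()                      # tie-breaker for the heap
    frontier = [(h(start), next(tie), 0, start, None)]
    came_from, g_best = {}, {start: 0}
    while frontier:
        _, _, g, cur, parent = heapq.heappop(frontier)
        if cur in came_from:
            continue                             # already expanded
        came_from[cur] = parent
        if cur == goal:                          # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and g + 1 < g_best.get(nxt, 1 << 30)):
                g_best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), g + 1, nxt, cur))
    return None                                  # goal unreachable
```

`astar(grid, start, goal)` returns the cell sequence of a shortest collision-free path, or `None` when the goal is unreachable.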
\section{Virtual Prototyping and Testing}
\label{Section: Virtual Prototyping and Testing}
Virtual prototypes help expedite the design process by validating the designs against system requirements through simulation, and suggesting design revisions at an early stage.
The scaled autonomous vehicle system was virtually prototyped and tested in three phases. First, the mechanical specifications, motions and fit were carefully analyzed using a parametric computer aided design (CAD) assembly of the system in conjunction with the physical modeling approach for multi-body dynamic systems (refer Fig. \hyperref[fig4]{\ref*{fig4}-A}). In parallel, the electronic sub-systems were prototyped using the physical modeling approach, and also by adopting the electronic design automation (EDA) workflow (refer Fig. \hyperref[fig4]{\ref*{fig4}-B}). Next, the firmware for low-level control (front wheel steering angle and rear wheel velocity) of the vehicle was verified to produce reliable results (within a specified tolerance of $3\times10^{-2}$~rad for steering angle and $3\times10^{-1}$~rad/s for wheel velocity) through model-in-the-loop (MIL) and software-in-the-loop (SIL) testing (refer Fig. \hyperref[fig4]{\ref*{fig4}-C}).
The knowledge gained through this process was used to update the AutoDRIVE Simulator (refer Fig. \hyperref[fig4]{\ref*{fig4}-D}) from its initial version discussed in \cite{AutoDRIVESimulator2021, AutoDRIVESimulatorReport2020} to the one described in \cite{AutoDRIVEEcosystem2022, AutoDRIVEReport2021}. The updated simulator was then employed for verification and validation of the individual ADSS modules, and finally, the integrated autonomous parking solution was also verified using the same toolchain (refer Fig. \hyperref[fig5]{\ref*{fig5}-A}). Particularly, we tested the vehicle in multiple environments, which included unit tests for validating the SLAM, odometry, localization, planning and control algorithms, followed by verification of the integrated pipeline with and without the addition of dynamic obstacles, which were absent while mapping the environment. The autonomous navigation behavior was analyzed for 5 sample trials and verified to fit within an acceptable path-tracking tolerance of $2.5\times10^{-2}$~m; the acceptable parking pose tolerance was set to be $5\times10^{-2}$~m for the linear positions in the X and Y directions and $8.73\times10^{-2}$~rad for the angular orientation about the Z-axis.
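The pass/fail criterion used in these trials can be phrased as a simple predicate on the terminal vehicle pose; the function below is our illustrative formalization of the quoted tolerances, not an AutoDRIVE API:

```python
import math

def within_parking_tolerance(pose, goal, lin_tol=5e-2, ang_tol=8.73e-2):
    """Check a terminal pose (x [m], y [m], yaw [rad]) against the goal pose,
    using the linear and angular tolerances quoted above; the yaw error is
    wrapped into [-pi, pi] before comparison."""
    dx, dy = pose[0] - goal[0], pose[1] - goal[1]
    dyaw = (pose[2] - goal[2] + math.pi) % (2.0 * math.pi) - math.pi
    return abs(dx) <= lin_tol and abs(dy) <= lin_tol and abs(dyaw) <= ang_tol
```

The yaw wrapping matters near the $\pm\pi$ branch cut, where a naive difference would reject an otherwise well-parked vehicle.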
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Figure_4.pdf}
\caption{Development and system integration of scaled autonomous vehicle: [A] mechanical assembly; [B] electronic schematic; [C] MBD workflow depicting MIL, SIL, PIL, HIL and VIL testing of vehicle firmware; [D] virtual prototype in AutoDRIVE Simulator; [E] physical prototype in AutoDRIVE Testbed.}
\label{fig4}
\end{figure*}
\section{Hybrid Prototyping and Testing}
\label{Section: Hybrid Prototyping and Testing}
All models or virtual prototypes involve certain degrees of abstraction, ranging from model fidelity to simulation settings, and as such, cannot be treated as perfect representations of their real-world counterparts. Therefore, once the virtual prototyping and preliminary testing of the system have been accomplished, the next step is to prototype and validate it in a hybrid fashion (partly virtual and partly physical), focusing more on high-level system integration. This method of hybrid prototyping and testing is extremely beneficial since it follows a gradual transition from simulation to reality, thereby enabling a more faithful system verification framework and providing room for potential design revisions even before complete physical prototyping is accomplished.
The scaled vehicle system was subjected to hybrid testing by running processor-in-the-loop (PIL), hardware-in-the-loop (HIL) and vehicle-in-the-loop (VIL) tests on the embedded firmware to confirm minimum deviation from the MIL and SIL results, specified by the same tolerance values of 3e-2 rad for steering angle and 3e-1 rad/s for wheel velocity (refer Fig. \hyperref[fig4]{\ref*{fig4}-C}). The performance of the integrated autonomous vehicle system was then validated using hybrid testing in two phases.
First, we deployed the ADSS on the physical vehicle's on-board computer, which was interfaced with AutoDRIVE Simulator to receive live sensor feed from the virtual vehicle, process it and generate appropriate control commands, and finally relay these commands back to the simulated vehicle. Specifically, for the autonomous parking solution (refer Fig. \hyperref[fig5]{\ref*{fig5}-A}), we deployed and tested each of the SLAM, odometry, localization, planning and control algorithms for satisfactory performance. This was naturally followed by deployment and validation of the integrated pipeline for accomplishing reliable (within a specified tolerance of 2.5e-2 m) source-to-goal navigation (within a goal pose tolerance of 5e-2 m and 8.73e-2 rad) in different environments, wherein a subset of cases included dynamic obstacles as discussed earlier.
Next, we collected real-world sensor data using AutoDRIVE Testbed and replayed it as a real-time stimulus to the ADSS deployed on the physical vehicle's on-board computer running in-the-loop with AutoDRIVE Simulator. This way, we increased the ``real-world'' component of the hybrid test and verified the autonomous parking solution for expected performance (within the same tolerance values as mentioned earlier). Particularly, the real-world data being collected/replayed was the occupancy-grid map of the environment built by executing the SLAM module on the physical vehicle, which inherently served as a unit test of this module in real-world conditions. The simulated vehicle then had to localize against this real-world map while driving in the virtual scene and navigate autonomously from the source to the goal (parking) pose, which further tested the robustness of the integrated pipeline against minor variations in the environment and/or vehicle behavior.
\section{Physical Prototyping and Testing}
\label{Section: Physical Prototyping and Testing}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Figure_5.pdf}
\caption{Verification and validation of scaled autonomous vehicle performance: [A] virtual/hybrid and [B] physical validation of (i) integrated system, unit testing of (ii) SLAM, (iii) odometry, (iv) localization, (v) planning and control modules in AutoDRIVE Simulator/Testbed; [C] repeatability/reliability analysis represented as mean and standard deviation of 5 trials for each deployment type with acceptable trajectory tolerance in green and parking tolerance in purple.}
\label{fig5}
\end{figure*}
Once the system confirms satisfactory performance under hybrid testing conditions, the next and final stage in mechatronic system development is physical prototyping and testing (refer Fig. \hyperref[fig4]{\ref*{fig4}-E}). In order to physically validate the modular autonomy application (refer Fig. \hyperref[fig5]{\ref*{fig5}-B}), we initially carried out unit tests to confirm the performance of each of the SLAM, odometry, localization, planning and control algorithms, followed by deployment of the integrated stack for the autonomous parking application (refer Fig. \hyperref[fig5]{\ref*{fig5}-C}). The vehicle was confirmed to exhibit reliable (within a specified tolerance of 2.5e-2 m) source-to-goal navigation (within a goal pose tolerance of 5e-2 m and 8.73e-2 rad). Again, to test the robustness of the ADSS, we introduced dynamic obstacles that were not present while the environment mapping was performed.
\section{Conclusion}
\label{Section: Conclusion}
In this work, we presented an extended V-model fostering the mechatronics approach of system design, verification and validation for autonomous vehicles. Further, we discussed the design philosophy of the AutoDRIVE ecosystem, which is to exploit and promote the mechatronics approach for autonomous vehicle development across scales and inculcate a habit of following it from academic education and research to industrial deployments. We also demonstrated the methodical adoption of the mechatronics approach for designing, developing and validating a scaled autonomous vehicle in the context of a detailed case study pertaining to autonomous parking using a modular probabilistic framework, including both qualitative and quantitative remarks. We showed that the design and development as well as the verification and validation of the scaled autonomous vehicle with regard to the aforementioned case study could be successfully accomplished within a stringent time-frame of about one month \cite{AutoDRIVEReport2021}. It is to be noted that although the exact timeline of any multidisciplinary project may vary depending upon factors such as the skill set, experience and number of individuals involved, lead time in the supply chain, etc., the mechatronics approach proves to be efficient in terms of minimizing design-development iterations by virtue of synergistic integration in a concurrent engineering framework. This leaves room for the rectification of any design issues early in the development cycle, thereby increasing the chances of successful verification and validation with minimal loss of time and resources.
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
It is well known that solving large-scale eigenvalue problems is a fundamental task in modern science and engineering.
Among these eigenvalue problems, there exist many nonlinear eigenvalue problems
\cite{Bao,BaoDu,CancesChakirMaday,ChenGongHeYangZhou,ChenGongZhou,ChenHeZhou,KohnSham,Martin,ParrYang,SulemSulem}.
However, it is not an easy task to solve the high-dimensional nonlinear eigenvalue problems which arise in the
physical and chemical sciences.
The multigrid method and other efficient preconditioners provide optimal-order algorithms for solving
boundary value problems, since they attain the theoretical accuracy with computational work that scales linearly with the number of unknowns.
We refer the interested reader to the papers by
Bramble and Zhang \cite{BrambleZhang}, Scott and Zhang \cite{ScottZhang} and Xu \cite{Xu},
and to the books by Bramble \cite{Bramble}, Brenner and Scott \cite{BrennerScott}, Hackbusch \cite{Hackbusch_Book},
McCormick \cite{McCormick} and Shaidurov \cite{Shaidurov}.
Recently, we developed a type of
multigrid method for linear eigenvalue problems \cite{LinXie,LinXie_Multigrid,Xie_Steklov,Xie_Nonconforming,Xie_JCP}.
The aim of this paper is to present a type of multigrid
scheme for nonlinear
eigenvalue problems based on the multilevel correction method \cite{LinXie}.
With this method, solving the nonlinear eigenvalue problem is not
more difficult than solving the corresponding linear boundary value problem.
The multigrid method for the nonlinear eigenvalue problem is based on a
series of finite element spaces with different levels of accuracy,
which can be built in the same way as the multilevel
method for boundary value problems \cite{Xu}.
It is worth pointing out that, besides the multigrid method,
other types of numerical algorithms, such as BPX multilevel preconditioners,
the algebraic multigrid method and domain decomposition preconditioners \cite{BrennerScott},
can also act as the linear algebraic solvers within the
multigrid method for the nonlinear eigenvalue problem.
The corresponding error and computational work estimates of the proposed multigrid
scheme for the nonlinear eigenvalue problem will be analyzed. Based
on this analysis, the new method obtains optimal errors with almost optimal
computational work. The eigenvalue
multigrid procedure can be described as follows: (1)\
solve the nonlinear eigenvalue problem in the coarsest finite element space;
(2)\ solve an additional linear boundary value problem with the multigrid method on the refined mesh, using
the previously obtained eigenvalue multiplied by the corresponding
eigenfunction as the load vector; (3)\ solve the nonlinear eigenvalue problem
again on the finite element space which is constructed by combining
the coarsest finite element space with the eigenfunction
approximation obtained in step (2). Then go to step (2) for the next loop until the stopping criterion is met.
In this method, solving the nonlinear eigenvalue problem in the finest
finite element space is replaced by solving a series of linear boundary value problems with the multigrid scheme
in the corresponding series of finite element spaces and a series of nonlinear
eigenvalue problems in the coarsest finite element space. Thus this multigrid method
can improve the overall efficiency of solving nonlinear eigenvalue problems.
An outline of the paper goes as follows. In Section 2, we introduce
the finite element method for the nonlinear eigenvalue problem and some
assumptions used in this paper. Two correction steps are given in Sections
3 and 4, based on the fixed-point iteration and the Newton iteration, respectively.
In Section 5, we propose a type of multigrid
algorithm for solving the nonlinear eigenvalue problem by the finite element method.
Section 6 is devoted to estimating the computational work of the multigrid method defined in Section 5.
Some concluding remarks are given in the last section.
\section{Finite element method for nonlinear eigenvalue problem}
In this section, we introduce the finite element method for the nonlinear
eigenvalue problem, some notation and the error estimates of
the finite element approximation for eigenvalue problems.
The letter $C$ (with or without subscripts) denotes a generic
positive constant which may be different at its different occurrences through the paper.
For convenience, the symbols $\lesssim$, $\gtrsim$ and $\approx$
will be used in this paper. The notations $x_1\lesssim y_1$, $x_2\gtrsim y_2$
and $x_3\approx y_3$ mean that $x_1\leq C_1y_1$, $x_2 \geq c_2y_2$
and $c_3x_3\leq y_3\leq C_3x_3$ for some constants $C_1, c_2, c_3$
and $C_3$ that are independent of mesh sizes (see, e.g., \cite{Xu}).
We use the standard notation for Sobolev spaces $W^{s,p}(\Omega)$
and their associated norms, semi-norms
\cite{BrennerScott,Ciarlet}. For $p=2$, denote $H^s(\Omega)=W^{s,2}(\Omega)$
and $H_0^1(\Omega)=\{v\in H^1(\Omega): v|_{\partial\Omega}=0\}$, where
$v|_{\partial\Omega}$ is understood in the sense of trace, $\|\cdot\|_{s,\Omega}=\|\cdot\|_{s,2,\Omega}$,
and $(\cdot,\cdot)$ is the standard $L^2(\Omega)$ inner product.
In this paper, we are concerned with the following nonlinear eigenvalue problem:\\
Find $(\lambda,u)$ such that
\begin{equation}\label{Nonlinear_Eigenvalue_Problem}
\left\{
\begin{array}{rcl}
-\Delta u+f(x,u)&=&\lambda u,\ \ \ {\rm in}\ \Omega,\\
u&=&0,\ \ \ \ \ {\rm on}\ \partial\Omega,\\
\int_{\Omega}u^2d\Omega&=&1,
\end{array}
\right.
\end{equation}
where $\Omega\subset \mathcal{R}^d$ denotes the computing domain and $f(x,u)$
is a smooth enough function such that the eigenvalue problem (\ref{Nonlinear_Eigenvalue_Problem})
has only real eigenvalues.
In this paper, we set $V=H_0^1(\Omega)$.
For the aim of finite element discretization, we define the corresponding
weak eigenvalue problem as follows:\\
Find $(\lambda,u)\in \mathcal{R}\times V$ such that $b(u,u)=1$ and
\begin{eqnarray}
a(u,v)&=&\lambda b(u,v),\quad \forall v\in V, \label{weak_problem}
\end{eqnarray}
where
\begin{eqnarray*}
a(u,v):=\int_{\Omega}\big(\nabla u\nabla v+f(x,u)v\big)d\Omega,\ \ \
b(u,v):=\int_{\Omega}uvd\Omega.
\end{eqnarray*}
Now, let us define the finite element approximations of the problem
(\ref{weak_problem}). First we generate a shape-regular
decomposition of the computing domain $\Omega\subset \mathcal{R}^d\
(d=2,3)$ into triangles or rectangles for $d=2$ (tetrahedrons or
hexahedrons for $d=3$). The diameter of a cell $K\in\mathcal{T}_h$
is denoted by $h_K$. The mesh diameter $h$ describes the maximum
diameter of all cells $K\in\mathcal{T}_h$. Based on the mesh
$\mathcal{T}_h$, we can construct the linear finite element space denoted by
$V_h\subset V$. In order to apply multigrid scheme, we
start the process on the original mesh $\mathcal{T}_H$ with the mesh
size $H$ and the original coarse linear finite element space $V_H$
defined on the mesh $\mathcal{T}_H$. We assume that
$V_h\subset V$ is a family of finite-dimensional spaces that satisfy
the following assumption:\\
For any $w \in V$
\begin{eqnarray}\label{Approximation_Property}
\lim_{h\rightarrow0}\inf_{v\in V_h}\|w-v\|_1 = 0.
\end{eqnarray}
The standard finite element method is to solve the following eigenvalue problem:\\
Find $(\bar\lambda_h, \bar u_h)\in \mathcal{R}\times V_h$ such that
$b(\bar u_h,\bar u_h)=1$ and
\begin{eqnarray}\label{weak_problem_Discrete}
a(\bar u_h,v_h)&=&\bar\lambda_hb(\bar u_h,v_h),\quad\ \ \ \forall v_h\in V_h.
\end{eqnarray}
Then we define
\begin{eqnarray}
\delta_h(u)=\inf_{v_h\in V_h}\|u-v_h\|_1.
\end{eqnarray}
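To make the discrete nonlinear eigenvalue problem (\ref{weak_problem_Discrete}) concrete, the following sketch solves a one-dimensional model instance with $f(x,u)=u^3$ on $\Omega=(0,1)$ by a simple self-consistent (fixed-point) iteration. It uses linear finite elements on a uniform mesh with a lumped mass matrix, so the generalized problem reduces to a standard symmetric one; the model nonlinearity and the iteration are illustrative assumptions on our part, not part of the paper's analysis.

```python
import numpy as np

def solve_discrete_nonlinear_eigenproblem(N=200, tol=1e-10, max_iter=200):
    """Smallest eigenpair of -u'' + u^3 = lam*u on (0,1), u(0)=u(1)=0,
    normalized so that the lumped-mass analogue of b(u,u) equals 1."""
    h = 1.0 / (N + 1)
    # Stiffness matrix of linear FE on a uniform mesh: (1/h)*tridiag(-1, 2, -1)
    K = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h
    u = np.ones(N)
    u /= np.sqrt(h * (u @ u))            # enforce b(u, u) = 1
    lam = 0.0
    for _ in range(max_iter):
        # Fixed-point linearization of the cubic term: (f(x,u)w, v) ~ h*diag(u^2);
        # with the lumped mass matrix M = h*I the generalized eigenproblem
        # reduces to a standard symmetric one after dividing by h.
        A = (K + h * np.diag(u ** 2)) / h
        vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
        lam_new = vals[0]
        u_new = vecs[:, 0] / np.sqrt(h * (vecs[:, 0] @ vecs[:, 0]))
        converged = abs(lam_new - lam) < tol
        lam, u = lam_new, u_new
        if converged:
            break
    return lam, u, h
```

Since taking $v=u$ in the weak form gives $\lambda=\int_\Omega|\nabla u|^2+\int_\Omega u^4$ with $b(u,u)=1$, the converged eigenvalue for this model is approximately $\pi^2+1.5$.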
For generality, we only state the following assumptions about the error estimate for the eigenpair
approximation $(\bar\lambda_h,\bar u_h)$ defined by (\ref{weak_problem_Discrete}) (see, e.g., \cite{CancesChakirMaday,ChenHeZhou}).
{\bf Assumption A1}:
The eigenpair approximation $(\bar\lambda_h,\bar u_h)$ of (\ref{weak_problem_Discrete}) has the following
error estimates
\begin{eqnarray}
\|u-\bar u_h\|_1 &\lesssim& \delta_h(u),\label{Error_Estimate_Eigenfunction}\\
|\lambda-\bar\lambda_h|+\|u-\bar u_h\|_0&\lesssim & \eta_a(V_h)\|u-\bar u_h\|_1,\label{Error_Estimate_Eigenvalue}
\end{eqnarray}
where $\eta_a(V_h)$ depends on the finite dimensional space $V_h$ and has the following property
\begin{eqnarray}\label{Property_Eta_h}
\lim_{h\rightarrow 0}\eta_a(V_h)=0, \ \ \ \eta_a(\widetilde{V}_{h})\leq \eta_a(V_h)\ \ {\rm if}\
V_h\subset \widetilde{V}_h\subset V.
\end{eqnarray}
{\bf Assumption A2}:\ Assume $V^h$ is a subspace of $V_h$.
Let us define the eigenpair approximation $(\lambda^h,u^h)$ by solving the eigenvalue problem as follows:
Find $(\lambda^h,u^h)\in\mathcal{R}\times V^h$ such that $b(u^h,u^h)=1$ and
\begin{eqnarray}\label{Nonlinear_Eigenvalue_Problem_subspace}
a(u^h,v^h)&=&\lambda^hb(u^h,v^h),\ \ \ \ \forall v^h\in V^h.
\end{eqnarray}
Then the following error estimates hold
\begin{eqnarray}
\|\bar{u}_h-u^h\|_1 &\lesssim& \delta_h(\bar{u}_h),\label{Error_Estimate_Eigenfunction_Subspace}\\
|\bar{\lambda}_h-\lambda^h|+\|\bar{u}_h-u^h\|_0&\lesssim & \eta_a(V^h)\|\bar{u}_h-u^h\|_1,
\label{Error_Estimate_Eigenvalue_Subspace}
\end{eqnarray}
where
\begin{eqnarray}\label{Detlat_Definition_Subspace}
\delta_h(\bar{u}_h):=\inf_{v^h\in V^h}\|\bar{u}_h-v^h\|_1.
\end{eqnarray}
In order to design and analyze the multilevel correction method for the nonlinear eigenvalue problems, we also
need the following assumptions for the nonlinear function $f(\cdot,\cdot):\mathcal{R}\times V\rightarrow \mathcal{R}$.
{\bf Assumption B}:
The nonlinear function $f(x,\cdot)$ has the following estimate
\begin{eqnarray}\label{Nonlinear_Estimate_Fix}
|(f(x,w)-f(x,v),\psi)|\lesssim \|w-v\|_0\|\psi\|_1,\ \ \ \forall w\in V, \ \ \forall v\in V,\ \ \forall \psi\in V.
\end{eqnarray}
{\bf Assumption C}:
The nonlinear function $f(x,\cdot)$ has the following estimate
\begin{eqnarray}\label{Nonlinear_Estimate_Newton}
|(f(x,w)-f(x,v)-f_v(x,v)(w-v),\psi)|&\lesssim& \|w-v\|_0\|\psi\|_1,\ \ \forall w\in V,\nonumber\\
&&\ \ \forall v\in V,\ \ \forall \psi\in V.
\end{eqnarray}
For more discussions about the function $f(x,\cdot)$, please refer to
\cite{CancesChakirMaday,ChenGongHeYangZhou,Xu_Nonlinear} and the papers cited therein.
\section{One correction step based on fixed-point iteration}
In this section, we introduce a type of correction step based on the fixed-point iteration
to improve the accuracy of the current eigenpair approximation.
This correction step consists of solving an auxiliary linear boundary value problem with the multigrid method
in the finer finite element space and a nonlinear eigenvalue problem on the
coarsest finite element space.
Assume we have obtained an eigenpair approximation
$(\lambda_{h_k},u_{h_k})\in\mathcal{R}\times V_{h_k}$. Now we
introduce a type of correction step to improve the accuracy of the
current eigenpair approximation $(\lambda_{h_k},u_{h_k})$. Let
$V_{h_{k+1}}\subset V$ be a finer finite element space such that
$V_{h_k}\subset V_{h_{k+1}}$. Based on this finer finite element space,
we define the following correction step.
\begin{algorithm}\label{Correction_Step_Fix}
One Correction Step based on Fixed-point Iteration
\begin{enumerate}
\item Define the following auxiliary boundary value problem:
Find $\widehat{u}_{h_{k+1}}\in V_{h_{k+1}}$ such that
\begin{eqnarray}\label{aux_problem_fix}
\hskip-0.5cm (\nabla\widehat{u}_{h_{k+1}},\nabla v_{h_{k+1}})=
\lambda_{h_k}b(u_{h_k},v_{h_{k+1}})-(f(x,u_{h_k}),v_{h_{k+1}}), \ \forall v_{h_{k+1}}\in V_{h_{k+1}}.
\end{eqnarray}
Solve this equation with multigrid method to obtain an approximation
$\widetilde{u}_{h_{k+1}}\in V_{h_{k+1}}$ with error estimate
\begin{eqnarray}\label{Multigrid_Accuracy}
\|\widehat{u}_{h_{k+1}}-\widetilde{u}_{h_{k+1}}\|_a\leq C\eta_a(V_{h_k})\delta_{h_k}(u).
\end{eqnarray}
\item Define a new finite element
space $V_{H,h_{k+1}}=V_H+{\rm span}\{\widetilde{u}_{h_{k+1}}\}$ and solve
the following eigenvalue problem:
Find $(\lambda_{h_{k+1}},u_{h_{k+1}})\in\mathcal{R}\times V_{H,h_{k+1}}$ such
that $b(u_{h_{k+1}},u_{h_{k+1}})=1$ and
\begin{eqnarray}\label{Eigen_Augment_Problem_fix}
a(u_{h_{k+1}},v_{H,h_{k+1}})&=&\lambda_{h_{k+1}} b(u_{h_{k+1}},v_{H,h_{k+1}}),\ \ \
\forall v_{H,h_{k+1}}\in V_{H,h_{k+1}}.
\end{eqnarray}
\end{enumerate}
Summarize the above two steps as
\begin{eqnarray*}
(\lambda_{h_{k+1}},u_{h_{k+1}})={\it
Correction}(V_H,\lambda_{h_k},u_{h_k},V_{h_{k+1}}).
\end{eqnarray*}
\end{algorithm}
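Step 1 of Algorithm \ref{Correction_Step_Fix} is an ordinary linear (Poisson-type) solve. The sketch below assembles and solves the discrete analogue of (\ref{aux_problem_fix}) for the 1D model problem with $f(x,u)=u^3$; a direct solver stands in for the multigrid solver required by the algorithm, and the lumped-mass finite element discretization is an illustrative assumption.

```python
import numpy as np

def correction_linear_solve(lam_k, u_k):
    """Discrete analogue of the auxiliary problem
       (grad u_hat, grad v) = lam_k * b(u_k, v) - (f(x, u_k), v)
    on (0,1) with f(x,u) = u^3, lumped-mass linear FE on a uniform mesh.
    A direct solve stands in for the multigrid solver of the paper."""
    N = len(u_k)
    h = 1.0 / (N + 1)
    # Stiffness matrix for the Laplacian part
    K = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h
    # Lumped-mass load vector: h * (lam_k * u_k - u_k^3)
    rhs = h * (lam_k * u_k - u_k ** 3)
    u_hat = np.linalg.solve(K, rhs)
    return u_hat, K, rhs
```

The resulting $\widetilde u_{h_{k+1}}$ would then be combined with the coarse space $V_H$ in step 2; note that only a fixed-accuracy inner solve is needed, as expressed by (\ref{Multigrid_Accuracy}).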
\begin{theorem}\label{Error_Estimate_One_Correction_Theorem_Fix}
Assume {\bf Assumptions A1}, {\bf A2} and {\bf B} hold.
The resultant approximation $(\lambda_{h_{k+1}},u_{h_{k+1}})\in\mathcal{R}\times V_{h_{k+1}}$ by Algorithm \ref{Correction_Step_Fix}
and the eigenpair approximation $(\bar{\lambda}_{h_{k+1}},\bar{u}_{h_{k+1}})$ by the direct finite element method
in $V_{h_{k+1}}$ have the following estimates
\begin{eqnarray}
\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1 &\lesssim& \varepsilon_{h_{k+1}}(u),
\label{Estimate_u_u_h_{k+1}_fix}\\
|\bar{\lambda}_{h_{k+1}}-\lambda_{h_{k+1}}|+\|\bar u_{h_{k+1}}-u_{h_{k+1}}\|_0
&\lesssim&\eta_a(V_H)\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1,
\label{Estimate_u_h_{k+1}_Nagative_fix}\\
|(f(x,\bar{u}_{h_{k+1}})-f(x,u_{h_{k+1}}),v)|
&\lesssim&\eta_a(V_H)\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1\|v\|_1,
\ \forall v\in V.\label{Nonlinear_Estimate_k+1_fix}
\end{eqnarray}
where $\varepsilon_{h_{k+1}}(u):= \eta_a(V_{h_k})\delta_{h_k}(u)
+\|\bar{u}_{h_k}-u_{h_k}\|_0+|\bar{\lambda}_{h_k}-\lambda_{h_k}|$.
\end{theorem}
\begin{proof}
From (\ref{weak_problem_Discrete}) and (\ref{aux_problem_fix}),
the following inequalities hold for any $v_{h_{k+1}}\in V_{h_{k+1}}$
\begin{eqnarray*}
&&\big(\nabla(\bar{u}_{h_{k+1}}-\widehat{u}_{h_{k+1}}),\nabla v_{h_{k+1}}\big)\nonumber\\
&=&b(\bar{\lambda}_{h_{k+1}}\bar{u}_{h_{k+1}}-\lambda_{h_k}u_{h_k},v_{h_{k+1}})+
(f(x,u_{h_k})-f(x,\bar{u}_{h_{k}}),v_{h_{k+1}})\nonumber\\
&\lesssim&\big(|\bar{\lambda}_{h_{k+1}}-\lambda_{h_k}|
+\|\bar{u}_{h_{k+1}}-u_{h_k}\|_0\big)\|v_{h_{k+1}}\|_1\nonumber\\
&\lesssim&\big(|\bar{\lambda}_{h_{k+1}}-\bar{\lambda}_{h_k}|+|\bar{\lambda}_{h_k}-\lambda_{h_k}|
+\|\bar{u}_{h_{k+1}}-\bar{u}_{h_k}\|_0+\|\bar{u}_{h_k}-u_{h_k}\|_0\big)\|v_{h_{k+1}}\|_1\nonumber\\
&\lesssim& \big(\eta_a(V_{h_k})\delta_{h_k}(u)
+\|\bar{u}_{h_k}-u_{h_k}\|_0+|\bar{\lambda}_{h_k}-\lambda_{h_k}|\big)\|v_{h_{k+1}}\|_1.
\end{eqnarray*}
Then we have
\begin{eqnarray}\label{Estimate_u_tilde_u_h_{k+1}_fix}
\|\bar{u}_{h_{k+1}}-\widehat{u}_{h_{k+1}}\|_1
\lesssim \eta_a(V_{h_k})\delta_{h_k}(u)+\|\bar{u}_{h_k}-u_{h_k}\|_0+|\bar{\lambda}_{h_k}-\lambda_{h_k}|.
\end{eqnarray}
Combining (\ref{Estimate_u_tilde_u_h_{k+1}_fix}) and the accuracy (\ref{Multigrid_Accuracy}) leads to the
following estimate
\begin{eqnarray}\label{Error_tilde_u_h_{k+1}_u_final_fix}
\|\bar{u}_{h_{k+1}}-\widetilde{u}_{h_{k+1}}\|_1\lesssim
\eta_a(V_{h_k})\delta_{h_k}(u)+\|\bar{u}_{h_k}-u_{h_k}\|_0+|\bar{\lambda}_{h_k}-\lambda_{h_k}|.
\end{eqnarray}
Now we come to estimate the error for the eigenpair solution
$(\lambda_{h_{k+1}},u_{h_{k+1}})$ of problem (\ref{Eigen_Augment_Problem_fix}).
Based on {\bf Assumptions A1}, {\bf A2} and {\bf B}, and the definition of $V_{H,h_{k+1}}$,
the following estimates hold
\begin{eqnarray}\label{Error_u_u_h_{k+1}_fix}
\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1\lesssim \inf_{v_{H,h_{k+1}}\in
V_{H,h_{k+1}}}\|\bar{u}_{h_{k+1}}-v_{H,h_{k+1}}\|_1
\lesssim \|\bar{u}_{h_{k+1}}-\widetilde{u}_{h_{k+1}}\|_1,
\end{eqnarray}
and
\begin{eqnarray}
|\bar{\lambda}_{h_{k+1}}-\lambda_{h_{k+1}}|+\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_0
&\lesssim& \eta_a(V_{H,h_{k+1}})\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1,\label{Error_u_u_h_{k+1}_Negative_Fix}\\
|(f(x,\bar{u}_{h_{k+1}})-f(x,u_{h_{k+1}}),v)| &\lesssim&
\eta_a(V_{H,h_{k+1}})\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1\|v\|_1,\nonumber\\
&&\ \ \ \ \ \ \quad\quad\quad\quad\quad\quad\ \forall v\in V.\label{Error_Nonlinear_k+1_Fix}
\end{eqnarray}
From (\ref{Property_Eta_h}), (\ref{Error_tilde_u_h_{k+1}_u_final_fix}), (\ref{Error_u_u_h_{k+1}_fix}),
(\ref{Error_u_u_h_{k+1}_Negative_Fix}) and (\ref{Error_Nonlinear_k+1_Fix}), we can obtain the desired results
(\ref{Estimate_u_u_h_{k+1}_fix}), (\ref{Estimate_u_h_{k+1}_Nagative_fix}) and
(\ref{Nonlinear_Estimate_k+1_fix}).
\end{proof}
\section{One correction step based on Newton iteration}
In this section, we present another type of correction step, based on the Newton iteration
(which usually has a better convergence property),
to improve the accuracy of the given eigenpair approximation.
This correction method also consists of solving an auxiliary linear
boundary value problem with the multigrid method
in the finer finite element space and a nonlinear eigenvalue problem on the
coarsest finite element space.
Similarly, assume we have obtained an eigenpair approximation
$(\lambda_{h_k},u_{h_k})\in\mathcal{R}\times V_{h_k}$.
Let $V_{h_{k+1}}\subset V$ be a finer finite element space such that
$V_{h_k}\subset V_{h_{k+1}}$.
In this section, we define the bilinear form $a_{h_k}(w,v)$ as follows
\begin{eqnarray}\label{Definition_a_h_k}
a_{h_k}(w,v)=(\nabla w,\nabla v)+(f_u(x,u_{h_k})w,v).
\end{eqnarray}
Here, we assume the linearized operator $L_u:=-\Delta +f_u(x,u)$ is nonsingular and
$u_{h_k}$ is close enough to $u$
such that the following properties hold \cite[Lemma 2.1]{Xu_Nonlinear}
\begin{eqnarray}
\sup_{0\neq v_{h_{k+1}}\in V_{h_{k+1}}}\frac{a_{h_k}(w_{h_{k+1}},v_{h_{k+1}})}{\|v_{h_{k+1}}\|_1}
&\gtrsim& \|w_{h_{k+1}}\|_1,\ \ \ \ \ \forall w_{h_{k+1}}\in V_{h_{k+1}},\label{Inf_Sup_Condition}\\
|a_{h_k}(w,v)|&\lesssim& \|w\|_1\|v\|_1,\ \ \ \ \ \forall w\in V,\ \forall v\in V.\label{Boundednness}
\end{eqnarray}
Now we define the correction step as follows.
\begin{algorithm}\label{Correction_Step_Newton}
One Correction Step based on Newton Iteration
\begin{enumerate}
\item Define the following auxiliary boundary value problem:
Find $\widehat{e}_{h_{k+1}}\in V_{h_{k+1}}$ such that
\begin{eqnarray}\label{aux_problem}
a_{h_k}(\widehat{e}_{h_{k+1}},v_{h_{k+1}})&=&\lambda_{h_k}b(u_{h_k},v_{h_{k+1}})-(\nabla u_{h_k},\nabla v_{h_{k+1}})
,\nonumber\\
&&\quad\quad\ -(f(x,u_{h_k}),v_{h_{k+1}}), \ \ \forall v_{h_{k+1}}\in V_{h_{k+1}}.
\end{eqnarray}
Solve this equation with multigrid method \cite{Shaidurov,Xu_Two_Grid} to obtain an approximation
$\widetilde{e}_{h_{k+1}}\in V_{h_{k+1}}$ with error estimate
$\|\widehat{e}_{h_{k+1}}-\widetilde{e}_{h_{k+1}}\|_a\leq C\eta_a(V_{h_k})\delta_{h_k}(u)$
and set $\widetilde{u}_{h_{k+1}}=u_{h_k}+\widetilde{e}_{h_{k+1}}$.
\item Define a new finite element
space $V_{H,h_{k+1}}=V_H+{\rm span}\{\widetilde{u}_{h_{k+1}}\}$ and solve
the following eigenvalue problem:
Find $(\lambda_{h_{k+1}},u_{h_{k+1}})\in\mathcal{R}\times V_{H,h_{k+1}}$ such
that $b(u_{h_{k+1}},u_{h_{k+1}})=1$ and
\begin{eqnarray}\label{Eigen_Augment_Problem}
a(u_{h_{k+1}},v_{H,h_{k+1}})&=&\lambda_{h_{k+1}} b(u_{h_{k+1}},v_{H,h_{k+1}}),\ \ \
\forall v_{H,h_{k+1}}\in V_{H,h_{k+1}}.
\end{eqnarray}
\end{enumerate}
Summarize the above two steps as
\begin{eqnarray*}
(\lambda_{h_{k+1}},u_{h_{k+1}})={\it
Correction}(V_H,\lambda_{h_k},u_{h_k},V_{h_{k+1}}).
\end{eqnarray*}
\end{algorithm}
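In the same 1D setting, step 1 of Algorithm \ref{Correction_Step_Newton} solves (\ref{aux_problem}) for a correction $\widehat e_{h_{k+1}}$ with the linearized form $a_{h_k}(\cdot,\cdot)$; for the model nonlinearity $f(x,u)=u^3$ one has $f_u(x,u)=3u^2$. As before, a direct solver stands in for the multigrid solver, and the lumped-mass discretization is an illustrative assumption.

```python
import numpy as np

def newton_correction_step(lam_k, u_k):
    """Discrete analogue of
       a_{h_k}(e_hat, v) = lam_k*b(u_k,v) - (grad u_k, grad v) - (f(x,u_k), v)
    with f(x,u) = u^3, lumped-mass linear FE on a uniform mesh of (0,1)."""
    N = len(u_k)
    h = 1.0 / (N + 1)
    K = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h
    # Linearized operator: stiffness plus (f_u(x,u_k) . , .) with lumped mass
    A = K + h * np.diag(3.0 * u_k ** 2)
    # Residual of the current eigenpair as the load
    rhs = h * lam_k * u_k - K @ u_k - h * u_k ** 3
    e_hat = np.linalg.solve(A, rhs)
    return u_k + e_hat, A, rhs, e_hat
```

Since $K$ is symmetric positive definite and the added diagonal is nonnegative, the discrete linearized operator here is invertible, mirroring the nonsingularity assumption (\ref{Inf_Sup_Condition}).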
\begin{theorem}\label{Error_Estimate_One_Correction_Theorem}
Assume {\bf Assumptions A1}, {\bf A2} and {\bf C} hold.
The resultant approximation $(\lambda_{h_{k+1}},u_{h_{k+1}})\in\mathcal{R}\times V_{h_{k+1}}$ by Algorithm \ref{Correction_Step_Newton}
and the eigenpair approximation $(\bar{\lambda}_{h_{k+1}},\bar{u}_{h_{k+1}})$ by the direct finite element method
in $V_{h_{k+1}}$ have the following estimates
\begin{eqnarray}
&&\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1\lesssim \varepsilon_{h_{k+1}}(u),\label{Estimate_u_u_h_{k+1}}\\
&&|\bar{\lambda}_{h_{k+1}}-\lambda_{h_{k+1}}|+\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_0
\lesssim\eta_a(V_H)\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1,
\label{Estimate_u_h_{k+1}_Nagative}\\
&&|(f(x,\bar{u}_{h_{k+1}})-f(x,u_{h_{k+1}})-f_u(x,u_{h_{k+1}})(\bar{u}_{h_{k+1}}-u_{h_{k+1}}),v)|
\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \lesssim\eta_a(V_H)\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1\|v\|_1,
\ \ \ \forall v\in V,\label{Nonlinear_Estimate_k+1}
\end{eqnarray}
where $\varepsilon_{h_{k+1}}(u):=\eta_a(V_{h_k})\delta_{h_k}(u)+\|\bar{u}_{h_k}-u_{h_k}\|_0
+|\bar{\lambda}_{h_k}-\lambda_{h_k}|$.
\end{theorem}
\begin{proof}
From (\ref{weak_problem_Discrete}) and (\ref{aux_problem}),
the following estimates hold for any $v_{h_{k+1}}\in V_{h_{k+1}}$
\begin{eqnarray}\label{Estimate_1_Fix}
&&a_{h_k}\big(\bar{u}_{h_{k+1}}-u_{h_k}-\widehat{e}_{h_{k+1}},v_{h_{k+1}}\big)\nonumber\\
&=&a_{h_k}(\bar{u}_{h_{k+1}}-u_{h_k},v_{h_{k+1}})-b(\lambda_{h_k}u_{h_k},v_{h_{k+1}})
+(\nabla u_{h_k},\nabla v_{h_{k+1}})\nonumber\\
&&\ \ \ \ \ +(f(x,u_{h_k}),v_{h_{k+1}})\nonumber\\
&=&(\nabla \bar{u}_{h_{k+1}},\nabla v_{h_{k+1}})+(f_u(x,u_{h_k})(\bar{u}_{h_{k+1}}-u_{h_k}),v_{h_{k+1}})
-b(\lambda_{h_k}u_{h_k},v_{h_{k+1}})\nonumber\\
&&\ \ \ \ \ +(f(x,u_{h_k}),v_{h_{k+1}})\nonumber\\
&=&(f(x,u_{h_k})-f(x,\bar{u}_{h_{k+1}})+f_u(x,u_{h_k})(\bar{u}_{h_{k+1}}-u_{h_k}),v_{h_{k+1}})\nonumber\\
&&\ \ \ \ \ +b(\bar{\lambda}_{h_{k+1}}\bar{u}_{h_{k+1}}-\lambda_{h_k}u_{h_k},v_{h_{k+1}})\nonumber\\
&\lesssim&\big(\|\bar{u}_{h_{k+1}}-u_{h_k}\|_0+|\bar{\lambda}_{h_{k+1}}-\lambda_{h_k}|\big)\|v_{h_{k+1}}\|_0\nonumber\\
&\lesssim&\big(\|\bar{u}_{h_{k+1}}-\bar{u}_{h_k}\|_0+\|\bar{u}_{h_k}-u_{h_k}\|_0
+|\bar{\lambda}_{h_{k+1}}-\bar{\lambda}_{h_k}|+|\bar{\lambda}_{h_k}-\lambda_{h_k}|\big)\|v_{h_{k+1}}\|_1\nonumber\\
&\lesssim&\big(\eta_a(V_{h_k})\delta_{h_k}(u)+\|\bar{u}_{h_k}-u_{h_k}\|_0
+|\bar{\lambda}_{h_k}-\lambda_{h_k}|\big)\|v_{h_{k+1}}\|_1.
\end{eqnarray}
Combining (\ref{Inf_Sup_Condition}) and (\ref{Estimate_1_Fix}), we have the following estimates
\begin{eqnarray}\label{Estimate_u_tilde_u_h_{k+1}}
\|\bar{u}_{h_{k+1}}-u_{h_k}-\widehat{e}_{h_{k+1}}\|_1 &\lesssim
& \sup_{0\neq v_{h_{k+1}}\in V_{h_{k+1}}}\frac{a_{h_k}(\bar{u}_{h_{k+1}}-u_{h_k}
-\widehat{e}_{h_{k+1}},v_{h_{k+1}})}{\|v_{h_{k+1}}\|_1}\nonumber\\
&\lesssim& \eta_a(V_{h_k})\delta_{h_k}(u)+\|\bar{u}_{h_k}-u_{h_k}\|_0
+|\bar{\lambda}_{h_k}-\lambda_{h_k}|.
\end{eqnarray}
Then from (\ref{Estimate_u_tilde_u_h_{k+1}}) and the accuracy
$\|\widehat{e}_{h_{k+1}}-\widetilde{e}_{h_{k+1}}\|_1\lesssim \eta_a(V_{h_k})\delta_{h_k}(u)$,
the following inequality holds
\begin{eqnarray}\label{Error_tilde_u_h_{k+1}_u_final}
\|\bar{u}_{h_{k+1}}-\widetilde{u}_{h_{k+1}}\|_1&\lesssim&\eta_a(V_{h_k})\delta_{h_k}(u)+\|\bar{u}_{h_k}-u_{h_k}\|_0
+|\bar{\lambda}_{h_k}-\lambda_{h_k}|.
\end{eqnarray}
Now we come to estimate the error for the eigenpair solution
$(\lambda_{h_{k+1}},u_{h_{k+1}})$ of problem (\ref{Eigen_Augment_Problem}).
Based on {\bf Assumptions A1}, {\bf A2} and {\bf C}, and the definition of $V_{H,h_{k+1}}$,
the following estimates hold
\begin{eqnarray}\label{Error_u_u_h_{k+1}}
\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1\lesssim\inf_{v_{H,h_{k+1}}\in
V_{H,h_{k+1}}}\|\bar{u}_{h_{k+1}}-v_{H,h_{k+1}}\|_1\lesssim
\|\bar{u}_{h_{k+1}}-\widetilde{u}_{h_{k+1}}\|_1,
\end{eqnarray}
and
\begin{eqnarray}
&&|\bar{\lambda}_{h_{k+1}}-\lambda_{h_{k+1}}|+\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_0\lesssim
\eta_a(V_{H,h_{k+1}})\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1,\label{Error_u_u_h_{k+1}_Negative}\\
&&|(f(x,\bar{u}_{h_{k+1}})-f(x,u_{h_{k+1}})-f_u(x,u_{h_{k+1}})(\bar{u}_{h_{k+1}}
-u_{h_{k+1}}),v)|\lesssim \nonumber\\
&&\quad\quad\quad\quad\quad\quad\eta_a(V_{H,h_{k+1}})\|\bar{u}_{h_{k+1}}-u_{h_{k+1}}\|_1\|v\|_1,
\ \ \ \forall v\in V.\label{Error_Nonlinear_k+1}
\end{eqnarray}
From (\ref{Property_Eta_h}), (\ref{Error_tilde_u_h_{k+1}_u_final}), (\ref{Error_u_u_h_{k+1}}),
(\ref{Error_u_u_h_{k+1}_Negative}) and (\ref{Error_Nonlinear_k+1}), the desired results
(\ref{Estimate_u_u_h_{k+1}}), (\ref{Estimate_u_h_{k+1}_Nagative}) and (\ref{Nonlinear_Estimate_k+1})
can be obtained and the proof is complete.
\end{proof}
\section{Multigrid scheme for the eigenvalue problem}
In this section, we introduce a type of multigrid correction
scheme based on the {\it One Correction Step} defined in Algorithms
\ref{Correction_Step_Fix} and \ref{Correction_Step_Newton}.
This type of multigrid method can obtain the same optimal error
estimate as solving the nonlinear eigenvalue problem directly on the finest
finite element space.
In order to implement the multigrid scheme, we define a sequence of triangulations $\mathcal{T}_{h_k}$
of $\Omega$ determined as follows. Suppose $\mathcal{T}_{h_1}$ is produced from $\mathcal{T}_H$
by regular refinement and let $\mathcal{T}_{h_k}$ be obtained
from $\mathcal{T}_{h_{k-1}}$ via regular refinement (produce $\beta^d$ subelements) such that
$$h_k\approx\frac{1}{\beta}h_{k-1},\ \ \ \ k=2,\cdots,n.$$
Based on this sequence of meshes, we construct the corresponding linear finite element spaces such that
\begin{eqnarray}\label{FEM_Space_Series}
V_{H}\subseteq V_{h_1}\subset V_{h_2}\subset\cdots\subset V_{h_n},
\end{eqnarray}
and the following relation of approximation errors hold
\begin{eqnarray}\label{Error_k_k_1}
\delta_{h_k}(u)\approx\frac{1}{\beta}\delta_{h_{k-1}}(u),\ \ \ k=2,\cdots,n.
\end{eqnarray}
\begin{algorithm}\label{Multi_Correction}
Eigenvalue Multigrid Scheme
\begin{enumerate}
\item Construct a series of nested finite element
spaces $V_{h_1}, V_{h_2},\cdots,V_{h_n}$ such that
(\ref{FEM_Space_Series}) and (\ref{Error_k_k_1}) hold.
\item Solve the following nonlinear eigenvalue problem:
Find $(\lambda_{h_1},u_{h_1})\in \mathcal{R}\times V_{h_1}$ such that
$b(u_{h_1},u_{h_1})=1$ and
\begin{eqnarray}\label{Initial_Eigen_Problem}
a(u_{h_1},v_{h_1})&=&\lambda_{h_1}b(u_{h_1},v_{h_1}),\ \ \ \ \forall v_{h_1}\in V_{h_1}.
\end{eqnarray}
\item Do $k=1,\cdots,n-1$\\
Obtain a new eigenpair approximation
$(\lambda_{h_{k+1}},u_{h_{k+1}})\in \mathcal{R}\times V_{h_{k+1}}$
by a correction step defined by Algorithm \ref{Correction_Step_Fix} or \ref{Correction_Step_Newton}
\begin{eqnarray}
(\lambda_{h_{k+1}},u_{h_{k+1}})=Correction(V_H,\lambda_{h_k},u_{h_k},V_{h_{k+1}}).
\end{eqnarray}
End Do
\end{enumerate}
Finally, we obtain an eigenpair approximation
$(\lambda_{h_n},u_{h_n})\in \mathcal{R}\times V_{h_n}$.
\end{algorithm}
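The overall loop of Algorithm \ref{Multi_Correction} can be sketched on the 1D model problem with $f(x,u)=u^3$ as follows. This is only a structural sketch under our own simplifications: the coarse solve uses a small fixed-point iteration, step 1 of the correction is the linear solve of Algorithm \ref{Correction_Step_Fix} with a direct solver in place of multigrid, and step 2 is replaced by a plain Rayleigh-quotient update of the eigenvalue instead of the small eigenproblem on $V_{H,h_{k+1}}$, so it is not the analyzed method.

```python
import numpy as np

def stiffness(N):
    h = 1.0 / (N + 1)
    return (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h, h

def coarse_scf(N, iters=50):
    """Direct fixed-point solve of -u'' + u^3 = lam*u on the coarsest grid."""
    K, h = stiffness(N)
    u = np.ones(N)
    u /= np.sqrt(h * (u @ u))
    lam = 0.0
    for _ in range(iters):
        vals, vecs = np.linalg.eigh((K + h * np.diag(u ** 2)) / h)
        lam, u = vals[0], vecs[:, 0]
        u /= np.sqrt(h * (u @ u))
    return lam, u

def prolong(u):
    """Linear interpolation from N interior points to 2N+1 interior points."""
    N = len(u)
    fine = np.zeros(2 * N + 1)
    fine[1::2] = u                            # coincident nodes
    pad = np.concatenate(([0.0], u, [0.0]))
    fine[0::2] = 0.5 * (pad[:-1] + pad[1:])   # new midpoints
    return fine

def multigrid_eigensolver(N0=7, levels=4):
    lam, u = coarse_scf(N0)
    for _ in range(levels):
        u = prolong(u)
        K, h = stiffness(len(u))
        # Step 1 of the correction: linear solve with the old eigenpair as load
        u = np.linalg.solve(K, h * (lam * u - u ** 3))
        u /= np.sqrt(h * (u @ u))
        # Simplified stand-in for step 2: Rayleigh-quotient update a(u,u)/b(u,u)
        lam = (u @ (K @ u) + h * np.sum(u ** 4)) / (h * (u @ u))
    return lam, u
```

Each level thus costs one linear solve plus cheap updates, which is the mechanism behind the linear-scaling work estimate the paper targets.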
\begin{theorem}\label{Error_MultiGrid_Theorem}
Assume we have conditions of Theorem \ref{Error_Estimate_One_Correction_Theorem_Fix}
for Algorithm \ref{Multi_Correction} with the
correction step defined by Algorithm \ref{Correction_Step_Fix}, or conditions of
Theorem \ref{Error_Estimate_One_Correction_Theorem}
for Algorithm \ref{Multi_Correction} with the
correction step defined by Algorithm \ref{Correction_Step_Newton}.
After implementing Algorithm \ref{Multi_Correction}, the resultant
eigenpair approximation $(\lambda_{h_n},u_{h_n})$ has the following
error estimates
\begin{eqnarray}
\|\bar{u}_{h_n}-u_{h_n}\|_1&\lesssim&\beta^2\eta_a(V_{h_n})\delta_{h_n}(u),\label{Multi_Correction_Err_fun}\\
|\bar{\lambda}_{h_n}-\lambda_{h_n}|+\|\bar{u}_{h_n}-u_{h_n}\|_0&\lesssim&\eta_a(V_{h_n})\delta_{h_n}(u).
\label{Multi_Correction_Err_fun_L2_Eigenvalue}
\end{eqnarray}
under the condition $C\beta\eta_a^2(V_H)<1$, where $C$ is the constant hidden in the inequalities concerned.
\end{theorem}
\begin{proof}
Here we only give the proof for the case of the
correction step defined by Algorithm \ref{Correction_Step_Fix}; the proof for
the case of Algorithm \ref{Correction_Step_Newton} is similar.
From the definition of Algorithm \ref{Multi_Correction}, we know that
$\bar{u}_{h_1}=u_{h_1}$, $\bar{\lambda}_{h_1}=\lambda_{h_1}$.
When $k=2$, from Theorem \ref{Error_Estimate_One_Correction_Theorem_Fix}
and Algorithm \ref{Multi_Correction}, the following
estimates hold
\begin{eqnarray}
\|\bar{u}_{h_2}-u_{h_2}\|_1&\lesssim&\eta_a(V_{h_1})\delta_{h_1}(u),\label{Error_u_h_1_1}\\
|\bar{\lambda}_{h_2}-\lambda_{h_2}|+\|\bar{u}_{h_2}-u_{h_2}\|_0&\lesssim&\eta_a(V_H)\|\bar{u}_{h_2}-u_{h_2}\|_1\nonumber\\
&\leq& \eta_a(V_H)\eta_a(V_{h_1})\delta_{h_1}(u),\label{Error_u_h_1_nagative_1}\\
|(f(x,\bar{u}_{h_2})-f(x,u_{h_2}),v)|&\lesssim& \eta_a(V_H)\|\bar{u}_{h_2}-u_{h_2}\|_1\|v\|_1\nonumber\\
&\lesssim& \eta_a(V_H)\eta_a(V_{h_1})\delta_{h_1}(u)\|v\|_1,\ \ \ \forall v\in V.\label{Nonlinear_Estimate_1_fix}
\end{eqnarray}
Based on Theorem \ref{Error_Estimate_One_Correction_Theorem_Fix}, (\ref{Error_k_k_1}),
(\ref{Error_u_h_1_1})-(\ref{Nonlinear_Estimate_1_fix}) and a recursive argument, the final
eigenfunction approximation $u_{h_n}$ satisfies the following estimate
\begin{eqnarray}\label{Error_u_h_n_Multi_Correction}
\|\bar{u}_{h_n} - u_{h_n}\|_1 &\lesssim& \eta_{a}(V_{h_{n-1}}) \delta_{h_{n-1}}(u)+
\|\bar{u}_{h_{n-1}} - u_{h_{n-1}}\|_0 +|\bar{\lambda}_{h_{n-1}}-\lambda_{h_{n-1}}|\nonumber\\
&\lesssim& \eta_{a}(V_{h_{n-1}}) \delta_{h_{n-1}}(u)+
\eta_a(V_H)\|\bar{u}_{h_{n-1}} - u_{h_{n-1}}\|_1\nonumber\\
&\lesssim& \eta_{a}(V_{h_{n-1}}) \delta_{h_{n-1}}(u)+
\eta_a(V_H)\eta_{a}(V_{h_{n-2}}) \delta_{h_{n-2}}(u)\nonumber\\
&&\ \ \ \ + \eta_a^2(V_H)\|\bar{u}_{h_{n-2}} - u_{h_{n-2}}\|_1\nonumber\\
&\lesssim& \sum^{n-1}_{k=1}\big(\eta_{a}(V_H)\big)^{n-k-1}\eta_a(V_{h_k})
\delta_{h_k}(u)\nonumber\\
&\lesssim&\Big(\sum^{n-1}_{k=1}\big(\beta^2\eta_{a}(V_H)\big)^{n-k-1}\Big)
\beta^2\eta_a(V_{h_n})\delta_{h_n}(u)\nonumber\\
&\lesssim& \frac{1}{1-\beta^2\eta_a(V_H)}\beta^2\eta_a(V_{h_n})\delta_{h_n}(u)
\lesssim \beta^2\eta_a(V_{h_n})\delta_{h_n}(u).
\end{eqnarray}
This is the desired result (\ref{Multi_Correction_Err_fun}).
Similarly to the proof of Theorem \ref{Error_Estimate_One_Correction_Theorem_Fix}, we can obtain the
result (\ref{Multi_Correction_Err_fun_L2_Eigenvalue}), and the proof is complete.
\end{proof}
\begin{remark}
The results (\ref{Multi_Correction_Err_fun}) and (\ref{Multi_Correction_Err_fun_L2_Eigenvalue}) mean that
the eigenpair approximation produced by the multigrid method has the same accuracy, in both $L^2(\Omega)$ and $H^1(\Omega)$,
as that obtained by solving the nonlinear eigenvalue problem directly by the finite element method.
\end{remark}
\begin{corollary}
Under the conditions of Theorem \ref{Error_MultiGrid_Theorem}, the eigenpair approximation $(\lambda_{h_n},u_{h_n})$
by the multigrid method defined by Algorithm \ref{Multi_Correction}
has the following error estimates
\begin{eqnarray}
\|u-u_{h_n}\|_1 &\lesssim&\delta_{h_n}(u),\label{Multi_Correction_Err_fun_Final}\\
|\lambda-\lambda_{h_n}|+\|u-u_{h_n}\|_0 &\lesssim&\eta_a(V_{h_n})\delta_{h_n}(u).
\label{Multi_Correction_Err_fun_L2_Eigenvalue_Final}
\end{eqnarray}
\end{corollary}
\section{Work estimate of eigenvalue multigrid scheme}
In this section, we estimate the computational work
of the {\it Eigenvalue Multigrid Scheme} defined by Algorithm \ref{Multi_Correction}.
We will show that, with Algorithm \ref{Multi_Correction}, solving the eigenvalue problem requires almost the
same work as solving the corresponding linear boundary value problem by the
multigrid method.
First, we define the dimension of the linear finite element space at each level as
\begin{eqnarray*}
N_k := {\rm dim}V_{h_k},\ \ \ k=1,\cdots,n.
\end{eqnarray*}
Then we have
\begin{eqnarray}\label{relation_dimension}
N_k \thickapprox\Big(\frac{1}{\beta}\Big)^{d(n-k)}N_n,\ \ \ k=1,\cdots,n.
\end{eqnarray}
The computational work for the second step in Algorithm \ref{Correction_Step_Fix} or
\ref{Correction_Step_Newton} differs from that
for linear eigenvalue problems
\cite{LinXie,Xie_Steklov,Xie_Nonconforming,Xie_JCP}.
In this step, we need to solve the nonlinear eigenvalue problem (\ref{Eigen_Augment_Problem_fix})
or (\ref{Eigen_Augment_Problem}).
Typically, some type of nonlinear iteration method (self-consistent iteration or
a Newton-type iteration)
is used to solve this nonlinear eigenvalue problem. In each nonlinear iteration
step, we need to
assemble the matrix on the finite element space $V_{H,h_k}$ ($k=2,\cdots,n$), which
requires computational
work $\mathcal{O}(N_k)$.
Fortunately, the matrix assembly in the finite element space can easily be carried out
in parallel, since it involves no data transfer.
\begin{theorem}
Assume we use $m$ computing nodes in Algorithm \ref{Multi_Correction},
that solving the nonlinear eigenvalue problems in the coarse spaces $V_{H,h_k}$ ($k=1,\cdots, n$)
and $V_{h_1}$ requires work $\mathcal{O}(M_H)$ and $\mathcal{O}(M_{h_1})$, respectively, and that
the work of the multigrid method for solving the boundary value problem in $V_{h_k}$ is $\mathcal{O}(N_k)$
for $k=2,3,\cdots,n$. Let $\varpi$ denote the number of nonlinear iterations needed to solve
the nonlinear eigenvalue problem (\ref{Eigen_Augment_Problem_fix})
or (\ref{Eigen_Augment_Problem}).
Then on each computing node, the work involved
in Algorithm \ref{Multi_Correction} has the following estimate
\begin{eqnarray}\label{Computation_Work_Estimate}
{\rm Total\ work}&=&\mathcal{O}\Big(\big(1+\frac{\varpi}{m}\big)N_n
+ M_H\log N_n+M_{h_1}\Big).
\end{eqnarray}
\end{theorem}
\begin{proof}
Let $W_k$ denote the work on each processor
for the correction step in the $k$-th finite element space $V_{h_k}$.
Then, by the definition of the correction step, we have
\begin{eqnarray}\label{work_k}
W_k&=&\mathcal{O}\left(N_k +M_H+\varpi\frac{N_k}{m}\right).
\end{eqnarray}
Iterating (\ref{work_k}) and using the fact (\ref{relation_dimension}), we obtain
\begin{eqnarray}\label{Work_Estimate}
\text{Total work} &=& \sum_{k=1}^nW_k =
\mathcal{O}\left(M_{h_1}+\sum_{k=2}^n
\Big(N_k + M_H+\varpi\frac{N_k}{m}\Big)\right)\nonumber\\
&=& \mathcal{O}\Big(\sum_{k=2}^n\Big(1+\frac{\varpi}{m}\Big)N_k
+ (n-1) M_H + M_{h_1}\Big)\nonumber\\
&=& \mathcal{O}\left(\sum_{k=2}^n
\Big(\frac{1}{\beta}\Big)^{d(n-k)}\Big(1+\frac{\varpi}{m}\Big)N_n
+ M_H\log N_n+M_{h_1}\right)\nonumber\\
&=& \mathcal{O}\left(\big(1+\frac{\varpi}{m}\big)N_n
+ M_H\log N_n+M_{h_1}\right).
\end{eqnarray}
This is the desired result, and the proof is complete.
\end{proof}
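The geometric sum in (\ref{Work_Estimate}) can be checked numerically by summing the per-level work (\ref{work_k}) directly; the following Python sketch uses hypothetical parameter values and function names of our own:

```python
def total_work(n, N_n, beta, d, M_H, M_h1, varpi, m):
    """Sum the per-level work W_k = N_k + M_H + varpi * N_k / m over
    k = 2, ..., n, with N_k ~ (1/beta)^{d(n-k)} N_n as in
    (relation_dimension), plus the coarsest-level cost M_{h_1}."""
    work = M_h1                      # nonlinear eigenproblem on V_{h_1}
    for k in range(2, n + 1):
        N_k = (1.0 / beta) ** (d * (n - k)) * N_n
        work += N_k + M_H + varpi * N_k / m
    return work
```

Since $\sum_{k=2}^n\beta^{-d(n-k)}\leq 1/(1-\beta^{-d})$, the total is dominated by the finest-level term, as the theorem asserts.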
\begin{remark}
Since we have a good enough initial solution $\widetilde{u}_{h_{k+1}}$
in the second step of Algorithm \ref{Correction_Step_Fix} or \ref{Correction_Step_Newton},
then solving the nonlinear eigenvalue problem (\ref{Eigen_Augment_Problem_fix})
or (\ref{Eigen_Augment_Problem}) always does not
need many nonlinear iteration times (always $\varpi\leq 3$).
In this case, the complexity in each computational node will be $\mathcal{O}(N_n)$ provided
$M_H\ll N_n$ and $M_{h_1}\leq N_n$.
\end{remark}
\section{Concluding remarks}
In this paper, we have given a type of multigrid scheme
to solve nonlinear eigenvalue problems. The idea is to use the multilevel correction method
to transform the solution of the nonlinear eigenvalue problem into a series of solutions of the corresponding linear
boundary value problems by the multigrid method, together with a series of nonlinear eigenvalue problems
on the coarsest finite element space.
The proposed multigrid method can be applied to practical nonlinear eigenvalue problems
\cite{CancesChakirMaday,ChenGongHeYangZhou,ChenGongZhou,ChenHeZhou}.
The multigrid method can also be replaced by other types of efficient iteration schemes,
such as the algebraic multigrid method, preconditioned schemes based on
subspace decomposition and subspace corrections (see, e.g., \cite{BrennerScott, Xu}), and the
domain decomposition method (see, e.g., \cite{ToselliWidlund}).
Furthermore, the framework here can also be coupled with
parallel computation and adaptive refinement techniques.
These will be investigated in our future work.
\section{Introduction}
The subject of this paper is bootstrap percolation, a type of two-state cellular automaton introduced by Chalupa, Leith and Reich in 1979 \cite{CLR} to model certain interacting particle systems in physics. In \emph{$r$-neighbour bootstrap percolation} on a graph $G$, vertices are either \emph{infected} or \emph{uninfected}, and the states of vertices evolve at discrete times according to the following process. At time $t=0$, there is an initial set $A\subset V(G)$ of infected vertices, and all other vertices in the graph are uninfected. Thereafter, at each discrete time, uninfected vertices become infected if at least $r$ of their neighbours are already infected, while infected vertices remain infected forever. Thus, we set $A_0=A$, and for each integer $t\geq 1$, the set of infected vertices at time $t$ is
\[
A_t := A_{t-1} \cup \big\{ v\in V(G): |N(v)\cap A_{t-1}| \geq r \big\},
\]
where $N(v)$ denotes the set of neighbouring vertices of $v$ in $G$. The graph $G$ is often taken to be $\Z^d$ or $[n]^d=\{1,\dots,n\}^d$, where in both cases edges are between vertices which differ by exactly $1$ in exactly one coordinate. We write $[A]=\cup_{t=0}^\infty A_t$ and call $[A]$ the \emph{closure} or \emph{span} of $A$. We say \emph{$A$ percolates $G$} if $[A]=V(G)$. Occasionally we use the notation $[X]_t$ to mean the set of infected vertices at time $t$ when the initial set is $X$. A subset $U$ of $V(G)$ is said to be \emph{internally spanned} if $U\subset[A\cap U]$.
Bootstrap percolation may be thought of as a monotone version of the Glauber dynamics of the Ising model, and it is here that many of its applications lie. For example, Fontes, Schonmann and Sidoravicius \cite{FSSIsing} and Morris \cite{MorrisGlauber} used results from bootstrap percolation to prove bounds on the critical threshold for fixation at the Gibbs state in the Ising model. Bootstrap percolation has also found applications in crack formation, clustering phenomena, dynamics of glasses \cite{GST}, sandpiles \cite{FLP}, jamming \cite{DGLBD}, and many other areas of statistical physics \cite{Adler,AdLev,ASA}.
Many of the most widely studied questions in bootstrap percolation ask what one can say about the properties of the system when the initial set is chosen randomly. By ``randomly'' here we mean that each vertex of $V(G)$ is included in $A$ independently with probability $p$; sometimes we say that $A$ is a \emph{$p$-random} subset of $V(G)$, and we write $A\sim\bin(V(G),p)$. One would like to know how likely percolation is to occur in this setting, as a function of the graph $G$, the infection parameter $r$, and the initial infection probability $p$. In the case of $r$-neighbour bootstrap percolation on the lattice graph $[n]^d$, with $d$ fixed and $n$ tending to infinity, it is known that there exists a sharp phase transition for percolation for all $2\leq r\leq d$. This means that there is a function $p_c=p_c([n]^d,r)$ such that for all $\epsilon>0$, if $p\leq(1-\epsilon)p_c$ then with high probability there is no percolation, while if $p\geq(1+\epsilon)p_c$ then with high probability there is percolation. The function $p_c$ is known as the \emph{critical probability} for percolation. A certain weaker form of this result was proved by Aizenman and Lebowitz \cite{AL} in 1988 for $r=2$: this was the paper that started the study of the critical probability on finite graphs. The analogous results for $r\ge 3$ were proved considerably later: by Cerf and Cirillo \cite{CerfCir} for $r=3$ and Cerf and Manzo \cite{CerfManzo} for $r\ge 4$. The sharper form we have just stated has a similar history: in 2002 Holroyd \cite{Hol} proved it for $r=2$, in 2009 Balogh, Bollob\'as and Morris \cite{BBM3D} proved it for $r=3$, and the full result was proved by Balogh, Bollob\'as, Duminil-Copin and Morris \cite{BBDCM} in 2012. Sharp thresholds are also known to exist for several other bootstrap models, including the hypercube \cite{BBhyp,BBMhigh} and a number of other models on $\Z^2$ \cite{DCH,DCvE}. 
Moreover, recent work of Bollob\'as, Smith and Uzzell \cite{BSUgen} shows that similar threshold behaviour, albeit in a weaker sense, is exhibited by a considerably larger class of two-dimensional bootstrap percolation processes. In some cases the phase transition is much sharper than stated above: among other results, Balogh and Bollob\'as \cite{BBsharp} proved that for $d=r=2$ the fixed $\epsilon$ above can be replaced by any function $\epsilon (n)>0$ with $\epsilon (n)(\log n)^2/\log\log n=O(1)$.
Given a graph $G$ and an initial infection probability $p$ such that percolation is likely to occur, one would also like to know how long percolation takes. Thus, letting $T$ denote the random variable $\min\{t:A_t=V(G)\}$, which we call the \emph{percolation time} of the set $A$, the question is to determine information about the distribution of $T$. In particular, how concentrated is $T$?
All known proofs of upper bounds for the critical probability in the various bootstrap percolation processes also give some (rather limited) information about the percolation time, although the bounds one can extract are never explicitly stated in these papers. For example, the methods in \cite{AL} and \cite{Hol} for proving that percolation is likely to occur in two-neighbour bootstrap percolation on $[n]^2$ show that if $p\geq(1+\epsilon)p_c([n]^2,2)$ then $T\leq n(\log n)^{2+o(1)}$ with high probability as $n$ tends to infinity. (Actually, a simple adaptation of the proof of this statement can be used to show under the same conditions that the percolation time satisfies the stronger inequality $T=O(n\log n)$ with high probability, and we use this adaptation in the proofs of both main theorems in this paper.) None of the proofs in these papers give any lower bounds for the percolation time.
The only known sharp results about the time of percolation relate to the $r$-neighbour model on the torus $(\Z/n\Z)^d$ when $p$ is close to $1$. With such a large initial infection probability, and therefore such a small percolation time, one might expect the events that sites in $(\Z/n\Z)^d$ are uninfected at time $t$ to be approximately independent, and therefore the number of uninfected sites at time $t$ to be approximately Poisson distributed. Bollob\'as, Holmgren, Smith and Uzzell \cite{BHSU} ($d$-neighbour in $d$ dimensions) and Bollob\'as, Smith and Uzzell \cite{BSUr} ($r$-neighbour in $d$ dimensions) make this heuristic precise using extremal techniques and the Stein-Chen method. They show that if $p$ satisfies certain conditions depending on $t$ and $n$ (which in particular imply that $p=1-o(1)$), then with high probability the percolation time is exactly equal to $t$, or in some cases to either $t$ or $t+1$. A corollary of that theorem says that if $\log\log n \ll \log 1/(1-p) \ll \log n$ then
\begin{equation}\label{eq:Tlargep}
T = \frac{(1+o(1))\log n}{2\log 1/(1-p)}
\end{equation}
with high probability. (We use the standard notation $f(n)\ll g(n)$ to mean that $g(n)/f(n)\to\infty$.) The condition $\log 1/(1-p)\ll\log n$ above is required to ensure the expression for $T$ tends to infinity; the theorems in \cite{BHSU,BSUr}, which are much stronger, do not require such a condition.
The first of the two main theorems in this paper says that the expression \eqref{eq:Tlargep} for the percolation time holds for a much broader range of sequences of initial infection probabilities: not only do we drop the condition $p=1-o(1)$, but in fact we only require $\liminf p\log\log n > \pi^2/9$.
Throughout this paper we use $T$ to denote the percolation time of a $p$-random subset of $[n]^2$ under the two-neighbour bootstrap percolation process. We also fix the constant $\lambda=\pi^2/18$, the significance of which will be made clear shortly. The following is our first theorem.
\begin{theorem}\label{th:large}
Let $0< p=p(n)<1 $ be such that $\liminf p\log\log n > 2\lambda$ and $1-p=n^{-o(1)}$ (that is, $\log 1/(1-p)\ll\log n$). Then
\[
T = \frac{(1+o(1))\log n}{2\log 1/(1-p)}
\]
with high probability as $n\to\infty$.
\end{theorem}
A natural example of an event that would prevent percolation happening by time $t$ is the existence of an empty $(2t+1)\times 2$ rectangle in the initial set $A$. (Such a rectangle with a site missing at either end would also suffice, but since we are only interested in determining $T$ asymptotically, we do not need to be that precise.) One can easily show that the largest $t$ for which such a rectangle is likely to exist is about $(\log n)/(2\log 1/q)$, where $q=1-p$. This observation essentially proves the lower bound of Theorem \ref{th:large}; the real content of the theorem is therefore the upper bound.
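The estimate above is a first-moment calculation: there are about $n^2$ positions for a $(2t+1)\times 2$ rectangle in each orientation, each initially empty with probability $q^{2(2t+1)}$, so the largest likely $t$ solves $n^2q^{4t+2}\approx 1$. A quick numerical check (Python, illustrative only):

```python
import math

def largest_empty_double_row(n, p):
    """Largest t for which the expected number of empty (2t+1) x 2
    rectangles in a p-random subset of [n]^2, roughly n^2 * q^(2(2t+1))
    with q = 1 - p, is still at least 1."""
    q = 1.0 - p
    t = 0
    while n * n * q ** (2 * (2 * (t + 1) + 1)) >= 1.0:
        t += 1
    return t
```

The result agrees with $(\log n)/(2\log 1/q)$ up to an additive constant.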
From above, one would like to show that the existence of an empty $(2t-1)\times 2$ rectangle in the initial set $A$ is a necessary condition for the percolation time to be at least $t$ (with high probability; the statement is obviously not true deterministically). Suppose a site $x$ in $[n]^2$ is uninfected at time $t$. It is easy to see that $x$ must be contained in a $2\times 2$ square of uninfected sites at time $t-2$. In fact, provided $x$ is not too close to the boundary of $[n]^2$, it is easy to see that there must exist a sequence of $t-1$ initially uninfected sites, starting with the top-right site in the $2\times 2$ square, and continuing either up or right each time, and that a similar statement, with the correct mix of up/down and left/right, also holds for the three other sites in the $2\times 2$ square. We would like to show that by far the most probable way for this to occur is for these four paths to be aligned to form a $(2t-2)\times 2$ rectangle, or more specifically, we would like to show that the probability the four uninfected paths exist is not much more than $(1-p)^{4t-4}$, which is just the probability that a given $(2t-2)\times 2$ rectangle is initially empty.
A first attempt at a proof might go as follows. Assume that all four paths of uninfected sites start by growing out horizontally from the $2\times 2$ square, so that they form an uninfected rectangle of height $2$ and unknown length. Let us concentrate on the top-right path, which we call $P$. If the path ever strays away from the horizontal line it starts along, then that should be at the cost of many new uninfected sites, because a path of uninfected sites that contains corners is not closed. The trouble is that there are too many choices for the paths, so the cost of this gain in probability is a large combinatorial factor.
However, it is possible to salvage this attempt at a proof. Rather than counting top-right paths of sites individually, we look at the intersection of top-right paths with a much coarser grid of squares, of a certain side length $L$, and count these. First we allow an initial time $t'=BL/p=o(t)$, where $B$ is a constant. By this time we expect nearly all internally spanned squares of side length $L$ --- which we call \emph{$L$-cells} --- to have filled. Now consider just the first $t-t'$ sites in the top-right path $P$: at time $t'$ they are still uninfected, and they intersect a path of $L$-cells all of which are either not internally spanned or slow to fill; we call such $L$-cells \emph{bad}. There is now an optimization question: how large should $L$ be to minimize the probability of this event, that there is a path of bad $L$-cells? In order to have any hope of this method working, the probability that an $L$-cell is bad should be at most $(1-p)^{(1+c)L}$, for some $c>0$. This is because we would like to show that the probability there exists a path of bad $L$-cells is about $(1-p)^t$, so we need the additional $c$ to overcome the combinatorial factor that comes from taking a union bound over all paths. Thus, $L$ must be large enough for the probability that an $L$-cell is bad to be small. Another reason $L$ should be large is to minimize the combinatorial factor. As $L$ increases, there are fewer paths of $L$-cells inside a square of side length $t-t'$, so the combinatorial factor decreases. On the other hand, $L$ cannot be too large, because the error time $t'=BL/p$ must be $o(t)$. The $L$ that we choose is the smallest $L$ such that the probability an $L$-cell is bad is approximately $(1-p)^{(1+c)L}$. (In fact we take $c=1$.)
This second attempt at a proof is also not quite right: the probability that an $L$-cell is bad, as we have defined it, is at least $(1-p)^L$ because if any of the four edges of the square is empty then the square cannot be internally spanned. On the other hand we have said that the probability needs to be at most $(1-p)^{(1+c)L}$, so our definition of bad cannot be the right one. The way around this is as follows. One can show that, at the scale we are considering, empty edges of the $L$-cell are the only first order obstructions to being internally spanned, and that by strengthening the definition of bad so that an $L$-cell is only bad if it is not internally spanned except possibly for one or more of its edges, then the probability that an $L$-cell is bad now correlates with $(1-p)^{2L}$. While this gives the desired probability bound, it is no longer true that the original path $P$ of uninfected sites intersects a long path of bad $L$-cells, because $P$ may intersect only the edges of one or more of the $L$-cells. However, these paths are so restricted that they contribute little combinatorially to the union bound.
Our second theorem establishes concentration of the percolation time up to a constant factor in the remaining case, when $p$ is supercritical but small. When $p$ is in the range of Theorem \ref{th:large} (we shall call this the ``large $p$ regime''), the percolation time is with high probability asymptotically equal to one half of the length of the longest initially empty double row or column. When $p$ is in the range of Theorem \ref{th:small}, the percolation time is with high probability much larger than the length of the longest initially empty double row or column. We shall call this the ``small $p$ regime'', although we shall usually reserve this phrase for the special case when $\liminf p\log\log n$ is strictly less than $2\lambda$.
\begin{theorem}\label{th:small}
There exists a function $\mu:(0,1)\to(0,\infty)$ such that the following holds. Let $p=p(n)$ be such that $\liminf p\log n>\lambda$ and $p\to 0$. Then
\begin{equation}\label{eq:small}
T = \Theta\left( \max\left\{ \sqrt{\frac{\log n - \mu(p)/p}{p}} \exp\left(\frac{\mu(p)}{p}\right) \, , \, \frac{\log n}{p} \right\} \right)
\end{equation}
with high probability as $n\to\infty$. In particular, the percolation time $T$ is concentrated up to a constant factor with high probability.
\end{theorem}
Let $t_1(n,p)$ denote the first of the two functions inside the maximum in \eqref{eq:small} and let $t_2(n,p)=(\log n)/p$ denote the second. If $\limsup p\log\log n<2\lambda$ then $t_1(n,p)\gg t_2(n,p)$, while if $\liminf p\log\log n>2\lambda$ then $t_1(n,p)\ll t_2(n,p)$. Thus, as $p$ becomes small, the point at which $t_1(n,p)$ starts to become larger than $t_2(n,p)$ (and thus $T$ starts to become concentrated around $t_1(n,p)$) occurs precisely at the point at which Theorem \ref{th:large} fails. Hence, for almost the entire range of $p$ for which Theorem \ref{th:small} applies but Theorem \ref{th:large} does not, $T$ is concentrated around $t_1(n,p)$, and therefore this result is the main content of Theorem \ref{th:small}. However, at the transition itself, when $p=2\lambda/\log\log n$, it is not possible to say which of the two functions is larger without knowing more about the function $\mu(p)$, so it is not possible to omit the function $t_2(n,p)$ from Theorem \ref{th:small}.
The nature of the $o(1)$ term in the function $\mu(p)$ is dependent on the second and higher order terms in the critical probability $p_c([n]^2,2)$. Unfortunately, even with the recent result of Morris \cite{MorrisSharp} identifying the second order term in $p_c([n]^2,2)$ up to a constant factor, it is only possible to say that $|\mu(p)-\lambda|$ is at most $O(\sqrt{p})$. Thus, since $e^{c/\sqrt{p}}\gg \sqrt{(\log n)/p}$ for small enough $p$ and constant $c$, the main feature of Theorem \ref{th:small} is not the formula for $T$ but the assertion that $T$ is concentrated to within a constant factor.
Holroyd \cite{Hol} has proved that the condition $\liminf p\log n>\lambda$ ensures that the initial set percolates with high probability, and indeed if $\limsup p\log n<\lambda$ then with high probability the initial set does not percolate. It is natural to ask whether the conclusion of Theorem \ref{th:small} holds conditioned only on percolation occurring, dropping the assumption that $\liminf p\log n>\lambda$. However, that is not the case. When $p\approx\lambda/\log n$, the probability of percolation is roughly constant and the number of critical droplets is approximately Poisson distributed. Thus, if percolation does occur, then the percolation time will depend on the number of critical droplets and their relative positions.
Blocking sets in the large $p$ regime are just (approximately) empty $(2t+1)\times t$ rectangles in the initial set $A$. In the small $p$ regime, blocking sets are much less straightforward. Loosely speaking, they are large, sparse regions of $A$. Before we say what we mean by ``sparse'' (and ``large''), we need to introduce the notion of a critical droplet. In bootstrap percolation on $[n]^2$ (and similar statements hold for other lattice grids in other dimensions) it is known that there is a threshold length, roughly at a power of $\log n$, such that the existence of an internally spanned rectangle with perimeter at least this length is enough to ensure percolation of the whole grid with high probability. Rectangles of this perimeter are known as \emph{critical droplets}. (There is a formal definition, which we give in the next section.) The sparse regions of $A$ which act as blocking sets in the small $p$ regime are maximal regions of the grid not containing an internally spanned critical droplet.
There are two parts to the proof of the lower bound in Theorem \ref{th:small}. First, we determine the size and shape of these maximal sparse regions. For this we use many of the same tools as we use in the large $p$ regime. Second, we show that the sparse regions percolate slowly, even under the additional assumption that the rest of the grid is initially full. The principal technical difficulty lies in showing that the spread of infection through the sparse regions occurs at the speed one would expect. This is the main result of Sections \ref{se:waves}, \ref{se:restrictions} and \ref{se:slowperc}. If the sparse region is infected quickly then we ask how the information travelled from the edge of the sparse region to the centre. We show that there must exist a sequence of internally spanned rectangles that are located much closer together than one would expect, and that, in a certain sense, join the edge of the sparse region to the centre. Such a sequence of rectangles, which is defined formally in Section \ref{se:waves}, is called a \emph{wave}. We bound the number of waves in terms of the size of the sparse region and the time it takes the region to become infected, and we also bound the probability that any given wave exists. A more detailed sketch of the proof is given at the beginning of Section \ref{se:lowersmall}.
The upper bound of Theorem \ref{th:small} is the easier of the two bounds, and is proved in Section \ref{se:uppersmall}. For the upper bound in the large $p$ regime we focus on squares of side length $L$. In the small $p$ regime we do something similar, although we work at a different scale $M$. We tile the grid $[n]^2$ with $M$-cells and wait an initial time $BM/p$, where again $B$ is a constant. As in the large $p$ regime, we expect most $M$-cells to have been internally spanned by this time, and we call those which have not \emph{weakly bad} (\emph{weakly} here emphasizes that the property is weaker than that of being bad, because, unlike in the large $p$ regime, we only require that the whole cell, including its edges, is not internally spanned by time $BM/p$). The proof then uses a graph theoretic lemma that bounds the number of order $k$ connected induced subgraphs of a graph $G$, containing a specific vertex, in terms of $k$ and the maximum degree of $G$. This lemma allows one to say that the largest connected component of weakly bad $M$-cells is not too large --- in fact, the total area of the component is likely to be equal (to within a constant factor) to the area of the largest sparse region of the grid, where, as before, sparse means ``not containing an internally spanned critical droplet''. Finally we observe that any component of weakly bad $M$-cells is infected by the surrounding cells in time proportional to its size.
In Section \ref{se:defs} we recall some standard notation and lemmas from bootstrap percolation and we introduce some new notation specifically related to the percolation time. In Section \ref{se:critgrids} we make formal the notion and properties of a ``critical grid size'', which is a function $K=K(p)$ such that the probability a $p$-random subset of $[K]^2$ percolates is approximately constant. This may be thought of as an inverse to the problem of determining the critical probability, which is a function $p_c=p_c(n)$ such that the probability a $p_c$-random subset of $[n]^2$ percolates is approximately constant. We shall use this function $K$ as a basis for determining the larger grid sizes $L$ and $M$ already mentioned in the introduction. The proof of Theorem \ref{th:large} including the method of tiling with $L$-cells is then given in Section \ref{se:large}, and finally Sections \ref{se:uppersmall} and \ref{se:lowersmall} contain the proofs of the upper and lower bounds of Theorem \ref{th:small} respectively.
\section{Definitions and tools}\label{se:defs}
The first few definitions we need are used throughout the bootstrap percolation literature. Recall that a set $X\subset[n]^2$ is \emph{internally spanned} if $X\subset[X\cap A]$, where $A$ is (as always) the initial set. The set $X$ is \emph{empty} if $A\cap X=\emptyset$, it is \emph{occupied} if $A\cap X\neq\emptyset$, and it is \emph{full} if $A\cap X=X$. A \emph{droplet} is a rectangular subset of $[n]^2$ of the form
\[
D = [(a,b),(c,d)] := \big\{ (x,y)\in\Z^2 \, : \, a\leq x\leq c, \, b\leq y\leq d \big\}.
\]
The \emph{dimensions} of $D$ are $\dim(D)=(c-a+1,d-b+1)$, the \emph{long} and \emph{short side-lengths} of $D$ are respectively $\lg(D)=\max\{c-a+1,d-b+1\}$ and $\sh(D)=\min\{c-a+1,d-b+1\}$, and the \emph{semi-perimeter} of $D$ is $\phi(D)=\lg(D)+\sh(D)$. An \emph{$m$-cell} is a droplet $D$ with $\lg(D)=\sh(D)=m$. The \emph{interior} of an $m$-cell $D=[(a,b),(c,d)]$ is the $(m-2)$-cell $\interior(D)=[(a+1,b+1),(c-1,d-1)]$ and the \emph{edge} of $D$ is the set $\partial D = D\setminus\interior(D)$. The \emph{left edge} of $D$ is the set $[(a,b),(a,d)]$, and the \emph{right}, \emph{top} and \emph{bottom} edges of $D$ are defined similarly.
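These geometric quantities translate directly into code; a small Python sketch mirroring the definitions (function names are ours):

```python
def dims(D):
    """Dimensions of the droplet D = [(a, b), (c, d)]."""
    (a, b), (c, d) = D
    return (c - a + 1, d - b + 1)

def phi(D):
    """Semi-perimeter: long side-length + short side-length."""
    w, h = dims(D)
    return w + h

def interior(D):
    """Interior of an m-cell: shrink each side by one."""
    (a, b), (c, d) = D
    return ((a + 1, b + 1), (c - 1, d - 1))
```

For instance, the interior of a $4$-cell is a $2$-cell, in accordance with the definition.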
The concept of a critical droplet was mentioned briefly in the introduction, in the context of blocking sets in the small $p$ regime. Here we make that notion precise. Let $\gamma(p)=p^{-3}$. A \emph{critical droplet} is a droplet $D$ for which $\gamma(p)/2\leq\phi(D)\leq\gamma(p)$. The event that a set $X\subset[n]^2$ contains an internally spanned critical droplet is written $\Gamma(X)$. For brevity, we shall usually write $\gamma$ for $\gamma(p)$ and $\Gamma(n)$ for $\Gamma([n]^2)$.
The next few definitions relate specifically to the time of percolation. The event that the set $X$ is internally spanned is written $I(X)$. The event that $[X]_t=X$ (that is, that the set $X$ is internally spanned by time $t$) is denoted $I_t(X)$. The $m$-cell $D$ is \emph{strongly good} if it is internally spanned by time $Bm/p$, where $B$ is an absolute constant and the initial set is $A\sim\bin(D,p)$. Thus, $D$ is strongly good if $I_{Bm/p}(D)$ occurs. It is \emph{good} if its span by time $Bm/p$ contains $\interior(D)$. Formally, $D$ is good if $\interior(D)\subset[D\cap A]_{Bm/p}$. Finally, $D$ is \emph{semi-good} if it is good but not strongly good, \emph{weakly bad} if it is not strongly good, and \emph{bad} if it is not good. We write $\strongly(D)$ for the event that $D$ is strongly good and $\good(D)$ for the event that $D$ is good. We also use $\strongly$ and $\good$ for the associated indicator functions. Let $\eta_m$ be the probability that an $m$-cell is bad and $\theta_m$ the probability that an $m$-cell is weakly bad; thus, for an $m$-cell $D$,
\[
\eta_m = \P_p\big(\good(D)^c\big) \quad \text{and} \quad \theta_m = \P_p\big(\strongly(D)^c\big).
\]
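As an aside, all of the timed quantities just defined are easy to compute exactly on small grids. The following Python sketch (purely illustrative; the helper names are ours and appear nowhere in the formal arguments) implements the two-neighbour update rule and the percolation time directly:

```python
def bootstrap_times(n, initial):
    """Two-neighbour bootstrap percolation on [n]^2.
    Returns a dict mapping each eventually infected site to its
    infection time; initially infected sites get time 0."""
    times = {site: 0 for site in initial}
    infected = set(initial)
    t = 0
    while True:
        t += 1
        new = set()
        for x in range(n):
            for y in range(n):
                if (x, y) in infected:
                    continue
                nbrs = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
                if sum(nb in infected for nb in nbrs) >= 2:
                    new.add((x, y))
        if not new:
            return times
        for site in new:
            times[site] = t
        infected |= new

def percolation_time(n, initial):
    """The least t with [A]_t = [n]^2, or None if A does not percolate."""
    times = bootstrap_times(n, initial)
    return max(times.values()) if len(times) == n * n else None

# The diagonal spans [n]^2: site (x, y) becomes infected at time |x - y|.
print(percolation_time(5, {(i, i) for i in range(5)}))  # 4
```

On the diagonal initial set the infection time of $(x,y)$ is $|x-y|$, which the script confirms.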
One of the fundamental tools in the study of bootstrap percolation is the \emph{rectangles process}, an algorithm which exactly describes the evolution of the two-neighbour bootstrap process on $[n]^d$, but in a way which does not preserve infection times of sites. The algorithm was first observed by Aizenman and Lebowitz (\cite{AL}, Lemma 1), who used it to prove a lower bound for the critical probability of two-neighbour bootstrap percolation on $[n]^d$. The algorithm runs as follows. First, consider each initially infected site to be a droplet with dimensions $(1,1)$. Then repeat the following process: whenever there are two droplets $D_1$ and $D_2$ and sites $x_1\in D_1$ and $x_2\in D_2$ with $\|x_1-x_2\|_1\leq 2$, replace $D_1$ and $D_2$ by the smallest droplet containing both. (Observe that $D_1$ and $D_2$ need not be disjoint, and they may even be nested.) If two such droplets do not exist, stop the algorithm. The set of sites contained in the final configuration of rectangles is precisely the closure of the initial set.
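The rectangles process is short to implement. In the sketch below (ours, for illustration), the minimum $\ell_1$ distance between two droplets is computed from the gaps between their coordinate intervals, and on a small example the union of the final droplets is checked to coincide with the closure of the initial set:

```python
def l1_gap(r1, r2):
    """Minimum l1 distance between sites of two droplets,
    where r = (a, b, c, d) denotes [(a, b), (c, d)]."""
    (a1, b1, c1, d1), (a2, b2, c2, d2) = r1, r2
    dx = max(0, max(a1, a2) - min(c1, c2))
    dy = max(0, max(b1, b2) - min(d1, d2))
    return dx + dy

def rectangles_process(initial):
    """Repeatedly merge droplets whose sites come within l1 distance 2."""
    rects = [(x, y, x, y) for (x, y) in initial]
    merged = True
    while merged:
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if l1_gap(rects[i], rects[j]) <= 2:
                    a1, b1, c1, d1 = rects[i]
                    a2, b2, c2, d2 = rects.pop(j)
                    rects[i] = (min(a1, a2), min(b1, b2),
                                max(c1, c2), max(d1, d2))
                    merged = True
                    break
            if merged:
                break
    return rects

def closure(sites):
    """Two-neighbour bootstrap closure of a finite subset of Z^2."""
    infected = set(sites)
    while True:
        candidates = {(x + dx, y + dy) for (x, y) in infected
                      for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]}
        new = {(u, v) for (u, v) in candidates - infected
               if sum(nb in infected for nb in
                      [(u - 1, v), (u + 1, v), (u, v - 1), (u, v + 1)]) >= 2}
        if not new:
            return infected
        infected |= new

A = [(0, 0), (1, 1), (4, 1)]
spanned = {(x, y) for (a, b, c, d) in rectangles_process(A)
           for x in range(a, c + 1) for y in range(b, d + 1)}
assert spanned == closure(A)
```

Since the $\ell_1$ gap between two droplets is attained at their nearest sides, the merging condition reduces to a constant-time check per pair.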
It may seem strange that an algorithm which is not able to encode the times at which sites become infected should be useful for proving results about the time of percolation, but its importance lies in the following lemma, due to Aizenman and Lebowitz. The lemma says that if a droplet is internally spanned then it must also contain internally spanned droplets at all smaller scales.
\begin{lemma}\label{le:AL}
Let $D$ be an internally spanned droplet. Then for all $1\leq k\leq\lg(D)/2$ there exists an internally spanned droplet $D'\subset D$ such that $k\leq\lg(D')\leq 2k$.
\end{lemma}
The proof is immediate from the algorithm: if $D$ is the smallest droplet containing $D_1$ and $D_2$, and if $D_1$ and $D_2$ are close enough to be merged in the rectangles process, then it is easy to see that $\lg(D)\leq\lg(D_1)+\lg(D_2)+1$.
Another immediate and important consequence of the rectangles process (although there are many other ways of proving it) is the following lemma, the last of this section. A proof can be found in \cite[pp.~104--105]{CiM}.
\begin{lemma}\label{le:phi}
Let $D$ be a droplet internally spanned by a set $A$. Then $|A|\geq\phi(D)/2$.\qed
\end{lemma}
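Both lemmas can be sanity-checked by brute force on small random instances. The following test harness (our own, not part of any proof) verifies Lemma \ref{le:AL} and Lemma \ref{le:phi} for every internally spanned droplet of a random initial set in $[6]^2$:

```python
import itertools
import random

def closure_within(D, A):
    """Closure of A intersected with D, restricted to the droplet D."""
    a, b, c, d = D
    cells = {(x, y) for x in range(a, c + 1) for y in range(b, d + 1)}
    infected = cells & A
    while True:
        new = {(u, v) for (u, v) in cells - infected
               if sum(nb in infected for nb in
                      [(u - 1, v), (u + 1, v), (u, v - 1), (u, v + 1)]) >= 2}
        if not new:
            return infected
        infected |= new

def spans(D, A):
    a, b, c, d = D
    return len(closure_within(D, A)) == (c - a + 1) * (d - b + 1)

random.seed(0)
n = 6
A = {(x, y) for x in range(n) for y in range(n) if random.random() < 0.35}
droplets = [(a, b, c, d)
            for a, c in itertools.combinations_with_replacement(range(n), 2)
            for b, d in itertools.combinations_with_replacement(range(n), 2)]
spanned = {D: spans(D, A) for D in droplets}

for D, ok in spanned.items():
    if not ok:
        continue
    a, b, c, d = D
    long_side = max(c - a + 1, d - b + 1)
    # Aizenman--Lebowitz: spanned sub-droplets exist at every scale.
    for k in range(1, long_side // 2 + 1):
        assert any(spanned[E]
                   and k <= max(E[2] - E[0] + 1, E[3] - E[1] + 1) <= 2 * k
                   for E in droplets
                   if a <= E[0] and b <= E[1] and E[2] <= c and E[3] <= d)
    # An internally spanned droplet needs at least phi(D)/2 infected sites.
    inside = {s for s in A if a <= s[0] <= c and b <= s[1] <= d}
    assert 2 * len(inside) >= (c - a + 1) + (d - b + 1)
```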
Finally, on a matter of notation, we remark that throughout the paper $c$ and $C$ will always denote absolute positive constants. To avoid accumulating notation we shall frequently reuse both $c$ and $C$ to mean different positive constants, occasionally even doing so inside a proof.
\section{Critical grid sizes and the inverse of the critical probability}\label{se:critgrids}
The problem of finding the critical probability for two-neighbour bootstrap percolation on $[n]^2$ (and similarly for other models of bootstrap percolation) can be thought of as that of finding $p$ as a function of $n$ (which is assumed to be large) such that a $p$-random initial subset of $[n]^2$ has approximately a constant probability of percolating. In this paper we make extensive use of pairs $(n,p)$ with this property, but here we require $n$ to be a function of $p$, rather than the other way around. Thus our problem is essentially that of finding the inverse of the critical probability, which we think of as the ``critical grid size''. That sounds easy enough, but for our applications we require a slightly stronger property than a constant probability of percolating: we require $n$ as a function of $p$ (which need not be small -- this is another small technicality) such that a $p$-random initial subset $A$ of $[n]^2$ has the following two properties. First, that the probability that $[n]^2$ is strongly good (that is, that $A$ spans $[n]^2$ in time at most $Bn/p$) is at least a small positive constant. Second, that the probability that $A$ contains an internally spanned critical droplet is at most a slightly larger positive constant. On the surface these two properties seem to be very different, so it is not obvious that such an $n$ should exist.
Our first lemma shows that if $n$ is sufficiently large and $p$ is such that a $p$-random initial subset of $[n]^2$ contains an internally spanned critical droplet with probability at least a constant, then with probability only a very slightly smaller, $[n]^2$ is strongly good (with initial set $A$). The proof is a minor adaptation of the deduction of Theorem 1 (i) from Theorem 2 (i) in \cite{Hol}.
\begin{lemma}\label{le:holroydplus}
Let $\alpha,p,\epsilon\in(0,1)$ and let $n\in\N$ be sufficiently large. Suppose that $\P_p\big(\Gamma(n)\big)\geq\alpha$. Then $\P_p\big(I_{4n/p}([n]^2)\big)\geq(1-\epsilon)\alpha$.
\end{lemma}
\begin{proof}
If $\gamma/2<3p^{-1}\log n$ then $p$ is so large that the probability that $[\sqrt{n}]^2$ is not internally spanned is $o(1/n)$. Hence, we can tile the grid with $n$ squares of side length $\sqrt{n}$, and with high probability they will all be internally spanned. Each such square contains only $n$ sites, and so takes time at most $n$ to fill; hence $T\leq n\leq 4n/p$ with high probability. From now on we shall assume that $\gamma/2\geq 3p^{-1}\log n$.
Let $E$ be the event that every horizontal or vertical segment of $\gamma/2\geq 3p^{-1}\log n$ consecutive sites of $[n]^2$ is occupied. Thus
\begin{equation}\label{eq:probEc}
\P_p(E^c) \leq n^2 (1-p)^{3p^{-1}\log n} \leq \exp\big(2\log n - 3\log n\big) = o(1).
\end{equation}
Provided there exists an internally spanned critical droplet, the event $E$ ensures that $[n]^2$ is internally spanned. However, the proof only shows that the percolation time is $O(n\gamma)$. We introduce an additional event $F$ to ensure that the spread of infection from the critical droplet to the rest of the grid is fast, so that the percolation time is at most $4n/p$. First let $X_1(x,y)$ be the least $i\geq 0$ such that $(x+i,y)$ belongs to $A$. (It is convenient here to extend $A$ to a $p$-random subset of $\Z^2$ and to allow $x+i>n$. The intersection of the event $F$ with the event $E$ will imply that $x+i\leq n$ in all relevant cases.) Similarly, let $X_2(x,y)$ be the least $i\geq 0$ such that $(x,y+i)\in A$ (and again we allow $y+i>n$). Let
\[
X_1(x) = 2\sum_{y=1}^n X_1(x,y) \qquad \text{and} \qquad X_2(y) = 2\sum_{x=1}^n X_2(x,y).
\]
The purpose of defining $X_1(x)$ and $X_2(y)$ in this way is that if, for example, $[(x,1),(n,1)]$ is full and $X_1(x,y)\leq n-x$ for every $y$, then $X_1(x)+n$ is an upper bound for the time it takes $[(x,1),(n,n)]$ to become infected. To see this, observe that $[(x,2),(x+X_1(x,3),2)]$ is fully infected by time
\[
\max\big\{X_1(x,2),X_1(x,3)\big\} \leq X_1(x,2) + X_1(x,3),
\]
and inductively that $[(x,k),(x+X_1(x,k+1),k)]$ is fully infected within a further
\[
\max\big\{X_1(x,k),X_1(x,k+1)\big\} \leq X_1(x,k) + X_1(x,k+1)
\]
steps, for all $2\leq k\leq n-1$; summing these increments over $k$ gives a total of at most $2\sum_{y=1}^n X_1(x,y)=X_1(x)$ steps. It then takes at most another $n$ steps for the rest of $[(x,1),(n,n)]$ to become infected.
Now we define the event $F$ to be that $X_1(x)\leq 3n/p$ for every $1\leq x\leq n-p^{-3}$ and that $X_2(y)\leq 3n/p$ for every $1\leq y\leq n-p^{-3}$. Observe that
\[
\P_p\left(\frac{X_1(x)}{2} > \frac{3n}{2p}\right) \leq \P_p\left(\bin\left(\frac{3n}{2p},p\right)<n\right),
\]
since $X_1(x)/2$ is a sum of $n$ independent geometric random variables, and this sum exceeds $3n/(2p)$ only if fewer than $n$ of the first $3n/(2p)$ sites examined belong to $A$. By standard Chernoff bounds (for example, Theorem A.1.18 of \cite{ProbMeth}), this is at most $e^{-n/12}$. Hence $\P_p(F^c) \leq n^2 e^{-n/12}$. By combining this with \eqref{eq:probEc} and the assumption that $\P_p\big(\Gamma(n)\big)\geq\alpha$, the probability that $\Gamma(n)\cap E\cap F$ fails is
\begin{align*}
\P_p\big((\Gamma(n)\cap E\cap F)^c\big) &\leq \P_p\big(\Gamma(n)^c\big) + \P_p(E^c) + \P_p(F^c) \\
&\leq 1 - \alpha + 2n^{-1} + n^2e^{-n/12} \\
&\leq 1 - (1-\epsilon)\alpha,
\end{align*}
provided $n$ is sufficiently large.
Finally, $E$ ensures that $X_i(x,y)\leq p^{-3}$ for $i=1,2$ and for every $x$ and $y$, so given that $\Gamma(n)\cap E\cap F$ occurs, the percolation time is at most
\[
p^{-6} + 3np^{-1} + n \leq 4n/p.
\]
Here we have used the fact that $\alpha>0$ implies $p\geq(1-\delta)p_c([n]^2,2)$ for some $\delta>0$, and hence $p^{-6}\ll n/p$; moreover, we have $p\leq(6\log n)^{-1/2}=o(1)$ because $\gamma/2\geq 3p^{-1}\log n$, and hence also $n\ll n/p$.
\end{proof}
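The Chernoff estimate used in the proof is the standard lower-tail bound $\P\big(\bin(m,p)<n\big)\leq e^{-(mp-n)^2/2mp}$ for $n<mp$. For small parameters it can be confirmed against the exact binomial tail (a numerical sanity check of ours, not a proof):

```python
from math import comb, exp

def lower_tail(m, p, n):
    """Exact P(Bin(m, p) < n)."""
    return sum(comb(m, k) * p ** k * (1 - p) ** (m - k) for k in range(n))

def chernoff(m, p, n):
    """Standard Chernoff bound for the lower tail, valid for n < m * p."""
    return exp(-((m * p - n) ** 2) / (2 * m * p))

for m, p, n in [(40, 0.5, 10), (200, 0.1, 10), (133, 0.3, 20), (266, 0.3, 39)]:
    assert lower_tail(m, p, n) <= chernoff(m, p, n)
```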
In the next lemma we determine the critical grid size as a function of $p$. The lemma uses the notion of a strongly good $m$-cell, which was defined to be an $m$-cell $D$ such that $I_{Bm/p}(D)$ holds for a large constant $B$.
\begin{lemma}\label{le:mu}
There exist $\delta>0$ and a function $\mu:(0,\delta)\to(0,\infty)$ satisfying $\mu(p)=\lambda+o(1)$ as $p\to 0$, such that if $\hat{K}(p)=\exp\big(\mu(p)/p\big)$ then
\begin{enumerate}
\item the probability that a $\hat{K}(p)$-cell contains an internally spanned critical droplet is $1/2+o(1)$ as $p\to 0$, and
\item the probability that a $\hat{K}(p)$-cell is strongly good is at least $1/2$ for all $p\in(0,\delta)$, provided $B\geq 4$.
\end{enumerate}
\end{lemma}
\begin{proof}
We shall prove the existence of a function $\mu(p)$ such that the probability that a $\hat{K}(p)$-cell contains an internally spanned critical droplet is at least $1/2$ and at most $1/2+o(1)$. This will prove (i), and the step up from $\Gamma\big(\hat{K}(p)\big)$ to $\strongly\big(\hat{K}(p)\big)$ required for (ii) will be provided by Lemma \ref{le:holroydplus}.
We use Theorem $1$ of \cite{Hol}, in the following form (writing $I(n)$ for the event $I([n]^2)$): if $\liminf p\log n > \lambda$ then $\P_p\big(I(n)\big)\to 1$, while if $\limsup p\log n < \lambda$ then $\P_p\big(I(n)\big)\to 0$. If $[n]^2$ is internally spanned then certainly it contains an internally spanned critical droplet, by Lemma \ref{le:AL}, so it follows that if $\liminf p\log n > \lambda$ then $\P_p\big(\Gamma(n)\big)\to 1$. For a corresponding statement from below we need to use the proof of Theorem $1$ in \cite{Hol}, rather than the statement of the theorem itself. The proof shows that if $\limsup p\log n \leq (1-\epsilon)\lambda$, then with high probability $[n]^2$ does not contain an internally spanned droplet with long side between $C/p$ and $2C/p$, where $C$ is a large constant depending on $\epsilon$. Thus, by Lemma \ref{le:AL} again, under the same assumptions, with high probability $[n]^2$ does not contain an internally spanned critical droplet.
It follows that, for any $\epsilon>0$ and any $\epsilon'>0$, if $p$ is sufficiently small, then
\begin{equation}\label{eq:inv1}
\P_p\bigg(\Gamma\Big(\Big\lceil e^{\frac{\lambda+\epsilon'}{p}} \Big\rceil\Big)\bigg) > 1-\epsilon
\end{equation}
and
\begin{equation}\label{eq:inv2}
\P_p\bigg(\Gamma\Big(\Big\lfloor e^{\frac{\lambda-\epsilon'}{p}} \Big\rfloor\Big)\bigg) < \epsilon.
\end{equation}
Now, cover $[n+1]^2$ with one copy of $[n]^2$, two copies of $[n]\times[\gamma]$ covering the top $\gamma$ rows, and one copy of $[\gamma]\times[n]$ covering the intersection of the right-most $\gamma$ columns with the bottom $n$ rows. Since a critical droplet has both side-lengths less than $\gamma$, if $[n+1]^2$ contains an internally spanned critical droplet, then so must at least one of the four covering sets, so we may deduce that
\[
\P_p\big(\Gamma(n+1)\big) \leq \P_p\big(\Gamma(n)\big) + 3\P_p\big(\Gamma([n]\times[\gamma])\big).
\]
By tiling $[n]^2$ with $[n]\times[\gamma]$ rectangles we have
\[
1-\P_p\big(\Gamma(n)\big) \leq \Big(1-\P_p\big(\Gamma([n]\times[\gamma])\big)\Big)^{n/\gamma},
\]
so $\P_p\big(\Gamma(n)\big) \gg \P_p\big(\Gamma([n]\times[\gamma])\big)$ provided $n\gg\gamma$. Hence, if $p$ is sufficiently small, then
\[
\P_p\big(\Gamma(n+1)\big) \leq (1+\epsilon)\P_p\big(\Gamma(n)\big).
\]
Together with \eqref{eq:inv1} and \eqref{eq:inv2} it follows that there exists a function $\mu(p)=\lambda+o(1)$ such that the probability that a $\hat{K}(p)$-cell contains an internally spanned critical droplet is at least $1/2$ and at most $1/2+\epsilon$. As previously observed, this proves (i), and (ii) now follows from Lemma \ref{le:holroydplus}.
\end{proof}
We are now in a position to define the critical grid size $K$.
\begin{definition}\label{de:K}
Let $p_0>0$ be a quantity to be determined later, but which certainly satisfies $p_0<\delta$, where $\delta$ is as in Lemma \ref{le:mu}. The critical grid size $K=K(p)$ is defined by
\[
K(p) = \begin{cases} \exp\big(\mu(p)/p\big) & \text{if } p\leq p_0 \\ \exp\big(\mu(p_0)/p_0\big) & \text{if } p>p_0. \end{cases}
\]
(Thus, $K(p)$ is equal to the function $\hat{K}(p)$ defined in Lemma \ref{le:mu} when $p\leq p_0$.)
\end{definition}
The purpose of the parameter $p_0$ is that by taking $p_0$ sufficiently small we obtain an arbitrarily large lower bound for $K(p)$ uniformly over all $p\in(0,1)$. Furthermore, by Lemma \ref{le:mu} (for small $p$) and by coupling (for $p\geq p_0$), a $K(p)$-cell is strongly good with probability at least $1/2$ for all $p$.
The function $\mu$ whose existence we have just proved in Lemma \ref{le:mu} is precisely the function $\mu$ whose existence is asserted in Theorem \ref{th:small}. Thus, the significance of the factor of $K=\exp\big(\mu(p)/p\big)$ in the formula for $T$ in that theorem is that it is a grid size at which the probability of percolation is a constant.
The two functions $\mu$ and $K$ will continue to be used extensively throughout the paper, so it is worth bearing in mind their key properties: these are that $\mu(p)$ is equal to $\lambda+o(1)$ and that the probability a $p$-random subset of $[K]^2$ percolates is approximately constant.
\section{Large $p$}\label{se:large}
The lower bound in Theorem \ref{th:large} is better described as an observation. We state it here as a separate lemma so that it can be reused for part of the proof of the lower bound in Theorem \ref{th:small}. Here, and throughout the paper, we use $q$ to denote $1-p$, the probability that a site is initially uninfected.
\begin{lemma}\label{le:lb}
Let $p=p(n)$ be a sequence of probabilities and, as usual, let $T$ be the percolation time of a $p$-random subset of $[n]^2$. Then
\[
T \geq \frac{(1+o(1))\log n}{2\log 1/q}
\]
with high probability as $n\to\infty$.
\end{lemma}
\begin{proof}
Let $t\geq 1$. Divide $[n]^2$ into $n^2/(4t+2)$ disjoint $(2t+1)\times 2$ rectangles. If any one of these rectangles is initially empty, then its two middle sites cannot be infected before the $t$th step, so $T$ must be at least $t$. Hence
\[
\P_p(T \leq t) \leq (1-q^{(4t+2)})^{n^2/(4t+2)}.
\]
The right-hand side is at most
\[
\exp\big[-q^{(4t+2)} n^2/(4t+2)\big] = \exp\Big(-\exp\big(2\log n - \log(4t+2) - (4t+2)\log(1/q)\big)\Big),
\]
which is $o(1)$ if
\[
\limsup_{n\to\infty} \frac{2t\log 1/q}{\log n} < 1. \qedhere
\]
\end{proof}
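The blocking mechanism in the proof can be confirmed by direct simulation: even if every site outside an initially empty $(2t+1)\times 2$ rectangle is infected at time zero, the empty rectangle erodes only from its two short ends, so its middle sites are not infected until step $t+1$. A sketch (ours):

```python
def central_infection_time(t):
    """Every site outside an empty (2t+1) x 2 rectangle starts infected;
    return the step at which a central site of the rectangle is infected."""
    width, height = 2 * t + 3, 6
    block = {(x, y) for x in range(1, 2 * t + 2) for y in (2, 3)}
    infected = {(x, y) for x in range(width) for y in range(height)} - block
    centre = (t + 1, 2)
    step = 0
    while centre not in infected:
        step += 1
        new = {(u, v) for (u, v) in block - infected
               if sum(nb in infected for nb in
                      [(u - 1, v), (u + 1, v), (u, v - 1), (u, v + 1)]) >= 2}
        infected |= new
    return step

for t in (1, 2, 5, 10):
    assert central_infection_time(t) == t + 1
```

Each interior site of the block has only one neighbour outside it, so the infection can only advance one column per step from the two short ends.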
Now we begin the build-up to the proof of the upper bound in Theorem \ref{th:large}. In the previous section we established the existence of the critical grid size $K=\exp\big(\mu(p)/p\big)$, where by ``critical'' in this context we mean that $p$-random initial subsets of cells of this side length percolate with probability bounded away from $0$ and $1$. Here we use this critical grid size as the base case of an induction argument which proves the existence of a larger, but not considerably larger, grid size $L$, with the property that the probability that cells of this side length fail to percolate is essentially equal to the probability of the existence of an empty double row or column of the same length. The reason we want $L$ not to be too large is that it appears as an error term in the proof of the upper bound in Theorem \ref{th:large}.
The following inequality will form the basis of the induction argument we use to prove the important properties of $L$. Recall that we write $\eta_m$ for the probability that an $m$-cell $D$ is bad, where bad was defined to mean that $\interior(D)\not\subset[D\cap A]_{Bm/p}$.
\begin{lemma}\label{le:etaineq}
If $B\geq 50$, then for all $m\geq 1$ we have
\begin{equation}\label{eq:eta}
\eta_{2m} \leq \eta_m^4 + 100 m^2 q^{4m-8}.
\end{equation}
\end{lemma}
\begin{proof}
Suppose a $2m$-cell $D=[(1,1),(2m,2m)]$ is bad, and divide it into four disjoint $m$-cells $D_1$, $D_2$, $D_3$ and $D_4$, with bottom-left corners at $(1,1)$, $(1,m+1)$, $(m+1,1)$ and $(m+1,m+1)$ respectively. Either all four $m$-cells are bad, or at least one of them is good. Suppose one of them is good, say $D_1$.
Let $S$ be a droplet such that $\lg(S)=m+1$ and $\sh(S)=m-2$. Suppose first that $S$ is taller than it is wide, so that $\dim(S)=(m-2,m+1)$. In that case we say that $S$ is \emph{traversable} if it has no empty double rows; so if, without loss of generality, $S=[(1,1),(m-2,m+1)]$, then $S$ is traversable if the sets $[(1,1),(m-2,2)]$, $[(1,2),(m-2,3)]$, $\dots$, $[(1,m),(m-2,m+1)]$ are all occupied. We say that $S$ is \emph{quickly traversable} if every site in $S$ except those in its topmost row is infected by time $25m/p$ provided the row immediately below $S$ is initially full. So again, if $S=[(1,1),(m-2,m+1)]$, $S'=[(1,0),(m-2,0)]$ and $S''=[(1,m+1),(m-2,m+1)]$, then $S$ is quickly traversable if $S\setminus S''\subset[S'\cup(A\cap S)]_{25m/p}$. If instead $S$ is oriented so that $\dim(S)=(m+1,m-2)$ then similarly we say that $S$ is \emph{traversable} if it has no empty double columns and \emph{quickly traversable} if every site in $S$ except those in its right-most column is infected by time $25m/p$ provided the column immediately to the left of $S$ is initially full.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.8,>=latex]
\draw [fill=gray!50] (0.5,0.5) rectangle (4.5,4.5);
\draw [fill=gray!50] (1.5,4.5) rectangle (2,5);
\draw [fill=gray!50] (2.5,5) rectangle (3,5.5);
\draw [fill=gray!50] (0.5,6) rectangle (1,6.5);
\draw [fill=gray!50] (3,6.5) rectangle (3.5,7);
\draw [fill=gray!50] (1.5,4.5) rectangle (2,5);
\draw [fill=gray!50] (3.5,6) rectangle (4,6.5);
\draw [gray] (0.5,4.5) -- (0.5,10) (4.5,4.5) -- (4.5,10);
\foreach \x in {4.5,5,...,9.5}
\draw [gray] (0.5,\x) -- (4.5,\x);
\draw [decorate,decoration={brace,amplitude=4pt},xshift=-6pt] (0.5,4.55) -- (0.5,5.45) node [black,midway,xshift=-20pt] {$X_1$};
\draw [decorate,decoration={brace,amplitude=4pt},xshift=-6pt] (0.5,5.55) -- (0.5,6.45) node [black,midway,xshift=-20pt] {$X_3$};
\draw [decorate,decoration={brace,amplitude=4pt,mirror},xshift=6pt] (4.5,5.05) -- (4.5,5.95) node [black,midway,xshift=20pt] {$X_2$};
\draw [decorate,decoration={brace,amplitude=4pt,mirror},xshift=6pt] (4.5,6.05) -- (4.5,6.95) node [black,midway,xshift=20pt] {$X_4$};
\node at (-0.35,7) {$\vdots$};
\node at (5.35,7.5) {$\vdots$};
\draw (0,0) rectangle (10,10);
\draw (5,0) -- (5,10) (0,5) -- (10,5);
\draw (0,2.5) -- (-1,2.5) node [left] {$D_1$};
\draw (0,7.5) -- (-1,7.5) node [left] {$D_2$};
\draw (10,2.5) -- (11,2.5) node [right] {$D_3$};
\draw (10,7.5) -- (11,7.5) node [right] {$D_4$};
\end{tikzpicture}
\caption{The $2m$-cell $D$ is divided into four $m$-cells; $D_1$, which is semi-good (its interior is shown shaded), together with $D_2$, $D_3$ and $D_4$. The aim is to grow upwards from $D_1$ into $D_2$ quickly, by considering the left-most initially infected site in each double row above $\interior(D_1)$ in turn. In this figure, $X_1=3$, $X_2=5$ and $X_3=X_4=1$.}\label{fi:etaineq}
\end{figure}
The six droplets to which we need to apply these definitions are the following:
\begin{align*}
S_1 &= [(2,m),(m-1,2m)], & S_2 &= [(1,m+2),(m+1,2m-1)], \\
S_3 &= [(m+2,1),(2m-1,m+1)], & S_4 &= [(m,2),(2m,m-1)], \\
S_5 &= [(m+2,m),(2m-1,2m)], & S_6 &= [(m,m+2),(2m,2m-1)].
\end{align*}
Clearly droplets $S_1$, $S_3$ and $S_5$ all have dimensions $(m-2,m+1)$, while droplets $S_2$, $S_4$ and $S_6$ all have dimensions $(m+1,m-2)$. (See Figure \ref{fi:twoSs}.)
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.8,>=latex]
\draw [fill=gray!50] (0.5,0.5) rectangle (4.5,4.5);
\draw [very thick] (0.5,4.5) rectangle (4.5,10);
\draw [very thick] (4.5,5.5) rectangle (10,9.5);
\foreach \x in {5.5,6,...,9.5}
\draw [gray] (0.5,\x) -- (4.5,\x) (\x,5.5) -- (\x,9.5);
\draw (0,0) rectangle (10,10);
\draw (5,0) -- (5,10) (0,5) -- (10,5);
\draw (0,2.5) -- (-1,2.5) node [left] {$D_1$};
\draw (0,7.5) -- (-1,7.5) node [left] {$D_2$};
\draw (10,2.5) -- (11,2.5) node [right] {$D_3$};
\draw (10,7.5) -- (11,7.5) node [right] {$D_4$};
\draw (0.5,5.75) -- (-1,5.75) node [left] {$S_1$};
\draw (8.75,5.5) -- (8.75,4.25) -- (11,4.25) node [right] {$S_6$};
\end{tikzpicture}
\caption{Suppose that $D_1$ is good, that among every disjoint pair of droplets $S_i$ and $S_j$ at least one is traversable, and that no droplet $S_i$ is traversable but not quickly traversable. Then $D$ is also good.}\label{fi:twoSs}
\end{figure}
It is easy to see that if $D_1$ is good but $D$ is bad then at least two disjoint $S_i$ are not traversable, or at least one of the $S_i$ is traversable but not quickly traversable. The probability of the first of these is at most $15m^2(1-p)^{4m-8}$. To bound the probability of the second, suppose without loss of generality that $S_1$ is traversable but not quickly traversable, and let $X_i$ be the position of the first initially infected site (or pair of sites) along the $i$th double row $[(2,m+i-1),(m-1,m+i)]$, counting from the left, for $i=1,\dots,m$ (see Figure \ref{fi:etaineq}). Note that the time it takes $S_1$ to fill given that the row immediately below it is initially full is at most
\[
2\sum_{i=1}^m X_i + m.
\]
If this quantity is greater than $25m/p$ then $2\sum_{i=1}^m X_i > 25m/p - m \geq 24m/p$, and hence, crudely,
\[
\max \left\{ \sum_{i \text{ odd}} X_i , \sum_{i \text{ even}} X_i \right\} > \frac{6m}{p}.
\]
Each of the sums on the left-hand side consists of independent Geometric random variables, so the probability that one of the sums is greater than $6m/p$ is at most the probability that a $\bin(6m/p,p)$ random variable is less than $m/2$. This probability is
\begin{align}\label{eq:binombound}
\P\left(\bin\left(\frac{6m}{p},p\right)<\frac{m}{2}\right) &\leq \sum_{k=0}^{m/2} \binom{6m/p}{k} p^k (1-p)^{6m/p-k} \notag \\
&\leq 2 \binom{6m/p}{m/2} p^{m/2} (1-p)^{6m/p-m/2} \notag \\
&\leq 2 (12e)^{m/2} (1-p)^{6m/p-m/2} \notag \\
&\leq \exp\Big( -\big( (5/p)\log(1/q) - (1/2)\log(24e) \big) m \Big).
\end{align}
The inequality
\[
(5/p)\log 1/q - (1/2)\log(24e) > 4\log 1/q
\]
holds for all $p\in(0,1)$, so \eqref{eq:binombound} is at most $\exp(-4m\log 1/q)$ for all $p\in(0,1)$. Thus, the probability that at least one of the $S_i$ is traversable but not quickly traversable is at most $6\exp(-4m\log 1/q)$.
Putting these observations together, and noting that there were four choices for the good square, we have
\[
\eta_{2m} \leq \eta_m^4 + 60 m^2 q^{4m-8} + 24 q^{4m} \leq \eta_m^4 + 100 m^2 q^{4m-8},
\]
where the last inequality holds because $q^{4m}\leq q^{4m-8}$. This completes the proof.
\end{proof}
The inequality we have just derived in Lemma \ref{le:etaineq} is the tool that will drive the induction argument in the next lemma to prove the key property of the grid size $L$, which is that the probability that percolation fails is comparable to the probability of the existence of an empty double row or column. Before stating the lemma we define the grid size $L$.
\begin{definition}\label{de:L}
Let $A$ be a large constant. We define
\[
L:=AK^2\log 1/q.
\]
\end{definition}
\begin{lemma}\label{le:etaineq2}
Let $A\geq 3/\log 2$. Then the probability an $L$-cell is bad satisfies the inequality $\eta_L\leq 50L^2q^{-8}\exp(-2L\log 1/q)$.
\end{lemma}
\begin{proof}
By Lemma \ref{le:etaineq}, for any $m\geq 1$,
\[
\eta_{2m} \leq \max\big\{ 2\eta_m^4 \, , \, 200m^2q^{4m-8} \big\}.
\]
Since $\eta_K\leq 1/2$ by the definition of $K$, it follows by induction that
\[
\eta_{2^r K} \leq \max\big\{ 2^{-4^r + (4^r-1)/3} \, , \, 200 (2^{r-1}K)^2q^{4\cdot 2^{r-1}K-8} \big\}.
\]
Somewhat crudely, $\eta_{2^r K}$ is at most the second term in the maximum on the right-hand side if $(2/3) 4^r \log 2 \geq 2^{r+1}K \log 1/q$, which holds if
\[
2^r K \geq \frac{3}{\log 2} K^2 \log 1/q.
\]
Since $L\geq (3/\log 2) K^2 \log 1/q$, we conclude that
\[
\eta_L \leq 50 L^2 q^{-8} \exp\big( -2L\log 1/q \big)
\]
as required.
\end{proof}
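The exponent $4^r-(4^r-1)/3$ in the proof is exactly what iterating $\eta\mapsto 2\eta^4$ produces: writing $\eta_{2^rK}\leq 2^{-a_r}$, the recursion is $a_{r+1}=4a_r-1$ with $a_0=1$. A quick check of the closed form (ours):

```python
def closed_form(r):
    # a_r = 4^r - (4^r - 1)/3; note that (4^r - 1)/3 is always an integer
    return 4 ** r - (4 ** r - 1) // 3

a = 1  # a_0 = 1, i.e. eta_K <= 1/2
for r in range(12):
    assert a == closed_form(r)
    a = 4 * a - 1  # one application of eta_{2m} <= 2 * eta_m^4
```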
The next ingredient we need is a technical lemma which will allow us to prove the convergence of a certain geometric series. This is the only point in the proof where we use the fact that $L$ is not too small.
\begin{lemma}\label{le:ratio}
Let $C>0$ be a constant, let $p_0$ (in the definition of $K$) be sufficiently small, and let $A$ (in the definition of $L$) be sufficiently large. Then
\[
\big( CL^2q^{-8} \big)^{1/L} (1-p)^{1/8} < 1.
\]
\end{lemma}
\begin{proof}
We require $CL^2q^{-8} < q^{-L/8}$, or, since $L=AK^2\log 1/q$, equivalently we require
\begin{equation}\label{eq:ratio}
\log(CA^2) + 4\log K + 2\log\log 1/q + 8\log 1/q < \frac{A}{8} K^2 (\log 1/q)^2.
\end{equation}
By taking $p_0$ sufficiently small in Definition \ref{de:K}, we may assume that $K^2\log 1/q$ and $K^2(\log 1/q)^2$ are larger than any given absolute constant, and that $\log K\leq K^2(\log 1/q)^2$, uniformly over $p\in(0,1)$. Also, $2\log\log 1/q\leq 2\log 1/q$ whenever $\log 1/q\geq 1$, and $2\log\log 1/q\leq 0$ otherwise, so the left-hand side of \eqref{eq:ratio} is at most
\[
\log(CA^2) + 4\log K + 10\log 1/q.
\]
Each of these three terms is at most $(A/24)K^2(\log 1/q)^2$ provided $A$ is sufficiently large, which proves \eqref{eq:ratio}.
\end{proof}
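For concreteness, the inequality of Lemma \ref{le:ratio} can also be checked numerically for sample values of the constants (an illustration only; the particular values of $C$, $A$ and of $\mu\approx\lambda$ used below are our own choices):

```python
from math import exp, log, pi

lam = pi ** 2 / 18          # lambda, the constant in mu(p) = lambda + o(1)
A_const, C = 100.0, 100.0   # sample values for the constants A and C

for p in (0.02, 0.05, 0.1, 0.2, 0.3):
    q = 1 - p
    K = exp(lam / p)
    L = A_const * K ** 2 * log(1 / q)
    # compare the logarithms of C L^2 q^{-8} and of q^{-L/8}
    lhs = log(C) + 2 * log(L) + 8 * log(1 / q)
    rhs = (L / 8) * log(1 / q)
    assert lhs < rhs
```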
An \emph{up-right $m$-path} is a sequence of $m$-cells $D_1,\dots,D_u$ such that, for $1\leq i\leq u-1$, if $D_i=[(a,b),(c,d)]$ then $D_{i+1}$ is either equal to $[(a+m,b),(c+m,d)]$ or to $[(a,b+m),(c,d+m)]$. Thus, the bottom-left corner of $D_{i+1}$ is obtained from the bottom-left corner of $D_i$ by adding $m$ to exactly one of its coordinates, so the $m$-cells are disjoint, but consecutive cells are touching. The \emph{length} of the up-right $m$-path $D_1,\dots,D_u$ is $u$. An \emph{up-right path} is simply an up-right $1$-path, and we do not distinguish $1$-cells from sites.
The next lemma is the key step in the proof of the upper bound of the large $p$ theorem, and one of the most important lemmas in the paper. The bootstrap process is restricted to the positive quadrant of the plane and we ask how likely it is that the origin is uninfected at time $t$. We show that this is roughly as likely as the existence of an empty single row or column of length about $t$ starting at the origin. This latter event clearly implies that the origin is uninfected at time (about) $t$, so the interest is that the contribution to the probability from other configurations which also guarantee the origin is uninfected at time $t$ is small.
The idea behind the proof is as follows. If the origin is uninfected at time $t$ then there must exist an up-right path of length $t$ of initially uninfected sites starting at the origin. Unfortunately the crude way of estimating the probability of this event --- by taking a union bound over all paths --- gives much too large an estimate; we get $2^t(1-p)^t$, and the problem here lies in the combinatorial factor of $2^t$. To overcome this, we tile the positive quadrant with $L$-cells and wait an initial time $t'=BL/p$ so that all good $L$-cells will have filled (possibly except for their edges). We then look at the original up-right path of initially uninfected sites, and observe that the first $t-t'$ sites in that path (counting from the origin) must be uninfected at time $t'$. Now consider the up-right $L$-path of length $(t-t')/L$ induced by our up-right path of length $t-t'$. Each $L$-cell either is bad or intersects the up-right path only on its edges. In the first case we gain a probability of $(1-p)^{2L}$ (up to a polynomial correction) from Lemma \ref{le:etaineq2}. There is still a combinatorial factor involved in choosing the $L$-cells, but it is smaller than before, and is beaten by the gain in probability of a factor of $(1-p)^{L}$ over what we would have obtained had the path of uninfected sites stayed on the edge of the quadrant (this is where we use Lemma \ref{le:ratio}). In the second case, if $l$ is the total length of the path along edges of $L$-cells (which need not be consecutive) then we obtain a probability close to $(1-p)^{2l}$, because the up-right path is restricted to long, straight segments, and these must be part of double empty rows or columns. Furthermore, the highly restricted nature of the path also implies that there is only a small combinatorial loss. In both cases, the probability is much smaller than it would have been had the up-right path remained on one of the edges of the quadrant.
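The combinatorial factor $2^t$ in the crude union bound is just the number of up-right paths with $t$ steps (hence $t+1$ sites). For tiny $t$ one can compare the union bound with the exact probability by exhausting all configurations (a brute-force illustration of ours):

```python
from itertools import product

def empty_path_probability(t, p):
    """Exact probability that some up-right path of t steps from the
    origin consists entirely of initially uninfected sites."""
    sites = [(x, y) for x in range(t + 1) for y in range(t + 1 - x)]
    total = 0.0
    for config in product((0, 1), repeat=len(sites)):
        empty = {s for s, c in zip(sites, config) if c == 0}
        if (0, 0) not in empty:
            continue
        reach = {(0, 0)}  # sites reachable by an empty up-right path
        for s in sorted(empty, key=lambda s: s[0] + s[1]):
            if (s[0] - 1, s[1]) in reach or (s[0], s[1] - 1) in reach:
                reach.add(s)
        if any(x + y == t for (x, y) in reach):
            prob = 1.0
            for c in config:
                prob *= p if c == 1 else 1 - p
            total += prob
    return total

p, t = 0.7, 4
union_bound = 2 ** t * (1 - p) ** (t + 1)
exact = empty_path_probability(t, p)
assert (1 - p) ** (t + 1) <= exact <= union_bound
```

The exact probability is sandwiched between the single-path lower bound $(1-p)^{t+1}$ and the union bound $2^t(1-p)^{t+1}$, as it must be.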
\begin{lemma}\label{le:upright}
Let $p\in(0,1)$ and $t\in\N$, and define $t'=BL(p)/p$, $D=[(0,0),(t,t)]$ and let $A\sim\bin(D,p)$. Then the probability that $(0,0)$ is uninfected at time $t$ is at most
\[
\frac{16(1-p)^{t-t'}}{p}.
\]
\end{lemma}
\begin{proof}
Suppose the origin is uninfected at time $t$. If a site $y$ is such that both $y+e_1$ and $y+e_2$ are infected at some time $s$, then $y$ is certainly infected at time $s+1$. It follows that there exists an up-right path $x_1,\dots,x_{t-t'+1}$ of uninfected sites at time $t'$ with $x_1=(0,0)$. Let $k$ be maximal such that both coordinates of $x_{t-t'+1-k}$ are non-zero, or if there is no such $k$ then set $k=0$. Thus $k=0$ corresponds to the existence of an unoccupied straight line of length $t-t'+1$ with one endpoint at the origin. We shall show that the event $k=0$ is the most likely way of ensuring that the origin is uninfected at time $t$.
The up-right path $x_{t-t'-k},\dots,x_{t-t'+1}$ intersects an up-right $L$-path $D_1,\dots,D_\tau$, where $x_{t-t'-k}$ is the bottom-left site of $D_1$ and $\tau\geq k/L$. Since $x_1,\dots,x_{t-t'+1}$ are uninfected at time $t'=BL/p$, none of the $L$-cells $D_1,\dots,D_\tau$ is strongly good, so each is either semi-good or bad.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.4,>=latex]
\tikzstyle{site}=[draw,fill=gray!50];
\draw [site] (0,0) rectangle (1,1);
\draw [site] (1,0) rectangle (2,1);
\draw [site] (2,0) rectangle (3,1);
\draw [site] (3,0) rectangle (4,1);
\draw [site] (4,0) rectangle (5,1);
\draw [site] (5,0) rectangle (6,1);
\draw [site] (6,0) rectangle (7,1);
\draw [site] (7,0) rectangle (8,1);
\draw [site] (7,1) rectangle (8,2);
\draw [site] (8,1) rectangle (9,2);
\draw [site] (9,1) rectangle (10,2);
\draw [site] (10,1) rectangle (11,2);
\draw [site] (10,2) rectangle (11,3);
\draw [site] (10,3) rectangle (11,4);
\draw [site] (10,4) rectangle (11,5);
\draw [site] (11,4) rectangle (12,5);
\draw [site] (12,4) rectangle (13,5);
\draw [site] (13,4) rectangle (14,5);
\draw [site] (13,5) rectangle (14,6);
\draw [site] (14,5) rectangle (15,6);
\draw [site] (15,5) rectangle (16,6);
\draw [site] (15,6) rectangle (16,7);
\draw [site] (16,6) rectangle (17,7);
\draw [site] (17,6) rectangle (18,7);
\draw [site] (18,6) rectangle (19,7);
\draw [site] (18,7) rectangle (19,8);
\draw [site] (18,8) rectangle (19,9);
\draw [site] (19,8) rectangle (20,9);
\draw [site] (20,8) rectangle (21,9);
\draw [site] (21,8) rectangle (22,9);
\draw [site] (22,8) rectangle (23,9);
\draw (7,0) rectangle (12,5);
\draw (12,0) rectangle (17,5);
\draw (12,5) rectangle (17,10);
\draw (17,5) rectangle (22,10);
\draw (0.5,0.5) -- (0.5,-1) node [below] {$x_1$};
\draw (7.5,0.5) -- (7.5,-1) node [below] {$x_{t-t'-k}$};
\draw (9.5,5) -- (9.5,6) node [above] {$D_1$};
\end{tikzpicture}
\caption{The shaded squares are sites forming an up-right path $(0,0)=x_1,x_2,\dots,x_{t-t'+1}$. In this example, $L=5$ and the four $5$-cells shown are the up-right $L$-path $D_1$, $D_2$, $D_3$, $D_4$.}\label{fi:upright}
\end{figure}
Let $E_2(i,j)$ denote the event that the $L$-cells $D_i,\dots,D_j$ are semi-good, and that at time $t'$ there exists an up-right path $y_1,\dots,y_u$ of uninfected sites entirely contained within $\partial D_i\cup\dots\cup\partial D_j$, with $y_1$ an element of either the bottom or left edge of $D_i$ and $y_u$ an element of either the top or right edge of $D_j$.
Given an $m$-cell $D=[(a,b),(c,d)]$, let the \emph{left buffer} of $D$ be the $2\times(m-2)$ rectangle $[(a-1,b+1),(a,d-1)]$, and define similarly the \emph{right}, \emph{top} and \emph{bottom buffers} of $D$. Observe that, given adjacent $m$-cells $S_1$ and $S_2$ in an up-right $m$-path, either the right buffer of $S_1$ is the same as the left buffer of $S_2$, or the top buffer of $S_1$ is the same as the bottom buffer of $S_2$. Now suppose that among $D_1,\dots,D_\tau$ there is a sequence of $r$ consecutive semi-good cells $D_i,\dots,D_{i+r-1}$, so that the event $E_2(i,i+r-1)$ occurs. Let $\mathcal{B}$ denote the set of buffers of $D_i,\dots,D_{i+r-1}$, excluding the left and bottom buffers of $D_i$ and the top and right buffers of $D_{i+r-1}$. Since the interiors of $D_i,\dots,D_{i+r-1}$ are all full by time $t'$, the existence of an up-right path along the edges of $D_i,\dots,D_{i+r-1}$ of sites uninfected at time $t'$ implies that at least $r-1$ of the buffers in $\mathcal{B}$ were initially unoccupied. The reason for this is that if one considers sides of an $L$-cell to have unit length, then the $\ell_1$ distance between either the top-left or the bottom-right corner of $D_i$ and either the top-left or the bottom-right corner of $D_{i+r-1}$ is equal to $r-1$. Crucially, by the definition of $k$, these unoccupied buffers are all subsets of $D$. Each buffer is a set of $2(L-2)$ sites, so
\begin{equation}\label{eq:E2}
\P_p\big(E_2(i,i+r-1)\big) \leq 2^{r-1} (1-p)^{2(r-1)(L-2)}.
\end{equation}
(Had we chosen $k$ differently, some of the buffers may have been only half contained in $D$, which would render this bound incorrect. In other words, this is the point in the argument where, rather subtly, we use the fact that the up-right path has moved away from the boundary of $D$.) The bound in \eqref{eq:E2} does not give any information when $r=1$, but in that case we still have $\P_p\big(E_2(i,i)\big)\leq 4(1-p)^L$, since $D_i$ is only semi-good, so at least one of its edges is empty.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.5,>=latex]
\tikzstyle{buffer}=[thick,fill=gray!50];
\tikzstyle{cell}=[gray];
\tikzstyle{int}=[gray!70];
\draw [int] (0.5,0.5) rectangle (4.5,4.5);
\draw [int] (5.5,0.5) rectangle (9.5,4.5);
\draw [int] (10.5,0.5) rectangle (14.5,4.5);
\draw [int] (10.5,5.5) rectangle (14.5,9.5);
\draw [int] (15.5,5.5) rectangle (19.5,9.5);
\draw [buffer] (5.5,-0.5) rectangle (9.5,0.5);
\draw [buffer] (10.5,-0.5) rectangle (14.5,0.5);
\draw [buffer] (4.5,0.5) rectangle (5.5,4.5);
\draw [buffer] (9.5,0.5) rectangle (10.5,4.5);
\draw [buffer] (14.5,0.5) rectangle (15.5,4.5);
\draw [buffer] (0.5,4.5) rectangle (4.5,5.5);
\draw [buffer] (5.5,4.5) rectangle (9.5,5.5);
\draw [buffer] (10.5,4.5) rectangle (14.5,5.5);
\draw [buffer] (15.5,4.5) rectangle (19.5,5.5);
\draw [buffer] (9.5,5.5) rectangle (10.5,9.5);
\draw [buffer] (14.5,5.5) rectangle (15.5,9.5);
\draw [buffer] (10.5,9.5) rectangle (14.5,10.5);
\draw [cell] (0,0) rectangle (5,5);
\draw [cell] (5,0) rectangle (10,5);
\draw [cell] (10,0) rectangle (15,5);
\draw [cell] (10,5) rectangle (15,10);
\draw [cell] (15,5) rectangle (20,10);
\node at (2.5,2.5) {$D_1$};
\node at (7.5,2.5) {$D_2$};
\node at (12.5,2.5) {$D_3$};
\node at (12.5,7.5) {$D_4$};
\node at (17.5,7.5) {$D_5$};
\end{tikzpicture}
\caption{An up-right $L$-path of length five. Buffers in the set $\mathcal{B}$ for this path are the shaded rectangles.}\label{fi:semi}
\end{figure}
Let $E_1(i,j)$ denote the event that all of the $L$-cells $D_i,\dots,D_j$ are bad. By Lemma \ref{le:etaineq2}, the probability of $E_1(i,i+r-1)$ is at most $f(p)^r (1-p)^{2Lr}$, where $f(p)=50L^2q^{-8}$.
Now, there exists a finite sequence $0=b_1<s_1<b_2<s_2<b_3<\dots$, where the last term is equal to $\tau$, such that the event
\[
E = E_1(b_1,s_1-1) \cap E_2(s_1,b_2-1) \cap E_1(b_2,s_2-1) \cap E_2(s_2,b_3-1) \cap \dots
\]
occurs. Suppose that the last term of the sequence is $\tau=b_{u+1}-1$; the argument is similar if $\tau=s_{u+1}-1$. Let $v$ be the number of $i$ for which $b_{i+1}-1=s_i$; thus, $v$ is the number of times that there are three consecutive $L$-cells in the sequence $D_1,\dots,D_\tau$ that are of the form bad, semi-good, bad, in that order. We have
\begin{align*}
\P_p(E) &= \prod_{i=1}^u \P\big( E_1(b_i,s_i-1) \cap E_2(s_i,b_{i+1}-1) \big) \\
&\leq 4^v (1-p)^{Lv} \prod_{i=1}^u f(p)^{s_i-b_i} (1-p)^{2L(s_i-b_i)} 2^{b_{i+1}-s_i-1} (1-p)^{2(L-2)(b_{i+1}-s_i-1)} \\
&\leq \big(8f(p)\big)^\tau (1-p)^{(L-2)(2\tau - 2u + v)}.
\end{align*}
By partitioning sequences of consecutive semi-good $L$-cells into those of length $1$ and those of length greater than $1$, we have $2v+3(u-v)\leq\tau$; since $v\geq 0$, it follows that $2u-v\leq\frac{2}{3}(3u-v)=\frac{2}{3}\big(2v+3(u-v)\big)\leq 2\tau/3$. Thus,
\[
\P_p(E) \leq \big(8f(p)\big)^\tau (1-p)^{4(L-2)\tau/3}.
\]
For a given $k$, and hence a given $\tau$, there are $2^\tau$ choices of up-right path of $L$-cells, and a further $2^\tau$ ways of choosing whether each $L$-cell is semi-good or bad. Therefore the probability that there exists an up-right path of uninfected sites of length $k$ starting from a given site is at most
\[
\big(32f(p)\big)^{k/L} (1-p)^{4k(1-2/L)/3} \leq \big(32f(p)\big)^{k/L} (1-p)^{5k/4}.
\]
Hence, the probability of the event $F$, which we define to be that the origin is uninfected at time $t$ in bootstrap percolation on the square $D$, is at most
\[
2 \sum_{k=0}^{t-t'} (1-p)^{t-t'-k} \big(32f(p)\big)^{k/L} (1-p)^{5k/4}.
\]
By taking $p_0$ (in the definition of $K$) sufficiently small and $A$ (in the definition of $L$) sufficiently large, and applying Lemma \ref{le:ratio}, the common ratio in the geometric series above, which is $\big(32f(p)\big)^{1/L}(1-p)^{1/4}$, has value at most $(1-p)^{1/8}$. Therefore,
\[
\P_p(F) \leq \frac{2 (1-p)^{t-t'}}{1-(1-p)^{1/8}} \leq \frac{16 (1-p)^{t-t'}}{p}. \qedhere
\]
\end{proof}
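The final step above, bounding $1-(1-p)^{1/8}$ below by $p/8$, is an instance of Bernoulli's inequality $(1-p)^{1/8}\leq 1-p/8$; the following quick numerical sanity check is a sketch of ours, not part of the proof:

```python
def bernoulli_check(ps):
    """Check that 1 - (1-p)**(1/8) >= p/8 for each p in ps.

    This is the inequality used to pass from the geometric-series bound
    2(1-p)^{t-t'} / (1 - (1-p)^{1/8}) to the bound 16(1-p)^{t-t'} / p.
    """
    return all(1 - (1 - p) ** 0.125 >= p / 8 for p in ps)
```

The exact expansion $(1-p)^{1/8}=1-p/8-\frac{7}{128}p^2-\dots$ shows the inequality is strict for all $p\in(0,1)$.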
It is now just a small step to proving the upper bound in Theorem \ref{th:large}. We apply Lemma \ref{le:upright} to each of the four sites in a $2\times 2$ square of sites uninfected at time $t-2$ (which is possible if there is a site not too close to the boundary which is uninfected at time $t$). The lemma implies that the probability these four sites stay uninfected that long is approximately the same as the probability that they are initially at the centre of a double empty row or column of length about $2t$, which is what we require. There is a little more work to do to take into account the sites near the boundary of the grid (this would not be necessary if we were working on the torus rather than the grid). These sites have a greater probability of being uninfected at time $t$, but this is offset by the relatively small number of them.
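The process under study can also be simulated directly. The following minimal sketch (ours, not from the paper) runs $2$-neighbour bootstrap percolation on $[n]^2$ from an explicit initial set and returns the percolation time $T$:

```python
def percolation_time(n, initial):
    """2-neighbour bootstrap percolation on the n x n grid [n]^2.

    initial -- set of initially infected sites, given as pairs (i, j).
    Returns the percolation time T, or None if the closure of the
    initial set is not the whole grid.
    """
    infected = set(initial)
    t = 0
    while len(infected) < n * n:
        # sites becoming infected at the next step: uninfected sites
        # with at least two infected orthogonal neighbours
        newly = {(i, j)
                 for i in range(n) for j in range(n)
                 if (i, j) not in infected
                 and sum((i + di, j + dj) in infected
                         for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))) >= 2}
        if not newly:
            return None  # the initial set does not percolate
        infected |= newly
        t += 1
    return t
```

For example, the diagonal of $[3]^2$ percolates in time $2$, while a single site in $[2]^2$ does not percolate at all.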
\begin{proof}[Proof of Theorem \ref{th:large}]
The lower bound of Theorem \ref{th:large} is Lemma \ref{le:lb}, so we only have to prove the upper bound.
Let $t'=BL/p+2$. Suppose a site $x$ is uninfected at time $t\geq 2$, and suppose first that $x$ is not within distance $t$ of the boundary of $[n]^2$. It is easy to check that $x$ must be contained in a $2\times 2$ square of sites uninfected at time $t-2$, say $x_1$, $x_2$, $x_3$ and $x_4$. Let $D_1$, $D_2$, $D_3$ and $D_4$ be the four $t$-cells such that $x_i\in D_i$ for each $i$ and $x_i\notin D_j$ if $j\neq i$. Since $x_i$ is uninfected at time $t-2$ in bootstrap percolation with initial set $A\sim\bin([n]^2,p)$, it is also uninfected at time $t-2$ when the initial set is restricted to $A\cap D_i$. Applying Lemma \ref{le:upright} to each $D_i$ in turn, we find the probability that $x_1$, $x_2$, $x_3$ and $x_4$ are all uninfected at time $t-2$ is at most
\[
\left(\frac{16 (1-p)^{t-t'}}{p}\right)^4.
\]
It follows that the probability there exists a site $x$, which is uninfected at time $t$, but which is not within distance $t$ of the boundary of $[n]^2$, is at most
\begin{equation}\label{eq:xmiddle}
4n^2 \left( \frac{16 (1-p)^{t-t'}}{p} \right)^4 = 4 \exp \big( 2\log n + 4\log 1/p - 4(t-t')\log 1/q + O(1) \big).
\end{equation}
This is $o(1)$ if
\begin{equation}\label{eq:tt'}
t-t' \geq \frac{(1+\epsilon)\log n}{2\log 1/q}.
\end{equation}
When $x$ is close to the boundary of $[n]^2$, the calculation is similar. The probability that $x$ is uninfected at time $t$ is much larger, but to compensate for this there are fewer choices for $x$. Briefly, there at most $4nt$ sites within distance $t$ of one of the sides of $[n]^2$, but not within the same distance of one of the corners. Each such site which is uninfected at time $t$ has an adjacent site in an appropriate direction which is uninfected at time $t-1$. Applying Lemma \ref{le:upright} to this pair of sites and taking the union bound gives the probability that any of these sites is uninfected at time $t-1$ is at most
\begin{equation}\label{eq:xside}
8nt \left( \frac{16 (1-p)^{t-t'}}{p} \right)^2 = \exp \big( \log n - 2(t-t')\log 1/q + o(\log n) \big) = o(1),
\end{equation}
if \eqref{eq:tt'} holds. Similarly, there are at most $4t^2$ sites within distance $t$ of one of the corners of $[n]^2$, and by Lemma \ref{le:upright}, each has probability at most $16 (1-p)^{t-t'}/p$ of being uninfected at time $t$. Taking a union bound, the probability any of these is uninfected at time $t$ is at most
\begin{equation}\label{eq:xcorner}
4t^2\frac{16 (1-p)^{t-t'}}{p} = \exp\big( -(t-t')\log 1/q + o(\log n) \big) = o(1),
\end{equation}
if \eqref{eq:tt'} is satisfied.
Combining \eqref{eq:xmiddle}, \eqref{eq:xside} and \eqref{eq:xcorner}, recalling that $t'=BL/p+2=O\big(K^2(\log 1/q)/p\big)$, and using the notation of the statement of the theorem, we have
\[
T \leq \frac{(1+o(1))\log n}{2\log 1/q} + O\bigg(\frac{K^2\log 1/q}{p}\bigg)
\]
with high probability as $n$ tends to infinity. The deduction of the upper bound in Theorem \ref{th:large} from this statement is simply the assertion that if $\liminf p\log\log n > 2\lambda$ then
\[
\frac{K^2\log 1/q}{p} \ll \frac{\log n}{\log 1/q},
\]
which is an easy computation. This completes the proof of Theorem \ref{th:large}.
\end{proof}
\section{Upper bound for small $p$}\label{se:uppersmall}
The majority of the lemmas we shall use in the proof of the upper bound in the small $p$ regime are similar to the lemmas used in the proof of the upper bound in the large $p$ regime. In fact, some (those which were covered in Section \ref{se:critgrids}) are identical, and for the rest (those which were covered in Section \ref{se:large}), we observe, omitting most of the details, that only small modifications are required to adapt them to the small $p$ setting.
Our first lemma is an analogue of Lemma \ref{le:etaineq} in which ``bad'' is replaced by ``weakly bad'' (or equivalently, ``good'' is replaced by ``strongly good''). It is worth recalling that an $m$-cell was defined to be bad if its interior is not contained in the span of the whole cell by time $Bm/p$, while it is weakly bad if it is not internally spanned by the same time. Thus, the property of being bad is a stronger property of an $m$-cell than that of being weakly bad. (It is also worth recalling that the probability an $m$-cell is weakly bad is written $\theta_m$.)
\begin{lemma}\label{le:thetaineq}
If $B\geq 50$, then for all $m\geq 1$ we have
\[
\theta_{2m} \leq \theta_m^4 + 50m^2 q^{2m}.
\]
\end{lemma}
\begin{proof}
The proof of the lemma is similar to the proof of Lemma \ref{le:etaineq}, except many of the details are simpler. The advantage here is that we may assume that $D_1$ is strongly good, not just good, and this allows us to modify the meaning of \emph{traversable} so that it applies to the $m$-cells $D_1,\dots,D_4$, not to the $(m+1)\times(m-2)$ droplets $S_1,\dots,S_6$, and so that $D_i$ is traversable if all its \emph{single} rows and columns are occupied. Then the probability that at least two of the $m$-cells $D_2$, $D_3$, $D_4$ are not traversable is at most $12m^2(1-p)^{2m}$. The remainder of the proof is the same.
\end{proof}
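The way the recursion in Lemma \ref{le:thetaineq} collapses can be seen numerically. The sketch below iterates the bound $\theta_{2m}\leq\theta_m^4+50m^2q^{2m}$; the starting values are illustrative choices of ours, not from the paper:

```python
def iterate_theta_bound(theta0, m0, p, doublings):
    """Iterate the recursive bound theta_{2m} <= theta_m^4 + 50 m^2 q^{2m}.

    Returns the list of successive upper bounds for
    theta_{m0}, theta_{2*m0}, theta_{4*m0}, ...
    """
    q = 1 - p
    bounds = [theta0]
    m = m0
    for _ in range(doublings):
        bounds.append(bounds[-1] ** 4 + 50 * m * m * q ** (2 * m))
        m *= 2
    return bounds
```

Once the seed bound is small and $m$ is large enough that the error term $50m^2q^{2m}$ is negligible, the fourth-power term drives the bound down doubly exponentially.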
The next definition and the lemma following are the analogues of Definition \ref{de:L} (of the grid length $L$) and Lemma \ref{le:etaineq2}.
\begin{definition}\label{de:M}
Let $A$ be a large constant. We define
\[
M := \max\left\{ A\sqrt{p\log(n/K)}K \, , \, A\log(n/K) \right\}.
\]
\end{definition}
The definition of $M$, like the expression for $T$ in Theorem \ref{th:small}, is a maximum of two terms. (Of course, this is not a coincidence: Theorem \ref{th:small} says precisely that $T=\Theta(M/p)$.) As remarked in the introduction to the paper, the second of the two terms in the maximum (which there we called $t_2(n,p)$, so here it would be $pt_2(n,p)$) is only larger than the first, and therefore only relevant, when $\limsup p\log\log n\geq 2\lambda$. Thus, in the range in which Theorem \ref{th:large} does not supersede Theorem \ref{th:small}, the second term is only relevant when $p$ is approximately equal to $2\lambda/\log\log n$, and therefore the central part of Theorem \ref{th:small} is the assertion that $T$ is concentrated to within a constant factor when the first of the two terms is the larger.
\begin{lemma}\label{le:thetaineq2}
There exists a constant $c>0$ such that if $p$ is sufficiently small then the probability an $M$-cell is weakly bad satisfies
\[
\theta_M \leq \max\Big\{ \exp\big(-cp\log(n/K)\big) , \exp(-cpM) \Big\}.
\]
\end{lemma}
\begin{proof}
This time notice that the quantity $50K^2q^{2K}$ can be made arbitrarily small by taking $p$ sufficiently small. As in the proof of Lemma \ref{le:etaineq2}, we obtain
\[
\theta_{2^r K} \leq \max\big\{ 2^{-(2/3)4^r} , 100 (2^{r-1}K)^2q^{2\cdot 2^{r-1}K} \big\},
\]
and hence
\begin{equation}\label{eq:thetamax}
\theta_M \leq \max\Big\{ \exp\big(-c(M/K)^2\big) , C(M/K)^2\exp\big(-c(M/K)Kp\big) \Big\}
\end{equation}
for constants $c,C>0$. Now $M\geq A\sqrt{p\log(n/K)}K$ by definition, so $(M/K)^2\geq A^2p\log(n/K)$. Also, $pM\gg\log M$. Hence
\[
\theta_M \leq \max\Big\{ \exp\big(-cp\log(n/K)\big) , \exp(-cpM) \Big\}
\]
with a different constant $c$.
\end{proof}
The final lemma we need before we can prove the upper bound in Theorem \ref{th:small}, and the only one without an analogue in the large $p$ regime, is the following result which we shall use to bound the probability that there exists a large connected component of weakly bad $L$-cells. A proof can be found in \cite[pp.~129--132]{CiM}.
\begin{lemma}\label{le:coffeetime}
Let $G$ be a graph with maximum degree $d$. Then the number of connected induced subgraphs of $G$ of order $k$ that contain a given vertex is at most $(e(d-1))^k$.\qed
\end{lemma}
\begin{proof}[Proof of the upper bound in Theorem \ref{th:small}]
Tile $[n]^2$ with disjoint $M$-cells. After an initial time $t'=50M/p$, all uninfected sites will be contained in weakly bad $M$-cells. Consider the graph of $M$-cells in which there is an edge between two cells if they have a common side. Clearly this graph has maximum degree $4$. By Lemma \ref{le:coffeetime}, the probability there exists a connected component of weakly bad $M$-cells of order at least $k$ is at most
\begin{equation}\label{eq:probbadcomps}
\left(\frac{n}{M}\right)^2 (3e)^k \theta_M^k \leq \exp\big(2\log(n/M) + k\log\theta_M + O(k)\big).
\end{equation}
This quantity tends to zero if
\[
k \geq \frac{-3\log(n/M)}{\log\theta_M}.
\]
Recall from Lemma \ref{le:thetaineq2} that
\[
\theta_M \leq \max\Big\{ \exp\big(-cp\log(n/K)\big) , \exp(-cpM) \Big\}
\]
for some constant $c>0$. Noting also that $\log(n/M)\leq\log(n/K)$, it follows that \eqref{eq:probbadcomps} is $o(1)$ provided $k$ satisfies
\[
k \geq \max\left\{ \frac{3}{cp},\frac{3\log(n/M)}{cpM} \right\}.
\]
So with $k$ equal to the maximum of these two expressions, with high probability the largest component of weakly bad $M$-cells has size at most $k$. Any component of $M$-cells has at least one cell with at least two sides not connected to the rest of the component, so given that all other cells are strongly good, that cell becomes infected after at most $2M$ additional time steps. Continuing, the entire component of weakly bad $M$-cells becomes fully infected in time at most $t=2Mk$. Hence, with high probability, the percolation time $T$ is at most
\[
t'+t \leq \frac{50M}{p} + \frac{6M}{cp} + \frac{6\log(n/M)}{cp} \leq C\sqrt{\frac{\log(n/K)}{p}}K + C\frac{\log n}{p}
\]
for some constant $C>0$.
\end{proof}
\section{Lower bound for small $p$}\label{se:lowersmall}
Recall that Theorem \ref{th:small} states that if $\liminf p\log n>\lambda$ and $p\to 0$ then $T=\Theta(M/p)$ with high probability, where $M$ was defined by
\[
M = \max\left\{ A\sqrt{p\log(n/K)}K \, , \, A\log(n/K) \right\}.
\]
In the previous section we proved the upper bound. Here we concentrate on the lower bound, and since we have already proved in Lemma \ref{le:lb} that $T=\Omega\big((\log n)/p\big)$ with high probability, we only have to prove that
\[
T \geq c\sqrt{\frac{\log(n/K)}{p}}K
\]
with high probability, for some constant $c>0$. Thus, in this section we shall always assume that $p$ is sufficiently small that $\sqrt{p\log(n/K)}K\geq\log(n/K)$, and hence that $M=A\sqrt{p\log(n/K)}K$.
At the basic level the idea behind our proof of the lower bound in the small $p$ regime is quite simple: we show that with high probability there exists a region of the grid in which the initial configuration $A$ is in some sense relatively sparse, and then that even if all the sites outside of this area are initially infected, the percolation time must still be quite large. A little more precisely, we shall find as large an area of the grid as possible not containing an internally spanned critical droplet (that is what we mean here by ``sparse''). Letting this area be $D$, we then generously take a new initial set $A'$ to consist of the closure of $D\cap A$ together with all sites outside of $D$, and observe that since $A\subset A'$, the percolation time of $A$ is certainly at least the percolation time of our new initial set $A'$.
How long should the set $A'$ take to percolate? The answer to that question depends on the shape of the droplet $D$, so the question we must answer first is: how should we choose the ratio of the sides of the droplet $D$ in order to maximize the expected percolation time of $A'$? There are two effects to balance. First, diagonal lines of infected sites are becoming infected from the corners of $D$ deterministically at rate $1$. Second, sites in $D$ are infected with density $p$, so we expect the sides of $D$ to become infected at rate $p$. After a moment's thought, one realizes this means that the optimal ratio of sides for $D$ should be $p:1$. The majority of this section of the paper deals with formalizing this heuristic: that the sides of $D$ should become infected at rate $p$.
Recall that $K=K(p)=\exp\big(\mu(p)/p\big)$, where $\mu(p)=\lambda+o(1)$. The function $M=A\sqrt{p\log(n/K)}K$ is designed so that the largest region $D$ of the grid that is likely not to contain an internally spanned critical droplet has area $M^2/p$. Combining this with our observation about the optimal ratio of the sides of this droplet, it follows that the droplet $D$ should have long side length $M/p$ and short side length $M$. We define an \emph{$M$-slab} to be any such droplet; thus, $D$ is an $M$-slab if $\lg(D)=M/p$ and $\sh(D)=M$.
Suppose that an $M$-slab is filled in time $t$ assuming that all sites outside the $M$-slab are initially infected, where $t$ is smaller than $M/p$ by a large constant factor. We ask what route the infection took from the edge of the $M$-slab to the centre. Suppose the route came via the bottom edge. Then we can say, deterministically, that there must be a sequence of internally spanned droplets such that together they do not leave an empty double row between the bottom edge of the $M$-slab and the centre, and such that the sum of the horizontal distances between consecutive internally spanned droplets is considerably smaller than one would expect. This says that part of our supposedly sparse $M$-slab is much more dense than even an average $M$-slab.
A sequence of droplets joining the boundary of an $M$-slab to the centre, such as the one described in the previous paragraph, is called a \emph{wave} (the definition is made precise in the next section). By counting the number of possible waves and estimating their probabilities, we show that the probability there exists a wave with small sum of horizontal distances between the droplets --- which is equivalent to the $M$-slab filling quickly --- is small. The details of the proof are long and technical, and the reader who is in a hurry may choose to omit them without losing the flow of the argument. (However, some of the definitions that occur alongside these arguments \emph{are} important, such as those of a wave and a slow $M$-slab.) The main part of the proof of the small $p$ lower bound theorem occurs in Section \ref{se:pflowersmall}.
\subsection{Waves and flood times of $M$-slabs}\label{se:waves}
The next definition is central to this part of the paper. It is the structure that we shall use to encode how an $M$-slab could percolate quickly.
\begin{definition}
A \emph{wave} is a sequence of droplets $(D_1,\dots,D_k)$, where $D_i=[(a_i,b_i),(c_i,d_i)]$ for each $i$, satisfying the following conditions:
\begin{enumerate}
\item the droplets are disjoint: $D_i\cap D_j=\emptyset$ if $i\neq j$;
\item the droplets are closed: $[D_1\cup\dots\cup D_k]=D_1\cup\dots\cup D_k$;
\item $b_i<b_{i+1}\leq d_i+2<d_{i+1}+2$ for $i=1,\dots,k-1$.
\end{enumerate}
\end{definition}
The \emph{height} of the wave, $h(W)$, is defined to be $d_k-b_1+1$. The three conditions of a wave imply that consecutive droplets do not overlap horizontally, so if $(x_1,x_2)\in D_i$ and $(y_1,y_2)\in D_{i+1}$ then $x_1\neq y_1$. Thus the quantity
\[
t_i = \min\{|a_{i+1}-c_i|,|a_i-c_{i+1}|\} = \max\{a_{i+1}-c_i,a_i-c_{i+1}\}
\]
is the horizontal distance between droplets $D_i$ and $D_{i+1}$. The \emph{time} of the wave, $t(W)$, is
\[
t(W) := \sum_{i=1}^{k-1} (t_i-1).
\]
The concept of the time of a wave is important. If the set of initially infected sites consists of the union of the row of sites immediately below $D_1$ (extending as far as necessary) and $D_1\cup\dots\cup D_{k-1}$, and if $D_k$ is a single site, then $t(W)$ is a lower bound for the time it takes $D_k$ to become infected.
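The height and time of a wave are straightforward to compute from the droplet coordinates. A small sketch (the encoding of a droplet $[(a,b),(c,d)]$ as a pair of corner pairs is ours):

```python
def wave_height_and_time(droplets):
    """Compute the height h(W) and time t(W) of a wave.

    droplets -- list of droplets ((a, b), (c, d)); validity of the
    three wave conditions is assumed, not checked.
    """
    (_, b1), _ = droplets[0]
    _, (_, dk) = droplets[-1]
    height = dk - b1 + 1  # h(W) = d_k - b_1 + 1
    time = 0
    for ((a1, _), (c1, _)), ((a2, _), (c2, _)) in zip(droplets, droplets[1:]):
        # horizontal distance t_i between consecutive droplets
        t_i = max(a2 - c1, a1 - c2)
        time += t_i - 1
    return height, time
```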
A wave $W=(D_1,\dots,D_k)$ inside a droplet $D=[(a,b),(c,d)]$ is an \emph{up-wave} if $b_1=b$, and a \emph{down-wave} if $d_k=d$. Although the property of being an up- or down-wave depends on the parent droplet $D$, we shall rarely make reference to this.
The \emph{upper crest} of a wave $W=(D_1,\dots,D_k)$ in a droplet $D$ is the set
\[
\crest^+(W) = \{(x_1,x_2)\in D: b_k-1\leq x_2\leq d_k\} \setminus D_k,
\]
and similarly the \emph{lower crest} is the set
\[
\crest^-(W) = \{(x_1,x_2)\in D: b_1\leq x_2\leq d_1+1\} \setminus D_1.
\]
If $x=(x_1,x_2)$ is in the upper crest of $W$, then the \emph{upper $W$-time of $x$} is defined to be
\[
t^+(x,W) = t(W) + \min\{|x_1-c_k|,|a_k-x_1|\}
\]
if $x_1\notin [a_k,c_k]$, and $0$ otherwise, while if $x$ is in the lower crest of $W$, then the \emph{lower $W$-time of $x$} is defined to be
\[
t^-(x,W) = t(W) + \min\{|x_1-c_1|,|a_1-x_1|\}
\]
if $x_1\notin [a_1,c_1]$, and $0$ otherwise.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.4,>=latex]
\tikzstyle{drop}=[fill=gray!50];
\fill [gray!30] (0,11) rectangle (25,13);
\draw [gray!80] (0,11) -- (25,11) (0,13) -- (25,13);
\draw (-0.5,0) -- (25.5,0);
\draw [drop] (3,0) rectangle (4,1);
\draw [drop] (13,1) rectangle (14,2);
\draw [drop] (18,3) rectangle (20,5);
\draw [drop] (7,4) rectangle (11,7);
\draw [drop] (15,8) rectangle (16,9);
\draw [drop] (20,9) rectangle (22,11);
\draw [drop] (5,12) rectangle (6,13);
\draw [<->] (4,1.5) -- node [above] {$t_1$} (13,1.5);
\draw [<->] (14,2.5) -- node [above] {$t_2$} (18,2.5);
\draw [<->] (26,0) -- node [right] {$h(W)$} (26,13);
\node at (2,1) {$D_1$};
\node at (15,1) {$D_2$};
\node at (21,4) {$D_3$};
\node at (6,5.5) {$D_4$};
\node at (10,12) {$\crest^+(W)$};
\end{tikzpicture}
\caption{An example of an up-wave $W$.}\label{fi:wave}
\end{figure}
Let $D=[(a,b),(c,d)]$ be a droplet and let
\[
D_0=[D\cap A] \cup \big([(a-1,b-1),(c+1,d+1)]\setminus D\big).
\]
Thus, $D_0$ is the union of the closure of $A$ restricted to $D$ and the horizontal and vertical lines adjacent to the edges of $D$.
\begin{definition}
For $t\geq 0$, the \emph{$t$-flood of $D$}, which we write as $[[D]]_t$, is defined to be the set $[D_0]_t\cap D$. The \emph{flood time} of $x\in D$ is the minimal $t$ such that $x\in[[D]]_t$ (which is well defined, because $D\subset[D_0]$). The flood time of $D$ itself is defined to be the maximum of the flood times of the sites belonging to $D$, or equivalently, it is the minimal $t$ such that $[[D]]_t=D$.
\end{definition}
It is easy to see that
\begin{equation}\label{eq:tflood}
A_t \subset D^c \cup [[D]]_t.
\end{equation}
The reason for this is that, firstly, $A \subset D^c \cup [[D]]_0$, because $D^c\cap A\subset D^c$ and $D\cap A\subset [[D]]_0$, and then \eqref{eq:tflood} follows because $D^c \cup [[D]]_t = [D^c\cup [[D]]_0]_t$. This simple observation means that we can bound from below the percolation time of the grid by the flood time of any given droplet.
Given a site $x=(x_1,x_2)\in D$, the \emph{width of $x$}, $w(x)$, is the minimum horizontal distance from $x$ to the exterior of $D$; specifically, $w(x) = \min\{c-x_1,x_1-a\}+1$. Similarly, the \emph{height of $x$}, $h(x)$, is the minimum vertical distance from $x$ to the exterior of $D$; thus, $h(x) = \min\{d-x_2,x_2-b\}+1$. The \emph{down-wake of $x$} is the set
\[
\{y=(y_1,y_2)\in D: |y_1-x_1|+y_2 \leq x_2\}.
\]
One may think of the down-wake of $x$ as the set of sites in the $45^\circ$ pyramid below $x$, with $x$ at the apex. The \emph{up-}, \emph{left-} and \emph{right-wake of $x$} are similarly defined. An easy induction shows that if $t$ is the flood time of $x$ and $t$ is strictly positive, then one of the four wakes of $x$ is wholly contained in the $t$-flood of $D$.
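These geometric quantities translate directly into code; a short sketch (function names ours):

```python
def width_height(x, droplet):
    """w(x) and h(x) for x in the droplet D = [(a,b),(c,d)]:
    the minimum horizontal/vertical distance from x to the
    exterior of D, as defined in the text."""
    (a, b), (c, d) = droplet
    x1, x2 = x
    return min(c - x1, x1 - a) + 1, min(d - x2, x2 - b) + 1

def down_wake(x, droplet):
    """The down-wake of x: the sites of D in the 45-degree pyramid
    below x, with x at the apex."""
    (a, b), (c, d) = droplet
    x1, x2 = x
    return {(y1, y2)
            for y1 in range(a, c + 1) for y2 in range(b, d + 1)
            if abs(y1 - x1) + y2 <= x2}
```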
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.4,>=latex]
\tikzstyle{drop}=[fill=gray!50];
\tikzstyle{flood}=[fill=gray!30];
\path [flood] (0,0) -- (9,0) -- (9,1) -- (7,1) -- (7,2) -- (4,2) -- (4,3) -- (3,3) -- (3,4) -- (2,4) -- (2,5) -- (1,5) -- (1,6) -- (0,6) -- (0,0);
\path [flood] (0,15) -- (6,15) -- (6,14) -- (5,14) -- (5,13) -- (4,13) -- (4,12) -- (3,12) -- (3,11) -- (2,11) -- (2,10) -- (1,10) -- (1,9) -- (0,9) -- (0,15);
\path [flood] (25,0) -- (19,0) -- (19,1) -- (20,1) -- (20,2) -- (21,2) -- (21,3) -- (22,3) -- (22,4) -- (23,4) -- (23,5) -- (24,5) -- (24,6) -- (25,6) -- (25,0);
\path [flood] (25,15) -- (19,15) -- (19,14) -- (20,14) -- (20,13) -- (21,13) -- (21,12) -- (22,12) -- (22,11) -- (23,11) -- (23,10) -- (24,10) -- (24,9) -- (25,9) -- (25,15);
\path [flood] (10,0) rectangle (16,1);
\path [flood] (7,15) -- (7,14) -- (8,14) -- (8,13) -- (17,13) -- (17,14) -- (18,14) -- (18,15) -- (7,15);
\draw (0,0) rectangle (25,15);
\draw (0,0) -- (9,0) -- (9,1) -- (7,1) -- (7,2) -- (4,2) -- (4,3) -- (3,3) -- (3,4) -- (2,4) -- (2,5) -- (1,5) -- (1,6) -- (0,6) -- (0,0);
\draw (0,15) -- (6,15) -- (6,14) -- (5,14) -- (5,13) -- (4,13) -- (4,12) -- (3,12) -- (3,11) -- (2,11) -- (2,10) -- (1,10) -- (1,9) -- (0,9) -- (0,15);
\draw (25,0) -- (19,0) -- (19,1) -- (20,1) -- (20,2) -- (21,2) -- (21,3) -- (22,3) -- (22,4) -- (23,4) -- (23,5) -- (24,5) -- (24,6) -- (25,6) -- (25,0);
\draw (25,15) -- (19,15) -- (19,14) -- (20,14) -- (20,13) -- (21,13) -- (21,12) -- (22,12) -- (22,11) -- (23,11) -- (23,10) -- (24,10) -- (24,9) -- (25,9) -- (25,15);
\draw (10,0) rectangle (16,1);
\draw (7,15) -- (7,14) -- (8,14) -- (8,13) -- (17,13) -- (17,14) -- (18,14) -- (18,15) -- (7,15);
\draw [drop] (2,1) rectangle (3,2);
\draw [drop] (9,0) rectangle (10,1);
\draw [drop] (18,3) rectangle (19,4);
\draw [drop] (11,5) rectangle (12,6);
\draw [drop] (4,8) rectangle (7,10);
\draw [drop] (12,13) rectangle (13,14);
\draw [drop] (20,11) rectangle (22,13);
\draw [drop] (22,4) rectangle (23,5);
\draw [drop] (16,7) rectangle (17,9);
\end{tikzpicture}
\caption{The set $[[D]]_0=[D\cap A]$ is shown by the dark rectangles. The $6$-flood $[[D]]_6$ is the union of the light and dark areas.}\label{fi:flood}
\end{figure}
\begin{lemma}\label{le:wave}
Let $D$ be a droplet. Let $x$ be a site in $D$ with strictly positive flood time $t$, and suppose that $t<w(x)$. Then $[[D]]_0$ contains an up- or down-wave with height at least $h(x)$ and time at most $t$.
\end{lemma}
\begin{proof}
The proof is by induction on the flood time $t$. We strengthen the claim slightly by proving, under the same conditions, that $[[D]]_0$ contains a wave $W$ with height at least $h(x)$, such that \emph{either} $W$ is an up-wave, $x$ is in the upper crest of $W$, and $t^+(x,W)\leq t$, \emph{or} $W$ is a down-wave, $x$ is in the lower crest of $W$, and $t^-(x,W)\leq t$.
If $x$ has flood time $1$ then $x$ lies on the top or bottom edge of $D$, is not at one of the corners (since $w(x)>1$), and is adjacent to (exactly one) site in $[[D]]_0$, which must also lie on the top or bottom edge of $D$, so the claim is true.
Suppose the claim is true for all sites with flood time $t-1$. Since $x$ has flood time $t$, at least one of its four neighbours must have flood time $t-1$. Such a neighbour $y$ has width $w(y)\geq w(x)-1$, so the induction hypothesis applies and without loss of generality there is an up-wave $W$ with height at least $h(y)$ such that $y$ is in the upper crest of $W$ and the upper $W$-time of $y$ is at most $t-1$. Observe that if $h(W)$ is at least $h(x)$ then the same wave $W$ satisfies the conditions of the claim for $x$, since then $x$ is in the upper crest of $W$ and $t^+(x,W)\leq t$. So we may assume that $h(x)>h(W)$. This means that $y$ cannot be equal to $x\pm e_1$, since that would imply $h(x)=h(y)\leq h(W)$.
If both $x+e_2$ and $x-e_2$ have flood time at most $t-1$, and their associated waves are $W$ and $W'$ respectively, then $W$ must be a down-wave and $W'$ an up-wave, and one of $h(W)$ or $h(W')$ must be greater than $h(x)$, which is again a contradiction.
We are left with the case $y=x-e_2$ (if $y=x+e_2$ then $W$ is a down-wave and the argument is similar), $h(y)=h(W)$, and each of $x+e_1$, $x-e_1$ and $x+e_2$ either belongs to $[[D]]_0$ or has flood time at least $t$. In fact at least one of $x+e_1$, $x-e_1$ and $x+e_2$ must belong to $[[D]]_0$, because otherwise at most one neighbour of $x$ would be in $[[D]]_{t-1}$, so $x$ would not be infected at time $t$.
\emph{Case 1.} Suppose $x-e_1\in[[D]]_0$; the case $x+e_1\in[[D]]_0$ is treated in the same way. Let $D'=[(a,b),(c,d)]$ be the maximal droplet in $[[D]]_0$ that contains $x-e_1$. If $D'$ is one of the droplets in the wave $W$ then it must be that $h(W)\geq h(x)$ and $t^+(x,W)\leq t$, so $W$ satisfies the conditions for $x$. So we may assume instead that $D'$ is not one of the droplets in $W$. We shall show there exists $j\leq k$ such that $W'=(D_1,\dots,D_j,D')$ is an up-wave, $h(W')\geq h(x)$, and $t^+(x,W')\leq t$.
First we need to establish that $W'$ satisfies the three conditions of a wave. The first two (that the droplets are disjoint and form a closed set) are trivially satisfied for all $j$. Since $W$ is a wave, the third condition for a wave is satisfied for all $1\leq i\leq j-1$, so it remains to show that $b_j<b\leq d_j+2<d+2$. The third of these inequalities is satisfied by all $j$ because $d_j\leq h(W)<h(x)\leq d$. The second inequality is satisfied when $j=k$, because $b\leq h(x)=h(y)+1=h(W)+1=d_k+1$. Choose $j$ to be minimal such that the second inequality is satisfied. If $j=1$ then we take $W'=(D')$ if $b=1$ and $W'=(D_1,D')$ if $b>1$. Otherwise, if $j>1$, then $b_j\leq d_{j-1}+2$ because $W$ is a wave, and we have $d_{j-1}+2<b$ by the minimality of $j$, so $b_j<b$, and therefore $j$ satisfies the first inequality. This proves that $W'$ is a wave, and hence, since $W$ is an up-wave, that $W'$ is also an up-wave. The inequality $h(W')\geq h(x)$ follows immediately because $x\in D'$.
It remains to show that $t^+(x,W')\leq t$. For this, it is enough to have $t(W')\leq t-1$, because $x$ is horizontally adjacent to $D'$. We have
\begin{align}\label{eq:tplusineq}
t(W') &\leq t(W) + \min\{|a-c_j|,|a_j-c|\} - 1 \notag \\
&\leq t(W) + \min\{|y_1-c_j|,|a_j-y_1|\} \notag \\
&\leq t^+(y,W),
\end{align}
which is at most $t-1$, as required. This completes the case in which $x\pm e_1\in[[D]]_0$.
\emph{Case 2.} Now suppose $x+e_2\in[[D]]_0$. As before, let $D'=[(a,b),(c,d)]$ be the maximal droplet in $[[D]]_0$ that contains $x+e_2$. Observe that $D'$ cannot be one of the droplets in $W$, because $b=h(x)+1=h(y)+2=h(W)+2$. We shall show that $W'=(D_1,\dots,D_k,D')$ is an up-wave, $h(W')\geq h(x)$, and $t^+(x,W')\leq t$. That $W'$ is a wave is clear, because the first two properties of a wave (that the droplets are disjoint and form a closed set) are again trivially satisfied, and the third condition, that $b_k<b\leq d_k+2<d+2$, is also satisfied, because we have just observed that $b=d_k+2$. Given that $W'$ is a wave, it is automatically an up-wave, and the inequality $h(W')\geq h(x)$ is also clear, because $x\in D'$. Our final task, then, is to show that $t^+(x,W')\leq t$. Since $x_1\in[a,c]$, the condition is equivalent to $t(W')\leq t$. But now the calculation is the same as in \eqref{eq:tplusineq}, which completes the proof of this case, and also the proof of the lemma.
\end{proof}
\subsection{Subcriticality and restrictions of waves}\label{se:restrictions}
It turns out to be too difficult to do calculations with waves considered at the level of generality we have so far permitted, because there are too many choices for the dimensions of the $D_i$. The purpose of this short section is to introduce new, related structures, which allow only three types of droplet. The reason we are able to restrict the number of types of droplet so strongly is because we only ever consider waves in subcritical regions of the grid, so there are no large internally spanned droplets. The advantage of introducing the simplified structures is that calculations involving these structures are much simpler. The disadvantage is that the new structures are not necessarily themselves waves, because the droplets may overlap. However, this causes only very minor complications.
First, let us say that an $M$-slab $D$ is \emph{subcritical} if the largest internally spanned droplet in $D$ has semi-perimeter at most $\gamma$. All of the $M$-slabs we shall consider will be subcritical. Similarly we say that a wave $W=(D_1,\dots,D_k)$ is subcritical if $\phi(D_i)\leq\gamma$ for $1\leq i\leq k$.
Let $\sigma$ be a fixed positive integer to be specified later. The \emph{$(1,\sigma,\gamma)$-restriction} of a subcritical wave $W=(D_1,\dots,D_k)$ is the sequence of droplets $(D_1',\dots,D_{k'}')$ obtained by applying the following algorithm to $W$.
\begin{enumerate}
\item Let $i$ be minimal such that $\sigma+2\leq\phi(D_i)\leq\gamma$, or if no such $i$ exists then move on to the next step. Replace $D_i=[(a,b),(c,d)]$ by the $\gamma$-cell $[(a,b),(a+\gamma-1,b+\gamma-1)]$. Remove from the sequence all droplets $D_j=[(a',b'),(c',d')]$ which are such that $b'\geq b$ and $d'\leq d$. Repeat this step until all droplets are either $\gamma$-cells or have semi-perimeter at most $\sigma+1$.
\item Let $i$ be minimal such that $3\leq\phi(D_i)\leq\sigma+1$, or if no such $i$ exists then stop. Replace $D_i=[(a,b),(c,d)]$ by the $\sigma$-cell $[(a,b),(a+\sigma-1,b+\sigma-1)]$. Remove from the sequence all droplets $D_j=[(a',b'),(c',d')]$ which are such that $b'\geq b$ and $d'\leq d$. Repeat this step until all droplets are either $\gamma$-cells, $\sigma$-cells, or consist of a single site, and then stop.
\end{enumerate}
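The two steps of the algorithm above can be transcribed directly into Python, with each droplet $[(a,b),(c,d)]$ stored as a tuple \texttt{(a, b, c, d)}. This is only an illustrative sketch: the function names and the toy parameter values in the example are ours, not the paper's (in particular, genuine applications take $\sigma$ large and $\gamma=p^{-3}$).

```python
def phi(D):
    """Semi-perimeter of the droplet D = (a, b, c, d) = [(a,b),(c,d)]."""
    a, b, c, d = D
    return (c - a + 1) + (d - b + 1)

def restrict(wave, sigma, gamma):
    """(1, sigma, gamma)-restriction of a subcritical wave (a sketch).

    Each pass finds the first droplet whose semi-perimeter lies in the
    current range, replaces it by a gamma-cell (resp. sigma-cell) with the
    same bottom-left corner, and removes every other droplet lying
    vertically inside the original droplet's rows.
    """
    W = list(wave)
    for size, lo, hi in ((gamma, sigma + 2, gamma), (sigma, 3, sigma + 1)):
        while True:
            i = next((j for j, D in enumerate(W) if lo <= phi(D) <= hi), None)
            if i is None:
                break  # no droplet left in the current semi-perimeter range
            a, b, c, d = W[i]
            cell = (a, b, a + size - 1, b + size - 1)
            # replace W[i] by the cell; delete droplets with b' >= b, d' <= d
            W = [cell if j == i else D
                 for j, D in enumerate(W)
                 if j == i or not (D[1] >= b and D[3] <= d)]
    return W
```

The loop terminates because each replacement turns a qualifying droplet into a $\gamma$- or $\sigma$-cell, whose semi-perimeter ($2\gamma$ or $2\sigma$) lies outside both ranges.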
Note that the definition implies that $D_i\subset D_i'$ for all $i$. As mentioned above, the $(1,\sigma,\gamma)$-restriction of a wave is not necessarily a wave, because $D_i'\cap D_{i+1}'$ may be non-empty. However, the following lemma, which is merely an observation, is the only disjointness property we need.
\begin{lemma}\label{le:isrestricted}
Let $W=(D_1,\dots,D_k)$ be a subcritical wave and let $W'=(D_1',\dots,D_{k'}')$ be the $(1,\sigma,\gamma)$-restriction of $W$. Suppose that the $D_i$ are all internally spanned. Then for each $1\leq j\leq k'$, the following holds. If $D_j'$ is a single site then it is internally spanned. If $D_j'$ is a $\sigma$-cell then it contains a droplet of semi-perimeter between $3$ and $\sigma+1$ which is internally spanned. If $D_j'$ is a $\gamma$-cell then it contains a droplet of semi-perimeter between $\sigma+2$ and $\gamma$ which is internally spanned. Furthermore, the internally spanned droplets associated with the $D_j'$ are disjoint. \qed
\end{lemma}
Let $W'=(D_1',\dots,D_{k'}')$ be the $(1,\sigma,\gamma)$-restriction of a wave $W$, and let $D_i'=[(a_i',b_i'),(c_i',d_i')]$ for $1\leq i\leq k'$. The \emph{height} of $W'$, like that of a wave, is defined to be $h(W')=d_{k'}'-b_1'+1$. The horizontal displacement between droplets $D_i'$ and $D_{i+1}'$ is
\[
t_i' = \max\{a_{i+1}'-c_i',a_i'-c_{i+1}',1\}.
\]
The time of $W'$ is then defined to be
\[
t(W') = \sum_{i=1}^{k'-1} (t_i'-1).
\]
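As a concrete companion to these definitions, the height and time of a restricted wave can be computed in a couple of lines, with droplets again stored as tuples \texttt{(a, b, c, d)}; the helper names below are ours.

```python
def height(W):
    """h(W') = d'_{k'} - b'_1 + 1 for a non-empty list of droplets."""
    return W[-1][3] - W[0][1] + 1

def wave_time(W):
    """t(W'): sum over consecutive droplets of (t'_i - 1), where
    t'_i = max(a'_{i+1} - c'_i, a'_i - c'_{i+1}, 1) is the horizontal
    displacement; horizontally overlapping droplets contribute 0."""
    return sum(max(B[0] - A[2], A[0] - B[2], 1) - 1
               for A, B in zip(W, W[1:]))
```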
\begin{lemma}\label{le:htrestricted}
Let $W'$ be the $(1,\sigma,\gamma)$-restriction of a subcritical wave $W$. Then $h(W')\geq h(W)$ and $t(W')\leq t(W)$. \qed
\end{lemma}
\subsection{Slow percolation of subcritical $M$-slabs}\label{se:slowperc}
This is the part of the proof of the lower bound in the small $p$ regime where the main calculations occur. We show that the probability there exists an up-wave with height $h$ and time $t$ inside a subcritical $M$-slab is small provided $pt/h$ is at most a small constant. This corresponds to the intuition that infection should spread in towards the centre of a subcritical $M$-slab at rate $\Theta(p)$.
The \emph{anchor} of a wave or restricted wave $W=(D_1,\dots,D_k)$ is the ordered pair $(a,b)$, where $D_1=[(a,b),(c,d)]$ for some $c,d$. A wave is said to be \emph{anchored} if its anchor is fixed.
Let $V(h,t)$ be the event that there exists a subcritical wave $W=(D_1,\dots,D_k)$ with anchor at the origin such that $W$ has height exactly $h$ and time exactly $t$. Let $V_\Gamma(h,t)$ denote the event that $V(h,t)$ occurs and that the number of $\gamma$-cells in the $(1,\sigma,\gamma)$-restriction of $W$ is at least $hp/\gamma$, and let $V_{\Gamma^c}(h,t) = V(h,t) \cap V_\Gamma(h,t)^c$. The calculation to show that $P_p\big(V(h,t)\big)$ is small provided $pt/h$ is small is slightly different according to whether or not the wave $W$ contains a large number of $\gamma$-cells, so we separate the calculation into two parts.
\begin{lemma}\label{le:fewcrit}
Let $pt/h$ be sufficiently small and let $\sigma$ be sufficiently large. Then
\[
\P_p\big(V_{\Gamma^c}(h,t)\big) \leq h^3 e^{-h/\gamma}.
\]
\end{lemma}
\begin{proof}
Our first task is to find an upper bound for the number of possible $(1,\sigma,\gamma)$-restrictions of an anchored wave of a given height and time. To that end, let $W'=(D_1',\dots,D_{k'}')$ be a sequence of droplets such that there exists a wave of which $W'$ is the $(1,\sigma,\gamma)$-restriction. (The only reason we demand the existence of the wave is to limit the number of possibilities for the positions of the droplets $D_i'$.) Let $a$ be the number of single site droplets in $W'$, $b$ the number of $\sigma$-cells, and $c$ the number of $\gamma$-cells. Thus $k'=a+b+c$. Fix also the time $t$ of $W'$ and anchor $W'$ at $(0,0)$. The number of choices for the order of the three sizes of droplets is
\begin{equation}\label{eq:dropsizes}
\binom{a+b+c}{a,b,c} \leq 3^{a+b+c}.
\end{equation}
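The bound \eqref{eq:dropsizes} is an instance of the multinomial theorem: summing $\binom{a+b+c}{a,b,c}$ over all triples with fixed $a+b+c=n$ gives exactly $3^n$, so each individual term is at most $3^n$. A quick exhaustive Python check (our own, purely for illustration):

```python
from math import factorial

def multinomial(a, b, c):
    """(a+b+c)! / (a! b! c!): the number of orderings of the droplet sizes."""
    return factorial(a + b + c) // (factorial(a) * factorial(b) * factorial(c))

# each multinomial coefficient is at most 3^(a+b+c)
assert all(multinomial(a, b, c) <= 3 ** (a + b + c)
           for a in range(10) for b in range(10) for c in range(10))
```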
If $D_i'$ is a single site then there are two choices for the vertical displacement $b_i'-d_{i-1}'$; specifically, either $b_i'=d_{i-1}'+1$ or $b_i'=d_{i-1}'+2$. Similarly if $D_i'$ is a $\sigma$-cell there are $\sigma+1$ choices for $b_i'$, and if it is a $\gamma$-cell there are $\gamma+1$ choices. In total this gives
\begin{equation}\label{eq:dropvert}
2^a(\sigma+1)^b(\gamma+1)^c \leq C^{a+b+c}\sigma^b\gamma^c
\end{equation}
choices for the vertical displacements between the droplets, for some $C>0$. Next, recall that
\[
t(W') = \sum_{i=1}^{a+b+c-1} (t_i'-1),
\]
where $t_i'=\max\{a_{i+1}'-c_i',a_i'-c_{i+1}',1\}$. Thus there are $\binom{t}{a+b+c}$ choices for the $t_i'$ and a further $3^{a+b+c}$ choices for whether we have $a_{i+1}'-c_i'\geq 1$, or $a_i'-c_{i+1}'\geq 1$, or neither. If neither then the droplets overlap horizontally; in this case there are (crudely) at most $(3\sigma)^{2b}(3\gamma)^{2c}$ choices for $a_{i+1}'-a_i'$. (If the larger of the two cells is a $\gamma$-cell then there are at most $3\gamma$ choices for the displacement, while if it is a $\sigma$-cell then there are at most $3\sigma$ such choices. Furthermore, each cell can contribute to at most two displacements.) In total, there are at most
\begin{equation}\label{eq:drophoriz}
\binom{t}{a+b+c} 3^{a+b+c} (3\sigma)^{2b} (3\gamma)^{2c} \leq C^{a+b+c} \sigma^{2b} \gamma^{2c} \binom{t}{a+b+c}
\end{equation}
choices for the horizontal displacements $a_{i+1}'-a_i'$, for some $C>0$. Multiplying together the number of choices for the order of the sizes of the droplets \eqref{eq:dropsizes}, the vertical displacements \eqref{eq:dropvert}, and the horizontal displacements \eqref{eq:drophoriz}, we have that the number of possibilities for the sequence of droplets in the restricted wave $W'$ is at most
\[
C^{a+b+c} \sigma^{3b} \gamma^{3c} \binom{t}{a+b+c}
\]
for some constant $C>0$. Since $\sigma$ is also a constant and $\gamma=p^{-3}$, this is at most
\begin{equation}\label{eq:numwaves}
C^{a+b+c} p^{-9c} \binom{t}{a+b+c}.
\end{equation}
Our next task is to bound the probability that a given $W'$ is the $(1,\sigma,\gamma)$-restriction of a wave of internally spanned droplets. The probability that a $\sigma$-cell contains an internally spanned droplet of semi-perimeter between $3$ and $\sigma+1$ is $O(p^2)$, because the internally spanned droplet must contain at least two initially infected sites, and there are a constant number of choices for their positions. The probability that a $\gamma$-cell contains an internally spanned droplet $D$ with semi-perimeter between $\sigma+2$ and $\gamma$ is $O(p^{\sigma/2-5})$. The reason is as follows. The internally spanned droplet $D$ must itself contain another internally spanned droplet $D'$ having semi-perimeter between $\sigma+2$ and $2(\sigma+2)$, by Lemma \ref{le:AL}. The probability that $D'$ is internally spanned is $O(p^{\sigma/2+1})$, by Lemma \ref{le:phi}, and there are $O(\gamma^2)$ choices for $D'$, so the probability that a $\gamma$-cell contains an internally spanned droplet of semi-perimeter between $\sigma+2$ and $\gamma$ is at most
\[
O\big( p^{\sigma/2+1} \gamma^2 \big) = O(p^{\sigma/2-5}).
\]
Hence, the probability that there exists a wave $W$ of internally spanned droplets such that $W'$ is the $(1,\sigma,\gamma)$-restriction of $W$ is at most
\begin{equation}\label{eq:probwave}
p^a (Cp^2)^b (Cp^{\sigma/2-5})^c = C^{a+b} p^{a+2b+(\sigma/2-5)c}
\end{equation}
for some positive constant $C$.
The wave $W$ has height $h$ and time $t$, so its $(1,\sigma,\gamma)$-restriction has height at least $h$ and time at most $t$, by Lemma \ref{le:htrestricted}. Hence we have $2a+(\sigma+1)b+(\gamma+1)c\geq h$, or very crudely, $a+\sigma b+\gamma c\geq h/2$. Combining this with the bound in \eqref{eq:numwaves} for the number of possibilities for $W'$ and the bound in \eqref{eq:probwave} for the probability that a given $W'$ is the $(1,\sigma,\gamma)$-restriction of a wave of internally spanned droplets gives
\begin{equation}\label{eq:Vht}
\P_p\big( V_{\Gamma^c}(h,t) \big) \leq \sum_{\substack{0\leq a,b,c\leq h \\ a+\sigma b+\gamma c\geq h/2 \\ c\leq hp/\gamma}} C^{a+b+c} \binom{t}{a+b+c} p^{a+2b+(\sigma/2-14)c}.
\end{equation}
Hence, by Stirling's formula and the assumption that $\sigma$ is sufficiently large, we have
\begin{equation}\label{eq:nasty}
\P_p\big( V_{\Gamma^c}(h,t) \big) \leq \sum_{\substack{0\leq a,b,c\leq h \\ a+\sigma b+\gamma c\geq h/2 \\ c\leq hp/\gamma}} \left(\frac{Ct}{a+b+c}\right)^{a+b+c} p^{a+2b+3c}.
\end{equation}
The calculation needed in order to bound the right-hand side of \eqref{eq:nasty} is routine and unenlightening, so it is deferred until Lemma \ref{le:calc} in the Appendix. (It is in this calculation that we use the assumption that $c\leq hp/\gamma$.) The calculation implies that
\begin{equation}\label{eq:VGammac}
\P_p\big( V_{\Gamma^c}(h,t) \big) \leq \sum_{\substack{0\leq a,b,c\leq h \\ a+\sigma b+\gamma c\geq h/2 \\ c\leq hp/\gamma}} e^{-(a+b+c)}.
\end{equation}
Thus, using the observation that $a+b+c\geq h/\gamma$, we have
\[
\P_p\big( V_{\Gamma^c}(h,t) \big) \leq h^3 e^{-h/\gamma},
\]
which is the inequality claimed in the statement of the lemma.
\end{proof}
\begin{lemma}\label{le:manycrit}
Let $pt/h=O(1)$, let $p$ be sufficiently small, and suppose that $\sigma\geq 36$. Then
\[
\P_p\big(V_\Gamma(h,t)\big) \leq e^{-hp^4}.
\]
\end{lemma}
\begin{proof}
Suppose $V_\Gamma(h,t)$ occurs. Let $W$ be the associated subcritical wave and let $W'=(D_1',\dots,D_{k'}')$ be its $(1,\sigma,\gamma)$-restriction, with $D_i'=[(a_i',b_i'),(c_i',d_i')]$ for $1\leq i\leq k'$. Recall that the number of $\gamma$-cells in $W'$ must be at least $hp/\gamma$, by the definition of the event $V_\Gamma(h,t)$. Let the $\gamma$-cells in $W'$ be $D_{i_1}',\dots,D_{i_c}'$, where $c\geq hp/\gamma$. We would like to bound the number of possible positions for these $\gamma$-cells. First, no two droplets can have the same $b_i'$ coordinate, so there are at most $\binom{h}{c}$ choices for $b_{i_1}',\dots,b_{i_c}'$. Next, recall that the time of $W'$ is defined to be
\[
t(W') = \sum_{i=1}^{k'-1} (t_i'-1),
\]
where $t_i' = \max\{ a_{i+1}'-c_i',a_i'-c_{i+1}',1 \}$ is the horizontal offset between droplets $D_i'$ and $D_{i+1}'$. Suppose $D_{i_{j+1}}'$ lies to the right of $D_{i_j}'$, so $a_{i_{j+1}}'-c_{i_j}'\geq 1$. Observe that $c_l'-a_l'\leq\sigma$ whenever $D_l'$ is not a $\gamma$-cell, and hence
\[
a_{i_{j+1}}'-c_{i_j}' = \sum_{l=i_j}^{i_{j+1}-1} (a_{l+1}'-c_l') + \sum_{l=i_j+1}^{i_{j+1}-1} (c_l'-a_l') \leq \sum_{l=i_j}^{i_{j+1}-1} t_l' + (i_{j+1}-i_j-1)\sigma.
\]
The same inequality holds for $a_{i_j}'-c_{i_{j+1}}'$ if $D_{i_{j+1}}'$ lies to the left of $D_{i_j}'$, so we have
\[
\max\{ a_{i_{j+1}}'-c_{i_j}',a_{i_j}'-c_{i_{j+1}}',0 \} \leq \sum_{l=i_j}^{i_{j+1}-1} t_l' + (i_{j+1}-i_j-1)\sigma.
\]
Summing over $\gamma$-cells we obtain
\[
\sum_{j=1}^{c-1} \max\{a_{i_{j+1}}'-c_{i_j}',a_{i_j}'-c_{i_{j+1}}',0\} \leq \sum_{i=1}^{k'-1} t_i' + (k'-c)\sigma \leq (t(W')+k') + (k'-c)\sigma = O(t),
\]
where here we have used $t(W')\leq t(W)=t$ from Lemma \ref{le:htrestricted} and $k'\leq t(W')$. Hence there are at most $\binom{O(t)}{c}$ choices for the absolute values of the non-zero horizontal offsets. There are a further $3^c$ choices for whether $a_{i_{j+1}}'-c_{i_j}'\geq 1$, or $a_{i_j}'-c_{i_{j+1}}'\geq 1$, or neither. When the offset is zero, there are at most $3\gamma$ choices for the value of $a_{i_{j+1}}'-a_{i_j}'$. The differences $a_{i_{j+1}}'-a_{i_j}'$ together with the $b_{i_j}'$ uniquely define the positions of the $\gamma$-cells. It follows from these observations that there are at most
\begin{equation}\label{eq:gammachoices}
\binom{h}{c} \binom{O(t)}{c} (9\gamma)^c
\end{equation}
choices for the $\gamma$-cells.
By the same calculation as in the proof of Lemma \ref{le:fewcrit}, the probability that a $\gamma$-cell contains an internally spanned droplet with semi-perimeter between $\sigma+2$ and $\gamma$ is $O(p^{\sigma/2-5})$, and by Lemma \ref{le:isrestricted}, each $\gamma$-cell must contain such an internally spanned droplet. Combined with the bound for the number of choices of $\gamma$-cells in \eqref{eq:gammachoices}, this proves that
\[
\P_p\big( V_\Gamma(h,t) \big) \leq \sum_{c\geq hp/\gamma} \binom{h}{c} \binom{O(t)}{c} (C\gamma p^{\sigma/2-5})^c.
\]
Using Stirling's formula, we have
\[
\P_p\big( V_\Gamma(h,t) \big) \leq \sum_{c\geq hp/\gamma} \left(\frac{Cht\gamma p^{\sigma/2-5}}{c^2}\right)^c \leq \sum_{c\geq hp/\gamma} \left(C(pt/h)\gamma^3 p^{\sigma/2-8}\right)^c
\]
for some new constant $C$. Now, $pt/h=O(1)$, $\gamma=p^{-3}$, and $\sigma\geq 36$, so
\[
\P_p\big( V_\Gamma(h,t) \big) \leq \sum_{c\geq hp/\gamma} (Cp)^c \leq e^{-hp/\gamma},
\]
again for a new $C$, provided $p$ is sufficiently small.
\end{proof}
An $M$-slab $D$ is defined to be \emph{slow} if $[[D]]_{cM/p} \neq D$, where $c<1/2$ is a small positive constant, and otherwise it is \emph{fast}. We write $F(D)$ for the event that $D$ is fast.
In the following lemma and in the proof of Theorem \ref{th:small} we shall need to use Harris's Lemma \cite{Harris} (later generalized by Fortuin, Kasteleyn, and Ginibre \cite{FKG}, and now better known as the FKG inequality) to bound the probabilities of certain intersections of non-independent increasing and decreasing events. The definitions of \emph{increasing} and \emph{decreasing} are the usual percolation-theoretic definitions: an event $E$ is increasing if for all pairs of configurations $\omega$, $\omega'\in\{0,1\}^{[n]^2}$ such that $\omega\subset\omega'$, the implication $\omega\in E \; \Rightarrow \; \omega'\in E$ holds; it is \emph{decreasing} if the reverse implication $\omega'\in E \; \Rightarrow \; \omega\in E$ holds. Harris's Lemma is as follows.
\begin{lemma}\label{le:harris}
Let $E$ and $F$ be increasing events and let $G$ be a decreasing event. Then
\[
\P_p(E\cap F) \geq \P_p(E)\, \P_p(F)
\]
and
\[
\P_p(E\cap G) \leq \P_p(E)\, \P_p(G). \tag*{\qed}
\]
\end{lemma}
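Both inequalities of Harris's Lemma can be verified exhaustively on a small product space. The following Python sketch does so on $\{0,1\}^4$ under the product measure, with toy events of our own choosing ($E$ and $F$ increasing, $G$ decreasing).

```python
from itertools import product

def Pp(event, p, n=4):
    """Probability of an event (a predicate on 0/1 configurations of
    length n) under the product measure in which each site is 1 with
    probability p, computed by summing over all 2^n configurations."""
    return sum(p ** sum(w) * (1 - p) ** (n - sum(w))
               for w in product((0, 1), repeat=n) if event(w))

# toy increasing events E, F and a decreasing event G on {0,1}^4
E = lambda w: w[0] == 1 or w[1] == 1   # increasing
F = lambda w: sum(w) >= 2              # increasing
G = lambda w: w[0] == 0                # decreasing
EF = lambda w: E(w) and F(w)
EG = lambda w: E(w) and G(w)

p = 0.3
assert Pp(EF, p) >= Pp(E, p) * Pp(F, p)   # P(E ∩ F) >= P(E) P(F)
assert Pp(EG, p) <= Pp(E, p) * Pp(G, p)   # P(E ∩ G) <= P(E) P(G)
```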
\begin{lemma}\label{le:subandfast}
Let $D$ be an $M$-slab and let $p$ be sufficiently small. Then
\[
\P\big( F(D) \given \Gamma(D)^c \big) \leq \frac{1}{2}.
\]
\end{lemma}
\begin{proof}
Suppose an $M$-slab $D$ is subcritical and fast. There must exist a site $x$ with strictly positive flood time $t\leq\tau:=cM/p$ such that $x$ has $\ell_1$ distance at most $\gamma$ from the centre of $D$. Certainly $t<w(x)$, so Lemma \ref{le:wave} implies that $[[D]]_0$ contains an up- or down-wave $W=(D_1,\dots,D_k)$ with height at least $h(x)\geq M/4$ and time at most $t$. There are $M/p$ choices for the anchor of $W$ and two choices for whether it is an up- or down-wave. Hence
\[
\P_p\big(F(D) \given \Gamma(D)^c\big) \leq 2 (M/p) \sum_{t\leq\tau} \sum_{M/4\leq h\leq M} \Big( \P_p\big(V_\Gamma(h,t)\given\Gamma(D)^c\big) + \P_p\big(V_{\Gamma^c}(h,t)\given\Gamma(D)^c\big) \Big).
\]
The events $V_\Gamma(h,t)$ and $V_{\Gamma^c}(h,t)$ are increasing for all $h$ and $t$, while the event $\Gamma(D)^c$ is decreasing. So Harris's Lemma implies that
\[
\P_p\big(V_\Gamma(h,t)\given\Gamma(D)^c\big) = \frac{\P_p\big(V_\Gamma(h,t)\cap\Gamma(D)^c\big)}{\P_p\big(\Gamma(D)^c\big)} \leq \frac{\P_p\big(V_\Gamma(h,t)\big)\P_p\big(\Gamma(D)^c\big)}{\P_p\big(\Gamma(D)^c\big)} = \P_p\big(V_\Gamma(h,t)\big),
\]
and similarly $\P_p\big(V_{\Gamma^c}(h,t)\given\Gamma(D)^c\big) \leq \P_p\big(V_{\Gamma^c}(h,t)\big)$, in both cases for all $h$ and $t$. So we have
\[
\P_p\big(F(D) \given \Gamma(D)^c\big) \leq 2 (M/p) \sum_{t\leq\tau} \sum_{M/4\leq h\leq M} \Big( \P_p\big(V_\Gamma(h,t)\big) + \P_p\big(V_{\Gamma^c}(h,t)\big) \Big).
\]
Now we use the inequality for $\P_p(V_{\Gamma^c}(h,t))$ from Lemma \ref{le:fewcrit} and the inequality for $\P_p(V_\Gamma(h,t))$ from Lemma \ref{le:manycrit}. These give
\begin{align}\label{eq:subandfast}
\P_p\big(F(D)\given\Gamma(D)^c\big) &\leq 2 (M/p) \sum_{t\leq\tau} \sum_{M/4\leq h\leq M} \big( h^3 e^{-hp^3} + e^{-hp^4} \big) \notag \\
&= O\left( M^6 p^{-2} \exp(-Mp^4 / 4) \right).
\end{align}
Recall that $M=A\sqrt{p\log(n/K)}K$ and $K=\exp\big(\mu(p)/p\big)$. Now, $1/p \ll M$ and $\log M = O(1/p) \ll Mp^4$, so we can reduce \eqref{eq:subandfast} to
\[
\P_p\big(F(D)\given\Gamma(D)^c\big) \leq \exp(-Mp^4 / 8).
\]
Now, we have
\[
Mp^4 = p^{7/2} (\log n)^{1/2} K(p) = \exp\left( \frac{\lambda}{p} + \frac{\log\log n}{2} - o\left(\frac{1}{p}\right) \right) \to \infty,
\]
so $\P_p\big(F(D)\given\Gamma(D)^c\big) \leq 1/2$ if $p$ is sufficiently small.
\end{proof}
\subsection{Proof of the lower bound in Theorem \ref{th:small}}\label{se:pflowersmall}
\begin{proof}[Proof of the lower bound in Theorem \ref{th:small}]
As remarked in the introduction to Section \ref{se:lowersmall}, the bound $T\geq c(\log n)/p$ (with high probability) follows from Lemma \ref{le:lb}, so we only have to prove that
\[
T \geq c\sqrt{\frac{\log(n/K)}{p}}K
\]
with high probability, for some constant $c>0$. (In fact, we shall prove this statement with the same constant $c$ as in the definition of a slow $M$-slab.)
Our proof will show that $[n]^2$ contains a slow $M$-slab with high probability. This will be sufficient, because the percolation time of $[n]^2$ is certainly at least the flood time of any given $M$-slab, and an $M$-slab was defined to be slow if its flood time was at least $cM/p$.
Let $D$ be an $M$-slab. We have
\begin{equation}\label{eq:probfast}
\P_p\big(F(D)^c\big) \geq \P_p\big(F(D)^c\cap\Gamma(D)^c\big) = \P_p\big(F(D)^c\given\Gamma(D)^c\big) \P_p\big(\Gamma(D)^c\big).
\end{equation}
The first probability, that $D$ is slow given that it is subcritical, is at least $1/2$ by Lemma \ref{le:subandfast}. So it remains for us to bound the second probability, that $D$ is subcritical.
Tile $D$ with $K$-cells $D_1,D_2,\dots,D_k$ so that neighbouring $K$-cells overlap by a distance of $2\gamma$; this ensures that any critical droplet in $D$ is entirely contained in at least one of the $K$-cells. The number of $K$-cells is
\[
k = \left(1+\frac{2\gamma}{K}\right)^2 \frac{M^2/p}{K^2} = (1+o(1))\log\left(\frac{n}{K}\right).
\]
The events $\Gamma(D_1)^c,\dots,\Gamma(D_k)^c$ are decreasing, so we can apply Harris's Lemma (Lemma \ref{le:harris}) to show that the probability $D$ is subcritical is
\[
\P_p\big(\Gamma(D)^c\big) = \P_p\big(\Gamma(D_1)^c\cap\dots\cap \Gamma(D_k)^c\big) \geq \P_p\big(\Gamma(D_1)^c\big) \dots \P_p\big(\Gamma(D_k)^c\big).
\]
By Lemma \ref{le:mu}, the probability any one of the $K$-cells is critical is at most $3/4$, so we have $\P_p\big(\Gamma(D)^c\big) \geq 4^{-k}$. Together with \eqref{eq:probfast} and the inequality $\P\big(F(D)^c\given\Gamma^c(D)\big)\geq 1/2$ from Lemma \ref{le:subandfast}, this implies that
\[
\P_p\big(F(D)^c\big) \geq 4^{-k-1} = \exp\big(-(\log 4 + o(1))\log(n/K)\big).
\]
Let $E$ be the event that every $M$-slab contained in $[n]^2$ is fast. By dividing $[n]^2$ into disjoint $M$-slabs, we see that the probability there does not exist a slow $M$-slab is
\[
\P_p(E) \leq \Big(1 - \exp\big(-(\log 4 + o(1))\log(n/K)\big)\Big)^{\frac{n^2p}{M^2}} \leq \exp\left(\frac{-e^{-(\log 4 + o(1))\log(n/K)}n^2}{\log(n/K) K^2}\right).
\]
Hence
\[
\P_p(E) \leq \exp\Big(-\exp\big( (2 - \log 4 + o(1))\log(n/K) - \log\log(n/K) \big)\Big).
\]
Recall that $K=\exp\big(\mu(p)/p\big)$, where $\mu(p)=\lambda+o(1)$. This, combined with the condition $\liminf p\log n\geq (1+\epsilon)\lambda$ for some $\epsilon>0$, implies that $\log(n/K)=\Omega(\log n)$ provided $p$ is sufficiently small, and in particular this means that $\P_p(E)=o(1)$.
\end{proof}
\section{Appendix}\label{se:app}
\begin{lemma}\label{le:calc}
Let $a,b,c,h$ be non-negative reals satisfying $a+\sigma b+\gamma c\geq h$ and $c\leq hp/\gamma$. Then
\[
\left(\frac{\epsilon h}{a+b+c}\right)^{a+b+c} p^{b+2c} \leq e^{-(a+b+c)},
\]
provided $p$ and $\epsilon$ are each at most absolute constants.
\end{lemma}
\begin{proof}
After rearranging and replacing $\epsilon$ by $\epsilon/e$ it is sufficient to show that
\[
f(a,b,c) := (b+2c)\log\left(\frac{1}{p}\right) - (a+b+c)\log\left(\frac{\epsilon h}{a+b+c}\right) \geq 0.
\]
We shall need the partial derivatives of $f$, which are
\begin{align*}
\frac{\partial f}{\partial a} &= 1 - \log\left(\frac{\epsilon h}{a+b+c}\right); \\
\frac{\partial f}{\partial b} &= \log\left(\frac{1}{p}\right) + 1 - \log\left(\frac{\epsilon h}{a+b+c}\right); \\
\frac{\partial f}{\partial c} &= 2\log\left(\frac{1}{p}\right) + 1 - \log\left(\frac{\epsilon h}{a+b+c}\right).
\end{align*}
Define
\[
Q = \inf \big\{ f(a,b,c) : a+\sigma b+\gamma c\geq h, \; a,b,c\geq 0 \big\}.
\]
The infimum is attained because each of the partial derivatives is positive if the corresponding variable is sufficiently large, so we may restrict the domain to a compact set. Since $\partial f/\partial c > 0$ for all $c>0$, the infimum is achieved either when $a+\sigma b+\gamma c=h$ or when $c=0$.
First suppose $c=0$. In this case we have $\partial f/\partial b > 0$, so again either $a+\sigma b=h$ or $b=0$. If $b=c=0$ then $a\geq h$ and $f(a,0,0) = a\log (a/\epsilon h)$, which is non-negative if $\epsilon\leq 1$. If $c=0$ and $a+\sigma b=h$ then
\[
f(a,b,c) = f(h-\sigma b,b,0) = b\log\left(\frac{1}{p}\right) - \big(h-(\sigma-1)b\big)\log\left(\frac{\epsilon h}{h-(\sigma-1)b}\right) := g_1(b),
\]
say, and in this case
\[
Q = \inf \big\{ g_1(b) : 0\leq b\leq h/\sigma \big\}.
\]
But we have
\[
\frac{\partial g_1}{\partial b} = \log\left(\frac{1}{p}\right) + (\sigma-1)\left(\log\left(\frac{\epsilon h}{h-(\sigma-1)b}\right)-1\right) \geq \log\left(\frac{1}{p}\right)-(\sigma-1) > 0
\]
if $p<e^{-(\sigma-1)}$, so
\[
Q = \inf \big\{ g_1(b) : 0\leq b\leq h/\sigma \big\} = g_1(0) = 0,
\]
so the lemma holds in the case $c=0$.
Now suppose $a+\sigma b+\gamma c=h$. Here we have
\begin{align*}
f(a,b,c) &= f\big(h-\sigma b-\gamma c,b,c\big) \\
&= (b+2c)\log\left(\frac{1}{p}\right) - \big(h-(\sigma-1)b-(\gamma-1)c\big)\log\left(\frac{\epsilon h}{h-(\sigma-1)b-(\gamma-1)c}\right) \\
&:= g_2(b,c),
\end{align*}
say. This time,
\[
Q = \inf \big\{ g_2(b,c) : \sigma b+\gamma c\leq h, \; c\leq hp/\gamma, \; b,c\geq 0 \big\}.
\]
As with $g_1$, the partial derivative $\partial g_2/\partial b$ is strictly positive for all $b$, so
\[
Q = \inf \big\{ g_2(0,c) : 0\leq c\leq hp/\gamma \big\}.
\]
Observe that with $b=0$,
\[
\frac{\partial g_2}{\partial c} = 2\log\left(\frac{1}{p}\right) + (\gamma-1)\left(\log\left(\frac{\epsilon h}{h-(\gamma-1)c}\right)-1\right).
\]
Thus, $g_2(0,c)$ is decreasing when $c=0$, and the only zero of its partial derivative with respect to $c$ occurs at
\[
c_0 = \frac{h}{\gamma-1}\left(1-\epsilon\exp\left(\frac{2\log 1/p}{\gamma-1}-1\right)\right) = \frac{h}{\gamma-1}\left(1-\epsilon e^{-(1-o(1))}\right) \gg \frac{hp}{\gamma}.
\]
So $g_2(0,c)$ is decreasing on our entire range of $c$, which implies that $Q = g_2(0,hp/\gamma)$, and hence that
\[
\frac{Q}{h} = \frac{2p}{\gamma}\log\left(\frac{1}{p}\right) - \left(1-(\gamma-1)\frac{p}{\gamma}\right)\log\left(\frac{\epsilon}{1-(\gamma-1)p/\gamma}\right).
\]
Thus, $Q>0$ if $\epsilon<1$ and $p$ is sufficiently small.
\end{proof}
\section*{Acknowledgements}
The first and second authors were partially supported by NSF grant DMS-0906634 and EU MULTIPLEX grant 317532. The third author was supported by a CNPq postdoctoral bolsa.
The majority of this research was carried out while the authors were visitors at the R\'enyi Institute, Budapest. It was continued while the third author was a visitor at the University of Memphis, and again while all three authors were visitors at Microsoft Research, Redmond. The authors are grateful to the R\'enyi Institute and to MSR, Redmond for their kind hospitality, and the third author is additionally grateful for the hospitality of the University of Memphis.
\bibliographystyle{amsplain}
\section{Introduction}
In \cite{MR1069238} and \cite{MR1182488} Jones and Okikiolu proved that a bounded set $E\subset\mathbb R^n$ is contained in a rectifiable curve
if and only if
\begin{align}
\label{BJ}
\int_0^\infty\int_{\mathbb R^n} \beta^E_\infty(x,t)^2\, dx\, \frac{dt}{t^n} < \infty,
\end{align}
where
\[ \beta^E_\infty(x,t) = \inf_L t^{-1}\sup\left\{\, d_e(y,L)\, :\, y\in B^{d_e}_E(x,t)\, \right\}, \]
with the infimum taken over all lines in $\mathbb R^n$.
Here and in the sequel $d_e$ denotes the euclidean metric in $\mathbb R^n$ (for any $n$ in question each time)
and $B^\rho_Y(x,r) = B_Y(x,r) = \{ y\in Y\, :\, \rho(y,x) \leq r \}$
for a metric space $(X,\rho)$, $Y\subset X$, $x\in X$ and $r\geq 0$.
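For intuition, the quantity $\beta^E_\infty(x,t)$ can be approximated for a finite planar point set by discretizing the set of candidate lines. The Python sketch below (all names are ours) minimizes only over lines through $x$ at finitely many angles, so it returns an upper bound for the true infimum over all lines; it is $0$ exactly when the points near $x$ are collinear along a sampled direction.

```python
import math

def beta_inf(points, x, t, n_angles=360):
    """Discretized Jones beta number in the plane: the width of the
    narrowest strip around a line through x (over n_angles directions)
    containing the points of the ball B(x, t), divided by t."""
    ball = [q for q in points if math.dist(q, x) <= t]
    if not ball:
        return 0.0
    best = float("inf")
    for k in range(n_angles):
        th = math.pi * k / n_angles
        ux, uy = math.cos(th), math.sin(th)
        # distance from q to the line through x with direction (ux, uy)
        width = max(abs((q[0] - x[0]) * uy - (q[1] - x[1]) * ux)
                    for q in ball)
        best = min(best, width)
    return best / t
```

For points lying on a line through $x$ the sketch returns (numerically) zero, while an L-shaped configuration yields a beta number bounded away from zero, as the flatness heuristic suggests.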
In \cite{MR1113517} David and Semmes gave a higher dimensional version of the above theorem for $k$-regular subsets of $\mathbb R^n$
(where $k$ is an integer between $0$ and $n$)
by showing that a closed $k$-regular set $E\subset\mathbb R^n$ has big pieces of Lipschitz images of $\mathbb R^k$ if and only if
there is $C<\infty$ such that
\begin{align}
\label{urehto}
\int_0^r\int_{B_E(z,r)} \beta^E_1(x,t)^2\, d\mathcal{H}^k_E(x)\, \frac{dt}{t} \leq Cr^k
\end{align}
for all $z\in E$ and $r>0$, where
\begin{align*}
\beta^E_1(x,t) = t^{-k-1}\inf_L\int_{B_E(x,t)} d_e(y,L)\, d\mathcal{H}^k_E(y)
\end{align*}
with infimum taken over all $k$-planes in $\mathbb R^n$.
In fact David and Semmes gave in \cite{MR1113517} several equivalent conditions to \eqref{urehto} and said that
a closed $k$-regular set $E\subset\mathbb R^n$ is uniformly rectifiable if it satisfies these conditions.
Above the metric notions are of course taken with respect to $d_e$. More generally, we say that
a metric space $(X,\rho)$ is $k$-\emph{regular} if there exists a constant $C\in\mathbb R$ such that
$C^{-1}r^k \leq \mathcal{H}^k_X(B^\rho_X(x,r)) \leq Cr^k$ for any $x\in X$ and $r\in ]0,\rho(X)]$,
where $\mathcal{H}^k_X$ is the $k$-dimensional Hausdorff measure on $(X,\rho)$.
The smallest such constant $C$ will be denoted by $C_{(X,\rho)}$.
Further we say that $(X,\rho)$ \emph{has big pieces of bilipschitz images of subsets of} $\mathbb R^k$ (with constants $K$ and $c$)
if for any $x\in X$ and $r\in ]0,\rho(X)[$ there exists a $K$-bilipschitz function $f:A\to X$
(w.r.t. the metrics $d_e$ and $\rho$) with
$A\subset B^{d_e}_{\mathbb R^k}(0,r)$ such that $\mathcal{H}^k_X(f(A)\cap B_X(x,r))\geq cr^k$.
In \cite{MR2373273} Schul extended the one dimensional result of Jones and Okikiolu to Hilbert spaces (with the condition \eqref{BJ} modified in an appropriate way).
Further in \cite{MR2371434} Ferrari, Franchi and Pajot gave for a compact subset $E$ of the Heisenberg group $\mathbb H^1$
(endowed with its Carnot-Carath\'eodory metric)
an analogue of the condition~\eqref{BJ}
which measures the deviation of $E$ from a best approximating Heisenberg straight line
(i.e. an element of $\mathcal V^1$, see \eqref{V}) at different scales and locations.
They showed that this condition is sufficient for $E$ to be contained in a rectifiable curve.
Juillet gave in \cite{MR2789375} an example which shows that it is not necessary.
Following \cite{MR2371434} we define for a $k$-regular subset $E$ of the Heisenberg group $\mathbb H^n$ an analogue of \eqref{urehto} and prove the following theorem.
For the definitions see Sections~\ref{secH} and \ref{secbd}.
Theorem~\ref{th} is invariant under bilipschitz change of metric (see also \eqref{beta1}).
Note that the Kor\'anyi metric $d$ (see \eqref{d}) which we below use exclusively is bilipschitz equivalent with the
usual Carnot-Carath\'eodory metric on $\mathbb H^n$.
\begin{theorem}
\label{th}
Let $k\in\mathbb N$ and $E$ be a $k$-regular subset of the Heisenberg group $\mathbb H^n$.
If there is a constant $C$ such that
\begin{align}
\label{oletus}
\int_0^r\int_{B_E(x,r)} \beta^E_1(y,t)^2\, d\mathcal{H}^k_E(y)\, \frac{dt}{t} \leq Cr^k\qquad\text{for all $x \in E$ and $r>0$},
\end{align}
then $E$ has big pieces of bilipschitz images of subsets of $\mathbb R^k$.
\end{theorem}
The proof given here follows \cite{MR1113517}. A similar method is applied also in \cite{MR1709304}.
For readability and consistency we give a quite detailed proof although mostly the adaptation from \cite{MR1113517} is trivial or at least straightforward.
In this article $|\cdot|$ denotes the euclidean $k$-norm for any $k$ in question.
The cardinality of a finite set $X$ is denoted by $\#X$.
Further $\mathcal{P}(X)=\{ Y\, :\, Y\subset X \}$ for any set $X$, and the symbol $\Xint-$ is used to denote an average integral.
\section{Some notations and preliminaries on Heisenberg groups}
\label{secH}
The Heisenberg group $\mathbb H^n$ is the unique simply connected and connected Lie group of step two and dimension $2n+1$
with one dimensional center. As a set it may be identified with $\mathbb R^{2n+1}$. The points $x \in \mathbb H^n$ are written as
$x = (x',x_{2n+1})$ with $x' \in \mathbb R^{2n}$ and $x_{2n+1} \in \mathbb R$. The group operation is given by
\[ x \cdot y = ( x' + y' , x_{2n+1} + y_{2n+1} + 2A(x',y')), \]
where
\[ A(x',y') = \sum_{i=1}^n ( x_{i+n}y_i - x_iy_{i+n} ). \]
Note that the inverse of $x$, denoted also by $x^{-1}$, is $-x = (-x',-x_{2n+1})$ and the neutral element is $(0,0)$.
We equip $\mathbb H^n$ with a metric $d$ defined by
\begin{align}
\label{d}
d(x,y) = \lVert y^{-1} \cdot x \rVert,\qquad\text{where $\lVert x \rVert = (|x'|^4 + x_{2n+1}^2)^{1/4}$}.
\end{align}
The metric $d$ is left invariant, i.e., for each $p\in\mathbb H^n$ the left translation $\tau_p:x\mapsto p\cdot x$ is
an isometry from $(\mathbb H^n,d)$ to itself.
Note that for $x,y\in\mathbb R^{2n} \times \{ 0 \}$ the conditions
$A(x',y') = 0$, $x\cdot y = x + y$ and $d(x,y) = |x-y|$
are equivalent.
A linear subspace $V\subset\mathbb R^{2n}$ is said to be \emph{isotropic} if $A(x,y)=0$ for all $x,y\in V$.
For $k \in \mathbb N$ denote
\begin{align*}
\mathcal V_0^k = \left\{\, V\times\{ 0\} \, :\, \text{$V$ is an isotropic $k$-dimensional linear subspace of $\mathbb R^{2n}$}\, \right\}.
\end{align*}
In other words $\mathcal V_0^k$ is the collection of the $k$-dimensional homogeneous horizontal subgroups of $\mathbb H^n$ (see \cite{MR2955184}).
We note that $\mathcal V_0^k \neq \emptyset$ if and only if $0 \leq k \leq n$.
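As a concrete illustration (this example is not part of the source argument, but is immediate from the definition of $A$): for $k\leq n$ the span of the first $k$ standard basis vectors of $\mathbb R^{2n}$ is isotropic, since for $x',y'$ in that span all coordinates with index greater than $n$ vanish, so

```latex
% V = \operatorname{span}\{e_1,\dotsc,e_k\} \subset \mathbb R^{2n} with k \leq n:
% for x', y' \in V one has x'_{i+n} = y'_{i+n} = 0 for i = 1,\dotsc,n, hence
A(x',y') = \sum_{i=1}^n \bigl( x'_{i+n}y'_i - x'_iy'_{i+n} \bigr) = 0,
% so V \times \{0\} \in \mathcal V_0^k. Conversely, no isotropic subspace of
% dimension k > n exists, because A is a nondegenerate antisymmetric bilinear
% form on \mathbb R^{2n}, whose isotropic subspaces have dimension at most n.
```

This is one way to see the claimed equivalence $\mathcal V_0^k\neq\emptyset \Leftrightarrow 0\leq k\leq n$.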
Set
\begin{align}
\label{V}
\mathcal V^k = \left\{\, \tau_p(V)\, :\, \text{$V \in \mathcal V_0^ k$ and $p \in \mathbb H^n$}\, \right\}.
\end{align}
Each $V \in \mathcal V^k$ is a $k$-dimensional affine subspace of $\mathbb R^{2n+1}$ because $\tau_p$ is an affine mapping whose linear part has determinant $1$.
Note also that for any $V\in\mathcal V^k$
\begin{align}
\label{dV}
d(x,y) = |x'-y'|\qquad\text{for all $x,y\in V$}.
\end{align}
Namely, let $V=\tau_p(V_0)$ for $V_0\in\mathcal V_0^k$, $x=p\cdot x_0$ and $y=p\cdot y_0$. Since $(p\cdot z)' = p' + z'$ for any $z\in\mathbb H^n$,
one has $d(x,y) = d(x_0,y_0) = |x_0'-y_0'| = |x'-y'|$.
For $V \in \mathcal V^k$ we define the projection $P_V : \mathbb H^n \to V$ by setting
\begin{align*}
P_V = \tau_{p} \circ P^e_{\tau_{-p}(V)} \circ \tau_{-p},
\end{align*}
where $p \in V$ and $P^e_L:\mathbb H^n\to L$ is the euclidean orthogonal projection to the linear subspace $L \in \mathcal V^k_0$ (called \emph{horizontal projection} in \cite{MR2789472}).
Note that $P_V$ is well defined (independent of the choice of $p\in V$), because letting $V_0 \in \mathcal V_0^k$ and $x\in\mathbb H^n$
\[ P^e_{V_0}(a\cdot x) = P^e_{V_0}(a'+x',0) = P^e_{V_0}(a',0) + P^e_{V_0}(x',0) = P^e_{V_0}(a) \cdot P^e_{V_0}(x) \]
for every $a\in\mathbb H^n$, and hence
\begin{align*}
p \cdot v \cdot P^e_{V_0}((p\cdot v)^{-1} \cdot x) = p \cdot v \cdot (P^e_{V_0}(-v) \cdot P^e_{V_0}(p^{-1} \cdot x))
= p \cdot P^e_{V_0}(p^{-1}\cdot x)
\end{align*}
for any $p\in\mathbb H^n$ and $v\in V_0$.
It is easy to see (using the left invariance of $d$) that $P_V$ is 1-Lipschitz for any $V\in\mathcal V^k$.
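For completeness, the computation behind that claim can be sketched as follows. Since $P_V$ is a conjugate of $P^e_{V_0}$ by isometric left translations, it suffices to check that $P^e_{V_0}$ is 1-Lipschitz for $V_0\in\mathcal V_0^k$; for $x,y\in\mathbb H^n$, using \eqref{dV} and the definition \eqref{d},

```latex
d\bigl(P^e_{V_0}(x),P^e_{V_0}(y)\bigr)
  = \bigl|P^e_{V_0}(x)' - P^e_{V_0}(y)'\bigr|   % both points lie in V_0, so (dV) applies
  = \bigl|P^e_{V_0'}(x'-y')\bigr|               % linearity of the euclidean projection
  \leq |x'-y'|
  \leq \lVert y^{-1}\cdot x \rVert = d(x,y).    % since (y^{-1}\cdot x)' = x'-y'
```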
\begin{remark}\rm
\label{remProttr}
A linear map $\varphi:\mathbb H^n\to\mathbb H^n$ is called a \emph{rotation}
if $\varphi(x)_{2n+1} = x_{2n+1}$, $A(x',y')=A(\varphi(x)',\varphi(y)')$ and $|\varphi(x)'-\varphi(y)'| =|x'-y'|$ for all $x$ and $y$.
If $\varphi$ is a rotation then clearly $\varphi(x\cdot y)=\varphi(x)\cdot\varphi(y)$ for all $x,y\in\mathbb H^n$,
and $\varphi(V) \in \mathcal V^k$ for any $V\in\mathcal V^k$. Hence
\[ P_{\tau_p\circ\varphi(V)}\circ\tau_p\circ\varphi = \tau_p\circ\varphi\circ P_V \]
for any $p\in\mathbb H^n$, $V\in\mathcal V^k$ and rotation $\varphi$.
Namely, if $V=\tau_q(V_0)$ for $q\in\mathbb H^n$ and $V_0\in\mathcal V_0^k$ then for $x\in\mathbb H^n$
\begin{align*}
&P_{\tau_p\circ\varphi(V)}(p\cdot\varphi(x)) = P_{p\cdot\varphi(q)\cdot\varphi(V_0)}(p\cdot\varphi(x)) \\
&= p\cdot\varphi(q)\cdot P^e_{\varphi(V_0)}((p\cdot\varphi(q))^{-1}\cdot p\cdot\varphi(x))
= p\cdot\varphi(q)\cdot P^e_{\varphi(V_0)}(\varphi(x)-\varphi(q)) \\
&= p\cdot\varphi(q)\cdot \varphi(P^e_{V_0}(x-q))
= p\cdot\varphi(q\cdot P^e_{V_0}(x-q))
= p\cdot\varphi(P_V(x)).
\end{align*}
For any $V_0,W_0\in\mathcal V_0^k$ there is a rotation $\varphi$ such that $W_0=\varphi(V_0)$ (see \cite{MR2955184}).
Hence for any $V,W\in\mathcal V^k$ there is a rotation~$\varphi$ and $p\in\mathbb H^n$ such that $W=\tau_p\circ\varphi(V)$.
Notice that the rotations are isometries.
\end{remark}
Denote $X_k = \{\, x\in\mathbb H^n\, :\, \text{$x_i=0$ for all $i>k$}\, \}$.
\begin{lemma}
\label{lePdist}
For any $x\in\mathbb H^n$ and $V\in\mathcal V^k$
\[ d(x,P_V(x)) \leq 3d(x,V). \]
\end{lemma}
\begin{proof}
By the left invariance of $d$ this follows from \cite{MR2789472} (at least with $3$ replaced by some constant).
Let us give here another proof by a direct calculation.
By Remark~\ref{remProttr} one only needs to show that
$d(x,P_{X_k}(x)) \leq 3d(x,X_k)$ for all $x\in\mathbb H^n$. Let $(z,y)\in\mathbb R^k\times\mathbb R^{2n-k}$, $u\in X_k$ and $C>1$.
Define $f:\mathbb R\to\mathbb R$ by setting
$f(t) = Cd((z,y,t),u)^4 - d((z,y,t),P_{X_k}(z,y,t))^4$. Let $t\in\mathbb R$. By denoting
$T=-2\sum_{i=1}^ku_iy_{n-k+i}$ and $U=-2\sum_{i=1}^kz_iy_{n-k+i}$ we have
\[ f(t) = C\left(|z-u|^2+|y|^2\right)^2+C(t-T)^2-|y|^4-(t-U)^2. \]
Now
\begin{align*}
f(t) \geq f\left(\frac{CT-U}{C-1}\right) = C\left(|z-u|^2+|y|^2\right)^2 - \frac{C(T-U)^2}{C-1} - |y|^4.
\end{align*}
Since
\begin{align*}
|T-U| = 2\left|\sum_{i=1}^k(z_i-u_i)y_{n-k+i}\right| \leq 2|z-u||y| \leq \left(|z-u|^2+|y|^2\right),
\end{align*}
we have
\begin{align*}
f(t) \geq \frac{C(C-2)}{C-1}\left(|z-u|^2+|y|^2\right)^2-|y|^4 \geq 0
\end{align*}
by choosing $C\geq 3$.
\end{proof}
For $V,W\in\mathcal V^k$ denote
\[ \kulma{V}{W} = \min\{\, C \geq 1\, : \text{$d(x,y) \leq Cd(P_W(x),P_W(y))$ for all $x,y\in V$}\, \}. \]
Let $P^e_L$ denote also the euclidean orthogonal projection from $\mathbb R^{2n}$ to an affine subspace $L\subset\mathbb R^{2n}$.
For any $Y\subset\mathbb H^n$ we write $Y'=\{ x'\, :\, x \in Y \}$.
Let $V=\tau_p(V_0)$ for $p\in\mathbb H^n$ and $V_0\in\mathcal V_0^k$. Then
\[ P_V(x)'=(p\cdot P^e_{V_0}(x-p))' = p'+P^e_{V_0}(x-p)' = p'-P^e_{V_0}(p)'+P^e_{V_0}(x)' \]
for any $x$ (by the linearity of $P^e_{V_0}$).
Thus, since $V' = p' + V_0'$, we have by \eqref{dV}
\begin{align}
\label{PV}
d(P_V(x),P_V(y))=|P_V(x)'-P_V(y)'|=|P^e_{V_0}(x)'-P^e_{V_0}(y)'|=|P^e_{V'}(x')-P^e_{V'}(y')|
\end{align}
for any $x,y\in\mathbb H^n$.
In particular, the following equality holds.
\begin{lemma}\rm
\label{lekulmat}
Let $V,W\in\mathcal V^k$. Then
\[ \kulma{V}{W}=\min\{\, C \geq 1\, : \text{$|x-y| \leq C|P^e_{W'}(x)-P^e_{W'}(y)|$ for all $x,y\in V'$}\, \}. \]
\end{lemma}
\section{Beta numbers and dyadic cubes}
\label{secbd}
Let $k\in\{1,\dotsc,n\}$. From now on we assume that $E$ is a $k$-regular subset of $\mathbb H^n$.
Denote $B(x,r) = B^d_{\mathbb H^n}(x,r)$ and $\mu = \mathcal{H}^k|_E$, where $\mathcal{H}^k$ is the $k$-dimensional Hausdorff measure on $\mathbb H^n$ (with respect to the metric $d$).
By \cite{MR1096400} there exist constants $\alpha, D \in ]1,\infty[$ (depending only on $k$ and the regularity constant $C_E$)
and a collection $\Delta^* = \bigcup_{j\in\mathbb Z} \Delta_j \subset \mathcal{P}(E)$ such that each $Q\in\Delta^*$ is open in $E$ and
\begin{align}
\label{D1}
&\text{$\mu\biggl(E\backslash \bigcup_{Q\in\Delta_j} Q\biggr) = 0$ for all $j\in\mathbb Z$.} \\
\label{D2}
&\text{If $Q, R \in \Delta_j$ and $Q \neq R$, then $Q \cap R = \emptyset$.} \\
\label{D3}
&\text{If $Q\in\Delta_j$, $R\in\Delta_l$ and $j \leq l$, then $Q \subset R$ or $Q \cap R = \emptyset$.} \\
\label{D4}
&\text{$d(Q) \leq D\alpha^j$ for all $Q\in\Delta_j$.} \\
\label{D5}
&\text{If $Q \in \Delta_j$, then $B(x,D^{-1}\alpha^j) \cap E \subset Q$ for some $x\in E$.} \\
\label{D6}
&\text{$\mu\left(\{ x\in Q\, :\, d(x,E\backslash Q) \leq t\alpha^j \}\right) \leq Dt^{1/D}\mu(Q)$ for all $Q\in\Delta_j$, $t > 0$.}
\end{align}
By \eqref{D4} and \eqref{D5} also $D^{-k}C_E^{-3}\alpha^{jk} \leq \mu(Q) \leq C_E D^k\alpha^{jk}$ for $Q\in\Delta_j$
if $\alpha^j \leq Dd(E)$. Thus, defining $J_0 = \inf\{ j \, :\, \Delta_j = \{E\} \}$ (here $J_0 = \infty$ if $d(E) = \infty$) and
$\mathcal Z = \{ j\in\mathbb Z\, :\, j\leq J_0 \}$,
and taking $D$ larger, we can assume that
\begin{align}
\label{D}
D^{-1}\alpha^j \leq d(Q) \leq D\alpha^j \quad\text{and}\quad D^{-1}\alpha^{jk} \leq \mu(Q) \leq D\alpha^{jk}
\end{align}
for all $Q\in\Delta_j$, $j\in\mathcal Z$. Set $\Delta = \bigcup_{j\in\mathcal Z} \Delta_j$.
If $(X,\rho) \in \{(\mathbb H^n,d),(\mathbb R^k,d_e) \}$ we write $Z(r) = \{ x\in X\, :\, \rho(x,Z) \leq r\}$ for any $Z\subset X$ and $r>0$.
We further denote
\begin{align*}
\lambda Q &= Q((\lambda-1)d(Q)) \cap E, \\
\lambda F &= F((\lambda-1)d_e(F))
\end{align*}
for any $Q\subset E$, $F\subset\mathbb R^k$ and $\lambda > 1$.
Each constant in this article may depend on $k$ and $C_E$ without special mention.
For later use, let $\varepsilon$ and $\delta$ be small positive constants and $K_0$ and $K$ large constants.
We will fix $K_0$ first, $\delta$ second and $\varepsilon$ after $K$.
The constants $C$ in Sections~\ref{secbd}--\ref{secF3} depend on
$\varepsilon$, $K$, $\delta$ or $K_0$ only if it is separately mentioned.
Eventually every constant will depend only on $k$, $C_E$ and $C$ from \eqref{oletus}.
For $x\in \mathbb H^n$, $t>0$ and $F \subset E$ with $d(F)>0$ denote
\begin{align}
\label{beta1}
\beta_1(x,t) = \beta^E_1(x,t) &= t^{-k-1}\inf_{V\in\mathcal V^k} \int_{B(x,t)} d(y,V)\, d\mu y, \\
\notag
\beta_\infty(F) &= d(F)^{-1}\inf_{V\in\mathcal V^k}\sup\left\{\, d(y,V)\, :\, y\in F\, \right\}.
\end{align}
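Although not needed below, the two quantities compare in the expected way. Assuming the regularity constant is normalized so that $\mu(B(x,t)) \leq C_E t^k$, one has for $F = E\cap B(x,t)$ with $d(F)>0$

```latex
\beta_1(x,t)
  \leq t^{-k-1}\,\mu(B(x,t))
       \inf_{V\in\mathcal V^k}\,\sup_{y\in F} d(y,V)
  = t^{-k-1}\,\mu(B(x,t))\, d(F)\,\beta_\infty(F)
  \leq 2C_E\,\beta_\infty(F),
```

since $d(F)\leq 2t$; so an $L^\infty$ flatness bound at a given scale always controls the $L^1$ variant.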
We say that $E$ satisfies the \emph{weak geometric lemma} if for each $\lambda_1>0$ and $\lambda_2 > 1$
there is a constant $C(\lambda_1,\lambda_2)$ such that
\begin{align}
\label{wgl}
\sum_{j\in\mathcal Z}\sum_{\substack{Q\in\Delta_j \\ Q\subset R \\ \beta_\infty(\lambda_2 Q) > \lambda_1}} \mu(Q) \leq C(\lambda_1,\lambda_2)\mu(R)
\qquad\text{for all $R\in\Delta$}.
\end{align}
Denote
\begin{align*}
\mathcal G_1 = \left\{\, Q\in\Delta\, :\, \text{$KQ \subset V(\varepsilon^2 d(Q))$ for some $V\in\mathcal V^k$}\, \right\}.
\end{align*}
Clearly \eqref{wgl} implies
\begin{align}
\label{C0}
\sum_{j\in\mathcal Z}\sum_{\substack{Q\in\Delta_j\backslash\mathcal G_1 \\ Q\subset R}} \mu(Q) \leq C(\varepsilon,K)\mu(R)
\qquad\text{for all $R\in\Delta$}.
\end{align}
Note also that \eqref{oletus} implies \eqref{wgl} (and hence \eqref{C0}).
For the proof see for example \cite{MR1113517}.
(This clearly remains valid even if $\mathbb H^n$ and $\mathcal V^k$ are replaced by any $k$-regular metric space $(X,\rho)$
and $\mathcal A\subset \mathcal{P}(X)$ with $\inf\{ \rho(x,V)\, :\, V\in\mathcal A \} = 0$ for all $x\in E$.)
For each $Q\in\mathcal G_1$ we let $V_Q\in\mathcal V^k$ be such that $KQ \subset V_Q(\varepsilon^2 d(Q))$.
\begin{lemma}
\label{leL}
There is a constant $c>0$ such that for any $Q\in\mathcal G_1$ there exists
$\{y_0,\dotsc,y_k\} \subset V_Q \cap Q(\varepsilon^2 d(Q))$ such that
$d(y_{i+1},L_i) > cd(Q)$ for all $i\in\{0,\dotsc,k-1\}$,
where $L_i \subset V_Q$ is the $i$-dimensional affine subspace with $\{y_0,\dotsc,y_i\} \subset L_i$.
\end{lemma}
\begin{proof}
Choose some $x_0 \in Q$ and take $L_0=\{y_0\} \subset V_Q$ such that $d(x_0,y_0) = d(x_0,V_Q) \leq \varepsilon^2 d(Q)$.
Assume now that $i<k$ and the $i$-dimensional affine subspace $L_i \subset V_Q$ is defined.
Suppose to the contrary that
$V_Q \cap Q(\varepsilon^2 d(Q)) \subset L_i(c d(Q))$, where $c>0$ is a constant determined later.
Then $Q \subset L_i((c+\varepsilon)d(Q))$.
Denote $F = L_i \cap Q((c+\varepsilon)d(Q))$ and let $H$ be a maximal subset of $F$ such that $d(z,w) > ad(Q)$ for distinct $z,w\in H$,
where $a>0$ is a constant fixed later.
Then
\[ Q \subset F((c+\varepsilon)d(Q)) \subset \bigcup_{y\in H} B(y,(c+\varepsilon+a)d(Q)) \]
and
\[ \#H\cdot \left(\frac{a d(Q)}{2}\right)^i \leq (1+2(c+\varepsilon))^id(Q)^i \]
by \eqref{dV}.
Picking $x(y)\in B(y,(c+\varepsilon+a)d(Q))$ for each $y\in H$ we get
\begin{align*}
\mu(Q) &\leq \sum_{y\in H} \mu(B(x(y),2(c+\varepsilon+a)d(Q))) \leq \#H\cdot C_E 2^k(c+\varepsilon+a)^kd(Q)^k \\
&\leq \left(\frac{2+4(c+\varepsilon)}{a}\right)^i C_E 2^k(c+\varepsilon+a)^kd(Q)^k.
\end{align*}
By choosing $a$ and $c$ suitably (depending only on $k$ and $C_E$) and then $\varepsilon>0$ small enough one gets a contradiction
with \eqref{D}.
\end{proof}
\begin{lemma}
\label{leQP}
If $Q\in\mathcal G_1$ and $V\in\mathcal V^k$ is such that $Q \subset V(2K\varepsilon^2 d(Q))$,
then $\kulma{V_Q}{V} \leq 1 + \varepsilon$.
\end{lemma}
\begin{proof}
Let $\{y_0,\dotsc,y_k\} \subset V_Q \cap Q(\varepsilon^2 d(Q))$ as in Lemma~\ref{leL}.
Then $\{y_0,\dotsc,y_k\} \subset V((1+2K)\varepsilon^2 d(Q))$ and $d(y_i,y_j) > cd(Q)$ for all distinct $i,j\in\{0,\dotsc,k\}$.
Thus by Lemma~\ref{lePdist}
\begin{align*}
d(y_i,y_j) &\leq d(P_V(y_i),P_V(y_j)) + d(y_i,P_V(y_i)) + d(y_j,P_V(y_j)) \\
&\leq d(P_V(y_i),P_V(y_j)) + 6(1+2K)\varepsilon^2 d(Q) \\
&\leq d(P_V(y_i),P_V(y_j)) + 6(1+2K)c^{-1}\varepsilon^2 d(y_i,y_j)
\end{align*}
for all $i,j\in\{1,\dotsc,k\}$, where $c$ is as in Lemma~\ref{leL}.
Choosing $\varepsilon$ small enough depending on $K$ (and $c$) we get by \eqref{PV} and \eqref{dV} that
$|x-y| \leq (1+\varepsilon)|P^e_{V'}(x)-P^e_{V'}(y)|$ for all $x,y\in V_Q'$.
The claim now follows from Lemma~\ref{lekulmat}.
\end{proof}
\section{Stopping time regions}
\label{secF}
In this section we mostly follow \cite[Sections 7 and 8]{MR1113517}. We use the same assumptions and notations as in Section~\ref{secbd}
assuming additionally that $E$ satisfies the weak geometric lemma~\eqref{wgl}.
For any $Q\in\Delta^*$ denote $\mathcal C(Q) = \{ R\in\Delta_{j_Q-1}\, :\, R \subset Q \}$, where $j_Q=\min\{\, j\in\mathbb Z\, :\, Q\in\Delta_j\, \}$.
If $j_Q < J_0$, we also denote by $O(Q)$ the unique $R\in\Delta^*$ for which $Q\in\mathcal C(R)$.
For any $\mathcal S\subset\Delta$ let $\min(\mathcal S)$ be the set of minimal (with respect to inclusion) cubes in $\mathcal S$.
\begin{lemma}
There is $\mathcal G\subset\mathcal G_1$ and $\mathfrak F\subset\mathcal{P}(\mathcal G)$ such that $\mathcal G = \bigcup_{\mathcal S\in\mathfrak F}\mathcal S$ and
the following conditions are satisfied:
\begin{itemize}
\item[\rm(F1)] For all $R\in\Delta$
\[ \sum\limits_{j\in\mathcal Z}\sum\limits_{\substack{Q\in\Delta_j\backslash\mathcal G \\ Q\subset R}} \mu(Q) \leq C(\varepsilon,K)\mu(R). \]
\item[\rm(F2)]
If $\mathcal S_1,\mathcal S_2\in\mathfrak F$ and $\mathcal S_1\neq\mathcal S_2$, then $\mathcal S_1\cap\mathcal S_2 = \emptyset$.
\item[\rm(F3)]
Each $\mathcal S\in\mathfrak F$ has a largest element with respect to inclusion, denoted by $Q(\mathcal S)$.
\item[\rm(F4)]
If $Q \in \mathcal S$, $R\in\Delta$ and $Q\subset R \subset Q(\mathcal S)$, then $R \in \mathcal S$.
\item[\rm(F5)]
$\kulma{V_Q}{V_{Q(\mathcal S)}} \leq 1+\delta$ for all $Q\in\mathcal S$.
\item[\rm(F6)]
If $Q\in\mathcal S$, $\mathcal C(Q)\subset\mathcal G$ and $\kulma{V_R}{V_{Q(\mathcal S)}} \leq 1+\delta$ for all $R\in\mathcal C(Q)$, then $\mathcal C(Q)\subset\mathcal S$.
\item[\rm(F7)]
$Q\in\min(\mathcal S)$ if and only if the following two conditions are satisfied:
\begin{itemize}
\item[\textbullet] $Q\in\mathcal S$
\item[\textbullet] $\mathcal C(Q)\backslash\mathcal G\neq\emptyset$ or $\kulma{V_R}{V_{Q(\mathcal S)}} > 1+\delta$ for some $R\in\mathcal C(Q)$
\end{itemize}
\end{itemize}
\end{lemma}
\begin{proof}
Assume first that $E$ is unbounded.
Let $p\in E$ and set $\mathcal{D} = \min\bigl(\bigcup_{j\in\mathbb N}\mathcal{D}_j\bigr)$, where
$\mathcal{D}_j = \left\{\, R\in\Delta_j\, :\, B(p,\alpha^j) \cap O(R)\neq\emptyset\, \right\}$.
Clearly each $Q\in\Delta$ is included in some $R\in\bigcup_{j\in\mathbb N}\mathcal{D}_j$.
Let $j\in\mathbb Z$ and $Q\in\Delta_j$ be such that $d(p,Q) > (D^3\alpha+1)\alpha^{j}$.
We next show that there exists $R\in\mathcal{D}$ such that $Q\subset R$.
We just let $R$ be the minimal cube in $\bigcup_{i\in\mathbb N}\mathcal{D}_i$ such that $Q\subset R$.
Since trivially $\mathcal{D}_0 \subset \mathcal{D}$, we assume $R\in\mathcal{D}_i$ for $i\geq 1$.
Now $i>j$ because $Q\not\in\mathcal{D}_j$. Thus $Q\subset R^*$ for some $R^* \in \mathcal C(R)$.
By the minimality of $R$ we have $B(p,\alpha^{j_R-1}) \cap R = \emptyset$ from which we conclude $R\in\mathcal{D}$.
So by the regularity there is a constant $C$ such that for every $j\in\mathbb Z$ there are at most $C$ cubes in $\Delta_j$
which are not contained in
any cube in $\mathcal{D}$.
If $E$ is bounded we set $\mathcal{D} = \Delta_{J_0}$ (which contains only one cube).
Defining $\mathcal G = \mathcal G_1 \backslash \{\, Q \in \Delta\, :\, \text{$Q\not\subset R$ for all $R\in\mathcal{D}$}\, \}$ the condition~(F1) holds by
\eqref{C0} and the previous discussion.
For each $R\in\mathcal{D}$ we partition $\mathcal G(R) = \{ Q\in\mathcal G\, :\, Q \subset R \}$ into a family of
``stopping time regions'' as follows:
Let $Q_0$ be a maximal element in $\mathcal G(R)$. The family $\mathcal S$ is defined to be the unique subset of $\mathcal G(R)$
whose largest element is $Q_0$ and which satisfies the conditions~(F3)--(F7).
Then we repeat the process for $\mathcal G(R)\backslash\mathcal S$.
Since $R_1 \cap R_2 = \emptyset$ for distinct $R_1,R_2\in\mathcal{D}$, the condition~(F2) is satisfied.
\end{proof}
Notice that (F6) and (F7) imply
\begin{align}
\label{C6b}
Q \in \mathcal S\backslash\min(\mathcal S) \Longrightarrow \mathcal C(Q)\subset\mathcal S.
\end{align}
For $\mathcal S\in\mathfrak F$ denote
\begin{align*}
m_1(\mathcal S) &= \{\, Q\in\min(\mathcal S)\, :\, \mathcal C(Q)\backslash\mathcal G\neq\emptyset\, \}, \\
m_2(\mathcal S) &= \min(\mathcal S) \backslash m_1(\mathcal S)
\end{align*}
and further
\begin{align*}
\mathfrak F_1 &= \Bigl\{\, \mathcal S\in\mathfrak F\, : \mu\Bigl(\bigcup_{Q\in m_1(\mathcal S)} Q\Bigr) \geq \mu(Q(\mathcal S))/4\, \Bigr\}, \\
\mathfrak F_2 &= \Bigl\{\, \mathcal S\in\mathfrak F\, : \mu\Bigl( Q(\mathcal S) \backslash\bigcup_{Q\in\min(\mathcal S)} Q\Bigr) \geq \mu(Q(\mathcal S))/4\, \Bigr\}, \\
\mathfrak F_3 &= \Bigl\{\, \mathcal S\in\mathfrak F\, : \mu\Bigl(\bigcup_{Q\in m_2(\mathcal S)} Q\Bigr) \geq \mu(Q(\mathcal S))/2\, \Bigr\}.
\end{align*}
Clearly $\mathfrak F = \mathfrak F_1\cup\mathfrak F_2\cup\mathfrak F_3$.
\begin{lemma}
\label{leF1F2}
There is a constant $C=C(\varepsilon,K)$ such that
\[ \sum_{\substack{\mathcal S\in\mathfrak F_1 \cup\mathfrak F_2 \\ Q(\mathcal S)\subset R}} \mu(Q(\mathcal S)) \leq C\mu(R)\qquad\text{for all $R\in\Delta$}. \]
\end{lemma}
\begin{proof}
Let $R\in\Delta$. By \eqref{D} and (F1)
\begin{align*}
\sum_{\substack{\mathcal S\in\mathfrak F_1 \\ Q(\mathcal S)\subset R}} \mu(Q(\mathcal S))
\leq 4\sum_{\substack{\mathcal S\in\mathfrak F_1 \\ Q(\mathcal S)\subset R}} \mu\Bigl(\bigcup_{Q\in m_1(\mathcal S)} Q\Bigr)
\leq 4D^2\alpha^k\sum_{\substack{Q\in\Delta\backslash\mathcal G \\ Q\subset R}} \mu(Q) \leq 4D^2\alpha^kC(\varepsilon,K)\mu(R).
\end{align*}
Let $\mathcal S_1,\mathcal S_2\in\mathfrak F$, $\mathcal S_1\neq\mathcal S_2$. Then $Q(\mathcal S_1)\neq Q(\mathcal S_2)$ by (F2). If $Q(\mathcal S_1) \cap Q(\mathcal S_2)\neq\emptyset$ then \eqref{D3} implies
$Q(\mathcal S_1) \subset Q(\mathcal S_2)$ or $Q(\mathcal S_2) \subset Q(\mathcal S_1)$. Assume that $Q(\mathcal S_1) \subset Q(\mathcal S_2)$ and take minimal $Q\in\mathcal S_2$ such that
$Q(\mathcal S_1) \subset Q$. Since $Q \neq Q(\mathcal S_1)$ by (F2) one has $Q\in\min(\mathcal S_2)$ by \eqref{C6b}.
Thus the sets $Q(\mathcal S) \backslash\bigcup_{Q\in\min(\mathcal S)} Q$, $\mathcal S\in\mathfrak F$, are disjoint and
\begin{align*}
\sum_{\substack{\mathcal S\in\mathfrak F_2 \\ Q(\mathcal S)\subset R}} \mu(Q(\mathcal S))
\leq 4\sum_{\substack{\mathcal S\in\mathfrak F_2 \\ Q(\mathcal S)\subset R}} \mu\Bigl( Q(\mathcal S) \backslash\bigcup_{Q\in\min(\mathcal S)} Q\Bigr)
\leq 4\mu(R).
\end{align*}
\end{proof}
Now the goal is to show that
\begin{align}
\label{t}
\sum_{\substack{\mathcal S\in\mathfrak F_3 \\ Q(\mathcal S)\subset R}} \mu(Q(\mathcal S)) \leq C\mu(R)\qquad\text{for all $R\in\Delta$}.
\end{align}
The full assumption~\eqref{oletus} will be used (instead of the weaker condition~\eqref{wgl}) only on page~\pageref{ol} to get \eqref{t}.
After this Theorem~\ref{th} follows quite easily by the following lemma (see Section~\ref{end}).
For any $\mathcal S\in\mathfrak F$ define the function $h_\mathcal S:\mathbb H^n\to\mathbb R$ by setting
\begin{align*}
h_\mathcal S(x) = \inf\{\, d(x,Q) + d(Q)\, :\, Q\in\mathcal S\, \}.
\end{align*}
\begin{lemma}
\label{ledh}
If $\mathcal S\in\mathfrak F$ and $x,y\in K_0Q(\mathcal S)$ with $d(x,y) > D^{-2}\min\{h_\mathcal S(x),h_\mathcal S(y)\}$, then
$d(x,y) \leq (1+2\delta)d(P_{V_{Q(\mathcal S)}}(x),P_{V_{Q(\mathcal S)}}(y))$.
\end{lemma}
\begin{proof}
Assume that $d(x,y) > D^{-2}h_\mathcal S(x)$ and choose $Q\in\mathcal S$ such that
\[ d(x,y) > D^{-2}(d(x,Q)+d(Q)). \]
Let $R\in\mathcal S$ be the minimal cube such that $Q\subset R$ and $d(K_0R)\geq d(x,y)$.
Then
\[ d(y,R) \leq d(x,y) + d(x,R) \leq d(x,y) + d(x,Q) < (1+D^2)d(x,y) \]
and $d(R) \leq D^2(1+\alpha)d(x,y)$ by \eqref{D}.
Let $z,w\in V_R$ with $d(x,z) = d(x,V_R)$ and $d(y,w) = d(y,V_R)$.
Choosing $K$ large enough (depending on $K_0$ and $D$) and denoting $P=P_{V_{Q(\mathcal S)}}$ one gets by (F5)
\begin{align*}
&d(P(x),P(y)) \geq d(P(z),P(w)) - d(P(x),P(z)) - d(P(y),P(w)) \\
&\geq d(P(z),P(w)) - d(x,z) - d(y,w) \geq (1+\delta)^{-1}d(z,w) - 2\varepsilon d(R) \\
&\geq (1+\delta)^{-1}(d(x,y) - 2\varepsilon d(R)) - 2\varepsilon d(R)
> ((1+\delta)^{-1} - 4D^2\varepsilon(1+\alpha))d(x,y).
\end{align*}
The claim now follows by choosing $\varepsilon$ small enough (depending on $\delta$).
\end{proof}
\section{Function $g$ for $\mathcal S$}
\label{secS}
In this section we follow \cite[Section 8]{MR1113517} and use the same assumptions and notations as in Section~\ref{secF}.
Let $\mathcal S\in\mathfrak F$ be fixed and assume (in order to simplify notations) that $V_{Q(\mathcal S)}=X_k$.
Then $P_{V_{Q(\mathcal S)}}=P^e_{X_k}$ and $d(p,q)=|p-q|$ for any $p,q\in X_k$.
Because of the latter fact it is natural to denote by $d(F)$ the euclidean diameter of $F$ and by $d(p,F)$ the euclidean distance of $p$ and $F$
for any $F\subset\mathbb R^k$ and $p\in\mathbb R^k$ (though $d$ is a metric in $\mathbb H^n$).
We write $P(x)=(x_1,\dotsc,x_k)$ and $P^{\bot}(x)=(x_{k+1},\dotsc,x_{2n})$ for any $x\in\mathbb H^n$.
Denote also $B^k(p,r) = \{ q\in\mathbb R^k\, :\, |q-p| \leq r \}$ for $p\in\mathbb R^k$ and $r\geq 0$.
The letter $C$ in the calculations in Sections~\ref{secS}--\ref{end} denotes always some constant
but distinct appearances do not necessarily refer to the same constant
(even if they are in the same inequality chain).
Write $h=h_\mathcal S$ for short and define the function $H:\mathbb R^k\to\mathbb R$ by setting
\begin{align*}
H(p) = \inf\{\, h(x) \, :\, P(x)=p\, \}
\end{align*}
and set $Z = \{ x\in E\, :\, h(x)=0 \}$.
We immediately see that
\begin{align}
\label{H}
H(p) = \inf\{\, d(p,P(Q))+d(Q) \, :\, Q\in\mathcal S\, \}
\end{align}
for any $p\in\mathbb R^k$. Namely, the inequality $H(p) \geq \inf\{\, d(p,P(Q))+d(Q) \, :\, Q\in\mathcal S\, \}$ follows from the 1-Lipschitzness of $P$.
The opposite inequality holds because for any $y\in\mathbb H^n$ and $p\in\mathbb R^k$ one can obviously choose $x\in P^{-1}(\{p\})$ such that
$d(x,y)=d(p,P(y))$. Note that $H(p)=0$ if and only if $p\in P(Z)$ (for example by the Bolzano--Weierstrass theorem).
For each $p\in\mathbb R^k\backslash P(Z)$ let $R_p$ be the largest dyadic cube in $\mathbb R^k$
containing $p$ and satisfying
\begin{align}
\label{defR}
20d(R_p) \leq \inf\{\, H(u)\, : u \in R_p\, \}.
\end{align}
Such a cube $R_p$ exists, because $H(p)>0$ and $H$ is continuous (1-Lipschitz).
Let $\{ R_i\, :\, i\in I\} \subset \{ R_p\, : p\in \mathbb R^k\backslash P(Z)\}$ be such that $\{R_i\}_{i\in I}$ covers $\mathbb R^k\backslash P(Z)$ and
$\sisus R_i\cap\sisus R_j = \emptyset$ for distinct $i,j\in I$. Notice that $I$ is countable and $R_i\cap P(Z)=\emptyset$ for any $i\in I$.
By the definition~\eqref{defR} and the 1-Lipschitzness of $H$
\begin{align}
\label{RH}
10d(R_i) \leq H(p) \leq 60d(R_i)\qquad\text{for any $p\in 10R_i$, $i\in I$.}
\end{align}
This gives the following lemma.
\begin{lemma}
\label{leR}
There is a constant $C$ such that whenever $10R_i\cap 10R_j\neq\emptyset$ for $i,j\in I$ then
$C^{-1}d(R_j) \leq d(R_i) \leq Cd(R_j)$.
\end{lemma}
Let $x_0\in Q(\mathcal S)$ be any fixed point. Denote
$U_j = B^k(P(x_0),2^{-j}K_0d(Q(\mathcal S)))$ and
$I_j = \{ i\in I\, :\, R_i\cap U_j\neq\emptyset\}$
for $j\in\mathbb R$.
By \eqref{H} and \eqref{RH} there exist constants $C_0$ (which may depend on $K_0$; indeed one can take $C_0=CK_0$) and $C$ such that
for each $i\in I_0$ there is $Q_i\in\mathcal S$ for which
\begin{align}
\label{Qi1}
&C_0^{-1}d(R_i) \leq d(Q_i) \leq Cd(R_i), \\
\label{Qi2}
&d(P(Q_i) \cup R_i) \leq Cd(R_i).
\end{align}
(In \eqref{Qi2} we use the fact that $P$ is 1-Lipschitz.)
For each $i\in I_0$ let $A_i : \mathbb R^k \to \mathbb R^{2n-k}$ be the affine function whose graph is $V_{Q_i}'$.
By Lemma~\ref{lekulmat} and (F5) (and by choosing $\delta\leq 1$)
\begin{align}
\label{LipB}
\Lip(A_i) \leq \sqrt{(1+\delta)^2-1} < 2\sqrt{\delta}.
\end{align}
\begin{lemma}
\label{leA}
There is a constant $C$ such that whenever $10R_i\cap 10R_j\neq\emptyset$ for $i,j\in I_0$ then
$d(Q_i \cup Q_j) \leq Cd(R_j)$ and
\[ |A_i(p)-A_j(p)| \leq C\sqrt{\varepsilon}d(R_j)\qquad\text{for all $p\in 100R_j$}. \]
\end{lemma}
\begin{proof}
For the first part let $x,y\in Q_i \cup Q_j$. By \eqref{Qi1} and Lemma~\ref{leR} one may assume that $x\in Q_i$ and $y\in Q_j$ are such that $d(x,y)\geq d(Q_j)$.
Since by definition $h(y)\leq d(Q_j)$, Lemma~\ref{ledh}, \eqref{Qi2} and Lemma~\ref{leR} give
\[ d(x,y) \leq (1+2\delta)|P(x)-P(y)| \leq Cd(R_j). \]
Thus, by choosing $K$ large enough depending on $K_0$, \eqref{Qi1} gives $Q_i\subset KQ_j$ (and $Q_j\subset KQ_i$).
Now Lemma~\ref{leQP} (with $Q=Q_j$ and $V=V_{Q_i}$) gives
\begin{align*}
\kulma{V_{Q_j}}{V_{Q_i}} \leq 1+\varepsilon.
\end{align*}
Let $p\in 100R_j$ and $z\in Q_j$. Take $y\in V_{Q_j}$ such that $d(y,z)\leq\varepsilon d(Q_j)$. Then by \eqref{Qi1} and Lemma~\ref{leR}
\[ |y'-P^e_{V_{Q_i}'}(y')| = d_e(y',V_{Q_i}') \leq d(y,z)+d(z,V_{Q_i}) \leq \varepsilon(d(Q_i)+d(Q_j)) \leq C\varepsilon d(R_j) \]
and by \eqref{Qi1} and \eqref{Qi2}
\[ |P(y)-p| \leq |P(y)-P(z)|+|P(z)-p| \leq d(y,z)+ d(P(Q_j)\cup 100R_j) \leq Cd(R_j). \]
Denote $v=(p,A_j(p))$. Using the Pythagorean theorem, the above estimates, Lemma~\ref{lekulmat} and (F5)
\begin{align*}
|v-P^e_{V_{Q_i}'}(v)|^2 &= |v-P^e_{V_{Q_i}'}(y')|^2 - |P^e_{V_{Q_i}'}(v)-P^e_{V_{Q_i}'}(y')|^2 \\
&\leq (|v-y'|+|y'-P^e_{V_{Q_i}'}(y')|)^2 - |P^e_{V_{Q_i}'}(v)-P^e_{V_{Q_i}'}(y')|^2 \\
&\leq (|v-y'|+C\varepsilon d(R_j))^2 - (1+\varepsilon)^{-2}|v-y'|^2 \\
&\leq C\varepsilon d(R_j)^2
\end{align*}
and therefore by (F5)
\[ |A_i(p)-A_j(p)| \leq |v-P^e_{V_{Q_i}'}(v)|+(1+\delta)|P(P^e_{V_{Q_i}'}(v))-P(v)| \leq C\sqrt{\varepsilon}d(R_j). \]
\end{proof}
For each $i\in I_0$ let $\tilde{\phi}_i:\mathbb R^k\to [0,1]$ be a $C^2$ function such that
$\tilde{\phi}_i(p)=1$ for all $p\in 2R_i$, $\tilde{\phi}_i(p)=0$ for all $p\in\mathbb R^k\backslash 3R_i$ and
\begin{align}
\label{phitilde}
\begin{split}
|\partial_j\tilde{\phi}_i| &\leq Cd(R_i)^{-1}, \\
|\partial_j\partial_m\tilde{\phi}_i| &\leq Cd(R_i)^{-2}
\end{split}
\end{align}
for all $j,m\in\{1,\dotsc,k\}$.
Set
\begin{align*}
\phi_i(p) = \frac{\tilde{\phi}_i(p)}{\sum_{j\in I_0}\tilde{\phi}_j(p)}\qquad\text{for any $p\in U_0\backslash P(Z)$, $i\in I_0$}.
\end{align*}
For each $p\in P(Z)$ there is $x(p)$ such that $P^{-1}(\{p\}) \cap K_0Q(\mathcal S)= \{x(p)\}$ by Lemma~\ref{ledh}.
We now define a function $g:U_0\to\mathbb R^{2n-k}$ by setting
\begin{align*}
g(p) = \begin{cases}
\sum\limits_{i\in I_0} \phi_i(p)A_i(p), &\text{if $p\in U_0\backslash P(Z)$} \\
P^{\bot}(x(p)), &\text{if $p\in P(Z)$}.
\end{cases}
\end{align*}
\begin{lemma}
\label{leglip}
The function $g$ is $C\sqrt{\delta}$-Lipschitz.
\end{lemma}
\begin{proof}
By taking $\varepsilon/\delta$ small enough we get as in \cite[equation (8.19)]{MR1113517} that
\begin{align}
\label{g2Rj}
|g(p)-g(q)| \leq 3\sqrt{\delta}|p-q|\qquad\text{for $p,q\in 2R_j \cap U_0$, $j\in I_0$}.
\end{align}
By Lemma~\ref{ledh}
\begin{align}
\label{gPZ}
|P^{\bot}(y)-g(q)| \leq 2\sqrt{\delta(1+\delta)}|P(y)-q|\qquad\text{for $q\in P(Z)$, $y\in Q(\mathcal S)$}.
\end{align}
For the moment let $j\in I_0$, $p\in R_j \cap U_0$, $y\in Q_j$ and $q\in P(Z)$.
Then
\begin{align*}
|g(p)-A_j(p)| \leq C\sqrt{\varepsilon}d(R_j)
\end{align*}
by the definition of $g$ and Lemma~\ref{leA}
(because the supports of the functions $\phi_i$ have bounded overlap by Lemma~\ref{leR}), and
\begin{align*}
|A_j(p)-A_j(P(y))| \leq 2\sqrt{\delta}|p-P(y)| \leq C\sqrt{\delta}d(R_j)
\end{align*}
by \eqref{LipB} and \eqref{Qi2}. By (F5)
\begin{align*}
|A_j(P(y))-P^{\bot}(y)| \leq (2+\delta)\varepsilon d(Q_j).
\end{align*}
Here $H(p) = H(p)-H(q) \leq |p-q|$ (as $H(q)=0$). Thus by \eqref{gPZ}, \eqref{defR}, \eqref{Qi1} and \eqref{Qi2}
(choosing $\varepsilon$ small enough depending on $\delta$ and $K$)
\begin{align}
\label{gPZR}
|g(p)-g(q)| \leq C\sqrt{\delta}|p-q|\qquad\text{for $p\in U_0\backslash P(Z)$, $q\in P(Z)$}.
\end{align}
The lemma now follows easily from \eqref{g2Rj}, \eqref{gPZ} and \eqref{gPZR}.
\end{proof}
\begin{lemma}
\label{leC1}
There is a constant $C=C(K_0)$ such that
$P^{-1}(\{p\}) \cap K_0Q(\mathcal S) \subset CQ_i$
for all $p\in R_i$, $i\in I_0$.
\end{lemma}
\begin{proof}
Let $i\in I_0$, $p\in R_i$, $x\in P^{-1}(\{p\}) \cap K_0Q(\mathcal S)$ and $y\in Q_i$. We may assume that $d(x,y) > d(Q_i)$ (since otherwise $x\in 2Q_i$).
Then $d(x,y) > h(y)$ and Lemma~\ref{ledh} (by choosing $\delta$ small), \eqref{Qi2} and \eqref{Qi1} yield
$d(x,y) \leq 2|p-P(y)| \leq Cd(Q_i)$.
\end{proof}
\begin{lemma}
\label{leC2}
There is a constant $C=C(K_0)$ such that
$h(x) \leq C H(P(x))$ for all $x\in K_0Q(\mathcal S)$.
\end{lemma}
\begin{proof}
Let $x\in K_0Q(\mathcal S)$. By Lemma~\ref{ledh} one may assume that $P(x)\not\in P(Z)$.
Let $i\in I_0$ be such that $P(x)\in R_i$ (such an $i$ exists because $P$ is 1-Lipschitz).
Then
$h(x) \leq d(x,Q_i)+d(Q_i) \leq Cd(Q_i) \leq CH(P(x))$
by the previous lemma, \eqref{Qi1} and \eqref{RH}.
\end{proof}
\begin{lemma}
\label{legappr}
There exists a constant $C$ such that
\[ |P^{\bot}(x)-g(P(x))| \leq C\sqrt{\varepsilon}h(x) \]
for all $x\in K_0Q(\mathcal S)$.
\end{lemma}
\begin{proof}
Let $x\in K_0Q(\mathcal S)$ with $h(x)>0$. Now $H(P(x))>0$ by the previous lemma and so $P(x)\in R_i$ for some $i\in I_0$
and further $x\in CQ_i$ by Lemma~\ref{leC1}. As in the proof of Lemma~\ref{leglip}, choosing $K$ large enough depending on $K_0$
we get
$|A_i(P(x))-P^{\bot}(x)| \leq (2+\delta)\varepsilon d(Q_i)$
by (F5).
Since also
$|g(P(x))-A_i(P(x))| \leq C\sqrt{\varepsilon}d(R_i)$
by the definition of $g$ and Lemma~\ref{leA},
we get the result by \eqref{Qi1} and \eqref{RH}.
\end{proof}
\begin{lemma}
\label{ledA}
There is a constant $C$ such that whenever $10R_i\cap 10R_j\neq\emptyset$ for $i,j\in I_0$ then
\[ |\partial_m(A_i(p)-A_j(p))| \leq C\sqrt{\varepsilon}\qquad\text{for all $m\in\{1,\dotsc,k\}$ and $p\in\mathbb R^k$}. \]
\end{lemma}
\begin{proof}
Since $\partial_m A_i$ is constant for any $m$ and $i$ it is enough to prove the claim for a fixed
$p\in 10R_i\cap 10R_j$. Let $t=d(R_j)$. Then by Lemma~\ref{leA}
\[ |A_i(p+te_m)-A_j(p+te_m)-(A_i(p)-A_j(p))| \leq C\sqrt{\varepsilon}t, \]
which gives the result, because for any $i$ the quotient $t^{-1}(A_i(p+te_m)-A_i(p))$ does not depend on $t$.
\end{proof}
Using \eqref{phitilde} and Lemmas~\ref{leR}, \ref{leA} and \ref{ledA} one gets (see \cite[Lemma 8.22]{MR1113517})
\begin{lemma}
\label{leddg}
There is a constant $C$ such that
\[ |\partial_j\partial_mg(p)| \leq \frac{C\sqrt{\varepsilon}}{d(R_i)}\qquad\text{for all $j,m\in\{1,\dotsc,k\}$ and $p\in 2R_i\cap\sisus U_0$, $i\in I_0$}. \]
\end{lemma}
\begin{lemma}
\label{legsup}
There is a constant $C$ such that
$|g(p)| \leq CK_0\sqrt{\delta}d(Q(\mathcal S))$ for all $p\in U_0$.
\end{lemma}
\begin{proof}
Let $p\in U_0$, $i\in I_0$, $x\in Q_i$ and $y\in V_{Q_i}$ with $d(x,y)\leq\varepsilon d(Q_i)$.
By Lemma~\ref{lePdist}
\begin{align*}
|A_i(P(y))| = |P^\bot(y)| \leq |P^\bot(x)| + |P^\bot(y-x)| \leq 3d(x,X_k) + d(x,y) \leq 4\varepsilon d(Q(\mathcal S)).
\end{align*}
Further by \eqref{LipB} (and the Lipschitzness of $P$)
\begin{align*}
|A_i(p)-A_i(P(y))| &\leq 2\sqrt{\delta}|p-P(y)| \leq 2\sqrt{\delta}(|p-P(x)| + |P(x)-P(y)|) \\
&\leq 2\sqrt{\delta}((K_0+1)d(Q(\mathcal S)) + \varepsilon d(Q_i)).
\end{align*}
Thus $|A_i(p)| \leq CK_0\sqrt{\delta}d(Q(\mathcal S))$.
The desired estimate for $|g(p)|$ now follows from Lemma~\ref{leR}.
\end{proof}
\section{Function $\gamma$ for $\mathcal S$}
\label{secgamma}
In this section we use the same assumptions and notations as in Section~\ref{secS}.
Using Lemma~\ref{legsup} we extend $g$ from $U_0$ to a $C\sqrt{\delta}$-Lipschitz
function on $\mathbb R^k$ supported in $U_{-1}$.
For $p\in\mathbb R^k$ and $t>0$ set
\[ \gamma(p,t) = t^{-k-1} \inf_a \int_{B^k(p,t)} |g(u)-a(u)|\, du, \]
where the infimum is taken over all affine functions $a:\mathbb R^k\to\mathbb R^{2n-k}$.
Choosing $\delta$ small one has by Lemma~\ref{leglip}
\begin{align}
\label{gammaM}
\gamma(p,t) \leq 2t^{-k-1} \inf_M \int_{B^k(p,t)} d_e((u,g(u)),M)\, du,
\end{align}
where the infimum is taken over all $k$-planes $M\subset \mathbb R^{2n}$.
We follow \cite[Section 13]{MR1113517} and prove the next lemma.
\begin{lemma}
\label{leg}
Let $T=K_0d(Q(\mathcal S))/2$. There exists a constant~$C=C(K_0)$ such that
\[ \int_0^T \int_{U_1}\gamma(p,t)^2\, dp\, \frac{dt}{t}\leq C\varepsilon\mu(Q(\mathcal S))
+C\varepsilon^{-6k}\int_{K_0Q(\mathcal S)}\int_{h(x)/K_0}^T \beta_1(x,K_0t)^2\, \frac{dt}{t}\, d\mu x. \]
\end{lemma}
Notice that $2R_i \subset U_0$ for all $i\in I_1$ by \eqref{H} and \eqref{defR}.
Using Lemma~\ref{leddg} and Taylor's theorem one gets (see \cite[Lemma 13.7]{MR1113517})
\begin{lemma}
\label{legapu}
There is a constant $C$ such that
\[ \sum_{i\in I_1} \int_0^{d(R_i)} \int_{R_i} \gamma(p,t)^2\, dp\, \frac{dt}{t} \leq C\varepsilon \mu(Q(\mathcal S)). \]
\end{lemma}
We now assume that $p\in U_1$ and $H(p)/60 < t \leq T$.
Choose $z(p,t)\in Q(\mathcal S)$ such that $|p-P(z(p,t))| \leq 60t$ (see \eqref{H})
and let $z\in B(z(p,t),t)\cap E$.
Further let $V_{p,t}\in\mathcal V^k$ be such that
\begin{align}
\label{Vpt}
\int_{B(z,K_0t)} d(x,V_{p,t})\, d\mu x \leq 2(K_0t)^{k+1}\beta_1(z,K_0t).
\end{align}
By \eqref{gammaM}
\begin{align}
\label{apu1}
2^{-1}t^{k+1}\gamma(p,t) \leq \int_{B^k(p,t)} d_e((u,g(u)),V_{p,t}')\, du.
\end{align}
If $u\in B^k(p,t)\cap P(Z)$ then $P^{-1}(\{u\})\cap Q(\mathcal S)=\{x\}$, where $x'=(u,g(u))$ and $h(x)=0$.
Since $z\in K_0Q(\mathcal S)$ (by choosing $K_0\geq 2$),
\[ d(x,z)\leq (1+2\delta)|u-P(z)| \leq (1+2\delta)(|u-p|+|p-P(z(p,t))|+t)\leq Ct \]
by Lemma~\ref{ledh}.
Thus
\begin{align}
\label{apu2}
\int_{B^k(p,t)\cap P(Z)} d_e((u,g(u)),V_{p,t}')\, du \leq \int_{B(z,Ct)} d_e(x',V_{p,t}')\, d\mu x.
\end{align}
Let $I(p,t) = \{\, i\in I_0\, : R_i\cap B^k(p,t)\neq\emptyset\, \}$.
\begin{lemma}
\label{lesg1}
There is a constant $C=C(K_0)$ such that for any $i\in I(p,t)$
\[ d_e((u,g(u)),V_{p,t}') \leq d_e((u,g(u)),V_{Q_i}') + \sup\{\, d_e(w',V_{p,t}')\, :\, w\in V_{Q_i} \cap Q_i(Cd(Q_i))\, \} \]
for all $u\in R_i \cap U_0$.
\end{lemma}
\begin{proof}
Let $i\in I(p,t)$ and $u\in R_i$. Denote $y=(u,g(u))$. Let $w\in V_{Q_i}$ be such that $|y-w'| = d_e(y,V_{Q_i}')$.
Further let $v\in V_{p,t}$ be such that $|w'-v'| = d_e(w',V_{p,t}')$. Then
\[ d_e(y,V_{p,t}') \leq |y-v'| \leq |y-w'|+|w'-v'| \leq d_e(y,V_{Q_i}') + d_e(w',V_{p,t}'). \]
By the definition of $g$ and Lemma~\ref{leA}
\[ |u-P(w)| \leq |y-w'| \leq |y-(u,A_i(u))| \leq C\sqrt{\varepsilon}d(R_i). \]
Choosing $q\in Q_i$ and $v_q\in V_{Q_i}$ with $d(v_q,q)\leq \varepsilon d(Q_i)$ one has
by (F5), \eqref{Qi2} and \eqref{Qi1}
\begin{align*}
d(w,q) &\leq d(w,v_q)+d(v_q,q) \leq (1+\delta)|P(w)-P(v_q)|+d(v_q,q) \\
&\leq (1+\delta)(|P(w)-u| + |u-P(q)| + |P(q)-P(v_q)|)+d(v_q,q) \leq Cd(Q_i).
\end{align*}
\end{proof}
For any $i\in I(p,t)$
\begin{align}
\label{apu3}
\int_{B^k(p,t)\cap R_i} d_e((u,g(u)),V_{Q_i}')\, du \leq \int_{B^k(p,t)\cap R_i} |g(u)-A_i(u)|\, du \leq C\sqrt{\varepsilon}d(R_i)^{k+1}
\end{align}
by Lemma~\ref{leA} and the definitions of $A_i$ and $g$. Recall that $B^k(p,t) \subset U_0$ (because $p\in U_1$ and $t\leq T$).
\begin{lemma}
\label{lesg2}
For any constant $C'$ there is a constant~$C$ such that for any $i\in I(p,t)$
\[ d_e(w',V_{p,t}') \leq C\varepsilon d(R_i) + C\varepsilon^{-3k}\left(\Xint-_{2Q_i} d_e(x',V_{p,t}')^ {1/3}\, d\mu x \right)^3 \]
for all $w\in V_{Q_i} \cap Q_i(C'd(Q_i))$.
\end{lemma}
\begin{proof}
Let $i\in I(p,t)$ and $w\in V_{Q_i} \cap Q_i(C'd(Q_i))$.
If $y_0,\dotsc,y_k\in V_{Q_i} \cap Q_i(\varepsilon d(Q_i))$ are as in Lemma~\ref{leL} with $Q=Q_i$, then obviously (by \eqref{dV})
\[ d_e(w',V_{p,t}') \leq Cd_e(y_{j_0}',V_{p,t}'), \]
where $d_e(y_{j_0}',V_{p,t}') = \max\{ d_e(y_j',V_{p,t}')\, :\, j\in\{0,\dotsc,k \}\}$.
Let $z_0\in Q_i \cap B(y_{j_0},2\varepsilon d(Q_i))$. Then
\[ d_e(y_{j_0}',V_{p,t}') \leq d_e(x',V_{p,t}') + 3\varepsilon d(Q_i) \]
for all $x\in B := B(z_0,\varepsilon d(Q_i))$, and we have
\begin{align*}
\mu(B)d_e(w',V_{p,t}')^{1/3}
&\leq C\int_B \left(d_e(x',V_{p,t}') + 3\varepsilon d(Q_i)\right)^{1/3}\, d\mu x \\
&\leq C\mu(B)\left(3\varepsilon d(Q_i)\right)^{1/3} + C\int_B d_e(x',V_{p,t}')^{1/3}\, d\mu x.
\end{align*}
The claim now follows from the regularity, \eqref{D} and \eqref{Qi1}.
\end{proof}
\begin{lemma}
\label{lesg3}
There is a constant~$C$ such that
\[ \sum_{i\in I(p,t)}\mathcal{L}^k(R_i)\left(\Xint-_{2Q_i} d_e(x',V_{p,t}')^ {1/3}\, d\mu x \right)^3 \leq C\int_{B(z,Ct)} d_e(x',V_{p,t}')\, d\mu x. \]
(Here $\mathcal{L}^k$ is the Lebesgue measure on $\mathbb R^k$.)
\end{lemma}
\begin{proof}
For any $i\in I$ define $N_i:\mathbb H^n\to\mathbb R$ by setting
\begin{align*}
N_i = \sum_{j\in J(i)} \chi_{2Q_j}
\end{align*}
where $J(i) = \{\, j\in I\, :\, \text{$d(R_j) \leq d(R_i)$ and $2Q_i \cap 2Q_j\neq\emptyset$}\, \}$.
Notice that $N_i(x) \geq 1$ for all $x\in 2Q_i$, $i\in I$.
Let $l,m\in I$. If $2Q_l \cap 2Q_m\neq\emptyset$ and $d(R_l)\leq d(R_m)$, then
$d(R_l \cup R_m) \leq d(P(2Q_l) \cup R_l)+d(P(2Q_m) \cup R_m) \leq Cd(R_m)$
by \eqref{Qi1} and \eqref{Qi2}. Hence by \eqref{D}
\begin{align*}
\int_{2Q_i} N_i(x)\, d\mu x \leq \sum_{j\in J(i)} \mu(2Q_j) \leq C\sum_{j\in J(i)} \mathcal{L}^k(R_j) \leq C\mathcal{L}^k(R_i),
\end{align*}
and further by H\"older's inequality
\begin{align}
\label{apu4}
\left( \int_{2Q_i} d_e(x',V_{p,t}')^ {1/3}\, d\mu x \right)^3 \leq C\mathcal{L}^k(R_i)^2\int_{2Q_i} d_e(x',V_{p,t}') N_i(x)^{-2}\, d\mu x
\end{align}
for any $i\in I$.
If $x\in 2Q_l \cap 2Q_m$ and $N_l(x)=N_m(x)$ for $l,m\in I$, then by the definition necessarily $d(R_l)=d(R_m)$ and further (see above)
\begin{align}
\label{apu5}
\sum_{i\in I} \chi_{2Q_i}(x)N_i(x)^{-2} = \sum_{m=1}^\infty\biggl(m^{-2}\sum_{i\in J(x,m)} \chi_{2Q_i}(x) \biggr) \leq C,
\end{align}
where $J(x,m)=\{ i\in I\, :\, N_i(x)=m \}$.
By \eqref{apu4}, \eqref{apu5} and \eqref{D}
\begin{align}
\label{apu6}
\begin{split}
&\sum_{i\in I(p,t)}\mathcal{L}^k(R_i)\left(\Xint-_{2Q_i} d_e(x',V_{p,t}')^ {1/3}\, d\mu x \right)^3 \\
&\leq C\sum_{i\in I(p,t)}\int_{2Q_i} d_e(x',V_{p,t}')N_i(x)^{-2}\, d\mu x
\leq C\int_{\bigcup_{i\in I(p,t)}2Q_i} d_e(x',V_{p,t}')\, d\mu x.
\end{split}
\end{align}
Let $i\in I(p,t)$ and $x\in 2Q_i$.
Since $H(u) \leq H(p)+t \leq 61t$ for all $u\in B(p,t)$, one has $d(R_i)\leq 4t$ by \eqref{defR}.
Thus by \eqref{Qi1} and \eqref{Qi2}
\begin{align*}
|P(x)-P(z)| \leq d(P(2Q_i)\cup R_i) + d(R_i\cup\{p\}) + |p-P(z(p,t))| + t \leq Ct.
\end{align*}
Since $h(x) \leq 2d(Q_i) \leq Ct$ (by the definition of $h$ and \eqref{Qi1}), Lemma~\ref{ledh} implies $d(x,z)\leq Ct$.
(Namely, if $d(x,z) \geq Ct$ then $d(x,z) \leq (1+2\delta)|P(x)-P(z)|$ by Lemma~\ref{ledh}.)
The claim now follows from \eqref{apu6}.
\end{proof}
Now \eqref{apu1}, \eqref{apu2}, Lemma~\ref{lesg1}, \eqref{apu3}, Lemma~\ref{lesg2}, Lemma~\ref{lesg3} and \eqref{Vpt} give
(by choosing $\varepsilon < 1$ and $K_0$ large enough)
\[ \gamma(p,t) \leq C\varepsilon^{-3k}\beta_1(z,K_0t) + C\sqrt{\varepsilon}t^{-k-1}\sum_{i\in I(p,t)} d(R_i)^{k+1} \]
and further by the regularity
\begin{align}
\label{apu7}
\gamma(p,t)^2 \leq C\varepsilon^{-6k}t^{-k}\int_{B(z(p,t),t)} \beta_1(z,K_0t)^2\, d\mu z + C\varepsilon t^{-2(k+1)}\biggl(\sum_{i\in I(p,t)} d(R_i)^{k+1}\biggr)^2
\end{align}
for some constant~$C=C(K_0)$.
If $p\in U_1$, $H(p)/60<t\leq T$ and $z\in B(z(p,t),t)\cap E$ then
$z\in K_0Q(\mathcal S)$ and $|p-P(z)| \leq |p-P(z(p,t))| + |P(z(p,t))-P(z)| \leq Ct$.
We also have $h(z) \leq Ct$.
(Namely, choose $\tilde{u}\in P^{-1}(\{p\})$ with $h(\tilde{u})\leq H(p)+t$.
Then let $u\in Q(\mathcal S)$ be with $d(u,\tilde{u}) \leq h(\tilde{u}) + t$.
If $d(z,u)>h(u)$ then $d(z,u)\leq 2|P(z)-P(u)| \leq 2(|P(z)-p|+d(u,\tilde{u}))\leq Ct$ by Lemma~\ref{ledh}.
In any case $h(z)\leq h(u)+d(z,u) \leq Ct$.)
Thus
\begin{align}
\label{apu8}
\begin{split}
&\int_{U_1}\int_{H(p)/60}^T t^{-k}\int_{B(z(p,t),t)} \beta_1(z,K_0t)^2\, d\mu z\, \frac{dt}{t}\, dp \\
&\leq \int_{K_0Q(\mathcal S)}\int_{h(z)/C}^T t^{-k}\biggl(\int_{B^k(P(z),Ct)}\, dp\biggr) \beta_1(z,K_0t)^2\, \frac{dt}{t}\, d\mu z \\
&\leq C\int_{K_0Q(\mathcal S)}\int_{h(z)/C}^T \beta_1(z,K_0t)^2\, \frac{dt}{t}\, d\mu z
\end{split}
\end{align}
Since $d(p,R_i)\leq t$ and $d(R_i)\leq Ct$ if $p\in U_1$, $H(p)/60<t\leq T$ and $i\in I(p,t)$ (as mentioned after \eqref{apu6})
\begin{align}
\label{apu9}
\begin{split}
&C^{-1}\int_{U_1}\int_{H(p)/60}^T t^{-2(k+1)}\biggl(\sum_{i\in I(p,t)} d(R_i)^{k+1}\biggr)^2\, \frac{dt}{t}\, dp \\
&\leq \int_{U_1}\int_{H(p)/60}^T \sum_{i\in I(p,t)} d(R_i)^{k+1}\, \frac{dt}{t^{k+2}}\, dp
\leq \sum_{i\in I_0} d(R_i)^{k+1} \int_{d(R_i)/C}^T\int_{R_i(t)}\, dp\, \frac{dt}{t^{k+2}} \\
&\leq \sum_{i\in I_0} d(R_i)^{k+1} \int_{d(R_i)/C}^\infty (d(R_i)+2t)^k\, \frac{dt}{t^{k+2}}
\leq C\sum_{i\in I_0} d(R_i)^k \leq C\mu(Q(\mathcal S)),
\end{split}
\end{align}
where the last constant $C$ depends on $K_0$.
The last inequality follows from \eqref{D} because $d(R_i)\leq K_0d(Q(\mathcal S))/20$ by \eqref{H} and \eqref{defR}
and therefore $R_i\subset B^k(P(x_0),2K_0d(Q(\mathcal S)))$ for any $i\in I_0$.
From \eqref{apu7}, \eqref{apu8} and \eqref{apu9} one now gets
\begin{align*}
\int_{U_1}\int_{H(p)/60}^T \gamma(p,t)^2\, \frac{dt}{t}\, dp
\leq C\varepsilon \mu(Q(\mathcal S))
+ C\varepsilon^{-6k}\int_{K_0Q(\mathcal S)}\int_{h(z)/C}^T \beta_1(z,K_0t)^2\, \frac{dt}{t}\, d\mu z.
\end{align*}
Combining this with Lemma~\ref{legapu} we get Lemma~\ref{leg} because
$U_1 \subset P(Z) \cup \bigcup_{i\in I_1} R_i$, $H(P(Z))=\{0\}$ and $60t>H(p)$ by \eqref{RH} whenever $p\in R_i$ with $d(R_i)<t$ and $i\in I_0$.
\section{Estimate for $\mathcal S\in\mathfrak F_3$}
\label{secF3}
In this section we use the same assumptions and notations as in Section~\ref{secF}.
We follow \cite[Sections 14 and 11]{MR1113517} or \cite[Section 5]{MR1709304} and prove the next lemma.
\begin{lemma}
\label{leF3}
Let $\mathcal S\in\mathfrak F_3$. Then
\[ \int_{K_0Q(\mathcal S)}\int_{h(x)/K_0}^{K_0d(Q(\mathcal S))} \beta_1(x,K_0t)^2\, \frac{dt}{t}\, d\mu x > \varepsilon^{6k+1}\mu(Q(\mathcal S)). \]
\end{lemma}
Fix $\mathcal S\in\mathfrak F_3$ and suppose to the contrary that the claim is not true for $\mathcal S$.
Since the translations and the rotations are isometries we can assume that $V_{Q(\mathcal S)}=X_k$ (see Remark~\ref{remProttr}).
We use the same notations as in Section~\ref{secS}.
By Lemma~\ref{leg}
\begin{align}
\label{AT}
\int_0^{K_0d(Q(\mathcal S))/2} \int_{U_1}\gamma(p,t)^2\, dp\, \frac{dt}{t}\leq C(K_0)\varepsilon\mu(Q(\mathcal S)).
\end{align}
Let $\nu:\mathbb R^k\to\mathbb R$ be a radial $C^\infty$ function supported in $B^k(0,1)$ such that
$\int_{\mathbb R^k} f\nu = 0$
for any affine function $f:\mathbb R^k\to\mathbb R$ and
\[ \int_0^\infty |\hat\nu(tp)|^2\, \frac{dt}{t} = 1\]
for all $p\in\mathbb R^k\backslash\{0\}$. Denote $\nu_t(p)=t^{-k}\nu(t^{-1}p)$ for any $p\in\mathbb R^k$ and $t>0$.
Using Calder\'on's formula one can write
\[ g(p) = \int_0^\infty (\nu_t*\nu_t*g)(p)\, \frac{dt}{t} \]
for any $p\in\mathbb R^k$. (Notice that the above integral exists and depends continuously on $p$, because $g$ is
Lipschitz and has compact support.)
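For the reader's convenience, here is the standard verification of Calder\'on's formula on the Fourier side (using that $\nu$ is real and radial, so that $\hat\nu$ is real as well): since $\hat{\nu_t}(\xi)=\hat\nu(t\xi)$,
\begin{align*}
\int_0^\infty \widehat{\nu_t*\nu_t*g}(\xi)\, \frac{dt}{t}
= \hat g(\xi)\int_0^\infty \hat\nu(t\xi)^2\, \frac{dt}{t}
= \hat g(\xi)\int_0^\infty |\hat\nu(t\xi)|^2\, \frac{dt}{t}
= \hat g(\xi)
\end{align*}
for all $\xi\in\mathbb R^k\backslash\{0\}$, by the normalization of $\nu$.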
Set $L=K_0d(Q(\mathcal S))/5$ and write $g=g_1+g_2$, where $g_1,g_2:\mathbb R^k\to\mathbb R^{2n-k}$ are defined by
\begin{align*}
g_1(p) &= \int_L^\infty (\nu_t*\nu_t*g)(p)\, \frac{dt}{t} + \int_0^L (\nu_t*(\chi_{\mathbb R^k\backslash U_1}\cdot(\nu_t*g)))(p)\, \frac{dt}{t}, \\
g_2(p) &= \int_0^L (\nu_t*(\chi_{U_1}\cdot(\nu_t*g)))(p)\, \frac{dt}{t}.
\end{align*}
\begin{lemma}
\label{leg1}
There is a constant $C$ such that $|\partial_jg_1(p)| \leq C\sqrt{\delta}$ and $|\partial_i\partial_jg_1(p)| \leq C\sqrt{\delta}/L$
for any $i,j\in\{1,\dotsc,k\}$ and $p\in U_2$.
\end{lemma}
\begin{proof}
We first notice that
\begin{align}
\label{apu10}
\int_0^L (\nu_t*(\chi_{\mathbb R^k\backslash U_1}\cdot(\nu_t*g)))(p)\, \frac{dt}{t} = 0
\end{align}
for all $p\in U_{9/5} \supset U_2$.
Set
\[ \varphi(q) = \int_L^\infty (\nu_t*\nu_t)(q)\, \frac{dt}{t} \]
for all $q\in\mathbb R^k$.
By \eqref{apu10} one has $g_1(p)=(\varphi*g)(p)$ for all $p\in U_{9/5}$.
Fix $i,j\in\{1,\dotsc,k\}$ and $p\in U_2$.
Since $|\nu_t*\nu_t| \leq Ct^{-k}$ for any $t>0$, one has $|\varphi| \leq CL^{-k}$.
Further $|\nabla\varphi| \leq CL^{-k-1}$.
(Here $C$ depends on $\nu$.)
Particularly
\begin{align}
\label{apu11}
\int_{U_{-1}}|\varphi(p-q)|\, dq \leq C\qquad\text{and}\qquad\int_{U_{-1}}|\partial_i\varphi(p-q)|\, dq \leq \frac{C}{L}.
\end{align}
Since $\varphi$ is bounded and $g$ has compact support, Lemma~\ref{leglip} and the dominated convergence theorem
give $\partial_jg_1(p) = (\varphi*\partial_j g)(p)$. (Notice that $g$ is differentiable almost everywhere by Rademacher's theorem.)
Thus, since $\partial_i\varphi$ is bounded and $\partial_jg$ is compactly supported and bounded,
we further have $\partial_i\partial_jg_1(p) = (\partial_i\varphi*\partial_j g)(p)$.
Since $\partial_j g$ is supported in $U_{-1}$, the claim now follows from Lemma~\ref{leglip} and \eqref{apu11}.
\end{proof}
Define $g_{2,m}$ for any $m\in\mathbb N$ by setting
\begin{align*}
g_{2,m}(p) &= \int_{1/m}^L (\nu_t*(\chi_{U_1}\cdot(\nu_t*g)))(p)\, \frac{dt}{t}
\end{align*}
for all $p\in\mathbb R^k$. Then $g_{2,m}\to g_2$ uniformly as $m\to\infty$. We also have that $g_{2,m}\to g_2$ in $L^2$
because $g_2$ is bounded and $\spt g_{2,m} \subset U_0$ for all $m$.
Now
\begin{align*}
\partial_jg_{2,m}(p) = \int_{1/m}^L (\partial_j\nu_t*(\chi_{U_1}\cdot(\nu_t*g)))(p)\, \frac{dt}{t}
\end{align*}
for any $p\in\mathbb R^k$, $j\in\{1,\dotsc,k\}$ and $m\in\mathbb N$.
Using this we find a constant~$C=C(\nu)$ such that
\begin{align}
\label{apu12}
\int_{\mathbb R^k} |\partial_jg_{2,m}(p)|^2\, dp \leq C\int_{1/m}^L \int_{U_1}\gamma(p,t)^2\, dp\, \frac{dt}{t}
\end{align}
for any $j\in\{1,\dotsc,k\}$ and $m\in\mathbb N$ (see \cite{MR1113517} or one dimensional case \cite[page~863]{MR1709304}).
Particularly $(g_{2,m})_m$ is a bounded sequence in the Sobolev space $W^{1,2}$ (by \eqref{AT}) and
a subsequence of $(\partial_jg_{2,m})_m$ converges weakly in $L^2$ to $\partial_jg_2$.
Thus by \eqref{apu12} and \eqref{AT}
\begin{align}
\label{dleqmu}
\int_{\mathbb R^k} |\partial_jg_2(p)|^2\, dp \leq C(K_0)\varepsilon\mu(Q(\mathcal S))
\end{align}
for any $j\in\{1,\dotsc,k\}$.
Define a function $N:\mathbb R^k\to\mathbb R$ by setting
\[ N(p) = \sup_B\frac{m_B(|g_2-m_B(g_2)|)}{d(B)}, \]
where the supremum is taken over all balls $B$ in $\mathbb R^k$ containing $p$ and having (positive) radius at most $L$.
Here we use the notation $m_B(f)=\Xint-_B f$ for locally integrable functions $f:\mathbb R^k\to\mathbb R$.
From Poincar\'e's inequality and the Hardy-Littlewood maximal inequality one now gets
(see \cite[page~75]{MR1113517} or one dimensional case \cite[Lemma~5.3]{MR1709304})
\begin{align}
\label{Nleqd}
\int_{\mathbb R^k} N(p)^2\, dp \leq C\max_j\int_{\mathbb R^k} |\partial_jg_2(p)|^2\, dp.
\end{align}
Since $g_2|_{U_2}$ is $C\sqrt{\delta}$-Lipschitz by Lemmas~\ref{leglip} and \ref{leg1}, one gets
(see \cite[Lemma~11.8]{MR1113517})
that for any closed ball $B\subset U_2$
\begin{align}
\label{apu13}
\sup_{p\in B} |g_2(p)-m_B(g_2)| \leq C\delta^{\frac{k}{2(k+1)}}d(B)N(q)^{\frac{1}{k+1}}
\end{align}
whenever $q\in B$.
Set $F=\{\, q\in U_3\, :\, N(q)^3 \leq \varepsilon\, \}$.
Using \eqref{apu13}, Lemma~\ref{leg1} and Taylor's theorem one gets (see \cite[Lemma~11.9]{MR1113517}) that
\begin{align}
\label{apu14}
\sup_{p\in B^k(p_0,r)} |g(p)-g(p_0)-Dg_1(p_0)(p-p_0)| \leq C\delta^{\frac{k}{2(k+1)}}\varepsilon^{\frac{1}{3(k+1)}}r + C\sqrt{\delta}L^{-1}r^2
\end{align}
whenever $r\leq L/4$ and $B^k(p_0,r)\cap F\neq\emptyset$.
For any $p\in U_2$ let $\Delta_p\subset\mathbb R^{2n}$ be the $k$-plane which is the graph of the affine function
$q\mapsto g(p)+Dg_1(p)(q-p)$.
\begin{lemma}
\label{lem2F}
If $Q\in m_2(\mathcal S)$ then $d(P(Q),F) > d(Q)$.
\end{lemma}
\begin{proof}
Suppose to the contrary that $Q\in m_2(\mathcal S)$ with $d(P(Q),F) \leq d(Q)$.
Since $\mathcal C(Q) \subset \mathcal G$ by definition of $m_2(\mathcal S)$, we get by (F7) a contradiction $Q\not\in\min(\mathcal S)$ by showing that
$\kulma{V_R}{V_{Q(\mathcal S)}} \leq 1 + \delta$ for all $R\in\mathcal C(Q)$.
For that reason, fix $R \in \mathcal C(Q)$.
If now $2Kd(R) \geq d(Q(\mathcal S))$ then $\kulma{V_R}{V_{Q(\mathcal S)}} \leq 1 + \varepsilon$ by Lemma~\ref{leQP}.
Thus we may assume that $2Kd(R) < D^2\alpha d(Q(\mathcal S))$.
Pick $x\in Q$ and set $r= 3d(Q)$.
Now $r < L/K \leq L/4$ (by \eqref{D} choosing $K_0$ and $K$ large enough) and $B^k(P(x),r) \cap F \neq\emptyset$.
By this, Lemma~\ref{legappr} and \eqref{apu14}
\begin{align}
\label{apu15}
\begin{split}
d_e(y',\Delta_{P(x)}) &\leq | P^\bot(y) - g(P(x)) - Dg_1(P(x))(P(y)-P(x))| \\
&\leq C\sqrt{\varepsilon}h(y) + C\delta^{\frac{k}{2(k+1)}}\varepsilon^{\frac{1}{3(k+1)}}r + C\sqrt{\delta}K^{-1}r
\end{split}
\end{align}
for any $y\in 2Q$.
Let $y_0,\dotsc,y_k\in V_{R} \cap Q(\varepsilon^2 d(R))$ be as in Lemma~\ref{leL} (recalling that $R\in\mathcal G$).
Then by \eqref{dV}, \eqref{D} and \eqref{apu15}
\[ d_e(y_j',\Delta_{P(x)}) \leq C\left( \sqrt{\varepsilon} + \delta^{\frac{k}{2(k+1)}}\varepsilon^{\frac{1}{3(k+1)}} + \sqrt{\delta}K^{-1} \right)d(R) \]
for any $j\in\{1,\dotsc,k\}$.
Further
$d_e(y_{i+1}',L_i') > cd(R)$ for all $i\in\{0,\dotsc,k-1\}$,
where $L_i$ and $c$ are as in Lemma~\ref{leL}.
Thus the euclidean angle between $V_R'$ and $\Delta_{P(x)}$ is less than $\delta/9$ by taking $\varepsilon$ small enough and $K$ large enough depending on $\delta$.
Let $Q^*$ be the minimal cube in $\mathcal S$ such that $Q\subset Q^*$ and $2Kd(Q^*) \geq d(Q(\mathcal S))$.
Then $2Kd(Q^*) < D^2\alpha d(Q(\mathcal S))$ (by \eqref{D}) and by the above argument the angle between $V_{Q^*}'$ and $\Delta_{P(x)}$ is also
less than $\delta/9$.
Now $\kulma{V_{Q^*}}{V_{Q(\mathcal S)}} \leq 1+\varepsilon$ by Lemma~\ref{leQP}. Choosing $\varepsilon/\delta$ small the euclidean angle between
$V_{Q^*}'$ and $V_{Q(\mathcal S)}'$ is less than $\delta/9$ (by Lemma~\ref{lekulmat}).
Thus the angle between $V_R'$ and $V_{Q(\mathcal S)}'$ is less than $\delta/3$ and so $\kulma{V_R}{V_{Q(\mathcal S)}} \leq 1 + \delta$ (by choosing $\delta$ small).
\end{proof}
For each $Q\in m_2(\mathcal S)$ pick $x_Q\in Q$.
By the $5r$-covering lemma we find $\mathcal{T}\subset m_2(\mathcal S)$ such that
the balls $B(x_Q,3d(Q))$, $Q\in\mathcal{T}$, are disjoint and
\[ G := \bigcup_{Q\in m_2(\mathcal S)} Q \subset \bigcup_{Q\in\mathcal{T}} B(x_Q,15d(Q)). \]
Since $d(x_Q,x_{R}) > 3\max\{d(Q),d(R)\} \geq h(x_Q)$ for any distinct $Q,R\in\mathcal{T}$, Lemma~\ref{ledh} gives
(by choosing $\delta$ small) that the balls $B^k(P(x_Q),d(Q))$, $Q\in\mathcal{T}$, are also disjoint.
Further $B^k(P(x_Q),d(Q)) \subset U_3\backslash F$ for any $Q\in m_2(\mathcal S)$ by Lemma~\ref{lem2F}.
Hence by \eqref{Nleqd} and \eqref{dleqmu}
\begin{align*}
\mu(G) &\leq \sum_{Q\in\mathcal{T}}\mu(B(x_Q,15d(Q))) \leq 15^kC_E\sum_{Q\in\mathcal{T}}d(Q)^k \\
&\leq C\mathcal{L}^k\biggl(\bigcup_{Q\in\mathcal{T}}B^k(P(x_Q),d(Q))\biggr) \leq C\mathcal{L}^k(U_3\backslash F) \\
&\leq C\varepsilon^{-2/3}\int_{U_3\backslash F} N(p)^2\, dp \leq C(K_0)\varepsilon^{1/3}\mu(Q(\mathcal S)).
\end{align*}
Choosing $\varepsilon$ small enough, this means that $\mathcal S\not\in\mathfrak F_3$, which is a contradiction.
\section{End of the proof}
\label{end}
In this section we follow \cite[Sections 12 and 16]{MR1113517} and use the same assumptions and notations as in Section~\ref{secbd}
and assume further that \eqref{oletus} is satisfied.
Now the assumptions of Sections~\ref{secF} and \ref{secF3} are also satisfied (see Section~\ref{secbd}).
From now on, the constants $C$ may depend, without special mention, on $\varepsilon$, $K$, $\delta$ and $K_0$.
\begin{lemma}
\label{leF}
There is a constant $C$ such that
\[ \sum_{\substack{\mathcal S\in\mathfrak F \\ Q(\mathcal S)\subset R}} \mu(Q(\mathcal S)) \leq C\mu(R)\qquad\text{for all $R\in\Delta$}. \]
\end{lemma}
\begin{proof}
For any $\mathcal S\in\mathfrak F$ denote
\[ E_\mathcal S = \{\, (x,t) \in K_0Q(\mathcal S) \times ]0,K_0d(Q(\mathcal S))[\, : \, h_\mathcal S(x) < K_0t\, \}. \]
Suppose for the moment that $\mathcal S\in\mathfrak F$ and $(x,t)\in E_{\mathcal S}$. Then
$d(x,Q)+d(Q) < K_0t < K_0^2d(Q(\mathcal S))$ for some $Q\in\mathcal S$. Let $Q^*$ be the minimal cube in $\mathcal S$ such that $Q\subset Q^*$
and $K_0d(Q^*) > t$. Then (by \eqref{D}) $K_0d(Q^*) \leq D^2\alpha t$ and $C\mu(Q^*) > t^k$.
Since trivially $d(x,Q^*) \leq d(x,Q) < K_0t$ we conclude by the regularity that there is a constant $C$ such that
\begin{align}
\label{apu16}
\sum_{\mathcal S\in\mathfrak F} \chi_{E_\mathcal S}(x,t) \leq C\qquad\text{for all $(x,t) \in E\times\mathbb R$}.
\end{align}
Using Lemma~\ref{leF3}, \eqref{apu16}, \eqref{oletus}\label{ol} and \eqref{D} one gets for any $R\in\Delta$
\begin{align*}
\sum_{\substack{\mathcal S\in\mathfrak F_3 \\ Q(\mathcal S)\subset R}} \mu(Q(\mathcal S))
&< \varepsilon^{-6k-1} \sum_{\substack{\mathcal S\in\mathfrak F_3 \\ Q(\mathcal S)\subset R}}
\int_{K_0Q(\mathcal S)}\int_{h_\mathcal S(x)/K_0}^{K_0d(Q(\mathcal S))} \beta_1(x,K_0t)^2\, \frac{dt}{t}\, d\mu x \\
&\leq C\int_0^{K_0d(R)}\int_{K_0R} \beta_1(x,K_0t)^2\, d\mu x\, \frac{dt}{t}
\leq C\mu(R).
\end{align*}
The claim now follows from Lemma~\ref{leF1F2}.
\end{proof}
Theorem~\ref{th} follows from the following lemma.
\begin{lemma}
For any $\eta>0$ there is $C>0$ such that for all $z\in E$ and $r>0$ there is $F\subset\mathbb H^n$ and
a $C$-bilipschitz mapping $f:F\to\mathbb R^k$ such that
$\mu(B(z,r)\backslash F) \leq \eta r^k$.
\end{lemma}
\begin{proof}
Let $\eta>0$, $z\in E$ and $r\in\mathbb R$ with $0<r\leq d(E)$. Let $m_0\in\mathcal Z$ be such that $D\alpha^{m_0-1} < r\leq D\alpha^{m_0}$.
Set
\begin{align*}
\mathcal{R}_0 &= \{\, Q\in\Delta_{m_0}\, :\, Q\cap B(z,r)\neq\emptyset\, \}, \\
\tilde{\mathfrak F} &= \left\{\, \mathcal S \cap \{ Q\, :\, Q\subset R\}\, :\, \text{$\mathcal S\in\mathfrak F$, $R\in\mathcal{R}_0$}\, \right\} \backslash\left\{\emptyset\right\}.
\end{align*}
Further let
\[ \tilde{\Delta} = \bigcup_{j=-\infty}^{m_0} \tilde{\Delta}_j\qquad\text{and}\qquad\tilde{\mathcal G} = \mathcal G\cap\tilde{\Delta}, \]
where
\[ \tilde{\Delta}_j = \biggl\{\, Q\in\Delta_j\, :\, Q\subset\bigcup_{R\in\mathcal{R}_0} R\, \biggr\}. \]
One easily sees that (F1), (F2) and Lemma~\ref{leF} remain valid if $\Delta_j$, $\Delta$, $\mathcal G$ and $\mathfrak F$
are replaced by $\tilde{\Delta}_j$, $\tilde{\Delta}$, $\tilde{\mathcal G}$ and $\tilde{\mathfrak F}$.
(For Lemma~\ref{leF} this is because the new maximal cubes~$Q(\mathcal S)$, $\mathcal S\in\tilde{\mathfrak F}\backslash\mathfrak F$, belong to $\mathcal{R}_0$.)
For any $Q\in\tilde{\Delta}$ denote
\[ \sigma(Q) = \{\, x\in Q\, :\, d(x,E\backslash Q) \leq \tau\alpha^{j_Q}\, \}, \]
where $\tau$ is a small positive constant fixed later.
Let $\mathcal T=\mathcal T_1\cup\mathcal T_2\cup\mathcal T_3$, where $\mathcal T_1=\tilde{\Delta}\backslash\mathcal G$,
$\mathcal T_2=\{\, Q(\mathcal S)\, :\, \mathcal S\in\tilde{\mathfrak F}\, \}$ and $\mathcal T_3=\bigcup_{\mathcal S\in\tilde{\mathfrak F}} \min(\mathcal S)$.
For any $Q\in\mathcal T$ set $\ell(Q) = \#\{\, R\in\mathcal T\, :\, R\neq Q\subset R\, \}$.
Define
\[ F = \biggl( B(z,r) \cap \bigcap_{j\in\mathbb Z}\bigcup_{Q\in\Delta_j} Q \biggr) \backslash \left( F_1\cup F_2 \right), \]
where
\[ F_1 = \bigcup_{Q\in\mathcal T} \sigma(Q)\qquad\text{and}\qquad F_2=\bigcup_{\substack{Q\in\mathcal T \\ \ell(Q) > M}} Q. \]
Using \eqref{D1}, (F1), Lemma~\ref{leF} and \eqref{D6} and choosing $\tau$ small enough and $M$ large enough depending on $\eta$
one gets $\mu(B(z,r)\backslash F)\leq\mu(F_1\cup F_2) \leq \eta r^k$ (see \cite[pages~102--103]{MR1113517}).
To define $f$, we first construct a map $t:\mathcal T\to \mathcal{P}(\mathbb R^k)$ recursively as follows.
First, for each $Q\in\mathcal{R}_0 = \tilde{\Delta}_{m_0}$ let $t(Q)$ be a cube in $\mathbb R^ k$ with side length $\alpha^{j_Q}$.
Since $\#\mathcal{R}_0 \leq 2^ kC_ED^{k+1}$ by the regularity and \eqref{D}, one can choose the cubes $t(Q)$ so that
\begin{align}
\label{talku}
\alpha^{m_0}\leq d(t(Q_1),t(Q_2)) \leq C\alpha^{m_0}
\end{align}
for any distinct $Q_1,Q_2\in\mathcal{R}_0$ where $C$ is a constant (depending only on $C_E$, $D$ and $k$).
Let $Q\in\mathcal T$. Assume by recursion that a cube $t(Q)\subset\mathbb R^k$ has already been defined such that
\begin{align}
\label{rec}
l(t(Q)) = c_1^{\ell(Q)}\alpha^{j_Q},
\end{align}
where $l(G)=d(G)/\sqrt{k}$ for $G\subset\mathbb R^k$ and $c_1>0$ is a small constant to be chosen later.
Assume first that $Q\in\mathcal T_1\cup\mathcal T_3$. Then $\ell(R)=\ell(Q)+1$ for all $R\in\mathcal C(Q)\subset\mathcal T_1\cup\mathcal T_2$.
Since further $j_R<j_Q$ for all $R\in\mathcal C(Q)$ and $\#\mathcal C(Q)\leq D^2\alpha^k$ (by \eqref{D}),
one can choose by \eqref{rec} the cubes $t(R)$, $R\in\mathcal C(Q)$, such that
\begin{align}
\label{apu17}
l(t(R)) &= c_1^{\ell(R)}\alpha^{j_R}, \\
\label{apu18}
t(R) &\subset t(Q), \\
\label{apu19}
d(t(R),t(R_1)) &\geq c_1^{\ell(Q)+1}\alpha^{j_Q}
\end{align}
for all distinct $R,R_1\in\mathcal C(Q)$ provided $c_1$ is small enough (depending on $D$, $\alpha$ and $k$).
Assume now that $Q\in\mathcal T_2$, i.e.\ $Q=Q(\mathcal S)$ for some $\mathcal S\in\tilde{\mathfrak F}$.
Denote $W_Q=V_{Q(\mathcal S_0)}$ where $\mathcal S_0\in\mathfrak F$ is such that $\mathcal S=\mathcal S_0\cap\{ Q\, :\, Q\subset Q_0\}$ for some $Q_0\in\mathcal{R}_0$.
By the 1-Lipschitzness of $P_{W_Q}$, \eqref{D4}, \eqref{dV} and \eqref{rec} there is a function $\phi_Q:W_Q\to\mathbb R^k$ such that
\begin{align}
\label{phi1}
&\phi_Q(P_{W_Q}(Q)) \subset 2^{-1}t(Q), \\
\label{phi2}
&|\phi_Q(p)-\phi_Q(q)| = \frac{1}{4D}c_1^{\ell(Q)}d(p,q)
\end{align}
for all $p,q\in W_Q$.
Here $\lambda G = \{\, x\in G\, :\, d(x,\mathbb R^k\backslash G) \geq (1-\lambda)l(G)/2\, \}$ for $\lambda\in\mathbb R$ and a cube $G\subset\mathbb R^k$.
For any $R\in\min(\mathcal S)\backslash\{Q\}$ let $t(R)$ be a cube satisfying \eqref{apu17} and centered at $\phi_Q(P_{W_Q}(x_R))$,
where $x_R\in R$ is such that (see \eqref{D5} and \eqref{D4})
\begin{align}
\label{x_R}
B(x_R,D^{-2}d(R)) \cap E \subset R.
\end{align}
Then \eqref{apu18} holds for $Q$ and any $R\in\min(\mathcal S)$ by \eqref{phi1} and \eqref{rec} provided $2c_1 \leq \alpha$.
Since by \eqref{x_R} and \eqref{D3} $d(x_{R_1},x_{R_2}) \geq D^{-2}d(R_1) \geq D^{-2}h_{\mathcal S_0}(x_{R_1})$ for any distinct $R_1,R_2\in\min(\mathcal S)$,
one has by \eqref{phi2} and Lemma~\ref{ledh}
\begin{align*}
|\phi_Q(P_{W_Q}(x_{R_1}))-\phi_Q(P_{W_Q}(x_{R_2}))| \geq \frac{c_1^{\ell(Q)}d(x_{R_1},x_{R_2})}{4D(1+2\delta)}
\end{align*}
for any $R_1,R_2\in\min(\mathcal S)$.
Thus by using \eqref{x_R}, \eqref{D} and \eqref{apu17} and choosing $c_1$ small enough
\begin{align}
\label{apug1}
\begin{split}
d(t(R_1),t(R_2)) &\geq \frac{c_1^{\ell(Q)}d(x_{R_1},x_{R_2})}{8D(1+2\delta)} - \frac{\sqrt{k}(l(t(R_1))+l(t(R_2)))}{2}
+ \frac{c_1^{\ell(Q)}d(R_1,R_2)}{8D(1+2\delta)} \\
&\geq \left(\frac{1}{8D^3(1+2\delta)} - \sqrt{k}c_1D\right)c_1^{\ell(Q)}\max_{i=1,2}d(R_i)
+ \frac{c_1^{\ell(Q)}d(R_1,R_2)}{8D(1+2\delta)} \\
&\geq c_1^{\ell(Q)+1}(d(R_1)+d(R_2)+d(R_1,R_2))
\end{split}
\end{align}
for any distinct $R_1,R_2\in\min(\mathcal S)$.
By \eqref{phi2} and the 1-Lipschitzness of $P_{W_Q}$ also
\begin{align}
\label{apug2}
\begin{split}
d(t(R_1),t(R_2)) \leq \frac{c_1^{\ell(Q)}d(x_{R_1},x_{R_2})}{4D} \leq d(R_1)+d(R_2)+d(R_1,R_2)
\end{split}
\end{align}
for any $R_1,R_2\in\min(\mathcal S)$.
For any $x\in F$ let $Q(x)$ be the smallest cube in $\mathcal T$ which contains $x$.
Then $Q(x)\in\mathcal T_2$ and one can define $f:F\to\mathbb R^k$ by setting
\[ f(x) = \phi_{Q(x)}(P_{W_{Q(x)}}(x)) \]
for all $x\in F$.
From now on, let $x,y\in F$ be distinct.
Suppose first that $x,y\in Q$ for some $Q\in\mathcal T_2$. Let $Q = Q(\mathcal S)$ be the smallest such cube.
Assume to begin with that there are distinct $R_1,R_2\in\min(\mathcal S)$ with $x\in R_1$ and $y\in R_2$.
Since $x,y\not\in F_1$ one has by \eqref{D4}
\begin{align}
\label{apu20}
d(x,y) \geq \frac{\tau(d(R_1) + d(R_2))}{3D} + \frac{d(R_1,R_2)}{3}.
\end{align}
Since $f(x)\in t(R_1)$ and $f(y)\in t(R_2)$ by definition of $f$, \eqref{phi1} and \eqref{apu18}, one gets
by using \eqref{apu17}, \eqref{D}, \eqref{apug2} and \eqref{apu20}
\begin{align*}
|f(x)-f(y)| &\leq d(t(R_1))+d(t(R_2))+d(t(R_1),t(R_2)) \\
&\leq D\sqrt{k}c_1\left( d(R_1)+d(R_2)\right) + d(R_1)+d(R_2)+d(R_1,R_2) \\
&\leq Cd(x,y).
\end{align*}
By \eqref{apug1} one also has
\begin{align*}
|f(x)-f(y)| \geq d(t(R_1),t(R_2)) \geq c_1^{\ell(Q)+1}(d(R_1)+d(R_2)+d(R_1,R_2)) \geq c_1^Md(x,y).
\end{align*}
Assume now that $y\in R_2\in\min(\mathcal S)$ and $x\not\in R$ for all $R\in\min(\mathcal S)$.
Since now $f(x) = \phi_Q(P_{W_Q}(x))$, the argument used to establish \eqref{apug1} and \eqref{apug2} also gives
\begin{align*}
c_1^{\ell(Q)+1}(d(R_2)+d(x,R_2)) \leq d(f(x),t(R_2)) \leq d(R_2)+d(x,R_2).
\end{align*}
Since further
\begin{align*}
\frac{\tau d(R_2)}{2D} + \frac{d(x,R_2)}{2} \leq d(x,y) \leq d(R_2) + d(x,R_2),
\end{align*}
and $f(y)\in t(R_2)$, one has
\begin{align*}
c_1^Md(x,y) \leq |f(x)-f(y)| \leq Cd(x,y).
\end{align*}
If $x\not\in R$ and $y\not\in R$ for all $R\in\min(\mathcal S)$, then $f(x) = \phi_Q(P_{W_Q}(x))$ and $f(y) = \phi_Q(P_{W_Q}(y))$.
In this case \eqref{phi2}, Lemma~\ref{ledh} and the 1-Lipschitzness of $P_{W_Q}$ give directly
\begin{align*}
\frac{c_1^M}{4D(1+2\delta)}d(x,y) \leq |f(x)-f(y)| \leq \frac{1}{4D}d(x,y).
\end{align*}
Let now $Q_1$ be the largest cube in $\tilde{\Delta}$ which contains $x$ but not $y$, and denote $Q_0=O(Q_1)$.
Assume that $x,y\in R\in\min(\mathcal S)$ (and that $Q(\mathcal S)$ is still the smallest cube in $\mathcal T_2$ with $x,y\in Q$).
Then necessarily $Q_0=R\in\mathcal T_3$ or $Q_0\in\mathcal T_1$, because otherwise $x,y \in Q_0\not\in\mathcal S\cup\mathcal T_1$ which contradicts the minimality of $Q(\mathcal S)$.
Now $y\in Q_2$ for some $Q_2\in\mathcal C(Q_0)\backslash\{ Q_1\}$ and $Q_1,Q_2\in\mathcal T$.
As before, by \eqref{D4} and the definition of $F$
\begin{align*}
D^{-1}\tau d(Q_1) \leq d(x,y) \leq d(Q_0).
\end{align*}
Further $f(x)\in t(Q_1) \subset t(Q_0)$ and $f(y)\in t(Q_2) \subset t(Q_0)$ (by definition of $f$, \eqref{phi1} and \eqref{apu18}).
Thus by \eqref{rec} (and \eqref{apu17}) and \eqref{D}
\begin{align}
\label{apu21}
|f(x)-f(y)| \leq d(t(Q_0)) \leq c_1^{\ell(Q_0)}\sqrt{k}\alpha^{j_{Q_0}} \leq \sqrt{k}D^2\alpha\tau^{-1}d(x,y),
\end{align}
and by \eqref{apu19} and \eqref{D4}
\begin{align}
\label{apu22}
|f(x)-f(y)| \geq c_1^{\ell(Q_0)+1}\alpha^{j_{Q_0}} \geq c_1^MD^{-1}d(Q_0) \geq c_1^MD^{-1}d(x,y).
\end{align}
Finally assume that there does not exist $Q\in\mathcal T_2$ with $x,y\in Q$.
Then $Q_1\in\mathcal{R}_0$ or $Q_0\in\mathcal T_1$.
In the latter case \eqref{apu21} and \eqref{apu22} are obtained as above.
In the former case
\begin{align*}
D^{-2}\tau\alpha^{m_0} \leq D^{-1}\tau d(Q_1) \leq d(x,y) \leq 2r \leq 2D\alpha^{m_0}
\end{align*}
by \eqref{D} and the definition of $F$.
Further
\begin{align*}
\alpha^{m_0} \leq |f(x)-f(y)| \leq (C+2)\alpha^{m_0}
\end{align*}
by \eqref{talku} and \eqref{rec}.
(This is again because $f(x)\in t(Q_1)$ and $f(y)\in t(Q_2)$, where $Q_2\in\mathcal{R}_0$ is such that $y\in Q_2$.)
This gives
\begin{align*}
\frac{1}{2D}d(x,y) \leq |f(x)-f(y)| \leq \frac{D^2(C+2)}{\tau}d(x,y).
\end{align*}
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
Originally proposed for the study of channels with feedback, directed information (DI) quantifies statistical and temporal dependencies between stochastic processes \cite{massey1990causality}.
It has seen a variety of applications in communications \cite{kramer1998directed}, portfolio theory \cite{permuter2008directed}, computational biology \cite{rao2006inference}, neuroscience \cite{wibral2014directed} and machine learning \cite{zhou2016causal,tiomkin2017unified}. Often, one wishes to optimize the DI over the distribution of some of the involved stochastic processes; e.g., channel capacity with and without feedback is given by the maximized DI over an appropriate set of input distributions~\cite{kim2008coding}.
However, the resulting optimization problem is usually intractable, with analytic solutions available only for a limited class of channels \cite{kim2006feedback,permuter2008capacity, sabag2015feedback,peled2019feedback}.
In the absence of analytic solutions, the capacity may be computed via DI optimization routines, such as Blahut-Arimoto-type algorithms \cite{naiss2012extension,charalambous2016directed} or methods based on Markov decision process (MDP) formulations and dynamic programming \cite{permuter2008capacity, elishco2014capacity} or reinforcement learning \cite{aharoni2020reinforcement} techniques. However, these approaches are only feasible under restrictive structural assumptions on the channel that enable tensorizing the multi-letter DI objective, e.g., unifilar\footnote{A finite-state channel is unifilar if its state evolves as a time-invariant deterministic function of the past input, output, and state tuple.} finite-state channels (FSCs) with feedback and symmetric channels. For more general channels\footnote{or when the feedback capacity itself is not algorithmically (Borel-Turing) computable \cite{grigorescu2022capacity}.}, the capacity can be bounded using the machinery of $Q$-graphs \cite{sabag2020graph, huleihel2021computable}, but tight bounds require an exhaustive search over an exponentially large space. Furthermore, all the aforementioned approaches require full knowledge of the channel probabilistic model, which is often unavailable in practice.
Empirical DI estimators may be employed to obtain approximate solutions, when the channel model is unknown but can be sampled from. For memoryless channels, joint estimation-optimization methods over continuous input spaces were proposed in \cite{letizia2021capacity, mirkarimi2021neural}. The case of channels with memory was recently treated in \cite{tsur2022neural} using the DI neural estimator (DINE) developed therein. The DINE parametrizes the Donsker-Varadhan representation of DI by recurrent neural networks (RNNs), approximates expectations by sample means, and optimizes the resulting objective over the parameter space. To compute the feedback capacity, \cite{tsur2022neural} further proposed an RNN-based generative model for continuous input distributions and jointly optimized it with DINE by propagating gradients through both models. These methods hinge on the end-to-end differentiability of the joint model, which fails to hold for discrete input alphabets. To the best of our knowledge, to date there is no known data-driven approach for optimizing (estimated) DI over discrete alphabets. The goal of this paper is to close this gap.
We propose a new method for optimizing DINE over discrete input alphabets. The input distribution is modeled by an RNN-based probability mass function (PMF) generator. We formulate the DI maximization as an MDP whose policy is modeled by the PMF generator.
Such a formulation allows us to utilize reinforcement learning techniques, and in particular, policy optimization via the policy gradients theorem \cite{sutton1999policy}. Combined with a DINE-based approximation of the MDP reward and a Monte-Carlo (MC) estimate of the policy gradient expression, this results in a tractable policy optimization objective. We then alternate between optimizing this objective and the DINE parameters, which yields an estimation-optimization procedure for the DI rate over discrete-input channels. Importantly, our approach does not rely on any knowledge of the channel transition kernel, but rather on the ability to sample its output.
We apply our approach to three main tasks concerning communication over noisy channels. First, we use the proposed method to estimate the capacity of several channels with memory. In all considered examples, the method either achieves the theoretical capacity value or converges between known upper and lower bounds. Furthermore, we show that the optimized PMF generator corresponds to known capacity-achieving input distributions. Second, we employ the generator to estimate a $Q$-graph \cite{sabag2016single}, that can be plugged into the algorithm from \cite{sabag2020graph} to obtain tight bounds on the feedback capacity of unifilar FSCs.
Finally, we leverage the developed method to perform probabilistic shaping of pulse amplitude modulation (PAM) and quadrature amplitude modulation (QAM) constellations over peak-power-constrained additive white Gaussian noise (AWGN) channels.
Our method yields nontrivial distributions, whose information transmission rate is higher than the one obtained from a uniform distribution, which is typically used in practice \cite{proakis1994communication}.
The remainder of the text is organized as follows.
Section \ref{sec:prel} provides preliminaries and technical background.
Section \ref{sec:di_opt} derives the DI optimizer, while Section \ref{sec:implementation} discusses implementation and algorithmic aspects.
In Section \ref{sec:exp_cap} we present empirical results for channel capacity estimation.
Applications to bounding techniques for the feedback capacity of unifilar FSCs and to probabilistic shaping are the focus of Sections \ref{sec:q_graph} and \ref{sec:prob_shape}, respectively. Proofs are provided in Section \ref{sec:proofs}. Section \ref{sec:conclusion} provides concluding remarks and discusses future directions.
\section{Background and Preliminaries}\label{sec:prel}
\subsection{Notation}
Sets are denoted by calligraphic letters, e.g., $\cX$.
When $\cX$ is finite we use $|\cX|$ for its cardinality.
For any $n\in\NN$, $\cX^n$ is the $n$-fold Cartesian product of $\cX$, while $x^n=(x_1,\dots,x_n)$ denotes an element of $\cX^n$.
For $i,j\in\ZZ$ with $i\leq j$, we use the shorthand $x_i^j:=(x_i,\dots,x_j)$; the subscript is omitted when $i=1$.
We denote by $(\Omega,\cF,\PP)$ the underlying probability space on which all random variables are defined, which is assumed to be sufficiently rich. Expectations are denoted by $\EE$; we sometimes write $\EE_P$ to stress that the underlying distribution is $P$.
The set of all Borel probability measures on $\cX$ is denoted by $\cP(\cX)$ and the $k$-dimensional probability simplex is denoted $\Delta_k$.
When $\cX$ is countable, we use $p$ for the PMF associated with $P\in\cP(\cX)$. Random variables are denoted by upper-case letters, e.g., $X$, and stochastic processes are denoted by blackboard bold letters, e.g., $\XX:=(X_i)_{i\in\NN}$.
For $P,Q\in\cP(\cX)$ such that $P\ll Q$, i.e., $P$ is absolutely continuous w.r.t. $Q$, we denote the Radon-Nikodym derivative of $P$ w.r.t. $Q$ by $\frac{dP}{dQ}$.
The KL divergence between $P$ and $Q$, with $P\ll Q$, is
$\DKL(P\|Q):=\EE_P\big[\log\frac{\mathrm{d}P}{\mathrm{d}Q}\big]$.
The MI between $(X,Y)\sim P_{XY}\in\cP(\cX\times \cY)$ is $\sI(X;Y) := \DKL(P_{XY}\|P_X\otimes P_Y)$, where $P_X$ and $P_Y$ are the marginals of $P_{XY}$.
The entropy of a discrete random variable $X\sim P$ is $H(X) := -\EE\left[\log p(X)\right]$.
\subsection{Directed Information and Channel Capacity}\label{subsec:di}
Originally proposed by Massey \cite{massey1990causality}, DI quantifies the amount of information one sequence of random variables causally conveys about another.
\begin{definition}[Directed information]
Let $(X^n,Y^n)\sim P_{X^n Y^n}\in\cP(\cX^n\times\cY^n)$.
The DI from $X^n$ to $Y^n$ is
\begin{equation}
\sI(X^n\to Y^n):= \sum_{i=1}^n \sI(X^i;Y_i|Y^{i-1}).
\end{equation}
\end{definition}
For infinite-time horizon, jointly stationary stochastic processes $\XX$ and $\YY$, the DI rate between them is defined as the asymptotic time-averaged DI:
\[
\sI(\XX\to\YY):= \lim_{n\to\infty}\frac{1}{n}\sI(X^n\to Y^n).
\]
Joint stationarity is indeed sufficient for the existence of this limit \cite{CovThom06}.
Both feedforward and feedback capacities of a sequence of channels $\{P_{Y^n\|X^n}\}_{n\in\NN}$, where $P_{Y^n\|X^n}:=\prod_{i=1}^n P_{Y_i|Y^{i-1}X^i}$, are characterized as\footnote{This formula assumes the so-called information stability property (see \cite{dobrushin1963general}).}
\begin{equation}
C = \lim_{n\to\infty}\sup_{P}\frac{1}{n}\sI(X^n\to Y^n),\label{eq:channel_cap}
\end{equation}
with $P=P_{X^n}$ for feedforward capacity and $P=P_{X^n\|Y^{n-1}}:=\prod_{i=1}^n P_{X_i|X^{i-1}Y^{i-1}}$ (termed the causally conditioned distribution) when feedback is present. Also note that when no feedback is present, we have $\sI(X^n;Y^n)=\sI(X^n\to Y^n)$ \cite{massey1990causality}.
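As a concrete sanity check of the definitions above (not part of the original development), the DI of a short sequence can be computed by brute-force enumeration of the joint PMF; for a memoryless channel with i.i.d. inputs and no feedback it coincides with the MI, e.g., $n(1-h_2(\varepsilon))$ bits for a BSC($\varepsilon$) with uniform inputs. A minimal Python sketch:

```python
import itertools
import math

def marginal(joint, xs, ys):
    # p(x^{|xs|}, y^{|ys|}): marginalize the full joint over the remaining coordinates
    return sum(pr for (x, y), pr in joint.items()
               if x[:len(xs)] == xs and y[:len(ys)] == ys)

def directed_information(joint, n):
    # I(X^n -> Y^n) = sum_i I(X^i; Y_i | Y^{i-1}), in bits, by exhaustive enumeration
    di = 0.0
    for (x, y), pr in joint.items():
        if pr == 0.0:
            continue
        for i in range(1, n + 1):
            di += pr * math.log2(
                marginal(joint, x[:i], y[:i]) * marginal(joint, (), y[:i - 1])
                / (marginal(joint, x[:i], y[:i - 1]) * marginal(joint, (), y[:i])))
    return di

# Joint law of a memoryless BSC(eps) with i.i.d. uniform inputs, n = 2
eps, n = 0.1, 2
joint = {}
for x in itertools.product((0, 1), repeat=n):
    for y in itertools.product((0, 1), repeat=n):
        p = 1.0
        for xi, yi in zip(x, y):
            p *= 0.5 * (eps if xi != yi else 1 - eps)
        joint[(x, y)] = p

di = directed_information(joint, n)
h2 = -(eps * math.log2(eps) + (1 - eps) * math.log2(1 - eps))
# without feedback, I(X^n -> Y^n) = I(X^n; Y^n) = n * (1 - h2(eps)) here
```

The enumeration is exponential in $n$ and is meant only to make the chain-rule decomposition tangible; the estimators discussed below avoid it entirely.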
\subsection{Directed Information Neural Estimation}
The DINE \cite{tsur2022neural} is an RNN-based estimator of $\sI(\XX\to\YY)$ from a sample $\Dn:=(X^n,Y^n)\sim P_{X^nY^n}$.
Its derivation begins with a representation of DI rate as the asymptotic difference of the following KL divergence terms:
\begin{align*}
\sD_{Y\|X}^{N}&:=
\DKL\left(P_{Y^0_{-(N-1)}\| X^0_{-(N-1)}}\middle\| P_{Y^{-1}_{-(N-1)}\| X^{-1}_{-(N-1)}}\otimes P_{\widetilde{Y}}\middle|P_{X^{0}_{-(N-1)}\|Y^{-1}_{-(N-1)}}\right),\\
\sD_{Y}^{N}&:=\DKL\left(P_{Y^0_{-(N-1)}}\middle\| P_{Y^{-1}_{-(N-1)}}\otimes P_{\widetilde{Y}}\right)\numberthis{}{}\label{eq:DI_rep_kls},
\end{align*}
where $\DKL(P_{Y|X}\|Q_{Y|X}|P_X):=\EE_{P_x}[\DKL(P_{Y|X}\|Q_{Y|X})]$.
The estimator utilizes the Donsker-Varadhan (DV) variational form of the KL divergence \cite[Theorem 3.2]{donsker1983asymptotic}, whereby for any $P, Q\in\cP(\cX)$ with $P\ll Q$, we have
\begin{equation}
\DKL\left(P \middle\| Q\right) = \sup_{f: \cX \to \mathbb{R}}\mathbb{E}_P\left[ f \right] -\log\left(\mathbb{E}_Q[ e^{f} ]\right)\label{eq:DV}.
\end{equation}
The supremum is taken over all measurable functions $f$ for which expectations are finite (termed DV potentials) and is achieved by $f^\star:=\log\frac{d P}{d Q}+c$, for any $c\in\RR$.
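For intuition, the DV representation can be checked numerically on a pair of small PMFs: plugging in $f^\star=\log\frac{dP}{dQ}+c$ attains the KL divergence exactly, while any other potential only lower-bounds it. (Illustrative sketch; the distributions below are arbitrary.)

```python
import math
import random

P = [0.5, 0.3, 0.2]
Q = [0.2, 0.3, 0.5]

def dv_value(f):
    # E_P[f] - log E_Q[e^f]
    ep = sum(p * fi for p, fi in zip(P, f))
    eq = sum(q * math.exp(fi) for q, fi in zip(Q, f))
    return ep - math.log(eq)

kl = sum(p * math.log(p / q) for p, q in zip(P, Q))
f_star = [math.log(p / q) + 1.23 for p, q in zip(P, Q)]  # optimal up to any constant c

# f_star attains the supremum; arbitrary potentials stay below the KL divergence
rng = random.Random(0)
gaps = [kl - dv_value([rng.gauss(0, 1) for _ in P]) for _ in range(100)]
```

The shift invariance in $c$ is exactly why the DV potential is only identified up to an additive constant.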
Applying the DV formula \eqref{eq:DV} to the KL divergences in \eqref{eq:DI_rep_kls}, we have
\begin{align}
\sD_{Y\|X}^{N} &= \sup_{f_{xy,N}:\cX^N\times \cY^N\mapsto\RR} \EE
\left[f_{xy,N}(Y^0_{-(N-1)}, X^0_{-(N-1)})\right] - \log\EE
\left[e^{f_{xy,N}(\tilde{Y},Y^{-1}_{-(N-1)},X^{-1}_{-(N-1)})}\right]\label{eq:DV_xy_expectation},\\
\sD_{Y}^{N} &= \sup_{f_{y,N}: \cY^N\mapsto\RR} \EE
\left[f_{y,N}(Y^0_{-(N-1)})\right] - \log\EE
\left[e^{f_{y,N}(\tilde{Y},Y^{-1}_{-(N-1)})}\right], \label{eq:DV_y_expectation}
\end{align}
where the supremum-achieving DV potentials of \eqref{eq:DV_xy_expectation} and \eqref{eq:DV_y_expectation} are respectively given by
\begin{equation}\label{eq:f_star_dv}
f^\star_{y,N} :=\log\frac{p_{Y_0|Y^{-1}_{-N}}}{p_{\tilde{Y}}},\qquad f^\star_{xy,N}:=\log\frac{p_{Y_0|X^0_{-N}Y^{-1}_{-N}}}{p_{\tilde{Y}}}.
\end{equation}
The optimal DV potentials \eqref{eq:f_star_dv} are then approximated by RNNs and expectations are estimated by sample means over jointly distributed input-output sequences. We first define the class of RNNs \cite{jin1995universal}.
\begin{definition}[RNN function class]\label{def:RNN_function_class}
Fix $k,d_i,d_o\in\NN$. The class $\GRNN^{(d_i,d_o,k)}$ of RNNs with $k$ neurons and input-output dimensions $(d_i,d_o)$ is the set of discrete-time, nonlinear systems with the following structure:
\[
\begin{split}
s_{t+1} &= -\alpha s_{t} + \mathrm{A}\sigma(s_t+\mathrm{B}x_{t}),\\
y_t &= \mathrm{C} s_t,
\end{split}
\]
where $s_t\in \RR^k$, $x_t\in\RR^{d_i}$, and $y_t\in\RR^{d_o}$ are, respectively, the state, input, and output (column) vectors, $\mathrm{A}\in\RR^{k\times k}$, $\mathrm{B}\in\RR^{k\times d_i}$, and $\mathrm{C}\in\RR^{d_o\times k}$ are the associated weight matrices, $\alpha\in(-1,1)$ is a fixed constant for controlling state decay, and $\sigma(x)=\frac{1}{1+e^{-x}}$ is the sigmoid function, which acts on vectors component-wise.
\end{definition}
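The recursion in Definition \ref{def:RNN_function_class} is straightforward to implement; the following pure-Python sketch (with arbitrarily initialized weights, chosen here for illustration only) runs the state update and readout for a few steps:

```python
import math
import random

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

class SimpleRNN:
    # s_{t+1} = -alpha * s_t + A sigma(s_t + B x_t),  y_t = C s_t
    def __init__(self, d_i, d_o, k, alpha=0.5, seed=0):
        rng = random.Random(seed)
        def mat(r, c):
            return [[rng.gauss(0, 0.1) for _ in range(c)] for _ in range(r)]
        self.alpha, self.A, self.B, self.C = alpha, mat(k, k), mat(k, d_i), mat(d_o, k)
        self.s = [0.0] * k  # initial state

    def step(self, x):
        y = matvec(self.C, self.s)  # readout y_t = C s_t
        pre = [si + bx for si, bx in zip(self.s, matvec(self.B, x))]
        self.s = [-self.alpha * si + ai
                  for si, ai in zip(self.s, matvec(self.A, sigmoid(pre)))]
        return y

rnn = SimpleRNN(d_i=2, d_o=1, k=8)
outputs = [rnn.step([1.0, 0.0]) for _ in range(5)]
```

With a zero initial state the first readout is exactly zero, after which the nonlinear state dynamics take over.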
Note that $\GRNN^{(d_i,d_o,k)}$ is a parametric class whose (finitely many) parameters belong to some parameter space $\Theta\subset\RR^d$, for an appropriate dimension $d$. When $k$ is fixed, we interchangeably denote functions from the above class explicitly as $g\in\cG_k^{(d_i,d_o)}$, or in their parametrized form $g_\theta$, where $\theta\in\Theta$. With this notation, the DINE objective is given by \cite{tsur2022neural}
\begin{equation}\label{eq:dine_obj}
\dine(\Dn,\theta_y,\theta_{xy})
:=\sup_{\theta_{xy}\in\Theta_{xy}}\hat{\sD}_{Y\|X}(\Dn,\theta_{xy})-\sup_{\theta_y\in\Theta_y}\hat{\sD}_{y}(\Dn,\theta_y),
\end{equation}
where $\theta_y\in\Theta_y$ and $\theta_{xy}\in\Theta_{xy}$ are the parameters of the RNNs $g_{\theta_y}\in\GRNN^{(d_y,1,k)}$ and $g_{\theta_{xy}}\in \GRNN^{(d_y+d_x,1,k)}$, respectively, the KL divergence estimators are given by
\begin{subequations}
\begin{align}
\hat{\sD}_Y(\Dn, \theta_y) &:= \frac{1}{n}\sum_{i=1}^n{g_{\theta_y}}\left(Y^i\right)-\log\left(\frac{1}{n}\sum_{i=1}^n e^{g_{\theta_y}\left(\widetilde{Y}_i,Y^{i-1}\right)}\right),\\
\hat{\sD}_{Y\|X}(\Dn,\theta_{xy}) &:= \frac{1}{n}\sum_{i=1}^n{g_{\theta_{xy}}}\left(Y^i,X^i\right)-\log\left(\frac{1}{n}\sum_{i=1}^n e^{g_{\theta_{xy}}\left(\widetilde{Y}_i,Y^{i-1},X^i\right)}\right),
\end{align}\label{eq:DINE_KL_est_main}%
\end{subequations}
and $\tilde{Y}^n$ is an i.i.d. sequence drawn from $\mathrm{Unif}(\cY)$.
The DINE is now given by optimizing both KL estimators over the corresponding parameter spaces:
\begin{align*}
\dine(\Dn)
&:=\sup_{\theta_{xy}\in\Theta_{xy}}\inf_{\theta_{y}\in\Theta_{y}}\dine(\Dn,\theta_y,\theta_{xy})\\
&=\sup_{\theta_{xy}\in\Theta_{xy}}\hat{\sD}_{Y\|X}(\Dn,\theta_{xy})-\sup_{\theta_y\in\Theta_{y}}\hat{\sD}_{y}(\Dn,\theta_y).
\end{align*}
The DINE architecture is portrayed in Figure \ref{fig:dine_figs}.
For formal consistency guarantees for DINE, as well as implementation details, the reader is referred to \cite{tsur2022neural}.
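To illustrate the structure of the estimators in \eqref{eq:DINE_KL_est_main} (a sample mean minus a log-mean-exp over negative samples), the sketch below estimates a plain KL divergence from samples, plugging in the known optimal potential in place of a trained RNN; the distributions and sample size are toy choices.

```python
import math
import random

rng = random.Random(0)
P = [0.6, 0.3, 0.1]
Q = [0.2, 0.5, 0.3]

def draw(pmf):
    # inverse-CDF sampling from a finite PMF
    r, acc = rng.random(), 0.0
    for i, p in enumerate(pmf):
        acc += p
        if r < acc:
            return i
    return len(pmf) - 1

# optimal DV potential f* = log(p/q); a DINE run would learn an approximation of it
f_star = [math.log(p / q) for p, q in zip(P, Q)]

n = 100_000
xp = [draw(P) for _ in range(n)]   # samples from P
xq = [draw(Q) for _ in range(n)]   # "negative" samples from Q
dv_hat = (sum(f_star[i] for i in xp) / n
          - math.log(sum(math.exp(f_star[i]) for i in xq) / n))
kl = sum(p * math.log(p / q) for p, q in zip(P, Q))
```

The estimate concentrates around the true divergence at the usual $O(n^{-1/2})$ Monte-Carlo rate.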
\begin{figure}[t]
\begin{subfigure}[b]{0.4\textwidth}
\scalebox{.70}{\input{Figures/dine_dy_arxiv.tex}}
\caption{$\hat{\sD}_Y$ model.}
\label{fig:dine_dy}
\end{subfigure}
\hspace{0.45cm}
\begin{subfigure}[b]{0.4\textwidth}
\scalebox{.70}{\input{Figures/dine_complete.tex}
}
\caption{DINE architecture.}
\label{fig:dine_complete}
\end{subfigure}
\caption{The DINE model. Figure (a) depicts a single KL divergence estimator implementation and Figure (b) presents the complete DINE system.}
\label{fig:dine_figs}
\end{figure}
\subsection{Markov Decision Processes}\label{sec:mdp_def}
MDPs are discrete-time stochastic control processes that are used for sequential decision-making in stochastic systems \cite{bertsekas1976dynamic}. An MDP is described by a tuple $(\cZ,\cU,\cW,P_{W|Z,U},f,r)$, where $\cZ$ and $\cU$ are the state and action spaces, respectively, $W\sim P_{W|Z,U}(\cdot|z,u)\in\cP(\cW)$ is the disturbance given $(z,u)\in\cZ\times\cU$, $r:\cU\times\cZ\to\RR$ is the immediate reward, and the function $f:\cZ\times\cU\times\cW\to\cZ$ describes the state evolution, i.e., $z_{t+1} = f(z_t,u_t,w_t)$.
The action is determined by the stochastic policy $\bm{\pi}=(\pi_t)_{t\in\NN}$, where each $\pi_t$ is a conditional distribution of $U_t$ given $Z_t$, i.e., if $Z_t=z$ then $U_t\sim\pi_t(\cdot|z)\in\Delta_{\cU}$, for each $t\in\NN$.
We consider an \textit{infinite-horizon average-reward} MDP, where the objective is given by
\begin{equation}\label{eq:avg_reward}
\rho(\bm{\pi}) := \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N \EE_{\bm{\pi}}\left[r(U_t,Z_t)\right],
\end{equation}
where the subscript $\bm{\pi}$ emphasizes that it induces the sequence distribution.
The goal of an MDP agent is to find a policy $\bm{\pi}$
that maximizes $\rho(\bm{\pi})$.
While a priori an optimizing $\bm{\pi}$ may be arbitrary, it turns out that for any infinite-horizon MDP, if $|\cZ|<\infty$, then $\argmax_{\bm{\pi}}\rho(\bm{\pi})$ contains a stationary policy \cite{bertsekas1976dynamic}, i.e., such that $\pi_t(\cdot|z)=\pi(\cdot|z)$ for some conditional distribution $\pi$ and all $z\in\cZ$ and $t\in\NN$.
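As a toy instance of \eqref{eq:avg_reward} (all numbers are illustrative and unrelated to any channel), the average reward of a stationary policy on a two-state MDP can be evaluated through the stationary distribution of the induced Markov chain:

```python
# States and actions are {0, 1}; the next state equals the chosen action with
# probability 0.9 (the disturbance flips it with probability 0.1).
policy = {0: [0.2, 0.8], 1: [0.7, 0.3]}   # pi(u|z)

def reward(u, z):
    return z + 0.5 * u

# Markov chain over states induced by the policy: P[z][z']
P = [[0.0, 0.0], [0.0, 0.0]]
for z in (0, 1):
    for u in (0, 1):
        for zn, q in ((u, 0.9), (1 - u, 0.1)):
            P[z][zn] += policy[z][u] * q

# stationary distribution by power iteration
p = [0.5, 0.5]
for _ in range(200):
    p = [sum(p[z] * P[z][zn] for z in (0, 1)) for zn in (0, 1)]

# rho(pi) = sum_z p(z) sum_u pi(u|z) r(u, z)
rho = sum(p[z] * sum(policy[z][u] * reward(u, z) for u in (0, 1)) for z in (0, 1))
```

For ergodic chains like this one, the Cesàro limit in \eqref{eq:avg_reward} coincides with this stationary expectation.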
\section{Directed Information Optimization}\label{sec:di_opt}
We develop a new method for optimizing the DINE over discrete input spaces, leveraging an RNN-based generative model of the input process PMF. To arrive at a tractable optimization objective, we formulate the DI rate optimization problem as an MDP and invoke the policy gradients theorem \cite{sutton1999policy} along with function approximation results and MC methods.
Henceforth, $\XX$ and $\YY$ denote jointly stationary discrete-time stochastic processes whose samples take values in $\cX$ and $\cY$, respectively. Although applicable in general, our method is presented in the context of communication channels, where $\cX$ and $\cY$ are interpreted as the channel input and output spaces, respectively. We assume that the size of the input alphabet $|\cX|=k<\infty$ is known. For simplicity of presentation, we focus on the case where $\cY$ is also finite, and the channel is described by the causally conditional PMF $p_{Y^n\|X^n}$.
Nonetheless, our derivation is independent of the output alphabet, and readily extends to channels with continuous outputs.
\subsection{DI Rate Optimization Problem}
We set up the DI rate optimization problem (see Section \ref{subsec:di}), modeling the input process PMF by a deep generative model as described next. The generative model takes an input-output pair from $\cX\times\cY$ and a simplex vector (that models the current input PMF), and outputs a new simplex vector (the updated input PMF). The PMF generator corresponding to a parameter vector $\phi\in\Phi\subset\RR^d$ is denoted by $h_\phi:\cX\times\cY\times\Delta_k\to\Delta_k$. Since each new simplex vector $p_t^\phi$ is specified by the parameters and the sequence of past input-output pairs $(X^{t-1},Y^{t-1})$, we treat the $t$-th output of $\pmf$ as the model for the conditional PMF of $X_t$ given that past:
\[
p_t^\phi=p^\phi(\cdot|X^{t-1},Y^{t-1}):= h_\phi(X_{t-1}, Y_{t-1},p^\phi_{t-1}),\qquad t\geq 1,
\]
where $(X_s,Y_s)\sim p_s^\phi p_{Y_s|X^sY^{s-1}}$ for each $s\leq t-1$.
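To make the recursion concrete, the sketch below rolls out a hand-crafted (untrained) stand-in for $h_\phi$ against a toy channel; both the update rule and the channel are illustrative placeholders for the learned model and the true kernel.

```python
import random

rng = random.Random(0)
k = 2  # |X| = k input symbols

def h_phi(x_prev, y_prev, p_prev, lam=0.2):
    # placeholder for the learned map: reinforce the last input if the channel
    # output was 1, otherwise relax toward the uniform PMF
    target = ([1.0 if i == x_prev else 0.0 for i in range(k)]
              if y_prev == 1 else [1.0 / k] * k)
    p = [(1 - lam) * a + lam * b for a, b in zip(p_prev, target)]
    s = sum(p)
    return [x / s for x in p]

def channel(x):
    # toy Z-channel: a 0 always gets through, a 1 survives with probability 0.7
    return 0 if x == 0 else (1 if rng.random() < 0.7 else 0)

def sample(p):
    r, acc = rng.random(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if r < acc:
            return i
    return k - 1

p = [1.0 / k] * k  # p_1: uniform initial input PMF
trajectory = []
for t in range(1000):
    x = sample(p)        # X_t ~ p_t^phi
    y = channel(x)       # Y_t ~ p(y|x)
    trajectory.append((x, y, tuple(p)))
    p = h_phi(x, y, p)   # p_{t+1}^phi = h_phi(X_t, Y_t, p_t^phi)
```

The rollout makes explicit that every emitted PMF is a deterministic function of the parameters and the past input-output pairs, which is what licenses treating $p_t^\phi$ as a conditional PMF.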
The goal is now to optimize the DI rate over all input distributions that are modeled by $\pmf$, $\phi\in\Phi$, i.e., to solve
\begin{equation}\label{eq:di_phi_opt}
\sup_{\phi\in\Phi}\sI_{\phi}(\XX\to\YY),
\end{equation}
where the subscript $\phi$ designates that the underlying distribution for each $\sI(X^n\to Y^n)$, $n\in\NN$, in the DI rate expression is $\prod_{t=1}^n \pt p_{Y_t|X^{t}Y^{t-1}}$. Note that $\sI_{\phi}(\XX\to\YY)$ exists due to the stationarity of the joint distribution, and when $\Phi$ is compact, the supremum in \eqref{eq:di_phi_opt} is attained. To solve \eqref{eq:di_phi_opt}, we seek a tractable expression for the DI rate gradient $\nabla_\phi\sI_{\phi}(\XX\to\YY)$. The next section reformulates DI rate optimization as an MDP and employs the policy gradients theorem, alongside a DINE-based approximation of the MDP reward to arrive at an objective whose gradients coincide with those of the above.
The section concludes with a deep reinforcement learning policy optimization methodology for the calculation of \eqref{eq:di_phi_opt}.
\begin{table}[t]
\centering
\caption{DI rate optimization MDP formulation}
\begin{tabular}{||c | c ||}
\hline
MDP & DI optimization \\
\hline \hline
State \vspace{0.02cm}$Z_t$ & $X^{-1}_{-t},Y^{-1}_{-t}$ \\
\hline
Action $U_t$ & $X_0$ \\
\hline
Disturbance $W_t$ & $Y_0$ \\
\hline
Reward $r(U_t,Z_t)$ & Eqn. \eqref{eq:mdp_reward} \\
\hline \hline
\end{tabular}
\label{table:DP}
\end{table}
\subsection{MDP Formulation}
Recall that an MDP is given by the tuple $(\cZ,\cU,\cW,P_{W|Z,U},f,r)$. By the stationarity of the model, we may apply a reverse time-shift operator on each time step, so that the most recent step remains $t=0$ throughout. To obtain an MDP formulation of the DI rate optimization, we take the state as the accumulation of past channel inputs and outputs, i.e., $Z_t=(X^{-1}_{-t},Y^{-1}_{-t})$.\footnote{To ensure that the state space is the same for all $t$, we concatenate the state $Z_t=(X^{-1}_{-t},Y^{-1}_{-t})$ with infinitely many null symbols such that $\cZ$ is a space of half-infinite sequences.}
We view the channel input generator as an agent whose action $U_t=X_0$ at each step is drawn from the parametric policy $\pi_\phi(\cdot|Z_t)=p^\phi_{t}(\cdot|X^{-1}_{-t},Y^{-1}_{-t})$.
The disturbance is the channel output, distributed according to the conditional PMF $p_{Y_0|Y^{-1}_{-t}X^0_{-t}}$, and the immediate reward is given by the conditional expectation
\begin{equation}
r(U_t,Z_t) = \EE\left[\log\left(\frac{p_{Y_0|Y^{-1}_{-t},X^{0}_{-t}}(Y_0|Y^{-1}_{-t},X^{0}_{-t})}{p_{Y_0|Y^{-1}_{-t}}(Y_0|Y^{-1}_{-t})}\right)
\middle|X^{0}_{-t},Y^{-1}_{-t}\right],\label{eq:mdp_reward}
\end{equation}
for which we have $\EE\left[r(U_t,Z_t)\right] = \sI_\phi(X^0_{-t};Y_0|Y^{-1}_{-t})$. For feedforward communication, the MDP formulation remains unchanged, while the optimization is limited to policies that are independent of past channel outputs. The formulation is summarized in Table \ref{table:DP}, and gives rise to the desired MDP characterization (see Section \ref{proof:mdp_formulation} for the proof).
\begin{theorem}[MDP formulation]
\label{theorem:MDP_formulation}
The DI rate optimization problem \eqref{eq:di_phi_opt} is an infinite-horizon average-reward MDP with objective $\rho(\pi_\phi) = \sI_\phi(\XX\to \YY)$.
\end{theorem}
\begin{remark}[Relation to existing MDPs]
MDP formulations of capacity-optimization problems were previously used to calculate the feedback capacity of certain unifilar FSCs \cite{permuter2008capacity,elishco2014capacity,sabag2015feedback}, assuming the channel model is known.
In these formulations the MDP state space is a quantized version of $\Delta_{|\cS|}$, and $\rho(\pi)$ is a single-letter expression, which is optimized using dynamic programming algorithms.
The authors of \cite{tatikonda2008capacity} generalize the unifilar formulation but obtain a generally intractable objective.
Our formulation, on the other hand, can be viewed as a unifying MDP for all channels that have a stationary joint distribution. As in \cite{tatikonda2008capacity}, our MDP cannot be computed with traditional methods and calls for new ideas, utilizing approximation and estimation techniques.
\end{remark}
\begin{remark}[Existence of optimal solution]
Optimal policies are guaranteed for MDPs with finite state and action spaces \cite{bertsekas1976dynamic}. However, when the state space becomes infinite, an optimal policy is no longer guaranteed unless the MDP satisfies certain conditions on its ergodicity (cf. \cite{cavazos1992comparing,arapostathis1993discrete,ross2014introduction,xia2020existence}).
This is the case for prior methodologies that used MDPs to maximize DI, where the MDP state space is a probability simplex \cite{permuter2008capacity,elishco2014capacity,sabag2015feedback, aharoni2020reinforcement, shemuel2021feedback}.
\end{remark}
\subsection{Policy Gradients Theorem}
We leverage the MDP formulation to arrive at a tractable expression for $\nabla_\phi \rho(\pi_\phi)$ using reinforcement learning techniques. This approach is particularly well-suited here since we treat the channel as a black-box that can be sampled given an input sequence, and reinforcement learning offers powerful tools for solving data-driven MDPs.
We invoke the policy gradients theorem \cite[Theorem 1]{sutton1999policy}, that enables expressing $\nabla_\phi \rho(\pi_\phi)$ in terms of $\nabla_\phi \pi_\phi$, which is typically simpler to compute.
\begin{theorem}[Policy gradients]\label{theorem:policy_grads}
Let $(\cZ,\cU,\cW,P_{W|Z,U},f,r)$ be an infinite-horizon average-reward MDP with objective $\rho$, and consider a parameterized stationary policy $\pi_\phi$, where $\phi\in\Phi\subseteq\RR^d$ for some $d\in\NN$. Define the function
\begin{equation}
\sQ^{\pi_\phi}(u,z) :=\sum_{t=1}^\infty\EE\left[r(U_t,Z_t) - \rho(\pi_\phi)\big{|}Z_0=z,U_0=u\right].\label{eq:q_func_def}
\end{equation}
Then,
\begin{equation}\label{eq:policy_grad_thm}
\nabla_\phi\rho(\pi_\phi) = \sum_{z\in\cZ}p_Z^{\pi_\phi}(z)\sum_{u\in\cU}\nabla_\phi\pi_\phi(u,z) \sQ^{\pi_\phi}(u,z),
\end{equation}
where $p_Z^{\pi_\phi}$ is the stationary distribution of the MDP state sequence.
\end{theorem}
The state-action value function $\mathsf{Q}^{\pi_\phi}$ in \eqref{eq:q_func_def} quantifies the expected deviation of future rewards from $\rho(\pi_\phi)$, for a given state-action pair. The policy gradients theorem simplifies $\nabla_\phi \rho(\pi_\phi)$ by representing it in terms of the policy gradient $\nabla_\phi\pi_\phi$ and the function $\mathsf{Q}^{\pi_\phi}$. Using the identity $\partial_x \log f(x) = \frac{\partial_x f(x)}{f(x)}$ in \eqref{eq:policy_grad_thm}, we may further represent
\begin{align*}
\nabla_\phi\rho(\pi_\phi)
&= \sum_{z}p_Z^{\pi_\phi}(z)\sum_{u}\pi_\phi(u,z)\nabla_\phi\log\big(\pi_\phi(u,z)\big) \sQ^{\pi_\phi}(u,z)\\
&= \EE\left[ \nabla_\phi\log\big(\pi_\phi(U,Z)\big) \sQ^{\pi_\phi}(U,Z) \right].\numberthis{}\label{eq:log_der_identity}
\end{align*}
We next use the DINE to approximate $\sQ^{\pi_\phi}$ and arrive at a tractable expression for \eqref{eq:log_der_identity}.
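The log-derivative trick in \eqref{eq:log_der_identity} is easy to verify numerically in a degenerate one-step ("bandit") MDP, where $\sQ^{\pi_\phi}$ reduces to a fixed reward vector and the policy is a softmax over actions (all numbers below are arbitrary):

```python
import math
import random

def softmax(phi):
    m = max(phi)
    e = [math.exp(v - m) for v in phi]
    s = sum(e)
    return [v / s for v in e]

f = [1.0, -0.5, 2.0]      # stand-in for Q^{pi}(u) in a one-step MDP
phi = [0.3, -0.1, 0.2]
pi = softmax(phi)

# exact gradient of rho(phi) = sum_u pi(u) f(u):  d rho / d phi_j = pi_j (f_j - rho)
rho = sum(p * r for p, r in zip(pi, f))
exact = [pi[j] * (f[j] - rho) for j in range(3)]

# score-function Monte Carlo estimate:  E[ grad log pi(U) * f(U) ],
# with grad_j log pi(u) = 1{u = j} - pi_j for a softmax policy
rng = random.Random(1)
n = 200_000
est = [0.0, 0.0, 0.0]
for _ in range(n):
    r, acc, u = rng.random(), 0.0, 2
    for i, p in enumerate(pi):
        acc += p
        if r < acc:
            u = i
            break
    for j in range(3):
        est[j] += ((1.0 if u == j else 0.0) - pi[j]) * f[u] / n
```

The same score-function structure, with $\sQ^{\pi_\phi}$ in place of the fixed rewards, underlies the trajectory-based estimate developed next.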
\subsection{Approximation via DINE}
To compute the desired gradient via the right-hand side (RHS) of \eqref{eq:log_der_identity}, one must evaluate the function $\sQ^{\pi_\phi}$. This, however, requires calculation of the MDP reward \eqref{eq:mdp_reward}, which necessitates knowledge of the channel model. Even if the channel is given, traditional tabular algorithms cannot be efficiently applied due to the size of the MDP state space. We circumvent this by using the DINE \cite{tsur2022neural} to approximate $\sQ^{\pi_\phi}$ based only on samples from the channel.
Fix the PMF generator parameters $\phi\in\Phi$ and consider a dataset drawn from this input PMF and the channel $\Dn = (X^{n},Y^n)\sim \prod_{t=1}^n p^\phi_t p_{Y_t|Y^{t-1}X^t}$. We first take the DINE $\dine(\Dn)$ as a strongly consistent estimate of the true DI rate $\rho(\pi_\phi)$, cf. \cite[Theorem~2]{tsur2022neural}. Then, the function $r(U_t,Z_t)$ in the definition of $\sQ^{\pi_\phi}$ is approximated using the trained DINE RNNs $(g_{\theta_{y}}, g_{\theta_{xy}})$, as follows.
We consider the DV representation of the derived KL divergences in \eqref{eq:DV_xy_expectation} and \eqref{eq:DV_y_expectation}.
Subtracting their supremum-achieving DV potentials \eqref{eq:f_star_dv} yields the likelihood ratio
$f^\star_{xy,N} - f^\star_{y,N} = \log\left(\frac{p_{Y_0|X^0_{-N}Y^{-1}_{-N}}}{p_{Y_0|Y^{-1}_{-N}}}\right)$, so that
\[
\EE[r(U_N,Z_N)] = \EE\left[f^\star_{xy,N}(X^0_{-N},Y^{0}_{-N}) - f^\star_{y,N}(Y^0_{-N})\right].
\]
As the DINE RNNs $(g_{\theta_{xy}},g_{\theta_{y}})$ are optimized to achieve the supremum of the empirical DV forms corresponding to \eqref{eq:DV_xy_expectation} and \eqref{eq:DV_y_expectation}, we define $\hat{r}_{\theta} := g_{\theta_{xy}} - g_{\theta_y}$, and take it as a proxy for $r$. This construction assumes that $\phi\in\Phi$ is fixed and that $g_{\theta_y}$ and $g_{\theta_{xy}}$ have been optimized for the induced joint distribution $\prod_{t=1}^n p^\phi_t p_{Y_t|Y^{t-1}X^t}$. This assumption will be further discussed in Section \ref{sec:implementation}, where the joint optimization algorithm is proposed.
For the last step in approximating $\sQ^{\pi_\phi}$, we observe that
\[
\lim_{t\to\infty}\EE\left[r(U_t,Z_t)\right] = \lim_{t\to\infty}\sI_{\phi}(X^0_{-t};Y_0|Y^{-1}_{-t})= \sI_\phi(\XX\to\YY)=\rho(\pi),
\]
which implies that the difference $r(U_t,Z_t)-\rho(\pi_\phi)$ becomes negligible as $t$ grows. This serves to justify truncating the infinite sum defining $\sQ^{\pi_\phi}$ at some $T<\infty$, which, together with the steps above, yields the approximation
\[
\hat{\sQ}_{\theta,t}(\Dn) := \sum_{i=t}^{t+T-1}
\hat{r}_\theta(Y^{i},X^{i})-\hat{\sI}(\Dn,\theta).
\]
Having this, we estimate the RHS of \eqref{eq:log_der_identity} by replacing the outer expectation with an MC evaluation taken over a long trajectory $(\pt, X_t, Y_t)_{t=1}^n$, which results in
\begin{equation}
\nabla_\phi\underbrace{\frac{1}{n-T}\sum_{t=1}^{n-T}\log \left(p^\phi_t(X_t)\right)\hat{\sQ}_{\theta,t}(\Dn)}_{=:\hat{\sJ}_\theta(\Dn,\phi)}\label{eq:alt_objective}
\end{equation}
as the final approximation of $\nabla_\phi\rho(\pi_\phi)$. This objective is readily differentiable w.r.t. $\phi$ based only on samples from the input generative model and the channel, as desired.
Consequently, the DI optimization scheme alternates between optimizing $\hat{\sI}(\Dn)$ and $\hat{\sJ}_\theta(\Dn,\phi)$, i.e., alternates between improving the approximation of $\hat{r}_\theta$ and policy optimization, respectively.
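Putting the pieces together, the surrogate $\hat{\sJ}_\theta$ in \eqref{eq:alt_objective} is a weighted sum of log-probabilities; the sketch below assembles it from placeholder arrays standing in for the DINE proxy rewards $\hat{r}_\theta$ and the generator log-PMFs (both would come from trained networks in the actual scheme, and the centering of each summand follows the definition in \eqref{eq:q_func_def}):

```python
import math
import random

rng = random.Random(0)
n, T = 100, 10
r_hat = [rng.gauss(0.5, 0.1) for _ in range(n)]   # placeholder for r_hat_theta(Y^i, X^i)
log_p = [math.log(0.5)] * n                       # placeholder for log p_t^phi(X_t)
I_hat = sum(r_hat) / n                            # placeholder DINE estimate of the DI rate

def Q_hat(t):
    # truncated state-action value proxy: centered sum of T future proxy rewards
    return sum(r_hat[i] - I_hat for i in range(t, t + T))

J_hat = sum(log_p[t] * Q_hat(t) for t in range(n - T)) / (n - T)
```

In the actual algorithm only the gradient of this scalar w.r.t. the generator parameters is used; the placeholder arrays here would be differentiable model outputs.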
\begin{remark}[Performance guarantees]
While the DINE is a consistent estimator of DI \cite{tsur2022neural}, formal guarantees for our overall method would require a theoretical account of RNN-based policy optimization combined with MC schemes, which is currently unavailable \cite{lillicrap2015continuous}. Nevertheless, in Section \ref{sec:exp_cap} we show empirically that our approach performs well on all the considered examples.
\end{remark}
\subsection{Estimated Mutual Information Optimizer}\label{sec:mine_opt}
When the channel is memoryless the optimization problem \eqref{eq:di_phi_opt} reduces to maximizing $\sI_\phi(X;Y)$ over input generative models that are elements of the simplex $\Delta_k$.
We can take advantage of the memoryless structure in this special case and employ a simpler model.
First, $\pmf$ no longer needs to depend on past channel input-output pairs and can be taken as $p^\phi=\big(\softmax^1(\phi),\ldots,\softmax^m(\phi)\big)$,~where
\begin{equation}\label{eq:softmax_def}
\softmax^i(\phi) := \frac{\exp(\phi_i)}{\sum_{j=1}^m\exp(\phi_j)}, \qquad i=1,\dots,m
\end{equation}
is the $i$-th output of the softmax function.
Second, we replace the DINE with the MI neural estimator (MINE) \cite{belghazi2018mutual}:
\begin{equation}
\hat{\sI}_{\mathsf{MI}}(\Dn) := \sup_{\theta\in\Theta}\frac{1}{n}\sum_{t=1}^n g_\theta(X_t,Y_t) - \log\left( \frac{1}{n}\sum_{t=1}^n e^{g_\theta(X_t,\bar{Y}_t)}\right),
\end{equation}
where $g_\theta$ is a feedforward neural network, $\Dn=(X^n,Y^n)\stackrel{i.i.d.}{\sim}p^\phi p_{Y|X}$, and $\bar{Y}_t$ are negative samples that are independent of $X_t$ for $t=1,\dots,n$.
MINE is optimized over feedforward networks, which are simpler and often present better convergence profiles.
In addition, MINE lower bounds the ground truth MI \cite[Remark~5]{tsur2022neural} (which is not guaranteed for DINE) and admits non-asymptotic error bounds \cite{sreekumar2021non,sreekumar2022neural}.
In Section \ref{sec:prob_shape} we apply the MI-based optimization scheme to learn probabilistic shaping of constellations for the peak-power constrained AWGN (PP-AWGN) channel.
The resulting differentiation objective takes the form:
\begin{equation}
\hat{\sJ}^{\mathsf{MI}}_\theta(\Dn,\phi) := \frac{1}{n}\sum_{t=1}^{n}\log\left( p^\phi(X_t)\right)\left(g_\theta(X_t, Y_t) - \hat{\sI}_{\mathsf{MI}}(\Dn)\right),\label{eq:alt_objective_mine}
\end{equation}
whose gradient can be simplified as follows (see Section \ref{proof:mine_based_grad} for the proof).
\begin{lemma}\label{lemma:MINE_grad}
The gradient of \eqref{eq:alt_objective_mine} w.r.t. $\phi$ is given by
\begin{equation}
\nabla_\phi\hat{\sJ}^{\mathsf{MI}}_\theta(\Dn,\phi) = \frac{1}{n}\sum_{t=1}^{n}\left(e_{X_t} - p^\phi\right)\left(g_\theta(X_t, Y_t) - \hat{\sI}_{\mathsf{MI}}(\Dn)\right),
\end{equation}
where $e_{X_t}=\big(\mathbbm{1}_{\{X_t=1\}},\ldots,\mathbbm{1}_{\{X_t=m\}}\big)^\intercal$ is the $m$-dimensional standard basis vector whose $X_t$-th entry is 1.
\end{lemma}
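A quick sanity check of Lemma \ref{lemma:MINE_grad}: since each term $e_{X_t}-p^\phi$ sums to zero over its coordinates, the gradient lies in the tangent space of the simplex, i.e., its coordinates sum to zero. The sketch below verifies this numerically, with randomly drawn stand-ins for the critic scores $g_\theta(X_t,Y_t)$ (all numerical values here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 100
phi = rng.normal(size=m)
p = np.exp(phi - phi.max()); p /= p.sum()      # p^phi via softmax
X = rng.choice(m, size=n, p=p)                 # sampled channel inputs
scores = rng.normal(size=n)                    # stand-in for g_theta(X_t, Y_t)
I_hat = scores.mean()                          # stand-in for the MI estimate

# Gradient from the lemma: (1/n) * sum_t (e_{X_t} - p^phi) (g_t - I_hat)
E = np.eye(m)[X]                               # rows are one-hot vectors e_{X_t}
grad = ((E - p) * (scores - I_hat)[:, None]).mean(axis=0)
```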
\section{Implementation and Algorithm}\label{sec:implementation}
\begin{figure}[t]
\centering
\scalebox{.65}{\input{Figures/pmf_generator}}
\caption{The PMF model unrolled for feedback capacity. In the $t$-th step, $S^\phi_t$ is calculated from $(S^\phi_{t-1},X_t, Y_t)$ and then passed for the calculation of both $S^\phi_{t+1}$ and $\pt$.
}
\label{fig:pmf_gen}
\end{figure}
\subsection{PMF Generator}
Recall that the PMF generator calculates a mapping $h_\phi:\cX\times\cY\times\Delta_k\to\Delta_k$, whose output evolves according to $p^\phi_t = h_\phi(X_{t-1},Y_{t-1},p^{\phi}_{t-1})$.
We implement $h_\phi$ with a long short-term memory (LSTM) network.
LSTM is a type of recurrent neural network that uses gating mechanisms to selectively retain, input, output, and forget information over multiple time steps, allowing it to model long-term dependencies in sequential data (see \cite{hochreiter1997long} for more background on LSTMs).
We stack the LSTM network with additional fully-connected (FC) networks to increase the expressiveness of the model. The output layer of $h_\phi$ is given by an $m$-dimensional softmax activation \eqref{eq:softmax_def}.
Denoting the LSTM and FC maps by $g^\phi_1$ and $g^\phi_2$, respectively, the PMF generator output evolution is given by
\begin{equation}
S^\phi_t = g^\phi_1(X_{t-1},Y_{t-1},S^{\phi}_{t-1})\,,\quad
p^\phi_t= \sigma_{\text{sm}}\big(g^\phi_2(S^\phi_t)\big),\label{eq:pmf_state_def}
\end{equation}
where $S^\phi_t$ is the LSTM inner state at time $t$.
When feedforward capacity is considered, $Y_{t-1}$ is omitted from \eqref{eq:pmf_state_def}. The architecture of $h_\phi$ is illustrated in Figure \ref{fig:pmf_gen}.
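The recursion \eqref{eq:pmf_state_def} can be sketched with a single NumPy LSTM cell followed by a linear map and a softmax output (the dimensions, random initialization, and scalar input encoding below are illustrative assumptions; the actual implementation stacks trained LSTM/FC models):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class PMFGenerator:
    """Sketch of h_phi: an LSTM cell (g_1^phi) feeding a linear map (g_2^phi)
    with a softmax output layer, cf. the state recursion in the text."""
    def __init__(self, hidden, m, seed=0):
        rng = np.random.default_rng(seed)
        d = 2 + hidden                       # input: (X_{t-1}, Y_{t-1}) and h
        # one weight matrix per LSTM gate: input, forget, output, cell
        self.W = {g: rng.normal(scale=0.1, size=(hidden, d)) for g in "ifoc"}
        self.b = {g: np.zeros(hidden) for g in "ifoc"}
        self.Wfc = rng.normal(scale=0.1, size=(m, hidden))
        self.h = np.zeros(hidden)            # (h, c) together play the role of S_t
        self.c = np.zeros(hidden)

    def step(self, x_prev, y_prev):
        v = np.concatenate([[x_prev, y_prev], self.h])
        i = sigmoid(self.W["i"] @ v + self.b["i"])   # input gate
        f = sigmoid(self.W["f"] @ v + self.b["f"])   # forget gate
        o = sigmoid(self.W["o"] @ v + self.b["o"])   # output gate
        g = np.tanh(self.W["c"] @ v + self.b["c"])   # candidate cell update
        self.c = f * self.c + i * g
        self.h = o * np.tanh(self.c)
        return softmax(self.Wfc @ self.h)            # p^phi_t

gen = PMFGenerator(hidden=8, m=2)
p1 = gen.step(0, 1)   # p^phi_1 computed from (X_0, Y_0)
```

In the feedforward setting, the `y_prev` argument would simply be dropped from the input vector.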
\subsection{Combined System}
\begin{figure}[t]
\centering
\scalebox{.65}{\input{Figures/complete_di_model}}
\caption{The complete estimation-optimization model. Dashed arrows represent gradient propagation and filled blocks represent parametric models.}
\label{fig:complete_di_system}
\end{figure}
The combined system comprises both the PMF generator and the DINE models, as presented in Figure \ref{fig:complete_di_system}.
We therefore construct a joint estimation-optimization procedure based on alternating maximization between $\dine(\Dn,\theta_y,\theta_{xy})$ and $\hat{\sJ}_\theta(\Dn,\phi)$.
In each iteration, a single model is selected for parameter update, while the parameters of the other are fixed.
Optimizing the DINE improves the approximation accuracy of $\sQ^{\pi_\phi}$ for a fixed input PMF model $h_\phi$, while optimizing $h_\phi$ increases DI, as quantified by the current DINE model.
In each iteration, we perform the following steps:
First, we calculate $p^{\phi,n}$ and a dataset $\Dn$ by sequentially calculating the PMFs, sampling from them, and propagating the sampled inputs through the channel.
Then, we pass $\Dn$ through the DINE and calculate the loss function.
We update the models' parameters using stochastic gradient ascent;
for the $\theta$ update, we calculate $\nabla_{\theta_y,\theta_{xy}}\dine(\Dn,\theta_y,\theta_{xy})$, while for the $\phi$ update, we calculate $\nabla_\phi\hat{\sJ}_\theta(\Dn,\phi)$.
These steps are repeated until a convergence criterion is met or a predetermined number of updates is performed.
Finally, we evaluate the DINE objective on a long sequence of channel inputs and outputs to obtain a numerical estimate of the optimized DI.
See Algorithm~\ref{alg:di_opt} for the full list of steps.
The proposed joint optimization method involves a latent assumption: updating the PMF generator requires an accurate estimate of $\sQ^{\pi_\phi}$ w.r.t. the joint distribution
induced by the current value of $\phi$.
We therefore prioritize the training of the DINE model and apply several updates to the DINE parameters $(\theta_y,\theta_{xy})$ for each update of the PMF generator parameters~$\phi$.
\begin{algorithm}[ht]
\caption{Discrete alphabet DI optimization and estimation}
\label{alg:di_opt}
\textbf{input:} Discrete channel, feedback indicator\\
\textbf{output:} $\dine(\Dn, \NE)$, optimized $h_\phi$.
\algrule
\begin{algorithmic}
\State Initialize $g_{\theta_{y}}, g_{\theta_{xy}}, \NE$ with parameters $\theta_{y}$, $\theta_{xy}$ and $\phi$ and set a learning rate $\gamma$.
\If{feedback indicator}
\State Add feedback link to $h_\phi$
\EndIf
\Repeat
\State Compute $(\Dn, p^{\phi,n})$
\If{training DINE}
\State Compute $\hat{\sD}_{Y}(\Dn, \theta_y)$, $\hat{\sD}_{Y \| X}(\Dn, \theta_{xy})$ according to \eqref{eq:DINE_KL_est_main}
\State Update DINE parameters: \\
\hspace{\algorithmicindent}\hspace{\algorithmicindent}$\theta_{y} \leftarrow \theta_{y} + \gamma\nabla_{\theta_{y}}\hat{\sD}_{Y}(\Dn, \theta_{y})$\\
\hspace{\algorithmicindent}\hspace{\algorithmicindent}$\theta_{xy} \leftarrow \theta_{xy} + \gamma\nabla_{\theta_{xy}}\hat{\sD}_{Y \| X}(\Dn, \theta_{xy})$
\Else \hspace{0.15cm}(Train PMF generator)
\State Compute $\hat{\sJ}_\theta(\Dn, \phi)$ according to \eqref{eq:alt_objective}
\State Update PMF generator parameters: \\
\hspace{\algorithmicindent}\hspace{\algorithmicindent}$\phi \leftarrow \phi + \gamma\nabla_{\phi}\hat{\sJ}_\theta(\Dn, \phi)$
\EndIf
\Until{convergence}
\State MC evaluation of $\dine(\Dn)$ \\
\Return $\dine(\Dn)$ and $h_\phi$
\end{algorithmic}
\end{algorithm}
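To make the alternating scheme concrete on a case with a closed-form answer, the toy below replaces the DINE/PMF-generator pair with the exact plug-in MI of a memoryless BSC and a finite-difference gradient step on the softmax logits; it recovers the BSC capacity $1-h_2(0.1)$ and the uniform optimal input. All modeling choices here are illustrative simplifications of Algorithm~\ref{alg:di_opt}:

```python
import numpy as np

def h2(x):
    """Binary entropy in bits."""
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def mi_of_phi(phi, eps=0.1):
    """Exact I(X;Y) of a BSC(eps) with P(X=1) = softmax(phi)[1]."""
    q = 1.0 / (1.0 + np.exp(phi[0] - phi[1]))   # second softmax coordinate
    return h2(q * (1 - eps) + (1 - q) * eps) - h2(eps), q

phi = np.array([0.0, 1.0])
for _ in range(500):                            # "policy" updates
    grad = np.zeros(2)
    for j in range(2):                          # central finite differences,
        d = np.zeros(2); d[j] = 1e-5            # standing in for the analytic gradient
        grad[j] = (mi_of_phi(phi + d)[0] - mi_of_phi(phi - d)[0]) / 2e-5
    phi = phi + 2.0 * grad                      # gradient ascent step

mi_star, q_star = mi_of_phi(phi)
capacity = 1.0 - h2(0.1)                        # BSC capacity, ~0.531 bits
```

In the actual algorithm the exact MI is unavailable and is replaced by the neural estimate, which is why the estimator updates are interleaved with (and prioritized over) the input-distribution updates.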
\section{Application 1: Capacity Estimation}\label{sec:exp_cap}
We employ Algorithm~\ref{alg:di_opt} to estimate the capacity of various channels with memory.
Performance is measured by comparing the optimized DI estimate with known capacity solutions and/or bounds.
Further, we compare the learned PMF structure with known capacity-achieving coding schemes.
Unlike the approach developed herein, all the considered reference methods require full knowledge of the channel law.
Implementation can be found on \href{https://github.com/DorTsur/discrete_di_optimization}{GitHub}.
\begin{remark}[Capacity optimization problem]
The proposed method aims to calculate the supremum of the DI rate w.r.t. a set of input distributions.
However, the capacity expression \eqref{eq:channel_cap} considers the opposite order of limit and supremum, i.e., taking the limit of the sequence of optimized normalized DI.
This order is known to be interchangeable for FSCs \cite{permuter2009finite}, which encompass most models of channels with memory.
\end{remark}
\subsubsection{\texorpdfstring{\underline{Gilbert-Elliott Channel}}{Gilbert-Elliott Channel}}
The Gilbert-Elliott (GE) channel is a time-varying binary symmetric channel (BSC) whose flip parameter is determined by a latent Markovian state that evolves according to the state diagram in Figure \ref{fig:ge_markov}. In the `Good' state the flip probability is smaller, and thus the channel is better.
While it is straightforward to show that the optimal input is an i.i.d. $\mathrm{Ber}(0.5)$ process, the GE capacity is determined by a limiting expression, rather than a closed form formula \cite{mushkin1989capacity}.
We apply the proposed scheme to estimate the capacity of the GE channel and compare our results with a consistent estimate of $H(\YY|\XX):=\lim_{n\to\infty}\frac{1}{n}H(Y^n|X^n)$ from a long input-output sequence \cite{rezaeian2005computation}.
The method assumes the elements of $\XX$ are distributed according to the capacity-achieving distribution, i.e., $X_t\sim\mathrm{Ber}(0.5)$.
As seen in Figure \ref{fig:GE}, our method achieves the capacity for a variety of state transition values.
\begin{figure}[t]
\begin{subfigure}[ht]{.43\textwidth}
\centering
\scalebox{.75}{\input{Figures/GE_markov}}
\caption{}
\label{fig:ge_markov}
\end{subfigure}
\begin{subfigure}[ht]{.57\textwidth}
\centerline{\includegraphics[trim={10pt 1pt 20pt 20.4pt}, clip, width=\linewidth]{Figures/new_ge_rate.eps}}
\caption{}
\label{fig:GE}
\end{subfigure}
\caption{GE channel. Figure (a) depicts the channel state Markov chain. The transition distributions from ``Good'' to ``Bad'' and vice versa are $\mathrm{Ber}(b)$ and $\mathrm{Ber}(g)$, respectively; (b) presents the estimated capacity versus $b$ (with $g=3b$), compared to estimates obtained from \cite{rezaeian2005computation}.}
\end{figure}
\subsubsection{\texorpdfstring{\underline{Ising Channel}}{Ising Channel}} The binary Ising channel is a unifilar FSC that evolves according to\footnote{When the channel model is known, $S_t$ is known at the encoder for any time step since the channel is unifilar.}
\begin{equation}\label{eq:Ising_eq}
Y_t = \begin{cases}
\sZ_{1/2}(X_t), & \text{if } S_{t-1}=0\\
\sS_{1/2}(X_t),& \text{otherwise}
\end{cases}, \qquad S_t = X_{t-1},
\end{equation}
where $\sZ_{1/2}$ and $\sS_{1/2}$ denote the Z- and S-channels with probability $1/2$.
The authors of \cite{elishco2014capacity} compute the capacity of the Ising channel using dynamic programming algorithms over a quantized state space. We estimate the capacity via Algorithm~\ref{alg:di_opt}, which converges to the ground truth capacity after a relatively small number of iterations; cf. Figure \ref{fig:Ising_convergence}.
We further evaluate our method by analyzing the structure of the learned PMF. We construct a long trajectory $(p^{\phi,n}, x^n, s^n, y^n)$
and perform $k$-means clustering of $p^{\phi,n}$ for $k=4$.
We then examine the evolution of clustered $p_t^{\phi}$ according to past $(p^{\phi,t-1}, x^{t-1}, s^{t-1}, y^{t-1})$.
In this case, $p^\phi_t\in[0,1]$ and is treated as the parameter of a Bernoulli distribution.
The structure of $p^\phi_t$ is presented in Figure \ref{fig:pmf_model_structure}.
We note that the joint estimation-optimization procedure results in a nontrivial input PMF evolution, whose structure coincides with the analytical capacity-achieving coding scheme proposed in \cite{elishco2014capacity}.
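The channel law \eqref{eq:Ising_eq} is simple to simulate; note that it implies $Y_t = X_t$ with certainty whenever $X_t = S_{t-1}$, and $Y_t$ uniform on $\{0,1\}$ otherwise. A minimal sketch, with our conventions for the Z- and S-channels with parameter $1/2$ spelled out in the comments:

```python
import numpy as np

def ising_step(x, s_prev, rng):
    """One step of the binary Ising channel.
    Z-channel (s_prev = 0): input 0 passes clean, input 1 flips to 0 w.p. 1/2.
    S-channel (s_prev = 1): input 1 passes clean, input 0 flips to 1 w.p. 1/2.
    The state after the step is the current input (one-step delayed state)."""
    if s_prev == 0:
        y = 0 if x == 0 else int(rng.random() < 0.5)
    else:
        y = 1 if x == 1 else int(rng.random() < 0.5)
    return y, x

rng = np.random.default_rng(1)
```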
\begin{figure}[t]
\begin{subfigure}[ht]{.47\textwidth}
\centerline{\includegraphics[trim={20pt 1pt 20pt 16.4pt}, clip, width=\linewidth]{Figures/ising_convergence.eps}}
\caption{}
\label{fig:Ising_convergence}
\end{subfigure}
\begin{subfigure}[ht]{.37\textwidth}
\centering
\scalebox{0.9}{\input{Figures/p_struct_Ising_fb}}
\caption{}
\label{fig:pmf_model_structure}
\end{subfigure}
\caption{Ising Channel. Figure (a) presents the DINE loss convergence on the Ising channel with feedback and Figure (b) presents the learned PMF model for $(\alpha_1,\alpha_2) = (0.456, 0.570)$. A blue arrow denotes a transition that occurs for any value of $(x_t,y_t,s_t)$.}
\end{figure}
\subsubsection{\texorpdfstring{\underline{Trapdoor Channel}}{Trapdoor Channel}} The trapdoor channel is a unifilar FSC whose state evolves according to $S_t = S_{t-1}\oplus X_t \oplus Y_{t}$, where $\oplus$ denotes the binary XOR operation, and its output is given by \eqref{eq:Ising_eq}.
We estimate both the feedforward and feedback capacities of this channel via Algorithm~\ref{alg:di_opt}.
For the feedback capacity, the algorithm converges to the analytic solution from \cite{permuter2008capacity}, as presented in Figure~\ref{fig:trapdoor_fb}.
The feedforward capacity of the trapdoor channel is an open problem. Upper and lower bounds on the capacity value were provided in \cite{huleihel2022capacity} and \cite{kobayashi2006capacity}, respectively; the former used bounds on the delayed feedback capacity, while the latter employed a Blahut-Arimoto-type algorithm.
Figure \ref{fig:trapdoor_ff_convergence} shows that the DINE estimate converges between these known bounds. In particular, this yields a new and improved estimate for the feedforward capacity of the trapdoor channel. Averaging over several runs, we arrive at the value $\hat{C}^{\mathsf{FF}}_{\text{Trapdoor}} \approx 0.57246$.
\begin{figure}[t]
\begin{subfigure}[ht]{0.51\textwidth}
\centerline{\includegraphics[trim={20pt 1pt 20pt 16.4pt}, clip, width=\linewidth]{Figures/trapdoor_fb_new.eps}}
\caption{}
\label{fig:trapdoor_fb}
\end{subfigure}
\begin{subfigure}[ht]{.49\textwidth}
\centerline{\includegraphics[trim={20pt 1pt 20pt 16.4pt}, clip, width=\linewidth]{Figures/trapdoor_ff_convergence_new_ub.eps}}
\caption{}
\label{fig:trapdoor_ff_convergence}
\end{subfigure}
\caption{Figures (a) and (b) present DINE loss convergence to the feedback and feedforward trapdoor channel capacities, respectively.
The estimated feedforward capacity is compared with upper and lower bounds from \cite{huleihel2021computable} and \cite{kobayashi2006capacity}, respectively, and the estimated feedback capacity is compared with the analytical solution from \cite{permuter2008capacity}.}
\label{fig:trapdoor}
\end{figure}
\subsubsection{\texorpdfstring{\underline{NOST and POST Channels}}{NOST and POST Channels}} As a last example, we consider the Noisy Output is the STate (NOST) and Previous Output is the STate (POST) channels.
The channel outputs are given by \eqref{eq:Ising_eq}, but with an arbitrary parameter $p\in[0,1]$ (rather than $p=1/2$).
The NOST channel state evolves stochastically according to $S_t= \sZ_\eta(Y_t)$ for $\eta\in[0,1]$, and the POST channel is the special case where $\eta=0$.
We use Algorithm~\ref{alg:di_opt} to estimate the feedforward capacity of the POST channel and the feedback capacity of the NOST channel, considering various values of $p$ and $\eta$, respectively.
Figure \ref{fig:post_nost} compares our results to those from \cite{permuter2014capacity} for the POST channel and \cite{shemuel2021feedback} for the NOST channel.
The authors of \cite{permuter2014capacity} show that both the feedforward and feedback capacities of the $\sf{POST}(p)$ channel are equal to the capacity of the Z channel with the same parameter.
However, the capacity-achieving distribution of the feedforward POST channel can, in general, have infinite memory. This fact, together with the accuracy of our capacity estimates, attests to the expressiveness of our input distribution model.
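The equality between the $\sf{POST}(p)$ capacity and the Z-channel capacity is easy to check numerically: maximizing $\sI(X;Y)$ of the Z-channel over Bernoulli inputs by grid search reproduces the well-known closed form $\log_2\big(1+(1-p)p^{p/(1-p)}\big)$. An illustrative sketch:

```python
import numpy as np

def h2(x):
    """Binary entropy in bits (vectorized)."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def z_channel_mi(q, p):
    """I(X;Y) in bits for a Z-channel with flip parameter p and P(X=1) = q:
    H(Y) - q * h2(p), since the input 0 is received noiselessly."""
    return h2(q * (1 - p)) - q * h2(p)

def z_capacity_numeric(p, grid=200001):
    q = np.linspace(0.0, 1.0, grid)
    return z_channel_mi(q, p).max()

def z_capacity_closed_form(p):
    return np.log2(1.0 + (1.0 - p) * p ** (p / (1.0 - p)))
```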
\begin{figure}[t]
\begin{subfigure}[ht]{.5\textwidth}
\centerline{\includegraphics[trim={12pt 1pt 20pt 15pt}, clip, width=\linewidth]{Figures/POST_journ.eps}}
\caption{}
\label{fig:post}
\end{subfigure}
\begin{subfigure}[ht]{0.5\textwidth}
\centerline{\includegraphics[trim={12pt 1pt 20pt 15pt}, clip, width=\linewidth]{Figures/nost.eps}}
\caption{}
\label{fig:nost}
\end{subfigure}
\caption{Figure (a) presents the POST capacity estimate vs. the channel parameter $p$, compared with the analytical capacity value;
Figure (b) presents the estimated capacity for the NOST channel versus the flip probability $\eta$, compared with the method in \cite{shemuel2021feedback}.}
\label{fig:post_nost}
\end{figure}
\section{Application 2: Feedback Capacity Bound via Q-Graphs}\label{sec:q_graph}
We develop a method for calculating lower and upper bounds on the feedback capacity of unifilar FSCs, building on the tools from \cite{sabag2016single,sabag2020graph}. The method uses the outputs of Algorithm~\ref{alg:di_opt} to treat an optimization problem involving a structured auxiliary random variable, termed the $Q$-graph.
We begin with an introduction to $Q$-graphs and their utility for calculating capacity bounds. We then argue that the complexity of the current graph search approach is prohibitive and propose a $Q$-graph approximation algorithm based on the optimized PMF generator from Algorithm~\ref{alg:di_opt}. We demonstrate the performance of our approach both in terms of the estimated $Q$-graphs and the resulting capacity bounds.
The proposed method couples the capacity estimate with a methodology to calculate upper and lower bounds thereof.
\subsection{\texorpdfstring{Background: $Q$-Graphs}{Background: Q-Graphs}}
For a finite channel output alphabet $\cY$, a $Q$-graph is a directed connected graph with $|\cQ_g|$ nodes and $|\cY|$ distinct outgoing edges from each of the nodes, each uniquely labeled $y\in\cY$.
The node transition on a $Q$-graph is given by a time invariant deterministic function $f_Q:\cQ_g\times\cY\to\cQ_g$.
The authors of \cite{sabag2020graph} showed that for any given $Q$-graph $Q_g$, the feedback capacity of a unifilar FSC can be bounded from both above and below by solving a corresponding optimization problem over some set of input distributions.
Denoting the corresponding bounds by $\underline{\cL}(Q_g)$ and $\overline{\cL}(Q_g)$,
we have \cite[Theorems~2,3]{sabag2016single}:
\begin{equation}\label{eq:ub_lb_q_graph}
\underline{\cL}(Q_g)\leq\sC_{\mathsf{FB}}\leq\overline{\cL}(Q_g),
\end{equation}
where both $\underline{\cL}(Q_g)$ and $\overline{\cL}(Q_g)$ are given by $\sup_{P_{X|S,Q}\in\cP}\sI(X,S;Y|Q)$ but with a different set of distributions $\cP$.
For $\overline{\cL}(Q_g)$, we take $\cP=\cP_Q$, which is the set of input distributions that induce a unique stationary joint process $(S_i,Q_i)_{i\in\NN}$. For $\underline{\cL}(Q_g)$, a subset of $\cP_Q$ that admits some Markov criteria is considered.
In practice, given a graph $Q_g$, both upper and lower bounds are obtained by numerically solving the aforementioned problem (see \cite{sabag2020graph} for more details).
Our objective is to find $Q_g$ that either maximizes $\underline{\cL}(Q_g)$ or minimizes $\overline{\cL}(Q_g)$.\footnote{$|\cQ_g|<\infty$ implies that the respective $\argmin$ and $\argmax$ are non-empty, however, they are generally not guaranteed to coincide or overlap.}
\subsection{Existing Method and its Complexity}
The authors of \cite{sabag2020graph} propose an enumeration-based variant of an exhaustive search to find the best $Q$-graph for a given cardinality $\cQ_g$, termed graph-pooling.
%
%
This method requires solving both upper and lower bound optimization problems for all considered $Q$-graphs.
As discussed in \cite{sabag2020graph}, bounds on the cardinality of $\cQ_g$ are currently unknown. Consequently, the graph-pooling runtime is, in general, unbounded; if the search over all graphs with $|\cQ_g|$ nodes does not yield tight bounds on the feedback capacity, the search continues over all graphs of size $|\cQ_g|+1$.
The next lemma shows that the number of $Q$-graphs that the graph-pooling approach must consider is exponential in $|\cQ_g|$ (see Section \ref{appendix:GP_complex_bound} for the proof).
\begin{lemma}[Complexity lower bound on graph-pooling method]\label{lemma:GP_complex_bound}
Fix $|\cQ_g|$ and let $N_{\sf{GP}}$ be the number of $Q$-graphs of size $|\cQ_g|$ considered in the graph-pooling method. Then,
$N_{\sf{GP}} \geq e^{|\cQ_g|\log |\cQ_g|}$.
\end{lemma}
Lemma \ref{lemma:GP_complex_bound} suggests that finding a good $Q$-graph with large cardinality is computationally burdensome, even if each individual optimization problem has relatively low complexity. To overcome this, we next propose an RNN-based approximation method for potentially optimal $Q$-graphs that uses the outputs of Algorithm~\ref{alg:di_opt} and whose training time is independent of $|\cQ_g|$.
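The scaling in Lemma \ref{lemma:GP_complex_bound} is easy to corroborate for tiny alphabets: the space of all deterministic node-transition maps $f_Q:\cQ_g\times\cY\to\cQ_g$ has $|\cQ_g|^{|\cQ_g||\cY|}$ elements, which already dominates $e^{|\cQ_g|\log|\cQ_g|}=|\cQ_g|^{|\cQ_g|}$. A brute-force enumeration (illustrative only; graph-pooling restricts attention to connected graphs, a subset of the maps counted here):

```python
from itertools import product

def num_transition_maps(nq, ny):
    """Enumerate all maps f: Q x Y -> Q for |Q| = nq and |Y| = ny
    (tractable only for tiny sizes); the count equals nq ** (nq * ny)."""
    return sum(1 for _ in product(range(nq), repeat=nq * ny))

nq, ny = 3, 2
total = num_transition_maps(nq, ny)   # all transition maps
bound = nq ** nq                      # e^{|Q| log |Q|}
```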
\subsection{\texorpdfstring{$Q$-graph Approximation Method}{Q-graph Approximation Method}}
A natural choice for a $Q$-graph is $Q_g=p_{S_t|Y^t}$, as it is the MDP state process in the solution of several unifilar FSCs \cite{permuter2008capacity,elishco2014capacity,sabag2015feedback,shemuel2021feedback}.
For these FSCs it was shown that $p_{S_t|Y^t}$ takes values in a finite subset of $\Delta_{|\cS|}$, which limits the $Q$-graph search space. In situations where such a characterization is not available, our method produces a quantized proxy of $p_{S_t|Y^t}$.
The procedure first approximates $p_{S_t|Y^t}$ using an LSTM network via a supervised learning scheme, and then extracts the graph structure from the learned approximation via $k$-means.
\begin{algorithm}[t]
\caption{$Q$-graph structure and $\CFB$ bounds calculation}
\label{alg:q_est}
\textbf{input:} Discrete channel, optimized PMF generator $\pmf$, and $|\cQ_g|$\\
\textbf{output:} $\hat{C}_{\mathsf{LB}}, \hat{C}_{\mathsf{UB}}$.
\algrule
\begin{algorithmic}
\State Initialize $g_{\psi}$ with parameters $\psi$ and a learning rate $\gamma$.
\State \textbf{Step 1: Train $\bm{g_{\psi}}$}
\Repeat
\State Compute $(S^n,Y^n)$ using $\pmf$ and channel.
\State Compute $\{q^\psi_t\}_{t=1}^n$ using $g_{\psi}$
\State Compute CE loss \eqref{eq:ce_loss} and update $\psi$: \\
\hspace{\algorithmicindent}\hspace{\algorithmicindent}$\psi \leftarrow \psi - \gamma\nabla_{\psi}\cL_{\mathsf{CE}}(Y^n,S^n,\psi)$
\Until{convergence}
\State \textbf{Step 2: Obtain bounds}
\State Generate a sequence $(q^{\psi,n},Y^n)$
\State Perform $k$-means clustering on $q^{\psi,n}$
\State Compute $M_Q$ from $(q^{\psi,n},Y^n)$
\State Compute $\hat{C}_{\mathsf{LB}}, \hat{C}_{\mathsf{UB}}$ from $M_Q$\\
\Return $\hat{C}_{\mathsf{LB}}, \hat{C}_{\mathsf{UB}}, M_Q$
\end{algorithmic}
\end{algorithm}
\subsubsection{\texorpdfstring{\underline{Mapping Approximation}}{Mapping Approximation}} We devise an RNN-based generative model $g_\psi:\Delta_{|\cS|}\times\cY\to \Delta_{|\cS|}$ with parameters $\psi\in\RR^d$.
The model receives a sequence of channel outputs $Y^n$, generated by the optimized PMF model and the channel, and recursively calculates a sequence $(q^\psi_t)_{t=1}^n$.
The model is initialized with $q_0=0$, and evolves according to
\[
q^\psi_t = g_{\psi}(q^\psi_{t-1},Y_t), \quad t=1,\dots,n.
\]
The model is trained to approximate the evolution of $p_{S_t|Y^t}$ by minimizing the \textit{categorical cross entropy}, given by
\begin{equation}\label{eq:ce_loss}
\cL_{\mathsf{CE}}(Y^n,S^n,\psi) :=
-\sum_{t=1}^n e_{S_t}^{\sT}\log q^\psi_t
= -\sum_{t=1}^n \log q^\psi_{t,S_t},
\end{equation}
where $e_{S_t}$ is a one-hot encoding of $S_t$, and $q^\psi_{t,S_t}$ is the $S_t$-th coordinate of $q^\psi_t$.
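The loss \eqref{eq:ce_loss} reduces to indexing the predicted PMFs at the true states. An illustrative NumPy sketch (the clipping guards against $\log 0$):

```python
import numpy as np

def categorical_ce(q_seq, s_seq):
    """Categorical cross-entropy: -sum_t log q_t[S_t], where
    q_seq[t] is the predicted PMF over states at time t."""
    q_sel = np.clip(q_seq[np.arange(len(s_seq)), s_seq], 1e-12, 1.0)
    return -np.sum(np.log(q_sel))

# A perfect one-hot prediction incurs zero loss:
loss0 = categorical_ce(np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([0, 1]))
```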
\subsubsection{\texorpdfstring{\underline{Trajectory Analysis}}{Trajectory Analysis}}
Having an RNN approximation of $p_{S_t|Y^t}$, we aim to extract the underlying graph structure.
To that end, we construct a long sequence $(q^{\psi,n},Y^n)$ and apply $k$-means clustering \cite{jain1988algorithms} to $q^{\psi,n}$ such that each cluster represents an approximated graph node.
Next, we label the edges by taking the most frequent transition between each two nodes\footnote{A generalized version of $Q$-graphs can consider randomized mappings, therefore also including less frequent transitions.}.
The graph transition information is stored in a $3$-dimensional binary adjacency data structure, denoted $M_Q$, such that $M_Q(i,j,k)=1$ if the edge from node $i$ to node $j$ exists with label $y=k$.
We plug the $Q$-graph corresponding to $M_Q$ into the optimization problems from \cite{sabag2020graph} and compute $\underline{\cL}(M_Q)$ and $\overline{\cL}(M_Q)$.
When the resulting bounds are not tight, we can repeat the analysis step with a larger number of graph nodes $k$, using the same trajectory $(q^{\psi,n},Y^n)$.
The complete scheme is presented in Algorithm \ref{alg:q_est}.
We stress that while Algorithm~\ref{alg:di_opt} does not require access to the channel state sequence, the proposed bounds calculation method does rely on this information.
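The trajectory-analysis step, mapping clustered states and output labels to the adjacency structure $M_Q$, can be sketched as follows. We adopt the convention that the edge leaving the node at time $t$ is labeled by the output that drove the transition; this convention, and the toy data in the usage, are illustrative:

```python
import numpy as np

def extract_MQ(labels, y_seq, k, ny):
    """Build the binary adjacency structure M_Q from a clustered trajectory:
    M_Q[i, j, y] = 1 iff j is the most frequent successor of node i
    among transitions driven by output label y."""
    counts = np.zeros((k, k, ny), dtype=int)
    for t in range(len(labels) - 1):
        counts[labels[t], labels[t + 1], y_seq[t + 1]] += 1
    MQ = np.zeros_like(counts)
    for i in range(k):
        for y in range(ny):
            if counts[i, :, y].sum() > 0:        # label y observed at node i
                MQ[i, counts[i, :, y].argmax(), y] = 1
    return MQ

# Toy trajectory in which the node simply tracks the last output:
rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=500)
MQ = extract_MQ(y, y, k=2, ny=2)
```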
\subsection{Empirical Results}
We demonstrate the performance of Algorithm \ref{alg:q_est} on the Trapdoor and Ising channels, as the structure of $p_{S_t|Y^t}$ under the optimal input distribution is known for these examples.
Figures \ref{fig:Ising_q_comapre} and \ref{fig:trapdoor_q_compare} compare the learned $Q$-graphs with the optimal structure derived in \cite{elishco2014capacity} and \cite{permuter2008capacity}.
The estimated $p_{S_t|Y^t}$ coincides with the one obtained from the analytical solution, although the numerical values associated with the nodes are slightly different.
Nevertheless, these slightly perturbed values do not affect the bound computation~\cite{sabag2020graph},
and using Equations (13) and (16) from \cite{sabag2020graph} we obtain the following capacity upper and lower bounds:
\begin{equation*}
\text{Trapdoor:}\quad \hat{C}_{\mathsf{UB}}-\hat{C}_{\mathsf{LB}} \approx 3.7615\cdot 10^{-8},\quad
\text{Ising:}\quad \hat{C}_{\mathsf{UB}}-\hat{C}_{\mathsf{LB}} \approx 2.4334\cdot 10^{-7}.
\end{equation*}
The calculated bounds are extremely close, testifying to the potency of the proposed method and providing another useful byproduct of Algorithm~\ref{alg:di_opt}.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\scalebox{.75}{\input{Figures/new_est_Q_ising}}
\caption{Estimated $Q$-graph.}
\label{fig:q_Ising}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{0.4\textwidth}
\centering
\scalebox{.75}{\input{Figures/Q_Ising}
}
\caption{Analytical MDP state.}
\label{fig:dp_Ising}
\end{subfigure}
\caption{Ising channel. Comparison between the estimated structure (a) and the dynamic program optimized MDP state transition from \cite{elishco2014capacity} (b).}
\label{fig:Ising_q_comapre}
\end{figure}
\begin{figure}[t]
\centering
\hspace{-1cm}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\scalebox{.75}{\input{Figures/est_Q_trapdoor}}
\caption{Estimated $Q$-graph.}
\label{fig:q_trapdoor}
\end{subfigure}
\hspace{3cm}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\scalebox{.75}{\input{Figures/Q_trapdoor}}
\caption{Analytical MDP state. }
\label{fig:dp_trapdoor}
\end{subfigure}
\caption{Trapdoor channel. Comparison between the estimated structure (a) and the dynamic program optimized MDP state transition from \cite{permuter2008capacity} (b).}
\label{fig:trapdoor_q_compare}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.43\textwidth}
\centering
\includegraphics[ width=\linewidth]{Figures/1D_comparison_k.eps}
\caption{}
\label{fig:pam_vs_bounds}
\end{subfigure}
\begin{subfigure}[b]{0.43\textwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/PAM9_new.eps}
\caption{}
\label{fig:pam_vs_uniform}
\end{subfigure}
\caption{Estimated MI for the real valued PP-AWGN. Figure (a) shows a comparison of the optimized MI with capacity upper and lower bounds; and Figure (b) presents a comparison with the MI induced by the uniform distribution for $k=9$.}
\end{figure}
\section{Application III: Probabilistic Shaping of Constellations}\label{sec:prob_shape}
Conventional communication schemes consider equiprobable constellations \cite[Section~7]{proakis1994communication}.
By applying Algorithm~\ref{alg:di_opt} for MI optimization (see Section \ref{sec:mine_opt}), we propose a non-uniform shaping scheme and show that it outperforms the equiprobable constellation in terms of the communication rate.
Consider the PP-AWGN, given by
\begin{align*}
&Y = X+Z, \qquad \text{s.t.} \quad |X|\leq A \quad P_X\text{-a.s.},
\end{align*}
where $A>0$ and $Z$ is a centered Gaussian noise with variance $\sigma^2$. Our goal is to design a discrete constellation for $X$ that maximizes the transmission rate.
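For a fixed discrete constellation, the rate $\sI(X;Y)$ can be computed by one-dimensional numerical integration, which is a convenient reference for the neural estimates (the output grid, the $6\sigma$ truncation, and the test constellation below are illustrative choices):

```python
import numpy as np

def awgn_mi_discrete(points, probs, sigma, grid=20001):
    """I(X;Y) in bits for Y = X + Z, Z ~ N(0, sigma^2), with X supported on
    `points` with PMF `probs`, via integration over a truncated output grid."""
    y = np.linspace(points.min() - 6 * sigma, points.max() + 6 * sigma, grid)
    dy = y[1] - y[0]
    lik = np.exp(-(y[None, :] - points[:, None]) ** 2 / (2 * sigma ** 2))
    lik /= np.sqrt(2 * np.pi) * sigma                  # rows: p(y | x_i)
    py = probs @ lik                                   # output marginal p(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = lik * np.log2(lik / np.maximum(py, 1e-300))
    kl = np.nan_to_num(term).sum(axis=1) * dy          # KL(p(.|x_i) || p_Y)
    return float(probs @ kl)

# Antipodal signaling: at high SNR the rate approaches H(X) = 1 bit.
pts = np.array([-1.0, 1.0])
pr = np.array([0.5, 0.5])
```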
\begin{figure}[b]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/1d_snr_minu_10.eps}
\caption{$\pSNR=-10 [dB]$.}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[ width=\linewidth]{Figures/1d_snr_15.eps}
\caption{$\pSNR=15 [dB]$}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/1d_snr_30.eps}
\caption{$\pSNR=30 [dB]$}
\end{subfigure}
\caption{Learned probabilistic shaping for order $k=16$ and several values of $\pSNR$.}
\label{fig:PAM_constellations}
\end{figure}
\subsection{Real-Valued AWGN}\label{subsec:real_AWGN}
Smith showed that the capacity of the real-valued PP-AWGN channel is achieved by a discrete distribution supported inside $[-A,A]$, whose cardinality grows with $\pSNR$ \cite{smith1971information}.
We thus consider PAM constellations of different orders within $[-A,A]$.
Figure \ref{fig:pam_vs_bounds} compares the estimated MI to analytical upper \cite{ozarow1990capacity} and lower bounds \cite{thangaraj2017capacity}, for a range of $\pSNR$ values.
Evidently, the MI estimate converges between the bounds.
Recall that MINE itself lower bounds the ground truth MI \cite[Remark~5]{tsur2022neural}, rendering our estimate a new numerical lower bound on the PP-AWGN capacity.
We observe that the information rate saturates for each considered constellation order as the SNR grows; this stems from the source entropy upper bound $\sI(X;Y)\leq H(X)$.
Figure \ref{fig:pam_vs_bounds} therefore reveals the values of $\pSNR$ beyond which a certain order $m$ is no longer optimal.
Figure \ref{fig:pam_vs_uniform} shows a comparison of the estimated optimized MI with the one induced by a uniform distribution over the constellation elements.
It is clear that the learned probabilistic shaping results in higher MI for every considered SNR~value.
The learned probabilistic shaping also matches the asymptotic characterization of the optimal input distribution derived in \cite{smith1971information}. Namely, it was shown that the optimal distribution converges to a Bernoulli distribution on $\{-A,A\}$ as $\pSNR\to 0$, while as $\pSNR\to \infty$ it approaches a uniform distribution over the entire interval $[-A,A]$. Figure \ref{fig:PAM_constellations} depicts the learned probabilistic shaping for $k=16$, which indeed adheres to this asymptotic behavior. The intermediate point $\pSNR=15$ [dB] exemplifies an interpolation between the two extremes.
\subsection{Complex AWGN}\label{subsec:complex_swgn}
To code for the complex AWGN channel, we consider rectangular QAM constellations. The peak constraint now becomes a box constraint, whereby $\mathsf{Re}(X)$ and $\mathsf{Im}(X)$ are each subject to the one-dimensional peak-power constraint.
Under the box constraint, it is shown in \cite{ikeda2010capacity} that the optimal input $X$ has $\mathsf{Re}(X)$ and $\mathsf{Im}(X)$ independent, due to the independence of the noise components.
Using this fact, we have the capacity bounds
\begin{equation}\label{eq:complex_awgn_bounds}
2C_{\mathsf{LB}}^{\mathsf{real}} \leq C \leq 2C_{\mathsf{UB}}^{\mathsf{real}},
\end{equation}
where $C_{\mathsf{LB}}^{\mathsf{real}}$ and $C_{\mathsf{UB}}^{\mathsf{real}}$ are the real-valued PP-AWGN capacity bounds from Section \ref{subsec:real_AWGN}.
\begin{figure}
\centering
\begin{subfigure}[t]{0.46\textwidth}
\centering
\includegraphics[width=1\linewidth]{Figures/QAM_comparison_k.eps}
\caption{Information rate comparison.}
\label{fig:qam_vs_bounds}
\end{subfigure}
\begin{subfigure}[t]{0.53\textwidth}
\centering
\includegraphics[trim={1pt 55pt 1pt 1pt}, clip, width=\linewidth]{Figures/new_qam_vs_unif_combined.eps}
\caption{Comparison with uniform.}
\label{fig:qam_vs_unif1}
\end{subfigure}
\caption{QAM probabilistic shaping performance. Figure (a) shows a comparison of the optimized MI for several constellation orders with analytical upper and lower bounds from \cite{ozarow1990capacity} and \cite{thangaraj2017capacity}, respectively; Figure (b) shows a comparison of the optimized MI with the MI induced by the uniform distribution.}
\label{fig:qam_vs_unif_tot}
\end{figure}
Figure \ref{fig:qam_vs_bounds} compares our estimated MI values with the above bounds for several QAM orders.
In Figure \ref{fig:qam_vs_unif1} we compare the MINE estimate with the MI induced by the uniform distribution, calculated via the Gauss-Hermite integral approximation.
It is clear that the learned distribution outperforms the uniform one for all considered $\pSNR$ values.
Finally, Figure \ref{fig:QAM_constellations} shows the learned probabilistic shaping for $k=64$ and several values of $\pSNR$.
Since $\mathsf{Re}(X)$ and $\mathsf{Im}(X)$ are independent, we look for features similar to those observed for the real-valued AWGN channel.
Indeed, we see that for low values of $\pSNR$ the learned probabilistic shaping is uniform on the constellation edges, while as $\pSNR$ grows, the probabilistic shaping shifts towards a uniform distribution over the entire constellation.
Nontrivial input distributions, as shown in Figure \ref{fig:QAM_constellations}(b), are observed for intermediate values of $\pSNR$.
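The uniform-input baseline of Figure \ref{fig:qam_vs_unif1} can also be sanity-checked with plain Monte Carlo in place of Gauss--Hermite quadrature. The sketch below is illustrative only: the unit per-dimension peak, the peak-SNR convention, and the sample size are our assumptions here, not the exact experimental setup.

```python
import numpy as np

def qam_points(m):
    """Square m-QAM constellation with unit per-dimension peak (m a perfect square)."""
    k = int(round(np.sqrt(m)))
    axis = np.linspace(-1.0, 1.0, k)
    re, im = np.meshgrid(axis, axis)
    return (re + 1j * im).ravel()

def uniform_qam_mi(m, snr_db, n=50_000, seed=0):
    """Monte-Carlo estimate (in bits) of I(X;Y) for uniform m-QAM over
    complex AWGN, Y = X + N with N ~ CN(0, sigma2); SNR is defined here
    w.r.t. the per-dimension peak power (an illustrative convention)."""
    rng = np.random.default_rng(seed)
    x = qam_points(m)
    sigma2 = 1.0 / 10 ** (snr_db / 10)          # per-dimension peak power is 1
    sym = rng.choice(x, size=n)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    y = sym + noise
    # log p(y|x') for every candidate symbol, up to a common additive constant
    ll = -np.abs(y[:, None] - x[None, :]) ** 2 / sigma2
    ll_true = -np.abs(y - sym) ** 2 / sigma2
    mx = ll.max(axis=1)
    log_py = mx + np.log(np.mean(np.exp(ll - mx[:, None]), axis=1))  # log of (1/m) sum exp
    return float(np.mean(ll_true - log_py)) / np.log(2)
```

At high $\pSNR$ the estimate approaches $\log_2 m$ bits, consistent with the shift toward a uniform distribution over the full constellation.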
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/snr_minus_10_qam.eps}
\caption{$\pSNR=-10 [dB]$}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[ width=\linewidth]{Figures/snr_12_qam.eps}
\caption{$\pSNR=12 [dB]$}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=1\textwidth]{Figures/snr_30_qam.eps}
\caption{$\pSNR=30 [dB]$}
\end{subfigure}
\caption{Learned QAM distribution for several values of $\pSNR$ and $k=64$. The marker size denotes the assigned probability.}
\label{fig:QAM_constellations}
\end{figure}
\section{Proofs}\label{sec:proofs}
\subsection{Proof of Theorem \ref{theorem:MDP_formulation}}\label{proof:mdp_formulation}
The proof shows that (i) $Z_t$ evolves as a function of $(Z_{t-1},U_{t},W_{t})$, (ii) $P_{W_t|W^{t-1},U^{t},Z^{t}} = P_{W_t|W_{t-1},U_{t},Z^{t}}$, and (iii) $\rho(\pi_\phi) = \sI_\phi(\XX\to\YY)$.
First, the functional relation $Z_t = f(Z_{t-1},U_t,W_t)$ follows by defining $f$ as a concatenation of $Z_{t-1}$ with $(U_t,W_t)$.
For (ii), we have
\begin{align*}
P_W(W_t=w|W^{t-1}=w^{t-1},Z^{t-1}=z^{t-1},U^{t-1}=u^{t-1}) &= \PP(Y_0=y|X^{0}_{-t}=x^{0}_{-t},Y^{-1}_{-t}=y^{-1}_{-t})\\
&=P_W(W_t=w|Z_{t-1}=z_{t-1},U_{t-1}=u_{t-1}).
\end{align*}
To establish (iii), observe that
\begin{align*}
\rho(\pi_\phi)
&=\lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N \EE\left[\EE\left[\log\frac{P_{Y_0|Y^{-1}_{-t},X^{0}_{-t}}}{P_{Y_0|Y^{-1}_{-t}}}\Big| X^{0}_{-t},Y^{-1}_{-t}\right]\right]\\% \label{eq:mdp_proof_1}\\
&= \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N \sI_\phi(X^{0}_{-t};Y_0|Y^{-1}_{-t}) \\
&= \lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^N \sI_\phi(X^{t};Y_t|Y^{t-1})\\% \label{eq:mdp_proof_3}\\
&= \lim_{N\to\infty}\frac{1}{N}\sI_\phi(X^{N}\to Y^N)\\% \label{eq:mdp_proof_2}\\
&= \sI_\phi(\XX\to\YY),
\end{align*}
where the third equality uses the stationarity of the joint distribution.
This concludes the proof. $\hfill\square$
\subsection{Proof of Lemma \ref{lemma:MINE_grad}}\label{proof:mine_based_grad}
Recall that we focus on the loss function:
\begin{equation}
\hat{\sJ}^{\mathsf{MI}}_\theta(\Dn,\phi) := \frac{1}{n}\sum_{t=1}^{n}\log\left( p^\phi(X_t)\right)\left(g_\theta(X_t, Y_t) - \hat{\sI}_{\mathsf{MI}}(\Dn)\right).
\end{equation}
First consider the partial derivative w.r.t. a single coordinate.
Assume, without loss of generality, that $\cX=\{1,\dots,m\}$ and fix $i\in\{1,\dots,m\}$.
Taking the derivative w.r.t. $\phi_i$, we have
\begin{align*}
\partial_{\phi_i}\hat{\sJ}^{\mathsf{MI}}_\theta(\Dn,\phi) = \frac{1}{n}\sum_{t=1}^{n}\partial_{\phi_i}\log\left( p^\phi(X_t)\right)\left(g_\theta(X_t, Y_t) - \hat{\sI}_{\mathsf{MI}}(\Dn)\right).
\end{align*}
Since $p^\phi(X_t) = \sigma_{\mathsf{sm}}^{X_t}(\phi)$, we obtain
\begin{align*}
\partial_{\phi_i}\log p^\phi(X_t) &= \partial_{\phi_i}\log e^{\phi_{X_t}}-\partial_{\phi_i}\log\left( \sum_{k=1}^{m}e^{\phi_k} \right)\\
&= \partial_{\phi_i}\phi_{X_t} - \frac{e^{\phi_i}}{\sum_{k=1}^{m} e^{\phi_k}}\\
&= \mathbbm{1}_{\{X_t=i\}} - \sigma_{\mathsf{sm}}^{i}(\phi).
\end{align*}
Therefore, the gradient is given by
\begin{equation}
\nabla_\phi\hat{\sJ}^{\mathsf{MI}}_\theta(\Dn,\phi) = \frac{1}{n}\sum_{t=1}^{n}\left(e_{X_t} - p^\phi\right)\left(g_\theta(X_t, Y_t) - \hat{\sI}_{\mathsf{MI}}(\Dn)\right).
\end{equation}
$\hfill\square$
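The softmax log-gradient used above is easy to verify numerically. A minimal sketch (the parameter vector is an arbitrary hypothetical example) checks that $\nabla_\phi \log p^\phi(x) = e_x - p^\phi$ matches central finite differences:

```python
import numpy as np

def log_softmax_grad(phi, x):
    """Analytic gradient of log p^phi(x) for p^phi = softmax(phi):
    the x-th standard basis vector minus the softmax probabilities."""
    p = np.exp(phi - phi.max())
    p /= p.sum()
    g = -p
    g[x] += 1.0
    return g

def log_softmax(phi, x):
    """log p^phi(x) = phi_x - logsumexp(phi), computed stably."""
    m = phi.max()
    return phi[x] - (m + np.log(np.exp(phi - m).sum()))

def numeric_grad(phi, x, eps=1e-6):
    """Central finite differences, coordinate by coordinate."""
    g = np.zeros_like(phi)
    for i in range(phi.size):
        d = np.zeros_like(phi)
        d[i] = eps
        g[i] = (log_softmax(phi + d, x) - log_softmax(phi - d, x)) / (2 * eps)
    return g
```

Note also that the analytic gradient sums to zero, as it must for a distribution over a simplex.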
\subsection{Proof of Lemma \ref{lemma:GP_complex_bound}}\label{appendix:GP_complex_bound}
To simplify notation, let $m:=|\cQ_g|$ and fix $m$.
Note that the number of $Q$-graphs with $m$ nodes, counted up to $y$ labeling, is $m^{m|\cY|}$.
The authors of \cite{sabag2020graph} claim that the graph-pooling method reduces the number of $Q$-graphs the algorithm considers by a factor of $m!$.
Therefore,
\begin{equation}\label{eq:gp_init}
N_{\sf{GP}} = \frac{m^{m|\cY|}}{m!}.
\end{equation}
By the Stirling upper bound $m!\leq e\sqrt{m}\left(\frac{m}{e}\right)^{m}$, we have
\begin{align*}
N_{\sf{GP}} &\geq \frac{m^{m|\cY|}}{e\sqrt{m}\,\frac{m^{m}}{e^{m}}}\\
& = e^{m-1}m^{m(|\cY|-1)-0.5}\\
&\geq e^{m-1}m^{m-0.5}\\
&= e^{m-1+(m-0.5)\log m}\\
& \geq e^{m\log m},
\end{align*}
where the second inequality uses $|\cY|\geq2$, and the last inequality uses $m-1\geq 0.5\log m$.
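The conclusion $N_{\sf{GP}}\geq e^{m\log m}$ can also be checked directly for small alphabets. A minimal sketch, taking $|\cY|=2$ for illustration:

```python
import math

def n_gp(m, y_card):
    """Number of Q-graphs remaining after graph pooling, Eq. (gp_init):
    m^(m|Y|) / m!."""
    return m ** (m * y_card) / math.factorial(m)

# Verify the lemma's conclusion N_GP >= e^{m log m} = m^m for |Y| = 2.
for m in range(1, 13):
    assert n_gp(m, 2) >= m ** m
```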
\section{Concluding Remarks and Future Directions}\label{sec:conclusion}
This work developed an optimization method for the estimated DI rate over communication channels with discrete input alphabets.
We proposed a deep generative model for the input PMF and derived an alternative optimization objective that is easy to differentiate w.r.t. the parameters of the PMF model. This new objective was derived via an MDP formulation of the DI optimization problem, combined with the policy gradients method and DINE-based function approximation.
The overall procedure is an iterative estimation-optimization routine of the DI, where the estimation step involves training the DINE. To the best of our knowledge, ours is the first method that can optimize estimated DI over discrete inputs when the channel model is unknown.
To demonstrate the utility of our approach, we used it to estimate the capacities of various channels, under both feedforward and feedback communication schemes.
The capacity estimates demonstrated significant correspondence with known theoretical solutions and/or bounds, and the learned input PMF was shown to coincide with capacity-achieving input distributions.
In addition, we showed how to leverage the optimized input PMF model to calculate lower and upper bounds on the feedback capacity of unifilar FSCs via $Q$-graphs. Lastly, we demonstrated how our algorithm gives rise to probabilistic shaping schemes of PAM and QAM constellations for the PP-AWGN.
Our work enables the optimization of estimated DI, treating the channel as a black-box that can be sampled.
This method is beneficial for data-driven time-series tasks in which control over some elements of the time series is assumed.
In future work, we aim to apply the proposed scheme to multi-user communication channels by generalizing our framework to multi-agent reinforcement learning. We will also explore applications to sequential machine learning and sequential control.
\newpage
\bibliographystyle{unsrt}
|
2,877,628,089,379 | arxiv | \section{Supplemental Material: Enhancement of antiferromagnetic magnon-magnon entanglement by cavity cooling}
\end{document}
|
2,877,628,089,380 | arxiv | \section{Introduction}
Absorption lines are often observed in quasar spectra, and they are
a powerful tool to probe the gas in the Universe from high redshifts
to the present epoch (see Meiksin 2009 for a review). Quasar
absorption lines provide a unique chance to study the gaseous phase
(e.g., ionization states, kinematics, metallicities) of distant
galaxies that might otherwise be invisible, independent of
the luminosity of the background quasars. They are also important to
understand the star formation and evolution of ordinary galaxies
(e.g., Prochter et al. 2006; Zibetti et al. 2007; M\'enard et al.
2011; Chen 2013).
Narrow absorption lines (NALs), with line widths of a few hundred
$\rm km~s^{-1}$, can be classified into three categories according
to the relationship between the absorber and the corresponding
quasar. They are intrinsic absorption lines, associated absorption
lines and intervening absorption lines. The intrinsic absorption
lines are often believed to be physically related with the quasar
wind/outflow (e.g., Narayanan et al. 2004; Misawa et al. 2007;
Hamann et al. 2011). The associated absorption lines with $z_{\rm
abs} \approx z_{\rm em}$ probably arise from the gas in the quasar
host galaxy or the galaxy cluster around the quasar (e.g., Weymann
et al. 1979; Wild et al. 2008; Vanden Berk et al. 2008). The
intervening absorption lines with $z_{\rm abs} \ll z_{\rm em}$ are
due to the absorption of galaxies along the quasar sightlines
located at cosmological distances from the corresponding quasars
(e.g., Bahcall \& Spitzer 1969; Bergeron 1986; L\'opez \& Chen
2012). The criteria, determining whether the absorption lines are
truly tied to the corresponding quasars, is ambiguous, because there
are many factors that can disturb the observed absorption lines,
such as the signal to noise ratio of the quasar spectra. To day the
dividing line of the intervening absorption lines and the associated
absorption lines are usually derived by statistics (e.g., Richards
2001; Wild et al. 2008). The absorption lines at velocity
separations less than the value of $\rm \sim 0.02c - 0.04c$, when
compared to the quasar systems, are classified as associated
absorption line group (Vanden Berk et al. 2008; Wild et al. 2008).
However, that does not mean that narrow absorption lines with
velocity separation larger than that value completely belong to
intervening absorption lines. Narrow intrinsic absorption lines can
be formed in the quasar outflows with velocity separations up to,
and even exceeding $\rm 0.1c$ (e.g., Misawa et al. 2007; Tombesi et
al. 2011; Chen et al. 2013a; Chen \& Qin 2013).
$\rm C~IV\lambda\lambda1548,1551$ resonant doublets are observable
redward of the $\rm Ly\alpha1216$ emission line, which can be
detected over a redshift range of $z\approx1.5$ --- 5.5 in the
optical spectra. These lines are strong transitions and have good
profiles. They are valuable absorption lines to study the
intergalactic medium (e.g., Songaila \& Cowie 1996; Cowie \&
Songaila 1998; Songaila 2001; Schaye et al. 2003; Cooksey et al.
2010; D\'Odorico et al. 2010; Simcoe et al. 2011).
Based on the Sloan Digital Sky Survey (SDSS, York et al. 2000), many
systematic searches for metal absorption lines have been
carried out (e.g., Quider et al. 2011; Qin et
al. 2013; Zhu \& M\'enard 2013; Cooksey et al. 2013). We are going
to identify absorption doublets, such as $\rm
C~IV\lambda\lambda1548,1551$ and $\rm Mg~II\lambda\lambda2796,2803$,
in the quasar spectra of the Baryon Oscillation Spectroscopic Survey
(BOSS), which is a part of the SDSS-III (Eisenstein et al. 2011). In
this paper, we identify $\rm
C~IV\lambda\lambda1548,1551$ absorption doublets; this work is the
first in a series of papers on the absorption lines in the BOSS
quasar spectra.
In section 2, we show how we construct our $\rm
C~IV\lambda\lambda1548,1551$ absorption sample and present the
spectral analysis. The properties of the absorption lines are
presented in section 3. Section 4 is the discussion, and section 5
is the summary.
\section{Data analysis}
BOSS is the main dark-time legacy survey of the third stage of the
SDSS (P\^aris et al. 2012; Eisenstein et al. 2011), which is a five
year program. BOSS aims to obtain spectra of more than $150,000$
quasars with $z_{\rm em}>2.15$ using the same $\rm 2.5 m$ telescope
(Gunn et al. 2006; Ross et al. 2012) as the SDSS did. The spectra of
BOSS span a wavelength range of 3600 \AA --- 10400 \AA~ at a
resolution of $1300<R<3000$. The first data release of BOSS, SDSS
Data Release Nine (SDSS DR9), contains $87,822$ quasars detected
over an area of $3275~deg^2$ (P\^aris et al. 2012).
In order to avoid the noisy region of the spectra, we exclude those
data shortward of 3800 \AA~ in the observed frame. The pair of $\rm
O~I\lambda1302$ and $\rm Si~II\lambda1304$ has a wavelength
separation similar to that of the $\rm C~IV\lambda\lambda1548,1551$
doublet, which may lead to misidentifications of the latter. To
avoid confusion arising from the $\rm Ly\alpha$ forest and the $\rm
O~I\lambda1302$ and $\rm Si~II\lambda1304$ absorption lines, we
constrain our analysis to the wavelength range longward of 1310 \AA~
at the rest frame. We also conservatively constrain the upper
wavelength limit to $\rm 1548$\AA$\rm \times(1+z_{em})\times
\sqrt{(1-\beta)/(1+\beta)} $, where we adopt $\rm \beta=-1/30$ to
search for intervening $\rm C~IV\lambda\lambda1548,1551$ absorption
doublets. This cut reduces the quasar sample to $70,336$ quasars
with $z_{em}\gtrsim1.54$.
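The wavelength cuts described above determine, per quasar, the observed-frame window we search. A minimal sketch (constant and function names are ours, purely illustrative):

```python
import math

OBS_MIN = 3800.0        # Angstrom, observed-frame blue cut
REST_MIN = 1310.0       # Angstrom, rest-frame cut avoiding the forest and the 1302/1304 pair
CIV_BLUE = 1548.0       # Angstrom, blue member of the C IV doublet
BETA = -1.0 / 30.0      # velocity offset adopted for the red cut

def civ_search_window(z_em):
    """Observed-frame wavelength window searched for C IV doublets in one quasar."""
    lo = max(OBS_MIN, REST_MIN * (1.0 + z_em))
    hi = CIV_BLUE * (1.0 + z_em) * math.sqrt((1.0 - BETA) / (1.0 + BETA))
    return lo, hi
```

For a $z_{\rm em}=2.0$ quasar this gives a window of roughly 3930 \AA~ to 4800 \AA.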
The noise superposed on spectra with low signal-to-noise ratios
(SNR) often obscures true absorption features. Here, we limit our
analysis to sources with sufficiently high signal-to-noise ratios in
the surveyed spectral region. For each quasar spectrum we compute
the median signal-to-noise ratio (median SNR), which roughly
characterizes the noise level of the observation of the source.
Illustrated in Fig. 1 is the distribution of the median SNR of these
$70,336$ quasars. We find that the median value of this distribution
is quite close to $4$ (see Fig. 1). We accordingly adopt this value
to limit our analysis. That is, we select only quasars with $\rm
median~SNR\ge4$ in the surveyed spectral region.
\begin{figure}
\vspace{3ex} \centering
\includegraphics[width=7 cm,height=6 cm]{select_snr.ps}\vspace{3ex}
\caption{Distribution of the median SNR of the 70,336 quasars, in
the surveyed spectral region of $\rm C~IV\lambda\lambda1548,1551$
absorption doublets. The red dash line denotes the median value of
this distribution, which is located at 4.06.}
\end{figure}
As the first paper of the series, here we consider only
quasars with $z_{\rm em}\le2.4$. Taking into account all the above
limitations, we have 10,121 quasars with $1.54\lesssim z_{\rm
em}\le2.4$ to identify $\rm C~IV\lambda\lambda1548,1551$ absorption
doublets. The upper cuts of the emission redshift and the median SNR
are showed in Fig. 2. The distribution of emission redshifts of our
final quasar sample is plotted in Fig. 4.
\begin{figure}
\vspace{3ex} \centering
\includegraphics[width=7 cm,height=6 cm]{search_qso.ps}\vspace{3ex}
\caption{Plot of the median SNR, in the surveyed spectral region of
$\rm C~IV\lambda\lambda1548,1551$ absorption doublets, versus the
emission redshift of the $70,336$ quasars. The red lines are the
limits of SNR and $z_{em}$ used to construct our quasar sample.}
\end{figure}
We derive a pseudo-continuum for each quasar of our sample by
invoking a combination of cubic splines (for underlying continuum,
see Willian et al. 1992 for details) and Gaussians (for emission and
broad absorption features), which is utilized to normalize the
spectral data (fluxes and flux uncertainties). These processes are
iterated several times to improve the fittings of both the cubic
spline and Gaussian (e.g., Nestor et al. 2005; Quider et al. 2011,
Chen et al. 2013a,b). Shown in the left panels of Fig. 3 are several
quasar spectra (with various values of the median SNR) together with
their pseudo-continuum fitting curves. The pseudo-continuum
normalized spectra are presented in the right panels of Fig. 3.
We search for $\rm C~IV\lambda\lambda1548,1551$ absorption candidates
in the pseudo-continuum normalized spectra. As the first step of
the search (see also Chen et al. 2013a), the $2\sigma$ curve
below the pseudo-continuum fitting is marked, and those
absorption features located above this curve are ruled out.
In many cases, some very broad troughs appear in the blue wing of
the $\rm C~IV$ or/and $\rm Si~IV$ emission lines. The term broad
absorption line (BAL) is somewhat ambiguous. Based on the definition
of the balnicity index (BI, Weymann et al. 1991), absorption troughs
with widths broader than $2000~\rm km~s^{-1}$ at depths $>10\%$ below
the pseudo-continuum fitting curve can be classified as BALs.
However, in terms of the absorption index (AI, Hall et al. 2002;
Trump et al. 2006), some narrower absorption troughs ($>1000~\rm
km~s^{-1}$) also belong to the BAL population. Knigge et al. (2008)
found that the BAL fraction will be underestimated in terms of BI,
and overestimated in terms of AI. They also found that both the BI
and AI samples show bimodal distributions, which brings about the
problem of the overlap between broad NALs and narrow BALs. We are
going to analyze only narrow absorption doublets with widths of a
few hundred $\rm km~s^{-1}$; therefore, as the second step, our
program automatically and conservatively disregards those absorption
features with widths broader than $2000~\rm km~s^{-1}$ at depths
$>10\%$ below the pseudo-continuum fitting curve.
In the third step, each absorption trough is fitted with a Gaussian
component, and absorption features with the full width at half
maximum (FWHM) greater than $800~\rm km~s^{-1}$ are ruled out. We
then search for candidate $\rm C~IV\lambda\lambda1548,1551$
absorption doublets among the remaining absorption features.
In the fourth step, we measure the rest-frame equivalent widths
($W_r$) of these candidate absorption lines from the Gaussian
fittings, and estimate their uncertainties by
\begin{equation}
(1+z)\sigma_w=\frac{\sqrt{\sum_i
P^2(\lambda_i-\lambda_0)\sigma^2_{f_i}}}{\sum_i
P^2(\lambda_i-\lambda_0)}\Delta\lambda,
\end{equation}
where $P(\lambda_i-\lambda_0)$ is the line profile centered at
$\lambda_0$, $\lambda_i$ is the wavelength, and $\sigma_{f_i}$ is
the normalized flux uncertainty as a function of pixel (Nestor et
al. 2005; Chen et al. 2013b; Chen \& Qin 2013). The sum is performed
over an integer number of pixels that covers at least $\pm 3$
characteristic Gaussian widths. We also adopt the method provided by
Qin et al. (2013) to evaluate the signal-to-noise ratio of the
candidate absorption lines. The $\rm 1\sigma$ noise is
calculated by:
\begin{equation}
\sigma_N=\sqrt{\frac{\sum\limits_{i=1}^M[\frac{F^i_{noise}}{F^i_{cont}}]^2}{M}},
\end{equation}
where $\rm F_{noise}$ is the flux uncertainty, $\rm F_{cont}$ is the
flux of the psuedo-continuum fit, and $i$ represents the pixel in
the wavelength range of
1548{\AA}$\rm\times(1+z_{abs})-5${\AA}$\rm<\lambda_{obs}<1551${\AA}$\rm\times(1+z_{abs})+5${\AA}.
The signal-to-noise ratio of the absorption line is determined by:
\begin{equation}
SNR^\lambda=\frac{1-S_{abs}}{\sigma_N},
\end{equation}
where $\rm S_{abs}$ is the smallest value of the normalized spectral
flux within an absorption trough. Finally, we select only the
absorption lines with $W_r>0.2$ \AA~ and $SNR^{\lambda} \ge 2.0$ for
both $\lambda1548$ and $\lambda1551$ lines. In this way, we get 8368
potential intervening $\rm C~IV\lambda\lambda1548,1551$ absorption
doublets. These absorption doublets are presented in Table 1.
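The per-candidate measurements of Equations (1)--(3) and the final cuts translate directly into code. A schematic sketch operating on hypothetical normalized-spectrum arrays (not our production pipeline):

```python
import numpy as np

def ew_uncertainty(profile, flux_err, dlam, z):
    """Rest-frame equivalent-width uncertainty, Eq. (1):
    sigma_w = sqrt(sum P^2 sigma_f^2) / sum P^2 * dlam / (1 + z)."""
    p2 = profile ** 2
    return np.sqrt(np.sum(p2 * flux_err ** 2)) / np.sum(p2) * dlam / (1.0 + z)

def line_snr(norm_flux, flux_err, cont):
    """Absorption-line SNR, Eqs. (2)-(3): (1 - S_abs) / sigma_N,
    with sigma_N the rms of the noise-to-continuum ratio."""
    sigma_n = np.sqrt(np.mean((flux_err / cont) ** 2))
    return (1.0 - norm_flux.min()) / sigma_n

def keep_doublet(w1548, snr1548, w1551, snr1551):
    """Final selection: W_r > 0.2 Angstrom and SNR >= 2 for both lines."""
    return min(w1548, w1551) > 0.2 and min(snr1548, snr1551) >= 2.0
```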
\begin{figure*}
\centering
\includegraphics[width=7.6cm,height=1.5cm]{spec-4365-55539-0228eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4365-55539-0228abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4367-55566-0904eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4367-55566-0904abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4365-55539-0996eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4365-55539-0996abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4217-55478-0583eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4217-55478-0583abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4218-55479-0286eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4218-55479-0286abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4310-55508-0306eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4310-55508-0306abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4309-55528-0936eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4309-55528-0936abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4309-55528-0736eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4309-55528-0736abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4274-55508-0426eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4274-55508-0426abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4424-55532-0104eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4424-55532-0104abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4222-55444-0506eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4222-55444-0506abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4372-55541-0294eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4372-55541-0294abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-3650-55244-0768eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-3650-55244-0768abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4241-55450-0842eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-4241-55450-0842abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-3756-55505-0840eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-3756-55505-0840abs.ps}
\includegraphics[width=7.6cm,height=1.5cm]{spec-3782-55244-0298eml.ps}
\hspace{2ex}
\includegraphics[width=7.6cm,height=1.5cm]{spec-3782-55244-0298abs.ps}
\caption{The quasar spectra with various values of the median
signal-to-noise ratio (SNR) in the searched spectral region of $\rm
C~IV\lambda\lambda1548,1551$ absorption doublets. The red curves in
the left panels represent the pseudo-continuum fitting curves. The
green lines in the right panels represent the $1\sigma$ flux
uncertainty levels that have been normalized by the
pseudo-continuum. The blue solid lines are the Gaussian fitting
curves of the doublets. The red vertical lines in the right panels
represent the lower and upper limitations respectively, which are
used to cut the spectral region to search for $\rm
C~IV\lambda\lambda1548,1551$ absorption doublets. We do not search
the $\rm C~IV\lambda\lambda1548,1551$ absorption doublet in the
spectra with SNR less than 4 (e.g., the first five spectra).}
\end{figure*}
\begin{table*}[htbp]
\caption{Catalog of $\rm C~IV\lambda\lambda1548,1551$ absorption
systems} \tabcolsep 1.1mm \centering
\begin{tabular}{cccccccccccccc}
\hline\hline\noalign{\smallskip}
SDSS NAME & PLATEID & MJD & FIBERID & $z_{\rm em}$ & $z_{\rm abs}$ &
$\rm W_r\lambda1548$ &$N_{\sigma\lambda1548}$& $\rm W_r\lambda1551$&$N_{\sigma\lambda1551}$&$SNR^{\lambda1548}$&$SNR^{\lambda1551}$&$\beta$\\
\hline\noalign{\smallskip}
000027.01+030715.5 & 4296 & 55499 & 0630 & 2.3533 & 1.9833 & 0.22 & 4.40 & 0.22 & 4.40 & 3.9 & 4.4 & 0.11639 \\
000027.01+030715.5 & 4296 & 55499 & 0630 & 2.3533 & 2.1303 & 0.91 & 22.75 & 0.69 & 17.25 & 20.3 & 18.3 & 0.06871 \\
000050.59+010959.1 & 4216 & 55477 & 0746 & 2.3678 & 1.8971 & 0.46 & 7.67 & 0.47 & 5.88 & 7.1 & 5.3 & 0.14942 \\
000050.59+010959.1 & 4216 & 55477 & 0746 & 2.3678 & 1.9184 & 0.99 & 14.14 & 0.86 & 14.33 & 13.6 & 13.0 & 0.14225 \\
000120.27+030731.9 & 4277 & 55506 & 0098 & 2.1082 & 1.8898 & 0.38 & 7.60 & 0.25 & 6.25 & 6.6 & 5.3 & 0.07273 \\
000133.39+023657.1 & 4277 & 55506 & 0090 & 1.6556 & 1.4773 & 0.71 & 2.84 & 0.67 & 4.47 & 2.7 & 4.2 & 0.06939 \\
000146.95+001428.9 & 4216 & 55477 & 0860 & 2.1567 & 1.9256 & 0.39 & 3.90 & 0.38 & 6.33 & 3.8 & 5.6 & 0.07588 \\
000202.33-002648.4 & 4216 & 55477 & 0154 & 2.1761 & 1.9382 & 0.59 & 3.47 & 0.35 & 3.18 & 3.3 & 2.9 & 0.07770 \\
000207.61+032801.5 & 4296 & 55499 & 0748 & 2.2195 & 1.7502 & 1.25 & 6.58 & 0.86 & 6.14 & 6.2 & 5.9 & 0.15626 \\
000223.32+010101.2 & 4216 & 55477 & 0876 & 2.2931 & 2.1549 & 0.32 & 2.91 & 0.45 & 3.21 & 2.6 & 3.1 & 0.04285 \\
\hline\hline\noalign{\smallskip}
\end{tabular}
\\
\footnote[]~Note---$N_{\sigma}=\frac{W_r}{\sigma_{W}}$ represents the significant level of the detection. $\beta=\frac{v}{c}=\frac{(1+z_{em})^2-(1+z_{abs})^2}{(1+z_{em})^2+(1+z_{abs})^2}$.
The table is available in its entirety in the machine-readable form
in the online journal.
\end{table*}
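The velocity offset $\beta$ listed in the last column of Table 1 follows directly from the two redshifts. A minimal sketch of the table-note formula:

```python
def beta(z_em, z_abs):
    """Velocity offset v/c between absorber and quasar (table note):
    beta = ((1+z_em)^2 - (1+z_abs)^2) / ((1+z_em)^2 + (1+z_abs)^2)."""
    a = (1.0 + z_em) ** 2
    b = (1.0 + z_abs) ** 2
    return (a - b) / (a + b)
```

For the first system in Table 1, beta(2.3533, 1.9833) reproduces the tabulated 0.11639 to the quoted precision.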
\section{Statistical properties of the absorbers}
In this work, we collect 10,121 quasars to identify $\rm
C~IV\lambda\lambda1548,1551$ absorption doublets, whose emission
redshifts are plotted in Fig. 4. Of the 10,121 quasar spectra, 5,442
are found to have at least one detected $\rm
C~IV\lambda\lambda1548,1551$ absorption doublet. Emission redshifts
of these 5,442 quasars are also plotted in Fig. 4. We identify 8,368
$\rm C~IV\lambda\lambda1548,1551$ absorption doublets from these
quasars. The absorption redshifts are also shown in Fig. 4.
\begin{figure}
\vspace{3ex}\centering
\includegraphics[width=7 cm,height=6 cm]{zabs_zem.ps}\vspace{3ex}
\caption{Distributions of redshifts. The red line represents the
emission redshift of 10,121 quasars that are used to search for $\rm
C~IV\lambda\lambda1548,1551$ absorption doublets. The blue line
stands for the emission redshift of the 5442 quasars for which at
least one $\rm C~IV\lambda\lambda1548,1551$ absorption doublet is
detected. The black line describes the absorption redshift of all
the detected $\rm C~IV\lambda\lambda1548,1551$ absorption doublets.}
\end{figure}
The total redshift path covered by this catalog can be computed via
\begin{equation}
Z(SNR^{\lambda1548})=\sum_{i=1}^{N_{spec}}\int_{z_i^{min}}^{z_i^{max}}g_i(SNR^{\lambda1548},z)dz,
\end{equation}
where $\rm g_i(SNR^{\lambda1548},z)=1$ if $\rm SNR^{lim}~\le
SNR^{\lambda1548}$, otherwise $\rm g_i(SNR^{\lambda1548},z)=0$; $\rm
z_i^{min}$ and $\rm z_i^{max}$ are the redshifts corresponding to
the minimum and maximum wavelengths of survey for quasar $i$,
respectively (see also Qin et al. 2013). The derived redshift path
is shown in Fig. 5 as a function of the signal-to-noise ratio.
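If each sightline is characterized by its median SNR, so that $g_i$ is constant in $z$, Equation (4) reduces to a sum of redshift intervals. An illustrative sketch under that simplification:

```python
def redshift_path(sightlines, snr_lim):
    """Eq. (4) under a per-sightline simplification: each entry is a
    (z_min, z_max, median_snr) tuple, and g_i is 1 over the whole
    interval when the median SNR meets the limit."""
    return sum(max(0.0, z_max - z_min)
               for z_min, z_max, snr in sightlines if snr >= snr_lim)
```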
\begin{figure}
\centering
\includegraphics[width=7cm,height=6cm]{zpath_signa.ps}
\caption{Redshift path covered by our catalog ($z_{\rm em} \le
2.4$), shown as a function of $\rm SNR^{\lambda1548}$.}
\end{figure}
Distributions of $W_r$ of the two lines of the $\rm
C~IV\lambda\lambda1548,1551$ absorption doublet of our catalog are
plotted in Fig. 6. These distributions have smooth tails out to
$W_r\approx3.0$ \AA,~ with the largest values of
$W_r\lambda1548=3.19$ \AA~ and $W_r\lambda1551=2.88$ \AA,~
respectively. The median values of the $W_r$ are: 0.62 \AA~ for the
$\lambda1548$ absorption lines, and 0.49 \AA~ for the $\lambda1551$
absorption lines. In this catalog, about 33.7\% (2823/8368) of the
absorbers have $0.2$ \AA$\le W_r\lambda1548<0.5$ \AA,~ about 45.9\%
(3842/8368) have $0.5$ \AA$\le W_r\lambda1548<1.0$ \AA,~
about 19.2\% (1603/8368) have $1.0$ \AA$\le
W_r\lambda1548<2.0$ \AA,~ and about 1.2\% (100/8368) have
$W_r\lambda1548\ge2.0$ \AA.
\begin{figure}
\vspace{3ex}\centering
\includegraphics[width=7 cm,height=6 cm]{ew.ps}\vspace{3ex}
\caption{Distributions of the rest-frame equivalent width of the
$\rm C~IV$ absorption line. The black line is for the $\rm
\lambda1548$ absorption, and the red line is for the $\rm
\lambda1551$ absorption.}
\end{figure}
In Fig. 7 we plot the distribution of the $W_r$ ratio of the two
lines ($W_r\lambda1548/W_r\lambda1551$). We invoke a Gaussian
function to fit this distribution, which yields a center value of
1.18 and $\rm FWHM=0.80$. The maximum and minimum values of the
$W_r$ ratio are 4.5 and 0.2, respectively. The $W_r$ ratio can
reflect the degree of saturation (Str\"omgren 1948). The $W_r$ ratio
of the $\rm C~IV\lambda\lambda1548,1551$ doublet ranges from $\rm DR
= 1.0$ for completely saturated absorption to $\rm DR = 2.0$ for
completely unsaturated absorption (e.g., Sargent et al. 1988;
Steidel 1990). The boundaries of the completely saturated absorption
($\rm DR = 1.0$) and completely unsaturated absorption ($\rm DR =
2.0$) are marked in Fig. 7. Most of the absorbers of this catalog
satisfy $1.0\le W_r\lambda1548/W_r\lambda1551 \le 2.0$, occupying
nearly 71.8\% (6007/8368) of the total. About 22.0\% (1839/8368)
of the absorbers have $W_r\lambda1548/W_r\lambda1551<1.0$, and about
6.2\% (522/8368) have $W_r\lambda1548/W_r\lambda1551>2.0$. We
suspect that the $\rm C~IV\lambda\lambda1548,1551$ absorption
systems that lie outside the theoretical limits of the $W_r$ ratio
($W_r\lambda1548/W_r\lambda1551<1.0$ or
$W_r\lambda1548/W_r\lambda1551>2.0$) mainly originate from
line blending.
\begin{figure}
\vspace{3ex}\centering
\includegraphics[width=7.5 cm,height=6.5 cm]{ew-ew.ps}\vspace{3ex}
\caption{Distribution of the ratio of the rest-frame equivalent
widthes of the $\rm C~IV$ doublet. The red curve is the fitting
Gaussian with $\rm center = 1.18~and~FWHM =0.80$. The red dash lines
are the theoretical limits for completely saturated ($\rm
W_r\lambda1548/W_r\lambda1551 = 1.0$) and unsaturated ($\rm
W_r\lambda1548/W_r\lambda1551 = 2.0$) absorptions, respectively.}
\end{figure}
\section{Discussion}
In order to estimate the false positives/negatives of the $\rm C~IV$
absorption system, we wish to look at the frequency of the detected
$\rm C~IV$ absorption systems ($f_{NALs}$) as a function of
signal-to-noise ratio, which can be computed via
\begin{equation}
f_{NALs}=\lim_{\bigtriangleup SNR\rightarrow0}\frac{\bigtriangleup
N_{abs}}{\bigtriangleup N_{sdp}}
\end{equation}
where $\bigtriangleup N_{abs}$ and $\bigtriangleup N_{sdp}$ are the
count of the detected $\rm C~IV$ absorption systems and the count of
the spectral data points in signal-to-noise ratio bin
$\bigtriangleup SNR$, respectively.
The resulting $f_{NALs}$, as a function of the signal-to-noise
ratio, is displayed in Fig. 8. It exhibits a plateau in the range
of $SNR^{\lambda1548}\gtrsim4$, suggesting that the detection of
$\rm C~IV$ absorption systems is likely complete when the
signal-to-noise ratio is larger than 4.
\begin{figure}
\centering
\includegraphics[width=7cm,height=6cm]{f8.ps}
\caption{Plot of the detection frequency as a function of the
signal-to-noise ratio. The upper (red) line represents the count of
the spectral data point, the middle (blue) line represents the count
of the detected $\rm C~IV$ absorption system, and the bottom (black)
line stands for the frequency of NALs detected in this work,
calculated by equation (5).}
\end{figure}
The incompleteness of the detection of $\rm C~IV$ absorption systems
is obvious within the range of $SNR^{\lambda1548}\lesssim4$. As
suggested by Fig. 8, within this range the smaller the
signal-to-noise ratio, the more $\rm C~IV$ absorption systems tend
to be missed by our analysis. To roughly estimate the significance
of the incompleteness, we compute the missing rate ($f_{MR}$) of the
detection of $\rm C~IV$ absorption systems in several bins of the
signal-to-noise ratio via
\begin{equation}
f_{MR}=\frac{\overline{f_{NALs}}-f_{NALs}}{\overline{f_{NALs}}},
\end{equation}
where $\overline{f_{NALs}}$ is the average frequency of NALs in the
range of $SNR^{\lambda1548}>4$, and $f_{NALs}$ is the frequency of
NALs in the corresponding signal-to-noise ratio bin. The results are
presented in Table 2.
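Equations (5) and (6) amount to binned counting. A schematic sketch with hypothetical inputs (the bin edges and arrays are illustrative):

```python
import numpy as np

def detection_frequency(snr_of_systems, snr_of_pixels, edges):
    """Eq. (5): ratio of detected-system counts to spectral-pixel
    counts in each SNR bin."""
    n_abs, _ = np.histogram(snr_of_systems, bins=edges)
    n_sdp, _ = np.histogram(snr_of_pixels, bins=edges)
    return n_abs / np.maximum(n_sdp, 1)

def missing_rate(f_nals, f_mean):
    """Eq. (6): fraction of systems missed in a bin, relative to the
    plateau value f_mean measured at SNR > 4."""
    return (f_mean - f_nals) / f_mean
```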
\begin{table}
\caption{The missing rate of absorption systems with $SNR^{\lambda1548}\le 4$} \tabcolsep 2mm \centering
\begin{tabular}{ccccccc}
\hline\hline\noalign{\smallskip}
SNR bin & [2.0,2.5] & [2.5,3.0] & [3.0,3.5] & [3.5,4.0]\\
\hline\noalign{\smallskip}
$f_{MR}$&0.91&0.67&0.62&0.20\\
\hline\hline\noalign{\smallskip}
\end{tabular}
\end{table}
To refine the quasar sample used to search for $\rm
C~IV\lambda\lambda1548,1551$ absorption systems, we perform our
analysis under the condition that the spectra examined must have a
median signal-to-noise ratio greater than or equal to $4$. It is
possible that some $\rm C~IV\lambda\lambda1548,1551$ absorption
doublets that satisfy our selection criteria are imprinted in
spectra with median signal-to-noise ratios less than $4$, and these
will be missed.
To assess these possibly missed doublets, we randomly select
$100$ quasars from those located in the lower-left region of Fig. 2
(below the horizontal red line and to the left of the
vertical red line), and detect $\rm C~IV\lambda\lambda1548,1551$
absorption doublets with the same criteria described in section 2.
These quasars are listed in Table 3. $15$ $\rm
C~IV\lambda\lambda1548,1551$ absorption doublets are detected from
these quasar spectra, which are presented in Table 4. For this
randomly selected quasar sample, the redshift path computed using
Equation (4) and the frequency of NALs calculated by Equation (5)
are displayed in Figs. 9 and 10, respectively.
\begin{table}
\caption{Sources of the randomly selected quasar sample} \tabcolsep 1.1mm \centering
\begin{tabular}{cccccc}
\hline\hline\noalign{\smallskip}
SDSS NAME & PLATEID & MJD & FIBERID & $z_{\rm em}$ & SNR\\
\hline\noalign{\smallskip}
000525.86+030813.5 & 4296 & 55499 & 0908 & 2.1802 & 3.4 \\
00063.085+031327.1 & 4296 & 55499 & 0962 & 2.3788 & 3.9 \\
002059.05+030633.3 & 4300 & 55528 & 0716 & 2.1935 & 3.4 \\
004616.50+011343.0 & 3589 & 55186 & 0864 & 2.1632 & 1.5 \\
005623.89+021253.2 & 4308 & 55565 & 0740 & 2.2631 & 2.1 \\
010618.39+101247.8 & 4551 & 55569 & 0598 & 2.2872 & 1.3 \\
011927.05+000008.0 & 4227 & 55481 & 0036 & 2.3571 & 1.7 \\
013752.51+102410.6 & 4548 & 55565 & 0802 & 2.1453 & 2.7 \\
\hline\hline\noalign{\smallskip}
\end{tabular}
\\
Note---SNR is the median signal-to-noise ratio of the quasar in the
surveyed spectral region. The table is available in its entirety in
machine-readable form in the online journal.
\end{table}
\begin{table*}
\caption{The $\rm C~IV\lambda\lambda1548,1551$ absorption systems of the randomly selected quasar sample} \tabcolsep 1mm \centering
\begin{tabular}{ccccccccccccc}
\hline\hline\noalign{\smallskip}
SDSS NAME & PLATEID & MJD & FIBERID & $z_{\rm em}$ & $z_{\rm abs}$ &
$\rm W_r\lambda1548$ &$N_{\sigma\lambda1548}$& $\rm W_r\lambda1551$&$N_{\sigma\lambda1551}$& $SNR^{\lambda1548}$&$SNR^{\lambda1551}$&$\beta$\\
\hline\noalign{\smallskip}
075343.86+182204.9 & 4490 & 55629 & 0734 & 2.1708 & 1.9708 & 0.38 & 2.53 & 0.56 & 2.55 & 2.3 & 2.4 & 0.06506 \\
114931.76+360338.8 & 4653 & 55622 & 0042 & 2.2658 & 1.7910 & 0.79 & 2.39 & 1.21 & 3.67 & 2.2 & 3.4 & 0.15582 \\
014848.55+145729.2 & 4658 & 55592 & 0948 & 2.1370 & 1.8690 & 0.31 & 2.38 & 0.44 & 2.44 & 2.2 & 2.3 & 0.08907 \\
152155.41+310942.3 & 4719 & 55736 & 0322 & 2.1108 & 1.8249 & 0.88 & 5.18 & 0.39 & 2.79 & 4.4 & 2.7 & 0.09611 \\
080345.70+422136.2 & 3683 & 55178 & 0178 & 2.0877 & 1.6675 & 0.61 & 3.81 & 0.71 & 3.23 & 3.5 & 3.0 & 0.14525 \\
155717.07+163309.6 & 3922 & 55333 & 0594 & 2.3355 & 2.1045 & 0.81 & 3.12 & 0.71 & 2.84 & 2.9 & 2.7 & 0.07165 \\
081937.46+302718.3 & 4447 & 55542 & 0070 & 2.2037 & 2.0069 & 0.58 & 2.76 & 0.65 & 2.83 & 2.7 & 2.7 & 0.06331 \\
074256.10+481730.0 & 3675 & 55183 & 0520 & 2.2775 & 1.9637 & 1.14 & 3.93 & 0.99 & 3.41 & 3.6 & 3.2 & 0.10030 \\
150553.69+304300.5 & 3876 & 55245 & 0264 & 2.2329 & 1.7556 & 1.02 & 2.83 & 0.74 & 3.08 & 2.7 & 3.0 & 0.15840 \\
134259.55+340404.9 & 3856 & 55269 & 0612 & 2.2972 & 2.1021 & 0.77 & 2.75 & 0.43 & 2.39 & 2.6 & 2.3 & 0.06092 \\
024842.21-000302.1 & 4241 & 55450 & 0265 & 2.0587 & 1.8580 & 0.59 & 3.11 & 0.46 & 2.88 & 3.0 & 2.7 & 0.06776 \\
150914.76+230044.0 & 3962 & 55660 & 0610 & 2.1652 & 1.9392 & 1.04 & 2.89 & 0.79 & 2.39 & 2.7 & 2.3 & 0.07394 \\
150539.79+062612.3 & 4856 & 55712 & 0230 & 2.3698 & 2.0359 & 0.88 & 3.03 & 1.19 & 3.31 & 2.9 & 3.1 & 0.10397 \\
104647.31+382734.8 & 4634 & 55626 & 0932 & 2.2235 & 1.9783 & 1.20 & 2.86 & 0.76 & 2.38 & 2.7 & 2.2 & 0.07895 \\
094705.52+434013.8 & 4569 & 55631 & 0764 & 2.1984 & 1.9500 & 1.14 & 4.75 & 1.60 & 3.48 & 4.1 & 3.3 & 0.08067 \\
\hline\hline\noalign{\smallskip}
\end{tabular}
\\
Note---See Table 1 for the meaning of each column.
\vspace{6ex}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=7cm,height=6cm]{path_signa2.ps}
\caption{Redshift path covered by the randomly selected quasar
sample, shown as a function of $\rm SNR^{\lambda1548}$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=7cm,height=6cm]{f10.ps}
\caption{Detection frequency as a function of the signal-to-noise
ratio for the randomly selected quasar sample. See Fig. 8 for the
meaning of each solid line. The dashed lines reproduce the solid lines
of Fig. 8 in the same colors.}
\end{figure}
The spectral signal-to-noise ratio is important for detecting narrow
absorption lines. It is very difficult to distinguish true NALs from
noise in spectra with low signal-to-noise ratios, since noise
fluctuations frequently mimic or bury real narrow absorption lines. As
stated above, only 15 $\rm C~IV$ absorption systems are detected in
the spectra of the 100 randomly selected quasars, i.e., only 0.15
$\rm C~IV$ absorption systems per quasar spectrum when the median
signal-to-noise ratio is below 4. By contrast, we detect 8,368 $\rm
C~IV$ absorption systems in the spectra of the 10,121 quasars whose
median signal-to-noise ratios are greater than 4. The value of
8368/10121 is several times larger than 15/100, which demonstrates
that many real absorption lines cannot be identified in spectra with
low signal-to-noise ratios.
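The comparison above can be checked directly with the counts quoted in the text:

```python
# Detection-rate comparison: C IV systems per quasar spectrum in the
# median SNR > 4 sample versus the randomly selected low-SNR sample.

rate_high_snr = 8368 / 10121   # systems per spectrum, median SNR > 4
rate_low_snr = 15 / 100        # systems per spectrum, median SNR < 4
ratio = rate_high_snr / rate_low_snr   # roughly 5.5 times larger
```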
\section{Summary}
As the first effort in our series of works on identifying absorption
lines in quasar spectra from BOSS, we search quasars with $z_{\rm
em}\le2.4$ and identify potential intervening $\rm
C~IV\lambda\lambda1548,1551$ absorption doublets with
$W_r\lambda1548\ge0.2$ \AA. Our sample contains 10,121 quasars,
from which we identify 8,368 $\rm C~IV\lambda\lambda1548,1551$
absorption systems covering the absorption redshift range
$z_{\rm abs}=1.4544$ --- $2.2805$. Of the 10,121 quasars, 5,442 have
at least one $\rm C~IV\lambda\lambda1548,1551$ absorption doublet. We
find that about 33.7\% of the absorbers have $0.2$
\AA$\le W_r\lambda1548<0.5$ \AA,~ about 45.9\% have $0.5$
\AA$\le W_r\lambda1548<1.0$ \AA,~ about 19.2\% have $1.0$
\AA$\le W_r\lambda1548<2.0$ \AA,~ and about 1.2\% have
$W_r\lambda1548\ge2.0$ \AA. Most of the $\rm
C~IV\lambda\lambda1548,1551$ absorption doublets (72.9\%) lie within
the theoretical limits of completely saturated and unsaturated
absorption ($1.0\le W_r\lambda1548/W_r\lambda1551 \le 2.0$).
\\
\\
\acknowledgements We thank the anonymous referee for helpful
comments and suggestions. This work was supported by the National
Natural Science Foundation of China (Nos. 11363001 and 11073007),
the Guangxi Natural Science Foundation (2012jjAA10090), the
Guangzhou technological project (No. 11C62010685), and the Guangxi
University of Science and Technology research projects (No.
2013LX155).
Funding for SDSS-III has been provided by the Alfred P. Sloan
Foundation, the Participating Institutions, the National Science
Foundation, and the U.S. Department of Energy Office of Science. The
SDSS-III web site is http://www.sdss3.org/.
SDSS-III is managed by the Astrophysical Research Consortium for the
Participating Institutions of the SDSS-III Collaboration including
the University of Arizona, the Brazilian Participation Group,
Brookhaven National Laboratory, Carnegie Mellon University,
University of Florida, the French Participation Group, the German
Participation Group, Harvard University, the Instituto de
Astrofisica de Canarias, the Michigan State/Notre Dame/JINA
Participation Group, Johns Hopkins University, Lawrence Berkeley
National Laboratory, Max Planck Institute for Astrophysics, Max
Planck Institute for Extraterrestrial Physics, New Mexico State
University, New York University, Ohio State University, Pennsylvania
State University, University of Portsmouth, Princeton University,
the Spanish Participation Group, University of Tokyo, University of
Utah, Vanderbilt University, University of Virginia, University of
Washington, and Yale University.
\section{Introduction}
Several frequentist testing procedures for multivariate locations are available in the literature, both parametric and non-parametric. The most well-known parametric procedure is Hotelling's $T^2$-test, which is based on the multivariate mean vector and the covariance matrix and relies on the assumption of multivariate normality. This technique performs well if the assumption of multivariate normality is nearly correct, but suffers heavily otherwise, or in the presence of outliers. Non-parametric and robust alternatives based on signs and ranks have been quite popular over the years (\cite{oja2004multivariate}).
The notions of signs and ranks are based on the \enquote{ordering} of the data points, but in the multivariate setting there is no objective basis for ordering. The notions are generalized to higher dimensions using $\ell_1$-objective functions (see Section \ref{3S2}). The one-sample location problem has the following setup. Suppose that we have $n$ observations $Y_1,\dots,Y_n \in \mathbb{R}^k$ from a distribution $P(y-\theta)$, where $P(\cdot-\theta)$ is a $k$-variate continuous distribution centered at $\theta=(\theta_1,\dots,\theta_k)^T$. Our objective is to test the hypothesis
\begin{equation}
H_0: \theta=\theta_0\quad \text{vs.}\quad H_1:\theta \neq \theta_0.
\end{equation}
The existing non-parametric test procedures are based on the spatial sign vector $U$, the multivariate spatial rank $R$, and the multivariate spatial signed rank $Q$, defined respectively by
\begin{align}\label{e1}
U(y)=& \begin{cases}
\Vert y \Vert_2^{-1}y,\quad y\neq 0,\\
0,\qquad\qquad y=0,
\end{cases}\\
\label{e2}
R(y,Y)=\ &\frac{1}{n}\sum_{i=1}^nU(y-Y_i),\\
\label{e3}
Q(y)=&\ \frac{1}{2}[R(y,Y)+R(y,-Y)].
\end{align}
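For concreteness, the score functions \eqref{e1}--\eqref{e3} can be implemented directly; the following sketch (an illustration, not taken from the cited works) operates on an $(n,k)$ NumPy array of observations.

```python
import numpy as np

# Direct implementations of the spatial sign U (Eq. 2), spatial rank R (Eq. 3),
# and spatial signed rank Q (Eq. 4); Y is an (n, k) array of observations.

def spatial_sign(y):
    """U(y) = y / ||y||_2, with U(0) = 0."""
    norm = np.linalg.norm(y)
    return y / norm if norm > 0 else np.zeros_like(y)

def spatial_rank(y, Y):
    """R(y, Y) = n^{-1} sum_i U(y - Y_i)."""
    return np.mean([spatial_sign(y - Yi) for Yi in Y], axis=0)

def spatial_signed_rank(y, Y):
    """Q(y) = (R(y, Y) + R(y, -Y)) / 2."""
    return 0.5 * (spatial_rank(y, Y) + spatial_rank(y, -Y))
```

Note that $Q(0)=0$ identically, since $U(-y)=-U(y)$.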
The estimator of the location associated with spatial signs in \eqref{e1} is the spatial median
\begin{equation}
\hat{\theta}_n=\argmin_{\theta \in \mathbb{R}^k}\mathbb{P}_n\Vert Y-\theta\Vert_2,
\end{equation}
where $\mathbb{P}_n=n^{-1}\sum_{i=1}^n\delta_{Y_i}$ is the empirical measure. The objective functions \eqref{e2} and \eqref{e3} give rise to multivariate Hodges-Lehmann estimators (\cite{oja2004multivariate}). The p-values of these multivariate sign and rank-based tests rely on a limiting chi-square distribution of the test statistics. Provided the underlying distribution is elliptically symmetric, i.e., its density is of the form
$$
f(y-\theta)=\vert \Sigma \vert ^{-1/2}g((y-\theta)^T\Sigma^{-1}(y-\theta)),
$$
with symmetry center $\theta$ and a positive definite scatter matrix $\Sigma$, its center of symmetry, location parameter, mean (when it exists), and spatial median coincide. In this paper, we construct Bayesian non-parametric testing procedures for multivariate locations using the spatial median. Such a procedure is attractive because it provides a credible set for the spatial median, so a testing criterion can be formulated without depending on asymptotics. In other words, here we focus on objective functions of the type \eqref{e1} and propose a non-parametric Bayesian testing procedure. We assume that the observations are drawn from a random distribution $P$, and we put a Dirichlet process prior (details given in Section \ref{3S3}) on it. From $P$, we can infer about its median functional
\begin{equation}
\theta(P)=\argmin_{\theta \in \mathbb{R}^k}P(\Vert Y-\theta \Vert_2-\Vert Y \Vert_2),
\end{equation}
where $Pf=\int f \,\mathrm{d}P$. The exact posterior distribution of $\theta(P)$ can be obtained easily by posterior simulation. Thus, we can form a credible region for $\theta(P)$, and our decision is based on whether the value $\theta_0$ falls into this credible set. For elliptically symmetric distributions, this testing procedure effectively studies the one-sample location problem described above, but it can be used for a wider range of distributions $P$, for which we study the null hypothesis $H_0:\theta(P)=\theta_0$. We show that our testing procedure is asymptotically distribution-free and further compute the asymptotic power function under Pitman (contiguous) alternatives along possible directions. The two-sample test can be formulated in a similar way.
The rest of this paper is organized as follows. In Section \ref{3S2}, we give an overview of the existing multivariate testing procedures. In Section \ref{3S3}, we describe our Bayesian non-parametric test procedures. Section \ref{3S4} gives the local asymptotic power under contiguous alternatives and Section \ref{3S5} presents a simulation study. All the proofs are given in Section \ref{3S6}.
\section{Overview of existing tests}\label{3S2}
We begin this section by describing existing non-parametric testing procedures for one-sample location problems, and later move on to two-sample and several-sample problems. Let $Y_1,\dots,Y_n \in \mathbb{R}^k$ be $n$ observations from a $k$-variate probability distribution $P$. According to \cite{sirkia2007multivariate}, the non-parametric testing methods can be classified as based on the multivariate spatial sign function $U$, the multivariate spatial rank $R$, and the multivariate spatial signed rank $Q$, defined in \eqref{e1}--\eqref{e3}.
Writing $T(Y)$ as a generic notation for the score functions above, the test is based on the statistic $n^{-1}\sum_{i=1}^nT(Y_i)$. Under $H_0$, $n^{-1/2}\sum_{i=1}^n T(Y_i) \rightsquigarrow \mathrm{N}_k(0, \Sigma)$, where $\Sigma=P\{T(Y)T(Y)^T\}$. The usual estimator for $\Sigma$ is $\hat{\Sigma}=n^{-1}\sum_{i=1}^nT(Y_i)T(Y_i)^T$.
The assumption of elliptical symmetry is needed to decide the appropriate cut-off for constructing the test procedure. Under $H_0$,
$$
Q^2=n\Big\Vert \hat{\Sigma}^{-1/2}\frac{1}{n}\sum_{i=1}^n T(Y_i)\Big\Vert^2 \rightsquigarrow \chi_k^2,
$$
where $\rightsquigarrow$ denotes convergence in distribution, and $\chi_k^2$ denotes a chi-square distribution with $k$ degrees of freedom (\cite{sirkia2007multivariate}). For elliptically symmetric distributions, $Q^2$ is strictly distribution-free (\cite{oja2004multivariate}). An approximate p-value can be obtained from the above chi-square distribution. For small sample sizes, a conditionally distribution-free p-value can be obtained under the assumption of directional symmetry (under which $(Y-\theta)/\Vert Y-\theta\Vert_2$ has the same distribution as $(\theta-Y)/\Vert \theta-Y\Vert_2$). This p-value can be obtained as $\mathrm{E}_\delta[\mathbbm{1}\{Q_\delta^2 \geq Q^2\}]$, where $\mathrm{E}_\delta$ is the expectation under the uniform distribution of $\delta$ over the $2^n$ $n$-dimensional sign vectors with each component being $+1$ or $-1$, and $Q_\delta^2$ is the value of the test statistic for the data points $\delta_1Y_1,\dots,\delta_nY_n$ (\cite{oja2004multivariate}).
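As an illustrative sketch (not from the cited works), the statistic $Q^2$ with the spatial sign score $T(Y_i)=(Y_i-\theta_0)/\Vert Y_i-\theta_0\Vert_2$ can be computed as follows and compared with a $\chi^2_k$ quantile.

```python
import numpy as np

# One-sample score test statistic Q^2 = n || Sigma_hat^{-1/2} Tbar ||^2
# with the spatial sign score T(Y_i) = (Y_i - theta0) / ||Y_i - theta0||_2.
# Under H0 it is approximately chi-square with k degrees of freedom.

def sign_Q2(Y, theta0):
    T = Y - theta0
    T = T / np.linalg.norm(T, axis=1, keepdims=True)   # spatial signs
    n, k = T.shape
    Tbar = T.mean(axis=0)
    Sigma_hat = T.T @ T / n                            # n^{-1} sum T_i T_i^T
    return float(n * Tbar @ np.linalg.solve(Sigma_hat, Tbar))
```

For $k=2$, `sign_Q2(Y, theta0)` would be compared with the 95th percentile of $\chi^2_2$ (about 5.99) at level $0.05$.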
The one-sample testing procedures have been naturally extended to two samples. Suppose that we have two independent random samples $Y_{1}^{(j)},\dots,Y_{n_j}^{(j)}$ from $k$-variate distributions $P(\cdot-\theta^{(j)})$, $j=1,2$. We test the hypothesis
$$
H_0: \theta^{(1)}=\theta^{(2)},\quad \text{against}\ H_1: \theta^{(1)}\neq \theta^{(2)}.
$$
\cite{sirkia2007multivariate} developed a testing procedure using the general score function $T(Y)$ based on the following \textit{inner standardization} approach. First, a $k\times k$ matrix $H$ and a $k$-vector $h$ have to be found such that, for $Z_{i}^{(j)}=H(Y_{i}^{(j)}-h),\ i=1,\dots,n_j,\ j=1,2$,
\begin{align*}
\frac{1}{n}\sum_{j=1}^2\sum_{i=1}^{n_j}T(Z_i^{(j)})=& 0,\\
\frac{k}{n}\sum_{j=1}^2\sum_{i=1}^{n_j}T(Z_{i}^{(j)})T(Z_{i}^{(j)})^T=& \bigg\{\frac{1}{n}\sum_{j=1}^2\sum_{i=1}^{n_j}\Vert T(Z_i^{(j)})\Vert_2^2\bigg\}I_k,
\end{align*}
where $n=n_1+n_2$, and $I_k$ denotes the identity matrix of order $k\times k$. The test statistic has the form
$$
Q^2= k\frac{\sum_{j=1}^2 n_j\Vert \frac{1}{n_j}\sum_{i=1}^{n_j}T(Z_{i}^{(j)})\Vert_2^2}{\frac{1}{n}\sum_{j=1}^2\sum_{i=1}^{n_j}\Vert T(Z_{i}^{(j)})\Vert_2^2}.
$$
It has been shown that $Q^2$ has a limiting chi-square distribution with $k$ degrees of freedom. Thus, for large samples, a p-value can be constructed using the quantiles of the chi-square distribution. For smaller samples, an approximate p-value can be obtained using a conditionally distribution-free \textit{permutation test} version (\cite{sirkia2007multivariate}). This approach has been extended to a general number $c$ of samples as well.
\section{Bayesian Non-parametric Tests}
\label{3S3}
\subsection{One-sample Problem}
Suppose that we have $n$ observations $Y_1,\dots,Y_n \in \mathbb{R}^k$ from a $k$-dimensional distribution $P$. We choose a non-parametric Bayesian approach, i.e., we impose a prior on the underlying random distribution $P$ and form a credible set based on the posterior distribution of the spatial median functional
\begin{equation}
\theta(P)=\argmin_{\theta \in \mathbb{R}^k}P\{\Vert Y-\theta \Vert_2-\Vert Y \Vert_2\}.
\end{equation}
The most commonly used prior on $P$ is a Dirichlet process prior with centering measure $\alpha$ ($\mathrm{DP}(\alpha)$) (see Chapter 4, \cite{ghosal2017fundamentals}). A Dirichlet process prior can be alternatively denoted as $\mathrm{DP}(MG)$, where $M=\vert \alpha \vert$, and $\bar{\alpha}=\alpha/M$ has cumulative distribution function $G$. The notations $\mathrm{DP}(\alpha)$ and $\mathrm{DP}(MG)$ will be used interchangeably in this paper.
The process $\mathrm{DP}(\alpha)$ is a conjugate prior for i.i.d. observations from $P$, and the posterior distribution of $P$ given $Y_1,\dots,Y_n$ is $\mathrm{DP}(\alpha+n\mathbb{P}_n)$. The exact posterior distribution of $\theta(P)$ cannot be obtained analytically, but posterior samples can be drawn via the stick-breaking construction of a Dirichlet process (Chapter 4, \cite{ghosal2017fundamentals}). If $\xi_1,\xi_2,\dots \overset{iid}{\sim} \bar{\alpha}$ and $V_1,V_2,\dots \overset{iid}{\sim} \mathrm{Be}(1,M)$ are independent random variables and $W_j=V_j\prod_{l=1}^{j-1}(1-V_l)$, then $P=\sum_{j=1}^\infty W_j\delta_{\xi_j}\sim \mathrm{DP}(M\bar\alpha)$. The posterior distribution of $P$ given $Y_1,\dots,Y_n$ takes the same form, with $V_1,V_2,\dots \overset{iid}{\sim}\mathrm{Be}(1,M+n)$ and the support points drawn i.i.d. from the posterior centering measure $(\alpha+n\mathbb{P}_n)/(M+n)$. For computation, we use a truncated approximation $P_N=\sum_{j=1}^NW_j\delta_{\xi_j}$ to the stick-breaking representation. Thus, a posterior $100(1-\alpha)\%$ credible region can be formed using the following steps.
\begin{itemize}
\item Draw $V_j,\ j=1,\dots,N-1 \stackrel{iid}{\sim} \mathrm{Be}(1,\ M+n)$ and $V_N=1$. Then, we calculate the stick-breaking weights as $W_1=V_1,\ W_j=V_j\prod_{l=1}^{j-1}(1-V_l)$, $j=2,\dots,N$.
\item With probability $M/(M+n)$, draw $Y_{jb}$ from $G$, and with probability $n/(M+n)$, draw $Y_{jb}$ from $\mathbb{P}_n$, independently for $j=1,\dots,N$ and $b=1,\dots,B$.
\item Draw posterior samples $\hat{\theta}_b,\ b=1,\dots,B$
\begin{equation*}
\hat{\theta}_b=\argmin_{\theta}\sum_{j=1}^N W_{jb}\Vert Y_{jb}-\theta \Vert_2.
\end{equation*}
\item Compute the posterior mean $\bar{\theta}=B^{-1} \sum_{b=1}^B \hat{\theta}_b$ and the posterior covariance matrix $S=B^{-1}\sum_{b=1}^B(\hat{\theta}_b-\bar{\theta})(\hat{\theta}_b-\bar{\theta})^{\prime}$.
\item A $100(1-\alpha)\%$ credible set for $\theta(P)$ is then given by
\begin{equation*}
C(Y_1,\dots,Y_n; \alpha)=\{\theta:(\theta-\bar{\theta})^{\prime}S^{-1}(\theta-\bar{\theta})\leq r_{1-\alpha}\},
\end{equation*}
where $r_{1-\alpha}$ is the $100(1-\alpha)$th percentile of $(\hat{\theta}_b-\bar{\theta})^{\prime}S^{-1}(\hat{\theta}_b-\bar{\theta}),\ b=1,\dots, B$.
\item We reject $H_0$ if $\theta_0 \notin C(Y_1,\dots,Y_n;\alpha)$.
\end{itemize}
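The steps above can be sketched in code. The version below takes the Bayesian bootstrap limit ($M\to 0$), so the posterior weights are $\mathrm{Dirichlet}(1,\dots,1)$ over the observations, and each posterior draw of $\theta(P)$ is a weighted spatial median computed by Weiszfeld iteration; function names, iteration counts, and tolerances are illustrative choices, not values from the paper.

```python
import numpy as np

def weighted_spatial_median(Y, w, n_iter=200, tol=1e-8):
    """argmin_theta sum_j w_j ||Y_j - theta||_2, by Weiszfeld iteration."""
    theta = np.average(Y, axis=0, weights=w)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(Y - theta, axis=1), tol)  # avoid /0
        coef = w / d
        theta_new = (coef[:, None] * Y).sum(axis=0) / coef.sum()
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new
        theta = theta_new
    return theta

def posterior_median_draws(Y, B=500, seed=0):
    """B Bayesian-bootstrap posterior draws of the spatial median theta(P)."""
    rng = np.random.default_rng(seed)
    n, k = Y.shape
    draws = np.empty((B, k))
    for b in range(B):
        w = rng.dirichlet(np.ones(n))      # limit M -> 0 of the DP posterior
        draws[b] = weighted_spatial_median(Y, w)
    return draws

def reject_H0(draws, theta0, alpha=0.05):
    """True if theta0 falls outside the 100(1-alpha)% elliptical credible set."""
    thbar = draws.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(draws.T))
    dists = np.einsum("bi,ij,bj->b", draws - thbar, S_inv, draws - thbar)
    r = np.quantile(dists, 1 - alpha)
    d0 = (theta0 - thbar) @ S_inv @ (theta0 - thbar)
    return d0 > r
```

Here `reject_H0(posterior_median_draws(Y), theta0)` implements the decision rule in the last bullet.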
The non-informative limit as $M\to 0$ of the posterior of $P$, namely $\mathrm{DP}(n\mathbb{P}_n)$, is called the Bayesian bootstrap distribution. Its centering measure is $\mathbb{P}_n$, and a random distribution generated from it is supported on the observation points. If we choose this non-informative limit, we do not need to generate posterior samples from the full Dirichlet process: it suffices to draw $n$ independently and identically distributed (i.i.d.) observations from the exponential distribution with parameter 1 and normalize them to obtain the weights, which saves some computational cost.
\begin{theorem}{\label{3thm1}}
The one-sample Bayesian non-parametric test for $H_0:\theta(P)=\theta_0$ is asymptotically distribution-free, i.e., $$
P_{\theta_0}(\theta_0 \in C(Y_1,\dots,Y_n;\alpha)) \rightarrow 1-\alpha,
$$
for any $P_{\theta_0}$ such that $\theta(P_{\theta_0})=\theta_0$.
\end{theorem}
As already mentioned, the testing procedure is constructed using only the posterior samples, without relying on any asymptotic approximation.
\subsection{Two Sample Problem}
The Bayesian non-parametric testing procedure for the two-sample location problem can be easily constructed by generalizing the one-sample procedure. Suppose that we have $n_1$ observations $Y_1^{(1)},\dots,Y_{n_1}^{(1)} \in \mathbb{R}^k$ from a distribution $P^{(1)}$, and $n_2$ observations $Y_1^{(2)},\dots,Y_{n_2}^{(2)}\in \mathbb{R}^k$ from $P^{(2)}$. We want to test the hypothesis
$$
H_0: \theta(P^{(1)})-\theta(P^{(2)})=0\quad \text{against}\quad H_1: \theta(P^{(1)})-\theta(P^{(2)}) \neq 0.
$$
As we have previously mentioned, if $P^{(1)}=P(\cdot-\theta^{(1)})$ and $P^{(2)}=P(\cdot-\theta^{(2)})$ are elliptically symmetric distributions, then this problem boils down to the two-sample location problem of testing $H_0:\theta^{(1)}-\theta^{(2)}=0$ against $H_1:\theta^{(1)}-\theta^{(2)} \neq 0$. We put a $\mathrm{DP}(MG)$ prior on both $P^{(1)}$ and $P^{(2)}$, for some $M>0$ and $G$. Thus $P^{(1)}$ and $P^{(2)}$ have the stick-breaking representations $P^{(1)}=\sum_{j=1}^\infty W_j^{(1)}\delta_{\xi_j^{(1)}}$ and $P^{(2)} = \sum_{j=1}^\infty W_j^{(2)}\delta_{\xi_j^{(2)}}$, respectively, where both $W_j^{(1)},\ j=1,2,\dots$ and $W_j^{(2)},\ j=1,2,\dots$ are constructed from stick-breaking variables drawn from $\mathrm{Be}(1,M)$, and $\xi_j^{(1)},\ j=1,2,\dots$ and $\xi_j^{(2)},\ j=1,2,\dots$ are i.i.d. samples from $G$. We truncate both stick-breaking series at $N$ and construct a $100(1-\alpha)\%$ posterior credible set through the following steps.
\begin{itemize}
\item Draw $V_j^{(1)},\ j=1,\dots,N-1 \stackrel{iid}{\sim} \mathrm{Be}(1,\ M+n_1)$ and $V_N^{(1)}=1$. Then the stick-breaking weights are $W_1^{(1)}=V_1^{(1)},\ W_j^{(1)}=V_j^{(1)}\prod_{l=1}^{j-1}(1-V_l^{(1)})$, $j=2,\dots,N$. Similarly draw $V_j^{(2)}$ from $\mathrm{Be}(1,M+n_2)$ and construct $W_j^{(2)}$ accordingly.
\item Next, with probability $M/(M+n_l)$, draw $Y_{1b}^{(l)},\dots,Y_{Nb}^{(l)}$ from $G$, and with probability $n_l/(M+n_l)$, draw $Y_{1b}^{(l)},\dots,Y_{Nb}^{(l)}$ from $\mathbb{P}_{n_l}$, $l=1,2$.
\item Draw posterior samples $\hat{\theta}_b^{(1)}$ and $\hat{\theta}_b^{(2)},\ b=1,\dots,B$,
\begin{align*}
\hat{\theta}_b^{(1)}=&\argmin_{\theta}\sum_{j=1}^N W_{jb}^{(1)}\Vert Y_{jb}^{(1)}-\theta \Vert_2,\\
\hat{\theta}_b^{(2)}=&\argmin_{\theta}\sum_{j=1}^N W_{jb}^{(2)}\Vert Y_{jb}^{(2)}-\theta \Vert_2.
\end{align*}
\item Compute $\bar{\theta}^{(l)}=B^{-1} \sum_{b=1}^B \hat{\theta}_b^{(l)}$ and $S^{(l)}=B^{-1}\sum_{b=1}^B(\hat{\theta}_b^{(l)}-\bar{\theta}^{(l)})(\hat{\theta}_b^{(l)}-\bar{\theta}^{(l)})^{\prime}$, for $l=1,2$.
\item A $100(1-\alpha)\%$ credible set for $\theta(P_1)-\theta(P_2)$ is then given by
\begin{equation}
\begin{split}
C(Y_1^{(1)},\dots,Y_{n_1}^{(1)},Y_1^{(2)},\dots, Y_{n_2}^{(2)};\alpha)=\{\theta_1-\theta_2:(\theta_1-\theta_2-\bar{\theta}^{(1)}+\bar{\theta}^{(2)})^{\prime}\\
(S^{(1)}+S^{(2)})^{-1}(\theta_1-\theta_2- \bar{\theta}^{(1)}+\bar{\theta}^{(2)})\leq r_{1-\alpha}\},
\end{split}
\end{equation}
where $r_{1-\alpha}$ is the $100(1-\alpha)$th percentile of $(\hat{\theta}_b^{(1)}-\hat{\theta}_b^{(2)}-\bar{\theta}^{(1)}+\bar{\theta}^{(2)})^{\prime}(S^{(1)}+S^{(2)})^{-1}(\hat{\theta}_b^{(1)}-\hat{\theta}_b^{(2)}-\bar{\theta}^{(1)}+\bar{\theta}^{(2)}),\ b=1,\dots, B$.
\item We reject $H_0$ if $0 \notin C(Y_1^{(1)},\dots,Y_{n_1}^{(1)},Y_1^{(2)},\dots,Y_{n_2}^{(2)};\alpha)$.
\end{itemize}
Again, we can consider the non-informative limit of the posterior Dirichlet processes, and use the Bayesian bootstrap to form the credible region.
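Given paired posterior draws of $\theta(P^{(1)})$ and $\theta(P^{(2)})$ (generated, e.g., by the Bayesian bootstrap as in the one-sample case), the decision rule above can be sketched as follows; this is an illustration, with the draw arrays assumed to be of shape $B\times k$.

```python
import numpy as np

# Two-sample decision rule: reject H0 if 0 lies outside the elliptical
# credible set for theta(P1) - theta(P2) built from paired posterior draws
# (B x k arrays draws1, draws2, one pair per posterior replicate b).

def reject_two_sample(draws1, draws2, alpha=0.05):
    diff = draws1 - draws2
    dbar = diff.mean(axis=0)                       # thetabar1 - thetabar2
    S_inv = np.linalg.inv(np.cov(draws1.T) + np.cov(draws2.T))  # (S1+S2)^{-1}
    dists = np.einsum("bi,ij,bj->b", diff - dbar, S_inv, diff - dbar)
    r = np.quantile(dists, 1 - alpha)
    return dbar @ S_inv @ dbar > r                 # distance of 0 from center
```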
\begin{theorem}{\label{3thm2}}
The two-sample Bayesian non-parametric test is asymptotically distribution-free, i.e., for $0 < \alpha <1$ and $\theta_0 \in \mathbb{R}^k$,
$$
P_{\theta_0}^{(1)}P_{\theta_0}^{(2)}(0 \in C(Y_1^{(1)},\dots,Y_{n_1}^{(1)},Y_1^{(2)},\dots,Y_{n_2}^{(2)};\alpha)) \rightarrow 1-\alpha,
$$
for any $P_{\theta_0}^{(1)},\ P_{\theta_0}^{(2)}$ such that $\theta(P_{\theta_0}^{(1)})=\theta(P_{\theta_0}^{(2)})=\theta_0$.
\end{theorem}
\section{Asymptotic Power under Contiguous Alternatives}
\label{3S4}
In this section, we analyze the local asymptotic power of the proposed Bayesian non-parametric tests, i.e., the limiting power under a sequence of alternatives converging to the null value. For the one-sample problem, we consider differentiable in quadratic mean (DQM) models $\mathcal{P}=\{p_\theta=\mathrm{d}P_\theta/\mathrm{d}\mu: \theta \in \mathbb{R}^k\}$, i.e., there exists a vector-valued measurable function $\dot{\ell}_\theta: \mathbb{R}^k\rightarrow \mathbb{R}^k$ such that, as $h\rightarrow 0$,
$$
\int \Big (\sqrt{p_{\theta+h}}-\sqrt{p_\theta}-\frac{1}{2}h^T\dot{\ell}_\theta\sqrt{p_\theta}\Big)^2\mathrm{d}\mu=o(\Vert h \Vert_2^2).
$$
We consider shrinking alternatives of the form
\begin{equation}\label{3eq11}
H_{1n}: \theta=\theta_0+ \frac{h}{\sqrt{n}},
\end{equation}
for models $P_\theta \in \mathcal{P}$. Then we derive the limiting power for the sequence of distributions $P_{\theta_0+h/\sqrt{n}} \in \mathcal{P}$. As a consequence of the DQM condition, the models $P_{\theta_0+h/\sqrt{n}}^n$ satisfy the locally asymptotically normal (LAN) condition, i.e., there exist a matrix $I_\theta$ and random vectors $\Delta_{n,\theta}\rightsquigarrow \mathrm{N}_k(0,I_\theta)$ such that, for every converging sequence $h_n \rightarrow h$,
\begin{equation}\label{eq312}
\log\frac{\mathrm{d}P_{\theta+h_n/\sqrt{n}}^n}{\mathrm{d}P_{\theta}^n} =h^T \Delta_{n,\theta}-\frac{1}{2}h^T I_\theta h+o_{P_{\theta}^n}(1).
\end{equation}
In this context, specifically $\Delta_{n,\theta}=n^{-1/2}\sum_{i=1}^n \dot{\ell}_\theta(Y_i)$, and $I_\theta= P_\theta \dot{\ell}_\theta \dot{\ell}_\theta^T$. The next theorem gives the limiting power for the one-sample test under a sequence of alternatives of the form $H_{1n}$ for the DQM models. Before stating the theorem, we introduce some notation. Define
\begin{align}
U_{\theta,P}=&P\bigg( \frac{(Y-\theta)(Y-\theta)^T}{\Vert Y-\theta \Vert_2^2}\bigg)\\
V_{\theta,P}=&P\bigg\{ \frac{1}{\Vert Y-\theta \Vert_2}\bigg(I_k -\frac{(Y-\theta)(Y-\theta)^T}{\Vert Y-\theta \Vert_2^2}\bigg)\bigg\}.
\end{align}
Let $P^\star$ be the true distribution of $Y$, i.e., the truth of $P$, and $\theta^\star$ be the spatial median for the true distribution $P^\star$, i.e., $\theta^\star=\theta(P^\star)$.
\begin{theorem}\label{3thm3}
For a sequence of shrinking alternatives of the form \eqref{3eq11}, i.e., under a sequence of differentiable in quadratic mean (DQM) models $P_{\theta_0+h/\sqrt{n}} \in \mathcal{P}$, the asymptotic power of the one-sample Bayesian non-parametric test for $H_0:\theta(P)=\theta_0$ is given by $1-F_{\chi^2}(\chi^2_{k;\alpha};k,\delta_1^\prime(V_{\theta_0,P_{\theta_0}}^{-1} U_{\theta_0,P_{\theta_0}} V_{\theta_0,P_{\theta_0}}^{-1})^{-1}\delta_1)$, where
\begin{equation}
\delta_1=P_{\theta_0}\bigg(-V_{\theta_0,P_{\theta_0}}^{-1} \frac{Y-\theta_0}{\Vert Y-\theta_0 \Vert_2}\dot{\ell}_{\theta_0}^T I_{\theta_0}^{-1}h\bigg),
\end{equation}
and $F_{\chi^2}(x;k,\delta)$ is the CDF of a non-central chi-square distribution with degrees of freedom $k$ and non-centrality parameter $\delta$, with $\chi^2_{k;\alpha}$ being the $100(1-\alpha)$th percentile of a $\chi^2_k$ distribution.
\end{theorem}
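The limiting power can be evaluated numerically. The sketch below (assuming \texttt{scipy} is available; not part of the paper) computes the probability that a non-central chi-square variable with $k$ degrees of freedom and non-centrality $\delta$ exceeds the central critical value $\chi^2_{k;\alpha}$, which is the rejection probability under the local alternative.

```python
from scipy import stats  # assumed available for chi-square routines

# Rejection probability under the local alternative: the chance that a
# non-central chi-square(k, delta) variable exceeds the central chi-square
# critical value chi^2_{k;alpha}.

def asymptotic_power(k, delta, alpha=0.05):
    cutoff = stats.chi2.ppf(1 - alpha, df=k)
    return stats.ncx2.sf(cutoff, df=k, nc=delta)
```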
For the two sample problem, we again consider DQM models $P^{(1)}_{\theta_0+h_1/\sqrt{n_1}}$, $P^{(2)}_{\theta_0+h_2/\sqrt{n_2}} \in \mathcal{P}$, i.e., the contiguous alternatives are of the form
\begin{equation}\label{eq316}
H_{1n}:\theta_{n_j}^{(j)}=\theta_0+ \frac{h_j}{\sqrt{n_j}},\ j=1,2,
\end{equation}
with $n=n_1+n_2$, such that $n_1/n \to \lambda$, and $n_2/n \to 1-\lambda$. The following theorem gives the limiting power of the two-sample test under contiguous alternatives of the form \eqref{eq316}, and the notations from Theorem \ref{3thm3} directly translate to the next theorem.
\begin{theorem}\label{3thm4}
For a sequence of shrinking alternatives of the form \eqref{eq316}, i.e., for a sequence of DQM models $P^{(1)}_{\theta_0+h_1/\sqrt{n_1}},\ P^{(2)}_{\theta_0+h_2/\sqrt{n_2}}\in \mathcal{P}$, the asymptotic power of the two-sample Bayesian non-parametric test for testing $H_0: \theta(P^{(1)})=\theta(P^{(2)})=\theta_0$ for $\theta_0 \in \mathbb{R}^k$, is given by
$$
1-F_{\chi^2}(\chi^2_{k;\alpha}; k, \delta_2^\prime (V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}U_{\theta_0,P_{\theta_0}^{(1)}}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}+V_{\theta_0,P_{\theta_0}^{(2)}}^{-1}U_{\theta_0,P_{\theta_0}^{(2)}}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1})^{-1}\delta_2),
$$
where
\begin{equation}
\begin{split}
\delta_2=\frac{1}{\sqrt{\lambda}}P_{\theta_0}^{(1)}\{-& V_{\theta_0,P_{\theta_0}^{(1)}}^{-1} \frac{ Y^{(1)}-\theta_0 }{\Vert Y^{(1)}-\theta_0 \Vert_2} \dot{\ell_{\theta_0}^{(1)}}^{T} {I_{\theta_0}^{(1)}}^{-1}h_1\}\\ & +\frac{1}{\sqrt{1-\lambda}}P_{\theta_0}^{(2)}\{-V_{{\theta_0},P_{\theta_0}^{(2)}}^{-1} \frac{ Y^{(2)}-\theta_0 }{\Vert Y^{(2)}-\theta_0 \Vert_2}\dot{\ell_{\theta_0}^{(2)}}^{T} {I_{\theta_0}^{(2)}}^{-1}h_2\},
\end{split}
\end{equation}
for any $\theta_0\in \mathbb{R}^k$.
\end{theorem}
\section{Simulation Study}
\label{3S5}
We perform a simulation study to demonstrate the finite-sample performance of the proposed one-sample and two-sample Bayesian non-parametric tests. We compare our tests with Hotelling's $T^2$-test and the spatial sign and rank tests. The underlying distributions are bivariate Gaussian, bivariate $t$ (both elliptically symmetric), and bivariate gamma (asymmetric). The bivariate gamma distribution is constructed using a Gaussian copula (\cite{xue2000multivariate}). To describe the construction briefly, let $Y_1,\dots,Y_k$ be $k$ univariate $\mathrm{Ga}(s,r)$ random variables with cumulative distribution functions (CDFs) and probability density functions (PDFs) denoted by $F_j$ and $f_j$, $j=1,\dots,k$. The joint density of $Y=(Y_1,\dots,Y_k)$ is then given by
$$
g(y,s,r,V)=c_\phi\{F_1(y_1),\dots,F_k(y_k) \vert V \}\prod_{j=1}^k f_j(y_j,s,r),
$$
where $c_\phi(\cdot\vert V)$ denotes the density of the $k$-dimensional Gaussian copula. For comparison, we choose a general version of Hotelling's $T^2$-test, where the Gaussian assumption is relaxed to the existence of second moments; here the p-value is based on a chi-square approximation instead of the usual $F$-distribution. For the one-sample test, we consider a sample size of $n=100$, and for the two-sample test, we choose $n_1=100$, $n_2=90$. We calculate the power, i.e., the proportion of times the null hypothesis is rejected out of 2000 replications. The location parameters are chosen to show a good range of powers, and the scatter matrices are chosen to be $I_2$. For our testing method, we choose a $\mathrm{DP}(\alpha)$ prior with $\alpha=2\times \mathrm{N}_2(0,10I_2)$.
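The copula construction can be sketched as follows for the bivariate case (assuming \texttt{scipy} is available; the shape, rate, and copula correlation are illustrative choices): draw correlated standard normals, transform them to uniforms with the normal CDF, then apply the $\mathrm{Ga}(s,r)$ quantile function to each margin.

```python
import numpy as np
from scipy import stats  # assumed available for the normal CDF and gamma quantile

# Bivariate gamma margins Ga(s, r) coupled by a Gaussian copula with
# correlation rho; returns an (n, 2) array of samples.

def gaussian_copula_gamma(n, s, r, rho, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    Z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    U = stats.norm.cdf(Z)                    # uniform margins with dependence
    return stats.gamma.ppf(U, a=s, scale=1.0 / r)
```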
Tables \ref{3tab1} and \ref{3tab2} show the power values. Our test procedures attain the nominal level $0.05$ and outperform the other procedures in most cases. When the underlying distributions are not Gaussian, our method performs better than the other methods, especially compared with Hotelling's $T^2$-test.
\begin{table}[t]
\begin{center}
\begin{tabular}{c|cccc}
\hline
$\theta$ & NPBayes & Sign Test & Rank Test & Hotelling's $T^2$ \\
\hline
& \multicolumn{4}{c}{Bivariate Gaussian Distribution}\\
\hline
(0,\ 0) & 0.050 & 0.046 & 0.051 & 0.055\\
(0.05,\ 0.05) & 0.139 & 0.086 & 0.084 & 0.099\\
(0.1,\ 0.05) & 0.169 & 0.125 & 0.141 & 0.156\\
(0.1,\ -0.1) & 0.221 & 0.188 & 0.213 & 0.234 \\
\hline
& \multicolumn{4}{c}{Bivariate $t_1$ Distribution}\\
\hline
(0,\ 0) & 0.050 & 0.053 & 0.041 & 0.02 \\
(0.05,\ 0.05) & 0.174 & 0.058 & 0.053 & 0.025\\
(0.1,\ 0.05) & 0.179 & 0.094 & 0.082 & 0.018 \\
(0.1,\ -0.1) & 0.201 & 0.171 & 0.197 & 0.026\\
\hline
& \multicolumn{4}{c}{Bivariate Gamma Distribution}\\
\hline
(0,\ 0) & 0.050 & 0.016 & 0.025 & 0.294\\
(0.05,\ 0.05) & 0.027 & 0.021 & 0.039 & 0.528\\
(0.1,\ 0.05) & 0.029 & 0.013 & 0.058 & 0.607\\
(0.1,\ -0.1) & 0.034 & 0.009 & 0.018 & 0.255
\end{tabular}
\end{center}
\caption{Power for testing $H_0: \theta(P)=0$ for bivariate Gaussian, bivariate $t$ (with 1 degree of freedom), and bivariate gamma distributions with different location parameters ($\theta$).}
\label{3tab1}
\end{table}
\begin{table}[t]
\begin{center}
\begin{tabular}{c|c|cccc}
\hline
$\theta^{(1)}$ & $\theta^{(2)}$ & NPBayes & Sign Test & Rank Test & Hotelling's $T^2$ \\
\hline
& & \multicolumn{4}{c}{Bivariate Gaussian Distribution}\\
\hline
(0,\ 0) & (0,\ 0) & 0.050 & 0.057 & 0.051 & 0.037\\
(0,\ 0) & (0.1,\ 0) & 0.135 & 0.091 & 0.085 & 0.083\\
(0,\ 0) & (0.1,\ 0.1) & 0.225 & 0.098 & 0.122 & 0.136\\
(0,\ 0) & (0, 0.3) & 0.402 & 0.337 & 0.346 & 0.146 \\
\hline
& & \multicolumn{4}{c}{Bivariate $t_1$ Distribution}\\
\hline
(0,\ 0) & (0,\ 0) & 0.059 & 0.041 & 0.052 & 0.011 \\
(0,\ 0) & (0.1,\ 0) & 0.141 & 0.060 & 0.074 & 0.026\\
(0,\ 0) & (0.1,\ 0.1) & 0.158 & 0.087 & 0.099 & 0.022 \\
(0,\ 0) & (0,\ 0.3) & 0.307 & 0.248 & 0.213 & 0.023 \\
\hline
& & \multicolumn{4}{c}{Bivariate Gamma Distribution}\\
\hline
(0,\ 0) & (0,\ 0) & 0.024 & 0.020 & 0.019 & 0.017 \\
(0,\ 0) & (0.1,\ 0) & 0.020 & 0.019 & 0.018 & 0.025\\
(0,\ 0) & (0.1,\ 0.1) & 0.030 & 0.015 & 0.014 & 0.030\\
(0,\ 0) & (0,\ 0.3) & 0.033 & 0.018 & 0.023 & 0.028
\end{tabular}
\end{center}
\caption{Power for testing $H_0: \theta^{(1)}=\theta^{(2)}$ for bivariate Gaussian, bivariate $t$ (with 1 degree of freedom), and bivariate gamma distributions with different location parameters ($\theta^{(1)}$ and $\theta^{(2)}$).}
\label{3tab2}
\end{table}
\begin{remark}
Here we have considered tests for multivariate locations based on spatial medians, but these tests can also be constructed using multivariate $\ell_1$-medians (with $\ell_p$-norms). For some fixed $p>1$, the $\ell_1$-median of a $k$-variate distribution $P$ can be defined as
$$
\theta_p(P)=\argmin_{\theta \in \mathbb{R}^k} P\{\Vert Y-\theta\Vert_p-\Vert Y \Vert_p\}.
$$
Bernstein-von Mises theorems of $\ell_1$-medians are available in the literature (\cite{bhattacharya2019bayesian}). Hence the expressions for local asymptotic powers under shrinking alternatives can be obtained using those theorems.
\end{remark}
\section{Proofs}
\label{3S6}
Before giving the proofs of the main theorems, we need a couple of auxiliary results. The first one gives convergence results for the posterior mean $\bar{\theta}$ and covariance matrix $S$.
\begin{lemma}\label{3l1}
Suppose the true distribution of $Y_1,\dots,Y_n \in \mathbb{R}^k$ is $P^\star$, and the following conditions hold for $P^\star$.
\begin{enumerate}
\item The distribution $P^\star$ has a density that is bounded on bounded subsets of $\mathbb{R}^k$.
\item The spatial median of $P^\star$, $\theta^\star=\theta(P^\star)$ is unique.
\end{enumerate}
Then, $\bar{\theta}=\hat{\theta}_n +o_{P^\star}(n^{-1/2})$ and $nS=V_{\theta^\star,P^\star}^{-1}U_{\theta^\star,P^\star}V_{\theta^\star,P^\star}^{-1}+o_{P^\star}(1)$, where $\hat{\theta}_n$ is the sample spatial median of $Y_1,\dots,Y_n$.
\end{lemma}
\begin{proof}
The posterior distribution of $\theta(P)$ can be approximated by a Gaussian distribution in the Bernstein-von Mises sense (\cite{bhattacharya2019bayesian}), i.e., given $Y_1,\dots,Y_n$
$$
\sqrt{n}(\theta(P)-\hat{\theta}_n) \rightsquigarrow \mathrm{N}_k(0, V_{\theta^\star,P^\star}^{-1}U_{\theta^\star,P^\star}V_{\theta^\star,P^\star}^{-1}).
$$
Let $\mathbb{B}_n$ be the Bayesian bootstrap process defined by $\mathbb{B}_nf=\sum_{i=1}^n W_{ni}f(Y_i)$, where $(W_{n1},\dots, W_{nn})$ follows a Dirichlet distribution $\mathrm{Dir}(n;1,\dots,1)$. Define $\theta(\mathbb{B}_n)=\argmin_\theta\mathbb{B}_n\Vert Y-\theta\Vert_2$. It has been shown in Theorem 3.1 of \cite{bhattacharya2019bayesian} that, asymptotically, $\theta(P)$ is a Bayesian bootstrapped analog of a $Z$-estimator. Thus, our problem boils down to showing the first and second moment consistency of the bootstrap $Z$-estimator $\theta(\mathbb{B}_n)$.
\cite{cheng2015moment} proved the consistency of the bootstrap moment estimators for the class of exchangeably weighted bootstraps (see Section 2.2, \cite{cheng2015moment}). The Bayesian bootstrap weights fall into the class of exchangeable bootstrap weights, so we have to show that the $\ell_1$ criterion function $m_\theta(y)=-\Vert y-\theta \Vert_2+\Vert y \Vert_2$ satisfies the following two sufficient conditions. Let $\mathbb{G}_n=\sqrt{n}(\mathbb{P}_n-P^\star)$ denote the empirical process and $\mathbb{G}_n^\star=\sqrt{n}(\mathbb{B}_n-\mathbb{P}_n)$ denote the bootstrap empirical process. Suppose the following conditions hold.
\begin{enumerate}
\item Let $\Theta$ be the compact parameter space. For any $\theta \in \Theta$,
$$
P^\star(m_\theta-m_{\theta^\star})\lesssim -\Vert \theta-\theta^\star \Vert_2^2.
$$
\item Define $N_\delta=\{m_\theta-m_{\theta^\star}: \Vert \theta-\theta^\star \Vert_2 \leq \delta\}$. We have to show
\begin{align}
\label{eq318}
\big(\mathrm{E}_X\Vert \mathbb{G}_n \Vert _{N_\delta}^{p^\prime}\big)^{1/p^\prime} &\lesssim \delta\\
\big(\mathrm{E}_{XW}\Vert \mathbb{G}_n^\star \Vert_{N_\delta}^{p^\prime}\big)^{1/p^\prime} &\lesssim \delta,
\end{align}
for some $p^\prime > 2$.
\end{enumerate}
Then the assertion in Lemma \ref{3l1} holds. First, we need to show that the parameter space can be restricted to a compact subset of $\mathbb{R}^k$ with high probability. In Theorem 3.1 of \cite{bhattacharya2019bayesian}, it has been shown that for some $0<\epsilon<1/4$ and $K>0$ such that $P(\Vert Y \Vert_2\leq K)>1-\epsilon$, given $Y_1,\dots, Y_n$, $\Vert\theta(\mathbb{B}_n)\Vert_2 \leq 3K$ with high ${P^\star} ^n$-probability, which implies that asymptotically, given $Y_1,\dots,Y_n$, $\Vert\theta(P)\Vert_2 \leq 3K$ with high ${P^\star}^n$-probability.
After fixing $K>0$, we choose $\Theta=\{\theta: \Vert \theta \Vert_2 \leq 3K\}$. Since $\Theta$ is compact, Condition 1 can be shown from a Taylor series expansion around $\theta^\star$
\begin{equation}\label{eq320}
P^\star m_\theta-P^\star m_{\theta^\star}=(\theta-\theta^\star)^\prime P^\star\dot{m}_{\theta^\star}+\frac{(\theta-\theta^\star)^\prime V_{\theta^\star,P^\star}(\theta-\theta^\star)}{2}+o(\Vert \theta-\theta^\star \Vert^2).
\end{equation}
Since $\theta^\star$ is the maximizer of $P^\star m_\theta$, $P^\star \dot{m}_{\theta^\star}$ vanishes. The matrix $V_{\theta^\star,P^\star}$ is negative definite, and hence, the second term in the right hand side of \eqref{eq320} is bounded above by $-c\Vert \theta-\theta^\star \Vert_2^2$, for a positive constant $c$.
Before proving Condition 2, we introduce some notation. For any class of functions $\mathcal{A}$ and metric $\ell$, its $\epsilon$-bracketing number is denoted by $N_{[\,]}(\epsilon,\mathcal{A},\ell)$. The corresponding bracketing entropy integral is defined as
$$
J_{[\,]}(\delta,\mathcal{A},\ell)= \int_0^\delta \sqrt{1+\log N_{[\,]}(\epsilon,\mathcal{A},\ell)}\,\mathrm{d}\epsilon.
$$
Following \cite{cheng2015moment}, a simple sufficient condition for \eqref{eq318} is the following global Lipschitz condition
\begin{align}\label{eq321}
\vert m_\theta(x)-m_{\theta^\star}(x)\vert \leq & M(x)\Vert \theta-\theta^\star \Vert_2
\end{align}
for any $\theta \in \Theta$, and
\begin{align}\label{eq322}
J_{[\,]}(1,N_\delta,L_2(P^\star))+ \Vert M \Vert_{\psi_{p^\prime }} < \infty,
\end{align}
for some $p^\prime >2$, where $\Vert \cdot \Vert_{\psi_p}$ is the Orlicz norm with respect to the convex function $\psi_p(t)=\exp {(t^p-1)}$. In our case, \eqref{eq321} holds with $M(y)=1$ by the triangle inequality, $\vert m_\theta(y)-m_{\theta^\star}(y)\vert \leq \Vert \theta-\theta^\star \Vert_2$. Since $M(y)=1$ for every $y$, we only have to show $J_{[\,]}(1,N_\delta, L_2(P^\star)) < \infty$.
By Example 19.7 in \cite{van2000asymptotic}, since $\vert m_\theta(y)-m_{\theta^\prime}(y)\vert \leq \Vert \theta-\theta^\prime \Vert_2$ for every $\theta,\ \theta^\prime \in \Theta$, there exists a constant $K$ such that
$$
N_{[\,]}(\epsilon,N_\delta,L_2(P^\star)) \leq K\bigg(\frac{\mathrm{diam}\ \Theta} {\epsilon}\bigg)^k,\ \text{for every}\ 0<\epsilon<\mathrm{diam}\ \Theta.
$$
Then, the entropy is of the order $\log(1/\epsilon)$. By a change of variable, it can be shown that $J_{[\,]}(1,N_\delta, L_2(P^\star)) < \infty$.
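Concretely, the change of variable can be sketched as follows: substituting $u=\log(1/\epsilon)$, so that $\mathrm{d}\epsilon=-e^{-u}\,\mathrm{d}u$, and using the entropy bound of order $k\log(1/\epsilon)$,

```latex
J_{[\,]}(1, N_\delta, L_2(P^\star))
  \lesssim \int_0^1 \sqrt{1 + k\log(1/\epsilon)}\,\mathrm{d}\epsilon
  = \int_0^\infty \sqrt{1 + ku}\, e^{-u}\,\mathrm{d}u < \infty,
```

since the exponential decay dominates the square-root growth of the integrand.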
\end{proof}
The next lemma gives a Bernstein-von Mises theorem for the difference of spatial medians for two independent samples $Y_1^{(1)},\dots, Y_{n_1}^{(1)} \sim P^{(1)}$, and $Y_1^{(2)},\dots, Y_{n_2}^{(2)} \sim P^{(2)}$. The sample spatial medians are denoted by $\hat{\theta}_{n_1}^{(1)}$ and $\hat{\theta}_{n_2}^{(2)}$, respectively. We put a $\mathrm{DP}(\alpha)$ prior on both $P^{(1)}$ and $P^{(2)}$, and construct a posterior for $\theta(P^{(1)})-\theta(P^{(2)})$. The asymptotic result follows almost immediately from Theorem 3.1 in \cite{bhattacharya2019bayesian}.
\begin{lemma}\label{312}
Suppose the following conditions hold.
\begin{enumerate}
\item The true distributions ${P^\star}^{(1)}$ and ${P^\star}^{(2)}$ have probability densities that are bounded on compact subsets of $\mathbb{R}^k$.
\item The true spatial medians, ${\theta^\star}^{(1)}=\theta({P^\star}^{(1)})$ and ${\theta^\star}^{(2)}=\theta({P^\star}^{(2)})$ are unique.
\end{enumerate}
Then, writing $n=n_1+n_2$ and assuming $n_1/n \to \lambda$ and $n_2/n \to 1-\lambda$,
\begin{enumerate}[label={\upshape(\roman*)}]
\item $\sqrt{n}(\hat{\theta}_{n_1}^{(1)}-{\theta^\star}^{(1)}-\hat{\theta}_{n_2}^{(2)}+{\theta^\star}^{(2)})\rightsquigarrow \mathrm{N}_k(0,\lambda^{-1}V_{{\theta^\star}^{(1)},{P^\star}^{(1)}}^{-1}U_{{\theta^\star}^{(1)},{P^\star}^{(1)}}V_{{\theta^\star}^{(1)},{P^\star}^{(1)}}^{-1}+(1-\lambda)^{-1}V_{{\theta^\star}^{(2)},{P^\star}^{(2)}}^{-1}U_{{\theta^\star}^{(2)},{P^\star}^{(2)}}V_{{\theta^\star}^{(2)},{P^\star}^{(2)}}^{-1})$
\item Given $Y_1^{(1)},\dots,Y_{n_1}^{(1)}$, $Y_1^{(2)},\dots,Y_{n_2}^{(2)}$,
\begin{align*}
\sqrt{n}(\theta(P^{(1)})-\hat{\theta}_{n_1}^{(1)}-&\theta(P^{(2)})+\hat{\theta}_{n_2}^{(2)})\rightsquigarrow\\
& \mathrm{N}_k(0, \lambda^{-1}V_{{\theta^\star}^{(1)},{P^\star}^{(1)}}^{-1}U_{{\theta^\star}^{(1)},{P^\star}^{(1)}}V_{{\theta^\star}^{(1)},{P^\star}^{(1)}}^{-1}+\\ &(1-\lambda)^{-1}V_{{\theta^\star}^{(2)},{P^\star}^{(2)}}^{-1}U_{{\theta^\star}^{(2)},{P^\star}^{(2)}}V_{{\theta^\star}^{(2)},{P^\star}^{(2)}}^{-1}).
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
From Theorem 3.1 in \cite{bhattacharya2019bayesian}, for $j=1,2$,
\begin{enumerate}[label={\upshape(\roman*)}]
\item $\sqrt{n_j}(\hat{\theta}_{n_j}^{(j)}-{\theta^\star}^{(j)}) \rightsquigarrow \mathrm{N}_k(0, V_{{\theta^\star}^{(j)},{P^\star}^{(j)}}^{-1}U_{{\theta^\star}^{(j)},{P^\star}^{(j)}}V_{{\theta^\star}^{(j)},{P^\star}^{(j)}}^{-1})$,
\item Given $Y_1^{(j)},\dots,Y_{n_j}^{(j)}$,
\begin{align*}
\sqrt{n_j}(\theta(P^{(j)})-\hat{\theta}_{n_j}^{(j)}) \rightsquigarrow \mathrm{N}_k(0, V_{{\theta^\star}^{(j)},{P^\star}^{(j)}}^{-1}U_{{\theta^\star}^{(j)},{P^\star}^{(j)}}V_{{\theta^\star}^{(j)},{P^\star}^{(j)}}^{-1}).
\end{align*}
\end{enumerate}
From independence of the two samples, the conclusion follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{3thm1}]
The probability of accepting the null hypothesis under the null is
\begin{align*}
P_{\theta_0}(\theta_0 \in C(Y_1,\dots,Y_n))=& P_{\theta_0}((\bar{\theta}-\theta_0)^\prime S^{-1}(\bar{\theta}-\theta_0) \leq r_{1-\alpha})
\end{align*}
Using Lemma \ref{3l1} and Theorem 3.1 in \cite{bhattacharya2019bayesian},
\begin{equation}\label{eq3223}
(\theta(P)-\bar{\theta})^\prime S^{-1}(\theta(P)-\bar{\theta})\rightsquigarrow \chi_k^2,
\end{equation}
which implies that $r_{1-\alpha}=\chi^2_{k;\alpha}+o_{P_{\theta_0}}(1)$. The weak convergence in \eqref{eq3223} uses the fact that if $X \sim \mathrm{N}_k(0,I_k)$, then $X^TX \sim \chi_k^2$. Next, again using Lemma \ref{3l1}, Theorem 3.1 in \cite{bhattacharya2019bayesian} and Slutsky's theorem,
\begin{align*}
P_{\theta_0}(\theta_0 \in C(Y_1,\dots,Y_n))=& P_{\theta_0}((\bar{\theta}-\theta_0)^\prime S^{-1}(\bar{\theta}-\theta_0) \leq r_{1-\alpha})\\
=& P_{\theta_0}(n(\hat{\theta}_n-\theta_0)^\prime (V_{\theta_0,P_{\theta_0}}^{-1}U_{\theta_0,P_{\theta_0}}V_{\theta_0,P_{\theta_0}}^{-1})^{-1}(\hat{\theta}_n-\theta_0) + \\&o_{P_{\theta_0}}(1) \leq \chi^2_{k;\alpha}+o_{P_{\theta_0}}(1))
\rightarrow 1-\alpha.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{3thm2}]
The proof is similar to that of Theorem \ref{3thm1}. Using Lemma \ref{312} under $H_0$, $\sqrt{n}(\hat{\theta}_{n_1}^{(1)}-\hat{\theta}_{n_2}^{(2)})$ converges to a Gaussian distribution with mean $0$ and covariance matrix $\lambda^{-1}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}U_{\theta_0,P_{\theta_0}^{(1)}}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}+(1-\lambda)^{-1}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1}U_{\theta_0,P_{\theta_0}^{(2)}}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1}$. Using Lemma \ref{3l1}, Lemma \ref{312} and Slutsky's theorem,
$$
(\bar{\theta}^{(1)}-\bar{\theta}^{(2)})^\prime (S^{(1)}+S^{(2)})^{-1}(\bar{\theta}^{(1)}-\bar{\theta}^{(2)}) \rightsquigarrow\chi_k^2,
$$
which implies that $r_{1-\alpha}=\chi^2_{k;\alpha}+o_{P_{\theta_0}^{(1)}}(1)+o_{P_{\theta_0}^{(2)}}(1)$. Next, using Lemma \ref{312}, under $H_0$,
\begin{align*}
(P_{\theta_0}^{(1)}\times & P_{\theta_0}^{(2)})[(\bar{\theta}^{(1)}-\bar{\theta}^{(2)})^\prime (S^{(1)}+S^{(2)})^{-1}(\bar{\theta}^{(1)}-\bar{\theta}^{(2)}) \leq r_{1-\alpha}]\\
&= (P_{\theta_0}^{(1)}\times P_{\theta_0}^{(2)})(n(\hat{\theta}_{n_1}^{(1)}-\hat{\theta}_{n_2}^{(2)})^\prime (\frac{1}{\lambda}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}U_{\theta_0,P_{\theta_0}^{(1)}}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}\\
&+\frac{1}{1-\lambda}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1}U_{\theta_0,P_{\theta_0}^{(2)}}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1})^{-1}(\hat{\theta}_{n_1}^{(1)}-\hat{\theta}_{n_2}^{(2)})
+o_{P_{\theta_0}^{(1)}}(1)+o_{P_{\theta_0}^{(2)}}(1)\\
&\leq\chi_{k;\alpha}^2+o_{P_{\theta_0}^{(1)}}(1)+o_{P_{\theta_0}^{(2)}}(1))
\rightarrow 1-\alpha.
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem \ref{3thm3}]
It is well known that the models $P_{\theta_0}^n$ and $P_{\theta_0+h/\sqrt{n}}^n$ are mutually contiguous. Under $H_0$, using Theorem 5.23 in \cite{van2000asymptotic}, $\hat{\theta}_n$ can be written as
$$
\sqrt{n}(\hat{\theta}_n-\theta_0)=-\frac{1}{\sqrt{n}}\sum_{i=1}^n V_{\theta_0,P_{\theta_0}}^{-1}\frac{Y_i-\theta_0}{\Vert Y_i-\theta_0 \Vert_2}+ o_{P_{\theta_0}}(1).
$$
Let $L_n$ denote the log-likelihood ratio $L_n=\log(\mathrm{d}P_{\theta_0+h/\sqrt{n}}^n/\mathrm{d}P_{\theta_0}^n)$. By the central limit theorem, $(\sqrt{n}(\hat{\theta}_n-\theta_0), L_n)$ tends to a $(k+1)$-dimensional Gaussian distribution; the cross-covariance between the two components is
$$
\delta_1=P_{\theta_0}\big(-V_{\theta_0,P_{\theta_0}}^{-1} \frac{Y-\theta_0}{\Vert Y-\theta_0 \Vert_2}\dot{\ell}_{\theta_0}^T I_{\theta_0}^{-1}h\big).
$$
Then by Le Cam's third lemma (Example 6.7, \cite{van2000asymptotic}), under $P_{\theta_0+h/\sqrt{n}}$, $\sqrt{n}(\hat{\theta}_n-\theta_0)$ converges weakly to a Gaussian distribution with mean $\delta_1$. Following the arguments used in Theorem \ref{3thm1}, the local asymptotic power of the test is given by
\begin{align*}
P_{\theta_0+h/\sqrt{n}}&((\bar{\theta}-\theta_0)^\prime S^{-1}(\bar{\theta}-\theta_0) \leq r_{1-\alpha}) =\\ &P_{\theta_0+h/\sqrt{n}}(n(\hat{\theta}_n-\theta_0)^\prime (V_{\theta_0,P_{\theta_0}}^{-1}U_{\theta_0,P_{\theta_0}}V_{\theta_0,P_{\theta_0}}^{-1})^{-1}(\hat{\theta}_n-\theta_0)+ o_{P_{\theta_0}}(1)\\
&\leq \chi_{k;\alpha}^2+o_{P_{\theta_0}}(1)).
\end{align*}
Under $P_{\theta_0+h/\sqrt{n}}$, $n(\hat{\theta}_n-\theta_0)^\prime (V_{\theta_0,P_{\theta_0}}^{-1}U_{\theta_0,P_{\theta_0}}V_{\theta_0,P_{\theta_0}}^{-1})^{-1}(\hat{\theta}_n-\theta_0)$ tends to a non-central chi-square distribution with non-centrality parameter $\delta_1^\prime (V_{\theta_0,P_{\theta_0}}^{-1}U_{\theta_0,P_{\theta_0}}V_{\theta_0,P_{\theta_0}}^{-1})^{-1} \delta_1$, which gives us the asymptotic power stated in Theorem \ref{3thm3}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{3thm4}]
The proof proceeds along the lines of that of Theorem \ref{3thm3}. The product models $P_{\theta_0}^{(1)}\times P_{\theta_0}^{(2)}$ and $P_{\theta_0+h_1/\sqrt{n_1}}^{(1)}\times P_{\theta_0+h_2/\sqrt{n_2}}^{(2)}$ are mutually contiguous. From Theorem 5.23 in \cite{van2000asymptotic}, the sample spatial medians admit the following linearizations,
\begin{align}
\label{eq324}
\sqrt{n_1}(\hat{\theta}_{n_1}^{(1)}-\theta_0)=& -\frac{1}{\sqrt{n_1}}\sum_{i=1}^{n_1}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}\frac{Y_i^{(1)}-\theta_0}{\Vert Y_i^{(1)}-\theta_0 \Vert_2}+o_{P_{\theta_0}^{(1)}}(1),\\
\label{eq325}
\sqrt{n_2}(\hat{\theta}_{n_2}^{(2)}-\theta_0)=& -\frac{1}{\sqrt{n_2}}\sum_{i=1}^{n_2}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1}\frac{Y_i^{(2)}-\theta_0}{\Vert Y_i^{(2)}-\theta_0 \Vert_2}+o_{P_{\theta_0}^{(2)}}(1).
\end{align}
Subtracting \eqref{eq325} from \eqref{eq324}, under $H_0$,
\begin{align*}
\sqrt{n}(\hat{\theta}_{n_1}^{(1)}-&\hat{\theta}_{n_2}^{(2)})=-\bigg\{\frac{1}{\sqrt{n_1\lambda}}\sum_{i=1}^{n_1}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}\frac{ Y_i^{(1)}-\theta_0}{\Vert Y_i^{(1)}-\theta_0 \Vert_2}\\
&-\frac{1}{\sqrt{n_2(1-\lambda)}}\sum_{i=1}^{n_2}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1}\frac{ Y_i^{(2)}-\theta_0}{\Vert Y_i^{(2)}-\theta_0 \Vert_2}\bigg\}+
o_{P_{\theta_0}^{(1)}}(1)+o_{P_{\theta_0}^{(2)}}(1).
\end{align*}
Define the log-likelihood ratio $L_N^\prime=\log(\mathrm{d}P_{\theta_0+h_1/\sqrt{n_1}}^{(1)}\mathrm{d}P_{\theta_0+h_2/\sqrt{n_2}}^{(2)}/\mathrm{d}P_{\theta_0}^{(1)}\mathrm{d}P_{\theta_0}^{(2)})$, which expands as
\begin{align*}
L_N^\prime=\frac{1}{\sqrt{n_1}}h_1^T\sum_{i=1}^{n_1}\dot{\ell}_{\theta_0}^{(1)}(Y_i^{(1)})-\frac{1}{2}h_1^TI_{\theta_0}^{(1)}h_1+\frac{1}{\sqrt{n_2}}h_2^T\sum_{i=1}^{n_2}\dot{\ell}_{\theta_0}^{(2)}(Y_i^{(2)})-\frac{1}{2}h_2^TI_{\theta_0}^{(2)}h_2\\
+o_{P_{\theta_0}^{(1)}}(1)+ o_{P_{\theta_0}^{(2)}}(1).
\end{align*}
By the central limit theorem, the joint distribution of $\sqrt{n}(\hat{\theta}_{n_1}^{(1)}-\hat{\theta}_{n_2}^{(2)})$ and $L_N^\prime$ tends to a $(k+1)$-dimensional Gaussian distribution; the cross-covariance between the two components is
\begin{equation*}
\begin{split}
\delta_2=\frac{1}{\sqrt{\lambda}}P_{\theta_0}^{(1)}\{-V_{\theta_0,P_{\theta_0}^{(1)}}^{-1} \frac{ Y^{(1)}-\theta_0 }{\Vert Y^{(1)}-\theta_0 \Vert_2}\dot{\ell_{\theta_0}^{(1)}}^{T} {I_{\theta_0}^{(1)}}^{-1}h_1\}+\\ \frac{1}{\sqrt{1-\lambda}}P_{\theta_0}^{(2)} \{-V_{\theta_0,P_{\theta_0}^{(2)}}^{-1}\displaystyle \frac{ Y^{(2)}-\theta_0}{\Vert Y^{(2)}-\theta_0 \Vert_2}\dot{\ell_{\theta_0}^{(2)}}^{T} {I_{\theta_0}^{(2)}}^{-1}h_2\}.
\end{split}
\end{equation*}
Again, by Le Cam's third lemma, under $P_{\theta_0+h_1/\sqrt{n_1}}^{(1)} \times P_{\theta_0+h_2/\sqrt{n_2}}^{(2)}$, $\sqrt{n}(\hat{\theta}^{(1)}_{n_1}-\hat{\theta}_{n_2}^{(2)})$ converges weakly to a Gaussian distribution with mean $\delta_2$. Thus following the arguments used in Theorem \ref{3thm2}, the asymptotic power is given by
\begin{equation*}
\begin{split}
P_{\theta_0+h_1/\sqrt{n_1}}^{(1)}& \times P_{\theta_0+h_2/\sqrt{n_2}}^{(2)}\{(\bar{\theta}_{n_1}^{(1)}-\bar{\theta}_{n_2}^{(2)})^\prime (S^{(1)}+S^{(2)})^{-1}(\bar{\theta}_{n_1}^{(1)}-\bar{\theta}_{n_2}^{(2)}) \leq r_{1-\alpha}\}= \\
&P_{\theta_0+h_1/\sqrt{n_1}}^{(1)}\times P_{\theta_0+h_2/\sqrt{n_2}}^{(2)}\{n(\hat{\theta}_{n_1}^{(1)}-\hat{\theta}_{n_2}^{(2)})^\prime (\lambda^{-1}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}U_{\theta_0,P_{\theta_0}^{(1)}}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}+\\
&(1-\lambda)^{-1}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1}U_{\theta_0,P_{\theta_0}^{(2)}}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1})^{-1}(\hat{\theta}_{n_1}^{(1)}-\hat{\theta}_{n_2}^{(2)})+\\ &o_{P_{\theta_0}^{(1)}}(1)+ o_{P_{\theta_0}^{(2)}}(1)\leq \chi^2_{k;\alpha}+o_{P_{\theta_0}^{(1)}}(1)+ o_{P_{\theta_0}^{(2)}}(1)\}.
\end{split}
\end{equation*}
Therefore under $P_{\theta_0+h_1/\sqrt{n_1}}^{(1)}\times P_{\theta_0+h_2/\sqrt{n_2}}^{(2)}$,
\begin{align*}
n(\hat{\theta}_{n_1}^{(1)}-\hat{\theta}_{n_2}^{(2)})^\prime (\frac{1}{\lambda}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}U_{\theta_0,P_{\theta_0}^{(1)}}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}+\frac{1}{1-\lambda}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1}U_{\theta_0,P_{\theta_0}^{(2)}}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1})^{-1}\\
(\hat{\theta}_{n_1}^{(1)}-\hat{\theta}_{n_2}^{(2)})
\end{align*}
tends to a non-central chi-square distribution with non-centrality parameter $\delta_2^\prime (\lambda^{-1} V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}U_{\theta_0,P_{\theta_0}^{(1)}}V_{\theta_0,P_{\theta_0}^{(1)}}^{-1}+(1-\lambda)^{-1}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1}U_{\theta_0,P_{\theta_0}^{(2)}}V_{\theta_0,P_{\theta_0}^{(2)}}^{-1})^{-1}\delta_2$, which gives the asymptotic power.
\end{proof}
\ifthenelse{1=1}{
\bibliographystyle{apalike}
\label{intro}
As one of the most important manipulation skills, vision-based grasping has been studied intensely in robotics for decades.
Although robots performing pick-and-place tasks have been applied successfully in industry, creating autonomous robots that grasp objects in unstructured real-world scenes remains an open problem.
In recent years, deep reinforcement learning (DRL) has attracted increasing research attention in robotics since its success in video games.
Combined with CNNs, DRL maps the features of visual observations directly into control policies by trial-and-error.
This provides a general way for robots to learn manipulation skills using information acquired from cameras \cite{simulation-robotic}, \cite{dexterous-manipulation}.
The typical way to train a vision-based DRL agent is end-to-end, in which the reward signal of reinforcement learning is used to train both the CNNs and the policy networks synchronously.
However, large amounts of interaction data are required to achieve satisfying performance.
For example, to collect enough training data, \cite{hand-eye} executed 800,000 robotic grasping attempts over several months with 14 robots,
and \cite{self-super} collected 700 hours of robot grasping data comprising 50,000 grasping attempts.
Moreover, DRL methods map raw visual observations into a lower-dimensional latent space of control that preserves various types of information about the manipulated objects.
However, a tangled and uninterpretable latent representation restricts generalization across objects and environments, and further leads to poor control policies.
Most works evaluate trained DRL agents on objects and environments similar to those used in training \cite{robotic-autoencoder}, \cite{dexterous-manipulation}, since the trained networks must be fine-tuned to adapt to changes, and hundreds of thousands of new manipulation experiences may be needed when transferring to a new environment.
These limitations prohibit the use of DRL methods in real-world robotic applications.
A feasible way to alleviate the data requirement is to train DRL agents in simulation, where interaction can easily be sped up by programming techniques such as multithreading.
With this approach, large volumes of experience can be captured far more efficiently than through real-world interaction, and many variants of the environment can be constructed to aid generalization.
However, there is a large gap between simulation and the real world, so agents trained in simulation can hardly be applied under real-world conditions, especially in vision-based robotic manipulation, where illumination changes and varying textures significantly affect the results.
Therefore, some DRL-based robotic control approaches are verified only in simulation, owing to the difficulty of transferring from simulation to real robots \cite{no-real-robot2}, \cite{no-real-robot1}.
To alleviate this problem, several techniques have been proposed to allow automatic adaptation to real-world environments \cite{simulation2real2}, \cite{simulation2real}.
In this paper, we propose a DRL-based visual grasping system that aims to improve generalization performance at minimal cost in real-world experience acquisition.
Following the typical vision-based DRL paradigm, our framework consists of two major components: a CNN-based visual \emph{perception} and a DRL-based control \emph{policy}.
The perception module extracts features from visual observations (i.e., raw images), and these features are then mapped into the action space by the policy module.
We train the perception and the policy separately instead of end-to-end.
The perception is trained in a supervised setting to produce the semantic and spatial information of the grasped objects.
Meanwhile, the control policy is trained in a simulation environment where the class and pose of the object to be grasped can be read automatically.
Training the policy on a quantitative description of the manipulated objects benefits both generalization and transferability, since information irrelevant to the control decision is discarded.
In our work, after roughly \textbf{30 minutes} of training in simulation, the policy is directly transferred to a real robotic system \textbf{without any additional training}.
The performance of our system is evaluated on challenging tasks including semantic grasping, cluttered-object grasping, and moving-object grasping.
The experimental results demonstrate the robustness and generalization of our approach.
\section{Related work}
\label{relatedwork}
Simulation environments can provide experience data much more efficiently,
since simulation can be accelerated by programming.
Many DRL algorithms for robotic manipulation are verified in simulation environments \cite{simulation-robotic}, \cite{model-base}.
Unfortunately, the gap between simulation and the real world means that an agent trained in simulation can hardly be used on a physical robot.
Many works have tried to bridge this reality gap \cite{simulation2real}, \cite{simulation2real2}.
In \cite{simulation2real}, images from simulation were rendered with randomized appearance; once the visual perception has seen enough variability across simulations, real-world images may appear to the model as just another variation.
Such randomization yielded a visual grasping model that succeeds in the real world.
\cite{simulation2real2} unified the visual inputs from simulation and the real world using an adaptation network,
trained to produce canonical images from randomized simulated images.
Because of the randomization, the trained network can also produce canonical images from real-world inputs.
The canonical images, which are thus mapped into a common space, were used to train visual grasping DRL agents.
Thanks to the adaptation network, DRL agents can be trained in simulation and used in the real world.
Much research has tried to relieve data inefficiency by improving the efficiency of the DRL training process and of experience generation.
The guided policy search (GPS) algorithm \cite{guided-policy-search} converts reinforcement learning into supervised learning,
where a trained local linear controller provided with full state observation (e.g., object poses) serves as supervisor,
and a global policy parameterized by neural networks is derived from this supervision.
This allows a CNN policy with 92,000 parameters to be trained in tens of minutes of real-world robotic manipulation time;
at test time the full state is no longer available, yet the policy trained in this supervised setting can handle several novel, unknown configurations.
Another direction for improving sample efficiency is to accelerate model-free reinforcement learning with learned dynamics models \cite{model-base}.
The learned models can generate synthetic samples that enrich the agent's experience efficiently,
without executing the physical robot, although this requires additional efficient model-learning algorithms \cite{model-learn}, \cite{model-learn-3}.
However, the learned model can differ substantially from the true dynamics, and the induced error weakens the performance of the learned policy.
These methods try to train an optimal policy and visual perception simultaneously in an end-to-end style.
However, such training can hardly generalize to different manipulated objects and different execution environments.
Since generalization relies on the distribution of the training data,
a huge amount of experience is required to achieve usable generalization ability \cite{e2eGeneralize},
which is impractical to acquire in a robotic grasping task.
An intuitive alternative is to train the image representation and the reinforcement learning agent separately \cite{auto-encoder}, \cite{robotic-autoencoder}.
With an autoencoder pretrained using an auxiliary reconstruction loss, the high-dimensional image input is embedded into a low-dimensional latent space that aggregates useful features before interaction with the environment begins.
In this way, the agent networks can be trained with far fewer interaction experiences,
since there is no need to learn the state representation, and training is significantly sped up.
However, the latent feature representation has no exact physical meaning, so it, and the trained policy, lack interpretability.
From this perspective, a meaningful feature representation can significantly improve generalization ability.
\section{Framework}
\label{framework}
We propose a robotic grasping framework based on deep reinforcement learning.
Reinforcement learning enables agents (e.g., robots) to learn an optimal policy through interaction with environments by trial-and-error.
In doing so, we formulate a robotic grasping problem as a Markov decision process:
at time $t$, the robot receives the state of target objects and constructs environment state $s_t$ accordingly.
After that, the robot chooses an action $a_t$ to move itself based on the current policy $\pi \left(a_{t}|s_{t}\right)$.
Then the environment transitions to a new state $s_{t+1}$ in reaction to $a_t$, and an immediate reward $R_t \left(s_t,a_t,s_{t+1}\right)$ is provided by the environment. The goal of the robot is to find an optimal policy $\pi ^{*}\left(a_{t}|s_{t}\right)$ that maximizes the discounted future return
$$G_t=\sum _{i=t}^{\infty}\gamma ^{i-t}R_{i}\left(s_i,a_i,s_{i+1}\right)$$
where $0 < \gamma < 1$ is the discount factor, which reduces the weight of future rewards.
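The return $G_t$ above satisfies the backward recursion $G_t = R_t + \gamma G_{t+1}$, so all returns of an episode can be computed in one reverse pass; a minimal sketch:

```python
def discounted_return(rewards, gamma=0.99):
    """Compute G_t for every step t of an episode in one backward pass,
    using G_t = R_t + gamma * G_{t+1} (G after the last step is 0)."""
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

print(discounted_return([1.0, 0.0, 2.0], gamma=0.5))  # [1.5, 1.0, 2.0]
```

This is the quantity a DRL agent such as PPO estimates when fitting its value function.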
Similar to recent works \cite{robotic-autoencoder}, \cite{guided-policy-search}, our framework is composed of two stages, as shown in Fig.\ref{framework_figure}.
\begin{figure*}
\includegraphics[width=1\textwidth]{framework.pdf}
\caption{Our framework for robotic prehensile control.}
\label{framework_figure}
\end{figure*}
Raw RGB images from the camera are input to the perception, where object semantic segmentation and pose estimation are performed by Mask R-CNN and PCA, respectively.
The policy is a PPO \cite{ppo} agent that receives the control quantities of the desired objects and decides which action to take to execute the grasp.
The pseudo-code for grasping a single object with our framework is presented in Algorithm \ref{framework_pseudo}.
The details of each component are discussed in the following subsections.
The perception and policy are trained separately.
In particular, Mask R-CNN is trained in a supervised way, with labels constructed manually using the LabelMe tool \cite{labelme}.
There is no training stage for PCA, as it is an unsupervised method.
The PPO agent is trained in simulation for fast experience acquisition.
Since the semantic and position information can be read through an interface provided by the simulation environment, the training of PPO can proceed in parallel with that of Mask R-CNN.
\begin{algorithm}
\caption{Separate Perception and Policy}
\label{framework_pseudo}
\begin{algorithmic}[1]
\Require $image$
\Statex
\Statex \textbf{\emph{Perception}}
\State \Comment detect object class and mask from raw images
\State $mask, class$ $\gets$ \textbf{mask-rcnn}($image$)
\State \Comment get physical quantities from mask
\State $center, direction$ $\gets$ \textbf{PCA}($mask$)
\Statex
\Statex \textbf{\emph{Policy}}
\Repeat
\State \Comment decide action via PPO
\State $action$ $\gets$ \textbf{PPO}($center, direction$)
\Until{$|action| < \epsilon$}
\Statex
\State \Comment grasp specific object and success check
\State $success$ $\gets$ \textbf{grasp}($class$)
\end{algorithmic}
\end{algorithm}
\subsection{Perception}
\label{perception}
The perception module plays a sensor-like role, transforming raw image inputs into physical quantities bound to object semantic information (i.e., the object class and its corresponding pose).
Note that this work focuses on 3DOF grasps \cite{grasp-pose}, given that the workspace of the robot is constrained to a tabletop.
\subsubsection{Semantic Segmentation}
To grasp a target object, the robot must know where the target is.
To achieve this, we leverage the popular semantic segmentation method Mask R-CNN \cite{maskrcnn} as the front end of the perception to detect and segment objects in raw images.
Based on Faster R-CNN \cite{fastrcnn}, Mask R-CNN introduces a mask branch at the end of the original network for segmentation tasks.
It proceeds in two stages: first, a region proposal network (RPN) \cite{fasterrcnn} scans the image for areas where targets may exist;
second, the object class and its bounding box coordinates are predicted simultaneously.
In addition, a pixel-wise binary mask for each bounding box is generated by a fully convolutional network (FCN), indicating whether each pixel in the bounding box belongs to the detected object.
As a result, masks on the original image exactly cover the areas where the objects exist.
The class and mask of an object provide us with a good starting point for pose estimation.
\subsubsection{Object Pose}
Since the object mask carries information about the object pose,
learning a DRL policy directly from masks is in principle possible, as most DRL-based approaches do.
In realistic robotic applications, however, we cannot afford to collect the huge amount of interaction data required by such a policy learning algorithm.
To avoid this difficulty, we instead infer the pose of each object instance from its mask.
In the 3DOF grasp setting, the object pose can be represented by the 2-dimensional position coordinates of the object center together with the in-plane direction of the object.
Here, we develop a Principal Component Analysis (PCA) based method to estimate this planar pose from the pixel-wise binary mask output by Mask R-CNN.
In general, PCA is an unsupervised method that identifies the components of a dataset with the largest variances.
For our purpose, the center and main direction of a set of pixel points are inferred using PCA.
The output of Mask R-CNN is an object together with its mask, which contains all pixel points and is formalized as
$$mask=\left \{ \left(x_0,y_0\right), \left(x_1,y_1\right),...,\left(x_n,y_n\right)\right \} $$
where $n$ is the number of pixel points in the mask, i.e., the number of samples for PCA.
Firstly, we calculate the mean point of $mask$ as the center point of the mask:
$$
c=(\bar{x},\bar{y})=\frac{1}{n} \sum_{i=1}^n (x_i, y_i)
$$
Note that the mean point $c=(\bar{x},\bar{y})$ is the geometric center of the mask.
After that, all the points in $mask$ are shifted by $c$, yielding the residual coordinates
$$Res = mask - c$$
Then the covariance matrix of $Res$ is formed, and its eigenvalues and eigenvectors are computed:
$$
\lambda _1, \lambda _2 = eigenvalues \left ( covMat\left(Res\right) \right )
$$
$$
\alpha _1, \alpha _2 = eigenvectors \left ( covMat\left(Res\right) \right )
$$
Since the pixel points are two-dimensional, there are exactly two eigenvalue--eigenvector pairs of the $Res$ matrix.
The $\lambda$ and $\alpha \in \mathbb{R}^{2\times 1}$ are sorted by the magnitude of the eigenvalues in descending order.
Finally, the main component with the largest variance is computed:
$$
M = Res \cdot \alpha _1 \cdot \alpha _1^\top + c
$$
$M \in \mathbb{R}^{n\times 2}$ contains $n$ points on a straight line.
We take two points from $M$ to construct a straight line;
$\theta$ denotes the angle of this line with respect to the horizontal axis.
Fig.~\ref{PCA-result} shows the results of PCA on a number of objects with various shapes.
With the help of a calibrated camera, the position and orientation in pixel coordinates can be mapped into a physical coordinate system.
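The steps above can be sketched in a few lines of numpy; this is an illustrative implementation of the described procedure, not the paper's code, and the fold of $\theta$ into $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ reflects the sign convention stated for Fig.~\ref{PCA-result}.

```python
import numpy as np

def pca_pose(mask_points):
    """Estimate planar pose (center, angle) from mask pixel coordinates.

    mask_points: (n, 2) array of (x, y) pixel coordinates of the mask.
    Returns the geometric center c and the angle theta in [-pi/2, pi/2]
    of the principal axis with respect to the horizontal axis.
    """
    pts = np.asarray(mask_points, dtype=float)
    c = pts.mean(axis=0)                    # mean point = mask center
    res = pts - c                           # residual coordinates
    cov = np.cov(res.T)                     # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    a1 = eigvecs[:, np.argmax(eigvals)]     # principal direction alpha_1
    theta = np.arctan2(a1[1], a1[0])        # angle of the principal axis
    if theta > np.pi / 2:                   # fold into [-pi/2, pi/2]
        theta -= np.pi
    elif theta < -np.pi / 2:
        theta += np.pi
    return c, theta
```

For an elongated horizontal blob this returns its centroid and an angle near zero, matching the red-line orientation drawn in Fig.~\ref{PCA-result}.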
\begin{figure*}
\centering
\includegraphics[scale=1]{multi-obj-pca.pdf}
\caption{
The example results of PCA pose estimation.
The black shadows represent the masks produced by Mask R-CNN; the green points are the center points of the masks, i.e.,
the position coordinates of the objects; and the red lines indicate the main components of the masks,
i.e., the orientations of the objects in the plane.
The range of $\theta$ is $\left [-\frac{\pi}{2}, \frac{\pi}{2}\right ]$, where the green arrow indicates a positive $\theta$, while the red arrow is negative. }
\label{PCA-result}
\end{figure*}
\subsection{Policy}
\label{policy}
The policy is a deep reinforcement learning agent that receives the physical quantities from the perception and decides an optimal action to move the robot. For our framework,
we adopt a policy gradient method called Proximal Policy Optimization (PPO) \cite{ppo}, which is well suited to high-dimensional continuous robotic control problems.
PPO significantly improves data efficiency over other policy gradient methods by taking multiple gradient steps on the same trajectory.
Moreover, to avoid an increase in variance, PPO introduces a clipping coefficient which stops the gradient when the updated policy deviates too far from the original policy.
\subsubsection{State Representation}
\label{state-representation}
We concatenate the output of the perception $x, y, \theta$ (the pose of the target object) and the robot configuration $x_r, y_r, \theta _r$ (the pose of the robot's end effector) to form the environment state $s_{t}$, the input of the PPO policy network:
$$
s_{t} = \left(x,y,\theta,x_r, y_r, \theta _r\right)
$$
Through several fully connected layers, PPO outputs an action distribution conditioned on the current state $s_t$.
The action at time step $t$ is then sampled from this distribution:
$$
a_t \sim \pi\left (a|s_t \right )
$$
The action is a three-dimensional continuous variable specifying the direction and magnitude of the robot's next movement.
Executing the action leads to a new environment state $s_{t+1}$ and an immediate reward $R_{t}$ provided by the environment.
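Sampling $a_t \sim \pi(a|s_t)$ from a diagonal Gaussian head can be sketched as follows; the assumption that the distribution is Gaussian with parameters $\mu$ and $\sigma$ follows from the network description in the implementation section.

```python
import numpy as np

def sample_action(mu, sigma, rng=None):
    """Sample a 3-d continuous action a_t ~ N(mu, diag(sigma^2))."""
    rng = rng or np.random.default_rng()
    return rng.normal(loc=mu, scale=sigma)
```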
\subsubsection{Reward}
\label{reward}
The reward function for learning the policy $R_{t}$ is defined as:
$$
R_{t} = \begin{cases}
-d_t-0.1 & \text{moving away}\\
-d_t+0.1 & \text{approaching}\\
1 & \text{grasp success}\\
-1 & \text{grasp failure}
\end{cases}
$$
where $d_t$ is the distance between the end effector's current position and the target position at time step $t$.
If $d_t$ has decreased compared to the previous $d_{t-1}$, the end effector is \emph{approaching} the target and receives a small positive reward bonus;
otherwise it is moving \emph{away} and receives a negative penalty.
In this way, we encourage the end effector to approach and track the target object as quickly as possible.
When the actions decided by the policy remain below a very small magnitude for several consecutive time steps,
the grasp is executed, i.e., the end effector moves down along the $z$ axis and closes the gripper.
A grasp is counted as a success if the gripper is not fully closed.
With a larger reward for a successful grasp, the policy can learn target tracking and grasping simultaneously.
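The reward shaping above can be written compactly; this sketch separates the terminal grasp outcome from the distance-based shaping term exactly as in the case distinction.

```python
def reward(d_t, d_prev, grasp_result=None):
    """Reward for one time step, following the shaping described above.

    d_t: current end-effector-to-target distance; d_prev: previous distance;
    grasp_result: None while tracking, True/False once a grasp was executed.
    """
    if grasp_result is not None:        # terminal grasp outcome
        return 1.0 if grasp_result else -1.0
    if d_t < d_prev:                    # approaching the target
        return -d_t + 0.1
    return -d_t - 0.1                   # moving away
```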
\subsubsection{Training Loss}
\label{traing-loss}
PPO is an Actor-Critic style \cite{a3c} algorithm and typically contains a value function and a policy function.
The value function $V(s_t)$, which estimates the expected return from a state $s_t$, is trained to minimize the TD error \cite{rl-introduction}, whose loss function is defined as:
$$
L_{V} = \left ( V\left(s_{t}\right)- \left (R_{t}+\gamma V\left(s_{t+1}\right)\right )\right )^{2}
$$
where $\gamma$ is the discount factor.
The policy function $\pi(a_t|s_t)$ which decides an optimal action over a state $s_t$ is trained to maximize a novel surrogate objective \cite{ppo} according to the value function:
$$
L_{a} = \textbf{min}\left ( r_t A_t, clip\left ( r_t, 1-\epsilon, 1+\epsilon \right ) A_t \right )
$$
where $r_t = \frac{\pi \left (a_t|s_t \right )}{\pi_{old}\left (a_t|s_t \right )}$ is the importance sampling ratio, in which $\pi_{old}\left (a_t|s_t\right )$ is the behavior policy whose parameters are frozen during one update epoch.
$A_t$ is the advantage function \cite{a3c}, which indicates whether the reward of the current action is above average.
It can be estimated simply by $A_{t} =R_{t} + \gamma V\left(s_{t+1}\right)- V\left(s_{t}\right)$ or by the GAE method \cite{gae}, both based on the value function $V(s_t)$.
The \emph{clip} function limits the importance sampling ratio to between $1-\epsilon$ and $1+\epsilon$
in order to avoid a large update step, where $\epsilon$ is the clipping coefficient, typically set to $0.2$ \cite{ppo}.
Therefore, the final loss function $L$ becomes
$$
L = L_V - L_a
$$
and the network parameters are updated via gradient descent on $L$.
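A minimal numpy sketch of the combined loss $L = L_V - L_a$ for a batch is shown below; it assumes the advantages $A_t$ and the value targets $R_t + \gamma V(s_{t+1})$ have already been computed, and averages over the batch as is common in practice.

```python
import numpy as np

def ppo_losses(logp_new, logp_old, advantages, values, value_targets, eps=0.2):
    """Clipped PPO surrogate L_a and TD value loss L_V, returned as L = L_V - L_a.

    logp_new / logp_old: log-probabilities of the taken actions under the
    current and behavior policies; all arrays share the batch dimension.
    """
    r = np.exp(logp_new - logp_old)                       # importance ratio r_t
    l_a = np.minimum(r * advantages,
                     np.clip(r, 1 - eps, 1 + eps) * advantages).mean()
    l_v = ((values - value_targets) ** 2).mean()          # squared TD error
    return l_v - l_a                                      # total loss L
```

Minimizing $L$ by gradient descent simultaneously improves the value estimate and increases the clipped surrogate objective.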
\section{Experimental Evaluation}
\label{experiments}
\subsection{Implementation}
\label{implementation}
To evaluate our approach, we implemented a vision-based grasping system based on the framework shown in Fig.~\ref{framework_figure}.
The perception consists of a Mask R-CNN and a PCA procedure, executed as a pipeline.
The input images, resized to $600\times 600$, are fed into the Mask R-CNN, which produces a number of object instances covered with their masks.
For each mask produced, PCA is invoked to compute its position and orientation as the output of the perception.
The implementation of the Mask R-CNN is based on \cite{maskrcnn_implementation}.
Instead of using a Mask R-CNN model pre-trained on general object datasets such as MSCOCO \cite{coco}, we train Mask R-CNN on our own dataset for better detection accuracy.
To this end, 1000 images of 21 classes of objects are collected and labeled with mask ground truth manually using the labeling tool LabelMe \cite{labelme}.
For the policy, three fully-connected layers are stacked together to form a PPO agent.
The first layer takes as input a six-dimensional vector concatenating the object pose and the robot pose and transforms it into a 512-dimensional latent vector.
The second layer transforms the 512-dimensional vector into two streams: a 512-dimensional action vector and a 512-dimensional value vector.
One stream is then transformed into two 3-dimensional vectors representing the parameters $\mu$ and $\sigma$ of the action distribution, while the other stream is transformed into a scalar representing the value of the current environment state.
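The layer structure just described can be sketched as a plain numpy forward pass; the tanh activations, the random initialization, and exponentiating the second head to keep $\sigma$ positive are our assumptions, since the paper specifies only the layer sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weights and zero biases for one fully-connected layer."""
    return rng.standard_normal((n_in, n_out)) * 0.01, np.zeros(n_out)

W1, b1 = layer(6, 512)      # shared layer: 6-d state -> 512-d latent
Wa, ba = layer(512, 512)    # action stream
Wv, bv = layer(512, 512)    # value stream
Wmu, bmu = layer(512, 3)    # mean of the action distribution
Wsig, bsig = layer(512, 3)  # log-std of the action distribution
Wval, bval = layer(512, 1)  # scalar state value

def forward(state):
    """state: (x, y, theta, x_r, y_r, theta_r) as a 6-d vector."""
    h = np.tanh(state @ W1 + b1)
    ha = np.tanh(h @ Wa + ba)
    hv = np.tanh(h @ Wv + bv)
    mu = ha @ Wmu + bmu
    sigma = np.exp(ha @ Wsig + bsig)  # exponentiate to keep std positive
    value = float((hv @ Wval + bval)[0])
    return mu, sigma, value
```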
For efficient training of PPO, we set up a simulation environment in V-REP \cite{vrep}, as shown in Fig.~\ref{simulation-environment}.
\begin{figure}
\centering
\includegraphics[scale=1]{simulation.pdf}
\caption{Simulation environment set up in V-REP \cite{vrep}. }
\label{simulation-environment}
\end{figure}
Seven classes of objects from the robotic manipulation benchmark YCB \cite{YCB} are used for PPO training in simulation, including a detergent bottle, an orange, a round can, a rectangular can, a cup, a pudding box and an electric drill.
Since the class and pose of an object in the simulation environment can be obtained directly through software interfaces, PPO could be trained separately, without the help of the perception.
The parameters of PPO are learned with a learning rate of $10^{-5}$ using the Adam optimizer \cite{adam}.
Both PPO and Mask R-CNN are trained on a PC with an RTX 2080Ti GPU.
The average rewards of PPO training in simulation over 5 runs are shown in Fig.~\ref{train-result}.
Very good results are obtained after about \textbf{30 minutes} of PPO training, as indicated by the red arrow in Fig.~\ref{train-result}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{train_process.pdf}
\caption{The process of policy training in simulation.
The rewards converge very quickly and the trained model achieves good performance after about 30 minutes.
The policy model used in all experiments is trained for 200 episodes, as indicated by the red arrow.}
\label{train-result}
\end{figure}
By contrast, the same number of interaction episodes would take tens of hours on a real physical robot.
Training Mask R-CNN on our own dataset takes about 10 hours.
This training may not be necessary, since a model pre-trained on a general dataset usually works well in many cases.
It is worth noting that none of the training above requires a real-world robot, and the trained networks are transferred to a real-world robotic grasping system directly.
\subsection{Real-world Evaluation}
\label{real-world}
The overall goal of our evaluation is to determine whether the trained networks can enable a real world robot to perform various grasping tasks without any further training.
To this end, a number of grasp tasks commonly used in our daily life are designed to evaluate the ability to perform grasping skills and generalization over objects and situations.
We use an industrial UR5 robot arm with an RG2 gripper for a two-finger rigid grip. A RealSense camera \cite{realsense} is located 100 cm above the work surface, producing the RGB input images.
A laptop with RTX 2080 GPU acceleration is used for real-time robotic control and communicates with the UR5 via the TCP/IP protocol.
The experimental hardware platform is shown in Fig.~\ref{real-world-environment}.
It is worth noting that the objects used in the experiments are entirely different from those used for PPO training in simulation.
\begin{figure}
\centering
\includegraphics[scale=1]{real-world.pdf}
\caption{The hardware setup of our system. }
\label{real-world-environment}
\end{figure}
\subsubsection{Sim-to-Real Transfer}
As mentioned before, the trained networks including Mask R-CNN and PPO are transferred into our robotic grasping system without any further training.
We first examine the behavior of the system in a controlled manner.
As shown in Fig.~\ref{corn}, a target object (a corn) is placed on the work surface in various positions and orientations.
The robot grasps the target successfully for 20 randomly chosen object locations.
\begin{figure}
\centering
\subfigure[]{
\includegraphics[scale=0.85]{corn.jpg}
}
\quad
\subfigure[]{
\includegraphics[scale=0.85]{corn_grasp.png}
}
\caption{Sim-to-Real transferring test of the trained policy.
(a) Picking up a corn in various positions and orientations.
(b) Picking up a corn with different robotic configurations.}
\label{corn}
\end{figure}
Furthermore, in order to test the robustness of the control policy, we manually introduce external disturbances.
As shown in Fig.~\ref{trajectory}, the control policy finds its correct trajectory again and grasps the target successfully after a sudden change in the robot configuration during execution, exhibiting good stability and robustness.
\begin{figure}
\centering
\subfigure[]{
\includegraphics[scale=0.8]{trajectory-a.png}
}
\subfigure[]{
\includegraphics[scale=0.8]{trajectory-b.png}
}
\subfigure[]{
\includegraphics[scale=0.8]{trajectory-c.png}
}
\subfigure[]{
\includegraphics[scale=0.8]{trajectory-d.png}
}
\caption{Robustness test of the trained policy.
(a) The robot starts to perform a task.
(b) We manually force it into a configuration unseen during training.
(c) The robot finds a proper path back onto the right trajectory.
(d) The robot grasps the target successfully.}
\label{trajectory}
\end{figure}
\subsubsection{Multi-object Grasping}
\label{multiobj-grasp}
Multi-object grasping is a common task used to measure the performance of a vision-based robotic grasping system.
In our test setting, 10-13 objects are placed randomly on the table and the UR5 robot is requested to pick up all objects sequentially and put them outside the workspace.
In addition, the background color of the work surface is changed from white to brown or green. Two example test settings with different backgrounds are shown in Fig.~\ref{multiobj}.
A grasp is successful if an object is grasped and set aside, while a removal is complete when no objects are left on the table.
We perform 10 tests for each background, and the grasp success rate and removal completion rate are presented in Table~\ref{multiobj-result}.
\begin{figure}
\centering
\subfigure[]{
\includegraphics[scale=0.90]{single_green.png}
}
\quad
\subfigure[]{
\includegraphics[scale=0.90]{single_yellow.png}
}
\caption{Multi-object removal.
(a) 11 objects on a green background.
(b) 12 objects on a brown background.}
\label{multiobj}
\end{figure}
\begin{table}
\caption{The results of multi-object removal.}
\label{multiobj-result}
\begin{tabular}{ccc}
\hline\noalign{\smallskip}
scenario & grasp success & removal completion \\
\noalign{\smallskip}\hline\noalign{\smallskip}
brown & 100\%(112/112) & 100\%(10/10)\\
green & 100\%(120/120) & 100\%(10/10)\\
dense & 93.7\%(104/111) & 80\%(8/10)\\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\subsubsection{Clustered Object Grasping}
\label{in-cluster}
A challenging task in robotic manipulation is to grasp objects clustered closely together.
As shown in Fig.~\ref{dense-cluster}, in a cluster scenario the objects block each other and some objects may be completely invisible.
In such a task, the order of manipulations matters greatly if we want to remove all the objects in sequence.
To decide the ordering of picking up, we define a mask ratio $r$ for each object recognized as follows:
$$
r = \frac{m}{M}
$$
where $m$ is the recognized mask with possible occlusion and $M$ is the full mask of the object which is pre-determined.
The larger the ratio, the more likely the object is to be picked up first, as it is less occluded by others.
For dense cluster scenarios, we perform 10 tests; the grasp success rate and removal completion rate are presented in Table \ref{multiobj-result}.
The failure cases occur due to misidentifications by Mask R-CNN on partially visible objects.
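The ordering rule $r = m/M$ can be sketched directly; the detection/area interfaces below are hypothetical stand-ins for the real perception outputs, and the pre-determined full mask areas $M$ are assumed to be stored per class.

```python
def pick_order(detections, full_mask_areas):
    """Order detected objects for picking by descending mask ratio r = m / M.

    detections: list of (class_name, visible_mask_area) pairs;
    full_mask_areas: dict mapping class_name to the pre-determined full
    mask area M of that object.
    """
    ratios = [(m / full_mask_areas[cls], cls) for cls, m in detections]
    return [cls for _, cls in sorted(ratios, reverse=True)]
```

Less-occluded objects (larger $r$) are thus removed first, progressively uncovering the objects they hid.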
\begin{figure}
\centering
\subfigure[]{
\includegraphics[scale=0.90]{cluster_2.png}
}
\quad
\subfigure[]{
\includegraphics[scale=0.90]{cluster.png}
}
\caption{Dense clustered object removal.
(a) Objects block each other but every object is visible.
(b) The red pepper is completely invisible. }
\label{dense-cluster}
\end{figure}
\subsubsection{Semantic Grasping}
\label{semantic-grasp}
In a semantic grasping task, a robot is instructed to grasp a specified object among a set of candidates.
The capability of semantic grasping is essential to allow autonomous robots to perform manipulations in an unstructured environment.
Benefiting from Mask R-CNN's ability to detect objects, our system first identifies the class of an object before deciding how to pick it up.
Similar to the multi-object grasping setting, 10-13 objects are randomly placed on the table for each trial. For each run, we randomly specify one object and count the number of successful grasps.
We perform five trials and the success rate reaches \textbf{100\% (60/60)}.
\begin{figure*}
\centering
\subfigure[]{
\includegraphics[scale=0.35]{mouse_1.png}
}
\quad
\subfigure[]{
\includegraphics[scale=0.35]{mouse_2.png}
}
\quad
\subfigure[]{
\includegraphics[scale=0.35]{mouse_3.png}
}
\quad
\subfigure[]{
\includegraphics[scale=0.35]{mouse_4.png}
}
\caption{Moving object grasping.
(a) The fake mouse is in the initial position.
(b) The mouse is moving forward and the robot is changing its control strategy.
(c) The mouse keeps moving and the robot keeps tracking it while the gripper moves down simultaneously.
(d) The robot grasps the mouse successfully.}
\label{moving-mouse}
\end{figure*}
\subsubsection{Moving Object Grasping}
\label{semantic-tracking}
Grasping a moving object is still a challenging task in vision-based robotic manipulation \cite{moving-grasp}.
Our learning-based approach provides a promising way to address this challenge.
To demonstrate the effectiveness of our approach, a case study is conducted in a scenario where a small fake mouse is moving and the robot is ordered to pick up the mouse in motion.
To pick up the moving mouse, the robot needs to track the target continuously and execute a grasp once the action output by the control policy falls below a preset threshold.
In order to further reduce the time between making the grasp decision and closing the gripper,
we add a fixed movement in the $z$ direction simultaneously with the movement in the $x$-$y$ plane, instead of moving down in the $z$ direction only after the decision is made.
This minor modification significantly improves the success rate of picking up the mouse in our experiments.
Example pictures of the robot picking up a moving mouse are shown in Fig.~\ref{moving-mouse}.
However, due to the computation cost of the system and the communication delay between the laptop and the robot, the overall delay in our current implementation is about $200\,ms$, which limits the speed of moving objects in our experiments.
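The tracking loop with simultaneous $z$ descent can be sketched as follows; the `perceive`, `policy`, and `robot` interfaces are hypothetical stand-ins for the real perception, PPO agent, and UR5 driver, and the step sizes are illustrative.

```python
def track_and_grasp(policy, perceive, robot, z_step=0.01, eps=0.02, max_steps=500):
    """Track a moving target and grasp it, descending in z while moving.

    perceive() returns the target pose (x, y, theta), policy(state) returns
    a (dx, dy, dtheta) action, and robot exposes pose/move/grasp primitives.
    """
    for _ in range(max_steps):
        state = tuple(perceive()) + tuple(robot.pose())  # 6-d state s_t
        action = policy(state)
        # descend a fixed amount in z simultaneously with the x-y motion,
        # instead of moving down only after the grasp decision is made
        robot.move(dx=action[0], dy=action[1], dtheta=action[2], dz=-z_step)
        if max(abs(a) for a in action) < eps:            # small action: grasp
            return robot.grasp()
    return False
```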
\section{Conclusion and future work}
\label{conclusion}
We presented a robotic grasping approach that combines visual perception with a DRL-based control policy.
Compared with alternatives, training on a real robot is avoided by decoupling the control from visual perception through a physical representation of objects, which makes both components easier to train.
Moreover, the policy trained in simulation can be transferred to a real system without any further training. Real-world experiments on a UR5 demonstrate robustness and generalization across a wide variety of challenging grasping tasks.
However, in this work we only consider 3DOF grasping, in which objects are placed on a table and the grasping height is fixed.
In future work, we would like to extend this work to 6DOF grasping.
To do so, it will be important to investigate the gripper pose in 3D shape perception.
\bibliographystyle{IEEEtran}
\label{s:intro}
Consider the \emph{defocusing nonlinear Schrödinger equation}
\[
\ii\partial_tu = -\partial_x^2u + 2\abs{u}^2u,\qquad x\in\T,
\]
on the circle $\T = \R/\Z$ with $u$ taken from $L^2 \defl L^2(\T,\C)$. As is well known, the \nls-equation can be written as an \emph{infinite dimensional Hamiltonian system}. We introduce the phase space $L^2_c\defl L^2\times L^2$ with elements $\ph=(\phm,\php)$ and define the inner product
\[
\lin{\ph,\psi} \defl \int_{\T} \ph_{+}\ob{\psi_{+}} + \ph_{-}\ob{\psi_{-}}\,\dx,
\]
which makes $L^{2}_{c}$ a Hilbert space. The associated norm is denoted by $\n{\ph}_{0}$. Further, we endow this space with the Poisson structure given by
\[
\{F,G\}
\defl
-\ii\int_{\T} (\partial_{\phm} F\, \partial_{\php} G
- \partial_{\php}F\,\partial_{\phm} G)\,\dx.
\]
Here $\partial_{\phm} F$ and $\partial_{\php} F$ denote the components of the $L^2$-gradient $\partial_{\ph} F$ of a $C^1$-functional $F$. The Hamiltonian system with Hamiltonian
\[
\Hc(\phm,\php) \defl \int_{\T} (\partial_{x}\phm\partial_{x}\php + \phm^{2}\php^{2})\,\dx
\]
is then given by
\[
\ii\partial_{t} (\phm,\php) = (\partial_{\php}\Hc,-\partial_{\phm}\Hc),
\]
and the defocusing \nls is obtained by restriction to the invariant subspace of real type states
\[
L^2_r
\defl
\setdef{\ph \in L_c^2}{\ph^* = \ph},\qquad \ph^* = (\ob\php,\ob\phm).
\]
Indeed, with $\ph = (u,\ob{u})$ the defocusing \nls-equation becomes
\[
\ii\partial_{t}u = \ii\setd{u,\Hc} = \partial_{\ob{u}}\Hc(u,\ob{u}).
\]
After the \kdv-equation, the \nls-equation was the second evolution equation known to be \emph{completely integrable} by the inverse scattering method~\cite{Zakharov:1975ft}. In fact, according to \cite{Grebert:2014iq} (cf. also \cite{Battig:1995jr,Battig:1993vu,McKean:1997ka}), the defocusing \nls-equation is integrable in the strongest possible sense meaning that it admits global \emph{Birkhoff coordinates} $(x_{n},y_{n})_{n\in\Z}$.
To state our main result, let $\sum_{n\in\Z} (\ph_{2n}^{-}\e^{-\ii 2n\pi x},\ph_{2n}^{+}\e^{\ii 2n\pi x})$ denote the Fourier series of $\ph=(\phm,\php)\in L_{c}^{2}$ and introduce for any $s\ge 0$ the Sobolev norm $\n{\ph}_{s}$ given by
\[
\n{\ph}_{s}^{2}
\defl \sum_{n\in\Z} \lin{2n\pi}^{2s}(\abs{\ph_{2n}^{-}}^{2}+\abs{\ph_{2n}^{+}}^{2}),
\qquad \lin{x} \defl 1+\abs{x}.
\]
The space of all $\ph\in L_{c}^{2}$ with $\n{\ph}_{s} < \infty$ is denoted $H^{s}_{c}$, and $H_{r}^{s} \defl L^{2}_{r}\cap H_{c}^{s}$. Furthermore, let us introduce the model space
\[
h_{r}^{s} \defl
\setdef{(x,y)=(x_{n},y_{n})_{n\in\Z}}
{\n{(x,y)}_{s} \defl (\n{x}_{s}^{2}+\n{y}_{s}^{2})^{1/2} < \infty},
\]
where the norm $\n{x}_{s}$ is defined by
\[
\n{x}_{s}^{2} \defl \sum_{n\in\Z} \lin{2n\pi}^{2s}\abs{x_{n}}^{2}.
\]
This space is endowed with the canonical Poisson structure $\{x_{n},y_{m}\} = -\dl_{n,m}$ while all other brackets vanish.
The \emph{Birkhoff map} $\ph\mapsto (x_{n},y_{n})_{n\in\Z}$ is a bi-analytic, canonical diffeomorphism $\Om\colon H^{0}_{r}\to h^{0}_{r}$, whose restriction $\Om\colon H^{m}_{r}\to h_{r}^{m}$ is again bi-analytic for any integer $m\ge 1$. On $h^{1}_{r}$ the transformed \nls Hamiltonian $\Hc\circ\Om^{-1}$ is a real analytic function of the actions $I_{n} = (x_{n}^{2}+y_{n}^{2})/2$ alone and the \nls evolution takes the particularly simple form
\[
\dot x_{n} = -\om_{n} y_{n},\quad
\dot y_{n} = \om_{n} x_{n},\qquad
\om_{n} \defl \partial_{I_{n}} \Hc.
\]
One may thus think of $\Om$ as a nonlinear Fourier transform for the defocusing \nls-equation.
Remarkably, the derivative $\ddd_{0}\Om$ of $\Om$ at the origin is the Fourier transform and on $L_{r}^{2}$, as for the Fourier transform,
\[
\n{\Om(\ph)}_{0} = \n{\ph}_{0},
\]
which we also refer to as Parseval's identity -- cf. e.g. \cite{McKean:1997ka,Grebert:2014iq}.
Our main result says that for higher order Sobolev norms the following version of Parseval's identity holds for the nonlinear map $\Om$.
\begin{theorem}
\label{b-est}
For any integer $m\ge 1$ there exist absolute constants $c_{m}$, $d_{m} > 0$ such that
the restriction of $\Om$ to $H_{r}^{m}$ satisfies the two sided estimates
\[
\emph{ (i)}\quad
\n{\Om(\ph)}_{m} \le c_{m}\bigl(
\n{\ph}_{m} + (1+\n{\ph}_{1})^{2m}\n{\ph}_{0} \bigr),
\]
\[
\emph{(ii)}\quad
\n{\ph}_m \le d_{m}\bigl(
\n{\Om(\ph)}_{m} + (1+\n{\Om(\ph)}_{1})^{4m-3}\n{\Om(\ph)}_{0} \bigr).
\]
\end{theorem}
The main feature of Theorem~\ref{b-est} is that the constants $c_{m}$ and $d_{m}$ can be chosen independently of $\ph$.
Note that the estimate (i) is linear in the highest Sobolev norm $\n{\ph}_{m}$ for $m\ge 2$, and that the estimate (ii) is linear in the highest weighted $h_{r}^{m}$-norm $\n{\Om(\ph)}_{m}$ of $\Om(\ph)$. Hence Theorem~\ref{b-est} shows that the 1-smoothing property of the Birkhoff map $\Om$ established in \cite{Kappeler:4WN-jiH9} holds in a uniform fashion.
The proof of Theorem~\ref{b-est} relies on estimates of the action variables $I(\ph) = (I_n)_{n\in\Z}$ where $I_{n} = (x_{n}^{2} + y_{n}^{2})/2$, $n\in\Z$. The decay properties of the actions $I_n$ are known to be closely related to the regularity properties of $\ph$ -- c.f. \cite{Kappeler:2009uk,Djakov:2006ba,Kappeler:2009kp}. We need to quantify this relationship by providing two sided estimates of the Sobolev norms of $\ph$ in terms of weighted $\ell^{1}$-norms of $I(\ph)$. For that purpose introduce the weighted sequence space $\ell^{1}_{s}$ whose norm is defined by
\[
\n{I(\ph)}_{\ell^{1}_{s}} \defl \sum_{n\in\Z} \lin{2n\pi}^{s} \abs{I_n(\ph)}.
\]
\begin{theorem}
\label{act-sob-est}
For any integer $m\ge 1$, there exist absolute constants $c_m$ and $d_{m}$, such that for all $\ph\in H_{r}^{m}$
\[
\emph{ (i)}\quad\n{I(\ph)}_{\ell^{1}_{2m}}
\le
c_{m}^{2}\bigl(\n{\ph}_{m}^{2}
+
(1+\n{\ph}_{1})^{4m}\n{\ph}_{0}^{2}\bigr),
\]
\[
\emph{(ii)}\quad\n{\ph}_m^{2} \le d_{m}^{2}\bigl(\n{I(\ph)}_{\ell^{1}_{2m}}
+ (1+\n{I(\ph)}_{\ell^{1}_{2}})^{4m-3}\n{I(\ph)}_{\ell^{1}}\bigr).
\]
\end{theorem}
\emph{Remark.} The same constants $c_{m}$, $d_{m}$ of Theorem~\ref{b-est} can be used in Theorem~\ref{act-sob-est}.
It turns out that versions of the estimates (i) of Theorems~\ref{b-est} \& \ref{act-sob-est} hold for a larger family of spaces referred to as \emph{weighted Sobolev spaces} -- see \cite{Kappeler:1999er,Kappeler:2001hsa} for an introduction. A \emph{normalised, submultiplicative}, and \emph{monotone weight} is a symmetric function $w\colon\Z\to\R$ with
\[
w_n \ge 1,\qquad w_{n} = w_{-n},\qquad w_{n+m}\le w_{n}w_{m},\qquad
w_{\abs{n}}\le w_{\abs{n}+1},
\]
for all $n,m\in\Z$. The class of all such weights is denoted by $\Mc$ and $H_{c}^w$ is the space of $L^{2}_{c}$ functions $\ph$ with finite $w$-norm
\[
\n{\ph}_w^2 \defl \sum_{n\in\Z} w_{2n}^2 (\abs{\ph_{2n}^{-}}^{2}+\abs{\ph_{2n}^{+}}^{2}).
\]
Furthermore, $h_{r}^{w}$ denotes the subspace of $\ell^{2}_{r}$ where
$\n{x}_{w}^{2} + \n{y}_{w}^{2} < \infty$,
\[
\n{x}_{w}^{2} \defl \sum_{n\in\Z} w_{2n}^{2} \abs{x_{n}}^{2}.
\]
For any $s \ge 0$, the \emph{Sobolev weight} $\lin{n\pi}^{s}$ gives rise to the usual Sobolev space $H^{s}_{c}$. For $s\ge 0$ and $a > 0$, the \emph{Abel weight} $\lin{n\pi}^s \e^{a\abs{n}}$ gives rise to the space $H^{s,a}_{c}$ of $L^2_{c}$-functions, which can be analytically extended to the open strip $\setdef{z}{\abs{\Im z} < a/2\pi}$ of the complex plane with traces in $H_{c}^s$ on the boundary lines. In between are, among others, the \emph{Gevrey weights}
\[
\lin{n}^s \e^{a\abs{n}^\sigma},\qquad 0< \sigma < 1,\quad s\ge 0,\quad a > 0,
\]
which give rise to the Gevrey spaces $H^{s,a,\sigma}_{c}$, as well as weights of the form
\[
\lin{n}^s\exp\biggl(\frac{a\abs{n}}{1+\log^{\sg}\lin{n}} \biggr),
\qquad 0< \sigma < 1,\quad s\ge 0,\quad a > 0,
\]
that are lighter than Abel weights but heavier than Gevrey weights.
To avoid certain technicalities in our estimates, we restrict ourselves to weights incorporating a factor which grows at a linear rate. We thus introduce the subclass
\[
\Mc^{1} \defl
\setdef*{w\in\Mc}
{w_{n}=\lin{n}v_{n}\; \text{ for all }n\in\Z\text{ with some }v\in\Mc}.
\]
Finally, we assume all weights $w\in\Mc$ to be piecewise linearly extended to functions on the real line $w\colon \R\to\R_{>0}$, $t\mapsto w[t]$.
\begin{theorem}
\label{act-west}
For any weight $w\in\Mc^{1}$ there exists a complex neighbourhood $W_{w}$ of $H^{w}_{r}$ within $H^{w}_{c}$ and a constant $c_{w}$, such that
\[
\emph{ (i)}\quad\sum_{n\in\Z} w_{2n}^{2}\abs{I_{n}}
\le
c_{w}^{2}w[16\n{\ph}_{w}^{2}]^{2}\n{\ph}_{w}^{2}.
\]
Moreover, the restriction of the Birkhoff map $\Om$ to $H^{w}_{r}$ takes values in $h_{r}^{w}$ and satisfies
\[
\emph{ (ii)}\quad
\n{\Om(\ph)}_{w} \le c_{w}w[16\n{\ph}_{w}^{2}]\n{\ph}_{w}.
\]
\end{theorem}
In this more general setup, the bounds in (i) and (ii) of Theorem~\ref{act-west} are of the same type as the weight and are valid for all submultiplicative weights, including those growing exponentially fast. The following version of Theorem~\ref{act-west} for Sobolev spaces of real exponent complements the results of Theorems~\ref{act-sob-est}-\ref{act-west}.
\begin{corollary}
For any real $s\ge 1$ there exist a complex neighbourhood $W_{s}$ of $H^{s}_{r}$ and a constant $c_{s}$ such that
\[
\n{I(\ph)}_{\ell^{1}_{2s}} \le c_{s}^{2}(1+\n{\ph}_{s})^{4s}\n{\ph}_{s}^{2}.
\]
Moreover, the restriction of the Birkhoff map $\Om$ to $H^{s}_{r}$ takes values in $h_{r}^{s}$ and satisfies
\[
\n{\Om(\ph)}_{s} \le c_{s}(1+\n{\ph}_{s})^{2s}\n{\ph}_{s}.
\]
\end{corollary}
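The Corollary may be deduced from Theorem~\ref{act-west} as follows. For real $s\ge 1$ the Sobolev weight $w_{n} = \lin{n}^{s} = \lin{n}\lin{n}^{s-1}$ belongs to $\Mc^{1}$, and since $n\mapsto \lin{n}^{s}$ is nondecreasing in $\abs{n}$,
\[
w[16\n{\ph}_{s}^{2}]
\le \lin{\lceil 16\n{\ph}_{s}^{2}\rceil}^{s}
\le C_{s}(1+\n{\ph}_{s})^{2s}
\]
with a constant $C_{s}$ depending only on $s$, as $\lceil 16t^{2}\rceil \le 17(1+t)^{2}$ for $t\ge 0$. Absorbing $C_{s}$ into $c_{s}$ yields the stated bounds.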
\emph{Update.} In this updated version of the article~\cite{Molnar:2014vg} we improve the estimates of Theorems~\ref{b-est} and \ref{act-sob-est} so that the remainders depend only on the $H^{1}$-norm of $\ph$ and the $\ell_{2}^{1}$-norm of $I(\ph)$ instead of the $H^{m-1}$-norm and $\ell_{2m-2}^{1}$-norm, respectively.
\emph{Outline.} The action variables $I_{n}$ and more generally the action variables $J_{n,k}$ on levels $k\ge 1$ can be defined entirely in terms of the periodic spectrum of the associated Zakharov-Shabat operator used in the Lax-pair formulation of the \nls-equation. More to the point, consider the operator
\[
L(\ph) =
\mat[\bigg]{\,\ii & \\ & -\ii}
\frac{\ddd}{\dx} +
\mat[\bigg]{ & \psi \\ \ob\psi & },
\]
with periodic boundary conditions on the interval $[0,2]$, whose length is twice the period of $\ph=(\psi,\ob\psi)\in L^{2}_{r}$. The spectrum of $L(\ph)$ is well known to be real, pure point, and to consist of a bi-infinite sequence of eigenvalues
\[
\dotsb \le \lm_{n-1}^{+} < \lm_{n}^{-} \le \lm_{n}^{+} < \lm_{n+1}^{-} \le \dotsb
\]
with asymptotic behaviour $\lm_{n}^{\pm} \sim n\pi$ as $\abs{n}\to\infty$.
The asymptotic behaviour of the actions on odd levels $k=2m+1$ turns out to be
\[
J_{n,2m+1} \sim (\lm_{n}^{\pm})^{2m}I_{n} \sim (n\pi)^{2m}I_{n},\qquad \abs{n}\to \infty,
\]
and they satisfy the trace formula
\[
\sum_{n\in\Z} J_{n,2m+1} = \frac{(-1)^{m+1}}{4^{m}}\Hc_{2m+1},\qquad m\ge 1,
\]
where $\Hc_{2m+1}$ denotes the $(2m+1)$st Hamiltonian in the \nls-hierarchy,
\[
\Hc_{1} = \int_{\T} \abs{\psi}^{2}\,\dx,\quad
\Hc_{3} = \int_{\T} (\abs{\psi'}^{2} + \abs{\psi}^{4})\,\dx,\quad \dotsc.
\]
Note that $\Hc_{3}$ denotes the Hamiltonian of the \nls equation. On $H_{r}^{m}$, for $m\ge 1$,
\[
\Hc_{2m+1} = (-1)^{m+1}\int_{\T} \left(\abs{\ps^{(m)}}^{2} + p_{m}(\ps,\dotsc,\ps^{(m-1)})\right)\,\dx,
\]
where $p_{m}$ is a polynomial expression in $\psi$ and its first $m-1$ derivatives.
Viewing $\Hc_{2m+1}$ as a lower order perturbation of the $H_{r}^{m}$-norm we formally obtain at first order
\[
\sum_{n\in\Z} (n\pi)^{2m} I_{n}
\sim \sum_{n\in\Z} J_{n,2m+1}
\sim \n{\ph}_{m}^{2}.
\]
The essential ingredient to make this formal statement explicit is a sufficiently accurate localisation of the periodic eigenvalues $\lm_{n}^{\pm}$, with a threshold in $\abs{n}$ depending only on $\n{\ph}_{1}$. For $\abs{n}$ above this threshold we can directly compare the weighted action norms and the polynomial expressions in $\ph$ as described above, while the remainder for $\abs{n}$ below the threshold can be regarded as an $H^{1}$-error term. From this we obtain Theorem~\ref{act-sob-est}, which directly implies Theorem~\ref{b-est}. Note that our method of proof completely avoids the use of auxiliary spectral quantities such as \emph{spectral heights} or \emph{conformal mapping theory}.
To prove Theorem~\ref{act-west}, we take a slightly different approach by quantitatively estimating the action variables in terms of the spacing of the periodic eigenvalues of the associated Zakharov-Shabat operator. For the latter we obtain estimates in any weighted norm, which allows us to obtain Theorem~\ref{act-west}.
\emph{Related results.}
Theorem~\ref{b-est} for $m=0$ is referred to as Parseval's identity,
\[
\frac{1}{2}\n{\Om(\ph)}_{0}^{2} = \n{I(\ph)}_{\ell^{1}} = \frac{1}{2} \n{\ph}_{0}^{2},
\]
and is well known -- see \cite{Grebert:2014iq,McKean:1997ka}. The case $m=1$ was proved by Korotyaev~\cite{Korotyaev:2005fb} using conformal mapping theory, see also \cite{Korotyaev:2010ft}. However, his method does not seem applicable to the case $m\ge 2$; in fact, this case is stated as an open problem in \cite{Korotyaev:2010ft}.
For the case of the KdV-equation
\[
\partial_{t}u = -\partial_{x}^{3}u + 6u\partial_{x}u,\qquad x\in\T,
\]
Korotyaev~\cite{Korotyaev:2000tc,Korotyaev:2006uh} obtained polynomial bounds of the Sobolev norms $\n{u}_{m}$ in terms of the action variables, where the order of the polynomials grows factorially in $m$. Note that the bound in (ii) of Theorem~\ref{act-sob-est} is of order 1 in the Sobolev norm $\n{\ph}_{m}$, and the order of the remainder grows linearly in $m$. It turns out that our method can also be applied to the \kdv-equation. In \cite{Molnar:_ROURXz4} we improve on the bounds obtained by Korotyaev in \cite{Korotyaev:2000tc,Korotyaev:2006uh}.
For \nls in weighted Sobolev spaces the qualitative relationship
\[
\ph\in H_{r}^{w}\quad \iff \quad \Om(\ph) \in h_{r}^{w}
\]
is a corollary of the methods presented in \cite{Kappeler:2009kp,Djakov:2006ba} -- at least for weights growing at subexponential speed. To the best of our knowledge, the estimate of $\n{\Om(\ph)}_{w}$ on $H_{r}^{w}$ as well as the estimate of $\n{I(\ph)}_{w}$ on a small complex neighbourhood of $H_{r}^{w}$ as presented in Theorem~\ref{act-west} are new.
Viewing the action $I_{n}$ as a 1-smoothing perturbation of the squared modulus of the $n$th Fourier coefficient, our method of comparing the weighted action norms with the Hamiltonians of the \nls-hierarchy amounts to a separate analysis of Fourier modes of low and high frequencies. This idea has a long history in the analysis of nonlinear PDEs. Most recently, it led Colliander, Keel, Staffilani, Takaoka \& Tao \cite{Colliander:2001wg,Colliander:2001fc,Colliander:2004gc,Colliander:2010fs} to invent the I-method, which allows one to obtain global well-posedness of subcritical equations in low regularity regimes where the Hamiltonian (or other integrals) of the equation ceases to be well defined. The idea is to damp all sufficiently high Fourier modes of a local solution such that the Hamiltonian can be controlled by weaker norms while still being an ``almost conserved'' quantity. The difficulty here is to choose the damping subtly enough that the nonlinearity of the equation does not create a significant interaction of low and high frequencies. Our situation is, in a sense, opposite to that of the I-method: as we aim for quantitative global estimates, controlling the modes of low frequencies is the most delicate part.
Here the localisation of the periodic eigenvalues of the Zakharov-Shabat operator associated with the \nls equation plays a crucial role. Note that there exists a vast amount of literature on the spectral theory of these operators -- see e.g. \cite{Korotyaev:2010ft,Grebert:1998cz,Kappeler:2001hsa,Djakov:2006ba,Grebert:2014iq} and the references therein.
\emph{Organisation of the paper.} In section~\ref{s:setup} we recall the definition of the \nls action variables on integer levels $k\ge 1$ as well as the trace formulae relating them to the hierarchy of \nls Hamiltonians. The somewhat lengthy proof of the analyticity properties of the action integrand can be found in appendix~\ref{a:ana}. The quadratic localisation of the Zakharov-Shabat spectrum is obtained in section~\ref{s:trap-sp}, and is subsequently used in sections~\ref{s:act-est} and \ref{s:sob-est} to obtain Theorem~\ref{act-sob-est} (i) and (ii), respectively. In the final section~\ref{s:act-west} we obtain the estimate of the actions in terms of the spacing of the Zakharov-Shabat eigenvalues, which implies Theorem~\ref{act-west}.
\emph{Acknowledgement.} The author is very grateful to Professor Thomas Kappeler for continued support and helpful comments on this paper.
\section{Setup}
\label{s:setup}
In this section we briefly recall the definition of \nls action variables from~\cite{Grebert:2014iq}, as well as the main properties of the periodic spectrum of Zakharov-Shabat operators used to define them.
For a \emph{potential} $\ph=(\phm,\php)\in H_{c}^0 = L^2_c$ consider the Zakharov-Shabat operator
\[
L(\ph) \defl
\mat[\bigg]{\,\ii & \\ & -\ii}
\frac{\ddd}{\dx} +
\mat[\bigg]{ & \phm \\ \php & }
\]
on the interval $[0,2]$ with periodic boundary conditions. The spectrum of $L(\ph)$ is well known to be pure point, and more precisely, to consist of a sequence of pairs of complex eigenvalues $\lm_n^+(\ph)$ and $\lm_n^-(\ph)$, listed with algebraic multiplicities, such that
\[
\lm_n^\pm(\ph) = n\pi + \ell^2_n,\qquad n\in\Z.
\]
Here $\ell_n^2$ denotes a generic $\ell^2$-sequence. We may order the eigenvalues lexicographically -- first by their real part, and second by their imaginary part -- to represent them as a bi-infinite sequence of complex eigenvalues
\[
\dotsb \lex \lambda_{n-1}^+ \lex \lambda_{n}^- \lex \lambda_{n}^+ \lex \lambda_{n+1}^- \lex \dotsb.
\]
By a slight abuse of notation, we call the eigenvalues of $L(\ph)$ the \emph{periodic spectrum of $\ph$}. Further we introduce the \emph{gap lengths}
\[
\gm_n \defl \lm_n^+ - \lm_n^- = \ell^2_n,\qquad n\in\Z.
\]
To obtain another characterisation of the periodic spectrum, we denote by $M(x,\lm,\ph)$ the standard fundamental solution of the ordinary differential equation $L(\ph)M = \lm M$, and introduce the discriminant
\[
\Dl(\lm,\ph) \defl \operatorname{tr} M(1,\lm,\ph).
\]
To simplify matters, we may drop some or all of its arguments from the notation, whenever there is no danger of confusion. The periodic spectrum of $\ph$ is precisely the zero set of the entire function $\Dl^2(\lm) - 4$, and we have the product representation
\[
\Dl^2(\lm) - 4
=
-4\prod_{n\in\Z}
\frac{(\lm_n^+ - \lm)(\lm_n^- - \lm)}{\pi_n^2},
\quad
\pi_n
\defl
\begin{cases}
n\pi, & n\neq 0,\\
1, & n=0.
\end{cases}
\]
Hence, this function is a spectral invariant. We also need the $\lm$-derivative $\Dl^{\ld}\defl\partial_{\lm}\Dl$ whose zeros are denoted by $\lm_{n}^{\text{\tiny\ensuremath{\bullet}}}$ and satisfy $\lm_{n}^{\text{\tiny\ensuremath{\bullet}}} = n\pi + \ell^{2}_{n}$. This derivative has the product representation
\[
\Dl^{\ld}(\lm) = 2\prod_{n\in\Z} \frac{\lm_{n}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\pi_{n}}.
\]
If the potential $\ph$ is of real type, as in the case of the defocusing \nls equation, the periodic spectrum is real, the eigenvalues are characterised by Floquet theory, and the lexicographical ordering reduces to the ordering of real numbers
\[
\dotsb
\le \lm_{n-1}^{+} < \lm_{n}^{-} \le \lm_{n}^{\text{\tiny\ensuremath{\bullet}}} \le \lm_{n}^{+} < \lm_{n+1}^{-}
\le \dotsb.
\]
Each $\ph\in L^{2}_{r}$ has an open neighbourhood $V_{\ph}$ within $L^{2}_{c}$ for which there exist disjoint closed discs $(U_{n})_{n\in\Z}$ centred on the real axis with the properties:
\begin{itemize}
\item[(i)] $\lm_{n}^{\pm}(\psi)$ and $\lm_{n}^{\text{\tiny\ensuremath{\bullet}}}(\psi)$ are contained in the interior of $U_{n}$ for every $\psi\in V_{\ph}$,
\item[(ii)] there exists a constant $c \ge 1$ such that for $m\neq n$,
\[
c^{-1}\abs{m-n} \le \dist(U_{n},U_{m}) \le c\abs{m-n},
\]
\item[(iii)] $U_{n} = \setd{\abs{\lm-n\pi} \le \pi/4}$ for $\abs{n}$ sufficiently large.
\end{itemize}
\noindent
Such discs are called \emph{isolating neighbourhoods}. The union of all $V_{\ph}$ defines an open and connected neighbourhood of $L^{2}_{r}$ within $L^{2}_{c}$ and is denoted by $W$.
Throughout this text $V_{\ph}$ denotes a neighbourhood of $\ph$ such that a common set of isolating neighbourhoods for all $\psi\in V_{\ph}$ exists.
Following Flaschka \& McLaughlin's approach for the \kdv-equation \cite{Flaschka:1976tc}, one can define action variables for the defocusing \nls-equation by Arnold's formula -- see also \cite{McKean:1997ka}
\[
I_n
=
\frac{1}{\pi}\int_{a_n}
\frac{\lm \Dl^{\ld}(\lm)}{\sqrt{\Dl^2(\lm)-4}}
\,\dlm.
\]
Here $a_{n}$ denotes a cycle around $(\lm_{n}^{-},\lm_{n}^{+})$ on the spectral curve
\[
C_{\ph} = \setdef{(\lm,z)}{z^2 = \Dl^2(\lm,\ph) - 4}\subset\C^{2},
\]
on which the square root $\sqrt{\Dl^2(\lm)-4}$ is defined. This curve is another spectral invariant associated with $\ph$, and an open Riemann surface of infinite genus if and only if the periodic spectrum of $\ph$ is simple. To avoid the technicalities involved with this curve, we follow the approach presented in \cite{Grebert:2014iq} and fix proper branches of the square root which allows us to reduce the definition of the actions to standard contour integrals in the complex plane.
Firstly, we denote by $\sqrt[+]{\phantom{\lm}}$ the \emph{principal branch} of the square root on the complex plane minus the ray $(-\infty,0]$. Secondly, we define the \emph{standard root}
\[
\vs_{n}(\lm) = \sqrt[\mathrm{s}]{(\lm_{n}^{+}-\lm)(\lm_{n}^{-}-\lm)},
\qquad \lm\notin [\lm_{n}^{-},\lm_{n}^{+}],
\]
by the condition
\begin{align}
\label{s-root}
\vs_{n}(\lm) \defl (\tau_{n}-\lm)\sqrt[+]{1 - \gm_{n}^{2}/4(\tau_{n}-\lm)^{2}},
\qquad \tau_{n} = (\lm_{n}^{-}+\lm_{n}^{+})/2,
\qquad \tau_{n} = (\lm_{n}^{-}+\lm_{n}^{+})/2,
\end{align}
for all $\abs{\lm}$ sufficiently large. For any $\ph\in W$ the standard root is analytic in $\lm$ on $\C\setminus[\lm_{n}^{-},\lm_{n}^{+}]$ and in $(\lm,\psi)$ on $(\C\setminus U_{n})\times V_{\ph}$. Thirdly, we define the \emph{canonical root} $\sqrt[c]{\Dl^{2}(\lm)-4}$ by the product representation
\[
\sqrt[c]{\Dl^{2}(\lm)-4} \defl 2\ii\prod_{m\in\Z} \frac{\vs_{m}(\lm)}{\pi_{m}}.
\]
For any $\ph\in W$ this root is analytic in $\lm$ on $\C\setminus\bigcup_{\gm_{n}\neq 0} [\lm_{n}^{-},\lm_{n}^{+}]$ and in $(\lm,\psi)$ on $(\C\setminus \bigcup_{n\in\Z} U_{n})\times V_{\ph}$ -- see \cite[Section 12]{Grebert:2014iq} for all the details.
The $n$th \nls action variable of $\ph\in W$ is then given by
\[
\quad I_n
\defl
\frac{1}{\pi}\int_{\Gm_n}
\frac{\lm \Dl^{\ld}(\lm)}{\sqrt[c]{\Dl^2(\lm)-4}}\,\dlm,
\]
where $\Gm_{n}$ denotes any circuit in $U_{n}$ sufficiently close to $[\lm_{n}^{-},\lm_{n}^{+}]$. More generally, the $n$th action on level $k\ge 1$ is given by
\[
J_{n,k}
\defl
\frac{1}{k\pi}\int_{\Gm_n}
\frac{\lm^{k}\Dl^{\ld}(\lm)}{\sqrt[c]{\Dl^{2}(\lm)-4}}\,\dlm.
\]
It was shown in \cite{Grebert:2014iq} and, for convenience of the reader, will be reproved in the sequel that each action variable is analytic on $W$ and vanishes if and only if $\gm_{n}$ is zero.
If $\ph$ is of real type, then all actions are real and those on odd levels, such as $J_{n,1} = I_{n}$, are nonnegative. Moreover, the actions on level one are well known to satisfy the trace formula,
\begin{align}
\label{tf-1}
\sum_{n\in\Z} I_{n}(\ph) = \Hc_{1}(\ph) = \frac{1}{2}\n{\ph}_{0}^{2}
= \frac{1}{2}\int_{\T} (\abs{\phm}^{2} + \abs{\php}^{2})\,\dx.
\end{align}
Similar trace formulae have been derived by McKean \& Vaninsky~\cite{McKean:1997ka} expressing the actions on any level $k\ge 1$ in terms of Hamiltonians of the \emph{\nls-hierarchy}. The first three Hamiltonians of this hierarchy are
\begin{align*}
\Hc_{1}(\ph) &= \phantom{\frac{1}{2}} \int_{\T} \phm\php\,\dx,\\
\Hc_{2}(\ph) &= \frac{1}{2} \int_{\T} (\phm'\php - \phm\php')\,\dx,\\
\Hc_{3}(\ph) &= \phantom{\frac{1}{2}} \int_{\T} (\phm'\php' + \php^{2}\phm^{2})\,\dx,
\end{align*}
and in general, for any sufficiently regular $\ph\in L^{2}_{c}$,
\[
\Hc_{k+1}(\ph) =
\int_{\T} (-\phm^{\phantom{.}}\php^{(k)} + q_{k}(\ph,\dotsc,\ph^{(k-1)}))\,\dx,\qquad k\ge 1,
\]
with $q_{k}$ being a canonically determined polynomial in $\ph$ and its first $k-1$ derivatives -- see appendix~\ref{s:hamil}. The following version of the trace formula is taken from \cite{Grebert:2014iq}.
\begin{theorem}[Trace Formula]
\label{tf}
For any $k\ge 2$ and any $\ph\in H_{c}^{k-1}\cap W$,
\begin{align}
\label{tf-k}
\sum_{n\in\Z} J_{n,k}(\ph) = -\frac{1}{(2\ii)^{k-1}}\Hc_{k}(\ph).
\end{align}
\end{theorem}
In particular, for every sufficiently regular real type potential
\[
\sum_{n\in\Z} J_{n,2m+1}(\ph) =
\frac{1}{4^{m}} \int_{\T} \bigl(\abs{\ph^{(m)}}^{2} + \dotsb\bigr)\,\dx,\qquad m\ge 0.
\]
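For $m\ge 1$ the coefficient $1/4^{m}$ is obtained from \eqref{tf-k} with $k=2m+1$, since
\[
-\frac{1}{(2\ii)^{2m}} = -\frac{1}{(-4)^{m}} = \frac{(-1)^{m+1}}{4^{m}},
\]
and the sign $(-1)^{m+1}$ cancels against the corresponding sign in the representation of $\Hc_{2m+1}$ on $H^{m}_{r}$; the case $m=0$ is the trace formula \eqref{tf-1}.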
This identity is used in sections~\ref{s:act-est} and \ref{s:sob-est} to estimate the actions on level $2m+1$ in terms of $\n{\ph}_{m}$. In order to derive from this estimates for the action variables on level one, a detailed analysis of the analytical properties of the action integrand is necessary. To this end, we define for any $\ph\in W$ on $(\C\setminus \bigcup_{n\in\Z} U_{n}) \times V_{\ph}$ the complex 1-form
\begin{align}
\label{om}
\om(\lm,\psi)
\defl \frac{\Dl^{\ld}(\lm,\psi)}{\sqrt[c]{\Dl^{2}(\lm,\psi)-4}}\,\dlm
= -\ii\prod_{m\in\Z} \frac{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}(\psi)-\lm}{\vs_{m}(\lm,\psi)} \dlm.
\end{align}
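The second equality in \eqref{om} follows by combining the product representations of $\Dl^{\ld}$ and $\sqrt[c]{\Dl^{2}-4}$:
\[
\frac{\Dl^{\ld}(\lm)}{\sqrt[c]{\Dl^{2}(\lm)-4}}
= \frac{2\prod_{m\in\Z} (\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\lm)/\pi_{m}}
{2\ii\prod_{m\in\Z} \vs_{m}(\lm)/\pi_{m}}
= -\ii\prod_{m\in\Z} \frac{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\vs_{m}(\lm)},
\]
using $1/\ii = -\ii$.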
We call a path in the complex plane \emph{admissible} for $\ph\in L^{2}_{c}$ if, except possibly at its endpoints, it does not intersect any gap $[\lm_{n}^{-}(\ph),\lm_{n}^{+}(\ph)]$.
\begin{lemma}
\label{w-closed}
For each $\ph\in W$, the 1-form $\om$ has the following properties:
\begin{itemize}
\item[(i)]
$\om$ is analytic on $(\C\setminus \bigcup_{n\in\Z} U_{n}) \times V_{\ph}$,
\item[(ii)]
$\om(\cdot,\ph)$ is analytic on $\C\setminus \bigcup_{\gm_{n}\neq 0} [\lm_{n}^{-},\lm_{n}^{+}]$, and
\item[(iii)]
for any admissible path from $\lm_{n}^{-}$ to $\lm_{n}^{+}$ in $U_{n}$,
\[
\int_{\lm_{n}^{-}}^{\lm_{n}^{+}} \om = 0.
\]
In particular, for any closed circuit $\Gm_{n}$ in $U_{n}$ around $[\lm_{n}^{-},\lm_{n}^{+}]$,
\[
\int_{\Gm_{n}} \om = 0.
\]
\end{itemize}
\end{lemma}
\begin{proof}
Since $\Dl^{2}(\lm,\psi)-4$ vanishes if and only if $\lm$ is a periodic eigenvalue, and numerator and denominator of $\om(\lm,\psi)$ are analytic on $(\C\setminus \bigcup_{n\in\Z} U_{n}) \times V_{\ph}$, the first claim follows immediately.
By the same reasoning, $\om(\cdot,\ph)$ is analytic on $\C\setminus \bigcup_{n\in\Z} [\lm_{n}^{-},\lm_{n}^{+}]$. Moreover, if the $n$th gap is collapsed, that is $\lm_{n}^{+} = \lm_{n}^{-} \defr \lm_{n}$, then $\Dl^{2}(\lm)-4$ has a double root at $\lm_{n}$, and hence
\[
0 = \Dl(\lm_{n})\Dl^{\ld}(\lm_{n}) = 2(-1)^{n}\Dl^{\ld}(\lm_{n}).
\]
As $\Dl^{\ld}(\lm)$ has a single root in $U_{n}$, namely $\lm_{n}^{\text{\tiny\ensuremath{\bullet}}}$, we conclude $\lm_{n}^{\text{\tiny\ensuremath{\bullet}}} = \lm_{n} = \lm_{n}^{\pm}$, and the $n$th term in the product representations of $\Dl^{\ld}$ and $\sqrt[c]{\Dl^{2}-4}$ cancels. So $\om(\cdot,\ph)$ is analytic on all of $U_{n}$, and the second claim follows.
We first consider the case where $\ph$ is of real type. Then $(-1)^{n}\Dl(\lm,\ph) \ge 2$ on $[\lm_{n}^{-},\lm_{n}^{+}]$, as shown in \cite{Grebert:2014iq}, and hence
\[
\min_{\lm_{n}^{-} \le \lm\le \lm_{n}^{+}} (-1)^{n}\Dl(\lm,\ph) - \sqrt[+]{\Dl^{2}(\lm,\ph)-4} > 0.
\]
Thus, if we choose a circuit $\Gm$ sufficiently close to $[\lm_{n}^{-},\lm_{n}^{+}]$, and a sufficiently small neighbourhood $V\subset V_{\ph}$ of $\ph$, then $[\lm_{n}^{-}(\psi),\lm_{n}^{+}(\psi)]$ is enclosed by $\Gm$ and the real part of $(-1)^{n}(\Dl(\lm,\psi) - \sqrt[c]{\Dl^{2}(\lm,\psi)-4})$ is positive on that circuit for all $\psi\in V$. In consequence, the principal branch of the logarithm
\[
l_{n}(\lm,\psi) = \log\frac{(-1)^{n}}{2}\left(\Dl(\lm,\psi) + \sqrt[c]{\Dl^{2}(\lm,\psi)-4}\right)
\]
is analytic in a neighbourhood of $\Gm$ and $\ddd l_{n} = \om$. It follows that the analytic functional $\psi \mapsto \int_{\partial U_{n}} \om$ vanishes on an open neighbourhood of $\ph$, and hence on all of $V_{\ph}$. Since $W = \bigcup_{\ph\in L^{2}_{r}} V_{\ph}$, it follows that $\int_{\Gm_{n}}\om = 0$ for every $\ph\in W$.
Finally, consider $\int_{\lm_{n}^{-}}^{\lm_{n}^{+}} \om$ with the path of integration chosen to be admissible. As $\om$ is closed around the gap, the integral does not depend on the chosen admissible path. Suppose $\gm_{n} \neq 0$, then by the product representation \eqref{om} of $\om$,
\begin{align}
\label{chi-om}
\int_{\lm_{n}^{-}}^{\lm_{n}^{+}} \om
= -\ii \int_{\lm_{n}^{-}}^{\lm_{n}^{+}} \frac{\lm_{n}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\vs_{n}(\lm)}\chi_{n}(\lm)\,\dlm,
\qquad \chi_{n}(\lm) \defl \prod_{m\neq n} \frac{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}} - \lm}{\vs_{m}(\lm)},
\end{align}
where $\chi_{n}(\lm)$ is analytic on $\C\setminus\bigcup_{m\neq n}[\lm_{m}^{-},\lm_{m}^{+}]$ -- see \cite[Section 12]{Grebert:2014iq}. Further, by the definition of the standard root \eqref{s-root}, the function
\[
z \mapsto \vs_{n}(\tau_{n} + z \gm_{n}/2) = -z\sqrt[+]{1-z^{-2}}\gm_{n}/2
\]
is analytic on $\C\setminus[-1,1]$. For $s \in (-1,1)$ consider $z_{s} = s \pm \ii \ep$ and let $\ep\downarrow 0$ to conclude
\[
\vs_{n}(\tau_{n} + (s \pm \ii 0) \gm_{n}/2) = \mp \ii \sqrt[+]{1-s^{2}}\gm_{n}/2.
\]
In particular, if $\al_{1}$ and $\al_{2}$ are admissible paths from $\lm_{n}^{-}$ to $\lm_{n}^{+}$ running on different sides of $[\lm_{n}^{-},\lm_{n}^{+}]$, then the integrand takes the opposite sign on these paths and hence
\[
\int_{\al_{1}} \om = -\int_{\al_{2}} \om.
\]
On the other hand, as $\int_{\partial U_{n}} \om = 0$, the integral is independent of the chosen path $\al_{i}$ and thus needs to be zero.\qed
\end{proof}
If we write the action variables as
\begin{align}
\label{Jn-om}
J_{n,k} = \frac{1}{k\pi}\int_{\Gm_{n}} \lm^{k}\om,
\end{align}
then the analyticity on $W$ is evident. To proceed we need to find a globally defined primitive of $\om$. So for $\ph\in W$ we define on $(\C\setminus\bigcup_{n\in\Z} U_{n}) \times V_{\ph}$ the mapping
\[
F(\lm,\psi) \defl \frac{1}{2}\left(\int_{\lm_{0}^{-}(\psi)}^{\lm} \om(\mu,\psi) + \int_{\lm_{0}^{+}(\psi)}^{\lm} \om(\mu,\psi)\right),
\]
where the paths of integration are chosen to be admissible. These improper integrals exist, as for $\gm_{0} = 0$ the integrand is analytic on $U_{0}$, while for $\gm_{0}\neq 0$ it is of the form $1/\sqrt{\lm_{0}^{\pm}-\lm}$ locally around $\lm_{0}^{\pm}$. By Lemma~\ref{w-closed} they are also independent of the chosen admissible path. Hence $F$ is well defined and one has
\[
F(\lm,\psi) = \int_{\lm_{0}^{-}(\psi)}^{\lm} \om(\mu,\psi) = \int_{\lm_{0}^{+}(\psi)}^{\lm} \om(\mu,\psi).
\]
Even though the eigenvalues $\lm_{0}^{\pm}$ are, due to their lexicographical ordering, not even continuous on $W$, the mapping $F$ turns out to be differentiable.
\begin{lemma}
\label{F-prop}
For every $\ph\in W$, we have that
\begin{itemize}
\item[(i)] $F$ is analytic on $(\C\setminus\bigcup_{n\in\Z} U_{n}) \times V_{\ph}$, and
\item[(ii)] $F(\cdot,\ph)$ can be continuously extended onto $\C\setminus \bigcup_{\gm_{n}\neq 0} (\lm_{n}^{-}, \lm_{n}^{+})$ with
\[
F(\lm_{n}^{+},\ph) = F(\lm_{n}^{-},\ph),\qquad n\in\Z.
\]
\item[(iii)] If, in addition, $\ph\in L^{2}_{r}$, then locally around $[\lm_{n}^{-},\lm_{n}^{+}]$
\[
F(\lm) = l_{n}(\lm) - \ii n\pi,\qquad l_{n}(\lm) = \log\frac{(-1)^{n}}{2}\left(\Dl(\lm) + \sqrt[c]{\Dl^{2}(\lm)-4}\right).
\]
In particular, for any real $\lm_{n}^{-} < \lm < \lm_{n}^{+}$,
\[
F(\lm \pm \ii 0) = \pm f_{n}(\lm) - \ii n\pi,
\qquad f_{n}(\lm) = \cosh^{-1}((-1)^{n}\Dl(\lm)/2).
\]
Clearly, $f_{n}$ is continuous on $[\lm_{n}^{-},\lm_{n}^{+}]$, strictly positive on $(\lm_{n}^{-},\lm_{n}^{+})$, and vanishes at the boundary points.
\item[(iv)] At the zero potential one has $F(\lm,0) = -\ii \lm$.
\end{itemize}
\end{lemma}
\begin{proof}
The proof of (i) is standard but a bit technical and can be found in appendix~\ref{a:ana}, and claim (ii) follows immediately from the properties of $\om$.
(iii): Note that locally around $[\lm_{n}^{-},\lm_{n}^{+}]$ both $l_{n}$ and $F$ are primitives of $\om$ and hence are identical up to an additive constant which may depend on $\ph$. Clearly, $l_{n}(\lm_{n}^{\pm}) = 0$. On the other hand, since $\int_{\lm_{k}^{-}}^{\lm_{k}^{+}}\om = 0$ for any $k$,
\[
F(\lm_{n}^{\pm}) = \sum_{k=0}^{n-1} \int_{\lm_{k}^{+}}^{\lm_{k+1}^{-}} \om,\qquad
F(\lm_{-n}^{\pm}) = \sum_{k=0}^{n-1} \int_{\lm_{-k}^{-}}^{\lm_{-k-1}^{+}} \om,\qquad n > 0.
\]
As $\ii(-1)^{k}\sqrt[c]{\Dl^{2}(\lm)-4} > 0$ for $\lm_{k}^{+} < \lm < \lm_{k+1}^{-}$ -- see \cite[Section 12]{Grebert:2014iq} -- we find
\[
\int_{\lm_{k}^+}^{\lm_{k+1}^-} \om =
\ii(-1)^{k}\int_{\lm_{k}^+}^{\lm_{k+1}^-}
\frac{\Dl^{\ld}(\lm)}{\sqrt[+]{4-\Dl^2(\lm)}}\,\dlm
=
\ii(-1)^{k}
\arcsin\frac{\Dl(\lm)}{2}\bigg|_{\lm_{k}^+}^{\lm_{k+1}^-}
=
-\ii \pi.
\]
Here we used that $\Dl(\lm_{k}^{+}) = 2(-1)^{k}$ and $\Dl(\lm_{k+1}^{-}) = 2(-1)^{k+1}$, so the arcsine changes from $(-1)^{k}\pi/2$ to $(-1)^{k+1}\pi/2$. Consequently, $F(\lm_{n}^{\pm}) = -\ii n\pi$ and $F-l_{n} \equiv -\ii n\pi$. Finally, $\pm (-1)^{n}\sqrt[c]{\Dl^{2}(\lm\pm \ii 0)-4} > 0$ for $\lm_{n}^{-} < \lm < \lm_{n}^{+}$, and hence
\begin{align*}
F(\lm \pm \ii 0)
&= \log\frac{1}{2}\left((-1)^{n}\Dl(\lm) \pm \sqrt[+]{\Dl^{2}(\lm)-4}\right) - \ii n\pi\\
&= \pm f_{n}(\lm) - \ii n\pi.
\end{align*}
(iv): At the zero potential, $\om(\lm,0) = -\ii\dlm$ which is evident from the product representation \eqref{om} of $\om$.\qed
\end{proof}
Given $\ph\in W$ we can integrate by parts in \eqref{Jn-om} to obtain
\[
J_{n,k} = \frac{1}{k\pi}\int_{\Gm_n}\lm^{k}\om
= -\frac{1}{\pi}\int_{\Gm_n} \lm^{k-1}F(\lm)\,\dlm.
\]
Clearly, $J_{n,k}$ vanishes if $\gm_{n}=0$ in view of the analyticity of the integrand. Further, provided $\ph$ is of real type, then we may shrink the contour of integration to $[\lm_{n}^{-},\lm_{n}^{+}]$, and use the properties of $F$ to the effect that
\begin{align}
\label{Jn-fn}
J_{n,k} = \frac{2}{\pi} \int_{\lm_{n}^{-}}^{\lm_{n}^{+}} \lm^{k-1} f_{n}(\lm) \,\dlm.
\end{align}
Thus on $L^{2}_{r}$ all actions are real and those on odd levels are nonnegative. Moreover, by the mean value theorem,
\begin{align}
\label{zt-In-Jn}
J_{n,2m+1} = \zt_{n,m}^{2m} I_{n},
\end{align}
for some $\zt_{n,m}\in [\lm_{n}^{-},\lm_{n}^{+}]$. Recall that $\lm_{n}^{\pm}\sim n\pi$ hence for any $m\ge 0$ we have $\zt_{n,m}^{2m} \sim (n\pi)^{2m}$ asymptotically as $\abs{n}\to\infty$. A quantitative estimate of the high level actions $J_{n,2m+1}$ in terms of the actions $I_{n}$ will be obtained in the next section.
Finally, consider the case of a potential with only finitely many open gaps. Then $F$ is analytic outside some sufficiently large circle and thus admits a Laurent expansion at infinity. The coefficients of this expansion turn out to be the Hamiltonians of the \nls-hierarchy.
\begin{lemma}
\label{F-asy}
For any finite gap potential $\ph\in W$ there exists $\Lm > 0$ such that
\[
F(\lm,\ph)
= -\ii \lm - \frac{\Hc_{1}(\ph)}{2\ii \lm} + \sum_{n\ge 2} \frac{\Hc_{n}(\ph)}{(2\ii\lm)^{n}},
\qquad \abs{\lm} > \Lm.
\]
\end{lemma}
\begin{proof}
We deduce the claim from the asymptotic expansion of $\cosh^{-1}(\Dl/2)$ along the real axis -- see \cite{Grebert:2014iq,Kappeler:4WN-jiH9}. By basic estimates for the discriminant,
\[
\Dl(\ii \tau,\ph) = 2\cosh\tau + o(\e^{\tau}),\qquad \tau\to\infty.
\]
Hence, on an open neighbourhood $U$ of $\ii[\tau_{0},\infty)$ with $\tau_{0} > 0$ sufficiently large, $\cosh^{-1}(\Dl(\lm,\ph)/2)$ is well defined. Here $\cosh^{-1}$ denotes the principal branch of the inverse of $\cosh$, which is defined on $\C\setminus(-\infty,1)$ and real valued on $[1,\infty)$. On this neighbourhood $U$, the $\lm$-derivatives of $\cosh^{-1}(\Dl/2)$ and $F$ coincide except for possibly the sign of
\[
\Re \sqrt[c]{\Dl^{2}(\ii\tau)-4},\qquad \tau\ge \tau_{0}.
\]
This sign is locally constant in $\ph$ provided $\tau\ge \tau_{0}$ and, as the straight line $\setdef{t\ph}{0\le t\le 1}$ is compact in $L^{2}_{c}$, it can be determined by deforming $\ph$ to the zero potential. With
\[
\sqrt[c]{\Dl^{2}(\ii\tau,0)-4} = 2\ii\prod_{m\in\Z} \frac{\vs_{m}(\ii\tau,0)}{\pi_{m}}
= 2\ii\prod_{m\in\Z} \frac{m\pi - \ii\tau}{\pi_{m}} = 2\sinh \tau,
\]
we conclude the sign is positive on $U$, and consequently
\[
\cosh^{-1}\frac{\Dl(\lm,\ph)}{2} = F(\lm,\ph) + c(\ph),\qquad \lm\in U,
\]
with an analytic function $c\colon W\to \C$. For $\ph\in W$ fixed, the right hand side is analytic on $\C\setminus\bigcup_{\gm_{n}\neq 0}[\lm_{n}^{-},\lm_{n}^{+}]$ and continuous on $\C\setminus\bigcup_{\gm_{n}\neq 0}(\lm_{n}^{-},\lm_{n}^{+})$, hence the left hand side extends uniquely from $U$ onto the same domain. Moreover, $F$ vanishes at $\lm_{0}^{\pm}$, while
\[
\cosh^{-1}\frac{\Dl(\lm_{0}^{\pm},\ph)}{2} \in \setdef{\ii m\pi}{m\in\Z}.
\]
Thus $c(\ph) = \ii m\pi$, and since $c$ is continuous, this $m$ is fixed on all of $ W$. Evaluating at the zero potential reveals $\cosh^{-1}(\Dl(\lm,0)/2) = -\ii \lm = F(\lm,0)$ hence $c \equiv 0$.
If $\ph$ is a finite gap potential, then $\cosh^{-1}(\Dl/2) = F$ is analytic outside a sufficiently large circle enclosing all open gaps, and has the following asymptotic expansion along the real axis \cite{Kappeler:4WN-jiH9}
\[
\cosh^{-1}\frac{\Dl(\lm,\ph)}{2} = -\ii \lm - \frac{\Hc_{1}(\ph)}{2\ii \lm} + \sum_{n\ge 2} \frac{\Hc_{n}(\ph)}{(2\ii\lm)^{n}},
\qquad \lm \to \pm\infty.
\]
This proves the claim.\qed
\end{proof}
\section{Localising the Zakharov-Shabat Spectrum}
\label{s:trap-sp}
The goal of this section is to provide a sufficiently accurate localisation of the spectrum of the Zakharov-Shabat operator
\[
L(\ph) =
\mat[\bigg]{\,\ii & \\ & -\ii\,}
\frac{\ddd}{\dx} +
\mat[\bigg]{ & \phm \\ \php & },
\]
allowing us to quantify the asymptotic relation $J_{n,2m+1} \sim (n\pi)^{2m} I_{n}$. Since one can translate the spectrum of $\ph$ without changing the $L^2$-norm, one cannot obtain a uniform localisation on bounded subsets of $L^{2}_{c}$. Instead, we have to impose some regularity on $\ph$.
\begin{theorem}
\label{ev-trap}
Suppose $\ph\in H_c^1$. Then for each $n$ with $\lin{n}\ge 8\n{\ph}_1^2$,
\[
\abs{\lm_{n}^{\pm}-n\pi} \le
\frac{\n{\ph}_{1}^{2}}{\lin{n}}
+ \frac{\sqrt{2}\n{\ph}_{1}}{\lin{2n}} \le \pi/5,
\]
while the remaining eigenvalues are contained in the box
\[
\setdef*{\lm\in\C}{\abs{\Re \lm} \le (8\n{\ph}_1^2-1/2)\pi,\quad
\abs{\Im \lm} \le \n{\ph}_{1}}.
\]
\end{theorem}
\emph{Remark.}
In \cite{Li:1994td} Li \& McLaughlin obtained a localisation for $\ph$ in $H^{1}_{c}$ where the bound on the threshold of $\lin{n}$ is exponential in the norm of $\ph$. With a focus on lowering the regularity assumptions on $\ph$ rather than improving the threshold for $\ph$ smooth, this result was gradually improved by several authors -- see e.g. Mityagin~\cite{Mityagin:2004wn} and the references therein. The novelty of Theorem~\ref{ev-trap} consists in providing a threshold for $\lin{n}$ which is quadratic in the norm of $\ph$.
The proof is based on a \emph{Lyapunov-Schmidt decomposition} introduced by Kappeler \& Mityagin~\cite{Kappeler:1999er} -- see also \cite{Grebert:1998cz,Grebert:2001vm}: For the zero potential each $n\pi$, $n\in\Z$, is a double eigenvalue of $L$ with eigenfunctions $e_n^+ \defl (0,\e^{\ii n\pi x})$ and $e_n^- \defl (\e^{-\ii n\pi x},0)$. Thus, for a nonzero potential, provided $\abs{n}$ is sufficiently large, we expect exactly two eigenvalues close to $n\pi$ whose eigenfunctions are close to the linear span of $e_{n}^{+}$ and $e_{n}^{-}$. This suggests separating these modes from the others by a Lyapunov-Schmidt reduction.
To set the stage, we cover the complex plane with the closed strips
\[
\mathfrak{U}_n \defl \setdef{\lm}{\abs{\Re \lm-n\pi}\le \pi/2},
\]
and consider the Hilbert space of complex 2-periodic $L^2$-functions
\[
L^{2}_{*}
= \Pc_n\oplus \Qc_n
= \operatorname{sp}\setd{e_n^+,e_n^-} \oplus
\ob{\operatorname{sp}}\setdef{e_k^+,e_k^-}{k\neq n}.
\]
The orthogonal projections onto $\Pc_n$ and $\Qc_n$ are denoted by $P_n$ and $Q_n$, respectively.
To proceed, write the eigenvalue equation $Lf = \lm f$ in the form
\[
A_\lm f = \Phi f,
\]
with the unbounded operators
\[
A_\lm \defl \lm - \mat[\bigg]{\,\ii & \\ & -\ii\,}\ddx,
\qquad \Phi \defl \mat[\bigg]{ & \phm\\ \php & }.
\]
Since $A_\lm$ leaves the spaces $\Pc_n$ and $\Qc_n$ invariant, by writing
\[
f = u + v = P_nf + Q_nf,
\]
we can decompose the equation $A_\lm f = \Phi f$ into the two equations
\[
A_\lm u = P_n\Phi(u+v),\qquad
A_\lm v = Q_n \Phi(u+v),
\]
called the $P$- and the $Q$-equation, respectively.
We first consider the $Q$-equation on $\mathfrak{U}_n$. One checks that for $m\neq n$
\[
\min_{\lm\in \mathfrak{U}_n} \abs{\lm-m\pi} \ge \abs{n-m}\pi - \frac{\pi}{2} \ge \abs{n-m} \ge 1,
\]
hence it follows with $A_\lm e_m^\pm = (\lm-m\pi)e_m^\pm$ that the restriction of $A_\lm$ to $\Qc_n$ is boundedly invertible for all $\lm\in \mathfrak{U}_n$. Therefore, multiplying the $Q$-equation from the left by $\Phi A_\lm^{-1}$ gives
\[
\Phi v = T_n\Phi(u+v),
\]
with $T_n \defl \Phi A_\lm^{-1} Q_n$. The latter identity may be written as
\[
(I - T_n)\Phi v = T_n\Phi u,
\]
hence solving the $Q$-equation amounts to inverting $(I-T_{n})$.
We consider operator norms induced by \emph{shifted weighted norms} \cite{Kappeler:2001hsa,Poschel:2011iua}. Let $H_{i}^{w}$ denote the space of complex 2-periodic functions $u=\sum_{m\in\Z} u_{m}\e_{m}$ equipped with the $i$-shifted $H^w$-norm given by
\[
\n{u}_{w;i}^2
\defl \n{u\e_i}_{w}^2
= \sum_{m\in\Z} w_{m+i}^{2} \abs{u_m}^2,
\qquad \e_{m}(x) \defl \e^{\ii m\pi x}.
\]
On the space $H_{i,c}^w \defl H_{i}^{w}\times H_{i}^{w}$ of 2-periodic functions with values in $\C^{2}$,
\[
f=(f_-,f_+) = \sum_{n\in\Z} (f^{-}_{n}e_{n}^{-} + f^{+}_{n}e_{n}^{+}) =
\sum_{n\in\Z} (f^{-}_{n}\e_{-n},f^{+}_{n}\e_{n}),
\]
the $i$-shifted norm is defined by
\[
\n{f}_{w;i}^2
\defl \n{f_-}_{w;-i}^2 + \n{f_+}_{w;i}^2
= \sum_{m\in\Z} w_{m+i}^{2} \paren[\big]{\abs{f_m^-}^2 + \abs{f_m^+}^2}.
\]
\begin{lemma}
\label{Tn-est}
If $\ph\in H_c^w$ with $w\in\Mc$, then for any $n,i\in\Z$ and any $\lm\in \mathfrak{U}_n$,
\[
T_n = \Phi A_\lm^{-1}Q_n\colon H_{-i,c}^{w}\to H_{i,c}^{w}
\]
is bounded and satisfies the estimate
\[
\n{T_n f}_{w;i} \le 2\n{\ph}_w\n{f}_{w;- i}.
\]
\end{lemma}
\begin{proof}
Write $T_n f = \Phi g$ with $g=A_\lm^{-1}Q_nf$. Since the restriction of $A_\lm$ to $\Qc_n$ is boundedly invertible, the function
\[
g
= A_\lm^{-1}Q_nf
= \sum_{m\neq n} \biggl(\frac{f_m^-}{\lm-m\pi}e_m^- + \frac{f_m^+}{\lm-m\pi}e_m^+ \biggr)
= (g_-,g_+)
\]
is well defined. By Hölder's inequality we obtain for the weighted $\ell^1$-norm
\[
\n{g_+\e_{-i}}_{\ell^1_w}
=
\sum_{m\neq n} \frac{w_{m-i}\abs{f_m^+}}{\abs{\lm-m\pi}}
\le \biggl(\sum_{m\neq n} \frac{1}{\abs{n-m}^2}\biggr)^{1/2}
\n{f_+}_{w;-i}
< 2\n{f_+}_{w;-i},
\]
uniformly for $\lm\in \mathfrak{U}_n$; similarly $\n{g_-\e_{i}}_{\ell^1_w} \le 2\n{f_-}_{w;i}$. Since
\[
\n{T_nf}_{w;i}^2
= \n{\Phi g}_{w;i}^2
= \n{\phm g_+\e_{-i}}_w^2+\n{\php g_-\e_i}_w^2,
\]
we can use standard inequalities for the convolution of sequences to obtain
\[
\n{T_nf}_{w;i}^2
\le \n{\ph}^2_w\left(\n{g_+\e_{-i}}_{\ell^1_w}^2+\n{g_-\e_i}_{\ell^1_w}^2\right)
\le 4\n{\ph}_w^2\n{f}_{w;-i}^2.\qed
\]
\end{proof}
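The constant $2$ in Lemma~\ref{Tn-est} stems from the elementary sum $\sum_{m\neq n} \abs{n-m}^{-2} = \pi^{2}/3$, whose square root is about $1.81 < 2$. As a quick numerical sanity check (not part of the proof; the helper name is ours):

```python
import math

# Sanity check: the Cauchy-Schwarz step uses
# sum_{m != n} 1/|n - m|^2 = 2 * sum_{k >= 1} 1/k^2 = pi^2/3;
# the shift by n drops out after substituting k = |n - m|.
def off_diagonal_sum(terms: int = 10**6) -> float:
    return 2.0 * sum(1.0 / k**2 for k in range(1, terms + 1))

s = off_diagonal_sum()
assert abs(s - math.pi**2 / 3) < 1e-5
assert math.sqrt(s) < 2.0   # the constant 2 of the lemma
```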
Note that $T_{n}f$ is estimated in a shifted $H^{w}$-norm whose shift has the opposite sign of the shift in the norm of $f$. This fact will be crucial in the following. In particular, $T_{n}^{2}$ is bounded on $H_{i,c}^{w}$, and it turns out that $\n{T_n^2}_{w;n}=o(1)$ as $\abs{n}\to\infty$. Using a Neumann series, we then obtain the bounded invertibility of $(I-T_n)$ for $\abs{n}$ sufficiently large, which solves the $Q$-equation. To exploit the regularity assumption $\ph\in H^{1}_{c}$ of Theorem~\ref{ev-trap}, we restrict ourselves to the subclass $\Mc^{1}$ of weights which contain at least a linearly growing factor -- see also \cite{Djakov:2006ba,Grebert:2001vm} for weights with factors $\lin{n}^{\dl}$, $0 < \dl < 1/2$.
\begin{lemma}
\label{Tn2-est}
If $\ph\in H_c^w$ with $w\in\Mc^{1}$, then for any $n\in\Z$ and any $\lm\in \mathfrak{U}_n$,
\[
\n{T_n^2}_{w;n} \le \frac{4}{\lin{n}}\n{\ph}_w^2.
\]
\end{lemma}
\begin{proof}
As in the preceding lemma, write $T_n^2 f = \Phi g$ with
\[
g \defl
A_\lm^{-1}Q_n \Phi A_\lm^{-1} Q_n f.
\]
A straightforward computation yields
\[
g = \sum_{k,l \neq n}\p*
{
\frac{\ph_{k+l}^-}{\lm-k\pi} \frac{f^+_l}{\lm-l\pi}e_k^-
+
\frac{\ph_{k+l}^+} {\lm-k\pi} \frac{f^-_l}{\lm-l\pi}e_k^+
}
= (g_-,g_+),
\]
and our aim is to estimate the weighted $\ell^1$-norms $\n{g_+\e_{-n}}_{\ell^1_w}$ and $\n{g_-\e_{n}}_{\ell^1_w}$. By assumption $w=\lin{n}\cdot v$ with some submultiplicative weight $v$, hence
\[
w_{k-n} \le \frac{\lin{k-n}}{\lin{k+l}\lin{l+n}}w_{k+l}w_{-l-n},\qquad k,l\in\Z.
\]
Consequently, for any $\lm\in \mathfrak{U}_{n}$
\[
\n{g_+\e_{-n}}_{\ell^1_w} \le \sum_{k,l\neq n} \frac{\lin{k-n}}{\lin{k+l}\abs{n-k}\lin{l+n}\abs{n-l}} w_{k+l}\abs{\ph^{+}_{k+l}}w_{-l-n}\abs{f_{l}^{-}},
\]
and with Cauchy-Schwarz and Young's inequality for convolutions,
\[
\phantom{\n{g_+\e_{-n}}_{\ell^1_w}} \le \p[\Bigg]
{
\sum_{k,l\neq n}
\frac{\lin{k-n}^{2}}{\lin{k+l}^2\abs{n-k}^{2}\lin{l+n}^2\abs{n-l}^2}
}^{1/2}
\n{\ph}_w\n{f_-}_{w;-n}.
\]
One further checks that
\[
\sum_{k\neq n} \frac{\lin{k-n}^{2}}{\lin{k+l}^2\abs{n-k}^{2}} \le 32/5,\qquad
\sum_{l\neq n} \frac{1}{\lin{l+n}^2\abs{n-l}^2} \le \frac{5/2}{\lin{n}^{2}}.
\]
Hence, we obtain for $\n{g_+\e_{-n}}_{\ell^1_w}$ and similarly for $\n{g_-\e_{n}}_{\ell^1_w}$,
\[
\n{g_+\e_{-n}}_{\ell_w^1}
\le \frac{4}{\lin{n}}\n{\ph}_w\n{f_-}_{w;-n},\quad
\n{g_-\e_{n}}_{\ell_w^1}
\le \frac{4}{\lin{n}}\n{\ph}_w\n{f_+}_{w;n}.
\]
Finally, with $\n{T_n^2 f}_{w;n} = \n{\Phi g}_{w;n}$, this gives
\begin{align*}
\n{T_n^2f}_{w;n}^2
\le \n{\ph}_w^2\p*{\n{g_+\e_{-n}}_{\ell_w^1}^2 + \n{g_-\e_{n}}_{\ell_w^1}^2}
\le \frac{16}{\lin{n}^2} \n{\ph}_w^4 \n{f}_{w;n}^2.\qed
\end{align*}
\end{proof}
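The two elementary sums quoted in the proof can be checked numerically. The following sketch assumes that the bracket $\lin{x}$ stands for $1+\abs{x}$; after the substitutions $j = k-n$, $s = n+l$ in the first sum and $j = n-l$ in the second, each sum depends on a single integer parameter, so a truncated computation over a range of parameters is feasible:

```python
# Truncated numerical check of the two sums above, assuming <x> = 1 + |x|.
# first_sum: sum over k != n of <k-n>^2 / (<k+l>^2 |n-k|^2), with s = n + l.
# second_sum: sum over l != n of 1 / (<l+n>^2 |n-l|^2), with j = n - l.
def first_sum(s: int, J: int = 2000) -> float:
    return sum((1 + abs(j))**2 / (j**2 * (1 + abs(j + s))**2)
               for j in range(-J, J + 1) if j != 0)

def second_sum(n: int, J: int = 2000) -> float:
    return sum(1.0 / ((1 + abs(2 * n - j))**2 * j**2)
               for j in range(-J, J + 1) if j != 0)

assert max(first_sum(s) for s in range(-40, 41)) <= 32 / 5
assert all(second_sum(n) <= (5 / 2) / (1 + n)**2 for n in range(40))
```

The maximum of the first sum is attained near $s = \pm 1$ (roughly $5.96$), so the constant $32/5$ is not far from optimal.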
Consequently, $T_n^2$ is a $1/2$-contraction if $\lin{n}\ge 8\n{\ph}_w^2$. In view of
\[
\hat{T}_n \defl (I-T_n)^{-1} = (I+T_n)(I-T_n^2)^{-1},
\]
one then finds a unique solution
\[
\Phi v = \hat{T}_nT_n \Phi u
\]
of the $Q$-equation. In turn, as $I+\hat{T}_{n}T_{n} = \hat{T}_{n}$, the $P$-equation yields
\[
A_\lm u
=
P_n (I + \hat{T}_nT_n)\Phi u
=
P_n\hat{T}_n\Phi u.
\]
Writing the latter as
\[
S_n u = 0,\qquad S_n \colon \Pc_{n}\to \Pc_{n},\quad u \mapsto (A_\lm - P_n\hat{T}_n\Phi)u,
\]
we immediately obtain the following relationship.
\begin{lemma}
For $\ph\in H_{c}^{1}$ and $\lin{n} \ge 8\n{\ph}_{1}^{2}$, a complex number $\lm\in \mathfrak{U}_{n}$ is an eigenvalue of $L$ if and only if the determinant of $S_{n}$ vanishes.~
\end{lemma}
\begin{proof}
Suppose $Lf = \lm f$, then by the preceding discussion $S_{n}u = 0$.
Conversely, define for any $u\in \Pc_{n}$,
\[
v = A_{\lm}^{-1}Q_{n}\hat{T}_{n}\Phi u \in \Qc_{n}.
\]
Then $\Phi v = T_{n}\hat{T}_{n}\Phi u$ is well defined, and it follows with $\hat{T}_{n} = I + T_{n}\hat{T}_{n}$ that
\[
A_{\lm}v = Q_{n}\hat{T}_{n}\Phi u = Q_{n}\Phi u + Q_{n}T_{n}\hat{T}_{n}\Phi u = Q_{n}\Phi(u+v),
\]
so the $Q$-equation is automatically satisfied. Moreover, if $S_{n}u = 0$, then
\[
A_{\lm}u = P_{n}\hat{T}_{n}\Phi u = P_{n}(I + T_{n}\hat{T}_{n})\Phi u = P_{n}\Phi(u+v).
\]
Hence also the $P$-equation is satisfied, and $\lm$ is an eigenvalue of $L$ with eigenfunction $f = u+v$.\qed
\end{proof}
Recall that $P_n$ is the orthogonal projection onto the two-dimensional space $\Pc_n$. The matrix representation of an operator $B$ on $\Pc_n$ is given by
\[
\p*{\lin{Be_n^j,e_n^i}}_{i,j\in\{+,-\}}.
\]
Therefore, we find for $S_n$ the representation
\begin{align*}
A_\lm
=
\mat[\bigg]{
\lm - n\pi \\
& \lm - n\pi
},
\qquad
P_n\hat{T}_n\Phi =
\mat[\bigg]{
a_n^+ & b_n^+\\
b_n^- & a_n^-
},
\end{align*}
with the coefficients of the latter matrix given by
\[
a_n^\pm \defl \lin{\hat{T}_n\Phi e_n^\pm,e_n^\pm},\qquad
b_n^\pm \defl \lin{\hat{T}_n\Phi e_n^\mp,e_n^\pm}.
\]
We point out that these coefficients depend on $\lm$ and $\ph$. It has been observed in \cite{Djakov:2006ba,Kappeler:2009kp} that these coefficients reflect certain symmetries of the Fourier coefficients of $\ph$. We only need the fact that $a_{n}^{+}$ and $a_{n}^{-}$ coincide.
\begin{lemma}
\label{coeff-sym}
Suppose $\ph\in H_c^1$ and $\lin{n}\ge 8\n{\ph}_1^2$, then for any $\lm\in \mathfrak{U}_n$,
\[
a_n^+(\lm) = a_n^-(\lm) \equiv a_{n}(\lm).
\]
\end{lemma}
\begin{proof}
Recall that $T_n = \Phi A_\lm^{-1} Q_n$, and denote by $\ob{B}u \defl \ob{B\ob{u}}$ the complex conjugation of operators.
From evaluating the bounded diagonal operators $A_\lm^{-1}$ and $Q_n$ at $e_m^\pm$, and using the identity $\ob{e_m^\pm} = Pe_m^\mp$, we conclude
\[
(A_\lm^{-1})^*
= P\ob{A_\lm^{-1}}P = A_{\ob\lm}^{-1},\quad
Q_n^*
= P\ob{Q_n}P = Q_n,\qquad\quad
P \defl \Bigl( \begin{smallmatrix}
&\; 1\\ 1&
\end{smallmatrix} \Bigr).
\]
Since $A_\lm^{-1}$ leaves $\Qc_n$ invariant, and $P^2 = I$, we find $(A_\lm^{-1}Q_n)^* = P\overline{A_\lm^{-1}Q_n}P$. With $\Phi^* = P\ob{\Phi}P$ this gives
\[
(T_n\Phi)^* = \Phi^* (A_\lm^{-1} Q_n)^* \Phi^* = P \ob{T_n\Phi}P.
\]
Inspecting the Neumann expansion of $\hat{T}_n\Phi$ yields $(\hat{T}_n\Phi)^* = P\overline{\hat{T}_n\Phi}P$, thus
\begin{align*}
a_n^+
&= \lin{\hat{T}_n\Phi e_n^+,e_n^+}
= \lin{e_n^+,(\hat{T}_n\Phi)^*e_n^+}\\
&= \lin{Pe_n^+,\overline{\hat{T}_n\Phi}Pe_n^+}
= \lin{\hat{T}_n\Phi e_n^-,e_n^-} = a_n^-.\qed
\end{align*}
\end{proof}
It follows that $S_n$ may be written as
\[
S_n(\lm) =
\mat[\bigg]{
\lm-n\pi - a_n & -b_n^+\\
-b_n^- & \lm-n\pi - a_n
}.
\]
As $T_n$ and $\Phi$ are anti-diagonal while $I$ and $T_n^2$ are diagonal, all even terms $\lin{T_{n}^{2k}\Phi e_{n}^{+},e_{n}^{+}}$ in the expansion of $a_{n}$ vanish. Using $\hat{T}_n = (I+T_n)(I-T_n^2)^{-1}$ we thus conclude
\[
a_n = \lin{\hat{T}_n\Phi e_n^+,e_n^+} = \lin{T_n(I-T_n^2)^{-1}\Phi e_n^+,e_n^+}.
\]
On the other hand, all odd terms in the expansion of $b_{n}$ vanish, so that
\[
b_n^\pm-\ph_{2n}^\pm
= \lin{(\hat{T}_n-I)\Phi e_n^\mp,e_n^\pm}
= \lin{T_n^2(I-T_n^2)^{-1}\Phi e_n^\mp,e_n^\pm}.
\]
We introduce the following notion for the sup-norm of a complex valued function on a domain $U\subset\C$,
\[
\abs{f}_U \defl \sup_{\lm\in U} \abs{f(\lm)}.
\]
\begin{lemma}
\label{coeff-bounds}
If $\ph\in H_c^w$ with $w\in\Mc^{1}$, then for any $\lin{n}\ge 8\n{\ph}_w^2$ the coefficients $a_n$ and $b_n^\pm$ are analytic functions on $\mathfrak{U}_n$ with bounds
\[
\abs{a_n}_{\mathfrak{U}_n} \le \frac{1}{\lin{n}}\n{\ph}_w^2,
\qquad
w_{2n}\abs{b_n^\pm - \ph^\pm_{2n}}_{\mathfrak{U}_n} \le \frac{8}{\lin{n}}\n{\ph}_w^2\n{\ph_{\!\pm}}_{w}.
\]
\end{lemma}
\begin{proof}
Since $\n{T_n^2}_{w;n}\le 1/2$, the series expansions of $a_n$ and $b_n^\pm$ converge uniformly on $\mathfrak{U}_n$ to analytic functions.
Let $u=(I-T_n^2)^{-1}\Phi e_n^+$, then
\[
\n{u}_{w;n}
\le \n{(I-T_n^2)^{-1}}_{w;n}\n{\Phi e_n^+}_{w;n}
\le 2\n{\phm\e_{n}}_{w;-n} = 2\n{\phm}_{w},
\]
and with the series expansion $u=\sum_{m\in\Z} u_m e_{m}^-$ we may write
\[
a_n
= \lin{T_nu,e_n^+}
= \sum_{m\neq n} \frac{\ph^-_{n+m}}{\lm-m\pi}u_m.
\]
As $\abs{n-m}\le \abs{n}$ implies $\abs{n+m}\ge 2\abs{n}-\abs{n-m}\ge \abs{n}$, this gives
\begin{align*}
\abs{a_n}_{\mathfrak{U}_{n}}
&\le
\sum_{m\neq n}
\frac{1}{\lin{n+m}^{2}\abs{n-m}} w_{n+m}\abs{\ph_{n+m}^-} \cdot w_{n+m}\abs{u_m}\\
&\le \frac{1}{\lin{n}}\n{\phm}_w\n{u}_{w;n}
\le \frac{2}{\lin{n}}\n{\phm}_w^2.
\end{align*}
Using the representation $a_{n} = \lin{T_{n}(I-T_{n}^{2})^{-1}\Phi e_{n}^{-},e_{n}^{-}}$ we similarly obtain
\[
\abs{a_n}_{\mathfrak{U}_{n}} \le \frac{2}{\lin{n}}\n{\php}_w^2.
\]
Since $2\min(x,y)\le x+y$, combining both estimates gives the first bound. To obtain the second bound, we note that $b_n^- -\ph_{2n}^- = \lin{T_n^2u,e_{n}^-}$. Since $\lin{f,e_n^{-}} = \lin{f\e_{-n},e_{2n}^{-}}$ for any function $f\in L^{2}_{c}$, we conclude
\[
w_{2n}\abs{\lin{T_n^2u,e_{n}^-}}
\le \n{T_n^2u}_{w;n}
\le \frac{8}{\lin{n}}\n{\ph}_w^2\n{\phm}_{w}.
\]
The proof for $b_n^+$ is the same.\qed
\end{proof}
In consequence, the determinant of $S_n$
\[
\det S_n = (\lm-n\pi - a_n)^2 - b_n^+b_n^-
\]
is analytic on $\mathfrak{U}_{n}$ and close to $(\lm-n\pi)^2$ provided $\lin{n}$ is sufficiently large.
\begin{lemma}
\label{Sn-roots}
Let $\ph\in H^1_c$, then for any $\lin{n}\ge 8\n{\ph}_1^2$, the determinant of $S_n$ has exactly two complex roots $\xi_+$, $\xi_-$ in $\mathfrak{U}_n$, which are contained in the disc
\[
D_n \defl \setd[\bigg]{\abs{\lm-n\pi} \le
\frac{\n{\ph}_{1}^{2}}{\lin{n}}
+ \frac{\sqrt{2}\n{\ph}_{1}}{\lin{2n}}}
\subset \setd[\bigg]{\abs{\lm-n\pi}\le \frac{\pi}{5}},
\]
and satisfy
\[
\abs{\xi_{+}-\xi_{-}}^{2} \le 6\abs{b_{n}^{+}b_{n}^{-}}_{\mathfrak{U}_{n}}.
\]
\end{lemma}
\begin{proof}
The estimates of the preceding lemma give for $\lin{n}\ge 8\n{\ph}_{1}^{2}$
\[
\abs{a_{n}}_{\mathfrak{U}_{n}} \le \frac{\n{\ph}_{1}^{2}}{\lin{n}},\qquad
\lin{2n}^{2}\abs{b_{n}^{+}b_{n}^{-}}_{\mathfrak{U}_{n}} \le \left(1 + \frac{8\n{\ph}_{1}^{2}}{\lin{n}}\right)^{2}
\n{\php}_{1}\n{\phm}_{1},
\]
where we used $w_{2n}\abs{b_{n}^{\pm}} \le \n{\ph}_{w} + w_{2n}\abs{b_{n}^{\pm}-\ph_{2n}^{\pm}}$.
Therefore,
\[
\abs{a_n}_{\mathfrak{U}_n} + \abs{b_n^{+}b_{n}^{-}}_{\mathfrak{U}_n}^{1/2}
\le \inf_{\lm\in\,\mathfrak{U}_{n}\setminus D_{n}}\abs{\lm-n\pi}
= \frac{\n{\ph}_{1}^{2}}{\lin{n}}
+ \sqrt{2}\frac{\n{\ph}_{1}}{\lin{2n}}
\le \pi/5.
\]
It follows from Rouché's theorem that the function $h = \lm - n\pi - a_{n}$ has a single root in $D_n$, just as $\lm-n\pi$ does. Furthermore, $h^2$ and $\det S_n$ have the same number of roots in $D_n$, namely two when counted with multiplicity, while $\det S_n$ clearly has no root in $\mathfrak{U}_n\setminus D_n$.
To estimate the distance of the roots, we write $\det S_n=g_+g_-$ with
\[
g_{\pm} = \lambda-\pi n - a_n \mp \sigma_n, \qquad \sigma_n = \sqrt{b_n^+b_n^-},
\]
where the branch of the root is immaterial. Each root $\xi$ of $\det S_n$ is a root of either $g_+$ or $g_-$, and thus satisfies $\xi = \pi n + a_n(\xi) \pm \sigma_n(\xi)$.
Therefore,
\begin{align*}
\abs{\xi_+-\xi_-}
&\le \abs{a_n(\xi_+)-a_n(\xi_-)} + \abs{\sigma_n(\xi_+)\pm\sigma_n(\xi_-)}\\
&\le \abs{\partial_{\lm}a_{n}}_{D_{n}}\abs{\xi_+-\xi_-} + 2\abs{\sigma_n}_{\mathfrak{U}_n}.
\end{align*}
Since $\dist(D_{n},\partial \mathfrak{U}_{n}) \ge \pi/2 - \pi/5$, Cauchy's estimate gives
\[
\abs{\partial_\lambda a_n}_{D_n}
\le \frac{\abs{a_n}_{\mathfrak{U}_n}}{\dist(D_n, \partial \mathfrak{U}_n)}
\le \frac{1/8}{\pi/2-\pi/5} \le \frac{1}{6},
\]
hence $\abs{\xi_+-\xi_-} \le \frac{12}{5}\abs{\sigma_n}_{\mathfrak{U}_n}$, and the claim follows since $(12/5)^{2} \le 6$.\qed
\end{proof}
\begin{proof}[Proof of Theorem~\ref{ev-trap}.]
For each $\lin{n}\ge 8\n{\ph}_1^2$ Lemma~\ref{Sn-roots} applies, giving the two roots $\xi_+$ and $\xi_-$ of $\det S_n$ in $D_n\subset \mathfrak{U}_n$. As the strips $\mathfrak{U}_n$ cover the complex plane, and since $\lm_{n}^{\pm} \sim n\pi$ asymptotically as $n\to\pm\infty$ while there are no periodic eigenvalues in $\bigcup_{\lin{n} \ge 8\n{\ph}_{1}^{2}} (\mathfrak{U}_n\setminus D_{n})$, it follows by a standard counting argument that these roots are precisely the periodic eigenvalues $\lm_n^\pm$. In turn, the remaining eigenvalues have to be contained in the strip
\[
\setdef*{\lm\in\C}{\abs{\Re \lm} \le (8\n{\ph}_{1}^{2}-1/2)\pi}.
\]
To obtain the estimate for the imaginary part, suppose $f$ is an $L^{2}_{c}$-normalised eigenfunction for $\lm$, then
\[
2\ii\Im \lm = \lm-\ob{\lm} = \lin{Lf,f}-\lin{f,Lf} = \lin{(L-L^{*})f,f}.
\]
Further, using the $L^{\infty}$-estimate $\n{g}_{\infty} \le \sqrt{2}\n{g}_{1}$ for $g\in H^{1}$, we find
\[
\n{(L-L^{*})f}_{0} \le \sqrt{2}\n{\php-\ob{\phm}}_{1}\n{f}_{0} \le 2\n{\ph}_{1}.
\]
This completes the proof of the theorem.\qed
\end{proof}
Incidentally, we obtain the following estimate for the gap lengths, which we will use in section~\ref{s:act-west}.
\begin{proposition}
\label{gap-est}
Suppose $\ph\in H^{w}_{c}$ with $w\in\Mc^{1}$, then for any $\lin{N}\ge 8\n{\ph}_{w}^{2}$,
\[
\sum_{\abs{n}\ge N} w_{2n}^{2}\abs{\gm_{n}(\ph)}^{2}
\le 6\n{R_{N}\ph}_{w}^{2} + \frac{1152}{\lin{N}}\n{\ph}_{w}^{6},
\]
where $R_{N}\ph = \sum_{\abs{n}\ge N} (\ph_{2n}^{-}\e_{-2n},\ph_{2n}^{+}\e_{2n})$.
If, in addition, $\ph$ is in the complex neighbourhood $W$ of $L^{2}_{r}$, then
\[
\sum_{n\in\Z} w_{2n}^{2}\abs{\gm_{n}(\ph)}^{2}
\le 265\pi^{2}w^{2}[16\n{\ph}_{w}^{2}](1+\n{\ph}_{w}^{2})\n{\ph}_{w}^{2}.
\]
\end{proposition}
\begin{proof}
By Lemma~\ref{Sn-roots} we have for any $\lin{n} \ge 8\n{\ph}_{w}^{2}$ the estimate
\[
\abs{\gm_{n}}^{2}
= \abs{\lm_{n}^{+}-\lm_{n}^{-}}^{2}
\le 6\abs{b_{n}^{+}b_{n}^{-}}_{\mathfrak{U}_{n}}
\le 3\p{\abs{b_{n}^{+}}_{\mathfrak{U}_{n}}^{2} + \abs{b_{n}^{-}}_{\mathfrak{U}_{n}}^{2}}.
\]
Using $\abs{b_{n}^{\pm}}_{\mathfrak{U}_{n}} \le \abs{\ph_{2n}^{\pm}} + \abs{b_{n}^{\pm}-\ph_{2n}^{\pm}}_{\mathfrak{U}_{n}}$ we thus find for any $\lin{N}\ge 8\n{\ph}_{w}^{2}$,
\begin{align*}
&\frac{1}{6}\sum_{\abs{n}\ge N} w_{2n}^{2}\abs{\gm_{n}(\ph)}^{2}\\
&\qquad \le \sum_{\abs{n}\ge N} w_{2n}^{2}(\abs{\ph_{2n}^{+}}^{2} + \abs{\ph_{2n}^{-}}^{2}
+ \abs{b_{n}^{+}-\ph_{2n}^{+}}_{\mathfrak{U}_{n}}^{2}
+ \abs{b_{n}^{-}-\ph_{2n}^{-}}_{\mathfrak{U}_{n}}^{2}).
\end{align*}
Further by Lemma~\ref{coeff-bounds}, $w_{2n}^{2}\abs{\ph_{2n}^{\pm}-b_{n}^{\pm}}_{\mathfrak{U}_{n}}^{2} \le 64\lin{n}^{-2}\n{\ph}_{w}^{4}\n{\ph_{\pm}}_{w}^{2}$, hence
\[
\frac{1}{6}\sum_{\abs{n}\ge N} w_{2n}^{2}\abs{\gm_{n}(\ph)}^{2}
\le \n{R_{N}\ph}_{w}^{2} + 64\n{\ph}_{w}^{6}\sum_{\abs{n}\ge N} \frac{1}{\lin{n}^{2}},
\]
and the first claim follows with $\sum_{\abs{n}\ge N} 1/\lin{n}^{2} \le 3/\lin{N}$.
If additionally $\ph\in W$, then each gap is contained in its isolating neighbourhood $U_{n}$. These are disjoint complex discs centred on the real line, whose diameters for $\abs{n} < N$ sum up to at most $(2N-1)\pi$ by Theorem~\ref{ev-trap}. Therefore,
\[
\sum_{\abs{n} < N} w_{2n}^{2}\abs{\gm_{n}(\ph)}^{2}
\le w^{2}_{2N-2}\p[\Bigg]{\sum_{\abs{n} < N}\abs{\gm_{n}(\ph)}}^{2}\\
\le w^{2}_{2N-2}(2N-1)^{2}\pi^{2},
\]
and choosing $N+1 \ge 8\n{\ph}_{w}^{2} > N$ gives the second claim.\qed
\end{proof}
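The tail estimate $\sum_{\abs{n}\ge N} 1/\lin{n}^{2} \le 3/\lin{N}$ used in the proof can be verified numerically for small $N$; the following sketch again assumes that $\lin{n}$ stands for $1+\abs{n}$:

```python
# Numerical check of the tail estimate, assuming <n> = 1 + |n|:
# sum_{|n| >= N} 1/(1 + |n|)^2 <= 3/(1 + N).
def tail(N: int, J: int = 20000) -> float:
    t = 2.0 * sum(1.0 / (1 + n)**2 for n in range(max(N, 1), J + 1))
    return t + (1.0 if N == 0 else 0.0)   # account for the n = 0 term

assert all(tail(N) <= 3.0 / (1 + N) for N in range(60))
```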
If, for $\lin{n}\ge 8\n{\ph}_{w}^{2}$, we use
\[
w_{2n}\abs{b_{n}^{\pm}}
\le \n{\ph_{\pm}}_{w} + \frac{8}{\lin{n}}\n{\ph}_{w}^{2}\n{\ph_{\pm}}_{w}
\le 2\n{\ph_{\pm}}_{w},
\]
then we obtain the \emph{individual gap estimate}
\begin{align}
\label{i-gap}
w_{2n}\abs{\gm_{n}(\ph)} \le 4\n{\ph}_{w}.
\end{align}
\section{Estimating the Actions}
\label{s:act-est}
As an immediate corollary to the localisation obtained in the previous section, we obtain the following quantitative estimate of the high-level actions.
\begin{proposition}
\label{In-Jn-est}
If $\ph\in H_{r}^{1}$, then for $\lin{n}\ge 8\n{\ph}_1^2$ and $m\ge 0$,
\[
J_{n,2m+1} = \zeta_{n,m}^{2m} I_{n},\qquad
\abs{\zeta_{n,m}-n\pi} \le \frac{\n{\ph}_{1}^{2}}{\lin{n}} + \frac{\sqrt{2}\n{\ph}_{1}}{\lin{2n}}.
\]
In particular, if $n\neq 0$ and $\lin{n}\ge 8\n{\ph}_1^2$, then
\[
2^{-m}\lin{2n\pi}^{2m}I_{n} \le 4^{m}J_{n,2m+1} \le \lin{2n\pi}^{2m}I_{n},
\]
while the remaining actions for all $\lin{n} < 8\n{\ph}_{1}^{2}$ satisfy
\[
4^{m}\abs{J_{n,2m+1}} \le (16\pi)^{2m}\n{\ph}_1^{4m} I_n.
\]
\end{proposition}
\begin{proof}
Recall from \eqref{zt-In-Jn} that $J_{n,2m+1} = \zt_{n,m}^{2m}I_{n}$ with $\zt_{n,m}\in[\lm_{n}^{-},\lm_{n}^{+}]$. Provided $\lin{n}\ge8\n{\ph}_{1}^{2}$, the estimate of $\abs{\zt_{n,m}-n\pi}$ follows from Theorem~\ref{ev-trap}. If additionally $n\neq 0$, then $\lin{2n}\ge 3\lin{n}/2$ and hence $\abs{\zt_{n,m}-n\pi} \le 1/2$. In consequence,
\[
\frac{1}{\sqrt{2}}\lin{2n\pi} \le 2\abs{\zt_{n,m}} \le \lin{2n\pi},\qquad n\neq 0.
\]
On the other hand, if $\lin{n} < 8\n{\ph}_{1}^{2}$, then $\abs{\zt_{n,m}} \le 8\pi \n{\ph}_{1}^{2}$.\qed
\end{proof}
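The two-sided bound on $2\abs{\zt_{n,m}}$ obtained in the proof is elementary but nearly sharp for small $n$. Assuming $\lin{x} = 1+\abs{x}$, it can be confirmed numerically over the whole range $\abs{\zt_{n,m}} \in [n\pi-1/2,\, n\pi+1/2]$:

```python
import math

# Check of the two-sided bound, assuming <x> = 1 + |x|: for n >= 1 the
# extreme values of 2|zeta| over |zeta - n*pi| <= 1/2 are 2*n*pi -+ 1.
for n in range(1, 200):
    lo, hi = 2 * n * math.pi - 1, 2 * n * math.pi + 1
    bracket = 1 + 2 * n * math.pi            # <2*n*pi>
    assert bracket / math.sqrt(2) <= lo      # lower bound of the lemma
    assert hi <= bracket                     # upper bound of the lemma
```

For $n=1$ the lower bound reads $5.15 \le 5.28$, which shows how little room the argument leaves.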
In the sequel we use Proposition~\ref{In-Jn-est} to obtain an estimate of
\[
\n{I(\ph)}_{\ell^{1}_{2m}} = \sum_{n\in\Z} \lin{2n\pi}^{2m}\abs{I_{n}}
\]
in terms of $\sum_{n\in\Z} J_{n,2m+1}$ and a remainder depending solely on $\n{\ph}_{1}$. The trace formula and the polynomial structure of the Hamiltonians then allow us to obtain the first part of Theorem~\ref{act-sob-est}.
\begin{lemma}
\label{I-H}
For every $m\ge 1$,
\[
\n{I(\ph)}_{\ell^{1}_{2m}}
\le \lin{16\pi}^{2m}(1+\n{\ph}_{1})^{4m}\n{\ph}_{0}^2 + (-1)^{m+1}2^{m}\Hc_{2m+1}(\ph),
\]
uniformly for all $\ph\in H_r^{m}$.
\end{lemma}
\begin{proof}
Choose $N+1\ge 8\n{\ph}_{1}^2 > N$, then by Proposition~\ref{In-Jn-est}, the trace formula~\eqref{tf-k}, and the positivity of the actions
\[
\sum_{\abs{n}> N} \lin{2n\pi}^{2m} I_n
\le 8^{m}\sum_{n\in\Z} J_{n,2m+1}
= (-1)^{m+1}2^{m}\Hc_{2m+1}.
\]
On the other hand, by our choice of $N$ and the trace formula~\eqref{tf-1}
\[
\sum_{\abs{n} \le N} \lin{2n\pi}^{2m}I_n
\le \lin{2N\pi}^{2m}\sum_{n\in\Z} I_n
\le \lin{16\pi}^{2m}(1+\n{\ph}_{1})^{4m}\n{\ph}_{0}^2.\qed
\]
\end{proof}
We denote the two components of $\ph\in L^{2}_{r}$ by $\ph = (\psi,\ob{\psi})$, and to simplify notation write $\psi_{(m)} = \partial_{x}^{m} \psi$ such that
\[
\int_{\T} \abs{\psi_{(m)}}^{2}\,\dx = \frac{1}{2}\n{\ph_{(m)}}_{0}^{2}.
\]
We further note that on $H_{r}^{m}$,
\[
(-1)^{m+1}\Hc_{2m+1}(\ph)
=
\int_\T \paren*{\abs{\psi_{(m)}}^2 + p_{2m}(\psi,\ob\psi,\ldots,\psi_{(m-1)},\ob \psi_{(m-1)})}\,\dx,
\]
with $p_{2m}$ being a homogeneous polynomial of degree $2m+2$ when $\psi$, $\ob\psi$, and $\partial_x$ each count as one degree. Further, the degree of each monomial of $p_{2m}$ with respect to $\psi$ equals the degree with respect to $\ob\psi$ -- see Corollary~\ref{h-form} from the appendix.
Consequently, each monomial $\mathfrak{q}$ of $p_{2m}$ may be estimated by
\[
\abs{\mathfrak{q}} \le c_{\mathfrak{q}} \abs{\psi}^{\mu_{0}}\abs{\psi_{x}}^{\mu_{1}}\dotsm\abs{\psi_{(m-1)}}^{\mu_{m-1}},
\]
with some positive constant $c_{\mathfrak{q}}$ and integers $\mu_{0},\dotsc,\mu_{m-1}$. Since $\mathfrak{q}$ has degree $2m+2$ we have
\[
\sum_{0\le i\le m-1} (1+i)\mu_{i} = 2m+2,
\]
and since the degree with respect to $\psi$ and $\ob{\psi}$ is the same, $\sum_{0\le i\le m-1} \mu_{i}$ is an even integer. Denote by $\Ic_{2m+2}\subset \Z_{\ge0}^{m}$ the set of all multi-indices $\mu = (\mu_{i})_{0\le i\le m-1}$ satisfying the constraints
\begin{align}
\label{p-const}
\sum_{0\le i\le m-1} (1+i)\mu_{i} = 2m+2,\qquad \abs{\mu} \defl \sum_{0\le i\le m-1} \mu_{i}
\;\in\;2\Z.
\end{align}
Then, we obtain the estimate
\begin{align}
\abs{p_{2m}} \le \sum_{\mu\in\Ic_{2m+2}} c_{\mu} \abs{\mathfrak{q}_{\mu}},\qquad
\label{p-est}
\abs{\mathfrak{q}_{\mu}} = \abs{\psi}^{\mu_{0}}\abs{\psi_{x}}^{\mu_{1}}\dotsm\abs{\psi_{(m-1)}}^{\mu_{m-1}},
\end{align}
with positive constants $c_{\mu}$. This representation of $p_{2m}$ allows us to obtain detailed estimates of the Hamiltonians $\Hc_{2m+1}$.
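For small $m$ the index set $\Ic_{2m+2}$ can be enumerated directly from the constraints~\eqref{p-const}; the following sketch confirms, in particular, that every admissible multi-index has even total degree $\abs{\mu}\ge 4$, a fact used in the next lemma:

```python
from itertools import product

# Enumerate the index set I_{2m+2} for small m directly from the
# constraints sum (1+i) mu_i = 2m+2 and |mu| even, and confirm that
# every admissible multi-index has total degree |mu| >= 4.
def index_set(m: int):
    for mu in product(range(2 * m + 3), repeat=m):
        if (sum((1 + i) * mi for i, mi in enumerate(mu)) == 2 * m + 2
                and sum(mu) % 2 == 0):
            yield mu

for m in range(1, 6):
    degrees = [sum(mu) for mu in index_set(m)]
    assert degrees and min(degrees) >= 4
```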
\begin{lemma}
\label{pm-Hm-est}
For any $m\ge 1$ and any $\ep > 0$ there exists $C_{\ep,m}$ so that
\[
\int_{\T} \abs{p_{2m}}\,\dx
\le \ep\nn{\partial_{x}^{m}\psi}^{2} + C_{\ep,m}(1+\nn{\psi}^{4m})\nn{\psi}^{2}.
\]
In particular,
\[
\abs{\Hc_{2m+1}}
\le \p*{1+\ep}\nn{\partial_{x}^{m}\psi}^{2} + C_{\ep,m}(1+\nn{\psi}^{4m})\nn{\psi}^{2}.
\]
\end{lemma}
\begin{proof}
First note that $\abs{\mu} \ge 4$ for any $\mu\in\Ic_{2m+2}$. Indeed, under the constraint $\abs{\mu} = k$ for some fixed $k\ge 0$ the expression $\sum_{0\le i\le m-1} (1+i)\mu_{i}$ attains its maximum when $\mu_{m-1} = \abs{\mu}$ while all other coefficients vanish. In this case,
\[
\sum_{0\le i\le m-1} (1+i)\mu_{i} = m\abs{\mu},
\]
and the right hand side is strictly less than $2m+2$ if $\abs{\mu} \le 2$. Therefore, $\abs{\mu}$ is an even integer strictly larger than $2$.
As a consequence, for any $\mu\in\Ic_{2m+2}$, there either exist two distinct nonzero coefficients $\mu_{k}$, $\mu_{l}$ with $0\le k,l\le m-1$ or $\mu_{k} = \abs{\mu}\ge 4$ for some $0\le k\le m-1$ while all other coefficients vanish. In the first case, using Cauchy-Schwarz and the $L^{\infty}$-estimate, we obtain
\begin{align*}
\int_{\T} \abs{\mathfrak{q}_{\mu}} \,\dx
&\le \int \p*{\prod_{0\le i\neq k,l}^{m-1} \abs{\psi_{(i)}}^{\mu_{i}}}
\abs{\psi_{(k)}}^{\mu_{k}-1}
\abs{\psi_{(l)}}^{\mu_{l}-1}
\abs{\psi_{(k)}}
\abs{\psi_{(l)}} \,\dx\\
&\le \p*{\prod_{0\le i\neq k,l}^{m-1} \n{\psi_{(i)}}_{L^{\infty}}^{\mu_{i}}}
\n{\psi_{(k)}}_{L^{\infty}}^{\mu_{k}-1}
\n{\psi_{(l)}}_{L^{\infty}}^{\mu_{l}-1}
\nn{\psi_{(k)}}\nn{\psi_{(l)}}.
\end{align*}
It follows from the generalized Gagliardo-Nirenberg inequality that for any integer $0\le i\le m-1$
\begin{equation}
\label{int-ineq}
\n{\psi_{(i)}}_{L^{\infty}} \lesssim \n{\psi}_{m}^{\frac{i+1/2}{m}}\n{\psi}_{0}^{1-\frac{i+1/2}{m}},\qquad
\n{\psi_{(i)}}_{0} \lesssim \n{\psi}_{m}^{\frac{i}{m}}\n{\psi}_{0}^{1-\frac{i}{m}},
\end{equation}
where $a\lesssim b$ means $a\le c\, b$ with a constant $c$ which is independent of $\psi$ and depends only on the parameters $i$ and $m$. We thus obtain
\begin{equation}
\label{q-mu-est}
\int_{\T} \abs{\mathfrak{q}_{\mu}} \,\dx \lesssim
\n{\psi}_{m}^{\p*{\sum_{i=0}^{m-1} \frac{i+1/2}{m}\mu_{i}} - \frac{1}{m}}
\n{\psi}_{0}^{\p*{\sum_{i=0}^{m-1} \p*{1-\frac{i+1/2}{m}}\mu_{i}}
+\frac{1}{m}}.
\end{equation}
In the second case where $\mu_{k} = \abs{\mu}\ge 4$ while all other coefficients of $\mu$ vanish, we get
\begin{align*}
\int_{\T} \abs{\mathfrak{q}_{\mu}} \,\dx
= \int \abs{\psi_{(k)}}^{\mu_{k}-2}
\abs{\psi_{(k)}}^{2} \,\dx
\le \n{\psi_{(k)}}_{L^{\infty}}^{\mu_{k}-2}
\nn{\psi_{(k)}}^{2}.
\end{align*}
The interpolation inequality~\eqref{int-ineq} then yields estimate~\eqref{q-mu-est} also in this case.
Recall from~\eqref{p-const} that $\sum_{i=0}^{m-1} (i+1/2)\mu_{i} = 2m+2 - \abs{\mu} / 2$, hence
\begin{align*}
\sum_{i=0}^{m-1} \frac{i+1/2}{m}\mu_{i} - \frac{1}{m}
&= \frac{2m+2-\abs{\mu}/2}{m} - \frac{1}{m}
= 2 - \frac{\abs{\mu}-2}{2m} < 2,
\end{align*}
where we used $4 \le \abs{\mu} \le 2m+2$ in the last step. Similarly,
\begin{align*}
\sum_{i=0}^{m-1} \p*{1-\frac{i+1/2}{m}}\mu_{i} + \frac{1}{m}
= \abs{\mu} - 2 + \frac{\abs{\mu}-4}{2m}
= \p*{1+\frac{1}{2m}}(\abs{\mu}-2).
\end{align*}
Both identities together yield in view of~\eqref{q-mu-est}
\[
\int_{\T} \abs{\mathfrak{q}_{\mu}} \,\dx \lesssim
\n{\psi}_{m}^{2 - \frac{\abs{\mu}-2}{2m}}
\n{\psi}_{0}^{\p*{1+\frac{1}{2m}}(\abs{\mu}-2)}.
\]
Applying Young's inequality to the latter with
\[
p = \frac{2}{2 - \frac{\abs{\mu}-2}{2m}},\qquad
p' = \frac{4m}{\abs{\mu}-2},
\]
then finally gives for any $\ep > 0$
\begin{align*}
\int_{\T} \abs{\mathfrak{q}_{\mu}} \,\dx
&\le
\ep \n{\psi}_{m}^{\p*{2 - \frac{\abs{\mu}-2}{2m}}p}
+
C_{\ep,m}\n{\psi}_{0}^{\p*{1+\frac{1}{2m}}(\abs{\mu}-2)p'}\\
&= \ep \n{\psi}_{m}^{2} + C_{\ep,m}\n{\psi}_{0}^{4m + 2},
\end{align*}
where $C_{\ep,m}$ is a constant depending only on $\ep$ and $m$, but not on $\psi$. Since this estimate holds for any monomial of $p_{2m}$, the final claim follows with the fact that $\n{\psi}_{m}^{2} \le 2^{m-1}\nn{\partial_{x}^{m}\psi}^{2} + 2^{m-1}\nn{\psi}^{2}$.\qed
\end{proof}
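The exponent bookkeeping in the Young step of the proof can be verified mechanically: $p$ and $p'$ are conjugate, and they turn the two interpolation exponents into exactly $2$ and $4m+2$, respectively. A small check over all admissible pairs $(m,\abs{\mu})$:

```python
# Verify the Young-inequality bookkeeping: p and p' are conjugate
# exponents, and they turn the two interpolation exponents into
# 2 and 4m + 2, respectively.
for m in range(1, 10):
    for mu in range(4, 2 * m + 3, 2):        # admissible even |mu|
        p = 2.0 / (2.0 - (mu - 2) / (2.0 * m))
        pc = 4.0 * m / (mu - 2)
        assert abs(1.0 / p + 1.0 / pc - 1.0) < 1e-12
        assert abs((2.0 - (mu - 2) / (2.0 * m)) * p - 2.0) < 1e-12
        assert abs((1.0 + 1.0 / (2 * m)) * (mu - 2) * pc - (4 * m + 2)) < 1e-12
```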
\begin{proof}[Proof of Theorem~\ref{act-sob-est} (i)]
Recall from Lemma~\ref{I-H}, that
\[
\n{I(\ph)}_{\ell^{1}_{2m}}
\le \lin{16\pi}^{2m}(1+\n{\ph}_{1})^{4m}\n{\ph}_{0}^2 + (-1)^{m+1}2^{m}\Hc_{2m+1}(\ph),
\]
which together with the estimate of the Hamiltonian from Lemma~\ref{pm-Hm-est} with $\ep = 1$ yields
\[
\n{I(\ph)}_{\ell_{2m}^{1}}
\le
2^{m}
\n{\ph}_{m}^{2}
+
c_m^2(1+\n{\ph}_{1})^{4m}\n{\ph}_{0}^{2},\qquad m\ge 1,
\]
where $c_{m}$ is a constant depending only on $m$.\qed
\end{proof}
\section{Estimating the Sobolev Norms}
\label{s:sob-est}
We now turn to the converse problem of estimating the Sobolev norms of the potential in terms of weighted norms of its actions on level one. Our starting point is the identity
\begin{align}
\label{sob-rep}
\frac{1}{2}\n{\ph_{(m)}}_0^2
= 4^{m}\sum_{n\in\Z} J_{n,2m+1} - \int_\T p_{2m}\,\dx,\qquad m\ge 1,
\end{align}
which is obtained by combining Corollary~\ref{h-form} and the trace formula~\eqref{tf-k}.
The key step is estimating the actions $J_{n,2m+1}$ in terms of $I_{n}$. Subsequently, $p_{2m}$ is estimated by Lemma~\ref{pm-Hm-est}.
The main difficulty is to estimate the actions $J_{n,2m+1}$ below the threshold of $\lin{n}$ provided by Proposition~\ref{In-Jn-est}. Even though there are only finitely many of them, they cannot be controlled by the $L^{2}$-norm $\n{\ph}_{0}$ as one can translate the spectrum of $\ph$ without changing $\n{\ph}_{0}$. Instead, we use the $H^{1}$-norm $\n{\ph}_{1}$, and provide estimates of $\n{\ph}_{1}$ in terms of $I_{n}$ by separate arguments which were inspired by work of Korotyaev~\cite{Korotyaev:2005fb,Korotyaev:2010ft}.
\begin{lemma}
\label{H3-est}
Uniformly for all $\ph\in H_r^1$,
\[
\Hc_3(\ph) - 2\Hc_1^2(\ph) \le \sum_{n\in\Z} (2n\pi)^2 I_n(\ph).
\]
In particular, $\Hc_{3}(\ph) \le \n{I(\ph)}_{\ell^{1}_{2}} + 2\n{I(\ph)}_{\ell^{1}}^2$ and
\[
\frac{1}{3}\n{\ph}_1^2 \le \n{I(\ph)}_{\ell^{1}_{2}} + \n{I(\ph)}_{\ell^{1}}^2.
\]
\end{lemma}
\begin{proof}
As the Hamiltonians and the actions are continuous on $H^{1}_{r}$, it suffices to consider the case of a finite gap potential. Let $C_{r}$ denote a sufficiently large circle enclosing all open gaps of $\ph$, then the primitive $F$ of $\om$ defined in section~\ref{s:setup} is analytic outside $C_{r}$ and its Laurent expansion is given by Lemma~\ref{F-asy}. Thus, by the Residue Theorem,
\[
\frac{1}{\pi} \int_{C_{r}} F^{3}(\lm)\,\dlm
= \frac{3}{8\ii \pi} \int_{C_{r}} \frac{1}{\lm}(\Hc_{3}-2\Hc_{1}^{2}) \,\dlm
= \frac{3}{4}(\Hc_{3}-2\Hc_{1}^{2}).
\]
The right hand side is real as $\ph$ is of real type, and
\[
\Re \int_{C_{r}} F^{3}(\lm)\,\dlm
= \sum_{n\in\Z} \int_{\lm_{n}^{-}}^{\lm_{n}^{+}} \Re\bigl(F^{3}(\lm-\ii 0)
- F^{3}(\lm+\ii 0)\bigr)\ \dlm.
\]
Furthermore, by Lemma~\ref{F-prop} we have for $\lm_{n}^{-} < \lm < \lm_{n}^{+}$,
\[
\Re\bigl(F^{3}(\lm-\ii 0) - F^{3}(\lm+\ii 0)\bigr) = -2 f_{n}^{3}(\lm) + 6 (n\pi)^{2} f_{n}(\lm),
\]
and since $f_{n}$ is nonnegative, we conclude with \eqref{Jn-fn}
\[
\frac{1}{\pi} \int_{C_{r}} F^{3}(\lm)\,\dlm
\le \sum_{n\in\Z} \frac{6}{\pi}\int_{\lm_{n}^{-}}^{\lm_{n}^{+}} (n\pi)^{2} f_{n}(\lm)\,\dlm
= \frac{3}{4}\sum_{n\in\Z}(2n\pi)^{2}I_{n}.
\]
This proves the first claim. Note that $\lin{2n\pi}^{2} \le \frac{3}{2}(1+(2n\pi)^{2})$ for $n\in\Z$, so
\[
\frac{1}{3}\n{\ph}_{1}^{2}
\le \frac{1}{2}(\n{\ph_{x}}_{0}^{2} + \n{\ph}_{0}^{2})
=
\Hc_3 - \int_\T \abs{\psi}^4\,\dx + \Hc_{1}.
\]
Since $\int_{\T} \abs{\psi}^{4}\,\dx \ge \Hc_{1}^{2}(\ph)$ the second claim follows with
\[
\frac{1}{3}\n{\ph}_{1}^{2}
\le \Hc_3 - 2\Hc_1^2 + \Hc_1^2 + \Hc_{1}
\le \n{I(\ph)}_{\ell^{1}_{2}} + \n{I(\ph)}_{\ell^{1}}^{2}.\qed
\]
\end{proof}
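The elementary inequality $\lin{2n\pi}^{2}\le\frac{3}{2}(1+(2n\pi)^{2})$ invoked in the proof deserves a remark: read with $\lin{x} = 1+\abs{x}$ (our assumption), it fails for some real arguments, but it holds for all integers $n$ because $\abs{2n\pi}\ge 2\pi$ whenever $n\neq 0$:

```python
import math

# The inequality <2*n*pi>^2 <= (3/2)(1 + (2*n*pi)^2), with <x> read as
# 1 + |x|, fails for some real arguments (e.g. x = 1) but holds for
# every integer n, since |2*n*pi| >= 2*pi whenever n != 0.
assert (1 + 1)**2 > 1.5 * (1 + 1**2)          # counterexample at x = 1
for n in range(-500, 501):
    x = 2 * n * math.pi
    assert (1 + abs(x))**2 <= 1.5 * (1 + x**2)
```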
\begin{proof}[Proof of Theorem~\ref{act-sob-est} (ii).]
The case $m=1$ is an immediate corollary of the lemma above.
For the case $m\ge 2$ we find with~\eqref{sob-rep}
\begin{align*}
\frac{1}{2}\n{\ph_{(m)}}_0^2
=
\sum_{n\in\Z} 4^{m}J_{n,2m+1} - \int_\T p_{2m}\,\dx.
\end{align*}
Choosing $\ep = 1/4$ in Lemma~\ref{pm-Hm-est} then gives
\[
\frac{1}{4}\n{\ph_{(m)}}_0^2
\lesssim
\sum_{n\in\Z} 4^{m}J_{n,2m+1} + \n{\ph}_{0}^{4m+2}
\lesssim
\sum_{n\in\Z} 4^{m}J_{n,2m+1} + \n{I(\ph)}_{\ell^{1}}^{2m+1},
\]
where we applied the trace formula~\eqref{tf-1} in the last step. Finally, in view of Lemma~\ref{h-a-est} below
\begin{align*}
\frac{1}{4}\n{\ph_{(m)}}_0^2
\le
\n{I}_{\ell^{1}_{2m}} + c_{m}(1+\n{I}_{\ell^{1}_{2}})^{4m-3}\n{I}_{\ell^{1}}.\qed
\end{align*}
\end{proof}
\begin{lemma}
\label{h-a-est}
For each $m\ge 1$,
\begin{align*}
\sum_{n\in\Z}
4^{m}J_{n,2m+1}
\le
\n{I(\ph)}_{\ell^{1}_{2m}} +
(64\pi)^{2m}(1+\n{I(\ph)}_{\ell^{1}})^{2m-1}\n{I(\ph)}_{\ell^{1}_{2}}^{2m-1},
\end{align*}
uniformly for all $\ph\in H_r^{m}$.
\end{lemma}
\begin{proof}
Let $N+1 \ge 8\n{\ph}_1^2 > N$, then by Proposition~\ref{In-Jn-est}
\[
\sum_{\abs{n} > N} 4^{m}J_{n,2m+1} \le \sum_{\abs{n}> N} \lin{2n\pi}^{2m} I_n,
\]
while for the remaining actions $J_{n,2m+1} = \tilde\zt_{n,m}^{2m-2}J_{n,3}$ and hence
\begin{align*}
\sum_{\abs{n} \le N} 4^{m}J_{n,2m+1}
&\le
(16\pi)^{2m-2}\n{\ph}_1^{4m-4} \sum_{\abs{n}\le N} 4J_{n,3}\\
&\le
(64\pi)^{2m-2}(1+\n{I}_{\ell^{1}})^{2m-2}\n{I}_{\ell_{2}^{1}}^{2m-2}\sum_{\abs{n}\le N} 4J_{n,3}.
\end{align*}
By the trace formula \eqref{tf-k} and Lemma~\ref{H3-est} we finally obtain
\[
\sum_{n\in\Z} 4J_{n,3}
= \Hc_3
\le \n{I(\ph)}_{\ell^{1}_{2}} + 2\n{I(\ph)}_{\ell^{1}}^{2}.\qed
\]
\end{proof}
\section{Actions, Weighted Sobolev Spaces, and Gap Lengths}
\label{s:act-west}
The case of estimating the action variables of $\ph$ in standard Sobolev spaces $H_{r}^{m}$ with integers $m\ge 1$ is somewhat special due to the presence of the trace formula~\eqref{tf-k}. When arbitrary weighted Sobolev spaces $H_{r}^{w}$ are considered, no identity is known relating $\n{\ph}_{w}$ to Hamiltonians of the \nls-hierarchy. Nevertheless, even in the case of weighted Sobolev spaces, the regularity properties of $\ph$ are well known to be closely related to the decay properties of the gap lengths $\gm_{n}(\ph)$ -- see e.g. \cite{Djakov:2006ba,Kappeler:2009kp} and section~\ref{s:trap-sp}. Moreover,
\begin{align}
\label{In-gmn}
\frac{4I_{n}}{\gm_{n}^{2}} = 1 + \ell_{n}^{2}
\end{align}
is known to hold locally uniformly on $L^{2}_{r}$ and hence uniformly on bounded subsets of $H^{1}_{r}$ -- see \cite{Grebert:2014iq}. In this section we prove a quantitative version of \eqref{In-gmn} which is quadratic in $\n{\ph}_{1}$ on all of $H^{1}_{r}$. From this and the estimates of the gap lengths given in section~\ref{s:trap-sp} we then obtain Theorem~\ref{act-west}.
To set the stage, let $\ph\in W$ and recall from \eqref{Jn-om},
\[
I_{n} = \frac{1}{\pi}\int_{\Gm_{n}} \lm \om = -\frac{1}{\pi}\int_{\Gm_{n}} (\lm_{n}^{\text{\tiny\ensuremath{\bullet}}}-\lm)\om.
\]
Here the latter identity follows as $\om$ is closed around the gap. In the case $I_{n}\neq 0$, or equivalently $\gm_{n}\neq 0$, we shrink the contour $\Gm_{n}$ to the straight line $[\lm_{n}^{-},\lm_{n}^{+}]$ and insert the product representation \eqref{om} of $\om$, to obtain
\[
I_{n} = -\frac{2}{\pi}\int_{\lm_{n}^{-}}^{\lm_{n}^{+}}
\frac{(\lm_{n}^{\text{\tiny\ensuremath{\bullet}}}-\lm)^{2}}{\ii\vs_{n}(\lm)}\chi_{n}(\lm)\,\dlm,
\qquad
\chi_{n}(\lm) = \prod_{n\neq m} \frac{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\vs_{m}(\lm)}.
\]
It follows with the substitution $\lm = \tau_{n} + t\gm_{n}/2$ that
\[
\frac{4I_{n}}{\gm_{n}^{2}} = \frac{2}{\pi}\int_{-1}^{1}
\frac{(t-t_{n})^{2}}{\sqrt[+]{1-t^{2}}}\chi_{n}(\tau_{n}+t\gm_{n}/2) \,\dt,
\qquad
t_{n} = 2(\lm_{n}^{\text{\tiny\ensuremath{\bullet}}}-\tau_{n})/\gm_{n}.
\]
By Lemma~\ref{ld-tau} there exists an open connected neighbourhood $\hat{W}\subset W$ of $L^{2}_{r}$ such that $\abs{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\tau_{m}} \le \abs{\gm_{m}}$ for all $m\in\Z$, hence $\abs{t_{n}} \le 2$, and thus
\begin{align}
\label{In-gmn-2}
\abs*{\frac{4I_{n}}{\gm_{n}^{2}}} \le 9\abs{\chi_{n}}_{[\lm_{n}^{-},\lm_{n}^{+}]}.
\end{align}
To get a quantitative version of \eqref{In-gmn} we thus need a uniform estimate of $\chi_{n}$.
\begin{lemma}
On $H^{1}_{c}\cap \hat{W}$ for any $\abs{n} \ge \lin{N} \ge 8\n{\ph}_{1}^{2}$,
\[
\abs{\chi_{n}}_{[\lm_{n}^{-},\lm_{n}^{+}]}
\le \e^{2}\left(\frac{\abs{n} + N+3/5}{\abs{n} - N-3/5}\right)
\le 2^{9}(1+\n{\ph}_{1}^{2}).
\]
\end{lemma}
\begin{proof}
Suppose $\ph\in H^{1}_{c}\cap \hat{W}$ and choose $\lin{N} \ge 8 \n{\ph}_{1}^{2} > N$. For $\abs{n} \ge N$ we split the product $\chi_{n}$ into two parts,
\[
\chi_{n}(\lm) =
\biggl(\prod_{{\abs{m} \le N}} \frac{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\vs_{m}(\lm)}\biggr)
\biggl(\prod_{\atop{n\neq m}{\abs{m} > N}} \frac{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\vs_{m}(\lm)}\biggr).
\]
If $\abs{k} > N$, then by Theorem~\ref{ev-trap}
\[
\abs{\lm_{k}^{\pm}-k\pi} \le \frac{\n{\ph}_{1}^{2}}{\lin{k}}
+ \frac{\sqrt{2}\n{\ph}_{1}}{\lin{2k}} \le \pi/8,
\]
where we used that $\lin{2k} \ge 3\lin{k}/2$. Thus, for $\abs{m},\abs{n} > N$ and $\lm\in [\lm_{n}^{-},\lm_{n}^{+}]$,
\[
\abs{\tau_{m}-\lm} \ge 2\abs{n-m}.
\]
Further, $\abs{\gm_{m}} \le 4\n{\ph}_{1}/\lin{2m}$ by the individual gap estimate {\eqref{i-gap}}, hence
\[
\abs*{\frac{\gm_{m}/2}{\tau_{m}-\lm}}
\le \frac{\n{\ph}_{1}}{\lin{2m}\abs{n-m}}
\le 1/4.
\]
It follows that $\abs{\vs_{m}(\lm)} \ge \abs{\tau_{m}-\lm} - \abs{\gm_{m}}/2$. Moreover, $\abs{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}} - \tau_{m}} \le \abs{\gm_{m}}$ as $\ph\in\hat{W}$, thus with $(1+2r)/(1-r) \le 1+4r$ for $0\le r\le 1/4$, we conclude
\[
\abs*{\frac{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\vs_{m}(\lm)}}
\le
\frac{\abs{\tau_{m}-\lm} + \abs{\gm_{m}}}{\abs{\tau_{m}-\lm} - \abs{\gm_{m}}/2}
\le 1 + \frac{\abs{\gm_{m}}}{\abs{n-m}}.
\]
It follows with Cauchy-Schwarz that
\begin{align*}
\sum_{\atop{m\neq n}{\abs{m} > N}} \frac{\abs{\gm_{m}}}{\abs{n-m}}
&\le \left(\sum_{\atop{m\neq n}{\abs{m} > N}} \frac{1}{\lin{2m}^{2}\abs{n-m}^{2}}\right)^{1/2}
\left(\sum_{\abs{m} > N} \lin{2m}^{2}\abs{\gm_{m}}^{2}\right)^{1/2}\\
&\le \frac{2}{\lin{2N+2}}\left(\sum_{\abs{m} > N} \lin{2m}^{2}\abs{\gm_{m}}^{2}\right)^{1/2},
\end{align*}
and by Proposition~\ref{gap-est}
\[
\sum_{\abs{m} > N} \lin{2m}^{2}\abs{\gm_{m}}^{2}
\le 6\n{\Rc_{N}\ph}_{1}^{2} + 144\n{\ph}_{1}^{4}
\le 4\lin{N}^{2}.
\]
Therefore, by the standard estimates for infinite products,
\[
\prod_{\atop{\abs{m}> N}{m\neq n}} \abs*{\frac{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\vs_{m}(\lm)}}
\le \exp\left(4\lin{N}/\lin{2N+2} \right)
\le \e^{2}.
\]
To estimate the remaining part of the product we note that $\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}$ and $\lm_{m}^{\pm}$ are contained in the isolating neighbourhood $U_{m}$, which is a complex disc centred on the real line. Thus if $\lm\in [\lm_{n}^{-},\lm_{n}^{+}]$ and $n > N$, then
\[
\abs{\lm_{m-1}^{\pm}-\lm} > \abs{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\lm},\qquad m\le N,
\]
and consequently
\[
\prod_{\abs{m} \le N} \abs*{\frac{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\vs_{m}(\lm)}}
= \abs*{\frac{\lm_{-N}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\vs_{N}(\lm)}}
\prod_{-N < m \le N} \abs*{\frac{\lm_{m}^{\text{\tiny\ensuremath{\bullet}}}-\lm}{\vs_{m-1}(\lm)}}
\le \frac{\abs{\tau_{-N}-\lm} + \abs{\gm_{-N}}}{\abs{\abs{\tau_{N}-\lm} - \abs{\gm_{N}}/2}}.
\]
By Theorem~\ref{ev-trap} we have $\abs{\gm_{\pm N}} \le \pi/5$, as well as
\[
\abs{\pm N - n}\pi - 2\pi/5 \le \abs{\tau_{\pm N}-\lm} \le \abs{\pm N - n}\pi + 2\pi/5.
\]
It follows that
\[
\frac{\abs{\tau_{-N}-\lm} + \abs{\gm_{-N}}}{\abs{\abs{\tau_{N}-\lm} - \abs{\gm_{N}}/2}}
\le
\frac{\abs{n}+N+3/5}{\abs{n}-N-3/5}\le 5\lin{N}.
\]
The case $\lm\in [\lm_{n}^{-},\lm_{n}^{+}]$ with $n < -N$ follows similarly.
\end{proof}
\begin{proposition}
Suppose $\ph\in H^{1}_{c}\cap \hat{W}$, then for any $\abs{n} \ge 8\n{\ph}_{1}^{2}$,
\[
\abs{I_{n}} \le 2^{11}(1+\n{\ph}_{1}^{2})\abs{\gm_{n}}^{2}.
\]
\end{proposition}
\begin{proof}
If $\gm_{n} = 0$, then $I_{n} = 0$ and the estimate clearly holds. If $\gm_{n}\neq 0$, then by \eqref{In-gmn-2} and the preceding lemma,
\[
\abs*{{4I_{n}}/{\gm_{n}^{2}}} \le 9\abs{\chi_{n}}_{[\lm_{n}^{-},\lm_{n}^{+}]}
\le 2^{13}(1+\n{\ph}_{1}^{2}).
\]
\end{proof}
\begin{proof}[Proof of Theorem~\ref{act-west}.]
Suppose $\ph\in H^{w}_{c}\cap \hat{W}$ and choose $N+1\ge 8\n{\ph}_{w}^{2} > N$. Then by the preceding proposition
\[
\sum_{\abs{n} > N} w_{2n}^{2}\abs{I_{n}}
\le 2^{11}\n{\ph}_{1}^{2}\sum_{\abs{n} > N} w_{2n}^{2}\abs{\gm_{n}}^{2},
\]
and the gap lengths may be estimated by Proposition~\ref{gap-est}
\[
\sum_{\abs{n} > N} w_{2n}^{2}\abs{\gm_{n}}^{2}
\le 6\n{R_{N}\ph}_{w}^{2} + 144\n{\ph}_{w}^{4}
\le 144(1+\n{\ph}_{w}^{2})\n{\ph}_{w}^{2}.
\]
In particular, the mapping
\[
H^{w}_{c}\cap\hat{W} \to [0,\infty),\qquad \ph\mapsto \sum_{n\in\Z} w_{2n}^{2}\abs{I_{n}},
\]
is continuous.
Suppose $\ph$ is of real type, then the remaining actions for $\abs{n} \le N$ may be estimated by
\[
\sum_{\abs{n} \le N} w_{2n}^{2}\abs{I_{n}}
\le w^{2}_{2N}\sum_{n\in\Z} I_{n}
\le w[16\n{\ph}_{w}^{2}]^{2}\n{\ph}_{0}^{2}.
\]
Since $w\in\Mc^{1}$ is growing with at least linear speed, we thus find on $H^{w}_{r}$
\[
\sum_{n\in\Z} w_{2n}^{2} \abs{I_{n}} \le 2^{20} w[16\n{\ph}_{w}^{2}]^{2}\n{\ph}_{w}^{2}.
\]
For any nonzero potential $\ph$ this estimate extends by continuity to a complex neighbourhood of $\ph$ within $H^{w}_{c}$ with just the absolute constant doubled. On the other hand, sufficiently close to the zero potential we have $\n{\ph}_{w}^{2} < 1/8$. In this case we may choose $N=0$ such that
\[
\sum_{n\neq 0} w_{2n}^{2}\abs{I_{n}} \le 2^{20}(1+\n{\ph}_{w})^{4}\n{\ph}_{1}^{2}.
\]
Consequently, on some sufficiently small open neighbourhood of $H_{r}^{w}$ in $H_{c}^{w}$,
\[
\n{I(\ph)}_{\ell_{w}^{1}} \le c_{w}^{2}w[16\n{\ph}_{w}^{2}]^{2}\n{\ph}_{w}^{2},
\]
with a real constant $c_{w}$ depending only on $w_{0}$.
\end{proof}
\clearpage
\label{Intro}
Spin glasses and neural networks are closely analogous, and their dynamics draw many parallels. Generally, a spin glass is a model of disordered magnetism. The simplest model of a spin glass, the Ising model, is a network of $N$ ``spins'' $\{\sigma_{i}\}$ which take on the discrete values $\pm 1$, connected by a weight matrix $J_{ij} \in \mathbb{R}$ that represents the strength of connection between the spins. The dynamics of these systems is determined by the values of the randomly chosen $J_{ij}$, which are generally time independent \citep{book:Spin_Glasses_Neural_Networks}.
The similarity of these spin glass systems with neural networks is of interest to us because spin glasses have been a focus of research in statistical physics for the last fifty years, and a large library of machinery and techniques has been developed to deal with them. We would like to apply this machinery to the field of neural networks.
For this paper we used PetaVision, a high-performance neural simulation toolbox \citep{petavision}, to construct sparsely coding convolutional neural networks and to examine the relationship between a network's efficiency and its sparsity. Interesting behavior in the efficiency of the networks as the sparsity was varied led us to analyze the finite-size scaling of the network, a technique more commonly used in the study of spin glasses, and we discovered power law relationships that indicate a continuous (second-order) phase transition occurs in the networks as sparsity is varied.
\section{Neural network}
We used two networks in our simulation, both built using PetaVision.
The first network was a sparse auto-encoder network that trained the filter kernels of a convolutional layer using a Locally Competitive Algorithm, as defined by \citet{rozell}, as it attempted to iteratively converge on a sparse representation of different input images. The second network (see Figure \ref{Denosing_Network}) used the same sparsely encoding convolutional layer that was trained by the autoencoder to denoise images that had very high Gaussian noise added to them.
The inputs for both networks were images from the CIFAR-10 image set \citep{CIFAR-10}. The image set was divided into two parts. The first 50,000 images were used for training the filter kernels of the sparsely coding convolutional layer for different levels of sparsity. Then, 10,000 additional images had very high Gaussian noise added to them and were denoised by the denoising network for each level of sparsity used in training.
We observed a distinct minimum in the percent reconstruction error of the noisy images as the sparsity of the network was varied that displayed behavior analogous to continuous phase transitions seen in spin glasses (see figure \ref{fig:Error_vs_Fraction}) \citep{book:Spin_Glasses_Neural_Networks,tauber-book}. With this as our motivation we investigated the presence of a phase transition in our system.
\begin{figure}[h]
\begin{tikzpicture}
\node(Input)[base, fill=black!10] at (0,0){\includegraphics{./figures/Input.png}};
\node[above of=Input] {Input Layer};
\node(Noise)[base, fill=black!10] at (2.5,0){\includegraphics{./figures/NoiseLayer.png}};
\node[above of=Noise] {Noise Layer};
\node(InputErr)[base, fill=black!10] at (5,0){\includegraphics{./figures/DenoiseError.png}};
\node[above of=InputErr] {\begin{tabular}{c}
Input Error\\
Layer
\end{tabular}};
\node(V1)[diamond,fill=black!10, draw=black,minimum width = 1cm,minimum height=1cm] at (8.0, 0){};
\node[above of=V1] {\begin{tabular}{c}
Sparsely Coding\\
Convolutional Layer
\end{tabular}};
\node(Recon)[base, fill=black!10] at (11,0){\includegraphics{./figures/InputRecon.png}};
\node[above of=Recon] {\begin{tabular}{c}
Input Reconstruction\\
Layer
\end{tabular}};
\draw[->] (Input) -- (Noise);
\draw[->] (Noise) -- (InputErr);
\draw[->] (InputErr) -- (V1);
\draw[->] (V1) -- (Recon);
\draw[->,style=dashed, draw=red!60] (Recon.west) to [bend left] (InputErr.east);
\end{tikzpicture}
\caption{\label{Denosing_Network}A schematic of the denoising network. The Input Error Layer computes the difference between the Noise Layer and the Input Reconstruction Layer, an alternative implementation of lateral inhibition in LCA \citep{schultz2014replicating}. Feature learning utilizes a local Hebbian rule to implement stochastic gradient descent.}
\end{figure}
\section{Phase transitions and finite-size scaling}
\label{Phase}
A phase of a system is defined as a subspace of the microscopic system parameters where the system's dynamics obey the same macroscale laws and relations everywhere in that subspace. The space of system parameters can have many phases, and the system can transition between them as system control parameters change. The point of transition between two (or more) phases is known as the critical point.
Phase transitions have been the subject of significant study in Condensed Matter Physics, and it is well established that the occurrence of a continuous phase transition\footnote{A continuous (second-order) phase transition has a continuous change in the dynamics of the system as it transitions between phases, while first-order phase transitions are discontinuous.} is accompanied by a singularity at the critical point in one or more system parameters when the system is of infinite size \citep{tauber-book}. It is impossible to achieve infinite system sizes computationally, but this theory can be extended to finite systems, where these singularities become truncated and rounded. The minima or maxima that the singularities turn into at finite system sizes follow very specific relations with system size:
\begin{align}
\text{Location of Minima} \sim L^{-1/\nu} \label{eq:Loc}\\
\text{Height of Minima} \sim L^{-\gamma/\nu}\label{eq:Height},
\end{align}
where $L$ is the linear system size. This behavior is known as finite-size scaling \citep{tauber-book,Cardy}. The exponents $\nu$ and $\gamma$ are two examples of ``critical exponents'', which describe the behavior of the system as it approaches the critical point \citep{tauber-book,Cardy}. Thus we can identify a phase transition in our network by the existence and behavior of minima and maxima in the space of system parameters as we vary the system size, which in our case will be the number of neurons in the convolutional layer. The exponents we record, $\bar{\nu}$ and $\bar{\gamma}$, will be proportional to $\nu$ and $\gamma$ through some effective dimension of our system.
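In practice, extracting the exponents in relations \ref{eq:Loc} and \ref{eq:Height} amounts to linear regression in log-log space. A minimal sketch in Python (the function name and the synthetic data are our own illustration, not simulation output):

```python
import numpy as np

def fit_power_law(sizes, values):
    """Fit values ~ A * sizes**k by linear regression in log-log space.

    Returns the exponent k and the prefactor A.
    """
    k, log_a = np.polyfit(np.log(sizes), np.log(values), 1)
    return k, np.exp(log_a)

# Synthetic check: minima locations decaying as L**(-1/nu) with nu = 2
L = np.array([32.0, 64.0, 128.0, 256.0])
locations = 1.5 * L ** (-1.0 / 2.0)
k, A = fit_power_law(L, locations)  # k recovers -1/2, A recovers 1.5
```

Applied to measured minima locations and heights, the two fitted slopes give $-1/\bar{\nu}$ and $-\bar{\gamma}/\bar{\nu}$ respectively.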
\section{Results}
\label{Results}
The parameters of the system that we are interested in are the fraction of active neurons and the average percent reconstruction error of our noisy images:
\begin{equation}
P_{err} = \frac{1}{10,000}\sum_{i=1}^{10,000} \frac{\| \mathbf{s}_{i} - \mathbf{\hat{s}}_{i} \|^2}{\|\mathbf{s}_i\|^2},
\end{equation}
where $P_{err}$ is the average percent reconstruction error, $\mathbf{s}_{i}$ is the $i^{\text{th}}$ original image before it has Gaussian noise added to it, $\mathbf{\hat{s}}_{i}$ is the $i^{\text{th}}$ reconstruction of the noised image taken from the sparsely coding convolutional layer \citep{petavision,rozell,schultz2014replicating}.
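$P_{err}$ is straightforward to evaluate; the following sketch (our own illustration, not PetaVision code) computes it for a batch of images stored as flattened arrays:

```python
import numpy as np

def percent_reconstruction_error(originals, reconstructions):
    """Average percent reconstruction error over a batch.

    originals, reconstructions: arrays of shape (n_images, n_pixels).
    Implements P_err = (1/n) sum_i ||s_i - s_hat_i||^2 / ||s_i||^2.
    """
    num = np.sum((originals - reconstructions) ** 2, axis=1)
    den = np.sum(originals ** 2, axis=1)
    return float(np.mean(num / den))

s = np.array([[3.0, 4.0]])       # ||s||^2 = 25
s_hat = np.array([[3.0, 3.0]])   # ||s - s_hat||^2 = 1
err = percent_reconstruction_error(s, s_hat)   # 1/25 = 0.04
```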
The fraction of active neurons is controlled by a parameter $\lambda$, as described by \citet{rozell}, that behaves monotonically with the sparsity of active neurons and inversely with the fraction of active neurons. Through $\lambda$ we can control the fraction of active neurons and observe how the average percent reconstruction error behaves as the fraction of active neurons is varied. We observed a minimum in $P_{err}$ as we varied the fraction of active neurons for many different system sizes. These results are summarized in Figure \ref{fig:Error_vs_Fraction}.
\begin{figure}[H]
\centering
\includegraphics[width=.45\linewidth]{./figures/FitPlots.pdf}
\caption{\label{fig:Error_vs_Fraction}Average percent reconstruction error vs. fraction of active neurons}
\end{figure}
\begin{figure}[h]
\begin{subfigure}[c]{.4\textwidth}
\includegraphics[width=\textwidth]{./figures/Max_vs_Height.pdf}
\caption{Height of minima vs. system size}
\end{subfigure}\hfill%
\begin{subfigure}[c]{.4\textwidth}
\includegraphics[width=\textwidth]{./figures/Activity_vs_Height.pdf}
\caption{Location of minima vs system size}
\end{subfigure}
\caption{\label{fig:results}The power law behavior of the minimum average percent reconstruction error (a), and the fraction of active neurons at that minimum (b). We report $\bar{\nu} =1.32 \pm 0.04$ and $\bar{\gamma} = 0.0099 \pm 0.0095$.}
\end{figure}
We measured the shift in height and location of the minima in $P_{err}$ as the system size was varied, and plotted each on a log-log plot (see Figures \ref{fig:results} (a) and \ref{fig:results} (b)). We observe power law behavior in both the location and height of the minima as the system size is varied. This satisfies the finite-size scaling requirements defined in equations \ref{eq:Loc} and \ref{eq:Height}. This finite-size scaling behavior indicates a continuous phase transition is occurring as the sparsity of the network is varied.
\section{Discussion}
The existence of phase transitions in neural networks is not unique to this sparsely coding convolutional system. The auto-associative network proposed by \citet{Hopfield} was shown by \citet{hertz} to display a first-order phase transition in its memory capacity. If the number of patterns recorded by the network exceeds a "critical fraction" of the network size, the output of the network is maximally disordered \citep{hertz}.
We propose that a similar mechanism is responsible for the observed continuous phase transition of our sparsely coding convolutional network, where the fraction of active neurons is analogous to the ``critical fraction'' of learned patterns in the auto-associative network. If our network's fraction of active neurons is too far above the ``critical fraction'', the network will have the freedom to reconstruct the noise in the image, while if the fraction of active neurons is too low, the network will only reconstruct image components for which it has learned strong priors. These two different regions of dynamics form our ``phases''. The existence of a phase transition in the average percent reconstruction error of the network as the fraction of active neurons is varied guarantees the persistence of the power law behavior seen in Figure \ref{fig:results} (b). This power law behavior allows us to predict the optimal fraction of active neurons for any system size, which in turn can be tuned through the parameter $\lambda$, as described by \citet{rozell}, to ensure that any sparsely coding convolutional network is operating at the optimal level of sparsity.
The critical behavior of the network allows us to always achieve the minimum denoising error by operating the network at this critical value of sparsity.
\subsubsection*{Acknowledgments}
We gladly acknowledge helpful discussions with Uwe C. T\"auber.
This work was supported by the Los Alamos National Laboratory under contract DE-AC52-06NA25396.
Computations were performed using the Darwin Computational Cluster at Los Alamos National Laboratory.
\medskip
\small
Among the rich variety of condensed matter systems, magnetic materials are a source of many fruitful problems whose studies and solutions inspired discussions and new models beyond their immediate scope. The Kondo effect (existence of a minimum of electrical resistivity at low temperature in metals due to the presence of magnetic impurities) is one such problem \cite{Hewson1993,Coleman2007} as it provides an excellent basis for studies of quantum criticality and absolute zero-temperature phase transitions \cite{Sachdev2000,Coleman2005} and also on a more fundamental level, a concrete example of asymptotic freedom \cite{Coleman2007}. Assuming infinite on-site repulsion, the single-impurity Anderson model \cite{Anderson1961,Hewson1993} permitted the establishment of a correspondence between Hamiltonian language and path integral for the development of non-perturbative methods in quantum field theory \cite{Fresard2007,Fresard2008}. One other important model is that of the Heisenberg Hamiltonian, defined for the study of ferromagnetic materials, and which, assuming a crystal subjected to an external magnetic field ${\bm B}$, reads \cite{Diu1996}:
\begin{eqnarray}\label{Heisenberg}
H = -\sum_{\langle i,j\rangle}J_{ij}\, \hat{S}^{i}\cdot \hat{S}^{j} - {\bm h}\cdot\sum_j \hat{S}^j
\end{eqnarray}
\noindent where for ease of notation we introduced ${\bm h}=g\mu_B{\bm B}$, with $g$ being the Land\'e factor, and $\mu_B=e\hbar/2m_{\rm e}$ being the Bohr magneton ($e$: elementary electric charge, and $m_{\rm e}$: electron mass); $J_{ij}$ is a parameter that characterizes the nearest-neighbour exchange interaction between electron spins on the crystal sites $i$ and $j$ (the quantum spins $\hat{S}^{i}$ and $\hat{S}^{j}$ are vector operators whose components are proportional to the Pauli matrices). For simplicity, one may consider $J_{ij}\equiv J$ constant. If $J>0$ then the system is ferromagnetic, and if $J<0$ the system is antiferromagnetic. Hereafter, we fix the electron's magnetic moment $g\mu_{\rm B}=1$.
Although Eq.~\eqref{Heisenberg} has a fairly simple form, the exact calculation of the partition function:
\begin{equation}
Z = {\rm Tr}~e^{-\beta H}
\end{equation}
\noindent where $\beta = 1/k_{\rm B}T$ is the inverse thermal energy, is possible on the analytical level within the mean-field approximation, which simplifies the Hamiltonian \eqref{Heisenberg}, and for one-dimensional systems; one difficulty of the Heisenberg Hamiltonian is that the three components of a spin vector operator do not commute. That said, Heisenberg's Hamiltonian is very useful for studying, e.g., spin frustration \cite{Mila2015} and entanglement entropy \cite{Refael2004}, and it also serves as a test case for density-matrix renormalization group algorithms \cite{Schollwock2005}. Under zero field, Heisenberg's Hamiltonian is also a simplified form of the Hubbard model at half-filling, thus bringing ferromagnetism into the scope of studies of strongly correlated systems.
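For very small chains, $Z = {\rm Tr}\,e^{-\beta H}$ can nonetheless be evaluated numerically by diagonalizing the full $2^N\times 2^N$ Hamiltonian. A sketch for a spin-$1\!/\!2$ open Heisenberg chain with the field along $z$ (all function names are ours; $\hbar=k_{\rm B}=1$):

```python
import numpy as np

# Spin-1/2 operators (hbar = 1): S_alpha = sigma_alpha / 2
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    mats = [I2] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_Z(n, J, h, beta):
    """Z = Tr exp(-beta H) for an open spin-1/2 Heisenberg chain, field along z."""
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n - 1):                   # exchange term -J S_i . S_{i+1}
        for S in (Sx, Sy, Sz):
            H -= J * site_op(S, i, n) @ site_op(S, i + 1, n)
    for i in range(n):                       # Zeeman term -h S_i^z
        H -= h * site_op(Sz, i, n)
    energies = np.linalg.eigvalsh(H)         # exact spectrum
    return float(np.sum(np.exp(-beta * energies)))

# Two sites, J = 1, h = 0: singlet at +3J/4, triplet (threefold) at -J/4
Z2 = heisenberg_Z(n=2, J=1.0, h=0.0, beta=1.0)
```

The two-site value can be checked by hand against the singlet/triplet spectrum of $-J\,\hat{S}^{1}\!\cdot\hat{S}^{2}$.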
A particular but very important approximation of Heisenberg's Hamiltonian, whose significance in physics, especially for the study of critical phenomena, cannot be overstated, is the so-called Ising model. In its initial formulation \cite{Ising1925}, Ising spins are $N$ classical variables, which may take $\pm 1$ as values, and form a one-dimensional (1D) system characterized by free or periodic boundary conditions. The classical partition function $Z$ may be calculated analytically for the 1D Ising model, and quantities such as the average total magnetization obtained directly \cite{Kramers1941}:
\begin{equation}\label{magthermal}
M = \frac{1}{\beta} \frac{\partial\ln Z}{\partial h}
\end{equation}
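Equation \eqref{magthermal} can be checked by brute-force enumeration of the $2^N$ configurations of a short periodic chain, evaluating the derivative by a symmetric finite difference. A sketch (our own, with $k_{\rm B}=1$):

```python
import itertools
import numpy as np

def ising_Z(n, J, h, beta):
    """Brute-force partition function of a periodic classical Ising chain."""
    Z = 0.0
    for spins in itertools.product([-1, 1], repeat=n):
        E = -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        E -= h * sum(spins)
        Z += np.exp(-beta * E)
    return Z

def magnetization(n, J, h, beta, dh=1e-6):
    """M = (1/beta) d ln Z / dh, evaluated by a symmetric finite difference."""
    return (np.log(ising_Z(n, J, h + dh, beta))
            - np.log(ising_Z(n, J, h - dh, beta))) / (2 * beta * dh)

M4 = magnetization(n=4, J=1.0, h=0.5, beta=1.0)
```

The same brute-force $Z$ also yields $M$ directly as the thermal average of $\sum_i\sigma_i$, which agrees with the derivative to numerical precision.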
\noindent In the present work, we consider a 1D quantum spin chain whose Hilbert space is given by ${\cal H} = \bigotimes_i^N\mathbb{C}^2$. The system is described by the transverse-field Ising (TFI) Hamiltonian \cite{ovchinnikov2003antiferromagnetic}:
\begin{eqnarray}\label{IsingTF}
H = -J\sum_{i} \sigma^i_z \sigma^{i+1}_z + h_x\sum_{i=1}^N \sigma^i_x.
\end{eqnarray}
\noindent where $\sigma^i_{\alpha}$ ($\alpha\equiv x,z$) is the Pauli matrix for the $\alpha$-component of the $i^{th}$ spin in the chain, and $h_x$ is the magnetic field applied in the transverse direction $x$. In this case, the spins are no longer the classical Ising ones and the two terms that compose the Hamiltonian $H$ do not commute, hence the need for a full quantum approach. An example of a real-world system that may be studied as a quantum Ising chain is cobalt niobate (CoNb$_2$O$_6$); in this case the spins that undergo the phase transition as the transverse field varies, are those of the Co$^{2+}$ ions \cite{Coldea2010}. The spin states are denoted $|+\rangle_i$ and $|-\rangle_i$ at ion site $i$. There are two possible ground states: when all $N$ spins are in the state $|+\rangle$, or in the state $|-\rangle$, i.e. when they are all aligned, which defines the ferromagnetic phase.
The phase transition from the ferromagnetic phase to the paramagnetic phase that we speak of now is of a quantum nature, and not of a thermal nature, as here it is driven only by the external magnetic field. More precisely, when the transverse field $h_x$ is applied with sufficient strength, the spins align along the $x$ direction, and the spin state at site $i$ is given as the superposition $\left(|+\rangle_i + |-\rangle_i\right)/\sqrt{2}$, which is nothing else but the eigenstate of the $x$-component of the spin. So, in this particular case, there is no need to raise the temperature of the system, initially in the ferromagnetic phase, beyond the Curie temperature, to make it a paramagnet: the many-body system remains in its ground state but its properties have changed. Further, it is interesting to note that unlike for the ferromagnetic phase, the quantum paramagnetic phase has spin-inversion symmetry. We recommend the reading of \cite{Sachdev2011} for an insightful discussion on quantum criticality.
Now, we briefly comment on the quantity $\beta=1/k_{\rm B}T$ in the context of quantum phase transitions, which, strictly speaking, can only occur at temperature $T=0$ K. In fact, close to the absolute zero, where $\beta \rightarrow \infty$, their signatures can be observed as quantum fluctuations dominate thermal fluctuations in the criticality region, where the quantum critical point lies. The imaginary time formalism \cite{Matsubara1955}, where $\exp(-\beta H)$ is interpreted as an evolution operator, and the partition function $Z$ as a path integral, provides a way to map a quantum problem onto a classical one with the introduction of the imaginary time $\beta$ resulting from a Wick rotation in the complex plane, thus yielding one extra dimension to the model. In classical thermodynamics, to observe a phase transition in a system requires that its size (i.e. the number of constituents $N$) tends to infinity so that the order parameter is non-analytic at the transition point; so for the quantum transition, the thermodynamic limit entails the limit $\beta \rightarrow \infty$ also: the 1D TFI model is mapped onto an equivalent 2D classical Ising model \cite{Kogut1979}. The imaginary time formalism permits implementation of classical Monte Carlo simulations to study quantum systems. Further discussion, including the sign problem for the quantum spin-$1\!/\!2$ system, is available in \cite{LeBellac2006}.
We have chosen the transverse-field Ising model as an illustrative case for our study for several reasons. First, since this system is one-dimensional, we can apply an MPS variational ground state solver \cite{schollwock2011density} and hence obtain the ground state solution in MPS representation. We can then perform fast and exact sampling to generate large data sets for the training of the VAE. Next, this model can be solved analytically, which allows us to adequately benchmark our results. Finally, this model shows nontrivial behavior around the quantum phase transition point at $h_x=1$, and thus constitutes an interesting example for applying a VAE.
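For small $N$, the behavior of the TFI chain on either side of the transition can also be checked by exact diagonalization, without any MPS machinery; a sketch (open boundary conditions; helper names are ours):

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def op_at(op, i, n):
    """Embed a single-site Pauli operator at site i of an n-spin chain."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

def tfi_ground_state(n, J, hx):
    """Ground state of H = -J sum_i sz_i sz_{i+1} + hx sum_i sx_i (open chain)."""
    H = sum(-J * op_at(sz, i, n) @ op_at(sz, i + 1, n) for i in range(n - 1))
    H += sum(hx * op_at(sx, i, n) for i in range(n))
    vals, vecs = np.linalg.eigh(H)
    return vals[0], vecs[:, 0]

def zz_corr(n, J, hx):
    """Nearest-neighbour <sz_0 sz_1> in the ground state."""
    _, psi = tfi_ground_state(n, J, hx)
    return float(psi @ op_at(sz, 0, n) @ op_at(sz, 1, n) @ psi)

c_ferro = zz_corr(6, 1.0, 0.2)  # deep in the ferromagnet: close to 1
c_para = zz_corr(6, 1.0, 5.0)   # deep in the paramagnet: small
```

The $zz$ correlation drops sharply as $h_x$ is driven through the critical region, consistent with the ferromagnet-to-paramagnet picture above.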
\section{Generative model as a quantum state}
Many-body quantum physics is rich in high-dimensional problems. Often, however, with increasing dimensionality these become extremely difficult or impossible to solve. One approach, when possible, is to reformulate the quantum mechanical problem as a statistical problem. Machine learning can then be used to solve such a problem effectively, since machine learning is a tool for solving high-dimensional statistical problems \cite{krizhevsky2012imagenet}. The probabilistic interpretation allows for the use of powerful sampling-based methods that work efficiently with high-dimensional data.
An example of the reformulation of a quantum problem as a statistical problem is with informationally complete (IC) positive-operator valued measures (POVMs) \cite{holevo2011probabilistic}. POVMs describe the most general measurements of a quantum system. Each particular POVM is defined by a set of positive semidefinite operators $M^{\alpha}$, with the normalization condition $\sum_{\alpha}M^{\alpha}={\mathds 1}$, where ${\mathds 1}$ is the identity operator. The fact that the POVM is informationally complete means that using measurement outcomes one can reconstruct the state of a system with arbitrary accuracy.
The probability of a measurement outcome for a quantum system with the density operator $\varrho$ is governed by Born's rule: $P[\alpha] = {\rm Tr}(\varrho M^{\alpha})$, where $\{M^{\alpha}\}$ is a particular POVM, and $\alpha$ is an outcome result. In other words, any density matrix can be mapped onto a mass function, although not all mass functions can be mapped onto a density matrix \cite{Filippov2010,appleby2017introducing}. Some mass functions lead to non-positive semidefinite ``density matrices'', which is not physically allowed. As such, quantum theory is a constrained version of probability theory. For a many-body system, these constraints can be very complicated, and direct consideration of quantum theory as a constrained probability theory is not fruitful. However, if one can access samples of the IC POVM induced mass function, which is by definition physically allowed, this mass function can be reconstructed using generative modeling \cite{carrasquilla2019reconstructing, LeiWanglecture}. Samples can be obtained either by performing generalized measurements over the quantum system or by in silico simulation.
In the present work, we simulate measurements of the ground state of a spin chain with the TFI Hamiltonian, Eq.~(\ref{IsingTF}). As a local (one spin) IC POVM, we use the so-called symmetric IC POVM for qubits (tetrahedral) POVM \cite{Caves1999}:
\begin{eqnarray}
&&M^{\alpha}_{\rm tetra} = \frac{1}{4}\left({\mathds 1} + \bm{s^{\alpha}}\bm{\sigma}\right),\ \alpha \in (0, 1, 2, 3), \ \bm{\sigma} = \left(\sigma_x, \sigma_y, \sigma_z\right),\nonumber\\
&&s^0 = (0, 0, 1), \ s^1 = \left(\frac{2\sqrt{2}}{3}, 0, -\frac{1}{3}\right), \ s^2 = \left(-\frac{\sqrt{2}}{3}, \sqrt{\frac{2}{3}}, -\frac{1}{3}\right), \ s^3 = \left(-\frac{\sqrt{2}}{3}, -\sqrt{\frac{2}{3}}, -\frac{1}{3}\right).
\end{eqnarray}
\noindent Note that the many-spin generalization of a local IC POVM is easily obtained by taking the tensor product of local ones:
\begin{eqnarray}
M^{\alpha_1, \dots, \alpha_N}_{\rm tetra} = M^{\alpha_1}_{\rm tetra} \otimes M^{\alpha_2}_{\rm tetra}\otimes \dots \otimes M^{\alpha_N}_{\rm tetra}.
\end{eqnarray}
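The tetrahedral POVM is simple to construct explicitly and to check for completeness and Born's rule; a sketch (variable names are ours):

```python
import numpy as np

pauli = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])          # sigma_x, sigma_y, sigma_z

# Bloch vectors s^alpha of the tetrahedral POVM, as defined in the text
s = np.array([[0.0, 0.0, 1.0],
              [2.0 * np.sqrt(2.0) / 3.0, 0.0, -1.0 / 3.0],
              [-np.sqrt(2.0) / 3.0, np.sqrt(2.0 / 3.0), -1.0 / 3.0],
              [-np.sqrt(2.0) / 3.0, -np.sqrt(2.0 / 3.0), -1.0 / 3.0]])

# M^alpha = (1/4)(I + s^alpha . sigma)
M = [0.25 * (np.eye(2) + np.einsum('k,kij->ij', s[a], pauli)) for a in range(4)]

completeness = sum(M)                           # should equal the identity

# Born's rule probabilities for the pure state |0><0|
rho = np.diag([1.0, 0.0]).astype(complex)
probs = np.array([np.trace(rho @ Ma).real for Ma in M])
```

For a chain, the many-spin POVM elements are Kronecker products of these four matrices.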
In order to simulate measurement outcomes under the IC POVM described above, we implement the following numerical scheme: First, we run a variational MPS ground state solver to obtain the ground state of the TFI model in the MPS form:
\begin{equation}
\Omega_{i_1, i_2, \dots, i_N}=\sum_{\beta_1, \beta_2, \dots, \beta_{N-1}}A^{1}_{i_1\beta_1}A^{2}_{\beta_1 i_2 \beta_2}\dots A^{N}_{\beta_{N-1}i_N}
\end{equation}
\noindent where we use tensor notation instead of bra-ket notation for simplicity, and we obtain the MPS representation of the IC POVM induced mass function:
\begin{eqnarray}
&&P[\alpha_1, \alpha_2, \dots, \alpha_N] = \sum_{\delta_1, \delta_2,\dots,\delta_{N-1}} \pi_{\alpha_1\delta_1}\pi_{\delta_1\alpha_2\delta_2}\dots\pi_{\delta_{N-1}\alpha_N},\nonumber\\
&&\pi_{\delta_{n-1}\alpha_n\delta_n}=\pi_{\underbrace{\beta_{n-1}\beta_{n-1}'}_{{\rm multi-index}\ \delta_{n-1}}\alpha_n\underbrace{\beta_n\beta_n'}_{{\rm multi-index}\ \delta_{n}}}=\left[M_{\rm tetra}\right]^{\alpha_n}_{ij}A^n_{\beta_{n-1}j\beta_n}\left[A^{n}\right]^*_{\beta_{n-1}'i\beta_n'}
\end{eqnarray}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{fig1}
\caption{Tensor diagrams for a) building blocks b) MPS representation of measurement outcome probability and c) its sub-tensor.}
\label{fig1}
\end{figure}
\noindent whose diagrammatic representation \cite{orus2014practical} is shown in Fig.~\ref{fig1}. Next, we produce a set of samples of size $M$: $\{\alpha_1^i, \alpha_2^i,\dots,\alpha_N^i\}_{i=1}^M$ from this mass function. The sampling can be efficiently implemented as shown in Appendix B. We call this set of samples (outcome measurements) a data set, which may then be used to train a generative model $p[\alpha_1, \alpha_2,\dots,\alpha_N|\theta]$ to emulate the true mass function $P[\alpha_1, \alpha_2,\dots,\alpha_N]$. Here $\theta$ is the set of parameters of the generative model, which is trained by maximizing the logarithmic likelihood ${\cal L}(\theta)=\sum_{i=1}^M\log p[\alpha_1^i, \alpha_2^i,\dots,\alpha_N^i|\theta]$ with respect to the parameters $\theta$ \cite{myung2003tutorial}. The trained generative model fully characterizes a quantum state. The density matrix is obtained by applying an inverse transformation to the mass function \cite{Filippov2010b}:
\begin{eqnarray}
&&\varrho = \sum_{\alpha_1, \alpha_2, \dots, \alpha_N}p[\alpha_1,\alpha_2,\dots,\alpha_N|\theta][M^{\alpha_1}_{\rm tetra}]^{-1}\otimes [M^{\alpha_2}_{\rm tetra}]^{-1} \otimes\dots\otimes [M^{\alpha_N}_{\rm tetra}]^{-1},\nonumber\\ &&[M^{\alpha}_{\rm tetra}]^{-1} = \sum_{\alpha'}T^{-1}_{\alpha\alpha'}M^{\alpha'}_{\rm tetra},\nonumber\\
&&T_{\alpha\alpha'} = {\rm Tr}\left(M^{\alpha}_{\rm tetra}M^{\alpha'}_{\rm tetra}\right),
\end{eqnarray}
\noindent the diagrammatic representation of which is given in Fig.~\ref{fig2}. Note that the summation included in the density matrix representation is numerically intractable, but we can estimate it using samples from the generative model.
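The building blocks of this inverse transformation can be verified numerically. The following sketch assumes the standard tetrahedron directions (any rotated tetrahedron works equally well): it constructs $M^{\alpha}_{\rm tetra}$, the overlap matrix $T_{\alpha\alpha'}$, and the dual elements $[M^{\alpha}_{\rm tetra}]^{-1}$, and reconstructs a test density matrix exactly from its mass function.

```python
import numpy as np

# Pauli matrices and a standard choice of tetrahedron directions
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
s = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)

# Tetrahedral POVM elements M^alpha = (I + s_alpha . sigma) / 4
M = np.array([(I2 + v[0]*sx + v[1]*sy + v[2]*sz) / 4 for v in s])

# Overlap matrix T and dual elements [M^alpha]^{-1}
T = np.real(np.einsum('aij,bji->ab', M, M))      # T_{aa'} = Tr(M^a M^b)
D = np.einsum('ab,bij->aij', np.linalg.inv(T), M)

# Sanity checks: completeness and exact reconstruction of a test state
assert np.allclose(M.sum(axis=0), I2)
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
P = np.real(np.einsum('ij,aji->a', rho, M))      # P[alpha] = Tr(rho M^alpha)
rho_rec = np.einsum('a,aij->ij', P, D)
assert np.allclose(rho_rec, rho)
```

For the single-qubit tetrahedral POVM, $T$ has diagonal entries $1/4$ and off-diagonal entries $1/12$, so it is invertible and the mass function determines the state uniquely.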
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{fig2}
\caption{Tensor diagrams for a) building blocks b) inverse transformation from a mass function to a density matrix.}
\label{fig2}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{fig3}
\caption{Tensor diagrams representing calculation of two-point correlation function.}
\label{fig3}
\end{figure}
Our goal is to use a generative model as an effective representation of quantum states to calculate the mean values of observables such as two-point and higher-order correlation functions. An explicit expression of the two-point correlation function obtained by sampling from the trained generative model is shown in Fig.~\ref{fig3}. To obtain the ground state of the TFI model, we use a variational MPS ground state search: we set the MPS bond dimension to $25$ and perform $5$ DMRG sweeps to get an approximate ground state in MPS form. We use the variational MPS solver provided by the mpnum toolbox \cite{mpnum}.
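For a single qubit, a mean value such as $\langle\sigma_z\rangle$ can be estimated from POVM outcomes by averaging the dual coefficients $c_\alpha={\rm Tr}\left([M^{\alpha}_{\rm tetra}]^{-1}\sigma_z\right)$ over the samples, which mirrors the contraction of Fig.~\ref{fig3} in the simplest case. In this sketch the test state and the sample size are arbitrary choices for illustration.

```python
import numpy as np

# Tetrahedral POVM and its dual elements, as in the text
I2 = np.eye(2, dtype=complex)
paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]
s = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
M = np.array([(I2 + sum(v[k] * paulis[k] for k in range(3))) / 4 for v in s])
T = np.real(np.einsum('aij,bji->ab', M, M))
D = np.einsum('ab,bij->aij', np.linalg.inv(T), M)

# Test state |0><0|, with outcome distribution P[alpha] = Tr(rho M^alpha)
rho = np.array([[1, 0], [0, 0]], dtype=complex)
P = np.real(np.einsum('ij,aji->a', rho, M))

# <sigma_z> as a weighted average of the dual coefficients c_alpha
c = np.real(np.einsum('aij,ji->a', D, paulis[2]))  # c_a = Tr(D^a sigma_z)
exact = float(P @ c)                               # = Tr(rho sigma_z) = 1

# Monte-Carlo estimate from samples alpha_i ~ P, as done with the VAE
rng = np.random.default_rng(0)
samples = rng.choice(4, size=200_000, p=P)
estimate = c[samples].mean()
```

The statistical error of the estimate scales as $1/\sqrt{M}$ with the number of samples $M$; the same averaging, applied to products of dual coefficients at two sites, yields two-point correlation functions.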
\section{Variational Autoencoder Architecture}
In our work we use a conditional VAE \cite{sohn2015learning} to represent quantum states. A conditional VAE is a generative model expressed by the following probability distribution:
\begin{eqnarray}
p[x | \theta, h] = \int p[x | z, \theta, h] p[z] dz,
\end{eqnarray}
\noindent where $x$ is the data we want to simulate, $\theta$ represents the VAE parameters, which can be tuned to get the desired probability distribution over $x$, $h$ is the condition, and $z$ is a vector of latent variables. In our case $x$ is the quantum measurement outcome in one-hot notation. A collection of measurement outcomes is a matrix of size $N \times 4$, where $N$ is the number of particles in the chain and 4 is the number of possible outcomes of the tetrahedral IC POVM, so each row is either $[1\,0\,0\,0]$, $[0\,1\,0\,0]$, $[0\,0\,1\,0]$, or $[0\,0\,0\,1]$. The condition $h$ is the external magnetic field. The probability distribution $p[x | z, \theta, h]$ can thus be written as:
\begin{eqnarray}
p[x | z, \theta, h] = \prod_{i=1}^N \prod_{j=1}^4 \pi_{ij} (z, h, \theta)^{x_{ij}},
\end{eqnarray}
\noindent where $\pi_{ij}(z, h, \theta)$ is the neural network in our architecture; more precisely, $\pi_{ij}$ is the probability of the $j^{th}$ outcome of the POVM for the $i^{th}$ spin, with $\sum_{j=1}^4 \pi_{ij} = 1$ and $\pi_{ij} \geq 0$. The quantity $p[z]$ is the prior distribution over the latent variables, which is simply given by ${\cal N}(0,I) =\frac{1}{(2\pi)^{N/2}}\exp\left\{-\frac{1}{2}z^{\rm T}z\right\}$, with $I$ being the identity covariance matrix. We take the number of latent variables equal to the number of spins, $N$. Essentially, we want to optimize our VAE so that its probability distribution matches the probability of the quantum measurement outcomes as closely as possible. This can be done using the well-known maximum likelihood estimation:
\begin{eqnarray}
\theta_{MLE} = \underset{\theta}{\rm argmax}\sum_{i=1}^M \log(p[x_i | \theta, h]),
\label{eq:argmax}
\end{eqnarray}
\noindent where $\{x_i\}_{i=1}^M $ is the data set of outcome measurements. We cannot simply maximize this function using, for example, a gradient descent method, due to the presence of hidden variables in the structure of this function. However, we can overcome this problem by using the Evidence Lower Bound (ELBO) \cite{kingma2013auto} and the reparametrization trick shown in \cite{rezende2014stochastic}. The detailed description of the procedure is given in the Appendix A.
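The log-likelihood being maximized in Eq.~(\ref{eq:argmax}) is a sum of categorical log-probabilities of one-hot records. The sketch below evaluates it for a toy data set; the decoder output $\pi$ is a random placeholder, not a trained network.

```python
import numpy as np

# Toy setup: N = 3 spins, 4 POVM outcomes, M = 5 one-hot records x of
# shape (M, N, 4); pi stands in for the decoder output pi_ij
rng = np.random.default_rng(1)
N, M = 3, 5
pi = rng.dirichlet(np.ones(4), size=N)           # rows sum to 1
outcomes = rng.integers(0, 4, size=(M, N))
x = np.eye(4)[outcomes]                          # one-hot encoding

# log p[x|pi] = sum over records of sum_i sum_j x_ij log pi_ij
log_lik = float(np.sum(x * np.log(pi)))

# Equivalent direct form: pick out log pi at the observed outcome index
log_lik_direct = float(sum(np.log(pi[i, outcomes[m, i]])
                           for m in range(M) for i in range(N)))
assert np.isclose(log_lik, log_lik_direct)
```

In the actual VAE, $\pi$ depends on $(z, h, \theta)$ and the marginalization over $z$ makes this quantity intractable, which is exactly why the ELBO is used instead.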
Once trained, the VAE provides a simple and efficient way to produce new samples from its probability distribution. This is done in three steps. First, we produce a sample from the prior distribution $p[z] = {\cal N}(0, I)$. Next, we feed this sample and the external magnetic field value into the neural network decoder $\pi_{ij}(z, \theta, h)$, which returns the matrix of probabilities. Finally, we sample from the matrix of probabilities $\pi_{ij}(z, \theta, h)$ to generate ``fake'' measurement outcomes. A visual representation of the sampling method is shown in Fig. \ref{fig:sampling_architecture}.
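The three sampling steps can be sketched as follows. The decoder here is a hypothetical stand-in: any map from $(z, h)$ to an $N \times 4$ row-stochastic matrix plays the role of the trained network $\pi_{ij}(z, \theta, h)$.

```python
import numpy as np

def decoder(z, h, N=8):
    """Stand-in for the trained decoder network pi_ij(z, theta, h):
    returns an (N, 4) row-stochastic matrix. Purely illustrative."""
    logits = np.outer(np.tanh(z[:N]), np.ones(4)) + h
    logits += np.arange(4)                       # arbitrary fixed bias
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(2)
N, h = 8, 0.9
z = rng.standard_normal(N)                       # step 1: z ~ N(0, I)
pi = decoder(z, h, N)                            # step 2: decode to pi_ij
sample = np.array([rng.choice(4, p=row) for row in pi])  # step 3: sample
fake = np.eye(4)[sample]                         # one-hot "fake" outcomes
assert fake.shape == (N, 4) and np.all(fake.sum(axis=1) == 1)
```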
\begin{figure}
\centering
\includegraphics[scale=0.4]{sampling_architecture.png}
\caption{Sampling scheme with the trained VAE}
\label{fig:sampling_architecture}
\end{figure}
In many problems, gradients of observables with respect to model parameters yield quantities of interest; for example, one may consider the magnetic differential susceptibility tensor $\chi_{i j}=\partial \mu_i/\partial h_j$. Such gradients can be computed efficiently by backpropagation through the VAE architecture but, since samples from the VAE are discrete, straightforward backpropagation is impossible. In recent papers \cite{jang2016categorical, kusner2016gans, maddison2016concrete}, a method called the Gumbel-softmax was introduced to overcome this difficulty through continuous relaxation. The spirit, and hence the physical meaning, of the method may be understood from a short discussion of the so-called simulated annealing technique, which is often used to solve discrete optimization problems. Broadly speaking, simulated annealing rests on the introduction of a parameter that acts as an artificial ``temperature'' and varies continuously as the state of the system is modified in search of a global optimum. For some values of the temperature, a system that mostly explores the states neighboring its current one, moving among them and possibly toward the ``better'' ones, i.e. those with lower energy, may reach and remain close to a local optimum, or local energy minimum in the thermodynamic language. To avoid remaining stuck in such a locally optimal region, ``bad'' moves leading to worse (i.e. higher-energy) states are useful: they let the system explore the state space more completely and improve the chance of finding a global optimum, or at least of approaching it. To each move an energy variation $\Delta E$ is associated, and it is the continuous character of the fictitious temperature that makes the discrete problem continuous, since the acceptance probability $\exp(-\Delta E/k_{\rm B}T)$ of a state is a continuous function of $T$.
Although this approach has been known for a long time \cite{Metropolis1953}, it remains topical and under active development \cite{RevModPhys.80.1061,yavorsky2019highly}. The method of continuous relaxation we use also exploits such an artificial temperature to make discrete samples continuous.
The Gumbel-softmax trick, consists of three steps:
\begin{enumerate}
\item We calculate the matrix of log probabilities, taking the element-wise logarithm of the decoder network output:\\ $\log \Pi = \begin{bmatrix} \log \pi_{11} & \log \pi_{12} & \log \pi_{13} & \log \pi_{14}\\ \log \pi_{21} & \log \pi_{22} & \log \pi_{23} & \log \pi_{24} \\ \vdots & \vdots & \vdots & \vdots \\
\log \pi_{N1} & \log \pi_{N2} & \log \pi_{N3} & \log \pi_{N4}\end{bmatrix}$,
\item We generate a matrix of samples from the standard Gumbel distribution $G$ and add it element-wise to the matrix of log probabilities $\log \Pi$: $Z = \log \Pi + G$,
\item Finally, we take the ${\rm softmax}$ function of the result from the previous step: $x_{\rm soft}^{\rm fake}(T) = {\rm softmax}(Z / T)$, where $T$ is the temperature of the softmax. The softmax function is applied row-wise and defined by the expression ${\rm softmax}(x_{ij})=\frac{\exp\left(x_{ij}\right)}{\sum_{j}\exp\left(x_{ij}\right)}$.
\end{enumerate}
\noindent The quantity $x_{\rm soft}^{\rm fake}(T)$ has a number of remarkable properties: first, it becomes an exact one-hot sample when $T \rightarrow 0$; second, we can backpropagate through soft samples for any $T>0$. The method is validated in the next section.
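The three steps above can be sketched with plain numpy (in practice this lives inside an autodiff framework so that gradients flow through the soft samples). The probabilities $\pi$ below are random placeholders for a decoder output.

```python
import numpy as np

def softmax(a, axis=1):
    # Numerically stable row-wise softmax
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(3)
pi = rng.dirichlet(np.ones(4), size=6)           # 6 spins, 4 outcomes

# Steps 1-3: log probabilities, Gumbel noise, temperature softmax
g = -np.log(-np.log(rng.uniform(size=pi.shape)))  # standard Gumbel samples
Z = np.log(pi) + g
x_soft = softmax(Z / 1.0)                        # smooth sample at T = 1
x_hard = softmax(Z / 1e-8)                       # T -> 0 limit

# Rows stay normalized, and the T -> 0 limit is the one-hot argmax of Z
assert np.allclose(x_soft.sum(axis=1), 1)
assert np.allclose(x_hard, np.eye(4)[Z.argmax(axis=1)], atol=1e-9)
```

Because `x_soft` is a smooth function of `Z` for any $T>0$, gradients with respect to the decoder parameters are well defined, which is what enables the susceptibility calculation below.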
\section{Results}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{corr_zz.pdf}
\caption{ Two-point correlation function $\langle \sigma^z_1 \sigma^z_n \rangle$ for different values of external magnetic field $h_x$. }
\label{fig:corr_zz}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{corr_xx.pdf}
\caption{Two-point correlation function $\langle \sigma^x_1 \sigma^x_n \rangle$ for different values of external magnetic field $h_x$. }
\label{fig:corr_xx}
\end{figure}
Here, we show that the VAE trained on a set of preliminary measurements is capable of describing the physics of the whole family of TFI models. We validate our results by comparing VAE-based calculations with numerically exact calculations performed by the variational MPS algorithm \cite{orus2014practical}. To assess the capabilities of the VAE, we consider a spin chain with $32$ spins. We calculate the MPS representation of the ground state and extract information from it by performing measurements over the state. The external field in the $x$-direction is varied from $0$ to $2$ with a step of $0.1$. The VAE is trained on a data set (TFI measurement outcomes) consisting of $10.5$ million samples in total: $21$ external fields $h_x$ with $500\,000$ samples per field.
To evaluate the VAE performance, we directly compare the numerically exact correlation functions with those reconstructed with our VAE. For $n=1,\ldots,32$, $\langle \sigma^z_1 \sigma^z_n \rangle$ and $\langle \sigma^x_1 \sigma^x_n \rangle$ are shown in Fig.~\ref{fig:corr_zz} and Fig.~\ref{fig:corr_xx}, respectively; and we compare the numerically exact and the VAE-based average magnetizations along $x$, given by $\langle \sigma^x_n \rangle$ for each position of the spin along the chain, in Fig.~\ref{fig:meanx}. We see that the VAE captures well the physics of the one- and two-point correlation functions. Figure~\ref{fig:magn} shows the total magnetizations, $\mu_x$ and $\mu_z$, in the $x$ and $z$ directions respectively, with $\mu_i = \frac{1}{N}\sum_{j=1}^{N}\langle\sigma_{i}^{j}\rangle$, and we see that the VAE is well suited for the description of the quantum phase transition and also of finite-size effects: while for the infinite TFI chain, i.e. in the thermodynamic limit, the phase transition is observed at $h_x=1$, the finite size of the system shifts the critical point to $h_x \approx 0.9$. Also note that in the $T\rightarrow 0$ limit, the magnetization $M$ defined in Eq.~(\ref{magthermal}) coincides exactly with the magnetization $\mu$ defined above.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{meanx.pdf}
\caption{Average magnetization per site along $x$ for different values of external magnetic field $h_x$.}
\label{fig:meanx}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{magnetization.pdf}
\caption{Total magnetization along $x$ and $z$ axes for different values of external magnetic field $h_x$. The location of the critical region is slightly shifted towards smaller values of $h_x$ due to the finite size of the chain.}
\label{fig:magn}
\end{figure}
A back-propagation algorithm combined with the Gumbel-softmax trick may be used to evaluate the derivative of an output over an input. We use this approach to calculate some elements of a magnetic differential susceptibility tensor $\chi_{i j}=\partial \mu_i/\partial h_j$, in particular, $\chi_{x x}$ and $\chi_{z x}$ shown in Fig.~\ref{fig:Mx_grad}. The backpropagation-based magnetic differential susceptibility agrees well with the numerically calculated one (central differences). The main advantage of the backpropagation-based calculation is its numerical efficiency. The VAE may thus be trained with an arbitrary set of external parameters, i.e. not only $h_x$, but also $h_y$ and $h_z$, and yield the full differential susceptibility tensor.
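The consistency check between backpropagated and finite-difference derivatives can be mimicked on a toy curve. The sketch below is purely illustrative: the closed-form $\mu_x(h)$ is a hypothetical stand-in for the VAE-based magnetization (the actual method differentiates through the VAE), and central differences are compared against the exact derivative.

```python
import numpy as np

def mu_x(h):
    # Hypothetical smooth magnetization curve standing in for the
    # VAE-based estimate of mu_x(h_x); not the TFI result
    return h / np.sqrt(1 + h**2)

def chi_analytic(h):
    # Exact d mu_x / d h for the toy curve (role of backpropagation)
    return (1 + h**2) ** (-1.5)

def chi_central(h, eps=1e-5):
    # Central differences, as used for the numerical comparison
    return (mu_x(h + eps) - mu_x(h - eps)) / (2 * eps)

h_grid = np.linspace(0.0, 2.0, 21)
assert np.allclose(chi_central(h_grid), chi_analytic(h_grid), atol=1e-8)
```

Central differences carry an $O(\varepsilon^2)$ truncation error and require two extra evaluations per field value, whereas backpropagation yields the whole gradient in a single pass, which is the efficiency advantage mentioned above.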
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{magnetization_dh.pdf}
\caption{Backpropagation-based and numerically computed (central differences) values of $\chi_{xx}$ and $\chi_{zx}$ for different values of external magnetic field $h_x$. Both derivatives fluctuate slightly due to VAE error.}
\label{fig:Mx_grad}
\end{figure}
At this stage, we could conclude that the VAE is capable of describing the physics of one- and two-point correlation functions, and hence the TFI physics. However, notwithstanding the ability of the VAE to yield correlation functions that fit the numerically exact ones well, this is not yet a full proof that it represents quantum states well. To address this point, we consider a small spin chain (five spins with the TFI Hamiltonian and an external magnetic field $h_x=0.9$) for which we calculate both the exact mass function and that estimated from VAE samples. Figure~\ref{vae_mass_vs_exact_mass} shows that the VAE result again fits the numerically exact mass function with high accuracy. Further, we calculate the Bhattacharyya coefficient \cite{bhattacharyya1943measure}, ${\rm BC}(p_{\rm vae}, p_{\rm exact})=\sum_\alpha p_{\rm exact}[\alpha]\sqrt{\frac{p_{\rm vae}[\alpha]}{p_{\rm exact}[\alpha]}}$, as a function of the external magnetic field $h_x$. Results reported in Fig.~\ref{fidelity_vs_field} show that ${\rm BC}(p_{\rm vae}, p_{\rm exact})>0.99$ over the whole $h_x$ range, which proves that the VAE represents a quantum state well, at least for small spin chains.
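The Bhattacharyya coefficient reduces to $\sum_\alpha \sqrt{p_{\rm exact}[\alpha]\,p_{\rm vae}[\alpha]}$ whenever $p_{\rm exact}[\alpha]>0$, and can be sketched directly. The distributions below are random placeholders over the $4^5$ outcomes of a five-spin chain, not the actual TFI mass functions.

```python
import numpy as np

def bhattacharyya(p, q):
    # BC(p, q) = sum_alpha sqrt(p[alpha] q[alpha]); equals 1 iff p = q
    return float(np.sum(np.sqrt(p * q)))

rng = np.random.default_rng(4)
p_exact = rng.dirichlet(np.ones(4**5))           # 5 spins -> 4^5 outcomes
noise = rng.dirichlet(np.ones(4**5))
p_vae = 0.99 * p_exact + 0.01 * noise            # slightly perturbed copy

assert np.isclose(bhattacharyya(p_exact, p_exact), 1.0)
assert 0.9 < bhattacharyya(p_exact, p_vae) <= 1.0 + 1e-12
```

By the Cauchy-Schwarz inequality ${\rm BC}\le 1$, with equality exactly when the two mass functions coincide, which is why values above $0.99$ indicate a faithful representation.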
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{vae_mass_vs_exact_mass.pdf}
\caption{Comparison of two POVM-induced mass functions ($P[\alpha] = {\rm Tr}(\rho M^\alpha)$) for a chain of size $5$: the numerically exact mass function and the mass function reconstructed from VAE samples. A sequence of indices $\alpha$ has been transformed into a single multi-index, and the indices have been ordered to put the numerically exact probabilities in descending order. A good agreement between the mass functions is observed.}
\label{vae_mass_vs_exact_mass}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{fidelity_vs_field}
\caption{Dependence of the classical fidelity on the external magnetic field. A high predictive accuracy is demonstrated for the whole set of fields.}
\label{fidelity_vs_field}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.4]{entropy.pdf}
\caption{Comparison of the numerically exact R\'enyi entropy and that reconstructed from the VAE samples for different values of $n$.}
\label{entropy}
\end{figure}
The structure of entanglement is another interesting object we would like to validate. The entanglement between two parts of the chain, split into $n$ left spins and $N-n$ right spins, can be described by the R\'enyi entropy of the left part of the chain: $S_\alpha=\frac{1}{1-\alpha}\log{\rm Tr}\rho_{n}^\alpha$, where $\rho_{n}$ is the density matrix of the first $n$ spins in the chain. We estimate the R\'enyi entropy of order $2$, $S_2=-\log({\rm Tr}\rho_{n}^2)$, since it can be efficiently calculated both from the matrix product representation of the density matrix and from the VAE samples. However, as the sample-based estimation of the entanglement entropy has a variance which grows exponentially with the number of spins, we consider a small spin chain of size $10$. A direct comparison between the numerically exact and the VAE-based entanglement entropies is shown for different values of $n$ in Fig.~\ref{entropy}. For this particular case, the VAE clearly overestimates the entanglement entropy. This undesirable effect is, in fact, observed for all sizes of spin chains, even for the spin chain of size $5$ for which we have excellent agreement between the numerically exact mass function and the VAE-based result. One reason is that $S_2$ is very sensitive to small errors in the mass function, but a deeper understanding of why the VAE overestimates the entanglement entropy is the object of future investigation.
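The definition of $S_2$ can be illustrated on minimal examples: a pure state has $S_2=0$, the maximally mixed qubit has $S_2=\log 2$, and tracing out one half of a Bell pair also gives $S_2=\log 2$, the maximal half-chain value for two qubits.

```python
import numpy as np

def renyi2(rho):
    # S_2 = -log Tr(rho^2)
    return float(-np.log(np.real(np.trace(rho @ rho))))

# Pure state: S_2 = 0; maximally mixed qubit: S_2 = log 2
psi = np.array([1, 1]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())
rho_mixed = np.eye(2) / 2
assert np.isclose(renyi2(rho_pure), 0.0)
assert np.isclose(renyi2(rho_mixed), np.log(2))

# Reduced state of a Bell pair: the partial trace gives I/2, so the
# half-chain S_2 of a maximally entangled pair is log 2
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho_full = np.outer(bell, bell).reshape(2, 2, 2, 2)
rho_left = np.einsum('ijkj->ik', rho_full)       # trace out second qubit
assert np.isclose(renyi2(rho_left), np.log(2))
```

The sensitivity mentioned above is visible here: ${\rm Tr}\rho_n^2$ enters through a logarithm, so a small relative error in the mass function translates into a comparable absolute error in $S_2$ near the pure-state limit.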
\section{Conclusion}
In the present work, we studied the ability of a VAE to reconstruct the physics of quantum many-body systems, using the transverse-field Ising model as a non-trivial example. We used the IC POVM to map the quantum problem onto a probabilistic domain and vice-versa. We trained the VAE on a set of samples from the transformed quantum problem, and our numerical experiments show the following results:
\begin{itemize}
\item For a large system (32 spins), the VAE's reliability is verified by comparing one- and two-point correlation functions.
\item For a small system (5 spins), the VAE's reliability is verified by direct comparison of mass functions.
\item The VAE can capture a quantum phase transition.
\item The response functions (magnetic differential susceptibility tensor) can be obtained using backpropagation through VAE.
\item Despite the very good agreement between the VAE-based mass function and the true mass function, the VAE shows limited performance in determining the entanglement entropy. This point is the object of further development.
\end{itemize}
Our method can be extended to any other thermodynamic system by introducing the temperature as an external parameter, thereby also covering thermal phase transitions. As one can calculate different thermodynamic quantities by applying backpropagation through the VAE, a worthwhile and highly complex system to study would be water in its different phases, to test recent new ideas and models \cite{Artemov2014,Artemov2019}. \\
Our code for our numerical experiments is available on the GitHub repository website \cite{github}.
\acknowledgments
The authors thank Stepan Vintskevich for fruitful discussions. I.A.L. thanks the Russian Foundation for Basic Research for partial support under Project No. 18-37-00282 and Project No. 18-37-20073. S.N.F. acknowledges the Russian Foundation for Basic Research for partial support under Project No. 18-37-20073. A. R. and H. O. acknowledge partial support from the Skoltech NGP Program (Skoltech-MIT joint project).
\section{INTRODUCTION}
The control of dynamic systems in safety-critical infrastructures such as power systems, factory automation or traffic networks has been automated more and more over the last decades. While the increasing degree of automation involves opportunities to improve the system's efficiency and integrity, it further increases the threat of malicious attacks
on physical or cyber components of the system.
It is therefore crucial to develop methods for preventing, identifying, and handling attacks. The communication layers of cyber-physical systems are protected by means of IT
security, and also the system's resilience on the control layer can be increased, e.g., by robust control. Nevertheless, absolute safety cannot be guaranteed. Therefore, each autonomous system should be equipped with methods for attack detection and identification to reveal the existence and location of an attack.
We consider a networked control system with states $x \in \mathbb{X} \subseteq \mathbb{R}^{d_x}$, initial state $x^0 \in \mathbb{X}$ and control $u\in \mathbb{U} \subseteq \mathbb{R}^{d_u}$, that consists of a set $\mathcal{P}$ of physically coupled subsystems with nonlinear dynamics. The dynamics of the system are exposed to possible attacks, where an \emph{attack} is modeled as a modification $a(u) \neq u$ of the input $u \in \mathbb{U}$ through an \emph{attack function} $a: \mathbb{U} \rightarrow \mathbb{U}$. Modeling an attack as a disturbance in the input is a frequently used attack model, see~\cite{Pasqualetti2013Attack,Giraldo2018Survey,Ananduta2020Resilient}, and implies that the intended controller action does not match the actual actuation of the system~\cite{Giraldo2018Survey}. While controller or actuator attacks are thus clearly covered by the attack model, sensor attacks can also be expressed by suitable attack functions since a sensor can be modeled as a simple input-output device. An attack can alter the local inputs $u_I \in \mathbb{U}_I$ in one or several subsystems $I\in \mathcal{P}$, and modify one or multiple input components $(u_I)_i$. It may or may not depend on the undisturbed control $u$, and we do \emph{not} assume the set of possibly occurring attacks or any attack patterns to be known.
Denoting the local states of subsystem $I$ by $x_I \in \mathbb{X}_I$, the nonlinear discrete-time dynamics of subsystem $I$ including possible, unknown attacks $a_I$ are given as
\begin{equation}
\label{eq: local dynamics}
\begin{aligned}
x_I^+ &= f_I(x_I, a_I(u_I), z_{\mathcal{N}_I}), \\
z_I &= h_I(x_I).
\end{aligned}
\end{equation}
The function $h_I$ relates the local states $x_I$ to the local coupling variables $z_I \in \mathbb{R}^{d_{z_I}}$ through which subsystem $I$ influences other subsystems. By $\mathcal{N}_I$ we denote the neighborhood of subsystem $I$, that is defined as the set of all subsystems~$J$ influencing the dynamics of $x_I$ through couplings $z_J$.
\subsection{Related Work}
A series of recently published surveys shows comprehensive research on control and model-based approaches towards attack detection and identification in cyber-physical systems~\cite{Giraldo2018Survey,Ding2018survey,Dibaji2019Systems}.
Many proposed methods involve observer-based filters that are tailored for linear dynamics, e.g.,~\cite{Pasqualetti2013Attack,Gallo2018Distributed,Shames2011Distributed,Ding2008Model}. Both centralized~\cite{Ding2018survey} and distributed~\cite{Gallo2018Distributed,Pasqualetti2012Consensus} filters requiring only local model knowledge exist.
Similar to our approach, some methods involve optimization problems to compute plausible sparse attack signals~\cite{Liu2014Detecting} or update the probability of hypotheses on the attack constellation~\cite{Ananduta2020Resilient}.
Some papers deal with networked systems with special properties such as consensus networks or weakly coupled subsystems~\cite{Pasqualetti2012Consensus}, other frameworks depend on the attackers' resources~\cite{Teixeira2015secure}. While some of these methods for linear systems have been applied to attack identification in power systems~\cite{Pasqualetti2013Attack,Shames2011Distributed}, using linearized swing equations to model the dynamics in power systems is only valid as long as the phase angles are close to each other~\cite{Pan2015Online}. Since this cannot be guaranteed in case of attack, identification methods designed for systems with nonlinear dynamics should be considered. To this end, de Persis and Isidori propose a differential-geometric characterization of attack identification in nonlinear systems~\cite{DePersis2001geometric}. They present solvability conditions in terms of an unobservability distribution and derive a detection filter.
However, the proposed conditions result in a centralized approach that is unsuitable for large-scale systems.
In contrast, Esfahani et al.\ propose a scalable residual generator for nonlinear systems with additive attacks, which is based on solving a sequence of quadratic programs~\cite{Esfahani2012tractable}. The nonlinearities in the dynamics are not taken as part of the model but as disturbances following some known patterns, and a linear filter which is robust towards these disturbances is applied. An approach to attack identification in power systems with modeled nonlinearities is presented in~\cite{Pan2015Online}. Similar to our method, a sparse signal recovery problem is solved to find an attack signal explaining the observed behavior. While the authors consider several subsequent time steps under constant attack and apply linear regression requiring measurements of all phase angles, our approach uses measurements at some coupling nodes and one sampling time only. It can be classified as a hierarchical identification scheme since it requires aggregated sensitivity information but no global knowledge of the dynamics of each subsystem.
Further methods for linear and nonlinear systems can be found in the area of \emph{fault} detection and identification, which focuses on unintended system failures rather than malicious attacks~\cite{Boem2018Plug}. In this field, it is common to assume that the set of possible faults in nonlinear systems is known and finite, which is an invalid assumption for \emph{attack} identification~\cite{Ding2018survey}.
\subsection{Contribution}
We present a scalable attack identification method for distributed control systems in Section~\ref{sec: identification method}, which was introduced and successfully used to identify faulty buses in power systems in our preliminary work~\cite{Braun2020Hierarchical}.
In contrast to, e.g., \cite{Pasqualetti2013Attack,Gallo2018Distributed,Shames2011Distributed,Esfahani2012tractable}, it is designed for explicitly modeled nonlinearities in the dynamics.
It involves the exchange of predicted nominal values for certain coupling states and local sensitivity information as in Fig.~\ref{fig: exchange nominal values}, based on which we approximate how an attack spreads through the network. Attack identification is then approached by solving a sparse signal recovery problem. While requiring global knowledge of sensitivity information evaluated at the current iterate, the method involves neither the global dynamics nor cost functions nor measurements of all states, unlike~\cite{Pan2015Online,DePersis2001geometric,Teixeira2015secure}. It is designed for nonlinear dynamics but, in contrast to~\cite{Boem2018Plug}, does not assume all potential attacks to be known, nor makes further restrictions like considering only additive attacks as in~\cite{Esfahani2012tractable}.
\begin{figure}[b]
\centering
\begin{overpic}[tics=5,width=\columnwidth]{contract_exchange.pdf}
\put(12.7,14.3){II}
\put(48.8,14.3){I}
\put(82.4,14.3){III}
\input{nominal_couplings.tex}
\put(41,34.5){\small $\bar{z}_{\text{I}}$}
\put(23,12.5){\small $\bar{z}_{\text{II}}$}
\put(58,12.5){\small $\bar{z}_{\text{III}}$}
\put(26.5,10.2){\contractOne}
\put(43.8,32.4){\contractTwo}
\put(62.2,10.2){\contractThree}
\end{overpic}
\caption{Subsystems in a distributed control scheme with physical couplings shown by dashed edges. They exchange information about nominal future trajectories of their local couplings, optionally also sensitivity information (here depicted as blue areas and intervals around the nominal values).}
\label{fig: exchange nominal values}
\end{figure}
The main contribution of the paper is presented in Section~\ref{sec: sufficient conditions}, proving sufficient conditions under which the identification method successfully uncovers all attacked components even for nonlinear dynamics and nonlinear couplings of the subsystems.
Remarkably, the proposed rigorous guarantees of the identification method can be applied to realistic nonlinear case studies, as we illustrate with experiments on the IEEE~30 bus power system in Section~\ref{sec: numerical experiments}.
\subsection{Distributed Control Setup}
A distributed system structure as in~\eqref{eq: local dynamics} suggests the application of distributed control methods, which typically scale much better than centralized approaches. For an overview of existing methods, in particular approaches in distributed model predictive control (MPC), we refer to the survey papers~\cite{Christofides2013Distributed,Scattolini2009Architectures}. In contrast to fully decentralized approaches, distributed control schemes are based on the exchange of some information between the subsystems. This typically allows to reduce the uncertainty in the mutual interference and can be employed to design local controllers that are robust towards unknown couplings of neighbored subsystems.
This idea is implemented in~\cite{Farina2012Distributed}, where the subsystems exchange corridors in which future coupling values are guaranteed to lie, and apply robust MPC controllers to approach the uncertainties. The concept is shown in Fig.~\ref{fig: exchange nominal values} and formally described in~\cite{Lucia2015Contract}, supplemented by conditions for stability guarantees.
In our previous work~\cite{Braun2020Hierarchical}, we applied the distributed robust MPC method from~\cite{Lucia2015Contract} to a nonlinear system of systems under attack, designing the local control inputs~$u_I$ to be robust towards uncertain coupling values~$z_{\mathcal{N}_I}$ of neighbors as well as potentially disturbed internal inputs~$a_I(u_I)$.
In this way, constraint satisfaction is achieved in each subsystem $I$, even if an attack disturbs $u_I$ or causes the neighbors' couplings $z_{\mathcal{N}_I}$ to deviate from the nominal values.
\begin{figure}[t]
\centering
\begin{overpic}[width=\columnwidth, tics=5]{overview.pdf}
\put(12,53){Attack Detection}
\put(58,53){Attack Identification}
\footnotesize
\put(16,39.9){\fontsize{7.5pt}{9pt}$\|\Delta z_I\| > \tau_\text{D}$}
\put(35,41.4){Yes}
\put(8.5,41.4){No}
\put(7.8,22.5){Measurements}
\put(0.5,24){$\Delta z$}
\put(31.1,22.5){DMPC}
\put(37,17){$u$}
\put(34.5,10.5){\includegraphics[width=0.4cm]{devil.pdf}}
\put(26.5,11){$a(u)$}
\put(11,9){System}
\put(56,37.5){Locally compute sensitivities}
\put(66.5,30.5){\scriptsize$S_I^a, S_I^\mathcal{N}$}
\put(64, 24){Set up and solve}
\put(58, 20.5){identification problem~\eqref{opt: identification problem}}
\put(76, 16.5){\scriptsize$\Delta a^\ast$}
\put(61, 10.9){Identified attack set}
\put(67, 7.4){$\text{supp}(a^\ast)$}
\end{overpic}
\caption{Outline of the hierarchical attack detection and identification, embedded in a distributed model predictive control (DMPC) loop and executed at each sampling time. Identification is based on exchanging locally computed sensitivity information and solving a central identification problem.}
\label{fig: overview method}
\end{figure}
The identification method analyzed in this paper and illustrated in Fig.~\ref{fig: overview method} is applicable together with distributed closed-loop control schemes like the one in~\cite{Braun2020Hierarchical}. While it is mostly decoupled from the specific design of, e.g., the exchanged corridors, it requires the exchange of predicted nominal coupling values $\bar{z}_I$ as in Fig.~\ref{fig: exchange nominal values}. They indicate undisturbed reference values that the coupling variables $z_I$ will attain if no attack $a_I$ occurs in subsystem $I$ and the neighboring coupling variables $z_{\mathcal{N}_I}$ also behave according to their nominal values~$\bar{z}_{\mathcal{N}_I}$.
Closely following the notation in~\cite{Lucia2015Contract}, we denote by $\bar{z}_I(k|t)$ the nominal coupling value of subsystem $I$ for time $k$ calculated at time $t$. Similarly, $u_I(k|t)$ is the undisturbed input at time $k$ computed by the MPC scheme at time $t$ and $\bar{z}_{\mathcal{N}_I}(\cdot|t)$ is the function of nominal coupling values of neighboring subsystems on the prediction horizon, assumed to be discretized piecewise constant. The predicted nominal states $\bar{x}_I(k|t)$ and the nominal coupling values $\bar{z}_I(k|t)$ to be exchanged at time $t$ are computed as
\begin{equation}
\label{eq: nominal values}
\begin{aligned}
\bar{x}_I(k|t) &\coloneqq f_I\left(x_I(k-1|t), u_I(k-1|t), \bar{z}_{\mathcal{N}_I}(\cdot|t)\right), \\
\bar{z}_I(k|t) &\coloneqq h_I\left(\bar{x}_I(k|t)\right),
\end{aligned}
\end{equation}
for $k = t+1, \dots, t+N$ with prediction horizon $N$.
After receiving the nominal trajectory $\bar{z}_J(\cdot|t)$ from each neighbor $J \in \mathcal{N}_I$ at time $t$, each subsystem $I$ combines its neighbors' nominal values for the next sampling time $t+1$ as
\begin{align*}
\bar{z}_{\mathcal{N}_I}(k|t+1) \coloneqq \Pi_{J\in \mathcal{N}_I} \bar{z}_J(k|t).
\end{align*}
In order to obtain initial nominal coupling values $\bar{z}_I(k|0)$, we assume the system to be in steady state such that $h_I(x_I^0)$ for all $I \in \mathcal{P}$ provide suitable initial values. For a general procedure to obtain initial values we refer to~\cite{Braun2020Hierarchical}.
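The recursion in Eq.~(\ref{eq: nominal values}) can be sketched for a single subsystem. The linear dynamics below are a hypothetical stand-in for $f_I$ and $h_I$ (the method itself handles nonlinear maps); all matrices and the horizon are arbitrary illustrative choices.

```python
import numpy as np

# Toy linear subsystem standing in for f_I, h_I:
# x_I^+ = A x_I + B u_I + E z_N,  z_I = C x_I  (hypothetical matrices)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
E = np.array([[0.05], [0.02]])
C = np.array([[1.0, 0.0]])

def nominal_trajectory(x0, u_seq, zN_seq):
    """Roll out x_bar(k|t) and z_bar(k|t) for k = t+1, ..., t+N."""
    x, zbar = x0, []
    for u, zN in zip(u_seq, zN_seq):
        x = A @ x + B @ u + E @ zN               # nominal state update
        zbar.append(C @ x)                       # nominal coupling value
    return np.array(zbar)

x0 = np.array([1.0, 0.0])
N = 4                                            # prediction horizon
u_seq = [np.array([0.0])] * N                    # zero nominal input
zN_seq = [np.array([0.0])] * N                   # neighbors at nominal zero
zbar = nominal_trajectory(x0, u_seq, zN_seq)
```

Each subsystem broadcasts the resulting $\bar{z}_I(\cdot|t)$ to its neighbors, which is the only model information that crosses subsystem boundaries in this step.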
\section{HIERARCHICAL IDENTIFICATION METHOD}
\label{sec: identification method}
In accordance with relevant literature, such as~\cite{Pasqualetti2013Attack,Ananduta2020Resilient}, we distinguish between attack detection and identification as the problems to uncover the presence and location of an attack, respectively. Attack detectors typically monitor some system outputs and compare estimates with measurements to detect unexpected deviations that might indicate an attack~\cite{Dibaji2019Systems,Boem2018Plug}. For attack identification, we consider methods revealing the points of attack by means of the \emph{attack set}, which is defined in the following, similar to~\cite{Pasqualetti2013Attack}.
\begin{definition}[Attack Set]\label{def: attack set}
Let $u \in \mathbb{U}$ be an undisturbed controller input and $a(u)$ the attacked input tampering with the dynamics according to~\eqref{eq: local dynamics}. The \emph{attack set} $\text{supp}(a)$ of $a$ is defined as the set of all control indices which are affected by the attack, i.e., $\text{supp}(a) \coloneqq \{i: \left(a(u)\right)_i \neq u_i\} \subseteq \{1, \dots, d_u\}.$
\end{definition}
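A minimal numerical counterpart of this definition, with an optional tolerance added for floating-point comparisons, could read:

```python
import numpy as np

def attack_set(u, a_u, tol=0.0):
    """supp(a): the set of indices i with (a(u))_i != u_i.
    A strictly positive tol turns the exact definition into a
    numerically robust variant."""
    u, a_u = np.asarray(u, float), np.asarray(a_u, float)
    return {i for i in range(u.size) if abs(a_u[i] - u[i]) > tol}
```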
The fields highlighted in blue in Fig.~\ref{fig: overview method} give an overview of the method for attack detection and identification that is presented in the following. It is embedded in a classical control loop with a distributed MPC controller and performed at each sampling time. The identification method is executed only if the detection scheme triggers an alarm. In the following, we consider one fixed sampling time and omit the time indices for the sake of brevity.
One step towards a hierarchical scheme (a detailed discussion follows) is to monitor the measurements of only the coupling variables~$z_I$ in each subsystem instead of all global states $x$.
By definition, the nominal coupling values~$\bar{z}_I$ provide suitable estimates of the expected values in an undisturbed scenario. If in any subsystem $I$ the estimation error $\|\widetilde{z}_I - \bar{z}_I\|$ with measured coupling values $\widetilde{z}_I$ exceeds some detection threshold $\tau_\text{D}$, our detection method raises an alarm. Throughout this paper, we assume all coupling variables $z_I$ to be measurable without any measurement noise, i.e., $\widetilde{z}_I = z_I$. We further define the deviation $\widetilde{z}_I - \bar{z}_I$ from the nominal value as $\Delta z_I$.
Since all subsystems are physically coupled, a significant deviation $\|\Delta z_I\| > \tau_\text{D}$ from the nominal values $\bar{z}_I$ in some subsystem $I$ may be caused by some internal attack $a_I$ in $I$, but may just as well result from an attack $a_J$ in some other subsystem $J\neq I$, the impact of which spreads through the network.
The proposed attack identification is based on monitoring the deviations $\Delta z_I$ in the coupling values and figuring out at each time step $t$ in which subsystems the local inputs $u_I(t)$ are disturbed by some attack $a_I(u_I(t)) \neq u_I(t)$. For this purpose, we derive linear equations approximating the propagation of an attack through the network of subsystems.
According to the system dynamics~\eqref{eq: local dynamics}, the coupling variables $z_I= h_I \circ f_I(x_I, a_I(u_I), z_{\mathcal{N}_I})$ depend on $x_I, a_I(u_I)$ and $z_{\mathcal{N}_I}$, and we set $\zeta_I \coloneqq h_I \circ f_I$. The nominal coupling values are defined in~\eqref{eq: nominal values} such that \mbox{$\bar{z}_I = \zeta_I(x_I, u_I, \bar{z}_{\mathcal{N}_I})$.}
In order to analyze which deviations $\Delta z_I$ are caused by disturbances in $a_I(u_I)$ and $z_{\mathcal{N}_I}$, we compute a first-order Taylor approximation of $\zeta_I$ in $a_I(u_I)$ and $z_{\mathcal{N}_I}$ around the nominal value $(x_I, u_I, \bar{z}_{\mathcal{N}_I})$. Denoting the deviation $a_I(u_I) - u_I$ of the potentially disturbed input $a_I(u_I)$ from the undisturbed controller input $u_I$ by $\Delta a_I$, and the deviation $z_{\mathcal{N}_I} - \bar{z}_{\mathcal{N}_I}$ by $\Delta z_{\mathcal{N}_I}$, it holds by Taylor's theorem for $\Delta a_I, \Delta z_{\mathcal{N}_I} \rightarrow 0$:
\begin{equation}
\begin{aligned}
\label{eq: approximation attack propagation}
\Delta z_I
&= \frac{\partial \zeta_I}{\partial a_I}(x_I, u_I, \bar{z}_{\mathcal{N}_I}) \Delta a_I \\
&+ \frac{\partial \zeta_I}{\partial z_{\mathcal{N}_I}}(x_I, u_I, \bar{z}_{\mathcal{N}_I})\Delta z_{\mathcal{N}_I} + R_I,
\end{aligned}
\end{equation}
where an estimation of the remainder term $R_I$ is given in Lemma~\ref{lem: estimation remainder term}.
The Jacobians $\frac{\partial \zeta_I}{\partial a_I}$ and $\frac{\partial \zeta_I}{\partial z_{\mathcal{N}_I}}$ evaluated at $(x_I, u_I, \bar{z}_{\mathcal{N}_I})$ are computed locally by each subsystem applying the chain rule on $\zeta_I = h_I \circ f_I$ and calculating
\begin{align*}
\frac{\partial \zeta_I}{\partial a_I}(x_I, u_I, \bar{z}_{\mathcal{N}_I}) = \frac{\partial h_I}{\partial x_I}\left(f_I(x_I, u_I, \bar{z}_{\mathcal{N}_I})\right) \frac{\partial f_I}{\partial a_I}(x_I, u_I, \bar{z}_{\mathcal{N}_I}).
\end{align*}
The Jacobian $\frac{\partial \zeta_I}{\partial z_{\mathcal{N}_I}}$ can be computed similarly. In the following, we denote these matrices by $S_I^a \coloneqq \frac{\partial \zeta_I}{\partial a_I}(x_I, u_I, \bar{z}_{\mathcal{N}_I})$ and \mbox{$S_I^\mathcal{N} \coloneqq \frac{\partial \zeta_I}{\partial z_{\mathcal{N}_I}}(x_I, u_I, \bar{z}_{\mathcal{N}_I})$.}
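While the sensitivities in this paper are evaluated by automatic differentiation (cf.\ Section~\ref{sec: numerical experiments}), a simple forward-difference sketch illustrates what is being computed; \texttt{zeta\_I} is a placeholder for $\zeta_I = h_I \circ f_I$:

```python
import numpy as np

def sensitivities(zeta_I, x_I, u_I, z_bar_N, eps=1e-6):
    """Forward-difference approximation of the Jacobians
    S_I^a (w.r.t. the input) and S_I^N (w.r.t. the neighbors'
    couplings), evaluated at (x_I, u_I, z_bar_N)."""
    u_I = np.asarray(u_I, float)
    z_bar_N = np.asarray(z_bar_N, float)
    z0 = np.asarray(zeta_I(x_I, u_I, z_bar_N), float)
    S_a = np.empty((z0.size, u_I.size))
    for j in range(u_I.size):
        du = np.zeros_like(u_I); du[j] = eps
        S_a[:, j] = (np.asarray(zeta_I(x_I, u_I + du, z_bar_N)) - z0) / eps
    S_N = np.empty((z0.size, z_bar_N.size))
    for j in range(z_bar_N.size):
        dz = np.zeros_like(z_bar_N); dz[j] = eps
        S_N[:, j] = (np.asarray(zeta_I(x_I, u_I, z_bar_N + dz)) - z0) / eps
    return S_a, S_N
```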
We assume that in the case of a detected attack all subsystems share locally evaluated sensitivity information by publishing $S_I^a$ and $S_I^\mathcal{N}$. Based on this data, equations~\eqref{eq: approximation attack propagation} for each subsystem $I$, with the remainder terms $R_I$ omitted, provide a linear approximation of the attack propagation through the network. For attack identification, we compute an attack with the sparsest possible attack set that explains the observed deviations $\Delta z_I$ by satisfying the linearized propagation equations. To this end, the following sparse signal recovery problem is solved:
\begin{equation}
\label{opt: identification problem}
\begin{aligned}
&\min_{\Delta a} && \|\Delta a\|_0 \\
&\text{ s.t.} &&S^a_I \Delta a_I = \Delta z_I - S^{\mathcal{N}}_I \Delta z_{\mathcal{N}_I} ~~\forall I \in \mathcal{P}.
\end{aligned}
\end{equation}
Here, $\|\Delta a\|_0$ denotes the $\ell_0$-``norm'' of $\Delta a$, counting the nonzero elements in $\Delta a$. For the corresponding attack $a$ with $\Delta a = a(u)- u$ it thus holds $|\text{supp}(a)| = \|\Delta a\|_0$ for the attack set $\text{supp}(a)$ as in Definition~\ref{def: attack set}.
Hence, an optimal solution $\Delta a^\ast$ of~\eqref{opt: identification problem} corresponds to an attack with the sparsest attack set among all attacks that fulfill the linear approximation of the attack propagation. Searching for a sparsest possible attack is a common approach for attack identification, see for example~\cite{Pasqualetti2013Attack,Pan2015Online,Liu2014Detecting}. It can be justified by the fact that attackers typically have restricted resources, so they can only disturb in a limited number of nodes~\cite{Liu2014Detecting}.
Since solving the $\ell_0$-minimization problem~\eqref{opt: identification problem} involves a mixed-integer program and is thus NP-hard, the $\ell_0$-``norm'' is commonly relaxed by the $\ell_1$-norm, which turns problem~\eqref{opt: identification problem} into a linear optimization problem~\cite{Braun2020Hierarchical,Pan2015Online,Candes2005Decoding}. In this paper, however, we focus on provable statements with the linearized attack propagation in the constraints and do not introduce another approximation error but \mbox{stick to the $\ell_0$-``norm''.}
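For small instances, problem~\eqref{opt: identification problem} can be solved exactly by enumerating candidate supports of increasing cardinality; the following sketch is, of course, not the mixed-integer solver used in the experiments and scales exponentially in $d_u$:

```python
import numpy as np
from itertools import combinations

def l0_identify(S, b, tol=1e-8):
    """Exact l0 recovery: smallest-support da with S @ da = b,
    found by enumerating supports of increasing cardinality and
    solving a least-squares problem on each candidate support."""
    d_u = S.shape[1]
    for card in range(d_u + 1):
        for supp in combinations(range(d_u), card):
            cols = list(supp)
            if cols:
                da_s, *_ = np.linalg.lstsq(S[:, cols], b, rcond=None)
                res = b - S[:, cols] @ da_s
            else:
                da_s, res = np.zeros(0), b
            if np.linalg.norm(res) <= tol:
                da = np.zeros(d_u)
                da[cols] = da_s
                return da
    return None
```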
Since the identification problem~\eqref{opt: identification problem} involves measured coupling deviations $\Delta z_I$ and sensitivity information $S_I^a$, $S_I^\mathcal{N}$ for all subsystems $I$, it is not a distributed identification method.
It is, however, not a classical centralized method either, since no information about the local dynamics~$f_I$, the coupling functions~$h_I$, or the individual cost functions is needed. Assuming that the subsystems agree to provide the required sensitivities and measurements to some superior instance that solves the identification problem, the method can be considered hierarchical.
Additionally, it requires only the couplings but not all states to be measured. Since problem~\eqref{opt: identification problem} contains $d_u$ optimization variables, it can be expected to scale significantly better than a fully centralized nonlinear method involving $d_x + d_u$ variables affecting the global dynamics.
\section{SUFFICIENT CONDITIONS FOR GUARANTEED ATTACK IDENTIFICATION}
\label{sec: sufficient conditions}
We consider some fixed sampling time $t$ at which an unknown attack $\widehat{a}$ disturbs the controller input $u(t)$ by \mbox{$\Delta \widehat{a} = \widehat{a}(u(t)) - u(t)$} and causes deviations $\Delta \widehat{z}$ in the coupling variables.
Only in the special case where $\zeta_I$ is linear for all $I$ does the actually occurring attack $\Delta \widehat{a}$ satisfy the first-order approximation of the attack propagation exactly and thus constitute a feasible solution of the identification problem~\eqref{opt: identification problem}. Even for systems with linear dynamics $\dot{x} = Ax + Ba(u)$ and linear coupling equations $z_I = H_I x_I$, however, the resulting functions $\zeta_I$ can be nonlinear since the solution of a linear ODE is in general nonlinear.
In this section, we consider nonlinear functions~$\zeta_I$ and derive suitable assumptions under which a solution of the identification problem~\eqref{opt: identification problem} identifies an attack $\Delta a^\ast$ that is close to the actual attack $\Delta \widehat{a}$ in an appropriate manner. Instead of bounding the error \mbox{$\|\Delta a^\ast - \Delta \widehat{a}\|$} with the $\ell_1$- or $\ell_2$-norm, we are interested in results stating that the actual, unknown attack set $\text{supp}(\widehat{a})$ (or some superset) is correctly identified. The two main results of this paper, given in Theorems~\ref{thm: superset identification} and~\ref{thm: correct identification}, provide statements of this kind.
In order to analyze the approximation error of the linearized attack propagation constituting the constraints of the identification problem~\eqref{opt: identification problem}, we consider the remainder term~$R_I$ in~\eqref{eq: approximation attack propagation} and derive an upper bound for $\|R_I\|_2$ in Lemma~\ref{lem: estimation remainder term}. For this purpose, we make use of the multi-index notation for derivatives of multivariate functions, see, e.g.,~\cite{Forster2011Analysis}. For a multi-index $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_n) \in \mathbb{N}^n$, a real vector $x=(x_1, x_2, \dots, x_n) \in \mathbb{R}^n$ and some smooth function $g: \mathbb{R}^n \rightarrow \mathbb{R}^m$ we define
\begin{align*}
&|\alpha| \coloneqq \alpha_1 + \alpha_2 + \dots + \alpha_n,
&&\alpha! \coloneqq \alpha_1!\alpha_2! \dots \alpha_n!, \\
&x^\alpha \coloneqq x_1^{\alpha_1}x_2^{\alpha_2} \dots x_n^{\alpha_n} ~\text{ and}
&&\partial^\alpha g \coloneqq \frac{\partial^{|\alpha|}g}{\partial x_1^{\alpha_1}\partial x_2^{\alpha_2}\dots \partial x_n^{\alpha_n}}.
\end{align*}
\begin{lemma}[Estimation of Remainder Term]
\label{lem: estimation remainder term}
Let the function $\zeta_I = h_I \circ f_I$ be twice continuously differentiable for all $I$. We assume that at some fixed $x_I$ the maximum second-order partial derivative $K_I \coloneqq \max_{|\alpha|=2}\left\|\partial^\alpha \zeta_I(x_I, \cdot, \cdot)\right\|_2$ exists and is finite, and define $K \coloneqq \max_I K_I$.
For the remainder term $R_I$ of the first-order Taylor approximation of $\zeta_I$ it holds
\begin{align*}
\|R_I\|_2 \leq \frac{K_I}{2} \big(\|\Delta a_I\|_1 + \|\Delta z_{\mathcal{N}_I}\|_1\big)^2.
\end{align*}
For the total remainder term $R = \left(R_I\right)_{I \in \mathcal{P}}$ it holds
\begin{align*}
\|R\|_2 \leq \frac{K}{2} \big(\|\Delta a\|_1 + M\|\Delta z\|_1\big)^2,
\end{align*}
with $M \coloneqq \max_I |\mathcal{N}_I|$ denoting the maximum degree in the network where each subsystem $I$ constitutes one node.
\end{lemma}
\begin{proof}
According to Theorem 2 in §7 of~\cite{Forster2011Analysis}, it holds for the remainder term $R_I$
\begin{align*}
R_I = \sum_{|\alpha|=2} \partial^\alpha \zeta_I(x_I, \xi^a_{I}, \xi^{z_{\mathcal{N}}}_I)\frac{1}{\alpha!} \begin{pmatrix}\Delta a_I\\\Delta z_{\mathcal{N}_I}\end{pmatrix}^\alpha,
\end{align*}
with \mbox{$\xi^a_{I} = u_I + c_I^a \Delta a_I$}, \mbox{$\xi^{z_{\mathcal{N}}}_I = \bar{z}_{\mathcal{N}_I} + c_I^\mathcal{N} \Delta z_{\mathcal{N}_I}$} intermediate points for some $c_I^a, c_I^\mathcal{N} \in (0,1)$.
Using the triangle inequality and the definition of $K_I$, we obtain
\begin{align*}
\|R_I\|_2
&\leq \sum_{|\alpha|=2} \bigg\|\partial^\alpha \zeta_I(x_I, \xi^a_{I}, \xi^{z_{\mathcal{N}}}_I)\frac{1}{\alpha!} \underbrace{\begin{pmatrix}\Delta a_I\\\Delta z_{\mathcal{N}_I}\end{pmatrix}^\alpha}_{\in \mathbb{R}} \bigg\|_2 \\
&=\sum_{|\alpha|=2} \big\|\partial^\alpha \zeta_I(x_I, \xi^a_{I}, \xi^{z_{\mathcal{N}}}_I) \big\|_2 \frac{1}{\alpha!} \left|\begin{pmatrix}\Delta a_I\\\Delta z_{\mathcal{N}_I}\end{pmatrix}^\alpha\right| \\
&\leq K_I \sum_{|\alpha|=2}\frac{1}{\alpha!} \left|\begin{pmatrix}\Delta a_I\\\Delta z_{\mathcal{N}_I}\end{pmatrix}\right|^\alpha \\
&= \frac{K_I}{2} \big(\|\Delta a_I\|_1 + \|\Delta z_{\mathcal{N}_I}\|_1\big)^2.
\end{align*}
The last equality holds due to the multinomial theorem for $k=2$, which states the equality
\mbox{$(x_1 + x_2 + \dots + x_n)^k$} $= \sum_{|\alpha|=k} \frac{k!}{\alpha!} x^\alpha$
and can be proven using the binomial theorem and induction on $n$.
It remains to derive an upper bound for the total remainder term $R = \left(R_I\right)_{I \in \mathcal{P}}$.
We estimate
\begin{align*}
\|R\|_2
&\leq \sum_{I\in \mathcal{P}}\|R_I\|_2
\leq \sum_{I\in \mathcal{P}}\frac{K_I}{2} \big(\|\Delta a_I\|_1 + \|\Delta z_{\mathcal{N}_I}\|_1\big)^2\\
&\leq \frac{K}{2} \sum_{I\in \mathcal{P}} \big(\|\Delta a_I\|_1 + \|\Delta z_{\mathcal{N}_I}\|_1\big)^2\\
&\leq \frac{K}{2} \left(\left\|\begin{pmatrix}\Delta a_1 \\ \vdots \\ \Delta a_{|\mathcal{P}|}\end{pmatrix}\right\|_1 + \left\|\begin{pmatrix}\Delta z_{\mathcal{N}_1} \\ \vdots \\ \Delta z_{\mathcal{N}_{|\mathcal{P}|}}\end{pmatrix}\right\|_1\right)^2,
\end{align*}
where the last inequality also follows from the multinomial theorem.
For the first vector in the last line it holds \mbox{$\begin{pmatrix}\Delta a_1, \dots, \Delta a_{|\mathcal{P}|}\end{pmatrix}^\text{T} = \Delta a^\text{T}$,} but for the second vector it holds in general $\left\|\begin{pmatrix}\Delta z_{\mathcal{N}_1}, \dots, \Delta z_{\mathcal{N}_{|\mathcal{P}|}}\end{pmatrix}\right\|_1 \neq \|\Delta z\|_1$ since each vector $\Delta z_{I}$ appears $|\mathcal{N}_I|$ many times. With $M$ denoting the maximum degree in the network, it holds
\begin{align*}
\left\|\begin{pmatrix}\Delta z_{\mathcal{N}_1} \\ \vdots \\ \Delta z_{\mathcal{N}_{|\mathcal{P}|}}\end{pmatrix}\right\|_1
&= \sum_I |\mathcal{N}_I| \|\Delta z_I\|_1
\leq M \|\Delta z\|_1.
\end{align*}
In total, we obtain
\begin{align*}
\|R\|_2 \leq \frac{K}{2} \big(\|\Delta a\|_1 + M\|\Delta z\|_1\big)^2.
\end{align*}
\end{proof}
\noindent
Using this upper bound on the remainder term, we next derive an $\varepsilon$-$\delta$-criterion that specifies a condition under which the computed solution $\Delta a^\ast$ of~\eqref{opt: identification problem} is in an \mbox{$\varepsilon$-neighborhood} around the actual attack $\Delta \widehat{a}$.
For the sake of clarity, we express the linear constraints of problem~\eqref{opt: identification problem} in the form $S \Delta a = b$ with $S = \text{diag}\left((S_I^a)_{I\in \mathcal{P}}\right) \in \mathbb{R}^{d_z \times d_u}$ and \mbox{$b=\left(\Delta z_I - S_I^\mathcal{N} \Delta z_{\mathcal{N}_I}\right)_{I\in \mathcal{P}}$.} The smallest singular value of~$S$ is denoted as $\sigma_{\min}$.
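Assembling these quantities into the compact form $S \Delta a = b$ amounts to a block-diagonal stacking of the published sensitivities, e.g.:

```python
import numpy as np

def assemble_constraints(S_a_list, S_N_list, dz_list, dzN_list):
    """Stack the per-subsystem linearizations into S @ da = b with
    S = diag((S_I^a)) and b_I = dz_I - S_I^N @ dz_{N_I}."""
    rows = sum(m.shape[0] for m in S_a_list)
    cols = sum(m.shape[1] for m in S_a_list)
    S = np.zeros((rows, cols))
    r = c = 0
    for m in S_a_list:                      # block-diagonal placement
        S[r:r + m.shape[0], c:c + m.shape[1]] = m
        r += m.shape[0]; c += m.shape[1]
    b = np.concatenate([dz - SN @ dzN
                        for SN, dz, dzN in zip(S_N_list, dz_list, dzN_list)])
    return S, b
```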
\begin{lemma}[$\varepsilon$-$\delta$-Criterion]
\label{lem: eps-delta-criterion}
We assume that $\sigma_{\min} > 0$ and $d_z \geq d_u$ holds for $d_z = \sum_{I\in \mathcal{P}} d_{z_I}$ denoting the total number of coupling variables. Let $\varepsilon > 0$ be given and denote by $\Delta a^\ast$ a feasible solution of the identification problem~\eqref{opt: identification problem}. Defining~$\delta$ as $\delta \coloneqq \sqrt{\frac{2\varepsilon\sigma_{\min}}{K}}$ it holds:
If $\left(\|\Delta \widehat{a}\|_1 + M\|\Delta \widehat{z}\|_1\right) \leq \delta$, then
$\left\|\Delta \widehat{a} - \Delta a^\ast\right\|_2 \leq \varepsilon$.
\end{lemma}
\begin{proof}
The main idea of the proof is to make use of the linearity of the constraints in~\eqref{opt: identification problem} to bound the distance between $\Delta \widehat{a}$ and $\Delta a^\ast$. A feasible solution $\Delta a^\ast$ clearly satisfies the constraints such that
$b - S \Delta a^\ast = 0$ holds.
For the actual attack $\Delta \widehat{a}$ it holds $b - S \Delta \widehat{a} = R$ with $R$ being the remainder term from the Taylor expansion. Subtracting these equations, we obtain
$$
\left\|R\right\|_2 =
\left\|S (\Delta \widehat{a} - \Delta a^\ast)\right\|_2.
$$
Since $d_z \geq d_u$, a lower bound of this expression is given by
$$
\left\|R\right\|_2 \geq \sigma_{\min} \left\|\Delta \widehat{a} - \Delta a^\ast\right\|_2,
$$
with $\sigma_{\min} > 0$ denoting the smallest singular value of $S$.
Using the upper bound of the remainder term from Lemma~\ref{lem: estimation remainder term} and the definition of $\delta$, it follows
\begin{align*}
\left\| \Delta \widehat{a} - \Delta a^\ast \right\|_2 &\leq \frac{\left\|R\right\|_2}{\sigma_{\min}} \leq \frac{K}{2\sigma_{\min}}\left(\|\Delta \widehat{a}\|_1 + M\|\Delta \widehat{z}\|_1\right)^2 \\
&\leq \frac{K}{2\sigma_{\min}} \delta^2 = \varepsilon.
\end{align*}
\end{proof}
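In practice, the criterion reduces to a one-line numerical check for given constants $\sigma_{\min}$, $K$ and $M$, sketched below:

```python
import numpy as np

def eps_delta_check(eps, sigma_min, K, M, da_hat, dz_hat):
    """Evaluate the sufficient condition of the eps-delta-criterion:
    returns (condition satisfied?, delta) with
    delta = sqrt(2 * eps * sigma_min / K)."""
    delta = np.sqrt(2.0 * eps * sigma_min / K)
    lhs = np.linalg.norm(da_hat, 1) + M * np.linalg.norm(dz_hat, 1)
    return lhs <= delta, delta
```

If the check succeeds, any feasible solution of~\eqref{opt: identification problem} lies within distance $\varepsilon$ of the actual attack.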
We would like to derive conditions ensuring that the attack sets $\text{supp}(\widehat{a})$ and $\text{supp}(a^\ast)$ are similar, rather than bounding the distance between the attack vectors $\Delta \widehat{a}$ and $\Delta a^\ast$ themselves. In other words, we are interested in a specific $\varepsilon$ such that Lemma~\ref{lem: eps-delta-criterion} implies that both~$\widehat{a}$ and $a^\ast$ have the same attack set. First, we state a slightly weaker result, implying that under the indicated conditions all attacked inputs are identified by the computed solution, although possibly some benign \mbox{components are flagged as well.}
\begin{theorem}[Correct Superset-Identification]
\label{thm: superset identification}
Let again $\sigma_{\min}> 0$ and $d_z \geq d_u$ hold, let $M$ denote the maximum degree, and let $\Delta a^\ast$ be a feasible solution of the identification problem~\eqref{opt: identification problem}.
Let $\varepsilon > 0$ be such that $\varepsilon < \min_{i \in \text{supp}(\widehat{a})} |(\Delta \widehat{a})_i|$ and choose~$\delta$ accordingly as in Lemma~\ref{lem: eps-delta-criterion}. If $\left(\|\Delta \widehat{a}\|_1 + M\|\Delta \widehat{z}\|_1\right) \leq \delta$ holds, then for the attack sets we have
\begin{align*}
\text{supp}(a^\ast) \supseteq \text{supp}(\widehat{a}).
\end{align*}
\end{theorem}
\begin{proof}
From Lemma~\ref{lem: eps-delta-criterion} it follows that \mbox{$\|\Delta \widehat{a} - \Delta a^\ast\|_2 \leq \varepsilon.$}
We assume for contradiction that $\text{supp}(a^\ast) \nsupseteq \text{supp}(\widehat{a})$. Hence, there is some index $i \in \text{supp}(\widehat{a})$ but $i \notin \text{supp}(a^\ast)$, i.e., $(\Delta \widehat{a})_i \neq 0$ and $(\Delta a^\ast)_i = 0$. This implies
\begin{align*}
\|\Delta \widehat{a} - \Delta a^\ast\|_2
&\geq |(\Delta \widehat{a})_i - (\Delta a^\ast)_i| = |(\Delta \widehat{a})_i| \\
&\geq \min_{i \in \text{supp}(\widehat{a})} |(\Delta \widehat{a})_i| > \varepsilon,
\end{align*}
which contradicts the result following from Lemma~\ref{lem: eps-delta-criterion}.\\
\end{proof}
Theorem~\ref{thm: superset identification} guarantees, under certain assumptions, that a solution of the identification problem identifies all attacked inputs, but possibly also some undisturbed inputs. In the numerical experiments in Section~\ref{sec: numerical experiments} we will analyze how large the discrepancy is on average for randomly generated attacks. To achieve equality of the attack sets $\text{supp}(\widehat{a})$ and $\text{supp}(a^\ast)$ and thus guarantee that $\Delta a^\ast$ correctly identifies all attackers but no more, some modifications are necessary. Due to the nonlinearity of $\zeta_I$ the approximation of the attack propagation is not exact and the actual attack~$\Delta \widehat{a}$ in general does not have to be a feasible solution of~\eqref{opt: identification problem}. To resolve this, we consider a relaxed version of the identification problem:
\begin{equation}
\label{opt: identification problem relaxed}
\begin{aligned}
&\min_{\Delta a} &&\|\Delta a\|_0 \\
&\text{ s.t.} &&\left\|b - S\Delta a\right\|_2 \leq \frac{\varepsilon}{2}\sigma_{\min},
\end{aligned}
\end{equation}
where again $\varepsilon < \min_{i \in \text{supp}(\widehat{a})} |(\Delta \widehat{a})_i|$ and $\sigma_{\min}$ is the smallest singular value of the sensitivity matrix $S$. Slightly modifying the definition of $\delta$ by a constant factor and requiring $\Delta a^\ast$ to be a global solution, we obtain the following stronger result:
\begin{theorem}[Exact Identification]
\label{thm: correct identification}
Assume $\sigma_{\min}>0$, $d_z \geq d_u$ and let $\Delta a^\ast$ be a globally optimal solution of the relaxed problem~\eqref{opt: identification problem relaxed}. For \mbox{$\varepsilon < \min_{i \in \text{supp}(\widehat{a})} |(\Delta \widehat{a})_i|$,} we define $\tilde{\delta} \coloneqq \sqrt{\frac{\varepsilon\sigma_{\min}}{K}}$. If the actual attack $\Delta \widehat{a}$ satisfies $\left(\|\Delta \widehat{a}\|_1 + M\|\Delta \widehat{z}\|_1\right) \leq \tilde{\delta}$, then it holds
$$
\text{supp}(a^\ast) = \text{supp}(\widehat{a}).
$$
\end{theorem}
\begin{proof}
As a first step we show that the proof of Lemma~\ref{lem: eps-delta-criterion} works similarly for the relaxed identification problem~\eqref{opt: identification problem relaxed} and the adapted $\tilde{\delta}$. The expression $b - S \Delta a^\ast$ is no longer zero and we define the corresponding residual as $R^\ast \coloneqq b - S \Delta a^\ast$ with \mbox{$\|R^\ast\|_2 \leq \frac{\varepsilon}{2} \sigma_{\min}$} due to feasibility. Similar to the proof of Lemma~\ref{lem: eps-delta-criterion} we estimate
\begin{align*}
\left\|\Delta \widehat{a} - \Delta a^\ast\right\|_2
&\leq
\frac{\left\|R - R^\ast\right\|_2}{\sigma_{\min}}
\leq
\frac{\left\|R\right\|_2 + \left\|R^\ast\right\|_2}{\sigma_{\min}}\\
&\leq
\frac{1}{\sigma_{\min}} \left(\frac{K\widetilde{\delta}^2 + \varepsilon \sigma_{\min}}{2}\right) =
\varepsilon.
\end{align*}
We have thus shown a result similar to Lemma~\ref{lem: eps-delta-criterion}, and a proof analogous to that of Theorem~\ref{thm: superset identification} follows accordingly. Therefore, we obtain
\begin{align}
\label{eq: support inclusion one way}
\text{supp}(a^\ast) \supseteq \text{supp}(\widehat{a})
\end{align}
for $\Delta a^\ast$ being a solution of the relaxed identification problem~\eqref{opt: identification problem relaxed}. It remains to show that $\text{supp}(a^\ast) \subseteq \text{supp}(\widehat{a})$.
To this end, we note that the actual attack $\Delta \widehat{a}$ is a feasible solution of the relaxed problem~\eqref{opt: identification problem relaxed} since
\begin{align*}
\left\|b - S \Delta \widehat{a}\right\|_2
&\leq \frac{K}{2}\big(\|\Delta \widehat{a}\|_1 + M\|\Delta \widehat{z}\|_1\big)^2 \leq \frac{K}{2} \tilde{\delta}^2
= \frac{\varepsilon}{2}\sigma_{\min}.
\end{align*}
Since both $\Delta \widehat{a}$ and $\Delta a^\ast$ are feasible solutions of~\eqref{opt: identification problem relaxed} and $\Delta a^\ast$ is globally optimal, it holds $\|\Delta a^\ast\|_0 \leq \|\Delta \widehat{a}\|_0$. Together with~\eqref{eq: support inclusion one way} (implying $\|\Delta a^\ast\|_0 \geq \|\Delta \widehat{a}\|_0$) it follows \mbox{$\|\Delta a^\ast\|_0 = \|\Delta \widehat{a}\|_0$.} Since $\text{supp}(a^\ast) \supseteq \text{supp}(\widehat{a})$, this implies $\text{supp}(a^\ast) = \text{supp}(\widehat{a})$.\\
\end{proof}
\begin{remark}\label{rem: reduction sensitivities}
The assumptions $\sigma_{\min} > 0$ and $d_z\geq d_u$ can be replaced without loss of generality by assuming that the subsystems do not transmit the Jacobians \mbox{$S_I^a \in \mathbb{R}^{d_{z_I} \times d_{u_I}}$,} but instead remove linearly dependent columns and publish full-rank submatrices $\widetilde{S}_I^a \in \mathbb{R}^{d_{z_I} \times r_I}$ of rank $r_I \leq \min\{d_{z_I}, d_{u_I}\}$. In this way they omit redundant information, which further reduces the number of variables in problems~\eqref{opt: identification problem} and~\eqref{opt: identification problem relaxed}. This yields a total sensitivity matrix~$\widetilde{S} = \text{diag}\left((\widetilde{S}_I^a)_{I\in \mathcal{P}}\right)$ of \mbox{size $d_z \times r$ with} $r= \sum_I r_I \leq d_z$, so the proof of Lemma~\ref{lem: eps-delta-criterion} carries over as above.
\end{remark}
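The column reduction described in the remark can be sketched, for instance, by a greedy rank test (a pivoted QR factorization would serve the same purpose):

```python
import numpy as np

def full_rank_submatrix(S_a, tol=1e-10):
    """Greedily select linearly independent columns of S_I^a so that
    the published submatrix has full column rank r_I."""
    keep = []
    for j in range(S_a.shape[1]):
        cand = keep + [j]
        if np.linalg.matrix_rank(S_a[:, cand], tol=tol) == len(cand):
            keep.append(j)
    return S_a[:, keep], keep
```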
\section{ATTACK IDENTIFICATION \\IN POWER SYSTEMS}
\label{sec: numerical experiments}
In order to evaluate the identification method from Section~\ref{sec: identification method}, we consider the problem of identifying faulty buses in power systems. For randomly generated attack scenarios, we analyze the ratio of correctly identified (supersets of the) attack sets and the proportion of samples where the sufficient conditions of Theorems~\ref{thm: superset identification} and~\ref{thm: correct identification} are satisfied, respectively. This allows us to assess not only the effectiveness of the identification method for nonlinear systems, but also the relevance of our main statements in Theorems~\ref{thm: superset identification} and~\ref{thm: correct identification}.
\begin{figure}[htb]
\centering
\begin{overpic}[tics=5, width=0.9\columnwidth]{IEEE30_Partition.pdf}
\put(4,87){\normalsize I}
\put(92,87){\normalsize II}
\put(-5.3,50){\normalsize III}
\put(96,54){\normalsize IV}
\put(-4,12){\normalsize V}
\put(99,17){\normalsize VI}
\end{overpic}
\caption{Schematic of the IEEE~30 bus system partitioned into six subsystems I-VI. Physical couplings through transmission lines between two subsystems are depicted as dashed lines.}
\label{fig: IEEE 30 bus system}
\end{figure}
We consider the IEEE~30 bus system shown in Fig.~\ref{fig: IEEE 30 bus system}, which consists of 30 buses, all of which we assume to be connected to synchronous machines. The dynamics of the machine in bus $i$ with phase angle $\theta_i$ can thus be modeled by the so-called swing equation, see~\cite{Kundur1994Power}:
\begin{align*}
m_i \ddot{\theta}_i + d_i \dot{\theta}_i = u_i - \sum_{j \in N_i} P_{ij},
\end{align*}
where $m_i$ and $d_i$ denote inertia and damping constants, $u_i$ is the power infeed at bus $i$ and $P_{ij}$ describes the active power flow from bus $i$ to some bus $j$ in its neighborhood $N_i$. For the six generator buses 1, 2, 13, 22, 23 and 27, the dynamic coefficients $m_i$ and $d_i$ are taken based on the values in~\cite{DeTuglie2008coherency} and the conversion rules in~\cite{Kundur1994Power}. For the remaining load buses, arbitrary coefficients in a realistic range are chosen.
If the dynamics of power lines are neglected, the power flow $P_{ij}$ between neighboring buses $i$ and $j$ can be modeled by
\begin{align*}
P_{ij} = |V_i||V_j|b_{ij}\sin(\theta_i - \theta_j),
\end{align*}
with $|V_i|$ denoting the voltage magnitude at bus $i$, and $b_{ij}$ the susceptance of the transmission line between buses $i$ and $j$. Realistic parameter values and initial values for $\theta$ are taken from a simulation of the corresponding power system in Matpower~\cite{Zimmerman2010MATPOWER}. All parameters are chosen in a per-unit (p.u.) system with a 200\si{kV} base and a nominal frequency of 60\si{Hz}.
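A possible implementation of the resulting right-hand side, with the machine and line parameters collected in vectors and an adjacency-weighted matrix, is sketched below; the parameter names are illustrative and do not follow any particular toolbox:

```python
import numpy as np

def swing_rhs(theta, omega, u, m, d, K_adj):
    """Right-hand side of the swing equations with lossless power
    flows P_ij = k_ij * sin(theta_i - theta_j), where
    K_adj[i, j] = |V_i||V_j| b_ij (zero if buses i, j not connected)."""
    n = theta.size
    P = np.array([sum(K_adj[i, j] * np.sin(theta[i] - theta[j])
                      for j in range(n)) for i in range(n)])
    dtheta = omega                       # angle dynamics
    domega = (u - d * omega - P) / m     # swing equation
    return dtheta, domega
```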
We consider constant loads at the six buses 3, 7, 14, 19, 26 and 30 and assume that the power infeeds at the remaining load and generator buses can be controlled through $u_i$ with $u_i \in [-0.4, 0]$ p.u.\ at all load buses and $u_i \in [-0.4, 0.9]$ p.u.\ at all generator buses. For frequency control of the system, we consider the following optimal control problem with states $\theta_i$, $\omega_i \coloneqq \dot{\theta_i}$ for $i=1,\dots,30$, and parameters $k_{ij} \coloneqq |V_i||V_j|b_{ij}$:
\begin{align}
&\min_{\theta, \omega, u} &&\|\omega\|_2^2 \notag\\
&\text{ s.t. }
&& \dot{\theta_i} = \omega_i, \label{opt: optimal frequency control}\\
&&& \dot{\omega_i} = \frac{1}{m_i}\bigg(u_i - d_i\omega_i - \sum_{j \in N_i} k_{ij} \sin\left(\theta_i - \theta_j\right)\bigg).\notag
\end{align}
An optimal solution of problem~\eqref{opt: optimal frequency control} minimizes the deviation~$\omega$ from the nominal frequency while obeying the power flow and machine dynamics. In our experiments, we consider a time horizon of $10$\si{s}, discretized with time steps of length $\Delta t = 0.1\si{s}$, and solve problem~\eqref{opt: optimal frequency control} in a distributed receding-horizon fashion applying the robust MPC scheme from~\cite{Lucia2015Contract}. It is implemented based on the do-mpc environment for multi-stage MPC~\cite{Lucia2017Rapid}, applying the NLP solver Ipopt~\cite{Wachter2006Implementation} and CasADi for automatic differentiation and optimization~\cite{Andersson2019CasADi}.
In the distributed scheme, one local MPC controller is used for each of the six subsystems indicated in Fig.~\ref{fig: IEEE 30 bus system}, which are interconnected through transmission lines drawn as dashed lines. To model the resulting physical coupling, in each subsystem those phase angles $\theta_i$ are defined as coupling variables which are incident to at least one dashed edge. In subsystem V, for example, the coupling variables \mbox{$z_\text{V}=(\theta_2, \theta_4, \theta_5)$} influence the neighboring \mbox{subsystems III and VI.} The coupling variables are assumed to be parametrized piecewise constant in the numerical integration scheme.
The partition of the IEEE~30 bus system into the indicated six subsystems yields a total of $d_z=18$ coupling variables, significantly fewer than the $d_x=60$ states. This again underlines the reduced complexity of the proposed procedure, which requires neither global measurements of all states nor knowledge of the local dynamics.
As there are $d_u=30\nleq d_z$ input variables, we assume that the subsystems publish full-rank submatrices $\widetilde{S}_I^a$ instead of the original sensitivity \mbox{matrices $S_I^a$ as described in Remark~\ref{rem: reduction sensitivities}.}
To evaluate the identification method from Section~\ref{sec: identification method} and the strength of the sufficient conditions of Theorems~\ref{thm: superset identification} and~\ref{thm: correct identification}, we carry out two test series \texttt{attack\_1} and \texttt{attack\_3}. In both, the system is exposed to a new, randomly generated attack at each of the 100 time steps in $[0,10]\si{s}$ and the proposed detection and identification method depicted in Fig.~\ref{fig: overview method} is applied at each sampling time. In \texttt{attack\_1}, at each time step $t$, one attacked node~$i$ and a disturbed input value $a_i(u_i(t)) \neq u_i(t)$ are chosen uniformly at random. For the remaining nodes $j\neq i$, the undisturbed controller input $a_j(u_j(t)) = u_j(t)$ is applied to the system. In \texttt{attack\_3}, three random nodes are attacked per time step. An attack is detected at time step $t$ if $\|\Delta z_I(t)\|_\infty > \tau_\text{D}$ for some $I$ with detection threshold $\tau_\text{D} \coloneqq 10^{-5}$. If this is the case, the sensitivity matrices $\widetilde{S}_I^a$, $S_I^\mathcal{N}$ are locally evaluated by applying automatic differentiation with CasADi to the local integrator schemes, representing the functions~$f_I$ and $h_I$ in equations~\eqref{eq: local dynamics}. Normalizing the columns of the matrices $\widetilde{S}_I^a$ and aggregating all sensitivity information, the identification problems~\eqref{opt: identification problem} and~\eqref{opt: identification problem relaxed} are set up and solutions $\Delta a^\ast_\eqref{opt: identification problem}$ and $\Delta a^\ast_\eqref{opt: identification problem relaxed}$ are computed with Bonmin~\cite{Bonami2008algorithmic}.
The identified attack sets $\text{supp}(a^\ast_\eqref{opt: identification problem})$, $\text{supp}(a^\ast_\eqref{opt: identification problem relaxed})$ contain those indices $i$, for which $|\Delta a^\ast_\eqref{opt: identification problem}|_i > \varepsilon_\text{I}$ resp.\ $|\Delta a^\ast_\eqref{opt: identification problem relaxed}|_i > \varepsilon_\text{I}$ holds with identification threshold $\varepsilon_\text{I} \coloneqq 10^{-5}$.
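The detection test and the thresholded support extraction just described amount to two one-line checks. A minimal sketch, with hypothetical deviation and solution vectors (the names and numbers are illustrative, not taken from the experiments):

```python
import numpy as np

TAU_D = 1e-5   # detection threshold tau_D
EPS_I = 1e-5   # identification threshold epsilon_I

def attack_detected(delta_z_per_subsystem):
    """Alarm if any subsystem's coupling deviation exceeds tau_D
    in the infinity norm."""
    return any(np.linalg.norm(dz, np.inf) > TAU_D
               for dz in delta_z_per_subsystem)

def identified_support(delta_a_star):
    """Indices i with |Delta a*_i| > eps_I form the identified attack set."""
    return {i for i, v in enumerate(delta_a_star) if abs(v) > EPS_I}

# Hypothetical data: two subsystems, one showing a visible deviation.
delta_z = [np.array([0.0, 2e-6]), np.array([3e-4, 0.0])]
delta_a_star = np.array([0.0, 4e-2, 0.0, 1e-7])

print(attack_detected(delta_z))          # True
print(identified_support(delta_a_star))  # {1}
```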
Among the 100 time steps with random attack sets of cardinality 1 in \texttt{attack\_1}, the detection raises an alarm at 79 sampling times. This seemingly low rate is due to the fact that only one input $u_i$ is modified by some random disturbance $\Delta \widehat{a}_i$, which in 21 cases is too small to cause a significant deviation in any coupling node. In the test series \texttt{attack\_3}, an attack is detected in all 100 time steps. In these 79 and 100 time steps, respectively, the attack identification method is applied. For both experiments, Table~\ref{tab: identification results} lists how often the actual, unknown attack set $\text{supp}(\widehat{a})$ or a superset thereof is correctly identified, and how often the sufficient condition of Theorem~\ref{thm: superset identification} resp.\ Theorem~\ref{thm: correct identification} is satisfied. The results of \texttt{attack\_1} are shown in tables (a) and (b), those of \texttt{attack\_3} in tables (c) and (d). The left tables refer to identifying a superset of $\text{supp}(\widehat{a})$ as in Theorem~\ref{thm: superset identification}, the right tables to identifying the attack set exactly as in Theorem~\ref{thm: correct identification}.
\begin{table}[h]
\caption{Fourfold tables showing the results of experiments \texttt{attack\_1} (tables (a) and (b)) and \texttt{attack\_3} ((c) and (d)) with one and three random attackers per time step, respectively.
}
\label{tab: identification results}
\small
\begin{subtable}[t]{0.4\columnwidth}
\caption{Superset identification according to Theorem~\ref{thm: superset identification}}
\begin{tabular}{c|a|c}
\scriptsize{\texttt{attack\_1}}& Ident. & $\overline{\text{Ident.}}$ \\
\hline \rowcolor{matplotlibGray!60}\rule[.25ex]{0pt}{2.5ex}
Cond. & 94.94\% & 0.00\% \\
\hline \rule[.25ex]{0pt}{2.5ex}
$\overline{\text{Cond.}}$ & 5.06\% & 0.00\% \\
\hline \rule[.25ex]{0pt}{2.5ex}
& 100\% &
\end{tabular}
\vskip1ex
\end{subtable}
\hfil
\begin{subtable}[t]{0.4\columnwidth}
\caption{Exact identification \mbox{according} to Theorem~\ref{thm: correct identification}}
\begin{tabular}{c|a|c}
\scriptsize{\texttt{attack\_1}} & Ident. & $\overline{\text{Ident.}}$ \\
\hline \rowcolor{matplotlibBlue!60}\rule[.25ex]{0pt}{2.5ex}
Cond. & 93.67\% & 0.00\% \\
\hline \rule[.25ex]{0pt}{2.5ex}
$\overline{\text{Cond.}}$ & 6.33\% & 0.00\% \\
\hline \rule[.25ex]{0pt}{2.5ex}
& 100\% &\\
\end{tabular}
\vskip1ex
\end{subtable}
\vskip.4cm
\begin{subtable}[t]{0.4\columnwidth}
\caption{Superset identification according to Theorem~\ref{thm: superset identification}}
\begin{tabular}{c|a|c}
\scriptsize{\texttt{attack\_3}} & Ident. & $\overline{\text{Ident.}}$ \\
\hline \rowcolor{matplotlibGray!60}\rule[.25ex]{0pt}{2.5ex}
Cond. & 40.00\% & 0.00\% \\
\hline \rule[.25ex]{0pt}{2.5ex}
$\overline{\text{Cond.}}$ & 59.00\% & 1.00\% \\
\hline \rule[.25ex]{0pt}{2.5ex}
& 99.00\% &
\end{tabular}
\end{subtable}
\hfil
\begin{subtable}[t]{0.4\columnwidth}
\caption{Exact identification \mbox{according} to Theorem~\ref{thm: correct identification}}
\begin{tabular}{c|a|c}
\scriptsize{\texttt{attack\_3}} & Ident. & $\overline{\text{Ident.}}$ \\
\hline \rowcolor{matplotlibBlue!60}\rule[.25ex]{0pt}{2.5ex}
Cond. & 31.00\% & 0.00\% \\
\hline \rule[.25ex]{0pt}{2.5ex}
$\overline{\text{Cond.}}$ & 51.00\% & 18.00\%\\
\hline \rule[.25ex]{0pt}{2.5ex}
& 82.00\% &
\end{tabular}
\end{subtable}
\vskip.4cm
\caption*{\fbox{\parbox{0.95\columnwidth}{{\small Cond. = Sufficient condition satisfied, $\overline{\text{Cond.}}$ = not satisfied \\ Ident. = (Superset of) $\text{supp}(\widehat{a})$ identified, $\overline{\text{Ident.}}$ = not identified}}}}
\end{table}
Considering the experiments \texttt{attack\_1}, the green highlighted column of Table~\ref{tab: identification results}\,(a) reveals that, whenever the identification method is applied, it correctly identifies a superset of the unknown attack set. In 94.94\% of the cases, this is guaranteed, since the sufficient condition of Theorem~\ref{thm: superset identification} is satisfied and implies the correct identification of a superset. In the remaining 5.06\%, the condition is not fulfilled, but a superset is computed nonetheless. This is possible because the theorem states a sufficient, not a necessary, condition.
Since $\widetilde{\delta} < \delta$ with $\delta, \widetilde{\delta}$ denoting the parameters occurring in Theorems~\ref{thm: superset identification} and~\ref{thm: correct identification}, the sufficient condition of Theorem~\ref{thm: correct identification} is harder to fulfill than the one of Theorem~\ref{thm: superset identification}. This is reflected in Table~\ref{tab: identification results}\,(b), showing that the sufficient condition of Theorem~\ref{thm: correct identification} is satisfied in 93.67\%, in contrast to 94.94\% in Table~\ref{tab: identification results}\,(a).
The exact identification is successful at all times, although in 6.33\% this is not guaranteed by Theorem~\ref{thm: correct identification}.
\addtolength{\textheight}{-.2cm}
The sufficient conditions in both theorems become harder to satisfy the larger $\|\Delta \widehat{a}\|_1 + M\|\Delta \widehat{z}\|_1$ gets, where $\Delta \widehat{a}$, $\Delta \widehat{z}$ denote the \mbox{occurring} attack and the caused coupling deviations, and $M$ is the maximum degree in the subsystem network. Since in the test series \texttt{attack\_3} three inputs per time step are randomly disturbed in contrast to only one in \texttt{attack\_1}, the resulting values $\|\Delta \widehat{a}\|_1$, $\|\Delta \widehat{z}\|_1$ are expected to be larger.
This becomes evident in the comparison of Tables~\ref{tab: identification results}\,(a) with (c), and (b) with (d), respectively. The sufficient condition of Theorem~\ref{thm: superset identification} (highlighted in gray) as well as that of Theorem~\ref{thm: correct identification} (blue) is fulfilled in significantly fewer cases.
In more than 98\% resp.\ 73\% of all cases with unfulfilled sufficient condition, however, a superset resp.\ the attack set $\text{supp}(\widehat{a})$ itself is still correctly identified, so that total scores of 99\% for superset identification and 82\% for exact identification are reached.
Attacking three out of 18 inputs (corresponding to the size of the reduced sensitivity matrices $\widetilde{S}_I^a$) means compromising more than~15\% of the system simultaneously and thus requires attackers with very powerful resources. In this context, the achieved success rates should be regarded as very high.
Setting up the relaxed identification problem~\eqref{opt: identification problem relaxed} requires the parameter $\varepsilon$, which depends on the unknown attack~$\Delta \widehat{a}$; computing a solution $\Delta a^\ast_{\eqref{opt: identification problem relaxed}}$ to identify the attack set $\text{supp}(\widehat{a})$ exactly is therefore a rather theoretical consideration or requires a good estimate of $\varepsilon$. For the actual application as an attack identification method, solving the identification problem~\eqref{opt: identification problem} is more suitable and is guaranteed to find a superset $\text{supp}(a^\ast_{\eqref{opt: identification problem}}) \supseteq \text{supp}(\widehat{a})$ under the condition of Theorem~\ref{thm: superset identification}. The set $\text{supp}(a^\ast_{\eqref{opt: identification problem}}) \setminus \text{supp}(\widehat{a})$ of wrongly identified inputs contains on average 0.56 indices in the test series \texttt{attack\_1} and 0.9 in \texttt{attack\_3}. In a more realistic scenario, it is reasonable to assume that the attack set $\text{supp}(\widehat{a})$ remains constant for some time, so the attack set need not be identified within a single sampling time. In fact, it is very promising that already within one time step a superset containing all attacked inputs is identified with a very high success rate. Even if one or two benign inputs are contained, the findings over several time steps can be combined to draw a well-founded conclusion about the actual attack set.
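One simple way to combine the findings over several time steps is to retain only those indices that appear in the identified superset in a sufficiently large fraction of the steps. A sketch of such an aggregation (illustrative only; it is not part of the method's formal guarantees, and the fraction of 0.8 is an arbitrary choice):

```python
from collections import Counter

def aggregate_supports(supports, min_fraction=0.8):
    """Keep indices that occur in at least min_fraction of the
    per-time-step identified supersets."""
    counts = Counter(i for s in supports for i in s)
    n = len(supports)
    return {i for i, c in counts.items() if c >= min_fraction * n}

# Hypothetical supersets from five consecutive time steps: node 4 is
# consistently flagged, benign nodes appear only sporadically.
supports = [{4}, {4, 7}, {4}, {2, 4}, {4}]
print(aggregate_supports(supports))  # {4}
```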
\section{CONCLUSION}
We considered a hierarchical method for attack identification in distributed nonlinear control systems from preliminary work and carried out a detailed analysis in terms of both theoretical guarantees and numerical results. The method is based on the exchange of locally evaluated sensitivity information and solves a sparse signal recovery problem at each time step. It makes it possible to identify arbitrary attacks on the system's inputs without requiring global model knowledge or assuming any attack patterns to be known. We derived sufficient conditions, depending on the strength of the attack and properties of the system's dynamics, under which the method is guaranteed to identify all attacked inputs. Numerical experiments for the identification of faulty buses in the IEEE~30 bus power system revealed that not only are the sufficient conditions largely met, but the success rates of correct identification are also very high, even though a very demanding attack scenario was considered, with randomly generated attacks changing at each time step.
\bibliographystyle{../Template/IEEEtran}
\section{Introduction}\label{sec:intro}
Scandium hydride was first identified experimentally by Smith
\cite{73Smxxxx.ScH}, who recorded the absorption spectra of
various transition metal hydrides (ScH, TiH, VH, NiH, CoH
and deuterated isotopologues) in the region 17700 to 18300~cm$^{-1}$.
No detailed analysis of the spectrum was reported but it was remarked
that a triplet ground state was expected.
Later studies showed that the
ground state of ScH is actually $\leftexp{1}{\Sigma^+}$,
with a low-lying $\leftexp{3}{\Delta}$.
Early studies using restricted open-shell Hartree-Fock (ROHF) \cite{74ScRixx.ScH,76ScRixx.ScH},
generalised valence bond theory \cite{75KuGuBl.ScH} and empirically-fitted pseudopotentials \cite{81Daxxxx.ScH}
all incorrectly predicted a $\leftexp{3}{\Delta}$ ground state, with the $\leftexp{1}{\Sigma^+}$
generally lying about 2000~cm$^{-1}${} higher up. These studies considered the six electronic terms
correlating upon dissociation with ground state atoms (dissociation channel
labelled 1 in table~\ref{tbl.Sc.atom.experiment}, leading to $\leftexp{1,3}{\Sigma}$, $\leftexp{1,3}{\Pi}$ and $\leftexp{1,3}{\Delta}$);
it was remarked \cite{75KuGuBl.ScH} that the bonding of the molecular terms
other than $\leftexp{1}{\Sigma^+}$ is due
to the Sc($4s$) and H($1s$) orbitals, while the scandium $3d$ orbitals are
relatively unaffected with respect to the atomic state \cite{11Hougen}.
On the other hand the bonding of $\leftexp{1}{\Sigma^+}$ was
put down to $spd$ bonding \cite{75KuGuBl.ScH,79Pyxxxx.ScH}.
The different bonding character of the $\leftexp{1}{\Sigma^+}$ term is probably one of
the reasons
why an extensive treatment of electron correlation is necessary to obtain the right ordering
of the electronic terms.
Bauschlicher and Walch \cite{82BaWaxx.ScH} were the first to correctly predict
$\leftexp{1}{\Sigma^+}$ lying below $\leftexp{3}{\Delta}$
by performing full-valence multi-configuration self-consistent-field (MCSCF) calculations.
Jeung and Kouteck{\'y} \cite{88JeKoxx.ScH} studied the same six electronic terms
using pseudopotentials and truncated MRCI and confirmed a ground $\leftexp{1}{\Sigma^+}$ term close
to equilibrium ($r_e \approx 3.4~\mathrm{a}_0$), although for longer bond lengths
$\leftexp{3}{\Delta}$ becomes lower in energy.
Note that all these studies kept the scandium outer-core $3s3p$ electrons uncorrelated and often
did not include relativistic corrections.
Anglada \emph{et al} produced a series of papers \cite{83AnBrPe.ScH,84AnBrPe.ScH,86BrAnxx.ScH,89AnBrPe.ScH}
studying in great detail ScH and ScH$^+$ using multi-reference configuration interaction (MRCI);
in particular their final paper \cite{89AnBrPe.ScH} constitutes the most
complete theoretical study of ScH currently available. These authors confirmed that $\leftexp{1}{\Sigma^+}$
is the ground state when correlation effects are included; they also found that
correlation of the Sc($3s 3p$) semi-core electrons leads to large energy shifts, strongly
stabilising the $\leftexp{1}{\Sigma^+}$ term with respect to the others
and swapping the order of some of the excited states. They used basis sets
similar in size to cc-pVTZ.
More recent theoretical studies considered ScH in the context of calibration
studies of transition metal molecules using density function theory
\cite{01GuGoxx.ScH,08GoMaxx.ScH}, but they focused on equilibrium properties of
very few terms and are of little relevance for us. An exception is the very recent
study by Hubert \emph{et al} \cite{13HuOlLo.ScH} in which a detailed study of
the ground $\leftexp{1}{\Sigma^+}$ and of two excited terms
$\leftexp{1,3}{\Delta}$ around equilibrium was presented and a modification of
the coupled cluster method called general active space coupled cluster (GAS-CC)
was used.
The only theoretical dipole moment data available for ScH are those by
Anglada \emph{et al} \cite{89AnBrPe.ScH} and by Chong \emph{et al} \cite{86ChLaBa.ScH}.
Experimentally a study by Bernard \emph{et al} \cite{77BeEfBa.ScH} reported
three new bands ascribed to ScH and ScD in the region 11600 to 12700~cm$^{-1}$, but
no detailed analysis or assignments were made due to the limited resolution
and complexity of the spectra.
More recently Ram and Bernath \cite{96RaBexx.ScH,97RaBexx.ScH} reported two
detailed emission spectra analyses for ScH and ScD. In these studies they
reported on singlet-singlet bands
in regions from 5400 to 20500~cm$^{-1}${} assigned to transitions between 8
electronic terms, namely $X\,{}\leftexp{1}{\Sigma^+}$,
$A\,{}\leftexp{1}{\Delta}$, $B\,{}\leftexp{1}{\Pi}$,
$C\,{}\leftexp{1}{\Sigma^+}$, $D\,{}\leftexp{1}{\Pi}$,
$E\,{}\leftexp{1}{\Delta}$, $F\,{}\leftexp{1}{\Sigma^-}$ and
$G\,\leftexp{1}{\Pi}$. Two additional strong bands near 11620 and 12290~cm$^{-1}${}
and two weaker bands near 12660 and 16845~cm$^{-1}${} were recorded but only
incompletely analysed; the band near 11620~cm$^{-1}${} was conjectured to be due to a
transition to the low-lying $\leftexp{3}{\Delta}$ term from a
$\leftexp{3}{\Phi}$ term.
Le and Steimle \cite{11LeStxx.ScH} reported more recently a detailed
experimental study of the $X\,{}\leftexp{1}{\Sigma^+}$--$D\,{}\leftexp{1}{\Pi}$
band around 16850~cm$^{-1}$, where also the electric dipole moments of ScH in its
$X\,{}\leftexp{1}{\Sigma^+}$ and $D\,\leftexp{1}{\Pi}$ states were obtained
using optical Stark spectroscopy. Very recently, Mukund {\it et al}
\cite{14MoBhNa.ScH} reported the observation of ScH emission bands at about
17900~cm$^{-1}$, ascribed to the $g$~$\leftexp{3}{\Phi}$ -- $a$~$\leftexp{3}{\Delta}$
triplet-triplet transitions.
This paper focuses on the six low-energy electronic states dissociating to
ground state Sc and H atoms. Of the various experimentally observed
bands \cite{96RaBexx.ScH,97RaBexx.ScH,11LeStxx.ScH} only $X$ -- $B$ is
considered in the present study, although experiment is used for the empirical
refinement of the potential energy curves of the singlet terms
$X\,{}\leftexp{1}{\Sigma^+}$, $A\,{}\leftexp{1}{\Delta}$ and
$B\,{}\leftexp{1}{\Pi}$ as well.
As part of the ExoMol project \cite{jt528}, whose aim is to produce comprehensive line lists
for hot, astrophysically-important molecules, we have been constructing rovibrational and rovibronic
line lists for a number of diatomic species \cite{jt529,jt563,jt583,jt590,jt598}. However, so
far none of these have contained a transition metal (TM). The richness of the spectrum of TM-containing
diatomics makes their opacity particularly important for astrophysical studies \cite{aha97} but
treating their rovibronic spectrum {\it ab initio} is very challenging.
Scandium hydride is the lightest TM
molecule and for this reason constitutes a useful
benchmark system for theoretical studies of such systems.
Scandium hydride has received comparatively little attention with respect to
other transition metal hydrides such as FeH or NiH, in all probability because
of the low abundance of scandium. Scandium is in fact the rarest of
fourth-period transition metals (Sc-Zn) in the solar system \cite{03Lodders},
although it is more abundant than all heavier elements starting from the fifth
period. The study presented here on ScH constitutes a first step in the {\it ab
initio} calculation of ro-vibronic spectra of TM-containing molecules. We
perform a series of {\it ab initio} calculations on both the scandium atom and
ScH and use these to produce a line list of scandium hydride line positions and
intensities which is reasonably complete in the region up to 12~000~cm$^{-1}$.
\section{Atomic Scandium}
As a preliminary test we studied in some detail the scandium atom
using complete active space self consistent field (CASSCF) and internally-contracted
MRCI calculations using the program Molpro \cite{12WeKnKn.methods}.
We collected in table~\ref{tbl.Sc.atom.experiment} reference energy levels of
the scandium atom up to about 20~000~cm$^{-1}$, along with the ScH molecular terms
correlating adiabatically with the various atomic states; this information
serves as an indication of which molecular terms are expected to be low-lying;
the rationale is that for transition-metal hydrides the hydrogen atom
constitutes a relatively small perturbation of the atomic energy levels, so
that molecular terms correlating with high-energy atomic products should be
high-lying too. The lowest dissociation channel is separated by about
11~500~cm$^{-1}${} from others, and in fact the six electronic terms correlating to
it are lowest-lying and most theoretical studies concentrated on them. The
dissociation channels 1 to 10 reported in table~\ref{tbl.Sc.atom.experiment}
lead altogether to 60 electronic molecular terms; that is, there are 60 energy
curves in the absence of spin-orbit splitting, which become 155 in its presence.
Of these, only eight have been characterised experimentally \cite{97RaBexx.ScH}.
Very probably many of the remaining molecular energy curves are repulsive (i.e., have no minimum)
and therefore do not concern us here.
In any case great complexity and strong perturbations in the observed spectra are expected.
This is typical of open-shell transition metal molecules.
In the following we consider the excitation energies from the ground
$\leftexp{2}{\mathrm{D}}$ term to the two lowest-energy excited terms, namely
$\leftexp{4}{\mathrm{F}}$ and $\leftexp{2}{\mathrm{F}}$.
We expect that errors of computed molecular excitation energies
are comparable with the corresponding error in the atomic case.
\begin{table*}
\begin{center}
\caption{Energy levels for scandium atom up to 20~200~cm$^{-1}$. The $\langle E \rangle$ are term-averaged energies, computed
by $\langle E \rangle = \frac{\sum_J (2J+1) E_J}{\sum_J (2J+1)}$; experimental energies for the levels $E_J$ were taken
from the NIST website \cite{NISTWebsite}. $n_f=\min(2S+1,2L+1)$ is the number of fine-structure components of a given
term due to spin-orbit interaction; $n_\mu=(2S+1)(2L+1)$ is the total degeneracy of the term (number of microstates).
The $\xi$'s are effective spin-orbit coupling constants, such that the spin-orbit splittings for each term are best
reproduced by the expression $ E_J = E_0 + \xi \left[J(J+1)-L(L+1)-S(S+1)\right]/2 $, $J=|L-S|,\cdots,L+S$.
The last column lists the molecular terms for ScH correlating at dissociation with the given Sc atomic term plus a
ground state $^2\mathrm{S}$ hydrogen atom. \label{tbl.Sc.atom.experiment} }
\begin{tabular}{r l l rr r r l}
\hline
\hline
\# & config. & \multicolumn{1}{c}{term} & \multicolumn{1}{c}{ $\langle E \rangle$ / cm$^{-1}$}&$n_f$ &$n_\mu$& \multicolumn{1}{c}{ $\xi$ / cm$^{-1}$ } & \multicolumn{1}{c}{ molecular terms} \\
\hline
1&$3d^1 4s^2$ & $\leftexp{2}{\mathrm{D}}$ & 0.0 & 2 & 10 & 67.3 & $^{1,3}[\Sigma^+, \Pi, \Delta]$ \\
2&$3d^2 4s^1$ & $\leftexp{4}{\mathrm{F}}$ & 11~509.1 & 4 & 28 & 15.0 & $^{3,5}[\Sigma^-, \Pi, \Delta,\Phi]$ \\
3&$3d^2 4s^1$ & $\leftexp{2}{\mathrm{F}}$ & 14~891.3 & 2 & 14 & 33.1 & $^{1,3}[\Sigma^-, \Pi, \Delta,\Phi]$ \\
4&$3d^1 4s^1 4p^1 $ & $\leftexp{4}{\mathrm{F^\circ}}$ & 15~775.8 & 4 & 28 & 33.6 & $^{3,5}[\Sigma^+, \Pi, \Delta,\Phi]$ \\
5&$3d^1 4s^1 4p^1 $ & $\leftexp{4}{\mathrm{D^\circ}}$ & 16~031.0 & 4 & 20 & 25.2 & $^{3,5}[\Sigma^-, \Pi, \Delta]$ \\
6&$3d^1 4s^1 4p^1 $ & $\leftexp{2}{\mathrm{D^\circ}}$ & 15~951.4 & 2 & 10 & -29.7 & $^{1,3}[\Sigma^-, \Pi, \Delta]$ \\
7&$3d^2 4s^1 $ & $\leftexp{2}{\mathrm{D}} $ & 16~916.7 & 2 & 10 & -5.0 & $^{1,3}[\Sigma^+, \Pi, \Delta]$ \\
8&$3d^2 4s^1 $ & $\leftexp{4}{\mathrm{P}} $ & 17~175.2 & 3 & 12 & 20.1 & $^{3,5}[\Sigma^-, \Pi]$ \\
9&$3d^1 4s^1 4p^1 $ & $\leftexp{4}{\mathrm{P^\circ}}$ & 18~440.6 & 3 & 12 & 15.0 & $^{3,5}[\Sigma^+, \Pi]$ \\
10&$4s^2 4p^1 $ & $\leftexp{2}{\mathrm{P^\circ}}$ & 18~706.5 & 2 & 6 & 96.5 & $^{1,3}[\Sigma^+, \Pi]$ \\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
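The two formulas in the caption of table~\ref{tbl.Sc.atom.experiment} are straightforward to evaluate numerically. The following sketch uses the fine-structure levels of the Sc ground $\leftexp{2}{\mathrm{D}}$ term ($E_{3/2}=0$, $E_{5/2}\approx 168.3$~cm$^{-1}$) as illustrative input and recovers $\xi \approx 67.3$~cm$^{-1}$:

```python
import numpy as np

def term_average(levels):
    """<E> = sum_J (2J+1) E_J / sum_J (2J+1)."""
    w = np.array([2 * J + 1 for J, _ in levels])
    E = np.array([E for _, E in levels])
    return float(w @ E / w.sum())

def fit_xi(levels, L, S):
    """Least-squares fit of E_J = E_0 + xi*[J(J+1)-L(L+1)-S(S+1)]/2."""
    x = np.array([(J * (J + 1) - L * (L + 1) - S * (S + 1)) / 2
                  for J, _ in levels])
    E = np.array([E for _, E in levels])
    A = np.column_stack([np.ones_like(x), x])
    (E0, xi), *_ = np.linalg.lstsq(A, E, rcond=None)
    return float(xi)

# Sc I ground term 2D (L=2, S=1/2): levels as (J, E_J / cm^-1),
# measured from the lowest fine-structure level.
levels = [(1.5, 0.0), (2.5, 168.3)]
print(round(fit_xi(levels, L=2, S=0.5), 1))  # 67.3
print(round(term_average(levels), 1))        # 101.0
```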
The most accurate study of (neutral or singly-ionized) transition metal atoms
including scandium is due to Balabanov and Peterson
\cite{05BaPexx.ScH,06BaPexx.ScH}. These authors used coupled cluster up to
CCSDTQ, relativistic corrections based on the Douglas-Kroll-Hess (DKH)
hamiltonian, included core correlation and developed the largest basis sets
available for transition metals. For scandium $\leftexp{2}{\mathrm{D}} \to
\leftexp{4}{\mathrm{F}}$ excitation energy the best theoretical coupled cluster
result is higher than the experimental one by about 115~cm$^{-1}$. The corresponding
result using ACPF (a multi-reference method very close to MRCI) in conjunction
with the full-valence reference space is too low by about 110~cm$^{-1}$; using a
larger reference space including a further set of diffuse $d$ functions (which
are thought to be necessary for describing the late transition metals Fe-Cu)
leads to a worse agreement with experiment of about 190~cm$^{-1}$. Other recent
studies of transition metal atoms excitation energies including scandium were
performed by Raab and Roos \cite{05RaPoxx.ScH} and Mayhall \emph{et al}
\cite{08MaRaRe.ScH}.
Raab and Roos \cite{05RaPoxx.ScH}
computed the $\leftexp{2}{\mathrm{D}} \to \leftexp{4}{\mathrm{F}}$ excitation energy
with CCSD(T) and CASPT2 using the DKH hamiltonian for relativistic effects and the ANO-RCC basis set
(similar in size to aug-cc-pCVQZ). Both CCSD(T) and CASPT2 frozen core values agree
with experiment to about 250~cm$^{-1}$, but allowing for core correlation worsens somewhat the agreement to about
500~cm$^{-1}$. Mayhall \emph{et al} \cite{08MaRaRe.ScH} also used core-correlated CCSD(T) with the G3Large basis set
(similar in size to aug-cc-pCVTZ) and reported an agreement of 250~cm$^{-1}${}
without inclusion of relativistic effects and of about 1200~cm$^{-1}${} when
relativistic effects were included.
Table~\ref{tbl.Sc.atom.3z} gives some indicative results for both the
$\leftexp{2}{\mathrm{D}} \to \leftexp{4}{\mathrm{F}}$ and
$\leftexp{2}{\mathrm{D}} \to \leftexp{2}{\mathrm{F}}$ transitions performed in
this study using MRCI and the full valence reference space.
Our best results for both transitions are too small on average by about 750~cm$^{-1}${}
with respect to the results by Balabanov and Peterson; our errors are larger probably because
we performed a state-averaged calculation at the CASSCF level and also
because of the smaller basis sets used. A striking consideration is that the non-relativistic CASSCF
excitation energies are extremely good, a fact which can only be due to fortuitous cancellation
of errors. Overall our results and the analysis of the literature show that it is difficult
to get excitation energies correct to better than about 500~cm$^{-1}$, and that good agreement with experiment
can be often due to cancellation effects. We also observed that the relativistic contribution to excitation
energies shows relatively large variations of the order of 500~cm$^{-1}${} depending on the
levels of electron correlation (CASSCF, MRCI valence only or core-correlated) and on
using the mass-velocity one-electron Darwin (MVD1) rather than the Douglas-Kroll-Hess (DKH) Hamiltonian.
\begin{table*}
\begin{center}
\caption{Electronic term excitation energies for scandium atom (this work). All calculations used the full-valence (3-electron, 9-orbital) complete active space comprising the $3d4s4p$ orbitals. Orbitals are state-averaged over the three electronic terms considered. MRCI+Q are Davidson-corrected energies (relaxed reference). Calculated values are reported as (reference -- calculated),
and reference energies are taken from the column labelled `$\langle E \rangle$' of table \ref{tbl.Sc.atom.experiment}. All quantities are in cm$^{-1}$. \label{tbl.Sc.atom.3z}}
\begin{tabular}{l l r r }
\hline
\hline
Method & &$\leftexp{2}{\mathrm{D}} \to \leftexp{4}{\mathrm{F}}$ & $\leftexp{2}{\mathrm{D}} \to \leftexp{2}{\mathrm{F}}$ \\
\multicolumn{2}{r}{reference energies=} & 11~509.1 & 14~891.3 \\
\mbox{}\\
& & \multicolumn{2}{c}{ref - calc}\\
\hline
CASSCF & 3z & 28.6 & 64.0 \\
CASSCF & 4z & 49.2 & 86.0 \\
\mbox{}\\
MRCI/frz core & 3z & -1187.9 & -421.8 \\
MRCI/frz core & 4z & -1084.6 & -265.3 \\
\mbox{}\\
MRCI/core corr & wc3z & 573.3 & 541.1 \\
MRCI/core corr & wc4z & 1072.1 & 1056.0 \\
MRCI+Q/core corr& wc4z & 2080.0 & 2039.8 \\
MRCI+Q/core corr/DKH4 & wc4z-DK & 813.6 & 707.1 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
We also computed atomic spin-orbit splitting constants using CASSCF and MRCI wave functions
as implemented in MOLPRO.
Results are collected in table~\ref{tbl.Sc.atom.SO}.
Spin-orbit splitting constants show weak sensitivity to the size of the basis set
and already with the smallest 2z basis set are converged to within 1~cm$^{-1}$.
The dependence on the electron correlation treatment is also weak, with the ground $\leftexp{2}{\mathrm{D}}$ term being
the most sensitive. Going from CASSCF to frozen-core MRCI increases $\xi(\leftexp{2}{\mathrm{D}})$ by +18~cm$^{-1}$ but reduces
$\xi(\leftexp{4}{\mathrm{F}})$ and $\xi(\leftexp{2}{\mathrm{F}})$ by only 1.5 and 0.9~cm$^{-1}$ respectively.
Correlating the $(3s3p)$ outer core increases $\xi(\leftexp{2}{\mathrm{D}})$ by 5~cm$^{-1}$, $\xi(\leftexp{4}{\mathrm{F}})$ by 1.8~cm$^{-1}$ and $\xi(\leftexp{2}{\mathrm{F}})$ by 2.6~cm$^{-1}$.
With respect to the experimentally-derived values we do not observe a clear pattern of
convergence with respect to the level of theory used, and the simplest
CASSCF/2z values agree with experiment practically as well as the core-correlated, large basis set MRCI ones.
Considering that errors of $\approx 5$~cm$^{-1}$ in spin-orbit couplings
are very small with respect to the error in the main non-relativistic energies
we conclude that it is quite acceptable to compute spin-orbit couplings
at a low level of theory.
\begin{table*}
\begin{center}
\caption{Calculated spin-orbit constants $\xi$ for scandium atom. The column labelled `obs.' are
experimentally derived values from table~\ref{tbl.Sc.atom.experiment}.
All values are in cm$^{-1}$. \label{tbl.Sc.atom.SO}}
\begin{tabular}{r r r r r r r}
\hline
\hline
& & \multicolumn{5}{c}{CASSCF} \\
transition & Obs. & 2z & 3z & 4z & 5z \\
$\xi(\leftexp{2}{\mathrm{D}})$ & 67.3 & 57.5 & 57.6 & 57.9 & 57.9 \\
$\xi(\leftexp{4}{\mathrm{F}})$ & 15.0 & 18.5 & 18.5 & 18.6 & 18.6 \\
$\xi(\leftexp{2}{\mathrm{F}})$ & 33.1 & 37.2 & 37.2 & 37.4 & 37.4 \\
\mbox{}\\
& & \multicolumn{5}{c}{MRCI (frz. core)} \\
transition & & 2z & 3z & 4z & 5z \\
$\xi(\leftexp{2}{\mathrm{D}})$ & 67.3 & 74.7 & 75.6 & 76.0 & 76.0 \\
$\xi(\leftexp{4}{\mathrm{F}})$ & 15.0 & 16.8 & 17.0 & 17.1 & 17.1 \\
$\xi(\leftexp{2}{\mathrm{F}})$ & 33.1 & 35.9 & 36.4 & 36.5 & 36.6 \\
\mbox{}\\
& & \multicolumn{5}{c}{MRCI (core correlated)} \\
transition & & & wc3z & wc4z & wc5z \\
$\xi(\leftexp{2}{\mathrm{D}})$ & 67.3 & & 80.3 & 81.2 & 81.3 \\
$\xi(\leftexp{4}{\mathrm{F}})$ & 15.0 & & 18.7 & 18.9 & 19.0 \\
$\xi(\leftexp{2}{\mathrm{F}})$ & 33.1 & & 38.6 & 39.1 & 39.2 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
\section{ScH molecule}
As discussed in the introduction and hinted by the results for the Sc atom
presented in the previous section, from the point of view of high-resolution
spectroscopy the accuracy expected for transition metal diatomics is much lower
than that generally achievable for molecules made up of main-group atoms.
Also, because convergence seems to be rather irregular both with respect to the
level of electron correlation and basis set size, one should not necessarily
expect more expensive calculations to be much closer to experiment than simpler ones.
For this reason, when possible, experimental data were used to adjust the
\emph{ab initio} potential energy curves, in particular, the experimental
studies by Ram and Bernath \cite{96RaBexx.ScH,97RaBexx.ScH} where the $v=0$
and sometimes $v=1$ and $v=2$ vibrational states of seven singlet terms ($X\,
\leftexp{1}{\Sigma^+}$, $A\,{}\leftexp{1}{\Delta}$, $B\,{}\leftexp{1}{\Pi}$,
$C\,{}\leftexp{1}{\Sigma^+}$, $D\,{}\leftexp{1}{\Pi}$, $F\,{}\leftexp{1}{\Sigma^-}$
and $G\,{}\leftexp{1}{\Pi}$) were characterised. Of these the $X$, $A$ and $B$
terms dissociate to channel~1 of table~\ref{tbl.Sc.atom.experiment}, $C$ to
channel 7 or perhaps 10, $F$ to channel 6 while for terms $D$ and $G$ channels
3, 6, 7, or 10 are all possible on the basis of symmetry considerations.
Details on the refinement are given in section~\ref{section.linelist};
in the rest of this section we discuss the \emph{ab initio} calculations.
\subsection{Potential energy curves}
Energy curves for the six molecular electronic terms correlating with
the ground atomic states (dissociation channel 1 of
table~\ref{tbl.Sc.atom.experiment})
were computed using CASSCF and internally-contracted MRCI \cite{Werner1988}
in conjunction with the recent aug-cc-pwCV$n$Z basis sets (awc$n$z for short)
\cite{05BaPexx.ScH,06BaPexx.ScH}.
CASSCF orbitals (state-averaged over all the degenerate components of the six terms considered in this work) were used
as a basis of the MRCI runs.
A four-electron, ten-orbital complete active space
comprising the scandium $4s,3d,4p$ orbitals and the hydrogen $1s$ orbital
(5 active orbitals of a$_1$ symmetry, 2 b$_1$, 2 b$_2$ and 1 a$_2$ in the C$_{2v}$ point
group) was used in the calculations. The outer-core scandium $3s,3p$ orbitals were left doubly occupied at the CASSCF
stage but were correlated at the MRCI one. The inner-core $1s,2s,2p$ orbitals were not
correlated. As discussed by Balabanov and
Peterson \cite{06BaPexx.ScH}, in multireference calculations the late
transition metals Fe-Cu require an active space larger than the full-valence
one, which should include a further set of diffuse $d$ functions.
However, this is not necessary for scandium. Inclusion of the Sc $4p$ orbitals
is not thought to be indispensable for a correct description of bonding but
was found to be necessary in practice to avoid convergence problems
at the CASSCF stage.
All curves were computed in the range 2.0 to 8.5~a$_0$ in steps of 0.05~a$_0$ and
from 9.0 to 13.5~a$_0$ in steps of 0.5~a$_0$, for a total of 141 points.
Our best \emph{ab initio} results are based on MRCI using the awc5z basis set;
computing at this level the energies for all six terms for a single geometry
takes about 4~GB of RAM, 20~GB of disk space and 12 hours on a single core
of an Intel Xeon E5-2670 CPU at 2.60~GHz.
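As a trivial consistency check, the radial grid just described can be generated as follows and indeed contains 141 points:

```python
import numpy as np

# 2.0 to 8.5 a0 in steps of 0.05, then 9.0 to 13.5 a0 in steps of 0.5;
# a small epsilon keeps the upper endpoints inside the half-open ranges.
grid = np.concatenate([np.arange(2.0, 8.5 + 1e-9, 0.05),
                       np.arange(9.0, 13.5 + 1e-9, 0.5)])
print(len(grid))  # 141
```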
Potential energy curves include a relativistic correction computed as expectation
value of the MVD1 operator.
The Davidson correction was not included in our final \emph{ab initio} curves because,
as already noted for the excitation energies of the
scandium atom (see table~\ref{tbl.Sc.atom.3z}), it does not improve agreement with known
experimental data; furthermore tests (not discussed here in detail)
at the frozen-core / cc-pVDZ level showed that Davidson-corrected energies agreed
worse with full CI than uncorrected MRCI ones.
The \emph{ab initio} energy curves were slightly shifted in energy (i.e., their
$T_e$ were changed) so that they are exactly degenerate upon dissociation; in
our MRCI calculation the exact degeneracy of the terms at $r=+\infty$ is broken
mainly because the energies of (singlet or triplet) $\Sigma^+$ and $\Delta$
terms are computed simultaneously with a two-state calculation, while the
(singlet or triplet) $\Pi$ terms were computed with one-state calculations;
because of the internal contraction approximation used in Molpro, in a
multi-state calculation the variational flexibility of the MRCI wave
function is increased, and this
results in a small downward shift in energy. As a consequence, at
dissociation the two $\Pi$ terms are about 50~cm$^{-1}${} higher in energy than the
other terms; furthermore a small breaking of exact degeneracies is expected and
normal also in uncontracted MRCI calculations because of the incomplete
treatment of electron correlation. We therefore thought it was reasonable to
shift all terms to restore the exact degeneracy. The shifts applied to
MRCI/awc5z/MVD1 curves for the $A\,{}\leftexp{1}{\Delta}$,
$B\,{}\leftexp{1}{\Pi}$, $a\,{}\leftexp{3}{\Delta}$, $b\,{}\leftexp{3}{\Pi}$ and
$c\,{}\leftexp{3}{\Sigma^+}$ terms are respectively 1.1, 49.7, 1.5, 49.9 and
2.1~cm$^{-1}$. The ground $X\,{}\leftexp{1}{\Sigma^+}$ term was taken as a reference
and not shifted.
Figure~\ref{energy.curves} presents our computed potentials. As can be seen,
the ground $X\,{}\leftexp{1}{\Sigma^+}$ curve is distinct from the other
curves: it has a much shorter equilibrium bond length and its relativistic
correction curve is also very different from the others. This
is a consequence of the
different bonding character of this $X\,{}\leftexp{1}{\Sigma^+}$ term discussed in section~\ref{sec:intro}.
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=0.85\textwidth]{ScH_awc5z_mrci.eps}
\includegraphics[angle=0, width=0.85\textwidth]{awc5z_mvd1_mrci_3s3p_correlated.eps}
\caption{\emph{Ab initio} potential curves for ScH computed with MRCI and the awc5z basis set and the corresponding relativistic MVD1 correction curves (see text for details). The $3s3p$ orbitals were correlated. \label{energy.curves}}
\end{center}
\end{figure}
Equilibrium bond lengths $r_e$, harmonic vibrational frequencies $\omega_e$ and
adiabatic excitation energies $T_e$ are reported in table~\ref{tbl.eq} and
compared with previous theoretical calculations.
Our computed equilibrium bond lengths are in very good agreement (within
0.01~a$_0$) with the recent theoretical values by Hubert \emph{et al}
\cite{13HuOlLo.ScH}.
We compare our \emph{ab initio} and empirically-refined results %
with energy levels reconstructed from the experimental study \cite{97RaBexx.ScH}
in table~\ref{tbl.Obs-Calc}; empirical refinement is discussed in section~\ref{section.linelist}.
We prefer to compare directly with experimental energy levels because
experimentally-deduced values for $T_e$, $r_e$ and $\omega_e$ also include in an effective
way spin-orbit and other coupling effects between different electronic terms.
\begin{table*}
\begin{center}
\caption{Computed \emph{ab initio} adiabatic electronic excitation energies (in cm$^{-1}$), equilibrium bond lengths (in a$_0$)
and harmonic frequencies (in cm$^{-1}$) for selected ScH electronic terms.} \label{tbl.eq}
\begin{tabular}{l r r r r r r r r r}
\hline
\hline
Term & \multicolumn{3}{c}{Anglada et al$^a$} & \multicolumn{3}{c}{Hubert et al$^b$} & \multicolumn{3}{c}{This work$^c$} \\
\cline{2-4}\cline{5-7}\cline{8-10}
& $T_e$ & $r_e$ & $\omega_e$ & $T_e$ & $r_e$ & $\omega_e$ & $T_e$ & $r_e$ & $\omega_e$ \\
$X\,{}\leftexp{1}{\Sigma^+}$ & 0 & 3.41 & 1621 & 0 & 3.35 & 1611 & 0 & 3.34 & 1587 \\
$A\,{}\leftexp{1}{\Delta}$ & 6600 & 3.68 & 1541 & 5362 & 3.58 & 1439 & 3914 & 3.59 & 1428 \\
$B\,{}\leftexp{1}{\Pi}$ & 8400 & 3.64 & 1451 &\multicolumn{1}{c}{---} &\multicolumn{1}{c}{---}&\multicolumn{1}{c}{---}& 5856 & 3.55 & 1380 \\
$a\,{}\leftexp{3}{\Delta}$ & 4600 & 3.66 & 1460 & 3660 & 3.55 & 1450 & 1868 & 3.56 & 1432 \\
$b\,{}\leftexp{3}{\Pi}$ & 6200 & 3.64 & 1438 &\multicolumn{1}{c}{---} &\multicolumn{1}{c}{---}&\multicolumn{1}{c}{---}& 3544 & 3.55 & 1406 \\
$c\,{}\leftexp{3}{\Sigma^+}$ & 7900 & 3.68 & 1389 &\multicolumn{1}{c}{---} &\multicolumn{1}{c}{---}&\multicolumn{1}{c}{---}& 6122 & 3.63 & 1325 \\
\hline
\hline
\end{tabular}
\end{center}
$^a$ Ref. \cite{89AnBrPe.ScH}; values of the $T_e$'s are taken from the column labelled `B(1f)' of table~8, $r_e$'s and $\omega_e$'s from table~10.\\
$^b$ Ref. \cite{13HuOlLo.ScH}, using the data from the column labelled CCSDT12 (Q$\zeta$) of Table V and adding the relativistic corrections in the last column of Table VI.\\
$^c$ Using MRCI/awc5z/MVD1. The $\omega_e^{(i)}$ relative to state $i$ was computed by $\omega_e^{(i)} = \sqrt{ V''(r_e^{(i)}) / \mu }$ while the adiabatic excitation energy $T_e^{(i)}$ was computed as $V_i( r_e^{(i)}) -V_0( r_e^{(0)})$, where $V_0$ is the potential for the $\leftexp{1}{\Sigma^+}$ ground state.\\
\end{table*}
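Footnote $c$ of table~\ref{tbl.eq} defines $\omega_e^{(i)} = \sqrt{V''(r_e^{(i)})/\mu}$. A minimal sketch of this evaluation in atomic units follows; the model potential, the finite-difference step and the hartree-to-cm$^{-1}$ conversion factor are illustrative assumptions, not the actual ScH curve or production code:

```python
import math

HARTREE_TO_CM = 219474.6313632  # 1 hartree in cm^-1
MU_SCH = 1796.87027             # ScH reduced mass in electron masses (see text)

def omega_e_cm(V, re, mu=MU_SCH, h=0.01):
    """omega_e = sqrt(V''(re)/mu) in cm^-1, with V in hartree and r in bohr.
    V'' is approximated by a central finite difference of step h."""
    d2V = (V(re + h) - 2.0 * V(re) + V(re - h)) / h ** 2
    return math.sqrt(d2V / mu) * HARTREE_TO_CM

# Illustration on a model harmonic well whose force constant is chosen to
# match the X-state omega_e of 1587 cm^-1 from the table.
k = MU_SCH * (1587.0 / HARTREE_TO_CM) ** 2

def V_model(r):
    return 0.5 * k * (r - 3.34) ** 2  # hartree, r in bohr

print(round(omega_e_cm(V_model, 3.34)))  # 1587
```

For a real computed curve, $V$ would be an interpolant of the \emph{ab initio} grid points rather than an analytic model.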
\begin{table*}
\footnotesize
\begin{center}
\caption{Selected energy levels for the lowest-energy singlet terms of ScH in cm$^{-1}$. $J$ is the total angular momentum (neglecting nuclear spin), $v$ is the vibrational quantum number, $p$ the parity; `Obs.' are the values derived using the spectroscopic constants reported by Ram and Bernath \cite{97RaBexx.ScH} and the program PGOPHER~\cite{PGOPHER}; `A'
and `R' are the term values calculated with {\sc Duo} using the {\it ab initio} (MRCI/awc5z/MVD1) and {\it refined} curves as described in the text. Calculations included spin-orbit and other couplings between electronic terms.
\label{tbl.Obs-Calc} }
\begin{tabular}{rcrrrrr}
\hline
\hline
$J$ & $p$ & \multicolumn{3}{c}{energies} & \multicolumn{2}{c}{obs -- calc} \\
& & \multicolumn{1}{c}{obs}& \multicolumn{1}{c}{\emph{ab initio}} & \multicolumn{1}{c}{\emph{refined}} & \multicolumn{1}{c}{\emph{ab initio}} & \multicolumn{1}{c}{\emph{refined}} \\
\mbox{}\\
\multicolumn{7}{c}{$X\,{}^1\Sigma^+$, $v=0$}\\
0 & + & 0.00 & 0.00 & 0.00 &$ 0.00 $&$ 0.00 $\\
1 & - & 10.73 & 10.73 & 10.73 &$ 0.00 $&$ 0.00 $\\
10 & + & 586.89 & 586.88 & 586.93 &$ -0.04 $&$ -0.05 $\\
\multicolumn{7}{c}{$X\,{}^1\Sigma^+$, $v=1$}\\
0 & + & 1546.97 & 1538.53 & 1547.10 &$ 8.44 $&$ -0.12 $\\
1 & - & 1557.45 & 1549.00 & 1557.54 &$ 8.45 $&$ -0.09 $\\
10 & + & 2120.06 & 2111.48 & 2118.31 &$ 8.58 $&$ 1.76 $\\
\mbox{}\\
\multicolumn{7}{c}{A~$^1\Delta$, $v=0$}\\
2 & + & 4213.36 & 3830.27 & 4213.35 &$ 383.09 $&$ 0.01 $\\
3 & + & 4241.34 & 3858.12 & 4241.36 &$ 383.21 $&$ -0.03 $\\
10 & + & 4696.30 & 4311.04 & 4696.64 &$ 385.25 $&$ -0.34 $\\
10 & - & 4696.30 & 4311.01 & 4696.61 &$ 385.28 $&$ -0.31 $\\
\mbox{}\\
\multicolumn{7}{c}{B~$^1\Pi$, $v=0$}\\
1 & + & 5413.98 & 5704.18 & 5413.94 &$ -290.19 $&$ 0.04 $\\
2 & + & 5433.77 & 5723.68 & 5433.79 &$ -289.91 $&$ -0.02 $\\
10 & + & 5943.02 & 6226.10 & 5939.67 &$ -283.07 $&$ 3.35 $\\
10 & - & 5939.42 & 6222.43 & 5936.36 &$ -283.01 $&$ 3.06 $\\
\multicolumn{7}{c}{B~$^1\Pi$, $v=1$}\\
1 & + & 6776.75 & 7037.67 & 6776.70 &$ -260.92 $&$ 0.05 $\\
2 & + & 6795.96 & 7056.69 & 6796.35 &$ -260.73 $&$ -0.39 $\\
10 & + & 7290.12 & 7546.50 & 7290.43 &$ -256.38 $&$ -0.31 $\\
10 & - & 7286.57 & 7542.84 & 7286.95 &$ -256.28 $&$ -0.38 $\\
\multicolumn{7}{c}{B~$^1\Pi$, $v=2$}\\
1 & + & 8092.50 & 8323.48 & 8092.51 &$ -230.98 $&$ -0.01 $\\
2 & + & 8111.15 & 8342.00 & 8111.07 &$ -230.85 $&$ 0.08 $\\
10 & + & 8590.66 & 8818.77 & 8589.10 &$ -228.10 $&$ 1.56 $\\
10 & - & 8587.14 & 8815.10 & 8585.06 &$ -227.96 $&$ 2.08 $\\
\hline
\end{tabular}
\end{center}
\end{table*}
As discussed above, we expect our computed adiabatic excitation energies $T_e$
to have errors of several hundred or perhaps a few thousand cm$^{-1}${} and
therefore they cannot be considered very accurate. On the other hand the
equilibrium bond lengths and the shape of the potentials should be reasonably
accurate, see also table~\ref{tbl.Obs-Calc}, where the accuracy of the
potential energy curves of the singlet states can be assessed by comparing to
the vibrational energy separations within each state.
The rotational intervals between the two lowest-$J$ $v=0$ energy levels are reproduced
by our \emph{ab initio} curves with errors of 0.00, 0.12 and 0.28~cm$^{-1}$\ for the
$X$, $A$ and $B$ term respectively. Errors in the $v=0$ to $v=1$ vibrational
transitions are 8.44~cm$^{-1}$\ for the $X$ term and 29.27~cm$^{-1}$\ for the $B$ term.
For the ground X term only, we considered the error of the coupled-cluster
based potential curves. The absolute errors for the $v=0$ to $v=1$
transition wavenumber using CCSD, CCSD(T), CCSDT and CCSDT(Q) are,
respectively, 54.73, 24.45, 6.75 and 1.15~cm$^{-1}$\ with the awc5z basis set,
the DKH Hamiltonian and keeping all the coupling terms computed with MRCI or CASSCF; the CCSDT and CCSDT(Q) curves were obtained in the basis set formed
by wc3z for Sc and 2z for hydrogen, and added as a correction to the CCSD(T) curve.
These results indicate that our MRCI curves are, close to equilibrium, similar in quality to CCSDT and that quadruple excitations must be accounted for to obtain accuracies of the order of 1~cm$^{-1}$.
\subsection{Dissociation energy}
The dissociation energy $D_0$ of ScH is related
to the potential well depth $D_e$ by
\begin{equation}
D_0 = D_e - E_\mathrm{ZPE}
\end{equation}
where $E_\mathrm{ZPE}=787$~cm$^{-1}${} is the zero-point rotational-vibrational
energy; the quoted value was computed using our MRCI/awc5z/MVD1 PEC. The
potential well depth $D_e$ can be decomposed into three contributions: a main
non-relativistic one, a spin-independent (scalar) relativistic contribution and
a spin-dependent contribution due to spin-orbit:
\begin{equation}
D_e = D_e^\mathrm{NR} + D_e^\mathrm{R} + D_e^\mathrm{SO}
\end{equation}
The spin-orbit contribution $D_e^\mathrm{SO}$ is due to the energy lowering
of the scandium atom $^2\mathrm{D}_{3/2}$ level with respect
to the $^2\mathrm{D}$ term
and has a value $D_e^\mathrm{SO} = -3\xi/2 = -101$~cm$^{-1}$, where
$\xi = 67.3$~cm$^{-1}${} is the atomic spin-orbit constant for the $^2\mathrm{D}$ term
(see table~\ref{tbl.Sc.atom.experiment}).
The (spin-orbit free) potential well depth $D_e^\mathrm{NR}$ computed with
MRCI/awc5z is 17459~cm$^{-1}$;
the Davidson correction gives a rather large shift of +1355~cm$^{-1}$,
leading for MRCI+Q/awc5z to a value $D_e^\mathrm{NR}$=18814~cm$^{-1}$.
With a view to ascertaining the quality of our \emph{ab initio} curve close
to dissociation we computed an accurate value for $D_e^\mathrm{NR}$ using high-order
coupled cluster and the program MRCC \cite{mrcc}.
Using the awc5z basis set and correlating the outer-core $3s3p$ orbitals
gives $D_e^\mathrm{NR}=17694$~cm$^{-1}${} using CCSD and 18547~cm$^{-1}${} using CCSD(T).
The effect of full triples (T)$\rightarrow$T
was evaluated in a basis set formed by the wc3z for scandium and 2z for hydrogen,
giving a shift of +163~cm$^{-1}$. The effect of quadruple excitations was evaluated
in an even smaller basis set constructed by complementing the 2z one for hydrogen and scandium
with the core-correlation functions taken from the wc3z basis set and dropping the $g$ functions;
the computed shift T$\rightarrow$Q is +48~cm$^{-1}$; our best awc5z
(frozen inner-core) coupled cluster value is $D_e^\mathrm{NR}=18547+163+48 = 18758$~cm$^{-1}$.
The coupled cluster value therefore strongly supports the Davidson-corrected
value for $D_e^\mathrm{NR}$ rather than the uncorrected MRCI one.
We also considered the contribution to $D_e^\mathrm{NR}$ due to correlation of the
inner-core $2s2p$ orbitals; this effect gives a contribution of +62~cm$^{-1}$\ using CCSD/awc5z and
+104~cm$^{-1}${} using CCSD(T)/awc5z.
The magnitude of basis set incompleteness was estimated by looking at the difference
between the awc5z values and the basis set extrapolated ones
using the awc4z and awc5z basis sets;
this extrapolation gives a very small contribution, namely
$-3$~cm$^{-1}${} for MRCI/awc5z and +6~cm$^{-1}${} for CCSD(T)/awc5z.
Finally, our best coupled-cluster-based value for the non-relativistic part of
the potential well depth is $D_e^\mathrm{NR} =18758+104+6=18868(50)$~cm$^{-1}$, where the
given uncertainty was assigned as half the sum of the quadruples correction,
the difference between the CCSD and CCSD(T) inner-core corrections and the basis set extrapolation
contribution.
We now consider the scalar relativistic contribution $D_e^\mathrm{R}$.
The MVD1/awc5z value is $D_e^\mathrm{R}=+285$~cm$^{-1}$; using the DKH Hamiltonian (truncated to
fourth order) and the awc4z-DK basis gives a contribution +311~cm$^{-1}${} using MRCI,
+291~cm$^{-1}${} using MRCI+Q and +307~cm$^{-1}${} using CCSD(T).
Taking the DKH value $D_e^\mathrm{R} = 307$~cm$^{-1}${} (although there is no conclusive
argument to favour it over the MVD1 one) we arrive at a final value for the potential
well depth $D_e = 18868+307-101=19074(60)$~cm$^{-1}$\ and to a
dissociation energy $D_0 = 19074 -787 = 18287(60)$~cm$^{-1}$,
where the given uncertainty was increased to reflect the uncertainty on
the relativistic correction.
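The bookkeeping above can be reproduced in a few lines; the following sketch (a check of the arithmetic only, with all values in cm$^{-1}$ taken from the text) assembles $D_e$ and $D_0$ from the individual contributions:

```python
# Assemble the potential well depth and dissociation energy of ScH from the
# individual contributions quoted in the text (all values in cm^-1).
De_NR_cc   = 18547 + 163 + 48    # CCSD(T)/awc5z + full-triples + quadruples corrections
De_NR_best = De_NR_cc + 104 + 6  # + inner-core 2s2p correlation + basis-set extrapolation
De_R  = 307                      # scalar-relativistic (DKH) contribution
xi    = 67.3                     # Sc 2D atomic spin-orbit constant, cm^-1
De_SO = -1.5 * xi                # lowering of the 2D_{3/2} dissociation limit
ZPE   = 787                      # zero-point energy from the MRCI/awc5z/MVD1 PEC

De = De_NR_best + De_R + round(De_SO)  # potential well depth
D0 = De - ZPE                          # dissociation energy
print(De_NR_cc, De_NR_best, round(De_SO), De, D0)  # 18758 18868 -101 19074 18287
```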
Kant and Moon \cite{81KaMoxx.ScH} long ago reported an experimental value for
the potential well depth $D_e = 16613 \pm 700$ cm$^{-1}$\ and Koseki {\it et al}
\cite{04KoIsFe.ScH} gave a survey of calculated values of $D_e$, which have a
large spread of about 3000~cm$^{-1}$\ around the value quoted. Our computed value is
larger than the experimental one by about 2500~cm$^{-1}$\ and disagrees with it
by more than three times its uncertainty.
Finally, we report some run-times for our coupled cluster results.
The CCSDT run using the wc2z/2z basis set took 8.3 hours of CPU time for ScH on a
12-core Xeon X5660 machine at 2.80~GHz (1.3 hours real time). The CCSDTQ run for ScH
in the 2z+wC3z basis set took 10.6 days of CPU time (1.6 days real time) and 5~GB of
RAM on the same machine. A single CCSD(T)/awc5z run takes about 10 minutes on a single core
of the same machine.
\subsection{Dipole moment curves}
While potential energy curves can be refined semiempirically from (even
limited) experimental data, one normally has to rely on computed, \emph{ab
initio} dipole moment curves \cite{jt156}. With a view to computing accurate line
intensities it is therefore important to produce dipole moment curves that are as
accurate as possible.
Le and Steimle \cite{11LeStxx.ScH} reported for the ground $X\,{}^1 \Sigma^+$
term an experimental equilibrium dipole $\mu = 1.74$(0.15)~D using optical
Stark spectroscopy; for our work we choose the $z$ axis such that a negative
dipole corresponds to Sc$^+$H$^-$ polarity, so we reverse their value to $\mu
=-1.74$(0.15)~D.
We were able to produce a very accurate value for the equilibrium dipole of the
ground state term using coupled cluster and the energy-derivative (ED) method
\cite{jt475} ($\lambda=\pm 10^{-4}$~au). As we are dealing with a closed-shell
electronic state near equilibrium coupled cluster converges quickly with
respect to the level of excitations included. High-order coupled cluster
calculations used the program MRCC \cite{mrcc}. Results are collected in
table~\ref{tab:cc.eq.dipole}; our best theoretical value is $\mu_e =-1.72(2)$~D
and is in full agreement with the experimental value of Le and Steimle
\cite{11LeStxx.ScH}.
\begin{table*}
\begin{center}
\caption{Equilibrium dipole of the ground state $X\,{}^{1}{\Sigma^+}$ term using coupled cluster theory.
Dipoles were computed at $r=3.35$~a$_0$ using the energy-derivative method and $\lambda=\pm10^{-4}$~au.
Apart from the line labelled `D', all calculations correlated the outer-core $3s3p$ orbitals but kept the inner-core $1s2s2p$ orbitals uncorrelated.
Dipoles are in debyes. The experimental value is $-1.74(0.15)$~D \cite{11LeStxx.ScH}.
\label{tab:cc.eq.dipole}}
\begin{tabular}{c l l r}
\hline
\hline
\multicolumn{1}{c}{label} & \multicolumn{1}{c}{method} & \multicolumn{1}{c}{basis set} & \multicolumn{1}{c}{value} \\
&RHF & awc3z & $-$1.436\\
&CCSD & awc3z & $-$1.668\\
&CCSD(T) & awc3z & $-$1.640\\
&CCSD(T) & awc4z & $-$1.686\\
&CCSD(T) & awc5z & $-$1.702\\
A &CCSD(T) & awc[345]z$^a$ & $-$1.719\\
\mbox{}\\
&\multicolumn{3}{c}{higher order correlation}\\
B & (T) $\rightarrow$ T & wc3z/2z$^b$ & $-$0.002\\
C & T $\rightarrow$ T(Q) & wc3z/2z$^b$ & $-$0.009\\
D & T(Q) $\rightarrow$ Q & 2z+wC(3z)$^c$ & $-$0.005\\
\mbox{}\\
&\multicolumn{3}{c}{inner-core correlation$^d$}\\
E & CCSD(T) & awc3z & $-$0.011\\
\mbox{}\\
&\multicolumn{3}{c}{relativistic$^e$}\\
F & CCSD(T) & awc4z & +0.078\\
\mbox{}\\
&\multicolumn{3}{c}{vibrational averaging$^f$}\\
G & & & $-$0.054\\
\hline
\hline
A + B +C +D +E+F+G & best \emph{ab initio}$^g$ & & $-$1.72(2)\\
\hline
\hline
\end{tabular}
\end{center}
$^a$ Basis-set extrapolated value using a $\mu_n = \mu_e + A / n^3$ formula.\\
$^b$ Correction due to full triples and perturbative quadruple excitations using the cc-pVDZ basis set for hydrogen and cc-pwCVTZ for scandium.\\
$^c$ Correction due to full quadruple excitations using the cc-pVDZ basis set for hydrogen and the cc-pVDZ complemented with the core-valence correlation functions from the cc-pwCVTZ ($g$ functions excluded) for scandium.\\
$^d$ Correction due to correlation of the $2s2p$ orbitals. The innermost $1s$ orbital was not correlated.\\
$^e$ Correction due to scalar-relativistic effects computed as difference of CCSD(T)/DKH4/awc5z-dk and CCSD(T)/awc5z dipoles.\\
$^f$ Vibrational averaging computed as $\langle 0 | \mu(r) | 0 \rangle - \mu(r=3.35 \mathrm{a}_0)$, where $|0\rangle$ is the $J=0, v=0$ vibrational
ground state (obtained using the \emph{ab initio} MRCI/awc5z PEC) and $\mu(r)$ is the MRCI/energy-derivative/awc5z dipole moment curve.\\
$^g$ The estimated uncertainty in the theoretical dipole is mostly due to residual basis set incompleteness error and incomplete treatment of higher-order correlation effects.
\end{table*}
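The basis-set extrapolation of footnote $a$, $\mu_n = \mu_e + A/n^3$, can be solved in closed form from two points. The sketch below uses only the awc4z and awc5z CCSD(T) dipoles (the table's awc[345]z value may have been obtained from a fit over all three cardinal numbers, so the two-point choice here is our assumption); it nevertheless recovers the tabulated value to the quoted precision:

```python
def cbs_two_point(mu_n1, mu_n2, n1=4, n2=5):
    """Two-point CBS extrapolation assuming mu_n = mu_inf + A / n**3."""
    w1, w2 = n1 ** -3, n2 ** -3
    A = (mu_n1 - mu_n2) / (w1 - w2)  # solve the two linear equations for A
    return mu_n2 - A * w2            # mu_inf

# CCSD(T) equilibrium dipoles (in debye) from the awc4z and awc5z rows:
mu_cbs = cbs_two_point(-1.686, -1.702)
print(round(mu_cbs, 3))  # -1.719
```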
MRCI, on the other hand, has difficulties in reproducing the correct value for
the dipole. Apart from the basis set size and whether or not core orbitals are
correlated, we considered three further factors affecting MRCI dipoles, namely:
\emph{i)} whether dipoles are computed by expectation value (XP) or energy
derivative (ED) \cite{jt475}; \emph{ii)} whether at the CASSCF step orbitals
are obtained by state averaging (state-averaged orbitals, SAO) or are
specifically optimised for the $X\,{}^{1}{\Sigma^+}$ electronic term
(state-specific orbitals, SSO); and, \emph{iii)}, the effect of using
Davidson-corrected energies instead of MRCI ones. We compared dipoles obtained
with MRCI in the awc3z basis set with the very accurate (non-relativistic,
$3s3p$ correlated) value obtained with coupled cluster, which gives in this
basis set the value $\mu = -1.66$~D (see table~\ref{tab:cc.eq.dipole},
CCSD(T)/awc3z value + lines B,C,D). Results are collected in
table~\ref{tab:mrci.eq.dipole}. Figure~\ref{fig.1S.dipoles} shows the
dipole moment curves for the X ground term computed with CCSD(T), MRCI/XP,
MRCI/ED and MRCI+Q/ED; note that the CCSD(T) curve becomes unphysical
at large bond lengths.
\begin{table*}
\begin{center}
\caption{Values and errors in computed MRCI dipoles at $r=3.35$~a$_0$ for the ground state $X\,{}^{1}{\Sigma^+}$ term. The acronyms SAO and SSO stand for state-averaged orbitals and state-specific orbitals, respectively. The last two columns report differences with the accurate value $\mu = -1.66$~D obtained with coupled cluster (see text). Dipoles are in debyes. \label{tab:mrci.eq.dipole}}
\begin{tabular}{l r r r r}
\hline
\hline
\multicolumn{1}{c}{method$^a$}& \multicolumn{2}{c}{values} & \multicolumn{2}{c}{values -- exact} \\
& SAO & SSO & SAO & SSO \\
CASSCF/XP & -0.99 & -1.32 & 0.67 & 0.34 \\
MRCI/XP & -1.19 & -1.35 & 0.47 & 0.31 \\
MRCI/ED & -1.56 & -1.66 & 0.08 & 0.00 \\
MRCI+Q/ED & -1.68 & -1.71 & -0.02 & -0.05 \\
\hline
\hline
\end{tabular}
\end{center}
$^a$ XP or ED specifies whether dipoles were computed as expectation values or energy derivatives (field strength $\lambda = \pm 10^{-4}$~au). \\
\end{table*}
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=0.85\textwidth]{awc5z_1S_dipole_curves.eps}
\caption{\emph{Ab initio} dipole curves for the ground $X\,{}^{1}{\Sigma^+}$ term computed in the awc5z basis set and various methods (see text). \label{fig.1S.dipoles}}
\end{center}
\end{figure}
As one can see in table~\ref{tab:mrci.eq.dipole} both CASSCF and MRCI/XP
equilibrium dipoles are too small in magnitude by about 0.3--0.5~D, a
considerable amount; using state-specific orbitals reduces the error to about
0.3~D, indicating that the CASSCF and MRCI wave functions are quite far from
the exact, full CI one (which is independent of the choice of the orbitals).
Computing the dipole by energy derivative greatly reduces the error in the
MRCI dipoles, bringing them in much closer agreement with the coupled cluster
value. Using Davidson-corrected energies (MRCI+Q) introduces a shift of about
0.12~D and brings the dipole curve near equilibrium even closer to the coupled
cluster one (see also fig.~\ref{fig.1S.dipoles}). The Davidson-corrected dipole
curve (`relaxed' reference energies were used) has a small jump discontinuity
(of magnitude 0.02~D) at $r=4.6$~a$_0$; on the other hand the MRCI/ED curve is
perfectly smooth for all bond lengths. In conclusion the major factor affecting
MRCI dipoles is whether the XP or ED technique is used, with ED producing
considerably better dipoles. Using SSO instead of SAO helps somewhat but is of
secondary importance.
It is also worth noting that, although the absolute value of MRCI/XP
dipoles is considerably off, this quantity only affects line
intensities for pure rotational transitions. Intensities of
vibrational transitions within an electronic term depend on the shape
of the dipole function, and may be given quite accurately even by
MRCI/XP.
Finally, we decided to use awc5z/MRCI/ED for all diagonal dipole curves by virtue of their smoothness,
although using MRCI+Q dipoles may lead to a slight improvement.
Off-diagonal dipoles were computed as expectation values of the awc5z/MRCI wave
functions; although it is possible to compute off-diagonal dipoles using an
energy-derivative technique \cite{98AdZaSt.method}, this route was not pursued
at this time. Figure~\ref{dipole.curves} shows the diagonal and off-diagonal
dipole moment curves for the various electronic terms.
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=0.85\textwidth]{ScH_awc5z_mrci_dipoles.eps}
\caption{Diagonal (in bold) and off-diagonal dipole moment curves for ScH computed with MRCI and the awc5z basis set. \label{dipole.curves}}
\end{center}
\end{figure}
\subsection{Spin-orbit and other coupling curves}
We computed spin-orbit couplings and couplings of the angular momentum operators $\hat{L}_x$ and $\hat{L}_y$
using the CASSCF or MRCI wave functions.
Figure~\ref{LxLy.curves} shows matrix elements of the $\hat{L}_x$ and $\hat{L}_y$ operators, obtained at the CASSCF/awc3z level;
these couplings enter in the $L$-uncoupling and spin-electronic terms of the
rotational Hamiltonian \cite{86LeFexx} and are responsible for $\Lambda$-doubling.
Finally, fig.~\ref{SO.curves} reports the 10 symmetry-independent spin-orbit coupling curves obtained at the
CASSCF/awc3z level. Care was taken to ensure that the coupling curves and dipoles are both smooth and phase
corrected, something that is by no means standard in the literature \cite{jt573}.
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=0.85\textwidth]{ScH_awc3z_LxLy_couplings.eps}
\caption{Matrix elements of the $\hat{L}_x$ and $\hat{L}_y$ operators for ScH computed with CASSCF and the awc3z basis set. Specifically, the curve labelled `1' is $\langle ^3 \Delta_{x^2-y^2} |\hat{L}_x | ^3 \Pi_y \rangle /i$; curve `2' is $\langle ^1 \Delta_{x^2-y^2} |\hat{L}_x | ^1 \Pi_y \rangle /(-i)$; curve `3' is $\langle ^3 \Sigma^+ |\hat{L}_y | ^3 \Pi_x \rangle /(\sqrt{3} i)$; curve `4' is $\langle ^1 \Sigma^+ |\hat{L}_y | ^1 \Pi_x \rangle /(\sqrt{3} i)$. The phases of the electronic wave functions are chosen such that $\langle ^1 \Pi_x |\hat{L}_z | ^1 \Pi_y \rangle=i$, $\langle ^3 \Pi_x |\hat{L}_z | ^3 \Pi_y \rangle=i$, $\langle ^1\Delta_{x^2-y^2} |\hat{L}_z | ^1\Delta_{xy} \rangle=-2i$ and $\langle ^3\Delta_{x^2-y^2} |\hat{L}_z | ^3\Delta_{xy} \rangle=2i$. \label{LxLy.curves}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=0.85\textwidth]{ScH_awc3z_spin_orbit_couplings.eps}
\caption{Spin-orbit coupling matrix elements for ScH computed with CASSCF and the awc3z basis set. Specifically: the curve labelled `1' is $\langle ^3 \Delta_{x^2-y^2}, \Sigma=1 |\hat{H}_\mathrm{SO} | ^3 \Delta_{xy}, \Sigma=1 \rangle /(-i)$; curve `2' is $\langle ^1 \Delta_{xy} |\hat{H}_\mathrm{SO} | ^3 \Delta_{x^2-y^2} , \Sigma=0\rangle /(-i)$; curve `3' is $\langle ^3 \Sigma^+, \Sigma= 0 |\hat{H}_\mathrm{SO} | ^3 \Pi_y , \Sigma=1\rangle /(-i)$; curve `4' is $\langle ^1 \Pi_y |\hat{H}_\mathrm{SO} | ^3 \Sigma^+ , \Sigma=1\rangle /i$; curve `5' is $\langle ^1 \Sigma^+ |\hat{H}_\mathrm{SO} | ^3 \Pi_y , \Sigma=1\rangle /i$; curve `6' is $\langle ^3 \Pi_x , \Sigma= 1 |\hat{H}_\mathrm{SO} | ^3 \Pi_y , \Sigma=1\rangle /(-i)$; curve `7' is $\langle ^1 \Pi_x |\hat{H}_\mathrm{SO} | ^3 \Pi_y , \Sigma=0\rangle /i$; curve `8' is $\langle ^3 \Pi_x , \Sigma= 0 |\hat{H}_\mathrm{SO} | ^3 \Delta_{xy} , \Sigma=1\rangle /i$; curve `9' is $\langle ^1 \Delta_{xy}|\hat{H}_\mathrm{SO} | ^3 \Pi_x , \Sigma=1\rangle /(-i)$; curve `10' is $\langle ^1 \Pi_y |\hat{H}_\mathrm{SO} | ^3 \Delta_{x^2-y^2} , \Sigma=1\rangle /(-i)$. The phases of the electronic wave functions are chosen such that $\langle ^1 \Pi_x |\hat{L}_z | ^1 \Pi_y \rangle=i$, $\langle ^3 \Pi_x |\hat{L}_z | ^3 \Pi_y \rangle=i$, $\langle ^1\Delta_{x^2-y^2} |\hat{L}_z | ^1\Delta_{xy} \rangle=-2i$ and $\langle ^3\Delta_{x^2-y^2} |\hat{L}_z | ^3\Delta_{xy} \rangle=2i$.\label{SO.curves}}
\end{center}
\end{figure}
\section{Line list}\label{section.linelist}
The potential energy, dipole and coupling curves were then used with the
in-house program {\sc Duo} to produce a line list for $^{45}$ScH. {\sc Duo}
solves in an essentially exact way the rotational-vibrational-electronic
problem for multiple interacting energy curves for diatomic molecules and is
described in detail elsewhere \cite{jt606,jt589,jt598}. The line list can be
obtained from \textit{www.exomol.com}, while all the curves used to produce it
are made available as supplementary material. In all nuclear-motion
calculations we used the atomic masses $m_\mathrm{H}=1.0078250321$~Da and
$m_\mathrm{Sc}=44.9559100$~Da, which give for ScH a reduced mass $\mu =
(m_\mathrm{H}^{-1} + m_\mathrm{Sc}^{-1})^{-1} =0.985726930$~Da =
1796.87027~$m_e$.
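The quoted reduced mass follows directly from the atomic masses; the sketch below reproduces both numbers (the Da-to-$m_e$ conversion factor 1822.888486 is our assumption of the CODATA value, not stated in the text):

```python
# Reduced mass of ScH from the atomic masses quoted in the text.
M_H  = 1.0078250321     # Da
M_SC = 44.9559100       # Da
DA_TO_ME = 1822.888486  # 1 Da in electron masses (assumed CODATA value)

mu_da = 1.0 / (1.0 / M_H + 1.0 / M_SC)  # harmonic mean of the two masses
mu_me = mu_da * DA_TO_ME
print(f"{mu_da:.9f} Da = {mu_me:.5f} m_e")
```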
The \emph{ab initio} potential energy curves of the singlet terms were adjusted
by fitting to the energy term values ($J\le 12$) derived from the experimental
spectroscopic constants reported by Ram and Bernath \cite{97RaBexx.ScH} which
cover vibrational excitations with $v=0,1$ ($X$), $v=0$ ($A$) and $v=0,1,2$
($B$) only. This was not possible for the triplet states (see
table~\ref{tbl.eq}). We also decided not to use in the adjustments the very
recent experimental data on the $a\,{}\leftexp{3}{\Delta}$ electronic state
\cite{14MoBhNa.ScH}, since they lead to an equilibrium bond length
$r_e=3.94$~a$_0$ which differs too substantially from theory to be safely
trusted, see table~\ref{tbl.eq}.
In the refined curves the dissociation energy was fixed to the $D_e$ value of
Kant and Moon \cite{81KaMoxx.ScH}. The triplet curves were also scaled
to dissociate to the same value of $D_e$. All refined curves are given as
supplementary material to the paper together with the {\it ab initio} curves.
The triplet electronic states appear to be in strong resonance with the
rovibronic states from $B\,{}\leftexp{1}{\Pi}$, which prevented an accurate fit to
the $B$-state energies, especially for $J>12$. It should be noted, however, that
the spectroscopic constants from Ref.~\cite{97RaBexx.ScH} were derived
neglecting the interaction with the triplet states, and thus can also
contain artifacts.
We then used the program {\sc Duo} \cite{jt606} to solve the coupled
Schr\"{o}dinger equations
to compute the rovibronic energies of ScH up to dissociation.
In particular, we obtained for $^{45}$ScH a zero-point energy of 799.6~cm$^{-1}$. The
highest value the total angular momentum $J$ can assume for bound states is
found to be $J=59$.
The corresponding rovibronic eigenfunctions were combined with the \emph{ab
initio} dipole moment curves to produce Einstein~A coefficients for all
transitions with line positions up to $D_0$. The Einstein~A coefficients
together with the rovibrational energies supplemented by the total degeneracies
and quantum numbers make up the line list.
{\sc Duo} calculations consist of two steps: in the first step we used a grid
of 501 points to solve six separate vibrational Schr\"{o}dinger equations for
each electronic state, using as potential the \emph{ab initio} potential curves
shown in Fig.~\ref{energy.curves} or the empirically adjusted curves described
above. We then selected the 40 lowest-energy eigenfunctions from each set; the
union of these $40\times 6 = 240$ functions constitutes our vibrational basis
set $|{\rm state},v\rangle $, where $v$ is the vibrational quantum number and
`state' is the label identifying the electronic state. In the second step of
the calculation we build a basis set of Hund's case~a functions of the type
\begin{equation}\label{e:basis}
|{\rm state}, \Lambda, S, \Sigma, v \rangle = | {\rm state}, \Lambda, S, \Sigma \rangle | J,\Omega,M \rangle |{\rm state},v\rangle,
\end{equation}
where $ | {\rm state}, \Lambda, S, \Sigma \rangle$ is the electronic function,
$| J, \Omega,M \rangle$ is the rotational function and $|{\rm state},v\rangle $ is
one of the vibrational functions; $\Lambda$, $\Sigma$, and $\Omega$ are the $z$ axis
projections of the electronic, spin and total angular momenta, respectively,
and $\Omega= \Lambda+\Sigma$; $M$ is the projection of the total angular
momentum along the laboratory axis $Z$. The full, coupled problem is then solved
by exact diagonalization in the chosen basis set.
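As an illustration of the size of the basis of eq.~(\ref{e:basis}), the sketch below enumerates the $(\Lambda,\Sigma)$ electronic-spin components of the six terms; combined with the 40 vibrational functions per electronic state this gives the dimension of the coupled problem per $J$ before any $|\Omega| \le J$ or symmetry restrictions. The counting here is our own illustration, not {\sc Duo}'s internal bookkeeping:

```python
# (term label, |Lambda|, S) for the six electronic terms of ScH
terms = [("X1Sigma+", 0, 0), ("A1Delta", 2, 0), ("B1Pi", 1, 0),
         ("a3Delta", 2, 1), ("b3Pi", 1, 1), ("c3Sigma+", 0, 1)]

components = []
for label, lam, S in terms:
    lambdas = [0] if lam == 0 else [-lam, lam]   # +Lambda / -Lambda components
    sigmas = [S - k for k in range(2 * S + 1)]   # Sigma = S, S-1, ..., -S
    for L in lambdas:
        for Sig in sigmas:
            components.append((label, L, Sig, L + Sig))  # Omega = Lambda + Sigma

n_vib = 40  # vibrational functions kept per electronic state
print(len(components), len(components) * n_vib)  # 20 800
```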
In order to guarantee that all phases of the \emph{ab initio} couplings as well
as transition dipole moments are consistent, we used the matrix elements of the
$\hat{L}_z$ operator between the corresponding degenerate $\Pi$ and $\Delta$
components as provided by Molpro. These matrix elements were then used to
transform the matrix elements of all couplings to the representation where
$\hat{L}_z$ is diagonal, which is used in {\sc Duo} in accordance with
eq.~(\ref{e:basis}). The phases are chosen such that all matrix elements are
positive.
Our $^{45}$ScH line list contains 1~152~827 transitions and is given in the
ExoMol format \cite{jt548} consisting of two files, an energy file and a
transition file. This is based on a method originally developed for the BT2
line list \citep{jt378}. Extracts from the line list are given in
Tables~\ref{t:Energy-file} and \ref{t:transition-file}. Using all energies of
$^{45}$ScH we computed its partition function for temperatures up to 5000~K.
The line lists and partition function together with auxiliary data including
the potential energy, spin-orbit, electronic angular momentum, and dipole
moment curves, as well as the absorption spectrum given in cross section format
\citep{jt542}, can all be obtained from the ExoMol website at
\textit{www.exomol.com}.
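The partition function is obtained by direct summation over the computed levels, $Q(T)=\sum_i g_i \exp(-c_2 \tilde{E}_i/T)$ with $c_2=hc/k_B$; a minimal sketch (with a toy three-level input standing in for the full energy file) is:

```python
import numpy as np

C2 = 1.4387770  # second radiation constant hc/k_B in cm K

def partition_function(g, energies_cm, T):
    """Direct sum Q(T) = sum_i g_i exp(-c2 E_i / T) over all computed
    rovibronic levels, with E_i in cm^-1 relative to the ground state."""
    g = np.asarray(g, dtype=float)
    E = np.asarray(energies_cm, dtype=float)
    return float(np.sum(g * np.exp(-C2 * E / T)))

# Toy three-level input; real use would read the degeneracy and energy
# columns of all 8451 entries of the ScH energy file.
g = [16, 16, 16]
E = [0.0, 1547.095548, 3019.322250]
print(partition_function(g, E, 298.0))
```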
As an example, in fig.~\ref{f:298K} we show the absorption intensities of ScH
at $T=298$~K as a stick diagram. Figure~\ref{f:overview}
illustrates in detail the band structure of the absorption spectrum of ScH at
$T=1\,500$~K, generated using a Gaussian line profile with a
half-width-at-half-maximum (HWHM) of 5~cm$^{-1}$. As one can see, the strongest
features belong to the $X$--$X$, $B$--$X$, $a$--$a$, and $c$--$b$ electronic
bands. The triplet electronic bands $a$--$a$ and $c$--$b$ should be
strong enough to be potentially observable in the laboratory.
Due to the spin-orbit couplings between different electronic components,
forbidden bands also contribute, with $A$--$X$ and $c$--$X$ being the
strongest. These bands are significantly weaker than the dipole-allowed ones,
but could still represent an important source of ScH opacity.
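Such cross sections correspond to spreading each stick intensity over a normalized Gaussian; a minimal sketch of this convolution, with toy line positions and intensities and the same HWHM of 5~cm$^{-1}$, is:

```python
import numpy as np

def gaussian_cross_section(positions, intensities, grid, hwhm):
    """Spread stick intensities onto a wavenumber grid using a
    normalized Gaussian line profile with the given HWHM (cm^-1)."""
    sigma = hwhm / np.sqrt(2.0 * np.log(2.0))  # convert HWHM to sigma
    norm = 1.0 / (sigma * np.sqrt(2.0 * np.pi))
    xs = np.zeros_like(grid)
    for nu0, s in zip(positions, intensities):
        xs += s * norm * np.exp(-0.5 * ((grid - nu0) / sigma) ** 2)
    return xs

# Toy stick spectrum: two lines with made-up intensities, HWHM = 5 cm^-1.
grid = np.linspace(1400.0, 1700.0, 3001)
xs = gaussian_cross_section([1500.0, 1547.1], [1.0e-20, 5.0e-21], grid, 5.0)
```

Because the profile is normalized, integrating the cross section over wavenumber recovers the total stick intensity.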
\begin{table}
\footnotesize
\caption{Sample extract from the energy file for $^{45}$ScH. The whole file contains
8~451 entries.}
\label{t:Energy-file}
\begin{tabular}{rrrrrrlrrrr c rrr}
\hline\hline
$i$ & \multicolumn{1}{c}{$\tilde{E}$} & $g$ & $J$ & \multicolumn{1}{c}{$+/-$} & \multicolumn{1}{c}{$e/f$} & State & $v$ & $|\Lambda|$& $|\Sigma|$ & $|\Omega|$& \\
\hline
\texttt{ 1}&\texttt{ 0.000000}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 2}&\texttt{ 1547.095548}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 1}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 3}&\texttt{ 3019.322250}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 2}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 4}&\texttt{ 3352.480112}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 0}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 5}&\texttt{ 4430.504406}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 3}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 6}&\texttt{ 4707.209870}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 1}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 7}&\texttt{ 5789.270187}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 4}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 8}&\texttt{ 6015.690883}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 2}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 9}&\texttt{ 7100.259438}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 5}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 10}&\texttt{ 7277.938899}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 3}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 11}&\texttt{ 8363.776161}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 6}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 12}&\texttt{ 8494.401968}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 4}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 13}&\texttt{ 9574.352490}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 7}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 14}&\texttt{ 9667.692015}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 5}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 15}&\texttt{ 10718.414759}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 6}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 16}&\texttt{ 10804.884934}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 8}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 17}&\texttt{ 11788.395483}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 7}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 18}&\texttt{ 11902.873682}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 9}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 19}&\texttt{ 12788.251034}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 8}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 20}&\texttt{ 12941.194344}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 10}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 21}&\texttt{ 13714.254989}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 9}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 22}&\texttt{ 13898.140315}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 11}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\texttt{ 23}&\texttt{ 14551.459500}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{b3Pi }&\texttt{ 10}&\texttt{ 1}&\texttt{ 1}&\texttt{ 0}\\
\texttt{ 24}&\texttt{ 14746.814942}&\texttt{ 16}&\texttt{ 0}&\texttt{+ }&\texttt{f }&\texttt{X1Sigma+ }&\texttt{ 12}&\texttt{ 0}&\texttt{ 0}&\texttt{ 0}\\
\hline
\end{tabular}
\mbox{}\\
$i$: State counting number. \\
$\tilde{E}$: State energy in cm$^{-1}$. \\
$g$: State degeneracy. \\
$+/-$: Actual state parity. \\
$e/f$: Rotationless parity. \\
$v$: State vibrational quantum number. \\
$|\Lambda|$: Absolute value of $\Lambda$ (projection of the electronic angular momentum). \\
$|\Sigma|$: Absolute value of $\Sigma$ (projection of the electronic spin). \\
$|\Omega|$: Absolute value of $\Omega=\Lambda+\Sigma$ (projection of the
total angular momentum).
\end{table}
\begin{table}
\footnotesize
\caption{Sample extract from the transition file for $^{45}$ScH. The whole file contains
1~152~826 entries.}
\label{t:transition-file}
\begin{tabular}{rrr}
\hline
$f$& $i$ & \multicolumn{1}{r}{$A_{\rm if}$} \\
\hline
\texttt{ 1351} & \texttt{ 1231} & \texttt{3.3006E-07}\\
\texttt{ 3574} & \texttt{ 3468} & \texttt{3.8843E-07}\\
\texttt{ 5782} & \texttt{ 5693} & \texttt{5.0269E-06}\\
\texttt{ 4942} & \texttt{ 5037} & \texttt{9.9272E-06}\\
\texttt{ 7688} & \texttt{ 7734} & \texttt{4.9607E-03}\\
\texttt{ 2070} & \texttt{ 1952} & \texttt{3.8782E-01}\\
\texttt{ 2919} & \texttt{ 2580} & \texttt{1.0196E+00}\\
\texttt{ 2804} & \texttt{ 2692} & \texttt{1.0196E+00}\\
\texttt{ 1362} & \texttt{ 1713} & \texttt{8.5042E-08}\\
\texttt{ 5638} & \texttt{ 5727} & \texttt{6.8009E-04}\\
\texttt{ 3137} & \texttt{ 3242} & \texttt{4.2340E-06}\\
\texttt{ 3672} & \texttt{ 3568} & \texttt{1.9332E-04}\\
\texttt{ 7406} & \texttt{ 7350} & \texttt{4.6729E-06}\\
\texttt{ 5984} & \texttt{ 5901} & \texttt{1.2467E-05}\\
\texttt{ 3396} & \texttt{ 3505} & \texttt{9.4844E-05}\\
\texttt{ 993} & \texttt{ 1110} & \texttt{1.0904E-06}\\
\texttt{ 3324} & \texttt{ 2999} & \texttt{1.9023E-05}\\
\texttt{ 2398} & \texttt{ 2282} & \texttt{1.0987E-04}\\
\hline
\end{tabular}
\mbox{}\\
$f$: Upper (final) state counting number. \\
$i$: Lower (initial) state counting number.\\
$A_{\rm if}$: Einstein~A coefficient in s$^{-1}$.
\end{table}
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=0.85\textwidth]{T300K_absorption.eps}
\caption{Absorption cross-sections of ScH at $T$=298~K. \label{f:298K}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[angle=0, width=0.85\textwidth]{1500K_absorption_overview.eps}
\caption{Overview of the absorption line intensities of ScH at $T$=1500~K. \label{f:overview}}
\end{center}
\end{figure}
\section{Conclusions}
A hot line list containing pure rotational, ro-vibrational and vibronic
transitions for ScH was generated using new {\it ab initio} potential energy,
dipole moment, spin-orbit, and electronic angular momentum curves obtained at a
high level of theory. The work was performed with a view to astrophysical
applications. The analysis of the importance of different absorption bands for
the opacities of ScH is presented.
The complexity of the electronic structure problem when transition metals are
involved means that the accuracy of these calculations is much worse than what
is normally achievable for small molecules containing light main-group
elements. To help mitigate this problem, it is desirable to utilise
experimental data to improve the accuracy of the results. We have done this for
the singlet states.
However, for transitions involving triplet states there are no measured data
for us to use: for example, the recent triplet--triplet measurements by Mukund
{\it et al.} \cite{14MoBhNa.ScH} in the 17~940~cm$^{-1}$ region lie well above our
calculated transitions, and no absolute energies were reported for the lower
$a\,{}^3\Delta$ electronic state. Moreover, the only parameter with which we
can directly compare, $B_e$, appears to be too low to be fully trusted. Thus for the
triplet states we used the $T_e$ values taken from our {\it ab initio}
calculations, which may be in error by up to a few thousand cm$^{-1}$, implying that
entire bands may be erroneously shifted by similar amounts. Also, structure
due to perturbations in both the singlet and triplet manifolds depends
on the separations of the potential energy curves and therefore may not be
reproduced accurately. However, we expect our line list to be quantitatively
accurate for the singlet state energies and to provide a detailed spectral
structure of individual bands that should prove useful.
\section*{Acknowledgements}
This work was supported by ERC Advanced Investigator Project 267219.
\bibliographystyle{tMPH}
\section*{Acknowledgment}
\par
The authors would like to thank the faculty of the University of Maryland, Baltimore County (UMBC) and the research group at General Electric Global Research for funding, actively participating in, and guiding the research work.
\section{Architecture building blocks for SEDAT}
\par After establishing the theory and design requirements, SEDAT is designed in a modular fashion. Each module performs a dedicated task, as follows.
\subsection{\bf{Device provisioner}} This is the first communication anchor between prover and verifier. The prover sends a hello message to the verifier, which responds with a counter value. The prover uses this counter value together with the pre-shared key to calculate a Hashed One-Time Password (HOTP) and sends the HOTP to the verifier. The verifier compares the received HOTP with the one it has computed itself; if they match, SEDAT enrolls the client with the remote verifier by sending an acknowledgment signal, otherwise the connection is dropped. In the following message the prover sends its device information, BIOS details, and OS details to the verifier, and the device provisioning task is finished by recording the response into the database.
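The HOTP step can be sketched as follows; this is an illustrative RFC~4226-style implementation, not SEDAT's production code, and the key and counter values are placeholders:

```python
import hmac, hashlib, struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based One-Time Password (RFC 4226 style): HMAC-SHA1 of the
    8-byte big-endian counter, dynamically truncated to `digits` digits."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Verifier side: recompute the HOTP with the shared key and the counter
# it issued, then compare in constant time before enrolling the prover.
shared_key = b"pre-shared-key"   # placeholder; provisioned out of band
counter = 42                     # placeholder challenge from the verifier
received = hotp(shared_key, counter)          # stand-in for prover's reply
enrolled = hmac.compare_digest(received, hotp(shared_key, counter))
```

A constant-time comparison avoids leaking information through timing differences.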
\subsection{\bf{Platform attestation}} This tool performs all the steps listed in the platform validation subsection. It also takes ownership of the platform and the TPM module, as shown in Fig-5.
\begin{figure}[H]
\begin{center}
\includegraphics[width=3.5in]{figs/provisioningowner.png}
\end{center}
\vspace{-2ex}
\caption{Platform and TPM supplier certificates binding, establishing platform ownership
\vspace{-2ex}
\label{fig:binding}}
\end{figure}
\subsection{\bf{Firmware and software event logs generation}} These are a combination of patches and scripts which pull the firmware and IMA event logs into user space and, using our tools, convert them into CEL format. \par Fig-4 depicts the new Canonical Event Log (CEL) structure for IMA and firmware events recommended by the Trusted Computing Group (TCG). Each field in the event log carries Tag, Length, and Value (TLV) parameters. For firmware, the first event is in the TPM~1.2 event log format as per the PC Client specification by TCG, which carries information about BIOS firmware versions, whether the events are crypto-agile, the supported hashing algorithms, etc. From the second event onwards the records are the actual firmware events, as explained before. Each record holds a PCR number, a 32-bit value between PCR0 and PCR9 for firmware event logs and PCR10 for IMA. The Digest field identifies the hashing algorithm used for that event; the length of the Digest value depends on that algorithm (20 bytes for SHA-1, 32 bytes for SHA-256, and so on), and its Value holds the extended PCR value. The Event Content field holds the actual data in TLV format. On top of all this, the event log has a sequence number field for both firmware and IMA event logs, making the records more meaningful and easier to follow when transferred to the verifier.
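The TLV walk over such a log can be sketched as below; the field widths used here (1-byte tag, 4-byte big-endian length) follow the general TLV pattern described above and are illustrative, and the toy records are placeholders rather than real CEL events:

```python
import struct

def parse_tlv_records(blob: bytes):
    """Walk a blob of concatenated TLV records, each a 1-byte tag,
    a 4-byte big-endian length, and `length` bytes of value.
    (Illustrative layout; the CEL specification defines the exact
    field widths and nesting.)"""
    records, pos = [], 0
    while pos < len(blob):
        tag = blob[pos]
        (length,) = struct.unpack_from(">I", blob, pos + 1)
        value = blob[pos + 5:pos + 5 + length]
        records.append((tag, value))
        pos += 5 + length
    return records

# Toy blob: a sequence-number record followed by a digest-like record.
blob = (bytes([0x00]) + struct.pack(">I", 4) + struct.pack(">I", 7)
        + bytes([0x03]) + struct.pack(">I", 3) + b"\xaa\xbb\xcc")
recs = parse_tlv_records(blob)
```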
\subsection{\bf{Quote attestation}} A quote check is the mechanism used in TPM-based attestation to validate the identity and authenticity of the platform. Typically the prover generates the quote and validates it locally; to the best of our knowledge, no available solution performs the quote check at the remote verifier. This tool is a quote verifier: it runs a couple of scripts on the prover to obtain the quote at the RV and, following the same process, regenerates the quote at the RV and matches the two.
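Conceptually, the verifier-side check amounts to replaying the PCR extend chain, $PCR \leftarrow H(PCR \,\|\, digest)$, over the golden event log and comparing the result with the PCR value carried in the signature-checked quote; a minimal sketch with placeholder events:

```python
import hashlib

def replay_pcr(event_digests, alg="sha256"):
    """Replay the TPM extend operation over an ordered list of event
    digests: PCR_new = H(PCR_old || digest), starting from an
    all-zero PCR."""
    size = hashlib.new(alg).digest_size
    pcr = b"\x00" * size
    for digest in event_digests:
        pcr = hashlib.new(alg, pcr + digest).digest()
    return pcr

# The verifier replays the golden event log and compares the result
# with the PCR value reported inside the quote (after verifying the
# quote signature with the AK public key; omitted here).
golden = [hashlib.sha256(b"event-%d" % i).digest() for i in range(3)]
expected_pcr = replay_pcr(golden)
reported_pcr = replay_pcr(golden)  # stand-in for the value from the quote
assert expected_pcr == reported_pcr
```

Any tampering with the event log or with the device state changes the replayed PCR and the match fails.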
\par The next section explains the workflow of the SEDAT framework.
\subsection{Assumptions and Limitations of SEDAT}
\par SEDAT is implemented with hardware as the root of trust, using TPM~2.0. All devices provisioned with SEDAT are required to have a hardware or firmware TPM~2.0 and to have Intel's latest TPM2-TSS, TPM2-abrmd, and TPM2-tools installed. SEDAT assumes that there is a one-time trusted secure channel for the verifier to get all the root certificates from the device vendor. SEDAT uses single packet authorization to secure the communication channel between prover and verifier, so it assumes that the pre-shared secret key is loaded at both ends before communication starts between verifier and prover. It is protected from replay and DoS attacks, but covering more advanced attacks is out of the scope of this research. All prover devices are required to be patched with the provided IMA and firmware patches and the latest Linux kernel to obtain firmware and IMA event logs in CEL format. Protecting the prover or verifier from physical or side-channel attacks is out of scope. SEDAT can be validated on a software TPM with Intel's TPM2 stack.
\section{Conclusion}
In this paper we have presented proof-of-concept work for a remote verifier to perform end-to-end attestation of the hardware, firmware, and software of an untrusted device with TPM~2.0. To the best of our knowledge, SEDAT is the first solution to demonstrate a one-stop solution for verification of all three components with Intel's/TCG-recommended tpm2-tools stack. We are also the first to author tools for representing IMA and firmware event logs in CEL format. All code is open-sourced and available for future research.
\section{Evaluation}
We have compared SEDAT with other solutions and found that SEDAT outperforms them by providing a one-stop solution for hardware, firmware, and software remote verification with single packet authentication to protect from replay and DoS attacks. The table below shows the evaluation report.\\
\begin{table}[H]
\scalebox{0.92}[0.95]{
\begin{tabular}{@{}lcccc@{}}
\toprule
\multicolumn{5}{c}{Comparison table: SEDAT vs.\ other solutions} \\ \midrule
\multicolumn{1}{l|}{Parameters} & \multicolumn{1}{c|}{IBM-ACA} & \multicolumn{1}{c|}{Sublime} & \multicolumn{1}{c|}{HIRS-NSA} & SEDAT \\ \midrule
\multicolumn{1}{l|}{Endorsement certificate} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{Y} & Y \\
\multicolumn{1}{l|}{Platform certificate} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{Y} & Y \\
\multicolumn{1}{l|}{Platform attributes Certificate} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{Y} & Y \\
\multicolumn{1}{l|}{Platform mutable components} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{Y} & Y \\
\multicolumn{1}{l|}{CEL Firmware Event logs} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & Y \\
\multicolumn{1}{l|}{CEL IMA Event logs} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & Y \\
\multicolumn{1}{l|}{Quote generation} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{N} & Y \\
\multicolumn{1}{l|}{Quote check at RA} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & Y \\
\multicolumn{1}{l|}{Replay/DoS protection} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & \multicolumn{1}{c|}{N} & Y \\
\multicolumn{1}{l|}{Multi-OS support} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{N} & Y \\ \bottomrule
\end{tabular}
}
\end{table}
\subsection{Future Work}
We are planning to enhance our verifier to include secure communication and replay protection. We are also interested in a secure communication protocol to transfer the manufacturer's root CA and endorsement certificates securely to the verifier, as currently SEDAT assumes there is a one-time secure channel for transferring those. We are further motivated to close the loop and have the remote verifier take action once it detects a problem in attestation.
\section{Implementation: Tools and techniques}
\par The verifier and tools are implemented in the Go language, with PostgreSQL as the backend database. We took inspiration from the National Security Agency (NSA)'s platform verification tool called HIRS. The SEDAT verifier uses the NSA's tool called paccor to generate the platform certificate and platform attributes, and stores those two certificates along with the manufacturer's root CA certificate in the PostgreSQL database as the golden template. SEDAT transfers the golden CEL firmware and IMA event logs to the remote verifier using a replay-protected secure channel. We give the verifier full control to enable or disable validation of certificates, firmware event logs, and IMA logs.
\section{Introduction}
\par Traditional computing and embedded devices are proliferating into numerous and diverse aspects of everyday life. These devices are utilized in different domains, ranging from tiny personal gadgets to large industrial systems. The number of such devices connected to the Internet is growing rapidly, and securing these devices and our electronic infrastructure becomes increasingly difficult, in particular because a large fraction of devices cannot be managed by security professionals, nor can they be protected by firewalls. These devices are susceptible to attacks such as hardware modification or counterfeiting, which can cause serious security challenges; see, e.g., the Chinese supply chain hardware attack \cite{Schneier2018}. Malware infestation involves modifying a device's firmware and software and replacing benign code with malicious code, which can destroy physical equipment (e.g., see Stuxnet \cite{Vijayan}) or enable more sophisticated attacks threatening the safety of the users (e.g., see the Jeep hack \cite{Schneider}). This increasing importance confronts developers with new challenges. One of them is the verification of the identity and integrity of a device by a trusted remote attestation tool called a remote verifier.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{figs/intro.png}
\end{center}
\vspace{-2ex}
\caption{SEDAT: Remote Verifier
\vspace{-2ex}
\label{fig:SEDAT}}
\end{figure}
\par Fig-1 shows the high-level design of the trusted remote verifier, Security Enhanced Device Attestation with TPM 2.0 (SEDAT), used to attest untrusted devices. As can be seen, the verifier has to attest three components: hardware, firmware, and software to fully attest a device. Device suppliers provide platform root certificates, endorsement certificates, and platform attributes certificates. The device will have firmware and software modules that need to be unchanged and executed in the correct order to ensure integrity. The execution of the firmware and software modules creates event records in the form of firmware and IMA event logs, which will be explained in detail in the coming sections. Fig-1 includes a Reference Integrity Manifest (RIM) for both firmware and software event logs, which provides the golden measurements based on the Trusted Computing Group's (TCG) new recommendations. Details of each module and its workings are discussed in the following sections.
\par The Remote Verifier (RV) has to validate the integrity and authenticity of the hardware, firmware, and software state on the untrusted device against a known good state to attest the device. If validation of any of these states fails, the device attestation fails. The RV should be able to perform attestation on demand to ensure the correct state of the remote prover, and it can perform a quote check with TPM~2.0 to validate the EK and signing keys.
{\bf {Goals and Contributions:}}
\par In this paper, we present SEDAT, the first proof-of-concept work for remote attestation to ensure the integrity and authenticity of devices via a security-enhanced communication channel. Designing such a verifier is a challenging task, as there are many possible malfunctions: there could be counterfeit hardware, modified software, or a malicious prover triggering the attestation process. Therefore, we analyzed the requirements for designing a secure attestation protocol and depict how to address them with minimal features and assumptions. Further, we identified possible use-cases where SEDAT can be applied. Our work brings the following contributions:
\begin{itemize}
\item {\bf{Client provisioning}:} We provide a tool for client provisioning, the process that enables the platform owner to register the device with a remote verifier using a security-enhanced Single Packet Authorization (SPA) technique.
\item {\bf{Platform attestation}:} We provide a remote verifier for the platform which attests platform root CA certificates, endorsement certificates, and platform bindings.
\item {\bf{CEL firmware event logs}:} We provide tools to convert firmware event logs into the canonical event log (CEL) structure and verify them. We also validated upstreamed kernel patches for crypto-agile firmware event logs.
\item {\bf{CEL IMA event logs}:} We provide tools and kernel patches for getting IMA event logs into user space and converting them into CEL format.
\item {\bf{Quote check}:} We provide tools to check the quote at the remote verifier.
\item {\bf{Secure authentication}:} We provide tools for secure connection and authentication for the verifier, protecting it from replay and DoS-type attacks.
\end{itemize}
\par {\bf{Outline}:} Section 2 reviews related work, adversary models, and replay and DoS attacks on attestation. Section 3 provides the required preliminaries and notation, identifies the minimal requirements for a secure RV, and describes SEDAT. The prototype implementation of SEDAT is described in Section 4; its application to collective attestation is explained in Section 5. Next, the security of SEDAT is evaluated in Section 7, and the paper concludes in Section 8.
\section{Limitations of Current Solutions}
\par In recent research papers and in practice, the TPM has been used for hardware root of trust and remote attestation, where the integrity and authenticity of the platform rely on a TPM-based attestation key (AIK or AK).
\begin{itemize}
\item {\bf{AK limitation}:}
\par The AK is taken as the hardware root-of-trust anchor for remote attestation. The problem is that generation of the AK comes a few steps later in the device manufacturing process. The AK is derived from the endorsement key (EK), and the EK is provided by the platform/hardware vendor by fusing the private key and exposing the associated public key, as explained in the section above. So AK-based attestation fails to detect malicious or counterfeit hardware (introduced in the supply chain manufacturing process), as there is no validation of the authenticity and integrity of the hardware platform, the TPM module, or both, if they are tampered with, counterfeited, or malicious. Even if both the TPM module and the platform (the hardware device on which the TPM is placed) are genuine, if the binding that platform A should carry hardware TPM module A is not verified, then the EK certificate and the generated platform certificate will differ. In such cases the hardware counterfeiting is not detected by the remote attestation tool, and the AK is generated using the malicious hardware's EK.
\item {\bf{Firmware eventlogs limitation}:}
\par Firmware component measurements, their extension into the PCRs, and their execution are logged into firmware memory in the form of firmware event logs. So if the verifier validates the firmware event logs, this satisfies the requirement of firmware validation. However, the problem with the current state of firmware event logs is that they are not in a crypto-agile format and are not available in user space; they are only available in the UEFI shell, so they need to be read from the ACPI table into user space. A second problem is that they carry no sequence numbers for the events, so it is hard to read the events and keep track of them in the binary blob when it is transferred to the remote verifier.
\item {\bf{Software eventlogs limitation}:}
\par As with the firmware event logs, the IMA event logs also lack sequence numbers, so they are hard to keep track of when sent to the remote verifier. A second major problem is that they are stored in a small firmware memory which is not meant for storing incrementally growing, large IMA event logs; a mechanism is needed to free up the space once the blob has been read into user space.
\item {\bf{Quote check limitation}:}
\par Some attestation techniques have implemented quote generation with a nonce but check the quote only at the prover's end, since quote generation is a non-reversible operation.
\item {\bf{Protection to known attacks}:}
\par Some of the available solutions and attestation protocols have tried to enhance security against replay and DoS attacks, but the majority of attestation frameworks take the HTTPS protocol as the secure communication channel, which leaves room for a malicious prover to flood the verifier, delay it, or spoof the traffic to redirect and mislead it.
\end{itemize}
\par These limitations motivated our research to create a remote verifier which can overcome all or most of the above issues. So let us first discuss the requirements of the remote verifier and then explain SEDAT in detail.
\section{Problem Description}
\par Remote attestation is an interactive process between a trusted remote verifier (denoted RV) and a potentially untrusted remote device called the prover (denoted RP). It allows a trusted RV to capture the state of a potentially untrusted remote device. Essentially, the RV measures and hashes the software running on the RP, transfers the result to itself, and matches it against the golden measurement to attest the device. Remote attestation can be performed by hardware-only, software-only, or hybrid techniques. Each approach has its merits and demerits; hardware and hybrid approaches provide better security assurance as they use immutable hardware as the root of trust.
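Stripped of transport and key management, this measure-and-compare core can be sketched as follows (the software images are placeholders):

```python
import hashlib, hmac

def measure(software_blob: bytes) -> str:
    """Prover-side measurement: hash of the software state."""
    return hashlib.sha256(software_blob).hexdigest()

def attest(reported: str, golden: str) -> bool:
    """Verifier-side check: constant-time comparison of the reported
    measurement against the golden measurement."""
    return hmac.compare_digest(reported, golden)

# The verifier holds the golden measurement of the known-good image;
# the prover reports the measurement of whatever is actually running.
golden = measure(b"known-good firmware+software image")
ok = attest(measure(b"known-good firmware+software image"), golden)
bad = attest(measure(b"tampered image"), golden)
```

In practice the measurement is anchored in hardware (the TPM) rather than computed by mutable software, which is exactly what the hybrid approaches above provide.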
\par One approach to achieve better security is to equip these devices with a root of trust, such as a Trusted Platform Module (TPM), a Trusted Execution Environment (TEE), or Software Guard Extensions (SGX), and then have that root of trust attest the state of the device or the computations made. Many devices have a Trusted Platform Module (TPM) that fulfills these tasks.
\par Remote device attestation can be used to establish a static or dynamic root of trust in cyber-physical and industrial control systems. It can be used as a building block for other security services and primitives, such as device provisioning, firmware updates, and kernel software patching.
\subsection{\bf{TPM based remote attestation}:}
\par As shown in Fig-1, to attest a device the RV needs to validate all three components of the remote device: hardware, firmware, and software. In the last decade, researchers and industry have tried to solve the remote attestation problem and provided several solutions, but none of them covers all three at the same time. For example, the National Security Agency (NSA) has open-sourced a tool called HIRS \cite{HIRS} for complete platform/hardware attestation, but it does not cover firmware or software attestation; moreover, it works on CentOS~7 only and supports only old versions of Intel's tpm2-tools and tpm2-tss stack for TPM-based device attestation. Second, the International Business Machines Corporation (IBM) has open-sourced an implementation called Attestation Client Server (IBM-ACS) \cite{Goldman}. It does platform certificate, hardware, and software attestation, but it uses IBM's TSS and tools to communicate with a software TPM and is not completely supported for hardware or firmware TPMs; it also lacks support for verifying firmware and software event logs in the Canonical Event Log (CEL) structure. Google has an open-sourced implementation for remote attestation called go-attestation \cite{weeks}, which does not do platform attestation and starts at the AK as the root of trust. An academic implementation called keylime \cite{keylime} supports multiple platforms and languages for attestation, but it misses platform certificate validation and firmware and software event log validation. None of the above solutions protects against replay or Denial of Service (DoS) attacks, as they all work over HTTPS client-server protocols.
\par This motivated our research to first figure out which threat and adversary models remain open to address, and then to look at the limitations of the available attestation schemes.
\subsection{\bf{Threat and Adversary Model}:}
\par Following the adversarial models from \cite{Ibrahim2}, SEDAT classifies attacks on remote attestation into three categories.
\begin{itemize}
\item {\bf{Communication Adversary}:} The adversary has complete control over all communication channels. It can eavesdrop, inject or modify packets, and delay or drop packets. In the case of a DoS attack, the adversary floods the remote verifier with multiple provisioning requests and eventually brings the verifier down.
\item {\bf{Software Adversary}:} The adversary can exploit software vulnerabilities to infect the prover or verifier, read its unprotected memory regions, manipulate its software state, or fake the identity of the prover.
\item {\bf{Mobile Adversary}:} In addition to the software adversary's capabilities, these sophisticated mobile adversaries are capable of erasing all traces of their previous presence on the prover, which lets them escape detection by the remote verifier. Executing such a sophisticated attack requires knowledge of the exact execution time of the attestation.
\end{itemize}
\subsection{\bf{Limitations of Current Solutions}:}
\par In recent research papers and in practice, the TPM has been used for hardware root of trust and remote attestation, where the integrity and authenticity of the platform rely on a TPM-based attestation key (AIK or AK).
\begin{itemize}
\item {\bf{AK limitation}:} The AK is taken as the hardware root-of-trust anchor for remote attestation. The problem with taking the AK as the only hardware root of trust is that generation of the AK comes a few steps later in the device manufacturing process. The AK is derived from the endorsement key (EK), and the EK is provided by the platform/hardware vendor by fusing the private key and exposing the associated public key, as explained in the next section. So AK-based attestation fails to detect malicious or counterfeit hardware, TPM modules, or both. Also, some implementations use a software seed and AK key for attestation.
\item {\bf{Firmware eventlogs limitation}:} The system records the firmware events extended into the PCRs in the form of firmware event logs. Verifying firmware event logs assures the integrity of the boot sequence and firmware code states. However, the problems with the current state of firmware event logs are that they are not in a crypto-agile format and are not available in user space; they are only available in the UEFI shell, so they need to be read from the ACPI table into user space. Second, they carry no sequence numbers for the events, so it is hard to read the events and keep track of them in the binary blob when transferred to the remote verifier. Details of firmware event log generation are explained in the theory and requirements section of SEDAT.
\item {\bf{Software eventlogs limitation}:} As with the firmware event logs, the IMA event logs also lack sequence numbers. A second major problem is that they are stored in a small firmware memory which is not meant for storing incrementally growing, large IMA event logs; a mechanism is needed to free up the space once the blob has been read into user space.
\item {\bf{Quote check limitation}:} Some remote attestation implementations generate a quote with a nonce and check it at the prover, but it is more valuable to check the quote and recompute the same PCR values at the RV to ensure that there was no tampering in the middle.
\item {\bf{Protection against known attacks}:} Some non-TPM-based attestation works have tried to enhance security against replay and DoS attacks, but the majority of attestation frameworks rely on the HTTPS protocol as the secure communication channel.
\end{itemize}
\par All of the above reasons combined motivated our research and eventually resulted in the implementation of SEDAT. SEDAT is the first remote attestation scheme that performs hardware, firmware, and software attestation and is secure against Denial of Service (DoS) and replay attacks.
\section{Related work}
A Remote Verifier (RV) allows a trusted entity to securely measure the internal state of a remote untrusted platform (the prover). The RV can be used to establish a static or dynamic root of trust in cyber-physical and industrial control systems. It can be used as a building block for other security services and primitives, such as provisioning, updates, and patches. Current attestation approaches fall into two domains, namely collective attestation and single-device attestation.
\begin{itemize}
\item {\bf{Collective Attestation}:} \par Traditional attestation schemes consider only a single prover and verifier. Swarm/collective attestation aims at scaling existing attestation schemes to networks of embedded devices by leveraging in-network verification \cite{Asokan} and novel cryptographic primitives \cite{Ambrosin}.
SEDA \cite{Asokan} investigates the security of swarms of embedded devices. It presents the first attestation protocol for large swarms, allowing a central verifier to assess the trustworthiness of million-device swarms in the order of seconds. It achieves this by distributing the attestation burden across the swarm, allowing neighbors to attest each other, and aggregating the attestation results in a hop-by-hop manner. SANA \cite{Ambrosin} enables low verification overhead due to the integration of a novel Optimistic Aggregate Signatures (OAS) scheme, which is a generalization of aggregate and multi-signatures. Finally, DARPA \cite{Ibrahim} aims at detecting software compromise and device capture in embedded networks. Since DoS attacks on collective attestation are more significant, as they allow an adversary to disturb a large network by targeting one device, SANA \cite{Ambrosin} proposes using secure tokens obtained from a trusted third party to prevent scaling DoS attacks to large networks. However, SANA uses expensive public key cryptography, which imposes additional overhead on the Prv and is vulnerable to DoS attacks based on the digital signature verification procedure.
\item {\bf{Single Device Attestation}:} \par It has three main categories. Software-based attestation schemes \cite{Gardner,kennell,Seshadri,Seshadri2,Seshadri3,Seshadri4} do not require secure hardware. They enable attestation of legacy and low-end embedded devices under some assumptions: the adversary is not active during the attestation process, the attestation code and implementation are optimal, and an out-of-band authentication channel is present. For these reasons, the security of software-based attestation schemes has been challenged by \cite{Castelluccia,Sankar1, Wurster}, and their applicability and reliability are limited. Hardware-based attestation schemes \cite{Kovah, McCune, McCune2,Petroni ,Sailer, Schellekens} provide better security guarantees. Software/hardware co-design or hybrid schemes such as \cite{Brasser, Eldefrawy, Francillon, Koeberl} provide examples of the minimal hardware-based features required for enabling secure remote attestation. Such security features are as simple as a Read Only Memory (ROM) and a simple Memory Protection Unit (MPU).
\par SEDAT is the first remote attestation scheme that performs hardware, firmware, and software attestation and is secure against Denial of Service (DoS) and replay attacks.
\end{itemize}
\section{Theory, requirements, assumptions and limitations of SEDAT}
\subsection{\bf{Theory and remote attestation requirements}:} SEDAT has identified that the remote device attestation problem needs to address five issues. The theory and requirements for each are explained in the sections below. The assumptions and limitations considered while designing SEDAT are discussed in the next subsection.
\subsubsection{\bf{Platform Validation}} It is common practice in industry that the device vendor is not the one and only vendor for all the hardware and software modules comprised in the device. The device vendor assembles the device from all the different parts in the manufacturing unit and sends it to the warehouse. After receiving the final product, the device vendor needs to validate the authenticity and integrity of all the modules present in the device.
\begin{figure}[H]
\begin{center}
\includegraphics[width=3.5in]{figs/binding2.png}
\end{center}
\vspace{-2ex}
\caption{Platform and TPM supplier certificates binding, establishing platform ownership
\vspace{-2ex}
\label{fig-2:binding}}
\end{figure}
All hardware module manufacturers provide a root CA certificate for their modules, signed by the module manufacturer. The device vendor validates all module certificates, binds them together with the platform, and generates a self-signed platform certificate. The vendor creates a platform attributes certificate; the RV needs to verify the hardware certificates and their bindings.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/bootseq.png}
\vspace{-2ex}
\caption{Boot Sequence of (x86 / UEFI / TPM2.0)
\vspace{-2ex}
\label{fig-3:Boot Sequence}}
\end{figure*}
\par The platform verification process has the following steps, as depicted in Fig.~\ref{fig-2:binding}.
\begin{itemize}
\item {\bf{Step 1}:} TPM vendors will create endorsement private and public keys (EK) for the TPM platform. The private part of the endorsement key will be fused into the hardware, and the public key will be exposed for creating the platform attestation key (AK). This attestation key will act as a trust anchor in the hardware root of trust. EK is used for creating an endorsement certificate.
\item {\bf{Step 2}:} The platform supplier will provide a self-signed platform certificate. The system user can create a platform attributes certificate to get details about the hardware modules and parts present on the device.
\item {\bf{Step 3}:} The platform supplier will reference and bind the platform attributes and the TPM certificate generated in the previous step. This step makes the TPM exclusive to the platform. The platform attributes certificate provides details about what other hardware modules are present on the platform and which are mutable and non-mutable components.
\item {\bf{Step 4}:} Both the EK and platform certificates are stored in the NV storage of the TPM2.0.
\end{itemize}
\par In order to attest the platform, the RV needs to verify all of the above steps.
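As an illustration of the binding check, the sketch below assumes a simplified platform attributes certificate that references the EK certificate by a SHA-256 digest; the field name \texttt{ek\_cert\_digest} and the digest-based reference are assumptions for illustration, not the normative TCG certificate encoding.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Digest used to reference a certificate by value."""
    return hashlib.sha256(data).hexdigest()

def verify_binding(platform_attr_cert: dict, ek_cert_der: bytes) -> bool:
    """Illustrative check: the platform attributes certificate is assumed
    to carry a digest of the TPM's EK certificate; the RV recomputes the
    digest over the EK certificate it received and compares."""
    return platform_attr_cert["ek_cert_digest"] == sha256_hex(ek_cert_der)

# Hypothetical placeholder bytes standing in for a real DER certificate.
ek_der = b"...DER-encoded EK certificate bytes..."
attr_cert = {"ek_cert_digest": sha256_hex(ek_der)}
assert verify_binding(attr_cert, ek_der)
```

Any substitution of the EK certificate changes the digest and makes the check fail, which is the property the RV relies on.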
\subsubsection{\bf{Firmware Validation}} Firmware is a collection of code stored on a small memory chip of a device. It provides the necessary instructions for the device to communicate with other hardware and software modules within and outside the device. The device uses flash ROM to store firmware, and it is semi-permanent unless it is changed or upgraded. Understanding the platform boot sequence (x86 / UEFI / TPM2) is useful for performing firmware and software remote attestation. Fig.~\ref{fig-3:Boot Sequence} shows the boot sequence of an x86 / UEFI / TPM2 based hardware device.
\par The root of trust measurement is done by the following four basic operations:
\begin{center} LOAD $\rightarrow$ MEASURE $\rightarrow$ EXTEND $\rightarrow$ EXECUTE \end{center}
\par As seen in Fig.~\ref{fig-3:Boot Sequence}, when the system powers on, the reset vector first LOADs the Static Root of Trust Measurement (S-RTM) component of the boot firmware. The system MEASUREs its code by taking a secure hash, EXTENDs it into a PCR, and creates the first event record in the firmware event logs in firmware memory. Next, the system EXECUTEs the S-RTM module code, which gives the information regarding the next boot image. The program counter points to the next firmware image location, and the system performs the same LOAD, MEASURE, EXTEND, and EXECUTE operations on the next firmware component (e.g., FW C1). Each firmware component follows the same boot sequence and has an event extended into PCR0--PCR7, and the system generates an event record in the firmware event logs. The last firmware boot component points to the shim or shimx64 next-stage bootloader image, which, in turn, calls grub or grub2, followed by loading the Operating System (OS). These have events extended into PCR8 and PCR9, and the firmware event logs have a record of each event.
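The MEASURE and EXTEND steps amount to folding a chain of hashes into a register. The sketch below assumes SHA-256 PCRs and uses made-up component names; it is an illustration of the semantics, not TPM driver code.

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    """TPM EXTEND semantics: new PCR value = H(old PCR value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start at all zeros on reset; each boot component is MEASUREd
# (hashed) and then EXTENDed into the PCR in boot order.
pcr0 = bytes(32)
for component in [b"S-RTM", b"FW C1", b"FW C2"]:      # illustrative names
    measurement = hashlib.sha256(component).digest()   # MEASURE
    pcr0 = extend_pcr(pcr0, measurement)               # EXTEND
```

Because each extend folds the previous value into the next hash, the final PCR value commits to every component and to their order, which is why a verifier can replay the event log and compare.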
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figs/cel1.png}
\vspace{-2ex}
\caption{CEL IMA eventlogs
\vspace{-2ex}
\label{fig:ima eventlog}}
\end{figure*}
\par As seen in Fig.~\ref{fig-3:Boot Sequence}, today the firmware and software event logs are in different formats and do not have sequence numbers, which makes it hard to transfer a meaningful binary blob to the RV. One of the goals of SEDAT is to convert both event logs into the Canonical Event Log (CEL) structure recommended by the TCG.
\par Malware such as ransomware tries to change the firmware code or the device boot sequence to lock the device and its resources. Thus, assurance of firmware integrity is important for device attestation, along with boot sequence integrity. The RV needs to verify the firmware event logs.
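A minimal, CEL-inspired record with a sequence number is sketched below. The field names are illustrative and do not follow the normative TCG CEL encoding, but they show how sequence numbers make a transferred blob self-describing and tamper-evident against reordering.

```python
import hashlib
import json

def make_cel_record(recnum: int, pcr: int, event_data: bytes) -> dict:
    """Simplified, CEL-inspired event record. A monotonically increasing
    sequence number (recnum) lets the RV detect missing or reordered
    events when parsing the transferred blob."""
    return {
        "recnum": recnum,                                   # sequence number
        "pcr": pcr,                                         # PCR the event extended
        "digest": hashlib.sha256(event_data).hexdigest(),   # measured digest
        "content": event_data.hex(),                        # raw event payload
    }

log = [make_cel_record(i, 0, e) for i, e in enumerate([b"S-RTM", b"FW C1"])]
blob = json.dumps(log)  # one self-describing blob sent to the RV
```

On the verifier side, a gap or inversion in the `recnum` sequence immediately signals a damaged or manipulated log.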
\subsubsection{\bf{Software Validation}} Remote attestation of all the software on a device is a relatively hard and resource-heavy process. Instead, we can have a certain portion of the software modules attested to ensure that the critical portion of the software code and data is intact. The Trusted Computing Group (TCG) has recommended a standard for software integrity checking called the Integrity Measurement Architecture (IMA), which signs, measures, and extends the IMA-protected region of software into PCR10 of the TPM2.0. It also generates entries in the IMA event logs. The RV needs to verify the IMA event logs for the IMA software integrity check.
\subsubsection{\bf{Quote Validation}} TPM2.0 has a function called quote generation. The device's AK, the selected encryption algorithm, and the PCR values are used to generate a quote. TPM2.0 uses a nonce, sent from the verifier, to add freshness to the quote for an added layer of security and replay protection. The RV needs to validate the quote.
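A simplified verifier-side sketch of the nonce and PCR checks is shown below. A real quote is a signed attestation structure verified with the AK public key, which this illustration omits; the composite-digest construction here is likewise a simplification.

```python
import hashlib
import secrets

def pcr_composite_digest(pcr_values):
    """Digest over the concatenated selected PCR values, standing in for
    the digest a TPM binds into a quote (simplified)."""
    return hashlib.sha256(b"".join(pcr_values)).digest()

def verify_quote(quote, expected_pcrs, sent_nonce):
    """RV-side checks: the nonce must match what the RV sent (freshness,
    replay protection), and the quoted digest must match the digest the
    RV recomputes from the known-good PCR values."""
    return (quote["nonce"] == sent_nonce
            and quote["pcr_digest"] == pcr_composite_digest(expected_pcrs))

nonce = secrets.token_bytes(16)        # RV-chosen freshness value
pcrs = [bytes(32)] * 8                 # hypothetical golden PCR0-PCR7 values
quote = {"nonce": nonce, "pcr_digest": pcr_composite_digest(pcrs)}
assert verify_quote(quote, pcrs, nonce)
```

Replaying an old quote fails because its embedded nonce no longer matches the fresh value the RV sent for this round.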
The next section describes the assumptions and limitations we considered while designing SEDAT.
\subsection{\bf{Assumptions and Limitations of SEDAT}}
\par SEDAT is implemented keeping hardware as the root of trust using TPM2.0. All devices provisioned with SEDAT are required to have a hardware or firmware TPM2.0. Devices should have Intel's latest TPM2-TSS, TPM2-abrmd, and TPM2-tools installed. SEDAT assumes that there is a one-time trusted secure channel for the verifier to get all the root certificates from the device vendor. SEDAT uses single packet authorization for securing the communication channel between the prover and the verifier, so SEDAT assumes that a pre-shared secret key is loaded at both ends before communication starts between the verifier and the prover. It is protected from replay and DoS attacks, but covering some advanced attacks is out of the scope of this research. All prover devices are required to be patched with the provided IMA and firmware patches and the latest Linux kernel to get the firmware and IMA event logs in CEL format. Protecting the prover or verifier from physical and side-channel attacks is out of scope. SEDAT can be validated on a software TPM with Intel's TPM2 stack.
\section{SEDAT: framework design}
\par The framework of SEDAT works as follows:
\begin{itemize}
\item {\bf{Client provisioning}:} In this step, the untrusted prover establishes a secure communication channel with the remote verifier.
\item {\bf{Get TPM certificate}:} Using TPM2.0 commands, get the TPM EK and create the endorsement certificate from the manufacturer's root certificate site.
\item {\bf{Get platform certificate}:} Using TPM2.0 commands, get the platform certificate and run paccor to create the platform attributes certificate.
\item {\bf{Store certificates in TPM}:} Using TPM2.0 commands, store the platform certificate and endorsement certificate into the NV storage of the TPM2.0.
\item {\bf{Take ownership}:} Using TPM2.0 commands, take ownership of the platform.
\item {\bf{Get event logs for firmware and IMA in CEL format}:} Using the provided scripts, get the firmware and IMA event logs from /sys/kernel/security/tpm0/binary* and /system/kernel/security/ima/binary* respectively and send them to the RV.
\item {\bf{Generate quote}:} Using the tpm2\_quote command and an added freshness nonce, generate the quote.
\item {\bf{Validate quote at RV}:} Based on the information received at the remote verifier, regenerate and check the quote. This is implemented using IBM's software TPM, as the verifier does not need to have a TPM module.
\end{itemize}
\section{Use cases}
SEDAT can be applied to different application areas. Here we showcase three use cases.
\begin{figure}[H]
\begin{center}
\includegraphics[width=3.5in]{figs/supply.png}
\end{center}
\vspace{-2ex}
\caption{Supply Chain
\vspace{-2ex}
\label{fig:Supply Chain}}
\end{figure}
\begin{itemize}
\item {\bf{Supply Chain Validation}:}
\par In the supply chain, the platform supplier is located in a different geolocation and transports the devices from the assembly line to the warehouse; from there they go to a retail location or an installation facility. In this transport process, multiple untrusted anchors are involved, which can lead to counterfeit or substituted devices. TPM-based attestation provides the hardware root of trust, and signed-key and certificate validation increase security and trust. Our verifier helps in tracking with reduced cost and increased trust. It also reduces in-situ installation and replacement costs and makes remote key provisioning possible. Keys allow trusted remote configuration, and trusted channels using keys allow multiplexing connections, reducing cabling costs. With SPA, the communication is replay- and DoS-protected.\\
\begin{figure}[H]
\begin{center}
\includegraphics[width=3.5in]{figs/platvari.png}
\end{center}
\vspace{-2ex}
\caption{One-time secure channel
\vspace{-2ex}
\label{fig: One time secure channel}}
\end{figure}
\par As shown in Fig.~\ref{fig: One time secure channel}, we need a one-time trusted channel between the platform owner and the TPM and platform suppliers to securely transfer the platform supplier certificate and the TPM supplier certificate to the platform owner, where they are bound and referenced as discussed before. We are using a standard HTTPS connection for now.
\begin{figure}[H]
\begin{center}
\includegraphics[width=3.5in]{figs/variallplat.png}
\end{center}
\vspace{-2ex}
\caption{Supply Chain Validation
\vspace{-2ex}
\label{fig:Supply chain}}
\end{figure}
\par Fig.~\ref{fig:Supply chain} shows the supply chain validation use case.
\item {\bf{Inventory management}:}
\par In a large organization, the employer gives computers to employees, which they use for all corporate work, so integrity, authenticity, and inventory management of those computer devices are a must. SEDAT can be used as the remote attestation verifier, as most computers nowadays have a TPM chip. SEDAT can also be leveraged to list information on the mutable and immutable components of computers, as was shown in \cite{HIRS}. This helps in boot-time security and inventory management. If an employee has a broken display or Ethernet port while on vacation or travel and replaces it with an untrusted replacement part, the SEDAT verifier with the installed trust anchor still holds the golden state of all mutable and immutable components of the device as it was given to the employee on day one. So, the next time attestation takes place, the verifier's platform certificates will not match, and we can trigger an alert for appropriate action.
\item {\bf{Industrial control systems}:}
\par SEDAT can be used as a remote verifier for controllers and embedded devices with a hardware- or firmware-based TPM module. In such environments, verifying the integrity and authenticity of the platform is a key factor.
\end{itemize}
\section{Introduction}
\label{intro}
Software development teams nowadays benefit from online code review tools (e.g., Gerrit, Codestriker, and ReviewBoard) to effectively inspect patches and improve the code quality of their software systems,
while enabling the teams to perform asynchronous code reviews that are more lightweight and flexible~\cite{WANG2021111009, Google_2018}.
On the other hand, a large number of code reviews are being performed by software teams as new patches (i.e., a set of code changes) frequently occur in a contemporary code review setting~\cite{FSE2013_Rigby}.
For example, the 2018 OpenStack User Survey report\footnote[1]{https://www.openstack.org/user-survey/2018-user-survey-report/} showed that about 70,000 patches were reviewed, with an average of 182 reviewed changes per day.
Such a large number of code reviews potentially poses a new challenge for collaboration (e.g., improving the patch, fixing defects) during code review and development tasks.
More specifically, recent studies have highlighted evidence of why developers should collaborate across code review tasks.
Zhang
et al.~\cite{ICSME18_pull} found that redundant patches (i.e., patches that address the same task or problem) are often submitted for a review in software projects hosted in GitHub.
Ebert et al.~\cite{ebert2019confusion} observed that the inclusion of more people in the code review increases their awareness of the code change, i.e., confusion resolution contributes to knowledge sharing.
Recently, Wang et al.~\cite{WANG_emse} observed that developers are likely to share links during review discussions with several intentions to fulfill information needs.
Meanwhile, Hirao et al.~\cite{hirao2019fse} shed light that the patch linkage (i.e., posting a patch link to another patch) is used to indicate patch dependency, competing solutions, or provide broader context.
As recent work has shown that patch linkage can increase the awareness of the related patches, we further investigate to what extent developers collaborate across these linked patches.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig/intro_update.png}
\caption{A conceptual illustration that describes (1) a linkage between two patches is identified and (2) a collaboration activity happens where a developer on one patch contributes to the review of the other patch.}
\label{fig:illustration}
\end{figure}
\begin{figure*}[t]
\centering
\subfigure[In the example, one developer left a comment in Patch \texttt{211019}, in order to request a collaboration with another patch \texttt{209612}.]{\includegraphics[width=.7\linewidth]{Fig/21109.png}}
\subfigure[In Patch \texttt{209612}, the author and one reviewer from Patch \texttt{211019} provided code comments. ]{\includegraphics[width=.7\linewidth]{Fig/209612.png}}
\caption{A real example from OpenStack that illustrates the cross-patch collaborations after the patch linkage is posted. Note that to avoid ethical issues, the real developer names are anonymous.}
\label{fig:illustration2}
\end{figure*}
Figure \ref{fig:illustration} illustrates a motivating scenario where collaboration occurs after the patch linkage.
As shown in the figure, a reviewer \textit{Pink} in Patch A posted a patch link to Patch B in the review discussion.
In this patch linkage, we consider Patch A as a source patch and Patch B as a target patch.
After the patch link is posted, a developer \textit{Green} who participated in the Patch A discussion votes and leaves review comments in Patch B.
At the same time, a developer \textit{Blue} who participated in the Patch B discussion before the linking time could also provide comments in Patch A discussion.
We consider either of these two cases as a collaboration occurrence.
In a realistic scenario (i.e., review at \url{https://review.openstack.org/#/c/211019}), as shown in Figure~\ref{fig:illustration2}, we observed that a reviewer (Reviewer \#1) posted a comment with a collaboration request to the patch author (Author \#1):
\begin{quote}
\textit{`Could you please sync your efforts with another patch [https://review.openstack.org/\#/c\\/209612/]?'}
\end{quote}
After the patch link is posted, we observe that the author (Author \#1) and one of the reviewers (Reviewer \#2) from Patch \texttt{211019}, who were not involved in Patch \texttt{209612} before, made the specific review comments related to the code changes in Patch \texttt{209612}.
Inspired by the realistic scenario, we hypothesize that there exist collaborations across patches (we called \textbf{cross-patch collaborations}, henceforth) after the patch linkage.
In this work, we conduct an empirical study of 368 patch linkages from a total of 8,612 linked patches to better understand the intentions of the patch linkage (e.g., requesting a collaboration) and statistically analyze to what extent collaboration will occur after the patch linkage.
Specifically, we investigate how different kinds of linkage sharing lead to collaboration opportunities and characterize the contribution kinds that follow after the link is identified.
Thus, three research questions are formulated to guide our study:
\begin{itemize}
\item \textbf{\textbf{RQ$_1$: To what degree do developers request collaborations when posting patch linkages?}}\\
\textit{\uline{Motivation.}}
Although Hirao et al.~\cite{hirao2019fse} have shown that patch linkage is mainly for team awareness (i.e., indicating dependency, providing broader context, and pointing out an alternative solution), we hypothesize that patch linkage could have an association with the developer collaboration across the patches.
Thus, we would like to first understand how often developers request collaborations accompanied with shared patch linkages.
\item \textbf{\textbf{RQ$_2$: How likely will collaborations occur after patch linkages are posted?}}\\
\textit{\uline{Motivation.}}
Prior work sheds light that patch linkage can increase awareness~\cite{hirao2019fse}.
Yet, little is known about whether developers are likely to contribute to another via the patch linkage.
To better understand this, we investigate to what degree collaboration will occur after a patch
link is posted.
\item \textbf{\textbf{RQ$_3$: What are the kinds of cross-patch collaboration activities?}}\\
\textit{\uline{Motivation.}}
To gain an in-depth insight, we would like to understand what kinds of collaborations developers do after the awareness of patch linkages.
Answering this question would help researchers and practitioners better understand the role of patch linkages.
\end{itemize}
The empirical results lead us to conclude that patch linkages requesting collaboration are relatively infrequent.
In addition, a delay exists before a patch linkage is posted (RQ1).
We observe that a cross-patch collaboration is more likely to occur when an intention of requesting collaboration accompanies the patch linkage (RQ2).
Specifically, four kinds of collaboration activities are classified: voting, writing specific comments, writing general comments, and revising linked patches (RQ3).
The remainder of this paper is organized as follows.
Section 2 describes the empirical study design, including data preparation and approaches for each RQ.
Section 3 presents the results of our empirical study, while Section 4 discusses our findings and challenges.
Section 5 discloses the threats to validity.
Section 6 discusses the related work regarding link sharing and reviewer participation in code reviews.
Finally, we conclude the paper in Section 7.
\section{Empirical Study Design}
\subsection {Data Collection}
In this study, we use OpenStack as a case ecosystem.
OpenStack is an open-source software ecosystem where many well-known organizations and companies, e.g., IBM, VMware, and NEC, collaboratively develop a platform for cloud computing.
OpenStack actively performs code reviews through Gerrit, a tool-based code review platform, and is widely studied in prior work~\cite{WANG_emse, chouchen2021anti, p65}.
\textbf{\textit{Clean Dataset.}} For our experiments, we used the OpenStack review dataset provided by Thongtanunam and Hassan~\cite{pick_tse}.
The dataset includes 58,212 patches dated from November 2011 to July 2019.
Since we focus on the collaboration and contributions done by patch authors or reviewers, we exclude the comments that are posted by automated tools in the discussion threads. To do so, we refer to the documentation of the studied system\footnote[2]{https://docs.openstack.org/infra/manual/developers.html} to identify the automated tools that are integrated with the code review tools.
Specifically, we use the list of the automated tools that is provided in the work of Thongtanunam et al.~\cite{pick_tse}.
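This filtering step can be sketched as below; the comment shape and the tool-account names are illustrative assumptions, not the actual list from the OpenStack infra documentation.

```python
# Assumed shape: each review comment carries its author's account name.
# AUTOMATED_ACCOUNTS stands in for the tool list taken from the project
# documentation; the names here are illustrative.
AUTOMATED_ACCOUNTS = {"jenkins", "zuul", "elastic recheck"}

def human_comments(comments):
    """Drop messages posted by CI bots before analyzing collaboration."""
    return [c for c in comments if c["author"].lower() not in AUTOMATED_ACCOUNTS]

thread = [
    {"author": "Jenkins", "message": "Build succeeded."},
    {"author": "alice",   "message": "Please sync with the other patch."},
]
assert [c["author"] for c in human_comments(thread)] == ["alice"]
```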
\begin{table}[]
\caption{Summary of dataset used in the study.}
\label{tab:dataset}
\resizebox{0.5\textwidth}{!}{%
\begin{tabular}{@{}lrrr@{}}
\toprule
Ecosystem & Time Window & Linkage Dataset & Sample Dataset \\ \midrule
OpenStack & 2011.11 - 2019.7 & 8,612 & 368 \\
\bottomrule
\end{tabular}%
}
\end{table}
\textbf{\textit{Extract Patch Linkage.}} To identify the patch links, similar to prior work~\cite{WANG_emse}, we applied the regular expression to search all messages in the review discussions that include a patch URL in the following format: \textit{https?://review.openstack$\mid$opendev.org/\#/c/[1-9]+[0-9]*}.
A total of 8,944 pairs of patches are retrieved.
Then we exclude the case where the source and target patches are the same.
In our study, we keep the cases where (i) the patch linkages are written by the same patch authors and (ii) the patch authors post links by themselves, as we assume that collaboration could occur between the reviewers of both patches.
Finally, we obtain 8,612 pairs of patches that met our experiment criteria.
Table~\ref{tab:dataset} shows the summary of the dataset used in the study.
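A close paraphrase of this link extraction in Python is sketched below; the exact pattern used in our pipeline may differ slightly, and the alternation is grouped here so both hostnames match.

```python
import re

# Matches review.openstack.org and review.opendev.org change URLs and
# captures the change number.
LINK_RE = re.compile(
    r"https?://review\.(?:openstack|opendev)\.org/#/c/([1-9][0-9]*)"
)

def extract_change_ids(message: str):
    """Return every Gerrit change number linked in a review message."""
    return LINK_RE.findall(message)

msg = ("Could you please sync your efforts with another patch "
       "[https://review.openstack.org/#/c/209612/]?")
assert extract_change_ids(msg) == ["209612"]
```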
\begin{table*}[t]
\tabcolsep=0.3cm
\centering
\caption{(RQ1) The prevalence of linkage kinds and their timing nature. Requesting collaboration accompanied with links is less common.}
\label{table:prevalence}
\resizebox{\textwidth}{!}{
\begin{tabular}{llrrrrrrrr}
\toprule
\multirow{2}{*}{\textbf{Linkage Kind}} & \multirow{2}{*}{\textbf{Count}} & \multicolumn{4}{c}{\textbf{Patch-linked Time (\# days)}} & \multicolumn{4}{c}{\textbf{Patch-closed Time (\# days)}} \\
& & \textbf{1st Qu.} & \textbf{Median} & \textbf{Mean} & \textbf{3rd Qu.} & \textbf{1st Qu.} & \textbf{Median} & \textbf{Mean} & \textbf{3rd Qu.} \\ \midrule
Requesting collaboration & {$\;\,$57 \mybar{0.3}} & 1.2 & 14.1 & 48.8 & 56.5 & 3.3 & 20.6 & 92.1 & 67.2 \\
Sharing information & 211 \mybar{1.14} & 0.9 & 11.6 & 38.3 & 48.2 & 1.0 & 10.8 & 55.1 & 49.2 \\
Pointing out an alternative solution & 100 \mybar{0.54} & 0.4 & 4.0 & 31.7 & 32.8 & 0.0 & 0.9 & 31.2 & 19.2 \\ \bottomrule
\end{tabular}}
\end{table*}
\subsection{RQ1 Analysis}
To answer \textbf{RQ$_1$: To what degree do developers request collaborations when posting patch linkages?}, we investigate the intention of posting patch linkages in the aspect of collaborations.
In addition, we conduct a statistical analysis to investigate the timeline of patch linkages (e.g., when the linkage is posted and how long it takes the review to be completed after the linkage is posted).
Such timeline analysis could highlight the necessity of tool support for in-time linkage recommendations.
Below, we describe these two analysis approaches in detail.
\textbf{Requesting collaboration.} We perform a manual analysis to investigate the intention behind the patch linkage.
More specifically, our analysis mainly focuses on how often the patches are linked to request collaboration.
Below, we describe our manual coding based on a statistically representative sample of our patch linkage dataset:
\textit{Representative dataset construction.}
As the full set of our constructed data is too large to manually examine for collaboration intention, we draw a statistically representative sample.
The calculation of statistically significant sample sizes based on population size, confidence interval, and confidence level is well established~\cite{sample_size}; we use a confidence level of 95\% and a confidence interval of 5.
In the end, we randomly sample 368 patch linkages.\footnote[3]{https://www.surveysystem.com/sscalc.htm}
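The sample size can be reproduced with the standard finite-population correction; the sketch below uses the usual worst-case proportion $p = 0.5$ and the $z$-score for 95\% confidence.

```python
import math

def sample_size(population: int, z: float = 1.96,
                p: float = 0.5, e: float = 0.05) -> int:
    """Standard finite-population sample size: 95% confidence level
    (z = 1.96), confidence interval of 5 points (e = 0.05), and the
    worst-case proportion p = 0.5."""
    ss = (z * z * p * (1 - p)) / (e * e)          # infinite-population size
    return math.ceil(ss / (1 + (ss - 1) / population))

# For the 8,612 linked patch pairs, this yields the 368 sampled linkages.
assert sample_size(8612) == 368
```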
\textit{Manual coding.}
In this step, we classify whether the patch linkage is for requesting collaboration or not. Based on the findings of prior work~\cite{hirao2019fse}, patch linkage can also be for sharing information or pointing out an alternative solution. Hence, we classify the intention of patch linkages into three main kinds:
\begin{itemize}
\item \textit{Requesting collaboration}: Patch linkage for requesting collaboration is the linkage where a developer (either a patch author or reviewer) posts a link with a message that explicitly requests other developers to collaborate in the target patch.
In this case, developers often write a message that includes words such as `help', `collaborate', `integrate' or `rebase on'.
For example, \textit{``Patch Set 1: Code-Review-1
Can we please rebase this on https://review.openstack.org/\#/c/93842/ that review ensures specific values is present in the string for the flag to be switched on. thanks, dims''}.
\item \textit{Sharing information}: Patch linkage for sharing information is the linkage where a developer posts a link to increase team awareness (e.g., indicating patch dependency, providing broader context).
\item \textit{Pointing out an alternative solution}:
Patch linkage for pointing out an alternative solution is the linkage where a developer posts a link to mention that the target patch attempts to explicitly address the same or similar objective as the source patch.
\end{itemize}
To classify the patch linkages into a category, we consider the whole textual message that comes with the link.
In some cases, we also read the whole review discussion to understand the context.
To test the comprehensive understanding of the constructed schema, we randomly select 30 samples from our representative dataset, and the three authors of this paper independently coded these samples.
Among the three coders, we obtain a Kappa agreement score of 0.77 (i.e., substantial). The three coders then discussed the samples with inconsistent codes to reach a consensus.
Encouraged by the promising Kappa agreement score, the remaining data was then coded by one coder.
\textbf{Timeline of patch linkage.}
To understand the timeline of patch linkage, we measure patch-linked time and patch-closed time.
The patch-linked time is the duration from when reviews start on a patch to the time when the patch link is posted into the review discussion.
The patch-closed time is the duration from when a patch link is posted to the time when the review is closed.
We assume that patch-linked time and patch-closed time significantly differ among linkage categories (i.e., requesting collaboration, sharing information, and pointing out an alternative solution), and in particular that a relatively longer time is taken for linkages requesting collaboration to be posted and for their reviews to be closed.
Then, we perform a statistical analysis to examine our assumption.
To do so, we use a Kruskal-Wallis test, i.e., a non-parametric test, to compute the statistical significance.
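For illustration, a tie-free Kruskal-Wallis $H$ statistic can be computed directly; in practice a library routine (e.g., `scipy.stats.kruskal`) is used, which also reports the p-value. The durations below are made up for the example.

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic over k groups of observations,
    assuming no tied values (so no tie correction is needed):
    H = 12 / (N(N+1)) * sum_i R_i^2 / n_i - 3(N+1),
    where R_i is the rank sum of group i over the pooled sample."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes no ties
    n = len(pooled)
    h = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Toy patch-linked durations (days) for the three linkage kinds.
collab = [14.1, 48.8, 56.5]
share = [11.6, 38.3]
alt = [4.0, 31.7]
H = kruskal_h(collab, share, alt)
```

The statistic is then compared against a chi-squared distribution with $k-1$ degrees of freedom to obtain the p-value.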
\subsection{RQ2 Analysis}
To answer \textbf{RQ$_2$: How likely will collaborations occur after patch linkages are posted?}, we investigate how frequently the collaboration will occur after the patch linkage.
Below, we describe how we measure the collaboration occurrence.
\textbf{Collaboration occurrence.} We analyze the set of additional developers who newly join and contribute to the patch after the patch link is posted.
We consider both directions of collaboration, i.e., developers who participate in the source patch contribute to the target patch (Source $\rightarrow$ Target) and developers who participate in the target patch contribute to the source patch (Source $\leftarrow$ Target).
To identify the additional developers and direction of collaboration, we first identify the set of developers who contribute (e.g., providing a comment, voting) to the source patch before the patch link is posted (S) and the set of other developers who \textit{only} contribute after the link is posted (S').
Note that S includes the developer who posted the patch link.
Similarly, we identify the set of developers who contribute to the target patch based on the time point when the patch link is posted (T and T').
Then, we identify the set of developers in the source patch who contribute to the target patch after the patch link is posted (i.e., Source $\rightarrow$ Target = S $\cap$ T') and the set of developers in the target patch who contribute to the source patch after the patch link is posted (i.e., Source $\leftarrow$ Target = T $\cap$ S').
For example, in Figure \ref{fig:illustration}, we will identify the following sets of developers: S = \{Green, Pink\}, S' = \{Blue\}, T = \{Blue, Orange\}, and T' = \{Green\}.
Therefore, in this example, the developer \textit{Green} is considered as the one who is from the source patch and contributes to the target patch.
Similarly, the developer \textit{Blue} is considered as the one who is from the target patch and contributes to the source patch.
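The set arithmetic behind this example can be sketched directly (the developer names follow the figure; the variable names are our own):

```python
# Developers who contribute to the source patch before the link is posted (S)
# and only after it (S'); likewise for the target patch (T and T').
S = {"Green", "Pink"}
S_prime = {"Blue"}
T = {"Blue", "Orange"}
T_prime = {"Green"}

# Cross-patch collaboration in each direction.
source_to_target = S & T_prime  # → {'Green'}
target_to_source = T & S_prime  # → {'Blue'}
```

A linkage counts as a collaboration occurrence in a given direction whenever the corresponding intersection is non-empty.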
Note that since we will analyze the collaboration occurrence across the three link kinds, we perform this analysis based on the 368 labeled patch linkages.
\subsection{RQ3 Analysis}
To answer \textbf{RQ$_3$: What are the kinds of cross-patch collaboration activities?}, we conduct a semi-automatic analysis to further investigate the kinds of collaboration activities of developers who newly join and contribute to the patch.
Below, we describe the approach to identify collaboration activities.
\textbf{Collaborative contribution kinds.}
In addition to the occurrence analysis, we examine what collaborative contributions were made by the additional developers (i.e., S $\cap$ T' and S' $\cap$ T).
In this work, based on an open discussion of ten random samples and the OpenStack documentation among the first three authors of this paper, we focus on four kinds of contributions: 1) \textit{Vote}, 2) \textit{Specific Comments}, 3) \textit{General Comments}, and 4) \textit{Revise}.
Table~\ref{table:contribution} presents the definitions of the four contribution kinds.
\begin{table}[b]
\centering
\caption{The definition of contribution kinds and their distribution across the link kinds. Note that one review message can be labeled with more than one contribution kind.}
\label{table:contribution}
\begin{tabular}{lp{4.4cm}}
\toprule
\textbf{Contribution Kind} & \textbf{Definition} \\ \hline
Vote & Collaborator votes whether to merge or abandon the patch, i.e., ``Code-Review +1''.\\ \midrule
Specific Comments & Collaborator posts a comment that is directly related to patch change, i.e., typically an inline comment to reference a line of code in the patch. \\ \midrule
General Comments & Collaborator posts a generic comment that does not directly relate to or reference any line of code in the patch. \\ \midrule
Revise & Collaborator uploads revised patches, i.e., ``Uploaded patch set 3''. \\
\bottomrule
\end{tabular}
\end{table}
To identify the contribution kinds, we extract the contribution information recorded in the review message, i.e., 1,898 contributions are retrieved from the 368 labeled patch linkages.
We classify the collaborative contribution kinds in two rounds.
In the first round, we use a regular expression to automatically identify each kind of contribution.
Specifically, we use the expressions (?:.*(Workflow[\textbackslash+|\textbackslash-][0-9]|Code-Review[\textbackslash+|\textbackslash-][0-9]).*), (?:\textbackslash(.*?comment.*?\textbackslash)), and (?:uploaded patch set) to identify \textit{Vote}, \textit{Specific Comments}, and \textit{Revise}, respectively.
The remaining messages are classified as \textit{General Comments}.
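The first-round classification can be sketched as follows (the patterns mirror the expressions above; the case-insensitive match for ``uploaded patch set'' and the sample messages are our assumptions):

```python
import re

VOTE = re.compile(r"Workflow[+|\-][0-9]|Code-Review[+|\-][0-9]")
SPECIFIC = re.compile(r"\(.*?comment.*?\)")
REVISE = re.compile(r"uploaded patch set", re.IGNORECASE)

def classify(message):
    """Return the contribution kinds matched by a review message."""
    kinds = []
    if VOTE.search(message):
        kinds.append("Vote")
    if SPECIFIC.search(message):
        kinds.append("Specific Comments")
    if REVISE.search(message):
        kinds.append("Revise")
    # Messages matching no pattern fall back to General Comments.
    return kinds or ["General Comments"]
```

A message may match several patterns at once, which is consistent with one review message carrying more than one contribution kind.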
In the second round, we manually validate the identified candidates to reduce the potential threats caused by false positives.
In addition, we highlight general comments that are non-trivial.
For instance, one non-trivial general comment reads: \textit{``Patch Set 2: Code-Review-1
i think you should update this file
https://github.com/openstack/neutron/blob/master/doc/requirements.txt because after the new PTI, doc requirements are moved here.''}
\begin{table*}[t]
\centering
\caption{(RQ2) Collaboration occurrence between the source patch and target patch. Collaboration is more likely to occur when the request is provided.}
\label{table:occurrence}
\resizebox{.9\textwidth}{!}{
\begin{tabular}{llr}
\toprule
\textbf{Collaboration Direction} & \textbf{Linkage Kind} & \textbf{Occurrence Percent} \\ \midrule
\multirow{4}{*}{Source $\rightarrow$ Target} & Requesting collaboration & \blackwhitebar{0.72}\\
& Sharing information & \blackwhitebar{0.57} \\
& Pointing out an alternative solution & \blackwhitebar{0.47} \\
& Average & \blackwhitebar{0.57}\\ \midrule
\multirow{4}{*}{Source $\leftarrow$ Target} & Requesting collaboration & \blackwhitebar{0.62} \\
& Sharing information & \blackwhitebar{0.49} \\
& Pointing out an alternative solution & \blackwhitebar{0.34} \\
& Average & \blackwhitebar{0.47}\\ \bottomrule
\end{tabular}}
\end{table*}
\section{Empirical Results}
In this section, we present the results for each of our research questions.
\subsection{RQ1: To what degree do developers request collaborations when posting patch linkages?}
\textit{Results:} We observe two main findings.
\ul{First, patch linkage for requesting collaboration is relatively less frequent than the other kinds.}
Table~\ref{table:prevalence} shows that in only 57 patch linkages (15\%) do developers post a patch link with an explicit request for collaboration.
Most patch linkages (i.e., 211 patch linkages) are posted for sharing information such as patch dependency and broader context, while the other 100 patch linkages are for pointing out an alternative solution.
\ul{Second, we observe a delay of around 4 to 14 days (median) before a review member posts a patch linkage.}
Regarding the patch-linked time, we find that review teams take a relatively long time to post a patch linkage: medians of 14.1, 11.6, and 4.0 days for requesting collaboration, sharing information, and pointing out an alternative solution, respectively, as shown in Table~\ref{table:prevalence}.
Regarding the patch-closed time, we find that a patch whose linkage indicates an alternative solution tends to be closed more quickly than patches in the other categories.
Interestingly, we find that linkages requesting collaboration take longer to be closed than the other categories.
It is also important to note that we do not aim to draw a causal relationship, but only observe a trend.
Several confounding factors may also play a role, such as the patch size, the patch complexity, and the extent of the change impacts made by the patch.
For instance, if a patch is difficult to understand, it may require more collaboration and thus take a longer review time.
The Kruskal-Wallis test confirms that there is a significant difference ($p$-value $<$ 0.001) in the patch-linked time and patch-closed time among different linkage kinds.
Moreover, this result supports our assumption that linkages requesting collaboration take relatively longer to be posted and closed.
\begin{tcolorbox}
\textbf{Takeaway I:}
Patch linkage that requests collaboration is relatively less prevalent.
Furthermore, we observe a delay of 4 to 14 days (median) before a patch linkage is posted.
\end{tcolorbox}
\subsection{RQ2: How likely will collaborations occur after patch linkages are posted?}
\textit{Results:} \ul{Patch linkage with requesting collaboration has a relatively higher percentage of collaboration than the other two kinds.}
Table~\ref{table:occurrence} shows the percentage of patch linkages that have at least one developer from the source patch who contributes to the target patch or vice versa.
We find that 72\% of the patch linkages for requesting collaboration have at least one developer from the source patch who contributes to the target patch (Source $\rightarrow$ Target).
Similarly, 62\% of the patch linkages for requesting collaboration have at least one developer from the target patch contributing to the source patch (Source $\leftarrow$ Target).
In contrast, the percentages of collaboration for the other two link kinds are relatively lower (i.e., 34\%--57\%).
\begin{tcolorbox}
\textbf{Takeaway II:}
A cross-patch collaboration is more likely to occur when the patch linkage comment is accompanied by a request for collaboration.
\end{tcolorbox}
\begin{table*}[t]
\centering
\caption{(RQ3) The frequency of four kinds of cross-patch collaborations. Vote and general comments are more common contributions.}
\label{table:contribution2}
\resizebox{.8\textwidth}{!}{
\begin{tabular}{lll}
\toprule
\textbf{Contribution after Linkage} & \textbf{Link Kind} & \textbf{Percent} \\ \hline
\multirow{3}{*}{Vote} & Requesting collaboration & \blackwhitebar{0.56} \\
& Sharing information & \blackwhitebar{0.59} \\
& Pointing out an alternative solution & \blackwhitebar{0.61} \\ \midrule
\multirow{3}{*}{Specific Comments} & Requesting collaboration & \blackwhitebar{0.29} \\
& Sharing information & \blackwhitebar{0.37} \\
& Pointing out an alternative solution & \blackwhitebar{0.31} \\ \midrule
\multirow{3}{*}{General Comments} & Requesting collaboration & \blackwhitebar{0.43} \\
& Sharing information & \blackwhitebar{0.47} \\
& Pointing out an alternative solution & \blackwhitebar{0.52} \\ \midrule
\multirow{3}{*}{Revise} & Requesting collaboration & \blackwhitebar{0.15} \\
& Sharing information & \blackwhitebar{0.09} \\
& Pointing out an alternative solution & \blackwhitebar{0.05} \\
\bottomrule
\end{tabular}}
\end{table*}
\subsection{RQ3: What are the kinds of cross-patch collaboration activities?}
\textit{Results:} \ul{Among the four kinds of collaboration activities, vote is the most frequent contribution kind.}
Table~\ref{table:contribution2} shows the distribution of contribution kinds across the link kinds.
We find that among the four contribution kinds, \textit{Vote} is the most common (i.e., 56\% for requesting collaboration, 59\% for sharing information, and 61\% for pointing out an alternative solution).
The next most common contribution kind is \textit{General Comments}.
Based on our manual validation, we observe that 19.7\% of these comments contain non-trivial information.
For instance, one comment advises revising a docstring and the commit message, i.e., \textit{``Patch Set 4:
OK, fix up the docstring on run\_vios\_command\_as\_root and I think the commit message should mention the eventlet change, and then I'm +1.''}.
Interestingly, we find that the \textit{Revise} contribution kind is relatively more frequent in the patch linkage for requesting collaboration (15\%) than in the other two link kinds (9\% and 5\%, respectively).
This result suggests that the patch linkage for requesting collaboration is more likely to trigger the collaborative activity related to the patch quality (i.e., where the collaborator uploads revised patches).
\begin{tcolorbox}
\textbf{Takeaway III:}
Cross-patch collaboration via patch linkage includes voting, writing specific and general comments, and revising patches.
Furthermore, voting is the most common collaboration kind, i.e., 57\% identified on average.
\end{tcolorbox}
\section{Threats to Validity}
We now discuss the threats to the validity of our empirical study.
\textit{External Validity.}
External validity is concerned with our ability to generalize based on our results.
Our study only focuses on the OSS ecosystem (i.e., where multiple projects develop software collaboratively) using a tool-based code review.
We understand that there are not many multi-project review ecosystems similar to OpenStack.
However, as open source adoption has grown significantly in the last decade and numerous companies have built business models around OSS ecosystems~\cite{Zhang2020HowDC}, we believe it is important to study the OSS ecosystem.
\textit{Internal Validity.} Internal validity is the approximate
truth about inferences regarding cause-effect or causal relationships.
In our empirical study, we employ manual analysis for classifying linkage kinds.
Labels might be miscoded due to the subjective nature of understanding. To mitigate this threat, we use the Kappa agreement score to measure inter-rater reliability.
Only after the Kappa score reached more than 0.7 (i.e., the score in classifying intentions is 0.77, indicating substantial agreement) did a single coder complete the rest of the samples.
Another threat may occur in the choice of selecting statistical test techniques.
To address the statistical significance of the timeline, we apply the Kruskal-Wallis test, a non-parametric test.
We are confident in this test, as it is widely used in previous work~\cite{wang_IST}.
\textit{Construct Validity.} Construct validity is concerned with
the degree to which our measurements capture what we
aim to study.
This threat potentially occurs in the extraction of identifying patch linkages.
In our study, we only extract patch links from the general code review discussions; however, patch links may also appear in inline review discussions.
We believe this will not affect our observations, since we use qualitative analysis to investigate the patch links in this study.
Another threat concerns the validity of the categorization in RQ1. We classify three intentions behind the patch linkage, i.e., requesting collaboration, sharing information, and pointing out an alternative solution. Sharing information may be a part of collaboration.
To ensure that the categories do not overlap, we only classify a comment that provides an actionable collaboration intention (e.g., `help', `collaborate', and so on) as requesting collaboration; otherwise, we classify it as sharing information.
\section{Challenges and Opportunities}
\label{conclusion}
We now discuss our empirical findings and challenges, and provide several possible opportunities to guide future research.
The empirical results show the potential for this new kind of collaboration that is triggered by a patch linkage.
Hence, the study calls for new avenues for research into this kind of collaboration.
In fact, we show that cross-patch collaborative contributions via the patch linkage are non-trivial, with key contributions like voting which affects the review outcome of the target patch, or revising which improves the patch.
In terms of the timeline of patch linkage, our empirical study provides evidence that becoming aware of the existence of a patch linkage takes a relatively long time.
There are still open challenges that remain.
For instance, the current approach risks including collaborations that may not have been triggered by the patch linkage.
Hence, future work needs to address the soundness of our approach.
Another challenge may include capturing cross-patch collaborations that do not have patch linkage.
This can also be addressed in a larger study.
Furthermore, we would need a developer study to validate the practical implications of the study.
Our work lays out future opportunities for directions on how patch linkage sharing can lead to these new kinds of collaboration.
We highlight three below:
\begin{itemize}
\item \textit{Identify heuristics and the information required for a reviewer to contribute to a linked patch}. To gain more practical insights, a survey or interview of the reviewer who posts the link could reveal collaboration barriers and opportunities.
\item \textit{Investigate the impact of the collaboration on patch quality and code review quality.} To further understand the impact of the collaboration, one promising direction is to explore if the patch involved with contribution via the linkage is likely to decrease the probability of defects.
\item \textit{Automatic recovery of links (especially for Duplicate/Alternative Solution Detection)}. Provide tool support to early detect or recommend patches to reduce the time taken to identify the link, especially since we find that pointing out an alternative solution earlier leads to a shorter review time compared to the other link kinds.
\end{itemize}
\section{Related Work}
In this section, we position our work with literature reviews in terms of the practice of link sharing in software engineering and the reviewer participation in the context of code review settings.
\subsection{The Practice of Link Sharing}
Link sharing has become a popular activity in software engineering, which enables developers to share knowledge and mitigate potential issues.
The value of link sharing has been commonly addressed in question-and-answer forums like Stack Overflow and tool-based code reviews like GitHub and Gerrit.
Gomez et al.~\cite{gomez_2013} reported that link sharing is a significant phenomenon on Stack Overflow, referring readers to software development innovations like libraries and tools.
Ye et al.~\cite{stack_emse_2017} characterized the structural and dynamic properties of the emergent knowledge network formed by shared URLs in Stack Overflow.
With the popularity of GitHub, Hata et al.~\cite{Hata:ICSE19} found that 9.6 million links exist in source code comments across 25,925 repositories.
Within Gerrit based reviews, Wang et al.~\cite{WANG_emse} observed seven intentions behind link sharing and their developer survey results suggested that link sharing is useful.
At the same time, Hirao et al.~\cite{hirao2019fse} categorized five kinds of review linkage, such as patch dependency, broader context, and alternative solution.
To aid such practice, Wang et al.~\cite{wang_IST} proposed a linkage detection using textual contents and file location.
Our work expands upon the work of Wang et al.~\cite{WANG_emse} and Hirao et al.~\cite{hirao2019fse} to investigate the aspect of collaboration across patch linkages.
\subsection{Reviewer Participation}
Reviewer participation is one of the main challenges in the tool-based code review process since, unlike formal code review, reviewers can decide whether or not to participate in a review.
A large body of studies has found that reviewer participation is associated with software quality and code review time~\cite{p28,pj3,chouchen2021anti}.
For instance, Kononenko et al.~\cite{p09} observed that the number of invited reviewers has a statistically significant impact on review bugginess.
Moreover, Ruangwan et al.~\cite{pj28} reported that human factors play an important role in predicting whether or not an invited reviewer will participate in a review.
To relieve the challenges of reviewer participation, many reviewer recommendation systems have been proposed.
Thongtanunam et al.~\cite{p65} introduced REVFINDER, a file location-based code-reviewer recommendation approach.
Xia et al.~\cite{p34} put textual information and file location analyses together to recommend reviewers more accurately.
Hannebauer et al.~\cite{p48} recommended code reviewers based on their expertise.
Most recently, Al-Zubaidi et al.~\cite{al2020workload} developed a novel approach that
leverages a multi-objective meta-heuristic algorithm to search for reviewers guided by two objectives.
Similarly, the ultimate goal of our study is to improve reviewer participation by understanding developer collaboration activities across patch linkages.
\section{Conclusion and Future Work}
The growing number of reviews in open source projects poses a new challenge for collaboration during the review process and development tasks.
In this paper, we perform an empirical study on OpenStack to investigate cross-patch collaborations via patch linkages.
Our results show that requesting collaboration accompanied by shared patch links is less common, while cross-patch collaboration is more likely to occur once the request is provided.
Moreover, four kinds of collaboration activities are classified, and the results suggest that cross-patch collaborations are non-trivial.
Future research directions include the causality analysis between patch linkage and collaboration, perceptions and collaboration barriers from real developers, and tool development for link recovery.
\section*{Acknowledgment}
This work has been supported by JSPS KAKENHI Grant Numbers JP20K19774 and JP20H05706.
P. Thongtanunam
was supported by the Australian Research Council’s Discovery
Early Career Researcher Award (DECRA) funding scheme (DE210101091).
\bibliographystyle{ieicetr}
\section{Introduction}\label{sec:introduction}
Stable boundary layers can be generated by the advection of warm air over a colder surface. Stably-stratified atmospheric boundary layers are observed during clear nights as a result of radiative cooling of the ground surface \cite{Nieuwstadt1984,Stull2000}. Oceans, unlike the lower atmosphere, are heated from above and are usually stably stratified \cite{Wunsch2004,Thorpe2005}. In both the atmosphere and oceans, stratification has a significant effect on turbulence production, propagation, and decay. The interaction between shear-driven turbulence and stratification is a key process in a wide array of relevant geophysical flows for which the spatio-temporal scales span many orders of magnitudes.
Classical understanding of stably-stratified boundary layers is well described in a number of textbooks \cite{Panofsky1984,Sorbjan1989,Stull1988,Wyngaard2010} and reviews \cite{Garratt1994,Ivey2008,Mahrt2014}. However, fundamental features of the stably-stratified turbulent boundary layer still remain elusive from a modeling standpoint. The strong intermittency observed in stable boundary layers causes the upper portion of the boundary layer to decouple from the near-wall region due to the inhibition in vertical mixing \cite{Stull1988,Mahrt1999,Williams2017}. Strong stable stratification also significantly changes the flow structures prevalent in a boundary layer with additional features becoming prominent such as large-scale intermittency, gravity waves and Kelvin-Helmholtz instabilities \cite{Mahrt1999}, and the near parallel downstream tilting of flow structures \cite{Chauhan2013,Salesky2018,Salesky2020}.
One way to study the stably-stratified turbulent boundary layer is through on-site experiments. Researchers in the past decades have conducted field experiments in the stably-stratified atmospheric boundary layer to study turbulent energy budgets \cite{Wyngaard1971}, heat and momentum transfer \cite{Kondo1978}, regime characterization \cite{Mahrt1998, Mahrt1999}, flow structures \cite{Chauhan2013}, and the complexities of atmospheric stable boundary layers \cite{Fernando2010}. Turbulence quantities in the ocean near the bottom boundary are difficult to measure, and as such the literature is sparse. Smedman \emph{et al.}~\cite{Smedman1994}, using data from a marine coastal experiment over the Baltic Sea, found that the near-wall turbulence was virtually independent of forcing from large-scale structures embedded in the flow. Experiments performed in the northern bay of San Francisco \cite{Stacey1999} found that active turbulence is confined near the wall. Additionally, tidal channel experiments \cite{Lu2000} demonstrated that the production of turbulent kinetic energy is generally greatest near the bottom boundary while the buoyancy flux is weakest in this region. Still, real-world atmospheric and oceanic boundary layers are complicated by non-turbulent motions occurring simultaneously on a variety of scales, the possible importance of radiative flux divergence of the air within the boundary layer, surface condensation, and variable cloudiness \cite{Large1994,Garratt1994,Mahrt2014}. In order to isolate instances where the secondary effects are minimized, restrictions on nonstationarity or conditions on the minimum allowed value of turbulence energy may be applied to the data collected. Nonetheless, certain assumptions that are applied for analyses of these real-world stratified boundary layers are not always valid. As such, researchers supplement their work with laboratory experiments as well as simulations.
Laboratory experiments of stratified wall-bounded flows show that buoyancy effects play an important role in the transfer of heat and momentum in both the inner and outer layers of the boundary layer \cite{Arya1975, Britter1974, Piat1981, Komori1983, Fukui1983}. In general, the experiments show that with increasing stratification, the turbulence shear production rate is strongly affected by buoyancy and greatly reduced far from the wall. One measure of stratification strength is the local gradient Richardson number, $\ensuremath{ \Ri_g }$. Since shear originates at the wall, the local gradient Richardson number, which is inversely proportional to the shear, is generally smaller in the near-wall region as the shear term overpowers the buoyancy term. The stabilizing effect of stratification has a greater impact farther from the wall. Indeed, the works cited above demonstrated that velocity fluctuations become weaker with distance from the wall and, in some cases, that turbulence intensity is reduced as the buoyancy frequency in the system is increased. Linear inviscid stability analysis \cite{Miles1961} showed that there exists a critical value for the gradient Richardson number, $\ensuremath{ \Ri_g } \geq 0.25$, that serves as a sufficient condition for stability. Additionally, the experiments of Komori \emph{et al.}~\cite{Komori1983} show that the correlation coefficients associated with the Reynolds shear stress approach zero at values of $\ensuremath{ \Ri_g } \simeq 0.2 - 0.3$.
There have been many large-eddy simulations (LES) \cite{Garg2000,Armenio2002,Basu2006,Stoll2008} and direct numerical simulations (DNS) \cite{Iida2002,Nieuwstadt2005,Brethouwer2007,Flores2011,Garcia2011} of density stratified channel flows. The results support the experimental observations: strengthening the stratification leads to the reduction (or even suppression) of turbulent velocity fluctuations further from the wall. Garg \emph{et al.}~\cite{Garg2000} showed in their work that the mean velocity profiles of the stratified channel were similar in the near-wall region but differed in the logarithmic region. The difference is characterized by a reduction in the value of both the slope of the log-law of the mean velocity and the gradient of the mean velocity profile. It should be noted that the authors used the friction Richardson number to categorize the stratification strengths investigated in their simulations and concluded that the friction Richardson number is superior to the local gradient Richardson number in characterizing flow regimes as it is a global flow property.
Performing experiments (both on-site and in laboratories) of stratified wall-bounded turbulence can be challenging for reasons such as topography or secondary effects and simulations suffer from computational constraints. Moreover, laboratory experiments and simulations can attain only a limited range of Reynolds and Richardson numbers that are often orders of magnitude smaller than real-world geophysical phenomena. A quick numerical model prediction of key features of stratified boundary layers could greatly benefit the understanding of the interaction between velocity and scalar flux at varying scales. For these reasons, in this paper, we aim to explore the interaction between velocity and scalar fluctuations using the resolvent model \cite{McKeon2010}.
The resolvent model provides an optimal basis, in an energy sense, that allows an in-depth comparison of the underlying mechanisms in the flow. Moreover, the model is computationally efficient with only a singular value decomposition of the largest singular value required to obtain the leading order model. Resolvent analysis has been widely applied to a range of flow configurations to identify dominant flow structures and the underlying forcing, e.g. Ref.~\cite{McKeon2010,Yeh18, Towne18,Bae2020,McMullen2020, Nogueira2021}, and has been reviewed in detail in Ref. \cite{McKeon2017} and Ref. \cite{Jovanovic2021}. We use the model to provide analysis of the flow using only mean quantities, which are easy to obtain even in field experiments, along with knowledge from the energetics of the unstratified case, which is better documented than the stably-stratified case. The predictions from the resolvent model are then compared to the flow statistics from a DNS of a stably-stratified turbulent channel flow. The Reynolds number under consideration in the current study is considerably lower than those observed in geophysical flows, which is dictated by the available DNS data for comparison, rather than by the resolvent model. Resolvent analysis of unstratified wall-bounded flows shows that the results of the model are still relevant for moderate Reynolds numbers \cite{Moarref2013} with the resolvent modes in the logarithmic layer showing self-similar behavior. We expect the capability of the model in stably-stratified regimes to extend to higher Reynolds numbers as well.
The paper is organized as follows. In \S\ref{sec:modelling_and_analysis}, we introduce the resolvent framework with the inclusion of the scalar advection-diffusion equation and discuss the relevant energy norm, boundary conditions, and computational methods. In \S\ref{sec:lowrank}, we examine the sensitivity of the low-rank properties of the resolvent operator to the stable stratification strength and compare these properties with the most energetic scales in each flow. In \S\ref{sec:mode_shapes}, we analyze the characteristics of the forcing and response modes of both velocity and scalar. We compare the mode shapes with correlations obtained from DNS data. In \S\ref{sec:energy_balance}, we study the turbulent kinetic energy budget in the resolvent formulation and compare the results with the energy budget obtained from the DNS data. Finally, our conclusions on the application of the resolvent framework to a stably-stratified boundary layer are given in \S\ref{sec:conclusions}.
\section{Modeling active scalar dynamics in the Navier-Stokes equations}
\label{sec:modelling_and_analysis}
\subsection{Navier-Stokes equation with active scalar}
\label{sec:NSE_Boussinesq}
We consider a density-stratified turbulent channel flow where the density acts in the direction of gravitational acceleration. We use a Cartesian co-ordinate system $\ensuremath{\bs{x}}=(x,y,z)$ such that the force of gravity acts in the $-y$ direction, with $x$, $y$ and $z$ being the streamwise, wall-normal and spanwise directions, respectively. The governing equations are given by the non-dimensional Navier-Stokes equation under the Boussinesq approximation,
\begin{subequations}
\label{eqn:NSE_BL}
\begin{align}
\frac{ \partial \utotal }{ \partial t } + (\utotal \cdot
\nabla)\utotal & = -\nabla \ptotal + \frac{\nabla^2 \utotal}{\ensuremath{ \Reynolds_\tau }}
- \ensuremath{ \Ri_\tau }\ensuremath{ \widetilde{\rho} }\ensuremath{ \bs{e}_y },\\
\frac{ \partial \ensuremath{ \widetilde{\rho} } }{ \partial t } + (\utotal \cdot
\nabla)\ensuremath{ \widetilde{\rho} } & = \frac{\nabla^2 \ensuremath{ \widetilde{\rho} }}{\ensuremath{ \Reynolds_\tau }\ensuremath{ Pr }},\\
\nabla \cdot \utotal & = 0.
\end{align}
\end{subequations}
Here, $\utotal = (\tilde{u},\tilde{v},\tilde{w})$ is the instantaneous velocity vector in the reference system $(x,y,z)$, $t$ is time, $\ptotal$ is the kinematic pressure field that remains after removing the part that is in hydrostatic balance with the mean density field, $\ensuremath{ \widetilde{\rho} }$ is the density deviation from the reference density $\rho_0$ ($\ensuremath{ \widetilde{\rho} } \ll \rho_0$), and $\ensuremath{ \bs{e}_y }$ is the unit vector acting in the $y$-direction. The velocity and length scales are non-dimensionalized using the friction velocity $u_\tau$ and channel half-height $\delta$, respectively, and the density is non-dimensionalized using $\Delta\rho$, the difference in density between the two channel walls. We define the walls to be located at $y=0$ and $y=2$. The non-dimensional quantities are given by the Reynolds, Prandtl and Richardson numbers, defined as \refstepcounter{equation}
$$
\ensuremath{ \Reynolds_\tau } = \frac{u_{\tau}\delta}{\nu}, \qquad
\ensuremath{ Pr } = \frac{\nu}{\ensuremath{ \gamma }}, \qquad
\ensuremath{ \Ri_\tau } = \frac{g \Delta \rho \delta}{\rho_0\ensuremath{ u_\tau }^2},
\eqno{(\theequation{\mathit{a},\mathit{b},\mathit{c}})}\label{nondims_rey_pra}
$$
where $\nu$ is the kinematic viscosity, $\ensuremath{ \gamma }$ is the molecular diffusivity of density, and $g$ is the acceleration due to gravity.
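As a concrete illustration, the three parameters can be evaluated for a hypothetical set of dimensional quantities (the values below are illustrative placeholders, not those of the simulations discussed later):

```python
# Illustrative dimensional parameters (placeholders, not the DNS values).
u_tau = 0.05    # friction velocity [m/s]
delta = 0.1     # channel half-height [m]
nu = 1.0e-6     # kinematic viscosity [m^2/s]
gamma = 1.4e-7  # molecular diffusivity of density [m^2/s]
g = 9.81        # gravitational acceleration [m/s^2]
rho0 = 1000.0   # reference density [kg/m^3]
drho = 0.05     # wall-to-wall density difference [kg/m^3]

Re_tau = u_tau * delta / nu                     # friction Reynolds number
Pr = nu / gamma                                 # Prandtl number
Ri_tau = g * drho * delta / (rho0 * u_tau**2)   # friction Richardson number
```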
\subsection{Resolvent framework with an active scalar}\label{sec:NSE_Resolvent}
The total fields $\utotal$, $\ptotal$ and $\ensuremath{ \widetilde{\rho} }$ can be split into mean and fluctuating parts as
\begin{subequations}
\begin{align}\label{eqn:means}
\utotal(\ensuremath{\bs{x}},t) & = {\boldsymbol{\umean}}(y) + \ufluc(\ensuremath{\bs{x}},t),\\
\ptotal(\ensuremath{\bs{x}},t) & = \pmean(y) + \pfluc(\ensuremath{\bs{x}},t),\\
\ensuremath{ \widetilde{\rho} }(\ensuremath{\bs{x}},t) & = \ensuremath{ \overline{\rho} }(y) + \ensuremath{ \rho }(\ensuremath{\bs{x}},t),
\end{align}
\end{subequations}
where the mean is taken in the homogeneous directions, $x$ and $z$, and time. Note that ${\boldsymbol{\umean}} = (\bar{u},\bar{v},\bar{w})$ and $\bar{v}=\bar{w}=0$. We substitute the decomposed variables into Eq. (\ref{eqn:NSE_BL}) to obtain the fluctuation equations
\begin{subequations}
\begin{align}
\ensuremath{ \partial_t }\ufluc + (\umean \cdot \nabla)\ufluc + (\ufluc \cdot \nabla)\umean
&= -\nabla \pfluc + \frac{\nabla^2 \ufluc}{\ensuremath{ \Reynolds_\tau }} -
\ensuremath{ \Ri_\tau }\ensuremath{ \rho }\ensuremath{ \bs{e}_y } + \bs{f}_{\bs{u}}, \\
\ensuremath{ \partial_t }\ensuremath{ \rho } + (\umean \cdot \nabla)\ensuremath{ \rho } + (\ufluc \cdot \nabla)\ensuremath{ \overline{\rho} }
&= \frac{\nabla^2 \ensuremath{ \rho }}{\ensuremath{ \Reynolds_\tau }\ensuremath{ Pr }} + f_\ensuremath{ \rho },\\ \nabla \cdot
\ufluc &= 0,
\end{align}
\end{subequations}
where $\bs{f}_{\bs{u}} = - \ufluc\cdot\nabla\ufluc$ and $f_\ensuremath{ \rho } = -\ufluc\cdot\nabla\ensuremath{ \rho }$ are the nonlinear terms.
Taking the Fourier transform of the fluctuation equations above in homogeneous directions and time, the variables can be expressed as
\begin{equation}
\begin{bmatrix}
\boldsymbol{u}(x,y,z,t)\\p(x,y,z,t)\\\rho(x,y,z,t)
\end{bmatrix}
= \iiint^{\infty}_{-\infty}
\begin{bmatrix}
\hat{\boldsymbol{u}}(y;\ensuremath{\kx, \kz, \omega})\\
\hat{p}(y;\ensuremath{\kx, \kz, \omega})\\
\hat{\rho}(y;\ensuremath{\kx, \kz, \omega})
\end{bmatrix}
e^{\ensuremath{ \text{i} }(\ensuremath{k_x} x + \ensuremath{k_z} z -\omega t)}
d\ensuremath{k_x} d\ensuremath{k_z} d\omega,
\end{equation}
for $\ensuremath{\bs{k}} = (\ensuremath{\kx, \kz, \omega}) \neq (0,0,0)$, where $(\hat{\cdot})$ denotes the Fourier-transformed variables. Here, the streamwise and spanwise wavenumbers are $\ensuremath{k_x}$ and $\ensuremath{k_z}$, respectively, and $\omega$ is the temporal frequency, defined as $\omega = \ensuremath{ c }\ensuremath{k_x}$, where $c$ is the wavespeed. The streamwise and spanwise wavelengths are defined as $\ensuremath{\lambda_x} = 2\pi/\ensuremath{k_x}$ and $\ensuremath{\lambda_z} = 2\pi/\ensuremath{k_z}$, respectively. A critical layer occurs where the wavespeed equals the local mean velocity, i.e. $y_c$ is the critical-layer location for wavespeed $\ensuremath{ c }=\umean(y_c)$. Assuming the mean velocity and density profiles are known, the fluctuation equations are expressed compactly as the linear equation
\begin{equation}\label{eqn:linear_compact}
-\ensuremath{ \text{i} }\omega\q - \ensuremath{ \mathcal{A} }\q = \f,
\end{equation}
where we define $\q = [ \ensuremath{ \hat{u} }\;\ensuremath{ \hat{v} }\;\ensuremath{ \hat{w} }\;\ensuremath{ \hat{p} }\;\ensuremath{ \hat{\rho} } ]^T$ as the state vector and $\f = [ \ensuremath{ \hat{f}_u }\;\ensuremath{ \hat{f}_v }\;\ensuremath{ \hat{f}_w }\;0\;\ensuremath{ \hat{f}_\rho } ]^T$ as the forcing vector. The linear operator is given by
\begin{equation}\label{eqn:linearOp}
\ensuremath{ \mathcal{A} } = \begin{pmatrix}
A & -\partial\umean/\partial y & 0 & -\ensuremath{ \text{i} }\ensuremath{k_x} & 0 \\
0 & A & 0 & -\ensuremath{ D_y } & -\ensuremath{ \Ri_\tau } \\
0 & 0 & A & -\ensuremath{ \text{i} }\ensuremath{k_z} & 0 \\
-\ensuremath{ \text{i} }\ensuremath{k_x} & -\ensuremath{ D_y } & -\ensuremath{ \text{i} }\ensuremath{k_z} & 0 & 0 \\
0 & -\partial\ensuremath{ \overline{\rho} }/\partial y & 0 & 0 & A_\ensuremath{ \rho }
\end{pmatrix},
\end{equation}
where
\begin{subequations}
\begin{align}
A &= -\ensuremath{ \text{i} }\ensuremath{k_x}\umean +\frac{\hat\Delta}{\ensuremath{ \Reynolds_\tau }},\\
A_\ensuremath{ \rho } &= -\ensuremath{ \text{i} }\ensuremath{k_x}\umean +\frac{\hat\Delta}{\ensuremath{ \Reynolds_\tau }\ensuremath{ Pr }},
\end{align}
\end{subequations}
$\ensuremath{ D_y }$ is the wall-normal derivative operator and $\hat\Delta \equiv \ensuremath{ \DD - \kperp }$ is the Laplacian with $\ensuremath{ k_{\perp}^2 } = \ensuremath{k_x}^2 + \ensuremath{k_z}^2$. The block matrix $\ensuremath{ \mathcal{A} }$ describes the linear dynamics of the system. Equation (\ref{eqn:linear_compact}) can be rearranged to yield
\begin{equation} \label{eqn:resolvent}
\q\; =\; \ensuremath{ \mathcal{H} }(\ensuremath{\bs{k}})\; \f,
\end{equation}
where $\ensuremath{ \mathcal{H} }(\ensuremath{\bs{k}}) = (-\ensuremath{ \text{i} }\omega I - \ensuremath{ \mathcal{A} })^{-1}$ is the resolvent of the linear operator and $I$ is the identity matrix. A related analysis has been performed in Ref. \cite{madhusudanan2020coherent}.
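To make the construction concrete, the following sketch assembles a coarse discrete analogue of the block operator and its resolvent, and computes the gains. It is a minimal stand-in rather than the solver used here: second-order finite differences replace the Chebyshev collocation of \S\ref{sec:computational_approach}, and the mean velocity and density profiles are illustrative analytic placeholders.

```python
import numpy as np

N = 32                                   # interior wall-normal points (coarse)
y = np.linspace(0.0, 2.0, N + 2)[1:-1]   # interior grid; walls at y = 0, 2
h = y[1] - y[0]

# Second-order central differences with homogeneous Dirichlet conditions at
# the walls folded in (a stand-in for Chebyshev collocation).
Dy = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2.0 * h)
D2 = (np.diag(np.ones(N - 1), 1) - 2.0 * np.eye(N)
      + np.diag(np.ones(N - 1), -1)) / h**2

Re, Pr, Ri = 180.0, 1.0, 60.0            # illustrative parameters
kx, kz = np.pi / 2.0, 4.0 * np.pi        # illustrative wavenumber pair
ubar = y * (2.0 - y)                     # placeholder mean velocity profile
drho_dy = -0.5 * np.ones(N)              # placeholder mean density gradient
c = np.interp(0.3, y, ubar)              # wavespeed set by critical layer y_c = 0.3
omega = c * kx                           # temporal frequency, omega = c * kx

I = np.eye(N)
Z = np.zeros((N, N))
lap = D2 - (kx**2 + kz**2) * I           # Laplacian: D_y^2 - k_perp^2
Ablk = -1j * kx * np.diag(ubar) + lap / Re
Arho = -1j * kx * np.diag(ubar) + lap / (Re * Pr)

# Block operator of the linearized equations; rows: u, v, w, continuity, rho.
A = np.block([
    [Ablk, -np.diag(np.gradient(ubar, y)), Z, -1j * kx * I, Z],
    [Z, Ablk, Z, -Dy, -Ri * I],
    [Z, Z, Ablk, -1j * kz * I, Z],
    [-1j * kx * I, -Dy, -1j * kz * I, Z, Z],
    [Z, -np.diag(drho_dy), Z, Z, Arho],
])

H = np.linalg.inv(-1j * omega * np.eye(5 * N) - A)   # resolvent operator
sigma = np.linalg.svd(H, compute_uv=False)           # gains, descending order
low_rank = (sigma[0]**2 + sigma[1]**2) / np.sum(sigma**2)
```

Even at this crude resolution, the leading gains dominate the sum, illustrating the low-rank behavior discussed in \S\ref{sec:results}.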
From Eq. \eqref{eqn:resolvent}, we wish to find a decomposition of the resolvent operator that enables us to identify high gain input and output modes with respect to the linear operator. For resolvent analysis, this is given by the Schmidt decomposition. However, this decomposition must be accompanied by a choice of inner product and the corresponding norm. The natural and physically meaningful norm is given by the non-dimensionalized energy norm, which is the sum of kinetic and potential energies \cite{Lorenz1955,Turner1979}
\begin{equation}\label{eqn:norm}
\frac{1}{2}\|\boldsymbol{q}\|^2_E = \frac{1}{2}
(\boldsymbol{q},\boldsymbol{q})_E=
\frac{1}{2} \int_{0}^2\left( u^*u + v^*v + w^*w +
\ensuremath{ \Ri_\tau }(\rho^*\rho) \right)dy,
\end{equation}
where $(\cdot)^*$ denotes the conjugate transpose.
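Discretely, this norm becomes a weighted inner product. A minimal sketch using trapezoidal quadrature weights on a uniform grid, with illustrative analytic fields, is:

```python
import numpy as np

N = 201
y = np.linspace(0.0, 2.0, N)
h = y[1] - y[0]
w = np.full(N, h)                        # trapezoidal quadrature weights
w[0] = w[-1] = 0.5 * h
Ri = 60.0                                # illustrative Richardson number

u = np.sin(np.pi * y / 2.0)              # illustrative fluctuation fields
v = np.zeros(N)
wvel = np.zeros(N)
rho = np.cos(np.pi * y / 2.0)

# 0.5 * integral over the channel of |u|^2 + |v|^2 + |w|^2 + Ri |rho|^2
E = 0.5 * np.sum(w * (np.abs(u)**2 + np.abs(v)**2
                      + np.abs(wvel)**2 + Ri * np.abs(rho)**2))
```

For these fields the kinetic and potential contributions each integrate to one, so $E = (1 + Ri)/2$.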
We perform the Schmidt decomposition of the resolvent operator $\ensuremath{ \mathcal{H} }$ to generate a basis based on the most highly amplified forcing and response directions such that
\begin{equation}\label{eqn:svd}
\ensuremath{ \mathcal{H} }(\ensuremath{\bs{k}}) = \sum_{j=1}^{\infty} \sigma_j(\ensuremath{\bs{k}})
\bs{\hat{\psi}}_j(y;\ensuremath{\bs{k}}) \bs{\hat{\phi}}^{*}_j(y;\ensuremath{\bs{k}}),
\end{equation}
where the right and left Schmidt bases (or singular vectors in the discrete case) are given by $\bs{\hat{\phi}}_j$ and $\bs{\hat{\psi}}_j$ along with their corresponding gains $\sigma_j$. The singular values are in descending order such that $\sigma_1 \geq \sigma_2 \geq \cdots \geq 0$. The forcing and resolvent modes are orthonormal such that
\begin{equation}
(\bs{\hat{\phi}}_j, \bs{\hat{\phi}}_k)_E =
(\bs{\hat{\psi}}_j, \bs{\hat{\psi}}_k)_E = \delta_{jk},
\end{equation}
where $\delta_{jk}$ denotes the Kronecker delta. The basis pair defined above is used to decompose the nonlinear forcing and response field at a specified wavenumber triplet as
\begin{subequations}
\begin{align}
\label{eqn:bases}
\f(y;\ensuremath{\bs{k}}) & = \sum_{j=1}^{\infty} \bs{\hat{\phi}}_j(y;\ensuremath{\bs{k}})
\chi_j(\ensuremath{\bs{k}}),\\
\q(y;\ensuremath{\bs{k}}) & = \sum_{j=1}^{\infty} \chi_j(\ensuremath{\bs{k}}) \sigma_j(\ensuremath{\bs{k}})
\bs{\hat{\psi}}_j(y;\ensuremath{\bs{k}}).
\end{align}
\end{subequations}
Here, $\chi_j$ is a weight obtained by projecting the nonlinear forcing onto the forcing modes, and subsequently used to scale the response modes. Note that, for unit-amplitude forcing, the largest energy is obtained when the forcing is aligned with the leading singular vector, i.e.\ when $\chi_j=\delta_{j1}$.
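The projection and reconstruction above can be verified on a small synthetic operator, here with the Euclidean inner product standing in for the energy norm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

Psi, sigma, PhiH = np.linalg.svd(H)      # H = Psi @ diag(sigma) @ Phi^*
Phi = PhiH.conj().T                      # columns are the forcing modes phi_j

f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
chi = Phi.conj().T @ f                   # chi_j: projection of forcing onto phi_j
q = Psi @ (sigma * chi)                  # response: sum_j chi_j sigma_j psi_j
```

The reconstructed response equals $H f$ exactly, and both bases are orthonormal by construction of the SVD.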
\subsection{Computational approach}\label{sec:computational_approach}
\subsubsection{Mean velocity and density profiles} \label{sec:mean_velocity}
\begin{table}
\caption{Comparison of our DNS with the results of Garc\'ia-Villalba \& del \'Alamo~\cite{Garcia2011} (columns labeled GV11), both at $\ensuremath{ \Reynolds_\tau }=180$. $\ensuremath{ Re }_B$ is the bulk Reynolds number, defined as $\ensuremath{ u_B }\delta/\nu$, where the bulk velocity is $\ensuremath{ u_B } = \int_0^{2}\umean dy/2$. $\ensuremath{ \Ri_B }$ is the bulk Richardson number, defined as $\ensuremath{ \Ri_B } = \ensuremath{ \Ri_\tau }(\ensuremath{ u_\tau }/\ensuremath{ u_B })^2/2$. $Nu$ is the Nusselt number, defined as $Nu = 2\delta q_w/(\ensuremath{ \gamma } \Delta\ensuremath{ \rho })$; for laminar flow, $Nu=1$.}
\begin{center}
\def~{\hphantom{0}}
\begin{ruledtabular}
\begin{tabular}{rcccccc}
& \multicolumn{2}{c}{$Re_B$} & \multicolumn{2}{c}{$\ensuremath{ Ri }_B$} & \multicolumn{2}{c}{$Nu$}\\
\colrule
$\ensuremath{ \Ri_\tau }$ & \textit{GV11} & DNS & \textit{GV11} & DNS & \textit{GV11} & DNS \\
\colrule
0 & \textit{2820} & 2823 & \textit{0.000} & 0.000 & \textit{6.03} & 6.08 \\[2pt]
10 & \textit{ -} & 2970 & \textit{ -} & 0.018 & \textit{ -} & 4.78 \\[2pt]
18 & \textit{3043} & 3060 & \textit{0.031} & 0.031 & \textit{4.02} & 4.15 \\[2pt]
60 & \textit{3436} & 3473 & \textit{0.082} & 0.081 & \textit{2.80} & 2.82 \\[2pt]
100 & \textit{ -} & 3850 & \textit{ -} & 0.109 & \textit{ -} & 2.37 \\
\end{tabular}
\end{ruledtabular}
\label{tab:dns}
\end{center}
\end{table}
\begin{figure}
\centering
\subfloat[][]{\includegraphics[width=0.42\textwidth]{u_mean}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[width=0.42\textwidth]{rho_mean}}\\
\subfloat[][]{\includegraphics[width=0.42\textwidth]{u_rms}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[width=0.42\textwidth]{rho_rms}}
\caption{Mean (a) streamwise velocity and (b) density profiles and root-mean-square (r.m.s.) (c) streamwise velocity and (d) density profiles from the current DNS for $\ensuremath{ Ri }_\tau = 0,10,18,60,100$ (solid lines darker to lighter), compared to the mean profiles of Ref. \cite{Garcia2011} for $\ensuremath{ Ri }_\tau = 0, 18, 120$ (dashed lines darker to lighter). The friction density is defined as $\rho_\tau = q_w / u_\tau$, where $q_w$ is the density flux at the wall.}
\label{fig:mean_profiles}
\end{figure}
Mean velocity and density profiles are required to close the resolvent model. We obtain the one-dimensional mean velocity and density profiles from a DNS of a stratified turbulent channel at $Re_\tau=180$ for a wide range of $\ensuremath{ \Ri_\tau }$. The simulations are performed by discretizing the incompressible Navier-Stokes equations with a staggered, second-order accurate, central finite-difference method in space \cite{Orlandi2000}, and an explicit third-order accurate Runge-Kutta method for time advancement \cite{Wray1990}. The system of equations is solved via an operator splitting approach \cite{Chorin1968}. The code has been verified for neutrally-buoyant cases in Refs. \cite{Bae2019,Lozano-Duran2019}.
Periodic boundary conditions are imposed in the streamwise and spanwise directions, the no-slip and no-penetration condition with $\tilde{\rho}=0$ is applied at the bottom boundary, and a no-slip and no-penetration condition with $\tilde{\rho}=1$ is applied at the top boundary. The streamwise, wall-normal, and spanwise domain sizes are $4\pi$, $2$, and $2\pi$ respectively. The grid spacings in the streamwise and spanwise directions are uniform with $\Delta x^+=8.8$ and $\Delta z^+=4.4$; non-uniform meshes are used in the wall-normal direction, with the grid stretched toward the wall according to a hyperbolic tangent distribution with $\min(\Delta y^+ ) = 0.31$ and $\max(\Delta y^+ ) = 5.19$, where the superscript $+$ indicates length scales in wall units normalized by $\nu/u_\tau$ rather than $\delta$. A constant pressure gradient is applied to drive the flow. The simulation was run over $100$ eddy-turnover times, defined as $\delta/u_\tau$, after transients.
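A hyperbolic-tangent wall-normal stretching of the kind described above can be sketched as follows; the stretching factor is an illustrative value, not necessarily the one used in the DNS.

```python
import numpy as np

Ny = 128
gam = 2.0                                    # illustrative stretching factor
eta = np.linspace(-1.0, 1.0, Ny + 1)         # uniform computational coordinate
y = 1.0 + np.tanh(gam * eta) / np.tanh(gam)  # physical grid on [0, 2]
dy = np.diff(y)                              # smallest spacing at the walls
```

Larger values of the stretching factor cluster more points near the walls, where the smallest wall-normal spacing is needed to resolve $\min(\Delta y^+)$.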
The results are validated against the work of Garc\'ia-Villalba \& del \'Alamo~\cite{Garcia2011} at $\ensuremath{ \Reynolds_\tau } = 180$. A comparison of a few key quantities is shown in Table \ref{tab:dns}, which indicates good agreement for all Richardson numbers. The mean and root-mean-squared streamwise velocity and density profiles are shown in Fig. \ref{fig:mean_profiles} for all current cases and select cases from Ref. \cite{Garcia2011}, and show good agreement for all statistics.
\subsubsection{Resolvent mode computation}
The Schmidt decomposition of the resolvent operator outlined in \S\ref{sec:NSE_Resolvent} is numerically implemented as the singular value decomposition (SVD). We solve the discrete equations using a spectral collocation method with $\ensuremath{ N_y }$ points in the wall-normal direction, which limits the number of singular values to $5\ensuremath{ N_y }$ because the state vector $\q\in\mathbb{C}^{5\ensuremath{ N_y }\times 1}$. After a grid-convergence study of the singular values, we selected a wall-normal resolution of $N_y=400$. The computational cost of the resolvent mode computation is thus at most $O(N_y^3)$ (less if randomized algorithms are employed \cite{Moarref2013,Ribeiro20}), often requiring only a leading-order singular value decomposition (see \S\ref{sec:lowrank} for more information), and can be performed in seconds on a personal computer.
The discretized linear operator is constructed using Chebyshev differentiation matrices and is shifted to integrate between $y\in[0, 2]$ rather than $y\in[-1, 1]$. The mean velocity and density profiles obtained from DNS as well as their wall-normal derivatives are interpolated to the Chebyshev grid points to form the resolvent operator as in Eq. \eqref{eqn:linearOp}. The no-slip and no-penetration boundary conditions for the fluctuating velocities and density, i.e. $u,v,w,\rho=0$, are applied at the walls.
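A standard construction of the shifted Chebyshev differentiation matrix follows Trefethen's well-known \texttt{cheb} recipe; the sketch below illustrates the idea and is not the exact implementation used here.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and points on [-1, 1]."""
    x = np.cos(np.pi * np.arange(N + 1) / N)         # Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # negative-sum trick for diagonal
    return D, x

D, x = cheb(16)
y = 1.0 - x          # shift [-1, 1] to [0, 2] (y increases with the index)
Dy = -D              # chain rule: y = 1 - x implies d/dy = -d/dx
```

The resulting $D_y$ differentiates polynomials sampled on the shifted grid exactly up to the grid order, which is the property exploited by the collocation method.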
In the case of a turbulent channel, due to the symmetry in the geometry, the resolvent modes appear in pairs that can be linearly combined to produce symmetric and antisymmetric modes. Depending on the support of these modes, the singular values may be identical or similar in magnitude. For the results in the following sections, only results in the bottom half-channel will be shown, but the corresponding upper half-channel results are analogous in all cases.
\section{Results} \label{sec:results}
In this section, we explore how the resolvent analysis provides insight into changes in flow characteristics with increasing stratification using only a limited range of representative scales. We compare (i) the resolvent energy spectra, obtained from the ratio of the energy in the leading pair of resolvent response modes to the total response, $(\sigma_1^2+\sigma_2^2)/\sum_j\sigma_j^2$, to the premultiplied energy spectra of the DNS, (ii) the structures identified by the leading resolvent modes to the correlations computed from DNS, and (iii) the energy budgets of the resolvent modes to those of the DNS.
For a full representation of the system, a wide range of scales, as well as all subsequent modes in addition to the leading resolvent modes, would be necessary \cite{McKeon2010,McKeon2017}. However, the goal here is to provide a quick model for characterising the flow. The simplest and quickest model is a rank-one approximation, where only the leading resolvent mode is computed. Thus, our focus will be on the representation given by the leading resolvent mode for a limited number of scales.
\subsection{Resolvent energy spectra}\label{sec:lowrank}
The resolvent norm is the principal singular value of the resolvent operator $\ensuremath{ \mathcal{H} }$, $\sqrt{\sigma_1^2+\sigma_2^2}$ in this case, and quantifies the system's sensitivity to temporal forcing \cite{Symon2018}. The energetic contribution from broadband forcing is quantified as the square of the resolvent norm. The resolvent operator $\ensuremath{ \mathcal{H} }$ can be described as low-rank if the majority of its response to broadband forcing in the wall-normal direction is captured by the first few response modes. Theoretically, there are an infinite number of singular values and corresponding modes because the wall-normal coordinate is continuous. However, not all of the singular vectors are energetically significant. As described in \S\ref{sec:NSE_Resolvent}, a self-sustaining representation of the flow corresponds to a weighted assembly of forcing modes rather than broadband forcing \cite{Nogueira2021}; however, past studies have shown that broadband forcing is successful in identifying the important components of the flow, e.g. Refs. \cite{McKeon2010,Bae2020}. McKeon \& Sharma \cite{McKeon2010} demonstrated that the characteristics of the leading response modes for a range of wavenumber-frequency combinations agree with experimental observations in pipe flow and with scaling concepts in wall-bounded turbulence. Moarref \emph{et al.} \cite{Moarref2013} showed that the first two resolvent modes account for more than 80\% of the total response in a channel. Bae \emph{et al.} \cite{Bae2020} investigated the low-rank nature of a compressible turbulent boundary layer and highlighted the similarities to the incompressible regime in the region where the low-rank approximation is valid.
Assuming the resolvent operator is low-rank ($\sigma_1 \simeq \sigma_2 \gg \sigma_3$) allows us to approximate the operator as
\begin{equation}
\ensuremath{ \mathcal{H} }(\ensuremath{\bs{k}})\; \approx\; \sigma_1\; \bs{\hat{\psi}}_1\;
\bs{\hat{\phi}}^{*}_1 + \sigma_2\; \bs{\hat{\psi}}_2\;
\bs{\hat{\phi}}^{*}_2,
\end{equation}
for each $\ensuremath{\bs{k}}$, since most of the energy in the system is then captured by the principal singular values. The low-rank behavior of $\ensuremath{ \mathcal{H} }$ typically indicates a dynamically significant physical, spatio-temporal structure at the scale dictated by $\ensuremath{\bs{k}}$.
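The quality of this rank-two truncation is set by the first neglected gain: by the Eckart-Young theorem, the relative error in the spectral norm is exactly $\sigma_3/\sigma_1$. A small synthetic check follows; the gain distribution is invented to mimic a low-rank operator.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
# Random orthonormal bases via QR, then an invented gain distribution with a
# dominant pair, mimicking a low-rank resolvent operator.
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
Phi, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
sigma = np.concatenate([[100.0, 90.0], np.full(n - 2, 1.0)])
H = (Psi * sigma) @ Phi.conj().T         # H = Psi @ diag(sigma) @ Phi^*

# Rank-two truncation retaining only the leading pair of modes.
H2 = (sigma[0] * np.outer(Psi[:, 0], Phi[:, 0].conj())
      + sigma[1] * np.outer(Psi[:, 1], Phi[:, 1].conj()))
rel_err = np.linalg.norm(H - H2, 2) / np.linalg.norm(H, 2)   # = sigma_3 / sigma_1
energy_ratio = (sigma[0]**2 + sigma[1]**2) / np.sum(sigma**2)
```

Here the truncation error is $1/100$ while the leading pair carries over $99\%$ of the energy, the situation labeled low-rank in the text.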
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{2D_spectra}
\caption{Contour plots depicting the energy contained in the leading response mode relative to the total response, $(\sigma_1^2+\sigma_2^2) / \Sigma_j \sigma_j^2$, for different streamwise and spanwise wavelengths at (a) $\ensuremath{ c } =\umean(\ensuremath{ y^+ }=15)$, (b) $\ensuremath{ c } =\umean(\ensuremath{ y^+ }=30)$ and (c) $\ensuremath{ c } =\umean(\ensuremath{ y^+ }=100)$ for $\ensuremath{ \Ri_\tau } =$ 0, 10, 18, 60, 100 (top to bottom). Green dashed lines are (a) $\lambda_x = 15\lambda_z$, (b) $\lambda_x = 10\lambda_z$ and (c) $\lambda_x = 5\lambda_z$. }
\label{fig:lowrank}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{2D_spectra_DNS}
\caption{Contour plots depicting the premultiplied streamwise kinetic energy spectra as functions of the streamwise and spanwise wavelengths obtained from DNS at (a) $y^+=15$, (b) $y^+=30$, and (c) $y^+=100$ for $\ensuremath{ \Ri_\tau } = 0$ (solid line), $\ensuremath{ \Ri_\tau } = 60$ (dashed line) and $\ensuremath{ \Ri_\tau } = 100$ (dotted line). The shaded contours are from the $\ensuremath{ \Reynolds_\tau } = 180$ neutral channel \cite{delAlamo2004}. The levels plotted are $0.1, 0.3, 0.5$ times the maximum value of the corresponding spectrum.}
\label{fig:DNSspec}
\end{figure}
To study the variation in the low-rank behavior for different magnitudes of stratification, we plot the energetic contribution of the principal response modes to the total response for a given $\ensuremath{\bs{k}}$, quantified by $(\sigma_1^2+\sigma_2^2) / \Sigma_j \sigma_j^2$, for a range of wall-parallel wavelengths (Fig. \ref{fig:lowrank}). The leading response modes account for more than $80\%$ of the total response over a large range of homogeneous wavelengths for the three wavespeeds selected.
The range of wavenumbers for which the resolvent operator is low-rank changes significantly with stratification. In the neutrally-buoyant case ($\ensuremath{ \Ri_\tau }=0$), we see that $\ensuremath{ \mathcal{H} }$ is low-rank over a range of moderate-to-large streamwise wavelengths. For the neutrally-buoyant case, it is known that the low-rank region coincides with the most energetic wavenumbers from the premultiplied energy spectra of a turbulent channel \cite{Moarref2013}. As the friction Richardson number initially increases, the low-rank behavior shifts to only a small range of streamwise wavelengths. We see a similar phenomenon in the premultiplied streamwise energy spectra from the DNS (Fig. \ref{fig:DNSspec}), where with increasing $\ensuremath{ \Ri_\tau }$, the larger streamwise wavelength content is suppressed. This was also observed in the premultiplied energy spectra of Garc\'ia-Villalba \& del \'Alamo~\cite{Garcia2011} for a wider range of $\ensuremath{ \Reynolds_\tau }$ and $\ensuremath{ \Ri_\tau }$.
However, after $\ensuremath{ \Ri_\tau } = 18$, the low-rank behavior of the principal resolvent modes intensifies along a vertical band $\ensuremath{\lambda_x}/\delta\geq 1$ until the system becomes low-rank at large spanwise wavelengths with almost no low-rank behavior below the green dashed line in Fig. \ref{fig:lowrank} ($\ensuremath{\lambda_x}=15\ensuremath{\lambda_z}$, $10\ensuremath{\lambda_z}$ and $5\ensuremath{\lambda_z}$ for $y^+ = 15$, $30$ and $100$, respectively). This seems to indicate a low-rank behavior in structures that are descriptive of quasi-two-dimensional flow where $\lambda_z\gg\lambda_x$. Hopfinger \cite{Hopfinger1987} details the emergence of two-dimensional modes for a variety of flows with strong stratification. Moreover, Mahrt \cite{Mahrt2014} alludes to the emergence of two-dimensional modes (often referred to as pancake modes) owing to the conversion of vertical kinetic energy to potential energy in the presence of strong stable stratification. The premultiplied energy spectra for higher $\ensuremath{ \Ri_\tau }$ indicate high energy in the vertical band as well \cite{Garcia2011}.
\subsection{Mode shapes}\label{sec:mode_shapes}
\begin{table}
\caption{Representative wavenumber combinations that we will explore in \S\ref{sec:mode_shapes}.}
\centering
\begin{ruledtabular}
\begin{tabular}{lccc}
Mode name & $\ensuremath{k_x}$ & $\ensuremath{k_z}$ & $\ensuremath{ c }$ \\
\colrule
E1: most energetic mode for $y^+=15$ & $\pi/2$ & $4\pi$ & $\umean(\ensuremath{ y^+ }=15)$ \\
E2: most energetic mode for $y^+=30$ & $\pi/2$ & $3\pi$ & $\umean(\ensuremath{ y^+ }=30)$ \\
E3: most energetic mode for $y^+=100$ & $\pi/2$ & $2\pi$ & $\umean(\ensuremath{ y^+ }=100)$
\end{tabular}
\end{ruledtabular}
\label{tab:combos}
\end{table}
\begin{figure}
\centering
\subfloat[][]{\includegraphics[height=4.5cm]{u_rms_res}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[height=4.5cm]{rho_rms_res}}
\caption{Amplitudes of the leading resolvent response modes for the (a) streamwise velocity and (b) density for $\ensuremath{ \Ri_\tau } = 0,10,18,60,100$ (darker to lighter) at $c = \bar{u}(y^+=15)$ (dashed line), $\bar{u}(y^+=30)$ (dot-dashed line) and $\bar{u}(y^+=100)$ (dotted line) for wave-parameters corresponding to E1, E2 and E3, respectively. The subscripts $u$ and $\rho$ indicate the corresponding components of the resolvent response mode.}
\label{fig:rms_res}
\end{figure}
In order to study the flow structures, we compute the resolvent response modes for a set of wave parameters. The most energetic scales for the various $\ensuremath{ \Ri_\tau }$ under consideration at the different wall-normal heights still coincide with those of the neutrally-buoyant case (Fig. \ref{fig:DNSspec}), falling in the low-rank region, even though including the scalar advection-diffusion equation in the governing equations changes the wavelengths at which the resolvent operator is low-rank (Fig. \ref{fig:lowrank}). In this section, we study the resolvent response mode shapes for these wavenumber and wavespeed combinations, listed in Table \ref{tab:combos}. In particular, mode E1 is the most energetic mode for $y^+=15$, E2 for $y^+=30$ and E3 for $y^+=100$.
The predictive capabilities of the resolvent modes are first shown through the amplitudes of the leading resolvent response modes (Fig. \ref{fig:rms_res}) of the streamwise velocity and density. The resolvent modes compare well to the streamwise velocity and density turbulence intensities in Fig. \ref{fig:mean_profiles}(c,d). The streamwise root-mean-square (r.m.s.) quantities and resolvent amplitudes show no variation among different Richardson numbers close to the wall and increase slightly with $\ensuremath{ \Ri_\tau }$ farther away from the wall. On the other hand, the density r.m.s. and resolvent amplitudes decrease significantly with Richardson number at all wall-normal heights. Despite only using the leading resolvent mode, the relative magnitude at each corresponding wall-normal height is well captured for the range of Richardson numbers considered here.
\begin{figure}
\centering
\subfloat[][]{\includegraphics[height=4.5cm]{2D_resolvent_mode_kx_0p5pi_kz_4pi_yplus_15_jfric_0}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[height=4.5cm]{2D_resolvent_mode_kx_0p5pi_kz_4pi_yplus_15_jfric_18}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[height=4.5cm]{2D_resolvent_mode_kx_0p5pi_kz_4pi_yplus_15_jfric_100}}
\caption{Two-dimensional response mode shapes for $(\ensuremath{k_x},\ensuremath{k_z}) = (\pi/2,4\pi)$ at a critical-layer location of $\ensuremath{ y^+ }=15$ for (a) $\ensuremath{ \Ri_\tau } = 0$, (b) $18$, and (c) $100$. Red and blue contours represent positive and negative fluctuations, respectively. The contour levels are scaled by the maximum of each mode component. The dashed black line in each sub-plot is the location of the critical-layer where $\ensuremath{ c } = \umean(\ensuremath{ y^+ }=15)$.}
\label{fig:resp_yp15}
\end{figure}
\begin{figure}
\centering
\subfloat[][]{\includegraphics[height=4.5cm]{2D_corr_yplus_15_jfric_0}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[height=4.5cm]{2D_corr_yplus_15_jfric_100}}
\caption{Autocorrelation coefficients $C_{uu}$, $C_{vv}$, $C_{ww}$ and $C_{\rho\rho}$ of the DNS at $\ensuremath{ y^+ }=15$ for (a) $\ensuremath{ \Ri_\tau } = 0$ and (b) $100$. Red and blue contours represent positive (0.4, 0.6, 0.8) and negative (-0.2) correlation, respectively, with each contour level signifying 0.2 increments. The horizontal dashed line is $\ensuremath{ y^+ }=30$ and the vertical dotted line is $\Delta x = 0$.}
\label{fig:corr_yp15}
\end{figure}
Additionally, we examine the response mode shapes in two dimensions for the different regions and compare the structures observed in the resolvent modes with the autocorrelation coefficient from the DNS data. We first define the streamwise auto-covariance as
\begin{equation}
\hat{R}_{qq}(\ensuremath{k_x},y,y',\ensuremath{k_z}) = \langle
\hat{q}(\ensuremath{k_x},y,\ensuremath{k_z})\hat{q}^*(\ensuremath{k_x},y',\ensuremath{k_z})
\rangle,
\end{equation}
where $q$ is a generic variable of zero mean and $\langle\cdot\rangle$ is the expected value. The auto-covariance in physical space, $R_{qq}(\Delta x,y,y',\Delta z)$, is obtained as the inverse Fourier transform of $\hat{R}$, where $\Delta x =x-x'$ and $\Delta z= z-z'$ are the distances between the two points in the homogeneous directions. The autocorrelation coefficient,
\begin{equation}
C_{qq}(\boldsymbol{x},\boldsymbol{x}') =
\frac{R_{qq}(\boldsymbol{x},\boldsymbol{x}')}
{\varsigma_q(\boldsymbol{x})\varsigma_q(\boldsymbol{x}')},
\end{equation}
is obtained by normalising the covariance with the product of the standard deviations, $\varsigma$, at the two points involved in the measurements, which is the normalization adopted by most researchers \cite{Tritton1967,Liu2001,Ganapathisubramani2005,Lee2011,Pirozzoli2011,Sillero2014}.
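As a one-dimensional illustration (synthetic data, streamwise direction only), the circular covariance and the correlation coefficient can be computed with FFTs in the homogeneous direction:

```python
import numpy as np

rng = np.random.default_rng(2)
nx, nyp = 64, 5                           # streamwise points, wall-normal heights
q = rng.standard_normal((nx, nyp))
q -= q.mean(axis=0)                       # zero mean in the homogeneous direction

qhat = np.fft.fft(q, axis=0)
jref = 2                                  # reference wall-normal height y'
Rhat = qhat * qhat[:, jref:jref + 1].conj()
R = np.fft.ifft(Rhat, axis=0).real / nx   # R_qq(Delta x, y, y'), circular average
sig = np.sqrt(np.mean(q**2, axis=0))      # standard deviation at each height
C = R / (sig[None, :] * sig[jref])        # autocorrelation coefficient
```

By construction, $C = 1$ at zero separation at the reference height, and $|C| \leq 1$ everywhere by the Cauchy-Schwarz inequality.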
The two-dimensional structures of mode E1, whose scale coincides with that of the near-wall structures observed previously in experiments and simulations \cite{Kline1967, Smith1983}, are plotted in Fig. \ref{fig:resp_yp15}. The autocorrelations of the streamwise, wall-normal and spanwise velocity fields as well as the density field are shown in Fig. \ref{fig:corr_yp15} for a two-dimensional slice at $\Delta z = 0$, with reference location $y^{\prime+} = 15$.
The LES of Armenio \emph{et al.}~\cite{Armenio2002} and the DNS of Garc\'ia-Villalba \& del \'Alamo~\cite{Garcia2011} demonstrated that structures in the near-wall region are largely unaffected by stable stratification. As expected, both the resolvent response modes and the correlations do not change significantly over the range of $\ensuremath{ \Ri_\tau }$ considered. For the velocities, the main difference is a reduction in the autocorrelation coefficient in the stratified case. The largest difference occurs for the density, as the phase of the resolvent response mode in the wall-normal direction is shifted, creating structures that are more detached from the wall. Similarly, the density correlations are wall-attached for $\ensuremath{ \Ri_\tau }=0$ whereas they are more detached for $\ensuremath{ \Ri_\tau }=100$.
\begin{figure}
\centering
\subfloat[][]{\includegraphics[height=4.5cm]{2D_resolvent_mode_kx_0p5pi_kz_3pi_yplus_30_jfric_0}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[height=4.5cm]{2D_resolvent_mode_kx_0p5pi_kz_3pi_yplus_30_jfric_18}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[height=4.5cm]{2D_resolvent_mode_kx_0p5pi_kz_3pi_yplus_30_jfric_100}}
\caption{Two-dimensional response mode shapes for $(\ensuremath{k_x},\ensuremath{k_z}) = (\pi/2,3\pi)$ at a critical-layer location of $\ensuremath{ y^+ }=30$ for (a) $\ensuremath{ \Ri_\tau } = 0$, (b) $18$, and (c) $100$. Red and blue contours represent positive and negative fluctuations, respectively. The contour levels are scaled by the maximum of each mode component. The dashed black line in each sub-plot is the location of the critical-layer where $\ensuremath{ c } = \umean(\ensuremath{ y^+ }=30)$.}
\label{fig:resp_yp30}
\end{figure}
\begin{figure}
\centering
\subfloat[][]{\includegraphics[height=4.5cm]{2D_corr_yplus_30_jfric_0}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[height=4.5cm]{2D_corr_yplus_30_jfric_100}}
\caption{ Autocorrelation coefficients $C_{uu}$, $C_{vv}$, $C_{ww}$ and $C_{\rho\rho}$ of the DNS at $\ensuremath{ y^+ }=30$ for (a) $\ensuremath{ \Ri_\tau } = 0$ and (b) $100$. Red and blue contours represent positive (0.4, 0.6, 0.8) and negative (-0.2) correlation, respectively, with each contour level signifying 0.2 increments. The horizontal dashed line is $\ensuremath{ y^+ }=30$ and the vertical dotted line is $\Delta x = 0$.} \label{fig:corr_yp30}
\end{figure}
We plot the resolvent response modes for the wavenumbers and wavespeed corresponding to E2 (Fig. \ref{fig:resp_yp30}) and the correlations for $y^{\prime+}=30$ at $\Delta z = 0$ (Fig. \ref{fig:corr_yp30}). The results are similar to those of E1: the velocity response modes do not vary across $\ensuremath{ \Ri_\tau }$, but a difference is observed in the density modes in the form of a phase change along $y$. The density correlations are wall-detached in both the $\ensuremath{ \Ri_\tau } = 0$ and $\ensuremath{ \Ri_\tau } = 100$ cases, although the centre of the density correlation for the $\ensuremath{ \Ri_\tau } = 100$ case lies farther from the wall.
\begin{figure}
\centering
\subfloat[][]{\includegraphics[height=4.5cm]{2D_resolvent_mode_kx_0p5pi_kz_2pi_yplus_100_jfric_0}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[height=4.5cm]{2D_resolvent_mode_kx_0p5pi_kz_2pi_yplus_100_jfric_18}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[height=4.5cm]{2D_resolvent_mode_kx_0p5pi_kz_2pi_yplus_100_jfric_100}}
\caption{Two-dimensional response mode shapes for $(\ensuremath{k_x},\ensuremath{k_z}) = (\pi/2,2\pi)$ at a critical-layer location of $\ensuremath{ y^+ }=100$ for (a) $\ensuremath{ \Ri_\tau } = 0$, (b) $18$, and (c) $100$. Red and blue contours represent positive and negative fluctuations, respectively. The contour levels are scaled by the maximum of each mode component. The dashed black line in each sub-plot is the location of the critical-layer where $\ensuremath{ c } = \umean(\ensuremath{ y^+ }=100)$.}
\label{fig:resp_yp100}
\end{figure}
\begin{figure}
\centering
\subfloat[][]{\includegraphics[height=4.5cm]{2D_corr_yplus_100_jfric_0}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[height=4.5cm]{2D_corr_yplus_100_jfric_100}}
\caption{Autocorrelation coefficients $C_{uu}$, $C_{vv}$, $C_{ww}$ and $C_{\rho\rho}$ of the DNS at $\ensuremath{ y^+ }=100$ for (a) $\ensuremath{ \Ri_\tau } = 0$ and (b) $100$. Red and blue contours represent positive (0.4, 0.6, 0.8) and negative (-0.2) correlation, respectively, with each contour level signifying 0.2 increments. The horizontal dashed line is $\ensuremath{ y^+ }=100$ and the vertical dotted line is $\Delta x = 0$.}
\label{fig:corr_yp100}
\end{figure}
The biggest difference in the resolvent response modes for the different Richardson numbers can be seen for the wavenumber and wavespeed corresponding to E3. We plot the resolvent response modes for the wavenumbers and wavespeed corresponding to E3 (Fig. \ref{fig:resp_yp100}) and the correlations for $y^{\prime+}=100$ at $\Delta z = 0$ (Fig. \ref{fig:corr_yp100}).
Here, all resolvent modes show significant differences in the stratified case compared to the unstratified case. In particular, the backwards tilting of the wall-normal velocity modes, the forward tilting of the density modes, as well as the phase difference across $y$ of the density mode are pronounced. These phenomena occur in the correlations as well. There is noticeable backwards tilting in the $C_{vv}$ term and forwards tilting in the $C_{\rho\rho}$ term for $\ensuremath{ \Ri_\tau } = 100$ compared to the neutrally stratified case. The biggest differences arise in the wall-normal velocity and density modes because they are coupled through the Richardson number in the stratified Navier-Stokes equations.
\subsection{Energy balance at selected scales}\label{sec:energy_balance}
Finally, we study the energy budget terms of the stratified channel. We define the production, transport, buoyancy flux, and viscous dissipation budget terms in the resolvent formulation \cite{Symon21,madhusudanan2020coherent} as
\begin{subequations}
\begin{align}
\mathcal{P}_{\text{tot}}(y) &= \mathbb{R}\left[
-\frac{\partial \umean}{\partial y}
\sum_j\int_{-\infty}^{\infty}
\sigma_j^2\chi_j^2 \Big(
\bs{\hat{\psi}}^{*}_{j,u}\bs{\hat{\psi}}_{j,v}
\Big) d\ensuremath{\bs{k}} \right],
\label{eqn:prod}\\
\mathcal{T}_{\text{tot}}(y) &= \mathbb{R}\left[
\sum_j\sum_i\int_{-\infty}^{\infty}
\sigma_j\chi_j\chi_i \ensuremath{ D_y }\Big(
\bs{\hat{\phi}}^{*}_{i,u}\bs{\hat{\psi}}_{j,v} +
\bs{\hat{\phi}}^{*}_{i,v}\bs{\hat{\psi}}_{j,v} +
\bs{\hat{\phi}}^{*}_{i,w}\bs{\hat{\psi}}_{j,v}
\Big) d\ensuremath{\bs{k}} \right],
\label{eqn:transp}\\
\mathcal{B}_{\text{tot}}(y) &= \mathbb{R}\left[
-\ensuremath{ \Ri_\tau }
\sum_j\int_{-\infty}^{\infty}
\sigma_j^2\chi_j^2 \Big(
\bs{\hat{\psi}}^{*}_{j,v}\bs{\hat{\psi}}_{j,\ensuremath{ \rho }}
\Big) d\ensuremath{\bs{k}} \right],
\label{eqn:buoy}\\
\mathcal{V}_{\text{tot}}(y) &= \mathbb{R}\left[
\frac{1}{\ensuremath{ \Reynolds_\tau }}
\sum_j\int_{-\infty}^{\infty}
\sigma_j^2\chi_j^2 \Big(
\bs{\hat{\psi}}^{*}_{j,u}\hat\Delta\bs{\hat{\psi}}_{j,u} +
\bs{\hat{\psi}}^{*}_{j,v}\hat\Delta\bs{\hat{\psi}}_{j,v} +
\bs{\hat{\psi}}^{*}_{j,w}\hat\Delta\bs{\hat{\psi}}_{j,w}
\Big) d\ensuremath{\bs{k}} \right],
\end{align}
\end{subequations}
where $\chi_j$, $\sigma_j$, $\bs{\hat{\psi}}_j$ and $\bs{\hat{\phi}}_j$ are functions of $\ensuremath{\bs{k}}$ and the subscripts $u,v,w,\ensuremath{ \rho }$ indicate the corresponding components of the response or forcing mode. To get a global sense of the energy balance, the equations above are integrated over all wavenumber triplets. Here, we will examine only the principal resolvent mode contribution to the local components of the total budgets for particular $\ensuremath{\bs{k}}$, defined as
\begin{subequations}
\begin{align}
\mathcal{P}(y,\ensuremath{\bs{k}}) &= \mathbb{R}\left[
-\frac{\partial \umean}{\partial y}
\sigma_1^2 \Big(
\bs{\hat{\psi}}^{*}_{1,u}\bs{\hat{\psi}}_{1,v}
\Big) \right],
\label{eqn:prod_k}\\
\mathcal{T}(y,\ensuremath{\bs{k}}) &= \mathbb{R}\left[
\sigma_1\ensuremath{ D_y }\Big(
\bs{\hat{\phi}}^{*}_{1,u}\bs{\hat{\psi}}_{1,v} +
\bs{\hat{\phi}}^{*}_{1,v}\bs{\hat{\psi}}_{1,v} +
\bs{\hat{\phi}}^{*}_{1,w}\bs{\hat{\psi}}_{1,v}
\Big) \right],
\label{eqn:transp_k}\\
\mathcal{B}(y,\ensuremath{\bs{k}}) &= \mathbb{R}\left[
-\ensuremath{ \Ri_\tau }
\sigma_1^2\Big(
\bs{\hat{\psi}}^{*}_{1,v}\bs{\hat{\psi}}_{1,\ensuremath{ \rho }}
\Big) \right],
\label{eqn:buoy_k}\\
\mathcal{V}(y,\ensuremath{\bs{k}}) &= \mathbb{R}\left[
\frac{1}{\ensuremath{ \Reynolds_\tau }}
\sigma_1^2\Big(
\bs{\hat{\psi}}^{*}_{1,u}\hat\Delta\bs{\hat{\psi}}_{1,u} +
\bs{\hat{\psi}}^{*}_{1,v}\hat\Delta\bs{\hat{\psi}}_{1,v} +
\bs{\hat{\psi}}^{*}_{1,w}\hat\Delta\bs{\hat{\psi}}_{1,w}
\Big) \right].
\label{eqn:diff_k}
\end{align}
\end{subequations}
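For illustration, once the leading singular value $\sigma_1$ and the discretized mode profiles are available, the rank-one budget terms above reduce to pointwise products on the wall-normal grid. The Python sketch below is our own illustration (function and argument names are assumptions, and the Laplacian operator is passed in as a callable); it evaluates the production, buoyancy-flux, and viscous terms, while the transport term, which also involves the forcing modes, is omitted for brevity.

```python
import numpy as np

def rank1_budget_terms(y, U_mean, Ri_tau, Re_tau, sigma1,
                       psi_u, psi_v, psi_w, psi_rho, laplacian):
    """Rank-one resolvent budget terms at a single wavenumber pair.

    psi_* are complex response-mode profiles on the wall-normal grid y;
    laplacian is a callable applying the Fourier-space Laplacian to a profile.
    """
    dUdy = np.gradient(U_mean, y)
    # Production: -dU/dy * sigma_1^2 * Re[psi_u^* psi_v]
    P = np.real(-dUdy * sigma1**2 * np.conj(psi_u) * psi_v)
    # Buoyancy flux: -Ri_tau * sigma_1^2 * Re[psi_v^* psi_rho]
    B = np.real(-Ri_tau * sigma1**2 * np.conj(psi_v) * psi_rho)
    # Viscous term: (sigma_1^2 / Re_tau) * Re[sum_i psi_i^* Lap(psi_i)]
    V = np.real(sigma1**2 / Re_tau * (np.conj(psi_u) * laplacian(psi_u)
                                      + np.conj(psi_v) * laplacian(psi_v)
                                      + np.conj(psi_w) * laplacian(psi_w)))
    return P, B, V
```

Note that for $\ensuremath{ \Ri_\tau } = 0$ the buoyancy-flux term vanishes identically, consistent with the neutrally-buoyant case.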
The results for wavenumber combinations E1, E2 and E3 are shown in Fig. \ref{fig:energy_budget_res}. Since the wavenumber combinations E1, E2 and E3 are the most energetic at each wavespeed, we predict that the local components of the budget term should indicate the overall trend of the total budget term at the corresponding wall-normal height. These quantities are compared to the energy budget computed from the DNS, shown in Fig. \ref{fig:energy_budget_DNS}.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{energy_budget_res}
\caption{Energy budget terms computed from resolvent modes Eq. (\ref{eqn:prod_k}--\ref{eqn:diff_k}) for wavenumbers given by (a) E1 at $\ensuremath{ c }=\umean(\ensuremath{ y^+ }=15)$, (b) E2 at $\ensuremath{ c }=\umean(\ensuremath{ y^+ }=30)$, and (c) E3 at $\ensuremath{ c }=\umean(\ensuremath{ y^+ }=100)$ for $\ensuremath{ \Ri_\tau } = 0, 10, 18, 60, 100$ (darker to lighter).}
\label{fig:energy_budget_res}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{energy_budget_DNS}
\caption{Energy budget terms computed from the DNS for $\ensuremath{ \Ri_\tau } = 0, 10, 18, 60, 100$ (darker to lighter).}
\label{fig:energy_budget_DNS}
\end{figure}
The trends observed in the energy budget computed from the DNS are also recovered in the resolvent budgets. The production is mostly balanced by viscous dissipation and has larger magnitudes compared to the transport (approximately 10\% of the production term) or buoyancy flux (approximately 0.1--1\% of the production term, depending on $\ensuremath{ \Ri_\tau }$) terms. Comparing the quantities at the wall-normal heights of interest, we see that at $y^+=15$, there is little variation in the production and viscous dissipation terms in both DNS and resolvent modes. The difference in relative magnitude over the various values of $\ensuremath{ \Ri_\tau }$ increases farther away from the wall, and at $y^+=100$, the production (and viscous diffusion) of the $\ensuremath{ \Ri_\tau } = 100$ case is double that of the neutrally-buoyant case in both the DNS and resolvent.
Direct comparison of the integrated magnitudes is more difficult for the transport and buoyancy flux terms as they are not uniformly positive or negative. However, this indicates that, locally, the buoyancy flux acts as an energy transfer term, much like the turbulent transport, as the term adds energy at one wall-normal location and removes it from another. Because the DNS energy budget is integrated over all spatio-temporal scales, it is impossible to deduce that the buoyancy flux term acts as a local energy transfer term from Fig. \ref{fig:energy_budget_DNS}, which shows a net negative energy balance from $\mathcal{B}$ at all wall-normal locations. In contrast, the resolvent buoyancy flux term indicates a non-monotonic distribution of energy in the wall-normal direction. Similar results could be obtained through spatio-temporal deconstruction of the DNS energy budget term as in Ref. \cite{Mizuno2016}, but this would require a time-resolved dataset over a longer time domain. The resolvent turbulent transport stays relatively similar among different $\ensuremath{ \Ri_\tau }$, as does the turbulent transport term from DNS. The buoyancy flux is much more dependent on $\ensuremath{ \Ri_\tau }$, with variations becoming greater farther away from the wall in both the DNS and resolvent results.
These results can be better quantified by plotting the values at each wall-normal location normalized by the peak production at $y^+=15$ for each case, as shown in Fig. \ref{fig:budget_ratio}. This shows that the overall trends of the budget terms are well captured by the resolvent budget terms, with the exception of the transport term close to the wall. This discrepancy may be attenuated by integrating over more wavespeeds.
\begin{figure}
\centering
\subfloat[][]{\includegraphics[width=0.42\textwidth]{budget_ratio_res}}
\hspace{0.1cm}
\subfloat[][]{\includegraphics[width=0.42\textwidth]{budget_ratio_DNS}}
\caption{(a) Resolvent energy budget terms for wavenumber combinations E1, E2 and E3 evaluated at $y^+=15,30,100$, respectively, for $\ensuremath{ \Ri_\tau }=0,10,18,60,100$ (darker to lighter) normalized with $\mathcal{P}(y^+=15)$ for $\ensuremath{ \Ri_\tau } = 0$ and wavenumber combination E1. (b) DNS energy budget terms at $y^+=15,30,100$ for $\ensuremath{ \Ri_\tau }=0,10,18,60,100$ (darker to lighter) normalized with $\mathcal{P}_\text{DNS}(y^+=15)$ for $\ensuremath{ \Ri_\tau } = 0$. Symbols are $\mathcal{P}$, circles; $\mathcal{T}$, triangles; $\mathcal{V}$, crosses; and $\mathcal{B}$, asterisks.}
\label{fig:budget_ratio}
\end{figure}
Note that the results are not expected to match those of DNS for all scales, as the energy captured in the wall-parallel resolvent modes is known to be overpredicted and the energy captured in the Reynolds stress and wall-normal resolvent modes underpredicted. This is a known issue for resolvent analysis in the primitive variables due to the competing mechanisms of the Squire modes with the Orr-Sommerfeld modes \cite{Moarref2014,Rosenberg18}. Additionally, the underprediction of energy captured in the Reynolds stress and wall-normal resolvent modes could explain the underprediction of the transport term close to the wall. Crucially, though, the most energetic scale can reproduce the integrated effect of all scales, which enables a quick predictive model of stratified boundary layers.
\section{Conclusions}\label{sec:conclusions}
The resolvent framework for the Navier-Stokes equations with the Boussinesq approximation was applied to a stratified turbulent boundary layer. Computation of the leading resolvent modes is more cost-effective than performing a full-scale simulation or experiment, while still providing meaningful information about the flow. This quick model can provide meaningful insight into stratified flows with only information about the mean profile and prior knowledge of the energetic scales of motion in neutrally-buoyant boundary layers.
The results show that despite using only a very limited range of representative scales, the resolvent model was able to reproduce the relative magnitude of turbulence intensities and the balance of the energy budget as well as provide meaningful analysis of structures in the flow. We studied the amplitude of the resolvent response modes and their two-dimensional mode shapes of the rank-one approximation, which were then compared to the turbulence intensities and the two-dimensional auto-correlation of the velocity and density fields of the DNS, respectively. The resolvent response modes were able to predict the relative variation in turbulence intensities as a function of wall-normal distance and Richardson number for the $\ensuremath{ \Ri_\tau }$ under consideration in this study. The two-dimensional mode shapes also provided insight into how the auto-correlation coefficient might shift as a function of $\ensuremath{ \Ri_\tau }$. Finally, the energy budget terms for the turbulent kinetic energy of the system were computed both using the rank-one approximation of the resolvent analysis and the DNS data. Again, the resolvent energy budget predicts well the relative distribution of energy between production, dissipation, transport, and buoyancy flux as a function of wall-normal distance and Richardson number.
In the current study, the resolvent model was closed using mean velocity and density profiles obtained from DNS. The computational cost of calculating the forcing and response modes at certain scales was on the order of seconds on a laptop. Therefore, by obtaining only mean velocity and scalar profiles we could generate the salient modal structure for a given stratified wall-bounded flow. The next steps involve using in-situ data to generate modes that are representative of flow phenomena observed in nature.
\section*{Acknowledgements}
The support of a Vannevar Bush Faculty Fellowship administered under
the U.S. Office of Naval Research, grant \#N00014-17-1-3022, is
gratefully acknowledged. Additionally, the authors would like to
thank Dr.\ Angeliki Laskari for insightful discussions.
\section{Introduction}
Sleep is an essential process for maintaining life and health, even though we lose consciousness during it \cite{yeom2017spatio,lee2017network,lee2019connectivity}. Sleep deprivation causes a reduction of cognitive ability, depression, and impairment of motor function \cite{sdeffects,kweon2020prediction}. In addition, a recent study suggested that it might be related to diabetes and obesity \cite{sdeffects2}. Although many people know sleep is important, many still struggle to fall asleep. Therefore, attention to sleep induction has increased, not only among patients with sleep disorders but also among healthy people.
To induce sleep, people have attempted various methods. One method that anyone can easily use is auditory stimulation. The binaural beat (BB) is an oscillatory stimulus delivered at two adjacent frequencies, one to each ear, at the same time \cite{bb}. This induces an oscillation at the frequency difference in the brain \cite{bb2}. A 6 Hz BB combined with natural sounds showed the possibility of sleep induction \cite{bbefects}. White noise (WN) combines sounds of all frequencies together. When WN was given, more neonates fell asleep within five minutes than when it was not \cite{wnbaby}. To improve the sleep experience, WN is recommended for use in intensive care and coronary care units \cite{wn1,wn2}. Natural sounds are popular on the internet for inducing sleep; rainy sound (RS) recordings have received about 15 million views on YouTube. Monotonous tasks, including driving a car and watching an empty computer screen, induce micro-sleep \cite{mseffect,kweon2021automatic}. We hypothesized that a repetitive beep sound (RB) induces sleep like monotonous tasks do. However, the sleep induction effects of auditory stimulation are unclear and require more evidence under different conditions.
In this study, we investigated the sleep induction effects of five auditory stimuli: sham, RB, BB, WN, and RS. We assessed placebo effects using a sham condition with no sound. To confirm that there was no difference in the subjects' mental states at the initiation of sessions, a psychomotor vigilance task (PVT) and the Stanford sleepiness scale (SSS) were administered before the auditory stimuli were given. After auditory stimulation, we asked subjects to report their sleep experience during the stimulation. We also calculated the alpha dominant duration using the electroencephalogram (EEG), defined as the period during which alpha power was dominant during auditory stimulation. Our results suggested that the suitable auditory stimulation depends on the subjects' mental states.
\section{Materials and Methods}
\subsection{Participants}
Thirteen healthy participants (mean age: 26.69 $\pm$ 2.46; 5 female) without any neurological or auditory disease took part in the experiment. Questionnaires before the experiments indicated that participants did not take any medication at the time of the experimental session. The Pittsburgh Sleep Quality Index (PSQI) was administered to evaluate sleep quality \cite{psqi}. Subjects with a PSQI under 5 were assigned to the good sleep group; those with a PSQI of 5 or more formed the poor sleep group \cite{psqi2}. This study was approved by the Korea University Institutional Review Board (KUIRB-2021-0155-03) and written informed consent was obtained from participants. Because of a malfunction of the electroencephalogram recording, one subject was excluded from the data analysis.
\begin{figure}[tbp]
\centering
\includegraphics[width=.95\linewidth]{figure1.pdf}
\caption{Overview of Experiment Paradigm.}
\label{paradigm}
\end{figure}
\subsection{Stimulation and Procedures}
We prepared five auditory stimuli: sham, RB, BB, WN, and RS. During sham auditory stimulation, the audio was muted and no sound was presented. RB was a 512 Hz tone lasting 2 s, presented every 5 s. The Gnaural software generated 250 Hz and 256 Hz tones at the left and right ears, respectively, for BB \cite{bbefects}. We utilized the free WN track "pure noise 3" from MC2Method (https://mc2method.org/white-noise/). RS was "Rain Heavy Loud" from YouTube Studio. All subjects selected a comfortable sound volume for the auditory stimulation, between 40 dB and 45 dB.
We performed a modified PVT for 5 min to assess the vigilance of the participants before the auditory stimulation was given \cite{pvt_five}. At the start of each trial, nothing was shown on the computer screen. Every 2--10 s, the screen showed a counter representing the time elapsed since it appeared. Participants were instructed to stop the counter as fast as possible by pressing the space bar. After the response, the elapsed time remained on screen for 0.5 s as feedback about their reaction time.
The experiment consisted of five sessions. Before and after auditory stimulation, the SSS and Brunel mood scale ratings were completed to assess participants' sleepiness and emotion. We asked subjects to report their sleep experience after auditory stimulation (Fig. \ref{paradigm}); it was reported as 0 if subjects fell asleep and 1 if subjects stayed awake. Each session presented one of the prepared auditory stimuli in random order. During the experiment, we recorded the EEG of subjects with a 64-channel Ag/AgCl electrode setup according to the 10-20 international system using BrainAmp (ActiCap, Brain Products, Germany) \cite{lee2018high,jeong2020brain,kwon2019subject}.
\subsection{EEG Analysis}
To assess the sleep duration during auditory stimulation, we computed a spectrogram in MATLAB (R2021a, The MathWorks, Natick, MA) with a window of 512 samples and 256 overlapping samples. We used four frequency bands: delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-14 Hz), and beta (14-40 Hz). We thus obtained an $f \times T$ matrix from each 10 min of occipital EEG during auditory stimulation ($f$: 4 and $T$: 586). The alpha dominant duration was the ratio between the number of time points at which alpha power was the largest and $T$. The occipital region contained 6 channels: Oz, O1, O2, POz, PO3, and PO4.
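As an illustrative cross-check of this definition (the original analysis was performed in MATLAB), the computation can be sketched in Python with NumPy. The 250 Hz sampling rate and the Hann window below are assumptions for illustration only; the 512-sample window and 256-sample overlap follow the text.

```python
import numpy as np

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 14), "beta": (14, 40)}

def alpha_dominant_duration(x, fs, win=512, overlap=256):
    """Fraction of spectrogram time points at which alpha band power is largest."""
    step = win - overlap
    n_win = 1 + (len(x) - win) // step
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    hann = np.hanning(win)
    alpha_count = 0
    for i in range(n_win):
        seg = x[i * step:i * step + win] * hann
        psd = np.abs(np.fft.rfft(seg)) ** 2
        # Total power in each of the four bands for this time point
        powers = {name: psd[(freqs >= lo) & (freqs < hi)].sum()
                  for name, (lo, hi) in BANDS.items()}
        if max(powers, key=powers.get) == "alpha":
            alpha_count += 1
    return alpha_count / n_win
```

For a pure 10 Hz (alpha-band) test signal the returned ratio is close to 1, while for a 2 Hz (delta-band) signal it is close to 0.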
\subsection{Statistical Analysis}
We performed a one-way analysis of variance to compare reaction time from the PVT and SSS between sessions \cite{lee2020frontal}. We also used a permutation test with 100 permutations to compare the sleep induction effects between the good and poor sleep groups. To investigate complementary usage of auditory stimulation, we calculated Cohen's kappa values of the sleep experience; a kappa value close to $-1$ means subjects' experiences were opposite between two auditory stimuli. The correlation between SSS and alpha dominant duration was assessed using Kendall's tau coefficient. The significance level $\alpha$ was 0.05.
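For two paired binary sleep reports (0 = asleep, 1 = awake), Cohen's kappa reduces to a few lines. The sketch below is purely illustrative (the actual analysis tooling is not specified in the text):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two paired binary ratings (0 = asleep, 1 = awake)."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)                      # observed agreement
    p_e = (np.mean(a == 0) * np.mean(b == 0)   # agreement expected by chance
           + np.mean(a == 1) * np.mean(b == 1))
    if p_e == 1.0:                             # degenerate case: no variation
        return 1.0
    return (p_o - p_e) / (1.0 - p_e)
```

Perfect agreement yields a kappa of 1, while complete disagreement on balanced ratings yields $-1$, the complementary case discussed below.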
\section{Results}
\begin{figure}[tbp]
\centering
\includegraphics[width=.95\linewidth]{figure2.pdf}
\caption{Reaction time (RT) and Stanford sleepiness scale (SSS) before sham, repetitive beep (RB), binaural beat (BB), white noise (WN), and rainy sound (RS) were given.}
\label{stats}
\end{figure}
\subsection{Mental States Before Auditory Stimulation}
The cognitive ability and sleepiness of subjects were not affected by time. The reaction time of the PVT was not significantly different between sessions ($p=0.855$). Differences in reaction time between the different auditory stimuli were also not significant ($p=0.712$). Moreover, the sleepiness of subjects before auditory stimulation was not significantly different ($p=0.688$), nor was sleepiness after auditory stimulation ($p=0.931$, Fig. \ref{stats}).
\begin{figure}[tbp]
\centering
\includegraphics[width=.95\linewidth]{figure3.pdf}
\caption{Sleep experience of the good and poor sleep groups during auditory stimulation: sham, repetitive beep (RB), binaural beat (BB), white noise (WN), and rainy sound (RS).}
\label{sleep}
\end{figure}
\subsection{Comparison of Sleep Induction Between Sleep Groups}
Since subjects reported 0 if they fell asleep, we averaged the sleep experience across subjects; a low value therefore meant that more subjects fell asleep during the auditory stimulation, indicating a high sleep induction effect. As shown in Fig. \ref{sleep}, WN and RS showed significantly lower sleep induction effects in the poor sleep group (WN: $p = 0.049$, RS: $p = 0.040$). However, there were no significant group differences in sleep induction effects for the other auditory stimuli. The good sleep group showed sleep induction effects similar to those of the poor sleep group during sham and BB (sham: $p = 0.604$, BB: $p = 0.475$). The poor sleep group also showed sleep induction effects similar to those of the good sleep group during RB ($p = 0.525$).
\begin{table}[bp]
\centering
\caption{Kappa value of sleep experience between auditory stimulation}
\label{tab:cohen}
\begin{threeparttable}
\begin{tabular}{p{0.1\linewidth}*{5}{p{0.12\linewidth}}}
\hline
& Sham & RB & BB & WN & RS \\ \hline
Sham & - & 0.400 & 0.625 & 0.118 & -0.588 \\
RB & 0.400 & - & 0.800 & 0.636 & -0.091 \\
BB & 0.625 & 0.800 & - & 0.471 & -0.235 \\
WN & 0.118 & 0.636 & 0.471 & - & -0.029 \\
RS & -0.588 & -0.091 & -0.235 & -0.029 & - \\ \hline
\end{tabular}
\begin{tablenotes}
\item[] RB, BB, WN, and RS are the repetitive beep, binaural beat, white noise, and rainy sounds, respectively.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Complementary Relation Between Auditory Stimulation}
We investigated alternative auditory stimulation to induce sleep if one stimulus fails. Table \ref{tab:cohen} shows the Cohen's kappa values of the sleep experience during auditory stimulation. If subjects failed to fall asleep during sham, most subjects fell asleep during RS (kappa: -0.588). However, the other auditory stimuli showed no complementary relations. Sham had sleep induction effects similar to RB and BB (sham-BB kappa: 0.625, sham-RB kappa: 0.400).
\begin{figure}[tbp]
\centering
\includegraphics[width=.95\linewidth]{figure4.pdf}
\caption{Spectrogram of EEG in subject 8 during auditory stimulation. Alpha dominant duration is shown below the type of auditory stimulation: sham, repetitive beep (RB), binaural beat (BB), white noise (WN), rainy sound (RS).}
\label{spectro}
\end{figure}
\begin{table}[bp]
\centering
\caption{Correlation between SSS and Alpha Dominant Duration}
\label{tab:cc}
\begin{threeparttable}
\begin{tabular}{p{0.12\linewidth}*{5}{p{0.12\linewidth}}}
\hline
& Sham & RB & BB & WN & RS \\ \hline
\begin{tabular}[l]{@{}c@{}}Before AS\end{tabular} & \begin{tabular}[l]{@{}c@{}}-0.017\\ (1.000)\end{tabular} & \textbf{\begin{tabular}[l]{@{}c@{}}-0.532\\ (0.036)\end{tabular}} & \begin{tabular}[l]{@{}c@{}}-0.116\\ (0.670)\end{tabular} & \begin{tabular}[l]{@{}c@{}}-0.183\\ (0.478)\end{tabular} & \begin{tabular}[l]{@{}c@{}}-0.123\\ (0.660)\end{tabular} \\
\begin{tabular}[l]{@{}c@{}}After AS\end{tabular} & \begin{tabular}[l]{@{}c@{}}-0.307\\ (0.219)\end{tabular} & \textbf{\begin{tabular}[l]{@{}c@{}}-0.510\\ (0.040)\end{tabular}} & \begin{tabular}[l]{@{}c@{}}-0.147\\ (0.573)\end{tabular} & \begin{tabular}[l]{@{}c@{}}-0.408\\ (0.092)\end{tabular} & \begin{tabular}[l]{@{}c@{}}-0.052\\ (0.885)\end{tabular} \\ \hline
\end{tabular}
\begin{tablenotes}
\item[] The results are Kendall's correlation coefficients with $p$-values in parentheses.
\item[] Bold represents a significant correlation.
\item[] RB, BB, WN, and RS are the repetitive beep, binaural beat, white noise, and rainy sounds, respectively.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Correlation Between Alpha Dominant Duration and SSS}
Alpha dominant duration was smaller when the subject reported falling asleep than when staying awake during auditory stimulation in subject 8 (Fig. \ref{spectro}). Table \ref{tab:cc} presents the correlation coefficients between SSS and alpha dominant duration. RB showed a negative correlation between alpha dominant duration and SSS both before and after auditory stimulation (before: $\rho = -0.532$, $p = 0.036$; after: $\rho = -0.510$, $p = 0.040$). However, there were no significant correlations between alpha dominant duration and SSS for sham, BB, WN, and RS. Sham showed a stronger negative correlation between alpha dominant duration and SSS after auditory stimulation than before (before: $-0.017$, after: $-0.307$).
\section{Discussion}
In this study, we investigated the sleep induction effects of five auditory stimuli based on mental states. The poor sleep group showed lower sleep induction effects from WN and RS than the good sleep group. Moreover, alpha dominant duration decreased when subjects reported high SSS. Another interesting finding was a complementary relation between auditory stimuli, indicating that alternative auditory stimulation could be used to induce sleep if one stimulus failed.
Since the sessions were performed consecutively, one might wonder whether the auditory stimulation in one session could affect the subject's sleepiness and sleep deprivation in the next session. However, according to our results, sleep deprivation effects and sleepiness were unchanged between sessions: there were no differences in reaction time or SSS between sessions. If the subjects had suffered sleep deprivation, the reaction time of the PVT would have increased \cite{tsdpvt}.
RS and WN showed lower sleep induction effects in the poor sleep group than in the good sleep group. There was evidence that autonomous sensory meridian response stimuli like RS were effective in inducing sleep \cite{bbefects}. Those results might stem from the possibility that the subjects were in the good sleep group; however, we could not check this since the authors did not administer the PSQI. WN is recommended for use in intensive care and coronary care units because it has a masking effect that blocks other noise \cite{wn1,wn2}. In our laboratory environment there was no other noise, so our results did not come from noise masking effects. However, most patients in intensive care units are in the poor sleep group \cite{icupsqi}. Therefore, more studies are needed to investigate why WN is useful in intensive care and coronary care units.
Our results will help people induce sleep effectively. In particular, RS will help those who cannot fall asleep without any auditory stimulation. In the future, we should investigate the two different groups: an RS-effective group and a sham-effective group. This will help people induce sleep effectively without trial and error. Monotonous tasks are effective in inducing micro-sleep \cite{mseffect}. RB was a monotonous task without any cognitive or motor execution. Alpha dominant duration and SSS showed a negative correlation during RB, indicating that sleepier subjects slept for a longer time during RB. There might be a more effective interval for presenting beep sounds, and we should investigate more conditions.
There were limitations to our study. First, we had a small sample to support our claims; therefore, we will perform experiments with new subjects. Second, our experimental condition was far from the daily sleep condition, and we should investigate whether these results are reproduced in bed at night. Finally, we should perform more advanced EEG analyses such as functional connectivity \cite{zhang2019strength,zhang2017hybrid}.
\section{Conclusion}
In conclusion, WN and RS were less effective in the poor sleep group than in the good sleep group. RB showed higher sleep induction effects when subjects were sleepier. If someone cannot fall asleep in silence, RS may help induce sleep.
\bibliographystyle{IEEE}
\section{INTRODUCTION}
Threshold nets are obtained by assigning a weight $x$, from a distribution $\rho(x)$, to each of $N$ nodes
and connecting any two nodes $i$ and $j$ whose combined weights exceed a certain threshold, $\theta$:
$x_i+x_j>\theta$~\cite{cal,bog,mas,kon}. Threshold nets can be produced with (almost) arbitrary degree distributions, including scale-free, by judiciously choosing the weight distribution $\rho(x)$ and the threshold $\theta$, and they encompass an astonishingly wide variety of important architectures: from the star graph (a simple ``cartoon'' model of scale-free graphs --- consisting of a single hub) with its low density of links, $2/N$, to the complete graph. Studied extensively in the graph-theoretical literature~\cite{gol80,mah,ham,mer}, they have recently come to the attention of statistical and non-linear physicists due to the beautiful work of Hagberg, Swart, and Schult~\cite{hag}.
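To make the construction concrete, the threshold rule can be sketched in a few lines of code; the weights and threshold below are chosen purely for illustration, and recover the star and complete graphs as limiting cases.

```python
def threshold_graph(weights, theta):
    """Edges of the threshold graph: connect nodes i, j when x_i + x_j > theta."""
    n = len(weights)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if weights[i] + weights[j] > theta}

# A single heavy node among light ones yields the star graph (one hub):
star = threshold_graph([1.0, 0.4, 0.4, 0.4, 0.4], theta=1.0)
# Uniformly heavy weights yield the complete graph:
complete = threshold_graph([1.0] * 5, theta=1.0)
```

The star graph on $N$ nodes has $N-1$ edges, consistent with the quoted link density of $2/N$, while the complete graph has all $N(N-1)/2$ edges.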
\begin{figure}[ht]
\includegraphics*[width=0.35\textwidth]{fig_1.eps}
\caption{Threshold network: (a)~The threshold graph resulting from
the sequence $(A,A,A,B,B,A,A,B)$, and (b)~its box representation, highlighting modularity. Nodes are added one at a time from bottom to top, $A$'s on the left and $B$'s on the right.}
\label{graph_box}
\end{figure}
Hagberg {\it et al}.\ exploit the fact that threshold graphs may be more elegantly encoded by a two-letter sequence, corresponding to two types of nodes, $A$ and $B$~\cite{rem1}.
As new nodes are introduced, according to a prescribed sequence, nodes of type $A$ connect to none of the existing nodes, while nodes of type $B$ connect to all of the nodes, of either type: $B\to A$ and $B\to B$.
In Fig.~\ref{graph_box}(a) we show an example of the threshold graph obtained from the sequence $(A,A,A,B,B,A,A,B)$.
Note the {\it modular\/} structure of threshold graphs: a subsequence of $n$ consecutive $B$'s gives rise to
a $K_n$-clique, while nodes in a subsequence of $A$'s connect to $B$ nodes thereafter, but not among one
another. We highlight this modularity with a diagram of boxes (similar to~\cite{hag}):
oval boxes enclose nodes of type $A$, that are not connected among themselves, while rectangular boxes
enclose $K$-cliques of $B$-nodes~\cite{boxy}. A link between two boxes means that all of the nodes in one box are connected to all of the nodes in the other, Fig.~\ref{graph_box}(b).
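The two-letter rule ($B$ connects to all existing nodes, $A$ to none) translates directly into code. A short sketch (our own illustration) that builds the adjacency structure for the example sequence $(A,A,A,B,B,A,A,B)$ of Fig.~\ref{graph_box}:

```python
from itertools import combinations

def sequence_net(seq):
    """Two-letter sequence net: each new 'B' connects to all existing nodes;
    each new 'A' connects to none (until later B's attach to it)."""
    adj = [set() for _ in seq]
    for j, letter in enumerate(seq):
        if letter == "B":
            for i in range(j):
                adj[i].add(j)
                adj[j].add(i)
    return adj

adj = sequence_net("AAABBAAB")
```

Because the final $B$ connects to every earlier node, any two nodes are joined either directly or through it, so the diameter of this graph is 2, as expected for a threshold net.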
Given the sequence of a threshold net, there exist fast algorithms to compute important structural benchmarks, besides its modularity, such as degree distribution, triangles, betweenness centrality, and the spectrum and eigenvectors of the graph Laplacian~\cite{hag}. The latter are a crucial determinant of dynamics and synchronization and have applications to graph partitioning and mesh processing~\cite{bar,nis,hon,hwa,mot,got}. Perhaps more importantly, it becomes thus possible to {\it design\/} threshold nets with a particular degree distribution, spectrum of eigenvalues, etc.,~\cite{hag}.
Despite their malleability, threshold nets
are limited in some obvious ways, for example their diameter is 1 or 2, regardless of the number of
nodes $N$. Our idea consists of studying the broader class of nets that can be constructed from a sequence
(formed from two or more letters) by deterministic rules of connectivity, in their own right. It is this constructibility from a sequence that gives the nets all their desired attributes: modularity (as in everyday life complex nets), easily computable structural measures --- including the possibility of design --- and a high degree of compressibility. Roughly speaking, each additional letter in the alphabet allows for an increase of one link in the nets' diameter, so that
the three-letter nets possess diameter 3 or 4 (some of the new types of two-letter nets have diameter 3). This modest increase is very significant, however, in view of the fact that the diameter of many everyday life complex nets is not much larger than that~\cite{alb}. Sequence nets gain us
much latitude in the types of nets that can be described in this elegant fashion, while retaining much of the analytical appeal of threshold nets. Another unusual property of sequence nets is that any
ensemble of sequence nets admits a natural ordering; simply list them alphabetically according to their sequences.
One may use this ordering for exploring eigenvalues and other structural properties of sequence nets.
In this paper, we make a first stab at the general class of {\it sequence nets\/}. In Section~\ref{two-letter}
we explore systematically all of the possible rules for creating connected sequence nets from a two-letter alphabet.
Applying symmetry arguments, we find that threshold nets are only one of three equivalence classes, characterized
by the highest level of symmetry. We then discuss the remaining two classes, showing that also then there is a high
degree of modularity and that various structural properties can be computed easily. Curiously, the new classes of
two-letter sequence nets can be related to a generalized form of threshold nets, where the {\it difference\/}
$|x_i-x_j|$, rather than
the sum of the weights, is compared to the threshold $\theta$.
In Section~\ref{three-letter} we derive all possible forms of connected three-letter sequence nets. Symmetry arguments
lead us to the discovery of 30 distinct equivalence classes. Among these classes, we identify a natural extension of threshold nets to three-letter sequence nets. Despite the enlarged alphabet, 3-letter
sequence nets do retain many of the desirable properties of threshold and 2-letter sequence nets.
We also show that at least some of the 3-letter sequence nets can be mapped into
threshold nets with {\it two\/} thresholds, instead of one. We conclude with a summary and discussion of open problems in Section~\ref{conclude}.
\section{2-Letter Sequence Nets}
\label{two-letter}
\subsection{Classification}
Consider graphs that can be constructed from sequences $(S_1,S_2,\dots,S_N)$ of the two letters $A$ and $B$.
We can represent any possible rule by a $2\times2$ matrix {\bf R} whose elements
indicate whether nodes of type $i$ connect to nodes of type $j$:
$R_{ij}=1$ if the nodes connect, and 0 otherwise ($i=1,2$ stands for $A,B$, respectively). Fig.~\ref{graph_box} gives an example of the graph obtained from the sequence $(A,A,A,B,B,A,A,B)$, applying the {\it threshold\/} rule ${0\,0\choose1\,1}$.
Since each element can be $0$ or $1$ independently of the others, there are $2^4=16$ possible rules. We shall disregard, however, the four rules that fail to connect between $A$ and $B$,
\begin{equation}
\begin{split}
{\bf R}_0=\left({0\,0\atop0\,0}\right)
,\qquad
{\bf R}_1=\left({1\,0\atop0\,0}\right), \\
{\bf R}_2=\left({0\,0\atop0\,1}\right)
,\qquad
{\bf R}_3=\left({1\,0\atop0\,1}\right),
\end{split}
\end{equation}
for they yield simple {\it disjoint\/} graphs of the two types of nodes: ${\bf R}_0$ yields isolated nodes only, ${\bf R}_3$ yields one complete graph of type $A$ and one of type $B$, ${\bf R}_1$ yields a complete graph of type $A$ and isolated nodes of type $B$, etc.
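The remaining rules can be explored mechanically. The sketch below (with hypothetical helper names) grows the net for an arbitrary rule matrix ${\bf R}$, with index $0$ standing for $A$ and $1$ for $B$; it can serve as a brute-force check of the symmetry arguments that follow.

```python
def sequence_net(sequence, R):
    """Grow a two-letter sequence net: the new node of type t links to an
    earlier node of type s whenever R[t][s] == 1 (index 0 = A, 1 = B)."""
    idx = {'A': 0, 'B': 1}
    adj = [set() for _ in sequence]
    for k, t in enumerate(sequence):
        for j in range(k):
            if R[idx[t]][idx[sequence[j]]]:
                adj[k].add(j)
                adj[j].add(k)
    return adj

R4 = [[0, 0], [1, 1]]   # the threshold rule
R8 = [[0, 0], [1, 0]]   # bipartite nets
R13 = [[1, 1], [0, 1]]  # entrywise complement of R8
```

For instance, applying ${\bf R}_8$ and ${\bf R}_{13}$ to the same sequence produces complementary graphs, since exactly one of the two matrices has a $1$ in each entry.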
\begin{figure}[ht]
\includegraphics*[width=0.35\textwidth]{fig_2.eps}
\caption{Combined time reversal and permutation symmetry: The graphs resulting from
${\bf R}_4$ applied to the sequence $(A,A,A,B,B,A,A,B)$~(a), and from
${\bf R}_6$
applied to the reverse-inverted sequence $(A,B,B,A,A,B,B,B)$~(b), are identical.}
\label{time_reversal}
\end{figure}
The list of remaining rules can be shortened further by considering two kinds of symmetries: (a)~permutation, and (b)~time reversal. {\it Permutation\/} is the symmetry obtained by permuting between the two types of nodes, $A\leftrightarrow B$. Thus, a permuted rule ($R_{11}\leftrightarrow R_{22}$ and $R_{12}\leftrightarrow R_{21}$) acting on a permuted sequence (${\bar S}_1,{\bar S}_2\dots,{\bar S}_N$) yields back the original graph~\cite{rem2}. {\it Time reversal\/} is the symmetry obtained by reversing the arrows (``time") in the
connectivity rules, or taking the transpose of ${\bf R}$. The transposed rule acting on the reversed sequence $(S_N,S_{N-1},\dots,S_1)$ yields back the original graph. The two symmetry operations are their own inverse and they form a symmetry group. In particular, one may combine the two symmetries: a rule with $R_{11}\leftrightarrow R_{22}$ applied on a reversed sequence with inverted types
$({\bar S}_N,{\bar S}_{N-1},\dots,{\bar S}_1)$ yields back the original graph, see Fig.~\ref{time_reversal}.
All of the four rules
\begin{equation}
\begin{split}
{\bf R}_4=\left({0\,0\atop1\,1}\right)
,\qquad
{\bf R}_5=\left({1\,1\atop0\,0}\right), \\
{\bf R}_6=\left({1\,0\atop1\,0}\right)
,\qquad
{\bf R}_7=\left({0\,1\atop0\,1}\right),
\end{split}
\end{equation}
are equivalent and generate threshold graphs. ${\bf R}_4$ is the rule for threshold graphs exploited by Hagberg et al.,~\cite{hag}, and ${\bf R}_5$ is equivalent to it by permutation. ${\bf R}_6$ is obtained from
${\bf R}_4$ by time reversal and permutation (Fig.~\ref{time_reversal}), and ${\bf R}_7$ is obtained from ${\bf R}_4$ by time reversal.
The two rules
\begin{equation}
{\bf R}_8=\left({0\,0\atop1\,0}\right)
,\qquad
{\bf R}_9=\left({0\,1\atop0\,0}\right),
\end{equation}
are equivalent, by either permutation or time reversal, and generate non-trivial bipartite graphs that are different from threshold nets (Fig.~\ref{ABgraphs}).
The rule ${\bf R}_{10}={0\,1\choose1\,0}$ generates complete bipartite graphs. However, the complete bipartite graph $K_{p,q}$ can also be produced by applying ${\bf R}_8$ to the sequence $(A,A,\dots A,B,B,\dots B)$ of $p$ $A$'s followed by $q$ $B$'s, so the rule ${\bf R}_{10}$ is a ``degenerate'' form of ${\bf R}_8$. One could see
that this is the case at the
outset, because of the symmetrical relations $A\to B$, $B\to A$: these render the ordering of the $A$'s and $B$'s in the graph's sequence irrelevant. By the same principle, ${\bf R}_{11}={0\,1\choose1\,1}$ and
${\bf R}_{12}={1\,1\choose1\,0}$ are degenerate forms of ${\bf R}_4$ and ${\bf R}_5$, respectively. They yield threshold graphs with segregated sequences of $A$'s and $B$'s.
The two rules
\begin{equation}
{\bf R}_{13}=\left({1\,1\atop0\,1}\right)
,\qquad
{\bf R}_{14}=\left({1\,0\atop1\,1}\right),
\end{equation}
are equivalent, by either permutation or time reversal, and generate non-trivial graphs different from threshold graphs and graphs produced by ${\bf R}_8$ (Fig.~\ref{ABgraphs}). Finally, the rule ${\bf R}_{15}={1\,1\choose1\,1}$ is a degenerate
form of ${\bf R}_{13}$ (or ${\bf R}_{14}$) and yields only complete graphs (which are threshold graphs, so ${\bf R}_{15}$ is also subsumed in ${\bf R}_{4}$).
\begin{figure}[ht]
\includegraphics*[width=0.47\textwidth]{fig_3.eps}
\caption{Distinct types of connected non-trivial two-letter sequential graphs: All three graphs are generated from the same sequence, $(A,A,A,B,B,A,A,B)$, applying
rules ${\bf R}_8$~(a), ${\bf R}_4$~(b), and ${\bf R}_{13}$~(c). Note the figure-background symmetry of (a) and (c): the graphs are the inverse, or complement of one another (see text).
The inverse of the threshold graph (b) is also a (two-component) threshold graph, obtained from the same sequence and applying the rule ${\bf R}_5$ (${\bf R}_4$'s complement).}
\label{ABgraphs}
\end{figure}
To summarize, ${\bf R}_4$, ${\bf R}_8$, and ${\bf R}_{13}$ are the only two-letter rules that generate different classes of non-trivial connected graphs.
There is yet another amusing type of symmetry: applying ${\bf R}_8$ and ${\bf R}_{13}$ to the same sequence yields {\it complement\/}, or {\it inverse\/ }graphs --- nodes are adjacent in the inverse graph if and only if they are {\it not\/} connected in the original graph. The figure-background symmetry manifest in the rules
${\bf R}_8$ and ${\bf R}_{13}$ ($0\leftrightarrow1$) is also manifest in the graphs they produce (Fig.~\ref{ABgraphs}a,c).
On the other hand, the inverse of a threshold graph is also a threshold graph. Also, the complement of
a threshold rule applied to the complement (inverted) sequence yields back the original graph. In this sense, threshold graphs
have maximal symmetry. ${\bf R}_8$-graphs are typically less dense, and ${\bf R}_{13}$-graphs are
typically denser than threshold graphs.
\begin{figure}[ht]
\includegraphics*[width=0.25\textwidth]{fig_4.eps}
\caption{Diagrammatic representation of rules for two-letter sequence nets:
(a)~All of the $2^2$ possible connections between nodes of type $A$ and $B$.
(b)~Three equivalent representations of the threshold rule ${\bf R}_4$. The second and third diagram
are obtained by label permutation and time-reversal, respectively.
(c)~Diagrams for ${\bf R}_8$ and ${\bf R}_{13}$. Note how they complement one another to the full
set of connections in part (a).}
\label{graph_notation}
\end{figure}
The connectivity rules have an additional useful interpretation as directed graphs, where the nodes
represent the letters of the sequence alphabet, a directed link, e.g., from $A$ to $B$ indicates the rule
$A\to B$, and a connection of a type to itself is denoted
by a self-loop (Fig.~\ref{graph_notation}). Because the rules are the same under permutation of types, there is no need to actually
label the nodes: all graph isomorphs represent the same rule. Likewise, time-reversal symmetry means that graphs with inverted arrows are equivalent as well. Note that the direction of self-loops is
irrelevant in this respect, so we simply take them as undirected. We shall make use of this notation, extensively, for the analysis of 3-letter sequence nets in Section~\ref{three-letter}.
\subsection{Alphabetical ordering}
A very special property of sequence nets is the fact that any arbitrary ensemble of such nets possesses a natural ordering, simply listing the nets alphabetically according to their sequences. In contrast, think for example of the ensemble of Erd\H os-R\'enyi random graphs of $N$ nodes, where links are present
with probability $p$: there is no natural way to order the $2^{N(N-1)/2}$ graphs in the ensemble~\cite{ordering}.
Plotting a structural property against the alphabetical ordering of the ensemble reveals some
inner structure of the ensemble itself, yielding new insights into the nature of the nets. As an example,
in Fig.~\ref{eigs_2threshold} we show $\lambda_2$, the second smallest eigenvalue, for the
ensemble of connected threshold nets containing $N=8$ nodes (there are $2^7=128$ graphs in the ensemble, since their sequences must all start with the letter $A$).
Notice the beautiful pattern followed by the eigenvalues plotted in this way, which resembles
a fractal, or a Cayley tree: the values within the first half of the graphs in the $x$-axis repeat in the second half, and the pattern iterates as we zoom further into the picture.
\begin{figure}[ht]
\includegraphics*[width=0.45\textwidth]{fig_5.eps}
\caption{Second smallest eigenvalues of threshold nets with $N=8$ nodes, plotted against their
alphabetical ordering.}
\label{eigs_2threshold}
\end{figure}
\subsection{The new classes of two-letter sequence nets}\label{new_classes}
Structural properties of the new classes of two-letter sequence nets, ${\bf R}_8$ and ${\bf R}_{13}$, are as easily derived as for threshold nets. Here we focus on ${\bf R}_8$ alone, which forms a subset of bipartite graphs. The analysis for ${\bf R}_{13}$
is very similar and often can be trivially obtained from the complementary symmetry of the two classes.
All connected sequence nets in the ${\bf R}_8$ class must begin with the letter $A$ and end with
the letter $B$. A sequence of this sort may be represented more compactly~\cite{hag} by the numbers of $A$'s and $B$'s in
the alternating layers, $(N_{A_1},N_{B_2},\dots,N_{B_n})$. We assume that there are $N$ nodes and
$n$ layers ($n$ is even). We also use the notation $N_A=\sum N_{A_i}$ and $N_B=\sum N_{B_i}$ for the total number of $A$'s
and $B$'s, as well as
\begin{equation}
N_{A_j}^-=\sum_{i<j}N_{A_i}\,;\qquad N_{A_j}^+=\sum_{i\geq j}N_{A_i}\,,
\end{equation}
and likewise for $N_{B_j}^\pm$.
Finally, since all the nodes in a layer have identical properties we denote any $A$ in the $i$-th layer
by $A_i$ and any $B$ in the $j$-th layer by $B_j$. With this notation in mind we proceed to discuss
several structural properties.
\smallskip{\it Degree distribution\/}: Since $A$'s connect only to subsequent $B$'s (and $B$'s
only to preceding $A$'s) the degree $k$ of the nodes is given by
\begin{equation}
k(A_j)=N_{B_j}^+\,; \qquad k(B_j)=N_{A_{j}}^-\,.
\end{equation}
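As a quick consistency check, these degree formulas can be evaluated directly from the compact layer representation introduced above (illustrative code; the layers alternate $A,B,A,B,\dots$):

```python
def r8_degrees(layers):
    """Degrees in an R8 net from its compact form (N_A1, N_B2, ..., N_Bn):
    an A in layer i has degree N_{B_i}^+ (B's in later layers), a B in
    layer j has degree N_{A_j}^- (A's in earlier layers)."""
    n = len(layers)
    degrees = []
    for i, size in enumerate(layers):
        if i % 2 == 0:   # A-layer (layers alternate A, B, A, B, ...)
            k = sum(layers[j] for j in range(i + 1, n) if j % 2 == 1)
        else:            # B-layer
            k = sum(layers[j] for j in range(i) if j % 2 == 0)
        degrees.extend([k] * size)
    return degrees

print(r8_degrees((3, 2, 2, 1)))  # layers of (A,A,A,B,B,A,A,B)
# -> [3, 3, 3, 3, 3, 1, 1, 5]
```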
\smallskip{\it Clustering\/}: There are no triangles in ${\bf R}_8$ nets so the clustering
of all nodes is zero.
\smallskip{\it Distance\/}: Every $A$ is connected to the last $B$, so the distance between any two $A$'s is 2. Every $B$ is connected to the first $A$ in the sequence, so the distance between any two
$B$'s is also 2. The distance between $B_i$ and $A_j$ is 1 if $j<i$ (they connect directly), and 3 if $j>i$
($B_i$ links to $A_1$, that links to $B_n$, that links to $A_j$).
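A breadth-first search on a small example confirms these distances (illustrative sketch; {\tt r8\_graph} is a hypothetical helper implementing the ${\bf R}_8$ rule):

```python
from collections import deque

def r8_graph(sequence):
    """R8 rule: each new B connects to all earlier A's."""
    adj = [set() for _ in sequence]
    for k, t in enumerate(sequence):
        if t == 'B':
            for j in range(k):
                if sequence[j] == 'A':
                    adj[k].add(j)
                    adj[j].add(k)
    return adj

def dist(adj, s, t):
    """Shortest-path distance by breadth-first search."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return d[u]
        for w in adj[u]:
            if w not in d:
                d[w] = d[u] + 1
                q.append(w)
    return None  # unreachable

adj = r8_graph("AAABBAAB")
```

For instance, node 3 (a $B$) and node 5 (an $A$ appearing later) are at distance 3, via the first $A$ and the last $B$.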
\smallskip{\it Betweenness centrality\/}: Because of the time-reversal symmetry between $A$ and $B$, it suffices to analyze $B$ nodes only. The result for $A$ can then be obtained by simply reversing the creation sequence and permuting the letters.
The vertex betweenness $b(v)$ of a node $v$ is defined as:
\begin{equation}
b(v) = \frac{1}{2}\sum_{s\neq t\neq v}{\frac{\sigma_{st}(v)}{\sigma_{st}}}
\end{equation}
where $\sigma_{st}$ is the number of shortest paths from node $s$ to $t$ ($s\neq t$, excluding the cases $s=v$ or $t=v$), and $\sigma_{st}(v)$ is the number of those paths that go through $v$. The factor $\frac{1}{2}$ appears for undirected graphs, since each pair is counted twice in the summation.
The betweenness of $B$'s can be calculated from lower layers to higher layers recursively. In the first B-layer
\begin{equation}
b(B_{2}) = \frac{\frac{1}{2}N_{A_1}(N_{A_1}-1)}{N_B}\,,
\end{equation}
and
\begin{equation}
\begin{split}
& b(B_{j}) = b(B_{j-2}) \\
&\>\>\>\>\>
+ N_{A_{j-1}}\frac{\frac{1}{2}(N_{A_{j-1}}-1)+N_{A_{j-1}}^{-}}{N_{B_j}^+} + N_{A_{j-1}}\frac{N_{B_{j}}^-}{N_{B_j}^{+}}\,,
\end{split}
\end{equation}
for $j>2$. The second term on the rhs accounts for the shortest paths from layer $A_{j-1}$ to itself and all previous layers of $A$, and the third term corresponds to paths from $A_{j-1}$ to $B_j$ to $A_i$ ($i<j-1$) to $B_{j-2}$. Although this recursion can be solved explicitly it is best left in this form, as it thus highlights the fact
that the betweenness centrality increases from one layer to the next. In other words, the networks are {\it modular\/}, where each additional $B$-layer dominates all the layers below.
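For a concrete check, the net of Fig.~\ref{ABgraphs}a has compact form $(3,2,2,1)$, so the recursion gives $b(B_2)=\frac{1}{2}\cdot3\cdot2/3=1$ and $b(B_4)=1+2\,[\frac{1}{2}+3]/1+2\cdot2/1=12$. The brute-force computation below (Brandes' algorithm for unweighted graphs; the helper names are ours) reproduces both values.

```python
from collections import deque

def r8_graph(sequence):
    """R8 rule: each new B connects to all earlier A's."""
    adj = [set() for _ in sequence]
    for k, t in enumerate(sequence):
        if t == 'B':
            for j in range(k):
                if sequence[j] == 'A':
                    adj[k].add(j)
                    adj[j].add(k)
    return adj

def betweenness(adj):
    """Brandes' algorithm for unweighted graphs; each pair counted once."""
    n = len(adj)
    b = [0.0] * n
    for s in range(n):
        dist = [-1] * n
        sigma = [0] * n
        dist[s], sigma[s] = 0, 1
        queue, order = deque([s]), []
        while queue:
            u = queue.popleft()
            order.append(u)
            for w in adj[u]:
                if dist[w] < 0:
                    dist[w] = dist[u] + 1
                    queue.append(w)
                if dist[w] == dist[u] + 1:
                    sigma[w] += sigma[u]
        delta = [0.0] * n
        for w in reversed(order):   # accumulate dependencies
            for u in adj[w]:
                if dist[u] == dist[w] - 1:
                    delta[u] += sigma[u] / sigma[w] * (1 + delta[w])
            if w != s:
                b[w] += delta[w]
    return [x / 2 for x in b]       # undirected: each pair counted twice

b = betweenness(r8_graph("AAABBAAB"))  # nodes 3, 4 are in B_2; node 7 is B_4
```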
\smallskip{\it Laplacian spectrum\/}: Unlike threshold nets, for ${\bf R}_8$ nets the eigenvalues
are {\it not\/} integer, and there seems to be no easy way to compute them. Instead, we focus
on the second smallest and largest eigenvalues, $\lambda_2$ and $\lambda_N$, alone, for
their important dynamical role: the smaller the ratio $r\equiv\lambda_N/\lambda_2$ the more susceptible
the network is to synchronization~\cite{bar}.
Consider first $\lambda_2$. For ${\bf R}_8$ it is easy to show that both the {\it vertex\/} and {\it edge connectivity\/} are equal to $\min(N_{A_1},N_{B_n})$. Then, following an inequality in~\cite{moh},
\begin{equation}
2(1-\cos(\frac{\pi}{N}))\min(N_{A_1},N_{B_n})\leq\lambda_2\leq\min(N_{A_1},N_{B_n})\,.
\end{equation}
The upper bound seems tighter and is a reasonable approximation to $\lambda_2$ (see Fig.~\ref{l2bounds}).
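The connectivity statement itself is easy to confirm exhaustively on small nets (a brute-force sketch, feasible only for small $N$; the helper names are ours):

```python
from itertools import combinations

def r8_graph(sequence):
    """R8 rule: each new B connects to all earlier A's."""
    adj = [set() for _ in sequence]
    for k, t in enumerate(sequence):
        if t == 'B':
            for j in range(k):
                if sequence[j] == 'A':
                    adj[k].add(j)
                    adj[j].add(k)
    return adj

def connected(adj, removed=frozenset()):
    """Is the graph connected after deleting the nodes in `removed`?"""
    nodes = [v for v in range(len(adj)) if v not in removed]
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def vertex_connectivity(adj):
    """Size of the smallest disconnecting vertex cut (brute force)."""
    for k in range(len(adj)):
        for cut in combinations(range(len(adj)), k):
            if not connected(adj, frozenset(cut)):
                return k
    return len(adj) - 1
```

For the sequence $(A,A,A,B,B,A,A,B)$ one finds $\min(N_{A_1},N_{B_n})=\min(3,1)=1$, and for $(A,A,B,A,B,B)$, with layers $(2,1,1,2)$, one finds $\min(2,2)=2$; the brute force agrees in both cases.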
\begin{figure}[ht]
\medskip
\includegraphics*[width=0.4\textwidth]{fig_6.eps}
\caption{Plot of second smallest eigenvalues of all connected $R_{8}$ nets with $N=8$ against their alphabetical ordering (solid curve), and their upper and lower bounds (broken lines).}
\label{l2bounds}
\end{figure}
For $\lambda_N$, using Theorem 2.2 of~\cite{moh} one can derive the bounds
\begin{equation}
\frac{N}{N-1}\max(N_{A},N_{B}) \leq \lambda_{N} \leq N\,,
\end{equation}
but they do not seem very useful, numerically. Playing with various structural properties of
the nets, plotted against their alphabetical ordering, we have stumbled upon the approximation
\begin{equation}
\lambda_{N}\approx N - \left(2\frac{N_{A}\cdot N_{B}}{N}-\left<k\right>\right)\,,
\end{equation}
where $\left<{k}\right>$ is the average degree of the graph, see Fig.~\ref{l2approx}. The approximation is exact for bipartite {\it complete\/}
graphs ($n=2$) and the relative error increases slowly with $N$; it is roughly 10\% for $N=60$.
\begin{figure}[ht]
\medskip
\includegraphics*[width=0.4\textwidth]{fig_7.eps}
\caption{Plot of largest eigenvalue of all connected $R_{8}$ nets with $N=8$ against their alphabetical ordering (solid curve), and its approximated value (broken line).}
\label{l2approx}
\end{figure}
\subsection{Relation to threshold nets}
In~\cite{hag} it was shown that threshold graphs have a mapping to a sequence net, with a unique sequence (under the ``threshold rule'' ${\bf R}_4$); and conversely, for any ${\bf R}_4$-sequence net
there exists a set of weights $x_i$ of the nodes (not necessarily unique), such that connecting any two nodes that satisfy $x_i+x_j>\theta$ reproduces the sequence net. Here we establish a similar relation
between ${\bf R}_8$- (or ${\bf R}_{13}$-) sequence nets and a different kind of threshold net, where
connectivity is decided by the difference $|x_i-x_j|$ rather than the sum of the weights.
We begin with the mapping of a weighted set of nodes to a ${\bf R}_8$-sequence net. Let
a set of $N$ nodes have weights $x_i$ ($i=1,2,\dots,N$), taken from some probability density, and we assume $0< x_i<2\theta$, without
loss of generality. Denote nodes with $x_i<\theta$ as type $A$ and nodes with $x_i>\theta$ as type $B$.
Finally, connect any two nodes $i$ and $j$ that satisfy $|x_i-x_j|>\theta$. The resulting graph can be
constructed by a unique sequence under the rule ${\bf R}_8$, obtained as follows.
For convenience, rewrite the set of weights as
\begin{equation}
0<u_1<u_2\cdots< u_{N_A}<\theta<v_1<\cdots<v_{N_B}<2\theta\,,
\end{equation}
where the first $N_A$ weights correspond to $A$-nodes and the rest to $B$-nodes.
Denote the creation sequence by $(S_1,S_2,\dots,S_N)$ and determine the $S_i$ by the algorithm
(in pseudo-code):
\medskip\noindent
{\tt Set} $i=1$, $j=1$
\noindent
{\tt For} $k=1,2,\dots,N$, {\tt do:}
\noindent\hskip 0.4cm
{\tt If} $|u_i-v_j|>\theta$
\noindent\hskip 0.8cm
{\tt set} $S_k=A$ {\tt and} $i=i+1;$
\noindent\hskip 0.4cm
{\tt Else}
\noindent\hskip 0.8cm
{\tt set} $S_k=B$ {\tt and} $j=j+1.$
\noindent
{\tt End.}
\medskip\noindent
It is understood that if the $u_i$ are exhausted before the end of the loop, the remaining $B$-nodes are
automatically affixed to the end of the sequence (and similarly for the $v_j$).
For example, using this algorithm we find that the ``difference-thre\-shold'' graph resulting from the set of weights
$\{$1,2,3,5,7,16,17,20$\}$ and $\theta=12$, can be reproduced from the sequence
$(A,A,A,B,B,A,A,B)$, with the rule ${\bf R}_8$.
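In runnable form, the pseudo-code reads as follows (a direct transcription; the exhaustion of either list is handled by the guard in the {\tt if} condition):

```python
def weights_to_sequence(weights, theta):
    """Creation sequence (rule R8) of the difference-threshold graph in
    which nodes i and j are linked iff |x_i - x_j| > theta."""
    u = sorted(x for x in weights if x < theta)   # A-node weights
    v = sorted(x for x in weights if x > theta)   # B-node weights
    seq, i, j = [], 0, 0
    for _ in weights:
        # append an A while its smallest unused weight is still more than
        # theta below the smallest unused B weight (or B's are exhausted)
        if i < len(u) and (j == len(v) or v[j] - u[i] > theta):
            seq.append('A')
            i += 1
        else:
            seq.append('B')
            j += 1
    return ''.join(seq)

print(weights_to_sequence([1, 2, 3, 5, 7, 16, 17, 20], theta=12))
# -> AAABBAAB, in agreement with the example above
```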
Consider now the converse problem: given a graph created from the sequence $(S_1,S_2,\dots,S_N)$ with the rule
${\bf R}_8$, we derive a (non-unique) set of weights $\{x_i\}$ such that connecting any two nodes with
$|x_i-x_j|>\theta$ results in the same graph. Rewrite first the creation sequence into its compact form
$(N_{A_{1}},N_{B_{2}},...,N_{B_{n}})$,
and assign weights $l$ for nodes $A$ in layer $l$, weights $n+m$ for nodes $B$ in layer $m$, and
set the threshold at $\theta=n$. For example, the sequence $(A,A,A,B,B,A,A,B)$ has a compact representation $(3,2,2,1)$, with $n=4$ layers, so the three $A$'s in layer $1$ have weights $1$, the two $B$'s in layer $2$ have weights $6$, the two $A$'s in layer $3$ have weights $3$, and the single $B$ in layer $4$ has weight $8$. The weights $\{1,1,1,6,6,3,3,8\}$, with connection threshold $\theta = 4$, reproduce the original graph.
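The assignment is equally short in code, and one can verify directly that the recovered weights reproduce the ${\bf R}_8$ graph (an illustrative sketch):

```python
def layers_to_weights(layers):
    """Weights realizing an R8 net (given in compact layer form) as a
    difference-threshold graph: A's in layer l get weight l, B's in
    layer m get weight n + m, and the threshold is theta = n."""
    n = len(layers)
    weights = []
    for l, size in enumerate(layers, start=1):
        weights += [l if l % 2 == 1 else n + l] * size
    return weights, n

weights, theta = layers_to_weights((3, 2, 2, 1))
# weights == [1, 1, 1, 6, 6, 3, 3, 8], theta == 4, as in the text

# edges of the difference-threshold graph: |x_i - x_j| > theta
edges = {(i, j) for i in range(len(weights)) for j in range(i)
         if abs(weights[i] - weights[j]) > theta}
```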
Sequence graphs obtained from the rule ${\bf R}_{13}$ can be also mapped to difference threshold graphs in
exactly the same way, only that the criterion for connecting two nodes is then $|x_i-x_j|<\theta$, instead of
$|x_i-x_j|>\theta$, as for ${\bf R}_8$. The mapping of sequence nets to generalized threshold graphs may
be helpful in the analysis of some of their properties, for example, for finding the {\it isoperimetric number\/}
of a sequence graph~\cite{moh,isoperimetric}.
\section{Three-Letter Sequence Nets}
\label{three-letter}
\subsection{Classification}
With a three-letter alphabet, $\{A,B,C\}$, there are at the outset $2^{3^2}=512$ possible rules.
Again, these can be reduced considerably, due to symmetry. Because the rule matrix has 9 entries
(an odd number) no rule can be identical to its complement. Thus, we can limit ourselves
to rules with no more than 4 non-zero entries and apply symmetry arguments to reduce their space
--- at the very end we can then add the complements of the remaining rules.
In Fig.~\ref{3nets} we list all possible three-letter rules with two, three, and four interactions. Rules that lead to disconnected graphs, and symmetric rules (by label permutation or time-reversal) have
been omitted from the figure.
\begin{figure}[ht]
\medskip
\includegraphics*[width=0.4\textwidth]{fig_8.eps}
\caption{Rules for three-letter sequence nets:
Shown are rules with (a)~two, (b)~three, and (c)~four interactions. All label permutations and time reversals
are omitted. In addition, rules 2 and 7 degenerate to two-letter rules (identifying $A$ and $C$), and rules 3, 12, 13, and 14 are degenerate
cases of rules 2, 6, 7, and 6, respectively. This leaves us with fifteen distinct three-letter rules (underlined), and their fifteen complements, for a total of 30 different classes of three-letter sequence nets.}
\label{3nets}
\end{figure}
Rule \Re2~\cite{rem3} is in fact not new: identifying nodes of type $A$ and $C$ (as marked
in rule 1 of the figure) we can easily see that the rule is identical to the two-letter rule \Ro8. In the same fashion,
rule \Re7 is the same as the two-letter threshold rule \Ro4.
Rule \Re3 is a degenerate form of \Re2: Because of the double connection $B\to C$ and $C\to B$, the order at which $B$ and $C$ appear in the sequence relative to one another is inconsequential. (On the other hand, the order of the $B$'s relative to $A$'s {\it is\/} important, since $A$'s connect only to those $B$'s that appear earlier in the sequence.) Then, given a sequence one can rearrange it by moving all the $C$'s to the end of the list.
If we now apply \Re2, $A\to B$ and $C\to B$, then we get the same graph as from the original sequence under the rule \Re3. The same consideration applies to rules \Re{12}, \Re{13} and \Re{14}, which are degenerate forms of \Re6, \Re7 and \Re8 (or \Re6), respectively. We are thus left with only 15 distinct rules with fewer than 5 connections. To
these one should add their complements, for a total of 30 distinct three-letter rules.
Note the resemblance of \Re{9}, \Re{18}, and \Re{20} to two-letter threshold nets. \Re{18} seems like a particularly symmetrical generalization and we will focus on it in much of our discussion below.
\subsection{Connectedness}\label{connect}
While one can easily establish
whether a graph is connected or not, {\it a posteriori\/}, with a burning algorithm that requires ${\cal O}(N)$ steps,
it is useful to have shortcut rules that tell us how to avoid bad sequences at the outset: knowing that two-letter threshold graphs are connected if and only if their sequence ends with
$B$, deals with the question most effectively. Analogous criteria exist for three-letter sequence graphs
but they are a bit more complicated.
For example, three-letter sequences interpreted with \Re{18} lead to connected graphs if and only if they satisfy: {\it (1)~The first {\rm A} and the first {\rm C} in the sequence appear before the last {\rm B}. (2)~The sequence does not start with {\rm B}}. (We assume that the sequence contains all three letters.) For \Re1 the requirements are:
{\it
(1)~The first {\rm A} in the sequence must appear after the first {\rm B}.
(2)~The last {\rm C} in the sequence must appear before the last {\rm B}.
(3)~The last {\rm A} in the sequence must appear after the first {\rm C}, and there ought to be at least one {\rm B} between the two.} Similar criteria exist for all other three-letter rules and can be found by inspection.
\subsection{Structural properties}
Structural properties of three-letter sequence nets are analyzed as easily as those of two-letter nets. Here we
list, as an example, a few basic attributes of \Re{18} sequence nets. We use a notation similar to that of Section~\ref{new_classes}.
\smallskip{\it Degree distribution\/}:
$A$ and $C$ nodes form complete subgraphs, while $B$ nodes connect to all preceding $A$'s and $C$'s. Thus the degrees of the nodes are:
\begin{equation}
\begin{split}
&k(A_i) = N_{A} - 1 + N_{B_i}^{+}\,, \\
&k(B_i) = N_{A_i}^{-} + N_{C_i}^{-}\,, \\
&k(C_i) = N_{C} - 1 + N_{B_i}^{+}\,.
\end{split}
\end{equation}
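These formulas are easy to check by growing a small \Re{18} net. The rule encoding used below is inferred from the verbal description above ($A\to A$, $C\to C$, $B\to A$, $B\to C$; the diagram itself appears in Fig.~\ref{3nets}), so treat it as an assumption of this sketch.

```python
def r18_graph(sequence):
    """Grow an R18 net: new A's join all earlier A's, new C's all earlier
    C's, and new B's all earlier A's and C's (encoding inferred from the
    degree formulas above)."""
    rule = {('A', 'A'), ('C', 'C'), ('B', 'A'), ('B', 'C')}
    adj = [set() for _ in sequence]
    for k, t in enumerate(sequence):
        for j in range(k):
            if (t, sequence[j]) in rule:
                adj[k].add(j)
                adj[j].add(k)
    return adj

adj = r18_graph("ACBBCAB")
degrees = [len(a) for a in adj]  # [4, 4, 2, 2, 2, 2, 4]
```

For this sequence ($N_A=N_C=2$), the first $A$ has degree $N_A-1+N_{B_1}^+=1+3=4$ and the first two $B$'s have degree $N_A^-+N_C^-=1+1=2$, matching the direct construction.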
\smallskip{\it Distance\/}:
Since the $A$ nodes form a complete subgraph, $d(A_i,A_j)=1$, and likewise for $C$, $d(C_i,C_j)=1$.
The $B$'s do not connect among themselves, but they all connect to the nodes in the first layer (which does
not consist of $B$'s), so $d(B_i,B_j)=2$.
For the distance of $A$ nodes from $B$, we have
\begin{equation}
d(A_i,B_j) =
\begin{cases}
1 & i<j\,,\\
2 & i>j, \,a_1<j\,, \\
3 & i>j, \,a_1>j,\, i<b_n\,,\\
4 & i>j,\, a_1>j,\, i>b_n\,,
\end{cases}
\end{equation}
where $a_1$ is the index of the first $A$-layer and $b_n$ is the index of the last $B$-layer.
The first line follows since $B$'s are directly connected to preceding $A$'s and $C$'s. The second line is illustrated in Fig.~\ref{distance}a, and the third and fourth lines in Fig.~\ref{distance}b. The distance $d(C_i,B_j)$ follows the very same pattern. Finally, inspecting all different cases one finds
\begin{equation}
d(A_i,C_j) =
\begin{cases}
2 & i,j<b_n\,,\\
3 & i<b_n<j, {\rm\ or\ }j<b_n<i\,, \\
4 & i,j>b_n\,. \end{cases}
\end{equation}
\begin{figure}[ht]
\medskip
\includegraphics*[width=0.4\textwidth]{fig_9.eps}
\caption{The distance $d(A_i,B_j)$ in \Re{18} nets. (a)~If $i>j$ and the first $A$ is below $B_j$ the distance is 2. (b)~If the first $A$ is above $B_j$, then the first $C$ must be below ($B$ can't start the sequence); in that case if $A_i$ is below the last $B$ the distance is 3, and otherwise the distance is 4. Only the relevant parts of the complete net are shown.}
\label{distance}
\end{figure}
\smallskip{\it Eigenvalues\/}:
We have found no obvious way to compute the eigenvalues, despite the similarities between
\Re{18} nets and two-letter threshold nets. However, plots of the eigenvalues against the alphabetical
ordering of the nets once again reveal intriguing fractal patterns, and one can hope that these might be exploited, at the very least, to produce good bounds and approximations. In Fig.~\ref{r_R18} we plot
the ratio $r=\lambda_N/\lambda_2$ for \Re{18} nets with $N=7$ against their alphabetical ordering.
The $x$-axis includes sequences of nets that are not connected: in this case $\lambda_2=0$ and synchronization is not possible. These cases show up as gaps in the plot; for example, the big gap in the center
corresponds to disconnected sequences that start with the letter $B$ (see Section~\ref{connect}).
\begin{figure}[ht]
\medskip
\includegraphics*[width=0.4\textwidth]{fig_10.eps}
\caption{The ratio $\lambda_N/\lambda_2$ for \Re{18} nets consisting of $N=7$ nodes, against their alphabetical ordering. Note
the gap near the center, which corresponds to sequences of disconnected graphs. Note also the mirror symmetry --- this is due to the mirror symmetry of the rule \Re{18} itself.}
\label{r_R18}
\end{figure}
\subsection{Multi-threshold nets}
Some of the three-letter sequence nets can be mapped to generalized forms of threshold nets.
For example, the following scheme yields a {\it two\/}-threshold net, equivalent to three-letter sequence nets
generated by the rule \Re{20}. Let the nodes be assigned weights $0<x_i<3\theta/2$, from a random distribution, and connect any two nodes $i$ and $j$ that satisfy $x_i+x_j<\theta\equiv\theta_1$ or
$x_i+x_j>2\theta\equiv\theta_2$. Identifying nodes with weight $0<x_i<\theta/2$ with $A$, nodes with
$\theta/2<x_i<\theta$ with $B$, and nodes with $\theta<x_i<3\theta/2$ with $C$, we see that all $A$'s
connect to one another and all $C$'s connect to one another but the $B$'s do not, and $A$'s and $C$'s do not connect; nodes of type $A$ and $B$ may or may not connect, and likewise for nodes of type $C$ and $B$.
To reflect the actual connections, the nodes of type $A$ and $B$ may be arranged in a sequence according
to the algorithm in~\cite{hag}, for the threshold rule \Ro{5}. Also the nodes of type $C$ and $B$ may be
arranged in a sequence, to reflect the actual connections, with the very same algorithm. Because there
are no connections between $A$ and $C$ the two results may be trivially merged. Note, however, that
once the $A$-$B$ sequence is established the order of the $B$'s is set, so the direction of connections between $C$ and $B$ ($C\to B$ or
$B\to C$) is {\it not\/} arbitrary. In our example, the mapping is possible to \Re{20} but not to \Re{18}.
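A quick numerical sketch of the two-threshold construction (random weights; the type boundaries and the connection test follow the text):

```python
import random

def two_threshold_graph(x, theta):
    """Link i and j iff x_i + x_j < theta or x_i + x_j > 2*theta."""
    adj = [set() for _ in x]
    for i in range(len(x)):
        for j in range(i):
            s = x[i] + x[j]
            if s < theta or s > 2 * theta:
                adj[i].add(j)
                adj[j].add(i)
    return adj

random.seed(1)
theta = 2.0
x = [random.uniform(0.0, 3 * theta / 2) for _ in range(40)]
# node types by weight: A in (0, theta/2), B in (theta/2, theta),
# C in (theta, 3*theta/2)
typ = ['A' if w < theta / 2 else 'B' if w < theta else 'C' for w in x]
adj = two_threshold_graph(x, theta)
```

One checks that every $A$--$A$ and every $C$--$C$ pair is linked, while $B$--$B$ and $A$--$C$ pairs never are, in line with the identification above; $A$--$B$ and $C$--$B$ pairs may or may not be linked.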
\section{Summary and Discussion}
\label{conclude}
We have introduced a new class of nets, sequence nets, obtained from a sequence of letters and fixed rules of connectivity. Two-letter sequence nets contain threshold nets, and in addition two newly discovered classes.
The \Ro{13} class can be mapped to a ``difference-threshold'' net, where nodes $i$ and $j$ are connected
if their weight difference satisfies $|x_i-x_j|<\theta$. This type of net may be a particularly good model for
social nets, where the weights might measure political leaning, economical status, number of offspring, etc., and
agents tend to associate when they are closer in these measures. We have shown that the structural properties of
the new classes of two-letter sequence nets can be analyzed with ease, and we have introduced an ordering in ensembles of sequence nets
that is useful in visualizing and studying their various attributes.
We have fully classified 3-letter sequence nets, and looked at a few examples, showing that they too can be analyzed simply.
The diameter of sequence nets grows linearly with the number of letters in the alphabet and for a 3-letter alphabet
it is already 3 or 4, comparable to many everyday life complex nets. Realistic diameters might be achieved with a modest expansion of the alphabet.
There remain numerous open questions: Applying symmetry arguments we have managed to reduce the class of 3-letter nets to just 30 types, but we have not ruled out the possibility that some overlooked symmetry might reduce the list further; The question of which sequences lead to connected nets can be studied by
inspection for small alphabets, but we have no comprehensive approach to solve the problem in general; We have shown how to map sequence nets to generalized types of threshold nets, in some cases --- Is such a mapping always possible? Is there a systematic way to find such mappings for any sequence rule?; What kinds of nets would
result if the connectivity rules applied only to the $q$ preceding letters, instead of to {\it all\/} preceding letters? etc.
We hope to tackle some of these questions in future work.
\acknowledgments Partial funding from the NSF (DbA) and ARO (JS) is gratefully acknowledged.
\section{Introduction}
In constraint satisfaction problems we ask for the probability that a random expression, built on a finite set of Boolean variables according to some rules ($k$-Sat, $k$-Xor-Sat, NAE, \dots), is (un)satisfiable.
The behaviour of this probability, when the number $n$ of Boolean variables and the length $m$ of the expression (usually defined as the number of clauses) tend to infinity, has given rise to numerous studies, most of them concentrating on the existence and location of a threshold from satisfiability to unsatisfiability as the ratio $m/n$ grows.
The literature in this direction is vast; for Xor-functions see e.g. \cite{CreignouDaude99,CrDa03,CDD03,CrDa04,CREIGNOU-DAUDE-EGLY}.
Defining a probability distribution on Boolean functions through a distribution on Boolean expressions is \emph{a priori} a different question.
Quantitative logic aims at answering such a question, and many results have been obtained when the
Boolean expression, or equivalently the random tree that models it, is a variation of well-known
combinatorial or probabilistic tree models such as Galton-Watson and P\'olya trees, binary search
trees, etc (\cite{LS97, CFGG04, BrPi05, Wo05, zaionc2005, FGGZ07, GKZ08, Ko08, CGM11, GGKM11,
FGGG12, GGKM12}).
So we have two frameworks: On the one hand we try to determine the probability that an expression
is satisfiable; on the other hand we try to identify probability distributions on the set of
Boolean functions.
It is only natural that we should wish to merge these two approaches: We set satisfiability
problems into the framework of quantitative logic (this only requires choosing a suitable model of
expressions) and ask for the probability of $\textsc{False}$ -- this is the classical satisfiability
problem -- \emph{and} for the probabilities of the other Boolean functions as well.
This amounts to refining the satisfiable case and taking all the functions different from $\textsc{False}$
also into account. The set of Boolean expressions is then partitioned into subsets according to the
(class of) Boolean function(s) that is computed.
Within this unified framework one could, e.g., ask for the probability that a random expression
computes a function that is satisfied by a specific number of assignments.
Although this may turn out to be out of our reach for most classical satisfiability problems,
there are some problems for which we may still hope to obtain a (partial) description of the
probability distribution on the set of Boolean functions.
The case of 2-Xor expressions is such a problem, and this paper is devoted to presenting our
results in this domain.
Consider random 2-Xor-Sat instances with a large number~$n$ of variables, and~$m$ of clauses.
Creignou and Daud\'e established that their limit probability of satisfiability
goes from positive values to zero when the ratio~$m/n$ crosses~$1/2$ (see \cite{CreignouDaude99}).
They then proved that this threshold is coarse (\emph{cf.} \cite{CrDa04}).
Further work by Daud\'e and Ravelomanana \cite{DaudeRavelomanana} and by Pittel and Yeum
\cite{PittelYeum} led to a precise understanding of the transition in a window of size $n^{-1/3}$ around~$1/2$.
The paper is organized as follows.
We present in the next section 2-Xor expressions and the set of Boolean functions that they can
represent. Then we model these expressions in terms of multigraphs, before considering in Section~\ref{sec:probas} how enumeration results on classes of multigraphs allow us to compute probabilities of Boolean functions.
We then give explicit results for several classes of functions in Section~\ref{sec:results}, and conclude in Section~5 with a discussion of the relevance of our work and of possible extensions.
A preliminary version of our work was presented at the conference Latin'14~\cite{dePGGK14}.
\section{Boolean Expressions and Functions and their Relations to Multigraphs}
\subsection{2-Xor Expressions and Boolean Functions}
\label{sec:model-expressions}
In this section we will lay out the framework of Boolean expressions which we will investigate. If
$x$ is a Boolean variable, we will denote by $\bar x$ its negation.
\begin{definition}
Let $\{ x_1, x_2, \ldots, x_n \}$ be a set of Boolean variables. A 2-Xor expression is a finite
conjunction of clauses $l \oplus l'$, where $l$ and $l'$ are literals, i.e. they are elements of
$\{x_1, x_2, \ldots, x_n, \bar x_1, \bar x_2,\dots, \bar x_n\}$.
The clauses as well as the literals within each clause are ordered (meaning, for instance, that the
clauses $x\oplus y$ and $y \oplus x$ are distinct).
From a combinatorial point of view, an expression can be regarded as a
\emph{sequence} of clauses where each clause is a pair of two literals.
Neither the literals of a clause nor the clauses themselves need to be distinct.
The set of all such expressions is denoted by $\mathcal E_n$.
\end{definition}
We say that a 2-Xor expression \emph{defines}, or \emph{computes},
the corresponding Boolean function.
We shall denote the number of clauses of an expression by $m$.
Now each 2-Xor expression defines a Boolean function on a finite number of variables, but not all
Boolean functions on a finite number of variables can be represented by a 2-Xor expression.
We define~$\mathcal X$ as the set of functions from $\{ 0,1 \}^{\mathbb N}$ to $\{ 0, 1 \}$ which have at
least one representation by a 2-Xor expression in $\bigcup_{n\ge 1} \mathcal E_n$.
We also define, for each $n \geq 1$, the set $\mathcal X_n$ of functions in $\mathcal X$ such that there exists
an expression in $\mathcal E_n$ representing the function.
This implies that $\mathcal X_{n_1} \subset \mathcal X_{n_2}$ for $n_1 \leq n_2$, and that $\mathcal X = \cup_{n \geq 1} \mathcal X_n$.\footnote{
For the sake of brevity, in the sequel ``(the set of) Boolean functions'' is to be understood as
either the set $\mathcal X_n$ or the set $ \mathcal X$, according to the context.}
Consider now the expressions in $\mathcal E_n$. There are $4 n^2$ distinct clauses.
We assume that the $m$ clauses are drawn with a uniform probability (and hence with replacement).
This framework allows us to define, for each~$m$, a probability distribution on the set~$\mathcal X_n$:
\begin{definition}
Let $E_{m,n} = (4 n^2)^m$ be the total number of expressions with $m$ clauses on the variables
$x_1$, \ldots, $x_n$, and $E_{m,n}(f)$ denote the number of these expressions that compute~$f$.
Then, for a Boolean function $f \in \mathcal X_n$ we set
$
\mathds{P} (f) = \frac{E_{m,n}(f)}{E_{m,n}}.
$
\end{definition}
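To make this definition concrete, one can enumerate all $(4n^2)^m$ expressions for tiny $n$ and $m$ and tally the functions they compute. The sketch below is our illustration, not part of the paper: a literal is encoded as a pair (variable index, sign bit), and a function is identified with its truth table.

```python
from itertools import product
from collections import Counter

def truth_table(expr, n):
    """Truth table of a conjunction of Xor clauses; a clause is
    ((i, s), (j, t)) and the corresponding literal is x_i xor s."""
    table = []
    for a in product((0, 1), repeat=n):
        table.append(int(all((a[i] ^ s ^ a[j] ^ t) == 1
                             for (i, s), (j, t) in expr)))
    return tuple(table)

def distribution(n, m):
    """E_{m,n}(f) for every function f computed by some expression,
    keyed by truth table; the counts sum to (4 n^2)^m."""
    literals = [(i, s) for i in range(n) for s in (0, 1)]
    clauses = list(product(literals, repeat=2))   # the 4 n^2 ordered clauses
    counts = Counter()
    for expr in product(clauses, repeat=m):       # all (4 n^2)^m expressions
        counts[truth_table(expr, n)] += 1
    return counts
```

For $n=m=1$ the four clauses split evenly: $x \oplus x$ and $\bar{x} \oplus \bar{x}$ compute $\textsc{False}$, while $x \oplus \bar{x}$ and $\bar{x} \oplus x$ compute $\textsc{True}$, so $\mathds{P}(\textsc{False}) = \mathds{P}(\textsc{True}) = 1/2$.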
\subsection{The Sets $\mathcal X_n$}
Rewriting a clause $l_1 \oplus l_2$ as $l_1 \sim \bar l_2$ or $\bar l_1 \sim l_2$ (i.e., the literals $l_1$ and $l_2$
must take opposite values for the clause to evaluate to $\textsc{True}$),
and merging the clauses sharing a common variable, we see that the functions we
obtain can be written as a conjunction of equivalence relations on literals:
\footnote{Note that
the relation $\sim$ corresponds to an equivalence relation on the set of variables and therefore
induces a partition on the set of variables. But as to the presence of negations, the formal
structure is in fact a little bit richer than only a set with an equivalence relation.}
$$
(l_{1} \sim \cdots \sim l_{p_1}) \wedge (l_{p_1+1} \sim \cdots \sim l_{p_2}) \wedge \cdots
\wedge (l_{p_{r-1}+1} \sim \cdots \sim l_{p_r}).
$$
E.g., for $n=7$ the expression
$(x_1 \oplus x_3) \wedge (\bar x_6 \oplus x_5) \wedge (x_7 \oplus \bar{x}_7) \wedge (x_2 \oplus \bar{x}_3)$
computes a Boolean function $f$ that we can write as
$(x_1 \sim \bar x_3) \wedge (x_6 \sim x_5) \wedge (x_7 \sim x_7) \wedge (\bar x_2 \sim \bar x_3)$ ,
or equivalently as
$(x_1 \sim \bar x_2 \sim \bar x_3) \wedge (x_5 \sim x_6)$;
furthermore this function partitions the set of Boolean variables
$\{ x_1, \ldots , x_7 \}$ into the subsets
$\{x_1, x_2, x_3\}$, $\{ x_4 \}$, $\{x_5,x_6 \}$ and $\{ x_7 \}$.
If a clause inducing $l \sim \bar{l}$ appears, then the expression simply computes $\textsc{False}$. In
other words:
\begin{proposition} \label{th:block_representation}
For any $n \geq 1$, the set $\mathcal X_n$ of Boolean functions on $n$ variables, such that there exists
at least one 2-Xor expression in $\mathcal E_n$ that computes the function, comprises exactly the
function $\textsc{False}$ and those functions $f$ that are specified as follows:
Fix a set $Y=\{y_1,y_2,\dots,y_n\}$ such that $y_i=x_i$ or $y_i=\bar x_i$, for all $i=1,\dots,n$, and
partition the set $Y$ into subsets. Then $f$ attains the value $\textsc{True}$ if and only if for each
block of the partition all the literals have the same value.
A variable which appears in no clause of an expression computing the function,
or only as $l \sim l$, is put into a singleton.
\end{proposition}
\begin{proof}
Given a set of literals~$p = \{l_1, \ldots, l_s\}$,
let~$\bar p$ denote the set where each literal is switched
\[
\bar p = \{ \bar l_1, \ldots, \bar l_s \}.
\]
Let us first observe that if a satisfiable expression is specified,
in the sense of the proposition, by the partition
\[
Y = p_1 \uplus p_2 \uplus \cdots \uplus p_t,
\]
where each variable appears in exactly one literal of~$Y$,
then it is also specified by the partition
where any number of~$p_i$ is replaced by~$\bar p_i$.
We prove the proposition by induction on the number of clauses~$m$.
For~$m=0$, the Boolean function computed is $\textsc{True}$,
and is specified by the partition
\[
\{\{x_1\}, \{x_2\}, \ldots, \{x_n\}\}
\]
of~$Y = \{x_1, \ldots, x_n\}$.
Let us assume that the proposition is proven for a given~$m$,
and consider a 2-Xor expression with~$m+1$ clauses
\[
E = \tilde{E} \wedge (l_1 \oplus l_2),
\]
where $\tilde{E}$ is a 2-Xor expression with~$m$ clauses.
If $\tilde{E}$ computes the Boolean function $\textsc{False}$,
then~$E$ also computes $\textsc{False}$ and the proposition holds.
Otherwise, let
\[
Y = p_1 \uplus p_2 \uplus \cdots \uplus p_t
\]
denote the partition obtained by application of the proposition to the expression~$\tilde{E}$.
The last clause of~$E$ is $(l_1 \oplus l_2)$,
which is equivalent with $l_1 \sim \bar l_2$
and is satisfied if and only if $l_1$ and $\bar l_2$
are assigned the same Boolean value.
Without loss of generality, we can assume
that~$l_1$ belongs to~$Y$:
otherwise, we replace the set of the partition
that contains~$\bar l_1$ with its flipped copy.
Let~$p_i$ denote the set of the partition that contains~$l_1$.
\begin{itemize}
\item
If~$l_2$ also belongs to~$p_i$ then, according to the proposition,
$\tilde E$ is satisfied only if~$l_1$ and~$l_2$ take the same Boolean value,
so the clause~$(l_1 \oplus l_2)$ cannot be satisfied.
Therefore, $E$ is not satisfiable, so it computes the Boolean function $\textsc{False}$.
\item
If~$\bar l_2$ belongs to~$p_i$, then the clause $l_1 \oplus l_2$
is satisfied by any assignment satisfying~$\tilde{E}$,
so~$E$ is satisfiable, and the partition built by the proposition for~$E$ is $Y = p_1 \uplus p_2 \uplus \cdots \uplus p_t$.
\item
Otherwise, there is a set~$p_j$ of the partition, distinct from~$p_i$, that contains either~$l_2$ or~$\bar l_2$.
Without loss of generality, we can assume that~$p_j$ contains~$\bar l_2$.
Otherwise, we simply replace~$p_j$ with $\bar p_j$.
Then $E$ is satisfiable.
The corresponding partition is obtained from~$(p_1, \ldots, p_t)$
by replacing the sets~$p_i$ and~$p_j$ with $p_i \cup p_j$. \qedhere
\end{itemize}
\end{proof}
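The induction above is effectively an incremental algorithm: a union-find structure augmented with one parity bit per variable processes the clauses one by one and either detects $\textsc{False}$ or maintains the block partition. The following sketch is our illustration (the encoding of a literal as an (index, sign) pair is our convention, not the paper's):

```python
from collections import defaultdict

def solve_2xor(n, clauses):
    """Union-find with a parity bit per variable.  A clause ((i, s), (j, t)),
    with the literal x_i xor s, forces x_i xor x_j = 1 ^ s ^ t.
    Returns None when the expression computes FALSE, otherwise the blocks
    as sets of (variable, polarity-relative-to-the-block-root) pairs."""
    parent = list(range(n))
    parity = [0] * n                    # parity[v]: x_v xor x_{parent[v]}

    def find(v):
        if parent[v] == v:
            return v, 0
        r, p = find(parent[v])          # path compression, fixing parities
        parent[v] = r
        parity[v] ^= p
        return r, parity[v]

    for (i, s), (j, t) in clauses:
        rel = 1 ^ s ^ t                 # required value of x_i xor x_j
        (ri, pi), (rj, pj) = find(i), find(j)
        if ri == rj:
            if pi ^ pj != rel:          # some l ~ bar-l is forced
                return None
        else:
            parent[rj] = ri
            parity[rj] = pi ^ pj ^ rel

    blocks = defaultdict(set)
    for v in range(n):
        r, p = find(v)
        blocks[r].add((v, p))
    return list(blocks.values())
```

On the running example (variables $x_1,\dots,x_7$ encoded as indices $0,\dots,6$), the routine recovers the block sizes $3, 2, 1, 1$; on the single clause $x \oplus x$ it returns $\textsc{False}$.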
We now define an equivalence relation on $\mathcal X_n$.
\begin{definition}
Two Boolean functions $f$ and $g$ on $n$ variables are \emph{equivalent}, denoted as $f\equiv g$,
if $g$ can be obtained from $f$ by permuting the variables and flipping some of the literals.
We denote by ${\cal C}(f)$ the equivalence class of a function~$f$.
\end{definition}
For example, for $n=7$ the function $f$ we have defined before is equivalent to the function
$g= (x_3 \sim x_5 \sim x_2) \wedge (x_1 \sim \bar{x_6})$.
It is easy to check that all the Boolean functions in ${\cal C}(f)$ have the same probability~$\mathds{P}(f)$.
\begin{definition}
Let $f \in \mathcal X$; we say that a Boolean variable $x$ is an \emph{essential} variable of~$f$ if and
only if $f|_{x=1} \neq f|_{x=0}$.
We set $e(f)$ as the number of the essential variables of~$f$.
\end{definition}
\begin{remark}
Although writing the constant functions $\textsc{True}$ and $\textsc{False}$ as 2-Xor expressions requires the use
of (at least) one variable, these two functions have no essential variable: $e(\textsc{True})=e(\textsc{False})=0$.
\end{remark}
Note that $g \not\in \mathcal X_{e(f)-1}$ for all $g$ with $f\equiv g$. But there exists a function $g$
with $f\equiv g$ such that $g \in \mathcal X_{e(f)}$.
In our running example, $e(f) = 5$
and the essential variables are $x_1$, $x_2$, $x_3$, $x_5$ and $x_6$,
so we can take, \textit{e.g.}, $g= (x_1 \sim x_2 \sim x_3) \wedge (x_4 \sim \bar{x}_5) \in \mathcal X_5$.
Again, with the exception of $\textsc{False}$ that forms a class by itself, the classes we have thus defined on $\mathcal X_n$ are in bijection with the partitions of the integer~$n$; in our example the class of the function $f$ partitions the integer~7 as $1+1+2+3$.
\begin{notation}
Let~$\mathcal{P}(n)$ denote the set of partitions of the integer~$n$.
For any $\mathbf i= (i_\ell)_{\ell \geq 1}$ in~$\mathcal{P}(n)$,
$i_{\ell}$ is the number of parts of size $\ell$.
Hence the size of $\mathbf i$ is
$s(\mathbf i) := \sum_\ell \ell \, i_\ell=n$,
and the total number of parts (or \emph{blocks}) is
$\xi(\mathbf i) := \sum_\ell i_\ell$.
A partition of the type $(0, \ldots, 0,1,0,\ldots)$ with the single $1$ in position~$n$ is denoted by $\mathbf i_{\bf max(n)}$.
\end{notation}
We can now express a bijection between
classes of Boolean functions and integer partitions.
\begin{proposition}
Given an integer partition~$\mathbf i$ of~$n$,
let~${\cal C}_{\mathbf i}$ denote the set of Boolean functions
from~$\mathcal X_n \setminus \{\textsc{False}\}$
with~$i_{\ell}$ blocks of size~$\ell$ for all~$\ell \geq 1$.
Then~$\{{\cal C}_{\mathbf i}\}_{\mathbf i \in \mathcal{P}(n)}$
is in bijection with the quotient of the set~$\mathcal X_n \setminus \{\textsc{False}\}$
by the equivalence relation ``$\equiv$''.
We write $\mathbf i(f)$ for the integer partition associated to a Boolean function~$f$, and we extend the notation for the equivalence class into ${\cal C}_\mathbf i = {\cal C} (f)$ when $\mathbf i = \mathbf i(f)$.
\end{proposition}
\begin{proof}
Given a Boolean function~$f$ in~$\mathcal X_n \setminus \{\textsc{False}\}$,
${\cal C}(f)$ denotes the class of~$f$ for the equivalence relation~``$\equiv$''.
Therefore, the set of distinct classes~${\cal C}(f)$
is in bijection with~$(\mathcal X_n \setminus \{\textsc{False}\})/\equiv$.
Let~$\mathbf i$ denote the integer partition matching the block composition of~$f$.
The demonstration of the proposition is over once we have proven
${\cal C}_{\mathbf i} = {\cal C}(f)$.
Let us write the block representation of~$f$,
defined in Proposition~\ref{th:block_representation}, as
\begin{align*}
\{
& \{l_{1,1}\}, \{l_{1,2}\}, \ldots, \{l_{1,i_1}\},\\
& \{l_{2,1}, l_{2,2}\}, \{l_{2,3}, l_{2,4}\}, \ldots, \{l_{2,2 i_2-1}, l_{2,2 i_2}\},\\
& \hspace{2.1cm}\vdots\\
& \{l_{t,1}, \ldots, l_{t,t}\}, \ldots, \{l_{t,t i_t - (t-1)}, \ldots, l_{t,t i_t}\}, \ldots \},
\end{align*}
where all~$l_{i,j}$ are literals corresponding to distinct variables.
Let~$g$ be a Boolean function in ${\cal C}_{\mathbf i}$,
with block representation
\begin{align*}
\{
& \{\tilde{l}_{1,1}\}, \{\tilde{l}_{1,2}\}, \ldots, \{\tilde{l}_{1,i_1}\},\\
& \{\tilde{l}_{2,1}, \tilde{l}_{2,2}\}, \{\tilde{l}_{2,3}, \tilde{l}_{2,4}\}, \ldots, \{\tilde{l}_{2,2 i_2-1}, \tilde{l}_{2,2 i_2}\},\\
& \hspace{2.1cm}\vdots\\
& \{\tilde{l}_{t,1}, \ldots, \tilde{l}_{t,t}\}, \ldots, \{\tilde{l}_{t,t i_t - (t-1)}, \ldots, \tilde{l}_{t,t i_t}\}, \ldots \}.
\end{align*}
By flipping some of the literals and permuting the variables,
the block representation of~$f$ can be sent to the block representation of~$g$,
so~$f \equiv g$ and~${\cal C}_{\mathbf i}$ is a subset of~${\cal C}(f)$.
Reciprocally, let~$h$ denote a Boolean function in~${\cal C}(f)$.
By definition, a block representation of~$h$ can be obtained
from the block representation of~$f$ by flipping some literals
and permuting the variables.
Therefore, the block representation of~$h$
corresponds to the same integer partition~$\mathbf i$ as~$f$,
so~$h$ belongs to~${\cal C}_{\mathbf i}$ and
${\cal C}(f)$ is a subset of~${\cal C}_{\mathbf i}$.
Since we have both~${\cal C}_{\mathbf i} \subset {\cal C}(f)$
and~${\cal C}(f) \subset {\cal C}_{\mathbf i}$, we conclude
that those two sets are equal.
\end{proof}
Our running example corresponds to the integer partition~$(n-5,1,1,0,\ldots,0)$ on $n \geq 5$ variables, which
has $n-3$ parts. The set partition it induces on the set of Boolean variables may be taken, for example, equal to $\{ x_1, x_2\}, \{ x_3, x_4, x_5 \}$, with the remaining $n-5$ variables in singleton blocks. The
function $\textsc{True}$ corresponds to the integer partition~$(n, 0, \ldots, 0)$
and is computed by the expressions that have only clauses of the form $l \oplus \bar{l}$.
\begin{proposition}
\label{prop:classes}
\begin{itemize}
\item[i)]
Set $p(n)$ as the number of integer partitions of $n$. Then the number of equivalence classes of computable Boolean functions is $p(n)+1$.
\item[ii)]
The class $C_\mathbf i$ associated to an integer partition $\mathbf i=(i_\ell)$ has cardinality
\begin{equation} \label{cardinality}
|C_\mathbf i| = \frac{2^{n-\xi(\mathbf i)} \, n!}{\prod_{\ell \geq 1} i_\ell ! (\ell!)^{i_\ell}} .
\end{equation}
\end{itemize}
\end{proposition}
\begin{remark}
As an aside, we mention that, as $n \to +\infty$ (see~\cite[p.~578]{FlajoletSedgewick}),
\[
p(n) \sim \frac{1}{4n\sqrt{3}} \exp\left(\pi \sqrt{2n/3}\right).
\]
\end{remark}
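For readers who want exact values of $p(n)$ to set against this estimate, Euler's pentagonal number recurrence computes them quickly. The snippet below is an aside of ours, not part of the paper; it checks the quality of the Hardy--Ramanujan approximation at $n = 100$.

```python
import math

def partition_numbers(N):
    """p(0..N) via Euler's pentagonal number theorem:
    p(n) = sum_{k>=1} (-1)^{k+1} [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)]."""
    p = [1] + [0] * N
    for n in range(1, N + 1):
        k, sign = 1, 1
        while True:
            g1 = k * (3 * k - 1) // 2      # pentagonal numbers
            if g1 > n:
                break
            p[n] += sign * p[n - g1]
            g2 = k * (3 * k + 1) // 2
            if g2 <= n:
                p[n] += sign * p[n - g2]
            k, sign = k + 1, -sign
    return p

p = partition_numbers(100)
# Hardy-Ramanujan estimate at n = 100
estimate = math.exp(math.pi * math.sqrt(2 * 100 / 3)) / (4 * 100 * math.sqrt(3))
ratio = p[100] / estimate
```

At $n=100$ the estimate is already within about five percent of the exact value $p(100)$.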
\begin{proof}
The number of classes comes from the bijection between classes, with the exception of the one with
$\textsc{False}$, and integer partitions, hence we get~i).
To prove ii), note that the number of partitions of the set of the $n$ Boolean variables that
lead to~$\mathbf{i}$ is
\[
\frac{n!}{ \prod_{l=1}^n (l!)^{i_l} i_l! },
\]
\emph{cf.} \cite[p.~205, Theorem~B]{Co74} or \cite[Theorem 13.2]{An76}.
Now observe that there are two possible polarities for each variable and hence $2^n$ choices. But
in this way, each block of variables is counted twice, e.g. $x_1 \sim \bar{x_2} \sim x_3$ defines
the same function as $\bar{x_1} \sim x_2 \sim \bar{x_3}$. Hence we have to divide by 2 for each
block and therefore
the cardinality of the equivalence class $C_\mathbf i$ is given by \eqref{cardinality}.
\end{proof}
\begin{remark}
The factor $2^{n-\xi(\mathbf i)}$ can also be arrived at as follows. Choose a variable in each block
and then fix the polarities of the other variables in this block as equal or opposite to the
chosen variable of the block. This gives $l-1$ decisions for a block of size $l$ and thus in total
a contribution of the multiplicative factor $2^{\sum_{l=2}^{n}i_l(l-1)} = 2^{n-\xi(\mathbf i)}$.
\end{remark}
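The counting can also be cross-checked mechanically. In the sketch below (our illustration), a partition is given as a map from a block size $\ell$ to its multiplicity $i_\ell$, and the formula of the proposition is compared against direct products of binomial coefficients and polarity choices.

```python
from math import comb, factorial

def class_size(parts):
    """|C_i| from Proposition (ii): parts maps a block size l to the
    number i_l of blocks of that size."""
    n = sum(l * c for l, c in parts.items())        # s(i) = sum l * i_l
    xi = sum(parts.values())                        # xi(i) = number of blocks
    denom = 1
    for l, c in parts.items():
        denom *= factorial(c) * factorial(l) ** c
    return 2 ** (n - xi) * factorial(n) // denom    # always an exact integer
```

For instance, for $n=3$ and the partition $2+1$ the formula gives $\binom{3}{2} \cdot 2 = 6$ (choose the pair, then one of two relative polarities), and for the running example's partition $3+2+1+1$ of $7$ it gives $\binom{7}{3}\binom{4}{2} \cdot 2^3 = 1680$.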
\subsection{2-Xor Expressions as Colored Multigraphs}\label{EsCM}
In their seminal articles on the first cycle in an evolving graph and the birth of the giant
component, Flajolet, Knuth and Pittel~\cite{FKP89} and Janson, Knuth, \L{}uczak and
Pittel~\cite{giant} introduced the following notions.
The \emph{multigraph process}, also known as the \emph{uniform graph model}, produces a labelled multigraph~$G$ with $n$ vertices and $m$ edges by drawing independently and uniformly $2 m$ vertices in $[1,n]$:
\[
u_1, v_1, u_2, v_2, \ldots, u_m, v_m.
\]
The set of vertices of $G$ is $V(G) = [1,n]$ and its set of edges is
\[
E(G) = \{ \{u_1, v_1\}, \{u_2, v_2\}, \ldots, \{u_m, v_m\} \}.
\]
Different drawings can lead to the same multigraph:
The number of ordered sequences of vertices that correspond to a given multigraph $G$ is
denoted by $\operatorname{seqv}(G)$ and satisfies
\[
\operatorname{seqv}(G) = |\{ (u_1, v_1, \ldots, u_m, v_m) \in [1,n]^{2m}\ |\ E(G) = \{ \{u_1, v_1\}, \ldots, \{u_m, v_m\} \} \}|.
\]
A multigraph is \emph{simple} if no edge contains twice the same vertex and all its edges are distinct.
Therefore, it contains neither loops nor multiple edges.
It follows that the number of sequences of vertices that correspond to a given simple multigraph $G$ with $m$ edges is
\[
\operatorname{seqv}(G) = 2^m m!.
\]
The \emph{compensation factor} $\kappa(G)$ of a multigraph $G$ is classically defined as
\[
\kappa(G) = \frac{\operatorname{seqv}(G)}{2^m m!},
\]
so a multigraph is simple if and only if its compensation factor is equal to $1$.
\begin{figure}[h]
\begin{center}
{\includegraphics[width=7cm]{ex-multigraph}}
\caption{\label{fig:example-multigraph} The multigraph underlying our running example.}
\end{center}
\end{figure}
For example, for $m=4$ and $n=7$ the drawings $x_2$, $x_3$, $x_7$, $x_7$, $x_1$, $x_3$, $x_6$,
$x_5$ and $x_7$, $x_7$, $x_1$, $x_3$, $x_3$, $x_2$, $x_5$, $x_6$ both lead to the multigraph of Figure~\ref{fig:example-multigraph}; indeed the number of ordered sequences leading to this multigraph is $4! \; 2^3 = 192$ and its compensation factor is $\frac{1}{2}$.
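These counts are easy to confirm by brute force: generate every ordered drawing compatible with the edge multiset and deduplicate. The snippet below is our illustration (vertex names follow the figure); it recovers $\operatorname{seqv}(G)=192$ and $\kappa(G)=\frac12$.

```python
from itertools import permutations, product
from fractions import Fraction
from math import factorial

# The multigraph of the running example: edges {2,3}, a loop at 7,
# {1,3} and {6,5}.
edges = [(2, 3), (7, 7), (1, 3), (6, 5)]
m = len(edges)

# All drawings u1, v1, ..., um, vm producing these edges: choose an order
# for the m edges and an orientation for each, then deduplicate.
drawings = set()
for order in permutations(edges):
    for flips in product((0, 1), repeat=m):
        seq = tuple((v, u) if f else (u, v)
                    for (u, v), f in zip(order, flips))
        drawings.add(seq)

seqv = len(drawings)                       # flipping the loop changes nothing
kappa = Fraction(seqv, 2 ** m * factorial(m))
```

The loop at vertex $7$ is the only source of collisions: flipping it yields the same ordered pair, so only $2^3$ of the $2^4$ orientation choices are distinct for each of the $4!$ orderings.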
\begin{fact}
Let $\mathcal M_{m,n}$ denote the set of multigraphs with $n$ vertices and $m$ edges.
The probability for the multigraph process to produce a multigraph $G$ among all multigraphs in
$\mathcal M_{m,n}$ is proportional to its compensation factor $\kappa(G)$
\[
\mathds{P}(G\ |\ G \in \mathcal M_{m,n})
=
\frac{\kappa(G)}{\sum_{H \in \mathcal M_{m,n}} \kappa(H)}.
\]
\end{fact}
The \emph{number} of multigraphs in a family $\mathcal{F}$ is defined as the sum of their compensation factors
\[
\sum_{G \in \mathcal{F}} \kappa(G),
\]
although this quantity might not be an integer.
For example, the total number of multigraphs with $n$ vertices and $m$ edges is
\[
M_{m,n} = \frac{n^{2m}}{2^m m!},
\]
and the number of cubic multigraphs (\textit{i.e.} multigraphs where all the vertices have degree $3$) with $2 r$ vertices is
\[
\frac{(6 r)!}{(3!)^{2 r} 2^{3 r} (3 r)!},
\]
because such multigraphs have $3 r$ edges.
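As a check of this formula for $r=1$, one can enumerate all multisets of three edges on two vertices and sum compensation factors. We use the closed form $\kappa(G) = \prod_{i \leq j} 1/e_{ij}! \cdot \prod_i 2^{-e_{ii}}$, where $e_{ij}$ is the multiplicity of the edge $\{i,j\}$ and $e_{ii}$ the number of loops at~$i$; this closed form is our addition, taken from the multigraph literature rather than from the definition above (it is easily checked against $\operatorname{seqv}$ on small cases).

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import factorial

def kappa(edge_multiset):
    """Compensation factor: 1 / (prod_e mult(e)! * 2^{#loops, with mult.})."""
    mult, loops = {}, 0
    for e in edge_multiset:
        mult[e] = mult.get(e, 0) + 1
        loops += (e[0] == e[1])
    k = Fraction(1)
    for c in mult.values():
        k /= factorial(c)
    return k / 2 ** loops

# Cubic multigraphs on 2r = 2 vertices: multisets of 3 edges drawn from
# {(1,1), (1,2), (2,2)} in which both vertices end up with degree 3.
total = Fraction(0)
for ms in combinations_with_replacement([(1, 1), (1, 2), (2, 2)], 3):
    deg = {1: 0, 2: 0}
    for u, v in ms:
        deg[u] += 1
        deg[v] += 1
    if deg[1] == deg[2] == 3:
        total += kappa(ms)

r = 1
formula = Fraction(factorial(6 * r),
                   factorial(3) ** (2 * r) * 2 ** (3 * r) * factorial(3 * r))
```

Only two shapes survive the degree check: the triple edge ($\kappa = 1/6$) and one loop at each vertex plus a connecting edge ($\kappa = 1/4$), summing to $5/12$ as the formula predicts.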
If $\mathcal{F}$ contains only simple multigraphs, its number of multigraphs is equal to its cardinality.
Let $n(G)$ and $m(G)$ denote the number of vertices and number of edges of a multigraph $G$,
respectively.
The generating function corresponding to a family $\mathcal{F}$ of multigraphs is
\[
\sum_{G \in \mathcal{F}}
\kappa(G) z^{m(G)} \frac{v^{n(G)}}{n(G)!}.
\]
For example, the generating function of all multigraphs is
\[
M(z,v) = \sum_{n \geq 0} e^{\frac{n^2}{2} z} \frac{v^n}{n!}.
\]
As already observed by Janson, Knuth, \L{}uczak and Pittel~\cite{giant},
and Flajolet, Salvy and Schaeffer \cite{FSS04},
a multigraph is a set of connected multigraphs,
so the generating function for connected multigraphs is
\[
C(z,v) = \log M(z,v) = \sum_{r \geq -1} z^r \; C_r (z v)
\]
where we have set $r = m-n$, the \emph{excess} of the multigraph, and where $C_r(z)$ is the
generating function associated with \emph{connected} multigraphs of fixed excess $r$.
\medskip
We are now ready to define a \emph{bijection} between Boolean expressions and \emph{colored}
multigraphs, i.e. multigraphs with different types (colors) of edges between any two vertices.
\begin{figure}[h]
\begin{center}
{\includegraphics[width=7cm]{example}}
\caption{\label{fig:example} The colored multigraph for our running example.}
\end{center}
\end{figure}
\begin{proposition}
\label{fg-bijection}
The 2-Xor expressions are in bijection with multigraphs where loops are 4-colored and other edges are 8-colored.
This bijection is such that, for all $f\in \mathcal X$ the number of connected components of the associated multigraph is $\xi(\mathbf i(f))$.
Thus the function $M(8 z,v)$ is the bi-exponential generating function for 2-Xor expressions, i.e.
\[
M(8 z,v)=\sum_{n\ge 0}\sum_{m\ge 0} E_{m,n} \frac{z^m v^n}{m!\, n!}.
\]
\end{proposition}
\begin{proof}
We first present the bijection between a 2-Xor expression of $m$ clauses on $n$ variables, and a colored multigraph on $n$ vertices and with $m$ edges.
\begin{itemize}
\item
Each Boolean variable $x_\ell$ corresponds to a vertex, and each 2-Xor clause to an edge between two distinct vertices, or to a loop on one vertex; each loop or edge can be repeated.
\item
A loop on vertex $x$ has one of four colors: $x \oplus x$, $x \oplus \bar{x}$, $\bar{x} \oplus x$ or $\bar{x} \oplus \bar{x}$.
\item
An edge between two distinct vertices $x_i$ and $x_j$ has one of eight colors: $l_i \oplus l_j$ or $l_j \oplus l_i$, where $l_i$ and $l_j$ are respectively equal to $x_i$ or its negation, and $x_j$ or its negation.
\end{itemize}
It is then an easy matter to check that the number of connected components of the multigraph is simply the number of parts in the integer partition associated with the function~$f$ computed by the expression.
We next turn to the generating function for 2-Xor expressions and start from the generating function for multigraphs
\[
M(z,v) = \sum_{m,n} M_{m,n} \frac{v^n}{n!} \, z^m
= \sum_{n \geq 0} e^{\frac{n^2}{2} \, z} \; \frac{v^n}{n!},
\]
with $v$ marking the vertices and $z$ marking the edges and loops, and $M_{m,n}$ the number of multigraphs on $n$ vertices and with $m$ edges.
Consider expressions built on $n$ variables, and set $E_{m,n}$ as the number of such expressions with $m$ clauses.
Each vertex contributes a term $e^{4z}$ for the loops: There are 4 possible colors;
each vertex $x$ also contributes a term $\prod_{y: x<y\leq n} e^{8z} = e^{8z (n-x)}$ for the edges
to a different vertex: There are 8 possible colors. We order the vertices so as not to count them
twice. Taking into account all $n$ vertices gives
$$
\sum_m E_{m,n}\, \frac{z^m}{m!} = \prod_{s=1}^n e^{4z} \; e^{8z (n-s)}
= e^{4 n^2 z} ,
$$
which in turn leads to an expression for the global generating function as
\[
\sum_{m,n} E_{m,n}\, \frac{z^m}{m!} \; \frac{v^n}{n!}
= \sum_n e^{4 n^2 \, z} \; \frac{v^n}{n!} = M(8z,v).\qedhere
\]
\end{proof}
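The component/block correspondence asserted by the proposition can be verified computationally on small random expressions: running the clauses through a union-find traces the multigraph components, while the blocks can be read off independently from the satisfying assignments (two variables share a block exactly when their xor is constant over all models). The cross-check below is our illustration, with our own encodings.

```python
import random
from collections import defaultdict
from itertools import product

def components_and_blocks(n, clauses):
    """Parity union-find: the merges alone trace the multigraph
    components, the parity bits decide satisfiability."""
    parent = list(range(n))
    parity = [0] * n

    def find(v):
        if parent[v] == v:
            return v, 0
        r, p = find(parent[v])
        parent[v] = r
        parity[v] ^= p
        return r, parity[v]

    sat = True
    for (i, s), (j, t) in clauses:
        (ri, pi), (rj, pj) = find(i), find(j)
        rel = 1 ^ s ^ t
        if ri == rj:
            sat = sat and (pi ^ pj == rel)
        else:
            parent[rj] = ri
            parity[rj] = pi ^ pj ^ rel
    comps = defaultdict(list)
    for v in range(n):
        comps[find(v)[0]].append(v)
    return sat, sorted(sorted(c) for c in comps.values())

def blocks_from_truth_table(n, clauses):
    """Blocks read off the satisfying assignments (brute force, small n)."""
    models = [a for a in product((0, 1), repeat=n)
              if all((a[i] ^ s ^ a[j] ^ t) == 1 for (i, s), (j, t) in clauses)]
    if not models:
        return None                      # the expression computes FALSE
    blocks, seen = [], set()
    for i in range(n):
        if i in seen:
            continue
        b = [j for j in range(n)
             if len({a[i] ^ a[j] for a in models}) == 1]
        seen.update(b)
        blocks.append(b)
    return sorted(blocks)
```

For every satisfiable expression the two partitions of the variables coincide, so in particular the number of connected components equals $\xi(\mathbf i(f))$.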
\subsection{The Different Ranges}
\label{sec:ranges}
We shall not consider the whole range of values for the parameters $n$ and $m$ when studying the probabilities on $\mathcal X_n$, but restrict our investigations to the case where $m$ and $n$ are (roughly) proportional -- which is the most interesting part as it includes the domain around the threshold -- and set $m\sim\alpha n$ ($\alpha$ is usually assumed to be a constant).
It is well known (see, e.g., \cite{DaudeRavelomanana}) that the probability that a random expression is satisfiable decreases from 1 to 0 when $\alpha$ increases, with a (coarse) threshold at $\frac{1}{2}$.
However \emph{a Boolean function corresponding to a partition of the $n$ Boolean variables into $p$ blocks cannot appear before at least $n-p$ clauses have been drawn, i.e. before $m \geq n-p$}.
E.g., the function $x_1 \sim \cdots \sim x_n$ cannot appear for $m < n-1$, which means that it has a non-zero probability only for $\alpha \geq 1$, much later than the threshold -- and at this point the probability of $\textsc{False}$ is~$1-o(1)$.
This leads us to define several regions according to the value of the ratio~$\alpha=m/n$ when $m,n \rightarrow +\infty$:
\begin{itemize}
\item $\alpha < 1/2$.
Here the probability of satisfiability is non-zero, but the attainable functions cannot have fewer than $n(1-\alpha)$ blocks.
\item $\alpha = 1/2$.
This is precisely the threshold range.
\item $1/2 < \alpha < 1$.
Some Boolean functions still have probability zero, but now the probability of satisfiability is $o(1)$ and the probability of $\textsc{False}$ is $1-o(1)$.
Thus any other attainable Boolean function has a vanishing probability~$o(1)$.
\item $1 \leq \alpha$.
At this point all the attainable Boolean functions have non-zero probability, but again the
probability of $\textsc{False}$ is tending to 1.
\end{itemize}
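A quick Monte Carlo experiment (ours; the choice $n=500$ and the acceptance bands are arbitrary) makes the threshold at $\alpha = \frac12$ visible: the empirical satisfiability rate is high well below the threshold and essentially zero well above it.

```python
import random

def is_satisfiable(n, clauses):
    """Iterative union-find with parity bits; a clause ((i, s), (j, t))
    forces x_i xor x_j = 1 ^ s ^ t."""
    parent = list(range(n))
    parity = [0] * n

    def find(v):
        path = []
        while parent[v] != v:
            path.append(v)
            v = parent[v]
        p = 0
        for w in reversed(path):        # path compression, fixing parities
            p ^= parity[w]
            parent[w], parity[w] = v, p
        return v, p

    for (i, s), (j, t) in clauses:
        (ri, pi), (rj, pj) = find(i), find(j)
        rel = 1 ^ s ^ t
        if ri == rj:
            if pi ^ pj != rel:
                return False            # the expression computes FALSE
        else:
            parent[rj] = ri
            parity[rj] = pi ^ pj ^ rel
    return True

def sat_rate(n, alpha, trials, rng):
    """Empirical probability of satisfiability at ratio m/n = alpha."""
    m = int(alpha * n)
    hits = 0
    for _ in range(trials):
        cls = [((rng.randrange(n), rng.randrange(2)),
                (rng.randrange(n), rng.randrange(2)))
               for _ in range(m)]
        hits += is_satisfiable(n, cls)
    return hits / trials
```

With a fixed seed one observes, for instance, a rate above $0.6$ at $\alpha = 0.2$ and a rate close to $0$ at $\alpha = 2$.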
\section{Probabilities on the Set of Boolean Functions}
\label{sec:probas}
We consider here how we can obtain the probability of satisfiability (or equivalently of $\textsc{False}$), or of any function in $\mathcal X_n$.
The reader should recall that the probabilities given in the sequel are actually distributions on $\mathcal X_n$, i.e. they depend on~$n$ and~$m$.
Letting $n$ and $m=m(n)$ grow to infinity amounts to specializing the probability distribution
$\mathds{P}(f)$ (defined in Section~\ref{sec:model-expressions} for $f \in \mathcal X_n$) to $\mathrm{Pr}_{[m(n),n]}(f)$.
We shall be interested in its limit when $n \rightarrow + \infty$ and $f$ is a function of~$\mathcal X$.
First we will consider the case $f=\textsc{False}$ (which is the usual satisfiability problem) and derive anew the probability of satisfiability in the critical window, before turning to general Boolean functions.
We begin with some enumeration results on multigraphs that will be useful in the proofs of our results.
\begin{remark}
Note that the classical satisfiability problems, as well as the extension described above, ask
for the limit of the probability $\mathrm{Pr}_{[m(n),n]}(f)$ as $n\to\infty$. This
raises the question whether the sequence of distributions $\mathrm{Pr}_{[m(n),n]}$ defines a
limiting distribution on the set $\mathcal X$. We do not know whether this is the case, but our
asymptotic results concern either the limit of the probabilities
$\mathrm{Pr}_{[m(n),n]}(f)$ for some \emph{a priori} given function $f$ that is independent of $n$ (lying
in some $\mathcal X_{n_0}$; the limit for $n\to\infty$ is then taken), or a particular sequence of
functions $(f_n)$ depending on~$n$.
When looking into the literature of quantitative logic, the question for certain limiting
probabilities often arises and is settled by means of the Drmota-Lalley-Woods theorem (see
\cite[p.~489]{FlajoletSedgewick} for the polynomial version and \cite[Sec.~2.2.5]{Drmota09} for
the analytic version). In order to apply this theorem, one has to specify the problem in terms of
a system of functional equations which has certain technical properties, in particular it must not
be linear.
Usually, for each Boolean function one defines a generating function associated with the expressions
representing the Boolean function and sets up a sort of a recursive description of the Boolean
function in terms of the other Boolean functions. If we do that for $2$-Xor formulas, we get a
linear system of functional equations, which is therefore not covered by the Drmota-Lalley-Woods
framework. Despite linearity, the system is complicated to analyze, and so we decided to approach
the problem through a bijection to certain classes of multigraphs and exploit the rich existing
knowledge on multigraph generating functions.
\end{remark}
\subsection{Asymptotics for Multigraphs} \label{sec:asymptotics-multigraphs}
\subsubsection{Connected Multigraphs}
Connected graphs with a large number of vertices
have been counted for various ranges of number of edges.
The first result is attributed to Cayley,
who obtained in~$1889$ an exact formula for the number
of unrooted trees by resolution of a recurrence
(see~\cite[p.~51]{BLW74} for a historical discussion by Biggs, Lloyd and Wilson).
R\'enyi~\cite{ER59} derived an asymptotic formula
for the number of unicyclic graphs.
Erd\H{o}s and R\'enyi obtained in~\cite{ER59}
the probability for a random graph
with high density of edges to be connected.
From their result follows an expression for the asymptotic
number of connected graphs with $n$ vertices
and $m$ edges when~$m-n = \frac{1}{2} n (\log(n) + c)$
for any value~$c$ fixed or growing to infinity.
Using generating functions,
Wright~\cite{Wri1977} gave the asymptotic number of connected graphs
for~$m-n$ arbitrary but fixed,
and also studied the case~$m-n = o(n^{1/3})$ in~\cite{Wri1980}.
Finally, Bender, Canfield, and McKay~\cite{BCMK90}
obtained the asymptotic number of connected graphs for all $n, m-n \rightarrow \infty$.
Their proof is based on a recursive formula derived by Wright.
New proofs were proposed in~\cite{PW05} and~\cite{HS06}.
For historical reasons, most of those results
were only stated for simple graphs.
In the following theorems, we summarize those results
and adapt them to multigraphs.
\begin{notation} \label{th:CmnCr}
The number of connected multigraphs
with $n$ vertices and $m$ edges is denoted by $C_{m,n}$.
The exponential generating function
of connected multigraphs with excess $r= m-n$
is denoted by
\[
C_r(v) = \sum_{n \geq 0} C_{n+r,n} \frac{v^n}{n!}.
\]
\end{notation}
\begin{theorem}
\label{th:connected-multigraphs-fixed-excess}
When the excess $r=m-n$ is fixed, then
\begin{equation} \label{eq:asympt-bernhard-exces}
C_{m,n} \sim K_r n^{n + \frac{3 r - 1}{2}},
\end{equation}
where the value of $K_r$ is
\[
K_r =
\begin{cases}
1 & \text{if } r = -1, \\
\frac{\sqrt{2\pi}}{4} & \text{if } r = 0,\\
\frac{ \sqrt{2 \pi} }{ 2^{3r/2} \Gamma( 3r/2 ) }
[v^{2r}] \log \left(\sum_{\ell \geq 0} \frac{(6\ell)!}{288^\ell (3\ell)!} \frac{v^{2\ell}}{(2\ell)!} \right) & \text{if } r > 0.
\end{cases}
\]
\end{theorem}
\begin{remark}
Note that the excess of a connected multigraph is always greater than or equal to~$-1$.
\end{remark}
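Cayley's formula for the case $r=-1$ can be checked by exhaustive enumeration: a connected multigraph of excess $-1$ is a tree, which has neither loops nor multiple edges, so all compensation factors equal $1$ and $C_{n-1,n} = n^{n-2}$. A minimal brute-force sketch in Python (illustrative only):

```python
from itertools import combinations

def labeled_tree_count(n):
    """Count graphs with n labeled vertices and n-1 edges that are
    connected (i.e. labeled trees), by brute force over edge subsets."""
    edges = list(combinations(range(n), 2))
    count = 0
    for subset in combinations(edges, n - 1):
        # union-find connectivity test
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in subset:
            parent[find(u)] = find(v)
        if len({find(w) for w in range(n)}) == 1:
            count += 1
    return count

# Cayley's formula: n^(n-2) labeled trees on n vertices
assert all(labeled_tree_count(n) == n ** (n - 2) for n in range(2, 7))
```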
\begin{proof}
\begin{itemize}
\item
For $r=-1$, the connected component is an unrooted tree,
$C_{-1}(v) = T(v) - T(v)^2 /2$ where $T(v)= \sum_n n^{n-1} \frac{v^n}{n!}$
is the so-called tree function, and \cite[p.~132]{FlajoletSedgewick}:
\[
n! [ v^n] C_{-1}(v) = n^{n-2}.
\]
\item
For $r=0$, the connected component is unicyclic, $C_{0}(v) = \frac{1}{2} \log \frac{1}{1-T(v)}$ and
(again from \cite[p.~133]{FlajoletSedgewick}):
\[
n! [ v^n] C_{0}(v) \sim \frac{1}{4} n^{n-1} \sqrt{2 n\pi} .
\]
\item
For $r \geq 1$, we follow the approach of Wright~\cite{Wri1977}.
A \emph{kernel} is a multigraph with minimum degree at least $3$.
%
Let us define the \emph{deficiency} of a kernel of excess $r$ with $n$ vertices
as $d = 2 r - n$. It follows that the number of edges of a kernel is $m = 3r - d$.
Let also $C^{(\geq 3)}_{r,d}$ denote the number of connected kernels of excess $r$ and deficiency $d$.
%
All connected multigraphs of excess $r \geq 1$
can be built from the connected kernels of excess $r$
by replacing the edges with paths
and the vertices with rooted trees, so
\[
C_{r}(v) = \sum_{d=0}^{2r-1} \frac{C^{(\geq 3)}_{r,d}}{(2 r - d)!} \, \frac{T(v)^{2r-d}}{(1-T(v))^{3r-d}},
\]
which gives
\begin{equation}
\label{eq:asymptotic-phi-n}
C_{n+r,n} = n! [v^n] C_{r}(v) = \sum_{d=0}^{2r-1} \frac{C^{(\geq 3)}_{r,d}}{(2 r - d)!} \, [ v^n ] \frac{T(v)^{2r-d}}{(1-T(v))^{3r-d}}.
\end{equation}
We must compute the coefficients $[ v^n ] \frac{T(v)^{2r-d}}{(1-T(v))^{3r-d}}$.
We have, for any fixed positive integers $a$ and $b$,
\[
n! [v^n] \frac{T(v)^a}{(1-T(v))^b} \sim
\frac{2^{-b/2}}{\Gamma(b/2)} \, e^n \, n^{b/2-1} \, n!,
\]
which is independent of~$a$.
When $r$ is fixed, we see that, of the $2r$ terms in Equation~(\ref{eq:asymptotic-phi-n}),
the one for $d=0$ gives the dominant term and we get, also from \cite[p.~134]{FlajoletSedgewick}:
\[
n! [v^n] C_{r}(v) \sim
\frac{C^{(\geq 3)}_{r,0}}{(2 r)!} \, \frac{\sqrt{2\pi}}{2^{3r/2} \, \Gamma(3r/2)} \; n^{n+\frac{3r-1}{2}}.
\]
Finally, the constant $C^{(\geq 3)}_{r,0}$ is
the number of connected cubic multigraphs
(\textit{i.e.} $3$-regular multigraphs).
Since there are $\frac{(6\ell)!}{(3!)^{2 \ell} 2^{3 \ell} (3 \ell)!}$ cubic multigraphs with $2
\ell$ vertices (see Section~\ref{EsCM}),
the generating function of connected cubic multigraphs is
\[
\sum_{\ell \geq 1} C^{(\geq 3)}_{\ell,0} \frac{v^{2\ell}}{(2\ell)!} =
\log \left( \sum_{\ell \geq 0} \frac{(6\ell)!}{288^\ell (3\ell)!} \frac{v^{2\ell}}{(2\ell)!} \right),
\]
and a coefficient extraction leads to
\[
\frac{C^{(\geq 3)}_{r,0}}{(2 r)!} =
[v^{2 r}] \log \left( \sum_{\ell \geq 0} \frac{(6\ell)!}{288^\ell (3\ell)!} \frac{v^{2\ell}}{(2\ell)!} \right).
\qedhere
\]
\end{itemize}
\end{proof}
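The count of cubic multigraphs invoked in the proof above can be cross-checked by brute force for small sizes: enumerate all multisets of $3\ell$ edges on $2\ell$ labeled vertices, keep the $3$-regular ones (a loop contributing $2$ to the degree of its vertex), and sum the compensation factors $\kappa = 1/(2^{\#\mathrm{loops}} \prod_e m_e!)$, loops being counted with multiplicity. A Python sketch of this check:

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import factorial
from collections import Counter

def cubic_multigraph_count(two_l):
    """Sum of compensation factors over all 3-regular multigraphs
    on two_l labeled vertices (a loop adds 2 to the degree)."""
    slots = [(i, j) for i in range(two_l) for j in range(i, two_l)]
    n_edges = 3 * two_l // 2
    total = Fraction(0)
    for multiset in combinations_with_replacement(slots, n_edges):
        deg = [0] * two_l
        for i, j in multiset:
            deg[i] += 1
            deg[j] += 1          # a loop (i == j) adds 2 to deg[i]
        if any(d != 3 for d in deg):
            continue
        mult = Counter(multiset)
        loops = sum(k for (i, j), k in mult.items() if i == j)
        kappa = Fraction(1, 2 ** loops)
        for k in mult.values():
            kappa /= factorial(k)
        total += kappa
    return total

for l in (1, 2):
    expected = Fraction(factorial(6 * l), 288 ** l * factorial(3 * l))
    assert cubic_multigraph_count(2 * l) == expected
```

For $\ell = 1$ the two connected configurations (a triple edge, weight $1/6$, and an edge with a loop at each endpoint, weight $1/4$) already give the value $5/12$.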
When the excess~$r$ goes to infinity,
non-cubic kernels cease to be negligible,
and a different approach is needed
to enumerate the connected multigraphs.
\begin{theorem}
\label{th:multigraphs-large-excess}
When $m-n$ goes to infinity and $\frac{2m}{n} - \log(n)$
tends towards a constant or~$- \infty$,
the asymptotic number of connected multigraphs is
\[
C_{m,n} =
\sqrt{\frac{2(e^\lambda - 1-\lambda)^2}{\lambda (e^{2\lambda} -1 - 2\lambda e^\lambda)}}
\frac{n^m}{\sqrt{2 \pi n}}
\frac{\left( 2 \sinh(\lambda /2) \right)^n}{\lambda^m}
\left( 1 + \mathcal{O} \left( \left( (m-n) e^{-2 m/n} \right)^{-1/2+\epsilon} \right) \right)
\]
for any $\epsilon > 0$,
where the value $\lambda$ is characterized by the relation
\[
\frac{\lambda}{2} \coth \frac{\lambda}{2} = \frac{m}{n}.
\]
\end{theorem}
\begin{proof}
This asymptotic expression has already been derived for simple graphs.
Unfortunately, the corresponding proofs are too long
to be reproduced and adapted here for multigraphs.
Instead, we follow the proof from Pittel and Wormald~\cite{PW05}
and indicate the necessary changes in order to obtain
the same result for multigraphs.
A proof based on analytic combinatorics
is also available in~\cite[Theorem~$5.1.3$]{ElieThesis}.
It is however restricted to the case where~$2m/n$ tends toward a constant.
The proof starts with the enumeration of \emph{cores},
which are multigraphs with minimum degree at least~$2$.
Cores correspond to sequences of vertices
\[
u_1, v_1, \ldots, u_m, v_m
\]
where each vertex appears at least~$2$ times.
The number of such sequences of length $2m$
with $n$ vertices is
\[
\sum_{\substack{d_1, \ldots, d_n \geq 2 \\ d_1 + \cdots + d_n = 2 m}}
\binom{2 m}{d_1, \ldots, d_n}
=
(2m)! Q(n,m),
\]
where the quantity $Q(n,m)$ is defined in~\cite[Equation~(2.1)]{PW05} by
\[
Q(n,m) =
\sum_{\substack{d_1, \ldots, d_n \geq 2 \\ d_1 + \cdots + d_n = 2 m}}
\prod_{j=1}^n \frac{1}{d_j!}.
\]
Therefore, the number
of cores with $n$ vertices and $m$ edges,
defined as the sum of their compensation factors, is
\[
\operatorname{Core}_{m,n} =
\frac{(2m)!}{2^m m!}
Q(n,m),
\]
which replaces Equation~(3.9) of~\cite[Theorem~$8$, p.~13]{PW05}.
Its asymptotic estimate, given in \cite[Equation~(3.11), p.~13]{PW05} is now
\[
\operatorname{Core}_{m,n} =
(1 + \mathcal{O}((m-n)^{-1} + (m-n)^{1/2} n^{-1 + \epsilon}))
\frac{(2m-1)!! f(\lambda)^n}{\lambda^{2m}}
\frac{1}{\sqrt{2 \pi n c (1+\bar{\eta} - c)}}
\]
where $\lambda$, $f$, $c$ and $\bar{\eta}$ have the same definition as in~\cite{PW05}.
The second step of the proof is the enumeration of cores
that contain no isolated cycles.
Let $\core^{(\setminus \text{cycle})}_{m,n}$ denote the number of such multigraphs
with $n$ vertices and $m$ edges.
The result is stated in \cite[p.~4, Theorem~2]{PW05}
and its proof can be found in \cite[Section~6]{PW05}.
It relies on the exponential generating function of simple undirected cycles
\[
\sum_{\ell \geq 3} \frac{x^\ell}{2 \ell}
= - \frac{1}{2} \log(1-x) - \frac{x}{2} - \frac{x^2}{4}.
\]
In multigraphs, a cycle might also have size $1$ (a loop),
or size $2$ (a double edge),
so we replace the previous generating function with
\[
\sum_{\ell \geq 1} \frac{x^\ell}{2 \ell}
= - \frac{1}{2} \log(1-x)
\]
and replace the function $h(x)$, defined in \cite[p.~4, Equation~(2.3)]{PW05}, by
\[
h(x)
= e^{- \sum_{\ell \geq 1} \frac{x^\ell}{2 \ell}}
= (1-x)^{1/2}.
\]
Theorem~$2$ of~\cite[p.~4]{PW05} becomes for multigraphs:
``\emph{when $m-n$ goes to infinity and $m = \mathcal{O}(n \log(n))$,
then for any fixed $\epsilon > 0$,
the number of cores with $n$ vertices and $m$ edges
that contain no isolated cycles is
\[
\core^{(\setminus \text{cycle})}_{m,n} =
(1 + \mathcal{O}(n^{-1/2 + \epsilon} + (m-n)^{-1}))
h \left( \frac{\lambda}{e^\lambda - 1} \right)
\operatorname{Core}_{m,n}
\]
where $\lambda$ is the unique positive root of
$
\frac{\lambda(e^\lambda-1)}{e^\lambda-1-\lambda} = \frac{m}{n}
$.
}''
The last ingredient of the proof is an observation from Erd\H{o}s and R\'enyi,
that when $m-n$ tends to infinity, almost all graphs or multigraphs
that contain neither trees nor unicyclic components are connected.
Therefore, $C_{m,n}$ is asymptotically equal to the number of such multigraphs.
They correspond to the cores without isolated cycles,
where each vertex is replaced with a rooted tree.
Their exact number is derived in \cite[Equation~(7.1)]{PW05}
and becomes for multigraphs
\[
\sum_{\mu=1}^n
\binom{n}{\mu}
\mu
n^{n - \mu - 1}
\core^{(\setminus \text{cycle})}_{m-n+\mu,\mu}.
\]
Borrowing the notation of~\cite{PW05},
the summand is estimated in \cite[Equation~(7.2)]{PW05} by
combining \cite[Theorem~2]{PW05} and \cite[Equation~(3.11)]{PW05},
which we both have modified
\[
\binom{n}{\mu}
\mu
n^{n - \mu - 1}
\core^{(\setminus \text{cycle})}_{m-n+\mu,\mu}
=
(1+\mathcal{O}(\beta_1))
n^m
F_n(y)
\exp(n H(y, \lambda)).
\]
The adaptation for multigraphs only requires changing the definition of $F_n(y)$
and replace it with
\[
F_n(y) =
\frac{1}{2 \pi n}
\sqrt{
\frac{(1-\sigma) y}{
u(1+\bar{\eta} - 2 u/y)(1-y+\rho)
}
},
\]
using again the notations $u$, $c$, $\lambda$, $\bar{\eta}$, $\rho$ and $\sigma$ of~\cite{PW05}.
The rest of the proof is a Laplace method.
The modification we made in the definition of $F_n$ also impacts
\cite[p.~31, Equation~(7.16)]{PW05} which becomes
\[
F_n(\bar{y}) =
\frac{
\sqrt{2} ( e^{\bar{\lambda}} -1 - \bar{\lambda} )^{3/2}
}{
2 \pi n \bar{\lambda}
\sqrt{ (e^{\bar{\lambda}}-1)^2 - \bar{\lambda}^2 e^{\bar{\lambda}} }
}.
\]
As a consequence, the definition of the value~$\alpha$ of~\cite[p.~5, Theorem~3]{PW05}
is, for multigraphs,
\[
\alpha =
\sqrt{
\frac{ 2 ( e^{\bar{\lambda}} - 1 - \bar{\lambda} )^2 }{
\bar{\lambda} ( e^{2 \bar{\lambda}} - 1 - 2\bar{\lambda}e^{\bar{\lambda}} )
}
}
\]
while the other quantities of the theorem stay unchanged.
\end{proof}
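Since $\frac{\lambda}{2} \coth \frac{\lambda}{2}$ increases from $1$ to infinity as $\lambda$ ranges over $(0, \infty)$, the parameter $\lambda$ of the previous theorem is easily computed numerically, for instance by bisection. A small Python sketch (assuming $m/n > 1$; the upper bound $50$ is an arbitrary cap):

```python
import math

def solve_lambda(ratio, hi=50.0, iters=100):
    """Solve (lam/2) * coth(lam/2) = ratio for lam > 0 by bisection.

    The left-hand side increases from 1 (as lam -> 0) to infinity,
    so a unique positive solution exists for any ratio > 1."""
    f = lambda lam: (lam / 2) / math.tanh(lam / 2)
    lo = 1e-9
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

lam = solve_lambda(1.5)
assert abs((lam / 2) / math.tanh(lam / 2) - 1.5) < 1e-9
```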
Remark that the value $\lambda$ of the previous theorem
is a constant only when $\frac{m}{n}$ is fixed.
As observed by Pittel and Wormald in~\cite{PW05},
the asymptotic formula of the previous theorem also holds
when $\frac{2m}{n} - \log(n)$ tends slowly towards infinity.
However, we do not need this extension, because
this range of $m$ is already covered by the following theorem.
\begin{theorem} \label{connected_excess-infinite}
When both $m-n$ and $\frac{2 m}{n} - \log(n)$ go to infinity,
the asymptotic number of connected multigraphs becomes
\[
C_{m,n} \sim
\frac{n^{2m}}{2^m m!}.
\]
\end{theorem}
\begin{proof}
Erd\H{o}s and R\'enyi proved in~\cite{ER59}
that when~$2m/n - \log(n)$ goes to infinity,
a random multigraph with $n$ vertices and $m$ edges
is connected with high probability.
Therefore, the number of connected multigraphs
is then asymptotically equal to the total number of multigraphs, $\frac{n^{2m}}{2^m m!}$.
\end{proof}
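The total weighted number of multigraphs $\frac{n^{2m}}{2^m m!}$ used in this proof is an exact identity, which can be confirmed for small parameters by summing the compensation factors $\kappa(G) = 1/(2^{\#\mathrm{loops}} \prod_e m_e!)$ (loops counted with multiplicity) over all multigraphs. A brute-force Python sketch:

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import factorial
from collections import Counter

def weighted_multigraph_count(n, m):
    """Sum of compensation factors kappa(G) = 1/(2^loops * prod mult!)
    over all multigraphs with n labeled vertices and m edges
    (loops counted with multiplicity)."""
    slots = [(i, j) for i in range(n) for j in range(i, n)]
    total = Fraction(0)
    for multiset in combinations_with_replacement(slots, m):
        mult = Counter(multiset)
        loops = sum(k for (i, j), k in mult.items() if i == j)
        kappa = Fraction(1, 2 ** loops)
        for k in mult.values():
            kappa /= factorial(k)
        total += kappa
    return total

# exact identity: sum of kappa(G) = n^(2m) / (2^m m!)
for n, m in [(1, 2), (2, 2), (3, 2), (3, 3)]:
    assert weighted_multigraph_count(n, m) == Fraction(n ** (2 * m),
                                                       2 ** m * factorial(m))
```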
\subsubsection{Weighted Multigraphs}
As recalled in the definition of the multigraph process,
multigraphs are counted according to their compensation factor,
meaning that the number of multigraphs in a family~$\mathcal{F}$
is defined as the sum of their compensation factors~$\sum_{G \in \mathcal{F}} \kappa(G)$.
The proof of Theorems~\ref{th:proba(sat)} and~\ref{th:proba-random-input}
require a refinement of this definition,
involving the number of connected components of the multigraphs.
Specifically, we now count the number of multigraphs with $n$ vertices
and $m$ edges according to their compensation factor and a factor $\sigma$
for each connected component
\[
\sum_{G \in \mathcal M_{m,n}} \kappa(G) \sigma^{c(G)}
\]
where $\sigma$ is a positive real value
and $c(G)$ denotes the number of components of~$G$.
Since the generating function of connected multigraphs is $\log M(z,v)$
and a multigraph is a set of connected multigraphs,
the previous quantity can be expressed by a coefficient extraction
\[
\sum_{G \in \mathcal M_{m,n}} \kappa(G) \sigma^{c(G)}
=
n! [z^m v^n]
e^{\sigma \log M(z,v)}
=
n! [z^m v^n]
M^\sigma(z,v).
\]
We list asymptotic formulas for those values
in the following lemma, which combines
Theorems~$8$, $9$ and~$10$ of~\cite{PR14}.
The first part focuses on multigraphs
with fewer edges than half the number of vertices.
As proved by Erd\H{o}s and R\'enyi,
with high probability, they contain
only trees and unicyclic components.
The second part investigates the critical window
where the number of edges is around half the number of vertices.
In this range, connected components with fixed excess appear.
Larger numbers of edges seem more technical to analyze.
However, the probability of satisfiability
of the corresponding $2$-Xor formulas has already reached~$0$ almost surely,
and its study is therefore less interesting.
\begin{lemma} \label{th:sigmacritical}
Let $\sigma$ denote a fixed positive value.
When $\frac{m}{n}$ is in a fixed closed interval of~$]0, 1/2[$, then
\[
n! [z^m v^n] M^{\sigma}(z,v)
\sim
\frac{n^{2m}}{2^m m!}
\sigma^{n-m}
\left( 1 - \frac{2m}{n} \right)^{\frac{1-\sigma}{2}}.
\]
When $m=\frac{n}{2} (1 + \mu n^{-1/3})$
and~$\mu$ is bounded, then
\[
n! [z^m v^n] M^{\sigma}(z,v)
\sim
\frac{n^{2m}}{2^m m!}
\sigma^{n-m}
n^{\frac{\sigma-1}{6}}
\sum_{r \geq 0}
\sigma^{r}
e^{(\sigma)}_r
\sqrt{2 \pi}
A(3 r + \sigma/2, \mu),
\]
where the value of $e^{(\sigma)}_r$ is
\[
e^{(\sigma)}_r =
[z^{2r}]
\left(
\sum_{k \geq 0}
\frac{(6k)!}{2^{5k} 3^{2k} (3k)!}
\frac{z^{2k}}{(2k)!}
\right)^{\sigma}
\]
and the function $A$ is defined in~\cite[Lemma~3]{giant} by
\[
A(y, \mu) =
\frac{e^{-\mu^3/6}}{3^{(y+1)/3}}
\sum_{k \geq 0}
\frac{(3^{2/3} \mu / 2)^k}{k! \, \Gamma\left(\frac{y+1-2k}{3}\right)}.
\]
\end{lemma}
\begin{remark}
In the rest of the paper, we will only need the cases~$\sigma = 1$ and $\sigma = 1/2$.
\end{remark}
The function~$A(y,\mu)$ is a variation
of the classical Airy function which has been
thoroughly analyzed in~\cite[Lemma~3]{giant}.
For example, as mentioned in~\cite[Equation~(10.28)]{giant},
for $y=1$ it satisfies the relation
\[
A(1,\mu) = e^{-\mu^3/12} \operatorname{Ai}(\mu^2/4),
\]
and for $y=0$, it holds that
\[
A(0,\mu) =
- \frac{1}{2} \mu e^{- \mu^3/12} \operatorname{Ai}(\mu^2/4)
- e^{-\mu^3/12}
\operatorname{Ai}'(\mu^2/4).
\]
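As a numerical sanity check, the series defining $A(y,\mu)$ can be truncated and evaluated; at $\mu = 0$ only the $k=0$ term survives, and the formulas reduce to the classical values $\operatorname{Ai}(0) = 1/(3^{2/3}\Gamma(2/3))$ and $-\operatorname{Ai}'(0) = 1/(3^{1/3}\Gamma(1/3))$. A Python sketch, with the convention that $1/\Gamma$ vanishes at the poles $0, -1, -2, \ldots$:

```python
import math

def A(y, mu, terms=40):
    """Truncated series for A(y, mu) (cf. [giant, Lemma 3]); the factor
    1/Gamma(a) is taken to be 0 at the poles a = 0, -1, -2, ..."""
    s = 0.0
    for k in range(terms):
        a = (y + 1 - 2 * k) / 3
        if a <= 0 and float(a).is_integer():
            continue  # 1/Gamma vanishes at non-positive integers
        s += (3 ** (2 / 3) * mu / 2) ** k / (math.factorial(k) * math.gamma(a))
    return math.exp(-mu ** 3 / 6) / 3 ** ((y + 1) / 3) * s

# At mu = 0 only k = 0 survives; compare with Ai(0) and -Ai'(0):
assert abs(A(1, 0) - 1 / (3 ** (2 / 3) * math.gamma(2 / 3))) < 1e-12
assert abs(A(0, 0) - 1 / (3 ** (1 / 3) * math.gamma(1 / 3))) < 1e-12
```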
It is also close to the function defined in~\cite[Theorem~11]{BFSS01}
and~\cite[Theorem~IX.16]{FlajoletSedgewick},
denoted by $G$ in the first one, and by~$S$ in the second one.
\begin{comment}
\[
S_\lambda(\mu) =
\frac{1}{\lambda \pi} \sum_{k \geq 0} \frac{(-\mu)^k}{k!}
\sin\left(\pi \frac{1+k}{\lambda} \right) \Gamma\left(\frac{1+k}{\lambda}\right).
\]
Indeed, for $y = 0$ and $\lambda = 3/2$, by application of the complement formula
of the gamma function, $\sin(\pi a) \Gamma(a) = \pi / \Gamma(1-a)$,
we obtain
\[
S_{\frac{3}{2}} \left( - \frac{3^{2/3}}{2} \mu \right)
=
\frac{2}{3} \sum_{k \geq 0} \frac{(\frac{1}{2} 3^{2/3} \mu)^k}{k !}
\frac{1}{\Gamma((1-2k)/3)},
\]
so the function~$A(0,\mu)$ defined in~\cite[Lemma~3]{giant}
is equal to
\[
A(0, \mu) = \frac{3}{2} \frac{e^{-\mu^3/6}}{3^{1/3}} S_{\frac{3}{2}} \left(- \frac{3^{2/3}}{2} \mu \right).
\]
\end{comment}
\subsection{Probability of Satisfiability}
The probability of satisfiability
of a random $2$-Xor expression
has been studied by Creignou and Daud\'e~\cite{CreignouDaude99, CrDa04},
Daud\'e and Ravelomanana~\cite{DaudeRavelomanana}
and Pittel and Yeum~\cite{PittelYeum}.
We derive anew their results
to give a first application
of the link between $2$-Xor expressions
and colored multigraphs.
\begin{theorem}\label{th:proba(sat)}
The probability that a random expression is satisfiable is
\[
\mathds{P}({\mathrm Sat}) = \frac{[z^m v^n] \sqrt{M(4z,2v)}}{[ z^m v^n] M(8z,v)}.
\]
Its limit for $n \rightarrow + \infty$
when $\frac{m}{n}$ is in a fixed closed interval
of $]0, \frac{1}{2}[$ is
\[
\left( 1 - \frac{2m}{n} \right)^{1/4}.
\]
When $m = \frac{n}{2}(1 + \mu n^{-1/3})$
and $\mu$ is bounded,
this becomes
\[
n^{-1/12} \sqrt{2 \pi}
\sum_{r \geq 0} \frac{e_r^{(1/2)}}{2^r} A(3r + 1/4, \mu),
\]
with the notations of Lemma~\ref{th:sigmacritical}.
\end{theorem}
\begin{proof}
To obtain the generating function for satisfiable expressions,
we shall count the number of pairs
$\{$satisfiable expression, satisfying as\-sign\-ment$\}$,
then get rid of the number of satisfying assignments.
We can assign $\textsc{True}$ or $\textsc{False}$ to each variable,
and one of eight colors to an edge,
hence $M(8z,2v)$ is the generating function associated with the
pairs $\{$expression, as\-sign\-ment$\}$.
Once we have chosen an assignment of variables,
for an expression to be satisfiable
we have to restrict the edges we allow.
Say that $x$ and $y$ are assigned the same value;
then the edges colored by
$x\oplus y$, $y \oplus x$, $\bar{x} \oplus \bar{y}$ or $ \bar{y} \oplus \bar{x }$
cannot appear in a satisfiable expression.
For a similar reason, the only loops allowed
are $x \oplus \bar{x}$ or $\bar{x} \oplus {x}$.
We thus count multigraphs with $2$ colors of loops
and $4$ colors of edges, which gives a generating function equal to $M(4z,2v)$.
Now consider the generating function $S(z,v)$
for satisfiable expressions:
We claim that it is equal to $\sqrt{M(4z,2v)}$.
To see this, choose an expression
computing a Boolean function~$f$,
and consider how many assignments satisfy it:
We have seen (cf. Proposition~\ref{prop:classes})
that their number is equal to $2^ {\xi(f)}$,
with $\xi(f)$ the number of connected components
(once we have chosen the value of a single variable in a block,
all other variables in that block have received their values
if the expression is to be satisfiable).
This means that, writing
$S(z,v) = \exp \left(\log S(z,v)\right)$ with $\log S(z,v)$
the function for connected components,
the generating function enumerating the pairs
$\{$expression, satisfiable as\-sign\-ment$\}$
is equal to $\exp (2 \log S(z,v)) = S(z,v)^2$.
As we have just shown that it is also equal
to $M(4z,2v)$, the value of $\mathds{P}(\mathrm{Sat})$ follows.
To obtain the asymptotics before and in the critical window~$m = n/2 + \mathcal{O}(n^{2/3})$,
we use Lemma~\ref{th:sigmacritical}.
\end{proof}
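The exact formula of the theorem can be verified mechanically for small $m$ and $n$: the coefficients of $\sqrt{M(4z,2v)}$ follow from the recurrence obtained by expanding $S^2 = M(4z,2v)$, and the probability can be compared against exhaustive enumeration of all $(4n^2)^m$ expressions (ordered sequences of clauses, each clause an ordered pair of literals asserting their exclusive or). A Python sketch with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def M_coeff(m, n, ze, ve):
    """[z^m v^n] M(ze*z, ve*v), where M(z,v) = sum n^{2m}/(2^m m!) z^m v^n/n!."""
    return Fraction(ze ** m * ve ** n * n ** (2 * m),
                    2 ** m * factorial(m) * factorial(n))

def sqrt_coeff(Mc, mm, nn):
    """[z^mm v^nn] of the series S with S^2 = M, by the convolution recurrence."""
    S = {(0, 0): Fraction(1)}
    for m in range(mm + 1):
        for n in range(nn + 1):
            if (m, n) == (0, 0):
                continue
            conv = sum(S[i, j] * S[m - i, n - j]
                       for i in range(m + 1) for j in range(n + 1)
                       if (i, j) not in ((0, 0), (m, n)))
            S[m, n] = (Mc(m, n) - conv) / 2
    return S[mm, nn]

def p_sat_formula(m, n):
    return sqrt_coeff(lambda a, b: M_coeff(a, b, 4, 2), m, n) / M_coeff(m, n, 8, 1)

def p_sat_bruteforce(m, n):
    lits = [(v, s) for v in range(n) for s in (False, True)]
    clauses = list(product(lits, lits))          # ordered pair of literals
    sat = 0
    for expr in product(clauses, repeat=m):      # ordered sequence of m clauses
        sat += any(all((a[x] ^ sx) ^ (a[y] ^ sy) for (x, sx), (y, sy) in expr)
                   for a in product((False, True), repeat=n))
    return Fraction(sat, len(clauses) ** m)

for m, n in [(1, 1), (1, 2), (2, 2)]:
    assert p_sat_formula(m, n) == p_sat_bruteforce(m, n)
```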
The link between the enumeration
of $2$-Xor expressions and of multigraphs
and the knowledge of the asymptotic number of multigraphs
can also be combined to investigate
the probability for a satisfiable expression
to be satisfied by an input.
\begin{theorem}
\label{th:proba-random-input}
The probability that an input (fixed or random)
satisfies a random satisfiable expression
with $n$ variables, $m$ clauses and excess $r = m-n$ is
\[
\frac{[z^m v^n] M(4z,2v)}{2^n [z^m v^n] \sqrt{M(4z,2v)}}.
\]
When $\frac{m}{n}$ is in a closed interval of $]0,\frac{1}{2}[$, then this is asymptotically equivalent to
\[
\frac{1}{2^m}
\left( 1 - \frac{2m}{n} \right)^{-1/4},
\]
and it is
\[
\frac{n^{1/12}}{2^m}
\frac{1}{\sqrt{2 \pi} \sum_{r \geq 0} 2^{-r} e_r^{(1/2)} A(3r + 1/4, \mu)}
\]
for $m = \frac{n}{2}(1 + \mu n^{-1/3})$ with $\mu$ bounded,
using the notation of Lemma~\ref{th:sigmacritical}.
\end{theorem}
\begin{proof}
The probability that a random expression is satisfied
by a random assignment is equal to the number of pairs
$\{$satisfiable expression, satisfying assignment$\}$,
divided by the number of satisfiable expressions
and by the number $2^n$ of assignments.
The exact value follows from the fact that
the generating functions for the number of satisfiable expressions
and for the number of pairs $\{$satisfiable expression, satisfying assignment$\}$
are respectively $\sqrt{M(4z,2v)}$ and $M(4z,2v)$;
the asymptotic approximations come again
from Lemma~\ref{th:sigmacritical}.
\begin{comment}
The probability that an arbitrary fixed input
satisfy a random expression is the quotient
of the number $n! [z^m v^n] M(4z,v)$
of expressions satisfied by this input
with the total number of satisfiable expressions $n! [z^m v^n] \sqrt{M(4z,2v)}$.
It is therefore equal to the probability
that a random expression is satisfied
by a random input.
\end{comment}
\end{proof}
\subsection{Probability of a Given $2$-Xor Function}
\label{sec:res-general}
We now refine the probability of satisfiability, by computing the probability of a specific Boolean function $\neq \textsc{False}$.
We first give in Proposition~\ref{prop:fgs} the generating functions for all Boolean functions
(except again $\textsc{False}$), then use it to provide a general expression for the probability of a
Boolean function in Theorem~\ref{th:proba-f}, or rather of all the functions of an equivalence class~$C_\mathbf i$.
This theorem is stated at a level of generality that does not readily give precise probabilities; examples of asymptotic probabilities are postponed to Section~\ref{sec:results}.
\begin{proposition}
\label{prop:fgs}
Let $f$ denote a Boolean function in~$\mathcal X$
and $\mathbf i(f)$ the corresponding integer partition.
Define $\phi_{\mathbf i(f)} (z)$ as the generating function
for Boolean expressions that compute~$f$:
\[
\phi_{\mathbf i(f)} (z)= \sum_m E_{m,n}(f) \frac{z^m}{m!}.
\]
When $\mathbf i = \mathbf i_{\bf max} (n)$, we set $\phi_n(z) := \phi_{\mathbf i_{\bf max}(n)}(z)$.
Then
\begin{eqnarray*}
\phi_n (z)= n! [v^n] C(4z,v) ;
\qquad
\phi_{\mathbf i(f)} (z) = \prod_{\ell \geq 1} \left( \ell! [v^{\ell}] C(4z,v) \right)^{i(f)_\ell}.
\end{eqnarray*}
\end{proposition}
\begin{proof}
A canonical representative of the class $\mathbf i_{\bf max} (n)$ is the function $x_1 \sim \cdots \sim x_n$.
Any expression that computes it corresponds to a connected multigraph, where we only allow the 2 types of loops that compute $\textsc{True}$ and the 4 types of edges between $x_i$ and $x_j$ ($i \neq j$) that compute $x_i \sim x_j$; this readily gives the expression of $\phi_n(z)$.
As for functions whose associated multigraphs have several components, such multigraphs are a product of connected components; hence the global generating function is itself the product of the generating functions for each component.
\end{proof}
\begin{theorem}
\label{th:proba-f}
\begin{enumerate}
\item
The probability that a random expression of $m$ clauses on $n$ variables
computes the function $x_1 \sim \cdots \sim x_n$~is
\[
\mathds{P} (x_1 \sim \cdots \sim x_n) =
\frac{ m! n! [z^m v^n] C(4z,v)}{m! n! [z^m v^n] M(8z,v)}
=
\frac{m!}{n^{2m}} \; n! [v^n] C_{m-n}(v).
\]
\item
Let $f$ be a function of $\mathcal X$, with $q= \sum_\ell \mathbf i(f)_\ell$,
and $B_1, \ldots, B_q$ be the blocks of $\mathbf i(f)$,
with $r_j$ ($1\leq j \leq q$) the excess of the block~$B_j$.
The probability that a random expression of $m$ clauses on $n$ variables computes~$f$~is
\begin{eqnarray*}
\mathds{P} (f) &=&
\frac{m!}{n^{2m}} \;
\sum_{ \substack{r_1, \ldots ,r_q \geq -1 \\ r_1+ \cdots +r_q=m-n} } \;
\prod_{j=1}^q
|B_j|! [ v^{|B_j|}] C_{r_j}(v),
\end{eqnarray*}
where~$C_r(v)$ denote the generating function
of connected multigraphs of excess~$r$,
defined in Notation~\ref{th:CmnCr}.
\end{enumerate}
\end{theorem}
\begin{proof}
The probability $\mathds{P}(f)$ that an expression of~$m$ clauses on~$n$ variables
computes a function~$f$
is the quotient of the number of corresponding expressions
divided by the total number of expressions
\[
\mathds{P}(f) =
\frac{ m! [z^m] \phi_{\mathbf i(f)}(z)}
{m! n! [z^m v^n] M(8z,v)}.
\]
For $f = x_1 \sim \cdots \sim x_n$,
we obtain the first part of the theorem
by substitution of the expression of~$\phi_{\mathbf i(f)}$,
derived in Proposition~\ref{prop:fgs}.
More generally, we have
\[
\phi_{\mathbf i(f)} (z) =
\prod_{\ell \geq 1}
\left( \ell! [ v^\ell] C(4z,v) \right)^{i(f)_\ell}.
\]
By definition, $i(f)_\ell$ is the number of
blocks of~$f$ of size~$\ell$,
so the previous equation can be rewritten
\[
\phi_{\mathbf i(f)} (z) =
\prod_{j=1}^q
|B_j|! [v^{|B_j|}] C(4z,v),
\]
and
\begin{equation} \label{eq:phiextraction}
m! [z^m] \phi_{\mathbf i(f)} (z) =
m!
\sum_{m_1 + \cdots + m_q = m}
\prod_{j=1}^q
|B_j|! [z^{m_j} v^{|B_j|}] C(4z,v).
\end{equation}
The generating function~$C(z,v)$ can be expanded with respect to the excesses
\[
C(z,v) = \sum_{r \geq -1} z^r C_r(v z),
\]
so
\begin{equation} \label{eq:Crextraction}
|B_j|! [ z^{m_j} v^{|B_j|}] C(4z,v)
=
4^{m_j}
|B_j|! [ v^{|B_j|}]
C_{r_j}(v),
\end{equation}
where $r_j = m_j - |B_j|$.
We obtain the second part of the theorem
by combination of Equations~\eqref{eq:phiextraction} and~\eqref{eq:Crextraction}.
\end{proof}
\section{Explicit Probabilities}
\label{sec:results}
We now show on examples how Theorem \ref{th:proba-f} allows us to compute the asymptotic probability of a specific function.
Attempting to give explicit results for each and every case that may appear is not realistic; rather, we aim at giving the reader a feeling for the kind of results our method can provide and the kind of technical tools needed for obtaining precise asymptotics.
We consider first a fixed Boolean function $f$ and how its probability varies when $n \rightarrow +\infty$ (i.e. when we add non-essential variables),
then turn to a family of functions that vary with $n$, either with a fixed number of blocks (this includes functions that are ``close to'' $\textsc{False}$ in the sense that they have few blocks, hence few satisfying assignments), or with a number of blocks that grows with~$n$ (e.g., $\frac{n}{j}$ blocks of size $j$ for some $j\geq 2$).
\subsection{Probability of a fixed function}
We compute here the probability of any specific function, once it can be obtained, and see how it varies when $n, m \rightarrow + \infty$ with fixed ratio~$\alpha$.
\begin{proposition}
\label{prop:fixed-function}
Let $f \in \mathcal X_n$, with $e(f)$ being the number of its essential variables, and $\mathbf i = \mathbf i (f)=(i_1, i_2,\dots)=(n-e(f), i_2,\dots)$ its associated integer partition. Assume $m=\alpha n \geq n - \xi( \mathbf i(f))$; then
\[
P_{[\alpha n , n]}(f) \sim
\frac{e^{\alpha \, e(f)}}{(2n)^{\alpha n}} \; \prod_{\ell \geq 2}
\left( \ell ! \phi_{\ell}\left( \frac{\alpha}{2} \right) \right)^{i_{\ell}}
\qquad (n \rightarrow + \infty) .
\]
\end{proposition}
\begin{proof}
Let~$\vect{i}= \mathbf i(f)$ be an integer partition with~$s(\vect{i}) = n$
and for all~$\ell \geq 2$, let~${i}_{\ell}$ be fixed, independent of~$n$.
The number of expressions with~$n$ variables and~$m$ clauses
that correspond to Boolean functions in~${\cal C}_\mathbf i$ is then (cf: Proposition~\ref{prop:fgs})
\[
n! m! [ z^m ] \frac{e^{\vect{i}_1 2 z}}{\vect{i}_1!}
\prod_{\ell \geq 2} \frac{\phi_{\ell}^{\vect{i}_{\ell}}(z)}{\vect{i}_{\ell}!}.
\]
We derive an asymptotic equivalent by the saddle point method for a \emph{large power} scheme,
assuming that~$\alpha = \frac{m}{n}$ is bounded (\cite[Th.~VIII.8 p.~587]{FlajoletSedgewick}).
We get
\[
m! \; \frac{n!}{\vect{i}_1 !}
\left( \frac{2 e n}{m} \right)^m
\frac{1}{\sqrt{2 \pi m}}
e^{-(s(\vect{i}) - \vect{i}_1) m/n}
\prod_{\ell \geq 2}
\frac{\phi_{\ell}^{\vect{i}_{\ell}}\left( \frac{m}{2n} \right)}{\vect{i}_{\ell}!}
( 1 + o(1) ).
\]
Using Stirling's formula, this can be rewritten as
\[
\frac{n!}{\vect{i}_1 !} (2n)^m
e^{-(s(\vect{i}) - \vect{i}_1) m/n}
\prod_{\ell \geq 2}
\frac{\phi_{\ell}^{\vect{i}_{\ell}}\left( \frac{m}{2n} \right)}{\vect{i}_{\ell}!}
( 1 + o(1) ).
\]
By division by~$|{\cal C}_\mathbf i| = 2^{n - \xi(\vect{i})} \frac{n!}{ \prod_{\ell \geq 2} \vect{i}_\ell! (\ell!)^{\vect{i}_\ell}}$, we obtain the number of expressions
that correspond to any given function in~${\cal C}_\mathbf i$:
\[
2^{\xi(\vect{i}) - n} (2n)^m
e^{-(s(\vect{i}) - \vect{i}_1) m/n}
\prod_{\ell \geq 2}
\left( \ell ! \phi_{\ell}\left( \frac{m}{2n} \right) \right)^{\vect{i}_{\ell}}.
\]
We finally divide by the number of $(n,m)$-expressions, $4^m n^{2m}$, to obtain the asymptotic probability that a random expression with~$n$ variables and~$m$ clauses corresponds to the given Boolean function~$f$ described by
the integer partition~$\vect{i}$:
\[
\frac{ e^{-(s(\vect{i}) - {i}_1) m/n} }{ (2n)^m }
\prod_{\ell \geq 2}
\left( \ell ! \phi_{\ell}\left( \frac{m}{2n} \right) \right)^{\vect{i}_{\ell}}.
\]
The final form comes from the fact that $s(\mathbf i)-i_1$ is precisely the number of essential variables of~$f$.
\end{proof}
\subsection{Asymptotics for a single-block function}
\label{singleblock}
All Boolean variables are in a single block: We consider the class of $ x_1 \sim ... \sim x_n$ and the range $m\geq n-1$.
From Theorem~\ref{th:proba-f}, we have
\[
\mathds{P}(x_1 \sim ... \sim x_n) = \frac{m!}{n^{2m}} \, n! [ v^n ] C_{m-n}(v).
\]
We now specialize this according to the possible values for the excess $r=m-n$ and obtain the following proposition.
\begin{proposition}
\begin{enumerate}
\item
For $r=-1$, we have
$\mathds{P}(x_1 \sim ... \sim x_n) =
\frac{(n-1)!}{n^{n}} \sim \sqrt{\frac{2\pi}{n}} \; e^{-n}$ .
\item
For $r=0$, we get
$\mathds{P}(x_1 \sim ... \sim x_n)\sim \frac{\pi}{2} \; e^{-n} $.
\item
For $ r \geq 1$ but still fixed,
$\mathds{P}(x_1 \sim ... \sim x_n) \sim c_r \, e^{-n} n^{r/2}$
where $c_r = \sqrt{2\pi} \, K_r$ with $K_r$ as in Theorem~\ref{th:connected-multigraphs-fixed-excess}.
\item
For~$r \rightarrow \infty$ and~$r = o(\sqrt{n})$,
$\mathds{P}(x_1 \sim ... \sim x_n) \sim
\sqrt{\frac{3}{2}} \frac{e^{r/2}}{(2\sqrt{3})^r} e^{-n} \left(\frac{n}{r}\right)^{r/2}$.
\item
For~$r = (\alpha-1) n$ with $\alpha > 1$,
$\mathds{P}(x_1 \sim ... \sim x_n) \sim
K \left( \frac{\alpha^{\alpha-1} \cosh \zeta}{(2 \zeta)^{\alpha-1} e^\alpha} \right)^n$
where~$\zeta \coth \zeta = \alpha$
and~$K = \sqrt{\alpha} \frac{e^{2\zeta}-1-2\zeta}{\sqrt{\zeta(e^{4\zeta}-1-4\zeta e^{2\zeta})}}$.
\item
When~$r \rightarrow + \infty$ and~$2m/n - \log(n)$ is bounded, then
$$\mathds{P}(x_1 \sim ... \sim x_n) \sim
\frac{K}{(2 \zeta)^r} \left( \frac{\sinh \zeta}{\zeta} \right)^n \frac{\left(
1+r/n\right)^{n+r+1/2}}{e^{n+r}}$$
with~$\zeta$ the positive solution of~$\zeta \coth \zeta = 1 + \frac{r}{n}$
and~$K = \frac{e^{2\zeta}-1-2\zeta}{\sqrt{\zeta(e^{4\zeta}-1-4\zeta e^{2\zeta})}}$.
\item
Finally, when~$2m/n - \log(n) \rightarrow + \infty$ as~$n \rightarrow + \infty$,
$\mathds{P}(x_1 \sim ... \sim x_n) \sim \frac{1}{2^m}$
because almost all multigraphs are connected.
\end{enumerate}
\end{proposition}
\begin{proof}
We simply use the expressions for the coefficients of $C_r(v)$ given in Section~\ref{sec:asymptotics-multigraphs}.
The first three cases come from Theorem~\ref{th:connected-multigraphs-fixed-excess}, the next two cases are subcases of the sixth case which comes from Theorem~\ref{th:multigraphs-large-excess}, and the last case comes from Theorem~\ref{connected_excess-infinite}.
\end{proof}
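As a quick numerical sanity check of the first item (a Python sketch that is not part of the proof; the helper names are ours), one can compare the exact probability $(n-1)!/n^n$ with its asymptotic equivalent $\sqrt{2\pi/n}\,e^{-n}$:

```python
import math

def prob_single_block_tree(n):
    # Exact probability for excess r = -1 (m = n - 1): (n-1)! / n^n,
    # evaluated in log-space to avoid overflow for large n.
    return math.exp(math.lgamma(n) - n * math.log(n))

def stirling_equivalent(n):
    # Asymptotic equivalent sqrt(2*pi/n) * e^(-n) from item 1 above.
    return math.sqrt(2 * math.pi / n) * math.exp(-n)

ratio = prob_single_block_tree(500) / stirling_equivalent(500)
print(ratio)  # close to 1; the relative error is of order 1/n
```

The ratio tends to $1$, the relative error being of order $1/(12n)$, which is the next term of Stirling's formula.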
\subsection{Asymptotics for a two-block function}
\label{twoblocks}
We consider a function in the class of $x_1 \sim ... \sim x_p$, $x_{p+1} \sim ... \sim x_n$ (the block sizes are $p$ and $n-p$), which has cardinality
$2^{n-2} \frac{n!}{p! (n-p)!}$.
We are again in range~C: $m \geq n-2$, i.e. $r\geq -2$.
Theorem~\ref{th:proba-f} gives the generating function as
$$
\phi_\mathbf j (z) =
p! [ v^p ] C(4z,v) \,\cdot\, (n-p)! [ w^{n-p} ] C(4z,w),
$$
from which we readily obtain that
$$
\mathds{P}(f) =
\frac{m!}{n^{2m}} \; \sum_{d=-1}^{r+1} p! [ v^p ] C_d(v) \,\cdot\, (n-p)! [ w^{n-p} ] C_{r-d}(w).
$$
Its asymptotics varies with the excess $r = m-n$,
and the sizes of the two blocks.
In the following propositions, we consider several cases,
depending on the respective sizes of the blocks
and on the excess of the underlying multigraph.
The proofs are then presented in Sections~\ref{sec:two_blocks_large_excess} and~\ref{sec:two_blocks_fixed_excess}.
\begin{proposition}[Fixed excess and a single large part] \label{th:fixed_excess_and_a_single_large_part}
If $p$ and the excess $r$ are fixed, so that $d$ ranges over the fixed, finite set $\{-1,\dots,r+1\}$ which does not depend on $n$, then
\[
\mathds{P} (f) \sim K_f \cdot n^{\frac{r+3}{2}-p} \, e^{-n},
\]
for some explicitly computable constant $K_f$.
\end{proposition}
\begin{proposition}[Fixed excess $r$ and two large parts] \label{th:fixed_excess_and_two_large_parts}
Assume that $p$ and $n-p$ both tend to infinity, as $n\to\infty$. W.l.o.g. let $p \leq n-p$.
Then we have
\[
\mathds{P} (f) \sim
\sqrt{\frac{2\pi \,n}{p(n-p)}} \,\frac{e^{-n}}{n^{n+r}} \,
(n-p)^{n+\frac{3r}{2}} \left( \frac{p}{n-p} \right)^{p}
\sum_{d=-1}^{r+1} K_d \, K_{r-d} \,
\left( \frac{p}{n-p} \right)^{\frac{3d}{2}}
\]
for suitable constants $K_j$.
Depending on the actual growth rate of $p$ we can distinguish two cases:
\begin{enumerate}
\item
If $p = \gamma n$ for some constant $\gamma>0$, then $p/(n-p) = \Theta(1)$ and
\[
\mathds{P} (f) \sim K \, n^{\frac{r-1}{2}} \, \beta^{n} \, e^{-n}
\qquad \text{with} \qquad
\beta = (1-\gamma)^{1-\gamma}\,\gamma^{\gamma}.
\]
\item
If $p=\varepsilon_n \, n $ with $\varepsilon_n=o(1)$, then $p/(n-p)=o(1)$ and
\[
\mathds{P}(f) \sim
K\, e^{-n} \, n^{\frac{r-1}{2}} \, \varepsilon_n^{n\varepsilon_n-1} \,
(1-\varepsilon_n)^{(1-\varepsilon_n) n}.
\]
A more precise evaluation of the probabilities gives, for instance:
\begin{enumerate}
\item
If $p=\sqrt{n}$, then $\varepsilon_n = n^{-1/2}$ and the probability of the function has order
$ \frac{n^{-\frac{r}{2} + \frac{3}{4}} \, e^{-n-2\sqrt{n}}}{n^{\sqrt{n}}}.$
\item
If $p= \log n$, then $\varepsilon_n = \frac{\log n}{n}$ and the probability is of order
$ \left( \frac{\log n}{n} \right)^{\log n -1} n^{\frac{r+1}{2}} \, e^{-n}$.
\end{enumerate}
\end{enumerate}
\end{proposition}
\begin{proposition}[Large excess $r$] \label{th:two_parts_large_excess}
Assume that $r = c n$ for a fixed positive value~$c$.
Again, we distinguish two cases:
\begin{enumerate}
\item {\bf Single large part.}
If~$p$ is constant, then
\[
\mathds{P} (f) \sim \frac{K_f}{n^{p-1}}
\left(
\frac{ (1+c)^c \cosh(\zeta) }{ e^{1+c} (2 \zeta)^c }
\right)^n,
\]
for some explicitly computable constant $K_f$,
where~$\zeta \coth \zeta = 1 + \frac{cn+1}{n-p}$.
\item {\bf Two proportional large parts.}
If~$p = \gamma n$ and~$r = c n$, then
\[
\mathds{P} (f) \sim
\frac{K_f}{n}
\left(
\frac{ \gamma^\gamma (1-\gamma)^{1-\gamma} (1+c)^{1+c} }{ 2^c e^{1+c }}
g(a_0)
\right)^n
\]
where~$K_f$ is a computable constant,
and~$g(a_0)$ is the unique maximum in~$[0,1]$ of the function
\begin{align*}
g(a)
&=
\left( \frac{ \cosh(\zeta_{\frac{a c}{\gamma}}(a)) }{ 1 + \frac{a c}{\gamma} } \right)^\gamma
\left( \frac{ \cosh(\zeta_{\frac{(1-a) c}{1-\gamma}}(a)) }{ 1 + \frac{(1-a) c}{1-\gamma} } \right)^{1-\gamma}
\left( \frac{\gamma}{\zeta_{\frac{a c}{\gamma}}(a)} \right)^{a c}
\left( \frac{1-\gamma}{\zeta_{\frac{(1-a) c}{1-\gamma}}(a)} \right)^{(1-a) c}
\end{align*}
where~$
\zeta_{\frac{a c}{\gamma}}(a) \coth \zeta_{\frac{a c}{\gamma}}(a)
=
1 + \frac{a c}{\gamma}
$ and~$
\zeta_{\frac{(1-a) c}{1-\gamma}}(a) \coth \zeta_{\frac{(1-a) c}{1-\gamma}}(a)
=
1 + \frac{(1-a) c}{1-\gamma}$.
\end{enumerate}
\end{proposition}
Decomposing the two connected multigraphs according to their excesses gives the generating function for multigraphs with two connected components on $p$ and $n-p$ vertices, respectively:
\begin{eqnarray*}
\phi_\mathbf j (z)&=&
p! [v^p] \sum_{r \geq -1} (4z)^r C_r(4zv) \cdot (n-p)! [ w^{n-p} ] \sum_{s \geq -1} (4z)^s C_s(4zw)
\\ &=&
p! (n-p)! [ v^p w^{n-p} ]
\sum_{r,s \geq -1} (4z)^{r+s}\, C_r(4zv) \, C_s(4zw)
\end{eqnarray*}
and
\begin{eqnarray*}
[z^m]\phi_\mathbf j (z)&=&
p! (n-p)! [ z^m v^p w^{n-p} ]
\sum_{r,s \geq -1} (4z)^{r+s}\, C_r(4zv) \, C_s(4zw)
\\ &=&
4^m \,
p! (n-p)! [ v^p w^{n-p} ]
\sum_{r,s \geq -1, r+s+n=m} C_r(v) \, C_s(w)
\\ &=&
4^m \,
p! (n-p)! [ v^p w^{n-p} ]
\sum_{r=-1}^{m-n+1} C_r(v) \, C_{m-n-r}(w)\\ &=&
4^m \, \sum_{r=-1}^{m-n+1}
p! [ v^p ] C_r(v) \cdot
(n-p)! [ w^{n-p} ] C_{m-n-r}(w).
\end{eqnarray*}
Then
\begin{eqnarray*}
\mathds{P}(f) &=& \frac{m!}{4^m n^{2m}} [z^m]\phi_\mathbf j (z)
\\&=&
\frac{m!}{n^{2m}} \; \sum_{d=-1}^{r+1} p! [v^p] C_d(v) \cdot (n-p)! [ w^{n-p} ] C_{r-d}(w).
\end{eqnarray*}
\subsubsection{Function with two blocks and fixed excess} \label{sec:two_blocks_fixed_excess}
\paragraph{Single large part}
We now present the proof of Proposition~\ref{th:fixed_excess_and_a_single_large_part}.
In the range we are working in, $p$ and $d$ belong to a fixed, finite set; let us define
\[
\gamma_{d,p} = p! [ v^p ] C_d(v).
\]
Then
\[
\mathds{P}(f) =
\frac{m!}{n^{2m}} \; \sum_{d=-1}^{r+1} \gamma_{d,p} (n-p)! [ w^{n-p} ] C_{r-d}(w)
\]
and the asymptotic value of the coefficient $(n-p)! [ w^{n-p} ] C_{r-d}(w)$ is given by Equation~(\ref{eq:asympt-bernhard-exces}), with a suitable constant:
\begin{eqnarray*}
(n-p)! [ w^{n-p} ] C_{r-d}(w) &\sim&
K_{r-d} \cdot n^{n-p+\frac{3(r-d)-1}{2}}.
\end{eqnarray*}
We see that the dominant term of the sum $\sum_{d=-1}^{r+1} \gamma_{d,p} \left[ w^{n-p} \right] C_{r-d}(w)$ is obtained for $d=-1$, which gives, for some suitable constant $K_f$ that can be explicitly computed,
\[
\mathds{P}(f) \sim K_f \cdot n^{\frac{r+3}{2}-p} \, e^{-n}.
\]
\paragraph{Two large parts}
This paragraph contains the proof of Proposition~\ref{th:fixed_excess_and_two_large_parts}.
By symmetry, we can assume that $p \leq n-p$.
Recall that
\begin{eqnarray*}
\mathds{P} (f) =
\frac{m!}{n^{2m}} \; \sum_{d=-1}^{r+1} p! [v^p] C_d(v) \cdot (n-p)! [ w^{n-p} ] C_{r-d}(w),
\end{eqnarray*}
but now the coefficients $\left[ v^p \right] C_d(v)$ and $ \left[ w^{n-p} \right] C_{r-d}$ can both be obtained from the expansion~(\ref{eq:asympt-bernhard-exces}) ($p$ \emph{and} $n-p$ are large); moreover we are dealing with a fixed number of terms:
\begin{eqnarray*}
\mathds{P}(f) &\sim &
\frac{m!}{n^{2m}} \; \sum_{d=-1}^{r+1} K_d \, p^{p+\frac{3d-1}{2}} K_{r-d} \, (n-p)^{n-p+\frac{3(r-d)-1}{2}}
\\ &\sim &
\frac{m!}{n^{2m}} \; p^{p-\frac{1}{2}} (n-p)^{n-p+\frac{3r-1}{2}}
\sum_{d=-1}^{r+1} K_d \, K_{r-d} \,
\left( \frac{p}{n-p} \right)^{\frac{3d}{2}}
\\ &\sim &
\sqrt{\frac{2\pi \,n}{p(n-p)}} \,\frac{e^{-n}}{n^{n+r}} \, (n-p)^{n+\frac{3r}{2}} \, \left( \frac{p}{n-p} \right)^{p}
\sum_{d=-1}^{r+1} K_d \, K_{r-d} \,
\left( \frac{p}{n-p} \right)^{\frac{3d}{2}} .
\end{eqnarray*}
Now we have to find the behaviour of the sum in the above expression, and we see that there are two different cases:
\begin{enumerate}
\item
If $p$ and $n$ are proportional, then $p/(n-p) = \Theta(1)$ (for simplification we set $p = \gamma n$ and assume $\gamma$ is constant, but the sequel only requires that $\gamma = \Theta(1)$);
all terms $\left( \frac{p}{n-p} \right)^{\frac{3d}{2}}$ contribute to a constant factor, and the sum itself is constant, hence for a suitable constant \footnote{Here and in what follows, the constant denoted by $K$ may vary and may depend on~$r$ -- but it is always possible to get an explicit, though cumbersome, expression for it.} $K$ we have
\[
\mathds{P}(f) \sim K \, n^{\frac{r-1}{2}} \, \beta^{n} \, e^{-n}
\qquad \text{with} \qquad
\beta = (1-\gamma)^{1-\gamma}\,\gamma^{\gamma}.
\]
\item
If $p/(n-p)=o(1)$ i.e. $p=o(n)$, then the first term of the sum dominates: Up to a constant
multiplicative factor, the whole sum is asymptotically equivalent to $\left( \frac{n-p}{p}
\right)^{\frac{3}{2}}$. Setting $\varepsilon = p/n$ we get
\begin{eqnarray*}
\mathds{P}(f) &\sim &
K\, e^{-n} \, n^{\frac{r-1}{2}} \,\varepsilon^{n\varepsilon-1} \, (1-\varepsilon)^{(1-\varepsilon) n} .
\end{eqnarray*}
\end{enumerate}
\subsubsection{Large excess} \label{sec:two_blocks_large_excess}
This section contains the proof of Proposition~\ref{th:two_parts_large_excess}.
Let~$C_{n+r,n}$ denote the number of connected multigraphs
with~$n$ vertices and excess~$r$.
For this proof, we rewrite the asymptotics of $C_{n+r,n}$
when~$r \rightarrow \infty$ and~$(r+n) e^{-2 r/n} \rightarrow \infty$,
already derived in Theorem~\ref{th:multigraphs-large-excess}, as
\begin{equation} \label{eq:cmg}
C_{n+r,n}
=
\frac{ \alpha(\zeta_{\frac{r}{n}}) }{ \sqrt{2\pi} (2 \zeta_{\frac{r}{n}})^r }
\left(
\frac{\cosh \zeta_{\frac{r}{n}}}{1 + \frac{r}{n}}
\right)^n
n^{n+r-\frac{1}{2}}
\left(1 + \mathcal{O} \left( \left( r e^{-2r/n} \right)^{-\frac{1}{2}+\epsilon} \right) \right)
\end{equation}
for any small~$\epsilon > 0$,
where~$\zeta_{\frac{r}{n}} \coth \zeta_{\frac{r}{n}} = 1+\frac{r}{n}$
and~$\alpha(\zeta) = \frac{e^{2\zeta}-1-2\zeta}{\sqrt{\zeta (e^{4\zeta}-1-4\zeta e^{2\zeta})}}$.
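All quantities in~(\ref{eq:cmg}) are driven by the implicit solution $\zeta_{r/n}$. Since $\zeta \coth \zeta$ increases from $1$ to $\infty$ on $(0,\infty)$, the equation can be solved by bisection; the following Python sketch (an illustration, not part of the argument) computes $\zeta(x)$ for a given ratio $x = r/n$:

```python
import math

def zeta_of(x, iters=200):
    """Unique positive root of zeta * coth(zeta) = 1 + x, for x > 0.

    zeta*coth(zeta) increases from 1 (as zeta -> 0) to infinity,
    so a sign-change bracket always exists and bisection converges.
    """
    f = lambda z: z / math.tanh(z) - (1 + x)
    lo, hi = 1e-15, 1.0
    while f(hi) < 0:      # enlarge the bracket until it contains the root
        hi *= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

z = zeta_of(1.0)
print(z, z / math.tanh(z))  # the second value equals 2.0 up to rounding
```

One also checks numerically that $\zeta(x) > \sqrt{x(1+x)}$ for $x>0$, an inequality used later in the analysis of $g(a)$.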
We are interested here in the probability that
a random~$2$-Xor expression
with~$n$ variables and~$m$ clauses
computes the Boolean function
with two blocks of sizes~$\gamma n$ and~$(1-\gamma) n$
\[
x_1 \sim \ldots \sim x_{\gamma n},\ x_{\gamma n + 1} \sim \ldots \sim x_n.
\]
This probability can be expressed as
\begin{align*}
\proba(2\; {\mathrm {blocks}})
&=
\frac{m!}{n^{2m}} \sum_{d=-1}^{r+1} C_{\gamma n + d, \gamma n} C_{(1-\gamma)n + r-d, (1-\gamma)n}
\\
&=
\frac{m!}{n^{2m}} \sum_{d=-1}^{r+1} A_d
\end{align*}
where~$r = m-n$ is the global excess of the multigraphs
representing the random expression
and $d$ (resp.~$r-d$) the excess of its first (resp. second) connected component.
The main ingredient for the proof of Proposition~\ref{th:two_parts_large_excess}
is the Laplace method.
It involves first a reduction to a problem of real analysis,
then the analysis of a real function.
Those steps are detailed in the next two paragraphs.
\paragraph{Reduction to a real analysis problem}
We assume that the excess~$r$ grows proportionally to~$n$,
so~$r = (\alpha-1) n$ where~$\alpha=\frac{m}{n} > 1$ is a constant.
In that case,
\begin{align*}
\frac{m!}{n^{2m}}
&=
\frac{(n+r)!}{n^{2(n+r)}}
\\
&\sim
\frac{(n+r)^{n+r} \sqrt{2 \pi (n+r)}}{n^{2(n+r)} e^{n+r}}
\\
&\sim
\frac{\left( 1+\frac{r}{n} \right)^{n+r+1/2} \sqrt{2 \pi n}}{n^{n+r} e^{n+r}}
\\
&\sim
\sqrt{2\pi} \, \frac{\alpha^{\alpha n + 1/2}}{e^{\alpha n}} n^{-\alpha n + 1/2}
\end{align*}
Let us summarize some notation:\\
\begin{center}
\begin{tabular}{ll}
\text{total number of vertices} & $n$ \\
\text{size of the first and smallest block} & $p = \gamma n$ \\
\text{size of the second block} & $n-p = (1-\gamma) n$ \\
\text{total excess} & $r = c n$ \\
\text{excess of the first block} & $d = a r$ \\
\text{excess of the second block} & $r-d = (1-a) r$
\end{tabular}
\end{center}
The expression of~$A_d$ is quite complicated,
so, to keep track of all the factors in the product,
we write them down in the following array:
\[
\begin{array}{cc}
C_{\gamma n + a r, \gamma n} &
C_{(1-\gamma)n + (1-a) r, (1-\gamma)n}
\\ \hline
\gamma n &
(1-\gamma)n
\\
a r &
(1-a) r
\\ \hline
\alpha(\zeta_{\frac{a c}{\gamma}}) &
\alpha(\zeta_{\frac{(1-a) c}{1-\gamma}})
\\
\cosh(\zeta_{\frac{a c}{\gamma}})^{\gamma n} &
\cosh(\zeta_{\frac{(1-a) c}{1-\gamma}})^{(1-\gamma) n}
\\
\scriptstyle \gamma^{(\gamma+a c) n - 1/2} n^{(\gamma+a c) n -
1/2} &
\scriptstyle (1-\gamma)^{(1-\gamma+(1-a) c) n - 1/2} n^{(1-\gamma+(1-a) c) n - 1/2}
\\
2^{a c n} \zeta_{\frac{a c}{\gamma}}^{a c n} &
2^{(1-a) c n} \zeta_{\frac{(1-a) c}{1-\gamma}}^{(1-a) c n}
\\
\left(1+\frac{a c}{\gamma}\right)^{\gamma n} & \left(1+\frac{(1-a) c}{1-\gamma}\right)^{(1-\gamma) n}
\end{array}.
\]
\bigskip
Now write $A_d = C_{\gamma n + a r, \gamma n} C_{(1-\gamma)n + (1-a) r, (1-\gamma)n}$ as
\begin{eqnarray*}
A_d &\sim&
\frac{\left( \gamma^{\gamma} (1-\gamma)^{1-\gamma} \right)^n n^{(c+1) n - 1}}{2\pi \sqrt{\gamma(1-\gamma)} 2^{c n}}
\alpha(\zeta_{\frac{a c}{\gamma}}) \alpha(\zeta_{\frac{(1-a) c}{1-\gamma}})
\\ && \qquad \times
\left(
\frac{ \cosh(\zeta_{\frac{a c}{\gamma}})^{\gamma} \cosh(\zeta_{\frac{(1-a) c}{1-\gamma}})^{1-\gamma} }{ \left( 1 + \frac{ a c}{\gamma} \right)^\gamma \left( 1 + \frac{(1-a) c}{1-\gamma} \right)^{1-\gamma} }
\left( \frac{\gamma}{\zeta_{\frac{a c}{\gamma}}} \right)^{a c}
\left( \frac{1-\gamma}{\zeta_{\frac{(1-a) c}{1-\gamma}}} \right)^{(1-a)c}
\right)^n
\end{eqnarray*}
which gives
\begin{align*}
\frac{m!}{n^{2m}} A_d
&\sim
\frac{\left( \gamma^{\gamma} (1-\gamma)^{1-\gamma} \right)^n
(c+1)^{(c+1) n + \frac{1}{2}}}{\sqrt{2 \pi n} \sqrt{ \gamma (1-\gamma)} 2^{c n} e^{(c+1) n}}
\alpha(\zeta_{\frac{a c}{\gamma}}) \alpha(\zeta_{\frac{(1-a) c}{1-\gamma}})
g(a)^n
\\
&\sim
\sqrt{\frac{c+1}{\gamma(1-\gamma)}}
\left(
\frac{ \gamma^\gamma (1-\gamma)^{1-\gamma} (c+1)^{(c+1)} }{ 2^{c} e^{c+1}}
\right)^n
\frac{\alpha(\zeta_{\frac{a c}{\gamma}}) \alpha(\zeta_{\frac{(1-a) c}{1-\gamma}})}{\sqrt{2 \pi n}}
g(a)^n
\end{align*}
where
\begin{align*}
g(a)
&=
\left( \frac{ \cosh(\zeta_{\frac{a c}{\gamma}}) }{ 1 + \frac{a c}{\gamma} } \right)^\gamma
\left( \frac{ \cosh(\zeta_{\frac{(1-a) c}{1-\gamma}}) }{ 1 + \frac{(1-a) c}{1-\gamma} } \right)^{1-\gamma}
\left( \frac{\gamma}{\zeta_{\frac{a c}{\gamma}}} \right)^{a c}
\left( \frac{1-\gamma}{\zeta_{\frac{(1-a) c}{1-\gamma}}} \right)^{(1-a) c},
\\
\zeta_{\frac{a c}{\gamma}} \coth \zeta_{\frac{a c}{\gamma}}
&=
1 + \frac{a c}{\gamma}, \qquad \alpha(\zeta_{\frac{a c}{\gamma}}) =
\frac{e^{2\zeta_{\frac{a c}{\gamma}}}-1-2\zeta_{\frac{a c}{\gamma}}}{\sqrt{\zeta_{\frac{a c}{\gamma}} (e^{4\zeta_{\frac{a c}{\gamma}}}-1-4\zeta_{\frac{a c}{\gamma}} e^{2\zeta_{\frac{a c}{\gamma}}})}},
\\
\zeta_{\frac{(1-a) c}{1-\gamma}} \coth \zeta_{\frac{(1-a) c}{1-\gamma}}
&=
1 + \frac{(1-a) c}{1-\gamma}, \qquad \alpha(\zeta_{\frac{(1-a) c}{1-\gamma}}) =
\frac{e^{2\zeta_{\frac{(1-a) c}{1-\gamma}}}-1-2\zeta_{\frac{(1-a) c}{1-\gamma}}}{\sqrt{\zeta_{\frac{(1-a) c}{1-\gamma}} (e^{4\zeta_{\frac{(1-a) c}{1-\gamma}}}-1-4\zeta_{\frac{(1-a) c}{1-\gamma}} e^{2\zeta_{\frac{(1-a) c}{1-\gamma}}})}}.
\end{align*}
We will see in the next paragraph that
the dominant part of the sum~$\sum_{d=-1}^{r+1} A_d$
is reached for a compact range of~$a$ included in~$]0,1[$.
This justifies the use of the asymptotic formula~(\ref{eq:cmg}).
Furthermore, the error term of~$A_d$
\[
\left( 1 + \mathcal{O} \left( \left( a r e^{- a c/\gamma} \right)^{-\frac{1}{2}+\epsilon} \right) \right)
\left( 1 + \mathcal{O} \left( \left( (1-a) r e^{-(1-a) c / (1-\gamma)} \right)^{-\frac{1}{2}+\epsilon} \right) \right)
\]
becomes uniform in~$a$, so
\[
\proba(2\; {\mathrm {blocks}})
\sim
\sqrt{\frac{c+1}{\gamma(1-\gamma)}}
\left(
\frac { \gamma^\gamma (1-\gamma)^{1-\gamma} (c+1)^{c+1} }
{ 2^{c} e^{c+1}}
\right)^n
\frac{1}{\sqrt{2 \pi n}}
\sum_{d=0}^r
\alpha(\zeta_{\frac{a c}{\gamma}}) \alpha(\zeta_{\frac{(1-a) c}{1-\gamma}})
g \left( \frac{d}{r} \right)^n.
\]
\paragraph{Analysis of~$g(a)$}
We prove here that~$g(a)$ has a unique maximum~$a_0$ in~$[0,1]$
such that~$0 < a_0 < 1$.
To do so, we use the concavity of~$\log(g(a))$.
The \emph{Laplace's method for sums}
described in~\cite{FlajoletSedgewick} p.761
then leads to
\begin{align*}
\sum_{d=0}^r
\alpha(\zeta_{\frac{a c}{\gamma}}) \alpha(\zeta_{\frac{(1-a) c}{1-\gamma}})
g \left( \frac{d}{r} \right)^n
&\sim
\sqrt{\frac{2 \pi}{\lambda n}}
\alpha(\zeta_{\frac{a_0 c}{\gamma}}) \alpha(\zeta_{\frac{(1-a_0) c}{1-\gamma}})
g(a_0)^n
\end{align*}
where~$\lambda = - \frac{g''(a_0)}{g(a_0)}$, so
\begin{align*}
\proba(2\; {\mathrm {blocks}})
&\sim
\sqrt{\frac{c+1}{\gamma(1-\gamma) \lambda}}
\left(
\frac{ \gamma^\gamma (1-\gamma)^{1-\gamma} (c+1)^{c+1} }{ 2^c e^{c+1}}
\right)^n
\frac{\alpha(\zeta_{\frac{a_0 c}{\gamma}}) \alpha(\zeta_{\frac{(1-a_0) c}{1-\gamma}})}{n}
g(a_0)^n .
\end{align*}
The proof of the asymptotics is now reduced
to a real analysis problem: Proving that
\begin{align*}
g(a)
&=
\left( \frac{ \cosh(\zeta_{\frac{a c}{\gamma}}) }{ 1 + \frac{a c}{\gamma} } \right)^\gamma
\left( \frac{ \cosh(\zeta_{\frac{(1-a) c}{1-\gamma}}) }{ 1 + \frac{(1-a) c}{1-\gamma} } \right)^{1-\gamma}
\left( \frac{\gamma}{\zeta_{\frac{a c}{\gamma}}} \right)^{a c}
\left( \frac{1-\gamma}{\zeta_{\frac{(1-a) c}{1-\gamma}}} \right)^{(1-a) c}
\\
&=
\left(
\frac{ \cosh \zeta_{\frac{a c}{\gamma}} }{ \zeta_{\frac{a c}{\gamma}}^{x_1} }
\frac{ \gamma^{x_1} }{ 1+x_1 }
\right)^\gamma
\left(
\frac{ \cosh \zeta_{\frac{(1-a) c}{1-\gamma}} }{ \zeta_{\frac{(1-a) c}{1-\gamma}}^{x_2} }
\frac{ (1-\gamma)^{x_2} }{ 1+x_2 }
\right)^{1-\gamma},
\end{align*}
where~$x_1 = \frac{ a c }{ \gamma }$ and~$x_2 = \frac{ (1-a) c }{ 1-\gamma }$,
has a unique maximum in the interior of~$]0,1[$
for all~$c > 0$ and~$\gamma \in ]0, 1/2]$.
Let~$\zeta(x)$ be defined implicitly as
\[
\zeta \coth \zeta = 1 + x,
\]
then
\begin{align*}
\frac{\zeta'}{\zeta} &= \frac{1}{\zeta^2 - x (1+x)}, \\
\zeta' \tanh \zeta &= \frac{\zeta^2}{(\zeta^2 - x (1+x))(1+x)},
\end{align*}
so
\begin{align*}
\frac{d}{d x} \log
\left(
\frac{ \cosh \zeta}{ \zeta^{x} }
\frac{ \gamma^{x} }{ 1+x }
\right)
&=
\zeta' \tanh(\zeta) - x \frac{\zeta'}{\zeta} - \frac{1}{1+x} + \log(\gamma) - \log(\zeta)
\\
&=
\frac{\zeta^2}{(\zeta^2 - x (1+x))(1+x)} - \frac{x}{\zeta^2 - x (1+x)} - \frac{1}{1+x} + \log \left( \frac{\gamma}{\zeta} \right)
\\
&=
\frac{\zeta^2 - x(1+x)}{(\zeta^2 - x (1+x))(1+x)} - \frac{1}{1+x} + \log \left( \frac{\gamma}{\zeta} \right)
\\
&=
\log \left( \frac{\gamma}{\zeta} \right)
\end{align*}
and
\begin{align*}
\frac{d}{d x} \log
\left(
\frac{ \cosh \zeta}{ \zeta^{x} }
\frac{ (1-\gamma)^{x} }{ 1+x }
\right)
&=
\log \left( \frac{1-\gamma}{\zeta} \right).
\end{align*}
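The cancellation above can be double-checked numerically: the derivative of $\log\big(\tfrac{\cosh\zeta}{\zeta^{x}}\,\tfrac{\gamma^{x}}{1+x}\big)$ should coincide with $\log(\gamma/\zeta)$. A small Python sketch (illustrative only; the bisection solver for $\zeta$ and the helper names are ours) compares a central finite difference with the claimed closed form:

```python
import math

def zeta_of(x, iters=200):
    # Unique positive root of zeta * coth(zeta) = 1 + x (bisection).
    f = lambda z: z / math.tanh(z) - (1 + x)
    lo, hi = 1e-15, 1.0
    while f(hi) < 0:
        hi *= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

def h(x, gamma):
    # log( cosh(zeta)/zeta^x * gamma^x/(1+x) ), the function differentiated above.
    z = zeta_of(x)
    return (math.log(math.cosh(z)) - x * math.log(z)
            + x * math.log(gamma) - math.log(1 + x))

gamma, x, eps = 0.3, 0.8, 1e-6
numeric = (h(x + eps, gamma) - h(x - eps, gamma)) / (2 * eps)
claimed = math.log(gamma / zeta_of(x))
print(numeric, claimed)  # the two derivatives agree
```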
Therefore,
\begin{align*}
\frac{d}{d a} \log(g(a))
&=
\gamma
\left( \frac{d}{d a} x_1 \right)
\frac{d}{d x_1} \log
\left(
\frac{ \cosh \zeta_{\frac{a c}{\gamma}}}{ \zeta_{\frac{a c}{\gamma}}^{x_1} }
\frac{ \gamma^{x_1} }{ 1+x_1 }
\right)\\
& \hspace{1.5cm}
+
(1-\gamma)
\left( \frac{d}{d a} x_2 \right)
\frac{d}{d x_2} \log
\left(
\frac{ \cosh \zeta_{\frac{(1-a) c}{1-\gamma}}}{ \zeta_{\frac{(1-a) c}{1-\gamma}}^{x_2} }
\frac{ (1-\gamma)^{x_2} }{ 1+x_2 }
\right)
\\
&=
c \log \left( \frac{\gamma}{\zeta_{\frac{a c}{\gamma}}} \right) - c \log \left( \frac{1-\gamma}{\zeta_{\frac{(1-a) c}{1-\gamma}}} \right)
\\
&=
c \log \left( \frac{\gamma}{1-\gamma} \frac{\zeta_{\frac{(1-a) c}{1-\gamma}}}{\zeta_{\frac{a c}{\gamma}}} \right)
\end{align*}
and
\begin{align*}
\frac{1}{c} \frac{d^2}{(d a)^2} \log(g(a))
&=
\frac{d}{d a} \log \left( \frac{\zeta_{\frac{(1-a) c}{1-\gamma}}}{\zeta_{\frac{a c}{\gamma}}} \right)
\\
&=
\left( \frac{d}{d a} x_2 \right)
\frac{ \zeta_{\frac{(1-a) c}{1-\gamma}}' }{ \zeta_{\frac{(1-a) c}{1-\gamma}} }
-
\left( \frac{d}{d a} x_1 \right)
\frac{ \zeta_{\frac{a c}{\gamma}}' }{ \zeta_{\frac{a c}{\gamma}} }
\\
&=
- \frac{c}{1-\gamma}
\frac{1}{ \zeta_{\frac{(1-a) c}{1-\gamma}}^2 - x_2 (1+x_2) }
-
\frac{c}{\gamma}
\frac{1}{ \zeta_{\frac{a c}{\gamma}}^2 - x_1 (1+x_1) }
\end{align*}
which is negative because for all~$x>0$,
\[
\zeta(x) > \sqrt{x(1+x)}.
\]
Therefore, $\frac{d}{d a} \log(g(a))$ is decreasing on~$]0,1[$.
Let us summarize some values:
\[
\begin{array}{c|ccc}
a & 0 & \gamma & 1\\ \hline
x_1 = \frac{a c}{\gamma} & 0 & c & \frac{c}{\gamma} \\
x_2 = \frac{(1-a) c}{1-\gamma} & \frac{c}{1-\gamma} & c & 0\\
\zeta_{\frac{a c}{\gamma}} & 0 & \zeta(c) & \zeta \left(\frac{c}{\gamma} \right)\\
\zeta_{\frac{(1-a) c}{1-\gamma}} & \zeta \left( \frac{c}{1-\gamma} \right) & \zeta(c) & 0\\
\frac{\zeta_{\frac{(1-a) c}{1-\gamma}}}{\zeta_{\frac{a c}{\gamma}}} & +\infty & & 0\\
\frac{d}{d a} \log(g(a)) & + \infty & & -\infty
\end{array}
\]
so $\frac{d}{d a} \log(g(a))$ has a zero on~$]0,1[$,
and~$g(a)$ has a unique maximum in~$]0,1[$.
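The unimodality just established is easy to observe numerically. The following Python sketch (purely illustrative; the parameter values $c=0.5$ and $\gamma=0.3$ are arbitrary choices and the helper names are ours) evaluates $\log g(a)$ on a grid, locates the interior maximum $a_0$, and checks concavity:

```python
import math

def zeta_of(x, iters=60):
    # Unique positive root of zeta * coth(zeta) = 1 + x (bisection).
    f = lambda z: z / math.tanh(z) - (1 + x)
    lo, hi = 1e-15, 1.0
    while f(hi) < 0:
        hi *= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

def log_g(a, c, gamma):
    # log of g(a) as defined in the analysis above.
    x1, x2 = a * c / gamma, (1 - a) * c / (1 - gamma)
    z1, z2 = zeta_of(x1), zeta_of(x2)
    return (gamma * (math.log(math.cosh(z1)) - math.log(1 + x1))
            + (1 - gamma) * (math.log(math.cosh(z2)) - math.log(1 + x2))
            + a * c * math.log(gamma / z1)
            + (1 - a) * c * math.log((1 - gamma) / z2))

c, gamma = 0.5, 0.3
grid = [i / 400 for i in range(1, 400)]   # stay inside the open interval ]0,1[
vals = [log_g(a, c, gamma) for a in grid]
a0 = grid[vals.index(max(vals))]
print(a0)  # interior maximum, strictly between 0 and 1
```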
\subsection{Number of blocks proportional to $n$}
A general approach via Theorem~\ref{th:proba-f} seems difficult, so we assume a certain regularity:
Let $f$ denote a Boolean function such that the associated integer partition is of the form
$\mathbf{i}(f)=(0,\dots,0,n/g,0,\dots)$, with $g\ge 2$.
Note that the corresponding multigraph has to have at least $m=(g-1)\cdot\frac{n}{g}$ edges. Thus, in contrast to the previously discussed cases, the excess is no longer bounded from below by a constant as $n\to\infty$: it can be as small as $r = -\frac{n}{g}$. Such functions may now appear even close to the threshold $1/2$.
In Proposition~\ref{th:exact_proportional}, we derive an exact result for those functions; an asymptotic result is stated in Proposition~\ref{th:asympt_proportional}.
\begin{proposition} \label{th:exact_proportional}
The number of expressions $E_{m,n}(f)$ with $n$ variables and $m$ clauses computing a function $f$ with
associated integer partition representation of the form $\mathbf{i}(f)=(0,\dots,0,n/g,0,\dots)$,
i.e. $n/g$ blocks of size $g$, is
given by
\begin{equation} \label{explicit_form}
E_{m,n}(f)= m!4^m (g!)^{\frac{n}{g}}[z^m]\Big(\sum_{j=1}^{g}\frac{(-1)^{j-1}}j e_{j,g-j}(z)\Big)^{\frac{n}g}
\end{equation}
with
\[
e_{j,n}(z)
=\sum_{\substack{\sum_{\ell=1}^{j}k_\ell=n\\k_\ell\ge 0}}
\frac{\exp\(\sum_{\ell=1}^j\frac{(k_\ell+1)^2z}2\)}{\prod_{\ell=1}^{j} (k_\ell+1)!}.
\]
\end{proposition}
\begin{remark}
One might be tempted to use again Theorem~\ref{th:proba-f}. For Boolean functions $f$
having associated integer partition of the form $\mathbf{i}(f)=(0,\dots,0,n/g,0,\dots)$ with
$g\ge 2$ this yields
\[
E_{m,n} (f) = m!4^m
\sum_{\substack{\sum_{k=1}^{q}r_k=m-n\\r_k\ge -1}}
|B_1|! \cdots |B_q|! \left[ v_1^{|B_1|} \cdots v_q^{|B_q|} \right]
\prod_{j=1}^{q}C_{r_j}(v_j),
\]
where $B_1, \ldots, B_q$ are the blocks of the set partition, or equivalently the components of the Boolean function, with respective excesses $r_1, \ldots, r_q$.
Here the number of blocks is $q=\frac{n}g$ and all of them have size~$g$; hence
\[
E_{m,n}(f) = 4^m
(g!)^{\frac{n}{g}}\sum_{\substack{\sum_{k=1}^{q}r_k=m-n\\r_k\ge
-1}}\prod_{j=1}^{\frac{n}{g}} [v^{g} ]C_{r_j}(v).
\]
However, it seems difficult to get enough information on $C_{r_j}(v)$ to derive expression
\eqref{explicit_form} from this formula.
\end{remark}
\begin{proof}
Instead of analyzing the coefficients $C_{r}(v)$ of $C(z,v)$ we use directly the relation
$C(z,v)=\log M(z,v)$.
Since $\mathbf{i}(f)=(0,\dots,0,n/g,0,\dots)$, with $g\ge 2$, we have
\begin{eqnarray*}
E_{m,n}(f) &=& m![z^m]\phi_{\mathbf i}(z)
\\ &=&
m![z^m]\prod_{\ell\ge 1} \big( \ell! [ v^\ell ] C(4z,v) \big)^{i_\ell}
\\ &=&
m!4^m(g!)^{\frac{n}{g}}[z^m]\Big([v^g]\log M(z,v)\Big)^{\frac{n}g}.
\end{eqnarray*}
Let $$\hat{\MG}(z,v)=(M(z,v)-1)/v=\sum_{n\ge 0}e^{\frac{(n+1)^2z}2}\frac{v^n}{(n+1)!},$$ such that
\[
\log M(z,v)=\sum_{j\ge 1}\frac{(-1)^{j-1}}j v^j\hat{\MG}^j(z,v).
\]
We get
\begin{align*}
E_{m,n}(f)
&= m! [z^m]\prod_{\ell\ge 1} \big([\frac{v^\ell}{\ell!}]C(4z,v)\big)^{i_\ell}
= m!4^m(g!)^{\frac{n}{g}}[z^m]\Big([v^g]\log M(z,v)\Big)^{\frac{n}g}
\\
&= m!4^m(g!)^{\frac{n}{g}}[z^m]\Big(\sum_{j=1}^{g}\frac{(-1)^{j-1}}j[v^{g-j}]\hat{\MG}^j(z,v)\Big)^{\frac{n}g}.
\end{align*}
We can expand $\hat{\MG}^j(z,v)$ in terms of the functions $e_{j,n}(z)$ as defined above by expanding the $j$-th power of the series:
\[
\hat{\MG}^j(z,v)=\bigg(\sum_{n\ge 0}e^{\frac{(n+1)^2z}2}\frac{v^n}{(n+1)!}\bigg)^j
= \sum_{n\ge 0}e_{j,n}(z)v^n.
\]
Extraction of coefficients then directly leads to the stated result.
\end{proof}
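As a numerical cross-check of this expansion (illustrative Python with arbitrary values $z=0.3$, $j=3$, $n=4$; the helper names are ours), the coefficient of $v^n$ in the truncated power $\hat{\MG}^j(z,v)$ can be compared with the direct sum over compositions $k_1+\dots+k_j=n$:

```python
import math
from itertools import product

def mhat_coeffs(z, K):
    # Truncated series of Mhat(z,v) = sum_{k>=0} e^{(k+1)^2 z/2} v^k / (k+1)!
    return [math.exp((k + 1) ** 2 * z / 2) / math.factorial(k + 1)
            for k in range(K + 1)]

def poly_pow(coeffs, j, K):
    # j-th power of a truncated series in v, keeping degrees <= K.
    out = [1.0] + [0.0] * K
    for _ in range(j):
        new = [0.0] * (K + 1)
        for a, ca in enumerate(out):
            for b, cb in enumerate(coeffs):
                if a + b <= K:
                    new[a + b] += ca * cb
        out = new
    return out

def e_jn(z, j, n):
    # Direct sum over compositions k_1 + ... + k_j = n, k_l >= 0.
    total = 0.0
    for ks in product(range(n + 1), repeat=j):
        if sum(ks) == n:
            w = math.exp(sum((k + 1) ** 2 for k in ks) * z / 2)
            for k in ks:
                w /= math.factorial(k + 1)
            total += w
    return total

z, j, n = 0.3, 3, 4
direct = poly_pow(mhat_coeffs(z, n), j, n)[n]
print(direct, e_jn(z, j, n))  # the two numbers coincide
```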
\begin{coroll}
Under the assumptions of Proposition~\ref{th:exact_proportional}, in the case $g=2$ we get
\[
\mathds{P}(f)
=\frac{1}{n^{2m}}\sum_{\ell=0}^{\frac{n}2}\binom{\frac{n}2}\ell \big(\ell+\frac{n}2\big)^{m}(-1)^{\frac{n}2-\ell},
\]
and for $g=3$
\[
\mathds{P}(f)
=\frac{1}{n^{2m}}\sum_{\ell=0}^{\frac{n}3}\sum_{j=0}^\ell \binom{\frac{n}3}\ell \binom{\ell}j (\frac{n}2+\ell+2j)^{m}(-3)^{\ell-j} 2^{\frac{n}3-\ell}.
\]
\end{coroll}
\begin{proof}
Using Proposition~\ref{th:exact_proportional}, Equation~\eqref{explicit_form}, we obtain first
\begin{align*}
E_{m,n}(f)&= m!4^m (2!)^{\frac{n}{2}}[z^m]\Big(\frac{1}{2}e^{2z}-\frac12 e^z\Big)^{\frac{n}2}
=m!4^m[z^m]\Big(e^{2z}-e^z\Big)^{\frac{n}2} \\
&=m!4^m[z^m]e^{\frac{zn}2}\Big(e^{z}-1\Big)^{\frac{n}2}.
\end{align*}
The expansion of $\Big(e^{z}-1\Big)^{\frac{n}2}$ by the binomial theorem and the extraction of coefficients
leads then to the stated result after dividing by the total number of expressions $(4n^2)^m$. We
proceed for $g=3$ in a similar way:
\begin{equation*}
E_{m,n}(f)= m!4^m (3!)^{\frac{n}{3}}[z^m]\left(\frac{1}{6} e^{\frac{9z}2}-\frac12
e^{\frac{5z}2}+\frac13e^{\frac{3z}2}\right)^{\frac{n}3}.
\end{equation*}
In order to extract coefficients we use
\[
\left(\frac{1}{6}e^{\frac{9z}2}-\frac12
e^{\frac{5z}2}+\frac13 e^{\frac{3z}2}\right)^{\frac{n}3}
=e^{\frac{nz}2}\Big(\frac{1}{6} e^{3z}-\frac12 e^{z}+\frac13 \Big)^{\frac{n}3},
\]
and expand twice using the binomial theorem. This leads to the stated result after a few elementary computations.
\end{proof}
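For small parameters the closed form for $g=2$ can be confronted with a direct, exact series extraction. The Python sketch below (purely illustrative, exact rational arithmetic; the helper names are ours) checks $m!\,[z^m](e^{2z}-e^z)^{n/2}$ against the binomial sum for $n=4$, $m=5$:

```python
import math
from fractions import Fraction

def closed_form(n, m):
    # m! [z^m] (e^{2z} - e^z)^{n/2} via the binomial theorem (case g = 2).
    N = n // 2
    return sum(math.comb(N, l) * (-1) ** (N - l) * (l + N) ** m
               for l in range(N + 1))

def series_form(n, m):
    # The same coefficient by exact truncated-series multiplication:
    # [z^k] (e^{2z} - e^z) = (2^k - 1)/k!.
    N = n // 2
    base = [Fraction(2 ** k - 1, math.factorial(k)) for k in range(m + 1)]
    poly = [Fraction(1)] + [Fraction(0)] * m
    for _ in range(N):
        poly = [sum(poly[i] * base[j - i] for i in range(j + 1))
                for j in range(m + 1)]
    return poly[m] * math.factorial(m)

print(closed_form(4, 5), series_form(4, 5))  # both equal 570
```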
When considering asymptotics, we observed in Sections~\ref{singleblock} and~\ref{twoblocks}
that the asymptotic behaviour differs depending on whether the excess is constant or
large, \emph{i.e.} tending to infinity. For functions with two blocks there are also several
phases in the case of large excess. This observation is misleading, however, because it is in fact not the
excess but rather the distance from the minimal possible excess which determines the behaviour. In
the case considered in this section, we will therefore write $m=\frac{g-1}{g}\cdot n +\kappa_n$,
with $\kappa_n\ge 0$, because the minimal excess is $-n/g$. Furthermore, it turns out that there
is no qualitative difference between constant and large $\kappa_n$, in the sense that both cases
can be covered by one single formula. This holds, however, only up to the interesting range, which
has been shown to be $\kappa_n=\Theta(n^{2/3})$ in \cite{CrDa04}.
The expression for $E_{m,n}(f)$ given in Equation~\eqref{explicit_form} involves a fixed
function $G(z)=[v^g]\log M(z,v)$ raised to a large power $n/g$; e.g., for $g=2$ we have
$G(z)=\frac12e^{2z}-\frac12 e^z$ and for $g=3$ we have $G(z)=\frac{1}{6}e^{\frac{9z}2}-\frac12
e^{\frac{5z}2}+\frac13e^{\frac{3z}2}$.
By definition of $\log M(z,v)=\sum_{r\ge -1}z^rC_r(zv)$, the function $G(z)$ is of the form
$G(z)=\sum_{\ell\ge g-1}a_\ell z^\ell$ for certain coefficients $a_\ell$.
Thus, in case of constant $\kappa_n$, Equation~\eqref{explicit_form} involves a sum with a bounded range
depending only on $\kappa_n$:
\begin{align*}
E_{m,n}(f)&= m!4^m (g!)^{\frac{n}{g}}[z^m]G(z)^{\frac{n}g}
= m!4^m(g!)^{\frac{n}{g}} [z^{\frac{g-1}{g}\cdot n +\kappa_n}]\left(\sum_{\ell\ge g-1}a_\ell
z^\ell\right)^{\frac{n}g} \\
&=m!4^m(g!)^{\frac{n}{g}} [z^{\kappa_n}]\left(\sum_{\ell\ge 0}\tilde{a}_{\ell} z^\ell\right)^{\frac{n}g},
\end{align*}
with $\tilde{a}_\ell=a_{g-1+\ell}$ for $\ell\ge 0$.
Using
\begin{equation*}
\left(\sum_{\ell\ge 0}\tilde{a}_{\ell} z^\ell\right)^{\frac{n}g}
= \sum_{i\ge 0}z^i\sum_{\substack{k_j\ge0\\ \sum_{j=1}^{\frac{n}g}k_j=i}}\binom{n/g}{k_1,\dots,k_{n/g}}\prod_{s=1}^{n/g} \tilde{a}_{k_s}
\end{equation*}
we get
\[
E_{m,n}(f)= m!4^m (g!)^{\frac{n}{g}}\sum_{\substack{k_j\ge 0\\ \sum_{j=1}^{\frac{n}g}k_j=\kappa_n}}
\binom{n/g}{k_1,\dots,k_{n/g}}\prod_{s=1}^{n/g} \tilde{a}_{k_s},
\]
with $\tilde{a}_\ell$ denoting the shifting coefficients of $G(z)=[v^g]\log M(z,v)$.
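For $g=2$ and small $\kappa_n$ this bounded-range formula is easy to verify numerically. In the Python sketch below (illustrative, exact rational arithmetic; the helper names are ours), the composition sum over the shifted coefficients $\tilde{a}_\ell$ is compared with the direct extraction $[z^{n/2+\kappa}]\,G(z)^{n/2}$:

```python
import math
from fractions import Fraction
from itertools import product

def a(l):
    # a_l = [z^l] G(z) for g = 2, where G(z) = (e^{2z} - e^z)/2.
    return Fraction(2 ** l - 1, 2 * math.factorial(l))

def via_shifted_sum(n, kappa):
    # Composition sum over k_1 + ... + k_{n/2} = kappa, with a~_l = a_{l+1}.
    q = n // 2
    total = Fraction(0)
    for ks in product(range(kappa + 1), repeat=q):
        if sum(ks) == kappa:
            term = Fraction(1)
            for k in ks:
                term *= a(k + 1)
            total += term
    return total

def via_direct_extraction(n, kappa):
    # [z^{n/2 + kappa}] G(z)^{n/2} by exact truncated-series multiplication.
    q, m = n // 2, n // 2 + kappa
    base = [a(l) for l in range(m + 1)]
    poly = [Fraction(1)] + [Fraction(0)] * m
    for _ in range(q):
        poly = [sum(poly[i] * base[j - i] for i in range(j + 1))
                for j in range(m + 1)]
    return poly[m]

print(via_shifted_sum(6, 2), via_direct_extraction(6, 2))  # both equal 41/32
```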
For $\kappa_n\to\infty$ the saddle point method applies and we can compute $E_{m,n}(f)$ asymptotically,
though the expressions quickly become messy as $g$ grows. For $g=2$ we obtain the following
result:
\begin{proposition} \label{th:asympt_proportional}
The number of expressions $E_{m,n}(f)$ with $n$ variables and $m$ clauses computing a function $f$ with
associated integer partition representation of the form $\mathbf{i}(f)=(0,n/2,0,0,\dots)$,
i.e. $n/2$ blocks of size $2$, is given for $m= \frac{n}{2} + \kappa_n$ with $\kappa_n = \mathcal{O}(n^{2/3})$ by
\[
E_{m,n}(f)=m!\frac{2^{2m+1}}{\sqrt{6\pi ns_n}}
s_n^{-m+\frac{n}2}\exp\(\frac{3ns_n}4+\frac{1}{48}n s_n^2+O(ns_n^4)\),
\]
where $s_n$ is the unique positive solution of
$
\frac{z(2e^{z}-1)}{e^{z}-1}=1+\frac{2\kappa_n}{n},
$
and satisfies
\[
s_n=\frac{4}{3}\cdot\frac{\kappa_n}{n}+\mathcal{O}\left(\frac{\kappa_n^2}{n^2}\right).
\]
\end{proposition}
\begin{proof}
In the expression for $E_{m,n}(f)$, Eq.~\eqref{explicit_form}, a fixed function
\[
G(z)=\sum_{j=1}^{g}\frac{(-1)^{j-1}}j e_{j,g-j}(z)
\]
raised to a large power appears. Hence, we can apply the saddle-point technique to obtain an asymptotic expansion of $E_{m,n}(f)$ for $m$ and $n$ tending to infinity.
In general,
\begin{eqnarray*}
E_{m,n}(f) &=&
m!4^m(g!)^{\frac{n}{g}}[z^m]\big(G(z)\big)^{\frac{n}g}
\\ &=&
\frac{m!4^m(g!)^{\frac{n}{g}}}{2\pi i } \oint_r \frac{G^{\frac{n}g}(z)}{z^{m+1}}dz
\\ &=&
\frac{m!4^m(g!)^{\frac{n}{g}}}{2\pi i } \oint_r \exp\big( \frac{n}{g}\log G(z) - (m+1)\log z \big) dz.
\end{eqnarray*}
The saddle point equation is given by
\[
\frac{zG'(z)}{G(z)}=\frac{m+1}{\frac{n}g}.
\]
By our previous observation on functions $f$ with associated integer partition representation of
the form $\mathbf{i}(f)=(0,\dots,0,n/g,0,\dots)$ we must have $m\ge \frac{g-1}{g}\cdot n$ in order
to ensure that $E_{m,n}(f)>0$.
Hence, we assume that $m=\frac{g-1}{g}\cdot n +\kappa_n-1$, with $\kappa_n\ge 1$ and asymptotically
$\kappa_n=o(n)$.\footnote{It is also possible to extend the analysis to larger $m$, i.e. $m\sim
\alpha\cdot n$ with $\alpha>\frac{g-1}{g}$, or $m\gg n$.}
Thus, we obtain further
\[
\frac{zG'(z)}{G(z)}=g-1 + g\frac{\kappa_n}{n}.
\]
For every concrete fixed $g$ it should be possible to treat this equation (preferably using a computer algebra system); we outline the main steps for the simplest case $g=2$ and the case of $\kappa_n\to\infty$, assuming that $\kappa_n=\mathcal{O}(n^{\frac23})$. For
$g=2$ we get $G(z)=\frac12 e^{z}\cdot (e^{z}-1)$. It is convenient to cancel the factor $\frac12$,
appearing in $G(z)$ (and which is then raised to the power $\frac{n}2$) with
$(2!)^{\frac{n}{2}}$. We define $\tilde{G}(z)= e^{z}\cdot (e^{z}-1)$ such that the saddle point equation for $\tilde{G}(z)$ is identical to the previous equation for $G(z)$. We obtain
\begin{align*}
E_{m,n}(f) &= \frac{m!4^m}{2\pi i } \oint_r \exp\big( \frac{n}2\log \tilde{G}(z) - (m+1)\log z
\big)dz\\
&= \frac{m!4^m}{2\pi i } \oint_r \exp\big( \frac{n}{2}z + \frac{n}2\log (e^z-1) -
(m+1)\log z \big)dz,
\end{align*}
and the saddle point equation simplifies to
\[
\frac{z(2e^{z}-1)}{e^{z}-1}=1+\frac{2\kappa_n}{n}.
\]
Note that for $n\to\infty$ we have $\frac{\kappa_n}{n}\to 0$;
$\frac{zG'(z)}{G(z)}$ can be expressed in terms of the Bernoulli numbers, such that
\[
\frac{z(2e^{z}-1)}{e^{z}-1} = \sum_{k\ge 0}B_k\big(2\cdot (-1)^k-1\big)\frac{z^k}{k!} = 1+ \frac32
z+ \frac1{12}z^2-\frac1{720}z^4+\mathcal{O}(z^6),
\]
in a neighbourhood of zero. Thus, we obtain the solution $s_n$ of the saddle point equation,
with $\lim_{n\to\infty}s_n=0$, by a bootstrapping procedure. First, we obtain
\[
s_n=\frac{4}{3}\cdot\frac{\kappa_n}{n}+\mathcal{O}\left(\frac{\kappa_n^2}{n^2}\right).\]
A second bootstrapping step gives the refinement
\[
s_n=\frac{4}{3}\cdot\frac{\kappa_n}{n}-\frac{8}{81}\cdot\frac{\kappa_n^2}{n^2}+\mathcal{O}\left(\frac{\kappa_n^3}{n^3}\right).
\]
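Indeed, writing $x=\kappa_n/n$, the refined expansion satisfies the truncated saddle point equation $\frac32 s+\frac1{12}s^2=2x$ up to third order, since the order-$x^2$ terms cancel:

```latex
\[
\frac32\Big(\frac43 x-\frac{8}{81}x^2\Big)+\frac1{12}\Big(\frac43 x\Big)^2
 = 2x-\frac{4}{27}x^2+\frac{4}{27}x^2+\mathcal{O}(x^3)
 = 2x+\mathcal{O}(x^3).
\]
```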
Changing the integration path to $z=s_n\cdot e^{i\varphi}$, $-\pi\le \varphi <\pi$ gives for $g=2$
\begin{align*}
E_{m,n}(f)&= \frac{m!4^m}{2\pi i } \oint_r \exp\big( \frac{n}{2}z + \frac{n}2\log
(e^z-1) - (m+1)\log z \big)dz\\
&=\frac{m!4^m}{2\pi } \int_{-\pi}^{\pi} s_n \exp\big(i\varphi+ \frac{n}{2}\log
\tilde{G}(s_n\cdot e^{i\varphi}) - (m+1)(\log s_n + i\varphi) \big)d\varphi.
\end{align*}
Since $\tilde{G}(z)=e^{z}\cdot (e^{z}-1)$ we obtain further
\begin{align*}
E_{m,n}(f)&=\frac{m!4^m s_n^{-m}}{2\pi } \int_{-\pi}^{\pi}
\exp\big(\frac{n}2s_ne^{i\varphi}+\frac{n}2\log (e^{s_n\cdot
e^{i\varphi}}-1)-mi\varphi\big)d\varphi.
\end{align*}
The function $|\tilde{G}(s_n\cdot e^{i\varphi})|$ is maximal at $\varphi=0$. Thus, we restrict ourselves
to a neighbourhood of zero $\varphi\in(-\theta,\theta)$.
The expansion of the term $\frac{n}2\log \tilde{G}(s_n\cdot e^{i\varphi})-im\varphi$ at $\varphi=0$ gives
\begin{align*}
&\frac12n s_n +\frac12 n\log(e^{s_n}-1)+\varphi i\left(\frac12 ns_n+\frac12
n\frac{s_ne^{s_n}}{e^{s_n}-1}-m\right)\\
&\quad+ \varphi^2 \frac{n s_n}4
\left(\frac{e^{2s_n}s_n}{(e^{s_n}-1)^2}-\frac{e^{s_n}(s_n+1)}{e^{s_n}-1} -1\right)+\mathcal{O}(ns_n
\varphi^3).
\end{align*}
By definition of $s_n$ as the solution of the saddle point equation, the linear term vanishes. We
obtain
\begin{align*}
&E_{m,n}(f)\sim\frac{m!4^m s_n^{-m}e^{\frac{ns_n}2}(e^{s_n}-1)^{\frac{n}2}}{2\pi }\\
&\quad\times\int_{-\theta}^{\theta}\exp\left(\varphi^2 \frac{n s_n}4
\left(\frac{e^{2s_n}s_n}{(e^{s_n}-1)^2}-\frac{e^{s_n}(s_n+1)}{e^{s_n}-1} -1\right)
+ \mathcal{O}(ns_n \varphi^3)\right)d\varphi.
\end{align*}
The expansion of $(e^{s_n}-1)^{\frac{n}2}$ gives
\[
(e^{s_n}-1)^{\frac{n}2}=\exp(\frac{n}2\log(e^{s_n}-1))
=\exp\Big(\frac12 n \log s_n+ \frac14 n s_n+\frac1 {48}n s_n^2+\mathcal{O}(n s_n^4)\Big).
\]
Moreover, using
\[
\frac{n s_n}4 \Big(\frac{e^{2s_n}s_n}{(e^{s_n}-1)^2}-\frac{e^{s_n}(s_n+1)}{e^{s_n}-1}
-1\Big)=-\frac38 s_n n -\frac1{24}s_n^2n+\frac{1}{720}n s_n^4+\mathcal{O}(ns_n^6),
\]
we get for the integral the asymptotic expansion
\[
\int_{-\theta}^{\theta}\exp\Big(\varphi^2 \big(-\frac38 s_n n -\frac1{24}s_n^2n\big)
+ \mathcal{O}(ns_n \varphi^2(s_n^2+\varphi))\Big)d\varphi.
\]
Note that the level of precision of the expansions has to be adapted to the actual growth of
$\kappa_n$, here $\kappa_n=\mathcal{O}(n^{\frac23})$.
In the final step we substitute $\varphi=\vartheta/\sqrt{ns_n}$ and complete the tails:
\begin{align*}
E_{m,n}(f)&\sim\frac{m!4^m s_n^{-m+\frac{n}2}e^{\frac{3ns_n}4+\frac1 {48}n
s_n^2}}{2\pi \sqrt{ns_n}}\int_{-\theta\sqrt{ns_n}}^{\theta\sqrt{ns_n}}
\exp\Big(\vartheta^2 \big(-\frac38
+\mathcal{O}(s_n+\frac{\vartheta}{\sqrt{ns_n}})\big)\Big)d\vartheta\\
&\sim\frac{m!2^{2m} s_n^{-m+\frac{n}2}e^{\frac{3ns_n}4+\frac1 {48}n s_n^2}}{2\pi \sqrt{ns_n}}
\int_{-\infty}^{\infty}
\exp\Big(-\frac38\vartheta^2\Big)d\vartheta.
\end{align*}
Finally, we use $\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{-x^2/2}dx=1$ to obtain the
assertion.
\end{proof}
\section{Discussion}
We have analysed the probability of Boolean functions generated by random 2-Xor expressions, a problem strongly related to 2-Xor-SAT. For those working on SAT-solver design, the structure of the solution set of satisfiable expressions, which corresponds to the component structure of the associated multigraphs, is also of interest.
We derived expressions in terms of coefficients of generating functions for the probability of satisfiability in the critical region ($m\sim \frac n2+\Theta(n^{2/3})$) as well as a general expression for the probability of any function (Theorem~\ref{th:proba-f}). Unfortunately, this expression is too complicated to be used for an asymptotic analysis of general functions. We therefore discussed several particular classes of functions. Single-block functions are completely analysed; their asymptotic probability depends strongly on the range of the excess. For two-block functions, the only missing case is that of two large components whose sizes are not proportional. All those functions are rather close to \textsc{False}. Finally, functions on the other edge (close to \textsc{True}, with many blocks of bounded size)
were studied and, under some regularity conditions on the block sizes, we were able to get the asymptotic probability.
Apart from extensions of our results, e.g., extending Theorem~\ref{th:multigraphs-large-excess} to the supercritical case or Proposition~\ref{th:asympt_proportional} to a larger number of edges,
what is missing is an asymptotic analysis of functions on the boundaries \textsc{True}\ and \textsc{False}\ having a more irregular component structure, as well as the study of functions in the intermediate range.
\vskip .5cm
{\bf Acknowledgments.}
We thank Herv\'e Daud\'e and Vlady Ravelomanana for fruitful discussions.
\section{Introduction}
The information areal density of hard disk drives (HDDs) is expected to continue increasing, while remaining cost competitive. HDDs rely on the magnetic recording channel for digital storage. Conventional one-dimensional magnetic recording (1DMR) channels store bits along one track with enough spacing between tracks to prevent inter-track interference (ITI). To increase areal density, two-dimensional magnetic recording (TDMR) decreases the cross-track spacing, resulting in significant ITI in the readback waveforms \cite{The_Feasibility_MR_at_10_Tb_Wood}. To potentially compensate for the increased ITI, the current implementation of TDMR uses two read heads positioned in the HDD reader to capture the ITI from adjacent tracks.
The readback waveforms observed by the reader are passed through a low-pass anti-aliasing filter. The output is sampled at an appropriate rate to provide the readback analog-to-digital converter (ADC) discrete-time samples. The inter-symbol interference (ISI) in the ADC samples spans many bits in the down-track direction. For maximum likelihood (ML) detection of the written bits, the Viterbi algorithm (VA) is used. The number of trellis states assumed by the VA is exponential in the length of the ISI, and the number of computations per bit estimate is directly proportional to the number of trellis states. In TDMR, the large number of trellis states due to the 2D-ISI/ITI results in impractical complexity for ML detection.
Furthermore, for optimality in the ML sense, the canonical VA assumes that the noise is Gaussian and the 2D-ISI/ITI is linear in the coded bits. Both assumptions are not generally true in practice. Indeed, the readback waveforms suffer from non-linear impairments such as partial erasures, non-linear ISI/ITI, jitter noise, and asymmetry \cite{Data_Storage_Channel_Equalization_Neural_Networks_Nair,Nonlinear_Equalization_TDMR_Using_NNs_Shen}.
To shorten the ISI/ITI, the typical data recovery system uses the 2D-linear minimum mean squared error (2D-LMMSE) equalizer followed by the VA detector \cite{Data_Storage_Channel_Equalization_Neural_Networks_Nair,Nonlinear_Equalization_TDMR_Using_NNs_Shen}. The 2D-LMMSE is realized as a finite impulse response (FIR) filter for simplicity of implementation. The equalizer's output is then passed to the VA for ML detection. Hence, ideally, the number of trellis states needed by the VA to recover the data from the equalizer's output is manageable. But the 2D-LMMSE equalizer is optimal in terms of the mean-squared error (MSE) only when the noise is Gaussian and the channel is linear \cite{Lessons_Estimation_Theory_Mendel}. Hence, an equalizer trained to minimize the MSE may not lead to the optimal detector bit error rate (BER), which is the desired figure of merit. Furthermore, as the areal density of the storage channel increases, non-linear impairments become more severe. Thus, the actual channel conditions deviate significantly from the optimality requirements.
Compared with conventional equalizers and detection systems, neural networks (NNs) have been shown to compensate better for the non-linear impairments in high density magnetic recording channels with significant improvement in the overall system performance for 1DMR in \cite{Data_Storage_Channel_Equalization_Neural_Networks_Nair}, TDMR in \cite{Nonlinear_Equalization_TDMR_Using_NNs_Shen,DNN_APP_Detector_TDMR_Shen,DeepNeuralNetworkBasedMediaNoisePredictorsforUseinHighDensityMagneticRecordingTurboDetectors, Study_NN_Equalization_TDMR_Luo}, and multilayer magnetic recording (MLMR) in \cite{DeepNeuralNetwork-basedDetectionandPartialResponseEqualizationforMultilayerMagneticRecording}.
In \cite{Data_Storage_Channel_Equalization_Neural_Networks_Nair}, Nair and Moon propose using the multilayer perceptron (MLP) as an equalizer for high density 1DMR channels. Their results show that, as a non-linear equalizer, the MLP outperforms the conventional linear equalizer in terms of the MSE and BER. In \cite{DeepNeuralNetworkBasedMediaNoisePredictorsforUseinHighDensityMagneticRecordingTurboDetectors}, Sayyafan {\it et al.} suggest integrating a convolutional NN (CNN) with the Bahl–Cocke–Jelinek–Raviv (BCJR) algorithm to iteratively estimate and cancel the non-linear media noise. The proposed system achieves significant information density gains over competing non-NN-based systems for 1DMR and TDMR channels. In \cite{Study_NN_Equalization_TDMR_Luo}, Luo {\it et al.} investigate the performance of a NN equalizer and show that it achieves lower BERs than the 2D-LMMSE over a TDMR channel. In addition, for application in TDMR, in \cite{DNN_APP_Detector_TDMR_Shen}, Shen {\it et al.} propose a detection system consisting of a 2D-LMMSE followed by a CNN detector.
In \cite{DeepNeuralNetwork-basedDetectionandPartialResponseEqualizationforMultilayerMagneticRecording}, Aboutaleb {\it et al.} propose CNN-based systems for detection and equalization in MLMR channels with severe non-linear distortions.
In \cite{Nonlinear_Equalization_TDMR_Using_NNs_Shen}, Shen and Nangare propose an NN equalizer followed by a VA, where the NN's parameters adapt to minimize the cross-entropy (CE). Therein, the authors show that adapting the equalizer's parameters to minimize the CE loss results in lower detector BERs compared with MSE adaptation. Also, their study further confirms that the non-linear impairments are better handled using an NN equalizer.
Despite the improvements in performance reported in \cite{DNN_APP_Detector_TDMR_Shen,DeepNeuralNetworkBasedMediaNoisePredictorsforUseinHighDensityMagneticRecordingTurboDetectors, Study_NN_Equalization_TDMR_Luo,DeepNeuralNetwork-basedDetectionandPartialResponseEqualizationforMultilayerMagneticRecording}, the mentioned NN-based equalizers are much more complex than the linear equalizer baseline. Indeed, the high complexity of NN-based methods precludes practical implementation. For example, among the lowest complexity NN equalizers proposed by previous studies, the complexity of the MLP in \cite{Nonlinear_Equalization_TDMR_Using_NNs_Shen} is about $6.6\times$ the complexity of the 2D-LMMSE.
To facilitate practical implementation, we propose four variants of a reduced complexity MLP (RC-MLP) architecture. RC-MLP consists of FIR filters and non-linear activations; both components can be easily realized in practice. We show that the RC-MLP architectures offer an excellent balance between performance and complexity, and we identify the variant that achieves the best trade-off between the two.
Furthermore, the reported results in previous studies assume synthesized data. Although advanced synthesis methods provide accurate data sets, such as the grain flipping probability model \cite{ChannelModelsandDetectorsforTDMR_Chan,AD_Capability_Dual_Structure_Media_MAMR_Greaves}, we examine the performance of NN-based methods on actual HDD waveforms.
The novel contributions of this paper are summarized as follows.
\begin{enumerate}
\item We investigate reduced complexity neural network architectures for equalization over TDMR channels. We propose four variants of RC-MLP that achieve most of the performance gains achieved by the high complexity and high performance MLP.
\item We consider candidate radial basis function neural network (RBFNN)-based equalizer architectures. We compare the performance and complexities of these architectures with MLP and RC-MLP.
\item We test the performance of the proposed architectures and baseline methods using actual HDD data and readback waveforms. The data is obtained from an HDD with TDMR technology. We show the balance between performance and complexity for each method. Then, we identify the architecture with the best performance-complexity balance.
\end{enumerate}
\section{Channel Model and System Overview}
\begin{figure}[t]
\centering
\includegraphics[width=0.40\textwidth]{Equalizer_System_Diagram.pdf}
\caption{{\small Equalizer-detector system. The equalizer accepts the readback ADC samples and outputs the PR signal $\mathbf{y}$. The Viterbi detector along with the SOVA compute the soft-bit estimate $\mathbf{p}_0$.}}
\label{System_diagram}
\end{figure}
This paper uses actual HDD waveforms for testing. We summarize a channel model that approximates the TDMR channel, where details can be found in \cite[Sec. II]{Nonlinear_Equalization_TDMR_Using_NNs_Shen} and \cite{MultidimensionalSignalProcessingandDetectionforStorageSystemswithDatadependentTransitionNoise}. The magnetic recording channel is a binary input, 2D ISI/ITI channel with non-linear distortions, correlated media noise, and additive white Gaussian noise (AWGN).
Let $\mathbf{u}=[u_n]$, $u_n\in \{-1,+1\}$, be a binary input sequence to be written on the media.
Define the transition sequence $\mathbf{b}$ such that $b_n \triangleq (u_n - u_{n-1})/2$. Let $h(t,w)$ denote the 2D transition response, modeled using the erf$(\cdot)$ function as in \cite{Nonlinear_Equalization_TDMR_Using_NNs_Shen, GeneralizedPRTargetswithPerpendicularRecording, MultidimensionalSignalProcessingandDetectionforStorageSystemswithDatadependentTransitionNoise}. Then, the continuous-time readback waveform $r_a(t)$ is given by:
\begin{align}
r_a(t) = \sum_{n} b_nh(t-nT+\Delta t_n, w+\Delta w_n) + n(t),
\end{align}
where $T$ is the symbol interval, $\Delta t_n$ is the down-track jitter noise, modeled as a Gaussian random process using a truncated Gaussian distribution such that $|\Delta t_n| < T/2$, $\Delta w_n$ is the cross-track jitter noise, and $n(t)$ is the AWGN, which models the reader electronics noise.
Let $p(t,w) \triangleq h(t,w) - h(t-T,w)$ denote the bit response. Then, as detailed in \cite{Nonlinear_Equalization_TDMR_Using_NNs_Shen}, the equivalent discrete-time channel model is given by:
\begin{align}
r^{\langle l\rangle}_n = (\mathbf{p}^{\langle l\rangle}*\mathbf{u})_n + n_n^{\langle l\rangle},\quad l=0,1,
\end{align}
where $\mathbf{p}^{\langle l\rangle}$ is the 2D response for reader $l$.
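As an illustration, the discrete-time model above can be sketched in a few lines of NumPy; the per-reader response taps and the noise level below are illustrative placeholders, not values fitted to any drive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-reader responses (placeholder taps, not fitted values).
p = {0: np.array([0.3, 1.0, 0.3]),
     1: np.array([0.5, 0.9, 0.5])}

def readback(u, p_l, sigma, rng):
    """Discrete-time ADC samples for one reader: r_n = (p * u)_n + AWGN."""
    return np.convolve(u, p_l, mode="same") + sigma * rng.normal(size=u.size)

u = rng.choice([-1.0, 1.0], size=1000)                # written bits in {-1,+1}
r = {l: readback(u, p[l], 0.1, rng) for l in (0, 1)}  # two ADC streams
```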
Fig. \ref{System_diagram} shows the equalizer-detector system. The equalizer processes the ADC samples from the readers. The equalizer's output is passed to the SOVA detector \cite{AViterbiAlgorithmwithSoftDecisionOutputsandItsApplication}, which computes soft bit estimates as log-likelihood ratios (LLRs). The LLRs are then fed to the LDPC decoder to recover the information bits. The equalizer's adaptable parameters and target are adjusted using the MSE or the CE criterion.
\section{Adaptation}
\subsection{Mean-Squared Error}
Let the equalizer's output be denoted by $y_n$ and the noise-free partial response (PR) target signal by $\hat{y}_n$. The equalizer output $y_n$ depends on the equalizer's architecture design and its learnable parameters, which include the weights $\mathcal{W}$ and biases $\mathcal{B}$. The equalizer's design is discussed in Sections \ref{NNEqualizers} and \ref{ProposedNNEqualizers}. The noise-free PR signal is given by:
\begin{align}
\hat{y}_n = (\mathbf{g}*\mathbf{u})_n,
\end{align}
where $*$ denotes 2D discrete-time convolution, $\mathbf{g}$ represents the PR target, and $\mathbf{u}$ represents the written bit sequence. For a length-$N$ minibatch, the average MSE $J_{\text{MSE}}$ is computed as:
\begin{align}
J_{\text{MSE}} = \dfrac{1}{N}\sum_{n=0}^{N-1}(\hat{y}_n - y_n )^2.
\end{align}
For MSE adaptation, the optimization problem is given by:
\begin{subequations}
\label{constrainedMSE}
\begin{alignat}{4}
&\!\underset{\mathcal{W}, \mathcal{B}, \mathbf{g}}{\text{minimize}} &\quad& J_{\text{MSE}} \\
&\text{subject to} & & \mathbf{e}_1^T\mathbf{g}=1, \label{monic_constraint1}
\end{alignat}
\end{subequations}
where \eqref{monic_constraint1} is the monic constraint (MC) on the target, which sets the first tap of the target to one, i.e., $\mathbf{e}_1=[1,0,0,\ldots]^T$. In MSE adaptation, including the MC prevents the target coefficients from converging to the trivial solution of $\mathbf{g}=\mathbf{0}$ and has been shown to improve the BER performance \cite{Equalization_for_ML_Detectors_Moon}.
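The MSE objective and the monic constraint can be sketched as follows; pinning the first target tap to one by reparameterization is one simple way to enforce the constraint during SGD (the tap values and the `same' convolution alignment are assumptions of this sketch).

```python
import numpy as np

def pr_signal(u, g):
    """Noise-free PR target signal y_hat = (g * u); 'same' alignment is an
    implementation choice of this sketch, not prescribed by the text."""
    return np.convolve(u, g, mode="same")

def j_mse(y, y_hat):
    """Average MSE over a minibatch."""
    return np.mean((y_hat - y) ** 2)

# Monic constraint by reparameterization: the first tap is pinned to 1 and
# only the remaining taps are learnable (values below are placeholders).
g_free = np.array([0.7, 0.1])
g = np.concatenate(([1.0], g_free))

u = np.random.default_rng(1).choice([-1.0, 1.0], size=200)
y = pr_signal(u, g)   # a perfect equalizer output gives zero loss
```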
\subsection{Cross-Entropy}
Consider estimating the $n$th bit $u_n$. Its estimate is denoted by $\hat{u}_n$. For CE adaptation, $\hat{u}_n$ is obtained by passing the equalizer's output to the VA and computing the soft decisions, in the form of log-likelihood ratios (LLRs), using the soft output VA (SOVA) \cite{AViterbiAlgorithmwithSoftDecisionOutputsandItsApplication}. Let $ \mathbbm{1}_{u_n=i}$ denote the indicator function such that $ \mathbbm{1}_{u_n=i} = 1$ if $u_n=i$, $i\in \{0,1\}$ (and zero otherwise), and define $p_{0,n} \triangleq \Pr\{\hat{u}_n = 0\}$.
Then, for the $n$th bit, the CE loss is computed as:
\begin{align}
\mathcal{H}\{u_n, \hat{u}_n\} =&
- \mathbbm{1}_{u_n=0}\log(p_{0,n}) -\mathbbm{1}_{u_n=1}\log( 1 - p_{0,n}).
\end{align}
The procedure in \cite{Nonlinear_Equalization_TDMR_Using_NNs_Shen} is followed to compute $p_{0,n}$ from the LLR. The objective function to minimize is the average CE loss computed over a minibatch of length $N$ as:
\begin{align}
J_{\text{CE}} = \dfrac{1}{N}\sum_{n=0}^{N-1} \mathcal{H}\{u_n, \hat{u}_n\}.
\end{align}
Hence, the optimization problem can be written as:
\begin{alignat}{4}
&\!\underset{\mathcal{W}, \mathcal{B}, \mathbf{g}}{\text{minimize}} &\quad& J_{\text{CE}}. \label{constrainedCE}
\end{alignat}
The optimization problems in \eqref{constrainedMSE} and \eqref{constrainedCE} are solved numerically using the back-propagation algorithm with stochastic gradient descent (SGD) \cite[Ch. 8]{DeepLearning_Goodfellow}.
Adapting the equalizer's parameters using the CE criterion is equivalent to maximum likelihood adaptation since the BER and soft-bit information are jointly improved. Thus, for any equalizer structure, CE adaptation outperforms MSE adaptation in terms of BER \cite{Nonlinear_Equalization_TDMR_Using_NNs_Shen}.
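A minimal sketch of the CE computation from SOVA outputs; the LLR sign convention $\mathrm{LLR}_n=\log\big(p_{0,n}/(1-p_{0,n})\big)$, under which $p_{0,n}$ is a logistic function of the LLR, is an assumption here (with the opposite convention the LLR is negated).

```python
import numpy as np

def p0_from_llr(llr):
    """Map a SOVA LLR to p0 = Pr{u_hat = 0}, assuming llr = log(p0 / p1)."""
    return 1.0 / (1.0 + np.exp(-llr))

def j_ce(u, llr, eps=1e-12):
    """Average cross-entropy over a minibatch of written bits u in {0, 1}."""
    p0 = np.clip(p0_from_llr(llr), eps, 1.0 - eps)
    return np.mean(np.where(u == 0, -np.log(p0), -np.log(1.0 - p0)))
```

Confident, correctly signed LLRs drive the loss toward zero, which is what SGD exploits when back-propagating through the detector.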
\section{Neural Network-based Equalizers}
\label{NNEqualizers}
\subsection{Multilayer Perceptron}
\begin{figure}[t]
\centering
\includegraphics[width=0.27\textwidth]{MLP.pdf}
\caption{{\small Multilayer perceptron (MLP). The inputs are delayed ADC samples over a sliding window. The connections here represent a fully connected layer consisting of a matrix with dimensions consistent with the input length and the designed number of hidden outputs. The final equalizer output is an affine combination of all hidden outputs.}}
\label{MLP_diagram}
\end{figure}
Fig. \ref{MLP_diagram} shows the architecture of the MLP equalizer. To estimate the $n$th bit in the downtrack direction, the MLP uses the ADC samples observed over a sliding window of size $2M+1$ for each reader centered around the $n$th readback sample. Let $\mathbf{r}_n \triangleq [\mathbf{r}_n^{\langle 0\rangle},\mathbf{r}_n^{\langle 1\rangle}]$ denote such ADC samples from the TDMR reader centered around the $n$th sample, where the ADC samples per reader are defined as: $\mathbf{r}_n^{\langle l\rangle} \triangleq [r^{\langle l\rangle}_{n-M},\ldots, r^{\langle l\rangle}_{n-1}, r^{\langle l\rangle}_n, r^{\langle l\rangle}_{n+1}, \ldots,r^{\langle l\rangle}_{n+M} ]^T $, $l=0,1$. The MLP contains a matrix of weights $\mathbf{W}=[w_{i,j}]\in \mathbb{R}^{(4M+2) \times K}$ that multiplies the vectorized input samples $\mathbf{r}_{n,\text{vec}} = \text{vec}(\mathbf{r}_n)$ to provide $K$ intermediate outputs. After adding the biases $b_{0,k}$, $k=0,\ldots,K-1$, to the intermediate outputs, the hidden outputs are obtained by applying an element-wise non-linear activation function $\Psi(\cdot)$ to the outcome. Examples of $\Psi(\cdot)$ include the hyperbolic tangent, the sigmoid, and rectified linear unit (ReLU) activations. The MLP output is an affine combination of the $K$ hidden outputs, where $\mathbf{v} =[v_j] \in \mathbb{R}^K $ is the vector of coefficients used in the linear combination and $b_1$ is the added bias term.
More precisely, the output of the MLP is given by:
\begin{align}
y_n= \sum_{j=0}^{K-1}v_j\Psi\Bigg(\sum_{i = 0 }^{4M+1} w_{i,j}\mathbf{r}_{n, \text{vec}, i} + b_{0,j}\Bigg) + b_{1}. \label{MLP_Output_Equation}
\end{align}
The learnable parameters for the MLP are the weights and biases $\mathcal{W}= \{\mathbf{W}, \mathbf{v}\}$ and $\mathcal{B}= \{\{b_{0,k}\}_{k=0}^{K-1},b_1\}$, respectively. Hence, the total number of parameters for the MLP is $|\mathcal{W}| + |\mathcal{B}|= 4MK+4K+1$. From \eqref{MLP_Output_Equation}, for each output, implementing the MLP requires $4MK+3K$ multiplications and additions, and $K$ evaluations of $\Psi(\cdot)$.
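The forward pass in \eqref{MLP_Output_Equation} and the parameter count $4MK+4K+1$ can be checked with a short sketch (the weight values are random placeholders):

```python
import numpy as np

def mlp_equalize(r_vec, W, b0, v, b1, act=np.tanh):
    """One MLP equalizer output y_n from a vectorized ADC window."""
    hidden = act(r_vec @ W + b0)   # K hidden outputs
    return hidden @ v + b1         # affine combination

M, K = 5, 6
rng = np.random.default_rng(2)
W = rng.normal(size=(4 * M + 2, K))   # input weight matrix
b0 = rng.normal(size=K)               # hidden-layer biases
v = rng.normal(size=K)                # output weights
b1 = 0.0                              # output bias

y = mlp_equalize(rng.normal(size=4 * M + 2), W, b0, v, b1)
n_params = W.size + b0.size + v.size + 1
```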
\subsection{Radial Basis Function Neural Network}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{RBFNN.pdf}
\caption{{\small Radial basis function neural network (RBFNN). The readback samples are used to compute the distances $d_k^l=\Vert \mathbf{r}_{n, \text{vec}} - \mathbf{c}_k^{l} \Vert$. A basis function $\phi(\cdot)$ is applied on the hidden outputs.}}
\label{RBFNN_diagram}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{FIR_RBFNN.pdf}
\caption{{\small Finite impulse response-radial basis function neural network (FIR-RBFNN). Two FIR filters reduce the input dimension so that the centers $\mathbf{c}_k^l$ lie in a lower dimensional space compared with the readings $\mathbf{r}_n$, thereby reducing the overall complexity. }}
\label{FIR_RBFNN_diagram}
\end{figure}
Fig. \ref{RBFNN_diagram} shows the architecture of the RBFNN. The RBFNN replaces the matrix multiplication and non-linear activation in the MLP with distances from cluster centroids and a basis function, respectively. The RBFNN is characterized by $K$ centroids $\mathbf{c}_k$ with biases $b_{0,k}$, $k=0,\ldots,K-1$, a length-$K$ weight vector $\mathbf{v}$ with bias $b_1$, a norm $\Vert\cdot\Vert$, typically the Euclidean norm, and a basis function $\phi(\cdot)$. The length of the centroids $\mathbf{c}_k$ matches the number of input samples. Given an input instance, the distances between the input sample vector and the centroids are computed. Then, the basis function is applied to the distances to provide the hidden outputs. Finally, the RBFNN output is computed as an affine combination of the hidden outputs. More specifically, the RBFNN output is described by:
\begin{align}
y_n= \sum_{k=0}^{K-1}v_k\phi(\Vert \mathbf{r}_{n, \text{vec}} - \mathbf{c}_k \Vert + b_{0,k} ) +b_1, \label{RBFNN_vect_equation}
\end{align}
where $\mathbf{c}_k\in \mathbb{R}^{4M+2}$.
To reduce the dimension of the centroids, we can use centroids that have the same length as the number of ADC samples per reader, instead of the vectorized ADC samples for both readers. If $K$ is even, then this formulation gives the output of the RBFNN as:
\begin{align}
y_n= \sum_{l=0}^{1}\sum_{k=0}^{K/2-1}v_k\phi(\Vert \mathbf{r}_{n}^{\langle l\rangle} - \mathbf{c}_k^{l} \Vert + b_{l,k} )+b_l, \label{RBFNN_equation}
\end{align}
where $\mathbf{c}_k^{l}\in \mathbb{R}^{2M+1}$. In \eqref{RBFNN_equation}, $K/2$ centroids are assigned to each ADC sequence. The learnable parameters consist of the centers $\mathcal{C}= \{\mathbf{c}_k^{l}\}$, $k=0,\ldots,K/2-1$, $l=0,1$, the weight vector $\mathbf{v}$, and the biases $\mathcal{B} = \{\{ b_{l,k} \}_{k=0}^{K/2-1}, b_l\}_{l=0}^1$. Thus, the RBFNN uses $|\mathcal{C}|+|\mathbf{v}|+|\mathcal{B}|=2KM+3K+2$ learnable parameters.
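A sketch of \eqref{RBFNN_equation} with a Gaussian basis; since the notation leaves the weight sharing across readers open, the sketch assumes a separate weight vector per reader (all numeric values are placeholders):

```python
import numpy as np

def rbfnn_equalize(r, c, v, b, b_out, phi=lambda d: np.exp(-d ** 2)):
    """Per-reader RBFNN output for one ADC window pair."""
    y = 0.0
    for l in (0, 1):
        d = np.linalg.norm(r[l] - c[l], axis=1)   # distances to K/2 centroids
        y += phi(d + b[l]) @ v[l] + b_out[l]
    return y

M, K = 5, 6
rng = np.random.default_rng(4)
r = {l: rng.normal(size=2 * M + 1) for l in (0, 1)}            # ADC windows
c = {l: rng.normal(size=(K // 2, 2 * M + 1)) for l in (0, 1)}  # centroids
v = {l: rng.normal(size=K // 2) for l in (0, 1)}
b = {l: np.zeros(K // 2) for l in (0, 1)}
b_out = {0: 0.0, 1: 0.0}
y = rbfnn_equalize(r, c, v, b, b_out)
```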
\begin{figure}[t]
\centering
\includegraphics[width=0.33\textwidth]{RC_MLP_detailed_diagram.pdf}
\caption{{\small RC-MLP1 architecture.}}
\label{RC_MLP_Figure}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{RC_MLP_Block.pdf}
\caption{{\small FIR representation of RC-MLP1.}}
\label{FIR_representation_RC_MLP}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.3\textwidth]{RC_MLP2_Block.pdf}
\caption{{\small RC-MLP2.}}
\label{RC_MLP2_diagram}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.3\textwidth]{RC_MLP3_Block.pdf}
\caption{{\small RC-MLP3.}}
\label{RC_MLP3_diagram}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.3\textwidth]{RC_MLP4_Block.pdf}
\caption{{\small RC-MLP4.}}
\label{RC_MLP4_diagram}
\end{figure}
\section{Proposed Reduced Complexity Neural Network-based Equalizers}
\label{ProposedNNEqualizers}
\subsection{FIR-RBFNN}
To decrease the complexity of the RBFNN, we introduce an FIR filter $\mathbf{f}^{\langle l \rangle} \in \mathbb{R}^{2P+1}$, $l=0,1$, per ADC sequence, that filters the input ADC samples, as shown in Fig. \ref{FIR_RBFNN_diagram}. The FIR filter maps the length-$(2M+1)$ ADC input to a length-$(2M'+1)$ sequence, where $M'<M$. The FIR-RBFNN output is given by:
\begin{align}
y_n= \sum_{l=0}^{1}\sum_{k=0}^{K/2-1}v_k\phi(\Vert (\mathbf{f}^{\langle l \rangle} * \mathbf{r}_{n}^{\langle l\rangle}) - \mathbf{c}_k^{l} \Vert + b_{l,k} )+b_l, \label{FIR_RBFNN_Equation}
\end{align}
where the dimension of the centroids is now $2M'+1$.
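One reading of this dimension reduction is a `valid' convolution, under which $M'=M-P$; that interpretation is consistent with the $11\to 5$ mapping used in the experiments, but it is an assumption of the sketch below (as are the filter taps):

```python
import numpy as np

def fir_reduce(r_win, f):
    """'Valid' FIR filtering of a length-(2M+1) window with a (2P+1)-tap
    filter yields 2(M-P)+1 samples, i.e., M' = M - P under this reading."""
    return np.convolve(r_win, f, mode="valid")

M, P = 5, 3                              # 11 ADC samples in, 5 out
f = np.ones(2 * P + 1) / (2 * P + 1)     # placeholder taps
r_win = np.random.default_rng(3).normal(size=2 * M + 1)
z = fir_reduce(r_win, f)                 # centroids now live in R^5, not R^11
```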
\subsection{RC-MLP}
We propose four variants of reduced complexity MLP (RC-MLP) architectures. RC-MLP1 uses FIR filters to implement the matrix multiplication and affine combination operations of the MLP, resulting in a lower complexity architecture. Two FIR filters, FIR-1 and FIR-2, are introduced at the input layer -- one filter per ADC stream. After applying a non-linear activation, the hidden outputs corresponding to each filter are then stacked in one or two delay lines, depending on the specific architecture, as shown in Figs. \ref{RC_MLP_Figure} and \ref{FIR_representation_RC_MLP}.
\subsubsection{RC-MLP1}
Fig. \ref{RC_MLP_Figure} shows the architecture of RC-MLP1. Two intermediary delay lines are introduced to store temporary hidden outcomes from FIR-1 and FIR-2 separately. The last layer consists of two FIR filters, FIR-3 and FIR-4, that map the delayed hidden samples to the final output.
Let $\mathbf{f}^{\langle l \rangle}$ and $\mathbf{q}^{\langle l \rangle}$, $l=0,1$, denote the FIR filters interfacing the input ADC samples and hidden delayed outputs, respectively. The lengths of $\mathbf{f}^{\langle l \rangle}$ and $\mathbf{q}^{\langle l \rangle}$ are $2M+1$ and $K$, respectively, where $K$ is the number of hidden delay samples per ADC path. Then, the equalizer output $y_n$ is given in terms of the hidden outputs $h_n^{\langle l \rangle}$ as:
\begin{align}
&h_{n}^{\langle l \rangle}=\Psi((\mathbf{f}^{\langle l \rangle}*\mathbf{r}_n^{\langle l \rangle})_n + b_{0,l} ) \\
&y_{n}=\sum_{l=0}^1 (\mathbf{q}^{\langle l\rangle} *\mathbf{h}_{n}^{\langle l \rangle})_n + b_{1},
\end{align}
where $b_{0,l}$ and $b_1$ are bias terms. RC-MLP1 uses $4M+2K+7$ learnable parameters and requires $4M + 2K + 2$ multipliers and two evaluations of $\Psi(\cdot)$ per bit estimate.
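The RC-MLP1 forward pass over full ADC streams can be sketched as follows; the `same' convolution alignment is an assumption of the sketch:

```python
import numpy as np

def rc_mlp1(r, f, q, b0, b1, act=np.tanh):
    """RC-MLP1 forward pass: per-reader input FIR, non-linear activation,
    hidden delay line, per-reader output FIR (FIR-3/FIR-4), then sum."""
    y = np.full_like(r[0], b1, dtype=float)
    for l in (0, 1):
        h = act(np.convolve(r[l], f[l], mode="same") + b0[l])  # hidden stream
        y = y + np.convolve(h, q[l], mode="same")              # output FIR
    return y
```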
\subsubsection{RC-MLP2}
In RC-MLP2, FIR-1's and FIR-2's outputs are summed. Then, a non-linear activation function is applied to the summation. Hence, only one hidden delay line is needed with $K$ samples. FIR-3 interfaces the hidden delay line to provide the equalizer output. The hidden outputs and equalizer output can be expressed as:
\begin{align}
&h_{n}=\Psi\Bigg(\sum_{l=0}^1 ( \mathbf{f}^{\langle l \rangle}*\mathbf{r}_n^{\langle l \rangle})_n + b_{0} \Bigg) \\
&y_{n}=(\mathbf{q} *\mathbf{h}_{n})_n + b_{1}.
\end{align}
RC-MLP2 requires $4M+K+4$ parameters, $4M+K$ multipliers, and one evaluation of $\Psi(\cdot)$ per bit estimate.
\subsubsection{RC-MLP3}
RC-MLP3 adds a linear connection from the input to the output in the RC-MLP2 structure, as shown in Fig. \ref{RC_MLP3_diagram}. The motivation for adding the linear connection is to jump start the system using the FIR-1 and FIR-2 combination as a linear equalizer, while maintaining the improved generalization afforded by the non-linear NN equalizer. Furthermore, this direct connection between the first hidden output and the final output provides a shorter path for propagating the error from the output back to FIR-1 and FIR-2 during adaptation. The equalizer's output for RC-MLP3 can be written as:
\begin{align}
& h_{n,\text{ Linear}} = \sum_{l=0}^1 ( \mathbf{f}^{\langle l \rangle}*\mathbf{r}_n^{\langle l \rangle})_n + b_{0},\\
&h_{n}= \Psi(h_{n,\text{ Linear}}) , \\
&y_{n}= (\mathbf{q} *\mathbf{h}_{n})_n + ch_{n,\text{ Linear}} + b_{1}.
\end{align}
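RC-MLP3 can be sketched analogously; note that with $\mathbf{q}=\mathbf{0}$ and $c=1$ the skip path alone reduces to the plain linear equalizer, which is the jump-start behaviour described above (alignment choices are assumptions of the sketch):

```python
import numpy as np

def rc_mlp3(r, f, q, c, b0, b1, act=np.tanh):
    """RC-MLP3: summed input FIRs, one shared hidden delay line, plus a
    linear skip path scaled by c."""
    h_lin = sum(np.convolve(r[l], f[l], mode="same") for l in (0, 1)) + b0
    h = act(h_lin)
    return np.convolve(h, q, mode="same") + c * h_lin + b1
```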
\subsubsection{RC-MLP4}
RC-MLP4 adds a linear connection to RC-MLP1 as shown in Fig. \ref{RC_MLP4_diagram}. The output is given by:
\begin{align}
& h_{n,\text{ Linear}}^{\langle l\rangle} = ( \mathbf{f}^{\langle l \rangle}*\mathbf{r}_n^{\langle l \rangle})_n + b_{0,l},\\
&h_{n}^{\langle l\rangle}= \Psi(h_{n,\text{ Linear}}^{\langle l\rangle}), \\
&y_{n}= \sum_{l=0}^1 \Big((\mathbf{q}^{\langle l\rangle} *\mathbf{h}_{n}^{\langle l\rangle})_n + c_l h_{n,\text{ Linear}}^{\langle l\rangle}\Big) + b_{1}.
\end{align}
\section{Numerical Results}
\begin{table}[t]
\centering
\caption{\small Performance and complexity comparison. The BER is computed over the first 20 sectors.}
\label{PerformancevsComplexity20Sectors}
\setlength{\tabcolsep}{2pt}
\begin{tabular}{@{}cccc@{}}
\toprule
Architecture & $K$ & BER &Complexity\\ \midrule
2D-LMMSE with fixed [3,7,1] target &N/A &0.027982 &22 \\
\textbf{2D-LMMSE} &N/A & \textbf{0.025548} &\textbf{22}\\
2D-LECE &N/A & 0.023066 &22 \\
2D-LECE with 21 Taps per ADC &N/A &0.022658 &42 \\
RBFNN &6 &0.02315 &157 \\
RBFNN &20 &0.021608 &521 \\
RBFNN &30 &0.021733 &781 \\
FIR-RBFNN, Gaussian Basis &6 &0.021860 &107 \\
FIR RBFNN, Tanh Basis &6 &0.022497 &107 \\
RC-FIR-RBFNN, Linear Basis &6 &0.022773 &41 \\
RC-FIR-RBFNN, Gaussian Basis &6 &0.023744 &41 \\
\textbf{MLP} & 6 & \textbf{0.020757} &\textbf{145} \\
RC-MLP1 &6 &0.022736 &31 \\
RC-MLP1 & 10 &0.022048 &35 \\
RC-MLP1 & 14 &0.021916 &39 \\
RC-MLP1 & 18 &0.021862 &43 \\
RC-MLP2 & 9 &0.021668 &34 \\
\textbf{RC-MLP3} & 9 &\textbf{0.021243} &\textbf{35} \\
RC-MLP4 & 18 &0.021367 &44 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Setup}
We test the discussed architectures on raw ADC samples obtained from an HDD with TDMR technology. The data consists of $520$ sectors, where each sector contains $N = 39,268$ bits. Two ADC samples per bit are available via the TDMR read head. The CTS is $52\%$, and the track pitch is $85$ nm.
\subsection{Linear Equalizers}
Table \ref{PerformancevsComplexity20Sectors} shows the BER performance and complexity of the discussed methods evaluated on the first 20 sectors. All equalizers are adapted based on the CE criterion except the 2D-LMMSE. The linear equalizer uses $11$ taps per ADC sequence except when noted. Computational complexity here is measured as the number of learnable parameters required by the method. The number of parameters is directly proportional to the number of FLOPs required per estimation. The baseline method used in practice is the 2D-LMMSE equalizer. In all experiments, the target coefficients adapt to minimize the objective function under the monic constraint. The exception is an experiment where the target is fixed to $[3,7,1]$ and the 2D-LMMSE is used. This configuration is a baseline that is often used in practice. Adapting the target reduces the BER by $8.70 \%$ when the MSE is used as the adaptation criterion. Adapting the linear equalizer with CE instead of the MSE results in further reduction of the BER by about $12.67\%$. A further BER improvement of the linear equalizer by about $1.77\%$ is achieved by increasing the number of FIR taps to $21$ per ADC sequence. With $21$ taps per ADC, the linear equalizer's complexity increases by about $90\%$.
\subsection{RBFNN-based Equalizers}
The basic RBFNN equalizer increases complexity without significant improvement in the BER even with $20$ and $30$ cluster centroids per ADC.
To improve the performance, FIR-RBFNN is used. The two FIR filters, one per ADC sequence, interface the ADC samples and map the $11$ samples per ADC to $5$ samples per ADC. In this case, the centroids lie in $5$-dimensional space (instead of $11$), making the distance computation more efficient. This FIR-RBFNN architecture with $6$ centroids per ADC achieves a BER that is only $1.67\%$ higher than that of the RBFNN with $20$ centroids, while requiring about $4.87\times$ lower complexity. Compared with the 2D linear equalizer with cross-entropy (2D-LECE), FIR-RBFNN's BER is $5.23\%$ lower, but its complexity is about $4.86\times$ higher.
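A minimal sketch of the FIR-RBFNN forward pass described above, with assumed random weights (not the paper's trained ones), showing how the FIR front-end shrinks the centroid space from 11 to 5 dimensions and hence the per-estimate distance cost:

```python
import numpy as np

# FIR-RBFNN forward pass with assumed random weights: the FIR front-end
# maps an 11-sample window to 5 features, so each of the 6 Gaussian
# centroids needs a distance computed in 5-D rather than 11-D.
rng = np.random.default_rng(1)
taps_in, taps_out, n_centroids = 11, 5, 6

F = 0.1 * rng.standard_normal((taps_out, taps_in))  # FIR projection (assumed)
C = rng.standard_normal((n_centroids, taps_out))    # centroids in 5-D
w = rng.standard_normal(n_centroids)                # linear readout weights
gamma = 0.5                                         # Gaussian basis width

def fir_rbf(window):
    z = F @ window                                  # 11 samples -> 5 features
    d2 = np.sum((C - z) ** 2, axis=1)               # 6 squared distances in 5-D
    return w @ np.exp(-gamma * d2)                  # Gaussian basis + readout

y = fir_rbf(rng.standard_normal(taps_in))

# Multiplies spent on distances per estimate, with and without the FIR stage.
cost_plain = n_centroids * taps_in                  # centroids in 11-D: 66
cost_fir = n_centroids * taps_out                   # centroids in 5-D: 30
```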
RC-FIR-RBFNN decreases complexity by about $37\%$ over FIR-RBFNN. With linear basis, RC-FIR-RBFNN achieves a similar BER.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{Performance_vs_Complexity_520Sectors.pdf}
\caption{{\small Performance versus complexity for 520 sectors.}}
\label{Performance_vs_complexity_520_sectors_plot}
\end{figure}
\subsection{RC-MLP}
The MLP with $6$ hidden nodes achieves the lowest BER, which is $10\%$ lower than the 2D-LECE with $11$ taps, while demanding a $6.6 \times$ increase in complexity. In comparison, RC-MLP1 with 6 hidden nodes achieves a BER performance that is $5.22\%$ lower than the 2D-LECE with only a $1.95 \times$ complexity increase. RC-MLP2 achieves a $6.06\%$ lower BER compared with 2D-LECE while requiring only a $1.54 \times$ increase in complexity. RC-MLP3 achieves a BER reduction of $7.9\%$ with a complexity increase of $1.59 \times$ over 2D-LECE. RC-MLP4 achieves $7.37\%$ lower BER with a $2\times$ increase in complexity. Thus, RC-MLP3 achieves the best balance between complexity and performance improvement. Also, RC-MLP3 decreases the implementation complexity by about $4.14\times$ compared with the MLP. Furthermore, RC-MLP3 achieves $24.08\%$ and $16.85\%$ reductions in the BER compared with 2D-LMMSE with fixed and adapting targets, respectively. Fig. \ref{Performance_vs_complexity_520_sectors_plot} summarizes the performance complexity trade-off.
\section{Conclusion}
We examine the performance-complexity trade-off for different equalizer architectures. The multilayer perceptron (MLP) achieves significant performance gains over the linear equalizer, but its complexity is about $6.6\times$ that of the linear equalizer. To enable practical implementation of a non-linear neural-network equalizer, four variants of a reduced-complexity MLP (RC-MLP) are proposed. Among them, the architecture named RC-MLP3 outperforms the other variants and achieves most of the performance gains of the MLP, while requiring only $1.59\times$ the complexity of the linear equalizer. If $L$ and $K$ are the lengths of the input and hidden output, respectively, then the complexities of the MLP and RC-MLP scale as $O(LK+K)$ and $O(L+K)$, respectively.
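The stated scalings can be sketched as a quick parameter count. The formulas below realize $O(LK+K)$ and $O(L+K)$ literally; the constants are illustrative, not the exact totals of Table \ref{PerformancevsComplexity20Sectors}:

```python
# Parameter-count sketch of the stated scalings. These are illustrative
# formulas, not the paper's exact parameter totals.
def mlp_params(L, K):
    return L * K + K      # dense input-to-hidden weights plus K output weights

def rc_mlp_params(L, K):
    return L + K          # shared/structured weights: one per input, one per hidden node

L = 22                    # e.g. 11 taps for each of the two ADC sequences
ratios = {K: mlp_params(L, K) / rc_mlp_params(L, K) for K in (6, 9, 18)}
```

The ratio grows with the hidden size $K$, which is why the savings of the RC variants become more pronounced for wider networks.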
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{intro}
The spectrum of nuclear data needed for accelerator applications
is extensive, versatile and complex. The complexity of this vivid field is
reflected in the large number of European (and world wide) projects currently
supported in the 6th framework programs of the EU. First, few of them will be
briefly
presented in this contribution in order to give a coarse overview of ongoing
accelerator application actions in Europe. To a large extent community-research
infrastructure activities and integrated projects of FP6 EU-programs as
e.g.~CARE (Coordinated Accelerator Research for Europe)\cite{care}, FAIR
(Facility for
Antiproton and Heavy Ion Research)\cite{fair}, EU-LIFE (Light Ion facility
Europe) or IP
EUROTRANS (European Research Programme for the Transmutation of high level
nuclear waste in an accelerator driven system)\cite{eurotrans} contain domains
(e.g.~NUDATRA--Nuclear Data for Transmutation) devoted to compiling and
providing nuclear data indispensable for realization of large scale facilities.
In the framework of designing such facilities as for example subcritical
assemblies, intense pulsed spallation neutron sources (SNS \cite{spa-sns}, JSNS
\cite{nag99}, ESSI \cite{ess02_iii}),
antiproton facilities (FAIR), accelerator-driven nuclear reactors, nuclear waste
transmutation plants, energy amplifier systems, proton drivers for future
neutrino factories (BENE--Beams for European Neutrino Experiments) and also for
the application for radioactive beams, improved accuracy nuclear data are of
great importance not only for shielding layouts, but also for estimation of
damage in target and structural materials. Under irradiation the structural
damages of materials used in construction of the facilities manifest themselves
as atomic displacements, radiotoxicity, chemical corrosion and embrittlement.
The expected radiation-induced damage of materials employed is mainly due to
\begin{itemize}
\item helium gas production
\item elastic scattering of neutrons, charged particles, and in particular
intermediate mass fragments and heavy residuals produced in spallation
reactions
\end{itemize}
Compared to the extensively studied radiation damage in fission reactors, the
damage in spallation neutron sources is characterized by a roughly 500 times
larger ratio of produced helium gas per atom
to displacements per atom (He/dpa) in materials which are directly exposed to
the incident proton beam.
The second part of this contribution will focus on the latest progress on
nuclear data achieved in the NUDATRA domain of the IP EUROTRANS. The objective
of this project is to provide reliable and comprehensive experimental data
serving as benchmarks for code development and validation in the 200-2000 MeV
energy range. To scrutinize several of such codes and to calculate as reliably
as possible quantities related to high energy reactions, hadronic interaction
lengths, reaction cross sections, average particle multiplicities, particle
multiplicity and double differential energy distributions need to be
investigated. In this context the latest results of crucial experiments
performed at GSI and COSY essentially on helium and intermediate mass production
will be presented and compared to model predictions.
\section{European accelerator driven projects}
The number of small and medium sized accelerator driven projects in Europe is
quite large.
New ideas and initiatives are emerging and certainly worth being mentioned,
however let us here
restrict the discussion to probably the most important forum consulted by the
European Commission for
decision taking on supporting future facilities in Europe called ESFRI (European
Strategy Forum on Research Infrastructures) \cite{esfri06}.
\subsection{ESFRI}
The ESFRI Roadmap identifies new Research Infrastructure (RI) of European
interest corresponding to the long term needs of the European research
communities, covering all scientific areas, regardless of possible
location. Potential new RI (or major upgrade) identified are likely to be
realized in the next 10 to 20 years. Therefore they may have different degrees
of maturity. The ESFRI roadmap is an on-going process; therefore this roadmap
will be periodically updated. The first revision of the roadmap will already
start in early 2007. Following a request from the European Commission, ESFRI decided to
compile a list of opportunities in order to assist the Commission in the
preparation of its proposal for the Seventh Framework programme (FP7).
In autumn 2006 ESFRI agreed on a first list of 35 mature proposals for new (or
major upgrade of) facilities of European interest covering seven key
research areas (environmental sciences; energy; materials sciences;
astrophysics, astronomy, particle and nuclear physics; biomedical and life
sciences; social science and the humanities; computation and data treatment).
The ones listed in the 2006 report \cite{esfri06} and related to accelerator
(and reactor) applications are itemized in the following:\\
Energy:
\begin{itemize}
\item IFMIF int. fusion materials irradiation facility (10MW high flux n-source)
\item JHR high flux research reactor for fission reactors materials testing
\end{itemize}
Material Science:
\begin{itemize}
\item ELI extreme light intensity attosecond pulse laser
\item ESRF upgrade of the European synchrotron radiation facility (in 7
years)
\item ESS-I Eur.~spallation neutron source for n-spectroscopy
\item Eur.~XFEL hard X-ray Free electron laser in Hamburg
\item ILL 20/20 upgrade of the European neutron spectroscopy facility (2
phases)
\item IRUVX-FEL infrared to soft X-ray free electron lasers (in 5 user
facilities)
\end{itemize}
Astro-, nuclear and particle physics:
\begin{itemize}
\item FAIR Facility for antiproton and ion research
\item SPIRAL2 production and study of rare isotopes
\item LHC large hadron collider at CERN
\end{itemize}
\subsection{Accelerator R\&D projects (EU)}
The European Commission provides a variety of instruments (I3 -Integrated
Infrastructure Initiatives, DS -Design Studies, Network Activities, NEST -New
and Emerging Science and Technology) to strengthen and financially support the
high energy physics European community in order to play a leadership role in i)
the improvement of existing accelerators ii) the development of new
accelerators. To name a few such instruments there are e.g.:
\begin{itemize}
\item CARE Coordinated Accelerator Research in Europe, (I3)
\item BENE Beams for European Neutrino Experiments (part of CARE, Network
Activity)
\item EUROTeV European Design study Towards a global TeV linear Collider (plays
a major role for the ILC (int.linear collider), (Design Study)
\item EURISOL European Isotope Separation Online Radioactive Ion Beam facility,
(Design Study)
\item EUROLEAP European laser electron controlled acceleration in plasmas to
GeV energy range, (NEST)
\end{itemize}
The duration of such projects is 3-5 years and generally about roughly 1/3 of
the total costs is covered by EU contributions.
\subsection{Multi MW-target projects}
\label{mw-target}
Doubtless and in common for any high intensity accelerator driven MW
target project nuclear data are indispensable for
\begin{itemize}
\item performance optimization (choice of material, geometry, secondary particle
production)
\item life time assessment (aging, material damage as dpa, gas production,
embrittlement in window and target
container, corrosion, composition modifications)
\item radioprotection (activation, radioactive inventory, short-lived residue
activity, shielding high-energy neutrons)
\item waste management (long-lived residue radiotoxicity)
\end{itemize}
Very briefly some large scale facilities currently planned or under construction
in Europe facing during their design
these kind of issues will be adumbrated in the following:
\subsubsection{FAIR}
The FAIR (Facility for Antiproton and Ion Research) \cite{fair} has been
presented as large
scale international accelerator facility of the next generation capable of
producing primary beams of protons (up
to 30~GeV, $2.5\times 10^{13}/s$), heavy ions (up to Uranium, up to 25~GeV/u,
$10^{10}/s$) and secondary beams of radioactive ions (up to 2~GeV/u) and
antiprotons (up to 30~GeV, up to $7\times 10^{10}/h$). For the antiproton
production it is planned to bunch compress the 29~GeV protons on the production
target to 50~ns. FAIR builds on the experience and technological developments
already made at the existing GSI facility, and incorporates new technological
concepts. At its heart is a double ring facility with a circumference of 1100
meters. A system of cooler-storage rings for effective beam cooling at high
energies and various experimental halls will be connected to the facility. The
new facility will be organized as a European/international research centre.
\subsubsection{Neutrino factories}
Intense neutrino factories currently under lively discussion in the EU CARE
integrated infrastructure initiative \cite{care} and the BENE network activity
\cite{bene} employ very powerful MW proton drivers and finally produce neutrinos
from $\pi$ and $\mu$ decay, respectively. These kind of facilities aim at
$\nu$-production rates of up to $10^{21}$ useful $\mu$-decays/year.
An alternative scenario of producing intense $\nu$-beams is currently proposed
by the decay of stored beta-active emitters instead of muons:
\begin{eqnarray}
^6_2He &\rightarrow & ^6_3Li+e^-+\overline{\nu}_e \qquad
T_{1/2}=0.8s
\label{eq:he} \\
^{18}_{10}Ne &\rightarrow & ^{18}_9F+e^++\nu_e \qquad T_{1/2}=1.7s
\label{eq:ne}
\end{eqnarray}
As shown in Fig.\ref{fig:he4he6} the He-production could be performed by
converter technology using spallation
neutrons from water cooled W or liquid Pb on BeO concentric cylinders via the
reaction $n+^9Be\rightarrow^6He+^4He$. A nominal production rate of $5\times
10^{13}$ ions/s can be achieved.
\begin{figure}[h]
\centering
\resizebox{0.95\columnwidth}{!}{%
\includegraphics{724_goldenbaum_f1_he4he6}
}
\caption{$^6He$-production by converter technology using spallation neutrons.}
\label{fig:he4he6}
\end{figure}
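A back-of-the-envelope sketch of what the quoted numbers imply: with production rate $R$ and half-life $T_{1/2}$ the saturated $^6He$ inventory at the source is $R/\lambda$, and the $\gamma\sim150$ quoted below for the decay ring dilates the lab-frame lifetime. This is an illustration only, not a design calculation.

```python
import math

# Illustrative estimate from the quoted numbers: R = 5e13 ions/s and
# T_1/2 = 0.8 s for 6He; gamma = 150 for the decay ring.
R = 5e13                       # 6He production rate, ions per second
t_half = 0.8                   # 6He half-life at rest, seconds
lam = math.log(2) / t_half     # decay constant, 1/s
N_eq = R / lam                 # saturated inventory: production balances decay
gamma = 150.0                  # storage-ring Lorentz factor
tau_lab = gamma / lam          # time-dilated mean lifetime in the lab, seconds
```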
For the $^{18}Ne$-production a spallation of
close-by target nuclides $^{18}Ne$ from MgO $^{24}_{12}Mg(p,3p4n)^{18}_{10}Ne$
is assumed to be a realistic reaction--the beam hitting directly the oxide
target. However, for a few GeV proton beam and 200~kW dc power the estimated
production rate will be more than one order of magnitude too low. Novel
production scenarios will be required. For the production of the specific
ions, usually an isotope separation stage (e.g.~ISOL) follows the intense proton
driver. These ions are accelerated to final energies and stored in large (km)
decay rings at high $\gamma$ ($^6He: \gamma\sim150, \quad ^{18}Ne:\gamma\sim60$)
for providing neutrino sources to experiments. Post accelerating the parent ions
to relativistic $\gamma_{\textrm{max}}$ has the advantage of boosted neutrino
energy spectra and forward focussing of neutrinos: $\theta\le1/\gamma$. Two (or
more) different parent ions can be used for $\nu$ and $\overline\nu$-beams on
$\beta^+$ and $\beta^-$-decay. The physics applications of beta beams are
primarily neutrino oscillations $\nu_e \leftrightarrow\nu_\mu$ (in particular
the single flavor decays as of eqs.\ref{eq:he} and \ref{eq:ne}) and CP violation
studies, but also measurements on cross sections of $\nu$-nucleus interactions.
By far however not all technical issues are addressed and solved yet.
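The boost kinematics sketched above can be made concrete: rest-frame neutrinos are focused into a cone of half-angle $\sim1/\gamma$ and the maximal forward energy is of order $2\gamma E_{cm}$. The rest-frame mean energies used below are assumed round numbers for illustration, not values quoted in the text.

```python
import math

# Boost kinematics sketch for a beta beam. E_cm values are assumed.
def boost(gamma, E_cm_MeV):
    theta = 1.0 / gamma                  # characteristic half-angle, radians
    E_forward = 2.0 * gamma * E_cm_MeV   # forward-boosted energy, MeV
    return theta, E_forward

theta_he, E_he = boost(150, 1.9)         # 6He at gamma ~ 150 (E_cm assumed)
theta_ne, E_ne = boost(60, 1.5)          # 18Ne at gamma ~ 60 (E_cm assumed)
```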
\subsubsection{Neutron spallation source}
As a successor of the original 2003 ESS feasibility study resulting in a
comprehensive technical report \cite{ess02_iii}, the European Spallation Source
Initiative (ESS-I) was established as an association in which the users of
neutron facilities, and regional consortia interested in hosting a new facility
collaborate to prepare the case and the planning for a next generation neutron
spallation source in Europe. The objectives of the ESS-I are to:
\begin{itemize}
\item gather all those interested in building at the earliest possible
occasion a next generation top tier neutron source in Europe
\item stimulate, coordinate where necessary and possibly engage in all
activities that need to be carried out before a decision to recommence baseline
engineering, construction planning and detailed site assessment can be taken
\item act as the interlocutor to national governments and funding agencies,
the EU and the ESF in matters regarding the initiative
\item have one European counterpart to SNS and the JSNS as part of J-PARC
\end{itemize}
Related to the project laid down in the ESFRI roadmap, i.e. the 5~MW long pulse
target station, several countries/governments have declared to take an active
part in the "preparatory phase" project of ESS within EU FP7 and shown interest
in hosting the large scale facility. However appropriate conditions for
international co-financing are not set up so far.
A baseline design phase resulted in a series of Volumes \cite{ess02_iii} by the
end of 2003 addressing the technical challenges that had been identified. As
concerns the need for nuclear data, of crucial importance for any revised layout
will be the optimization of the target/moderator/reflector assembly (geometry,
material,...) and the maximization of neutron flux. Carefully estimated also
have to be the production rates in GeV p-induced spallation on heavy targets and
the activities of target and structural materials during and after many years of
full power operation.
\begin{figure}[h]
\centering
\resizebox{0.95\columnwidth}{!}{%
\includegraphics{724_goldenbaum_f2_nuclide-decay2}
}
\caption{Decay curves of long lived nuclides in a Hg target which determine the
activity during operation and after 30 years of 5~MW operation.}
\label{fig:nuclide-decay2}
\end{figure}
Figure \ref{fig:nuclide-decay2} shows, as an example for a Hg target, the
typical decay curves of individual nuclides during operation and after 30 years
of 5~MW operation under ESS conditions. The tritium activity is the most
abundant up to $t=10$ years, while the $^{194}Hg$ and $^{194}Au$ activities
dominate up to 1000 years; as long lived isotopes these are at the same time
the nuclides of largest concern.
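The decay-curve picture in the figure is a superposition of exponentials $A_i(0)e^{-\lambda_i t}$. The sketch below uses standard literature half-lives for $^3$H and $^{194}Hg$; the relative activities at shutdown are arbitrary illustrative numbers, not the ESS inventory.

```python
import math

# Superposition-of-exponentials sketch: half-lives are standard values,
# the relative activities at shutdown are arbitrary illustrative numbers.
halflives_y = {"H-3": 12.3, "Hg-194": 444.0}   # years
A0 = {"H-3": 1.0, "Hg-194": 0.05}              # assumed relative activities

def activity(nuclide, t_years):
    lam = math.log(2) / halflives_y[nuclide]
    return A0[nuclide] * math.exp(-lam * t_years)

tritium_dominates_early = activity("H-3", 5) > activity("Hg-194", 5)
hg194_dominates_late = activity("H-3", 200) < activity("Hg-194", 200)
```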
\section{Nuclear data EU-projects}
Already in the fifth framework program, projects aiming at new nuclear data
measurements and evaluations as e.g. HINDAS (High and Intermediate energy
Nuclear Data for Accelerator-driven Systems) were supported by the EU. Within
the three years program HINDAS provided a wealth of new nuclear reaction cross
sections in the energy range between 20 MeV and 2~GeV on three elements of
crucial importance for ADS systems: Pb as a target element, U as an actinide and
Fe as a shielding element. As a natural extension or successor of the FP5 HINDAS
project, in FP6, the project EUROTRANS (EUROpean Research Programme for the
TRANSmutation of high level nuclear waste in an accelerator driven system) has
been launched. Due to the lack of progress in implementing the radioactive waste
disposal in geological repositories, the Partitioning and Transmutation (P\&T)
strategy has been put forward. In addition, P\&T will both contribute to the
sustainability of nuclear energy in those countries that pursue this source of
energy and assist in combating global warming.
In the Euratom Sixth Framework Program (FP6), one specific targeted research
project on the study of the impact of P\&T (RED-IMPACT) and two integrated
projects: (i) EUROPART (partitioning) and (ii) EUROTRANS (transmutation) have
been launched. EUROTRANS benefits not only from the technological developments
and scientific progress achieved in Europe during FP5, but also from worldwide
co-operation (OECD/NEA, IAEA, USA, Japan, ISTC etc.) in the field of P\&T. The
FP5 R\&D results in the areas of preliminary design studies of an experimental
ADS, low-power coupling of an accelerator to the Masurca zero power reactor made
sub-critical for the purpose of the MUSE programme, studies on transmutation
fuels, heavy liquid metal technology and high-energy nuclear data are
available as a basis for the advancement of this Integrated Project (IP). The
EUROTRANS project is structured into one management and five technical Domains
(DM) that are further subdivided into work packages and tasks.
\section{Selected nuclear data (NUDATRA)}
In the following the emphasis will be on EUROTRANS - DM5 NUDATRA (Nuclear data
for transmutation of nuclear waste). The goal of this domain is to improve
nuclear data evaluated files and models which involves sensitivity analysis and
validation of simulation tools, low and intermediate energy nuclear data
measurements, nuclear data libraries evaluation at low and medium energies, and
high energy experiments and modeling. In the following the focus is on NUDATRA
WP5.4---High energy experiments and modeling. This workpackage aims at the
investigation of:
\begin{itemize}
\item pA (spallation) reactions in the GeV regime
\item data measured from exclusive experiments for testing, validating and
developing theoretical models
\item double differential cross sections (DDXS) $d\sigma/dEd\Omega$ of light
charged particles (LCP=p,d,t,$^3$He,$^4$He,...) and intermediate mass fragments
(IMFs, $Z\le 16$) in spallation and fragmentation p-induced reactions
(0.1-2.5~GeV, C to Au)
\item reaction mechanism of pN reactions in terms of time scales, simultaneous
or sequential emission of IMFs, origin of pre-equilibrium and evaporation
processes
\end{itemize}
\subsection{Light charged particle and IMF production}
As an example for light charged particle production, double differential energy
spectra of $^{1,2,3}$H and $^{3,4}$He ejectiles following 1.2~GeV p-induced
reactions on a Ta target, as measured by the NESSI collaboration at
COSY-J\"ulich, are shown for different angles with respect to the incident proton in
fig.\ref{fig:fig7a}.
\begin{figure}[h]
\centering
\resizebox{0.95\columnwidth}{!}{%
\includegraphics{724_goldenbaum_f3_fig7a}
}
\caption{Energy spectra of $^{1,2,3}$H and $^{3,4}$He for 1.2~GeV p+Ta. dots:
exp.~data, shaded hist.: calculated evap.~spectra, dashed hist.: pre-equilibrium
protons as calculated by INCL2.0; \protect\cite{her06}}
\label{fig:fig7a}
\end{figure}
The experimental data clearly feature two components, an evaporation component
dominant for all angles and at low kinetic energies and a high energy component
all the more pronounced the smaller the angle of the ejectile with respect to the
incident proton is. Here \cite{her06} for the theoretical description the
INCL2.0 \cite{cug97} intranuclear cascade code is coupled to the evaporation
code GEMINI \cite{cha88}. Only for protons both components can be well
described. Due to the lack of composite particle emission in the early phase of
the reaction in the INCL2.0 model, the high energy tails of the spectra for
d,t,$^{3,4}$He are not described by the calculations. The shape of the
calculated evaporation component (shaded yellow histogram in
fig.\ref{fig:fig7a}) however is well reflected also for composite particles.
\begin{figure}[h]
\centering
\resizebox{0.95\columnwidth}{!}{%
\includegraphics{724_goldenbaum_f4_fig8}
}
\caption{Angular distributions of $^{1,2,3}$H and $^{3,4}$He for 1.2~GeV p+Au.
symbols: exp.~data, lines calculation by INCL2.0+GEMINI; \protect\cite{her06}}
\label{fig:fig8}
\end{figure}
For 1.2~GeV p+Au in fig.\ref{fig:fig8}, the angular distribution of disentangled
evaporation (left panel) and pre-equilibrium (right panel) components are shown.
For all particle species the evaporation component exhibits an isotropic behaviour, while
more directly emitted particles show larger forward/backward asymmetry. Note
that for pre-equilibrium protons the angular dependence is well described in the
INCL2.0 model. It would certainly be worthwhile to compare the current experimental
data \cite{her06} with the latest version of INCL4.3 \cite{bou04} including a
coalescence formalism allowing for the emission of composite particles
(d,t,$^{3,4}$He) in the early phase of the reaction.
\begin{figure}[h]
\centering
\resizebox{0.95\columnwidth}{!}{%
\includegraphics{724_goldenbaum_f5_fig16}
}
\caption{Production cross section of $^{7,9,10}$Be isotopes for 1.2~GeV p+X.
$\bullet$: NESSI \protect\cite{her06}, $\star$: R.~Michel \protect\cite{mic95}
data, lines: calculation by INCL2.0+GEMINI}
\label{fig:fig16}
\end{figure}
The production cross sections of $^{7,9,10}$Be isotopes for 1.2~GeV protons on
different targets (C to Au) as well as the total $\sigma_{\textrm{Be}}$ are
shown in fig.\ref{fig:fig16}. The production cross sections of the individual
isotopes do not depend strongly on the target charge $Z$. When looking more
carefully at the energy spectra of IMFs (not shown in this contribution), once
more, as expected, the
combination of INCL2.0+GEMINI fails to describe the high energy tails of the
energy spectra. Nevertheless in fig.\ref{fig:fig16} the calculated angle and
energy integrated production cross sections agree generally rather well with the
NESSI \cite{her06,mic95} data, because the pre-equilibrium component
contributes to the total cross section only at the percent level. The lines representing the
model prediction are reflecting the ejectiles coming from evaporation model
only, i.e.~GEMINI. The experimental data on $^7$Be and $^{10}$Be ejectiles
measured for low $Z$-targets by mass spectrometry \cite{mic95} coincide with the
systematics of the NESSI experiment \cite{her06}. In a similar presentation one
would observe the multiplicity/production cross sections of the neutron rich He
isotope ($^6$He) strongly increasing with atomic number $Z$ of the bombarded
target (not explicitly shown here)--a very similar behavior as the one which is
observed for the "neutron rich" triton. In contrast to the $^{3,4}$He isotopes,
for $^6$He the INCL2.0+GEMINI calculations {\em overestimate} the experimental
results of Herbach et al.\cite{her06} by approx.~30\%.
The international collaboration PISA (Proton Induced SpAllation) \cite{pisa-web,gol05,bar03,bub04} is aiming at a physics program quite similar to that of NESSI, however with a completely different setup and at an internal beam location. At the internal beam of COSY the investigation of reactions induced by protons on thin targets (50-200 $\mu$g/cm$^2$) makes it possible to obtain cross sections without the absorption and energy loss involved in the propagation of reaction products through the target material. The multiple circulation of the beam in the COSY ring is used to compensate for the small reaction rate of beam protons with the thin targets. The advantages are higher statistics and more precise information on the very tails of the double differential energy spectra.
Isotope separation was achieved by combining the information from multi-channel plates (time-of-flight), silicon detector telescopes and Bragg curve spectroscopy (energy deposited inside Bragg curve detectors), allowing for the separation of the following isotopes: $^6$Li, $^7$Li, $^8$Li; $^7$Be, $^9$Be, $^{10}$Be; $^{10}$B, $^{11}$B; $^{11}$C, $^{12}$C, $^{13}$C, $^{14}$C; and $^{13}$N, $^{14}$N \cite{bub04}. Measurements of these cross sections
are important for providing benchmark data in the GeV incident p- energy range, understanding the complex reaction mechanism itself and testing the reliability of physical models describing the fast intranuclear cascade (INC) phase as well as the subsequent statistical decay from an equilibrated or thermalized hot nucleus. As already mentioned, a particular focus is on developing new models for the description of highly energetic composite particles \cite{bou04}.
\begin{figure}[h]
\centering
\resizebox{0.95\columnwidth}{!}{%
\includegraphics{724_goldenbaum_f6_he-prod}
}
\caption{Production cross section of $^{3,4,6}$He isotopes as a function of incident proton beam energy. symbols: NESSI,Hannover,SPALADIN,PISA data, curves: calculation by INCL4.3+GEMINI/ABLA}
\label{fig:he-prod}
\end{figure}
As a function of incident proton beam energy the He-production cross sections on
Fe measured by NESSI\cite{her06}, Hannover\cite{mic95}, SPALADIN\cite{gen06},
and PISA\cite{pis07} are compiled in fig.~\ref{fig:he-prod}. The latest data
points of SPALADIN at 1~GeV and PISA at 175~MeV are also included. The SPALADIN
result obtained in inverse kinematics of Fe on p at GSI shows a value slightly
above the NESSI data, but is definitely still smaller than the systematics of
R.~Michel et al.\cite{mic95}. The data shown here for PISA are for a Ni target,
but the comparison is legitimate, because Fe and Ni are very close in terms of
atomic number. Note, that for the PISA data \cite{pis07} the cross sections for
the individual $^{3,4,6}$He isotopes are given at 175~MeV. The Monte Carlo
calculation getting closest to the available experimental He data is the
INCL4-Clus-GEMINI version (dashed line in fig.~\ref{fig:he-prod}), which
accounts, via a coalescence approach, for clusters (here composite He particles)
emitted in the first fast phase of the reaction. The two solid lines in
fig.~\ref{fig:he-prod} take into account only the He particles being emitted
during the slow evaporation phase and therefore as expected the abundance of
production cross sections is underestimated in INCL4+ABLA or INCL4+GEMINI,
respectively.
Of great value and particular interest are the measurements performed by mass spectrometry \cite{mic95,ley05}. The authors provide excitation functions in the whole energy range of interest; however, in particular for light targets, the He production cross sections measured by the two experimental methods typically do not coincide. These discrepancies for light targets are not yet understood. The huge amount of data collected for proton induced reactions will be valuable for the identification of deficiencies of existing INC/evaporation codes.
\subsection{Neutron production}
Of significant interest for a wide range of applications and fundamental
research, in particular at the crux of spallation neutron sources, transmutation
of nuclear waste in accelerator driven systems \cite{nif01}, and shielding
issues are also neutron production double differential cross sections in GeV
proton-induced spallation reactions. Although generally best described by
INC+evaporation codes, neutrons are more difficult to detect than protons or
LCP. Experimental double differential neutron production spectra represent a
valuable observable also for validating new model developments or improvements
\cite{bou04,cug97b,dua07,bou02}. It is also interesting to look at neutron
multiplicities $M_n$, global properties of neutron spectra which are not easily
revealed by their inspection. An extensive overview on the observable $M_n$ for
both thin and thick (ejectiles induce secondary reactions) targets is compiled
in Refs.~\cite{bou02,fil01,let00,hil98}.
\section{Conclusion}
It has been shown that a series of large scale accelerator facilities are currently planned or even under construction in Europe. The EU financially supports activities and provides within framework programs several instruments
for pushing forward the realization of such facilities. The spectrum of nuclear data needed for accelerator applications is quite broad and complex, and certainly for the realization of facilities at the border of today's technology, new nuclear data are indispensable for estimates of radiation damage by displacements or embrittlement.
Some selected aspects within the Integrated Project EUROTRANS domain NUDATRA have been presented. A few experiments have been consulted to validate models with regard to reaction cross sections or reaction probabilities, charged particle production cross sections and angular and energy distributions for GeV proton induced reactions on various thin targets. The PISA experiment, for example, has been shown to be able to measure the products of proton--nucleus collisions with $Z$-identification up to at least $Z=16$ and isotope identification up to masses of $A=13$--$14$.
A comprehensive comparison not only of light charged particles, but also of IMF particles with model predictions is in progress. The high energy experiments presented here therefore provide an important set of benchmark data for the development and test of reliable new models capable of describing the emission of the high energy component of composite particles.
\begin{acknowledgement}
The author acknowledges gratefully the support of the European Community-Research Infrastructure Activity under FP6 "Structuring the European Research Area" programme (CARE-BENE, contract number RII3-CT-2003-506395 and HadronPhysics, contract number RII3-CT-2004-506078). The NESSI/PISA collaboration appreciates the financial support of the European Commission through the FP6 IP-EUROTRANS FI6W-CT-2004-516520.
\end{acknowledgement}
\section{Introduction}
There is a great need for increased accuracy in numerical simulations involving turbulent flows of magnetized fluids in fields varying from engineering to astrophysics. In astrophysics, in particular, compressible magnetohydrodynamic (MHD) turbulence is an important ingredient in the solution of outstanding problems on many scales such as the generation and sustainment of galactic and super-galactic scale magnetic fields\cite{Subramanian2006, Cho2014, Latif2013}; the detailed process of star formation, including self-regulation and fragmentation\cite{Low2004, Price2008, Hennebelle2008}; stellar convection in the interior and stellar atmospheres \cite{Balbus1998}; accretion and protoplanetary discs, stellar ejecta, e.g. jets, winds, outflows\cite{Biskamp2003, Pudritz2007}; the dynamics of the solar tachocline, the solar wind and the solar corona
\cite{Priest2014, Zhou2004, Mangeney1991, Chernyshov2010, Bruno2013}. The dynamical range of these phenomena is usually much larger than what is computationally tractable. Numerically, this translates to unphysical dissipation and turbulence dynamics due to the limited resolution. For example, in finite-volume numerical schemes \REV{it} leads to enhanced dissipation. In large eddy simulations (LES)\cite{Sagaut2006, Sagaut2009, Miesch2015, Schmidt2015} this problem is tackled by directly solving only the evolution equations for the resolved fields. The \REV{contribution} of the small under- and unresolved scales (i.e. the scales which are badly contaminated by numerical noise or simply unrepresented) on them has to be incorporated \REV{via explicit modeling}.
Formally these scales are identified by the introduction of a finite resolution operator, in effect a low-pass filter. Large eddy simulations are typically used with grid-based numerical schemes, e.g. \REV{based on finite-differences or finite-volumes}. As such the grid-scale can be taken to be the filter scale and hence the terms responsible for the small-scale effects are known as subgrid-scale (SGS) terms.
The magnetohydrodynamic LES equations are obtained by applying a finite resolution operator to the MHD equations. It can be shown that this operator can be expressed as a convolution with a low-pass filter kernel. There are several comprehensive reviews of the formalism and its
application to hydrodynamics\cite{Sagaut2006, Sagaut2009, Schmidt2015} and MHD \cite{Chernyshov2014}.
Applying the formalism with a static, homogeneous and isotropic kernel $G$ with a constant grid-scale (which can be used to represent the commonly used grid-based numerical schemes in physical or spectral space) under periodic boundary conditions to the ideal MHD equations results in the following equations for the large-scale fields:
\begin{subequations}
\label{eq: les}
\begin{align}
\label{eq: les mass}
\pd{\rres}{t}+\div {\rres \fav{\v{u}}} = 0,\\
\label{eq: les mom}
\pd{\flt{\rho} \fav{\v{u}}}{t}
+ \div{\flt{\rho} \fav{\v{u}} \otimes \fav{\v{u}}
- \flt{\v{B}} \otimes \flt{\v{B}}}
+ \grad{\pres + \frac{\flt{B}^2}{2}} =&- \nabla \cdot \tau,\\
\label{eq: les ind}
\pd{\flt{\v{B}}}{t} - \curl\bra{\fav{\v{u}} \times \flt{\v{B}}} =
\curl \emf-.
\end{align}
\end{subequations}
Here a large scale, filtered field is denoted by an overbar. For instance, the large scale component of the pressure $P$ is given by a convolution with the filter kernel $G$, i.e. $\filt{P}=G\ast P$ and similarly for the filtered density $\rres$ and the magnetic field $\bres-$. The treatment of the pressure term is beyond the scope of this work due to the wide array of possible equations of state used to close the MHD system. Nevertheless, briefly, if the equation of state is linear in the primary fields (e.g. in isothermal conditions), the pressure does not lead to any SGS contributions.\REV{\newline}
The tilde denotes a mass-weighted (also known as Favre) filtered field\cite{Favre1983}, i.e.~the Favre-filtered velocity field $\ures=\flt{\rho\v{u}}/\rres$.
\REV{Using $\ures$ as a primary quantity} precludes the introduction of SGS terms in the mass conservation equation\REV{}. Additionally, it fits well with physical-space-based compressible schemes, where often the momentum $\rho \v{u}$ is evolved as the primary quantity instead of the velocity $\v{u}$.
The momentum and induction equations contain two new, SGS terms, $\div \tau$ and $\curl \emf-$, which will occupy the focus of this article. They are simply the commutators between the finite resolution operator and the nonlinearities of the respective MHD equations. \REV{Thus} they carry information about the interactions across the filter scale.\REV{} Analytically they are given by
\begin{align}
\label{eq:tau_emf_def}
\nonumber \emf- &= \flt{\v{u}\times\v{B}} - \ures- \times\bres- ,\; \mathrm{and}\\\nonumber
\tau[][ij] &= \tau[u][ij]-\tau[b][ij]+\half\tau[b][kk]\dirac\;\;\mathrm{with}\\
\tau[u][ij]&= \flt{\rho} \bra{\fav{u_i u_j} - \fav{u}_i \fav{u}_j},\;\; \tau[b][ij]= \bra{\flt{B_i B_j} - \flt{B}_i~\flt{B}_j},
\end{align}
where the Einstein summation convention is assumed.
The tensor $\tau$ is known as the SGS stress and can be decomposed into kinetic and magnetic components, SGS Reynolds stress $\tau[u]$ and SGS Maxwell stress $\tau[b]$ respectively. The (pseudo-)vector $\emf-$ is known as the electromotive force.
\REV{They carry information about the subgrid-scales via the terms $\flt{\v{u}\times\v{B}}$, $\fav{u_i u_j}$, and $\flt{B_i B_j}$ and thus cannot be explicitly expressed only in terms of large scale fields. This renders} the
system of equations (\ref{eq: les}) unclosed. The evolution equations of the SGS terms\cite{Sagaut2006} involve new, higher order unknown terms.
This continues to build an infinite hierarchy. This is the LES aspect of the well-known turbulence closure problem.
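The definitions in \cref{eq:tau_emf_def} can be evaluated \aprio whenever fully resolved data are available. The following minimal Python sketch computes $\tau^u$, $\tau^b$ and the EMF on a small periodic box; the spectral Gaussian filter, the synthetic fields and all names are illustrative assumptions, not part of the formalism:

```python
import numpy as np

# A priori evaluation of the unclosed SGS terms on a small periodic box.
# The finite resolution operator G* is a Gaussian of std sigma, applied spectrally.
N, L, sigma = 32, 2*np.pi, 0.4
k1 = 2*np.pi*np.fft.fftfreq(N, d=L/N)
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
Ghat = np.exp(-0.5*sigma**2*(kx**2 + ky**2 + kz**2))   # transfer function

def filt(f):
    """Gaussian low-pass filter, standing in for the finite resolution operator."""
    return np.real(np.fft.ifftn(Ghat*np.fft.fftn(f)))

rng = np.random.default_rng(1)
smooth = lambda: filt(rng.standard_normal((N, N, N)))  # band-limited synthetic field
rho = np.exp(smooth())                                 # positive density
u = np.array([smooth() for _ in range(3)])
B = np.array([smooth() for _ in range(3)])

rho_f = filt(rho)
u_fav = np.array([filt(rho*u[i]) for i in range(3)])/rho_f  # Favre-filtered velocity
B_f = np.array([filt(B[i]) for i in range(3)])

# SGS Reynolds stress, SGS Maxwell stress and electromotive force
tau_u = np.array([[filt(rho*u[i]*u[j]) - rho_f*u_fav[i]*u_fav[j]
                   for j in range(3)] for i in range(3)])
tau_b = np.array([[filt(B[i]*B[j]) - B_f[i]*B_f[j]
                   for j in range(3)] for i in range(3)])
uxB = np.cross(u, B, axis=0)
emf = np.array([filt(uxB[i]) for i in range(3)]) - np.cross(u_fav, B_f, axis=0)
```

Both stresses come out symmetric by construction, and because the Gaussian kernel is positive the traces of $\tau^u$ and $\tau^b$, i.e. twice the SGS kinetic and magnetic energy densities, are non-negative pointwise.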
\REV{\newline}
\REV{The resolved, i.e large scale, energies and cross-helicity} are defined as
\begin{align}
&\ekinres = \half \rres \ures-^2, \; \emagres = \half \bres-^2,\;\eres = \ekinres+\emagres\, ,\\ \nonumber
& \textrm{and }W_\res = \ures-\cdot \bres-.\;
\end{align}
Their evolution equations are obtained in the classical manner from the corresponding
primary LES equations\cite{Vlaykov2015}. For ideal MHD they can be written as
\begin{align}
\label{eq: kinenres}\nonumber
\pd{}{t}\ekinres +
\div{\ures-\ekinres}
+\ures- \cdot \bres- \times \jres-
+ \ures- \cdot\grad \pres &=\\- \ures-\cdot\bra{\nabla \cdot
\tau},\\
\label{eq: magenres}\nonumber
\pd{}{t}\emagres - \bres-\cdot\curl\bra{\ures- \times
\bres-} &=\\
\bres- \cdot \curl \emf-,\\
\label{eq: totenres}\nonumber
\pd{\eres}{t} +
\div{\ures-\ekinres+
2\ures- \emagres-
\bres- W_\res}+ \ures- \cdot \grad \flt{P} &=\\ \bres- \cdot \curl \emf- - \ures-\cdot\bra{\nabla \cdot
\tau} ,\\
\label{eq: crhelres}\nonumber
\pd{}{t}W_\res+ \div{ \ures- W_\res - \frac{\bres-}{\rres} \ekinres}
+ \frac{\bres-}{\rres}\cdot \grad \pres &= \\
\ures- \cdot \curl\emf- - \frac{\bres-}{\rres}
\cdot \bra{\div \tau},
\end{align}
where $\jres-=\nabla \times \bres-$ is the resolved current.
Although the total energy and cross-helicity are ideal MHD invariants, their resolved counterparts, as defined above, are not,
due to the SGS terms on the right hand side of \cref{eq: totenres,eq: crhelres}. \REV{The equations show that} the SGS stress and EMF encode the entire transfer of energy and cross-helicity across the filter scale \REV{and truncating the SGS hierarchy at the level of $\tau$ and $\emf-$ closes these equations as well}.
Various approaches have been developed to address the closure problem
for hydrodynamics\cite{Sagaut2009, Sagaut2006}, in astrophysical settings\cite{Schmidt2015}. Several models have also been extended to the case of magnetized fluids\cite{Muller2002, Muller2002a, Pietarila2009}, some of them taking into account compressibility as well\cite{Chernyshov2014, Grete2015}.
They can be separated heuristically into structural and functional ones.
\REV{Functional closures focus on the effect of the SGS terms on the resolved scales and are thus largely phenomenological. For instance, the eddy-viscosity models\cite{Chernyshov2014} address the anomalous energy dissipation due to turbulence, while dynamo models\cite{Widmer2015,Yokoi2013} address the generation and amplification of magnetic fields.
Structural models try to mimic some aspect of the structure of the SGS terms, expecting that the desired effects on the large scale will follow automatically. Thus they largely rely on the robustness of these aspects. In the self-similarity closures\cite{Bardina1983,Chernyshov2014} for example, the main assumption is the self-similarity of turbulence in the inertial range.
In that context, functional models are useful in situations in which the effect of the unresolved scales is well understood and quantified. Since in practice this is rarely the case for compressible MHD, and in the absence of extensive experimental data for calibration and validation,}
we proceed with the derivation of a nonlinear structural closure, which is based on the properties of the finite resolution operator, rather than turbulence itself. Thus the MHD turbulence dynamics is not required to obey any strong assumptions, like scale-similarity, existence of an inertial range, energy cascade etc. The resulting closure is closely related to a previously \aprio validated one\cite{Grete2015}, but \REV{includes} additional compressibility effects. \REV{The present paper focuses on the derivation of the new compressible MHD closure, the analytic description of its scope of applicability and energy dissipation properties.
A numerical validation of the closure is performed in} an accompanying work\cite{Grete2016} by \aprio comparison to well-resolved numerical data\REV{, where} it is found to outperform all closures with which it has been compared.
\section{Approximate deconvolution}
As is usual in LES theory, the presented closure has its origins in incompressible hydrodynamics. In particular, it is \REV{a self-consistent extension of } the Yeo-Bedford \REV{(YB)} expansions\cite{Yeo1987, Yeo1993} \REV{as applied to compressible MHD.}
\REV{Closures of this family have} been recently applied to incompressible\cite{Balarac2008, Fabre2011, Balarac2013} and compressible (supersonic) MHD\cite{Grete2015, Vlaykov2015} turbulence with encouraging results. The same method has also been used to model the transport of a passive scalar \cite{Balarac2013}.
Here, we focus on the closure derivation and extend it to include compressibility effects which have so far not been accounted for.
\REV{For clarity, this section summarizes the original derivation\cite{Yeo1987} as applied to a Gaussian filter kernel and the incompressible MHD SGS terms.
The Gaussian kernel can be represented by its Fourier transform, i.e. transfer function $\widehat{G}$ given by
\begin{equation}
\label{eq: gauss ker}
\widehat{G}(k)=\exp\bra{-\Delta^2 k^2/(4\gamma)},
\end{equation}
with wavenumber $k$ and filter scale $\Delta$. It is infinitely differentiable, which renders it particularly suitable for analytical manipulation. It is also positive, and therefore signature preserving. Thus under its action the SGS counterparts of positive definite quantities like energy are also positive definite\cite{Sagaut2006}. Furthermore, by setting the width parameter $\gamma=6$, its first and second order moments match those of a
box filter with the same filter scale $\Delta$.
The main idea of the YB expansion is to compute an approximation of the inverse filtering operator based on gradient expansion of the filter kernel $G$.
This amounts to computing an approximation of the inverse Fourier transform of $1/\widehat{G}$.
}The first step is to perform a Taylor expansion of the transfer function and its inverse in terms of the filter scale $\Delta$, i.e.
\begin{align}
\widehat{G}(\v{k}) = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\bra{\frac{\Delta^2}{4\gamma}\v{k}^2}^n,\\
\frac{1}{\widehat{G}(\v{k})} = \sum_{n=0}^\infty \frac{1}{n!}\bra{\frac{\Delta^2}{4\gamma}\v{k}^2}^n .
\end{align}
\REV{Applying the expansions to the} test fields $\widehat{f}$ and $\widehat{\filt{f}}$ respectively, \REV{followed by} an inverse Fourier transformation \REV{yields} infinite series representations of the filter operator and its inverse in terms of gradient operators acting on the test fields,
\begin{align}
\label{eq: backward expansion}
\filt{f}= G \ast f = \sum_{n=0}^\infty \frac{1}{n!}\bra{\frac{\Delta^2}{4\gamma}\grad^2}^n f,\\
\label{eq: forward expansion}
f = G^{-1} \ast \filt{f} = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\bra{\frac{\Delta^2}{4\gamma}\grad^2}^n \filt{f}.
\end{align}
They are absolutely convergent and formally accurate at all orders, since the Gaussian kernel is infinitely differentiable and with unbounded support.
In fact, it has been found\cite{Pruett2001} that the series \REV{given in} \cref{eq: backward expansion} converges for all canonical filters, and more generally, symmetry of the filtering kernel and non-negativity of its transfer function are sufficient conditions for its convergence for a periodic band-limited field $f$. (The last condition is trivial\REV{ly satisfied} in any numerical simulation.) It has also been suggested\cite{Pruett2001} that qualitatively the convergence rate tends to decrease as the dissipative strength of the filter increases. In the case of the Gaussian filter, the same results hold for the forward expansion \cref{eq: forward expansion}, as it differs from \cref{eq: backward expansion} only by an alternating sign.
To proceed note that the unknown components of the SGS stresses and \REV{the} EMF are of the form $\flt{fg}$. Applying \cref{eq: backward expansion} to such an expression results in a series in terms of $(fg)$. As it is absolutely convergent, \cref{eq: forward expansion} \REV{can be applied}
separately to each $f$ and $g$ term of the series.
The result \REV{can be simplified to}
\begin{align}
\label{eq: YB formulae}
\flt{fg} =&
\flt{f}\flt{g}+2a\flt{f}_{,k}\flt{g}_{,k}+
\frac{1}{2!}\bra{2a}^2 \flt{f}_{,kl}\flt{g}_{,kl}+\\ \nonumber
& \frac{1}{3!}\bra{2a}^3 \flt{f}_{,klm}\flt{g}_{,klm}+O\bra{a^4\grad^8}\REV{,}
\end{align}
\REV{as given in eq.~(5.21) of Yeo (1987)\cite{Yeo1987}.}
Here a comma \REV{is used} to represent differentiation with respect to a co-ordinate
and $a = \Delta^2/\bra{4 \gamma}$.
The coefficients in the expansions are given in terms of moments of the transfer function and its inverse.
This relationship comes from the orthogonality of the terms in the Fourier expansion and thus holds for any filter kernel for which the expansion exists.
There is a closed form expression\cite{Carati2001} for the coefficients in \cref{eq: YB formulae} for a
symmetric filter kernel $G$ with infinitely differentiable transfer function \REV{-- they} are given by
the Taylor coefficients of the function $F(f,g) = G(-i(f+g))/(G(-if) G(-ig))$.
Moreover, since \REV{any} symmetric filter has a real transfer function, only the even coefficients are non-zero.
This symmetry has \REV{a fundamental} impact on the form of the terms in the expansion as well, namely each field
is differentiated at most once with respect to a co-ordinate.
Recall that for $\gamma=6$ the Gaussian and box filter kernels have identical first and second moments. Therefore with this parameter choice \cref{eq: YB formulae} is also valid for a box filter up to second order.
Furthermore, since all moments of a Gaussian function can be expressed in terms of its second order moment, here $(2a)$, it is the only parameter which
can appear in \cref{eq: YB formulae}.
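These relations are straightforward to check numerically. The following one-dimensional sketch (fields and parameters are illustrative) applies a spectral Gaussian filter, for which \cref{eq: gauss ker} gives $2a=\sigma^2$ in terms of the kernel standard deviation $\sigma$, and compares the expansion \cref{eq: YB formulae} truncated at first and at second order against the exact left-hand side:

```python
import numpy as np

# 1D spectral check of the gradient expansion of filtered products.
# For a Gaussian kernel of std sigma the expansion parameter is 2a = sigma^2.
N, L, sigma = 256, 2*np.pi, 0.1
x = np.arange(N)*L/N
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
Ghat = np.exp(-0.5*sigma**2*k**2)

filt = lambda f: np.real(np.fft.ifft(Ghat*np.fft.fft(f)))   # G * f
ddx = lambda f: np.real(np.fft.ifft(1j*k*np.fft.fft(f)))    # spectral derivative

f, g = np.sin(x), np.cos(2*x)                    # illustrative test fields
exact = filt(f*g) - filt(f)*filt(g)              # unclosed correlation
first = sigma**2 * ddx(filt(f))*ddx(filt(g))     # 2a f,k g,k term
second = first + 0.5*sigma**4 * ddx(ddx(filt(f)))*ddx(ddx(filt(g)))

err1 = np.max(np.abs(exact - first)) / np.max(np.abs(exact))
err2 = np.max(np.abs(exact - second)) / np.max(np.abs(exact))
```

The residual of the first-order truncation is of relative order $a$, and including the second-order term reduces it by a further factor of order $a$, in line with the formal accuracy of the expansion.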
Applying \cref{eq: YB formulae} to the SGS terms in the incompressible MHD equations is sufficient to completely close them,
\begin{align}
\label{eq: YB incompressible sgs}
\flt{u_i u_j} - \flt{u}_i \flt{u}_j &= 2a\flt{u}_{i,k}
\flt{u}_{j,k} ,\\ \nonumber
\flt{B_i B_j}- \bres_i\bres_j &= 2a \bres_{i,k} \bres_{j,k},\\ \nonumber
\bra{\flt{\v{u}\times\v{B}}-\flt{\v{u}}\times \flt{\v{B}}}_i &= 2a\levi
\flt{u}_{j,l}\flt{B}_{k,l}.
\end{align}
It should be noted that the resulting closures have been reached by alternative routes in hydrodynamic LES.
The tensor-diffusivity models\cite{Clark1979, Leonard1974, Vreman1996}, for instance, use Taylor expansions of the SGS terms
with respect to the turbulent fluctuations (e.g.~$\v{u}'=\v{u}-\ures-$) or the entire (unfiltered) fields (e.g.~$\v{u}$).
These \REV{derivations} however are questionable as they require smoothness of the small scales\cite{Love1980}.
Another alternative, originally designed for image processing \cite{vanCittert1931}, is given by approximate deconvolution closures\cite{Stolz1999, Stolz2001, Stolz2001a, Stolz2003, Stolz2005, Stolz2007, Sagaut2009}.
\REV{They are} again based on the truncation of an infinite series to reconstruct the inverse of the filtering operator. However, in this approach the series is not necessarily convergent and truncating at the optimal order is \REV{critical}. The results of both approaches for a Gaussian filter agree
with \cref{eq: YB formulae} up to second order\cite{Sagaut2009}.
The different motivations and derivation are revealed only at higher orders.
\section{Compressible extensions}
\REV{To apply the} presented derivation self-consistently to the \REV{compressible} Reynolds SGS stress and EMF, as defined in \cref{eq:tau_emf_def}, the compressibility effects on the mass-weighted large scale velocity \REV{have to be taken into account}. The issue can be addressed from several viewpoints.
On the one hand, one can dispense with the mass-weighted filtering operator altogether, and re-substitute $\ffilt{f}\,\rres=\filt{f \rho}$ in the relevant
SGS terms. This requires that an additional SGS term $\flt{\rho u_i} -\rres \, \flt{u}_i $ is introduced in the continuity equation,
and that the EMF and the Reynolds SGS stress are re-defined.
The complexity of the Reynolds SGS stress $\tu$ \REV{is} formally increased, as it now contains an unclosed product of three fields, i.e. $\filt{\rho u_i u_j}$.
Nevertheless, the derivation outlined above still holds. Applying \cref{eq: backward expansion,eq: forward expansion} to a general term of third order leads to \REV{(as given by eq.~(5.23) of Yeo (1987)\cite{Yeo1987})}
\begin{align}
\label{eq: YB triple}\nonumber
\flt{fgh} =& \flt{f} \flt{g} \flt{h} +
2a\bra{ \flt{f}_{,k} \flt{g}_{,k} \flt{h}+
\flt{f}_{,k} \flt{g} \flt{h}_{,k}+
\flt{f} \flt{g}_{,k} \flt{h}_{,k}}+\\&\nonumber
\frac{1}{2!}\bra{2a}^2 \left(\flt{f}_{,kl} \flt{g}_{,kl} \flt{h}+
\flt{f}_{,kl} \flt{g} \flt{h}_{,kl}+
\flt{f} \flt{g}_{,kl} \flt{h}_{,kl}+\right.\\&\nonumber
\left. 2\flt{f}_{,k} \flt{g}_{,kl} \flt{h}_{,l}+
2\flt{f}_{,k} \flt{g}_{,l} \flt{h}_{,kl}+
2\flt{f}_{,kl} \flt{g}_{,k} \flt{h}_{,l}
\right)+\\& +O\bra{a^3\grad^6}.
\end{align}
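As for the double products, the first-order truncation of this triple-product expansion can be checked with a short spectral sketch (again $2a=\sigma^2$ for a Gaussian kernel; the fields and parameters are illustrative):

```python
import numpy as np

# 1D spectral check of the triple-product expansion to first order in a.
N, L, sigma = 256, 2*np.pi, 0.1
x = np.arange(N)*L/N
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
Ghat = np.exp(-0.5*sigma**2*k**2)
filt = lambda f: np.real(np.fft.ifft(Ghat*np.fft.fft(f)))
ddx = lambda f: np.real(np.fft.ifft(1j*k*np.fft.fft(f)))

f, g, h = np.sin(x), np.cos(2*x), np.sin(3*x)
fb, gb, hb = filt(f), filt(g), filt(h)

exact = filt(f*g*h)
zeroth = fb*gb*hb
first = zeroth + sigma**2*(ddx(fb)*ddx(gb)*hb      # 2a (f,k g,k h + ...)
                           + ddx(fb)*gb*ddx(hb)
                           + fb*ddx(gb)*ddx(hb))

err0 = np.max(np.abs(exact - zeroth))
err1 = np.max(np.abs(exact - first))
```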
To first order in $a$ this technique leads to
the following results for the primary SGS terms\REV{:}
\begin{align}
\label{eq: YB incompressible tu}
\flt{\rho u_i} -\rres \, \flt{u}_i &= 2a \rres_{,k}\flt{u}_{i,k}\\ \nonumber
\flt{\rho u_i u_j} - \rres\, \flt{u}_i \flt{u}_j &= 2a \rres \,\flt{u}_{i,k}
\flt{u}_{j,k} +
2a \rres_{,k}\bra{\flt{u}_{i,k} \flt{u}_j+\flt{u}_i\flt{u}_{j,k}},\\ \nonumber
\flt{B_i B_j}- \bres_i\bres_j &= 2a \bres_{i,k} \bres_{j,k},\\ \nonumber
\bra{\flt{\v{u}\times\v{B}}-\flt{\v{u}}\times \flt{\v{B}}}_i &= 2a\levi
\flt{u}_{j,l}\flt{B}_{k,l}.
\end{align}
This constitutes a complete closure of the compressible MHD equations (barring pressure considerations).
This approach is applicable for numerical schemes which evolve the velocity field, because only directly filtered fields are present.
Even though such schemes are not frequently used to address highly compressible problems, such a model has been implemented in compressible hydrodynamics\cite{Bin2007}.
On the other hand, for applications to compressible codes which treat the momentum as a primary quantity, e.g. using finite volume schemes, one needs to take into account the mass-weighted filtering operator. For a field $f$ it is given by $\ffilt{f}=(G\ast(\rho f))/(G\ast\rho) $.
In the process of directly applying the outlined procedure to this operator, several fundamental challenges are encountered. The main obstacle is that since its filter kernel contains strongly fluctuating contributions (e.g. from the $G\ast \rho$ component), the Taylor expansion of its transfer function is not well-defined. Additionally,
the existence of the inverse transfer function is not assured over an extended interval in spectral space.
\subsection{Simple compressible extension}
The simplest hypothesis which circumvents the complications outlined above would be to assume that even if the derivation is not valid for compressible MHD, its result still holds, i.e. to apply the map
\begin{equation}
\overline{\v{u}} \rightarrow \tilde{\v{u}}
\label{eq: trivial map}
\end{equation}
to the incompressible closures \cref{eq: YB incompressible sgs}.
This would imply that the compressibility effects are implicitly taken into account by the change of operator.
Qualitatively, this approach could be motivated by invoking the \REV{reduction} of compressibility effects at smaller scales\cite{Erlebacher1992}, but ultimately it is the simplest compressibility extension of \cref{eq: YB incompressible sgs}.
In fact, \REV{a previous \aprio comparison\cite{Grete2015} with data from supersonic numerical simulations showed that this extension} yields consistently higher correlation with the data than the other tested classical closures. \REV{However},
while the results for the SGS stress were consistently high, the EMF closure exhibited comparatively larger scatter. \REV{This difference can be explained by the self-consistent derivation of compressibility effects which follows.}
\subsection{Primary compressible extension}
The goal is to obtain an expression of a simply filtered field in terms of the corresponding mass-weighted filtered field. Since mass-weighting applies to velocity-related fields, consider in particular $\ures-= \flt{\v{u}\rho}/\rres$.
Applying \cref{eq: YB formulae} to the right-hand side \REV{leads to}
\begin{equation}
\label{eq: utilde recurrence basis}
\ures_i = \flt{u}_i+2a y_{,k} \flt{u}_{i,k}+2a^2
\bra{y_{,kl}+y_{,k}y_{,l}}\flt{u}_{i,kl}+O\bra{a^3},
\end{equation}
where we denote for brevity the natural logarithm of the resolved density as $y= \ln \rres$. As \eqref{eq: utilde recurrence basis}
represents an absolutely convergent series, under the same conditions as the original expansion \cref{eq: backward expansion}, it can be rearranged to give
\begin{equation}
\label{eq: ubar recurrence basis}
\flt{u}_i =\ures_i-2a y_{,k} \flt{u}_{i,k}-2a^2
\bra{y_{,kl}+y_{,k}y_{,l}}\flt{u}_{i,kl}-O\bra{a^3}.
\end{equation}
To this we can apply a recurrence technique. To second order in $a$ it gives
\begin{align}
\label{eq: recurse2}
\flt{u}_i =&\ures_i-2a y_{,k}\ures_{i,k}-\\\nonumber &
2a^2\bra{\bra{y_{,kl}-y_{,k}y_{,l}}\ures_{i,kl}-
2y_{,k}y_{,kl}\ures_{i,l}}-O\bra{a^3}.
\end{align}
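A quick numerical sanity check of this recurrence, for an illustrative one-dimensional density and velocity and a spectral Gaussian filter with $2a=\sigma^2$, confirms that the first-order term removes most of the difference between $\flt{\v{u}}$ and $\fav{\v{u}}$:

```python
import numpy as np

# Check of the first-order recurrence relating simply filtered and
# Favre-filtered velocities (all fields and parameters illustrative).
N, L, sigma = 256, 2*np.pi, 0.1
x = np.arange(N)*L/N
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
Ghat = np.exp(-0.5*sigma**2*k**2)
filt = lambda f: np.real(np.fft.ifft(Ghat*np.fft.fft(f)))
ddx = lambda f: np.real(np.fft.ifft(1j*k*np.fft.fft(f)))

rho = 1.0 + 0.5*np.sin(x)            # positive density
u = np.sin(2*x)

u_bar = filt(u)                      # simply filtered velocity
u_fav = filt(rho*u)/filt(rho)        # Favre-filtered velocity
y = np.log(filt(rho))                # logarithm of the resolved density

err0 = np.max(np.abs(u_bar - u_fav))                               # zeroth order
err1 = np.max(np.abs(u_bar - (u_fav - sigma**2*ddx(y)*ddx(u_fav))))  # first order
```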
This expression, along with \cref{eq: YB formulae,eq: YB triple}, can be applied to the definition of the SGS terms, \cref{eq:tau_emf_def}, to obtain
\begin{align}
\label{eq: nonlin final tu}
\tu[][ij]&=2a\rres \ures_{i,k}\ures_{j,k}+\\ \nonumber &
2a^2\rres \bra{\ures_{i,kl}\ures_{j,kl}-2y_{,kl}\ures_{i,k}\ures_{j,l}}+O\bra{a^3},\\
\label{eq: nonlin compr emf}
\emf_i &= 2a\levi \bra{\ures_{j,l} \bres_{k,l}-
y_{,l}\ures_{j,l}\bres_{k}}+\\\nonumber&
2a^2\levi
\left(\ures_{j,lm}\bres_{k,lm}-2\bra{y_{,lm}\ures_{j,l}+y_{,l}\ures_{j,
lm}}\bres_{k,l}+\right.\\\nonumber&\left.
\bra{2y_{,l}y_{,lm}\ures_{j,m}+\bra{y_{,p}y_{,l}-y_{,pl}}\ures_{j,pl}}
\bres_k\right)+O\bra{a^3}.
\end{align}
As the Maxwell SGS stress is not directly affected by density variations, its closure is identical to the one from \cref{eq: YB incompressible sgs}.
Remarkably, to first order the compressibility effects on the Reynolds SGS stress are implicitly accounted for by the mass-weighted filtering itself.
This is a consequence of the symmetry of the Reynolds SGS stress tensor ($\tu[][ij]= \tu[][ji]$). \REV{Explicit d}ensity variations appear here only at second order and as
second order logarithmic derivatives. Therefore
only very strong compressibility is not captured by the simple compressibility extension implied by \cref{eq: trivial map}.
In contrast, in the EMF closure density variations appear already at first order,
and at second order they are much more extensive than for $\tu$.
This explains the different levels of success of the simple compressibility extension\cite{Grete2015}\REV{: terms} which account for compressibility effects \REV{are missing} in the EMF \REV{closure but not in the Reynolds SGS stress one}.
We note that combining the recurrence relation \cref{eq: recurse2} with expansions of the type of \cref{eq: YB formulae,eq: YB triple} allows \REV{the construction of} self-consistent closures for an SGS term of any type to any order. The SGS kinetic and magnetic energies for instance are given trivially as half the traces of the Reynolds or Maxwell SGS stress tensors, respectively. If we were to construct the
SGS cross-helicity $W_\sgs = \filt{\v{u} \cdot \v{B}} - \ures-\cdot\bres-$, \REV{e.g.} to gauge the correlation between kinetic and magnetic SGS effects, its closure to first order would
be given by
\begin{equation}
\label{eq: nonlin compr crhel}
W_\sgs= 2 a\bra{\ures_{i,j}\bres_{i,j} - \ures_{i,j}y_{,j} \bres_{i}}+ O(a^2).
\end{equation}
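The following sketch checks this first-order cross-helicity closure \aprio on synthetic band-limited fields (spectral Gaussian filter with $2a=\sigma^2$; the pre-smoothing scale, grid size and amplitudes are illustrative choices):

```python
import numpy as np

# A priori check of the first-order SGS cross-helicity closure on a periodic box.
N, L, sigma = 32, 2*np.pi, 0.25
k1 = 2*np.pi*np.fft.fftfreq(N, d=L/N)
K = np.meshgrid(k1, k1, k1, indexing='ij')
k2 = K[0]**2 + K[1]**2 + K[2]**2

def gauss(f, s):                        # spectral Gaussian filter of std s
    return np.real(np.fft.ifftn(np.exp(-0.5*s**2*k2)*np.fft.fftn(f)))

def d(f, axis):                         # spectral derivative along one axis
    return np.real(np.fft.ifftn(1j*K[axis]*np.fft.fftn(f)))

rng = np.random.default_rng(2)
smooth = lambda: gauss(rng.standard_normal((N, N, N)), 0.8)  # large-scale field
rho = np.exp(0.5*smooth())              # positive, mildly varying density
u = np.array([smooth() for _ in range(3)])
B = np.array([smooth() for _ in range(3)])

filt = lambda f: gauss(f, sigma)
rho_f = filt(rho)
u_fav = np.array([filt(rho*u[i]) for i in range(3)])/rho_f
B_f = np.array([filt(B[i]) for i in range(3)])
y = np.log(rho_f)

W_exact = filt(u[0]*B[0] + u[1]*B[1] + u[2]*B[2]) - (u_fav*B_f).sum(axis=0)
W_model = sigma**2*sum(d(u_fav[i], j)*(d(B_f[i], j) - d(y, j)*B_f[i])
                       for i in range(3) for j in range(3))

rel = np.linalg.norm(W_exact - W_model)/np.linalg.norm(W_exact)
corr = np.corrcoef(W_exact.ravel(), W_model.ravel())[0, 1]
```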
Retaining terms to first order in $a$ is expected to provide sufficient SGS information, as suggested by the previously reported results\cite{Grete2015,Balarac2008, Fabre2011, Balarac2013}.
Furthermore, the computational overhead of including such closures in an LES is minimal, as they can contain at most first order derivatives in large scale primary fields.
\subsection{Extension for the SGS derivatives}
Direct comparison of the outlined closures with the corresponding SGS terms
\REV{based on numerical data} reveals directly the probity of the method\cite{Grete2016}. However, for \apost application of the closures in LES simulations a further compressible effect needs to be considered.
The simple filtering operator is a convolution and as such commutes with differentiation; the mass-weighted filtering operator, however, does not.
This is critical since the SGS stress and EMF enter the evolution equations under a gradient.
For the purposes of this section, let $\widehat{f}$ denote the closure of an SGS term $f$ incorporating mass-weighted filtering. Then propagating the commutator between mass-weighted filtering and differentiation through the closure calculations \REV{above} yields the following additional contributions to the momentum and induction equations
\begin{align}
\label{eq: diff commutator}
\widehat{\partial_i \tu[][ij]} - \partial_i \widehat{\tu[][ij]} =&
2a \rres\bra{\ures_i\ures_{j,l} + \ures_j\ures_{i,l}}y_{,il},\\ \nonumber
\bra{ \widehat{\curl \emf-} - \curl \widehat{\emf-}}_i =& 2 a \levi \levi[klm] \ures_{l,p} \bres_m y_{,jp}.
\end{align}
These expressions show the difference between applying the closure procedure to the derivatives of the SGS terms and
taking derivatives of the respective closures. The additional corrections are expected to be important \REV{primarily} for very strong density variations, as they contain second derivatives in the logarithmic density. This can be also seen by comparing the expressions above to the ones obtained by differentiating \cref{eq: YB incompressible tu}. Furthermore, they are of leading order (in $a$) for the derivatives of both SGS terms and these are precisely the quantities which enter the LES evolution equations and affect the large scale dynamics.
Combining the two compressibility effects leads to significant cancellation of the first order terms in the EMF closure with a final result given by
\begin{equation}
\bra{\widehat{\curl \emf-}}_i = 2 a \levi \levi[klm] \bra{\bra{\ures_{l} \bres_{m}}_{,j} -\bra{\ures_{l,p} \bres_m}_{,j} y_{,p}}.
\end{equation}
For the Reynolds SGS stress, the final closure can be given as
\begin{equation}
\widehat{\partial_{i}\tu[][ij]}=2a\bra{\rres \ures_{i,k}\ures_{j,k}}_{,i} + 2a \rres\bra{\ures_i\ures_{j,l} + \ures_j\ures_{i,l}}y_{,il}.
\end{equation}
Once again, the SGS Maxwell stress closure is trivially derived from \REV{\cref{eq: YB incompressible sgs}}, as it does not contain any mass-weighted large scale fields.
The effects of the two types of compressibility corrections can be identified by different types of \aprio testing.
In fact, the validity of the compressible closures \REV{was} tested \aprio against a range of data from sub- to hypersonic turbulence simulations and benchmarked against a wide range of alternative closures\cite{Grete2016} with very positive results. In particular, we investigate their performance with respect to the resolved energy and cross-helicity dynamics (cf. \cref{eq: totenres,eq: crhelres}). The primary compressible closures \cref{eq: nonlin final tu,eq: nonlin compr emf} are validated by considering their effect on the spatially local (in the Eulerian sense) dynamics, i.e. on terms of the form $(\tu \cdot \grad) \cdot\ures$ and $\emf- \cdot \grad \times \bres-$. These terms are usually identified with contributions to the resolved energy or cross-helicity cascades. The impact of these closures on the overall resolved energy or cross-helicity dynamics, e.g. $\ures- \cdot (\grad \cdot\tu)$ and $\bres- \cdot \grad \times \emf-$, is also tested.
While the impact of the differentiation commutators \cref{eq: diff commutator} is best tested directly in \apost application, by comparing the results of the local and non-local \aprio tests, we give an indication of the parameter regime where these extensions can be important.
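A minimal version of such a local \aprio test is sketched below for the first-order compressible EMF closure \cref{eq: nonlin compr emf}: it computes the component-wise Pearson correlation between the exact EMF and the closure on synthetic band-limited fields (spectral Gaussian filter with $2a=\sigma^2$; all parameters are illustrative):

```python
import numpy as np

# A priori (local, Eulerian) correlation test of the compressible EMF closure.
N, L, sigma = 32, 2*np.pi, 0.25
k1 = 2*np.pi*np.fft.fftfreq(N, d=L/N)
K = np.meshgrid(k1, k1, k1, indexing='ij')
k2 = K[0]**2 + K[1]**2 + K[2]**2

gauss = lambda f, s: np.real(np.fft.ifftn(np.exp(-0.5*s**2*k2)*np.fft.fftn(f)))
d = lambda f, ax: np.real(np.fft.ifftn(1j*K[ax]*np.fft.fftn(f)))

rng = np.random.default_rng(3)
smooth = lambda: gauss(rng.standard_normal((N, N, N)), 0.8)
rho = np.exp(0.5*smooth())
u = np.array([smooth() for _ in range(3)])
B = np.array([smooth() for _ in range(3)])

filt = lambda f: gauss(f, sigma)
rho_f = filt(rho)
B_f = np.array([filt(B[i]) for i in range(3)])
u_fav = np.array([filt(rho*u[i]) for i in range(3)])/rho_f
y = np.log(rho_f)

uxB = np.cross(u, B, axis=0)
emf = np.array([filt(uxB[i]) for i in range(3)]) - np.cross(u_fav, B_f, axis=0)

# first-order closure: emf_i = 2a eps_ijk (u~_{j,l} B_{k,l} - y_{,l} u~_{j,l} B_k)
eps = [(0, 1, 2, 1), (1, 2, 0, 1), (2, 0, 1, 1),
       (0, 2, 1, -1), (1, 0, 2, -1), (2, 1, 0, -1)]
emf_cl = np.zeros_like(emf)
for i, j, kk, s in eps:
    for l in range(3):
        emf_cl[i] += s*sigma**2*(d(u_fav[j], l)*d(B_f[kk], l)
                                 - d(y, l)*d(u_fav[j], l)*B_f[kk])

corr = [np.corrcoef(emf[i].ravel(), emf_cl[i].ravel())[0, 1] for i in range(3)]
```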
\section{Scope of applicability}
The closure described above has been derived without any strong assumptions about the flow or the magnetic field.
Thus it is not limited to turbulence simulations, but can be applied in principle to any MHD \REV{simulation in which the small scales are not sufficiently well-resolved}.
Nevertheless, several limitations need to be kept in mind.
Firstly, we have implicitly assumed that the filter kernel is homogeneous and isotropic and
has a constant filter scale. This translates to numerical schemes with a regular grid.
Furthermore, no boundary terms have been taken into account, which is consistent with periodic domains. Extensions of SGS closures
to non-regular grids and non-periodic conditions have been studied in incompressible hydrodynamics\cite{Sagaut2006}. However, their application
to the current closure is beyond the scope of this article.
Secondly, the described closures are derived from the analytical form of a filter kernel.
As the effective kernel of an LES for a particular numerical scheme is a combination of various discretizations, e.g.~grid spacing, time-stepping, differential approximations, quadrature, flux limiting, divergence cleaning (for the magnetic field), shock capturing, etc.,
its exact analytical form is rarely available.
Additional errors stem from the truncation of the infinite series \REV{\cref{eq: YB formulae,eq: utilde recurrence basis}}, i.e. higher order closures are in principle more accurate. Depending on the convergence rate of the expansions for a particular filter, this error may also need to be considered.
Conversely, due to the nonlinear combination of gradient fields, higher order closures are more prone to numerical instabilities\cite{Geurts1997, Vreman1996}.
Finally, in LES applications the SGS terms are based upon information contained in resolved fields, which resides above the Nyquist scale, i.e.~the grid resolution. This can be represented by decomposing the effective filter kernel into a spectral kernel at the Nyquist scale and a remainder. The spectral kernel renders the inverse transfer function of the effective filter ill-defined. In order to circumvent this, a two-step procedure can be applied. First, the derivation above should be applied to the component of the effective filtering operator with a formally well-defined inverse.
The spectral filter can then be applied to the resulting equations.
To allow for the mentioned inaccuracies and numerical instabilities, additional renormalization may be applied to the final closures.
Parametric renormalization may also be applied to the results of a closure for a well-behaved filter, as outlined above, in order to boost its dissipative effect or render it suitable for a selection of numerical schemes.
The renormalization can come in the form of constant coefficients or variable fields. Both practices are common in LES. Most canonical SGS closures include a constant coefficient whose value is calibrated \REV{dynamically or } against experimental data. Allowing for distinct coefficients for the different additive terms in the proposed closures and calibrating them against a particular dataset, may be used as a guide for the relative importance of the different terms in the respective flow.
With respect to spatially varying modulation, the SGS energy for instance can be used to renormalize the strength of the SGS effects in a hydrodynamic LES with a related closure\cite{Schmidt2011, Woodward2006}. This technique naturally requires an additional closure for the SGS energy -- a common situation in hydrodynamics\cite{Bardina1983, Speziale1988, Schmidt2006, Schmidt2011, Carati2001, Sagaut2009, Iapichino2009}, where different closures are frequently combined in order to alleviate their respective shortcomings.
Both types of renormalization outlined above are applied and \aprio tested\cite{Grete2016} for the proposed closures; however, it is found that neither is particularly necessary or beneficial.
\section{Energy and cross-helicity dissipation properties}
One of the main functions of SGS closures is to correct for the transfer of energy across the resolution scale.
Therefore we proceed with an analysis of the dissipation properties of the proposed closures.
\REV{In particular, we consider the local dissipation of the resolved kinetic energy, magnetic energy and cross-helicity given respectively by
\begin{align}
\Sigma^\kin=-\tau[][ij] \S[][ij], \;\;\;\Sigma^\mag=- \emf-\cdot \jres-\;\; \textrm{ and }\\
\chcasc = -\frac{\tau[][ij]}{\rres} \bra{\M[][ij]-\bres_j y_{,i}} - \emf- \cdot \wres,
\end{align}
with the usual definitions of the resolved rate-of-strain $\S[][ij]=1/2\bra{\ures_{i,j}+\ures_{j,i}}$, vorticity $(\wres-)_k= (\nabla \times \ures-)_k$, current $(\jres-)_k= (\nabla \times \bres-)_k$ and magnetic rate-of-strain $\M[][ij]=1/2\bra{\bres_{i,j}+\bres_{j,i}}$.
The signs of the $\Sigma$ fields are chosen such that positive values correspond to a down-scale transfer, i.e. dissipation.
We consider each dissipation term in turn. The kinetic energy dissipation can be further decomposed according to \cref{eq:tau_emf_def} into $\Sigma^\kin= \Sigma^\kin_{\tu}+\Sigma^\kin_{\tb}+\Sigma^\kin_{\tb[][kk]}$. The contribution from the Reynolds SGS stress is given by $\Sigma^\kin_{\tu} = -\tu[][ij] \S[][ij]$. The results here will be the same as in the hydrodynamic limit.}
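These kinematic definitions can be checked with a minimal numerical sketch. The following Python snippet (an illustration with an arbitrary test matrix; all names are ours and not tied to any LES code) decomposes a velocity-gradient matrix $A_{ij}=\ures_{i,j}$ into its symmetric part, the rate-of-strain, and its antisymmetric part, and verifies that the latter equals $-\frac{1}{2}\epsilon_{ijk}(\wres-)_k$ with the vorticity vector $(\wres-)_k=(\nabla\times\ures-)_k$.

```python
# Minimal sketch (arbitrary test values, names are ours): decompose a
# velocity-gradient matrix A[i][j] = du_i/dx_j into the rate-of-strain S
# (symmetric part) and the vorticity tensor W (antisymmetric part), and
# verify W_ij = -1/2 * eps_ijk * w_k with w = curl(u).

def levi_civita(i, j, k):
    # Antisymmetric symbol eps_ijk for indices in {0, 1, 2}.
    return (i - j) * (j - k) * (k - i) // 2

A = [[0.3, -1.2, 0.5],
     [0.7,  0.1, -0.4],
     [-0.2, 0.9, -0.6]]  # arbitrary test gradient

S = [[0.5 * (A[i][j] + A[j][i]) for j in range(3)] for i in range(3)]
W = [[0.5 * (A[i][j] - A[j][i]) for j in range(3)] for i in range(3)]

# vorticity vector w_k = eps_klm * du_m/dx_l, i.e. (curl u)_k
w = [sum(levi_civita(k, l, m) * A[m][l] for l in range(3) for m in range(3))
     for k in range(3)]

for i in range(3):
    for j in range(3):
        # the full gradient is recovered: A = S + W
        assert abs(A[i][j] - S[i][j] - W[i][j]) < 1e-12
        # vorticity-tensor identity W_ij = -1/2 eps_ijk w_k
        back = -0.5 * sum(levi_civita(i, j, k) * w[k] for k in range(3))
        assert abs(W[i][j] - back) < 1e-12
```

The same decomposition applies verbatim to the magnetic field gradient, yielding the magnetic rate-of-strain and the current tensor.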
As a basis for comparison, consider the classical \REV{incompressible} eddy-viscosity (EV) family of closures\cite{Chernyshov2007}, which take the form $\tu=-\nu_\turb\S$ \REV{with $\Tr(\S)\equiv0$} for some (usually non-negative) turbulent viscosity $\nu_\turb$. For it \REV{$\Sigma^\kin_{\tu}$} takes the form
\begin{equation}
\Sigma^\kin_\mathrm{EV}=\nu_\turb\REV{\Tr(\S[2]),}
\end{equation}
\REV{where $\S[n]$ represents a tensor product, e.g.~ $(\S[2])_{ij}=\S[][ik]\S[][kj]$}. As \REV{$\Tr(\S[2])$} is always non-negative, this closure can transfer energy across the resolution scale only in one direction,
depending on the sign of $\nu_\turb$, e.g. from resolved to subgrid scales for $\nu_\turb>0$. \REV{This model can provide energy backscatter only in the compressible regime via an additional (not self-consistent) closure for the SGS kinetic energy and even then only from regions where $\Tr(\S)>0$. This can be seen to be problematic since the presence of strong energy cascades in both directions is a key characteristic of MHD turbulence\cite{Pouquet1976,Muller2012}, which differentiates it from the hydrodynamic case.}
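The uni-directional character of the eddy-viscosity transfer can be illustrated numerically. A minimal Python sketch (toy random matrices, not a production closure) confirms that $\Tr(\S[2])\ge 0$ for any symmetric traceless $\S$, so the sign of the flux is fixed entirely by $\nu_\turb$:

```python
# Sketch with toy random matrices: for any symmetric traceless S,
# Tr(S^2) = sum_ij S_ij^2 >= 0, so the eddy-viscosity dissipation
# Sigma_EV = nu_t * Tr(S^2) has the fixed sign of nu_t.

import random

random.seed(1)

def random_symmetric_traceless():
    a = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(3)]
    s = [[0.5 * (a[i][j] + a[j][i]) for j in range(3)] for i in range(3)]
    tr = (s[0][0] + s[1][1] + s[2][2]) / 3.0
    for i in range(3):
        s[i][i] -= tr  # enforce Tr(S) = 0 (incompressible model)
    return s

for _ in range(1000):
    S = random_symmetric_traceless()
    tr_S2 = sum(S[i][k] * S[k][i] for i in range(3) for k in range(3))
    assert tr_S2 >= 0.0  # uni-directional transfer for fixed nu_t
```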
\REV{In contrast, the proposed} closure for the Reynolds SGS stress $\tu$ can be written as
\begin{align}
\tu[][ij]
=& 2 a \rres\bra{\S[][ik]\S[][\REV{jk}] + \wres_{ik} \wres_{\REV{jk}}+ \S[][ik]\wres_{\REV{jk}}+ \wres_{ik}\S[][\REV{jk}]},
\end{align}
\REV{with vorticity tensor $\wres_{ij}= -1/2\levi (\wres-)_k$.} Substituting this in \REV{$\Sigma^\kin_{\tu}$} leads to
\begin{align}
\label{eq: eps tu_k 1}
\Sigma^\kin_{\tu} =&\REV{-2a\rres\bra{ \Tr(\S[3])+ \frac{1}{4}\wres-^2\Tr(\S)-\frac{1}{4}\wres-^\mathrm{T}\!\cdot\S \cdot \wres-}.}
\end{align}
\REV{The first term is reminiscent of the eddy-viscosity expression, as it depends only on the strain tensor. However, there are two qualitative differences stemming
from the fact that this term is cubic in $\S$. Firstly, the larger power leads to stronger sensitivity to the resolved rate-of-strain. Secondly, and perhaps more importantly, this term has indefinite signature, which allows for a bi-directional energy cascade: a totally compressive rate-of-strain leads to dissipation, while expansion leads to back-scatter of kinetic energy.
The proposed model includes a further effect, associated with the last two terms in \cref{eq: eps tu_k 1}, namely vortex stretching. The second term is the compressible analogue of the incompressible vortex-stretching effect encoded in the last term. Geometrically, the combination of the two terms represents the interaction of the vorticity vector with the strain lying in a plane orthogonal to it. As intuition suggests, if a simple vortex tube is compressed perpendicular to its axis, its radius decreases and a larger proportion of its kinetic energy is associated with smaller scales, i.e. this leads to dissipation. Conversely, stretching a vortex shifts its associated energy to larger scales and the result is back-scatter.
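The sign-indefiniteness of the cubic term can be made concrete with a toy calculation. In the following Python sketch (hypothetical values $a=\rres=1$ and a purely isotropic, vorticity-free rate-of-strain) the dissipation reduces to $-2a\rres\,\Tr(\S[3])$ and changes sign between compression and expansion:

```python
# Toy calculation (hypothetical values a = rho = 1): with zero vorticity
# the kinetic dissipation of the closure reduces to -2*a*rho*Tr(S^3);
# for an isotropic rate-of-strain S = s * Identity, Tr(S^3) = 3*s^3.

a, rho = 1.0, 1.0  # assumed coefficients, for illustration only

def sigma_kin(s):
    tr_S3 = 3.0 * s ** 3
    return -2.0 * a * rho * tr_S3

assert sigma_kin(-0.5) > 0.0  # compression (s < 0): down-scale transfer
assert sigma_kin(+0.5) < 0.0  # expansion (s > 0): back-scatter
```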
Next, consider the contribution of the Maxwell SGS stress to the kinetic energy flux given by $\Sigma^\kin_{\tb}=\tb[][ij]\S[][ij]$. The proposed closure can be written as}
\begin{align}
\tb[][\REV{ij}]=
& 2a\left(\M[][ik] \M[][\REV{jk}]+\jres_{ik}\jres_{\REV{jk}}+ \M[][ik]\jres_{\REV{jk}}+ \jres_{ik}\M[][\REV{jk}]\right),
\end{align}
\REV{with current tensor $\jres_{ij}= -1/2\levi (\jres-)_k$.} Its contribution to the kinetic energy dissipation is given by
\begin{align}
\nonumber
\Sigma^\kin_{\tb}=
2a &\REV{\left( \Tr(\M\S\M)+ 2\Tr(\M \S \jres) \right. }\\
&\REV{ \left.+\frac{1}{4} \jres-^2 \Tr(\S)- \frac{1}{4} {\jres-}^\mathrm{T} \! \cdot \S \cdot \jres- \right).}
\label{eq: kin diss taub}
\end{align}
This expression is similar to the contribution of the Reynolds SGS stress\REV{. Note however, that the entire Maxwell SGS stress
works in the opposite direction to the Reynolds SGS stress (because of the different overall sign). The first term represents the interaction between the magnetic and kinetic rates-of-strain. Here compression (i.e. negative eigenvalues of $\S$) leads to back-scatter, while stretching leads to dissipation. Furthermore, alignment of the eigenvectors of $\S$ and $\M$ maximizes the effect of this term. The second term is associated with the amplification of magnitudes of the rates-of-strain, i.e. $\Tr(\S[2])$ and $\Tr(\M[2])$. It implies that the processes which enhance kinetic and magnetic shearing simultaneously dissipate kinetic energy. The last two terms are the counterpart of the vorticity terms \cref{eq: eps tu_k 1} -- they are associated with current deformation analogous to the vortex stretching effect.
They imply that currents perpendicular to compressive flows lead to backscatter, while currents perpendicular to expanding flows lead to dissipation. Currents flowing along compressive or stretching directions have no effect on the SGS energy.
The final component of the kinetic energy flux comes from the SGS magnetic pressure
\begin{equation}
\Sigma^\kin_{\tb[][kk]}=-\half\tb[][kk] \Tr(\S)=
-2a \Tr(\S)\bra{\frac{\Tr(\M[2])}{2}+\frac{1}{4} \jres-^2}.
\label{eq: kin diss taub_kk}
\end{equation}
It reduces the Maxwell SGS stress effects associated with the overall dilatation rate. It introduces purely compressible effects, as in the incompressible limit $\Tr(\S)=0$. The isotropic current component ($\propto \Tr(\S)\jres-^2$) cancels exactly the contribution from $\Sigma_{\tb}^\kin$. This re-introduces the possibility of dissipation due to compression along the current direction and emphasizes the importance of providing a closure for the total SGS pressure. Moreover, it enhances the closure's overall sensitivity to the relative orientation of the current and the kinetic rate of strain. The magnetic shear term is associated with the growth of $\Tr(\M[2])$ due to overall compression.}
Finally, consider the transfer of magnetic energy across the filter scale. \REV{The analytic form of $\Sigma^\mag$ shows that }there is backscatter, or dynamo-like \REV{effect}, when the electromotive force is aligned with the large-scale currents and dissipation into unresolved energy in cases of anti-alignment.
Decomposing the proposed closure into symmetric and anti-symmetric gradients of the resolved fields and substituting into the expression for $\Sigma^\mag$, leads to the following expression
\begin{align}
\label{eq: mag en diss}
\Sigma^\mag =&\REV{2 a \left(2\Tr( \M \S \jres) + \half \jres-^\mathrm{T} \cdot \S \cdot \jres- - \half \jres-^2 \Tr(\S)\right.}\\\nonumber
&\REV{\;\;\;\;\;\;-\half \wres-^\mathrm{T}\! \cdot \M \cdot \jres-}\\\nonumber
&\REV{\;\;\;\;\;\; +\bra{\bres- \times \jres-}^\mathrm{T}\!\!\cdot\S \cdot \nabla y}\\\nonumber
&\REV{\;\;\;\;\;\;+\left.\half \bra{\wres-\cdot \bres-}\bra{\jres-\cdot \nabla y}
-\half\bra{\wres-\cdot \jres-}\bra{\bres- \cdot \nabla y} \right) .}
\end{align}
Due to the nonlinear coupling between kinetic and magnetic structures in this closure, \REV{these terms involve a wide variety of effects.
Here, like in the kinetic energy case, the relative alignment of the resolved gradients, i.e. the local inhomogeneity and anisotropy, play a vital role in determining the magnetic energy flux. The first four terms are associated with evolution of the total current $\jres-^2$. The first, shearing term is already familiar from \cref{eq: kin diss taub} and has the same effect on the magnetic energy as on the kinetic one. The next two terms can be identified as anomalous (anisotropic) resistivity. They are also found in \cref{eq: kin diss taub}, but with opposite signs and half the amplitude. This identifies an SGS channel for transfer between resolved kinetic and magnetic energy, i.e. half of the dissipated resolved magnetic energy is backscattered into resolved kinetic energy and vice versa, kinetic energy dissipation leads to enhanced turbulence, which in turn causes a dynamo-like increase of resolved magnetic energy. The fourth term is specific to the magnetic energy budget. It is also associated with the enstrophy evolution due to the Lorentz force and connects the relative orientation of vorticity and current with the principal axes of $\M$. For instance, along a magnetically compressive direction it leads to dissipation, if the vorticity and the current are parallel, and backscatter, if they are anti-parallel.
All considerations made so far apply equally to the simple and primary compressible extensions, as well as in the incompressible limit (allowing for $\Tr(\S)=0$). The final three terms of the magnetic energy dissipation \cref{eq: mag en diss} contain the explicit effect of the primary compressible extension. They have a strong impact primarily in regions of very strong density gradients, e.g.~the neighborhood of shocks, due to the logarithmic density derivative. Formally, they are also strongly anisotropic and can be seen to be related to dynamo-like effects. For instance $\bres- \times \jres-$ is the complement of the current helicity $\bres-\cdot \jres-$, which can be associated with the $\alpha$-dynamo, while $\wres- \cdot \jres-$ is related to the cross-helicity dynamo\cite{Yokoi2013}.
The effect of the primary compressible extension becomes more evident when considering the SGS effects on the cross-helicity evolution.
For completeness we give the exact expressions for the local contributions of the total SGS Maxwell Stress $ \Sigma^W_{\tb[][\mathrm{tot}]}=\Sigma^W_{\tb}+\Sigma^W_{\tb[][kk]}$, the SGS Reynolds stress $\Sigma^W_{\tu[][]}$ and the EMF $\Sigma^W_{\emf-}$, defined analogously to their energy counterparts, to the resolved cross-helicity:
\begin{align}
\Sigma^W_{\tb[][\mathrm{tot}]} = -\frac{2a}{\rres} &\left( \bra{\bres-^\mathrm{T} \cdot \M[2] \cdot \nabla y}-\Tr(\M[3])-\right. \\ \nonumber
&\half \Tr(\M[2]) \bra{\bres- \cdot \nabla y}
+\jres-^\textrm{T}\! \cdot \M \cdot \jres- - \\\nonumber
&\bra{\bres-^\mathrm{T} \cdot \M}\cdot \bra{\jres- \times \nabla y}- \bra{\jres- \times \bres-}^\mathrm{T}\cdot\bra{\M \cdot \nabla y}\\\nonumber
&\left. -\bra{\jres- \cdot \bres-}{\jres- \cdot \nabla y} \right),
\end{align}
\begin{align}
\Sigma^W_{\tu[][]} = 2a &\left(-2\Tr(\S \M \wres) - \frac{1}{4} \bra{\wres-\cdot \bres-} \bra{\wres- \cdot \nabla y} \right. \\ \nonumber
&+\frac{1}{4} \wres-^\mathrm{T} \cdot \M \cdot \wres- + \frac{1}{4}\wres-^2 \bra{\bres- \cdot \nabla y} \\\nonumber
&+ \half \bra{\bres- \times \wres-}^\mathrm{T}\cdot \bra {\S \cdot \nabla y} - \Tr(\S\M\S) \\\nonumber
&\left.-\half \bra{\bres-^\mathrm{T} \cdot \S} \cdot \bra{\wres- \times \nabla y} + \bres-^\mathrm{T} \cdot \S[2] \cdot \nabla y \right),
\end{align}
\begin{align}
\Sigma^W_{\emf-} = 2a & \left(2 \Tr(\S \M \wres) +\half \bra{\wres- \cdot \bres-} \bra{\wres- \cdot \nabla y} \right . \\\nonumber
& -\half \wres-^\mathrm{T} \cdot \M \cdot \wres- - \half \wres-^2 \bra{\bres- \cdot \nabla y}\\\nonumber
&-\bra{\bres- \times \wres-}^\mathrm{T} \cdot \bra {\S \cdot \nabla y}+ \half \wres-^\mathrm{T} \cdot \S \cdot \jres-\\\nonumber
&\left. -\half \bra{\jres- \cdot \wres-}\Tr(\S)\right).
\end{align}
While these expressions contain a large variety of terms, the key point is that there is a strong interplay between Reynolds SGS stress and the EMF contributions, i.e. the terms in $\Sigma^W_{\tu}$ and $\Sigma^W_{\emf-}$. For instance, the cancellation of the $\Tr(\S \M \wres)$ term points to an interaction between the resolved and turbulent fields
which preserves the large scale topology characterized by $W$.
Another example is given by the $\nabla y$-terms in $\Sigma^W_{\tu}$ and $\Sigma^W_{\emf-}$. In $\Sigma^W_{\tu}$ they come from the intrinsic compressibility effect described by $\tu[][ij]\bres_j y_{,i}/\rres $, i.e.~the interaction between velocity fluctuations, density gradients and a large scale magnetic field. The corresponding $\nabla y$-terms in $\Sigma^W_{\emf-}$ are specific to the primary compressible extension. The analogous form of the two sets of terms shows that the primary extension naturally restores the symmetry between kinetic and magnetic turbulent contributions to the effects of compressibility on $W$. As the resolved cross-helicity plays a role in the non-local transfer between kinetic and magnetic energies and affects the rate of energy decay, it is clearly important to treat it with as much care as the resolved energy itself.
}
\section{Conclusion}
The high computational cost of 3-dimensional direct numerical MHD simulations poses severe limitations to our understanding
of astrophysical and terrestrial phenomena involving strongly turbulent magnetized fluids.
Large-eddy simulations can alleviate this issue by explicitly considering the effects of limited resolution.
In this work, we presented the derivation and properties of a nonlinear structural
closure of the compressible MHD LES equations.
It is based on a series expansion\cite{Yeo1987} of the finite resolution operator, a convolution with a low-pass filter kernel, and
careful consideration of the impact of the operator on the compressible dynamics.
As the derivation needs no assumptions on the nature of the flow, the closures can be applied to a wide variety of MHD problems,
as long as they can be described on a regular grid under periodic boundary conditions. In particular, no assumptions were invoked on the level of compressibility,
on the structure, dynamics, or even presence of turbulence and magnetic fields.
Thus the closures are suitable for both statistically stationary and developing disordered velocity and magnetic field configurations, from
the sub- to the hyper-sonic and -Alfv\'enic regime. Only an isothermal equation of state was considered. However, the formalism can be extended to
incorporate thermal variations, as well as additional evolution equations, e.g. for the SGS energy or for passive scalar transport.
Although the closures for the MHD SGS terms are derived self-consistently, the
information gap below the Nyquist frequency as well as the complicated nature of realistic LES filters leaves room for additional renormalization
or recalibration of the proposed closures and for combinations with additional closures.
In fact, a simple renormalized version of the closure has already been validated\cite{Grete2015} in an \aprio comparison. Here, through a self-consistent derivation of the compressibility effects due to a mass-weighted filter, some of the results of this comparison \REV{are} clarified.
An analysis of the energy dissipation properties of the \REV{simple} compressible closure demonstrates that it can already accommodate sophisticated energy transfers between resolved and unresolved kinetic and magnetic energy budgets. It emphasizes the dependence of the transfer on local geometry, e.g.~anisotropy, and topology, e.g.~the interplay between vortical and shearing magnetic and kinetic structures of different types.
\REV{Furthermore, it allows for imperfect transfer between the resolved kinetic and magnetic energy mediated by the subgrid scales. The additional effects of the self-consistent, primary closure are revealed through the resolved magnetic energy dissipation, where it plays a role in regions of strong compressibility. Moreover, it restores the symmetry between kinetic and magnetic contributions to the cross-helicity dissipation, and thus plays a vital role in the evolution of the large-scale fields' topology.}
Thus presented, the closure is ready to be bench-marked against currently used compressible MHD closures and
to have its properties validated against numerical and experimental turbulence data.
The results of such a comparison with a wide selection of available SGS closures against a suite of simulation data of homogeneous and isotropic turbulence ranging from the sub- to the hyper-sonic regime are presented in an accompanying article\cite{Grete2016}.
\begin{acknowledgments}
P.G. acknowledges financial support by the
\textit{International Max Planck Research School for Solar System
Science at the University of G\"ottingen}.
D.V. acknowledges research funding by the
\textit{Deutsche Forschungsgemeinschaft (DFG)} under grant
\textit{SFB 963/1, project A15} and the \textit{Max Planck Institute for Dynamics and Self-Organization}. DRGS acknowledges funding through Fondecyt Regular (project code 1161247) and through the ``Concurso Proyectos Internacionales de Investigaci\'on, Convocatoria 2015'' (project code PII20150171). This project is supported by the \textit{North-German Supercomputing Alliance} under grant \textit{nip00037}.
\end{acknowledgments}
\section*{References}
\normalem
\section{Introduction}
Analysis of quantum chaotic systems is often based on the statistical
properties of the spectrum of the Hamiltonian $H$
(in the case of autonomous
systems) or the Floquet operator $F$ (in the case of
periodically perturbed
systems). In general, quantized analogues of classically chaotic systems
display spectral fluctuations conforming to the predictions of random
matrices. Depending on the geometrical properties of the system one uses
orthogonal, unitary or symplectic ensemble \cite{Ha91,CC96}.
Another line of research deals with eigenstates of the analyzed quantum
system. One is interested in their localization properties, which can be
characterized by the eigenvector distribution
\cite{KMH88,Bo91,Iz90,HZ90}
the entropic localization length \cite{CGIS90} or the inverse
participation
ratio \cite{He87}. All this quantities, however, are based on the
expansion
of an eigenstate in a given basis $\{\vec{b}_i\}$, which may be chosen
arbitrarily. If one chooses (with a bad will), $\{\vec{b}_i\}$ as the
eigenbasis
of $F$, all these quantities carry no information whatsoever. One may ask,
therefore, to what extend the quantitative analysis based on the
eigenvector statistics is reliable.
Let $G$ denote a unitary operator, such that $\{ \vec{b}_i\}$ is its
eigenbasis.
We showed \cite{KZ91,Z92,Zy93} that the eigenvector statistics of a
quantum map $F$ conforms to
the prediction of random matrices, if operators $F$ and $G$
are {\sl relatively random}, i.e., their commutators are sufficiently
large.
In this paper we advocate an alternative method of avoiding the
arbitrariness of the choice of the expansion basis. Instead of working in a
finite discrete basis, we shall use the coherent states expansion of the
eigenstates of $F$. For several examples of compact classical phase spaces
one may construct a canonical family of the generalized coherent states
\cite{Pe86}. Localization properties of any pure quantum state may be
characterized by the Wehrl entropy, related to the average logarithm of its
overlap with the coherent states \cite{We78,We91}. We propose to describe
the structure of a given Floquet operator $F$ by the mean Wehrl entropy of
its eigenstates. This quantity, explicitly defined without any
arbitrariness, is shown to be a useful indicator of quantum chaos.
This paper is organized as follows. In section II we review the definition
of the Husimi distribution, stellar representation, and the Wehrl entropy.
For concreteness we work with the $SU(2)$ vector coherent states, linked to
the algebra of the angular momentum operator and corresponding to the
classical phase space isomorphic with the sphere. In section III we
define the
mean Wehrl entropy of eigenstates and present analytical results obtained
for low dimensional Hilbert spaces. Exemplary application of this quantity
to the analysis of the quantum map describing the model of the periodically
kicked top is provided in section IV.
\section{Husimi distribution and stellar representation}
Consider a compact classical phase space $\Omega$, a classical area
preserving map $\Theta:\Omega \to \Omega$ and a corresponding quantum map $F$
acting in an $N$-dimensional Hilbert space ${\cal H}_N$. A link between
classical and quantum mechanics can be established via a family of
generalized coherent states $|\alpha \rangle$. For several examples of
the classical phase spaces there exists a canonical family of coherent
states.
It forms an overcomplete basis and allows for an identity resolution
$\int_{\Omega} |\alpha \rangle \langle \alpha | d\alpha ={\bf 1}$. Any
mixed quantum state, described by a density matrix $\rho$, can be
represented by the generalized Husimi distribution (Q-function) \cite{Hu40}
\begin{equation}
H_{\rho}(\alpha ):= \langle \alpha |\rho |\alpha \rangle. \label{hus2}
\end{equation}
The standard normalization of the coherent states,
$\langle \alpha | \alpha\rangle =1$, assures that $\int_{\Omega}
H_{\rho}(\alpha)
d \alpha = 1$. For a pure quantum state $|\psi\rangle$ the Husimi
distribution is equal
to the overlap with a coherent state $H_{\psi}(\alpha ):=
|\langle \psi |\alpha \rangle|^2$. Let us note
that the Husimi distribution was
successfully applied to study dynamical properties of quantized
chaotic systems \cite{Ta86,Zy87}.
Consider a discrete partition of the unity into $n$ terms; $\sum_{i=1}^n
p_i=1$. The Shannon entropy $S_d=-\sum_{i=1}^n p_i\ln p_i$ characterizes
uniformity of this partition. In an analogous way one defines the Wehrl
entropy of a quantum state $\rho$ \cite{We78}
\begin{equation}
S_{\rho} = - \int_{\Omega} H_{\rho}(\alpha) \ln [H_{\rho}(\alpha)]
d\alpha,
\label{wer1}
\end{equation}
in which the summation is replaced by the integration over the classical
space $\Omega$. This quantity characterizes the localization properties of a
quantum state in the classical phase space. It is small for coherent states
localized in the classical phase space $\Omega$ and large for the
delocalized states. The maximal Wehrl entropy corresponds to the maximally
mixed state $\rho_*$, proportional to the identity matrix,
for which the Husimi distribution is uniform.
Although the notions of the generalized coherent states, the Husimi
distribution, and the Wehrl entropy are well defined for several
classical
compact phase spaces, in this work we analyze in detail only the most
important case $\Omega=S^2$. This phase space is typical of
physical problems involving spins, due to the algebraic properties of the
angular momentum operator $J$. In this case one uses the family of spin
coherent states $|\vartheta,\varphi \rangle $ localized at points
$(\vartheta,\varphi)$ of the sphere $S^2$. These states, also called
$SU(2)$
vector coherent states, were introduced by Radcliffe \cite{r71} and Arecchi
{\sl et al.} \cite{a72} and are an example of the general group theoretic
construction of Perelomov \cite{Pe86}.
Consider an $N=2j+1$ dimensional representation of the angular momentum
operator $J$. For a reference state one usually takes the maximal eigenstate
$|j,j\rangle$ of the component $J_z$. This state, pointing toward the
``north pole'' of the sphere, enjoys the minimal uncertainty. The vector
coherent state represents the reference state rotated by the angles $%
\vartheta$ and $\varphi$. Its expansion in the basis $|j,m\rangle$, $%
m=-j,\dots,+j$ reads \cite{VS95}
\begin{eqnarray}
|\vartheta , \varphi \rangle =\sum_{m=-j}^{m=j}
\sin ^{j-m}({\frac \vartheta 2})\cos ^{j+m}({\frac \vartheta 2})
\times \nonumber \\
\exp \Bigl(i(j-m)\varphi \Bigr)
\Bigl({2j \atop j-m}\Bigr)^{1/2}|j,m\rangle, \label{thetrot}
\end{eqnarray}
where
$\int_0^{2 \pi} d\varphi
\int_0^{\pi} \sin \vartheta d\vartheta
|\vartheta, \varphi \rangle \langle \vartheta,\varphi |
(2j+1)/4\pi ={\bf 1}$.
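As a consistency check, the expansion (\ref{thetrot}) can be implemented directly. The short Python sketch below (amplitudes indexed by $k=j-m$; a toy verification, not taken from any published code) confirms the normalization and the vanishing overlap of two antipodal coherent states:

```python
# Sketch: build the SU(2) coherent state amplitudes in the |j,m> basis,
# indexed by k = j - m = 0..2j, and check normalization as well as the
# orthogonality of two antipodal coherent states.

import math
import cmath

def coherent(j2, theta, phi):
    # j2 = 2j (integer); amplitude of |j, m> with k = j - m
    return [math.sin(theta / 2.0) ** k * math.cos(theta / 2.0) ** (j2 - k)
            * cmath.exp(1j * k * phi) * math.sqrt(math.comb(j2, k))
            for k in range(j2 + 1)]

psi = coherent(7, 1.1, 0.4)  # j = 7/2, N = 8
assert abs(sum(abs(c) ** 2 for c in psi) - 1.0) < 1e-12

# the antipodal point (pi - theta, phi + pi) gives an orthogonal state
chi = coherent(7, math.pi - 1.1, 0.4 + math.pi)
overlap = sum(c1.conjugate() * c2 for c1, c2 in zip(psi, chi))
assert abs(overlap) < 1e-12
```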
For the $SU(2)$ coherent states the distribution (\ref{hus2}) of a pure
state $|\psi\rangle$ equals $H_{\psi}(\vartheta,\varphi):= |\langle \psi |
\vartheta, \varphi \rangle|^2$. Two different spin coherent states overlap
unless they are
directed into two antipodal points on the sphere. The Husimi representation
of a spin coherent state has thus a single zero, degenerate $N-1$ times,
localized at the antipodal point. In general,
any pure quantum state can be
uniquely described by the set of $N-1$ points distributed over the sphere.
Some of these zeros may be degenerate, just as in the case of a coherent
state. This method of characterizing a pure quantum state is called the
stellar representation \cite{Penr,Leb}.
In the analyzed case of the classical phase space isomorphic with the
sphere $S^2$ the Wehrl
entropy (\ref{wer1}) of a state $\rho$ equals
\begin{equation}
S_{\rho} = - {\frac{2j +1 }{4 \pi}} \int_0^{\pi}
\! \! \sin \vartheta d\vartheta \!
\int_0^{2\pi} \! d\varphi H_{\rho}(\vartheta,\varphi) \ln \Bigl[
H_{\rho}(\vartheta,\varphi)\Bigr], \label{wehr2}
\end{equation}
since the measure element $d \alpha$ is equal to
$(2j+1) \sin \vartheta d \vartheta d \varphi /4 \pi$.
Under this normalization the entropy of the maximally mixed state $\rho_*$
equals $\ln N$.
The Husimi distribution of an eigenstate $|j,m\rangle $ may be computed
directly from the definition (\ref{thetrot}). Due to the rotational symmetry
it does not depend on the azimuthal angle $\varphi $
\begin{equation}
H_{|j,m\rangle }(\vartheta )= \sin^{2(j-m)}(\vartheta / 2 )\cos
^{2(j+m)}( \vartheta / 2 )\Bigl({2j \atop j-m}\Bigr), \label{husjz}
\end{equation}
which simplifies the computation of the Wehrl entropy. Simple integration
gives for the reference state $|j,j\rangle $
\begin{equation}
S_{{\rm coh}}={\frac{N-1}{N}}={\frac{2j}{2j+1}}. \label{lieb}
\end{equation}
Due to the rotational invariance the Wehrl entropy is the same for
any other coherent state.
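The result (\ref{lieb}) is easily confirmed by direct numerical quadrature of (\ref{wehr2}). A minimal Python sketch (midpoint rule with an ad hoc grid size):

```python
# Sketch (midpoint rule, ad hoc grid size): Wehrl entropy of the
# reference state |j,j>, whose Husimi function is cos(theta/2)^(4j);
# the phi integral contributes 2*pi, leaving the prefactor (2j+1)/2.

import math

def wehrl_reference(j2, n=100_000):
    # j2 = 2j
    h = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        H = math.cos(theta / 2.0) ** (2 * j2)
        if H > 0.0:
            total += math.sin(theta) * H * math.log(H)
    return -(j2 + 1) / 2.0 * total * h

for j2 in (1, 2, 4, 9):  # j = 1/2, 1, 2, 9/2
    assert abs(wehrl_reference(j2) - j2 / (j2 + 1)) < 1e-4  # 2j/(2j+1)
```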
\medskip
\begin{tabular}{|c|c|c|c|c|}
\hline
$N$ & $j$ & $m$ & $S_{|j,m\rangle }$ & ${\bar S}_{J_z}$ \\ \hline\hline
$2$ & $1/2$ & $1/2$ & $1/2$ & $1/2=0.5$ \\ \hline\hline
$3$ & $1$ & $1$ & $2/3$ & $1-\frac{\ln 2}{3} \approx 0.769$
\\ \cline{3-4}
& & $0$ & $5/3-\ln 2$ & \\ \hline\hline
$4$ & $3/2$ & $3/2$ & $3/4$ & $\frac{3}{2}-\frac{\ln 3}{2}
\approx 0.951$ \\ \cline{3-4}
& & $1/2$ & $9/4-\ln 3$ & \\ \hline\hline
& & $2$ & $4/5$ & \\ \cline{3-4}
$5$& $2$ & $1$ & $79/30-\ln 4$ & $2-\frac{\ln 96}{5}
\approx 1.087 $ \\ \cline{3-4}
& & $0$ & $47/15-\ln 6$ & \\ \hline\hline
& & $5/2$ & $5/6$ & \\ \cline{3-4}
$6$& $5/2$ & $3/2$ & $35/12-\ln 5$ &
$\frac{5}{2}-\frac{1}{3} \ln 50\approx 1.196$ \\
\cline{3-4}
& & $1/2$ & $15/4-\ln 10$ & \\ \hline
\end{tabular}
\smallskip
Table 1. Wehrl entropy $S_{|j,m\rangle}$ for the eigenstates of $J_z$
and its mean ${\bar S}_{J_z}$
for $N=2,3,4,5,6$. Due to the geometrical symmetry
$S_{|j,m\rangle}=S_{|j,-m\rangle}$.
\medskip
The Wehrl entropies for other eigenstates of $J_{z}$ are collected in
Tab. 1
for some lower values of $N$. These results may be also obtained from
the general formulae provided by Lee \cite{Le88} for the Wehrl entropy
of the
pure states in the stellar representation. The eigenstate $|j,m\rangle $ is
represented by $j+m$ zeros at the south pole and $j-m$ zeros at the north
pole. For $j=1/2$ ($N=2$) all the states are $SU(2)$ coherent, so their
entropies are equal. For $j=1$ ($N=3$) the coherent state $|1,1\rangle $ is
characterized by the smallest entropy, while the state $|1,0\rangle $ by the
largest (among the pure states).
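The entries of Table 1 can be reproduced by integrating the distribution (\ref{husjz}) numerically. The following Python sketch (midpoint rule, ad hoc grid) checks the $j=1$ values and the corresponding mean ${\bar S}_{J_z}$:

```python
# Sketch (midpoint rule, ad hoc grid size): Wehrl entropy of |j,m>
# from its Husimi function C(2j, j-m) sin^(2(j-m)) cos^(2(j+m)),
# integrated with the (2j+1)/(4*pi) measure (phi gives a factor 2*pi).

import math

def wehrl_jm(j2, k, n=100_000):
    # j2 = 2j, k = j - m
    h = math.pi / n
    binom = math.comb(j2, k)
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        H = (binom * math.sin(theta / 2.0) ** (2 * k)
             * math.cos(theta / 2.0) ** (2 * (j2 - k)))
        if H > 0.0:
            total += math.sin(theta) * H * math.log(H)
    return -(j2 + 1) / 2.0 * total * h

# j = 1 (N = 3): S_{|1,0>} = 5/3 - ln 2 and the mean over the J_z
# eigenbasis equals 1 - (ln 2)/3, as listed in Table 1
assert abs(wehrl_jm(2, 1) - (5.0 / 3.0 - math.log(2.0))) < 1e-4
mean = (2.0 * wehrl_jm(2, 0) + wehrl_jm(2, 1)) / 3.0
assert abs(mean - (1.0 - math.log(2.0) / 3.0)) < 1e-4
```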
The larger $N$, the more room for diverse behaviour of pure states, as
measured by the values of $S$. The axis of the Wehrl entropy is drawn
schematically in Fig.~1.
\vskip -1.5cm
\begin{figure}
\hspace*{-1.6cm}
\vspace*{-2.7cm}
\epsfxsize=9.5cm
\epsfbox{meanrys1.ps} \\
\caption{Axis of Wehrl entropy for pure states
of $N$ dimensional Hilbert space;
a) $N=2$ for which
${\bar S}_{\rm min}={\bar S}_{\rm max}=1/2$;
b) $N=3$ for which
${\bar S}_{\rm min}=2/3$, $\bar{S}_{J_z}\approx 0.77$,
$\langle S \rangle_3 \approx 0.83$, $S_{\rm max}\approx
0.973$;
c) $N=4$ for which
${\bar S}_{\rm min}=3/4$, $\bar{S}_{J_z}\approx 0.95$,
$\langle S \rangle_4 \approx 1.08$, $S_{\rm max}\approx
1.24$;
and d) $N>>1$, where ${\bar S}_{\rm min}=1-1/N$
while $\langle S \rangle_N \approx \ln N -0.423$.
}
\label{f1}
\end{figure}
It has been conjectured by Lieb \cite{Li78} that vector coherent
states are
characterized by the minimal value of the Wehrl entropy $S_{{\rm min}}=S_{%
{\rm coh}}$, the minimum taken over all mixed states. For partial results
toward a proof of this conjecture see
\cite{Scutar,Le88,Schupp}. It
was also conjectured \cite{Le88} that the states with a possibly regular
distribution of all $N-1$ zeros of the Husimi function on the sphere are
characterized by the largest Wehrl entropy $S_{\rm max}$ among all pure
states.
Such a distribution of zeros is easy to specify for
$(N-1)=4,6,8,12,20$, which correspond to the Platonian polyhedra.
For $N=2$ all pure states are coherent, so $S_{\rm min}=S_{\rm
max}=1/2$. For $N=3$ the maximal Wehrl entropy
$S_{\rm max}=5/3-\ln 2\approx 0.97$ is achieved for the state
$|1,0\rangle$, for which the two zeros of Husimi function are localized
at the opposite poles of the sphere. For $N=4$ the state with three
zeros located at the equilateral
triangle inscribed in a great circle is characterized by
$S_{\rm max}=21/8-2\ln 2\approx 1.24$. It will be interesting to
find
such maximally delocalized pure states for larger values of $N$, and to
study the dependence of $S_{\rm max}$ on $N$.
Let us emphasize that for $N\gg 1$ the pure states exhibiting small Wehrl
entropy (of the order of $S_{\rm min}$) are not typical. In the
stellar
representation coherent states correspond to coalescence of all $N-1$ zeros
of Husimi distribution in one point. In a typical situation the density
of the zeros is close to uniform on the sphere,
and the Wehrl entropy of such delocalized pure states is large.
A random state can be generated according to the natural uniform measure
on the space of pure states by taking any column vector of an $N \times N$
random matrix distributed according to the Haar measure on $U(N)$.
Averaging over this measure
one may compute the mean Wehrl entropy $\langle
S\rangle_N $ of the pure states belonging to the $N$ dimensional
Hilbert space.
Such integration was performed in \cite{KMH88,J91,SZ98}
in a slightly different context leading to
\begin{equation}
\left\langle S\right\rangle_N=\Psi \left( N+1\right) -\Psi \left(
2\right) =\sum_{n=2}^{N}{\frac{1}{n}}, \label{wehmean}
\end{equation}
where $\Psi $ denotes the digamma function. Note that the different normalization
of the coherent states used in Ref. \cite{SZ98} leads to results shifted by
the constant $- \ln N$. Such a normalization
allows for a direct comparison between the entropies
describing the states for various $N$.
In the asymptotic limit $N\rightarrow \infty $ the mean
entropy $\langle S \rangle_N$
behaves as $\ln N+\gamma -1\approx \ln N-0.42278$, which is close to
the maximal possible Wehrl entropy
for mixed states $S_{\rho _{\ast }}=\ln N$. This result is
schematically marked in Fig.~1d.
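The mean (\ref{wehmean}) and its asymptotics are easy to check directly; the sketch below (Python, function names ours) compares the harmonic sum with $\ln N+\gamma-1$:

```python
import math

def mean_wehrl_entropy(N):
    """<S>_N = Psi(N+1) - Psi(2) = sum_{n=2}^{N} 1/n, eq. (wehmean)."""
    return sum(1.0 / n for n in range(2, N + 1))

GAMMA = 0.5772156649015329          # Euler--Mascheroni constant

for N in (10, 100, 1000):
    print(N, mean_wehrl_entropy(N), math.log(N) + GAMMA - 1.0)
```

Already for $N=1000$ the exact mean and the asymptotic form agree to three decimal places.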
\section{Mean Wehrl entropy of eigenstates of quantum map}
Consider a quantum pure state in the $N$-dimensional Hilbert space. Its
Wehrl entropy computed in the vector coherent states representation may
vary from $1-1/N$, for a coherent state, to a number of order
$\ln N$,
for the typical delocalized state. This difference suggests a simple
measure
of localization of eigenstates of a quantum map $F$. Denoting its
eigenstates by $|\psi_i\rangle$, $i=1,\dots,N$, we define the mean Wehrl
entropy of eigenstates
\begin{equation}
{\bar{S}_F} ={\frac{1 }{N}} \sum_{i=1}^N S_{|\psi_i \rangle }.
\label{meawer}
\end{equation}
This quantity may be straightforwardly computed numerically for an
arbitrary quantum
map $F$. For quantum analogues of classically chaotic systems exhibiting no
time reversal symmetry all eigenstates are delocalized. In this case the
mean Wehrl entropy of eigenvectors ${\bar S}_F$ fluctuates around
$\langle S \rangle_N \sim \ln N$.
In the opposite case of an integrable dynamics the eigenstates are, at least
partially, localized. A simple example is provided by any Hamiltonian
diagonal in the $J_{z}$ basis (or the basis of any other component of
$J$). The mean Wehrl entropy
$\bar{S}_{J_z}$ is given in Table 1 for some values of $N$.
Further analysis shows \cite{SKZ99} that for larger $N$ the mean entropy
behaves as ${1 \over 2} \ln N$. This result has a simple
interpretation. Let us
divide the surface of the sphere into $N\sim \hbar ^{-1}$ cells. A typical
eigenstate of $J_{z}$ is localized in a longitudinal strip of a constant
polar angle $\theta $ and covers $\sqrt{N}$ of the cells, so its entropy
is of the order of $\ln \sqrt{N}$.
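This scaling can be verified numerically. The sketch below (Python; quadrature parameters and function names are ours) integrates the Husimi functions of the $J_z$ eigenstates, $Q_k(u)=\binom{2j}{k}u^{k}(1-u)^{2j-k}$ with $k=j+m$ and $u=\cos^2(\theta/2)$, assuming the normalization in which a coherent state has entropy $1-1/N$:

```python
import numpy as np
from math import lgamma, log

def jz_mean_wehrl_entropy(N, npts=200_000):
    """Mean Wehrl entropy of the J_z eigenstates |j,m>, N = 2j+1,
    computed as S = -N * integral_0^1 Q ln Q du for each eigenstate,
    with Q_k(u) = C(2j,k) u^k (1-u)^(2j-k), k = j+m, u = cos^2(theta/2)."""
    twoj = N - 1
    u = (np.arange(npts) + 0.5) / npts          # midpoint rule on (0,1)
    lnu, ln1mu = np.log(u), np.log1p(-u)
    total = 0.0
    for k in range(twoj + 1):
        lnC = lgamma(twoj + 1) - lgamma(k + 1) - lgamma(twoj - k + 1)
        lnQ = lnC + k * lnu + (twoj - k) * ln1mu
        total += -N * np.sum(np.exp(lnQ) * lnQ) / npts
    return total / N

S62 = jz_mean_wehrl_entropy(62)
print(S62, 0.5 * log(62))
```

For $N=62$ this reproduces the value $\bar{S}_{J_z}\approx 2.465$ quoted in the next section, to be compared with $\frac{1}{2}\ln 62\approx 2.06$.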
The quantity $\bar S$ is well-defined in the generic
case of operators $F$ with a nondegenerate spectrum.
In the case of degeneracy there
is a freedom in choosing the eigenvectors; to cure this lack of
uniqueness we define ${\bar S}_F$ as the minimum over all possible sets
of eigenvectors of $F$.
Having a general definition of the mean Wehrl entropy of eigenvectors
of an arbitrary unitary operator, one
may ask for which operators $F_{\rm min}$
($F_{\rm max}$) of a fixed $N$ with a nondegenerate spectrum
this quantity is the smallest
(the largest). It is clear that $\bar{S}_{F_{\rm min}}$ is larger than
$S_{\rm coh}$ (for $N>2$), since the set of any $N$ coherent states
does not form an orthogonal basis. On the other hand, the
minimum is smaller
than $\bar{S}_{J_z}$,
as explicitly demonstrated in the Appendix for $N=3$.
The value $\bar {S}_{F_{\rm max}}$ is larger than the average over the
random unitary matrices
$\langle S \rangle_{U(N)}=\langle S \rangle_N$ and smaller than
$S_{\rho_*}=\ln N$.
The mean Wehrl entropy of the eigenstates ${\bar S}_F$
may be related
to the eigenvector statistics of the operator $F$. Let us expand a
given coherent state in the
eigenbasis of the Floquet operator,
$|\alpha \rangle=\sum_{i=1}^N c_i(\alpha) |\psi_i\rangle $. The
dynamical properties of a quantum system are characterized locally \cite
{Zy90} by the Shannon entropy $S_s(\alpha):=-\sum_{i=1}^N
|c_i(\alpha)|^2 \ln |c_i(\alpha)|^2$.
The mean Wehrl entropy may be thus written as an average over the phase
space
\begin{equation}
{\bar{S}_F} ={\frac{1 }{N}} \int_{\Omega} S_s(\alpha) d\alpha.
\label{meawe2}
\end{equation}
This link is particularly useful to analyze the influence of the time
reversal symmetry. In the presence of any generalized antiunitary symmetry
the operator $F$ may be described by the circular orthogonal ensemble
(COE).
There exists a symmetry line in the phase space and the coherent states
located along this line display eigenvector statistics typical of COE
\cite{Zy91}. This symmetry is also visible in the stellar representation
of the eigenstates and manifests itself by a clustering of zeros
of Husimi functions \cite{BBL,BKZ97}. However, a typical coherent
state does not exhibit such a symmetry and its eigenvector statistics is
typical of the circular unitary ensemble
(CUE). Thus for a system with the time-reversal symmetry the mean Wehrl
entropy will be slightly smaller than for the analogous system with the
time reversal symmetry broken, but much larger than the Shannon entropy
of real eigenvectors of matrices pertaining to the orthogonal ensemble.
\section{Mean Wehrl entropy for the kicked top}
In order to demonstrate the usefulness of the mean Wehrl entropy in the
analysis of quantum chaotic systems we present numerical results
obtained for the periodically kicked top. This model is very suitable
for investigation of quantum chaos \cite{HKS,Ha91}. Classical dynamics
takes place on the sphere, while the quantum map is defined in terms of
the components of the angular momentum operator $J$. The size of the
Hilbert space is determined by the quantum number $j$ and equals
$N=2j+1$. The one-step evolution
operator reads $F_o=\exp(-ipJ_z)\exp(-ikJ_x^2/2j)$. For $p=1.7$ the
classical system becomes fully chaotic for the kicking strength
$k\approx 6$ \cite{HKS}. This system possesses a generalized antiunitary
symmetry and can be described by the orthogonal ensemble. The time
reversal symmetry may be broken by an additional kick \cite{Ha91}. The
system $F_u=F_o \exp(-ik'J_y^2/2j)$ pertains to CUE and
will be called the unitary top.
\vskip -0.2cm
\begin{figure}
\hspace*{-0.6cm}
\vspace*{0.4cm}
\epsfxsize=9.0cm
\epsfbox{meanrys2.ps} \\
\caption{Husimi distribution of exemplary eigenstates of the
Floquet
operator of the orthogonal kicked top for $N=62$ in the
dominantly regular
regime ($k=0.5$), a) and b), and chaotic regime ($k=8.0$),
c) and d).
The sphere is represented in a rectangular projection with
$t=\cos\vartheta$. }
\label{f2}
\end{figure}
Fig. 2 presents the Husimi distributions of two eigenstates of $F_o$
for $p=1.7$
in the regime of regular motion $(k=0.5)$ and two for which
the classical dynamics is chaotic $(k=8.0)$. In the quasiregular case
the eigenstates are localized close to parallel strips,
covered uniformly by the eigenstates of $J_z$. On the other
hand, the eigenstates of the chaotic map are delocalized over the entire
sphere. These differences are well characterized by the values of the
Wehrl entropies, equal, respectively: a) $2.77$, b) $2.66$; and
c) $3.72$, d) $3.80$. The data, obtained for $N=62$, may be compared
with the mean entropy of the unperturbed system,
$\bar{S}_{J_z}\approx 2.465$,
the mean Wehrl entropy of a chaotic system without time reversal symmetry,
$\langle S \rangle_{62}\approx 3.712$, and the maximal entropy of
the mixed state, $S_{\rho_*}=\ln 62 \approx 4.1271$.
The above eigenstates are typical for both systems, and the other $60$
states display a similar character. The properties of all eigenstates
are thus described by the mean Wehrl entropy of eigenstates $\bar
{S}_F$. The dependence of this quantity on the kicking strength $k$ is
presented in Fig. 3. To show the relative difference between the entropies
typical of regular and chaotic dynamics we use the scaled coefficient
\begin{equation}
\mu (F) := { {\bar{S}}_F - {\bar{S}}_{J_z} \over
\langle S \rangle_N - {\bar{S}}_{J_z} }.
\label{gamma}
\end{equation}
By definition $\mu$ is equal to zero if $F$ is diagonal in the
$J_z$ basis, which corresponds to integrability. In the chaotic
regime $F$ is well described by CUE and $\mu$ is close to unity.
This is indeed the case for the unitary top with $k'=k/2$ and $k>6$.
The growth of $\mu$ is bounded and therefore it cannot, in general,
follow
the increase of the classical Kolmogorov--Sinai entropy $\Lambda$ (the
Lyapunov exponent averaged over the phase space), which for the
classical system grows with the kicking strength $k$ \cite{Zy93}.
The data for the orthogonal top fluctuate below unity, due to the existence
of the symmetry line. The difference between the coefficients $\mu$
obtained for the two models does not depend on the kicking strength,
but decreases with $N$ and vanishes in the classical limit $N\to \infty$.
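The reference values for $N=62$ quoted above, and the coefficient $\mu$ for the chaotic eigenstates of Fig.~2, can be reproduced directly (a sketch in Python; the numbers are those quoted in the text):

```python
import math

S_Jz   = 2.465                              # mean Wehrl entropy of J_z eigenstates, N = 62
S_mean = sum(1.0 / n for n in range(2, 63)) # <S>_62 from eq. (wehmean), ~3.712
S_star = math.log(62)                       # maximal mixed-state entropy, ln 62 ~ 4.1271

def mu(S_F):
    """Scaled mean Wehrl entropy, eq. (gamma)."""
    return (S_F - S_Jz) / (S_mean - S_Jz)

# Wehrl entropies of the chaotic eigenstates (k = 8.0) shown in Fig. 2:
print(mu(3.72), mu(3.80))
```

Both values of $\mu$ come out close to unity, as expected in the chaotic regime.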
\vskip -1.4cm
\begin{figure}
\hspace*{-1.6cm}
\vspace*{-6.6cm}
\epsfxsize=9.9cm
\epsfbox{meanrys3.ps} \\
\caption{Scaled mean Wehrl entropy $\mu$ of the eigenvectors
of the Floquet operator for the
kicked top as a function of the kicking strength $k$ for
$N=62$. The
data are obtained for two models: unitary top $(\bullet)$ and
orthogonal top $(\diamond)$.
The crosses denote values of the classical Kolmogorov-Sinai entropy
$\Lambda$, which characterize
the transition to chaos in the classical analogue of the
orthogonal top.
}
\label{f3}
\end{figure}
\section{Concluding remarks}
The Wehrl entropy of a given state characterizes its localization in the
classical phase space. We have shown that the mean Wehrl entropy
${\bar S}_F$ of eigenstates of a given evolution operator
$F$ may serve as a useful
indicator of quantum chaos. Let us emphasize that this quantity, linked
to the classical phase space by a family of coherent states, does not
depend on the choice of basis. This contrasts with other quantities,
like eigenvector statistics, localization entropy, or the inverse participation
ratio, often used to study the properties of eigenvectors.
It will be interesting to find the unitary operators (or rather the
frames of their eigenvectors) for which ${\bar S}_F$ is the smallest or the largest.
The mean Wehrl entropy of eigenstates enables one to detect the
transition from regular motion to chaotic dynamics. On the other hand, it
is not related to the classical Kolmogorov--Sinai entropy (or
to the Lyapunov exponent), so it cannot be
used to measure the degree of chaos in quantum systems. Such a link with
the classical dynamics is established for
the {\sl coherent states dynamical entropy} of a given quantum map
\cite{SZ95,SZ98},
but this quantity is much more difficult to calculate.
Both these quantities characterize the
{\sl global} dynamical properties of a quantum system, in contrast to
the entropy of Mirbach and Korsch \cite{MK95,MK98}, which describes the
{\sl local} features.
The mean Wehrl entropy characterizes the structure of the eigenvectors of $F$,
and is not related at all to the spectrum of this operator. Thus it is
possible to construct a unitary operator with a Poissonian spectrum and
delocalized eigenvectors. Conversely, one may find an operator
with a spectrum typical of CUE and all eigenstates localized.
This shows that the relevant information concerning the dynamical
properties of a quantum system described by a unitary evolution
operator $F$ is contained in its spectrum as well as in its eigenstates.
I am indebted to W. S{\l}omczy{\'n}ski for
fruitful discussions and a constant interest in the progress of this
research. I am also thankful to M. Ku{\' s} and P. Pako{\' n}ski for
helpful remarks.
It is a pleasure to thank Bernhard Mehlig for the invitation to Dresden
and the Center for Complex Systems for support during the workshop.
Financial support from Polski Komitet Bada{\'n}
Naukowych in Warsaw under the grant no 2P-03B/00915
is gratefully acknowledged.
\section{Introduction}
Duality is a leading paradigm of theoretical physics. Electric-magnetic duality is
one of the oldest and most studied examples. Maxwell
theory is self-dual, i.e., admits duality symmetry under rotation of
the electric field into the magnetic one.
Schr\"odinger \cite{Schrodinger} was the first to show that the
nonlinear theory of electromagnetism of Born and Infeld, quite
remarkably, has the same $U(1)$ duality symmetry property.
The study of electric-magnetic duality symmetry has found
further motivations since its appearance in extended supergravity
theories \cite{FSZ77,csf,crju}. In \cite{csf} the first example of
a noncompact duality rotation group was considered; it arises in $N=4$
supergravity and is due to scalar fields transforming under duality
rotations.
These results triggered
further investigations into the general structure of self-dual
theories. In particular the symplectic formalism for nonlinear
electromagnetism coupled to scalar and
fermion fields was initiated in \cite{GZ}, where the
duality groups were shown to be subgroups of noncompact symplectic
groups (the compact case being recovered in the absence of scalar
fields). A nonlinear example is Born-Infeld electrodynamics
coupled to axion and dilaton fields \cite{Gibbons:1995ap}.
Another relevant aspect \cite{BG} is that the spontaneous breaking of $N=2$ rigid
supersymmetry to $N=1$ can lead to a Goldstone vector multiplet whose
action is the supersymmetric and self-dual Born-Infeld action
\cite{DP, CF}.
Higher supersymmetric Born-Infeld type actions are also self-dual and related to spontaneous
supersymmetry breakings in field theory \cite{KET, KT, KT2, BIK} and in string
theory \cite{KET2, RT}.
\vskip .4cm
Duality symmetry is a powerful tool to investigate the
structure of possible counterterms in extended supergravity.
After the explicit computations that showed the 3-loop UV finiteness of
$N=8$ supergravity \cite{Bern}, an explanation based on
$E_{7(7)}$ duality symmetry was provided
\cite{Brodel:2009hu, Elvang:2010kc, Beisert:2010jx, Bossard:2010bd}.
Furthermore
duality symmetry arguments have also been used to suggest
all loop finiteness of $N=8$ supergravity \cite{Kallosh:2011dp}.
Related to these developments,
in \cite{BN} a proposal on how to implement
duality rotation invariant counterterms in a corrected action $S[F]$ leading to a
self-dual theory was put forward under the name of ``deformed
twisted self-duality conditions'' (see eq. (\ref{Iconst})).
Examples included
counterterms dependent on derivatives
of the field strength. The proposal (renamed ``nonlinear
twisted self-duality conditions'') was further elaborated in
\cite{CKR} and \cite{CKO}; see also \cite{BCFKR}, and
\cite{Kuzenko, Kuzenko:2013gr}, for the supersymmetric extensions and examples.
The proposal is equivalent to a formulation of self-dual theories using auxiliary fields, studied in \cite{IZ2001} and \cite{Ivanov:2003uj} in the case of nonlinear electromagnetism without higher derivatives of the
field strength.
This coincidence has been brought to light in a very recent paper \cite{IZ}.
The supergravity motivated studies have provided new examples of
self-dual theories and have touched upon basic issues
like consistency and equivalence of different formulations of self-duality
conditions, reconstruction of the action from these conditions and of
duality invariant expressions. This paper is a systematic study of
these issues.
\vskip .4cm
A nonlinear and higher derivative
electromagnetic theory is determined by defining, possibly
implicitly, the relation between the electric field strength $F$
(given by the electric field $E$ and the magnetic induction
$B$) and the magnetic field strength $G$ (given by the magnetic field $H$
and the electric displacement $D$).
We call {\sl constitutive relations} the relations defining $G$ in
terms of $F$ or vice versa.
We begin Section \ref{dualityrot} by proving that (locally) the equations
of motion of an arbitrary, not necessarily self-dual, nonlinear
electromagnetic theory satisfying an integrability condition can
always be obtained from a variational principle via an action $S[F]$
that is explicitly computed (reconstructed) from the constitutive
relations.
This reconstruction procedure works also for theories
with higher derivatives if we further assume that they
can be obtained from an action principle.
We then study the general theory
of $U(1)$ duality rotations.
Self-duality of the equations of motion constrains the constitutive
relations. The deformed twisted self-duality
conditions are just constitutive relations obtained from a variational
procedure. In these deformed twisted self-duality
conditions the dependence of $G$ on $F$ is given implicitly, but the
constraint that leads to self-dual theories is easily implemented.
This is due to the use of the complex and chiral variables $T^+$,
$T^-$, $\overline{T^+}$, $\overline{T^-}$ that are the chiral
projections of the variables $T=F-iG$ and $\overline
T=F+iG$ introduced by Schr\"odinger \cite{Schrodinger, GZS}.
The fields $T^+$, $T^-$, $\overline{T^+}$, $\overline{T^-}$ have definite
electric-magnetic duality charge and chirality: $(T^+,+1,+1),
~(T^-,+1,-1), ~(\overline{T^+},-1,-1), ~(\overline{T^-},-1,+1)$.
The action $S[F]$ can
always be reconstructed from the action ${\cal I}[T^-,\overline{T^-}]$ that determines the
deformed twisted self-duality conditions, and vice versa. Indeed,
as also shown in \cite{Ivanov:2003uj}, the
two actions are related by a Legendre transformation. This shows that
the
deformed twisted self-duality conditions are the
general framework needed to discuss self-dual theories obtained from a
variational principle.
Section \ref{constitutiverelations} is devoted to a detailed study of
the constitutive relations of the kind
${{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}{}_{\mu\nu}={{\cal N}_2}_{\,} F_{\mu\nu} +{{\cal N}_1}_{\,}
{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}$ where ${{\cal N}_1}$ and ${{\cal N}_2}$ are real
(pseudo)scalar functions of $F$, $G$ and their derivatives.
These are not the most general constitutive relations because
${{\cal N}_1}$ and ${{\cal N}_2}$ are not differential operators and do not
act on the $\mu\nu$-indices of $F_{\mu\nu}$
and its Hodge dual ${{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}$. However they describe a wide class of
nonlinear theories. For example theories without higher
derivatives are determined by this kind of
relations.\footnote{Indeed in this case the elementary
antisymmetric 2-tensors in the theory are only $F_{\mu\nu}$ and its Hodge dual ${{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}_{\mu\nu}$, hence any
antisymmetric 2-tensor will be a linear combination (with
coefficients
dependent on the field strength)
of $F_{\mu\nu}$ and ${{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}_{\mu\nu}$.}
Equivalent but more duality symmetric formulations of these constitutive relations are
then investigated. In particular we formulate consistent constitutive
relations in terms of the complex variables $T=F-iG$ and $\overline
T=F+iG$, thus generalizing Schr\"odinger's study of Born-Infeld theory \cite{Schrodinger, GZS}.
In Section \ref{SCH} the constitutive relations of Section \ref{constitutiverelations}
are constrained to define self-dual theories.
These self-dual constitutive relations turn out to be very simple. They
are determined for example by expressing the ratio
$\frac{T_{\mu\nu}\overline T^{\mu\nu}}{|T_{\mu\nu\,}{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}^{\mu\nu}|}$
in terms of $T,\, \overline T$ and their derivatives.
In particular we see that self-duality constrains the phases of
$T_{\mu\nu}{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}^{\mu\nu}$ and $T_{\mu\nu}T^{\mu\nu}$ to differ by a $-\pi/2$ angle and
the square of their moduli to differ by $|T_{\mu\nu}\overline T^{\mu\nu}|^2$.
Section \ref{nhd} considers self-dual theories that do not involve higher
derivatives of the field strength. In this case the natural
independent variable is $|T_{\mu\nu}T^{\mu\nu}|$. We present a
closed form expression of the {deformed twisted self-duality
conditions} that determine Born-Infeld theory.
Comparison of this expression with the one in terms of a
hypergeometric function ${\mathfrak F}$ previously considered in
\cite{CKR} leads to a hidden quartic equation for ${\mathfrak F}$.
This quartic equation is not just a feature of Born-Infeld theory. It also enters the explicit relation we obtain
between deformed twisted self-duality conditions of any nonlinear
theory and the corresponding constitutive relations in
Schr\"odinger's variables $T$, $\overline T$.
In the appendices we provide examples of self-dual theories with higher
derivatives, a basic result on the energy momentum tensor
of nonlinear theories and details on a technical calculation.
\section{U(1) duality rotations in nonlinear and higher derivatives
electromagnetism \label{dualityrot}}
\subsection{Action functionals from equations of motion\label{AFEOM}}
Nonlinear and higher derivatives electromagnetism is described by the equations of motion
\begin{eqnarray}
&&{\partial}_{\mu}
{\widetilde F}^{\mu\nu} =0~,\label{max22}\\
&&{\partial}_{\mu}
\widetilde{G}^{\mu\nu}=0~, \label{max11}\\
&&
\widetilde G^{\mu\nu}=h^{\mu\nu}[F,\lambda]
\label{maxwww}~.
\end{eqnarray}
The first two simply state that the 2-forms $F$ and $G$ are closed, ${{d}} F={{d}} G=0$, indeed
$\widetilde
F^{\mu\nu}\equiv\frac{1}{2}{\varepsilon}^{\mu\nu\rho\sigma}F_{\rho\sigma}$,
$\widetilde
G^{\mu\nu}\equiv\frac{1}{2}{\varepsilon}^{\mu\nu\rho\sigma}G_{\rho\sigma}$
(with ${\varepsilon}^{0123}=1$). The last set
$\widetilde G^{\mu\nu}=h^{\mu\nu}[F,\lambda]$, where $\lambda$ is the
dimensionful parameter typically present in a nonlinear theory\footnote{Nonlinear and higher derivatives theories of electromagnetism admit
one (or more) dimensionful coupling constant(s) $\lambda$. Since the
expansions for weak and slowly varying fields are expansions in
dimensionless variables (like for example $\lambda F F$ and $\lambda F
{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$, or, schematically
and using a
different coupling constant, $\lambda\partial F\partial F$)
we will equivalently say that these expansions are in power
series of the coupling constant(s) $\lambda$.
},
are the constitutive relations. They specify the dynamics and
determine the magnetic field strength $G$ in terms of the electric field strength $F$ and, vice versa, $F$ in terms
of $G$; indeed $F$ and $G$ should be treated on an equal footing in (\ref{max22})-(\ref{maxwww}).
The square bracket notation $h^{\mu\nu}[F,\lambda]$ indicates
the possible dependence of $h^{\mu\nu}$ on derivatives of $F$.
Since in general we consider curved background metrics
$g_{\mu\nu}$, it is convenient to introduce the $\ast$-Hodge operator;
on an arbitrary antisymmetric tensor $F_{\mu\nu}$ it is defined by
\begin{equation}
{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\mu\nu}=
\frac{1}{2\sqrt{g}}
g_{\mu\alpha}g_{\nu\beta}\,{\varepsilon}^{\alpha\beta\rho\sigma}F_{\rho\sigma}
=\frac{1}{\sqrt{g}}\widetilde F_{\mu\nu}~,
\end{equation}
where $g=-\det(g_{\mu\nu})$, and it squares to
minus the identity.
The constitutive relations (\ref{maxwww}) implicitly include also
a dependence on the background metric $g_{\mu\nu}$ and for example in
case of usual electromagnetism they read $G_{\mu\nu}={{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}_{\mu\nu}=\frac{1}{\sqrt{g}}\widetilde
F_{\mu\nu}$, while for
Born-Infeld theory,
\begin{equation}
{S}_{BI}= \frac{1}{\lambda}\int\!d^4x\,\sqrt{g}\Big( 1-\sqrt{1+\frac{1}{2}\lambda
F^2-\frac{1}{16}\lambda^2(F{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})^2}\;\Big)~,\label{BILag}
\end{equation}
where $F^2=FF=F_{\mu\nu}F^{\mu\nu}$ and
$F{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=F_{\mu\nu}{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}^{\mu\nu}$,
they read
\begin{equation}\label{BIcr}
{G}_{\mu\nu}=
\frac{{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}+{1\over 4} \lambda (F{{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}})\,F_{\mu\nu}}{
\sqrt{1+{1\over 2}\lambda F^2-\frac{1}{16}\lambda^2(F{{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}})^2}}~.
\end{equation}
The constitutive relations (\ref{maxwww}) define a nonlinear and higher
derivative extension of electromagnetism because we require that setting $\lambda=0$ in
(\ref{maxwww})
we recover usual electromagnetism: $G_{\mu\nu}={{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}$.
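As a consistency sketch (Python, flat background metric; the conventions and helper names are ours), one can check numerically that the constitutive relation (\ref{BIcr}) agrees with $\widetilde G^{\mu\nu}=2\,\partial{\cal L}_{BI}/\partial F_{\mu\nu}$, with all 16 components of $F_{\mu\nu}$ treated as independent as in the convention of footnote \ref{funo}, and that it reduces to the Maxwell relation at $\lambda=0$:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
eta = np.diag([1.0, -1.0, -1.0, -1.0])      # flat metric, sqrt(g) = 1

EPS = np.zeros((4, 4, 4, 4))                # Levi-Civita symbol, eps^{0123} = +1
for p in permutations(range(4)):
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    EPS[p] = s

def dual(F):        # tilde F^{mu nu} = (1/2) eps^{mu nu rho sigma} F_{rho sigma}
    return 0.5 * np.einsum('mnrs,rs->mn', EPS, F)

def star(F):        # *F_{mu nu} (both indices down), flat space
    return eta @ dual(F) @ eta

def invariants(F):
    P = np.sum(F * (eta @ F @ eta))         # F_{mn} F^{mn}
    Q = np.sum(F * dual(F))                 # F_{mn} *F^{mn}
    return P, Q

def L_BI(F, lam):                           # Born-Infeld Lagrangian density (BILag)
    P, Q = invariants(F)
    return (1.0 - np.sqrt(1.0 + 0.5 * lam * P - lam**2 * Q**2 / 16.0)) / lam

def G_BI(F, lam):                           # constitutive relation (BIcr)
    P, Q = invariants(F)
    D = np.sqrt(1.0 + 0.5 * lam * P - lam**2 * Q**2 / 16.0)
    return (star(F) + 0.25 * lam * Q * F) / D

A = 0.1 * rng.standard_normal((4, 4))
F = A - A.T                                 # random weak field strength
lam = 0.3

# finite-difference check of tilde G^{mu nu} = 2 dL/dF_{mu nu}
h = 1e-5
G_fd = np.zeros((4, 4))
for m in range(4):
    for n in range(4):
        Fp, Fm = F.copy(), F.copy()
        Fp[m, n] += h
        Fm[m, n] -= h
        G_fd[m, n] = 2.0 * (L_BI(Fp, lam) - L_BI(Fm, lam)) / (2.0 * h)

err = np.max(np.abs(G_fd - dual(G_BI(F, lam))))
print("max deviation:", err)
```

The deviation is at the level of finite-difference noise, and `G_BI(F, 0.0)` coincides with the Hodge dual of $F$.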
\vskip .4cm
We now show that in the general nonlinear case (where the constitutive relations do not
involve derivatives of $F$) the equations of motion (\ref{max22})-(\ref{maxwww})
can always be obtained from a variational principle provided they
satisfy the integrability conditions
\begin{equation}\label{intcond}
\frac{\partial
{h}^{\mu\nu}}{\partial F_{\rho\sigma}}=\frac{\partial
{h}^{\rho\sigma}}{\partial F_{\mu\nu}}~.
\end{equation}
These conditions are necessary in order to obtain (\ref{maxwww}) from
an action $S[F]=\int \!d^4x \/{\cal L}(F)$. Indeed if\/\footnote{The factor 2 is due to the
convention $\frac{\partial{F_{\rho\sigma}}}{\partial
F_{\mu\nu}}=\delta^\mu_\rho\delta^\nu_\sigma\,$ adopted in \cite{GZ}
and in the review \cite{AFZ}. It will be used
throughout the paper.\label{funo}} $h^{\mu\nu}=2\frac{\partial
{\cal L}}{\partial F_{\mu\nu}}$ then (\ref{intcond}) trivially holds.
In order to show that (\ref{intcond}) is also sufficient we recall that the field
strength $F_{\mu\nu}(x)$ locally is a map from spacetime to
$\mathbb{R}^6$ (with coordinates $F_{\mu\nu}$, {\small{$\mu<\nu$}}). We assume
$h^{\mu\nu}(F,\lambda)$ to be well defined functions
on $\mathbb{R}^6$ or, more generally, on an open submanifold $M\subset
\mathbb{R}^6$ that includes the origin ($F_{\mu\nu}=0$) and is star-shaped
w.r.t. the origin (e.g. a 6-dimensional ball or cube
centered at the origin).
Then condition (\ref{intcond}) states that the 1-form
$\mathpzc{h}=h^{\mu\nu}dF_{\mu\nu}$ is closed, and hence, by Poincar\'e lemma,
exact on $M$; we write $\mathpzc{h}=d{\cal L}$. We have ${\cal L}(F)-{\cal L}(0)=\int_\gamma {}_{\!}\mathpzc{h}\,$ for
any curve $\gamma(c)$ of coordinates $\gamma_{\mu\nu}(c)$ such that
$\gamma_{\mu\nu}(0)=0$ and $\gamma_{\mu\nu}(1)=F_{\mu\nu}$. In
particular, choosing the straight line from the origin to the point of
coordinates $F_{\mu\nu}$,
and setting $S=\int d^4x \,{\cal L}(F)$, we immediately obtain
\begin{Theorem}\label{actionfromeom}
If the constitutive relations (\ref{maxwww}) do not depend on
derivatives of $F$ (i.e. if $h^{\mu\nu}[F,\lambda]=h^{\mu\nu}(
F,\lambda)\,$) and the functions $h^{\mu\nu}(F,\lambda)$ are defined in a
star shaped region $M$ (of coordinates $F_{\mu\nu}$) and satisfy the integrability conditions (\ref{intcond}),
then the constitutive relations (\ref{maxwww}) are equivalent to the equations\/$^{\ref{funo}}$
\begin{equation}
{\widetilde G}^{\;\mu\nu}= 2\frac{\delta S[F]}{\delta F_{\mu\nu}} ~\label{Sconst0}
\end{equation}
where the action functional $S[F]$ is given by
\begin{equation}
S[F]=\frac{1}{2}\int \!d^4x_{\,} F_{\mu\nu}\!\!\int_0^1\! dc \, h^{\mu\nu}(cF,\lambda)~\label{recide}.
\end{equation}
\end{Theorem}
\begin{Corollary} On spacetimes where closed two forms are exact
($dF=0\Rightarrow F=dA$), the
equations of motion (\ref{max22})-(\ref{maxwww}) of
nonlinear electromagnetism satisfying the conditions of Theorem \ref{actionfromeom}
are equivalent to the equations of motion
\begin{equation}
\frac{\delta S}{\delta A_{\mu}}=0 ~\label{Aeom}
\end{equation}
where $S=\frac{1}{2}\int \!d^4x_{} \int_0^1\! dc \, F_{\,}
h(cF,\lambda)$.
\end{Corollary}
\begin{proof} Equation (\ref{max22}) is the
Bianchi identity for $F=dA$, (\ref{maxwww}) holds because of
Theorem \ref{actionfromeom}, and (\ref{max11}) is equivalent to the equations of
motion (\ref{Aeom}).
\end{proof}
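For Born-Infeld theory in flat space the reconstruction formula (\ref{recide}) can be verified numerically; the following sketch (Python; conventions and names ours) compares ${\cal L}_{BI}(F)$ with $\frac{1}{2}F_{\mu\nu}\int_0^1 dc\, h^{\mu\nu}(cF,\lambda)$ evaluated by the midpoint rule:

```python
import numpy as np
from itertools import permutations

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # flat metric, sqrt(g) = 1
EPS = np.zeros((4, 4, 4, 4))                # Levi-Civita symbol, eps^{0123} = +1
for p in permutations(range(4)):
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    EPS[p] = s

def dual(F):
    return 0.5 * np.einsum('mnrs,rs->mn', EPS, F)

def invariants(F):
    return np.sum(F * (eta @ F @ eta)), np.sum(F * dual(F))

def L_BI(F, lam):                           # Born-Infeld Lagrangian density (BILag)
    P, Q = invariants(F)
    return (1.0 - np.sqrt(1.0 + 0.5 * lam * P - lam**2 * Q**2 / 16.0)) / lam

def h_BI(F, lam):                           # h^{mu nu}[F] = tilde G^{mu nu}, from (BIcr)
    P, Q = invariants(F)
    D = np.sqrt(1.0 + 0.5 * lam * P - lam**2 * Q**2 / 16.0)
    return dual((eta @ dual(F) @ eta + 0.25 * lam * Q * F) / D)

rng = np.random.default_rng(2)
A = 0.1 * rng.standard_normal((4, 4))
F = A - A.T
lam = 0.4

# (1/2) F_{mu nu} integral_0^1 dc h^{mu nu}(cF)  (midpoint rule)
n = 5000
cs = (np.arange(n) + 0.5) / n
recon = 0.5 * sum(np.sum(F * h_BI(c * F, lam)) for c in cs) / n
direct = L_BI(F, lam)
print(direct, recon)
```

The two numbers agree to quadrature accuracy, as guaranteed by the Poincar\'e-lemma argument above.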
We have seen that under the integrability conditions (\ref{intcond})
locally the equations of motion of nonlinear electromagnetism
(\ref{max22})-(\ref{maxwww}) can be obtained from the action
\begin{equation}
S=\frac{1}{2}\int \!d^4x_{} \int_0^1\! dc \,c F_{\,} \widetilde G_c~,
\end{equation}
where $\widetilde G_c=\frac{1}{c} h(cF,\lambda)$.
It is interesting to generalize these results to the case of nonlinear
and higher derivatives electromagnetism. Here we present a first step in this
direction.
\begin{Proposition}\label{fstep}
If the equations of motion (\ref{max22})-(\ref{maxwww}) of a nonlinear and higher derivatives
electromagnetic theory are obtained from an action functional $S[F]$
then we have
\begin{equation}
S[F]=\frac{1}{2}\int \!d^4x_{} \int_0^1\! dc \, F_{\,} h[cF,\lambda]~,
\end{equation}
that we simply rewrite $S=\frac{1}{2}\int \!d^4x_{} \int_0^1\! dc \,c
F_{\,} \widetilde G_{c}$.
\end{Proposition}
\begin{proof}
Consider the one parameter family of actions
$S_c[F]=\frac{1}{c^2}S[cF]$.
Deriving with respect to $c$ we obtain
\begin{equation}
-c\frac{\partial S_c}{\partial c}=2S_c-\int \!d^4x ~F\frac{\delta
S_c[F]}{\delta F}~,\label{Ttrace}
\end{equation}
i.e. $-c\frac{\partial S_c}{\partial c}=2S_c-\frac{1}{2}\int \!d^4x
~F\widetilde G_c$. It is easy to see that
$S_c=\frac{1}{2c^2}\int \!d^4x_{} \int_0^c\! dc' \,c'
F_{\,} \widetilde G_{c'}$ is the primitive with the correct behaviour
under rescaling of $c$ and $F$. We conclude that
$\frac{1}{c^2}S[cF]= \frac{1}{2c^2}\int \!d^4x_{} \int_0^c\! dc' \,c'
F_{\,} \widetilde G_{c'}$, and setting $c=1$ we obtain the thesis.
\end{proof}
\vskip .4cm
We now consider the following expansion of an action $S[F]$
even under $F\to -F$,
\begin{equation}
S[F]=S^{\{0\}}[F]+S^{\{2\}}[F]+S^{\{4\}}[F]+\ldots\label{FexpansionS}
\end{equation}
where $S^{\{2n\}}$ is the term homogeneous in
$2n$ field strengths or their derivatives. Similarly we consider
$S_c[F]=\frac{1}{c^2}S[cF]$ and expand
$\widetilde G_c=2\frac{\delta S_c}{\delta F}$ as
\begin{eqnarray}
\widetilde G_c&=&\widetilde G_c^{\{1\}}+\widetilde
G_c^{\{3\}}+\widetilde G_c^{\{5\}}+\ldots~\nonumber\\
&=&\widetilde G^{\{1\}}+c^2\widetilde
G^{\{3\}}+c^4\widetilde G^{\{5\}}+\ldots~\label{FexpansionG}
\end{eqnarray}
where $G_c^{\{2n-1\}}$ is the term homogeneous in
$2n-1$ field strengths or their derivatives, and in the second
equality we observed that it is also the
term proportional to
$c^{2n-2}$ so that $G_c^{\{2n-1\}}=c^{2n-2}G_{c=1}^{\{2n-1\}}=c^{2n-2}G^{\{2n-1\}}$.
Proposition \ref{fstep} then implies
\begin{equation}
S^{\{2n\}}=\frac{1}{4n}\int d^4x\, F\widetilde G^{\{2n-1\}}~.\label{2.20}
\end{equation}
This expression relates the term in the action proportional to the
$2n^{\rm th}$ power of $F$ or its derivatives, to the term in $\widetilde G$ proportional
to the $(2n-1)^{\rm th}$ power of $F$ or its derivatives.
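Relation (\ref{2.20}) can be checked for Born-Infeld theory by extracting the homogeneous pieces through the scaling $F \to cF$ (a numerical sketch in Python; conventions, node values and names are ours). Fitting ${\cal L}(cF)=\sum_n a_n c^{2n}$ and $F\widetilde G(cF)=\sum_n b_n c^{2n-1}$ should give $a_n=b_n/4n$:

```python
import numpy as np
from itertools import permutations

eta = np.diag([1.0, -1.0, -1.0, -1.0])      # flat metric
EPS = np.zeros((4, 4, 4, 4))                # Levi-Civita symbol, eps^{0123} = +1
for p in permutations(range(4)):
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    EPS[p] = s

def dual(F):
    return 0.5 * np.einsum('mnrs,rs->mn', EPS, F)

def invariants(F):
    return np.sum(F * (eta @ F @ eta)), np.sum(F * dual(F))

def L_BI(F, lam):
    P, Q = invariants(F)
    return (1.0 - np.sqrt(1.0 + 0.5 * lam * P - lam**2 * Q**2 / 16.0)) / lam

def Gdual_BI(F, lam):                       # tilde G^{mu nu}, from (BIcr)
    P, Q = invariants(F)
    D = np.sqrt(1.0 + 0.5 * lam * P - lam**2 * Q**2 / 16.0)
    return dual((eta @ dual(F) @ eta + 0.25 * lam * Q * F) / D)

rng = np.random.default_rng(4)
A = 0.1 * rng.standard_normal((4, 4))
F = A - A.T
lam = 0.4

cs = np.array([0.05, 0.10, 0.15])
x = cs**2
# L(cF) = a1 x + a2 x^2 + a3 x^3 with x = c^2
a = np.linalg.solve(np.column_stack([x, x**2, x**3]),
                    [L_BI(c * F, lam) for c in cs])
# F Gdual(cF)/c = b1 + b2 x + b3 x^2
y = [np.sum(F * Gdual_BI(c * F, lam)) / c for c in cs]
b = np.linalg.solve(np.column_stack([np.ones(3), x, x**2]), y)

P, Q = invariants(F)
print(a[0], b[0] / 4)                # n = 1: Maxwell term  -P/4
print(a[1], b[1] / 8)                # n = 2: lam (P^2 + Q^2)/32
```

The $n=1$ piece reproduces the Maxwell Lagrangian $-\frac{1}{4}F^2$ and the $n=2$ piece the first Born-Infeld correction.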
\begin{Note}\label{note3'} Expression $S=\frac{1}{2}\int \!d^4x_{} \int_0^1\! dc \,c
F_{\,} \widetilde G_{c}$, in the equivalent form
\begin{equation}
S=\frac{1}{4}\int \!d^4x \int_0^1 d\kappa \,F\widetilde G_{\kappa}~\label{recid}
\end{equation}
\noindent
(where $\kappa=c^2$) has been considered for self-dual theories in \cite{CKO}, where it was called the reconstruction identity.
It has been used, together with an expression equivalent to (\ref{2.20}), to reconstruct the action $S$ from equations of motion with duality
rotation symmetry in examples with higher derivatives of $F$.
\end{Note}
\begin{Note}\label{note3''} In Appendix \ref{TTT} we show that for nonlinear
theories without higher derivatives, the l.h.s.~and r.h.s.~of (\ref{Ttrace}) are
half the spacetime integral of the trace of the energy momentum
tensor.
\end{Note}
\subsection{$U(1)$ duality rotations}
Nonlinear and higher derivatives electromagnetism admits $U(1)$ duality rotation symmetry if
given a field configuration $F,G$ that satisfies
(\ref{max22})-(\ref{maxwww}) then the rotated configuration
\begin{equation}\label{rotFG}
\left(\begin{array}{c}
F' \\
G'
\end{array}\right)=
\left(\begin{array}{cc}
\cos\alpha & -\sin {\alpha}\\
\sin\alpha & \cos\alpha
\end{array}\right)
\left(\begin{array}{c}
F \\
G
\end{array}\right)~,
\end{equation}
which is trivially a solution of
${\partial}_{\mu}
{\widetilde F}^{\mu\nu} =0\,,\;
{\partial}_{\mu}
\widetilde{G}^{\mu\nu}=0\,,
$
also satisfies
$\widetilde G'_{\mu\nu}=h_{\mu\nu}[F',\lambda]$, so that $F',G'$ is again a solution of
the equations of motion.
If we consider an infinitesimal duality rotation $F\to F+\Delta F$,
$G\to G+\Delta G$, then the condition
$\widetilde G'_{\mu\nu}=h_{\mu\nu}[F',\lambda]$ reads
$\Delta\widetilde G_{\mu\nu}=
\int d^4x\, \frac{\delta
h_{\mu\nu}}{\delta F_{\rho\sigma}}\,\Delta F^{\rho\sigma}$,
i.e., $\widetilde F_{\mu\nu}=-\int d^4x\, \frac{\delta
h_{\mu\nu}}{\delta F_{\rho\sigma}}\,G^{\rho\sigma}$, which we simply rewrite as
\begin{equation}\label{basicDR}
\widetilde F_{\mu\nu}=-\int d^4x\, \frac{\delta \widetilde G_{\mu\nu}}{\delta F_{\rho\sigma}}\,G^{\rho\sigma}~.
\end{equation}
It is straightforward to check that electromagnetism and Born-Infeld
theory satisfy (\ref{basicDR}).
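For linear electromagnetism, for instance, the check is immediate (we sketch it here): $G=\widetilde F$ gives $\widetilde G=-F$, so that, with the normalization $\frac{\delta F_{\mu\nu}(x)}{\delta F_{\rho\sigma}(y)}=\delta_{\mu\nu}^{\rho\sigma}\,\delta^4(x-y)$,
\begin{equation}
-\int\! d^4y~\frac{\delta \widetilde G_{\mu\nu}(x)}{\delta F_{\rho\sigma}(y)}\,G^{\rho\sigma}(y)=\int\! d^4y~\delta_{\mu\nu}^{\rho\sigma}\,\delta^4(x-y)\,G_{\rho\sigma}(y)=G_{\mu\nu}(x)=\widetilde F_{\mu\nu}(x)~,
\end{equation}
which is precisely (\ref{basicDR}).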
\vskip .4cm
If the theory is obtained from an action
functional $S[F]$ (in the field strength $F$ and its derivatives) then
(\ref{maxwww}) is given by
\begin{equation}
\widetilde G^{\mu\nu}= 2\frac{\delta S[F]}{\delta F_{\mu\nu}} ~.\label{Sconst}
\end{equation}
In particular it follows that
\begin{equation}
\frac{\delta{\widetilde G}^{\mu\nu}}{\delta F_{\rho\sigma}}=\frac{\delta
{\widetilde G}^{\rho\sigma}}{\delta F_{\mu\nu}}~,
\end{equation}
hence the duality symmetry condition (or self-duality condition)
(\ref{basicDR}) equivalently reads
$
\widetilde F_{\mu\nu}=-\int d^4x\, \frac{\delta\widetilde G_{\rho\sigma}}{\delta F_{\mu\nu}}\,G^{\rho\sigma}
$. Now writing $\widetilde F_{\mu\nu}=\frac{\delta}{\delta
F_{\mu\nu}}\,\frac{1}{2}\!\int \!d^4x \:F_{\rho\sigma}\widetilde F^{\rho\sigma}$ we equivalently have
\begin{equation}
\frac{\delta}{\delta F_{\mu\nu}}\int \! d^4x\:F\widetilde F+G\widetilde G=0~, \label{NGZ1}
\end{equation}
where $F\widetilde F=F_{\rho\sigma}\widetilde F^{\rho\sigma}$ and similarly
for $G\widetilde G$.
We require this condition to hold for any field configuration $F$
(i.e. off shell of (\ref{max22}), (\ref{max11})) and
hence we obtain the Noether-Gaillard-Zumino (NGZ) self-duality
condition\footnote{Note that (\ref{NGZ2}) (the integrated form of
(\ref{NGZ})) also follows in a straightforward manner by repeating
the steps in \cite{GZ} but with $G$
the functional derivative of the action rather than the partial
derivative of the lagrangian \cite{KT, AFZ}. This makes a difference for nonlinear
theories which also contain terms in derivatives of $F$.}
\begin{equation}
\int \! d^4x~F\widetilde F+G\widetilde G=0 \label{NGZ2}~.
\end{equation}
The vanishing of the integration constant is determined for example by
the condition $G={{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$ for weak and slowly varying fields,
i.e. by the condition that in this regime the theory is
approximated by usual electromagnetism.
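For usual electromagnetism itself the condition holds even pointwise: $G=\widetilde F$ gives
\begin{equation}
F\widetilde F+G\widetilde G=F\widetilde F+\widetilde F~\widetilde{\widetilde F}=F\widetilde F-\widetilde F F=0~,
\end{equation}
where we used $\widetilde{\widetilde F}=-F$ and $\widetilde F F=F\widetilde F$.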
\vskip .4cm
We also observe that the NGZ self-duality condition (\ref{NGZ2}) is equivalent to
the
invariance of $S^{inv}=S-\frac{1}{4}\int \!d^4x\,F\widetilde G$; indeed,
under a rotation (\ref{rotFG}) with infinitesimal parameter
$\alpha$ we have
$S^{inv}[F']-S^{inv}[F]=-\frac{\alpha}{4}\int\!d^4x\; F\widetilde F+ G\widetilde G=0$.
\begin{Note}\label{note1}
If the Lagrangian $L(F)$ of the action functional $S[F]$ does not depend on the derivatives
of $F$, then we cannot integrate by parts and condition (\ref{NGZ2}) is equivalent to
\begin{equation}
F\widetilde F+G\widetilde G=0\label{NGZ}
\end{equation}
since the field configuration $F$ is arbitrary (and therefore with
arbitrary support in spacetime).
On shell of (\ref{max22}), (\ref{max11}) we can introduce the electric potential $A_\mu$ and the
magnetic one $B_\mu$ so that
$F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$,
$G_{\mu\nu}=\partial_\mu B_\nu-\partial_\nu B_\mu$
and (\ref{NGZ}) becomes the (Noether-Gaillard-Zumino) current conservation condition $\partial_\mu
J^\mu=\partial_\mu( A_\nu \widetilde F^{\mu\nu}+B_\nu\widetilde
G^{\mu\nu})=0$.
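Explicitly, using $\partial_\mu\widetilde F^{\mu\nu}=\partial_\mu\widetilde G^{\mu\nu}=0$ and the antisymmetry of $\widetilde F$ and $\widetilde G$, we have
\begin{equation}
\partial_\mu J^\mu=\partial_\mu A_\nu\,\widetilde F^{\mu\nu}+\partial_\mu B_\nu\,\widetilde G^{\mu\nu}=\frac{1}{2}\big(F\widetilde F+G\widetilde G\big)=0~.
\end{equation}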
Examples of theories satisfying (\ref{NGZ2}) and not
(\ref{NGZ}) are obtained in Appendix \ref{HDactions}, where we generalize the example
presented in \cite{BN}.
\end{Note}
\begin{Note}\label{Note2}
If the Lagrangian $L(F)$ is defined in Minkowski spacetime and depends only on $F$ and not on its
derivatives, then Lorentz invariance implies that it depends only on the
scalars $FF$ and $(F\widetilde F)^2$, where the square in $(F\widetilde F)^2$
is needed for parity symmetry (space inversion invariance). More
generally we can consider a Lagrangian in curved spacetime that depends only
on the (pseudo)scalars $FF$ and $F{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$.
It is then possible to integrate the differential equation
(\ref{NGZ}): $F_{\mu\nu}{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}^{\mu\nu}-4\,{\big(}^{\!\!\!\!\!\!\ast}\;{\frac{\partial
L}{\partial F^{\mu\nu}}}\big)\frac{\partial L}{\partial
F_{\mu\nu}}=0$.
The solution is presented in \cite{GZS} (an alternative form is
presented in \cite{HKS}, see also \cite{Ivanov:2003uj}); it depends
on an arbitrary real valued function $v(s)$ of a real variable $s$, with the
initial condition $v(s)\to -s$ in the limit $s\rightarrow 0$.
However $L(F)$ is explicitly determined only after inverting a
function related to $v(s)$. Hence explicit solutions $L(F)$
in terms of simple functions are very difficult to find.
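The best known exception is Born--Infeld theory; up to the normalization of the coupling constant $\lambda$ (conventions differ in the literature) its Lagrangian reads
\begin{equation}
L_{BI}=\frac{1}{\lambda}\bigg(1-\sqrt{1+\frac{\lambda}{2}\,FF-\frac{\lambda^2}{16}\,(F\widetilde F)^2}~\bigg)~,
\end{equation}
which reduces to $-\frac{1}{4}FF$ in the limit $\lambda\to0$.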
This suggests looking for solutions $L(F)$, and more generally actions
$S[F]$, that are power series in the
coupling constant $\lambda$.
\end{Note}
\begin{Note}
Given an action $S[F]$ with self-dual
equations of motion, the one parameter family of
theories defined by $S_c[F]=\frac{1}{c^2}S[cF]$ (with $c\geq0$, cf. end of Section \ref{AFEOM}) is also self-dual.
This is so because for any given value of $c$ the action $S_c[F]$ satisfies the corresponding NGZ self-duality
conditions (\ref{NGZ2}):
\begin{equation}
\int\!d^4x~ F\widetilde F-2\frac{\delta S_c[F]}{\delta
F}2\widetilde{\frac{\delta S_c[F]}{\delta F}}\,=0\label{FFgGG}~.
\end{equation}
Indeed $\frac{\delta S_c[F]}{\delta
F}\widetilde{\frac{\delta S_c[F]}{\delta F}}=
\frac{1}{c^4}\frac{\delta S[cF]}{\delta
F}\widetilde{\frac{\delta S[cF]}{\delta F}}=
\frac{1}{c^2}\frac{\delta S[cF]}{\delta
cF}\widetilde{\frac{\delta S[cF]}{\delta cF}}$.
Therefore condition (\ref{FFgGG}) is equivalent to
$\int\!d^4x ~cF c\widetilde{F}-2\frac{\delta S[cF]}{\delta
cF}2\widetilde{\frac{\delta S[cF]}{\delta cF}}\,=0$.
These are the self-duality conditions for the action $S[\hat F]$ with
$\hat F=cF$. Hence these conditions hold because
the self-duality conditions for the
initial action $S$ hold for any field configuration.
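For example, for the Born--Infeld Lagrangian $L_{BI}=\frac{1}{\lambda}\big(1-\sqrt{1+\frac{\lambda}{2}FF-\frac{\lambda^2}{16}(F\widetilde F)^2}\,\big)$ (up to normalization conventions for $\lambda$) the rescaling amounts to a redefinition of the coupling constant:
\begin{equation}
\frac{1}{c^2}\,L_{BI}(cF)=\frac{1}{c^2\lambda}\bigg(1-\sqrt{1+\frac{c^2\lambda}{2}\,FF-\frac{(c^2\lambda)^2}{16}\,(F\widetilde F)^2}~\bigg)~,
\end{equation}
i.e., $S_c$ is again Born--Infeld with coupling constant $c^2\lambda$.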
This result allows us to give yet another derivation of the invariance
under duality rotations of expression (\ref{Ttrace}) for self-dual actions:
one has just to recall that the variation of the action with respect to a duality invariant
parameter is duality invariant \cite{GZ}.
\end{Note}
\subsection{Complex and chiral variables}\label{complexvariables}
Following Schr\"odinger \cite{Schrodinger, GZS} it is convenient to consider the complex variables
\begin{equation}
T=F-iG~,~~\overline T=F+i G~,
\end{equation}
that under a duality rotation transform with a phase: $T\to e^{-i\alpha}T$,
$\overline T \to e^{i\alpha}\overline T$, and that treat the electric and
magnetic field strengths $F$ and $G$ on an equal footing.
In the new variables the NGZ self-duality condition (\ref{NGZ2}) reads $\int
d^4x \;\overline{T}\,{{\widetilde T}}=0$, or equivalently
\begin{equation}\label{GZN2}
\int \!d^4x \sqrt{g}\,~\overline{T}{{{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}}\,=\,0~.
\end{equation}
Following \cite{BN} we further consider the complex (anti)selfdual
combinations
$F^\pm=\frac{1}{2}(F\pm i{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})$, $G^\pm=\frac{1}{2}(G\pm i{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})$
and
\begin{eqnarray}\label{TPM}
T^+&=&\frac{1}{2}(T+i{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})=F^+-iG^+~,~~~~~~~~~~T^-=\frac{1}{2}(T-i{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})=F^--iG^-~,~~\\
\overline{T^+}&=&\frac{1}{2}(\overline T-i ~\,{\overline T}^{\!\!\!\!\!\!\!\!\!\ast\,~~})=F^-+iG^-={{\overline T}{^{\mbox{${\;\!}^-$}}\!}}~,~~
\overline{T^-} =\frac{1}{2}(\overline T+i{~\,{\overline T}^{\!\!\!\!\!\!\!\!\!\ast\,~~}})=F^+
+iG^+={\overline T}{^{\mbox{${\;\!}^+$}}\!}\,.~~~~~~\label{OTPM}
\end{eqnarray}
The fields in the first row have duality charge
$+1$ because they transform with $e^{-i\alpha}$ under the duality rotation (\ref{rotFG}), while
their complex conjugates in the second row have duality charge $-1$.
Complex conjugation inverts chirality, hence $T^+$ and
$\overline{T^-}={\overline T}{^{\mbox{${\;\!}^+$}}\!}$ have chirality $+1$ while
$T^-$ and $\overline{T^+}={\overline T}{^{\mbox{${\;\!}^-$}}\!}$ have chirality $-1$.
The (anti)selfdual combinations have definite behaviour in the limit of
vanishing coupling constant, $\lambda\to 0$. Since in this limit we recover
usual electromagnetism we have $G\to{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$
and $G^\pm\to \mp i F^\pm$, and hence
\begin{equation}
T^+\,\to\, 0~~,~~T^-\,\to 2 F^-~.
\end{equation}
The NGZ self-duality condition (\ref{NGZ2}) in these variables reads (use
$({{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}})^{\, \pm}= {{(T^\pm)_{\,}}}^{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\ast\,~~~}\,\,\,=\mp i T^\pm\,$)
\begin{equation}
\int \!d^4x\sqrt{g}\;\,\, T^+\,\overline{T^-} -\,\overline{T^+}\,T^-\, =0~.
\end{equation}
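This expression follows from $\int\!d^4x\sqrt{g}\;\overline T\,\widetilde T=0$ by decomposing $T=T^++T^-$ and using $\widetilde{T^\pm}=\mp iT^\pm$: since contractions of two-forms of opposite chirality vanish, only the same-chirality terms survive,
\begin{equation}
\overline{T}\,\widetilde T=\big(\,\overline{T^+}+\overline{T^-}\big)\big(-iT^++iT^-\big)=-i\,\overline{T^-}\,T^++i\,\overline{T^+}\,T^-~,
\end{equation}
because $\overline{T^-}$ has the same chirality as $T^+$ while $\overline{T^+}$ has the same chirality as $T^-$.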
\subsection{The action functional ${\cal I}[T^-,{\overline{T^-}}]$\label{ITTm}}
As noticed in \cite{CKR},
the Bossard and
Nicolai proposal \cite{BN} for constructing self-dual equations of
motion is easily expressed in terms of chiral variables:
We consider a real valued functional ${\cal I}[T^-,{\overline{T^-}}]$ in the
chiral variables\footnote{We stress that
the independent variables in ${\cal I}$ are $T^-$ and its complex conjugate
$\overline{T^-}$, just like in $S[F]$ or $S[F^-,F^+]$ the independent variables are
$F^-$ and its complex conjugate $F^+$. The variables $T^+,
\overline{T^+}$ (and hence $T,\overline T$) are then defined
in terms of the $T^-$, $\overline{T^-}$ ones.} $T^- ,{\overline{T^-}}$
and define the constitutive relations (called deformed twisted self
duality conditions in \cite{BN}, and nonlinear twisted self-duality
conditions in \cite{CKR})
\begin{equation}
{T^+}^{\mu\nu}=\frac{1}{\sqrt{g}}\frac{\delta {\cal I}[T^-,{\overline{T^-}}]}{\delta{\overline{T^-}_{\!\!\mu\nu}}}~~,~~~~{\overline{T^+}{_{}}^{\mu\nu}}=\frac{1}{\sqrt{g}}\frac{\delta {\cal I}[T^-,{\overline{T^-}}]}{\delta T^-_{\;\mu\nu}}\label{Iconst}~.
~
\end{equation}
Reality\footnote{The reality condition is ${\cal
I}[T^-,{\overline{T^-}}]=\overline{{\cal
I}[T^-,{\overline{T^-}}]}$. Then we extend
${\cal I}[T^-,{\overline{T^-}}]$ to ${\widehat{\cal
I}}[T^-,{\overline{U^-}}]\equiv \frac{1}{2}\big({\cal
I}[T^-,{\overline{U^-}}]+\overline{{\cal
I}[U^-,{\overline{T^-}}]}_{\,}\big)$ that by construction
satisfies
$\overline{{\widehat{\cal I}}\big[T^-,{\overline{U^-}}\big]}={\widehat{\cal
I}}\big[\,{{{U^-}}}\,,\, \overline{T^-}\,\big]$ for arbitrary
complex and independent fields $T^-$ and ${{U^-}}$.
The functional variation in (\ref{Iconst}), where $\,{\overline{T^-}}$ is kept
independent from $T^-$, then explicitly reads
$\,
T^+=\frac{1}{\sqrt{g}}\frac{\delta {\widehat{\cal I}}[T^-,{\overline{U^-}}]}{\delta{\overline{U^-}}}\Big|_{{U^-=T^-}}\,,~{\overline{T^+}}=\frac{1}{\sqrt{g}}\frac{\delta {\widehat{\cal I}}[T^-,{\overline{U^-}}]}{\delta T^-}\Big|_{{U^-=T^-}}
$.
} of
$\cal I$
implies that the second equation is just the
complex conjugate of the first one, hence the constitutive relations
are 6 real equations as in (\ref{maxwww}) and in (\ref{Sconst}).
If moreover ${\cal I}$ is duality invariant under $T^-\to
e^{-i\alpha}T^-$, $\overline{T^-}\to e^{i\alpha}\overline{T^-}$ then
relations (\ref{Iconst}) imply the NGZ self-duality condition (\ref{NGZ2});
indeed under an infinitesimal duality rotation $T^-\to T^-+\Delta T^-$,
$\Delta T^-=-i\alpha T^-$
we have:
\begin{equation}
0=\Delta{\cal I}=\int d^4x\;\,\frac{\delta{\cal I}}{\delta {\overline{T^-}}}\Delta
{\overline{T^-}}+
\frac{\delta{\cal I}}{\delta T^-}\Delta
T^-=i\alpha \int \!d^4x\sqrt{g}\, \,\,T^+\,{\overline{T^-}} -\,{\overline{T^+}} \,T^-~.
\end{equation}
This is a powerful approach because the constitutive relations are
easily given (though the dependence
$\widetilde G_{\mu\nu}=h_{\mu\nu}[F,\lambda]$ is determined implicitly), and the self-duality condition is also easily
implemented: just consider duality invariant functionals ${\cal I}$. Furthermore,
Lorentz (or, in curved spacetime, diffeomorphisms) invariance of the
functional $\cal I$ implies Lorentz (diffeomorphisms)
covariance of the nonlinear and higher derivatives equations of motion.
\vskip .4cm
The problem with this approach is that of finding an
action functional $S[F]$ such that the constitutive relations (\ref{Sconst})$\,$:
$
{G^{}_{}}^{\!\!\!\!\!\:\!\!\!\!\!\!\ast~~}{}^{\mu\nu}= \frac{2}{\sqrt{g}}\frac{\delta S[F]}{\delta F_{\mu\nu}}
$,
are equivalent to the constitutive relations (\ref{Iconst}).
We first approach this problem perturbatively, and give explicit
expressions for the lowest order perturbations; in the next section
we solve the problem (albeit implicitly) by using a Legendre transform
between $S$ and ${\cal I}$.
In the perturbative approach we assume that ${\cal I}={\cal I}[T^-,\overline{T^-}]$ is a power
series in the coupling constant $\lambda$,
\begin{equation}
{\cal I}[T^-,\overline{T^-}]={\cal I}^{[0]}[T^-,\overline{T^-}]+{\cal I}^{[1]}[T^-,\overline{T^-}]+{\cal I}^{[2]}[T^-,\overline{T^-}]+\ldots
\end{equation}
where ${\cal I}^{[n]}$ denotes the term proportional to $\lambda^n$, and
in this expansion $T^-,\overline{T^-}$ are considered the elementary
independent fields (and hence $\lambda$ independent).
Then $S[F]=S[F^-,F^+]$ is found as a power series
\begin{equation}
S[F^-,F^+]=S^{(0)}[F^-,F^+]+S^{(1)}[F^-,F^+]+S^{(2)}[F^-,F^+]+\ldots
\end{equation}
where ${S}^{(n)}$ denotes the term proportional to $\lambda^n$, and
in this expansion $F^-,F^+$ are the elementary independent fields (and hence $\lambda$ independent).
The initial condition is ${\cal I}^{[0]}=0$, which corresponds to linear
electromagnetism, ${S}^{(0)}=-{\frac{1}{4}}\int\!d^4x\sqrt{g}\,F^2$.
Since $\overline{T^+}=F^-+iG^-$ implies ${\overline{T^+}}^{(n)}=i{G^-}^{(n)}$ for $n\geq
1$, we see that
equivalence of the constitutive relations (\ref{Iconst}) and
(\ref{Sconst}), which we rewrite as $G^{\pm\,\mu\nu}=\frac{\pm2i}{\sqrt{g}}\frac{\delta S}{\delta
F^\pm_{\,\mu\nu}}$, is obtained by requiring order by order in $n$ that
the term ${S}^{(n)}$
satisfies the condition
\begin{equation}\label{recS}
2\frac{\delta S^{(n)}}{\delta F^-_{\,\mu\nu}}=\Big(\,\frac{\delta {\cal I}}{\delta
T^-_{\,\mu\nu}}\Big|_{{{{}^{T^-[F^-,F^+]}_{\overline{T^-}[F^-,F^+]}}}}\Big)^{(n)}
\end{equation}
where on the right hand side we consider
$\frac{\delta {\cal I}}{\delta T^-}$ as a functional of $F^-$ and
$F^+$ because $T^-=T^-[F^-,F^+]$;
the dependence
$T^-=T^-[F^-,F^+]$
being implicitly determined by the chiral variables constitutive
relations (\ref{Iconst}) and the relations $T^{\pm}=F^\pm-iG^\pm$,
that, in order to stress that the independent variables are
$T^-$ and $\overline{T^-}$, we rewrite as
\begin{eqnarray}
&&~\,~~2F^-=T^-+\frac{1}{\sqrt{g}}\frac{\delta {\cal I}[T^-,{\overline{T^-}}]}{\delta T^-_{\;\mu\nu}}~~,\,~~~~~~~2F^+=\frac{1}{\sqrt{g}}\frac{\delta {\cal I}[T^-,{\overline{T^-}}]}{\delta \overline{T^-}_{\;\mu\nu}} +\overline{T^-}~,\label{al}\\~
&&-2iG^-=T^--\frac{1}{\sqrt{g}}\frac{\delta {\cal I}[T^-,{\overline{T^-}}]}{\delta T^-_{\;\mu\nu}}~~,~~~-2iG^+=\frac{1}{\sqrt{g}}\frac{\delta {\cal I}[T^-,{\overline{T^-}}]}{\delta \overline{T^-}_{\;\mu\nu}} -\overline{T^-}~.
\label{al1}
\end{eqnarray}
In Appendix \ref{app1} we determine the first two nontrivial terms of the
nonlinear and higher derivatives electromagnetic action associated
with an arbitrary functional ${\cal I}={\cal I}^{[0]}+{\cal I}^{[1]}+{\cal I}^{[2]}+\ldots$, with ${\cal I}^{[0]}=0$.
They read
\begin{eqnarray}
S^{(1)}[F^-,F^+]&=&\frac{1}{4}{\cal I}^{[1]}[2F^-,2F^+]~,~~\nonumber\\[.4em]
S^{(2)}[F^-,F^+]&=&\frac{1}{4}{\cal I}^{[2]}[2F^-,2F^+]-
\frac{1}{2}\int \!\!d^4x\frac{1}{\sqrt{g}}\,~\frac{\delta {S}^{(1)}}{\delta
F^-}
\frac{\delta {S}^{(1)}}{\delta
F^-}+\frac{\delta {S}^{(1)}}{\delta
F^+}\frac{\delta {S}^{(1)}}{\delta
F^+}~~.\label{Scorrec}
\end{eqnarray}
We recall that
at zeroth
order
$S^{(0)}[F^-,F^+]=-\frac{1}{4}\int\!d^4x\sqrt{g}~\, {F^-}^2 +_{\,} {F^+}^2=-\frac{1}{4}\int\!d^4x\sqrt{g}~ F^2$.
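As an illustration of these formulas (the quartic invariant below is chosen by us for definiteness, it is not singled out in the text), consider
\begin{equation}
{\cal I}^{[1]}[T^-,\overline{T^-}]=\frac{\lambda}{4}\int\! d^4x\sqrt{g}~\,(T^-)^2\,(\overline{T^-})^2~,
\end{equation}
which is real and invariant under $T^-\to e^{-i\alpha}T^-$, $\overline{T^-}\to e^{i\alpha}\overline{T^-}$. Then
\begin{equation}
S^{(1)}[F^-,F^+]=\frac{1}{4}{\cal I}^{[1]}[2F^-,2F^+]=\lambda\int\! d^4x\sqrt{g}~\,(F^-)^2(F^+)^2=\frac{\lambda}{4}\int\! d^4x\sqrt{g}~\Big((FF)^2+(F\widetilde F)^2\Big)~,
\end{equation}
where in the last step we used the Lorentzian signature identity $(F^\pm)^2=\frac{1}{2}\big(FF\pm iF\widetilde F\big)$; this is the familiar structure of the lowest order four-photon correction, as in the $\lambda$ expansion of Born--Infeld theory (up to normalization).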
\subsection{From $S[F]$ to ${\cal I}[T^-,\overline{T^-}]$ via Legendre transform}
We now show, as in \cite{Ivanov:2003uj}, that ${\cal
I}[T^-,\overline{T^-}]$ and $S[F]$ are related by
\begin{equation}\label{LegendreT}
\!\frac{1}{4}{\cal I}[T^-,\overline{T^-}]=S[F]+\int\!d^4x\sqrt{g}\, ~
\frac{1}{2}T^-F^--\frac{1}{8}{T^-}^2-\frac{1}{4}{F^-}^2 +
\frac{1}{2}\overline{T^-} F^+-\frac{1}{8}{\overline{T^-}}^2-\frac{1}{4}{F^+}^2~.
\end{equation}
This is actually a Legendre transform, and it implies that
the constitutive relations (\ref{Iconst}) are equivalent to the
constitutive relations
(\ref{Sconst}),
i.e., $G^{\pm\,\mu\nu}=\frac{\pm2i}{\sqrt{g}}\frac{\delta S[F^-,F^+]}{\delta
F^\pm_{\,\mu\nu}}$.
In order to recognize (\ref{LegendreT}) as a Legendre transform we define the functional
\begin{equation}
U[F^-,F^+]=-2S[F^-,F^+]+\frac{1}{2}\int\! d^4x \sqrt{g}~{F^-}^2+{F^+}^2 ~.
\end{equation}
Recalling that $iG^-=F^--T^-$ (see (\ref{TPM})) the constitutive relations $G^{\pm\,\mu\nu}=\frac{\pm2i}{\sqrt{g}}\frac{\delta S[F^-,F^+]}{\delta
F^\pm_{\,\mu\nu}}$ now read
\begin{equation}\label{TUF}
T^-=\frac{1}{\sqrt{g}}\frac{\delta U}{\delta F^-}~, ~~\overline{T^-}=\frac{1}{\sqrt{g}}\frac{\delta U}{\delta F^+}~.
\end{equation}
These relations (at least for weak and slowly varying fields) implicitly
determine $F^\pm=F^\pm[T^-,\overline{T^-}]$.
We then consider the Legendre transform
\begin{equation}\label{LTVU}
V[T^-,\overline{T^-}]=-U[F^-,{F^+}]+\int\!d^4x\sqrt{g}~\,T^-F^-+\overline{T^-}F^+~.
\end{equation}
Varying w.r.t. $T^-$ and $\overline{T^-}$ we obtain that the
dependence $F^\pm=F^\pm[T^-,\overline{T^-}]$ is given by
\begin{equation}\label{FVT}
F^-=\frac{1}{\sqrt{g}}\frac{\delta V}{\delta T^-}~,~~F^+=\frac{1}{\sqrt{g}}\frac{\delta V}{\delta {\overline{T^-}}}~.
\end{equation}
Therefore (\ref{FVT}) are the inverse relations of (\ref{TUF}), in
particular they are equivalent to
$G^{\pm\,\mu\nu}=\frac{\pm2i}{\sqrt{g}}\frac{\delta S[F^-,F^+]}{\delta
F^\pm_{\,\mu\nu}}$.
We now define the functional $ {\cal I}[T^-,\overline{T^-}]$ via
\begin{equation}
V[T^-,\overline{T^-}]=\frac{1}{2}{\cal
I}[T^-,\overline{T^-}]+\frac{1}{4}\int\!d^4x\sqrt{g}~\,{T^-}^2+{\overline{T^-}}^2~.
\end{equation}
Relation (\ref{LegendreT}) is trivially equivalent to
(\ref{LTVU}). Furthermore the constitutive relations
$G^{\pm\,\mu\nu}=\frac{\pm2i}{\sqrt{g}}\frac{\delta S[F^-,F^+]}{\delta
F^\pm_{\,\mu\nu}}$ and (\ref{Iconst}) are equivalent because
(\ref{FVT}) is easily seen to be equivalent to (\ref{al}), i.e., to (\ref{Iconst}).
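As a zeroth order check in $\lambda$ (free electromagnetism, sketching the computation): with $S^{(0)}=-\frac{1}{4}\int\!d^4x\sqrt{g}\,F^2$ and ${F^-}^2+{F^+}^2=F^2$ one finds
\begin{equation}
U=\int\! d^4x\sqrt{g}~\,{F^-}^2+{F^+}^2~,~~~~T^-=2F^-~,~~~~V=\frac{1}{4}\int\! d^4x \sqrt{g}~\,{T^-}^2+{\overline{T^-}}^2~,
\end{equation}
and hence ${\cal I}=0$, in agreement with the initial condition ${\cal I}^{[0]}=0$ of the perturbative approach.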
\vskip .4cm
Let's now study duality rotations.
We consider $F$ to be the elementary fields and let
$S[F]$ give self-dual constitutive relations.
Under infinitesimal duality rotations (\ref{rotFG}), $F\to F+\Delta F=F-\alpha G$,
$G\to G+\Delta G=G+\alpha F$ we have (since
$T^-=F^--\frac{2}{\sqrt{g}}\frac{\delta S}{\delta F^-}$) that $T^-\to
T^-+\Delta T^-=T^--i\alpha T^-$. We calculate the variation of (\ref{LegendreT}) under duality
rotations. After a little algebra we
see that
\begin{eqnarray}\label{sdequiv}
\Delta {\cal I}&=& {\cal I}[T^-+\Delta
T^-,\overline{T^-}+\Delta\overline{T^-}]- {\cal I}[T^-,\overline{T^-}]\\
&=&S[F+\Delta F]-S[F]+\frac{\alpha}{4}\int\!d^4x\sqrt{g}~\,G\widetilde
G-F\widetilde F
=
-\frac{\alpha}{4}\int\!d^4x\sqrt{g}~\,G\widetilde
G+F\widetilde F
=0\nonumber\end{eqnarray}
where we used that $S[F+\Delta F]-S[F]=\int\!d^4x\; \frac{\delta S}{\delta F}\Delta
F=-\frac{\alpha}{2}\int\!d^4x\;\widetilde G G$, and the self-duality conditions (\ref{NGZ2}).
Hence $\cal I$
is invariant under duality rotations.
Vice versa, we can consider $T^-$, $\overline{T^-}$ to be the elementary
fields and assume ${\cal I}[T^-,\overline{T^-}]$ to be duality
invariant. Then from (\ref{al}) and $iG^-=F^--T^-$, i.e., from
(\ref{al}) and (\ref{al1}), it follows that
under the infinitesimal rotation $T^-\to T^-+\Delta T^-=T^--i\alpha T^-$ we have
$F\to F+\Delta F=F-\alpha G$,
$G\to G+\Delta G=G+\alpha F$, and from (\ref{sdequiv}) we
recover the self-duality conditions
(\ref{NGZ2}) for the action $S[F]$.
\vskip .4cm
This shows the equivalence between the $S[F]$ and the ${\cal
I}[T^-,\overline{T^-}]$
formulations of self-dual constitutive relations. Hence
the deformed twisted self-duality condition
proposal, which originated in the context of supergravity counterterms, is
actually the general framework needed to discuss self-dual theories
starting from a variational principle.
\section{Constitutive relations without self-duality\label{constitutiverelations}}
The constitutive relations (\ref{maxwww}) or (\ref{Sconst}) express $G$ as a function
of $F$ and its derivatives. They do not treat $F$ and $G$ on an equal
footing, and therefore their possible compatibility with
duality symmetry is hidden.
On the other hand the independent chiral variables $T^-, \overline{T^-}$ of the constitutive relations
(\ref{Iconst}) (the deformed twisted self duality
conditions) by construction treat $F$ and $G$ on an equal footing, and
duality rotations are simply implemented via multiplication by a phase.
There, however, the relation between $G$ and $F$ is only given
implicitly.
Moreover, already the description of Born-Infeld theory is quite
nontrivial in these chiral variables.
We here further study the nonlinear relations between these two
formulations and related ones. This study
sheds light on the structure of self-dual theories, in particular it will lead to a
closed form expression of the constitutive relations (\ref{Iconst}) for the Born-Infeld theory.
We proceed with a manifestly duality symmetric reformulation of the constitutive relations
(\ref{maxwww}) (and more precisely of the relations (\ref{GNN}) below).
This is achieved by doubling them (to ${{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=h[F,\lambda]$ and ${{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=k[G,\lambda]$)
and then constraining them via a symplectic
matrix $\cal M$. This matrix is well known in the study of duality rotations in
linear electromagnetism coupled to scalar fields
(see e.g. \cite{AFZ}). Here $\cal M$ will be in
general dependent on the field strengths $F,G$ and their
derivatives, leading to nonlinear and higher derivatives
electromagnetism.
Its structure will be fully determined only by requiring
that the doubled constitutive relations consistently give just 6 independent
equations that determine $G$ in terms of $F$ and vice versa. Notice that
even though our aim is the study of self-dual
theories, in this section we do not assume that the constitutive relations
are compatible with duality symmetry.
The constraints on the matrix $\cal M$ are then
analyzed in terms of Schr\"odinger's variables $T$, $\overline T$.
It is in these variables that Born-Infeld theory has an extremely
simple description \cite{Schrodinger, GZS}.
\subsection{The ${\cal N}$ and ${\cal M}$ matrices}
More insight into the constitutive relations (\ref{maxwww}) can be obtained if we
restrict our study to the wide subclass that can be written as
\begin{equation}
{{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}{}_{\mu\nu}={{\cal N}_2}_{\,} F_{\mu\nu} +{{\cal N}_1}_{\,} {{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}~,
\label{GNN}
\end{equation}
where ${\cal N}_2$ is a real scalar field,
while ${\cal N}_1$ is a real pseudo-scalar field (i.e., it is not invariant
under parity, or, if we are in curved spacetime, it is not invariant under
an orientation reversing coordinate transformation).
Explicit examples of more general constitutive relations are in
Appendix \ref{HDactions}.
As usual in the literature we set
\begin{equation}
{\cal N}={\cal N}_1+i{\cal N}_2~,
\end{equation}
then, relations
(\ref{GNN}) are equivalent to $G^+={\cal N} F^+$.
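Indeed, taking the dual of (\ref{GNN}) and using $\widetilde{\widetilde F}=-F$ gives $G={\cal N}_1F-{\cal N}_2\widetilde F$, and therefore (a quick check)
\begin{equation}
G+i\widetilde G=({\cal N}_1+i{\cal N}_2)F+i\,({\cal N}_1+i{\cal N}_2)\widetilde F={\cal N}\,\big(F+i\widetilde F\big)~,
\end{equation}
i.e., $G^+={\cal N}F^+$.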
In nonlinear theories ${\cal N}$ depends on the field strength $F$, and in
higher derivative theories also on derivatives of $F$; we therefore have
in general a functional dependence ${\cal N}={\cal N}[F,\lambda]$.
Furthermore ${\cal N}$ is required to satisfy ${\cal N}\to -i$ in
the limit $\lambda\to 0$ so that we recover classical electromagnetism
when the coupling constant $\lambda\to 0$, or otherwise stated, in the
weak and slowly varying field limit, i.e., when we discard
higher powers of $F$ and derivatives of $F$.
We also assume that ${\cal N}$ can be expanded in power series of the
coupling constant\footnote{By $\lambda$ we may also denote more than
one coupling constant. For example, when a nonlinear theory in flat
space is generalized to a curved background there naturally appears
a new coupling related to the
background curvature. Similarly, as already said, if the theory has higher derivatives
it can be expanded in appropriate powers of derivatives of
$F$.} $\lambda$ (we will relax this assumption in Note \ref{Note5}). Then, since ${\cal N}_2= -1+O(\lambda)$,
${\cal N}_2$ is invertible, and from relation (\ref{GNN}) we obtain $\widetilde F={\cal N}_2^{-1} {\cal N}_1
F-{\cal N}_2^{-1} G$ and $\widetilde G={\cal N}_2
F+{\cal N}_1{\cal N}_2^{-1}{\cal N}_1F-{\cal N}_1{\cal N}_2^{-1} G$
so that the constitutive relation \eqn{GNN} is equivalent to the more duality
symmetric one
\begin{equation}\label{FFomMFF}
\left(\begin{array}{c}
{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}} \\
{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}
\end{array}\right)=
\left(\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}\right)\,{\cal M}\,
\left(\begin{array}{c}
F \\
G
\end{array}\right)
\end{equation}
where the matrix ${\cal M}$ is given by
\begin{equation}
{\cal M}({\cal N})=
\left(\begin{array}{cc}
1 & -{\cal N}_1\\
0 & 1
\end{array}\right)
\left(\begin{array}{cc}
{\cal N}_2 & 0 \\
0 & {\cal N}_2^{-1}
\end{array}\right)
\left(\begin{array}{cc}
1 & 0\\
- {\cal N}_1 & 1
\end{array}\right)
=
\left(\begin{array}{cc}
{\cal N}_2 +{\cal N}_1 \,{\cal N}_2^{-1}\, {\cal N}_1 &~ - {\cal N}_1 \,{\cal N}_2^{-1} \\
-{\cal N}_2^{-1}\, {\cal N}_1 &~ {\cal N}_2^{-1}
\end{array}\right)~.~~
\label{M(N)}
\end{equation}
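As a check, in the linear case ${\cal N}_1=0$, ${\cal N}_2=-1$ (i.e., ${\cal N}=-i$) we have ${\cal M}=-1$, and (\ref{FFomMFF}) reduces to
\begin{equation}
\widetilde F=G~,~~~~\widetilde G=-F~,
\end{equation}
i.e., to the usual constitutive relation $G=\widetilde F$ of linear electromagnetism.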
Finally, in order to really treat the electric and
magnetic field strengths $F$ and $G$ on an equal footing, we should consider
functionals ${N}_1[F,G,\lambda]$ and ${N}_2[F,G,\lambda]$ such that the constitutive relations
${{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}={N_2[F,G,\lambda]}_{\,} F +{N_1[F,G,\lambda]}_{{\,}} {{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$
are equivalent to (\ref{GNN}), i.e., such that on shell of these relations, ${N}_1[F,G,\lambda]={\cal N}_1[F,\lambda]$ and
${N}_2[F,G,\lambda]={\cal N}_2[F,\lambda]$.
Since we assume $N_1[F,G,\lambda]$ and $N_2[F,G,\lambda]$ to be power series
in $\lambda$ with $N_1=O(\lambda)$ and
$N_2=-1+ O(\lambda)$
the constitutive relations
${{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}={N_2[F,G,\lambda]}_{\,} F +{N_1[F,G,\lambda]}_{{}} {{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$ are well given in the sense that they are always equivalent to the
${{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}={{\cal N}_2[F,\lambda]}_{\,} F +{{\cal N}_1[F,\lambda]}_{{}} {{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$ ones
(just expand in power series of $\lambda$ and iteratively substitute $G$ in
$N_1[F,G,\lambda]$ and $N_2[F,G,\lambda]$).
Hence, with a slight abuse of notation,
from now on the ${\cal N}$, ${\cal N}_1$, ${\cal N}_2$ fields in
(\ref{GNN})-(\ref{M(N)}) will in general be functionals of
both $F$ and $G$.
\vskip .4cm
The matrix ${\cal M}({\cal N})$ in (\ref{M(N)}) is symmetric and symplectic
(it has indeed determinant equal to 1). The space of symmetric and symplectic
matrices has two disconnected components, that of positive definite
and of negative definite matrices. ${\cal M}({\cal N})$
is negative definite because ${\cal N}_2^{-1}=-1+O(\lambda)$.
Recalling that any symmetric, symplectic and negative definite $2\times 2$ matrix is of the kind
(\ref{M(N)}) with ${\cal N}_1$ real and ${\cal N}_2$ real and negative
(for a proof see for example the review \cite{AFZ}, Appendix A, where
it is also shown that ${\cal M}$ and ${\cal N}={\cal N}_1+i{\cal N}_2$ parametrize the coset
space $Sp(2, \mathbb{R})/U(1)$), we have that
\vskip .4cm
\begin{Proposition}\label{propos2} Any symmetric and symplectic $2\times 2$
matrix ${\cal M}$
that has a power series expansion in $\lambda$ with ${\cal M}=-1+O(\lambda)$ is of the kind
(\ref{M(N)}) with ${\cal N}_1=O(\lambda)$ real and ${\cal N}_2=-1+O(\lambda)$ real.
\end{Proposition}
We now reverse the argument that led from (\ref{GNN}) to (\ref{FFomMFF}).
We consider
constitutive relations of the form
\begin{equation}\label{FFomMFFp}
\left(\begin{array}{c}
{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}} \\
{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}
\end{array}\right)=
\left(\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}\right)\,{\cal M}[F,G,\lambda]\,
\left(\begin{array}{c}
F \\
G
\end{array}\right)
\end{equation}
that treat on equal footing $F$ and $G$, and where ${\cal M}={\cal M}[F,G,\lambda]$ is now
an {\sl arbitrary} real
$2\times 2$ matrix (with scalar entries ${\cal M}_{ij}$).
We require
${\cal M}=-1+O(\lambda)$
so that we recover classical
electromagnetism when the coupling constant $\lambda\to 0$.
A priori (\ref{FFomMFFp}) is a set of 12 real equations, twice as
many as those present in the constitutive relations (\ref{GNN}). We want only 6 of
these 12 relations to be independent so as to be able to determine $G$
in terms of the independent fields $F$ (or equivalently $F$ in terms of the independent fields $G$). Only in this case are the constitutive relations well given.
\vskip .4cm
\begin{Proposition}\label{propos3} The constitutive relations (\ref{FFomMFFp})
with ${\cal M}[F,G,\lambda]=-1+O(\lambda)$ are well given if and only if on shell of (\ref{FFomMFFp}) the matrix
${\cal M}[F,G,\lambda]$
is symmetric and symplectic.
\end{Proposition}
\begin{proof}
i) Let ${\cal M}[F,G,\lambda]=-1+O(\lambda)$ be symmetric and
symplectic on shell of (\ref{FFomMFFp}).
Then, because of Proposition \ref{propos2}, there exists a unique ${\cal N}[F,G,\lambda]=-i+O(\lambda)$ such that ${\cal M}[F,G,\lambda]={\cal M}({\cal N})$ on
shell of (\ref{FFomMFFp}). Hence (\ref{FFomMFFp}) is
equivalent to (\ref{GNN}) and therefore gives well defined
constitutive relations.
\\
ii) If the constitutive relations (\ref{FFomMFFp})
are a set of 6 independent relations that determine $G$ in terms of
$F$ then the matrix entry ${\cal M}_{22}\not=0$ (because otherwise from (\ref{FFomMFFp}),
we would have ${{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=-{\cal M}_{21}F$, which would constrain the independent
fields $F$).
It follows that (\ref{FFomMFFp}) is equivalent to
$G=-{\cal M}_{22}^{-1}{\cal M}_{21} F-{\cal M}_{22}^{-1}{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$, i.e. to
${{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}={\cal M}_{22}^{-1} F -{\cal M}_{22}^{-1}{\cal M}_{21} {{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$.
Repeating the argument that led from (\ref{GNN}) to (\ref{FFomMFF})
we conclude that (\ref{FFomMFFp}) is equivalent to the
equations
\begin{equation}\label{FFomMFFpp}
\left(\begin{array}{c}
{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}} \\
{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}
\end{array}\right)=
\left(\begin{array}{cc}
0 & -1\\
1 & 0
\end{array}\right)
\left(\begin{array}{cc}
{\cal M}_{22}^{-1}+{\cal M}_{22}^{-1}{\cal M}_{21}^2 \:& {\cal M}_{21}\\
{\cal M}_{21} & {\cal M}_{22}
\end{array}\right)
\left(\begin{array}{c}
F \\
G
\end{array}\right)~.
\end{equation}
We show that on shell of the relations (\ref{FFomMFFp}) the matrix
${\cal M}[F,G,\lambda]$ is symmetric and symplectic because
\begin{equation}\label{MSYMM}
{\cal M}[F,G,\lambda]=
\small{\left(\begin{array}{cc}
{\cal M}_{22}^{-1}+{\cal M}_{22}^{-1}{\cal M}_{21}^2 \:& {\cal M}_{21}\\
{\cal M}_{21} & {\cal M}_{22}
\end{array}\right)}~~~~\mbox{ {\sl (on shell)}}.
\end{equation}
Since by hypothesis the relations (\ref{FFomMFFp}) determine $G$ in terms of $F$ and ${{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}= - F +O(\lambda)$,
we can also determine $F$ in terms of $G$ as a power series in $\lambda$.
Then (\ref{FFomMFFp}) is also equivalent to ${{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}={\cal M}_{11} F+{\cal M}_{12}
G$ and, observing that independence of the $G$ fields implies that the matrix
entry ${\cal M}_{11}\not=0$,
we conclude that (\ref{FFomMFFp}) is likewise equivalent to
$F={\cal M}^{-1}_{11}{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}} -{\cal M}^{-1}_{11}{\cal M}_{12}G$, that we rewrite as
\begin{equation}\label{FpG0}
F^+=P\,G^+~,~~P\equiv (-{\cal M}^{-1}_{11}-i{\cal M}^{-1}_{11}{\cal M}_{12})~.
\end{equation}
Similarly (\ref{FFomMFFpp}) is also equivalent to its second row,
${{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=({\cal M}^{-1}_{22}+{\cal M}_{22}^{-1}{\cal M}_{21}^2) F+{\cal M}_{21} G$ that we
rewrite as
\begin{equation}\label{FpG1}
F^+=Q\,G^+~,~~Q\equiv \Big(-({\cal M}^{-1}_{22}+{\cal M}_{22}^{-1}{\cal M}_{21}^2)^{-1}-i({\cal M}^{-1}_{22}+{\cal M}_{22}^{-1}{\cal M}_{21}^2)^{-1}{\cal M}_{21}\Big) ~.
\end{equation}
Independence of the fields $G^+$ implies that, subtracting (\ref{FpG1}) from (\ref{FpG0}), we obtain $P-Q=0$ in
each region of spacetime where $G^+\not=0$. Moreover $P-Q=0$ in
each region of spacetime where $G^+=0$ because $G^+=0$ in that region implies
$P=1$ and $Q=1$ in that same region (we consider ${\cal M}[F,G,\lambda]$ to be a
local functional of $F$ and $G$). This shows the on shell equality $P=Q$.
Then equality (\ref{MSYMM}) immediately follows.
\end{proof}
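As a quick numerical sanity check of (\ref{MSYMM}): for arbitrary scalar entries ${\cal M}_{21}$ and ${\cal M}_{22}\not=0$ the matrix on its right hand side is manifestly symmetric, and its determinant equals $({\cal M}_{22}^{-1}+{\cal M}_{22}^{-1}{\cal M}_{21}^2)\,{\cal M}_{22}-{\cal M}_{21}^2=1$, which for $2\times 2$ matrices is equivalent to being symplectic. A minimal sketch in plain Python (the random sampling ranges are arbitrary choices):

```python
import random

# On-shell matrix of (MSYMM): [[M22^-1 + M22^-1 M21^2, M21], [M21, M22]]
def on_shell_matrix(m21, m22):
    return [[1.0 / m22 + m21 * m21 / m22, m21],
            [m21, m22]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

random.seed(0)
for _ in range(100):
    m21 = random.uniform(-3.0, 3.0)
    m22 = random.choice([-1.0, 1.0]) * random.uniform(0.1, 3.0)   # M22 != 0
    m = on_shell_matrix(m21, m22)
    assert m[0][1] == m[1][0]            # symmetric
    assert abs(det2(m) - 1.0) < 1e-9     # det = 1 + M21^2 - M21^2 = 1, i.e. symplectic
```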
\vskip .4cm
\begin{Note} \label{Note5}
We have assumed that the constitutive relations can
be written as power series expansions in $\lambda$. We here relax this
assumption and consider constitutive relations (\ref{GNN}) such that
${\cal N}[F,G,\lambda]=-i$ (or ${\cal M}[F,G,\lambda]=-1$)
for the field configuration $F=G=0$;
this is equivalent to stating that for weak and slowly varying fields
${{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}\approx -F$ (i.e., that in this regime the constitutive relations
are those of usual electromagnetism). Then
applying the implicit function theorem to the constitutive relations (\ref{GNN})
we know that there exist neighbourhoods of the field
configurations $F=0$, $G=0$ such that (\ref{GNN}) are equivalent to
the explicit expressions $G=G[F,\lambda]$ and $F=F[G,\lambda]$.
The result of this section therefore also holds without the assumption of a power series
expansion in $\lambda$: just consider fields that are sufficiently weak and slowly varying.
\end{Note}
\subsection{Complex variables}
As in Section \ref{complexvariables} it is fruitful to consider the
complex variables $T=F-iG$, $\overline{T}=F+iG$.
The transition from the real to the complex variables is given by the
symplectic and unitary matrix ${\cal A}^t$
where
\begin{equation}
{\cal A}={1\over \sqrt{2}}\left(
\begin{array}{cc}
1 & 1\\
-i & i
\end{array}\right)~~,~~~~{\cal A}^{-1}={\cal A}^\dagger~. \label{defAAm1}
\end{equation}
The equations of motion in these variables read $dT=0$, with constitutive
relations obtained applying the matrix ${\cal A}^t$
to (\ref{FFomMFFp}):
\begin{equation}\label{TRT}
\left(\begin{array}{c}
{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}} \\
{~\,{\overline T}^{\!\!\!\!\!\!\!\!\!\ast\,~~}}
\end{array}\right)=
-i\left(\begin{array}{cc}
1 & \,0\\
0 & -1
\end{array}\right)\, {\cal A}^t {\cal M} \overline{\cal A}
\left(\begin{array}{c}
T \\
\overline T
\end{array}\right)~,
\end{equation}
where $ {\cal A}^t {\cal M} \overline{\cal A}$, on shell of (\ref{TRT}), is
complex symplectic and pseudounitary w.r.t.\ the metric
$\big({}^1_0{}^{~0}_{-1}\big)$, i.e. it belongs to $Sp(2,\mathbb{C})\cap
U(1,1)=SU(1,1)$. It is also Hermitian and negative definite.
These properties uniquely characterize the matrices $ {\cal A}^t {\cal M} \overline{\cal A}$
as the matrices
\begin{equation}
\left(\begin{array}{cc}
-\sqrt{1+\tau\overline \tau^{}} & \, -i \tau\\
i \overline \tau & -\sqrt{1+\tau\overline \tau^{}}
\end{array}\right)
\end{equation}
where $\tau=\tau[T,\overline T]$ is a complex field that depends on $T$,
$\overline T$ and possibly also their derivatives.
We then see that the constitutive relations (\ref{TRT}) are equivalent
to the equations
\begin{equation}\label{TccT}
{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}=i \sqrt {1+\tau\overline \tau} \,T_{\mu\nu} -\tau\,\overline T_{\!\mu\nu}~.
\end{equation}
Notice that if ${\cal
M}=-1+O(\lambda)$ (or equivalently ${\cal N}=-i+O(\lambda)$), then
$\tau=O(\lambda)$. In particular electromagnetism is obtained by setting $\tau=0$.
\vskip .4cm
In conclusion equations (\ref{TccT})
are the most general way of writing six independent real equations
that allow one to express $G=\frac{i}{2}(T-\overline T)$ in terms of
$F=\frac{1}{2}(T+\overline T)$ as in (\ref{GNN})
(equivalently $F$ in terms of $G$).
These constitutive relations
are determined by a
complex function ${\cal N}$ (depending on $F,G$ and their derivatives
${\cal N}={\cal N}[F,G]$) or
equivalently $\tau$
(depending on $T, \overline T$ and their derivatives $\tau=\tau[T,\overline T]$).
\section{Schr\"odinger approach to self-duality conditions\label{SCH}}
In the previous section we have clarified the structure of the
constitutive relations for an arbitrary nonlinear theory of
electromagnetism. The theory may also involve higher
derivatives of the field strength because the complex
field ${\cal N}$, or equivalently the matrix $\cal M$ in
(\ref{FFomMFFp}) of (pseudo)scalar entries, can depend also on
derivatives of the electric and magnetic field strengths $F$ and $G$.
We now further examine the constitutive relations for theories that satisfy the
NGZ self-duality condition (\ref{NGZ}), i.e., $\overline T\widetilde T=0$,
or equivalently,
\begin{equation}
\overline T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=0~.\label{TGZ}
\end{equation}
The constitutive relations (\ref{TccT})
determine the dependence of the magnetic field strength $G$ on the
electric one $F$, or vice versa. We notice that this dependence is determined also if
we constrain the fields in (\ref{TccT}) to satisfy the condition
$T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}\not=0$. This is so because the set of field
configurations satisfying
$T\widetilde T\not=0$ is dense in the set of unconstrained field configurations.
Hence if we multiply or divide the constitutive relations (\ref{TccT})
by ${T\widetilde T}$ we obtain a set of equivalent constitutive
relations.
Having explained why we can freely divide by $T\widetilde T$ we can state the
following
\vskip .4cm
\begin{Proposition}\label{propos4} The constitutive relations (\ref{TccT})
and the self-duality conditions (\ref{TGZ}) are equivalent to
defining a nonlinear and higher-derivative extension of usual
electromagnetism by the relations
\begin{equation}\label{1/TT}
{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}=-\frac{T^2}{T {{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}} T_{\mu\nu}-\tau \overline T_{\mu\nu}~,
\end{equation}
that henceforth we call self-dual constitutive relations in
Schr\"odinger variables.
Equivalently we have the self-dual constitutive
relations
\begin{equation}\label{sfce1}
{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}=-\frac{T^2}{T {{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}} } T_{\mu\nu}-\frac{T\overline
T}{\overline{T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}} \overline T_{\mu\nu}~,
\end{equation}
\begin{equation}\label{sfce2}
{T\overline T} =r \,| T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|
\end{equation}
where the second equation is a scalar equation in which $| T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|$ is
the modulus of $T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$ and $r$ is a
dimensionless scalar field that depends on $T,\overline T$ and their
derivatives, that takes values in the non-negative real numbers and
that is duality invariant.
\end{Proposition}
\begin{proof}
Contracting the indices of (\ref{TccT}) with ${{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\mu\nu}$ we obtain
\begin{equation}
-T^2=i\sqrt{1+\tau\overline \tau} \,T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}} ~.\label{T2sq}
\end{equation}
Hence the self-duality condition (\ref{TGZ}) (i.e. (\ref{NGZ})), and the constitutive
relations (\ref{TccT}) imply (\ref{1/TT}).
\vskip .4cm
Vice versa (\ref{1/TT}) implies (\ref{TGZ}) and (\ref{TccT}). Indeed, contracting (\ref{1/TT}) with ${{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}$ and using $({{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})^2=-T^2$ we
obtain $\tau\,\overline T {{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=0$, hence $\overline T {{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=0$ whenever $\tau\not=0$.
It holds also if $\tau=0$ because then (\ref{1/TT}) reads
$
{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=-\frac{T^2}{T\,\,\,T_{}^{{}^{\!\!\!\!\!\!\;\!\!\!\ast}~}} T
$
, i.e.,
$
(T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})^2=-T^2T^2
$
that implies
$T=\pm i {{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$, i.e., $F=\pm{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$. This last relation implies the
self-duality condition (\ref{TGZ}).
In order to show that (\ref{1/TT}) implies (\ref{TccT}) we first contract
(\ref{1/TT}) with
${~\,{\overline T}^{\!\!\!\!\!\!\!\!\!\ast\,~~}}{}_{\mu\nu}$, and obtain
\begin{equation}\label{tauTT}
{T\overline T}=\tau\, \overline{T {{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}~.
\end{equation}
Then we contract (\ref{1/TT}) with $T_{\mu\nu}$, and obtain
\begin{equation}
T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=-\frac{T^2 }{T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}} T^2 -\tau T\overline T~.\end{equation}
This expression and the complex conjugate of (\ref{tauTT}) imply
$1+\tau\overline \tau=-\frac{T^2 T^2}{{(T\,\,\,T_{}^{{}^{\!\!\!\!\!\!\;\!\!\!\ast}~})}^2}$, and hence
$-\frac{T^2}{T\,\,\,T_{}^{{}^{\!\!\!\!\!\!\;\!\!\!\ast}~}}=i\sqrt{1+\tau\overline \tau}$, which
substituted in (\ref{1/TT}) gives (\ref{TccT}), as was to be
proven. The sign of
the square root $\sqrt{1+\tau\overline \tau}$ is determined by considering the limit $\lambda\to 0$,
where we want to recover usual electromagnetism, that in these variables
reads ${{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=i T$.
\vskip .4cm
The self-duality condition $\overline T
{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=0$ implies (\ref{tauTT}), which fixes the phase of $\tau$ to equal that
of $T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$. This
constraint is automatically satisfied by setting $r=|\tau|$ and
\begin{equation}\label{defr}
\tau= r\, \frac{T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}{|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|}~.
\end{equation}
The equivalence of (\ref{1/TT}) with the self-dual constitutive relations
(\ref{sfce1}), (\ref{sfce2}) is then immediate.
Trivially $r\geq 0$. Finally, recalling that $F$ and ${{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$ are
tensors while ${{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$ and $G$ are pseudo-tensors we easily
check that $T\overline T$ and $T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}\;\overline{T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}$ are scalars, hence
$r$ is a scalar field depending on $T,\overline T$ and their
derivatives (i.e., $r$ is invariant under orientation reversal).
Duality invariance of $r$ (under $T\to e^{-i\alpha} T$) immediately follows from
(\ref{sfce2}).
\end{proof}
In the self-duality conditions (\ref{sfce1}), (\ref{sfce2}) we have
been able to disentangle the general relations that a self-dual
theory must satisfy, i.e., (\ref{sfce1}), from the specific condition
that defines the nonlinear theory: the scalar equation (\ref{sfce2})
that determines the ratio $T\overline T/|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|$.
Nonlinear self-dual theories are defined by imposing that this ratio equals an
arbitrary duality invariant real and nonnegative
scalar function $r$ of $T,\overline T$ and their derivatives.
\begin{Example}\label{EEEEX}
{\sl Linear electromagnetism} ($G={{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$) corresponds to the case $r=0$. Indeed
$T\overline T=0$ in linear electromagnetism, while $T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=2F{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}+2iF^2$ is
arbitrary. \\
{\sl Born-Infeld nonlinear theory} satisfies the constitutive
relations $\lambda\;{\overline
T}^{\!\!\!\!\!\!\!\!\!\ast\,~~}{}^{\!\mu\nu}=\frac{\partial}{\partial
T_{\mu\nu}}\big(\frac{4\,T^2}{T\,\,\,T_{}^{{}^{\!\!\!\!\!\!\;\!\!\!\ast}~}}\big)$, i.e.,
\begin{equation}
{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}=-\frac{T^2}{T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}} T_{\mu\nu}-\frac{\lambda}{8}(T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})\,\overline T_{\mu\nu}\label{BIconst}
\end{equation}
as remarked by Schr\"odinger \cite{Schrodinger}, see \cite{GZS} for a clear
account in nowadays notations. Comparison with (\ref{sfce1}) and (\ref{sfce2}),
shows that Born-Infeld theory is determined by
\begin{equation}
r=\frac{\lambda}{8} |T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|~.\label{rBI}
\end{equation}
\end{Example}
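Both statements of this example can be verified numerically. The sketch below (plain Python with NumPy) builds the Hodge dual of a random real field strength $F$ in Minkowski spacetime, sets $G$ equal to its dual as in linear electromagnetism, and checks $T\overline T=0$ and $T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=2F{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}+2iF^2$; the signature $(-,+,+,+)$, the convention $\epsilon_{0123}=1$ and the random configuration are choices made only for this illustration:

```python
import numpy as np
from itertools import permutations

eta = np.diag([-1.0, 1.0, 1.0, 1.0])        # Minkowski metric, signature (-,+,+,+)

def perm_sign(p):
    p, s = list(p), 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    return s

eps = np.zeros((4, 4, 4, 4))                # epsilon_{mu nu rho sigma}, eps_{0123} = +1
for p in permutations(range(4)):
    eps[p] = perm_sign(p)

def raise2(T):                              # T^{mu nu} = eta^{mu a} eta^{nu b} T_{ab}
    return eta @ T @ eta

def hodge(T):                               # (*T)_{mu nu} = 1/2 eps_{mu nu rho sigma} T^{rho sigma}
    return 0.5 * np.einsum('mnrs,rs->mn', eps, raise2(T))

def dot(A, B):                              # A B = A_{mu nu} B^{mu nu}
    return np.einsum('mn,mn->', A, raise2(B))

rng = np.random.default_rng(1)
R = rng.standard_normal((4, 4))
F = R - R.T                                 # a generic real field strength
assert np.allclose(hodge(hodge(F)), -F)     # ** = -1 on 2-forms in Lorentzian signature

G = hodge(F)                                # linear electromagnetism: G = *F
T = F - 1j * G
assert abs(dot(T, np.conj(T))) < 1e-9                          # T Tbar = 0
assert abs(dot(T, hodge(T))
           - (2 * dot(F, hodge(F)) + 2j * dot(F, F))) < 1e-9   # T *T = 2 F *F + 2i F^2
```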
\vskip .4cm
We gain further insight into the self-dual constitutive relations by
analyzing the phases and moduli of the scalars fields that enter
(\ref{sfce1}) and (\ref{sfce2}).
Relation (\ref{T2sq}) implies that the
phase of $T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$ exceeds the phase of $T^2$ by
$\pi/2$. In polar coordinates we have,
\begin{equation}\label{fff}
T^2=|T^2| e^{i\varphi}~,~~T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}} =i|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}| e^{i\varphi}~.
\end{equation}
Use of (\ref{tauTT}) leads to the relation
$\tau\overline\tau=
{|T\overline T|^2}/{|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|^2}$, which, inserted in (\ref{T2sq}), gives\footnote{This equation suggests setting $\,{\rm cosh}\beta={\rho_{T^2}}/{\rho^{}_{{T^{\,\ast\,}{\!}T}}}\,$,
$\,{\rm sinh} \beta ={\rho_{T\overline T}}/{\rho^{}_{{T^{\,\ast\,}{\!}T}}}\,$,
so that (\ref{rrr}) is automatically satisfied. With these
variables the constitutive relations read
$\,
{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=i {\rm cosh} \beta\, T-i\, {\rm sinh}\beta
\frac{T^2}{\rho_{T^2}}\, \overline T
$.
Different nonlinear theories are determined by the dependence of the
angle $\beta$ on the fields $T,\overline T$ and their derivatives.}
\begin{equation}
|{T^2}|^2=|{T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}|^2+|{T\overline T}|^2~.
\label{rrr}
\end{equation}
\subsection{Chiral variables\label{cfr}}
The self-dual constitutive relations further simplify when we rewrite them
in terms of the chiral variables $T^+,T^-$ and their complex conjugates.
We consider the Hodge dual of equation (\ref{sfce1}), add it to $\pm i$ times
(\ref{sfce1}), and, with the help of (\ref{TPM}) and (\ref{OTPM}), we
obtain the equivalent relations
\begin{equation}
T^\pm_{\;\mu\nu}=-\frac{T\overline T}{{2 T^\mp}^2}\,
\frac{T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}{\overline {T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}} \:{\overline {T^\mp}}^{}_{\!\!\mu\nu}\label{tpmtbpm}
\end{equation}
where $2{{T}^{\,\mp}}^2=T^2\mp i T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=(|T^2|\pm |{T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}}|)e^{i\varphi} $. Further
use of the phase relations (\ref{fff}) leads to
$T^\pm_{\,\mu\nu}=\frac{T\overline
T}{{2\overline{T^\mp}^2}} \,\overline{T^\mp}^{}_{\!\!\mu\nu}$, i.e., to
\begin{equation}
T^+_{\;\mu\nu}=t \,e^{i\varphi}\: \overline{T^-}^{}_{\!\!\mu\nu}~,\label{tp}
\end{equation}
and $
T^-_{\;\mu\nu}=t^- e^{i\varphi}\:{\overline{T^+}}^{}_{\!\!\mu\nu}$,
where the dimensionless, nonnegative and duality rotation invariant scalar fields $t$ and $t^-$ are defined by
\begin{equation}
t\equiv \frac{T\overline T}{|T^2| + |T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|}~,\label{ttt}
\end{equation}
and $t^-\equiv \frac{T\overline T}{|T^2| - |T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|}$.
Equations $T^-_{\,\mu\nu}=t^- e^{i\varphi}\:{\overline {T^+}}^{}_{\!\!\mu\nu}$ are equivalent
to $T^+_{\,\mu\nu}=t e^{i\varphi}\:{\overline {T^-}}^{}_{\!\!\mu\nu}$ because, due to
(\ref{rrr}), $t^-=t^{-1}$.
\vskip .4cm
The scalar equation (\ref{sfce2}) determines the value of the ratio
$T\overline T/|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|$. Because of the moduli relation
(\ref{rrr}), it equivalently determines the ratio $t$ in (\ref{ttt}).
Therefore, as in the previous section (see paragraph after the proof of Proposition \ref{propos4}), we can conclude that
(\ref{tp}) is the general relation that a self-dual theory must
satisfy, while the specific condition that
defines the nonlinear theory is the dependence of the real
nonnegative duality invariant scalar function $t$ on a set of independent variables
and their derivatives, for example $T^-$ and $\overline{T^-}$.
\vskip .4cm
It is useful to present the explicit relation between the ratio $r=T\overline T/|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|$
and $t$.
We calculate
\begin{equation}|{T^-}^2|(1-t^2)=\frac{1}{2}(|T^2|+|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|)(1-t^2)
=|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|\label{useful}~,
\end{equation}
multiply this last equality by $r=T\overline
T/|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|$ and obtain
\begin{equation}
(1-t^2)r=2t~.\label{trrel}
\end{equation}
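Since (\ref{useful}), (\ref{trrel}) and the relation $t^-=t^{-1}$ rely only on the moduli relation (\ref{rrr}), they can be confirmed numerically by picking the moduli of $T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$ and $T\overline T$ at random and fixing $|T^2|$ through (\ref{rrr}). A minimal sketch in plain Python:

```python
import math, random

random.seed(2)
for _ in range(100):
    b = random.uniform(0.1, 5.0)          # b = |T *T|
    c = random.uniform(0.1, 5.0)          # c = T Tbar (real and non-negative on shell)
    a = math.hypot(b, c)                  # a = |T^2|, fixed by the moduli relation (rrr)
    t = c / (a + b)                       # (ttt)
    t_minus = c / (a - b)
    r = c / b                             # (sfce2)
    Tm2 = 0.5 * (a + b)                   # |T^-|^2 = (|T^2| + |T *T|)/2
    assert abs(Tm2 * (1 - t * t) - b) < 1e-9        # (useful)
    assert abs((1 - t * t) * r - 2 * t) < 1e-9      # (trrel)
    assert abs(t * t_minus - 1.0) < 1e-9            # t^- = 1/t
```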
\section{Nonlinear theories without higher derivatives\label{nhd}}
If the constitutive relations
$G_{\mu\nu}=h_{\mu\nu}[F,\lambda]$ (see (\ref{maxwww})) do not involve derivatives of the fields
then, as noticed in the introduction, any antisymmetric 2-tensor is a linear combination of
$F_{\mu\nu}$ and ${{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}$ with coefficients that are
(pseudo)scalar functions of $F_{\mu\nu}$. Hence the constitutive
relations (\ref{GNN}) or (\ref{FFomMFF}) are the most general
ones. Furthermore, if we are in Minkowski spacetime,
Lorentz invariance implies that
the field $\cal N$ in
(\ref{GNN}) and the matrix $\cal M$ in (\ref{FFomMFF}) can be expressed in terms of
the Lorentz invariant combinations $F^2$ and $(F{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})$.
Similarly, if we choose the chiral fields $T^-$ and $\overline{T^-}$ as independent
variables (cf. Sections \ref{ITTm} and \ref{cfr}) then any Lorentz
invariant field is a function of ${T^-}^2$ and ${\overline{T^-}}^{\,2}$.
More generally,
we consider theories in curved spacetime that depend only on
$F^2$ and $F{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}$, or ${T^-}^{2\!}$ and $\overline{T^-}^{\,2}$.
Since the
action functional ${\cal I}[T^-,\overline{T^-}]$ studied in Section
\ref{ITTm} and the
scalar field $t$ defined in (\ref{ttt}) are duality invariant, and under a duality of angle $\alpha$ we have the
phase rotation ${T^-}^2\to e^{2i\alpha}{T^-}^2$, we conclude that ${\cal
I}$ and $t$ depend only on the modulus of ${T^-}^2$, hence
${\cal I}={\cal I}[T^-,\overline{T^-}]$ and $t=t[T^-,\overline{T^-}]$ simplify to
\begin{equation}
{\cal I}=\frac{1}{\lambda}\int\!d^4x\sqrt{g}\:{I}(u) ~,~~t=t(u)~,
\end{equation}
where $I(u)$ is a dimensionless scalar function, and the variable
$u$ is defined by
\begin{equation}
u_{\,}\equiv_{\,} 2\lambda|{T^-}^2|_{\,}=_{\,}\lambda(|T^2|+|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|)~.
\end{equation}
\noindent Similarly, the constitutive relations (\ref{Iconst}) simplify to
\begin{equation}
{T^+}^{\mu\nu}=
\frac{1}{\lambda}\frac{\partial I}{\partial {\overline{T^-}_{\!\!\mu\nu}}}=
\frac{1}{\lambda}\frac{d I}{d u}_{\,}\frac{\partial u\,}{\partial {\overline{T^-}_{\!\!\mu\nu}}}~,
\end{equation} and comparison with (\ref{tp}) leads to
\begin{equation}
t={2} \frac{d {I}}{d u}~.\label{Ioft}
\end{equation}
Indeed, differentiating $u^2$ we obtain $\frac{\partial u}{\partial
\overline{T^-}_{\!\!\!\mu\nu}}=2\lambda e^{i\varphi } \,\overline{T^-}_{\!\!\!\mu\nu}$ where
we used the same conventions as in footnote \ref{funo}, and that
${T^-}^2=|{T^-}^2|e^{i\varphi}$ (see expression immediately after
(\ref{tpmtbpm})).
\subsection{Born-Infeld nonlinear theory}
In this section we determine the scalar field $t=t(u)=2\frac{d I}{d u}$ in case of
Born-Infeld theory. This is doable thanks to Schr\"odinger's
formulation (\ref{BIconst}) of Born-Infeld theory, that explicitly
gives $r=\frac{\lambda}{8}|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|$, see (\ref{rBI}).
Then from (\ref{useful}) we have
\begin{equation}
r=\frac{1}{16}u(1-t^2)~,
\end{equation}
and recalling (\ref{trrel}) we obtain \cite{AFunp, IZ}
\begin{equation}
(1-t^2)^2 u= 32 t~. \label{poleq4}
\end{equation}
Now in the limit $u\to 0$, i.e., $\lambda\to 0$, we see from (\ref{ttt}) that
$t\to 0$.
The function $t=t(u)$ defining Born-Infeld theory
is therefore given by the unique positive root of the fourth order
polynomial equation (\ref{poleq4}) that has the correct $\lambda\to 0$
limit. Explicitly,
\begin{equation}
t=\frac{1}{\sqrt{3}}\Big(\sqrt{1+s+s^{-1}} - \sqrt{2-s-s^{-1}
+\frac{24\sqrt{3}}{u\sqrt{1+s+s^{-1}}}}~\,\Big) ~,\label{radical1}
\end{equation}
where
\begin{equation}
s=\frac{1}{u} \Big(216_{\,} u +12 \sqrt{3}\sqrt{108+u^2}_{\,} u+ u^3\Big)^{\mbox{$\frac{1}{3}$}}~.
\label{radical2}\end{equation}
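The closed form (\ref{radical1}), (\ref{radical2}) can be checked numerically against the quartic equation (\ref{poleq4}). A minimal sketch in plain Python (the sample values of $u$ are arbitrary):

```python
import math

def t_BI(u):
    # eqs (radical1)-(radical2): the positive root of (1 - t^2)^2 u = 32 t with t -> 0 as u -> 0
    s = (216.0 * u + 12.0 * math.sqrt(3.0) * math.sqrt(108.0 + u * u) * u
         + u ** 3) ** (1.0 / 3.0) / u
    w = 1.0 + s + 1.0 / s
    return (math.sqrt(w)
            - math.sqrt(2.0 - s - 1.0 / s
                        + 24.0 * math.sqrt(3.0) / (u * math.sqrt(w)))) / math.sqrt(3.0)

for u in (0.25, 1.0, 8.0, 32.0, 400.0):
    t = t_BI(u)
    assert 0.0 < t < 1.0
    assert abs((1.0 - t * t) ** 2 * u - 32.0 * t) < 1e-7 * (1.0 + u)   # (poleq4)
```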
\subsection{The hypergeometric function and its hidden identity}
In \cite{CKR} the action functional $\cal I$ and the function $t(u)$
corresponding to the Born-Infeld action were found via
an iterative procedure order by order in $\lambda$ (or equivalently in $u$). The first
coefficients of the power series expansion of $t(u)$ were recognized to
be those of a generalized hypergeometric function, leading to the conclusion
\begin{equation}
t(u)=\frac{u}{32} {\,}_3F_2\Big(\frac{1}{2}, \frac{3}{4}, \frac{5}{4};\,
\frac{4}{3}, \frac{5}{3};\,-\frac{ u^2}{3^3\cdot 2^2}\Big)~,\label{3F2}
\end{equation}
and, integrating (\ref{Ioft}),
\begin{equation}
{I}(u)={6}\left(1-
{\,}_3F_2\Big(-\frac{1}{2}, -\frac{1}{4}, \frac{1}{4};\,
\frac{1}{3}, \frac{2}{3};\,-\frac{u^2}{3^3\cdot 2^2}\Big)\right)~.
\end{equation}
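The consistency of this expression with (\ref{3F2}) and (\ref{Ioft}) can be verified order by order in exact arithmetic: writing both hypergeometric series in the variable $z=-u^2/(3^3\cdot 2^2)$, the relation $t=2\,dI/du$ translates into $b_{k+1}=\frac{9}{64}\frac{a_k}{k+1}$ between the Taylor coefficients $a_k$ of the series in (\ref{3F2}) and $b_k$ of the series above. A minimal sketch in plain Python (the truncation order is arbitrary):

```python
from fractions import Fraction as Fr

def poch(x, k):                      # Pochhammer symbol (x)_k
    r = Fr(1)
    for i in range(k):
        r *= x + i
    return r

N = 12
# Taylor coefficients (in z = -u^2/108) of the two generalized hypergeometric series
a = [poch(Fr(1, 2), k) * poch(Fr(3, 4), k) * poch(Fr(5, 4), k)
     / (poch(Fr(4, 3), k) * poch(Fr(5, 3), k) * poch(Fr(1), k)) for k in range(N)]
b = [poch(Fr(-1, 2), k) * poch(Fr(-1, 4), k) * poch(Fr(1, 4), k)
     / (poch(Fr(1, 3), k) * poch(Fr(2, 3), k) * poch(Fr(1), k)) for k in range(N)]

# t = 2 dI/du, order by order in z
for k in range(N - 1):
    assert b[k + 1] == Fr(9, 64) * a[k] / (k + 1)
```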
We have checked that the expansion in power series of $u$ of the closed form expression of
$t(u)$ derived in (\ref{radical1}),(\ref{radical2}) coincides,
up to order $O(u^{1000})$, with $\frac{u}{32}$ times the hypergeometric function in (\ref{3F2}).
Therefore we conjecture that the hypergeometric function in
(\ref{3F2})
\begin{equation}
{\mathfrak F}(u^2)\equiv{}_3F_2\Big(\frac{1}{2}, \frac{3}{4}, \frac{5}{4};\,
\frac{4}{3}, \frac{5}{3};\,-\frac{u^2}{3^3\cdot 2^2}\Big)=
{2}\sum_{k=0}^\infty\frac{(4k+1)!}{(3k+2)!k!}\Big(-\frac{u^2}{4^5}\Big)^k
\end{equation}
has the closed form expression ${\mathfrak F}(u^2)=\frac{32}{u}t(u)$ where $t(u)$
is given in (\ref{radical1}),(\ref{radical2}), and, because of
(\ref{poleq4}), that it satisfies the ``hidden'' identity
\begin{equation}
{\mathfrak F}(u^2)=\Big(1-\frac{u^2}{4^5}{{\mathfrak F}(u^2)}^2\Big)^2~.\label{qeq}
\end{equation}
It is indeed this
identity that we have verified
up to $O(u^{1000})$.
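This order-by-order verification can be reproduced with exact rational arithmetic by expanding both sides of (\ref{qeq}) as formal power series in $y=u^2/4^5$ and comparing coefficients. A minimal sketch in plain Python (the truncation order $N$ is arbitrary; the check in the text extends to much higher order):

```python
from fractions import Fraction
from math import factorial

N = 12                               # truncation order in y = u^2/4^5

# F(y) = 2 sum_k (4k+1)!/((3k+2)! k!) (-y)^k, the series in the text
F = [Fraction(2 * factorial(4 * k + 1), factorial(3 * k + 2) * factorial(k)) * (-1) ** k
     for k in range(N + 1)]

def mul(p, q):                       # truncated product of power series
    r = [Fraction(0)] * (N + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= N:
                r[i + j] += pi * qj
    return r

one_minus_yF2 = [Fraction(1)] + [-c for c in mul(F, F)[:N]]   # 1 - y F^2 (shift by one power of y)
rhs = mul(one_minus_yF2, one_minus_yF2)                       # (1 - y F^2)^2
assert rhs == F                      # the "hidden" identity (qeq), up to order y^N
```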
\subsection{General nonlinear theory}
Since Born-Infeld theory is singled out by setting
$r=\frac{\lambda}{8}|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|$, and Maxwell theory by setting $r=0$
(cf. Example \ref{EEEEX}), it is
convenient to describe a general nonlinear theory without higher
derivatives by setting
\begin{equation}\label{fuuu}
r=\frac{\lambda}{8}|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|f(u)/u
\end{equation}
where $f(u)$ is a positive
function of $u$. We require the theory to reduce to
electromagnetism in the weak field limit, i.e.,
${{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}_{\mu\nu} =-F+ o(F)$ for $F\to 0$. Then we have $T^-={\cal O}(F)$,
$T^+=o(F)$, $u={\cal O}(F^2)$. Hence from (\ref{tp}) we obtain $\lim_{u\to 0} t=0$. Moreover from
(\ref{trrel}), $r={\cal O}(t)$ and from $r=\frac{1}{16}f(u)(1-t^2)$
(that follows from (\ref{fuuu}) and (\ref{useful})\/) $f={\cal
O}(t)$.
Hence the theory reduces to
electromagnetism in the weak field limit if and only if $\lim_{u\to
0}f(u)=0$.\footnote{We further notice that $\lim_{u\to
0}f(u)=0$ implies $I(u)=o(u)$. In particular the theory defined
by $I(u)=u$ (or equivalently $f(u)=\frac{2^6}{3^2}$) does not reduce to
electromagnetism for weak fields.
In general, besides requiring that
the theory determined by $f(u)$ reduces to electromagnetism in the
weak field limit, we can also require the theory to be analytic in $F$
(i.e., the Lagrangian to have a power series expansion in $F$ around $F=0$). In
this case from (\ref{LegendreT}) and inverting relations (\ref{TUF}), or
more explicitly from (\ref{Scorrec}), we see that the Legendre transformed
function $I(u)$ must depend on $u^2=4\lambda^2 { T^-}^2 {\overline{T^-}}^2$. Equivalently $f(u)/u$ must depend
on $u^2$.}
From $r=\frac{1}{16}f(u)(1-t^2)$ (that follows from (\ref{fuuu}) and (\ref{useful})\/) and (\ref{trrel}) we obtain that the composite function $t(f(u))$ satisfies the fourth order polynomial equation
\begin{equation}
(1-t^2)^2f(u)=32 t~, \label{TFU}
\end{equation}
so that
$t(f(u))$ is obtained with the substitution $u\to f(u)$ in
(\ref{radical1}) and (\ref{radical2}), or in
(\ref{3F2}).
More explicitly, recalling the constitutive relation (\ref{1/TT}), we
conclude that the constitutive relations \`a la Schr\"odinger
\begin{equation}\label{cr5}
{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}_{\!\mu\nu}=-\frac{T^2}{T {{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}} T_{\mu\nu}-\frac{\lambda}{8}\frac{f(u)}{u}_{\,}(T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})\,\overline T_{\mu\nu}~,
\end{equation}
are equivalent to the constitutive relations (deformed twisted
self-duality conditions)
\begin{equation}\label{cr6}
{T^+}^{\mu\nu}
=
\frac{1}{2\lambda} t(f(u)) \,\frac{\partial u}{\partial{\overline{T^-}_{\!\!\mu\nu}}}~,
\end{equation}
where $t(f(u))$ satisfies the quartic equation (\ref{TFU}), and we
recall that $u = 2\lambda|{T^-}^{2}| = \lambda(|T^2|+|T{{_{\,}\,\,T^{}_{}}^{\!\!\;\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}|)~.$
\vskip .4cm
In other words, the appearance of the quartic equation (\ref{TFU}) is a
general feature of the relation between the constitutive relations (\ref{cr5})
and (\ref{cr6}): it arises for any self-dual theory and is not specific
to Born-Infeld theory.
\vskip .4cm\sk
\noindent
{\large \bf Acknowledgements} \\
We thank W. Chemissany, R. Kallosh, T. Ortin and M. Trigiante for
valuable correspondence last fall. In particular we thank R. Kallosh for discussions and her interest in the relation
between the hypergeometric function of her work \cite{CKR} and the quartic
equation
expressing Born-Infeld theory in duality invariant variables.
We also thank F. Bonechi for fruitful discussions.
The hospitality of CERN Theory Unit where the present work has been initiated is gratefully acknowledged.
This work is supported by the ERC Advanced Grant no. 226455, Supersymmetry, Quantum Gravity and Gauge Fields (SUPERFIELDS).
\begin{appendix}\section{Examples of higher derivatives theories\label{HDactions}}
We construct examples of higher derivatives $U(1)$ actions that define
self-dual theories.
These examples include the Bossard-Nicolai one
in \cite{BN}; the actions we present are quadratic in the field strength.
Let
\begin{equation}
S=-\frac{1}{4}\int\!\! d^4x\sqrt{g} \; F O F\label{FOF}
\end{equation}
with $O$ a matrix
$O_{ \mu\nu}^{~\rho\sigma}$
of differential operators independent of $F$; explicitly
$F O F=
F^{\mu\nu}\,O_{ \mu\nu}^{~\rho\sigma} F_{\rho\sigma}\,.$
We recall that by definition the hermitian conjugate
operator $O^\dagger$ satisfies
$\int (O^\dagger K)F=\int KOF$ for all antisymmetric and real tensors
$K$ and $F$.
Since $\int F O F =\int (O^\dagger F) F = \int F (O^\dagger F)
$, there is no restriction in
considering $O$ hermitian,
i.e., $\int (OK) F=\int KOF$, or explicitly
$\int \!d^4x \sqrt{g} \,(O_{\rho\sigma}^{~\mu\nu}
K_{\mu\nu})F^{\rho\sigma}=
\int\! d^4x \sqrt{g}\, K^{\mu\nu}O^{~\rho\sigma}_{\mu\nu} F_{\rho\sigma}\,.$
Let $O$ also satisfy
\begin{equation}
O\circ {}^{\ast}\circ O={}^{\ast}\label{OtOt}
\end{equation}
i.e., $O\,{}^{\ast}(OF)={}^{\ast}F$.
\vskip .4cm
We show
that the action (\ref{FOF}) gives self-dual equations of motion
if $O$ satisfies (\ref{OtOt}).
Indeed in this case the self-duality condition
(\ref{NGZ2}), i.e.,
$\int \!d^4 x \;\big(F\widetilde F +G\widetilde G\big)=0$, holds. The proof is
easy. We first calculate
\begin{equation}\widetilde G^{\mu\nu}= 2
\frac{\delta S}{\delta F_{\mu\nu}}=-\sqrt{g}\,O^{\mu\nu\;\rho\sigma}
F_{\rho\sigma}~,\label{GOFcr}
\end{equation}
i.e., ${{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}{}^{\,\mu\nu}= -(OF)^{\mu\nu}$. Hence
\begin{eqnarray}
\int \!d^4x\,\widetilde G G&=& \int \!d^4x\sqrt{g}\;({{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})\,G=
-\!\int
\!d^4x\sqrt{g}\;{({{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}})}^{\:\ast}({{{{\,\,G^{}_{}}}^{_{\:}\!\!\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}})\nonumber\\
&=&
-\!\int \!d^4x\sqrt{g}\, (OF)~{}^{\ast\!}(OF)=
-\!\int \!d^4x\sqrt{g}\, F O\;{}^{\ast}\!(OF)\nonumber\\
&=&
-\!\int \!d^4x\sqrt{g}\, F {{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}=
-\!\int \!d^4x \,F \widetilde F~,
\end{eqnarray}
where in the fourth equality we used (\ref{OtOt}).
\vskip .4cm
Examples of differential operators $O$ are given by considering operators
$\Delta$ on antisymmetric tensors $F_{\mu\nu}$ that satisfy the
hermiticity condition $\Delta^\dagger=\Delta$ and that anticommute
with the $\ast$-Hodge operator,
\begin{equation}
{}^{\ast}\circ \Delta=-\Delta\circ {}^{\ast}~.
\end{equation}
Let us introduce a coupling constant $\lambda$ so that $\lambda \Delta$ is dimensionless, and let $f(\lambda \Delta)$ be an odd function of $\Delta$ (e.g., a polynomial,
or a power series function like $\lambda\Delta$, $\lambda\Delta^3$,
$\sin(\lambda\Delta)$). Then ${}^{\ast}\circ
f(\lambda\Delta)=-f(\lambda\Delta)\circ {}^{\ast}$,
and
\begin{equation}
O=
\big(1-f(\lambda\Delta)\big)^{-1}\big(1+f(\lambda\Delta)\big)
\end{equation}
satisfies (\ref{OtOt}).
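The mechanism behind (\ref{OtOt}) uses only $S^2=-1$ (the analogue of the Hodge operator on 2-forms) and the anticommutation $Sf=-fS$: from $(1+f)S=S(1-f)$ one gets $OSO=(1-f)^{-1}S(1+f)=S$. This can be illustrated numerically in a finite-dimensional toy model; all matrices and the coupling below are chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
Z = np.zeros((n, n))
I = np.eye(n)

S = np.block([[Z, -I], [I, Z]])                 # S^2 = -1, the analogue of the Hodge star
D = rng.standard_normal((n, n)); D = D + D.T    # symmetric block
Delta = np.block([[D, Z], [Z, -D]])             # hermitian, anticommutes with S

assert np.allclose(S @ S, -np.eye(2 * n))
assert np.allclose(S @ Delta, -Delta @ S)
assert np.allclose(Delta, Delta.T)

lam = 0.05                                      # small enough that 1 - lam*Delta is invertible
f = lam * Delta                                 # the simplest odd function of Delta
O = np.linalg.solve(np.eye(2 * n) - f, np.eye(2 * n) + f)   # O = (1-f)^{-1}(1+f)
assert np.allclose(O @ S @ O, S)                # the analogue of O *(O F) = *F, eq. (OtOt)
```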
In particular, if $f(\lambda\Delta)=\lambda\Delta$ and if
$\Delta_{\mu\nu}^{~\rho\sigma} F_{\rho\sigma}=\nabla_{\kappa}
\big(T_{[\mu}^{~~\kappa\lambda[\sigma}\nabla_\lambda\delta_{\nu]}^{\rho]}F_{\rho\sigma}\big)$,
where the covariant derivatives are with respect to the Levi-Civita
connection, $T^{\mu\kappa\lambda\sigma}$ is the
Bel-Robinson tensor, and the square brackets denote antisymmetrization
in the embraced indices,
then we obtain the action of Bossard and Nicolai \cite{BN}.
\section{The action functional $S[F]$ from
${\cal I}[T^-,\overline{T^-}]$\label{app1}}
We here determine the first two nontrivial terms $S^{(1)}$
and $S^{(2)}$ of the action $S$, see (\ref{Scorrec}) Section
\ref{ITTm}.
Since $S^{(0)}=-\frac{1}{4}\int \!d^4x\sqrt{g}\,F^2$ corresponds to
${\cal I}^{[0]}=0$, we have (cf. (\ref{al1}))
${T^+}^{(0)}=0\,,~{T^-}^{(0)}=2F^-\,,~{G^-}^{(0)}=iF^-$, and, for
$n\geq 1$, ${T^-}^{(n)}=-i{G^-}^{(n)}$,
$\overline{T^-}^{(n)}=i{G^+}^{(n)}$.
The following useful formula is then easily derived using the chain rule:
\begin{equation}\label{Imn}
\frac{\delta {\cal I}^{[m]}|_{F^\mp}^{~(n)}}{\delta
F^-}
=2\frac{\delta {\cal I}^{[m]}}{\delta
T^-}\Big|_{F^\mp}^{\;(n)}-2\sum_{p=m}^{n-1}\int\!\!d^4x\frac{1}{\sqrt{g}}\,\bigg(
\frac{\delta {\cal I}^{[m]}}{\delta
T^-}\Big|_{F^\mp}^{\;(p)}\,\frac{\delta^2S^{(n-p)}}{{\delta
F^-}{\delta F^-}}+
\frac{\delta {\cal I}^{[m]}}{\delta
\overline{T^-}}\Big|_{F^\mp}^{\;(p)}\,\frac{\delta^2S^{(n-p)}}{{\delta
F^-}{\delta F^+}}\bigg)
\end{equation}
where we have simplified the notation by setting
$\big|_{F^\mp}=\big|_{{{{}^{T^-[F^-,F^+]}_{\overline{T^-}[F^-,F^+]}}}}\,$
and omitting spacetime indices,
and where we have assumed that we know the action $S[F]$ up to order
$n-1$ so that, for all $p=1,2,\ldots n-1$, we have
$\mp {iG^\pm}^{(p)}=\frac{2}{\sqrt{g}}
\frac{\delta S^{(p)}}{\delta F^\pm}$,
and therefore
$\frac{\delta {T^-}^{(p)}}{\delta
F^-}=-i\frac{\delta {G^-}^{(p)}}{\delta F^-}=-\frac{2}{\sqrt{g}}
\frac{\delta^2 S^{(p)}}{\delta F^-\,\delta F^-}$.
\vskip .4cm
If $m=n$ then the above formula simply reads
$\frac{\delta {\cal I}^{[n]}|_{F^\mp}^{~(n)}}{\delta
F^-}
=2\frac{\delta {\cal I}^{[n]}}{\delta
T^-}\Big|_{F^\mp}^{\;(n)}$, and since
${\cal I}^{[n]}|_{F^\mp}^{~(n)}={\cal I}^{[n]}[2F^-,2F^+]$ (use ${T^-}^{(0)}=2F^-$),
it
simplifies to
\begin{equation}\label{Inn}
\frac{\delta {\cal I}^{[n]}[2F^-,2F^+]}{\delta
F^-}
=2\frac{\delta {\cal I}^{[n]}}{\delta
T^-}\Big|_{F^\mp}^{\;(n)}~.
\end{equation}
Setting $n=1$ and recalling that, since ${\cal I}^{[0]}=0$, we have
$\frac{\delta {\cal I}^{[1]}}{\delta
T^-}|_{F^\mp}^{\;(1)}=\frac{\delta {\cal I}\,}{\delta
T^-}|_{F^\mp}^{\;(1)}\,$, we immediately see
that $S^{(1)}[F^-,F^+]=\frac{1}{4}{\cal I}^{[1]}[2F^-,2F^+]$
satisfies (\ref{recS}).
\vskip .4cm
In order to determine $S^{(2)}$ we first calculate
(using, for example, the chain rule when differentiating with respect to $\lambda$)
\begin{eqnarray}\label{I12}
{\cal I}^{[1]}\big|_{F^\mp}^{\;(2)}&=&\int \!\!d^4x~\Big(\frac{\delta {\cal I}^{[1]}}{\delta
T^-}\Big|_{F^\mp}^{\;(1)}{T^-}^{(1)}
+\frac{\delta {\cal I}^{[1]}}{\delta\overline{T^-}}
\Big|_{F^\mp}^{\;(1)}
\overline{T^-}^{(1)}\Big)\nonumber\\[.4em]
&=&2\int \!\!d^4x~\Big(
\frac{\delta {S}^{(1)}}{\delta
F^-}(-iG^-)^{(1)}+
\frac{\delta {S}^{(1)}}{\delta {F^+}} (iG^+)^{(1)}\Big)\nonumber\\[.4em]
&=&-4\int \!\! d^4x \frac{1}{\sqrt{g}}~\Big(
\frac{\delta {S}^{(1)}}{\delta
F^-}\frac{\delta {S}^{(1)}}{\delta
F^-}+
\frac{\delta {S}^{(1)}}{\delta {F^+}}
\frac{\delta {S}^{(1)}}{\delta {F^+}}\Big)
\end{eqnarray}
where in the second line we used $\frac{\delta {\cal I}^{[1]}}{\delta
T^-}|_{F^\mp}^{\;(1)}=\frac{\delta {\cal I}\,}{\delta
T^-}|_{F^\mp}^{\;(1)}\,$ and then
(\ref{recS}) at order $n=1$.
In the third line we used
the constitutive relations (\ref{Sconst}), i.e., $G^-=-\frac{2i}{\sqrt{g}}\frac{\delta
S}{\,\delta F^-}$ at order $n=1$, which we already know to be
implied by the chiral constitutive relations (\ref{Iconst}).
\vskip .4cm
Next for notational simplicity we set
$\int=\int\!d^4x\frac{1}{\sqrt{g}}\,$ and we compute
\begin{eqnarray}
\frac{\delta {\cal I}^{}}{\,\delta
T^-}\Big|_{F^\mp}^{\;(2)}
&=&
\frac{\delta {\cal I}^{[2]}}{\delta
T^-}\Big|_{F^\mp}^{\;(2)}
+\frac{\delta {\cal I}^{[1]}}{\delta
T^-}\Big|_{F^\mp}^{\;(2)}\nonumber\\[.6em]
&=&
\frac{1}{2}\frac{\delta{\cal I}^{[2]}[2F^-,2F^+]}{\delta F^-}
+\frac{1}{2}\frac{\delta{\cal I}^{[1]}|_{F^\mp}^{\;(2)}}{\delta F^-}
+\int\Big(
\frac{\delta {\cal I}^{[1]}}{\delta
T^-}\Big|_{F^\mp}^{\;(1)}\frac{\delta^2S^{(1)}}{\delta F^-\delta F^-}
+
\frac{\delta {\cal I}^{[1]}}{\delta
\overline{T^-}}\Big|_{F^\mp}^{\;(1)}\frac{\delta^2S^{(1)}}{\delta F^-\delta F^+}\Big)\nonumber\\[.6em]
&=&
\frac{1}{2}\frac{\delta{\cal I}^{[2]}[2F^-,2F^+]}{\delta F^-}
+\frac{1}{2}\frac{\delta{\cal I}^{[1]}|_{F^\mp}^{\;(2)}}{\delta F^-}
+\frac{\delta}{\delta F^-}\int\Big(
\frac{\delta S^{(1)}}{\delta F^-}\frac{\delta S^{(1)}}{\delta F^-}+
\frac{\delta S^{(1)}}{\delta F^+}\frac{\delta S^{(1)}}{\delta
F^+}\Big) \nonumber\\[.6em]
&=&\frac{\delta}{\delta F^-}\Big(\frac{1}{2}{\cal I}^{[2]}[2F^-,2F^+]-\int\big(
\frac{\delta S^{(1)}}{\delta F^-}\frac{\delta S^{(1)}}{\delta F^-}+
\frac{\delta S^{(1)}}{\delta F^+}\frac{\delta S^{(1)}}{\delta
F^+}\big)\Big)\label{IdF}
\end{eqnarray}
where in the second line we used (\ref{Inn}) and (\ref{Imn}), in
the third line we again used
$\frac{\delta {\cal I}^{[1]}}{\delta
T^-}|_{F^\mp}^{\;(1)}=\frac{\delta {\cal I}\,}{\delta
T^-}|_{F^\mp}^{\;(1)}=2\frac{\delta S^{(1)}}{\delta F^-}$ (cf. (\ref{recS})), and in the
fourth line we used (\ref{I12}). From the equality (\ref{IdF}) we
see that
$S^{(2)}=\frac{1}{4}{\cal I}^{[2]}[2F^-,2F^+]-
\frac{1}{2}\!\int\big(\frac{\delta S^{(1)}}{\delta F^-}\frac{\delta S^{(1)}}{\delta F^-}+
\frac{\delta S^{(1)}}{\delta F^+}\frac{\delta S^{(1)}}{\delta
F^+}\big)$ satisfies (\ref{recS}) with $n=2$.
\section{The energy momentum tensor and its trace \label{TTT}}
We first recall that the symmetric energy-momentum tensor $\theta^{\mu\nu}$
of a nonlinear electromagnetic theory
is given by
\begin{equation}
\theta^{\mu\nu}=-\widetilde G^{\mu\lambda}F^{\nu}_{~~\lambda}+g^{\mu\nu\,}{\cal L} \label{emt}
\end{equation}
if the Lagrangian $L$ in the action $S[F]=\int \!d^4x \,{\cal L}=\frac{1}{\lambda}\int \!d^4x \sqrt{g}\,
L$
depends on the field strength $F_{\mu\nu}$ and the metric $g_{\mu\nu}$
only via the invariant and dimensionless combinations
\begin{equation}
\alpha=\frac{\lambda}{4}F^2~~,~~~\beta=\frac{\lambda}{4}F{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}\label{albe}~.
\end{equation}
Indeed we compute
\begin{equation}
\frac{\partial\alpha}{\partial g_{\mu\nu}}=-2\frac{\partial \alpha}{\partial
F_{\mu\rho}}F^\nu_{~\;\rho}~~,~~~
\frac{\partial\beta}{\partial g_{\mu\nu}}=-2\frac{\partial \beta}{\partial F_{\mu\rho}}F^\nu_{~\;\rho}~,~~
\end{equation}
(where the factor $2$ is due to our $\frac{\partial}{\partial
F_{\mu\rho}}$ conventions, cf. (\ref{Sconst}) and its footnote); for
the second equation we used
$\frac{\partial\sqrt{g}^{-1}}{\partial g_{\mu\nu}}=-\sqrt{g}^{-1}g^{\mu\nu}$,
and the property ${{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}^{\mu\lambda}
F_{\nu\lambda}=-\frac{1}{4}\delta^{\mu}_{~\nu\,}{{\,\,F^{}_{}}^{\!\:\!\!\!\!\!\!\!\!\!{{{\ast}}}~~}}^{\rho\sigma}
F_{\rho\sigma}$. Expression (\ref{emt}) for the energy momentum tensor
$\theta^{\mu\nu}=\frac{\delta S}{\delta g_{\mu\nu}}$
is then straightforward.
Now an action in Minkowski spacetime that contains no derivatives of the
field strength $F$ depends, by Lorentz invariance, on $F$ only via the
(pseudo)scalars $F^2$ and $F\widetilde F$. We can then always
minimally couple the action to gravity so that the metric enters
only in (\ref{albe}), and hence so that (\ref{emt}) holds. Even if the
coupling to gravity (for example in order to preserve symmetry properties)
requires terms like $RF^2$ where $R$ is the scalar curvature,
expression (\ref{emt}) still holds in flat spacetime.
\vskip .4cm
From (\ref{emt}) it follows that the trace of the energy momentum
tensor satisfies
\begin{equation}
\frac{1}{4}\theta^\mu_{~~\mu}={\cal L}-\frac{1}{4}\widetilde G F~.
\end{equation}
We therefore have
\begin{equation}\label{D5}
\frac{1}{4}\int \!d^4x \,g_{\mu\nu}\frac{\delta S}{\delta
g_{\mu\nu}}=\int\!d^4x \,\frac{1}{4}\theta^\mu_{~~\mu}=S-\frac{1}{4}\int\!d^4x\,\widetilde G F=
-\lambda\frac{\partial S}{\partial \lambda}~,
\end{equation}
where the last relation follows by observing that the inverse metric $g^{\mu\nu}$ always appears with the
factor $\lambda^{1/2}$ in the action $S[F]=\int \!d^4x \,{\cal L}=\frac{1}{\lambda}\int
\!d^4x \sqrt{g}\, L$ (cf. (\ref{albe})).
Finally if we let $S[F]\to S_c[F]=\frac{1}{c^2}S[cF]$, we see that (\ref{D5}) coincides with
(\ref{Ttrace}). Indeed $\lambda\frac{\partial}{\partial \lambda}$ equals
$c^2\frac{\partial}{\partial c^2}$ because $S_c[F]$
depends only on the product $c^2\lambda$.
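\vskip .4cm
Explicitly (in schematic notation introduced only for this check), writing $S_c[F]=\Sigma(c^2\lambda)$ for some functional $\Sigma$ of the product $c^2\lambda$, the chain rule gives
\begin{equation*}
\lambda\frac{\partial S_c}{\partial \lambda}=\lambda\, c^2\,\Sigma'(c^2\lambda)=c^2\frac{\partial S_c}{\partial c^2}~.
\end{equation*}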
\end{appendix}
\section{INTRODUCTION}
To pursue the development and increase the capabilities of the VLTI,
ESO selected three instrumental projects in phase A. The three new
instruments are: MATISSE, VSI, and GRAVITY. MATISSE is a mid-infrared
spectro-interferometer. VSI and GRAVITY are two near-infrared
instruments, both of which use integrated optics beam
combiners
\cite{1999A&AS..138..135M,1999A&AS..139..173B,2000ApOpt..39.2130H,
2001A&A...376L..31B,2002A&A...390.1171L,2006A&A...450.1259L}. To
that end, new integrated optics (IO) beam combiners were developed.
The requirements for VSI were:
\begin{itemize}
\item Being able to work in the J, H, and K bands.
\item Being able to combine up to 6 telescopes.
\end{itemize}
On the other hand, GRAVITY had several criteria to meet:
\begin{itemize}
\item High sensitivity in the K band (more than 50\% throughput for the
beam combiner as a whole).
\item Non-temporal modulation to allow long integration time exposures.
\end{itemize}
The basic concept for the combiner was an ABCD-type recombination
\cite{1977JOSA...67...81S}. Such a combiner was already presented by
M. Benisty et al. \cite{2006SPIE.6268E..73B}. Multi-axial 6 beam
combiner components are also being investigated as a promising alternative. The
goal of this new development round is to: (i) prove the validity of
the technology for the K and J bands, and (ii) improve the throughput,
and therefore the sensitivity, by testing different arrangements of the
waveguide paths. Several new combiners were therefore designed, and
were recently delivered to our laboratory.
\section{THE WAFER}
CEA/LETI technical processes use Silica on Silicon, and a lithographic
technique for waveguide tracing. Figure~\ref{fig:waf} gives an
overview of the photomask which was used to transfer the waveguide
paths onto the wafer. On this mask, there are 48 beam combiners. Each
type of combiner is duplicated on the right and the left of the
wafer. Each combiner also comes in three versions, corresponding to
guides of different sizes. For example, the H band combiners exist with
guides of 6.8$\,\mu$m, 7$\,\mu$m, and 7.2$\,\mu$m. The size of the wafer
is $8\times8$\,inches.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[height=13cm]{nappe2bis}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:waf}
Overview of the wafer guide tracing. On this $8\times8$\,inches wafer are 48 beam combiners.}
\end{figure}
The new types of combiners are:
\begin{itemize}
\item A 4 telescope ABCD beam combiner in the H band. The goal of this
combiner is to test new achromatized phase shifts, in order to (i)
increase the contrast ratio at the output by compensating the chromatic
effects of the couplers, and (ii) increase the precision on the
closure phases by keeping them constant over a large bandpass.
\item A second 4 telescope ABCD beam combiner in the H band. The size of
the beam combiner is reduced by 30\% by using a combination of
66/33\% and 50/50\% couplers instead of a tricoupler. The component
is therefore no longer symmetric, with a possible effect on the
closure phases.
\item Two 2 telescope ABCD beam combiners to test the transmission and response in the K
and J bands.
\item A 4 telescope ABCD fringe tracking combiner in the K
band. Since only piston measurements are
needed, such a combiner combines the flux using a ``bootstrapping''
approach (i.e., each telescope is recombined with only two other
telescopes). The goal for this component is to validate fringe tracker
algorithms with an ABCD combiner.
\item Two 6 telescope multi-axial beam combiners, working in the K and H bands.
\end{itemize}
\section{CHARACTERIZATION I. TRANSMISSION}
Throughput was investigated by means of two single mode fibers.
One is used to inject the light into the component, and the other to
convey the flux from the outputs of the component to a K band
monopixel detector. The flux recorded on all the outputs is then
summed, and normalized by the flux measured when putting the two
single mode fibers in contact with each other. Errors are of the order
of half a percent.
\begin{table}[h]
\caption{Transmission of the 2 telescopes ABCD beam combiner in the K band.}
\label{tab:trans}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\rule[-1ex]{0pt}{3.5ex} Wavelength & Guide width & Input 1 & Input 2 \\
\hline
\hline
\rule[-1ex]{0pt}{3.5ex} $2.20\,\mu$m & $9.8\,\mu$m & 52.0\,\% & 52.5\,\%\\
\hline
\rule[-1ex]{0pt}{3.5ex} $2.20\,\mu$m & $10.0\,\mu$m & 51.9\,\% & 53.4\,\%\\
\hline
\rule[-1ex]{0pt}{3.5ex} $2.20\,\mu$m & $10.2\,\mu$m & 60.0\,\% & 53.0\,\%\\
\hline
\rule[-1ex]{0pt}{3.5ex} $2.37\,\mu$m & $9.8\,\mu$m & 65.8\,\% & 66.2\,\%\\
\hline
\rule[-1ex]{0pt}{3.5ex} $2.37\,\mu$m & $10.0\,\mu$m & 68.4\,\% & 67.9\,\%\\
\hline
\rule[-1ex]{0pt}{3.5ex} $2.37\,\mu$m & $10.2\,\mu$m & 67.6\,\% & 68.9\,\%\\
\hline
\end{tabular}
\end{center}
\end{table}
Tests were performed on the K band ABCD using two different SLEDs: one
at 2.2$\,\mu$m, and one at 2.37$\,\mu$m. Results are reported in
Table~\ref{tab:trans}. The 50\% transmission goal was achieved at both
wavelengths. Optimization of the waveguide paths could potentially
further increase the transmission.
\section{CHARACTERIZATION II. COHERENT TRANSFER FUNCTION}
\begin{figure}[h]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=4.5cm]{Schema}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:gen} Generalization of the transfer function of an
integrated optics device. $n$ indexes the inputs, $k$ the outputs. $E_n$ and
$S^k$ are respectively the entering and exiting complex electric
fields.}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[height=4.5cm]{4tabcd_algo_schema}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:4t} Schematic of the 4 telescope ABCD beam combiner presented by Benisty et al.\cite{2006SPIE.6268E..73B}.}
\end{figure}
Figure~\ref{fig:gen} represents the generalized view of the transfer
function of an integrated optic component. $E_n$ is the complex
electric field entering the component via input $n$, and $S^k$ is the
resulting field on output number $k$. $T_{n}^k$ is a two dimensional
complex matrix linking $S^k$ to $E_n$.
A complete determination of the transfer function of the IO would
therefore be equivalent to the determination of the matrix
$T_n^k$. However, the observable is not $S^k$, but the intensity on
the output channels:
\begin{eqnarray}
|S^k|^2&=&\left|\sum_n T_n^k E_n\right|^2\\
&=&\Re \left[ \sum_n |T_n^k E_n|^2+2\sum_n\sum_{m>n} T_n^kT_m^{k*}\ E_nE_m^*\right] \, .
\end{eqnarray}
However, this equation assumes a fully coherent incoming beam, and no
loss of contrast due to, for example, chromaticity, polarization,
etc. Hence, we introduced two extra terms: $V_{n,m}$ corresponds to
the coherency of the incoming electric field (the complex
visibility), and $C_{n,m}^k$ corresponds to the degree to which the IO
device conserves the coherence of the light:
\begin{equation}
|S^k|^2=\Re \left[ \sum_n |T_n^k E_n|^2+2\sum_n\sum_{m>n} T_n^kT_m^{k*}C_{n,m}^k\ E_nE_m^*V_{n,m}\right] \, .
\end{equation}
Equation (3) can be rewritten with a matrix product:
\begin{equation}
\left(
\begin {array}{c}
|S^1|^2\\
\vdots \\
|S^K|^2
\end {array}
\right)
=
\Re \left[
V2PM
\cdot
\left(
\begin {array}{c}
|E_1|^2\\
\vdots \\
|E_N|^2\\
E_{1}E_{2}^*V_{1,2}\\
\vdots\\
E_{N-1}E_{N}^*V_{N-1,N}\\
\end {array}
\right)
\right]
\end{equation}
where $N$ is the total number of inputs, and $K$ the number of
outputs. The $V2PM$ matrix is then equal to:
\begin{equation}
V2PM
=
\left(
\begin {array}{cccccc}
|T_1^1|^2&\cdots&|T_N^1|^2& 2T_1^1T_2^{1*}C_{1,2}^1&\cdots&2T_{N-1}^1T_N^{1*}C_{N-1,N}^1\\
\vdots & & \vdots&\vdots & & \vdots\\
|T_1^K|^2&\cdots&|T_N^K|^2& 2T_1^KT_2^{K*}C_{1,2}^K&\cdots&2T_{N-1}^KT_N^{K*}C_{N-1,N}^K
\end {array}
\right)
\end{equation}
The name $V2PM$ was adopted in accordance with previous data reduction done on multi-axial interferometers\cite{2006MNRAS.368.1159T}. It stands for {``Visibilities to Pixels Matrix''}.
Characterization of the components will consist of the determination of
the V2PM. Provided the IO device is well engineered, the V2PM shall be
invertible, allowing a robust determination of the photometry channels
($|E_n|^2$), as well as the contrasts and phases ($E_nE_m^*V_{n,m}$).
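As an illustration of how the inverse of the $V2PM$ (often called the P2VM) recovers these observables, the following sketch builds a toy $V2PM$ for a hypothetical 2 telescope ABCD combiner with slightly unbalanced couplings (all coefficients here are made up for the illustration, not measured values), simulates the four output intensities, and recovers the photometries and the complex coherent flux with a least-squares pseudo-inverse:

```python
import numpy as np

# Toy V2PM of a hypothetical 2-telescope ABCD combiner.
# Unknowns: x = [P1, P2, Re(mu), Im(mu)] with P_n = |E_n|^2
# and mu = E_1 E_2^* V_{1,2} (coherent flux). Output k records
# |S^k|^2 = t1_k P1 + t2_k P2
#           + 2 sqrt(t1_k t2_k) (cos(phi_k) Re(mu) - sin(phi_k) Im(mu)).
phases = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])  # ABCD phase shifts
t1 = np.array([0.30, 0.22, 0.27, 0.21])  # made-up couplings, input 1 -> outputs
t2 = np.array([0.20, 0.28, 0.23, 0.29])  # made-up couplings, input 2 -> outputs
v2pm = np.column_stack([
    t1,
    t2,
    2 * np.sqrt(t1 * t2) * np.cos(phases),
    -2 * np.sqrt(t1 * t2) * np.sin(phases),
])

# Simulate the four pixel intensities for a known input state.
P1, P2 = 1.0, 0.8
mu = np.sqrt(P1 * P2) * 0.6 * np.exp(1j * 0.3)  # |V12| = 0.6, phase 0.3 rad
x_true = np.array([P1, P2, mu.real, mu.imag])
pixels = v2pm @ x_true

# The pseudo-inverse ("P2VM") recovers photometries, contrast and phase.
x_est = np.linalg.pinv(v2pm) @ pixels
mu_est = x_est[2] + 1j * x_est[3]
contrast = abs(mu_est) / np.sqrt(x_est[0] * x_est[1])
print(x_est[:2], contrast, np.angle(mu_est))
```

The same scheme extends to more inputs and to non-ideal phase shifts: as long as the columns of the $V2PM$ remain linearly independent, the pseudo-inverse gives the least-squares estimate of the photometric and coherent terms.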
We tested the method on the IO combiner already discussed by Benisty
et al.\cite{2006SPIE.6268E..73B}. The schematic of the combiner is
reproduced in Figure~\ref{fig:4t}. In the ideal case, the V2PM would
read:
\begin{equation}
V2PM
=
\left(
\begin {array}{cccccccccc}
\op & \op & 0 & 0 & \oa & 0 & 0 & 0 & 0 & 0 \\
\op & \op & 0 & 0 & \ob & 0 & 0 & 0 & 0 & 0 \\
\op & \op & 0 & 0 & \oc & 0 & 0 & 0 & 0 & 0 \\
\op & \op & 0 & 0 & \od & 0 & 0 & 0 & 0 & 0 \\
0 &\op & \op & 0 & 0 & \oa & 0 & 0 & 0 & 0 \\
0 &\op & \op & 0 & 0 & \ob & 0 & 0 & 0 & 0 \\
\op & 0 &\op & 0 & 0 & 0 & 0 & 0 & 0 & \oa \\
\op & 0 &\op & 0 & 0 & 0 & 0 & 0 & 0 & \ob \\
\op & 0 & \op & 0 & 0 & 0 & 0 & 0 & 0 & \oc \\
\op & 0 & \op & 0 & 0 & 0 & 0 & 0 & 0 & \od \\
\op & 0 & 0 & \op & 0 & 0 & 0 & \oa & 0 & 0 \\
\op & 0 & 0 & \op & 0 & 0 & 0 & \ob & 0 & 0 \\
\op & 0 & 0 & \op & 0 & 0 & 0 & \oc & 0 & 0 \\
\op & 0 & 0 & \op & 0 & 0 & 0 & \od & 0 & 0 \\
0 &\op & 0 & \op & 0 & 0 & \oa & 0 & 0 & 0 \\
0 &\op & 0 & \op & 0 & 0 & \ob & 0 & 0 & 0 \\
0 &\op & 0 & \op & 0 & 0 & \oc & 0 & 0 & 0 \\
0 &\op & 0 & \op & 0 & 0 & \od & 0 & 0 & 0 \\
0 &\op & \op & 0 & 0 & \oc & 0 & 0 & 0 & 0 \\
0 &\op & \op & 0 & 0 & \od & 0 & 0 & 0 & 0 \\
0 & 0 &\op & \op & 0 & 0 & 0 & 0 & \oa & 0 \\
0 & 0 &\op & \op & 0 & 0 & 0 & 0 & \ob & 0 \\
0 & 0 &\op & \op & 0 & 0 & 0 & 0 & \oc & 0 \\
0 & 0 &\op & \op & 0 & 0 & 0 & 0 & \od & 0
\end {array}
\right)
\end{equation}
\begingroup
\everymath{\scriptstyle}
There are several ways to obtain the matrix of a ``real'' combiner. We
used a method which determines the matrix column by column. The first
four columns correspond to the real transmission of each of the
entrances separately. To determine these four columns, we inject light
into the entrance guides one after the other. For the six
following columns, light is injected into two entrances only, and the
optical path is modulated to reveal the phase shifts.
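To illustrate the pair-wise injection step: with light in inputs $n$ and $m$ only, each output $k$ records, as a function of the modulated optical path difference $\phi$, a fringe $I_k(\phi)=A_k+B_k\cos(\phi+\phi_k)$, where $B_k$ and $\phi_k$ encode the modulus and phase of the coupling term in the corresponding $V2PM$ column. The fit becomes linear once the fringe is projected onto $1$, $\cos\phi$ and $\sin\phi$; a minimal sketch with made-up coefficients (not our measured data):

```python
import numpy as np

# Synthetic fringe recorded on one output during an OPD scan:
# I(phi) = a + c cos(phi + phi0), with a little detector noise.
rng = np.random.default_rng(0)
a, c, phi0 = 0.21, 0.11, 1.25           # hypothetical fringe parameters
phi = np.linspace(0.0, 4 * np.pi, 200)  # scanned optical path difference (rad)
intensity = a + c * np.cos(phi + phi0) + 1e-4 * rng.standard_normal(phi.size)

# Since c cos(phi + phi0) = (c cos phi0) cos phi - (c sin phi0) sin phi,
# the model is linear in the three unknowns (a, c cos phi0, -c sin phi0).
basis = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
coef, *_ = np.linalg.lstsq(basis, intensity, rcond=None)
c_est = np.hypot(coef[1], coef[2])        # modulus of the coupling term
phi0_est = np.arctan2(-coef[2], coef[1])  # phase shift of this output
print(c_est, phi0_est)
```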
On one of our 4T
ABCD H band beam combiners, we obtained the following matrix:
\begin{equation}{
V2PM
=
\left(
\begin {array}{rrrrrrrrrr}
0.085 & 0.039 & 0.001 & 0.001 & 0.112e^{0.00i} & 0.000e^{-2.02i} & 0.000e^{0.21i}& 0.001e^{-2.04i} & 0.000e^{2.79i} & 0.000e^{2.51i} \\
0.037 & 0.065 & 0.001 & 0.001 & 0.099e^{3.13i} & 0.001e^{1.25i} & 0.000e^{-1.65i}& 0.000e^{-0.52i} & 0.000e^{0.55i} & 0.000e^{-2.79i} \\
0.045 & 0.022 & 0.001 & 0.001 & 0.063e^{1.25i} & 0.000e^{-0.80i} & 0.000e^{-0.36i}& 0.000e^{1.24i} & 0.000e^{0.95i} & 0.000e^{-2.46i} \\
0.041 & 0.074 & 0.001 & 0.001 & 0.108e^{-1.88i} & 0.001e^{0.53i} & 0.000e^{2.82i}& 0.000e^{1.84i} & 0.000e^{-0.54i} & 0.001e^{2.28i} \\
0.002 & 0.067 & 0.054 & 0.002 & 0.002e^{2.27i} & 0.116e^{-0.00i} & 0.001e^{-3.11i}& 0.001e^{-2.01i} & 0.002e^{2.63i} & 0.001e^{-0.43i} \\
0.003 & 0.051 & 0.077 & 0.003 & 0.002e^{-1.03i} & 0.121e^{3.14i} & 0.001e^{2.40i}& 0.001e^{0.71i} & 0.000e^{1.28i} & 0.003e^{3.08i} \\
0.206 & 0.000 & 0.089 & 0.002 & 0.001e^{-0.09i} & 0.002e^{0.43i} & 0.000e^{-2.03i}& 0.002e^{-0.04i} & 0.001e^{2.79i} & 0.261e^{0.00i} \\
0.098 & 0.002 & 0.173 & 0.001 & 0.000e^{-2.84i} & 0.001e^{1.24i} & 0.000e^{2.99i}& 0.001e^{0.31i} & 0.001e^{0.19i} & 0.256e^{3.14i} \\
0.192 & 0.002 & 0.095 & 0.002 & 0.001e^{0.29i} & 0.002e^{2.23i} & 0.000e^{-2.10i}& 0.002e^{-2.92i} & 0.003e^{-0.99i} & 0.263e^{0.97i} \\
0.078 & 0.002 & 0.146 & 0.002 & 0.001e^{-1.19i} & 0.001e^{2.69i} & 0.000e^{2.71i}& 0.002e^{-0.09i} & 0.002e^{-1.57i} & 0.213e^{-2.16i} \\
0.093 & 0.003 & 0.000 & 0.035 & 0.001e^{-1.08i} & 0.001e^{2.44i} & 0.002e^{1.64i}& 0.107e^{-0.00i} & 0.001e^{-2.99i} & 0.004e^{0.88i} \\
0.044 & 0.004 & 0.004 & 0.101 & 0.001e^{2.30i} & 0.002e^{-2.90i} & 0.001e^{-1.75i}& 0.127e^{3.13i} & 0.002e^{-1.89i} & 0.001e^{1.40i} \\
0.037 & 0.003 & 0.003 & 0.014 & 0.001e^{2.59i} & 0.001e^{1.32i} & 0.000e^{1.36i}& 0.042e^{0.94i} & 0.001e^{-2.05i} & 0.001e^{2.74i} \\
0.020 & 0.002 & 0.002 & 0.043 & 0.001e^{-2.59i} & 0.000e^{2.44i} & 0.001e^{-0.86i}& 0.054e^{-2.15i} & 0.000e^{2.10i} & 0.001e^{-1.80i} \\
0.002 & 0.178 & 0.003 & 0.095 & 0.001e^{0.07i} & 0.001e^{-2.40i} & 0.237e^{0.00i}& 0.000e^{-1.37i} & 0.001e^{3.12i} & 0.001e^{-2.57i} \\
0.002 & 0.094 & 0.002 & 0.188 & 0.001e^{2.41i} & 0.001e^{1.62i} & 0.251e^{3.13i}& 0.001e^{-1.89i} & 0.001e^{-0.44i} & 0.000e^{-0.85i} \\
0.002 & 0.173 & 0.002 & 0.097 & 0.001e^{0.12i} & 0.001e^{0.75i} & 0.246e^{1.20i}& 0.001e^{-0.27i} & 0.001e^{-0.81i} & 0.001e^{1.86i} \\
0.002 & 0.089 & 0.002 & 0.187 & 0.000e^{-0.41i} & 0.000e^{-0.41i} & 0.245e^{-1.94i}& 0.001e^{-2.21i} & 0.000e^{-1.93i} & 0.001e^{-2.92i} \\
0.003 & 0.075 & 0.049 & 0.001 & 0.002e^{-2.88i} & 0.113e^{1.35i} & 0.003e^{1.60i}& 0.002e^{-1.19i} & 0.000e^{2.80i} & 0.002e^{-1.19i} \\
0.002 & 0.051 & 0.071 & 0.003 & 0.002e^{-1.16i} & 0.112e^{-1.77i} & 0.004e^{-2.65i}& 0.001e^{2.21i} & 0.004e^{1.99i} & 0.002e^{2.57i} \\
0.002 & 0.001 & 0.083 & 0.041 & 0.001e^{1.30i} & 0.001e^{1.68i} & 0.000e^{-2.87i}& 0.000e^{-0.59i} & 0.108e^{0.00i} & 0.001e^{-2.69i} \\
0.001 & 0.002 & 0.041 & 0.079 & 0.001e^{0.56i} & 0.001e^{1.52i} & 0.001e^{1.93i}& 0.001e^{2.17i} & 0.109e^{-3.14i} & 0.000e^{-1.67i} \\
0.001 & 0.001 & 0.068 & 0.036 & 0.000e^{-2.51i} & 0.000e^{2.92i} & 0.000e^{1.81i}& 0.001e^{-1.47i} & 0.094e^{1.26i} & 0.000e^{2.85i} \\
0.001 & 0.001 & 0.033 & 0.066 & 0.000e^{-2.86i} & 0.000e^{-0.14i} & 0.000e^{-2.12i}& 0.000e^{-0.01i} & 0.090e^{-1.86i} & 0.000e^{2.40i} \\
\end {array}
\right)}
\end{equation}
\endgroup
The differences between matrices (5) and (6) are due to several
instrumental effects. These effects
are revealed by:
\begin{itemize}
\item Missing zero values in the first four columns. This is due to
crosstalk, a consequence of some flux going from one guide to another
because of, for example, intersections. In column 1, the crosstalk
is of the order of 1\%.
\item Values different from $\op$ in the first four columns. This is due to a transmission that is not perfectly balanced between the different channels after a coupler or a tricoupler.
\item Phase shifts different from $[0,\pi,\pi/2,3\pi/2]$ for the four main outputs of each telescope pair. This is a problem due to the engineering of the phase shifts.
\end{itemize}
\section{CONCLUSION}
IO circuits presented in Figure~\ref{fig:waf} have recently been
delivered to our laboratory. We have shown that the transmission of
the components is suitable for astronomical observations in the K
band. Optimization of the throughput is nevertheless still under
investigation, with possibilities offered by using new couplers,
optical paths, and/or fiber-to-combiner injection methods.
We also developed the tools to fully characterize the complex transfer
function of the new components, in order to allow rigorous comparisons between
the different combiners engineered. The knowledge of the $V2PM$ matrix
should also allow signal-to-noise estimation, providing a sound and
practical way to choose the best combiner for the VLTI.
\acknowledgments
This project is funded by the French National Research Agency (ANR), project 2G-VLTI.
\section{Introduction}\label{sec:1}
In this paper, by using motivic Milnor fibers introduced by Denef-Loeser \cite{D-L-1} and \cite{D-L-2}, we obtain explicit formulas for the Jordan normal forms of Milnor monodromies. Let $f(x)= \sum_{v \in \ZZ_+^n} a_v x^v \in \CC[x_1,\ldots,x_n]$ be a polynomial on $\CC^n$ such that the hypersurface $f^{-1}(0)= \{ x \in \CC^n \ |\ f(x)=0 \}$ has an isolated singular point at $0\in \CC^n$. Then by a fundamental theorem of Milnor \cite{Milnor}, the Milnor fiber $F_0$ of $f$ at $0 \in \CC^n$ has the homotopy type of a bouquet of $(n-1)$-spheres. In particular, we have $H^j(F_0;\CC) \simeq 0$ ($j\neq 0, \ n-1$). Denote by
\begin{equation}
\Phi_{n-1,0} \colon H^{n-1}(F_0;\CC) \simto H^{n-1}(F_0;\CC)
\end{equation}
\noindent the $(n-1)$-th Milnor monodromy of $f$ at $0 \in \CC^n$. By the theory of monodromy zeta functions due to A'Campo \cite{A'Campo} and Varchenko \cite{Varchenko} etc., the eigenvalues of $\Phi_{n-1,0}$ were fairly well-understood. See Oka's book \cite{Oka} for an excellent exposition of this very important result. However to the best of our knowledge, it seems that the Jordan normal form of $\Phi_{n-1,0}$ is not fully understood yet. In this paper, we give a combinatorial description of the Jordan normal form of $\Phi_{n-1,0}$ by using motivic Milnor fibers (For a computer algorithm by Brieskorn lattices, see Schulze \cite{Schulze} etc.).
From now on, let us assume also that $f$ is convenient and non-degenerate at $0 \in \CC^n$ (see Definitions \ref{CVN} and \ref{NDC}). Note that the second condition is satisfied by generic polynomials $f$. Then we can describe the Jordan normal form of $\Phi_{n-1,0}$ very explicitly as follows. We call the convex hull of $\bigcup_{v \in \supp (f)} \{v + \RR_+^n\}$ in $\RR_+^n$ the Newton polyhedron of $f$ and denote it by $\Gamma_+(f)$. Let $q_1,\ldots,q_l$ (resp. $\gamma_1,\ldots, \gamma_{l^{\prime}}$) be the $0$-dimensional (resp. $1$-dimensional) faces of $\Gamma_{+}(f)$ such that $q_i\in \Int (\RR_+^n)$ (resp. the relative interior $\relint(\gamma_i)$ of $\gamma_i$ is contained in $\Int(\RR_+^n)$). For each $q_i$ (resp. $\gamma_i$), denote by $d_i >0$ (resp. $e_i>0$) its lattice distance $\dist(q_i, 0)$ (resp. $\dist(\gamma_i,0)$) from the origin $0\in \RR^n$. For $1\leq i \leq l^{\prime}$, let $\Delta_i$ be the convex hull of $\{0\}\sqcup \gamma_i$ in $\RR^n$. Then for $\lambda \in \CC \setminus \{1\}$ and $1 \leq i \leq l^{\prime}$ such that $\lambda^{e_i}=1$ we set
\begin{equation}
n(\lambda)_i
= \sharp\{ v\in \ZZ^n \cap \relint(\Delta_i) \ |\ \height (v, \gamma_i)=k\} +\sharp \{ v\in \ZZ^n \cap \relint(\Delta_i) \ |\ \height (v, \gamma_i)=e_i-k\},
\end{equation}
where $k$ is the minimal positive integer satisfying $\lambda=\zeta_{e_i}^{k}$ ($\zeta_{e_i}:=\exp (2\pi\sqrt{-1}/e_i)$) and for $v\in \ZZ^n \cap \relint(\Delta_i)$ we denote by $\height (v, \gamma_i)$ the lattice height of $v$ from the base $\gamma_i$ of $\Delta_i$. Then in Section \ref{sec:4} we prove the following result which describes the number of Jordan blocks for each fixed eigenvalue $\lambda \neq 1$ in $\Phi_{n-1, 0}$. Recall that by the monodromy theorem the sizes of such Jordan blocks are bounded by $n$.
\begin{theorem}\label{thm:1-1}
Assume that $f$ is convenient and non-degenerate at $0 \in \CC^n$. Then for any $\lambda \in \CC^* \setminus \{1\}$ we have
\begin{enumerate}
\item The number of the Jordan blocks for the eigenvalue $\lambda$ with the maximal possible size $n$ in $\Phi_{n-1,0} \colon H^{n-1}(F_0;\CC) \simto H^{n-1}(F_0;\CC)$ is equal to $\sharp \{q_i \ |\ \lambda^{d_i}=1\}$.
\item The number of the Jordan blocks for the eigenvalue $\lambda$ with size $n-1$ in $\Phi_{n-1, 0}$ is equal to $\sum_{i \colon \lambda^{e_i}=1} n(\lambda)_i$.
\end{enumerate}
\end{theorem}
\noindent Namely the Jordan blocks for the eigenvalues $\lambda \neq 1$ in the monodromy $\Phi_{n-1, 0}$ are determined by the lattice distances of the faces of $\Gamma_{+}(f)$ from the origin $0 \in \RR^n$. The monodromy theorem asserts also that the sizes of the Jordan blocks for the eigenvalue $1$ in $\Phi_{n-1, 0}$ are bounded by $n-1$. In this case, we have the following result. Denote by $\Pi_f$ the number of the lattice points on the $1$-skeleton of $\partial \Gamma_{+}(f) \cap \Int (\RR^n_+)$. For a compact face $\gamma \prec \Gamma_{+}(f)$, denote by $l^*(\gamma)$ the number of the lattice points on the relative interior $\relint(\gamma)$ of $\gamma$.
\begin{theorem}\label{thm:1-2}
In the situation of Theorem \ref{thm:1-1}
we have
\begin{enumerate}
\item {\rm (van Doorn-Steenbrink \cite{D-St})} The number of the Jordan blocks for the eigenvalue $1$ with the maximal possible size $n-1$ in $\Phi_{n-1, 0}$ is $\Pi_f$.
\item The number of the Jordan blocks for the eigenvalue $1$ with size $n-2$ in $\Phi_{n-1, 0}$ is equal to $2 \sum_{\gamma} l^*(\gamma)$, where $\gamma$ ranges through the compact faces of $\Gamma_{+}(f)$ such that $\d \gamma =2$ and $\relint(\gamma) \subset \Int (\RR^n_+)$. In particular, this number is even.
\end{enumerate}
\end{theorem}
\noindent Note that Theorem \ref{thm:1-2} (i) was previously obtained in van Doorn-Steenbrink \cite{D-St} by using different methods. Roughly speaking, the nilpotent part for the eigenvalue $1$ in the monodromy $\Phi_{n-1, 0}$ is determined by the convexity of the hypersurface $\partial \Gamma_{+}(f) \cap \Int (\RR^n_+)$. Thus Theorems \ref{thm:1-1} and \ref{thm:1-2} generalize the well-known fact that the monodromies of quasi-homogeneous polynomials are semisimple. In fact, by our results in Sections \ref{sec:2} and \ref{sec:4} a general algorithm for computing all the spectral pairs of the Milnor fiber $F_0$ is obtained. This in particular implies that we can compute the Jordan normal form of $\Phi_{n-1, 0}$ completely. Note that the spectrum of $F_0$ obtained in Saito \cite{Saito-3} and Varchenko-Khovanskii \cite{K-V} is not enough to deduce the Jordan normal form. Moreover, if any compact face of $\Gamma_{+}(f)$ is prime (see Definition \ref{dfn:2-16}) we obtain also a closed formula for the Jordan normal form. See Section \ref{sec:4} for the details.
This paper is organized as follows. In Section \ref{sec:2}, we introduce some generalizations of the results of Danilov-Khovanskii \cite{D-K} obtained in \cite{M-T-3}. By them we obtain a general algorithm for computing the equivariant mixed Hodge numbers of non-degenerate toric hypersurfaces. In Section \ref{sec:3}, we recall some basic definitions and results on motivic Milnor fibers introduced by Denef-Loeser \cite{D-L-1} and \cite{D-L-2}. Then in Section \ref{sec:4}, by rewriting them in terms of the Newton polyhedron $\Gamma_{+}(f)$ with the help of the results in Section \ref{sec:2} and \cite{M-T-3}, we prove various combinatorial formulas for the Jordan normal form of the Milnor monodromy $\Phi_{n-1, 0}$. Although our proof for the eigenvalue $1$ in this paper is very different from the one in \cite{M-T-3}, our results in Section \ref{sec:4} are completely parallel to those for monodromies at infinity obtained in \cite{M-T-3}. We thus find a striking symmetry between local and global. Finally, let us mention that in \cite{E-T} the results for the other eigenvalues $\lambda \not= 1$ in this paper were already generalized to the monodromies over complete intersection subvarieties in $\CC^n$.
\section{Preliminary notions and results}\label{sec:2}
In this section, we recall our results in \cite[Section 2]{M-T-3} which will be used in this paper. They are slight generalizations of the results in Danilov-Khovanskii \cite{D-K}.
\begin{definition}\label{dfn:2-6-2}
Let $g(x)=\sum_{v \in \ZZ^n} a_vx^v$ ($a_v\in \CC$) be a Laurent polynomial on $(\CC^*)^n$.
\begin{enumerate}
\item We call the convex hull of $\supp(g):=\{v\in \ZZ^n \ |\ a_v\neq 0\} \subset \ZZ^n $ in $\RR^n$ the Newton polytope of $g$ and denote it by $NP(g)$.
\item For $u\in (\RR^n)^*$, we set $\Gamma(g;u):=\left\{ v\in NP(g)\ \left| \ \langle u,v\rangle =\min_{w\in NP(g)} \langle u,w\rangle \right.\right\}$.
\item For $u \in (\RR^n)^*$, we define the $u$-part of $g$ by $g^u(x):=\sum_{v \in \Gamma(g;u)} a_vx^v$.
\end{enumerate}
\end{definition}
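To make Definition \ref{dfn:2-6-2} concrete, note that $\Gamma(g;u)$ is the face of $NP(g)$ on which the linear function $\langle u,\cdot\rangle$ attains its minimum, so the $u$-part $g^u$ simply collects the terms of $g$ whose exponents realize $\min_{v\in \supp(g)}\langle u,v\rangle$. A small script for the hypothetical example $g=x_1+x_2+x_1^{-1}x_2^{-1}$, whose Newton polytope is a triangle:

```python
# g = x_1 + x_2 + x_1^{-1} x_2^{-1}: support in Z^2, exponent vector -> coefficient.
g = {(1, 0): 1, (0, 1): 1, (-1, -1): 1}

def u_part(g, u):
    """Return the terms of the u-part g^u: the sub-dictionary of g
    supported on Gamma(g; u), i.e. where <u, v> attains its minimum."""
    m = min(sum(ui * vi for ui, vi in zip(u, v)) for v in g)
    return {v: a for v, a in g.items()
            if sum(ui * vi for ui, vi in zip(u, v)) == m}

# u = (1, 1): the minimizing face is the vertex (-1, -1), so g^u = x_1^{-1} x_2^{-1}.
print(u_part(g, (1, 1)))       # {(-1, -1): 1}
# u = (-1, -1): the edge joining (1, 0) and (0, 1), so g^u = x_1 + x_2.
print(u_part(g, (-1, -1)))     # {(1, 0): 1, (0, 1): 1}
# u = 0: Gamma(g; 0) is all of NP(g), and g^u = g.
print(u_part(g, (0, 0)))
```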
\begin{definition}[\cite{Kushnirenko}]
Let $g$ be a Laurent polynomial on $(\CC^*)^n$. Then we say that the hypersurface $Z^*=\{ x\in (\CC^*)^n \ |\ g(x)=0 \}$ of $(\CC^*)^n$ is non-degenerate if for any $u \in (\RR^n)^*$ the hypersurface $\{ x\in (\CC^*)^n \ |\ g^u(x)=0 \}$ is smooth and reduced.
\end{definition}
In the sequel, let us fix an element $\tau =(\tau_1,\ldots, \tau_n) \in T:=(\CC^*)^n$ and let $g$ be a Laurent polynomial on $(\CC^*)^n$ such that $Z^*=\{ x\in (\CC^*)^n \ |\ g(x)=0\}$ is non-degenerate and invariant by the automorphism $l_{\tau} \colon (\CC^*)^n \underset{\tau \times}{\simto}(\CC^*)^n$ induced by the multiplication by $\tau$. Set $\Delta =NP(g)$ and for simplicity assume that $\d \Delta=n$. Then there exists $\beta \in \CC$ such that $l_{\tau}^*g= g \circ l_{\tau}=\beta g$. This implies that for any vertex $v$ of $\Delta =NP(g)$ we have ${\tau}^v={\tau}_1^{v_1} \cdots {\tau}_n^{v_n}=\beta$. Moreover by the condition $\d \Delta=n$ we see that $\tau_1, \tau_2, \ldots , \tau_n$ are roots of unity. For $p,q \geq 0$ and $k \geq 0$, let $h^{p,q}(H_c^k(Z^*;\CC))$ be the mixed Hodge number of $H_c^k(Z^*;\CC)$ and set
\begin{equation}
e^{p,q}(Z^*)=\dsum_k (-1)^k h^{p,q}(H_c^k(Z^*;\CC))
\end{equation}
as in \cite{D-K}. The above automorphism of $(\CC^*)^n$ induces a morphism of mixed Hodge structures $l_{\tau}^* \colon H_c^k(Z^*;\CC) \simto H_c^k(Z^*;\CC)$ and hence $\CC$-linear automorphisms of the $(p,q)$-parts $H_c^k(Z^*;\CC)^{p,q}$ of $H_c^k(Z^*;\CC)$. For $\alpha \in \CC$, let $h^{p,q}(H_c^k(Z^*;\CC))_{\alpha}$ be the dimension of the $\alpha$-eigenspace $H_c^k(Z^*;\CC)_{\alpha}^{p,q}$ of this automorphism of $H_c^k(Z^*;\CC)^{p,q}$ and set
\begin{equation}
e^{p,q}(Z^*)_{\alpha}=\dsum_k (-1)^k h^{p,q}(H_c^k(Z^*;\CC))_{\alpha}.
\end{equation}
We call $e^{p,q}(Z^*)_{\alpha}$ the equivariant mixed Hodge numbers of $Z^*$. Since we have $l_{\tau}^r =\id_{Z^*}$ for some $r \gg 0$, these numbers are zero unless $\alpha$ is a root of unity. Obviously we have
\begin{equation}
e^{p,q}(Z^*)=\dsum_{\alpha \in \CC} e^{p,q}(Z^*)_{\alpha}, \qquad
e^{p,q}(Z^*)_{\alpha}=e^{q,p}(Z^*)_{\overline{\alpha}}.
\end{equation}
In this setting, along the lines of Danilov-Khovanskii \cite{D-K} we can give an algorithm for computing these numbers $e^{p,q}(Z^*)_{\alpha}$ as follows. First of all, as in \cite[Section 3]{D-K} we have the following result.
\begin{proposition}\label{prp:2-15}
{\bf (\cite[Proposition 2.6]{M-T-3})} For $p,q \geq 0$ such that $p+q >n-1$, we have
\begin{equation}
e^{p,q}(Z^*)_{\alpha}=
\begin{cases}
(-1)^{n+p+1}\binom{n}{p+1} & (\text{$\alpha=1$ and $p=q$}),\\
\hspace*{10mm}0 & (\text{otherwise}),
\end{cases}
\end{equation}
where we use the convention $\binom{a}{b}=0$ ($0 \leq a <b$) for binomial coefficients.
\end{proposition}
For a vertex $w$ of $\Delta$, consider the translated polytope $\Delta^w:=\Delta -w$ such that $0 \prec \Delta^w$ and ${\tau}^v=1$ for any vertex $v$ of $\Delta^w$. Then for $\alpha \in \CC$ and $k \geq 0$ set
\begin{equation}
l^*(k\Delta)_{\alpha}=\sharp \{ v \in \Int (k\Delta^w) \cap \ZZ^n \ |\ {\tau}^v =\alpha\} \in \ZZ_+:=\ZZ_{\geq 0}.
\end{equation}
We can easily see that these numbers $l^*(k\Delta)_{\alpha}$ do not depend on the choice of the vertex $w$ of $\Delta$. We define a formal power series $P_{\alpha}(\Delta;t)=\sum_{i \geq 0}\varphi_{\alpha, i}(\Delta)t^i$ by
\begin{equation}
P_{\alpha}(\Delta;t)=(1-t)^{n+1} \left\{ \dsum_{k \geq 0} l^*(k\Delta)_{\alpha}t^k\right\}.
\end{equation}
Then we can easily show that $P_{\alpha}(\Delta;t)$ is actually a polynomial as in \cite[Section 4.4]{D-K}.
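To illustrate these notions, consider the simple example where $n=1$, $g(x)=x^2-1$ and $\tau =-1 \in T=\CC^*$. Then $Z^*=\{ \pm 1 \}$ is non-degenerate and invariant by $l_{\tau} \colon x \longmapsto -x$, and we have $\Delta =NP(g)=[0,2]$. Taking the vertex $w=0$ of $\Delta$, for $k \geq 1$ we have $\Int (k\Delta) \cap \ZZ =\{ 1,2,\ldots, 2k-1\}$ and hence $l^*(k\Delta)_{1}=k-1$ and $l^*(k\Delta)_{-1}=k$. Therefore we obtain
\begin{equation}
P_{1}(\Delta;t)=(1-t)^{2} \dsum_{k \geq 1} (k-1)t^k=t^2, \qquad P_{-1}(\Delta;t)=(1-t)^{2} \dsum_{k \geq 1} kt^k=t,
\end{equation}
so that $\varphi_{1, 2}(\Delta)=\varphi_{-1, 1}(\Delta)=1$ and all the other $\varphi_{\pm 1, i}(\Delta)$ vanish. Since $l_{\tau}$ interchanges the two points of $Z^*$, the induced automorphism of $H_c^0(Z^*;\CC) \simeq \CC^2$ has the eigenvalues $\pm 1$ and we get $e^{0,0}(Z^*)_{1}=e^{0,0}(Z^*)_{-1}=1$, in accordance with Theorem \ref{thm:2-14} below.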
\begin{theorem}\label{thm:2-14}
{\bf (\cite[Theorem 2.7]{M-T-3})} In the situation as above, we have
\begin{equation}
\dsum_q e^{p,q}(Z^*)_{\alpha}
=\begin{cases}
(-1)^{p+n+1}\binom{n}{p+1} +(-1)^{n+1} \varphi_{\alpha, n-p}(\Delta) & (\alpha=1), \\
(-1)^{n+1} \varphi_{\alpha, n-p}(\Delta) & (\alpha \neq 1).
\end{cases}
\end{equation}
\end{theorem}
By Proposition \ref{prp:2-15} and Theorem \ref{thm:2-14} we can now calculate the numbers $e^{p,q}(Z^*)_{\alpha}$ of the non-degenerate hypersurface $Z^* \subset (\CC^*)^n$ for any $\alpha \in \CC$ as in \cite[Section 5.2]{D-K}. Indeed, for a projective toric compactification $X$ of $(\CC^*)^n$ such that the closure $\overline{Z^*}$ of $Z^*$ in $X$ is smooth, the variety $\overline{Z^*}$ is smooth projective and hence there exists a perfect pairing
\begin{equation}
H^{p,q}(\overline{Z^*};\CC)_{\alpha} \times H^{n-1-p, n-1-q}(\overline{Z^*};\CC)_{\alpha^{-1}} \longrightarrow \CC
\end{equation}
for any $p,q \geq 0$ and $\alpha \in \CC^*$ (see for example \cite[Section 5.3.2]{Voisin}). Therefore, we obtain equalities $e^{p,q}(\overline{Z^*})_{\alpha}=e^{n-1-p,n-1-q}(\overline{Z^*})_{\alpha^{-1}}$ which are needed to carry out the algorithm in \cite[Section 5.2]{D-K}. We also have the following analogue of \cite[Proposition 5.8]{D-K}.
\begin{proposition}\label{prp:new}
{\bf (\cite[Proposition 2.8]{M-T-3})} For any $\alpha \in \CC$ and $p> 0$ we have
\begin{equation}
e^{p,0}(Z^*)_{\alpha}=e^{0,p}(Z^*)_{\overline{\alpha}}= (-1)^{n-1}
\sum_{\begin{subarray}{c} \Gamma \prec \Delta\\ \d \Gamma =p+1\end{subarray}}l^*(\Gamma)_{\alpha}.
\end{equation}
\end{proposition}
The following result is an analogue of \cite[Corollary 5.10]{D-K}. For $\alpha \in \CC$, denote by $\Pi(\Delta)_{\alpha}$ the number of the lattice points $v=(v_1,\ldots, v_n)$ on the $1$-skeleton of $\Delta^w=\Delta-w$ such that ${\tau}^v=\alpha$, where $w$ is a vertex of $\Delta$.
\begin{proposition}\label{prp:2-19}
{\bf (\cite[Proposition 2.9]{M-T-3})} In the situation as above, for any $\alpha \in \CC^*$ we have
\begin{equation}
e^{0,0}(Z^*)_{\alpha}=
\begin{cases}
(-1)^{n-1} \left(\Pi(\Delta)_{1}-1\right) & (\alpha=1), \\
(-1)^{n-1} \Pi(\Delta)_{\alpha^{-1}} & (\alpha \neq 1).
\end{cases}
\end{equation}
\end{proposition}
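For instance, let $n=1$, $g(x)=x^2-1$ and $\tau=-1$, so that $Z^*=\{ \pm 1\}$ and $\Delta=NP(g)=[0,2]$. Taking the vertex $w=0$, among the lattice points $0,1,2$ on the $1$-skeleton of $\Delta^w=\Delta$ we have ${\tau}^v=1$ for $v=0,2$ and ${\tau}^v=-1$ for $v=1$, so that $\Pi(\Delta)_{1}=2$ and $\Pi(\Delta)_{-1}=1$. Hence Proposition \ref{prp:2-19} yields $e^{0,0}(Z^*)_{1}=\Pi(\Delta)_{1}-1=1$ and $e^{0,0}(Z^*)_{-1}=\Pi(\Delta)_{-1}=1$, which agrees with the fact that $l_{\tau}$ interchanges the two points of $Z^*$.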
For a vertex $w$ of $\Delta$, we define a closed convex cone $\Con(\Delta, w)$ by $\Con(\Delta,w)=\{ r \cdot (v -w) \ |\ r \in \RR_+, \ v \in \Delta\} \subset \RR^n$.
\begin{definition}\label{dfn:7-6-4}{\bf (\cite{D-K})}
Let $\Delta$ and $\Delta^{\prime}$ be two $n$-dimensional integral polytopes in $(\RR^n, \ZZ^n)$. We denote by ${\rm som}(\Delta )$ (resp. ${\rm som}(\Delta^{\prime})$) the set of vertices of $\Delta$ (resp. $\Delta^{\prime}$). Then we say that $\Delta^{\prime}$ majorizes $\Delta$ if there exists a map $\Psi \colon {\rm som}(\Delta^{\prime}) \longrightarrow {\rm som}(\Delta)$ such that $\Con(\Delta, \Psi(w)) \subset \Con(\Delta^{\prime}, w)$ for any vertex $w$ of $\Delta^{\prime}$.
\end{definition}
For an integral polytope $\Delta$ in $(\RR^n, \ZZ^n)$, we denote by $X_{\Delta}$ the toric variety associated with the dual fan of $\Delta$ (see Fulton \cite{Fulton} and Oda \cite{Oda} etc.). Recall that if $\Delta^{\prime}$ majorizes $\Delta$ there exists a natural morphism $X_{\Delta^{\prime}} \longrightarrow X_{\Delta}$.
\begin{proposition}\label{prp:7-6-5}
{\bf (\cite[Proposition 2.12]{M-T-3})} Let $\Delta$ and $Z^*_{\Delta}=Z^*$ with an action of $l_{\tau}$ be as above. Assume that an $n$-dimensional integral polytope $\Delta^{\prime}$ in $(\RR^n, \ZZ^n)$ majorizes $\Delta$ by the map $\Psi \colon {\rm som}(\Delta^{\prime}) \longrightarrow {\rm som}(\Delta)$. Then for the closure $\overline{Z^*}$ of $Z^*$ in $X_{\Delta^{\prime}}$ we have
\begin{eqnarray}
\sum_q e^{p,q}(\overline{Z^*})_1
&=&\sum_{\Gamma\prec \Delta^{\prime}} (-1)^{\d \Gamma+p+1} \left\{\binom{\d \Gamma}{p+1}-\binom{\dd_{\Gamma}}{p+1}\right\}\nonumber \\
& & +\sum_{\Gamma \prec \Delta^{\prime}}(-1)^{\d \Gamma +1}\sum_{i=0}^{\min\{\dd_{\Gamma},p\}}\binom{\dd_{\Gamma}}{i}(-1)^i \varphi_{1,\d \Psi(\Gamma)-p+i}(\Psi(\Gamma)),\label{eq:7-6-5}
\end{eqnarray}
where for $\Gamma \prec \Delta^{\prime}$ we set $\dd_{\Gamma}=\d \Gamma -\d \Psi(\Gamma)$.
\end{proposition}
\begin{definition}\label{dfn:2-16}
Let $\Delta$ be an $n$-dimensional integral polytope in $(\RR^n, \ZZ^n)$.
\begin{enumerate}
\item (see \cite[Section 2.3]{D-K}) We say that $\Delta$ is prime if for any vertex $w$ of $\Delta$ the cone $\Con(\Delta,w)$ is generated by a basis of $\RR^n$.
\item (see \cite[Definition 2.10]{M-T-3}) We say that $\Delta$ is pseudo-prime if for any $1$-dimensional face $\gamma \prec \Delta$ the number of the $2$-dimensional faces $\gamma^{\prime} \prec \Delta$ such that $\gamma \prec \gamma^{\prime}$ is $n-1$.
\end{enumerate}
\end{definition}
By definition, prime polytopes are pseudo-prime. Moreover any face of a pseudo-prime polytope is again pseudo-prime.
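For example, for $n=3$ the standard octahedron, i.e. the convex hull of $\{ \pm e_1, \pm e_2, \pm e_3\}$ in $\RR^3$, is pseudo-prime but not prime: every $1$-dimensional face of it lies on exactly $n-1=2$ two-dimensional faces, whereas for each vertex $w$ the cone $\Con(\Delta, w)$ is generated by four rays and hence not by a basis of $\RR^3$.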
For $\alpha \in \CC \setminus \{1\}$ and a face $\Gamma \prec \Delta$, set $\tl{\varphi}_{\alpha}(\Gamma)=\sum_{i=0}^{\d \Gamma} \varphi_{\alpha, i}(\Gamma)$. Then as in \cite[Section 5.5 and Theorem 5.6]{D-K} we obtain the following result.
\begin{proposition}\label{cor:2-18}
{\bf (\cite[Corollary 2.15]{M-T-3})}
Assume that $\Delta=NP(g)$ is pseudo-prime. Then for any $\alpha \in \CC \setminus \{1\}$ and $r \geq 0$, we have
\begin{equation}
\sum_{p+q=r}e^{p,q}(Z^*)_{\alpha}=(-1)^{n+r} \sum_{\begin{subarray}{c} \Gamma \prec \Delta\\ \d \Gamma =r+1\end{subarray}} \left\{ \sum_{\Gamma^{\prime} \prec \Gamma} (-1)^{\d\Gamma^{\prime}}\tl{\varphi}_{\alpha}(\Gamma^{\prime})\right\}.
\end{equation}
\end{proposition}
The following lemma will be used later.
\begin{lemma}\label{lem:7-6-9}
Let $\gamma$ be a $d$-dimensional prime polytope. Then for any $0 \leq p \leq d$ we have
\begin{equation}\label{eq:7-6-9}
\sum_{\Gamma \prec \gamma} (-1)^{\d \Gamma}\binom{\d \Gamma}{p}=\sum_{\Gamma \prec \gamma}(-1)^{d +\d \Gamma}\binom{\d \Gamma}{d-p}.
\end{equation}
\end{lemma}
\begin{proof}
For a polytope $\Delta$, denote the number of the $j$-dimensional faces of $\Delta$ by $f_{\Delta,j}$ and set $f_{\Delta,-1}=1$. Let $\gamma^{\vee}$ be the dual polytope of $\gamma$. Then $\gamma^{\vee}$ is simplicial and we have $f_{\gamma^{\vee},j}=f_{\gamma, d-1-j}$ for any $0 \leq j \leq d$. Hence \eqref{eq:7-6-9} follows from the Dehn-Sommerville equations (see \cite{Sommerville} etc.) for simplicial polytopes. \qed
\end{proof}
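For example, let $\gamma \subset \RR^2$ be the unit square, which is a prime polytope with $d=2$. Counting its four vertices, its four edges and $\gamma$ itself, we see that both sides of \eqref{eq:7-6-9} are equal to $4-4+1=1$ for $p=0$, to $-4+2=-2$ for $p=1$ and to $1$ for $p=2$.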
\section{Motivic Milnor fibers}\label{sec:3}
In \cite{D-L-1} and \cite{D-L-2} Denef and Loeser introduced motivic Milnor fibers. In this section, we recall their definition and basic properties. Let $f \in \CC[x_1, x_2, \ldots, x_n]$ be a polynomial such that the hypersurface $f^{-1}(0)= \{x\in \CC^n \ |\ f(x)=0\}$ has an isolated singular point at $0\in \CC^n$. Then by a fundamental theorem of Milnor \cite{Milnor}, for the Milnor fiber $F_0$ of $f$ at $0$ we have $H^j(F_0;\CC) \simeq 0$ ($j\neq 0, \ n-1$). Denote by $\Phi_{n-1,0}\colon H^{n-1}(F_0;\CC) \simto H^{n-1}(F_0;\CC)$ the $(n-1)$-th Milnor monodromy of $f$ at $0 \in \CC^n$. Let $\pi \colon X \longrightarrow \CC^n$ be an embedded resolution of $f^{-1}(0)$ such that $\pi^{-1}(0)$ and $\pi^{-1}(f^{-1}(0))$ are normal crossing divisors in $X$. Let $D_1, D_2, \ldots, D_m$ be the irreducible components of $\pi^{-1}(0)$ and denote by $Z$ the proper transform of $f^{-1}(0)$ in $X$. For $1 \leq i \leq m$ denote by $a_i>0$ the order of the zero of $g:= f \circ \pi$ along $D_i$. For a non-empty subset $I \subset \{ 1,2, \ldots, m\}$ we set $d_I=\gcd (a_i)_{i \in I}>0$, $D_I=\bigcap_{i \in I}D_i$ and
\begin{equation}
D_I^{\circ}=D_I \setminus \left\{ \( \bigcup_{i \notin I}D_i\) \cup Z \right\} \subset X.
\end{equation}
Moreover we set
\begin{equation}
Z_I^{\circ}=\left\{ D_I \setminus \left( \bigcup_{i \notin I}D_i\right) \right\} \cap Z \subset X.
\end{equation}
Then, as in \cite[Section 3.3]{D-L-2}, we can construct an unramified Galois covering $\tl{D_I^{\circ}} \longrightarrow D_I^{\circ}$ of $D_I^{\circ}$ as follows. First, for a point $p \in D_I^{\circ}$ we take an affine open neighborhood $W \subset X \setminus \{ ( \bigcup_{i \notin I} D_i) \cup Z \}$ of $p$ on which there exist regular functions $\xi_i$ ($i \in I$) such that $D_i \cap W=\{ \xi_i=0 \}$ for any $i \in I$. Then on $W$ we have $g= f \circ \pi =g_{1,W} (g_{2,W})^{d_I}$, where we set $g_{1,W}=g \prod_{i \in I}\xi_i^{-a_i}$ and $g_{2,W}=\prod_{i \in I} \xi_i^{\frac{a_i}{d_I}}$. Note that $g_{1,W}$ is a unit on $W$ and $g_{2,W} \colon W \longrightarrow \CC$ is a regular function. It is easy to see that $D_I^{\circ}$ is covered by such affine open subsets $W$. Then as in \cite[Section 3.3]{D-L-2} by gluing the varieties
\begin{equation}\label{eq:6-26}
\tl{D_{I,W}^{\circ}}=\{(t,z) \in \CC^* \times (D_I^{\circ} \cap W) \ |\ t^{d_I} =(g_{1,W})^{-1}(z)\}
\end{equation}
together in the following way, we obtain the variety $\tl{D_I^{\circ}}$ over $D_I^{\circ}$. If $W^{\prime}$ is another such open subset and $g=g_{1,W^{\prime}} (g_{2,W^{\prime}})^{d_I}$ is the decomposition of $g$ on it, we patch $\tl{D_{I,W}^{\circ}}$ and $\tl{D_{I,W^{\prime}}^{\circ}}$ by the morphism $(t,z) \longmapsto (g_{2,W^{\prime}}(z)( g_{2,W})^{-1}(z) \cdot t, z)$ defined over $W \cap W^{\prime}$. Now for $d \in \ZZ_{>0}$, let $\mu_d \simeq \ZZ/d\ZZ$ be the multiplicative group consisting of the $d$-th roots of unity in $\CC$. We denote by $\hat{\mu}$ the projective limit $\underset{d}{\varprojlim} \mu_d$ of the projective system $\{ \mu_i \}_{i \geq 1}$ with morphisms $\mu_{id} \longrightarrow \mu_i$ given by $t \longmapsto t^d$. Then the unramified Galois covering $\tl{D_I^{\circ}}$ of $D_I^{\circ}$ admits a natural $\mu_{d_I}$-action defined by assigning the automorphism $(t,z) \longmapsto (\zeta_{d_I} t, z)$ of $\tl{D_I^{\circ}}$ to the generator $\zeta_{d_I}:=\exp (2\pi\sqrt{-1}/d_I) \in \mu_{d_I}$. Namely the variety $\tl{D_I^{\circ}}$ is equipped with a good $\hat{\mu}$-action in the sense of Denef-Loeser \cite[Section 2.4]{D-L-2}. Note that the variety $Z_I^{\circ}$ is also equipped with the trivial good $\hat{\mu}$-action. Following the notation in \cite{D-L-2}, denote by $\M_{\CC}^{\hat{\mu}}$ the ring obtained from the Grothendieck ring $\KK_0^{\hat{\mu}}(\Var_{\CC})$ of varieties over $\CC$ with good $\hat{\mu}$-actions by inverting the Lefschetz motive $\LL\simeq \CC \in \KK_0^{\hat{\mu}}(\Var_{\CC})$. Recall that $\LL \in \KK_0^{\hat{\mu}}(\Var_{\CC})$ is endowed with the trivial action of $\hat{\mu}$.
\begin{definition}{\bf (Denef and Loeser \cite{D-L-1}
and \cite{D-L-2})}\label{dfn:3-1} We define the motivic Milnor fiber $\SS_{f,0} \in \M_{\CC}^{\hat{\mu}}$ of $f$ at $0 \in \CC^n$ by
\begin{equation}\label{MMF}
\SS_{f,0} =\sum_{I \neq \emptyset}\left\{ (1-\LL)^{\sharp I -1} [\tl{D_I^{\circ}}] + (1-\LL)^{\sharp I} [Z_I^{\circ}]\right\} \in \M_{\CC}^{\hat{\mu}}.
\end{equation}
\end{definition}
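To illustrate this definition, consider the simplest example $n=1$ and $f(x)=x^d$ ($d \geq 2$). Then for the identity map $\pi =\id \colon X=\CC \longrightarrow \CC$ we have $m=1$, $D_1=\{0\}$, $a_1=d_{\{ 1\}}=d$ and $Z=\emptyset$, and the covering \eqref{eq:6-26} reduces to $\{ t \in \CC^* \ |\ t^{d}=1\} \simeq \mu_d$. Hence we obtain $\SS_{f,0}=[\mu_d] \in \M_{\CC}^{\hat{\mu}}$ endowed with its natural $\mu_d$-action, in accordance with the fact that the Milnor fiber $F_0$ of $f$ at the origin consists of $d$ points which are permuted cyclically by the monodromy.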
As in \cite[Section 3.1.2 and 3.1.3]{D-L-2}, we denote by $\HSm$ the abelian category of Hodge structures with a quasi-unipotent endomorphism. Let $\KK_0(\HSm)$ be its Grothendieck ring. Then as in \cite{D-L-2}, to the cohomology groups $H^j(F_0;\CC)$ and the semisimple parts of their monodromy automorphisms, we can naturally associate an element
\begin{equation}
[H_f] \in \KK_0(\HSm).
\end{equation}
To describe the element $[H_f]\in \KK_0(\HSm)$ in terms of $\SS_{f,0} \in \M_{\CC}^{\hat{\mu}}$, let
\begin{equation}
\chi_h \colon \M_{\CC}^{\hat{\mu}} \longrightarrow \KK_0(\HSm)
\end{equation}
be the Hodge characteristic morphism defined in \cite{D-L-2} which associates to a variety $Z$ with a good $\mu_d$-action the Hodge structure
\begin{equation}
\chi_h ([Z])=\sum_{j \in \ZZ} (-1)^j [H_c^j(Z;\QQ)] \in \KK_0(\HSm)
\end{equation}
with the action induced by the automorphism $z \longmapsto \exp (2\pi\sqrt{-1}/d)z$ ($z\in Z$) of $Z$. Then we have the following fundamental result.
\begin{theorem}\label{thm:7-6}
{\bf (Denef-Loeser \cite[Theorem 4.2.1]{D-L-1})} In the Grothendieck group $\KK_0(\HSm)$, we have
\begin{equation}
[H_f]=\chi_h(\SS_{f,0}).
\end{equation}
\end{theorem}
The following result on $[H_f] \in \KK_0(\HSm)$, due to Steenbrink \cite{Steenbrink} and Saito \cite{Saito-1}, \cite{Saito-2}, is also fundamental.
\begin{theorem}[Steenbrink \cite{Steenbrink} and Saito \cite{Saito-1}, \cite{Saito-2}]\label{S-S}
In the situation as above, we have
\begin{enumerate}
\item Let $\lambda \in \CC^* \setminus \{1\}$. Then we have $e^{p,q}( [H_f])_{\lambda}=0$ for $(p,q) \notin [0,n-1] \times [0,n-1]$. Moreover for $(p,q) \in [0,n-1] \times [0,n-1]$ we have
\begin{equation}
e^{p,q}( [H_f])_{\lambda}=e^{n-1-q,n-1-p}( [H_f])_{\lambda}.
\end{equation}
\item We have $e^{p,q}( [H_f])_{1}=0$ for $(p,q) \notin \{(0, 0)\} \sqcup ([1,n-1] \times [1,n-1])$ and $e^{0,0}( [H_f])_{1}=1$. Moreover for $(p,q) \in [1,n-1] \times [1,n-1]$ we have
\begin{equation}
e^{p,q}( [H_f])_{1}=e^{n-q,n-p}( [H_f])_{1}.
\end{equation}
\end{enumerate}
\end{theorem}
In many cases, we can check these symmetries of $e^{p,q}([H_f])_{\lambda}$ by calculating $\chi_h( \SS_{f,0}) \in \KK_0(\HSm)$ explicitly with our methods (see Section \ref{sec:4}). Since the weights of $[H_f] \in \KK_0(\HSm)$ are defined by the monodromy filtration, we have the following result.
\begin{theorem}\label{MF}
In the situation as above, we have
\begin{enumerate}
\item Let $\lambda \in \CC^* \setminus \{1\}$ and $k \geq 1$. Then the number of the Jordan blocks for the eigenvalue $\lambda$ with sizes $\geq k$ in $\Phi_{n-1,0}\colon H^{n-1}(F_0;\CC) \simto H^{n-1}(F_0;\CC)$ is equal to
\begin{equation}
(-1)^{n-1} \sum_{p+q=n-2+k, n-1+k} e^{p,q}( \chi_h(\SS_{f,0} ))_{\lambda}.
\end{equation}
\item For $k \geq 1$, the number of the Jordan blocks for the eigenvalue $1$ with sizes $\geq k$ in $\Phi_{n-1, 0}$ is equal to
\begin{equation}
(-1)^{n-1} \sum_{p+q=n-1+k, n+k} e^{p,q}( \chi_h(\SS_{f,0} ))_{1}.
\end{equation}
\end{enumerate}
\end{theorem}
\section{Jordan normal forms of Milnor monodromies}\label{sec:4}
Our methods in \cite{M-T-3} can also be applied to the Jordan normal forms of local Milnor monodromies. Let $f\in \CC[x_1,\ldots,x_n]$ be a polynomial such that the hypersurface $\{x\in \CC^n \ |\ f(x)=0\}$ has an isolated singular point at $0\in \CC^n$.
\begin{definition}\label{CVN}
Let $f(x)= \sum_{v \in \ZZ_+^n} a_v x^v \in \CC[x_1,\ldots,x_n]$ be a polynomial on $\CC^n$.
\begin{enumerate}
\item We call the convex hull of $\bigcup_{v \in \supp (f)} \{v + \RR_+^n\}$ in $\RR_+^n$ the Newton polyhedron of $f$ and denote it by $\Gamma_+(f)$.
\item The union of the compact faces of $\Gamma_+(f)$ is called the Newton boundary of $f$ and denoted by $\Gamma_f$.
\item
We say that $f$ is convenient if $\Gamma_+(f)$ intersects the positive part of every coordinate axis in $\RR^n$.
\end{enumerate}
\end{definition}
\begin{definition}[\cite{Kushnirenko}]\label{NDC}
We say that a polynomial $f(x)=\sum_{v \in \ZZ_+^n}a_vx^v$ ($a_v\in \CC$) is non-degenerate at $0\in \CC^n$ if for any face $\gamma \prec \Gamma_+(f)$ such that $\gamma \subset \Gamma_f$ the complex hypersurface $\{x \in (\CC^*)^n\ |\ f_{\gamma}(x)=0\}$ in $(\CC^*)^n$ is smooth and reduced, where we set $f_{\gamma}(x)=\sum_{v \in \gamma \cap \ZZ_+^n} a_vx^v$.
\end{definition}
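For example, the polynomial $f(x_1,x_2)=x_1^3+x_2^2$ defining the cusp singularity at the origin is convenient, and its Newton boundary $\Gamma_f$ is the segment connecting $(3,0)$ and $(0,2)$. Moreover $f$ is non-degenerate at $0 \in \CC^2$. Indeed, for the face $\gamma =\Gamma_f$ we have $f_{\gamma}=f$ and the gradient $(3x_1^2, 2x_2)$ does not vanish on $(\CC^*)^2$, whereas for the $0$-dimensional faces $\gamma \subset \Gamma_f$ the $\gamma$-part $f_{\gamma}$ is a monomial and hence $\{ x \in (\CC^*)^2 \ |\ f_{\gamma}(x)=0 \} =\emptyset$.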
Recall that generic polynomials having a fixed Newton polyhedron are non-degenerate at $0\in \CC^n$. From now on, we always assume also that $f=\sum_{v \in \ZZ^n_+} a_v x^v\in \CC[x_1,\ldots,x_n]$ is convenient and non-degenerate at $0\in \CC^n$. For each face $\gamma \prec \Gamma_+(f)$ such that $\gamma \subset \Gamma_f$, let $d_{\gamma}>0$ be the lattice distance of $\gamma$ from the origin $0 \in \RR^n$ and $\Delta_{\gamma}$ the convex hull of $\{0\} \sqcup \gamma$ in $\RR^n$. Let $\LL(\Delta_{\gamma})$ be the $(\dim \gamma +1)$-dimensional linear subspace of $\RR^n$ spanned by $\Delta_{\gamma}$ and consider the lattice $M_{\gamma}=\ZZ^n \cap \LL(\Delta_{\gamma}) \simeq \ZZ^{\dim \gamma+1}$ in it. Then we set $T_{\Delta_{\gamma}}:=\Spec (\CC[M_{\gamma}]) \simeq (\CC^*)^{\dim \gamma +1}$. Moreover let $\LL(\gamma)$ be the smallest affine linear subspace of $\RR^n$ containing $\gamma$ and for $v \in M_{\gamma}$ define their lattice heights $\height (v, \gamma) \in \ZZ$ from $\LL(\gamma)$ in $\LL(\Delta_{\gamma})$ so that we have $\height (0, \gamma)=d_{\gamma}>0$. Then to the group homomorphism $M_{\gamma} \longrightarrow \CC^*$ defined by $v \longmapsto \zeta_{d_{\gamma}}^{-\height (v, \gamma)}$ we can naturally associate an element $\tau_{\gamma} \in T_{\Delta_{\gamma}}$. We define a Laurent polynomial $g_{\gamma}=\sum_{v \in M_{\gamma}}b_v x^v$ on $T_{\Delta_{\gamma}}$ by
\begin{equation}
b_v=\begin{cases}
a_v & (v \in \gamma),\\
-1 & (v=0),\\
\ 0 & (\text{otherwise}).
\end{cases}
\end{equation}
Then we have $NP(g_{\gamma}) =\Delta_{\gamma}$, $\supp (g_{\gamma}) \subset \{ 0\} \sqcup \gamma$ and the hypersurface $Z_{\Delta_{\gamma}}^*=\{ x \in T_{\Delta_{\gamma}}\ |\ g_{\gamma}(x)=0\}$ is non-degenerate by \cite[Proposition 5.3]{M-T-3}. Moreover $Z_{\Delta_{\gamma}}^* \subset T_{\Delta_{\gamma}}$ is invariant by the multiplication $l_{\tau_{\gamma}} \colon T_{\Delta_{\gamma}} \simto T_{\Delta_{\gamma}}$ by $\tau_{\gamma}$, and hence we obtain an element $[Z_{\Delta_{\gamma}}^*]$ of $\M_{\CC}^{\hat{\mu}}$. Let $\LL(\gamma)^{\prime} \simeq \RR^{\d \gamma}$ be a linear subspace of $\RR^n$ such that $\LL(\gamma)=\LL(\gamma)^{\prime}+w $ for some $w \in \ZZ^n$ and set $\gamma^{\prime}=\gamma -w \subset \LL(\gamma)^{\prime}$. We define a Laurent polynomial $g_{\gamma}^{\prime} =\sum_{v \in \LL(\gamma)^{\prime} \cap \ZZ^n}b_v^{\prime} x^v$ on $T(\gamma ):= \Spec (\CC[\LL(\gamma)^{\prime} \cap \ZZ^n])\simeq (\CC^*)^{\d \gamma}$ by
\begin{equation}
b_v^{\prime}=\begin{cases}
a_{v+w} & (v \in \gamma^{\prime} ),\\
\ 0 & (\text{otherwise}).
\end{cases}
\end{equation}
Then we have $NP(g_{\gamma}^{\prime}) =\gamma^{\prime}$ and the hypersurface $Z_{\gamma}^*=\{ x \in T(\gamma ) \ |\ g_{\gamma}^{\prime}(x)=0\}$ is non-degenerate. We define $[Z_{\gamma}^*] \in \M_{\CC}^{\hat{\mu}}$ to be the class of the variety $Z_{\gamma}^*$ with the trivial action of $\hat{\mu}$. Finally, let $S_{\gamma} \subset \{1,2,\ldots, n\}$ be the minimal subset $S$ of $\{1,2,\ldots,n\}$ such that $\gamma \subset \{ (y_1, y_2, \ldots, y_n) \in \RR^n \ | \ y_i=0 \quad \text{for any} \ i \notin S \} \simeq \RR^{\sharp S}$ and set $m_{\gamma}:=\sharp S_{\gamma}-\dim \gamma -1\geq 0$. Then, in the same way as in \cite[Theorem 5.7]{M-T-3}, we obtain the following theorem.
\begin{theorem}\label{thm:8-3}
In the situation as above, we have
\begin{enumerate}
\item In the Grothendieck group $\KK_0(\HSm)$, we have
\begin{equation}\label{twot}
\chi_h(\SS_{f,0})= \sum_{\gamma \subset \Gamma_f} \chi_h\big((1-\LL)^{m_{\gamma}}\cdot[Z_{\Delta_{\gamma}}^*]\big)+\sum_{\begin{subarray}{c}\gamma \subset \Gamma_f\\ \d \gamma \geq 1\end{subarray}} \chi_h\big((1-\LL)^{m_{\gamma}+1}\cdot[Z_{\gamma}^*]\big).
\end{equation}
\item Let $\lambda \in \CC^*\setminus\{1\}$ and $k\geq 1$. Then the number of the Jordan blocks for the eigenvalue $\lambda$ with sizes $\geq k$ in $\Phi_{n-1,0}\colon H^{n-1}(F_0;\CC) \simto H^{n-1}(F_0;\CC)$ is equal to
\begin{equation}
(-1)^{n-1}\sum_{p+q=n-2+k, n-1+k}\left\{ \sum_{\gamma \subset \Gamma_f} e^{p,q}\( \chi_h\((1-\LL)^{m_{\gamma}} \cdot [Z_{\Delta_{\gamma}}^*]\)\)_{\lambda} \right\}.
\end{equation}
\item For $k\geq 1$, the number of the Jordan blocks for the eigenvalue $1$ with sizes $\geq k$ in $\Phi_{n-1,0}$ is equal to
\begin{eqnarray}
(-1)^{n-1}\sum_{p+q=n-1+k, n+k}\lefteqn{\Bigg\{ \sum_{\gamma \subset \Gamma_f} e^{p,q}\big( \chi_h\big((1-\LL)^{m_{\gamma}} \cdot [Z_{\Delta_{\gamma}}^*]\big)\big)_{1}} \nonumber \\
& & +\sum_{\begin{subarray}{c}\gamma \subset \Gamma_f\\ \d \gamma \geq 1\end{subarray}} e^{p,q}\big( \chi_h\big((1-\LL)^{m_{\gamma}+1} \cdot [Z_{\gamma}^*]\big)\big)_{1} \Bigg\}.
\end{eqnarray}
\end{enumerate}
\end{theorem}
\begin{proof}
Since (ii) and (iii) follow from (i) and Theorem \ref{MF}, it suffices to prove (i). The proof is very similar to the one in Varchenko \cite{Varchenko}. Let $\Sigma_1$ be the dual fan of $\Gamma_+(f)$ in $\RR_+^n$ and $\Sigma$ its smooth subdivision. Denote by $X_{\Sigma}$ the smooth toric variety associated to $\Sigma$ (see Fulton \cite{Fulton} and Oda \cite{Oda} etc.). Since the union of the cones in $\Sigma$ is $\RR_+^n$, there exists a proper morphism $\pi \colon X_{\Sigma} \longrightarrow \CC^n$. By the convenience of $f$, we can construct the smooth fan $\Sigma$ without subdividing the cones contained in $\partial \RR_+^n$ (see \cite[Lemma (2.6), Chapter II]{Oka}). Then $\pi$ induces an isomorphism $X_{\Sigma} \setminus \pi^{-1}(0) \simeq \CC^n \setminus \{ 0\}$. Moreover by the non-degeneracy at $0 \in \CC^n$ of $f$, the proper transform $Z$ of the hypersurface $\{x\in \CC^n \ |\ f(x)=0\}$ in $X_{\Sigma}$ is smooth and intersects $T$-orbits in $\pi^{-1}(0)$ transversally. Let $D_1, \ldots, D_m$ be the toric divisors in $\pi^{-1}(0) \subset X_{\Sigma}$. For a non-empty subset $I \subset \{ 1,2, \ldots, m\}$ we set $D_I=\bigcap_{i \in I}D_i$ and
\begin{equation}
D_I^{\circ}=D_I \setminus \left\{ \( \bigcup_{i \notin I}D_i\) \cup Z \right\} \subset X_{\Sigma}
\end{equation}
and define its unramified Galois covering $\tl{D_I^{\circ}}$ as in Section \ref{sec:3}. Moreover we set
\begin{equation}
Z_I^{\circ}=\left\{ D_I \setminus \left( \bigcup_{i \notin I}D_i\right) \right\} \cap Z \subset X_{\Sigma}
\end{equation}
and denote by $[Z_I^{\circ}] \in \M_{\CC}^{\hat{\mu}}$ the class of the variety $Z_I^{\circ}$ with the trivial action. Then, unlike the global object $\SS_f^{\infty}$ in \cite{M-T-3}, Denef-Loeser's ``local'' motivic Milnor fiber $\SS_{f,0}$ contains not only $(1-\LL)^{\sharp I -1} [\tl{D_I^{\circ}}]$ but also $(1-\LL)^{\sharp I} [Z_I^{\circ}]$ (see Definition \ref{dfn:3-1}). These new elements yield the second term on the right-hand side of \eqref{twot}. Finally, in the Grothendieck group $\KK_0(\HSm)$ we can rewrite $\chi_h(\SS_{f,0})$ in terms of the dual fan $\Sigma_1$ (i.e. in terms of $\Gamma_+(f)$) in the same way as in the proof of \cite[Theorem 5.7 (i)]{M-T-3}. This completes the proof. \qed
\end{proof}
Let $q_1,\ldots, q_l$ (resp. $\gamma_1, \ldots, \gamma_{l^{\prime}}$) be the $0$-dimensional (resp. $1$-dimensional) faces of $\Gamma_+(f)$ such that $q_i \in \Int (\RR_+^n)$ (resp. $\relint(\gamma_i) \subset \Int(\RR_+^n)$). Here $\relint (\cdot )$ stands for the relative interior. For each $q_i$ (resp. $\gamma_i$), denote by $d_i >0$ (resp. $e_i>0$) its lattice distance $\dist(q_i, 0)$ (resp. $\dist(\gamma_i,0)$) from the origin $0\in \RR^n$. For $1\leq i \leq l^{\prime}$, let $\Delta_i$ be the convex hull of $\{0\}\sqcup \gamma_i$ in $\RR^n$. Then for $\lambda \in \CC \setminus \{1\}$ and $1 \leq i \leq l^{\prime}$ such that $\lambda^{e_i}=1$ we set
\begin{equation}
n(\lambda)_i
= \sharp\{ v\in \ZZ^n \cap \relint(\Delta_i) \ |\ \height (v, \gamma_i)=k\} +\sharp \{ v\in \ZZ^n \cap \relint(\Delta_i) \ |\ \height (v, \gamma_i)=e_i-k\},
\end{equation}
where $k$ is the minimal positive integer satisfying $\lambda=\zeta_{e_i}^{k}$ and for $v\in \ZZ^n \cap \relint(\Delta_i)$ we denote by $\height (v, \gamma_i)$ the lattice height of $v$ from the base $\gamma_i$ of $\Delta_i$. In the same way as in \cite[Theorem 5.9]{M-T-3}, by using Propositions \ref{prp:new} and \ref{prp:2-19} and Theorem \ref{thm:8-3} (ii), we obtain the following theorem.
\begin{theorem}\label{thm:8-4}
In the situation as above, for $\lambda \in \CC^* \setminus\{1\}$, we have
\begin{enumerate}
\item The number of the Jordan blocks for the eigenvalue $\lambda$ with the maximal possible size $n$ in $\Phi_{n-1,0}$ is equal to $\sharp \{q_i \ |\ \lambda^{d_i}=1\}$.
\item The number of the Jordan blocks for the eigenvalue $\lambda$ with size $n-1$ in $\Phi_{n-1,0}$ is equal to $\sum_{i \colon \lambda^{e_i}=1} n(\lambda)_i$.
\end{enumerate}
\end{theorem}
Note that by Theorem \ref{thm:8-3} and our results in Section \ref{sec:2} we can always calculate the whole Jordan normal form of $\Phi_{n-1,0}$. From now on, we shall rewrite Theorem \ref{thm:8-3} (ii) more explicitly in the case where every face $\gamma \prec \Gamma_{+}(f)$ such that $\gamma \subset \Gamma_f$ is prime (see Definition \ref{dfn:2-16} (i)). Recall that by Proposition \ref{prp:2-15} for $\lambda \in \CC^* \setminus \{1\}$ and a face $\gamma \prec \Gamma_{+}(f)$ such that $\gamma \subset \Gamma_f$ we have $e^{p,q}(Z_{\Delta_{\gamma}}^*)_{\lambda}=0$ for any $p,q \geq 0$ such that $p+q > \d \Delta_{\gamma}-1=\dim \gamma$. Hence the non-negative integers $r \geq 0$ such that $\sum_{p+q=r}e^{p,q}(Z_{\Delta_{\gamma}}^*)_{\lambda}\neq 0$ are contained in the closed interval $[0,\d \gamma]\subset \RR$.
\begin{definition}
For a face $\gamma \prec \Gamma_{+}(f)$ such that $\gamma \subset \Gamma_f$ and $k \geq 1$, we define a finite subset $J_{\gamma,k} \subset [0,\d \gamma] \cap \ZZ$ by
\begin{equation}
J_{\gamma,k}=\{0 \leq r\leq \d \gamma \ |\ n-2+k \equiv r \mod 2\}.
\end{equation}
For each $r\in J_{\gamma,k}$, set
\begin{equation}
d_{k,r}=\dfrac{n-2+k-r}{2}\in \ZZ_+.
\end{equation}
\end{definition}
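For example, if $n=3$, $k=1$ and $\d \gamma =2$, then $n-2+k=2$ and hence $J_{\gamma, 1}=\{ 0, 2\}$ with $d_{1,0}=1$ and $d_{1,2}=0$, whereas $J_{\gamma, 2}=\{ 1 \}$ with $d_{2,1}=1$.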
If a face $\gamma \prec \Gamma_{+}(f)$ such that $\gamma \subset \Gamma_f$ is prime, then the polytope $\Delta_{\gamma}$ is pseudo-prime (see Definition \ref{dfn:2-16} (ii)). Then by Proposition \ref{cor:2-18} for $\lambda \in \CC^* \setminus \{1\}$ and an integer $r \geq 0$ such that $r\in [0,\d \gamma] $ we have
\begin{equation}
\sum_{p+q=r}e^{p,q}(\chi_h([Z_{\Delta_{\gamma}}^*]))_{\lambda}=(-1)^{\d \gamma +r+1} \sum_{\begin{subarray}{c} \Gamma\prec \Delta_{\gamma} \\ \d \Gamma=r+1\end{subarray}} \left\{ \sum_{\Gamma^{\prime} \prec \Gamma} (-1)^{\d \Gamma^{\prime}} \tl{\varphi}_{\lambda}(\Gamma^{\prime})\right\}.
\end{equation}
For simplicity, we denote this last integer by $e(\gamma,\lambda)_r$. Then by Theorem \ref{thm:8-3} (ii) we obtain the following result.
\begin{theorem}\label{thm:7-15}
Assume that every face $\gamma \prec \Gamma_{+}(f)$ such that $\gamma \subset \Gamma_f$ is prime. Let $\lambda \in \CC^* \setminus\{1\}$ and $k\geq 1$. Then the number of the Jordan blocks for the eigenvalue $\lambda$ with sizes $\geq k$ in $\Phi_{n-1,0} \colon H^{n-1}(F_0;\CC) \simto H^{n-1}(F_0;\CC)$ is equal to
\begin{equation}
(-1)^{n-1}\sum_{\gamma \subset \Gamma_f} \left\{ \sum_{r \in J_{\gamma, k}} (-1)^{d_{k,r}} \binom{m_{\gamma}}{d_{k,r}} \cdot e(\gamma,\lambda)_r + \sum_{r \in J_{\gamma, k+1}} (-1)^{d_{k+1,r}} \binom{m_{\gamma}}{d_{k+1,r}} \cdot e(\gamma,\lambda)_r\right\},
\end{equation}
where we used the convention $\binom{a}{b}=0$ ($0 \leq a <b$) for binomial coefficients.
\end{theorem}
By combining the proof of \cite[Theorem 5.6]{D-K} and \cite[Proposition 2.14]{M-T-3} with Theorem \ref{thm:8-3} (iii), if every face $\gamma \prec \Gamma_{+}(f)$ such that $\gamma \subset \Gamma_f$ is prime, we can also describe the Jordan blocks for the eigenvalue $1$ in $\Phi_{n-1,0}$ by a closed formula. Since this result is rather involved, we omit it here.
\begin{remark}
Our results above are different from the previous ones due to Danilov \cite{Danilov} and Tanab{\'e} \cite{Tanabe}. For example, in \cite{Danilov} and \cite{Tanabe} they assume the stronger condition that the Newton polyhedron $\Gamma_+(f)$ itself is prime. We could weaken their condition, because our \cite[Propositions 2.13 and 2.14]{M-T-3} and Proposition \ref{cor:2-18} are generalizations of the corresponding results in \cite{D-K} to pseudo-prime polytopes.
\end{remark}
We can also obtain the corresponding results for the eigenvalue $1$ by rewriting Theorem \ref{thm:8-3} (iii) more simply as follows.
\begin{theorem}\label{thm:8}
In the situation of Theorem \ref{thm:8-3}, for $k\geq 1$ the number of the Jordan blocks for the eigenvalue $1$ with sizes $\geq k$ in $\Phi_{n-1,0}$ is equal to
\begin{equation}
(-1)^{n-1}\sum_{p+q=n-2-k, n-1-k}\left\{ \sum_{\gamma \subset \Gamma_f} e^{p,q}\( \chi_h\((1-\LL)^{m_{\gamma}} \cdot [Z_{\Delta_{\gamma}}^*]\)\)_{1} \right\}.
\end{equation}
\end{theorem}
In the same way as in \cite[Theorems 5.11 and 5.12]{M-T-3}, by using Propositions \ref{prp:new} and \ref{prp:2-19} and Theorem \ref{thm:8}, we obtain the following corollary. Denote by $\Pi_f$ the number of lattice points on the $1$-skeleton of $\Gamma_f \cap \Int(\RR_+^n)$. Also, for a compact face $\gamma \prec \Gamma_+(f)$ we denote by $l^*(\gamma)$ the number of lattice points in $\relint(\gamma)$.
\begin{corollary}\label{thm:8-5}
In the situation as above, we have
\begin{enumerate}
\item {\rm (van Doorn-Steenbrink \cite{D-St})} The number of the Jordan blocks for the eigenvalue $1$ with the maximal possible size $n-1$ in $\Phi_{n-1,0}$ is $\Pi_f$.
\item The number of the Jordan blocks for the eigenvalue $1$ with size $n-2$ in $\Phi_{n-1,0}$ is equal to $2\sum_{\gamma} l^*(\gamma)$, where $\gamma$ ranges through the compact faces of $\Gamma_+(f)$ such that $\d \gamma=2$ and $\relint(\gamma) \subset \Int(\RR_+^n)$.
\end{enumerate}
\end{corollary}
Note that Corollary \ref{thm:8-5} (i) was previously obtained in van Doorn-Steenbrink \cite{D-St} by different methods. Theorem \ref{thm:8} asserts that, by replacing $\Gamma_+(f)$ with the Newton polyhedron at infinity $\Gamma_{\infty}(f)$ of \cite{L-S}, \cite{M-T-2} and \cite{M-T-3} etc., the combinatorial description of the local monodromy $\Phi_{n-1,0}$ is the same as that of the global one $\Phi_{n-1}^{\infty}$ obtained in \cite[Theorem 5.7 (iii)]{M-T-3}. Namely, we find a beautiful symmetry between local and global. Theorem \ref{thm:8} can be deduced from the following more precise result.
\begin{theorem}\label{thm:7-6-1}
In the situation as above, for any $0 \leq p,q \leq n-2$ we have
\begin{eqnarray}
\lefteqn{\sum_{\gamma \subset \Gamma_f}e^{p,q}\( \chi_h\( (1-\LL)^{m_{\gamma}}[Z_{\Delta_{\gamma}}^*]\)\)_1}\nonumber \\
&=&\sum_{\gamma \subset \Gamma_f}e^{p+1,q+1}
\( \chi_h\( (1-\LL)^{m_{\gamma}}[Z_{\Delta_{\gamma}}^*]+ (1-\LL)^{m_{\gamma}+1}[Z_{\gamma}^*]\)\)_1.
\label{eq:7-6-1}
\end{eqnarray}
\end{theorem}
We can easily see that Theorem \ref{thm:7-6-1} follows from Proposition \ref{prp:7-6-2} below. For $[V] \in \KK_0(\HSm)$, let $e([V])_1=\sum_{p,q=0}^{\infty}e^{p,q}([V])_1t_1^pt_2^q$ be the generating function of $e^{p,q}([V])_1$ as in \cite{D-K}.
\begin{proposition}\label{prp:7-6-2}
We have
\begin{equation}
\sum_{\gamma \subset \Gamma_f}e\( \chi_h\( (1-\LL)^{m_{\gamma}+1}([Z_{\Delta_{\gamma}}^*]+[Z_{\gamma}^*])\)\)_1=1-(t_1t_2)^n.
\end{equation}
\end{proposition}
From now on, we shall prove Proposition \ref{prp:7-6-2}. First, we apply Proposition \ref{prp:7-6-5} to the case where $\Delta =\Delta_{\gamma}$ for a face $\gamma$ of $\Gamma_+(f)$ such that $\gamma \subset \Gamma_f$. Let $\gamma^{\prime}$ be a prime polytope in $\RR^{\d \gamma}$ which majorizes $\gamma$ and consider the Minkowski sum $\gamma^{\prime \prime}:=\gamma+\gamma^{\prime}$ (resp. $\Box_{\gamma^{\prime\prime}}:= \Delta_{\gamma}+\gamma^{\prime}$) in $\RR^{\d \gamma}$ (resp. $\RR^{\d \gamma +1}$). Then $\Box_{\gamma^{\prime\prime}}$ is a $(\d \gamma +1)$-dimensional truncated pyramid whose top (resp. bottom) is $\gamma^{\prime}$ (resp. $\gamma^{\prime \prime}$) (see Figure 1 below). In particular, $\Box_{\gamma^{\prime\prime}}$ is prime. Since the dual fan of $\gamma^{\prime \prime}$ coincides with that of $\gamma^{\prime}$, the prime polytope $\gamma^{\prime \prime}$ majorizes $\gamma$. Let $\Psi \colon {\rm som}(\gamma^{\prime \prime}) \longrightarrow {\rm som}(\gamma)$ be the morphism between the sets of the vertices of $\gamma^{\prime \prime}$ and $\gamma$. By extending $\Psi$ to a morphism $\tl{\Psi} \colon {\rm som}(\Box_{\gamma^{\prime \prime}}) \longrightarrow {\rm som}(\Delta_{\gamma})$ as
\begin{equation}
\tl{\Psi}(w)=
\begin{cases}
\Psi(w) & (w\in {\rm som}(\gamma^{\prime \prime})),\\
\{0\} & (w\in {\rm som}(\gamma^{\prime})),
\end{cases}
\end{equation}
we see that the prime polytope $\Box_{\gamma^{\prime \prime}}$ majorizes $\Delta_{\gamma}$.
\vspace{3mm}
\begin{center}
\includegraphics[scale=0.75]{boxgamma.eps}
Figure 1
\end{center}
\begin{proposition}\label{cor:7-6-6}
For the closure $\overline{Z_{\Delta_{\gamma}}^*}$ of $Z_{\Delta_{\gamma}}^*$ in $X_{\Box_{\gamma^{\prime \prime}}}$, we have
\begin{equation}
\sum_q e^{p,q}(\overline{Z_{\Delta_{\gamma}}^*})_1=\sum_{\tau\prec \gamma^{\prime \prime}} (-1)^{\d \tau+p} \binom{\d \tau}{p}.
\end{equation}
\end{proposition}
\begin{proof}
It suffices to rewrite Proposition \ref{prp:7-6-5} in this case. For a face $\Gamma$ of $\Box_{\gamma^{\prime\prime}}$, we set $\dd_{\Gamma}=\d \Gamma - \d \tl{\Psi} (\Gamma)$. Note that the set of faces of $\Box_{\gamma^{\prime \prime}}$ consists of those of $\gamma^{\prime}$ and $\gamma^{\prime \prime}$ and side faces. Each side face of $\Box_{\gamma^{\prime\prime}}$ is a truncated pyramid $\Box_{\tau}$ whose bottom is $\tau\prec \gamma^{\prime \prime}$. Since $\d \Box_{\tau}=\d \tau +1$ and $\dd_{\Box_{\tau}}=\dd_{\tau}$ for $\tau \prec \gamma^{\prime \prime}$, we have
\begin{equation}
\sum_{\Gamma\prec \Box_{\gamma^{\prime \prime}}} (-1)^{\d\Gamma+p+1} \left\{\binom{\d \Gamma}{p+1}-\binom{\dd_{\Gamma}}{p+1}\right\}= \sum_{\tau \prec \gamma^{\prime \prime}}(-1)^{\d \tau +p} \binom{\d \tau}{p}
\end{equation}
and
\begin{eqnarray}
\lefteqn{\sum_{\Gamma \prec \Box_{\gamma^{\prime \prime}}}(-1)^{\d \Gamma +1}\sum_{i=0}^{\min\{\dd_{\Gamma},p\}}\binom{\dd_{\Gamma}}{i}(-1)^i \varphi_{1,\d \tl{\Psi}(\Gamma)-p+i}(\tl{\Psi}(\Gamma))}\nonumber \\
&=& \sum_{\tau \prec \gamma^{\prime \prime}}(-1)^{\d \tau +1}\sum_{i=0}^{\min\{\dd_{\tau},p\}}\binom{\dd_{\tau}}{i}(-1)^i \nonumber \\
& & \hspace{10mm}\times \left\{\varphi_{1,\d \Psi(\tau)-p+i}(\Psi(\tau))-\varphi_{1, \d \tl{\Psi}(\Box_{\tau})-p+i}(\tl{\Psi}(\Box_{\tau}))\right\},
\end{eqnarray}
where the faces $\tau$ of the top $\gamma^{\prime}$ of $\Box_{\gamma^{\prime \prime}}$ can be neglected owing to the condition $\d \tl{\Psi}(\tau)=0$. By $\tl{\Psi}(\Box_{\tau})=\Delta_{\Psi(\tau)}$ and Lemma \ref{lem:7-6-7} below, the last term is equal to $0$.\qed
\end{proof}
\begin{lemma}\label{lem:7-6-7}
For any face $\gamma$ of $\Gamma_+(f)$ such that $\gamma \subset \Gamma_f$, we have
\begin{equation}\label{eq:7-6-7}
\varphi_{1,j+1}(\Delta_{\gamma})=\varphi_{1,j}(\gamma).
\end{equation}
\end{lemma}
\begin{proof}
By the relation $l^*((k+1)\Delta_{\gamma})_1-l^*(k\Delta_{\gamma})_1=l^*(k\gamma)_1$ ($k \geq 0$) we have
\begin{equation}
P_1(\Delta_{\gamma};t)=tP_1(\gamma;t).
\end{equation}
By comparing the coefficients of $t^{j+1}$ in both sides, we obtain \eqref{eq:7-6-7}.\qed
\end{proof}
The following proposition is a key in the proof of Proposition \ref{prp:7-6-2}.
\begin{proposition}\label{thm:7-6-10}
For any face $\gamma$ of $\Gamma_+(f)$ such that $\gamma \subset \Gamma_f$, we have
\begin{equation}\label{eq:7-6-10}
e(\chi_h([Z_{\Delta_{\gamma}}^*]+[Z_{\gamma}^*]))_1=(t_1t_2-1)^{\d \gamma}.
\end{equation}
\end{proposition}
\begin{proof}
It is enough to prove
\begin{equation}\label{eq:7-6-10-1}
e^{p,q}(Z_{\gamma}^*)_1 +e^{p,q}(Z_{\Delta_{\gamma}}^*)_1=(-1)^{\d \gamma +p}\binom{\d \gamma}{p} \cdot \delta_{p,q},
\end{equation}
where $\delta_{p,q}$ is Kronecker's delta. We consider the closure $\overline{Z_{\Delta_{\gamma}}^*}$ of $Z_{\Delta_{\gamma}}^*$ in $X_{\Box_{\gamma^{\prime \prime}}}$. Then by the proofs of Propositions \ref{prp:7-6-5} and \ref{cor:7-6-6}, we have
\begin{align}
e^{p,q}(\overline{Z_{\Delta_{\gamma}}^*})_1
&= \sum_{\tau \prec \gamma^{\prime \prime}}\left\{e^{p,q}((\CC^*)^{
\dd_{\tau}}\times Z_{\Psi(\tau)}^*)_1 +e^{p,q}((\CC^*)^{\dd_{\Box_{\tau}}}\times Z_{\tl{\Psi}(\Box_{\tau})}^*)_1\right\}\\
&= \sum_{\tau \prec \gamma^{\prime \prime}} \sum_{i=0}^{\min\{\dd_{\tau},p\}}\binom{\dd_{\tau}}{i}(-1)^{i+\dd_{\tau}} \left\{e^{p-i,q-i}(Z_{\Psi(\tau)}^*)_1 +e^{p-i,q-i}(Z_{\Delta_{\Psi(\tau)}}^*)_1\right\}.
\label{eq:7-6-10-2}
\end{align}
Let us prove \eqref{eq:7-6-10-1} by induction on $\d \gamma$. In the case $\d \gamma=0$, we can easily prove \eqref{eq:7-6-10-1} by Propositions \ref{prp:2-15} and \ref{prp:2-19}. Assume that \eqref{eq:7-6-10-1} holds for any $\sigma\subset \Gamma_f$ such that $\d \sigma <\d \gamma$. Then by $\dd_{\gamma^{\prime \prime}}=0$ and \eqref{eq:7-6-10-2} we have
\begin{equation}\label{eq:7-6-10-4}
e^{p,q}(\overline{Z_{\Delta_{\gamma}}^*})_1=e^{p,q}(Z_{\gamma}^*)_1 +e^{p,q}(Z_{\Delta_{\gamma}}^*)_1+ \delta_{p,q}\sum_{\tau \precneqq \gamma^{\prime \prime}} (-1)^{\d \tau +p}\binom{\d \tau}{p}.
\end{equation}
In the case $p+q> \d \gamma$, by Proposition \ref{prp:2-15} we have
\begin{equation}
e^{p,q}(\overline{Z_{\Delta_{\gamma}}^*})_1= \delta_{p,q}\sum_{\tau \prec \gamma^{\prime \prime}} (-1)^{\d \tau +p}\binom{\d \tau}{p}.
\end{equation}
Therefore, also in the case $p+q< \d \gamma$, by the Poincar{\'e} duality for $\overline{Z_{\Delta_{\gamma}}^*}$ ($\Box_{\gamma^{\prime \prime}}$ is prime) and Lemma \ref{lem:7-6-9} we have
\begin{eqnarray}
e^{p,q}(\overline{Z_{\Delta_{\gamma}}^*})_1
&=& e^{\d \gamma -p, \d \gamma-q}(\overline{Z_{\Delta_{\gamma}}^*})_1\\
&=& \delta_{p,q}\sum_{\tau \prec \gamma^{\prime \prime}} (-1)^{\d \tau +\d \gamma -p}\binom{\d \tau}{\d \gamma -p}\\
&=& \delta_{p,q}\sum_{\tau \prec \gamma^{\prime \prime}} (-1)^{\d \tau +p}\binom{\d \tau}{p}.
\end{eqnarray}
In the case $p+q=\d \gamma$, by Proposition \ref{cor:7-6-6} and the previous results we have
\begin{eqnarray}
e^{p,q}(\overline{Z_{\Delta_{\gamma}}^*})_1
&=&\sum_{q^{\prime}}e^{p,q^{\prime}}(\overline{Z_{\Delta_{\gamma}}^*})_1-(1-\delta_{p,q})e^{p,p}(\overline{Z_{\Delta_{\gamma}}^*})_1\\
&=&\delta_{p,q} \sum_{\tau \prec \gamma^{\prime \prime}} (-1)^{\d \tau +p}\binom{\d \tau}{p}.
\end{eqnarray}
By \eqref{eq:7-6-10-4}, we obtain \eqref{eq:7-6-10-1} for any $p,q$. \qed
\end{proof}
Now we can finish the proof of Proposition \ref{prp:7-6-2} as follows. By Proposition \ref{thm:7-6-10}, we have
\begin{align}
\sum_{\gamma \subset \Gamma_f}e\( \chi_h\( (1-\LL)^{m_{\gamma}+1}([
Z_{\Delta_{\gamma}}^*]+[Z_{\gamma}^*])\)\)_1
&=\sum_{\gamma \subset \Gamma_f} (1-t_1t_2)^{m_{\gamma}+1}(t_1t_2-1)^{\d \gamma}\\
&= \sum_{l=1}^n (1-t_1t_2)^l \sum_{\sharp S_{\gamma}=l}(-1)^{\d \gamma}\\
&= \sum_{l=1}^n (1-t_1t_2)^l \binom{n}{l}(-1)^{l-1}\\
&= 1-(t_1t_2)^n.
\end{align}
\qed
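For completeness, the last equality in the computation above is just the binomial theorem: since $(1-t_1t_2)^{l}(-1)^{l-1}=-(t_1t_2-1)^{l}$, we have
\[
\sum_{l=1}^n \binom{n}{l} (1-t_1t_2)^l (-1)^{l-1} = -\sum_{l=1}^n \binom{n}{l} (t_1t_2-1)^l = -\left\{ (t_1t_2)^n -1 \right\} = 1-(t_1t_2)^n.
\]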
\begin{remark}
Following the proof of \cite[Theorem 5.16]{M-T-3}, we can easily give another proof of the Steenbrink conjecture, which was proved by Varchenko-Khovanskii \cite{K-V} and Saito \cite{Saito-3} independently. For an introduction to this conjecture, see the excellent survey by Kulikov \cite{Kulikov}.
\end{remark}
\begin{remark}
For a polynomial map $f \colon \CC^n \longrightarrow \CC$, it is well-known that there exists a finite subset $B \subset \CC$ such that the restriction
\begin{equation}
\CC^n \setminus f^{-1}(B) \longrightarrow \CC \setminus B
\end{equation}
of $f$ is a locally trivial fibration. We denote by $B_f$ the smallest such subset $B \subset \CC$. For a point $b\in B_f$, take a small circle $C_{\e}(b)=\{x\in \CC\ |\ |x-b|=\e\}$ ($0<\e\ll 1$) around $b$ such that $B_f\cap \{x \in \CC\ |\ |x-b|\leq\e\}=\{b\}$. Then by the restriction of $\CC^n \setminus f^{-1}(B_f) \longrightarrow \CC \setminus B_f$ to $C_{\e}(b)\subset \CC\setminus B_f$ we obtain a geometric monodromy automorphism $\Phi_f^b \colon f^{-1}(b+\e) \simto f^{-1}(b+\e)$ and the linear maps
\begin{equation}
\Phi_j^b \colon H^j(f^{-1}(b+\e ) ;\CC) \overset{\sim}{\longrightarrow} H^j(f^{-1}(b+\e ) ;\CC) \ \ (j=0,1,\ldots)
\end{equation}
associated to it. The eigenvalues of $\Phi_j^b$ were studied in \cite[Sections 3 and 4]{M-T-2} etc. If $f$ is tame at infinity, as in \cite[Section 4]{M-T-3} we can introduce a motivic Milnor fiber $\SS_f^{b} \in \M_{\CC}^{\hat{\mu}}$ along the central fiber $f^{-1}(b)$ to calculate the numbers of the Jordan blocks for the eigenvalues $\lambda \neq 1$ in $\Phi_{n-1}^b$. This result can be easily obtained by using the proof of Sabbah \cite[Theorem 13.1]{Sabbah-2}. It would be an interesting problem to construct a motivic object to calculate the eigenvalue $1$ part of $\Phi_{n-1}^b$.
\end{remark}
\subsection{The locally greedy algorithm} \label {sec:locally_greedy}
A simple approach to the assignment problem is the following greedy
procedure: the algorithm steps through all $\NumPartitions$ positions (according
to some fixed, arbitrary ordering). For position $\partitionindex$, it simply
chooses the item that increases the total value as much as possible,
i.e., it chooses
\[
s_\partitionindex= \mathop{\rm arg\,max}_{s \in \ensuremath{P}_\partitionindex} \set { f(\{s_1,\dots,s_{\partitionindex-1} \} + s) }\mbox{,}
\]
where, for a set $S$ and element $e$, we write $S + e$ for $S \cup \set{e}$.
Perhaps surprisingly, no matter which ordering over the positions is chosen, this so-called \emph {locally greedy algorithm} produces an assignment that obtains at least half the optimal value~\cite{fisher78}. In fact, the following more general result holds. We will use this lemma in the analysis of our improved offline algorithm, which uses the locally greedy algorithm as a subroutine.
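As an illustration, the locally greedy procedure can be sketched in a few lines of Python. The coverage objective used at the end is a toy stand-in for $f$ (it is not part of the paper's setting); any monotone submodular set function over hashable items would do.

```python
def locally_greedy(f, partitions):
    """One greedy pass over the positions: for position k, pick the item
    of P_k with the largest marginal gain of f over the items chosen so far."""
    chosen = []
    for P_k in partitions:
        # argmax over P_k of f(chosen + s)
        best = max(P_k, key=lambda s: f(set(chosen) | {s}))
        chosen.append(best)
    return set(chosen)


# toy coverage objective: f(S) = number of ground elements covered by S
covers = {'a': {1, 2}, 'b': {1}, 'c': {2, 3}, 'd': {1}}
f = lambda S: len(set().union(*(covers[s] for s in S))) if S else 0
solution = locally_greedy(f, [['a', 'b'], ['c', 'd']])   # -> {'a', 'c'}
```

Note that the ordering of the partitions is fixed but arbitrary, exactly as in the description above.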
\begin {lem} \label {lem:locally_greedy}
Suppose $f : 2^\groundset \to \ensuremath{\mathbb{R}_{\ge 0}}$ is of the form
$f(\assignment) = f_0(\assignment) + \sum_{\partitionindex=1}^\NumPartitions f_\partitionindex(\assignment \cap P_\partitionindex)$
where $f_0 : 2^\groundset \to \ensuremath{\mathbb{R}_{\ge 0}}$ is monotone submodular, and
$f_\partitionindex : P_\partitionindex \to \ensuremath{\mathbb{R}_{\ge 0}}$ is arbitrary for $\partitionindex \ge 1$.
Let $L$ be the solution returned by the locally greedy algorithm. Then
$f(L) + f_0(L) \ge \max_{\assignment \in \Feasible} \set { f(\assignment) }$.
\end {lem}
\ifthenelse{\boolean{istechrpt}}
{The proof is given in Appendix A.}
{The proof is given in the supplementary material \cite{supplementary}.}
Observe that in the special case where $f_\partitionindex = 0$ for all $\partitionindex \ge 1$, Lemma \ref
{lem:locally_greedy} says that $f(L) \ge \frac 1 2 \max_{\assignment \in \Feasible} f(\assignment)$.
The following example shows that the $1/2$ approximation ratio is tight. Consider an instance of the ad allocation problem with two ads, two positions and two users, Alice and Bob. Alice is interested in ad 1, but has a very short attention span: She will only click on the ad if it appears in the first position. Bob is interested in ad 2, and will look through all positions. Now suppose that Alice searches slightly less frequently (with probability $\frac{1}{2}-\varepsilon$) than Bob (who searches with probability $\frac{1}{2}+\varepsilon$). The greedy algorithm first chooses the ad to assign to slot 1. Since the ad is more likely to be shown to Bob, the algorithm chooses ad 2, with an expected utility of $\frac{1}{2}+\varepsilon$. Since Alice will only look at position 1, no ad assigned to slot 2 can increase the expected utility further. On the other hand, the optimal solution is to assign ad 1 to slot 1, and ad 2 to slot 2, with an expected utility of 1.
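The instance above is small enough to check numerically. The following sketch hard-codes the two-ad, two-slot, two-user example (with an illustrative value $\varepsilon = 0.01$; the users' search probabilities and click behavior are exactly as described in the text) and confirms that greedy obtains $\frac{1}{2}+\varepsilon$ while the optimum is $1$.

```python
# Two ads, two slots, two users; eps is an illustrative value.
eps = 0.01
p_alice, p_bob = 0.5 - eps, 0.5 + eps   # search probabilities

def utility(assignment):
    """Expected clicks for a partial assignment {slot: ad}."""
    u = 0.0
    if assignment.get(1) == 1:       # Alice wants ad 1 but only looks at slot 1
        u += p_alice
    if 2 in assignment.values():     # Bob wants ad 2 and scans both slots
        u += p_bob
    return u

# locally greedy: fill slot 1 first, then slot 2
greedy = {}
for slot in (1, 2):
    greedy[slot] = max((1, 2), key=lambda ad: utility({**greedy, slot: ad}))

# brute-force optimum over all four complete assignments
opt = max(utility({1: a1, 2: a2}) for a1 in (1, 2) for a2 in (1, 2))
# utility(greedy) equals 0.5 + eps, while opt equals 1.0
```

Greedy puts ad 2 in slot 1 (value $0.5+\varepsilon$), after which no choice for slot 2 adds value; the optimum assigns ad 1 to slot 1 and ad 2 to slot 2.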
\subsection {An algorithm with optimal approximation ratio} \label {sec:offline}
We now present an algorithm that achieves the optimal approximation
ratio of $1 - 1/e$, improving on the $\frac 1 2$ approximation
for the locally greedy algorithm. Our algorithm associates with each
partition $\ensuremath{P}_\partitionindex$ a \emph {color} $\ensuremath{c}_\partitionindex$ from a palette
$\ensuremath{\bracket{\ensuremath{C}}}$ of $\ensuremath{C}$ colors, where
we use the notation $\bracket{n}=\set{1,2,\dots,n}$.
For any set $S \subseteq \groundset \times \ensuremath{\bracket{\ensuremath{C}}}$ and vector
$\ensuremath{ \vec{c} } = (\ensuremath{c}_1, \ldots, \ensuremath{c}_\NumPartitions)$, define\vspace{-2mm}
\[
\textsf{sample}_{\ensuremath{ \vec{c} }}(S) = \bigcup_{\partitionindex=1}^\NumPartitions \set { x \in \ensuremath{P}_\partitionindex: (x, \ensuremath{c}_\partitionindex) \in S } \mbox { .}\vspace{-1mm}
\]
Given a set $S$ of (item, color) pairs, which we may think of as
labeling each item with one or more colors, $\textsf{sample}_{\ensuremath{ \vec{c} }}(S)$
returns a set containing each item $x$ that is labeled with whatever color
$\ensuremath{ \vec{c} }$ assigns to the partition that contains $x$.
Let $F(S)$ denote the expected value of $f(\textsf{sample}_{\ensuremath{ \vec{c} }}(S))$ when each color $\ensuremath{c}_\partitionindex$ is selected uniformly at random from $\ensuremath{\bracket{\ensuremath{C}}}$. We consider the following algorithm.
\SetKw {KwForEach} {for each}
\SetKw {KwFrom} {from}
\SetKw {KwSet} {set}
\SetKw {KwReturn} {return}
\begin {algorithm}
\Titleofalgo { {\sc TabularGreedy}\xspace }
\KwIn {integer $\ensuremath{C}$, sets $\ensuremath{P}_1$, $\ensuremath{P}_2$, \ldots, $\ensuremath{P}_\NumPartitions$, function $f: 2^\groundset \to \ensuremath{\mathbb{R}_{\ge 0}}$ (where $\groundset = \bigcup_{\partitionindex=1}^\NumPartitions \ensuremath{P}_\partitionindex$)}
\ \\
\KwSet $G := \emptyset$. \\
\For(\tcc*[f]{For each color \hspace{1.2em} }){$\colorindex$ \KwFrom $1$ \KwTo $\ensuremath{C}$}{
\For(\tcc*[f]{For each partition}){$\partitionindex$ \KwFrom $1$ \KwTo $\NumPartitions$} {
\KwSet $g_{\partitionindex,\colorindex} = \mathop{\rm arg\,max}_{x \in \Partition_\partitionindex \times \set{\colorindex}} \set { F(G
+ x) }$ \tcc*[f]{Greedily pick $g_{\partitionindex,\colorindex}$\hspace{0.4em} } \\
\KwSet $G := G + g_{\partitionindex,\colorindex}$\;
}
}
\KwForEach $\partitionindex \in \bracket{\NumPartitions}$, choose $\ensuremath{c}_\partitionindex$ uniformly at random
from $\ensuremath{\bracket{\ensuremath{C}}}$. \\
\KwReturn $\textsf{sample}_{\ensuremath{ \vec{c} }}(G)$, where $\ensuremath{ \vec{c} } := (\ensuremath{c}_1, \ldots, \ensuremath{c}_\NumPartitions)$.
\end {algorithm}
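The pseudocode above admits a direct, if naive, transcription into Python. In this sketch $F$ is computed \emph{exactly} by enumerating all $\ensuremath{C}^{\NumPartitions}$ colorings, so it is only feasible for tiny instances; in practice one would estimate $F$ by repeated sampling, at the cost of an approximate $\mathop{\rm arg\,max}$ in the inner loop. All names are illustrative.

```python
import itertools
import random

def tabular_greedy(f, partitions, C, rng=random):
    """Toy transcription of TabularGreedy.  F is the expectation of
    f(sample_c(G)) over uniform random colorings; here it is computed
    exactly by enumerating all C**K colorings."""
    K = len(partitions)

    def sample(G, colors):
        # keep item x of partition k iff the pair (x, colors[k]) is in G
        return {x for k, P in enumerate(partitions)
                  for x in P if (x, colors[k]) in G}

    def F(G):
        colorings = itertools.product(range(C), repeat=K)
        return sum(f(sample(G, c)) for c in colorings) / C ** K

    G = set()
    for color in range(C):                     # outer loop over colors
        for P in partitions:                   # inner locally greedy pass
            g = max(((x, color) for x in P), key=lambda pair: F(G | {pair}))
            G.add(g)
    colors = tuple(rng.randrange(C) for _ in range(K))
    return sample(G, colors)
```

With $\ensuremath{C}=1$ the random rounding is trivial and the routine reduces to the locally greedy algorithm of \secref{sec:locally_greedy}, as noted below.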
Observe that when $\ensuremath{C} = 1$, the $\textsf{sample}_{\ensuremath{ \vec{c} }}$ function
is deterministic and {\sc TabularGreedy}\xspace is simply the locally greedy
algorithm from \secref{sec:locally_greedy}.
In the limit as $\ensuremath{C} \to \infty$, {\sc TabularGreedy}\xspace can intuitively be viewed
as an algorithm for a continuous extension of the
problem followed by a rounding procedure, in the
same spirit as Vondr\'{a}k's \emph{continuous-greedy
algorithm}~\cite{calinescu07}.
In our case, the continuous extension is to compute a probability
distribution $D_\partitionindex$ for each position $\partitionindex$ with support in $P_\partitionindex$ (plus
a special ``select nothing'' outcome), such that if we independently
sample an element $x_\partitionindex$ from $D_\partitionindex$,
$\E {f(\set{x_1, \ldots, x_\NumPartitions})}$ is maximized.
It turns out that if the positions individually, greedily, and in
round-robin fashion, add infinitesimal units of probability mass to their
distributions so as to maximize this objective function, they achieve the
same objective function value as if, rather than making decisions in a
round-robin fashion, they had cooperated and added the combination of $\NumPartitions$
infinitesimal probability mass units (one per position) that greedily maximizes the
objective function. The latter process, in turn, can be shown to be
equivalent to a greedy algorithm for maximizing a (different) submodular
function subject to a cardinality constraint, which implies that it
achieves a $1-1/e$ approximation ratio~\cite{nemhauser78}.
{\sc TabularGreedy}\xspace represents a tradeoff between these two extremes;
its performance is summarized by the following theorem.
For now, we assume that the $\mathop{\rm arg\,max}$ in the inner loop is computed exactly.
\ifthenelse{\boolean{istechrpt}}
{In Appendix A}
{In the supplementary material \cite {supplementary},}
we bound the performance loss that results from approximating the $\mathop{\rm arg\,max}$
(e.g., by estimating $F$ by repeated sampling).
\begin {thm} \label {thm:offline_greedy}
Suppose $f$ is monotone submodular. Then
$F(G) \ge \beta(\NumPartitions, \ensuremath{C}) \cdot \max_{\assignment \in \Feasible} \set {f(\assignment)}$,
where $\beta(\NumPartitions, \ensuremath{C})$ is defined as $1 - (1 - \frac {1} {\ensuremath{C}})^{\ensuremath{C}} - {\NumPartitions \choose 2} \ensuremath{C}^{-1} $.
\end {thm}
It follows that, for any $\varepsilon>0$, {\sc TabularGreedy}\xspace achieves a $(1-1/e-\varepsilon)$ approximation factor using a number of colors that is polynomial in $\ensuremath{K}$ and $1/\varepsilon$.
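To get a feel for the bound $\beta(\NumPartitions, \ensuremath{C})$, one can tabulate it numerically. The sketch below (with illustrative values $\NumPartitions = 5$ and $\varepsilon = 0.05$) searches for the smallest number of colors achieving a $(1-1/e-\varepsilon)$ factor.

```python
import math
from math import comb

def beta(K, C):
    """The theorem's bound: 1 - (1 - 1/C)**C - binom(K, 2)/C."""
    return 1 - (1 - 1 / C) ** C - comb(K, 2) / C

# smallest number of colors giving a (1 - 1/e - eps) approximation for K = 5
K, eps = 5, 0.05
target = 1 - 1 / math.e - eps
C_needed = next(c for c in range(1, 10_000) if beta(K, c) >= target)
```

Since the penalty term decays like $\binom{\NumPartitions}{2}/\ensuremath{C}$, the required number of colors grows roughly quadratically in $\NumPartitions$ and linearly in $1/\varepsilon$, consistent with the polynomial dependence stated above.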
The theorem will follow immediately from the combination of two key
lemmas, which we now prove. Informally, Lemma
\ref {lem:outer_loop} analyzes the approximation error due to the
outer greedy loop of the algorithm, while Lemma \ref {lem:inner_loop}
analyzes the approximation error due to the inner loop.
\begin {lem} \label {lem:outer_loop}
Let $G_{\colorindex} = \set { g_{1,{\colorindex}}, g_{2, {\colorindex}}, \ldots, g_{\NumPartitions, {\colorindex}} }$, and let $G^-_{\colorindex} = G_1 \cup G_2 \cup \ldots \cup G_{{\colorindex}-1}$.
For each color ${\colorindex}$, choose $E_{\colorindex} \in \mathbb R$ such that
$
F( G^-_{\colorindex} \cup G_{\colorindex} )
\ge \max_{x \in \ensuremath{\mathcal{R}}_{\colorindex}} \set { F(G^-_{\colorindex} \cup x) } - E_{\colorindex}
$
where $\ensuremath{\mathcal{R}}_{\colorindex} := \set{\ensuremath{R} : \forall \partitionindex \in \bracket{\NumPartitions}, |\ensuremath{R} \cap (\Partition_\partitionindex \times \set {{\colorindex}})| = 1}$ is the set of all possible
choices for $G_{\colorindex}$.
Then
\begin {equation} \label {eq:guarantee}
F(G) \ge \beta(\C) \cdot \max_{\assignment \in \Feasible} \set {f(\assignment)} - \sum_{\c=1}^\C E_\c \mbox { .}
\end {equation}
where $\beta(\C) = 1 - \paren {1 - \frac {1} {\C}}^\C$.
\end {lem}
\begin {proof} [Proof (Sketch)]
We will refer to an element $\ensuremath{R}$ of $\ensuremath{\mathcal{R}}_{\colorindex}$ as a \emph {row}, and to ${\colorindex}$
as the color of the row. Let $\ensuremath{\mathcal{R}_{\bracket{\C}}} := \bigcup_{{\colorindex}=1}^\C \ensuremath{\mathcal{R}}_{\colorindex}$
be the set of all rows. Consider the function $H:
2^{\ensuremath{\mathcal{R}_{\bracket{\C}}}} \to \ensuremath{\mathbb{R}_{\ge 0}}$,
defined as $H(\ensuremath{\mathcal{R}}) = F \paren{ \bigcup_{\ensuremath{R} \in \ensuremath{\mathcal{R}}} \ensuremath{R} }$.
We will prove the lemma in three steps:
(\emph{i}) $H$ is monotone submodular,
(\emph{ii}) {\sc TabularGreedy}\xspace is simply the locally greedy algorithm for finding a
set of $\NumPartitions$ rows that maximizes $H$,
where the $\c^{\text{th}}$ greedy step is performed with additive error $E_{\c}$,
and
(\emph{iii}) {\sc TabularGreedy}\xspace obtains the guarantee \eqref
{eq:guarantee} for maximizing $H$, and this implies the same ratio
for maximizing $F$.
To show that $H$ is monotone submodular, it suffices to show that $F$ is monotone submodular. Because $F(\assignment) = \Esub{\ensuremath{ \vec{c} }}{f(\textsf{sample}_{\ensuremath{ \vec{c} }}(\assignment))}$, and because a convex combination of monotone submodular functions is monotone submodular, it suffices to show that for any particular coloring $\ensuremath{ \vec{c} }$, the function $f(\textsf{sample}_{\ensuremath{ \vec{c} }}(\assignment))$ is monotone submodular. This follows from the definition of $\textsf{sample}$ and the fact that $f$ is monotone submodular.
The second claim is true by inspection.
To prove the third claim, we use the fact that $F(G^-_{\colorindex} \cup \ensuremath{R})$ can always be
maximized by choosing a row $\ensuremath{R} \in \ensuremath{\mathcal{R}}_{\colorindex}$. Informally, this is
because $F(G^-_{\colorindex} \cup \ensuremath{R})$ can always be maximized by choosing a
row whose color has not already been used, and all
colors $\ge {\colorindex}$ are interchangeable. For problems with this special
property, it is known that the locally greedy algorithm obtains an
approximation ratio of $\beta(\C) = 1 - (1 - \frac 1 \C)^\C$~\cite{nemhauser78}.
Theorem 6 of \cite {streeter07tr} extends this result to handle
additive error, and yields
\vspace{-2mm}
\[
F(G) = H( \set {G_1, G_2, \ldots, G_\C} ) \ge \beta(\C) \cdot
\max_{\ensuremath{\mathcal{R}} \subseteq \ensuremath{\mathcal{R}_{\bracket{\C}}}: |\ensuremath{\mathcal{R}}| \le \C} \set { H(\ensuremath{\mathcal{R}}) } - \sum_{{\colorindex}=1}^\C E_{\colorindex} \mbox { .}
\vspace{-2mm}\]
To complete the proof, it suffices to show that
$\max_{\ensuremath{\mathcal{R}} \subseteq \ensuremath{\mathcal{R}_{\bracket{\C}}}: |\ensuremath{\mathcal{R}}| \le \C} \set { H(\ensuremath{\mathcal{R}}) }
\ge \max_{\assignment \in \Feasible} \set { f(\assignment) } $. This follows from the
fact that for any assignment $\assignment \in \Feasible$, we can find a set $\ensuremath{\mathcal{R}}(\assignment)$ of $\C$
rows such that $\textsf{sample}_{\Colorvec}(\bigcup_{\ensuremath{R} \in \ensuremath{\mathcal{R}}(\assignment)} R) = \assignment$ with
probability 1, and therefore $H(\ensuremath{\mathcal{R}}(\assignment)) = f(\assignment)$.
\end {proof}
We now bound the performance of the inner loop of {\sc TabularGreedy}\xspace.
\begin {lem} \label {lem:inner_loop}
Let $f^* = \max_{\assignment \in \Feasible} \set {f(\assignment)}$,
and let $G_{\colorindex}$, $G^-_{\colorindex}$, and $\ensuremath{\mathcal{R}}_{\colorindex}$ be defined as in the statement of Lemma \ref
{lem:outer_loop}. Then, for any ${\colorindex} \in \bracket{\C}$,
\[
F (G^-_{\colorindex} \cup G_{\colorindex}) \ge \max_{\ensuremath{R} \in \ensuremath{\mathcal{R}}_{\colorindex}} \set {
F(G^-_{\colorindex} \cup \ensuremath{R}) } - {\NumPartitions \choose 2} \C^{-2} f^*
\mbox { .}
\]
\end {lem}
\begin {proof} [Proof (Sketch)]
Let $N$ denote the number of partitions whose color is ${\colorindex}$.
For $\ensuremath{R} \in \ensuremath{\mathcal{R}}_{\colorindex}$, let
$\Delta_{\Colorvec}(\ensuremath{R}) := f(\textsf{sample}_{\Colorvec}(G^-_{\colorindex} \cup \ensuremath{R})) -
f(\textsf{sample}_{\Colorvec}(G^-_{\colorindex}))$, and let $F_{\colorindex}(\ensuremath{R}) := F(G^-_{\colorindex} \cup \ensuremath{R}) - F(G^-_{\colorindex})$. By definition,
$F_{\colorindex}(\ensuremath{R}) = \Esub{\Colorvec}{\Delta_{\Colorvec}(\ensuremath{R})} = \Pr{N=1} \Esub{\Colorvec} { \Delta_{\Colorvec}(\ensuremath{R}) | N = 1 } + \Pr{N \ge 2} \Esub{\Colorvec}{\Delta_{\Colorvec}(\ensuremath{R}) | N \ge 2}$,
where we have used the fact that $\Delta_{\Colorvec}(\ensuremath{R}) = 0$ when $N =
0$.
The idea of the proof is that the first of these terms dominates as
$\C \to \infty$, and that $ \Esub{\Colorvec} { \Delta_{\Colorvec}(\ensuremath{R}) | N = 1 }$
can be optimized exactly
simply by optimizing each element of
$P_\partitionindex \times \set{{\colorindex}}$ independently.
Specifically, it can be seen that $\Esub{\Colorvec}{ \Delta_{\Colorvec}(\ensuremath{R}) | N = 1 } =
\sum_{\partitionindex=1}^\NumPartitions f_\partitionindex(\ensuremath{R} \cap (P_\partitionindex \times \set{{\colorindex}}))$ for suitable $f_\partitionindex$.
Additionally, $f_0(\ensuremath{R}) = \Pr{N \ge 2} \Esub{\Colorvec}{\Delta_{\Colorvec}(\ensuremath{R}) | N \ge
2}$ is a monotone submodular function of a set of (item, color) pairs, for the same reasons $F$ is.
Applying Lemma \ref {lem:locally_greedy} with these
$\set{f_\partitionindex : \partitionindex \ge 0}$ yields
\[
F_{\colorindex}(G_{\colorindex}) + \Pr{N \ge 2} \Esub{\Colorvec}{\Delta_{\Colorvec}(G_{\colorindex}) | N \ge 2} \ge \max_{R \in \ensuremath{\mathcal{R}}_{\colorindex}} \set { F_{\colorindex}(R) }
\mbox { .}
\]
To complete the proof, it suffices to show that $\Pr{N \ge 2} \le {\NumPartitions
\choose 2} \C^{-2}$ and that $\Esub{\Colorvec}{\Delta_{\Colorvec}(G_{\colorindex}) | N \ge 2}
\le f^*$.
The first inequality holds because, if we let $M$ be the number of \emph{pairs} of partitions that are both assigned color ${\colorindex}$, we have $\Pr{N \ge 2} = \Pr{M \ge 1}\le \E{M} = {\NumPartitions \choose 2} \C^{-2}$.
The second inequality follows from the fact that for any
$\Colorvec$ we have $\Delta_{\Colorvec}(G_{\colorindex}) \le f(\textsf{sample}_{\Colorvec}(G^-_{\colorindex} \cup G_{\colorindex})) \le
f^*$.
\end {proof}
\section{Conclusions}\vspace{-2mm}
\label{sec:conclusions}
In this paper, we showed that important problems, such as ad display in
sponsored search and computing diverse rankings of information sources
on the web, require optimizing assignments under submodular utility
functions. We developed an efficient algorithm, {\sc TabularGreedy}\xspace, which
obtains the optimal approximation ratio of $(1-1/e)$ for
this NP-hard optimization problem. We also developed an online
algorithm, {\sc TGbandit}\xspace, that asymptotically achieves no
$(1-1/e)$-regret for the problem of repeatedly selecting informative
assignments, under the full-information and bandit-feedback
settings. Finally, we demonstrated that our algorithm outperforms
previous work
on two real world problems, namely online ranking of
informative blogs and ad allocation.
\subsection{Implementation details}
We evaluate {\sc TGbandit}\xspace experimentally on two applications: Learning to rank blogs that are effective in detecting cascades of information, and allocating advertisements to maximize revenue.
\subsection{Online learning of diverse blog rankings}
We consider the problem of ranking a set of blogs and news sources on the web.
Our approach is based on the following idea: A blogger writes a posting, and, after some time, other postings link to it, forming cascades of information propagating through the network of blogs.
More formally, an information cascade is a directed acyclic graph of vertices
(each vertex corresponds to a posting at some blog), where edges are
annotated by the time difference between the postings.
Based on this
notion of an information cascade, we would like to select blogs
that detect big cascades (containing many nodes) as early as
possible (i.e., we want to learn about an important event before
most other readers). In
\cite{leskovec07}
it is shown how one
can formalize this notion of utility using a monotone submodular function
that measures the informativeness of a subset of blogs. Optimizing the submodular function yields a small set of blogs that ``covers'' most cascades. This utility function prefers \emph{diverse} sets of blogs, minimizing the overlap of the detected cascades, and therefore minimizing redundancy.
The work of \cite{leskovec07} has two major shortcomings: Firstly, it selects a \emph{set} of blogs rather than a \emph{ranking}, which is of practical importance for the presentation on a web service. Secondly, it does not address the problem of \emph{sequential} prediction, where the set of blogs must be updated dynamically over time. In this paper, we address both shortcomings.
\paragraph{Results on offline blog ranking.}
\newcommand {\Blogs} {\mathcal{B}}
In order to model the blog ranking problem, we adopt the assumption that different users have different attention spans: Each user will only consider blogs appearing in a particular subset of positions. In our experiments, we assume that the probability that a user is willing to look at position $\partitionindex$ is proportional to $\gamma^\partitionindex$, for some discount factor $0<\gamma<1$.
More formally, let $g$ be the monotone submodular function measuring the informativeness of any set of blogs, defined as in \cite{leskovec07}.
Let $\ensuremath{P}_\partitionindex = \Blogs \times \set {\partitionindex}$, where $\Blogs$ is the set of blogs. Given an assignment $S \in \Feasible$, let $S^{[\partitionindex]} = S \cap \set { \ensuremath{P}_1 \cup \ensuremath{P}_2 \cup \ldots \cup \ensuremath{P}_\partitionindex }$ be the assignment of blogs to positions 1 through $\partitionindex$.
We define the \emph{discounted} value of the assignment $S$ as
$
f(S) = \sum_{\partitionindex=1}^\NumPartitions \gamma^\partitionindex \paren{ g(S^{[\partitionindex]})-g(S^{[\partitionindex-1]}) }.
$
It can be seen that $f: 2^\groundset \to \ensuremath{\mathbb{R}_{\ge 0}}$ is monotone submodular.
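The discounted objective has a direct transcription, assuming the informativeness function $g$ is supplied as a set function. The coverage function used in the usage check is a toy stand-in for the cascade objective of \cite{leskovec07}.

```python
def discounted_value(g, ranking, gamma=0.8):
    """f(S) = sum_k gamma**k * (g(S^[k]) - g(S^[k-1])), where S^[k] is the
    set of blogs placed in the top k positions of the ranking."""
    total, prev = 0.0, g(frozenset())
    for k in range(1, len(ranking) + 1):
        cur = g(frozenset(ranking[:k]))       # informativeness of the top k
        total += gamma ** k * (cur - prev)    # discounted marginal gain
        prev = cur
    return total
```

Because the marginal gains $g(S^{[\partitionindex]})-g(S^{[\partitionindex-1]})$ are weighted by the decaying factor $\gamma^\partitionindex$, rankings that place highly informative blogs early score strictly higher, which is what makes $f$ position-dependent even though $g$ is not.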
For our experiments, we use the data set of \cite{leskovec07}, consisting of 45,192 blogs, 16,551 cascades, and 2 million postings collected during 12 months of 2006. We use the \emph{population affected} objective of \cite{leskovec07}, and use a discount factor of $\gamma = 0.8$.
Based on this data, we run our {\sc TabularGreedy}\xspace algorithm with varying numbers of colors $\ensuremath{C}$ on the blog data set. \figref{fig:blogs_offline} presents the results of this experiment. For each value of $\ensuremath{C}$, we generate 200 rankings, and report both the average performance and the maximum performance over the 200 trials. Increasing $\ensuremath{C}$ leads to an improved performance over the locally greedy algorithm ($\ensuremath{C}=1$).
\paragraph{Results on online learning of blog rankings.}
We now consider the online problem where on each round $t$ we want to output a ranking $S_t$. After we select the ranking, a new set of cascades occurs, modeled using a separate submodular function $f_t$, and we obtain a reward of $f_t(S_t)$. In our experiments, we choose one assignment per day, and define $f_t$ as the utility associated with the cascades occurring on that day. Note that $f_t$ allows us to evaluate the performance of any possible ranking $S_t$, hence we can apply {\sc TGbandit}\xspace in the \emph{full-information feedback} model.
We compare the performance of our online algorithm using $\ensuremath{C} =
1$ and $\ensuremath{C} = 4$. \figref{fig:blogs_discount} presents the
average cumulative reward gained over time by both algorithms. We
normalize the average reward by the utility achieved by
the {\sc TabularGreedy}\xspace algorithm (with $\ensuremath{C} = 1$) applied to the
entire data set. \figref{fig:blogs_discount} shows that the
performance of both algorithms rapidly (within the first
47 rounds) converges to the performance of the offline
algorithm. The {\sc TGbandit}\xspace algorithm with $\ensuremath{C} = 4$ levels
out at an approximately 4\% higher reward than the
algorithm with $\ensuremath{C}=1$.
\begin{figure}[t]
\centering
\subfigure[Blogs: Offline results]{
\includegraphics[width=.31\textwidth]{fig/experiment_offline_discount45k}
\label{fig:blogs_offline}
}
\subfigure[Blogs: Online results]{
\includegraphics[width=.31\textwidth]{fig/experiment_online_discount1k_performance}
\label{fig:blogs_discount}
}
\subfigure[Ad display: Online results]{
\includegraphics[width=.31\textwidth]{fig/experiment_online_ads_markovian_users}
\label{fig:online_ads}
}
\caption{\small (a,b) Results for discounted blog ranking ($\gamma = 0.8$), in offline (a) and online (b) setting. (c) Performance of {\sc TGbandit}\xspace with $\C = 1$,
$2$, and $4$ colors for the sponsored search ad selection problem (each round is a query). Note that $\C = 1$ corresponds to the online
algorithm of~\cite {radlinski08, streeter08}.\vspace{-4mm}}
\end{figure}
\subsection{Online ad display} \label{ssec:online_ads}
\newcommand {\Ads} {\mathcal{A}}
\newcommand {\Pclick}{\ensuremath{p_{\text{click}}}}
\newcommand {\Pabandon}{\ensuremath{p_{\text{abandon}}}}
We evaluate {\sc TGbandit}\xspace for the sponsored search ad selection
problem in a simple Markovian model incorporating the value of diverse results and
complex position-dependence among clicks.
In this model, each user $u$ is defined by
two sets of probabilities:
$\Pclick(a)$ for each ad $a \in \Ads$, and $\Pabandon(\partitionindex)$ for each
position $\partitionindex \in \bracket{\NumPartitions}$. When presented with an assignment of ads
$\set{a_1, a_2, \ldots, a_\NumPartitions}$, where $a_\partitionindex$ occupies position $\partitionindex$,
the user scans the positions in increasing order.
For each position $\partitionindex$, the user clicks on $a_\partitionindex$ with probability
$\Pclick(a_\partitionindex)$, leaving the results page forever.
Otherwise, with probability $(1 - \Pclick(a_\partitionindex)) \cdot \Pabandon(\partitionindex)$, the user loses interest and
abandons the results without clicking on anything.
Finally, with probability $(1 - \Pclick(a_\partitionindex)) \cdot (1 - \Pabandon(\partitionindex))$, the user proceeds to look
at position $\partitionindex+1$.
The reward function $f_t$ is the number of clicks, which is either zero or
one. We only receive information about $f_t(S_t)$ (i.e., \emph{bandit feedback}).
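This user model is easy to simulate. The following Python sketch (ad names and probability tables are illustrative, not our experimental values) implements a single user's scan and the exact expected reward of an ad list:

```python
import random

# Sketch of the Markovian user model described above; ad names and the
# probability values below are illustrative, not the experimental ones.

def simulate_user(ads, p_click, p_abandon, rng):
    """Scan positions 1..K in order: click a_j w.p. p_click[a_j] and leave;
    otherwise abandon w.p. p_abandon(j); otherwise move to position j+1.
    Returns the round's reward f_t: 1 on a click, else 0."""
    for j, ad in enumerate(ads, start=1):
        if rng.random() < p_click[ad]:
            return 1
        if rng.random() < p_abandon(j):
            return 0
    return 0

def expected_clicks(ads, p_click, p_abandon):
    """Exact expected reward: sum over j of P(reach position j) * p_click[a_j]."""
    reach, total = 1.0, 0.0
    for j, ad in enumerate(ads, start=1):
        total += reach * p_click[ad]
        reach *= (1.0 - p_click[ad]) * (1.0 - p_abandon(j))
    return total

p_click = {"ad_x": 0.5, "ad_y": 0.5}
print(expected_clicks(["ad_x", "ad_y"], p_click, lambda j: 0.0))  # → 0.75
print(expected_clicks(["ad_x", "ad_y"], p_click, lambda j: 0.5))  # → 0.625
```

The two printed values correspond to the two user types used in our evaluation: one interested in all positions ($\Pabandon \equiv 0$) and one that quickly loses interest ($\Pabandon \equiv 0.5$).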
In our evaluation, there are two types of users: those interested in
all positions ($\Pabandon \equiv 0$), and those that quickly
lose interest ($\Pabandon \equiv 0.5$). For both types of users we
select $\Pclick(a)$ uniformly at random from $[0, 1]$, independently
for each ad $a$ (once chosen, $\Pclick$ is fixed for all rounds). We use $\NumPartitions = 6$ positions, and 6 available ads.
In~\figref{fig:online_ads} we compare the performance of {\sc TGbandit}\xspace with $\C = 4$ to the online
algorithm of~\cite {radlinski08, streeter08}, based on the average of 100 experiments. The latter algorithm is
equivalent to running {\sc TGbandit}\xspace with $\C = 1$. The former achieves parity with the latter
after roughly $10^6$ rounds, and dominates thereafter.
It can be shown that, with several different types of users having
distinct $\Pclick(\cdot)$ functions, the
offline problem of finding an assignment within a factor
$1 - \frac 1 e + \varepsilon$ of optimal is $\NP$-hard.
This is in contrast to the
case in which $\Pclick$ and $\Pabandon$ are the \emph {same} for all
users; in this case the offline problem simply requires finding an
optimal policy for a Markov decision process, which can be done
efficiently using well-known algorithms.
A slightly different Markov model of user behavior which is efficiently solvable was considered in~\cite{aggarwal08}.
In that model, $\Pclick$ and $\Pabandon$ are the same for all users, and
$\Pabandon$ is a function of the ad in the slot currently being
scanned rather than its index.
\section{Introduction} \label{sec:intro}
\vspace{-1mm}
Consider the problem of repeatedly choosing advertisements to display in sponsored search to maximize our revenue. In this problem, there is a small set of positions on the page, and each time a query arrives we would like to assign,
to each position,
one out of a large number of possible ads. In this and related problems that we call \emph{online assignment learning} problems,
there is a set of positions, a set of items, and a sequence of rounds, and on each round we must assign an item to each position. After each round, we obtain some reward
depending on the selected assignment, and we observe the value of the reward. When there is only one position, this problem
becomes the well-studied multiarmed bandit problem~\cite{auer2002}.
When the positions have a linear ordering
the assignment can be construed as a ranked list of elements, and the
problem becomes one of selecting lists online.
Online assignment learning thus models a central challenge in web
search, sponsored search, news aggregators, and recommendation
systems, among other applications.
A common assumption made in previous work on these problems is that the quality of an assignment is the sum of a function on the (item, position) pairs in the assignment. For example, online advertising models with \emph {click-through-rates}~\cite{edelman07a} make an assumption of this form.
More recently, there have been attempts to incorporate
the value of diversity in the reward function~\cite{radlinski08}.
Intuitively, even though the best $\ensuremath{K}$ results for the
query ``turkey'' might happen to be about the country, the best list
of $\ensuremath{K}$ results is likely to contain some recipes for the
bird as well. This will be the case if there are diminishing returns
on the number of relevant links presented to a user;
for example, if it is better to present each user with at least one relevant result
than to present half of the users with no relevant results and half with
two relevant results.
We incorporate these considerations in a flexible way by providing an algorithm that performs well whenever the reward for an assignment is a \emph{monotone submodular
function} of its set of (item, position) pairs.
Our key contributions are:
(\emph{i}) an efficient algorithm, {\sc TabularGreedy}\xspace, that provides a constant factor $(1-1/e)$ approximation to the problem of optimizing assignments under submodular utility functions,
(\emph{ii}) an algorithm for online learning of assignments, {\sc TGbandit}\xspace, that has strong performance guarantees in the \emph {no-regret} model, and
(\emph{iii}) an empirical evaluation on two problems of information gathering on the web.
\section{The assignment learning problem}
\label{sec:problem}
We consider problems where we have $\ensuremath{K}$ positions (e.g., slots for displaying ads), and need to assign to each position an item (e.g., an ad) in order to maximize a utility function (e.g., the revenue from clicks on the ads). We address both the \emph{offline} problem, where the utility function is specified in advance, and the \emph{online} problem, where a sequence of utility functions arrives over time, and we need to repeatedly select a new assignment.
\paragraph{The Offline Problem.}
In the offline problem we are given sets $\ensuremath{P}_1$, $\ensuremath{P}_2$, \ldots, $\ensuremath{P}_\ensuremath{K}$, where $\ensuremath{P}_\partitionindex$ is the set of items that may be placed
in position $\partitionindex$. We assume without loss of generality that these sets are disjoint.\footnote{If the same item can be placed in multiple positions, simply create multiple distinct copies of it.} An \emph{assignment} is a subset $\assignment\subseteq\groundset$, where $\groundset=\ensuremath{P}_1\cup\ensuremath{P}_2 \cup \dots\cup\ensuremath{P}_\ensuremath{K}$ is the set of all items.
We call an assignment \emph{feasible} if at most one item is assigned to each position (i.e., for all $\partitionindex$, $|\assignment \cap \ensuremath{P}_\partitionindex| \le 1$). We use $\Feasible$ to refer to the set of feasible assignments.
Our goal is to find a feasible assignment maximizing a utility function $f:2^{\groundset}\to\ensuremath{\mathbb{R}_{\ge 0}}$.
As we discuss later, many important assignment problems satisfy \emph{submodularity}, a natural diminishing returns property: Assigning a new item to a position $\partitionindex$ increases the utility more if few items have been assigned yet, and less if many items have already been assigned. Formally, a utility function $f$ is called submodular if for all $\assignment\subseteq\assignment'$ and $s\notin\assignment'$ it holds that
$f(\assignment\cup\set{s})-f(\assignment)\geq f(\assignment'\cup\set{s})-f(\assignment')$. We will also assume $f$ is monotone (i.e., for all $\assignment\subseteq\assignment'$, we have $f(\assignment)\leq f(\assignment')$). Our goal is thus, for a given non-negative, monotone and submodular utility function $f$, to find a feasible assignment $\assignment^*$ of maximum utility, $\assignment^*=\mathop{\rm arg\,max}_{\assignment\in\Feasible}\set{f(\assignment)}$.
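On small ground sets, the submodularity condition can be verified exhaustively. The following Python sketch checks the diminishing-returns inequality directly; the coverage utility and its element names are made up, chosen because coverage functions are a standard example of monotone submodularity:

```python
from itertools import combinations

# Brute-force check of the diminishing-returns definition on a tiny
# ground set. The coverage utility below is a made-up example.

COVER = {("x", 1): {1, 2}, ("y", 1): {2, 3},
         ("x", 2): {1, 2}, ("y", 2): {2, 3}}

def f(S):
    """Coverage utility: number of distinct elements covered by S."""
    covered = set()
    for s in S:
        covered |= COVER[s]
    return len(covered)

def is_submodular(f, ground):
    """Check f(A+s) - f(A) >= f(B+s) - f(B) for all A <= B, s not in B."""
    subsets = [set(c) for r in range(len(ground) + 1)
               for c in combinations(ground, r)]
    for A in subsets:
        for B in subsets:
            if not A <= B:
                continue
            for s in ground - B:
                if f(A | {s}) - f(A) < f(B | {s}) - f(B):
                    return False
    return True

print(is_submodular(f, set(COVER)))                     # → True
print(is_submodular(lambda S: len(S) ** 2, {1, 2, 3}))  # → False (supermodular)
```

The second call shows the check rejecting a function with \emph{increasing} returns, where adding an element to a larger set gains more.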
This optimization problem is NP-hard. In fact, a stronger negative result holds:
\begin{thm}[\cite{mirrokni08}]\label{thm:hardness}
For any $\epsilon > 0$, any algorithm guaranteed to obtain a solution within a
factor of $(1-1/e + \epsilon)$ of $\max_{\assignment\in\Feasible}
\set { f(\assignment) }$
requires exponentially many evaluations of $f$ in the worst case.
\end{thm}
In light of this negative result, we can only hope to obtain a solution that achieves a fraction of $(1-1/e)$ of the optimal value. In \secref{sec:offline} we develop such an algorithm.
\paragraph{The Online Problem.}
The offline problem is inadequate for dynamic settings, where
the utility function may change over time, and we need to repeatedly
select new assignments, trading off exploration (experimenting with ad
display to gain information about the utility function), and
exploitation (displaying ads which we believe will maximize utility).
More formally, we face a sequential decision problem, where,
on each round (which, e.g., corresponds to a user query for a
particular term), we want to
select an assignment $\assignment_{\ensuremath{t}}$ (ads to display). We
assume that the sets $\ensuremath{P}_1$, $\ensuremath{P}_2$, \ldots,
$\ensuremath{P}_\ensuremath{K}$ are fixed in advance for all
rounds.
After we select
the assignment we obtain reward
$f_{\ensuremath{t}}(\assignment_{\ensuremath{t}})$ for some monotone
submodular utility function $f_{\ensuremath{t}}$. We call the setting
where we do not get any information about $f_{\ensuremath{t}}$ beyond the
reward the \emph{bandit feedback} model.
In contrast, in the \emph{full-information feedback} model we obtain
oracle access to $f_{\ensuremath{t}}$ (i.e., we can evaluate
$f_{\ensuremath{t}}$ on arbitrary feasible assignments).
Both models arise in real applications, as we show in \secref{sec:evaluation}.
The goal is to maximize the total reward we obtain, namely
$\sum_{\ensuremath{t}} f_{\ensuremath{t}}(\assignment_{\ensuremath{t}})$.
Following the multiarmed bandit
literature, we evaluate our performance after $\ensuremath{T}$ rounds by comparing our
total reward against that obtained by a clairvoyant algorithm with
knowledge of the sequence of functions $\tuple{f_1, \ldots, f_{\ensuremath{T}}}$,
but with the restriction that it must select the same assignment
on each round.
The difference between the clairvoyant algorithm's total reward and
ours is called our \emph{regret}.
The goal is then to develop an algorithm whose
expected regret grows sublinearly in the number of rounds; such an algorithm is
said to have (or be) \emph{no-regret}.
However, since sums of submodular functions remain submodular, the clairvoyant algorithm has to solve an offline assignment problem with $f(\assignment)=\sum_{\ensuremath{t}} f_{\ensuremath{t}}(\assignment)$. Considering \thmref{thm:hardness}, no polynomial-time algorithm can possibly hope to
achieve a no-regret guarantee.
To accommodate this fact, we discount the reward of the clairvoyant algorithm by a factor of
$(1-1/e)$: We define the
\emph{$(1-1/e)$-regret} of a random sequence
$\tuple{\assignment_1, \ldots, \assignment_{\ensuremath{T}}}$
as
\[
\paren{1-\frac{1}{e}} \cdot \max_{\assignment \in \Feasible} \set { \sum_{\ensuremath{t} =
1}^{\ensuremath{T}} f_{\ensuremath{t}}(\assignment) }
\ - \
\expt{\sum_{\ensuremath{t} = 1}^{\ensuremath{T}} f_{\ensuremath{t}}(\assignment_{\ensuremath{t}}) } \mbox { .}
\]
Our goal is then to develop efficient algorithms whose
$(1-1/e)$-regret grows sublinearly in $\ensuremath{T}$.
\paragraph{Subsumed Models.}
Our model generalizes several common models for sponsored search ad
selection, and web search results.
These include models with \emph{click-through-rates}, in which
it is assumed that each (ad, position) pair has some probability
$p(a, \partitionindex)$ of being clicked on, and there is some monetary
reward $b(a)$ that is obtained whenever ad $a$ is clicked on.
Often, the \ctrs are assumed to be \emph{separable},
meaning $p(a, \partitionindex)$ has the functional form $\alpha(a) \cdot
\beta(\partitionindex)$ for some functions $\alpha$ and $\beta$.
See~\cite{feldman08,lahaie07} for more details on
sponsored search ad allocation. Note that in both of these cases, the
(expected) reward of a set $S$ of (ad, position) pairs
is $\sum_{(a,\partitionindex) \in S}{g(a,\partitionindex)}$ for some nonnegative function $g$.
It is easy to verify that such a reward function is monotone submodular.
Thus, we can capture this model in our framework by setting $\ensuremath{P}_\partitionindex = \mathcal{A} \times \set {\partitionindex}$, where $\mathcal{A}$ is the set of ads.
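As a quick sanity check of this observation, the following Python sketch (all numbers hypothetical) verifies that the separable click-through-rate reward is additive, so the diminishing-returns inequality holds with equality:

```python
# Sanity check (all numbers hypothetical): with separable click-through
# rates p(a, j) = alpha(a) * beta(j) and per-click payments b(a), the
# expected reward of a set S of (ad, position) pairs is additive, hence
# monotone submodular with equality in the diminishing-returns condition.

ALPHA = {"ad1": 0.3, "ad2": 0.2}   # ad-dependent click factor
BETA = {1: 1.0, 2: 0.5}            # position-dependent visibility
BID = {"ad1": 2.0, "ad2": 5.0}     # payment per click

def reward(S):
    """Expected revenue: sum of p(a, j) * b(a) over (a, j) in S."""
    return sum(ALPHA[a] * BETA[j] * BID[a] for (a, j) in S)

pair = ("ad2", 2)
A, B = set(), {("ad1", 1)}
gain_A = reward(A | {pair}) - reward(A)
gain_B = reward(B | {pair}) - reward(B)
assert abs(gain_A - gain_B) < 1e-12   # marginal gain independent of context
```

Because every pair's marginal gain is context-independent, such modular rewards are the simplest instance of our framework; diversity-aware objectives like that of~\cite{radlinski08} are the genuinely submodular case.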
Another subsumed model, for web search, appears in~\cite{radlinski08};
it assumes that each user is interested in a particular set of
results, and any list of results that intersects this set generates a unit
of value; all other lists generate no value (the ordering of results is irrelevant). Again, the reward function is monotone submodular.
In this setting, it is desirable to display a diverse set of results in order to maximize the likelihood that at least one of them will interest the user.
Our model is flexible in that we can handle position-dependent effects and diversity considerations simultaneously. For example, we can
handle the case that each user $u$ is interested in a particular set $A_u$ of ads
and looks at a set $I_u$ of positions, and the reward of an assignment $\assignment$ is
any monotone-increasing concave function $g$ of $|\assignment \cap (A_u \times I_u)|$. If $I_u = \set {1, 2, \ldots, k}$ and $g(x) = x$, this models the case where the quality is the number of relevant results
that appear in the first $k$ positions.
If $I_u$ equals all positions and $g(x) = \min \set{x, 1}$ we recover the model of~\cite{radlinski08}.
\section{Introduction}
\label{sec:intro}
LHC analyses involve restrictions on QCD radiation to increase their sensitivity. Restrictions can be imposed directly by e.g.~requiring a specific number of jets, or indirectly through e.g.~the transverse momentum of a Higgs boson. This leads to large logarithms in the cross section, requiring resummation to obtain reliable predictions. The origin of these large logarithms is the enhancement of collinear and soft radiation, which are treated as dynamic degrees of freedom in Soft-Collinear Effective Theory (SCET)~\cite{Bauer:2000ew, Bauer:2000yr, Bauer:2001ct, Bauer:2001yt}. SCET is an effective theory of QCD that achieves resummation through the factorization of hard, collinear and soft radiation at the level of the Lagrangian.
In this paper we focus on soft radiation, which is encoded in the soft function in SCET.
The soft function is (schematically) defined as the matrix element
\begin{align} \label{eq:soft_def}
\widehat S(m,\mu)
=\Big\langle 0 \Big | {\rm\bar T}\Big[\prod\limits_i \widehat{Y}^{ \dagger}_i \Big]\, \delta(m - \hat m)\ {\rm T}\Big[\prod\limits_i \widehat{Y}_i \Big] \Big | 0 \Big \rangle\,,
\end{align}
where $\widehat{Y}_i$ is a soft Wilson line along the light-like direction of, and in the color representation of, the $i$-th colored parton participating in the hard scattering. Here T (${\rm \bar T}$) denotes (anti-)time ordering, and the delta function encodes the measurement $m$ through the operator $\hat m$.
We will present an efficient approach to calculate the one-loop soft function, which is an essential ingredient for resummation at next-to-next-to-leading logarithmic accuracy. We will not restrict ourselves to a specific process or measurement and demonstrate the versatility of our method by reproducing the one-loop soft function for thrust~\cite{Schwartz:2007ib,Fleming:2007xt}, angularities~\cite{Hornig:2009vb, Larkoski:2014uqa}, transverse momentum~\cite{Chiu:2012ir} and transverse thrust~\cite{Becher:2015gsa}. Results for the double differential measurement of two angularities and of transverse momentum and beam thrust are also reproduced~\cite{Larkoski:2014tva,Procura:2014cba}. These require an extension of SCET, called SCET${}_+$~\cite{Bauer:2011uc, Procura:2014cba,Larkoski:2015zka,jethierarchies}, with additional collinear-soft modes. The collinear-soft function is again a matrix element of eikonal Wilson lines and can be calculated in the same way.
We present for the first time the calculation of $N$-jettiness with generic jet angularities and the collinear-soft function for the double angularity measurement.
Our approach involves a combination of several tricks: We use the coordinates $k_T$, $y$ and $\phi$ that make the symmetries of the soft matrix element manifest. By isolating divergences at the integrand level, the integrals are simplified. In particular, the integral for the finite terms can directly be written down and evaluated numerically, if desired.
We work with the cumulative soft function, as this involves simple manipulations with logarithms rather than plus distributions. The soft function can be obtained by differentiating the final result. The $N$-jet soft function is given by the sum over emissions between all pairs of Wilson lines at one loop. We employ a boost to make the Wilson lines back-to-back, allowing us to recycle our dijet results. An extension of the hemisphere decomposition of ref.~\cite{Jouttenus:2011wh} is needed to handle more complicated boundaries between jets. Our approach is very general, as we also treat rapidity divergences and divergences with an azimuthal-angle dependence. In the latter case we find it convenient to use a version of dimensional regularization that has no $\epsilon$-dependence associated with the azimuthal angle, and show that this is consistent.
The calculation also provides insight into the structure of the soft function at one loop. For example, rapidity divergences are simply the divergences as the rapidity of the soft gluon goes to infinity. The divergent structure near the Wilson lines is dominated by the asymptotic behavior of the measurement. On the other hand, the divergences away from the Wilson lines depend on the area in $(y,\phi)$-space on which the measurement is defined, but are independent of the measurement itself.
The outline of the paper is as follows: In \sec{framework} we present the setup of our calculation. We discuss detailed examples for dijet observables in \sec{dijet}, generalized $N$-jettiness in \sec{1jettiness}, and double differential measurements in \sec{mdiff}. The conclusions are in \sec{conclusions}, and additional details related to the Becher-Bell rapidity regulator, the calculation of the jet function for transverse thrust, and the results for thrust-like $N$-jettiness are relegated to the appendices.
\section{Calculational framework}
\label{sec:framework}
In this section we develop our calculational framework. We first describe the measurements we consider and the rapidity coordinates we use to express them. In \sec{dijetframework} we consider the one-loop soft function for (back-to-back) dijets and present our master formula in \eq{master_eta}. We extend this to $N$-jet production in \sec{not_back_to_back}, boosting to frames where Wilson lines are back-to-back. Multi-differential measurements are discussed in \sec{mdiffframework}.
\subsection{Measurement function and rapidity coordinates}
For two back-to-back jets, the corresponding soft radiation is emitted from back-to-back Wilson lines. Its boost invariance is made manifest by describing the emitted gluon using its transverse momentum $k_T$, rapidity $y$ and azimuthal angle $\phi$. We will denote the contribution of this soft gluon to a measurement by a function $f(k_T,y,\phi)$, and require that the measurement is additive when there are multiple emissions (avoiding clustering effects from jet algorithms, see e.g.~\cite{Banfi:2012yh}).
Collinear safety implies that for two partons in the collinear limit,
$f(k_{1T},y,\phi) + f(k_{2T},y,\phi) = f(k_{1T}+k_{2T},y,\phi)$. Consequently,
\begin{align}
f(k_T,y,\phi) = k_T f(y,\phi)
\,.\end{align}
For a parton in the presence of a soft gluon, $f(y',\phi') = f(y,\phi) + \ord{k_T^\text{soft}/k_T}$. In the soft limit $k_T^\text{soft} \to 0$ the deflections $y' - y$ and $\phi'-\phi$ due to the soft gluon go to zero, from which we conclude that IR safety imposes that $f(y,\phi)$ is continuous.
We will further assume that $f(y,\phi)\geq 0$, such that the measurement restricts the QCD radiation.
To rewrite measurements in these coordinates, we use
\begin{align}
k^\mu &= k_T (\cosh y, \cos \phi, \sin \phi, \sinh y)
\,, \quad
k^- = k_T\, e^y
\,, \quad
k^+ = k_T\, e^{-y} \,.
\end{align}
Here $k^{\mp} = k^0 \pm k^3$ denote light-cone coordinates along the back-to-back jets, aligned with the $z$-axis.
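These coordinates can be checked with a short numerical sketch (in Python, with arbitrary made-up values):

```python
import math

# Quick numerical check of the (k_T, y, phi) parametrization (a sketch,
# not part of the calculation): k is light-like, and the light-cone
# components reproduce k^- = k_T e^{y} and k^+ = k_T e^{-y}.

def momentum(kT, y, phi):
    return (kT * math.cosh(y), kT * math.cos(phi),
            kT * math.sin(phi), kT * math.sinh(y))

kT, y, phi = 1.7, 0.9, 2.3                          # arbitrary values
k0, k1, k2, k3 = momentum(kT, y, phi)

assert abs(k0**2 - k1**2 - k2**2 - k3**2) < 1e-12   # k^2 = 0
kminus, kplus = k0 + k3, k0 - k3
assert abs(kminus - kT * math.exp(y)) < 1e-12       # k^- = k_T e^{y}
assert abs(kplus - kT * math.exp(-y)) < 1e-12       # k^+ = k_T e^{-y}
assert abs(kminus * kplus - kT**2) < 1e-12          # k^+ k^- = k_T^2
```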
\subsection{Dijets}
\label{sec:dijetframework}
We find it convenient to calculate the cumulative distribution for the soft function in terms of the measurement $m$ to avoid dealing with plus distributions:
\begin{align}
\delta[m- k_T f(y,\phi)] &\to \theta[m- k_T f(y,\phi)]
\,, &
\frac{1}{\mu}\, \frac{1}{(m/\mu)_+} &\to \theta(m)\,\ln \frac{m}{\mu}
\,, & \nonumber \\
\delta(m) &\to \theta(m)
\,, &
\frac{1}{\mu}\, \Big(\frac{\ln (m/\mu)}{m/\mu}\Big)_+ &\to \frac12 \theta(m)\,\ln^2 \frac{m}{\mu}
\,,
&\ \text{etc.}
\end{align}
This simplifies intermediate steps, especially for multi-differential measurements. Of course, the distribution follows from differentiating the final expression with respect to $m$ and typically does contain plus distributions.
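For reference, these replacements can be read in reverse: by its standard definition, each plus distribution is the derivative of the corresponding cumulative expression,
\begin{align}
\frac{1}{\mu}\, \frac{1}{(m/\mu)_+} = \frac{\mathrm{d}}{\mathrm{d} m}\Big[\theta(m)\,\ln \frac{m}{\mu}\Big]
\,, \qquad
\frac{1}{\mu}\, \Big(\frac{\ln (m/\mu)}{m/\mu}\Big)_+ = \frac{\mathrm{d}}{\mathrm{d} m}\Big[\frac12\, \theta(m)\,\ln^2 \frac{m}{\mu}\Big]
\,,\end{align}
so differentiating the cumulative result term by term reproduces the distribution-valued expressions.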
The calculation of the soft function will be carried out using dimensional regularization for both the UV and IR divergences, causing the virtual contributions to vanish ($1/\epsilon_\mathrm{UV} - 1/\epsilon_\mathrm{IR}=0$) and avoiding complications~\cite{Collins:1999dz, Manohar:2006nz, Lee:2006nr, Idilbi:2007ff} from the overlap with collinear radiation.
The real emission diagrams with the gluon attaching to Wilson lines 1 and 2 yield at this order (see also app.~C of ref.~\cite{Hoang:2014wka})
\begin{align} \label{eq:start}
S_{12}^{(1)}(m,\mu)
& = -\frac{\alpha_s}{\pi^2}\, \mathbf{T}_1 \!\cdot\! \mathbf{T}_2\, \frac{\big(e^{\gamma_E} \mu^2 \big)^\epsilon}{\Gamma(1-\epsilon)}\, \nu^\eta
\int_0^\infty\! \frac{\mathrm{d} k_T}{k_T^{1+\eta+2\epsilon}} \int_{-\infty}^\infty \frac{\mathrm{d} y}{|2 \sinh y|^{\eta}} \int_0^{2\pi}\! \mathrm{d} \phi\,
\theta[m - k_T f(y,\phi)]
\nonumber \\ &
= \frac{\alpha_s}{\pi^2}\, \mathbf{T}_1 \!\cdot\! \mathbf{T}_2\, \frac{e^{\epsilon \gamma_E}}{(\eta+2\epsilon) \Gamma(1-\epsilon)}\,\frac{\nu^\eta \mu^{2\epsilon}}{m^{\eta+2\epsilon}}
\int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\, \frac{\theta[f(y,\phi)] f(y,\phi)^{2\epsilon}}{|2 \sinh y|^{\eta}}
\,.\end{align}
Here $\mathbf{T}_1$ and $\mathbf{T}_2$ denote the color charge of the emitted gluon in the representation of Wilson lines 1 and 2, respectively (in the notation of refs.~\cite{Catani:1996jh,Catani:1996vz}). If there are only two Wilson lines,
$\mathbf{T}_1 \cdot \mathbf{T}_2 = - C_F$ for a quark-anti-quark and $-C_A$ for two gluons. From \eq{start} it is clear that the soft radiation is uniformly emitted in $y$ and $\phi$. Thus if $f(y,\phi)$ goes to a constant for $y \to \pm \infty$, the $y$ integral diverges. We control these rapidity divergences in the soft function using the $\eta$ regulator of refs.~\cite{Chiu:2011qc,Chiu:2012ir}. Other regulators are possible~\cite{Collins:1981uk,Dixon:2008gr,Chiu:2009yx,Collins:2011zzd,Becher:2011dz,Echevarria:2015usa,Echevarria:2015byo}, and the expression corresponding to \eq{start} for ref.~\cite{Becher:2011dz} is given in \app{becher}. Note that at this order there is no distinction between outgoing and incoming Wilson lines, which is known to extend to two loops in certain cases~\cite{Kang:2015moa}.
We introduce a function $f_\infty(y,\phi)$ that captures the behavior of the measurement as $y \to \pm \infty$, such that $\ln (f/f_\infty)$ is integrable. In practice, $f_\infty$ can be obtained by expanding $\ln f$ around $1/y = 0$.
This allows us to already isolate the divergent behavior at the integrand level, resulting in
\begin{align}\label{eq:master_eta}
S_{12}^{(1)}(m,\mu)
&= \frac{\alpha_s}{2\pi^2}\, \mathbf{T}_1 \!\cdot\! \mathbf{T}_2\,
\int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\, \theta[f(y,\phi)]\, f_\infty(y,\phi)^{2\epsilon} e^{-\eta |y|}
\nonumber \\ & \quad \times
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu\,f(y,\phi)}{m\,f_\infty(y,\phi)} + 2\epsilon \Big(\ln^2 \frac{\mu}{m} - \frac{\pi^2}{24}\Big)\Big]
\Big[1 + \eta \Big(-\frac{1}{2\epsilon} + \ln \frac{\nu}{m}\Big)\Big]
\,.\end{align}
The UV divergences are fixed by $f_\infty$ and the original measurement $f$ only enters in the finite terms through $\ln (f/f_\infty)$. At this order, only the asymptotic behavior of the rapidity regulator enters, which is characterized by the (simpler) factor $e^{-\eta|y|}$.
An exception is when $f$ vanishes in regions of phase-space (see \eq{outside}).
In these cases it is convenient to separate $f$ into the measurement $f^M>0$ and the theta function $f^R$ defining the integration region. Eq.~\eqref{eq:master_eta} now reads
\begin{align}
S_{12}^{(1)}(m,\mu)
&= \frac{\alpha_s}{2\pi^2}\, \mathbf{T}_1 \!\cdot\! \mathbf{T}_2\,
\int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\, f^R(y,\phi)\, f_\infty(y,\phi)^{2\epsilon} e^{-\eta |y|}
\nonumber \\ & \quad \times
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu\,f^M(y,\phi)}{m\,f_\infty(y,\phi)} + 2\epsilon \Big(\ln^2 \frac{\mu}{m} - \frac{\pi^2}{24}\Big)\Big]
\Big[1 + \eta \Big(-\frac{1}{2\epsilon} + \ln \frac{\nu}{m}\Big)\Big]
\,.\end{align}
Now $f_\infty$ can be determined by only considering $f^M$ (but is irrelevant if the integration is cut off by $f^R$).
When the region described by $f^R$ has a finite area $A$ in $(y,\phi)$ space,
\begin{align}\label{eq:outside}
S_{12}^{(1)}(m,\mu)
&= \frac{\alpha_s}{2\pi^2}\, \mathbf{T}_1 \!\cdot\! \mathbf{T}_2\,
\int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\, f^R(y,\phi)\,
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu\,f^M(y,\phi)}{m} \Big]
\nonumber \\
&= \frac{\alpha_s}{2\pi^2}\, \mathbf{T}_1 \!\cdot\! \mathbf{T}_2\, \frac{A}{\epsilon}
+ \ord{\epsilon^0}
\,.\end{align}
Thus the divergence is independent of $f_\infty$ and just proportional to this area. This is the motivation behind the hemisphere decomposition used in \sec{1jettiness}.
We will present several applications for dijet observables in \sec{dijet}, demonstrating the efficiency of this approach.
\subsection{$N$ jets}
\label{sec:not_back_to_back}
To calculate the soft function for $N$ Wilson lines, we can simply sum over the contribution from each pair of Wilson lines using \eq{master_eta}. However, we need to take into account that the Wilson lines are no longer back-to-back, which we address by boosting to a frame where they are back-to-back. Using primed coordinates for the former and unprimed coordinates for the latter, a momentum $k^\mu$ transforms as
\begin{align}
k'^\mu = B(n_1',n_2') k^\mu
\,,\end{align}
with
\begin{align} \label{eq:boost}
B(n_1',n_2') = \begin{pmatrix} \gamma & - \gamma \vec \beta^{\,T} \\ -\gamma \vec \beta \quad & \mathbf{1}+(\gamma-1) \vec \beta \vec \beta^{\,T} / \vec \beta^{\,2} \end{pmatrix}
\,, \qquad
\vec \beta = - \frac{1}{2} (\hat n_1' + \hat n_2')
\,, \qquad
\gamma = \frac{\sqrt{2}}{{\sqrt{n_1' \!\cdot\! n_2'}}}
\,,\end{align}
where $n_i'=(1,\hat n_i')$ ($i=1,2$) denote the directions of the Wilson lines.
The Wilson lines in the two frames simply transform into each other.
Applying the reverse boost to $n_1'$, $n_2'$, $\bar n_1'$ and $\bar n_2'$,
\begin{align} \label{eq:ns}
\tilde n_1^\mu &= \big(\gamma^{-1}, \tfrac12(\hat n_1'- \hat n_2')\big)
\,, &
\tilde n_2^\mu &= \big(\gamma^{-1}, \tfrac12(\hat n_2' - \hat n_1')\big)
\,, \nonumber \\
\tilde {\bar n}_1^\mu &= -\tilde n_1^\mu + 2\gamma (1,\vec \beta)
\,, &
\tilde {\bar n}_2^\mu &= -\tilde n_2^\mu + 2\gamma (1,\vec \beta)
\,,\end{align}
so $\tilde n_1$ and $\tilde n_2$ are indeed back-to-back, though $\tilde n_i$ and $\tilde {\bar n}_i$ are not.
Because $\tilde n_i$ and $\tilde {\bar n}_i$ do not have the conventional $(1,\hat n)$ normalization, we wrote a tilde on the $n_i$ and $\bar n_i$, though this normalization is irrelevant for the Wilson lines.
One can then convert the measurement between the two frames using the Lorentz invariance of scalar products, $n_i' \!\cdot\! k' = \tilde n_i \!\cdot\! k$. For $i=1,2$ this takes a particularly simple form
\begin{align}
n_1' \!\cdot\! k' = \gamma^{-1} n_1 \!\cdot\! k\,,
\quad
n_2' \!\cdot\! k' = \gamma^{-1} n_2 \!\cdot\! k
\label{eq:scalarboost1}
\,.\end{align}
This approach requires modification in the presence of rapidity divergences, since the rapidity regulator is not boost invariant. For definiteness we first assume that only the Wilson line in the $n_1'$ direction requires rapidity regularization. For the exchange of a soft gluon between the Wilson lines in the $n_1'$ and $n_2'$ direction, the rapidity regulator is
\begin{align} \label{eq:rap_boost}
\Big(\frac{\nu}{|{\bar{n}}_1' \!\cdot\! k' - n_1' \!\cdot\! k'|}\Big)^\eta \stackrel{y \to \infty}{=}
\Big(\frac{\nu}{2\gamma k_T \sinh y} \Big)^\eta
\,.\end{align}
Although inserting \eq{ns} leads to complicated expressions, the asymptotic behavior is simple and is the only thing that matters at one-loop order. The Wilson line requiring the rapidity regularization is at $y = \infty$, so this is the only relevant limit ($y\to -\infty$ is regulated by dimensional regularization). Note that if instead the Wilson line $n_2$ required rapidity regularization, the final expression would still be the same. From this we conclude that we may use our master formula by simply replacing $\nu \to \nu/\gamma$. In the presence of additional Wilson lines requiring rapidity regularization, we in principle need a copy of the rapidity regulator for each direction\footnote{Even for Wilson lines in the $n_1$ and ${\bar{n}}_1$ directions we can have separate regulators, since the rapidity divergences should be cancelled by the collinear radiation in the $n_1$ and ${\bar{n}}_1$ direction, respectively~\cite{Echevarria:2012js}.}
\begin{align}
\prod_i \Big(\frac{\nu_i}{|{\bar{n}}_i' \!\cdot\! k' - n_i' \!\cdot\! k'|}\Big)^{\eta_i}
\,.\end{align}
Ensuring that rapidity divergences corresponding to the $n_i'$ direction are controlled by $\eta_i$, by taking the other $\eta$'s to zero first, implies that \eq{rap_boost} still holds with $\nu \to \nu_i$ and $\eta \to \eta_i$. In particular, if at the end of the calculation we take all regulators equal, $\nu_i = \nu$ and $\eta_i = \eta$, we can simply do all calculations by replacing $\nu \to \nu/\gamma$ in our master formula.
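For reference, \eq{rap_boost} follows from a short computation (our abbreviated rederivation): in the back-to-back frame $k^\mu = k_T(\cosh y, \cos\phi, \sin\phi, \sinh y)$, the boost velocity $\vec\beta$ is perpendicular to the back-to-back axis with $|\vec \beta\,| = \sqrt{\gamma^2-1}/\gamma$, and $\tilde n_1 \!\cdot\! k = \gamma^{-1} k_T\, e^{-y}$, so

```latex
\begin{align}
\bar{n}_1' \!\cdot\! k' - n_1' \!\cdot\! k'
&= \tilde{\bar n}_1 \!\cdot\! k - \tilde n_1 \!\cdot\! k
= 2\gamma\, (1, \vec \beta\,) \!\cdot\! k - 2\, \tilde n_1 \!\cdot\! k
\nonumber \\
&= 2\gamma\, k_T \big[\cosh y - |\vec \beta\,| \cos(\phi - \phi_0)\big] - 2 \gamma^{-1} k_T\, e^{-y}
\ \stackrel{y \to \infty}{=}\ 2\gamma\, k_T \sinh y
\,,\end{align}
```

where $\phi_0$ denotes the azimuthal angle of $-\vec\beta$; the bounded $\cos(\phi-\phi_0)$ term and the $e^{-y}$ term are subleading as $y \to \infty$.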
We find our approach of boosting to back-to-back coordinates convenient as it allows us to recycle results, but it is not necessary. Direct calculations of soft functions with more than two Wilson lines and rapidity divergences have been carried out in e.g.~refs.~\cite{Liu:2013hba,Becher:2015gsa}.
\subsection{Multi-differential measurements}
\label{sec:mdiffframework}
We now consider multi-differential measurements, where large logarithms associated with additional scales arise and require resummation. The resummation can be achieved by an extension of SCET (SCET${}_+$) with additional collinear-soft and/or soft-collinear degrees of freedom~\cite{Bauer:2011uc, Procura:2014cba, Larkoski:2015zka, Larkoski:2015kga, Becher:2015hka, Neill:2015nya, Chien:2015cka, jethierarchies}.
Whereas the soft function defined in \eq{soft_def} depends on one measurement $m$, multi-differential measurements give rise to a soft function depending on multiple measurements
\begin{align} \label{eq:multimeas}
\delta(m - \hat m) \to \prod_i \delta(m_i - \hat m_i)
\,.\end{align}
The collinear-soft radiation of SCET${}_+$ is described by a collinear-soft function, which is also a matrix element of (collinear-soft) Wilson lines. It can be calculated in the same manner, as we will show in \sec{mdiff}.
To incorporate the multiple measurements in the soft function, we extend the measurement to a vector
\begin{align}
\vec m &= k_T \vec f(y,\phi)
\,,\end{align}
allowing us to write \eq{multimeas} for the cumulative soft function as
\begin{align}
\prod_i \theta[m_i - k_T f_i(y,\phi)] &= \theta[\max_i \{f_i(y,\phi)\}] \prod_i \theta[m_i/f_i(y,\phi) - k_T]
\,.\end{align}
For a given $y$ and $\phi$ this is dominated by a single measurement $m_I$ that imposes the strongest constraint on $k_T$.
Regulating this dominant measurement for $y\to \pm \infty$ through $f_\infty$, we arrive at the following expression for the soft function
\begin{align}
S_{12}^{(1)}(\vec m,\mu)
&= \frac{\alpha_s}{2\pi^2}\, \mathbf{T}_1 \!\cdot\! \mathbf{T}_2\,
\int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\, \theta[\max_i \{f_i(y,\phi)\}] \, f_\infty(y,\phi)^{2\epsilon} e^{-\eta |y|}
\\ & \quad \times
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu\,f_I(y,\phi)}{m_I\,f_\infty(y,\phi)} + 2\epsilon \Big(\ln^2 \frac{\mu}{m_I} \!-\! \frac{\pi^2}{24}\Big)\Big]
\Big[1 + \eta \Big(-\frac{1}{2\epsilon} + \ln \frac{\nu}{m_I}\Big)\Big]
\,. \nonumber\end{align}
We emphasize that the index $I$ denoting the dominant measurement generally depends on $y$ and $\phi$. The corresponding division of phase space provides a natural way to perform the integration.
In \sec{mdiff} we will apply this to several double-differential measurements. Specifically, the measurement of two angularities~\cite{Larkoski:2014tva} and the simultaneous measurement of transverse momentum and beam thrust~\cite{Jain:2011iu}.
\section{Dijet examples}
\label{sec:dijet}
We start by calculating the soft function for the thrust and angularity $e^+e^-$ event shapes in \secs{thrust}{ang}.
In \sec{pT} we determine the transverse momentum soft function for $pp\to Z+X$ (or $pp \to H+X$), which contains rapidity divergences.
For transverse thrust in $e^+e^-$ collisions, discussed in \sec{tt}, the divergences depend on the azimuthal angle. We describe how to treat this in dimensional regularization without breaking the azimuthal symmetry.
\subsection{Thrust}
\label{sec:thrust}
Thrust is an $e^+e^-$ event shape defined through~\cite{Farhi:1977sg}
\begin{align}
\tau=1-T= \frac{1}{Q}\,\sum_i \min \left\{ k_i^+,k_i^-\right\}
\end{align}
with $i$ running over the final-state particles and $Q$ being the total invariant mass. The contribution of soft radiation to the measurement $m=Q\tau$ corresponds to
\begin{align}
f(y,\phi) = e^{-|y|}
\,.\end{align}
Since $f$ is particularly simple, we choose $f_\infty = f$, leading to
\begin{align}
S^{(1)}(m=Q\tau,\mu)
&= -\frac{\alpha_s C_F}{\pi}\,
\int_{-\infty}^\infty\!\mathrm{d} y\, e^{-2 \epsilon |y|}
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m} + 2\epsilon \Big(\ln^2 \frac{\mu}{m} - \frac{\pi^2}{24}\Big)\Big]
\nonumber \\ &
= -\frac{\alpha_s C_F}{\pi}\,
\frac{1}{\epsilon}\,\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m} + 2\epsilon \Big(\ln^2 \frac{\mu}{m} - \frac{\pi^2}{24}\Big)\Big]
\,.\end{align}
Differentiating this leads to the result of refs.~\cite{Schwartz:2007ib,Fleming:2007xt}.
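The $y$ integral in the first line is elementary (for $\epsilon > 0$, with the result defined by analytic continuation):

```latex
\begin{align}
\int_{-\infty}^\infty\!\mathrm{d} y\, e^{-2 \epsilon |y|}
= 2 \int_0^\infty\!\mathrm{d} y\, e^{-2 \epsilon y}
= \frac{1}{\epsilon}
\,.\end{align}
```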
\subsection{Angularities}
\label{sec:ang}
The contribution of soft radiation to the measurement $m=Q \tau_a$ of the angularity~\cite{Berger:2003iw}
\begin{align}
\tau_a & = \frac{1}{Q} \sum_i k_{i T}\, e^{-|y_i|(1-a)}
\end{align}
is described by
\begin{align}
f(y,\phi) & = e^{-|y|(1-a)}
\,.\end{align}
This family of event shapes is infrared safe for $a<2$ and includes thrust ($a=0$) and broadening ($a=1$). For $a<2$ and $a\neq 1$, with $f_\infty = f$, we obtain~\cite{Hornig:2009vb, Larkoski:2014uqa}
\begin{align}
S^{(1)}(m=Q \tau_a,\mu) &= -\frac{\alpha_s C_F}{\pi}\,
\int_{-\infty}^\infty\!\mathrm{d} y\, e^{-2 \epsilon |y| (1-a)}
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m} + 2\epsilon \Big(\ln^2 \frac{\mu}{m} - \frac{\pi^2}{24}\Big)\Big]
\nonumber \\ &
= \frac{\alpha_sC_F}{\pi} \frac{1}{a-1} \frac{1}{\epsilon}
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m} + 2\epsilon \Big(\ln^2 \frac{\mu}{m} - \frac{\pi^2}{24}\Big)\Big]
\nonumber \\ &
= \frac{\alpha_s C_F}{\pi} \frac{1}{a-1} \Big[\ \frac{1}{\epsilon^2} +\frac{1}{\epsilon} \Big( \ln \frac{\mu^2}{Q^2} - 2 \ln \tau_a \Big)
\nonumber \\ &
\quad \quad+ \frac{1}{2} \ln^2 \frac{\mu^2}{Q^2} -2 \ln \frac{\mu^2}{Q^2} \ln \tau_a + 2 \ln^2 \tau_a - \frac{\pi^2}{12}
\Big]
\,.\end{align}
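As a cross-check, setting $a=0$ reproduces the thrust soft function of \sec{thrust}:

```latex
\begin{align}
S^{(1)}(m=Q \tau_a,\mu) \Big|_{a=0}
= -\frac{\alpha_s C_F}{\pi}\, \frac{1}{\epsilon}
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m} + 2\epsilon \Big(\ln^2 \frac{\mu}{m} - \frac{\pi^2}{24}\Big)\Big]
= S^{(1)}(m=Q\tau,\mu)
\,.\end{align}
```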
The case $a=1$ is equivalent to the transverse momentum measurement discussed next.
\subsection{Transverse momentum}
\label{sec:pT}
When the transverse momentum, $p_T$, of the soft radiation is measured, $f$ is trivial
\begin{align}
f(y,\phi) &= f_\infty(y,\phi) = 1
\,. \end{align}
However, the calculation is slightly more complicated due to rapidity divergences arising from $y \to \pm \infty$ in the $y$ integration,
\begin{align}
S^{(1)}(m=p_T,\mu)
&= -\frac{\alpha_s C_F}{\pi}\,
\int_{-\infty}^\infty\!\mathrm{d} y \, e^{-\eta |y|}
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m} + 2\epsilon \Big(\ln^2 \frac{\mu}{m} - \frac{\pi^2}{24}\Big)\Big]
\nonumber \\ & \quad \times
\Big[1 + \eta \Big(-\frac{1}{2\epsilon} + \ln \frac{\nu}{m} \Big)\Big]
\nonumber \\ &
= -\frac{\alpha_s C_F}{\pi}\,
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m} + 2\epsilon \Big(\ln^2 \frac{\mu}{m} - \frac{\pi^2}{24}\Big)\Big] \Big(\frac{2}{\eta} - \frac{1}{\epsilon} + 2 \ln \frac{\nu}{m} \Big)
\,.\end{align}
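The second line follows because the $\eta$-dependent bracket carries no $y$ dependence, so the rapidity integral factorizes (for $\eta > 0$):

```latex
\begin{align}
\int_{-\infty}^\infty\!\mathrm{d} y\, e^{-\eta |y|} = \frac{2}{\eta}
\,, \qquad
\frac{2}{\eta} \Big[1 + \eta \Big(-\frac{1}{2\epsilon} + \ln \frac{\nu}{m} \Big)\Big]
= \frac{2}{\eta} - \frac{1}{\epsilon} + 2 \ln \frac{\nu}{m}
\,.\end{align}
```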
As the rapidity regulator $\eta$ should not regulate UV divergences, it must be taken to zero before $\epsilon$.
For Wilson lines in the adjoint representation (gluons), $C_F \to C_A$. This agrees with the calculation in ref.~\cite{Chiu:2012ir}, once their measurement of the vector $\vec p_T$ is converted to the $p_T$ considered here.
\subsection{Transverse thrust in $e^+e^-$}
\label{sec:tt}
The transverse thrust event shape $T_\perp$~\cite{Banfi:2004nk} is designed for hadron collisions, but has also been calculated for $e^+e^- \to 2$ jets \cite{Becher:2015gsa},
\begin{align} \label{eq:Tperp_def}
\tau_\perp = 1 - T_\perp &= \max_{\vec{n}_\perp} \frac{\sum_i |\vec{k}_{i\perp}| - |\vec{k}_{i\perp} \cdot \vec{n}_{\perp}|}{\sum_i |\vec{k}_{i\perp}|}
\nonumber \\
&= \frac{\sum_i |\vec{k}_{i\perp}| - |\vec{k}_{i\perp} \cdot \vec{n}_{\perp}|}{Q |\sin\theta|}
\,.\end{align}
Here the sum runs over the final-state particles $i$, and the transverse direction ($\perp$) is defined with respect to the electron-positron beam axis. In the second line, power-suppressed contributions have been neglected in order to write the observable in terms of the angle $\theta$ between the beam and the thrust axis, and the transverse orientation of the thrust axis $\vec n_\perp$. The contribution to $\tau_\perp$ from one soft particle with momentum $k$ is thus described by
\begin{align}\label{eq:f_tau_perp}
f(y,\phi) & = \frac{1}{Q|\sin\theta|} \Big[ \sqrt{( \cos\phi \cos\theta + \sinh y \sin\theta )^2 + \sin^2\phi} - |\cos\phi \cos\theta + \sinh y \sin\theta | \Big]\, , \nonumber\\
f_\infty(y,\phi) & = \frac{\sin^2\phi}{Q \sin^2\theta} e^{-|y|} \,,
\end{align}
where we have expressed $k_\perp$ and $n_\perp$ in \eq{Tperp_def} in terms of the variables $k_T$, $y$ and $\phi$ in the frame where the thrust axis is along the $\hat{z}$ direction.
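The asymptotic form $f_\infty$ follows from expanding the square root in $f$ at large $|y|$. Writing $X = \cos\phi \cos\theta + \sinh y \sin\theta$, so that $|X| \to \tfrac12\, e^{|y|} \sin\theta$ (taking $0 < \theta < \pi$),

```latex
\begin{align}
\sqrt{X^2 + \sin^2\phi} - |X| = \frac{\sin^2\phi}{2 |X|} + \ord{|X|^{-3}}
\quad \Longrightarrow \quad
f(y,\phi) \stackrel{|y| \to \infty}{=} \frac{1}{Q \sin\theta}\, \frac{\sin^2\phi}{e^{|y|} \sin\theta}
= \frac{\sin^2\phi}{Q \sin^2\theta}\, e^{-|y|} = f_\infty(y,\phi)
\,.\end{align}
```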
Interestingly, the structure of the divergence as $y \to \pm \infty$ in \eq{f_tau_perp} has an azimuthal angle dependence.
This results in
\begin{align}
S^{(1)}(m=\tau_\perp,\mu)
&= -\frac{\alpha_s C_F}{2\pi^2}
\int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\,
\\
& \quad \times \Big\{ f_\infty(y,\phi)^{2\epsilon}
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m} + 2\epsilon \Big(\ln^2 \frac{\mu}{m} - \frac{\pi^2}{24}\Big)\Big] + 2\ln\frac{f(y,\phi)}{\,f_\infty(y,\phi)} \Big\}
\nonumber \\
& = - \frac{\alpha_s C_F}{\pi} \Big[ \frac{1}{\epsilon^2} + \frac{2}{\epsilon} \ln \frac{\mu}{4mQ\sin^2\theta} + 2 \ln^2 \frac{\mu}{4mQ\sin^2\theta} + \frac{7\pi^2}{12} + A(\theta) \Big]
\,,\nonumber\end{align}
where the finite term $A(\theta)$ is given by
\begin{align}
A(\theta) & = \frac{1}{\pi} \int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi
\ln \frac{f(y,\phi)}{\,f_\infty(y,\phi)}
\,.\end{align}
We remind the reader that these $y$ and $\phi$ are defined in the frame where the thrust axis is along the $\hat{z}$ axis, while the {\it transverse} in transverse thrust means perpendicular to the beam axis. The results have been cross-checked against ref.~\cite{Becher:2015gsa}, and agree once the different scheme for dimensional regularization is taken into account, as discussed in detail below.
The transverse part of the $d$-dimensional integration measure can be written as
\begin{align}
\int\! \mathrm{d}^{2-2\epsilon} k_\perp = \frac{\Omega_{1-2\epsilon}}{2} \frac{1}{2}\int\! \mathrm{d} k_T^2 (k_T^2)^{-\epsilon} \int_0^{2\pi} \mathrm{d}\phi\, \bigl[\sin^2(\phi-\phi_0)\bigr]^{-\epsilon}
,\end{align}
where $\phi-\phi_0$ is the azimuthal angle between the momentum $k_\perp$ and an arbitrary reference axis. With the choice $\phi_0=0$ we obtain the integration measure used in ref.~\cite{Becher:2015gsa}.
We prefer to preserve the azimuthal symmetry, and integrate over the choice of this reference axis
\begin{align}
\int\! \mathrm{d}^{2-2\epsilon} k_\perp & =\frac{\Omega_{1-2\epsilon}}{2} \frac{1}{2}\int\! \mathrm{d} k_T^2 (k_T^2)^{-\epsilon} \frac{1}{2\pi}\int_0^{2\pi}\! \mathrm{d}\phi_0 \int_0^{2\pi}\! \mathrm{d}\phi \big[\sin^2(\phi-\phi_0)\big]^{-\epsilon}
\nonumber\\
& = \frac{\Omega_{2-2\epsilon}}{2\pi} \frac{1}{2}\int\! \mathrm{d} k_T^2 (k_T^2)^{-\epsilon} \int_0^{2\pi}\! \mathrm{d}\phi.
\end{align}
When the measurement does not depend on $\phi$, the two ways give the same result since
\begin{align}
\frac{\Omega_{1-2\epsilon}}{2} \int_0^{2\pi}\! \mathrm{d}\phi \bigl(\sin^2\phi\bigr)^{-\epsilon} = \frac{2\pi^{1-\epsilon}}{\Gamma(1-\epsilon)} = \Omega_{2-2\epsilon}.
\end{align}
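Explicitly, using $\Omega_d = 2\pi^{d/2}/\Gamma(d/2)$ and the standard integral $\int_0^{2\pi} \mathrm{d}\phi\, (\sin^2\phi)^{-\epsilon} = 2\sqrt{\pi}\, \Gamma(\tfrac12-\epsilon)/\Gamma(1-\epsilon)$,

```latex
\begin{align}
\frac{\Omega_{1-2\epsilon}}{2} \int_0^{2\pi}\! \mathrm{d}\phi\, \bigl(\sin^2\phi\bigr)^{-\epsilon}
= \frac{\pi^{\frac12-\epsilon}}{\Gamma(\tfrac12-\epsilon)}\,
\frac{2\sqrt{\pi}\, \Gamma(\tfrac12-\epsilon)}{\Gamma(1-\epsilon)}
= \frac{2\pi^{1-\epsilon}}{\Gamma(1-\epsilon)}
= \Omega_{2-2\epsilon}
\,.\end{align}
```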
However, for transverse thrust, which does depend on the azimuthal angle, the two schemes give different results for the cumulative soft function. With $f_\infty^{2\epsilon} \propto (\sin^2\phi)^{2\epsilon}$, the two measures lead to contributions to the cumulative soft function that are related through
\begin{align}
-\frac{\alpha_s C_F}{2\pi^2}\,\frac{1}{\epsilon^2}\, \frac{\Omega_{2-2\epsilon}}{2\pi} \int_0^{2\pi}\! \mathrm{d}\phi\, (\sin^2\phi)^{2\epsilon} &= -\frac{\alpha_s C_F}{2\pi^2}\, \frac{1}{\epsilon^2}\,\frac{\Omega_{1-2\epsilon}}{2} \int_0^{2\pi}\! \mathrm{d}\phi\, (\sin^2\phi)^{\epsilon} - \frac{\alpha_s C_F}{\pi}\, \frac{2\pi^2}{3} + \ord{\epsilon}.
\end{align}
The extra $\pi^2$ term in the finite part of the cumulative soft function is cancelled by corresponding terms in the two jet functions, calculated in \app{jetfunction}.
\section{$N$-jettiness with generic jet angularities}
\label{sec:1jettiness}
We extend the thrust-like $N$-jettiness definition~\cite{Stewart:2010tn,Jouttenus:2011wh} by considering the measurement of a different angularity for each jet\footnote{Here we use the term `jets' to refer to both final-state and beam jets.}
\begin{align} \label{eq:Tau_N}
\mathcal{T}_N=\sum_h \min_\ell \Big\{ \frac{2 \omega_\ell}{Q_\ell} (n_\ell' \!\cdot\! k_h')^{1-\alpha_\ell/2}(\bar{n}_\ell'\!\cdot\! k_h')^{\alpha_\ell/2} \Big\}
\equiv \sum_{\ell}\mathcal{T}^{\ell,\alpha_\ell}_N
\,,\end{align}
where $h$ runs over the hadronic final-state particles and $\ell$ over the jets in the event with (label) momenta
\begin{align} \label{eq:q_l}
q_\ell' = \omega_\ell n_\ell' = \omega_\ell (1, \hat n_\ell' )\,.
\end{align}
The primed variables indicate that the momenta are defined in generic coordinates. We will later boost to (unprimed) coordinates where two of the Wilson lines are back to back, as discussed in \sec{not_back_to_back}. The $\omega_\ell$ in \eqs{Tau_N}{q_l} is considered a parameter which does not transform between frames (i.e.~no $\omega_\ell'$).
The minimization of \eq{Tau_N} assigns each particle to a jet region, and $\mathcal{T}^{\ell,\alpha_\ell}_N$ is the total contribution from jet region $\ell$.
The `standard' thrust-like $N$-jettiness definition is recovered if all $\alpha_\ell$ are zero, $\mathcal{T}^{\ell}_N \equiv \mathcal{T}^{\ell,\alpha_\ell=0}_N$. We show how our results reduce to the expressions in ref.~\cite{Jouttenus:2011wh} in \app{Njettiness}.
We will assume $\alpha_\ell \neq 1$ to avoid rapidity divergences.
The one-loop soft function is the sum over contributions from gluons exchanged between Wilson lines corresponding to the jets $i$ and $j$
\begin{align}
S^{(1)}(m,\mu) =
\sum_{i<j} S_{ij}^{(1)}(m,\mu)
\,.\end{align}
To simplify the discussion we consider $1$-jettiness in $pp$ collisions (or equivalently $3$-jettiness in $e^+ e^-$ collisions). We label the three jets by $\ell=i,j,m$ to make the extension to $N$ jets straightforward.
The contribution of a soft gluon to $\mathcal{T}^{i,\alpha_i}_{1}$ is given by\footnote{To simplify the expressions for the measurement functions $f^M$, we already pull out a factor of the transverse momentum $k_T$ in the unprimed coordinates (where Wilson lines are back-to-back).}
\begin{align}
k_T \,k_{i}
\theta \big(k_{j}- k_{i}\big) \theta \big( k_{m}- k_{i}\big)\,,
\end{align}
and similarly for $\mathcal{T}^{j,\alpha_j}_{1}$ and $\mathcal{T}^{m,\alpha_m}_{1}$, where we introduced
\begin{align}
k_{\ell}=\frac{ 2 \omega_\ell}{Q_\ell k_T} (n_\ell'\!\cdot\! k')^{1-\alpha_\ell/2}(\bar{n}_\ell'\!\cdot\! k')^{\alpha_\ell/2} \,.
\end{align}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\textwidth]{figs/hemi.pdf}
\caption{
Separation of the soft function, $S_{ij}$, with a gluon emitted between the $i$th and $j$th Wilson line, into
hemisphere, boundary and non-hemisphere contributions. The contributions surrounded by a gray box are together finite.}
\label{fig:regions}
\end{figure}
We extend the hemisphere decomposition~\cite{Jouttenus:2011wh} to handle the azimuthally dependent phase-space boundaries between regions arising from the more general $N$-jettiness measurement. This approach is discussed in detail in
ref.~\cite{softfunction}. The decomposition of $S_{ij}^{(1)}$ into hemisphere, boundary and non-hemisphere contributions is depicted in \fig{regions} and will be discussed below.
The soft function involves three regions associated with the measurements: $\theta(k_j-k_i)\theta(k_m-k_i)$ for $m_i$, $\theta(k_i-k_j)\theta(k_m-k_j)$ for $m_j$ and $\theta(k_i-k_m)\theta(k_j-k_m)$ for $m_m$. To make the analytic calculation of the divergent parts, as well as the extension to $N$ jets, as easy as possible, we first allow the measurements of $m_i$ and $m_j$ to extend over the region of $m_m$. This is then compensated for by the non-hemisphere contributions $S_{ij,m}$ and $S_{ji,m}$ to the soft function. For a generic measurement, such as the one considered here, the separation between the regions for $m_i$ and $m_j$ is a non-trivial contour in ($y$, $\phi$)-space, but the divergences of the soft function do not depend on the exact form of this contour. We therefore split the ($y$, $\phi$)-space into the two hemispheres $y>0$ for $m_i$ and $y<0$ for $m_j$. To compensate for the difference between cutting the phase space along $y=0$ and cutting it along the original contour between $m_i$ and $m_j$, we introduce the boundary contribution $S_{ij,\mathrm{bound}}+S_{ji,\mathrm{bound}}$. Adding up these contributions,
\begin{align}
S_{ij}^{(1)}(m=\{\mathcal{T}^{i,\alpha_i}_{1},\mathcal{T}^{j,\alpha_j}_{1},\mathcal{T}^{m,\alpha_m}_{1}\},\mu) &= S^{(1)}_{ij,\rm{hemi}}(m_i=\mathcal{T}^{i,\alpha_i}_{1},\mu) \,
+ S^{(1)}_{ij,\mathrm{bound}}(m_i=\mathcal{T}^{i,\alpha_i}_{1},\mu) \nonumber \\& \quad
+ S^{(1)}_{ij,m}(m_m=\mathcal{T}^{m,\alpha_m}_{1},\mu) \,
- S^{(1)}_{ij,m}(m_i=\mathcal{T}^{i,\alpha_i}_{1},\mu)
\nonumber \\& \quad
+ (j \leftrightarrow i)\,.
\end{align}
As we will see, the hemisphere contributions contain all divergences, whereas the boundary and
non-hemisphere contributions are UV and IR finite.
When there are additional jets, the hemisphere and boundary contributions are of course the same, but there will be additional non-hemisphere contributions.
We now boost such that the Wilson lines $i$ and $j$ become back-to-back, allowing us to use \sec{not_back_to_back} to perform the calculation.
Using \eqs{ns}{scalarboost1}, this leads to the following expressions for the $k_\ell$ in the back-to-back frame
\begin{align}
& k_{i}= \frac{ 2 \omega_i}{Q_i} \gamma^{-1} e^{-y\,(1-\alpha_i/2)} \left(a e^{-y} + b e^y + c \cos(\phi-\phi_0)\right)^{\alpha_i/2}
\,, \nonumber \\
& k_{j}= \frac{ 2 \omega_j}{Q_j} \gamma^{-1} e^{y\,(1-\alpha_j/2)} \left(b e^{-y} + a e^y + c \cos(\phi-\phi_0) \right)^{\alpha_j/2}
\,, \nonumber \\
& k_{m}=\frac{2 \omega_m}{Q_m} \Big(\frac{1}{2} e^y (\tilde{n}^0_m-\tilde{n}^3_m) + \frac{1}{2} e^{-y} (\tilde{n}^0_m+\tilde{n}^3_m)-\tilde{n}^1_m \cos\phi-\tilde{n}^2_m \sin\phi \Big)^{1-\alpha_m/2}
\, \nonumber \\
&\phantom{k_{m}\frac{2 \omega_m}{Q_m} }\times \Big(\frac{1}{2} e^y (\tilde{\bar{n}}^0_m-\tilde{\bar{n}}^3_m) + \frac{1}{2} e^{-y} (\tilde{\bar{n}}^0_m+\tilde{\bar{n}}^3_m)-\tilde{\bar{n}}^1_m \cos\phi-\tilde{\bar{n}}^2_m \sin\phi\Big)^{\alpha_m/2}
\,,\end{align}
with
\begin{align}
a = \gamma^2 - 1
\,, \qquad
b = \gamma^2
\,, \qquad
c = 2\gamma\sqrt{\gamma^2-1}
\,.\end{align}
Here we have explicitly chosen the $z$-axis through $\hat{z}=\tfrac12 \gamma (\hat{n}'_i-\hat{n}'_j)$. The azimuthal angle $\phi_0$ of the boost $- \vec{\beta}$ in \eq{boost} plays no role in the rest of the calculation.
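The coefficients $a$, $b$ and $c$ follow from evaluating $\bar n_i' \!\cdot\! k' = \tilde{\bar n}_i \!\cdot\! k$ with \eq{ns} (our abbreviated rederivation). With $k^\mu = k_T(\cosh y, \cos\phi, \sin\phi, \sinh y)$, $\tilde n_i \!\cdot\! k = \gamma^{-1} k_T\, e^{-y}$ and $|\vec\beta\,| = \sqrt{\gamma^2-1}/\gamma$ perpendicular to the $z$-axis,

```latex
\begin{align}
\gamma\, \frac{\bar n_i' \!\cdot\! k'}{k_T}
&= \gamma\, \frac{2\gamma\, (1,\vec\beta\,) \!\cdot\! k - \tilde n_i \!\cdot\! k}{k_T}
= 2\gamma^2 \cosh y - 2\gamma \sqrt{\gamma^2-1}\, \cos(\phi - \phi_0) - e^{-y}
\nonumber \\
&= (\gamma^2 - 1)\, e^{-y} + \gamma^2\, e^{y} - 2\gamma \sqrt{\gamma^2-1}\, \cos(\phi - \phi_0)
\,,\end{align}
```

reproducing the combination $a\, e^{-y} + b\, e^y + c \cos(\phi-\phi_0)$ appearing in $k_i$; the relative sign of the last term is absorbed by the shift $\phi_0 \to \phi_0 + \pi$, which is immaterial since $\phi_0$ plays no role in the calculation.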
The measurement functions for the different jets are defined as
\begin{align}
f^M_i = k_i \,, \qquad f^M_j = k_j\,, \qquad f^M_m = k_m\,.
\end{align}
Starting with $S^{(1)}_{ij,\rm{hemi}}$, we have
\begin{align}
f^R_{\rm{hemi},i} =
\theta (y)
\,.\end{align}
The form of the measurement simplifies considerably in the limit $y\rightarrow \infty$; in particular, the dependence on the azimuthal angle vanishes,
\begin{align}
f_{\infty,i} = 2 \frac{\omega_i}{Q_i} \gamma^{-1} b^{\alpha_i/2} e^{-(1-\alpha_i)y}\,.
\end{align}
The hemisphere contribution is now
\begin{align}\label{eq:hemii}
S^{(1)}_{ij,\rm{hemi}}(m_i=\mathcal{T}^{i,\alpha_i}_{1},\mu)
&= \frac{\alpha_s}{\pi}\,\mathbf{T}_i \!\cdot\! \mathbf{T}_j
\int_{-\infty}^\infty\!\mathrm{d} y\,
f^R_{\rm{hemi},i} \, f_{\infty,i}^{2\epsilon}
\nonumber \\ & \quad \times
\Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m_i} + 2\epsilon \Big(\ln^2 \frac{\mu}{m_i} - \frac{\pi^2}{24}\Big)\Big] + I_{\rm{hemi},i}
\nonumber \\
& = \frac{\alpha_s}{2\pi}\frac{1}{ (1- \alpha_i)}\,\mathbf{T}_i \!\cdot\! \mathbf{T}_j \Big( \frac{1}{\epsilon^2} +\frac{2}{\epsilon} \ln \frac{ B_i \mu}{m_i} + 2 \ln^2 \frac{ B_i \mu}{m_i} - \frac{\pi^2}{12} \Big) + I_{\rm{hemi},i} \,,
\end{align}
where
\begin{align}\label{eq:B}
B_i = 2\frac{\omega_i}{Q_i} \gamma^{-1} b^{\alpha_i/2} \,,
\end{align}
and the remaining finite integral is
\begin{align}\label{eq:Ihemi}
I_{\rm{hemi},i} = \frac{\alpha_s}{\pi^2}\,\mathbf{T}_i \!\cdot\! \mathbf{T}_j
\int_{-\infty}^\infty\!\mathrm{d} y\,
\int_{0}^{2 \pi}\!\mathrm{d} \phi\,
f^R_{\rm{hemi},i} \, \ln \frac{f^M_i}{f_{\infty,i}}\,.
\end{align}
The second hemisphere contribution $S^{(1)}_{ji,\rm{hemi}}(m_j=\mathcal{T}^{j,\alpha_j}_{1},\mu)$, describing the region $ y<0$,
is given by replacing $i\rightarrow j$ in the final line of \eq{hemii}.
Next we calculate the boundary contribution, shown in the second and third column of \fig{regions}. The integration over $y$ and $\phi$ is finite and we can use \eq{outside} to write
\begin{align} \label{eq:non-hemiA}
S^{(1)}_{ij,\mathrm{bound}}(m_i,\mu) &= \frac{\alpha_s}{2\pi^2}\,\mathbf{T}_i \!\cdot\! \mathbf{T}_j
\int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\,
f^R_{ij,\mathrm{bound}}
\Big(\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m_i} + 2\ln f^M_{i} \Big) \,,
\end{align}
with
\begin{align}
f^R_{ij,\mathrm{bound}} = \theta(- y)\theta(k_j-k_i) - \theta( y)\theta(k_i-k_j)\,.
\end{align}
The region for $S^{(1)}_{ji,\mathrm{bound}}(m_j,\mu)$ is given by $f^R_{ji,\mathrm{bound}}=-f^R_{ij,\mathrm{bound}}$. The two contributions therefore cover regions of equal area but enter with opposite signs, such that the poles cancel in their sum. The total boundary contribution is thus
\begin{align}
S^{(1)}_{ij,\mathrm{bound}}(m_i,\mu) &+ S^{(1)}_{ji,\mathrm{bound}}(m_j,\mu) = \frac{\alpha_s}{2\pi^2}\,\mathbf{T}_i \!\cdot\! \mathbf{T}_j
\int_{-\infty}^\infty\!\!\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\, f^R_{ij,\mathrm{bound}} \Big( 2 \ln\frac{m_j}{m_i} + 2 \ln \frac{f^M_{i}}{f^M_{j}} \Big)\,.
\end{align}
The measurement functions for the non-hemisphere contributions,
$S^{(1)}_{ij,m} (m_m,\mu)$ and $S^{(1)}_{ij,m} (m_i,\mu)$, shown in the last two columns of \fig{regions},
are defined on the same region,
\begin{align}
f^R_{ij,m}=
\theta (k_{j} - k_{i})
\theta (k_{i} - k_{m})
\,.\end{align}
Application of \eq{outside} gives
\begin{align}
S^{(1)}_{ij,m}(m_{m}=\mathcal{T}^{m,\alpha_{m}}_{1},\mu) &= \frac{\alpha_s}{2\pi^2}\,\mathbf{T}_i \!\cdot\! \mathbf{T}_j
\int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\, f^R_{ij,m}
\Big(\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m_{m}} + 2 \ln f^M_{m} \Big) \,,
\end{align}
and similarly for $S^{(1)}_{ij,m}(m_{i},\mu)$ with the replacement $m \rightarrow i$.
Subtracting the non-hemisphere $i$ contribution from the non-hemisphere $m$ contribution, the $1/\epsilon$ poles cancel and for
the full non-hemisphere contribution we find
\begin{align}\label{eq:non-hemi}
S^{(1)}_{ij,m}(m_m) \,
- S^{(1)}_{ij,m}(m_i) &=
\frac{\alpha_s}{\pi}\,\mathbf{T}_i \!\cdot\! \mathbf{T}_j \Big( \tilde{I}_0 \ln \frac{m_i}{m_m} + \tilde{I}_1 \Big)\,,
\end{align}
with
\begin{align}\label{eq:ints}
\tilde{I}_0 &= \frac{1}{\pi} \int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\, f^R_{ij,m} \,, \nonumber \\
\tilde{I}_1 &= \frac{1}{\pi} \int_{-\infty}^\infty\!\mathrm{d} y \int_0^{2\pi}\! \mathrm{d} \phi\, f^R_{ij,m} \ln
\frac{f^M_{m}}{f^M_{i}} \,.
\end{align}
Note that $\tilde I_0$ is simply the area of region $m$ with $k_i<k_j$. The result for the second non-hemisphere contribution $S^{(1)}_{ji,m}(m_m) \,
- S^{(1)}_{ji,m}(m_j)$ is obtained by the replacement $i\leftrightarrow j$ in \eq{non-hemi} and \eq{ints}.
We show in \app{Njettiness} how for $\alpha_\ell=0$ these expressions reduce to those in ref.~\cite{Jouttenus:2011wh}.
\section{Multi-differential measurements}
\label{sec:mdiff}
We present results for the soft function and the collinear-soft function for double differential measurements. In \sec{thrust_pT} we consider the simultaneous measurement of (beam) thrust and transverse momentum, and in \sec{tatb} the measurement of two angularities.
\subsection{Thrust and transverse momentum}
\label{sec:thrust_pT}
Following ref.~\cite{Procura:2014cba}, we combine the (beam) thrust and transverse momentum measurements of \secs{thrust}{pT}, which is described by
\begin{align}
\vec f(y,\phi) &= (e^{-|y|},1)
\,.\end{align}
In the asymptotic regime $y\to \pm \infty$ the transverse momentum measurement dominates,
\begin{align}
f_\infty(y,\phi) = 1
\,, \end{align}
leading to~\cite{Procura:2014cba}
\begin{align} \label{eq:S_tau_pT}
S^{(1)}(\vec m=(Q\tau,p_T),\mu)
&= -\frac{\alpha_s C_F}{\pi}\,
\int_{-\infty}^\infty\!\mathrm{d} y\, e^{-\eta |y|}
\bigg[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{\min(m_1 e^{|y|},m_2)}
+ 2\epsilon \Big(\ln^2 \frac{\mu}{m_2} - \frac{\pi^2}{24}\Big)\bigg]
\nonumber \\ & \quad \times
\Big[1 + \eta \Big(-\frac{1}{2\epsilon} + \ln \frac{\nu}{m_2}\Big)\Big]
\nonumber \\ &
= S^{(1)}(m=p_T,\mu) -
\theta(m_2 - m_1)\,
\frac{2\alpha_s C_F}{\pi}\,
\int_0^{\ln (m_2/m_1)}\!\mathrm{d} y\,
2 \ln \frac{m_2}{m_1 e^y}
\nonumber \\ &
= S^{(1)}(m=p_T,\mu) -
\theta(m_2 - m_1)\,
\frac{2\alpha_s C_F}{\pi}\, \ln^2 \frac{m_2}{m_1}
\,.\end{align}
In the second step we first assumed that $\min(m_1 e^{|y|},m_2) = m_2$, leading to the transverse momentum soft function, and corrected for this through the second term.
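The correction integral in the second line is elementary: with $L \equiv \ln(m_2/m_1) \geq 0$,

```latex
\begin{align}
\int_0^{L}\!\mathrm{d} y\, 2 \ln \frac{m_2}{m_1 e^y}
= 2 \int_0^{L}\!\mathrm{d} y\, (L - y)
= 2 \Big(L^2 - \frac{L^2}{2}\Big)
= \ln^2 \frac{m_2}{m_1}
\,.\end{align}
```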
The collinear-soft function for this double differential measurement is a matrix element of (collinear-soft) Wilson lines, and thus leads to the same amplitude as in \eq{master_eta}. However, due to the collinear nature of this radiation, we use the measurement function for the hemisphere it goes into.\footnote{In the calculation one also integrates over the other hemisphere. This is corrected for through zero-bin subtractions~\cite{Manohar:2006nz} that remove the overlap with soft radiation, but vanish in pure dimensional regularization.}
For collinear-soft radiation going into the $y<0$ hemisphere,
\begin{align}
\vec f(y,\phi) &= (e^y,1)
\,, \quad
f_\infty(y,\phi) = \theta(-y)+\theta(y)e^{y}
\,.\end{align}
We thus find
\begin{align}
{\mathscr S}^{(1)}(\vec m=(p^-,p_T),\mu)
&= \frac12 S^{(1)}(\vec m=(Q\tau,p_T),\mu)
\nonumber \\ & \quad
-\frac{\alpha_s C_F}{\pi}\,
\int_{0}^\infty\!\mathrm{d} y\, e^{2\epsilon y}
\bigg[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m_1}
+ 2\epsilon \Big(\ln^2 \frac{\mu}{m_1} - \frac{\pi^2}{24}\Big)\bigg]
\nonumber \\ & \quad
-\theta(m_1 - m_2) \frac{\alpha_s C_F}{\pi}\,
\int_{0}^{\ln(m_1/m_2)}\!\mathrm{d} y\,
2 \ln \frac{m_1}{m_2 e^y}
\nonumber \\ &
= \frac12 S^{(1)}(\vec m=(Q\tau,p_T),\mu)
-\frac{\alpha_s C_F}{\pi}\,
\Big[- \frac{1}{2\epsilon^2} - \frac{1}{\epsilon} \ln \frac{\mu}{m_1} - \ln^2 \frac{\mu}{m_1} + \frac{\pi^2}{24}
\nonumber \\ & \quad
-\theta(m_1 - m_2) \ln^2 \frac{m_1}{m_2}\Big]
\,,\end{align}
exploiting that the measurement is identical to the soft function in \eq{S_tau_pT} for $y<0$.
Our result agrees with ref.~\cite{Procura:2014cba}.\footnote{In the second-to-last expression in eq.~(3.17) of ref.~\cite{Procura:2014cba}, the $\delta(k^+ - |\vec k_\perp|)$ term is equal to zero. Due to a typo, the $\pi^2$ term is a factor 2 too big there.} Note that the collinear-soft function for the hemisphere $y>0$ has an identical expression.
\subsection{Two angularities}
\label{sec:tatb}
We now extend \sec{ang} to consider the measurement of two angularities $\tau_a$ and $\tau_b$ as in refs.~\cite{Larkoski:2013paa,Larkoski:2014tva,Procura:2014cba}. Taking $2>b>a$ (and $a,b\neq 1$) implies $\tau_b > \tau_a$ and
\begin{align} \label{eq:f_two_ang}
\vec f(y,\phi) &= (e^{-|y|(1-a)},e^{-|y|(1-b)})
\,, \quad
f_\infty(y,\phi) = e^{-|y|(1-b)}
\,.\end{align}
Writing $m_{a}=Q \tau_{a}$ and $m_{b}=Q \tau_{b}$, this leads to
\begin{align}
S^{(1)}(\vec m=(m_a,m_b),\mu)
&= S^{(1)}(m_b,\mu)
\nonumber \\ & \quad - \theta(m_b - m_a)\, \frac{2\alpha_s C_F}{\pi}\! \int_0^{\frac{1}{(b-a)} \ln \frac{m_b}{m_a}} \mathrm{d} y\, 2\Big( \ln \frac{m_b}{m_a} + (a-b) y \Big)
\nonumber \\ &= S^{(1)}(m_b,\mu)
- \theta(m_b - m_a)\, \frac{2\alpha_sC_F}{\pi}\, \frac{1}{b-a} \ln^2 \frac{m_b}{m_a}
\,.\end{align}
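The boundary integral in the intermediate step evaluates as follows, with $L \equiv \ln(m_b/m_a) \geq 0$:

```latex
\begin{align}
\int_0^{\frac{L}{b-a}}\!\mathrm{d} y\, 2 \Big(L + (a-b)\, y\Big)
= 2 \Big[L\, y - (b-a)\, \frac{y^2}{2}\Big]_0^{\frac{L}{b-a}}
= \frac{2 L^2}{b-a} - \frac{L^2}{b-a}
= \frac{1}{b-a} \ln^2 \frac{m_b}{m_a}
\,.\end{align}
```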
This agrees with the expression in ref.~\cite{Larkoski:2014tva}, when converting their angular exponents $\alpha$, $\beta$ to our (current) conventions, $\alpha = 2-a$, $\beta = 2 - b$, and taking into account that they consider only one jet which halves the result.
The corresponding collinear-soft function has again the same amplitude but a modified measurement. For collinear-soft radiation going into the $y<0$ hemisphere,
\begin{align}
\vec f(y,\phi) &= (e^{y(1-a)},e^{y(1-b)})
\,, \quad
f_\infty(y,\phi) = \theta(-y)e^{y(1-b)}+\theta(y)e^{y(1-a)},
\end{align}
which is identical to \eq{f_two_ang} for $y<0$ but not for $y>0$.
This leads to
\begin{align} \label{eq:S1_ma_mb}
{\mathscr S}^{(1)}(\vec m=(m_a, m_b),\mu)
&= \frac{1}{2} S^{(1)}(\vec m=(m_a, m_b),\mu)
\nonumber \\ & \quad - \frac{\alpha_s C_F}{\pi} \int_{0}^\infty\!\mathrm{d} y\, e^{2\epsilon y(1-a)} \Big[\frac{1}{\epsilon} + 2 \ln \frac{\mu}{m_a} + 2\epsilon \Big(\ln^2 \frac{\mu}{m_a} - \frac{\pi^2}{24}\Big)\Big]
\nonumber \\ & \quad
- \theta(m_a - m_b) \frac{\alpha_s C_F}{\pi} \int_{0}^{\frac{1}{(b-a)} \ln \frac{m_a}{m_b}} \!\mathrm{d} y\, 2\Big( \ln \frac{m_a}{m_b} + (a-b) y \Big)
\nonumber \\ &=
\frac{1}{2} S^{(1)}(\vec m=(m_a,m_b),\mu) - \frac{1}{2} S^{(1)}(m_a,\mu)
\nonumber \\ & \quad
- \theta(m_a - m_b) \, \frac{\alpha_sC_F}{\pi}\, \frac{1}{b-a} \ln^2 \frac{m_b}{m_a}
\nonumber \\ &=
\frac{1}{2} S^{(1)}(m_b,\mu) - \frac{1}{2} S^{(1)}(m_a,\mu)
- \frac{\alpha_sC_F}{\pi}\, \frac{1}{b-a} \ln^2 \frac{m_b}{m_a}
\,. \end{align}
This is consistent with matching the SCET${}_+$ factorization theorem in the bulk with the \ensuremath{{\rm SCET}_{\rm I}}\xspace factorization on the boundary, discussed in sec.~4 of ref.~\cite{Procura:2014cba}, since the last term in the second-to-last line of \eq{S1_ma_mb} drops out due to $m_b > m_a$.
It may not be a priori obvious that the collinear-soft function satisfies the kinematic constraint $m_a < m_b$. However, inserting the collinear-soft scale
\begin{align}
\mu_{\mathscr S} = \big(m_a^{b-1} m_b^{1-a}\big)^{1/(b-a)}
\end{align}
in the finite terms gives,
\begin{align}
{\mathscr S}^{(1)}(\vec m=(m_a, m_b),\mu_{\mathscr S}) &= \frac{\alpha_s C_F}{\pi} \Big(\frac{1}{b-1}\, \ln^2 \frac{\mu_{\mathscr S}}{m_b} +
\frac{1}{1-a}\, \ln^2 \frac{\mu_{\mathscr S}}{m_a} - \frac{1}{b-a} \ln^2 \frac{m_b}{m_a}\Big)
\nonumber \\
&= 0
\,. \end{align}
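This cancellation can also be checked symbolically. A brief sketch (ours, using \texttt{sympy}), working directly with the logarithms $x=\ln m_a$ and $z=\ln m_b$:

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)
x, z = sp.symbols('x z', real=True)  # x = ln(m_a), z = ln(m_b)

# ln of the collinear-soft scale mu_S = (m_a^(b-1) m_b^(1-a))^(1/(b-a))
ln_muS = ((b - 1) * x + (1 - a) * z) / (b - a)

# finite terms of the collinear-soft function evaluated at mu = mu_S
finite = ((ln_muS - z)**2 / (b - 1)
          + (ln_muS - x)**2 / (1 - a)
          - (z - x)**2 / (b - a))

assert sp.simplify(finite) == 0  # the finite terms vanish at mu_S
```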
\section{Conclusions}
\label{sec:conclusions}
We have presented a convenient method for calculating the effect of soft QCD radiation at one-loop order, for generic $N$-jet processes and measurements. It exploits the fact that soft emissions are uniform in rapidity and azimuthal angle. By isolating the divergent parts, we are able to perform a partial expansion in the regulators before integration, which simplifies the calculation of the poles and directly yields an integral for the finite terms. Working with cumulative distributions avoids complications from plus distributions in intermediate expressions. To demonstrate the ease of the calculational framework, we compute soft functions for a range of processes and measurements. We obtain original results for the soft function for $N$-jettiness with generic jet angularities, which required an extension of the hemisphere decomposition~\cite{Jouttenus:2011wh} to make the complicated boundaries between regions tractable, see also ref.~\cite{softfunction}.
We also determine the collinear-soft function for the measurement of two angularities for the first time.
An automated approach to the two-loop soft function for dijets is underway~\cite{SCETsoft2lTalk}.
Our method reduces the work required for calculating one-loop soft functions, and can for example be applied to calculate the soft functions for the recently introduced XCone class of jet algorithms~\cite{Stewart:2015waa}.
\begin{acknowledgments}
This work is supported by the European Community under the ``Ideas'' program QWORK (contract 320389), by the Netherlands Organization for Scientific Research (NWO) through a VENI grant, and by the D-ITP consortium, a program of the NWO that is funded by the Dutch Ministry of Education, Culture and Science (OCW).
\end{acknowledgments}
\section{Introduction}
At present it is not known which experiment will
lead to the first reliable, prototypical quantum computing device.
Quantum systems with two states, called qubits,
are taken to be the basic unit for quantum information
processing and storage. However, in practice, these
two states are often only two of a larger set of states.
Therefore, one may wonder if a higher-dimensional system will
eventually be used in its entirety for quantum computing.
Higher dimensional quantum systems,
which contain $d$ orthogonal states (called $d$-state systems
hereafter), have many interesting
properties which differ from those systems which have
$d=2$ and may have advantages for quantum information processing.
For example two three-state systems, or qutrits, can be more
entangled than two qubits \cite{Caves/Milburn:99,Rungta/etal:00,qutritent}.
$d$-state systems can also share a larger fraction of
their entanglement \cite{Wootters:dqudits}.
In addition to the differences in entanglement properties for
quantum systems with more than two orthogonal
states, there are differences
in the selection rules governing the transitions between states.
Some of these selection rules
are referred to as ``superselection'' rules.
(See \cite{Kitaev/etal:ss} and references therein.)
In the present article,
a superselection rule will be taken to mean that a system's
``principal" quantum numbers cannot be changed in a
closed system. A principal quantum number is defined
here as one that identifies an irreducible
representation (irrep) of a group.
An example of such a rule is the preservation of the
principal quantum number $j$ which applies when $J^2$
is a constant of the motion. The differences
in selection rules
arise, in part, from the fact that for systems
with $d\geq 3$, more than one principal
quantum number is required.
Selection rules, including superselection rules,
play an important role in quantum theory
\cite{Wick/etal:52,Aharanov/Susskind,Wick/etal:70}.
They often define
a set of physically accessible states within a particular
experiment. Superselection rules could have important consequences
for some quantum information processing protocols
\cite{Verst/Cirac:03,Bartlett/Wiseman:03,Mayers:02,Kitaev/etal:ss,Bartlett/etal:04}.
However, in the case of quantum cryptographic
protocols, it has been shown that superselection rules do
not aid in their security since these rules can,
in principle, be violated \cite{Kitaev/etal:ss}.
Here, the importance of selection
rules for quantum information processing in other realms
is explored. We will see that selection rules
have important implications for the theory of
decoherence-free, or noiseless, subsystems (DFS/NS)
\cite{Zanardi:97c,Duan:98,Lidar:PRL98,Knill:99a,Kempe:00,Lidar:00a}
(for recent reviews see \cite{Lidar/Whaley:03,Byrd/etal:pqe04}),
a topic which was also mentioned in connection with
superselection rules in \cite{Bartlett/etal:04}.
A DFS/NS can be described by a set of
selection rules which are obeyed by a system-bath interaction.
To compute on a DFS/NS, one violates the system-bath selection
rule by using externally applied controls.
Taking advantage of these selection rules and encoding in a DFS/NS
has been shown to enable universal quantum
computing on a noiseless subspace
using a limited set of interactions.
For example for certain DFS/NSs, the Heisenberg exchange
interaction alone can be universal
\cite{Bacon:Sydney,DiVincenzo:00a,Levy:01,Kempe:00,Lidar/Wu:01,Byrd/Lidar:ss}.
This is quite an advantage in those
systems which have readily available exchange interactions,
but no other gating operations which are able to be
easily or quickly performed.
DFS/NSs also show promise for error protection and universal
computing when combined with other methods.
(See \cite{Byrd/etal:pqe04} for a review.) One way
to do this is to
use dynamical decoupling to eliminate the noise on DFS/NS
encoded qubits \cite{Wu/etal:02,Byrd/etal:05}. Such
``leakage elimination operations'' \cite{Wu/etal:02,Byrd/etal:05}
can be used to prevent
coupling of the two-state system whether it is
a subspace of a $d$-state system or a logical qubit composed
of a subspace of a set of physical qubits.
In the case of leakage elimination and also
the elimination of gating errors in a multistate system
\cite{Tian:00}, controls are used to eliminate
interactions between a subsystem (usually a physical or encoded
qubit) of a multilevel system and other states in the system.
The objective in this
article is to explore the possibility of using
the entire $d$-state system for quantum information processing.
Computing with physical $d$-state systems
was discussed in Refs.~\cite{Bullock/etal:05,Brennen/etal:05}
and distillation protocols for physical $d$-state systems were
discussed in Ref.~\cite{Bombin:05}.
Both of these articles deal with quantum
computing using $d$-state systems while
the present article concerns encoding quantum information into
collective DFS/NS using $d$-state systems.
This is done by first describing a connection between quantum
selection rules and operator algebras with group representation
theory, operator algebras being useful for the description
of DFS/NSs \cite{Knill:99a}. Then, as mentioned
above, implications of this for DFS/NS theory
are examined.
More specifically, this article is organized as follows.
Section \ref{sec:gpth} contains conventions and
labeling which will provide a basis
for the group-theoretical treatment
in this article including the definition of different types
of selection rules. Section \ref{sec:dfss}
reviews some formal aspects of decoherence-free, or
noiseless, subsystems. Conventions for the choice
of basis and eigenvalues for three-state systems are
provided in Section \ref{sec:3sts}.
These results will then
be used in Section \ref{sec:3stdfss}
for the construction of DFS/NS
from systems having more than two orthogonal
states. This includes details of
a decoherence-free, or noiseless, qubit
which is constructed from three qutrits and is immune to
arbitrary collective errors.
The properties which are important for the
generalization to higher-dimensional systems are also
important for the simulation of quantum systems with
quantum systems. Simulations and future work are
discussed in the concluding section, Section \ref{sec:concl}.
Two appendices provide some group-theoretical
definitions, properties of Young tableaux, singlet
states, and a basis for $3\times 3$ matrices
which are used in the text.
\section{Background}
\label{sec:gpth}
In Appendix \ref{app:gpth}, several definitions are given
which are required in much of the rest of the article.
These definitions can also be found in Cornwell
\cite{Cornwell:84I+II} (with slight differences). The
comments in Appendix \ref{app:gpth}
are added to provide some extra explanation and
context. For our purposes, it is enough to note that
there exist two inequivalent, fundamental, irreducible
representations of $SU(d)$ for all $d\geq 3$.
Definitions of ``inequivalent representations'' and
``fundamental representations'' are two of the
definitions provided in Appendix \ref{app:gpth}.
For $SU(3)$, the two fundamental
irreps will be denoted by ${\bf 3}$
and ${\bf{\bar{3}}}$. In general, representations will be
denoted by bold-faced numbers with the numbers
indicating the dimension of the representation.
States within these two
irreps will be denoted $\ket{i}$ and $\ket{\bar{i}}$ respectively.
Tensor products will be written, for example, as $\ket{ii}$
($=\ket{i}\otimes\ket{i}$),
$\ket{i\bar{i}}$ ($=\ket{i}\otimes\ket{\bar{i}}$), etc.
\subsection{Labels for Irreps}
For physical systems, a complete set of labels for the states
is quite important since a complete set of labels is required
in order to distinguish elements of a complete
set of mutually orthonormal
states. In this section, such labels are discussed
generally and then given explicitly
for irreducible representations of $SU(2)$. In
Sec.~\ref{sec:3sts}, further discussion of this point is
taken up and explicit labels for $SU(3)$ are given.
Let $U$ be an element of a matrix representation of a Lie group,
parameterized by a set of parameters
$a_i$; $U=U(a_i)$.
The elements of the matrix may then be denoted:
\begin{equation}
D^{(r)}_{m,m^\prime}(a_i) = \bra{r,m}U(a_i)\ket{r,m^\prime}.
\end{equation}
In general, for $d\geq 3$, $r$ will represent more than
one number. Similarly, $m$ and $m^\prime$ will each
represent more than one number.
Quantum numbers $r$ represent ``principal quantum numbers''
and the quantum numbers $m,m^\prime$ will be referred to as
``secondary quantum numbers.''
Let us give the familiar example of angular momentum in
quantum mechanics. The principal quantum number is
taken to be $j$ which labels the total angular
momentum through its relationship with the eigenvalue of the
total angular momentum operator, $J^2$:
\begin{equation}
J^2\ket{j,m} = j(j+1)\ket{j,m}.
\end{equation}
If Euler angles $\alpha,\beta,\gamma$ are chosen to parameterize
the matrix $U$, then the matrix elements are given by
$$
D^{(j)}_{m,m^\prime}(\alpha,\beta,\gamma) =
\bra{j,m}U(\alpha,\beta,\gamma)\ket{j,m^\prime}.
$$
Here $j$ is the principal quantum number (in this case there
is only one), and $m,m^\prime$ label states within an irrep. Transitions
may occur which change the $z$ component of the angular momentum
$m$, but if $J^2$ is a constant of the motion, then
$j$ will not change.
These labels let us define a superselection rule as one
for which the principal quantum numbers are conserved; i.e.,
a superselection rule exists, and is not violated, if
one cannot transform a state in one irreducible representation
to a state in a different irreducible representation.
If two such representations are accessible and
equivalent, we can include another ``principal quantum
number'' to label this degeneracy.
\subsection{Superselection rules}
To make connection
with previous work, note that the definition of a superselection
rule used in this article is not significantly
different from the one used by \cite{Kitaev/etal:ss} and
\cite{Bartlett/etal:04} which state that a local superselection
rule exists if there is a symmetry in the system. In other words,
a superselection rule exists
if the system has the property that it is invariant under a
group of transformations ${\cal G}$, viz.,
\begin{equation}
\dmat{\psi}{\psi} = \int_{\cal G} U \dmat{\psi}{\psi} U^\dagger dU,
\end{equation}
where $U\in {\cal G}$ and $dU$ is the group-invariant Haar measure.
The existence of such a symmetry
implies that the group of all transformations
on the Hilbert space is reducible. This divides the space into
superselection sectors. Here we identify each superselection
sector with an irrep of a group. By our definitions, a
superselection rule prevents the system from being transformed
from a state within one sector to a state within another.
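This twirl can be illustrated numerically in the simplest case, a single qubit twirled over all of $SU(2)$, for which the only invariant state is the maximally mixed one. The sketch below (ours; the Haar measure is approximated by a finite average over random unitaries obtained from a QR decomposition of a complex Gaussian matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """Haar-random n x n unitary via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # fix the column phases so the distribution is exactly Haar
    return q * (np.diag(r) / np.abs(np.diag(r)))

rho = np.array([[0.9, 0.3], [0.3, 0.1]])  # some qubit density matrix, Tr(rho) = 1

# finite-sample approximation of the group-average integral
twirl = sum(U @ rho @ U.conj().T
            for U in (haar_unitary(2, rng) for _ in range(20000))) / 20000

# the twirled state is (approximately) I/2, the unique SU(2)-invariant state
```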
\section{Decoherence-Free or Noiseless Subsystems}
\label{sec:dfss}
In this section, a brief review of DFS/NS
is provided using the notation
of Refs.~\cite{Kempe:00} and \cite{Byrd/etal:05}. This is
followed by a statement of a theorem which apparently
has not been previously
applied to DFS/NS theory and which formally relates
group theoretical representation theory to algebraic
representation theory. These, together with the example in the
next section, provide an application of these
methods to a known DFS/NS. This section
is then followed by new results.
\subsection{Definitions of DFS/NS}
Consider a system $S$ coupled to a bath $B$ via the Hamiltonian
\begin{equation}
H = H_S\otimes \Bid_B + \Bid_S \otimes H_B + H_I,
\end{equation}
where $H_S$ acts only on the system Hilbert space ${\cal H}_S$,
$H_B$ acts only on the bath Hilbert space ${\cal H}_B$,
$\Bid_S$ is the identity operator on the system Hilbert space,
$\Bid_B$ is the identity operator on the bath Hilbert space,
and $H_I$ is the
interaction Hamiltonian which acts on both the system and
bath Hilbert spaces ${\cal H}_S\otimes {\cal H}_B$
and couples the two. In general, $H_I$ can be written
as a sum of operators which act on the system
($S_\alpha$) and operators which act on the bath ($B_\alpha$),
\begin{equation}
H_I = \sum_\alpha S_\alpha \otimes B_\alpha.
\end{equation}
If there is no interaction Hamiltonian, i.e., when
$H_I=0$, the system and bath evolve separately and unitarily:
\begin{equation}
U(t) = \exp[-iH_St]\otimes \exp[-iH_Bt],
\end{equation}
where $\hbar =1$.
Consider the
(associative) algebra, denoted ${\cal A}$, generated
by $H_S$ and the set of $S_\alpha$.
This is a $\dagger$-closed algebra
(if $A_i\in {\cal A}$, then $A_i^\dagger \in {\cal A}$) which
is, by assumption, reducible. This implies that the algebra is
isomorphic to a direct sum of $d_J\times d_J$ complex matrix
algebras, each with multiplicity $n_J$:
\begin{equation}
\label{eq:Adef}
{\cal A} \cong \underset{J\in {\cal J}}{\oplus}\Bid_{n_J}\otimes
{\cal M}(d_J,\mathbb{C}).
\end{equation}
${\cal J}$ is a finite set labeling the irreducible components of
${\cal A}$, and ${\cal M}(d_J,\mathbb{C})$ denotes a $d_J\times d_J$
complex matrix algebra.
The commutant ${\cal A}^\prime$ of ${\cal A}$ is defined by
\begin{equation}
{\cal A}^\prime = \{X:[X,A]=0, \; \forall \; A \in {\cal A}\}.
\end{equation}
The elements of this set also form a $\dagger$-closed algebra.
This algebra is also reducible, with
\begin{equation}
{\cal A}^\prime \cong \underset{J\in {\cal J}}{\oplus}
{\cal M}(n_J,\mathbb{C})\otimes\Bid_{d_J}.
\end{equation}
An element of ${\cal A}$ can be written in block-diagonal form
with $J$ denoting subblocks given in Eq.~(\ref{eq:Adef}).
A typical block with given $J$ can be further decomposed as
\begin{equation}
\label{eq:dfsmatrix}
\left[
\begin{tabular}{ccccccccccc}
\cline{1-3}
\multicolumn{1}{|c}{} & & & \multicolumn{1}{|c}{} & & & & & & & \\
\multicolumn{1}{|c}{} & $M_{\alpha }$ & & \multicolumn{1}{|c}{} & & & &
& & & $\lambda =0$ \\
\multicolumn{1}{|c}{} & & & \multicolumn{1}{|c}{} & & & $\mu $ & & &
& \\ \cline{1-6}
& & & \multicolumn{1}{|c}{} & & & \multicolumn{1}{|c}{$0$} & & & &
\\
& & & \multicolumn{1}{|c}{} & $M_{\alpha }$ & & \multicolumn{1}{|c}{$
\vdots $} & & & & $\lambda =1$ \\
& & & \multicolumn{1}{|c}{} & & & \multicolumn{1}{|c}{$d_{J}-1$} & & &
& \\ \cline{4-6}
& & $\mu ^{\prime }:$ & $0$ & $\cdots $ & $d_{J}-1$ & $\ddots $ & & & &
\\ \cline{8-10}
& & & & & & & \multicolumn{1}{|c}{} & & & \multicolumn{1}{|c}{} \\
& & & & & & & \multicolumn{1}{|c}{} & $M_{\alpha }$ & &
\multicolumn{1}{|c}{$\lambda =n_{J}-1$} \\
& & & & & & & \multicolumn{1}{|c}{} & & & \multicolumn{1}{|c}{} \\
\cline{8-10}
\end{tabular}
\right]
\end{equation}
Here $\lambda$ labels the different degenerate subblocks,
$0\leq\lambda \leq n_J-1$ and $\mu$ labels the states inside each
$\lambda$-subblock, $0\leq \mu\leq d_J-1$.
Associated with this decomposition of the algebra ${\cal A}$ is the
decomposition of the system Hilbert space:
\begin{equation}
{\cal H}_{S}=\sum_{J\in {\cal J}}\mathbb{C}^{n_{J}}\otimes \mathbb{C}^{d_{J}}.
\label{eq:repspc}
\end{equation}
Decoherence-free or noiseless subsystems can now be defined. Let
$\{\ket{\lambda_\mu}\}$, denote a subspace of ${\cal H}_S$ with
given $J$. Then the condition for the existence of an irreducible
decomposition is
\begin{equation}
A_\alpha\ket{\lambda_\mu} =
\sum_{\mu^\prime =1}^{d_J}M_{\mu\mu^\prime,\alpha}\ket{\lambda_{\mu^\prime}}
\end{equation}
for all $A_\alpha$, $\lambda$, and $\mu$. Notice that
$M_{\mu\mu^\prime,\alpha}$ does not depend on $\lambda$.
Thus for a fixed $\lambda$, the subspace spanned by $\ket{\lambda_\mu}$
is acted upon in some nontrivial way. However, because
$M_{\mu\mu^\prime,\alpha}$ is not dependent on $\lambda$, each
subspace defined by a fixed $\mu$ and running over
$\lambda$ is acted upon in an identical manner by the decoherence
process. The information is stored in blocks with the same $J$,
but different $\lambda$, and this defines a DFS/NS.
Therefore, the labels which define the decoherence-free,
or noiseless, states are the $\lambda$.
A decoherence-free or noiseless {\it subspace} is one for which
the matrices $M_{\mu\mu^\prime,\alpha}$ are $1\times 1$, i.e.,
they are numbers. In that case the $M$ act on a one-dimensional
representation, which is necessarily a singlet state.
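For the case of $n$ physical qubits, the multiplicities $n_J$ appearing in Eq.~(\ref{eq:repspc}) can be generated by adding one spin-$1/2$ at a time. A minimal sketch (ours; the function name is our own):

```python
from fractions import Fraction

def su2_multiplicities(n):
    """Multiplicity n_J of each total spin J in the n-qubit Hilbert space,
    built by adding one spin-1/2 at a time (J -> J + 1/2 and J -> J - 1/2)."""
    mult = {Fraction(1, 2): 1}                 # a single qubit: one J = 1/2
    for _ in range(n - 1):
        new = {}
        for J, m in mult.items():
            for Jp in (J + Fraction(1, 2), J - Fraction(1, 2)):
                if Jp >= 0:
                    new[Jp] = new.get(Jp, 0) + m
        mult = new
    return mult

m3 = su2_multiplicities(3)
# two J=1/2 doublets and one J=3/2 quadruplet: 2*2 + 1*4 = 8 states
```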
\subsection{Weyl's unitary trick}
In the following sections, group theoretical methods will be
used to identify DFS/NSs. In particular, the representation theory
of $SU(d)$ will be used repeatedly in order to find degeneracies
which are able to represent DFS/NSs. It is not immediately obvious
that there exists an equivalence between group representation
theory and the algebraic representation theory above.
In other words, it may not
be clear that the representation theory of the algebra of
complex matrices is directly related to the theory of representations
of the unitary groups.
However, the representations are directly related and
the use of this relation is sometimes
referred to as Weyl's unitary trick.
The theorem from Huang \cite{Huang:repbook} is stated here
without proof. (For a proof, see \cite{Huang:repbook}.)
{\it Weyl's unitary trick.}
The following sets of representations on finite-dimensional
vector spaces are in one-to-one correspondence. Moreover, under the
correspondence, invariant subspaces and equivalences are
preserved.
\begin{itemize}
\item[(i)] holomorphic representations of $SL(d,\mathbb{C})$;
\item[(ii)] representations of $SL(d,\mathbb{R})$;
\item[(iii)] representations of $SU(d)$;
\item[(iv)] representations of $sl(d,\mathbb{R})$;
\item[(v)] representations of $su(d)$;
\item[(vi)] complex linear representations of $sl(d,\mathbb{C})$.
\end{itemize}
Here a holomorphic (analytic)
representation of $SL(d,\mathbb{C})$ is
defined to be a homomorphism which is also a holomorphic map.
The lowercase letters designate an algebra rather than a group.
For example, $su(2)$ is the Lie algebra of the group $SU(2)$.
As an application of this ``trick,'' let us consider collective
decoherence effects. These are noises which act identically
on every physical subsystem. Let $\pi(L^\beta)$ be
a representation of a basis element $L^\beta$ of an
abstract algebra. Let $\ket{a_i}$ and their tensor products
carry a representation of a group $G$ generated by
the set $\{L^\beta\}$, denoted ${\cal L}$. Then
\begin{gather}
\pi_e(L^\beta)(\ket{a_1}\otimes\ket{a_2}\otimes \cdots \ket{a_m})
\phantom{mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm} \nonumber \\
\phantom{i} = [\pi_1(L^\beta)\ket{a_1}]\otimes(\ket{a_2}\otimes\cdots\ket{a_m})
\nonumber \\
\phantom{mmi} + \ket{a_1}\otimes[\pi_2(L^\beta)\ket{a_2}]\otimes\cdots \ket{a_m}
\nonumber \\
+ \cdots \phantom{mmmmmmmmmm} \nonumber \\
\phantom{mmmmmmii} + \ket{a_1}\otimes\ket{a_2}\otimes \cdots[\pi_m(L^\beta)\ket{a_m}],
\label{eq:genalgaction}
\end{gather}
where $\pi_e$ is the representation on the entire space and
$\pi_i$ is the representation on the $i^{th}$ subsystem.
The algebra acting this way corresponds to the quantum numbers
being additive. For collective decoherence on a set of
$m$ physical qudits, each $\pi_i(L^\beta)$ is identical.
This provides a correspondence between the algebraic elements
$L^\beta$ acting on the group and the algebraic elements
which act as noises on the states and establishes
the relation between tensor products of representations and
direct sums of representations.
To exemplify and clarify these statements, the
three-qubit DFS/NS is reexamined in the following
section. This will show how to provide generalizations
of these operators, and the corresponding DFS/NSs to
$d$-state systems.
\subsection{Example: Three-qubit DFS/NS}
\label{sec:3qbdfs}
The review of the three-qubit noiseless
subsystem will enable the introduction of some general techniques,
including Young tableau, in a more familiar context.
(Rules for using Young
tableau are given in \cite{Biedenharn,Cornwell:84II}
and briefly discussed in Appendix \ref{app:gpth}.)
In this case a decoherence-free, or noiseless,
subsystem is formed
from two doublet states in the Hilbert space of
three two-state systems. This code
protects a single two-state subspace, referred to as the
encoded qubit, from collective errors.
Let us use Young tableaux to find the doublets. For qubits,
$d=2$, so there are two possible numbers with which boxes of
a Young diagram can be filled. Let us consider the
following example of one box:
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular}
$$
Filling this with either a 1 or a 2 implies
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \,1\, \\ \hline
\end{tabular}\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \,2\, \\ \hline
\end{tabular}
$$
This is a doublet, or two-dimensional representation of $SU(2)$.
Taking the product gives
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular}
\otimes
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular} = \setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\phantom{ai} \\ \hline
\end{tabular}
\oplus
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline \phantom{ai} & \phantom{ai} \\ \hline
\end{tabular}.
$$
These can be filled in the following ways according
to the rules for using Young tableaux. The first can
only have
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \,1\, \\ \hline
\,2\, \\ \hline
\end{tabular}
$$
The second can have
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline \,1\, & \,1\, \\ \hline
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline \,1\,& \,2\, \\ \hline
\end{tabular}\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline \,2\, & \,2\, \\ \hline
\end{tabular}.
$$
Thus the first diagram gives a singlet and the second a triplet.
This can be summarized in the equation
${\bf 2} \otimes {\bf 2} = {\bf 3}\oplus {\bf 1}.$
Taking the tensor products of three doublets,
\begin{eqnarray}
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular}
\otimes
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular}
\otimes
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular} \!&=&\! \left(
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\phantom{ai} \\ \hline
\end{tabular}
\oplus
\begin{tabular}{|c|c|}
\hline \phantom{ai} & \phantom{ai} \\ \hline
\end{tabular}\right)\otimes
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular} \nonumber \\
\!&=&\!
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline \phantom{ai} & \phantom{ai}\\ \hline
\phantom{ai} \\ \hhline{|-|~|}
\end{tabular}
\oplus
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline \phantom{ai} & \phantom{ai}\\ \hline
\phantom{ai} \\ \hhline{|-|~|}
\end{tabular}
\oplus
\begin{tabular}{|c|c|c|}
\hline \phantom{ai} & \phantom{ai} & \phantom{ai}\\ \hline
\end{tabular}. \;\;\;\;\;\;\;
\end{eqnarray}
Filling in the numbers implies that there are two doublets and
a quadruplet state in the direct sum decomposition,
giving a total of eight states. (Note that
the set of three vertical boxes which one might have
drawn here is not present. This is because there is
no nonzero state with three antisymmetric indices.)
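The dimensions obtained by filling tableaux can be checked against the hook-content formula, $\dim = \prod_{(i,j)\in\lambda} (d+j-i)/h(i,j)$, where $h(i,j)$ is the hook length of cell $(i,j)$. A minimal sketch (ours; the function name is our own):

```python
def su_d_dim(shape, d):
    """Dimension of the SU(d) irrep labeled by a Young diagram.

    shape: tuple of row lengths, e.g. (2, 1). Uses the hook-content formula
    dim = prod over cells (i, j) of (d + j - i) / hook(i, j)."""
    num, den = 1, 1
    for i, row in enumerate(shape):
        for j in range(row):
            num *= d + j - i                           # content factor
            arm = row - j - 1                          # boxes to the right
            leg = sum(1 for r in shape[i + 1:] if r > j)  # boxes below
            den *= arm + leg + 1                       # hook length
    return num // den

# SU(2): the three-qubit decomposition gives two doublets and a quadruplet,
# while a column of three boxes vanishes
print(su_d_dim((2, 1), 2), su_d_dim((3,), 2), su_d_dim((1, 1, 1), 2))  # 2 4 0
# SU(3): one box is the 3, a column of two boxes is the 3-bar
print(su_d_dim((1,), 3), su_d_dim((1, 1), 3))  # 3 3
```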
Now, the following convention is used for the
computational basis states $\ket{0}$ and $\ket{1}$:
$|0\rangle =|1/2,1/2\rangle ,|1\rangle =|1/2, -1/2\rangle $.
These are the two states of a single spin-$1/2$ particle,
or a representation of the $j=1/2$ representation of
$SU(2)$. This convention is opposite to that of Ref.~\cite{Kempe:00},
but follows the conventions of
\cite{Byrd/etal:05}, both of which provide more detail
than is given here.
The three-qubit DFS/NS encoded qubit will now be represented in
the following way,
\begin{equation}
\left(
\begin{array}{c}
(\left\vert 010\right\rangle -\left\vert 100\right\rangle )/\sqrt{2} \\
(\left\vert 011\right\rangle -\left\vert 101\right\rangle )/\sqrt{2} \\
(2\left\vert 001\right\rangle -\left\vert 010\right\rangle -\left\vert
100\right\rangle )/\sqrt{6} \\
(-2\left\vert 110\right\rangle +\left\vert 011\right\rangle +\left\vert
101\right\rangle )/\sqrt{6} \\
\left\vert 000\right\rangle \\
(\left\vert 001\right\rangle +\left\vert 010\right\rangle +\left\vert
100\right\rangle )/\sqrt{3} \\
(\left\vert 011\right\rangle +\left\vert 101\right\rangle +\left\vert
110\right\rangle )/\sqrt{3} \\
\left\vert 111\right\rangle%
\end{array}%
\right) \overset{\mbox{\LARGE{\phantom{X}}}}{%
\begin{array}{c}
{\bigg\}}\left\vert 0_{L}\right\rangle \\
{\bigg\}}\left\vert 1_{L}\right\rangle \\
{\mbox{\LARGE{${\Bigg\}}$}}}\mathcal{C}^{\perp }%
\end{array}%
} \label{eq:3DFS}
\end{equation}%
With this notation $\left\vert 0_{L}\right\rangle =\alpha
_{0}(\left\vert 010\right\rangle -\left\vert 100\right\rangle )/\sqrt{2}%
+\beta _{0}(\left\vert 011\right\rangle -\left\vert 101\right\rangle )/\sqrt{%
2}$ (arbitrary superposition), and likewise $\left\vert 1_{L}\right\rangle
=\alpha _{1}(2\left\vert 001\right\rangle -\left\vert 010\right\rangle
-\left\vert 100\right\rangle )/\sqrt{6}+\beta _{1}(-2\left\vert
110\right\rangle +\left\vert 011\right\rangle +\left\vert 101\right\rangle )/%
\sqrt{6}$. These states belong to the two $J=1/2$
irreps of $SU(2)$. The coefficients are Wigner-Clebsch-Gordan
coefficients \cite{Bohm:qmbook} and the last four states comprise a $J=3/2$
representation of $SU(2)$. The two $J=1/2$ irreps can be distinguished by a
degeneracy label $\lambda =0,1$. Thus a basis state in the eight-dimensional
Hilbert space is fully identified by the three quantum numbers $|J,\lambda
,\mu \rangle $, where $\mu $ is the $z$-component of the total spin $J$. In
this notation we can write $\left\vert 0_{L}\right\rangle =\alpha
_{0}|1/2,0,1/2\rangle +\beta _{0}|1/2,0,-1/2\rangle $ and $\left\vert
1_{L}\right\rangle =\alpha _{1}|1/2,1,1/2\rangle +\beta
_{1}|1/2,1,-1/2\rangle $.
If this encoded qubit is affected by collective errors, i.e.,
errors that act the same on each physical qubit, then no
information is lost to the environment.
The collective errors are formed from linear
combinations of the operators
$S^\alpha= \sum_i\sigma^\alpha_i$:
\begin{equation}
S = \sum_\alpha a_\alpha S^\alpha,
\end{equation}
where $\alpha=x$, $y$, or $z$ and $i$ denotes the
physical qubit $1$, $2$, or $3$. The states within blocks
$\ket{0_L}$ and $\ket{1_L}$ couple in exactly the same way,
but neither block couples with states outside of that block.
Logical operations create superpositions of these blocks.
This can be described in terms of group
representation theory, using Weyl's unitary trick.
A basis for the algebra which spans the space of collective
errors can be chosen to be the Lie algebra of $SU(2)$, $su(2)$.
If we consider the action of the algebra on the entire space of
the three qubits and suppose that this is a representation
of $su(2)$ as well, then the representation on the entire
space of three qubits is affected by the same operation
on each physical qubit. This is the statement made
generally in Eq.~(\ref{eq:genalgaction}).
For the example of three qubits, the matrix,
Eq.~(\ref{eq:dfsmatrix}), can be found using the DFS/NS
transformation [Eq.~(18) of \cite{Byrd/etal:05}].
In the DFS/NS basis the explicit form is given by
\begin{widetext}
\begin{eqnarray}
S_{\mbox{\scriptsize dfs}} &=& %
U_{\mbox{\scriptsize dfs}}^{\phantom{-1}}SU_{\mbox{\scriptsize dfs}}^{-1}
\nonumber \\
&=& \left(\begin{array}{cccccccc}
a_3 & a_1-ia_2 & 0 & 0 & 0 & 0 & 0 & 0 \\
a_1+ia_2 & -a_3 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & a_3 & a_1-ia_2 & 0 & 0 & 0 & 0 \\
0 & 0 & a_1+ ia_2 & -a_3 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 3a_3 & \sqrt{3}(a_1-ia_2) & 0 & 0 \\
0 & 0 & 0 & 0 & \sqrt{3}(a_1+ia_2) & a_3 & 2(a_1-ia_2) & 0\\
0 & 0 & 0 & 0 & 0 & 2(a_1+ia_2) & a_3 & \sqrt{3}(a_1-ia_2)\\
0 & 0 & 0 & 0 & 0 & 0 & \sqrt{3}(a_1+ia_2) & -3a_3
\end{array}\right). \;\;\;\;\;\;\;\;\;\;\;
\end{eqnarray}
\end{widetext}
Thus we see that the two states of the logical zero transform
in exactly the same way as the two states of the logical one
under collective operations.
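This block structure is straightforward to verify numerically: building $S^\alpha=\sum_i\sigma_i^\alpha$ for three qubits and projecting onto the two $J=1/2$ blocks of Eq.~(\ref{eq:3DFS}) yields identical $2\times 2$ matrices $M_\alpha$ for both blocks and vanishing cross terms. A sketch (ours, using \texttt{numpy}; the state ordering follows the convention $|0\rangle=|1/2,1/2\rangle$ above):

```python
import numpy as np

# Pauli matrices; |0> is spin up, matching the convention in the text
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2)

def collective(sig):
    """S^alpha = sum_i sigma^alpha_i acting on three qubits."""
    return (np.kron(np.kron(sig, I2), I2)
            + np.kron(np.kron(I2, sig), I2)
            + np.kron(np.kron(I2, I2), sig))

def ket(bits):
    v = np.zeros(8)
    v[int(bits, 2)] = 1.0
    return v

s2, s6 = np.sqrt(2), np.sqrt(6)
block0 = [(ket('010') - ket('100')) / s2,                 # |0_L> components
          (ket('011') - ket('101')) / s2]
block1 = [(2 * ket('001') - ket('010') - ket('100')) / s6,  # |1_L> components
          (-2 * ket('110') + ket('011') + ket('101')) / s6]

for S in map(collective, (sx, sy, sz)):
    M0 = np.array([[u.conj() @ S @ v for v in block0] for u in block0])
    M1 = np.array([[u.conj() @ S @ v for v in block1] for u in block1])
    assert np.allclose(M0, M1)         # identical action on both blocks
    cross = np.array([[u.conj() @ S @ v for v in block1] for u in block0])
    assert np.allclose(cross, 0)       # no coupling between the blocks
```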
In the context of this example, let us also
consider a collection of physical qubits. When collective
errors occur, the form of these errors is
\begin{equation}
\pi_e(L^\beta) = \sum_i \pi_i(L^\beta),
\end{equation}
where $L^\beta$ is an element of the algebra and
the subscript identifies the error as acting on the
$i^{th}$ physical qubit.
In other words, the operator $L^\beta$ acts on the
system of qubits by acting with $L_1^\beta$ on qubit 1,
$L_2^\beta$ on qubit 2, etc. Each $L_i^\beta$ is
identical, but acts on a different two-state system.
Therefore,
the statement that collective errors occur, is
the statement that
the entire system of qubits transforms as a representation
of $su(2)$. By the unitary trick, there is a
direct correspondence between the representation theory
of the group $SU(2)$
and the representation theory of the algebra with complex
coefficients. In this case, the $SU(2)$ transformation of
the direct product of three two-dimensional representations
is expressed as the direct sum of two two-dimensional
representations (and a four-dimensional representation).
This perspective of collective errors will now be used in
the following sections to describe DFS/NSs of higher dimensional
Hilbert spaces. Though the arguments here are primarily
restricted to the concrete example of $SU(3)$, they
can readily be extended to any $SU(d)$.
\section{Types of Qutrit States and Labeling}
\label{sec:3sts}
In this section explicit labels are provided for the states
of qutrits. As noted in Appendix \ref{app:gpth} and also
Sec.~\ref{sec:gpth}, two different types of qutrit states exist.
This is true independent of the basis chosen for the algebra.
Usually different sets of bases are
chosen which reflect the symmetry
of the physical system. For example,
given a representation of $SU(3)$, there are
different subgroup chains which provide different
possibilities for sets of measurements, corresponding
to a different choice of basis elements,
\begin{gather}
\label{chain1}
SU(3) \supset SU(2) \supset U(1) \\
\label{chain2}
SU(3) \supset SO(3) \supset SO(2).
\end{gather}
The first is used in particle physics to describe
the three lightest flavors of quarks \cite{Gell-Mann:64}.
Associated with this subgroup chain is the set of
Gell-Mann matrices. The second of these subgroup
chains is used in nuclear physics models, such as that
of Elliott \cite{Elliott:58}. The appropriate set
of measurements depends on the ``good'' quantum numbers
of the system.
To each of these subgroup chains there corresponds a
complete set of commuting operators (CSCO). These operators are
simultaneous observables which can be used to distinguish
the different states within an irrep. In the remainder
of this article, subgroup chain (\ref{chain1}) is
considered almost exclusively although the arguments
can be applied to chain (\ref{chain2}) as well.
By using this example, we
are able to discuss the importance of
the existence of two inequivalent fundamental irreps
of the $SU(d)$ groups, both in the theory of DFS/NS
and also in simulating quantum systems with quantum systems.
\subsection*{Labels}
As stated above, each of the two types of
irreps will be associated with the first subgroup chain
(\ref{chain1}). These will be called qutrit and
barred states. Qutrit states will be associated with the ${{\bf 3}}$
representation and barred states
will be associated with ${{\bf{\bar{3}}}}$, which is the complex
conjugate of the ${{\bf 3}}$ rep. Throughout the rest of
the article, states in the ${{\bf{\bar{3}}}}$ rep will have a bar
over them to distinguish them from states in the
${{\bf 3}}$ rep: for example, the states
$\ket{0},\ket{1},\ket{2}\in {{\bf 3}}$ and
$\ket{\bar{0}},\ket{\bar{1}},\ket{\bar{2}}\in {{\bf{\bar{3}}}}$.
In order to provide a complete set of labels
which distinguish two orthonormal states, a CSCO must be
measured \cite{Bohm:qmbook}.
For irreps of $SU(3)$, the following set of
labels completely describe states within
an irrep. Each is associated with
an operator in the CSCO. Let $p,q$ label the irrep
and $t$ label the
eigenvalue of an $SU(2)$ subgroup of $SU(3)$;
$T^2\ket{\psi} = t(t+1)\ket{\psi}$ for $\ket{\psi}$ an
eigenstate of the operator $T^2$. (Lowercase
letters will represent the eigenvalues of the operators
which will be denoted with an uppercase.) The symbol $t_3$ will
denote the eigenvalue of $T_3$, and $y$ will denote the eigenvalue
of the operator $Y$. In terms of the Gell-Mann matrices,
(see Appendix \ref{app:alg})
\begin{gather}
Y = \frac{1}{\sqrt{3}} \lambda_8, \nonumber \\
T^2 = \frac{1}{4}(\lambda_1^2+\lambda_2^2 +\lambda_3^2), \label{eq:eigenops}\\
T_3 = \frac{1}{2}\lambda_3. \nonumber
\end{gather}
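As a concrete illustration (a sketch, not from the original text; the standard Gell-Mann matrices are assumed to agree with the conventions of Appendix \ref{app:alg}), these three operators are simultaneously diagonal in the computational basis, and their diagonal entries give the quantum numbers $(t(t+1),t_3,y)$ assigned to $\ket{0},\ket{1},\ket{2}$ below:

```python
import numpy as np

# Standard Gell-Mann conventions assumed for lambda_1, lambda_2, lambda_3, lambda_8.
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
l3 = np.diag([1.0, -1.0, 0.0]).astype(complex)
l8 = np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3)

Y  = l8 / np.sqrt(3)
T3 = l3 / 2
T2 = (l1 @ l1 + l2 @ l2 + l3 @ l3) / 4

# All three operators are diagonal here, so the diagonal entries give
# (t(t+1), t_3, y) for each basis state:
# |0>: (3/4,  1/2,  1/3),   |1>: (3/4, -1/2,  1/3),   |2>: (0, 0, -2/3)
for k in range(3):
    print(k, T2[k, k].real, T3[k, k].real, Y[k, k].real)
```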
The quantum numbers $p$ and $q$ can be determined by the
highest-weight states (described later in this section)
or by measurement of the Casimir operators.
(See Appendix \ref{app:alg}.) The Casimir operators,
plus the set of operators in Eq.(\ref{eq:eigenops}) provide
a CSCO. States within any irreps can be written as
$\ket{p,q,t,t_3,y}$, where $p$ and $q$ are assumed to be
fixed and determined by the irrep. To make a connection
with the more familiar case of a spin-$j$ particle, $p,q$
should be considered ``principal" quantum numbers which
label an irrep.
When $J^2$ is a constant of the motion, then $j$ is
fixed and cannot change.
The analog for three-state systems is the
conservation of the two quantum
numbers $p$ and $q$. It will be assumed here,
unless otherwise stated, that $p$ and $q$ are both conserved.
However, whether
or not these quantum numbers are conserved in a particular
experiment depends on the physical system in question.
The comparison
with the unitary representation of the group is made by labeling
the unitary matrices in the following way:
$$
\bra{p,q;t,t_3,y}U\ket{p,q;t^\prime,t_3^\prime,y^\prime}
= D^{(p,q)}_{t,t_3,y;t^\prime,t_3^\prime,y^\prime},
$$
so that the matrix elements of $U$ are
given by the functions $D$ and the rows (columns) are labeled
by the unprimed (primed) numbers.
For $SU(3)$ there are six raising and lowering operators which
take one state in an irrep to another state in the same irrep.
They are often denoted $U_\pm,V_\pm,T_\pm$.
Within an irrep, one can define a unique
``maximum weight state'' (see \cite{Cornwell:84II,symmetry:book}).
This state $\ket{\psi_m}$ is usually defined as the one for
which the following relations hold:
\begin{equation}
\label{eq:maxstaterl}
T_+\ket{\psi_m}=0,\;\;\;V_+\ket{\psi_m}=0,\;\;\;U_-\ket{\psi_m}=0.
\end{equation}
Since the maximum weight state is unique for each irrep, it can
be related to the labels, $p$ and $q$, which identify the irrep,
\begin{equation}
\label{eq:pnq}
t_{3m} = \frac{p+q}{2},\;\;\;\; y_m = \frac{p-q}{3}.
\end{equation}
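For the irreps used in the remainder of this article, Eq.~(\ref{eq:pnq}) gives, for example,

```latex
\begin{eqnarray*}
{\bf 3}:\;(p,q)=(1,0) &\Rightarrow& t_{3m}=1/2,\;\; y_m=1/3, \\
{\bf \bar{3}}:\;(p,q)=(0,1) &\Rightarrow& t_{3m}=1/2,\;\; y_m=-1/3, \\
{\bf 8}:\;(p,q)=(1,1) &\Rightarrow& t_{3m}=1,\;\; y_m=0.
\end{eqnarray*}
```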
The examples of the two fundamental irreps ${{\bf 3}}$ and ${{\bf{\bar{3}}}}$
are given in Fig.~\ref{fig:fundirreps} where the states
are labeled according to the eigenvalues of $Y$ and $T_3$.
\begin{figure}
\includegraphics{fundrepscaled}
\caption{\label{fig:fundirreps} State spaces for the
${\bf 3}$ (left) and ${{\bf{\bar{3}}}}$ (right) reps.}
\end{figure}
The highest weight states in ${\bf 3}$ and ${\bf{\bar{3}}}$ are the
states with $t_3=1/2,y=1/3$ and $t_3 =1/2,y=-1/3$
respectively.
Let us now label the states $\ket{0},\ket{1},\ket{2}$ by
using the full set of quantum numbers,
\begin{eqnarray}
\label{eq:3states}
\ket{0}&=& \ket{1,0,1/2,1/2,1/3}, \nonumber \\
\ket{1}&=& \ket{1,0,1/2,-1/2,1/3},\\
\ket{2}&=& \ket{1,0,0,0,-2/3}. \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:3barstates}
\;\;\;\;\;\ket{\bar{0}}&=& \ket{0,1,1/2,-1/2,-1/3},\nonumber \\
\ket{\bar{1}}&=& \ket{0,1,1/2,1/2,-1/3},\\
\ket{\bar{2}}&=& \ket{0,1,0,0,2/3}. \nonumber
\end{eqnarray}
For future reference, note that an $SU(3)$ singlet state
has the unique property that
\begin{equation}
\label{eq:singletstaterl}
T_\pm\ket{\psi_s}=0,\;\;\;V_\pm\ket{\psi_s}=0,\;\;\;U_\pm\ket{\psi_s}=0.
\end{equation}
This follows from the fact that a ``singlet'' state is one for which
there exists only one state within the irrep (see also
Appendix~\ref{app:singlets} and the comment at the end of
Sec.~III~A).
\section{Decoherence-free subspaces for three-state systems}
\label{sec:3stdfss}
In principle, one can find DFS/NSs from the formal theory
provided in \cite{Knill:99a}. However, until now, there has been
no emphasis on the implications of the various irreps of a
given group (with the exception of \cite{Bartlett/etal:04}).
Here, in particular, it is shown that the
distinction between the ${\bf 3}$ and ${\bar {\bf 3}}$
representations is tantamount to the
identification and use of a DFS/NS for quantum error
avoidance in qutrit systems.
Again, in what follows three-state systems are used
as explicit examples. However, the constructions here are
readily generalizable to $SU(d)$.
\subsection{Product states}
It is common in the quantum information literature to find statements
such as ``... the maximally entangled state of two qutrits,
$\ket{\phi}=\frac{1}{\sqrt{3}}(\ket{00}+\ket{11}+\ket{22})$.''
However, we have just seen that there are two different irreducible
fundamental representations of $SU(3)$. In that case, we should
distinguish between this state, $\ket{\phi}$ and
$\ket{\phi^\prime}=\frac{1}{\sqrt{3}}(\ket{0\bar{0}}
+\ket{1\bar{1}}+\ket{2\bar{2}})$.
This observation has an important consequence since
there is a striking difference between these two states.
{\it The state $\ket{\phi^\prime}$ is an $SU(3)$ singlet,
but $\ket{\phi}$ is not}.
Singlet states are used in the theory of DFS in order to protect
against all forms of errors. The state $\ket{\phi}$ is not
decoherence free.
(To see that $\ket{\phi^\prime}$ is a singlet state, consult
Appendix \ref{app:singlets}.)
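This distinction can also be checked directly. In the sketch below (not from the original text; it assumes the standard Gell-Mann matrices and the convention that barred kets transform with $U^*$, so that the ${\bf\bar{3}}$ generators are $-\lambda^*$), the same coefficient vector represents $\ket{\phi}$ on two unbarred qutrits and $\ket{\phi^\prime}$ on an unbarred-barred pair:

```python
import numpy as np

# All eight Gell-Mann matrices (standard conventions assumed).
lams = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex),
    np.diag([1.0, -1.0, 0.0]).astype(complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex),
    np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3),
]

I3 = np.eye(3)
# Coefficient vector of (|00> + |11> + |22>)/sqrt(3); it stands for |phi> when
# both factors are unbarred, and for |phi'> when the second factor is barred.
v = sum(np.kron(I3[i], I3[i]) for i in range(3)) / np.sqrt(3)

for lam in lams:
    # 3 (x) 3: both factors transform with U -> generator lam(x)I + I(x)lam
    g_unbarred = np.kron(lam, I3) + np.kron(I3, lam)
    # 3 (x) 3-bar: second factor transforms with U^* -> generator I(x)(-lam^*)
    g_barred = np.kron(lam, I3) - np.kron(I3, lam.conj())
    assert np.linalg.norm(g_barred @ v) < 1e-12  # |phi'> is annihilated: a singlet

# |phi> is NOT annihilated by every collective generator, e.g. by lambda_3:
assert np.linalg.norm((np.kron(lams[2], I3) + np.kron(I3, lams[2])) @ v) > 1
```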
Note that, in principle, a local unitary can be found which will
transform the state $\ket{\phi}$ into the state
$\ket{\phi^\prime}$. This implies that
the amount of entanglement in $\ket{\phi}$ is
the same as the amount of entanglement in $\ket{\phi^\prime}$.
Therefore, whereas
it has been conjectured that better quantum error correcting
codes are those with more entanglement present in their states
\cite{Scott:04}, no such correspondence can be made for DFS/NS.
In other words, {\it there is no direct correspondence between
the amount of entanglement in a DFS/NS and the efficacy, or
error avoidance properties, of the encoded DFS/NS states.}
This is indicated by the example just presented, as well as by the
corresponding Young diagram which applies to any two $d$-state
systems.
The states $\ket{\phi}$ and $\ket{\phi^\prime}$,
belong to two different
irreps. Checking all quantum numbers shows that each of the
two states has all secondary quantum numbers, $t,t_3,$ and $y$
equal to zero; that is, $t=0,t_3=0,$ and $y=0$.
However, the principal quantum numbers differ:
$\ket{\phi}$ has $p=2,q=0$ whereas
$\ket{\phi^\prime}$ has $p=0,q=1$. This difference is experimentally
measurable in
several different ways. One way is to
find the highest weight state of the irrep through the use of the
appropriate raising and/or lowering operators. This will identify
$p$ and $q$ through Eqs.~(\ref{eq:pnq}).
Also, the differentiation between these states
has implications for quantum error
avoidance properties of the states.
This provides another important method
for experimental distinction.
\subsection{Young Tableau and DFSs}
\label{sec:youngtrits}
The use of Young's tableau proves very convenient for
exploring the possibility of constructing collective
DFS/NSs from qutrits (or qudits).
When two or more irreducible representations
occur in a tensor product of a set of states of any dimensions, these
identical irreps will transform in the same way under an
$SU(3)$ action on the entire space of physical subsystems
and are therefore candidates for a DFS/NS. (See Section \ref{sec:dfss}).
Let us now examine some tensor products of qutrits to determine
the possibility of constructing DFS/NSs which are immune to
collective errors.
The two different Young tableaux for the ${\bf 3}$ and
${\bf \bar{3}}$ representations are represented by
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular}
$$
which is filled with numbers $1,2,3$
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline 1 \\ \hline
\end{tabular}, \;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline 2 \\ \hline
\end{tabular}, \;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline 3 \\ \hline
\end{tabular},
$$
and
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\phantom{ai} \\ \hline
\end{tabular}
$$
which is filled with numbers $1,2,3$ but is antisymmetric in the
interchange of two rows. This gives the following possibilities:
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline 1 \\ \hline
2 \\ \hline
\end{tabular}, \;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline 1 \\ \hline
3 \\ \hline
\end{tabular}, \;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline 2 \\ \hline
3 \\ \hline
\end{tabular}.
$$
These are the two inequivalent fundamental irreps.
The tensor product of two ${\bf 3}$ gives the following
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular}
\otimes
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular} = \setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\phantom{ai} \\ \hline
\end{tabular}
\oplus
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline \phantom{ai} & \phantom{ai} \\ \hline
\end{tabular}.
$$
The first is the ${\bf \bar{3}}$ rep and the second is a
six-dimensional representation which can be shown by filling
in the boxes with all possible symmetric combinations
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 1 & 1 \\ \hline
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 1 & 2 \\ \hline
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 1 & 3 \\ \hline
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 2 & 2 \\ \hline
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 2 & 3 \\ \hline
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 3 & 3 \\ \hline
\end{tabular}.
$$
To be precise, this is the ${\bf 6}$ rep. The
result of this can be written in the following
equation:
${\bf 3} \otimes {\bf 3} = {{\bf{\bar{3}}}} \oplus {\bf 6}$.
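This counting can be checked with the standard dimension formula for the $SU(3)$ irrep labeled $(p,q)$, $\dim(p,q)=(p+1)(q+1)(p+q+2)/2$ (quoted here without derivation):

```python
def dim_su3(p, q):
    """Dimension of the SU(3) irrep with highest-weight labels (p, q)."""
    return (p + 1) * (q + 1) * (p + q + 2) // 2

# 3 (x) 3 = 3-bar (+) 6, i.e. (1,0) x (1,0) = (0,1) + (2,0)
assert dim_su3(1, 0) == 3 and dim_su3(0, 1) == 3
assert dim_su3(1, 0) * dim_su3(1, 0) == dim_su3(0, 1) + dim_su3(2, 0)  # 9 = 3 + 6
print(dim_su3(2, 0))  # 6
```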
Now, note that the result of the product of
${\bf \bar{3}}$ and ${\bf 3}$ is given by
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\phantom{ai} \\ \hline
\end{tabular} \otimes
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular} =
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline \phantom{ai} & \phantom{ai}\\ \hline
\phantom{ai} \\ \hhline{|-|~|}
\end{tabular}
\oplus
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\phantom{ai} \\ \hline
\phantom{ai} \\ \hline
\end{tabular}.
$$
The first tableau corresponds to an octet of states,
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 1 & 1\\ \hline
2 \\ \hhline{|-|~|}
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 1 & 1\\ \hline
3 \\ \hhline{|-|~|}
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 1 & 2\\ \hline
2 \\ \hhline{|-|~|}
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 1 & 2\\ \hline
3 \\ \hhline{|-|~|}
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 1 & 3\\ \hline
2 \\ \hhline{|-|~|}
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 1 & 3\\ \hline
3 \\ \hhline{|-|~|}
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 2 & 2\\ \hline
3 \\ \hhline{|-|~|}
\end{tabular},\;\;\;
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline 2 & 3\\ \hline
3 \\ \hhline{|-|~|}
\end{tabular}.
$$
The second corresponds to a singlet, as it can only be filled in
one way,
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline 1 \\ \hline
2 \\ \hline
3 \\ \hline
\end{tabular}.
$$
Therefore,
${\bf \bar{3}}\otimes {\bf 3} = {\bf 8} \oplus {\bf 1}$
\cite{octetnote}.
This shows that the product of two three-dimensional
representations of the same type does not give rise to a
singlet state, whereas the product of two reps
of different types does give rise to a singlet state.
Singlet states are decoherence-free since they are annihilated by
all $SU(3)$ operators \cite{Lidar:PRL98}.
Let us consider constructing a decoherence-free, or
noiseless, qubit from qutrits. We have already seen
this is not possible using two identical
qutrits, or an unbarred
and a barred rep. One may naturally ask about
three unbarred (or three barred) states. From the
tableau, it can be shown that
${\bf 3}\otimes {\bf 3}\otimes {\bf 3} =
{\bf 8}\oplus {\bf 8} \oplus {\bf 1} \oplus {\bf 10}$.
This indicates that three qutrit states have a set of
two degenerate reps. This implies that a DFS/NS can
be constructed with the two degenerate states representing
the logical zero and logical one states of a qubit which
is immune to collective noise.
Note, however, that the product of a barred and two unbarred
reps will have the following decomposition:
${\bf{\bar{3}}}\otimes {\bf 3} \otimes {\bf 3} = {\bf 15}\oplus {\bf 3}
\oplus {\bf 3} \oplus {\bf \bar{6}}$.
This indicates that one may also construct a DFS/NS from this
set of states which can represent a decoherence-free qubit.
Certainly these two are quite different subsystems. The
first has two degenerate eight-state subsystems and the
second has two degenerate three-state systems.
In order to find the fewest number of physical qutrits
which can be encoded such that a logical qutrit is
protected from collective errors, four qutrits are
taken:
${\bf 3}\otimes {\bf 3}\otimes {\bf 3}\otimes {\bf 3}
= {\bf 3}\oplus {\bf 3} \oplus {\bf 3} \oplus {\bf \bar{6}}
\oplus {\bf \bar{6}}\oplus {\bf 15}\oplus {\bf 15}\oplus {\bf 15}
\oplus {\bf \bar{15}}$. In this case, a decoherence-free qutrit
can be encoded in the three degenerate three-dimensional irreps,
and four is therefore the smallest number of physical qutrits
which can support such a qutrit DFS/NS.
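The dimension formula $\dim(p,q)=(p+1)(q+1)(p+q+2)/2$ provides a quick bookkeeping check of all the decompositions above (a sketch, not from the original text; note that both $(4,0)$ and $(1,2)$ have dimension 15, so the four-qutrit count is insensitive to which label is meant by the last factor):

```python
def dim_su3(p, q):
    """Dimension of the SU(3) irrep with highest-weight labels (p, q)."""
    return (p + 1) * (q + 1) * (p + q + 2) // 2

# 3-bar (x) 3 = 8 (+) 1
assert 3 * 3 == dim_su3(1, 1) + dim_su3(0, 0)
# 3 (x) 3 (x) 3 = 8 (+) 8 (+) 1 (+) 10
assert 3 ** 3 == 2 * dim_su3(1, 1) + dim_su3(0, 0) + dim_su3(3, 0)
# 3-bar (x) 3 (x) 3 = 15 (+) 3 (+) 3 (+) 6-bar
assert 3 ** 3 == dim_su3(2, 1) + 2 * dim_su3(1, 0) + dim_su3(0, 2)
# Four qutrits: three 3's, two 6-bar's, three 15's, one further 15-dim irrep
assert 3 ** 4 == (3 * dim_su3(1, 0) + 2 * dim_su3(0, 2)
                  + 3 * dim_su3(2, 1) + dim_su3(4, 0))
```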
The analysis can be used for any $d$-state
systems. For example, one may ask for the least number
of physical $d$-state systems which can be used to
encode a logical qubit which is decoherence free with
respect to collective errors.
The answer can be found by again using Young tableaux.
{\it The tensor product of three $d$-state systems can
be used to encode a logical qubit into an NS/DFS.}
This can be seen in the tableau of any ${\bf d}$
representation
$$
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular}.
$$
Taking the tensor product of three such systems produces the following
tableau
\begin{eqnarray}
\label{tableau:3dfs}
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular}
\otimes
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular}
\otimes
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular} \!&=&\!
\left(
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\phantom{ai} \\ \hline
\end{tabular}
\oplus
\begin{tabular}{|c|c|}
\hline \phantom{ai} & \phantom{ai} \\ \hline
\end{tabular}\right)\otimes
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\end{tabular} \nonumber \\
\!&=&\!
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline \phantom{ai} & \phantom{ai}\\ \hline
\phantom{ai} \\ \hhline{|-|~|}
\end{tabular}
\oplus
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|c|}
\hline \phantom{ai} & \phantom{ai}\\ \hline
\phantom{ai} \\ \hhline{|-|~|}
\end{tabular}
\oplus
\setlength{\arrayrulewidth}{.4pt}
\begin{tabular}{|c|}
\hline \phantom{ai} \\ \hline
\phantom{ai} \\ \hline
\phantom{ai} \\ \hline
\end{tabular} \oplus
\begin{tabular}{|c|c|c|}
\hline \phantom{ai} & \phantom{ai} & \phantom{ai}\\ \hline
\end{tabular}.\;\;\;\;\;
\end{eqnarray}
Therefore, {\it this is the smallest number of qudits for which a
collective DFS/NS,
representing a qubit in terms of qudits, exists.}
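For general $d$, the dimensions of the tableaux appearing in Eq.~(\ref{tableau:3dfs}) follow from the standard hook-length formula (quoted here without derivation): each mixed-symmetry tableau has dimension $d(d^2-1)/3$, the antisymmetric column $d(d-1)(d-2)/6$, and the symmetric row $d(d+1)(d+2)/6$, so that

```latex
$$
d^{\,3} = 2\,\frac{d(d^{2}-1)}{3} + \frac{d(d-1)(d-2)}{6}
        + \frac{d(d+1)(d+2)}{6}.
$$
```

For $d=3$ this reproduces $27=8+8+1+10$; for $d=2$ the antisymmetric column vanishes and the count reduces to $8=2+2+4$, the familiar three-qubit case.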
The difference between a tensor product of two fundamental
irreps which are equivalent and two which are not is clearly
very important for constructing DFS/NS from higher dimensional
systems. The fact that one of the two states transforms
differently than the other implies that a superselection rule
which preserves the type of qutrit (or qudit)
must exist in the system-bath
interaction. On the other hand, if one wants to create
a DFS/NS by the use of decoupling controls according to the
methods presented in \cite{Zanardi:98b,Viola:00a,Wu/Lidar:cdfs,Byrd/Lidar:ss,Viola:01a,Byrd/Lidar:01,Byrd/Lidar:ebb,Byrd/Lidar:pqe02,Lidar/Wu:02,Wu/etal:02,Zanardi:99a},
then one must recognize
this as a quantum control problem in which the decoupling
controls must provide the appropriate symmetry
for those systems which do not otherwise
obey the required superselection rule. In other words, to create
a DFS/NS from two inequivalent fundamental irreps, one
must ensure that the appropriate transformation properties
are obeyed. Representing
decoherence-free systems with $d$-state systems therefore
requires knowledge of the transformation properties induced
by experimental controls and system-bath interactions.
\subsection{Three-qutrit DFS/NS}
As discussed in the previous section,
Sec.~\ref{sec:youngtrits},
a noiseless subsystem can be formed from two octets in the
Hilbert space of three three-state systems. This
logical qubit will be
protected against arbitrary collective errors
[see Eq.~(\ref{tableau:3dfs})].
Using the conventions established by Eq.~(\ref{eq:3states}),
the logical states can be given
explicit labels according to the principal quantum numbers
$p,q$ and eigenvalues of the operators $T$, $Y$, and $T_3$.
The first of two octets will have a degeneracy label
$0$, which indicates that it forms the logical
zero state $\ket{0_L}$,
\begin{eqnarray}
\psi^{8,0}_1&=&(\ket{200}-\ket{020})/\sqrt{2}, \nonumber \\
\psi^{8,0}_2&=&(\ket{100}-\ket{010})/\sqrt{2}, \nonumber \\
\psi^{8,0}_3&=&(\ket{011}-\ket{101})/\sqrt{2}, \nonumber \\
\psi^{8,0}_4&=&(\ket{211}-\ket{121})/\sqrt{2}, \nonumber \\
\psi^{8,0}_5&=&(\ket{212}-\ket{122})/\sqrt{2}, \nonumber \\
\psi^{8,0}_6&=&(\ket{022}-\ket{202})/\sqrt{2}, \nonumber \\
\psi^{8,0}_7&=&(-\ket{021}-\ket{120}+\ket{201}+\ket{210})/2, \nonumber \\
\psi^{8,0}_8&=&(2\ket{012}+\ket{021}-2\ket{102} \nonumber \\
&&-\ket{120}-\ket{201}+\ket{210})/\sqrt{12}.
\end{eqnarray}
The second octet of states carries a degeneracy label $1$
and forms the logical one state $\ket{1_L}$,
\begin{eqnarray}
\psi^{8,1}_1\!&=&\!\!(2\ket{002}-\ket{020}-\ket{200})/\sqrt{6} \nonumber \\
\psi^{8,1}_2\!&=&\!\!(2\ket{001}-\ket{010}-\ket{100})/\sqrt{6} \nonumber \\
\psi^{8,1}_3\!&=&\!\!(-2\ket{110}+\ket{011}+\ket{101})/\sqrt{6} \nonumber \\
\psi^{8,1}_4\!&=&\!\!(2\ket{112}-\ket{121}-\ket{211})/\sqrt{6} \nonumber \\
\psi^{8,1}_5\!&=&\!\!(-2\ket{221}+\ket{122}+\ket{212})/\sqrt{6} \nonumber \\
\psi^{8,1}_6\!&=&\!\!(-2\ket{220}+\ket{022}+\ket{202})/\sqrt{6} \nonumber \\
\psi^{8,1}_7\!&=&\!\!(2\ket{012}-\ket{021}+2\ket{102} \nonumber \\
&&\; -\ket{120}-\ket{201}-\ket{210})/\sqrt{12} \nonumber \\
\psi^{8,1}_8\!&=&\!\!(-\ket{021}+\ket{120}-\ket{201}+\ket{210})/2.
\end{eqnarray}
In other words, the first superscript denotes the dimension of
the representation, the second is a degeneracy label and the
subscript labels the state within the representation.
As in the
case of the three-qubit DFS/NS, the logical zero state is given
by $\ket{0_L}= \sum_i\alpha_i\psi^{8,0}_i$ (arbitrary superposition)
and likewise for $\ket{1_L}= \sum_i\beta_i\psi^{8,1}_i$.
Using the notation of Sec.~\ref{sec:3qbdfs}, the logical states
can be fully identified by the quantum numbers,
$\ket{p,q;\lambda;t,t_3,y}$, where $p,q$ are the principal
quantum numbers which identify the irreducible representation,
$\lambda$ is the degeneracy label, and
$t,t_3,y$ identify the states within the representation
by its secondary quantum numbers.
The states of the octet which comprise the logical zero
state are, in this notation, given by
\begin{eqnarray}
\psi^{8,0}_1\!&=&\!\! \ket{1,1;0;1,1,0},\nonumber \\
\psi^{8,0}_2\!&=&\!\! \ket{1,1;0;1/2,1/2,1},\nonumber \\
\psi^{8,0}_3\!&=&\!\! \ket{1,1;0;1/2,-1/2,1},\nonumber \\
\psi^{8,0}_4\!&=&\!\! \ket{1,1;0;1,-1,0},\nonumber \\
\psi^{8,0}_5\!&=&\!\! \ket{1,1;0;1/2,-1/2,-1},\nonumber \\
\psi^{8,0}_6\!&=&\!\! \ket{1,1;0;1/2,1/2,-1},\nonumber \\
\psi^{8,0}_7\!&=&\!\! \ket{1,1;0;1,0,0},\nonumber \\
\psi^{8,0}_8\!&=&\!\! \ket{1,1;0;0,0,0}.
\end{eqnarray}
The states which comprise the logical one are given by
\begin{eqnarray}
\psi^{8,1}_1\!&=&\!\! \ket{1,1;1;1,1,0}, \nonumber \\
\psi^{8,1}_2\!&=&\!\! \ket{1,1;1;1/2,1/2,1}, \nonumber \\
\psi^{8,1}_3\!&=&\!\! \ket{1,1;1;1/2,-1/2,1}, \nonumber \\
\psi^{8,1}_4\!&=&\!\! \ket{1,1;1;1,-1,0}, \nonumber \\
\psi^{8,1}_5\!&=&\!\! \ket{1,1;1;1/2,-1/2,-1}, \nonumber \\
\psi^{8,1}_6\!&=&\!\! \ket{1,1;1;1/2,1/2,-1}, \nonumber \\
\psi^{8,1}_7\!&=&\!\! \ket{1,1;1;1,0,0}, \nonumber \\
\psi^{8,1}_8\!&=&\!\! \ket{1,1;1;0,0,0}.
\end{eqnarray}
The remaining 11 states include a (completely antisymmetric)
singlet
\begin{equation}
\psi_s=(\ket{012}-\ket{021}-\ket{102}+\ket{120}+\ket{201}-\ket{210})/\sqrt{6}
\end{equation}
and a (completely symmetric) decuplet of states:
\begin{eqnarray}
\psi_1^{10} &=& \ket{111}, \nonumber \\
\psi_2^{10} &=& (\ket{011}+\ket{101}+\ket{110})/\sqrt{3},\nonumber \\
\psi_3^{10} &=& (\ket{001}+\ket{010}+\ket{100})/\sqrt{3},\nonumber \\
\psi_4^{10} &=& \ket{000}, \nonumber \\
\psi_5^{10} &=& (\ket{112}+\ket{121}+\ket{211})/\sqrt{3},\nonumber \\
\psi_6^{10} &=& (\ket{012}+\ket{021}+\ket{102}+\ket{120}+\ket{201}+\ket{210})/\sqrt{6},
\nonumber \\
\psi_7^{10} &=& (\ket{002}+\ket{020}+\ket{200})/\sqrt{3},\nonumber \\
\psi_8^{10} &=& (\ket{122}+\ket{212}+\ket{221})/\sqrt{3},\nonumber \\
\psi_9^{10} &=& (\ket{022}+\ket{202}+\ket{220})/\sqrt{3},\nonumber \\
\psi_{10}^{10} &=& \ket{222}.
\end{eqnarray}
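A short numerical check (not in the original; standard Gell-Mann conventions assumed) confirms that the completely antisymmetric state $\psi_s$, built here directly from the permutation signs, is annihilated by every collective generator, as a singlet must be:

```python
import numpy as np
from itertools import permutations

# Gell-Mann matrices (standard conventions assumed).
lams = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex),
    np.diag([1.0, -1.0, 0.0]).astype(complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex),
    np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3),
]

I3 = np.eye(3)

def ket(a, b, c):
    """Computational basis vector |abc> of the 27-dim three-qutrit space."""
    return np.kron(np.kron(I3[a], I3[b]), I3[c])

def parity(p):
    """Sign of a permutation, computed from its inversion count."""
    return (-1) ** sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))

# The completely antisymmetric combination is the SU(3) singlet psi_s.
psi_s = sum(parity(p) * ket(*p) for p in permutations((0, 1, 2))) / np.sqrt(6)

for lam in lams:
    S = (np.kron(np.kron(lam, I3), I3) + np.kron(np.kron(I3, lam), I3)
         + np.kron(np.kron(I3, I3), lam))
    assert np.linalg.norm(S @ psi_s) < 1e-12  # annihilated by every S^alpha
```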
A basis for the collective errors for qutrit states is given
by sums of the operators $\{\lambda_i\}$ of Appendix~\ref{app:alg}:
\begin{equation}
S^\alpha= \sum_i\lambda^\alpha_i,
\end{equation}
where $\alpha=1,2,...,8$ and $i$ denotes the
physical qutrit $1,2$ or $3$. A generic collective error
has the form
\begin{equation}
S = \sum_\alpha a_\alpha S^\alpha,
\end{equation}
where the $a_\alpha$ are arbitrary constants. As in the
three qubit DFS/NS, the states within blocks
$\ket{0_L}$ and $\ket{1_L}$ get mixed with each other
in exactly the same way during collective operations,
but states in one block are not mixed with states in
another. Logical operations will mix these blocks
with each other. In the DFS/NS basis, the operators
$S_{\mbox{\scriptsize dfs}}^\alpha
=V_{\mbox{\scriptsize dfs}}^{\phantom{-1}}S^\alpha %
V_{\mbox{\scriptsize dfs}}^{-1}$
are block diagonal in accordance with
Eq.~(\ref{eq:dfsmatrix}). Let us order the states in
a column vector: $\Psi$ = column$\{$$\psi^{8,0}_1$,
$\psi^{8,0}_2$,
$\psi^{8,0}_3$,
$\psi^{8,0}_4$,
$\psi^{8,0}_5$,
$\psi^{8,0}_6$,
$\psi^{8,0}_7$,
$\psi^{8,0}_8$,
$\psi^{8,1}_1$,
$\psi^{8,1}_2$,
$\psi^{8,1}_3$,
$\psi^{8,1}_4,$
$\psi^{8,1}_5,$
$\psi^{8,1}_6,$
$\psi^{8,1}_7,$
$\psi^{8,1}_8,$
$\psi_s,$
$\psi_1^{10},$
$\psi_2^{10},$
$\psi_3^{10},$
$\psi_4^{10},$
$\psi_5^{10},$
$\psi_6^{10},$
$\psi_7^{10},$
$\psi_8^{10},$
$\psi_9^{10},$
$\psi_{10}^{10}$
$\}$.
From these states one may readily deduce the transformation
$V_{\mbox{\scriptsize dfs}}$ which
takes the qutrit computational basis states to the
DFS/NS basis. It is then clear that $V_{\mbox{\scriptsize dfs}}$
is a $27\times 27$ matrix of $SU(3)$
Wigner-Clebsch-Gordan coefficients.
Since the collective errors in this basis are block diagonal
[viz. Eq.~(\ref{eq:dfsmatrix})], these blocks will
be labeled according
to the set of states on which they act nontrivially. Let
$S_0$ be the first such block (which acts nontrivially
on the states which form the logical zero),
$S_1$ be the second such block (which acts nontrivially
on the states which form the logical one),
$S_s$ be the third such block (which acts
on the singlet state), and let $S_{10}$ be the block which acts
on the states in the decuplet. The form of the matrix
$S_{\mbox{\scriptsize dfs}} =%
V_{\mbox{\scriptsize dfs}}^{\phantom{-1}}SV_{\mbox{\scriptsize dfs}}^{-1}$
is given by
\begin{equation}
\label{eq:coll33errs}
S_{\mbox{\scriptsize dfs}} =
\left(\begin{array}{cccc}
S_0 &0 &0 &0 \\
0 & S_1 &0 &0 \\
0 &0 & S_s &0 \\
0 &0 &0 & S_{10}
\end{array}\right),
\end{equation}
where $S_0$ and $S_1$ are both $8 \times 8$ matrices and are given by
\begin{widetext}
\begin{equation}
\left(\begin{array}{cccccccc}
2a_3 & a_6+ia_7 & 0 & 0 & 0 & a_4-ia_5 & \sqrt{2}(a_1-ia_2) & 0 \\
a_6-ia_7 & a_3+\sqrt{3}a_8 & a_1-ia_2 & 0 & 0 & 0
& \frac{-a_4+ia_5}{\sqrt{2}}& \frac{-3(a_4-ia_5)}{\sqrt{6}}\\
0& a_1+ia_2& -a_3+\sqrt{3}a_8 & -a_4+ia_5 &0& 0 &\frac{a_6-ia_7}{\sqrt{2}}&
\frac{-3(a_6-ia_7)}{\sqrt{6}} \\
0 & 0 &-a_4-ia_5 & -2a_3 &a_6-ia_7 & 0 & \sqrt{2}(a_1+ia_2) & 0 \\
0 & 0 & 0 &a_6+ia_7 & -a_3-\sqrt{3}a_8 & a_1+ia_2& \frac{a_4+ia_5}{\sqrt{2}}& \frac{3(a_4+ia_5)}{\sqrt{6}} \\
a_4+ia_5 & 0 & 0 & 0 & a_1-ia_2 & a_3-\sqrt{3}a_8 & \frac{a_6+ia_7}{\sqrt{2}} &\frac{-3(a_6+ia_7)}{\sqrt{6}} \\
\sqrt{2}(a_1+ia_2) & \frac{-a_4-ia_5}{\sqrt{2}}&\frac{a_6+ia_7}{\sqrt{2}}
&\sqrt{2}(a_1-ia_2)&\frac{a_4-ia_5}{\sqrt{2}}&\frac{a_6-ia_7}{\sqrt{2}}&0&0\\
0 &\frac{-3(a_4+ia_5)}{\sqrt{6}} &\frac{-3(a_6+ia_7)}{\sqrt{6}} & 0 & \frac{3(a_4-ia_5)}{\sqrt{6}} & \frac{-3(a_6-ia_7)}{\sqrt{6}} & 0 & 0
\end{array}\right).
\end{equation}
\end{widetext}
The matrix $S_s$ is a $1\times 1$ zero ``matrix'' and the
$10 \times 10$ matrix $S_{10}$ will not be displayed since it
is not relevant for the DFS/NS.
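The block structure of Eq.~(\ref{eq:coll33errs}) can be verified directly. The sketch below (not part of the original text; standard Gell-Mann conventions assumed) builds the sixteen states $\psi^{8,0}_i$ and $\psi^{8,1}_j$ and checks that every collective generator has vanishing matrix elements between the logical-zero and logical-one blocks:

```python
import numpy as np

# Gell-Mann matrices (standard conventions assumed).
lams = [
    np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex),
    np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex),
    np.diag([1.0, -1.0, 0.0]).astype(complex),
    np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex),
    np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex),
    np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex),
    np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3),
]

I3 = np.eye(3)

def ket(s):
    """Basis vector |abc> of the 27-dim space from the string 'abc'."""
    v = np.array([1.0 + 0j])
    for ch in s:
        v = np.kron(v, I3[int(ch)])
    return v

def st(*terms):
    """Normalized superposition from (coefficient, 'abc') pairs."""
    v = sum(c * ket(s) for c, s in terms)
    return v / np.linalg.norm(v)

octet0 = [  # the states psi^{8,0}_i (logical zero)
    st((1, '200'), (-1, '020')), st((1, '100'), (-1, '010')),
    st((1, '011'), (-1, '101')), st((1, '211'), (-1, '121')),
    st((1, '212'), (-1, '122')), st((1, '022'), (-1, '202')),
    st((-1, '021'), (-1, '120'), (1, '201'), (1, '210')),
    st((2, '012'), (1, '021'), (-2, '102'), (-1, '120'), (-1, '201'), (1, '210')),
]
octet1 = [  # the states psi^{8,1}_i (logical one)
    st((2, '002'), (-1, '020'), (-1, '200')), st((2, '001'), (-1, '010'), (-1, '100')),
    st((-2, '110'), (1, '011'), (1, '101')), st((2, '112'), (-1, '121'), (-1, '211')),
    st((-2, '221'), (1, '122'), (1, '212')), st((-2, '220'), (1, '022'), (1, '202')),
    st((2, '012'), (-1, '021'), (2, '102'), (-1, '120'), (-1, '201'), (-1, '210')),
    st((-1, '021'), (1, '120'), (-1, '201'), (1, '210')),
]

def collective(lam):
    return (np.kron(np.kron(lam, I3), I3) + np.kron(np.kron(I3, lam), I3)
            + np.kron(np.kron(I3, I3), lam))

# No collective error connects the logical-zero block to the logical-one block.
worst = max(abs(b.conj() @ collective(lam) @ a)
            for lam in lams for a in octet0 for b in octet1)
assert worst < 1e-12
```

The check succeeds because the $\psi^{8,0}_i$ are antisymmetric, and the $\psi^{8,1}_i$ symmetric, under exchange of the first two qutrits, while every collective operator commutes with that exchange.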
In summary, if the physical circumstances are such that the
errors/operations on a set of three qutrits in the ${\bf 3}$
representation are identical on each qutrit, then errors/operators
will have the form of Eq.~(\ref{eq:coll33errs}). There is then
a two-state subsystem formed by two collections of states
$\psi^{8,0}_i$ and $\psi^{8,1}_j$ which may represent a
decoherence-free, or noiseless, subsystem.
\section{Discussion and Conclusions}
\label{sec:concl}
We note that this research was prompted, in part,
by the following question:
How does a quantum state or operator transform? This
is a fundamental, physically motivated question.
The transformation properties determine
the good quantum numbers of a state. For three-state,
and higher-dimensional systems, the states could
transform in one of two ways under special unitary
transformations and the representations are
inequivalent. There are several physical
consequences of the difference in transformation
properties.
For the theory of decoherence-free, or
noiseless, subsystems it is important to determine
the transformation properties which distinguish different
physical states. Without this knowledge,
it is not possible to reliably form a DFS/NS. Using
Weyl's unitary trick, this is clearly seen through the use of
Young's tableau for analyzing the irreps of the $SU(d)$ groups.
Similarly, simulating quantum systems with other quantum
systems requires strict adherence to the appropriate transformation
rules during the applications of quantum controls. A very
important example of this is provided by low-energy nuclear
interactions and the quark model. Quark-quark interactions
at low energies involve both weak and strong forces. Both
the theories of strong and weak forces
involve $SU(d)$ symmetry groups. QCD is
a non-Abelian gauge theory, with gauge group $SU(3)$. In
this theory, quarks transform according to the ${\bf 3}$ rep
of the group $SU(3)$. Weak interaction physics,
or ``flavor'' physics, potentially involves six flavors of
quarks which have an approximate $SU(6)$ symmetry. At lower
energies, only the quarks with mass less than the
experimental interaction energies are used in calculations.
The three lightest quarks are quite close in mass and
have an approximate $SU(3)$ symmetry to an even
better approximation than the $SU(6)$ theory. (Reference
\cite{Weinberg:QTFII} contains detailed discussions
of these topics.) Whereas quarks transform according
to the ${\bf 3}$ rep,
antiquarks transform according to the ${\bf{\bar{3}}}$ rep. Baryons,
such as the proton and neutron, are color-neutral bound
states of three quarks. Mesons behave as color-neutral
states of quarks and antiquarks.
Thus the transformation
properties of the particles involved in low energy
nuclear interactions are critically important in simulations.
Since there exist states of particles which behave in
such a way, the differences in transformation properties
of quantum systems must be taken into account during the
simulation of low-energy nuclear physics.
We may therefore conclude that the existence of inequivalent
fundamental irreps for $SU(d)$ can be vital for quantum
information processing, whether the systems being
used to process quantum information contain $d$ distinct
orthogonal states, or a system being simulated contains
$d$ such states.
Clearly there is a great deal of work still to be done in this
area. Whether or not a system transforms according to a barred
or unbarred representation is determined by the physical system.
Not all systems will naturally
obey a super-selection rule of this sort.
In the near future, we anticipate exploring quantum
computing in DFS/NSs constructed from these higher-dimensional
state spaces. We also expect to more fully explore the
experimental circumstances which give rise
to DFS/NSs and how one would control the system to keep it
in a DFS/NS.
\begin{acknowledgments}
The author thanks ORDA of Southern Illinois University for
partial support of this work under internal grant
4-14095 and Centro de Ci\^{e}ncias Matem\'{a}ticas at the University
of Madeira. This work was supported in part by POPRAM III and CITMA,
Portugal, and was undertaken during the XXIX
Madeira Math encounter. The author also thanks V. Akulin,
J. Clark, A. Mandilara, and especially N. Harshman for stimulating and
helpful discussions.
\end{acknowledgments}
\section{Introduction}
In algebraic coding theory, linear codes over finite rings received intensive study in the last decades of the $20$th century. This study was prompted by the development of Gray maps. A remarkable step came in $1994$ when Hammons et al. $\cite{16}$ obtained some good non-linear binary codes as images of linear codes over $\mathbb{Z}_4$ under the Gray map. Afterward, the study of linear codes over finite rings received more attention than that over the binary field, and several families of codes were studied in \cite{22,T,12,20,23}, such as over $\mathbb{Z}_4,~ \mathbb{Z}_2+v\mathbb{Z}_2,v^2 =v;~ \mathbb{Z}_2+u\mathbb{Z}_2+v\mathbb{Z}_2+uv\mathbb{Z}_2, u^2 =v^2 =0;~ \mathbb{Z}_{p^r} +u\mathbb{Z}_{p^r} +\cdots+u^{k-1}\mathbb{Z}_{p^r}, u^k = 0,$ where $p$ is a prime. Note that cyclic codes are block linear codes in which the cyclic shift of each codeword is again a codeword. These error-correcting codes are also considered an important family of linear codes due to their rich algebraic structure, which makes this class easy to understand and implement. They have been studied over various finite rings, and many new codes and results have been obtained in \cite{12,20,15,18}.
In $1970$, Hartmann and Tzeng \cite{18} gave a bound for the minimum distance of certain reversible cyclic codes. In $2007$, Siap and Abualrub $\cite{14}$ studied the structure of reversible cyclic codes over $\mathbb{Z}_4$. In $2015$, Srinivasulu and Bhaintwal $\cite{11}$ studied reversible cyclic codes over $\mathbb{F}_4+u\mathbb{F}_4, u^2=0,$ and their applications to DNA codes, while Sehmi et al. $\cite{19}$ studied reversible and reversible complement cyclic codes over Galois rings.
Motivated by these works, we study reversible cyclic codes of arbitrary length $n$ over $\mathbb{F}_q + u \mathbb{F}_q$, $u^2=0$. Recall that these codes have applications in DNA computing, a field of study that aims at harnessing individual molecules at the nanoscopic level for computational purposes. Computation with DNA molecules possesses an inherent interest for researchers in computer science and biology. At present, many researchers are interested in designing a new set of codewords for each experiment, depending on various design constraints in DNA computing. One can prevent errors by minimizing the similarity between the sequences under some distance measure. These codes have many applications in constructing data storage and retrieval systems. \\
The presentation of the manuscript is as follows: In Section $2$, we give some preliminaries while Section $3$ provides the structure of cyclic codes of arbitrary length $n$ over the ring $R$. Section $4$ contains some important results on reversible cyclic codes over $\mathbb{F}_q + u \mathbb{F}_q$. In Section $5,$ some conditions are given under which dual of reversible cyclic code over $R$ is reversible. Section $6$ includes some examples in support of our results and Section $7$ concludes the work.
\section{Basic definitions and construction of cyclic codes over $ \mathbb{F}_q + u \mathbb{F}_q$ }
Throughout the article, $R= \mathbb{F}_q + u \mathbb{F}_q$, where $u^2=0$ and $q= p^{k}$ is a positive integer power of a prime $p$. Then $R$ is a commutative ring with $q^2$ elements.\\
Recall that a linear code $C$ of length $n$ over $R$ is an $R$-submodule of $R^n$, and a cyclic code is a linear code invariant under the shift operator which maps $(c_0,c_1,\dots,c_{n-1})$ to $(c_{n-1},c_0,\dots,c_{n-2})$. Also, a cyclic code over $R$ can be viewed as an ideal of $R_n=R[x]/ \langle x^n -1 \rangle $ by identifying $(c_0,c_1,\dots,c_{n-1})$ with $c_0 +c_1x+\cdots+c_{n-1}x^{n-1}$. For $v=(v_0,v_1,\dots,v_{n-1}) \in R^n$, the vector obtained after reversal of the components of $v$ is denoted by $v^r=(v_{n-1},v_{n-2},\dots,v_0)$.\\
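Although the paper develops everything symbolically, the identification of the cyclic shift with multiplication by $x$ modulo $x^n-1$ is easy to sanity-check numerically. The following sketch is our own illustration (the helper names are not from the paper):

```python
def cyclic_shift(c):
    """Map (c0, c1, ..., c_{n-1}) to (c_{n-1}, c0, ..., c_{n-2})."""
    return c[-1:] + c[:-1]

def mul_by_x_mod(c, q):
    """Multiply c0 + c1*x + ... + c_{n-1}*x^{n-1} by x modulo x^n - 1,
    with coefficients reduced mod q."""
    n = len(c)
    return tuple(c[(i - 1) % n] % q for i in range(n))

c = (1, 2, 0, 1)                                      # 1 + 2x + x^3 over Z_3
assert tuple(cyclic_shift(c)) == mul_by_x_mod(c, 3)   # both give (1, 1, 2, 0)
```

Shifting the coefficient vector and multiplying the associated polynomial by $x$ agree, which is exactly the identification used above.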
The Hamming weight of a codeword is the number of non-zero components in it, and the Hamming distance between two codewords is the number of components in which they differ. The inner product of two vectors $a=(a_0,\dots,a_{n-1})$ and $b=(b_0,\dots,b_{n-1})$ is defined as $a \cdot b={\Sigma }_{i=0}^{n-1}a_i b_i$. The vectors $a$ and $b$ are said to be orthogonal if $a \cdot b=0$.
The dual $C^{\perp}$ of a linear code $C$ is defined as $C^{\perp} =\{ v \in R^n: v \cdot c=0 ~\text{for all } c \in C\}$. A linear code $C$ is said to be self dual if and only if $C=C^{\perp}$, and self-orthogonal if and only if $C\subseteq C^{\perp}$.
For each polynomial $f(x)=f_0 + f_1x+ \cdots + f_{n-1}x^{n-1}$ with $f_{n-1}\neq 0$, the reciprocal of $f(x)$ is defined as $f^*(x)=x^{n-1}f(1/x)=f_{n-1} + f_{n-2}x+\cdots+ f_{0}x^{n-1}$. Note that $deg f^*(x) \leq deg f(x)$, and if $f_0 \neq 0$, then $deg f^*(x)= deg f(x)$. The polynomial $f(x)$ is called self-reciprocal if and only if $f^*(x)= f(x)$.
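As a small illustrative aside of ours (the function names are hypothetical, not from the paper), the reciprocal and the self-reciprocal test amount to reversing coefficient vectors:

```python
def reciprocal(f, n):
    """f*(x) = x^{n-1} f(1/x): reverse the length-n coefficient vector of f
    (coefficients listed low-to-high)."""
    f = list(f) + [0] * (n - len(f))     # pad with zeros up to length n
    return list(reversed(f))

def is_self_reciprocal(f):
    """f* = f, comparing coefficient lists up to the actual degree of f."""
    f = list(f)
    while f and f[-1] == 0:              # drop trailing zeros
        f.pop()
    return f == list(reversed(f))

assert is_self_reciprocal([1, 1, 1])         # 1 + x + x^2 is self-reciprocal
assert not is_self_reciprocal([1, 1, 0, 1])  # 1 + x + x^3 is not
```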
Let $C$ be a cyclic code over $R$. Then the map $\phi:C \rightarrow \mathbb{F}_q[x]/\langle x^n-1 \rangle$ defined by $\phi(a_0+a_1x+\cdots+ a_{n-1}x^{n-1})=a_0^q+a_1^qx+\cdots+ a_{n-1}^qx^{n-1}$ is a ring homomorphism with $ker\phi=\{ur(x)| r(x) ~\text{is a polynomial in} ~ \mathbb{F}_q[x]/\langle x^n-1 \rangle\}$. Let $J=\{r(x) ~ : ~ur(x) \in ker\phi \}$. Then $J$ is a cyclic code over $\mathbb{F}_q$, being an ideal of $\mathbb{F}_q[x]/\langle x^n-1 \rangle$. Therefore, $J=\langle a(x) \rangle, ~ \text{where}~ a(x)|(x^n-1)$. This implies $ ker\phi = \langle ua(x) \rangle, ~\text{where}~ a(x)|(x^n-1)~ mod~q$. The image of $\phi$ is an ideal and hence a cyclic code over $\mathbb{F}_q$ with generator polynomial $g(x)$ such that $g(x)|(x^n-1)$. Hence,
$$C = \langle g(x)+up(x), ua(x) \rangle$$ for some polynomial $p(x)$ over $\mathbb{F}_q$.
Throughout the article, we use the same $g(x), ~p(x)$ and $a(x)$ as mentioned above. Now, we give some lemmas and a theorem whose proofs use arguments similar to those in \cite{12}; they will be used later in the discussion of reversible cyclic codes.
\begin{lemma}
For the above $a(x)$ and $p(x),$ we have $deg~ a(x)>deg ~p(x)$ and $a(x)\mid g(x)$.
\end{lemma}
\begin{proof}
Note that
$$ C = \langle g(x)+up(x), ua(x) \rangle = \langle g(x)+u(p(x)+x^i a(x)), ua(x) \rangle.$$ More generally, $$ \langle g(x)+up(x), ua(x) \rangle = \langle g(x)+u(p(x)+d(x) a(x)), ua(x) \rangle$$ for any polynomial $d(x)$. Therefore, we may assume $deg~p(x) < deg~ a(x)$. Also, $$ug(x) \in ker \phi = \langle ua(x) \rangle$$ implies that $a(x)|g(x)$. Note that if $g(x)= a(x)$, then $C=\langle g(x)+up(x) \rangle $.
\end{proof}
\begin{lemma}
$a(x)$ divides $p(x) \left(\frac{ x^n-1}{g(x)}\right)$.
\end{lemma}
\begin{proof}
Since $$\phi\left(\frac{x^n-1}{g(x)}(g(x)+up(x))\right)=\phi\left( up(x) \frac{x^n-1}{g(x)}\right)=0.$$
This implies $\left(up(x)\frac{x^n-1}{g(x)}\right)\in ker\phi = \langle ua(x) \rangle$. Therefore, $a(x)|\left(p(x)\frac{x^n-1}{g(x)}\right).$
\end{proof}
\begin{lemma}
Let $C = \langle g(x)+up(x), ua(x) \rangle = \langle h(x)+uq(x), ub(x) \rangle $. Then $g(x)=h(x), a(x)=b(x)$ and $p(x)=q(x)~ mod ~a(x)$.
\end{lemma}
\begin{proof}
From the construction of $C$, we have $J=\langle a(x) \rangle=\langle b(x) \rangle$, i.e., $a(x)=b(x)$.
Let $C = \langle g(x)+up(x), ua(x) \rangle= \langle h(x)+uq(x), ub(x) \rangle$.
Since $h(x)\in \phi(C)= \langle g(x) \rangle $, we have $$h(x)=g(x)\alpha (x)~ \text{ and }~deg~h(x) \geq deg~g(x).$$ Similarly, $$g(x)=h(x)\beta (x)=g(x)\alpha(x)\beta(x) ~\text{and}~ deg~g(x) \geq deg~h(x).$$ Now, $(x^n-1)$ factors uniquely into irreducible polynomials over $\mathbb{F}_q$, and $g(x), h(x)$ are monic polynomials which divide $(x^n-1)$; therefore $\alpha(x)=\beta(x)=1$ and $g(x)=h(x)$. Since $g(x)+uq(x) \in C$, we have $$g(x)+uq(x)=(g(x)+up(x))+ua(x)l(x), ~\text{for some}~l(x)\in R[x].$$ This implies $u(q(x)-p(x))=ua(x)l(x)$ and hence $p(x)=q(x) ~mod ~a(x).$
\end{proof}
\begin{lemma}
If $n$ is relatively prime to $q$, then $C=\langle g(x),ua(x)\rangle =\langle g(x)+ua(x)\rangle$.
\end{lemma}
\begin{proof}
We have $a(x)|g(x)$ and $a(x)\mid p(x) \left(\frac{ x^n-1}{g(x)}\right)$, say $g(x)=a(x)l_1(x)$ and $p(x) \left(\frac{ x^n-1}{g(x)}\right)=a(x)l_2(x)$. Since $n$ is relatively prime to $q$, $x^n-1$ can be written uniquely as a product of distinct irreducible polynomials, and hence $a(x)$ must be a factor of $p(x)$. But $deg~p(x) < deg~a(x),$ therefore $p(x)=0$ and $C=\langle g(x),ua(x)\rangle$.
Let $b(x)=g(x)+ua(x)$. Then $ub(x)=ug(x) \in \langle g(x)+ua(x)\rangle$ and $\left(\frac{x^n-1}{g(x)}\right)b(x)=u\left(\frac{x^n-1}{g(x)}\right)a(x) \in \langle g(x)+ua(x)\rangle$. Since $gcd~\left(\frac{x^n-1}{g(x)}, g(x)\right)=1$, there exist polynomials $g_1(x)$ and $g_2(x)$ over $\mathbb{F}_q$ such that
\begin{align*}
1&=\frac{x^n-1}{g(x)}g_1(x)+ g(x)g_2(x)
\\& ua(x)=u\left(\frac{x^n-1}{g(x)}\right)a(x)g_1(x)+ ug(x)a(x)g_2(x)
\in \langle g(x)+ua(x)\rangle.
\end{align*}
Also, $g(x)=b(x)-ua(x) \in\langle g(x)+ua(x)\rangle$. Therefore, $C=\langle g(x), ua(x)\rangle=\langle g(x)+ua(x)\rangle.$
\end{proof}
\begin{theorem}\label{Th1}
Let $C$ be a cyclic code of length $n$ over $R$.
\begin{enumerate}
\item If $n$ is relatively prime to $q$, then $R[x]/ \langle x^n - 1 \rangle$ is a principal ideal ring and $C = \langle g(x),ua(x) \rangle = \langle g(x)+ua(x) \rangle$
where $g(x)$, $a(x)$ are polynomials over $\mathbb{F}_q$ with $ a(x)|g(x)|(x^n -1)~ mod~ q$.
\item If $n$ is not relatively prime to $q$, then
\item[(a)] $C = \langle g(x)+up(x) \rangle$ where $g(x)$, $p(x)$ are polynomials over $\mathbb{F}_q$ with $g(x)|(x^n -1) ~ mod~ q $, $ (g(x)+up(x))|(x^n -1)$ and $g(x)|p(x) \left( \frac{ x^n-1}{g(x)}\right)$ and also $g(x)=a(x)$.
\item[(b)] $C= \langle g(x) + up(x), ua(x) \rangle $ where $g(x)$, $a(x)$, and $p(x)$ are polynomials over $\mathbb{F}_q$ with $ a(x)|g(x)|(x^n -1)~ mod~ q$, $ (g(x)+up(x))|(x^n -1)$, $a(x)|p(x) \left(\frac{ x^n-1}{g(x)}\right)$ and
$ deg(g(x)) > deg(a(x)) > deg(p(x))$.
\end{enumerate}
\end{theorem}
\section{Reversible cyclic code over $ R$}
In this section, we study reversible codes separately for even and odd lengths and find necessary and sufficient conditions
for a cyclic code $C$ over $R$ to be reversible. The reverse of a codeword $c = (c_0, c_1,\dots, c_{n-1}) \in C$ is
denoted by $c^r$ and defined as $c^r =(c_{n-1}, c_{n-2},\dots, c_0)$.
\begin{definition} A linear code $C$ of length $n$ over a ring $R$ is said to
be reversible if $c^r \in C$, for all $c \in C$.
\end{definition}
The following theorem characterizes a cyclic code to be reversible over the finite field.
\begin{theorem}\label{4} $\cite[Theorem~1]{15}$
The cyclic code over $GF(q)$ generated by the monic polynomial $g(x)$ is reversible if and only if $g(x)$ is self-reciprocal.
\end{theorem}
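This characterization can be confirmed by exhaustive search on a small instance. The sketch below is our own illustration, not part of the cited result: over $GF(2)$ with $n=7$ and the self-reciprocal generator $g(x)=x+1$, the resulting $[7,6]$ even-weight code is closed under reversal.

```python
from itertools import product

def cyclic_code(gen, n):
    """All GF(2) codewords m(x)*g(x) mod (x^n - 1), coefficients low-to-high."""
    def mul_mod(a, b):
        out = [0] * n
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    out[(i + j) % n] ^= bj
        return tuple(out)
    k = n - (len(gen) - 1)          # dimension = n - deg g
    return {mul_mod(msg, gen) for msg in product([0, 1], repeat=k)}

g = [1, 1]                          # g(x) = 1 + x is self-reciprocal
C = cyclic_code(g, 7)
assert len(C) == 2 ** 6             # the [7, 6] even-weight code
assert all(c[::-1] in C for c in C) # closed under reversal, as the theorem predicts
```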
\begin{lemma}\label{1} $\cite[Lemma~19]{22}$ Let $f (x), g(x)$ be any two polynomials
in $R[x]$ with $deg( f ) \geq deg(g)$. Then
\begin{enumerate}
\item $( f (x)g(x))^{*} = f^{*}(x)g^{*}(x)$;
\item $( f (x)+g(x))^{*} = f ^{*}(x)+x^{deg( f )-deg(g)}g^{*}(x)$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{2}
Let $C$ be a reversible cyclic code of length $n$ over the ring $R$, and let $\phi: C \rightarrow \mathbb{F}_q[x]/\langle x^n -1\rangle$ be the ring homomorphism defined in Section $2$. Then $\phi(C) $ is reversible.
\end{lemma}
\begin{proof}
Let $\phi(c) \in \phi(C)$, where $c=(c_0,c_1,\dots,c_{n-1}) \in C$, i.e., $\phi(c)=(c_0^q,c_1^q,\dots,c_{n-1}^q)\in \phi(C)$.
Since $C$ is a reversible cyclic code, $c^r=(c_{n-1},c_{n-2},\dots,c_0) \in C$. Consider
\begin{align*}
\phi(c)^r&= (c_0^q,c_1^q,\dots,c_{n-1}^q)^r \\
&=(c_{n-1}^q,c_{n-2}^q, \dots,c_0^q)\\ &=\phi(c_{n-1},c_{n-2},\dots,c_0) \in \phi(C).
\end{align*}
Hence, $\phi(C)$ is reversible.
\end{proof}
\begin{lemma}\label{3}
Let $C$ be a reversible cyclic code over $R$. Then $\langle g(x) \rangle $ and $\langle a(x) \rangle$ are also reversible cyclic codes over $\mathbb{F}_q.$
\end{lemma}
\begin{proof}
From the construction of generators of cyclic codes over $R$, we have $\phi(C)=\langle g(x) \rangle$ and by Lemma \ref{2}, $\phi(C)$ is a reversible code over $\mathbb{F}_q$. Therefore, $\langle g(x) \rangle$ is reversible cyclic code over $\mathbb{F}_q$.
As $ker(\phi)=\{ur(x) ~|~ r(x) ~\text{is a polynomial in}~ C ~\text{with coefficients in}~ \mathbb{F}_q \}$ and $J=\{ r(x) ~|~ur(x)
\in ker(\phi) \} = \langle a(x) \rangle$, it is sufficient to show that $J$ is reversible. Let $r(x)=r_0 + r_1x+\cdots+r_{n-1}x^{n-1} \in J$ be arbitrary.
Then $ur(x) \in ker(\phi) \subseteq C$. Since $C$ is a reversible cyclic code over $R$, $ur^*(x)$ is also in $C$; moreover, $ur^*(x) \in ker(\phi)$, i.e., $r^*(x) \in J$. Hence, we get the required result.
\end{proof}
\begin{theorem}\label{6} Let $C = \langle g(x), ua(x)\rangle$ be a cyclic code of odd length $n$ over $R$, where $ a(x) |g(x) | (x^n -1)$ and
$a(x), g(x) \in \mathbb{F}_q[x].$ Then $C$ is reversible if and only if both
$g(x)$ and $a(x)$ are self-reciprocal.
\end{theorem}
\begin{proof}
Let $C$ be a reversible cyclic code over $R$. Then by Lemma $\ref{3}$ and Theorem $\ref{4}$, $g(x)$ and $a(x)$ are self-reciprocal polynomials.
For sufficient part, we assume that $g(x)$ and $a(x)$ are self-reciprocal polynomials over
$\mathbb{F}_q$.
Let $c(x) \in C$, i.e., $c(x) = g(x)m_1(x)+ua(x)m_2(x)$ for some
polynomials $m_1(x)$ and $m_2(x)$ over $R$. To show that $C$ is reversible, it suffices to show $c^*(x) \in C$. Consider
\begin{align*}
c^{*}(x) &=(g(x)m_1(x)+ua(x)m_2(x))^{*}\\
&= (g^{*}(x)m_1^{*}(x)+ux^{i}a^{*}(x)m_2^{*}(x))\\
&= (g(x)m_1^{*}(x)+ua(x)x^{i}m_2^{*}(x)),
\end{align*}
where $m_1^*(x), m_2^*(x)$ are polynomials over $R$. This implies $c^{*}(x) \in \langle g(x), u a(x) \rangle $. Thus, $C$ is a reversible cyclic code over $R$.
\end{proof}
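As a brute-force sanity check of ours (not part of the paper), take $q=3$, $n=4$ (so $\gcd(n,q)=1$), $g(x)=(x+1)(x^2+1)=x^3+x^2+x+1$ and $a(x)=x+1$, both self-reciprocal with $a(x)|g(x)|(x^4-1)$; enumerating the module $\langle g(x), ua(x)\rangle$ over $\mathbb{Z}_3+u\mathbb{Z}_3$ confirms reversibility.

```python
q, n = 3, 4
# Encode a + u*b in Z_3 + u*Z_3 as the pair (a, b).
def add(v, w): return tuple(((a1 + a2) % q, (b1 + b2) % q)
                            for (a1, b1), (a2, b2) in zip(v, w))
def umul(v):   return tuple((0, a) for a, b in v)      # u*(a + u*b) = u*a
def shift(v):  return v[-1:] + v[:-1]

g  = tuple((c, 0) for c in (1, 1, 1, 1))  # g(x) = (x+1)(x^2+1), self-reciprocal
ua = tuple((0, c) for c in (1, 1, 0, 0))  # u*a(x) = u(x+1), a self-reciprocal

# Close the R[x]-span of the generators under addition, shift and u-multiplication
# (multiplication by the scalar 2 is v + v, so addition closure covers it).
C, frontier = {((0, 0),) * n}, [g, ua]
while frontier:
    v = frontier.pop()
    if v in C:
        continue
    C.add(v)
    frontier += [add(v, w) for w in list(C)] + [shift(v), umul(v)]

assert len(C) == 81                        # 3^{n - deg g} * 3^{n - deg a}
assert all(c[::-1] in C for c in C)        # reversible, as the theorem predicts
```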
\begin{theorem} \label{7}
Let $C = \langle g(x) + up(x), ua(x)\rangle $ be a cyclic
code of even length $n$ over $R$ where $a(x), g(x)$ and $p(x)$
are polynomials over $ \mathbb{F}_{q} $ such that $deg~a(x) > deg~p(x)$,
$a(x)|g(x)|(x^{n}-1)$ and $a(x)|p(x)( \frac{ x^n-1}{g(x)})$. Then $C$ is reversible if and only if
\begin{enumerate}
\item $g(x)$ and $a(x)$ are self-reciprocal, and
\item $a(x)$ divides $(x^ip^{*}(x)-p(x))$, where $i = deg~g(x)-deg~p(x)$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $C =\langle g(x)+up(x), ua(x)\rangle$ be reversible cyclic code over $R$. Then $ g(x)$ and $a(x)$ are self-reciprocal by Lemma \ref{3} and Theorem \ref{4}. Now, consider
\begin{equation*} \label{8}
[g(x)+up(x)]^* =[g^*(x)+ux^ip^*(x)] = g(x)+ux^ip^*(x),
\end{equation*} where $i=deg~g(x)-deg~p(x)$. Since $C$ is reversible cyclic code over $R$, we have $g(x)+ux^ip^*(x)\in C$. Therefore, there exist $l_1(x)$ and $l_2(x)$ in $R[x]$ such that
\begin{equation*} \label{9}
g(x)+ux^ip^*(x)=[g(x)+up(x)]l_1(x)+ua(x)l_2(x).
\end{equation*}
Comparing the degrees on both sides, we get $l_1(x)$ is a constant over $R,$ say, $l_1(x)=a+ub$ where $a, b \in \mathbb{F}_q$. Then
\begin{equation*} g(x)+ux^ip^*(x)=ag(x)+bug(x)+aup(x)+ua(x)l_2(x).
\end{equation*}
Multiplying the above equation by $u$, we get $ug(x)=uag(x)$, so $a=1$ and $l_1(x)=1+ub \in R$. Comparing the coefficients of $u$ then gives $x^ip^*(x)=bg(x)+p(x)+a(x)l_2'(x)$ for some polynomial $l_2'(x)$ over $\mathbb{F}_q$; since $a(x)|g(x)$, it follows that $a(x)|(x^ip^*(x)-p(x))$.
Conversely, assume that conditions (1) and (2) hold. Let $c(x) \in C$. Then $c(x)=(g(x)+up(x))l_1(x) + ua(x)l_2(x)$, for some polynomials $l_1(x)$ and $l_2(x)$ over $R$. Consider
\begin{align*}
c^*(x)&=(g(x)+up(x))^* l_1^*(x) + ux^ja^*(x)l_2^*(x)
\end{align*}
\begin{align}\label{eq8}
c^*(x)&=(g(x)+ux^ip^*(x))l_1^*(x) + ux^ja(x)l_2^*(x).
\end{align}
Since $a(x)$ divides $x^ip^{*}(x)-p(x)$, we have $x^ip^{*}(x)-p(x)=a(x)b(x)$ for some polynomial $b(x)$ over $R$. Hence, $ux^ip^{*}(x)=up(x)+ua(x)b(x)$. Substituting in equation (\ref{eq8}), we have
\begin{align*}
c^*(x)=(g(x)+up(x)+ua(x)b(x))l_1^*(x) + ux^ja(x)l_2^*(x) \\ = (g(x)+up(x))l_1^*(x)+ua(x)(b(x)l_1^*(x)+x^jl_2^*(x)).
\end{align*}
Therefore, $c^*(x) \in C$. Hence, $C$ is reversible cyclic code over $R$.
\end{proof}
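For a concrete sanity check of conditions (1)--(2) above (our own sketch, not from the paper), take $n=4$ over $\mathbb{Z}_2+u\mathbb{Z}_2$ with $g(x)=x^2+1$, $p(x)=1$ and $a(x)=x+1$, so that $i=deg~g(x)-deg~p(x)=2$:

```python
def reciprocal(f):
    """Reverse the coefficient list (low-to-high) of f, dropping trailing zeros."""
    f = list(f)
    while f and f[-1] == 0:
        f.pop()
    return list(reversed(f))

def poly_divmod_gf2(num, den):
    """Remainder of GF(2) polynomial division (coefficient lists, low-to-high)."""
    num, d = num[:], len(den) - 1
    while len(num) - 1 >= d and any(num):
        if num[-1] == 0:
            num.pop()
            continue
        shift = len(num) - 1 - d
        for i, c in enumerate(den):
            num[shift + i] ^= c
        while num and num[-1] == 0:
            num.pop()
    return num

g, p, a = [1, 0, 1], [1], [1, 1]                         # x^2+1, 1, x+1
assert reciprocal(g) == g and reciprocal(a) == a         # condition (1)
i = (len(g) - 1) - (len(p) - 1)                          # i = deg g - deg p = 2
t = [0] * i + reciprocal(p)                              # x^i p*(x)
t = [x ^ y for x, y in zip(t, p + [0] * (len(t) - len(p)))]  # - p(x) (= + over GF(2))
assert poly_divmod_gf2(t, a) == []                       # condition (2): a | x^i p* - p
```

Both conditions hold, so the theorem predicts that $\langle (x^2+1)+u, u(x+1)\rangle$ is reversible; this generator pair indeed appears among the reversible codes listed in the examples below.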
\begin{theorem}\label{10}
Let $C = \langle g(x)+up(x) \rangle $ be a cyclic code of
even length $n$ over $R$. Then $C$ is reversible if and only if
\begin{enumerate}
\item $g(x)$ is self-reciprocal;
\item $p(x) = x^i p^{*}(x)$ or $g(x) = b^{-1}[x^ip^{*}(x)-p(x)]$, where
$i = deg~g(x)-deg~p(x)$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $C$ be a reversible cyclic code over $R$. Then $\langle g(x) \rangle $ is a reversible cyclic code over $\mathbb{F}_q$. By Theorem \ref{4}, $g(x)$ is self-reciprocal.\\
For part (2), consider
\begin{align*} [g(x)+up(x)]^{*}= [g^{*}(x)+ux^{i}p^{*}(x)]\\
= g(x)+u x^ip^{*}(x).
\end{align*}
Since $C$ is reversible, we have $g(x)+ux^ip^{*}(x) \in C$. This implies there exists a polynomial $l(x)$ over $\mathbb{F}_q$ such that
\begin{equation*} g(x)+ux^i p^{*}(x) = [g(x)+up(x)]l(x).
\end{equation*}
Comparing the degrees on both sides, we get $l(x)$ is a constant polynomial over $R,$ say, $l(x)=a+ub$, where $a, b \in \mathbb{F}_q$. Then
\begin{equation*} g(x)+ux^ip^*(x)=ag(x)+bug(x)+aup(x). \end{equation*}
Multiplying the above equation by $u$, we have $ug(x)=uag(x)$
and hence $a=1$. If $b=0,$ then $p(x)=x^ip^*(x)$ and if $b\neq 0,$ then $g(x)=p(x)+x^ip^*(x)$.
Conversely, assume that $(1)$ and $(2)$ hold. Let $c(x) \in C$, i.e., $c(x)=(g(x)+up(x))r(x)$, for some polynomials $r(x)$ over $R$. Consider
\begin{equation} \label{eq1}
c^*(x)=[g^*(x)+ux^ip^*(x)]r^*(x)=[g(x)+ux^ip^*(x)]r^*(x).
\end{equation}
\textbf{Case 1.} If $p(x)=x^ip^{*}(x)$, then
$$c^*(x)=[g(x)+up(x)]r^*(x) \in C.$$
\textbf{Case 2.} If $g(x)=b^{-1}[x^ip^{*}(x)-p(x)]$, then since $g(x)$ is self-reciprocal, we have
\begin{equation*}
x^ip^{*}(x)-p(x)=x^{i+j}p(x)-p^*(x)
\end{equation*} i.e.,
\begin{equation}\label{23}
up(x)(1+x^{i+j})-up^*(x)=ux^ip^*(x).
\end{equation}
Using (\ref{23}) in (\ref{eq1}), we have
\begin{align*}
& [g(x)+up(x)+u(x^{i+j}p(x)-p^*(x))]r^*(x)
\\&=[(1+u)g(x)+up(x)]r^*(x)\\&=(1+u)[g(x)+(1+u)up(x)]r^*(x)\\&= [g(x)+up(x)]s(x),
\end{align*} where $s(x)=(1+u)r^*(x)$. Therefore, $c^*(x)\in C$ and hence $C$ is reversible cyclic code over $R$.
\end{proof}
\section{Dual of a reversible cyclic code over $\mathbb{Z}_2
+u \mathbb{Z}_2 $}
Let $C$ be a cyclic $[n,k]$-code with parity check polynomial $h(x)=h_0+h_1x+\cdots+ h_kx^k$, and let $\bar{h}(x)=h^*(x)$. Then we have the following characterization of the dual code $C^\perp$ of $C$.
\begin{theorem}\label{10}
Let $C^\perp$ be a dual code of a cyclic code $C$ over $GF(q).$ Then $C^\perp=\langle \bar{h}(x)\rangle$ is reversible cyclic code if and only if $h(x)\in C^\perp$.
\end{theorem}
\begin{proof}
Let $C^\perp$ be reversible. Since $\bar{h}(x)=(h_{k},h_{k-1},\dots, h_0)\in C^\perp$, we get $(\bar{h}(x))^r=(h_0,h_{1},\dots, h_{k-1},h_{k})=h(x)\in C^\perp$.
Conversely, suppose $h(x)\in C^\perp$. Then $h(x)=(\bar{h}(x))^r$, i.e., $(\bar{h}(x))^r=h(x)\in C^\perp$. Therefore, $C^\perp=\langle \bar{h}(x)\rangle$ is a reversible cyclic code.
\end{proof}
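A tiny brute-force illustration of ours (not from the paper): for the even-weight binary cyclic code $C=\langle x+1\rangle$ of length $4$, the parity check polynomial is $h(x)=1+x+x^2+x^3$. Since $h(x)$ lies in the dual (the repetition code), the theorem predicts the dual is reversible:

```python
from itertools import product

n = 4
# C = <x+1> in F_2[x]/(x^4 - 1): the even-weight code, h(x) = 1+x+x^2+x^3.
C = {c for c in product([0, 1], repeat=n) if sum(c) % 2 == 0}
# Brute-force dual: all vectors orthogonal to every codeword of C.
dual = {v for v in product([0, 1], repeat=n)
        if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)}
assert (1, 1, 1, 1) in dual                 # h(x) lies in the dual ...
assert all(v[::-1] in dual for v in dual)   # ... and the dual is reversible
```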
\begin{definition}
Let $I$ be an ideal in $R_n$. The annihilator $A(I)$ of $I$ in $R_n$ is defined as
$$A(I)= \{b(x)~|~ f(x)b(x)=0 ~\text{for all} ~f(x) ~\text{in}~ I\}. $$
\end{definition}
If $C$ is a cyclic code with associated ideal $I,$ then the associated ideal of $C^\perp$ is
$$A(I)^*= \{g^*(x)~|~ g(x)\in I\}.$$
\begin{proposition}\label{prop1}
Let $C$ be a cyclic code of odd length over $\mathbb{Z}_2+u\mathbb{Z}_2$. Then
$$Ann(C)=\left \langle \frac{x^n-1}{a(x)},u\frac{x^n-1}{g(x)} \right \rangle.$$
\end{proposition}
\begin{proof}
Since $C$ is a cyclic code of odd length over $\mathbb{Z}_2+u\mathbb{Z}_2$, we have $$C= \langle g(x)+ua(x) \rangle=\langle g(x), ua(x) \rangle $$ with $a(x)|g(x)|(x^n-1)$. Also, there exists $m_1(x)$ such that $g(x)=a(x)m_1(x)$.
Note that
\begin{align*}
\left(\frac{x^n-1}{a(x)}\right)(g(x)+ua(x))&=\left(\frac{x^n-1}{a(x)}\right)g(x)+u\left(\frac{x^n-1}{a(x)}\right)a(x)\\
&=\left(\frac{x^n-1}{a(x)}\right)a(x)m_1(x)=0.
\end{align*}
Also, $$u\frac{x^n-1}{g(x)}(g(x)+ua(x)) =0.$$
Hence, $M=\left \langle \frac{x^n-1}{a(x)},u\frac{x^n-1}{g(x)}\right\rangle \subseteq Ann(C)$. \\
In order to prove $Ann(C)\subseteq M$, let $Ann(C)=\langle h(x),ur(x)\rangle$. Then
$$ur(x)(g(x)+ua(x))=0.$$ This implies there exists a polynomial $t_1(x)$ over $\mathbb{Z}_2$ such that
$$r(x)=\left(\frac{x^n-1}{g(x)}\right)t_1(x)\in M.$$
Also,
\begin{align*}
&h(x)(g(x)+ua(x))=0\\
&h(x)g(x)+uh(x)a(x)=0.
\end{align*}
Since $h(x)g(x)=0$, we get
$uh(x)a(x)=0$, i.e., there exists a polynomial $t_2(x)$ over $\mathbb{Z}_2$ such that
$$h(x)=\left(\frac{x^n-1}{a(x)}\right)t_2(x).$$
Hence,
$$Ann(C)=\langle h(x),ur(x)\rangle
\subseteq\left \langle \frac{x^n-1}{a(x)},u\frac{x^n-1}{g(x)} \right \rangle= M.$$ Therefore, $$Ann(C)=\left \langle \frac{x^n-1}{a(x)},u\frac{x^n-1}{g(x)} \right \rangle.$$
\end{proof}
The following result is the consequence of Proposition $\ref{prop1}$.
\begin{theorem}
Let $C$ be a cyclic code of odd length over $\mathbb{Z}_2+u\mathbb{Z}_2$. Then
$$C^{\perp}=\left \langle \left(\frac{x^n-1}{a(x)}\right)^* ,u\left(\frac{x^n-1}{g(x)}\right)^*\right \rangle.$$
\end{theorem}
\begin{theorem}
Let $C$ be a reversible cyclic code of odd length $n$ over $\mathbb{Z}_2+u\mathbb{Z}_2$ with $a(x)|g(x)|(x^n-1)$ and $C^{\perp}= \left \langle \left(\frac{x^n-1}{a(x)}\right)^*,u\left(\frac{x^n-1}{g(x)}\right)^* \right \rangle$. Then $C^{\perp}$ is a reversible cyclic code over $\mathbb{Z}_2+u\mathbb{Z}_2$.
\end{theorem}
\begin{proof}
Let $C$ be a reversible cyclic code of odd length $n$ over $\mathbb{Z}_2+u\mathbb{Z}_2$. Then $g(x)$ and $a(x)$ are self reciprocal. Assume $\left(\frac{x^n-1}{a(x)}\right)=r_1(x)$ and $\left(\frac{x^n-1}{g(x)}\right)=r_2(x)$. Now,\\
$$(x^n-1)^{*}=a^{*}(x)r^{*}_1(x) $$ and $$(x^n-1)^{*}=g^{*}(x)r^{*}_2(x). $$ This implies $r^{*}_1(x)=\frac{(x^n-1)^{*}}{a^{*}(x)}=\frac{-(x^n-1)}{a(x)}=-r_1(x)$ and $r^{*}_2(x)=\frac{(x^n-1)^{*}}{g^{*}(x)}=\frac{-(x^n-1)}{g(x)}=-r_2(x)$.\\
Let $\bar{c}(x)\in C^{\perp}$. Then
\begin{align*}
(\bar{c}(x))^{*}&=\left(\left(\frac{x^n-1}{a(x)}\right)^{*}l_1(x)+u\left(\frac{x^n-1}{g(x)}\right)^{*}l_2(x)\right)^{*}\\
&=\left(-r_1(x)l_1(x)-ur_2(x)l_2(x)\right)^{*}\\
&=-r^{*}_1(x)l^{*}_1(x)-ux^{i}r^{*}_2(x)l^{*}_2(x)\\
&=r^{*}_1(x)q_1(x)+ur^{*}_2(x)q_2(x),
\end{align*}
for some polynomials $q_1(x)=-l^{*}_1(x)$ and $q_2(x)=-x^{i}l^{*}_2(x)$ over $R$. Therefore, $(\bar{c}(x))^{*}\in C^{\perp}$. Thus, by Theorem \ref{10}, $C^{\perp}$ is a reversible cyclic code over $\mathbb{Z}_2+u\mathbb{Z}_2$.
\end{proof}
Now, we present a result given by Abualrub and Siap \cite{T}, which will be used in the further discussion of the dual of a reversible cyclic code.
\begin{theorem}$\cite[Theorem~4]{T}$ Let $C$ be a cyclic code of even length over $\mathbb{Z}_2+u\mathbb{Z}_2$.
\begin{enumerate}
\item If $C=\langle g(x)+up(x), ua(x) \rangle $, with $a(x)|g(x)|(x^n-1)$, $a(x)|p(x)\left(\frac{x^n-1}{g(x)}\right)$ and $deg~ g(x)>deg ~a(x)> deg~p(x)$, and $g(x)=a(x)m_1(x)$, $p(x)\left(\frac{x^n-1}{g(x)}\right)=a(x)m_2(x)$, then $$Ann(C)=\left \langle \frac{x^n-1}{a(x)}+um_2(x),u\frac{x^n-1}{g(x)} \right \rangle ~\text{and}$$ $$C^{\perp}=\left \langle \left(\frac{x^n-1}{a(x)}\right)^*+ux^im_2^{*}(x),u\left(\frac{x^n-1}{g(x)}\right)^* \right \rangle,$$ where $i=deg \left(\frac{x^n-1}{a(x)}\right) - deg(m_2(x))$.
\item If $C=\langle g(x)+up(x) \rangle$ with $p(x)\left(\frac{x^n-1}{a(x)}\right)=g(x)m_2(x)$, then $$Ann(C)=\left \langle \frac{x^n-1}{g(x)}+um_2(x)\right \rangle ~\text{and}$$
$$C^{\perp}=\left \langle \left(\frac{x^n-1}{g(x)}\right)^*+ux^im_2^{*}(x) \right \rangle,$$ where $i=deg \left(\frac{x^n-1}{g(x)}\right) - deg(m_2(x))$.
\end{enumerate}
\end{theorem}
\begin{theorem}
Let $C$ be a reversible cyclic code of even length $n$ over $\mathbb{Z}_2+u\mathbb{Z}_2$ and $C^{\perp}= \left \langle \left(\frac{x^n-1}{a(x)}\right)^*+ux^i(m_2^*(x)),~ u\left(\frac{x^n-1}{a(x)}\right)^*\right \rangle$. If $a(x)$ divides $p^*(x)+x^jp(x)$, where $i= deg \left(\frac{x^n-1}{a(x)}\right) - deg(m_2(x))$ and $j=deg \left(\frac{x^n-1}{a(x)}\right)^* - deg(m_2^*(x))-i,$ then $C^{\perp}$ is reversible cyclic code over $\mathbb{Z}_2+u\mathbb{Z}_2$ provided $p(x)\neq 0$.
\end{theorem}
\begin{proof}
Let $C$ be a reversible cyclic code of even length $n$ over $\mathbb{Z}_2+u\mathbb{Z}_2$. For $C^{\perp}$ to be reversible, it suffices to show that $\left(\frac{x^n-1}{a(x)}\right)+ux^{i+j}m_2(x)$ and $u\left(\frac{x^n-1}{a(x)}\right)$ are in $C^{\perp}$. Note that
\begin{align*}
&\left(\left(\frac{x^n-1}{a(x)}\right)+ux^{i+j}m_2(x)\right)(g(x)+up(x))\\&= up(x)\left(\frac{x^n-1}{a(x)}\right) + ux^{i+j}m_2(x)g(x)\\&=um_2(x)g(x)+ux^{i+j}m_2(x)g(x)\\&= um_2(x)g(x)(1+x^{i+j}).
\end{align*}
Since $p(x)\left(\frac{x^n-1}{g(x)}\right)=a(x)m_2(x)$ and $a(x)$ divides $g(x)$, we get
\begin{align*}
& um_2(x)g(x)(1+x^{i+j})=\left(\frac{x^n-1}{a(x)}\right)up(x)(1+x^{i+j})
\\&=(x^n-1)u\left(\frac{p(x)+x^{i+j}p(x)}{a(x)}\right)
\\&=(x^n-1)u\left(\frac{p(x)+x^ip^*(x)+x^i(p^*(x)+x^jp(x))}{a(x)}\right).
\end{align*}
By Theorem \ref{7}, $a(x)$ divides $p(x)+x^ip^*(x)$. Hence, above expression can be written as
\begin{equation*}
(x^n-1)u\left(\frac{a(x)l^{'}(x)}{a(x)}\right)
=(x^n-1)ul^{'}(x)=0.
\end{equation*}
Thus, $$\left(\left(\frac{x^n-1}{a(x)}\right)+ux^{i+j}m_2(x)\right)(g(x)+up(x))=0,$$
$$\left(\left(\frac{x^n-1}{a(x)}\right)+ux^{i+j}m_2(x)\right)ua(x)=0,$$
$$u\left(\frac{x^n-1}{a(x)}\right)(g(x)+up(x))=0, ~\text{and}$$
$$u\left(\frac{x^n-1}{a(x)}\right)ua(x)=0.$$
Hence, $C^{\perp}$ is a reversible cyclic code over $\mathbb{Z}_2+u\mathbb{Z}_2$.
\end{proof}
\section{Minimum Hamming distance of a cyclic code over $R $}
In this section, we find the minimum Hamming distance of a cyclic code of arbitrary length over $R$. Let $C=\langle g(x)+up(x), ua(x) \rangle$ be a cyclic code of length $n$ over $R$. Define $C_u=\{b(x)~|~ ub(x) \in C\}$. Then $C_u$ is a cyclic code of length $n$ over $\mathbb{F}_q$. The following results give a technique to find the minimum distance of a cyclic code of arbitrary length over $R$.
\begin{theorem}
Let $C=\langle g(x)+up(x), ua(x) \rangle$ be a cyclic code of length $n$ over $R$. Then $C_u=\langle a(x) \rangle$.
\end{theorem}
\begin{proof}
Straightforward.
\end{proof}
\begin{theorem}
Let $C$ be a cyclic code of length $n$ over $R$. Then $d_{H}(C)=d_{H}(C_u)$.
\end{theorem}
\begin{proof}
Let $m(x)\in C_u$ be such that $d_{H}(C_u)=w_{H}(m(x))$. Then $um(x)\in C$. Also, $w_{H}(m(x))=w_{H}(um(x))$, hence, $d_{H}(C_u) \geq d_{H}(C)$. \\
Conversely, suppose $d_{H}(C)=w_{H}(b(x))$, where $b(x) \in C$. If the coefficient of any power of $x$ in $b(x)$ is a zero divisor of $R ,$ then we get an element in $C$ with Hamming weight less than that of $b(x),$ which contradicts our assumption. Therefore, each coefficient of $b(x)$ is either zero or a unit in $R$, i.e., of the form $\alpha +u \beta~ \text{with either}~ \alpha= \beta=0 ~\text{or}~ \alpha \in \mathbb{F}_q^*$. In this case, $ub(x)=u c(x)$ for some $c(x) \in C_u$. Therefore, $w_{H}(b(x))=w_{H}(uc(x)) = w_{H}(c(x))$ and hence $d_{H}(C_u) \leq d_{H}(C)$. Thus, $d_{H}(C)=d_{H}(C_u)$.
\end{proof}
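The equality can be confirmed by exhaustive search on a small instance (our own illustration, not from the paper): take $n=4$ over $\mathbb{Z}_2+u\mathbb{Z}_2$ and $C=\langle (x^2+1)+u,\, u(x+1)\rangle$, i.e., $g(x)=x^2+1$, $p(x)=1$, $a(x)=x+1$, encoding $a+ub$ as the integer $a+2b$.

```python
from itertools import product

n = 4
# Encode a + u*b in Z_2 + u*Z_2 as the integer a + 2b: ring addition is then
# bitwise XOR, and u*(a + u*b) = u*a becomes (x & 1) << 1.
def shift(v): return v[-1:] + v[:-1]
def umul(v):  return tuple((x & 1) << 1 for x in v)

g_up = (3, 0, 1, 0)   # g(x) + u p(x) = (x^2 + 1) + u
ua   = (2, 2, 0, 0)   # u a(x) = u(x + 1)

# Close the R[x]-span of the generators under addition, shift and u-multiplication.
C, frontier = {(0,) * n}, [g_up, ua]
while frontier:
    v = frontier.pop()
    if v in C:
        continue
    C.add(v)
    frontier += [tuple(x ^ y for x, y in zip(v, w)) for w in list(C)]
    frontier += [shift(v), umul(v)]

wt = lambda v: sum(x != 0 for x in v)
d_C = min(wt(v) for v in C if any(v))

# C_u = { b(x) over F_2 : u*b(x) in C }; u*b encodes as coefficients 2*b_i.
C_u = [b for b in product([0, 1], repeat=n) if tuple(2 * x for x in b) in C]
d_Cu = min(wt(b) for b in C_u if any(b))
assert d_C == d_Cu == 2
```

Here $C_u=\langle x+1\rangle$ is the even-weight code of length $4$, and both minimum distances equal $2$, matching the theorem.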
\section{Examples}
\begin{example} For length $n=4$.
$$x^4-1=(x+1)(x+2)(x^2+1) ~\text{over}~ \mathbb{Z}_3.$$
Some of the reversible cyclic codes of length $4$ over $\mathbb{Z}_3
+u \mathbb{Z}_3 $ are given below:
\begin{longtable}[h]{|l|l|l|l|l|l|}
\hline
Non-zero Generator & Dimension $k$ &$d(C)$ & MDS \\
Polynomial (s) of $C$ & of $C $ & &\\
\hline
$1$ or $1+u$ & $4$ & $1$ & $*$\\
\hline
$x+1$ & $3$ & $2$ &$*$\\
\hline
$x^2+1$ & $2$ & $2$ &\\
\hline
$(x+1)(x^2+1)$ & $1$ & $4$ & $*$\\
\hline
$x+1, u$ & $4$ & $1$ & $*$\\
\hline
$x^2+1, u$ & $4$ & $1$ & $*$\\
\hline
$(x+1)(x^2+1), u$ & $4$ & $1$ & $*$\\
\hline
$(x+1)(x^2+1), u(x+1)$ & $3$ & $2$ & $*$\\
\hline
$(x+1)(x^2+1), u(x^2+1)$ & $2$ & $2$ &\\
\hline
\end{longtable}
\end{example}
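The factorization used above can be double-checked mechanically; the helper below is our own sketch:

```python
def polymul_mod(a, b, q):
    """Multiply coefficient lists (low-to-high), reducing coefficients mod q."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

q, prod = 3, [1]
for f in ([1, 1], [2, 1], [1, 0, 1]):      # x+1, x+2, x^2+1
    prod = polymul_mod(prod, f, q)
assert prod == [2, 0, 0, 0, 1]             # x^4 + 2 = x^4 - 1 over Z_3
```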
\begin{example} For length $n=5$.
$$x^5-1=(x-1)(x^4+x^3+x^2+x+1) ~\text{over}~ \mathbb{Z}_2.$$
Since all of the above factors are self-reciprocal polynomials, by Theorem $\ref{6},$ all the cyclic codes of length $5$ over $R$ are reversible. Some of these are given below:
\begin{longtable}[h]{|l|l|l|l|l|l|}
\hline
Non-zero Generator & Dimension $k$ &$d(C)$ & MDS \\
Polynomial (s) of $C$ & of $C $ & &\\
\hline
$1$ or $(1+u)$ & $5$ & $1$ & $*$\\
\hline
$(u+1)(x+1)$ & $4$ & $2$ &$*$\\
\hline
$(u+1)(x^4+x^3+x^2+x+1)$ & $1$ & $5$ &$*$\\
\hline
$x+1+u$ & $4$ &$1$ &\\
\hline
$x^4+x^3+x^2+x+1+u$ & $4$ & $1$ & \\
\hline
\end{longtable}
\end{example}
\begin{example}
For length $n=6$.
$$x^6-1=(x+1)^3(x+2)^3 ~\text{over}~ \mathbb{Z}_3.$$
From Theorem \ref{Th1}, the non-zero free module or single generator reversible cyclic codes of length $6$ over $\mathbb{Z}_3 + u \mathbb{Z}_3 $ are given below:
\begin{longtable}[h]{|l|l|l|l|l|l|}
\hline
Non-zero Generator & Dimension $k$ &$d(C)$ & MDS \\
Polynomial (s) of $C$ & of $C $ & &\\
\hline
$1$ or $(1+u)$ &$6$ & $1$& $*$\\
\hline
$x+1$ & $5$ & $2$& $*$\\
\hline
$(x+1)^2$ &$4$ & $2$& \\
\hline
$(x+1)^3$ &$3$ & $2$& \\
\hline
$(x+2)^2$ &$4$ & $2$& \\
\hline
$x+1, u$ &$6$ & $1$& $*$\\
\hline
$(x+1)^2, u$ &$6$ & $1$& $*$\\
\hline
$(x+1)^3, u$ &$6$ & $1$& $*$\\
\hline
$(x+1)^2, u(x+1)$ &$5$ & $2$& $*$\\
\hline
$(x+1)^3, u(x+1)$ &$5$ & $2$& $*$\\
\hline
$(x+1)^3, u(x+1)^2$ &$4$ & $2$& \\
\hline
$(x+2)^2, u$ &$6$ & $2$& \\
\hline
\end{longtable}
\end{example}
\begin{example} For length $n=4$.
$$x^4-1=(x+1)^4 ~\text{over}~ \mathbb{Z}_2.$$ From Theorem \ref{Th1}, the non-zero free module or single generator reversible cyclic codes of length $4$ over $R$ are given below:
\newpage
\begin{longtable}[h]{|l|l|l|l|l|l|}
\hline
Non-zero Generator & Dimension $k$ &$d(C)$ & MDS \\
Polynomial (s) of $C$ & of $C $ & &\\
\hline
$1$ or $1+u$ & $4$ & $1$ & $*$ \\
\hline
$x+1$ & $3$ & $2$ & $*$ \\
\hline
$x+1, u$ & $4$& $1$& $*$ \\
\hline
$x+1+u$ & $3$& $2$& $*$ \\
\hline
$x^2+1$ & $2$& $2$ & \\
\hline
$x^2+1, u$ & $4$& $1$ & $*$ \\
\hline
$x^2+1, u(x+1)$ & $3$& $2$& $*$ \\
\hline
$x^2+1+u$ & $2$& $2$ &\\
\hline
$x^2+1+u, u(x+1)$ &$3$& $2$& $*$ \\
\hline
$x^2+1+u(x+1)$ & $2$& $2$ &\\
\hline
$x^2+1+u, u(x+1)$ & $3$& $2$& $*$ \\
\hline
$x^3+1$ & $1$& $2$ &\\
\hline
$x^3+1, u$ & $4$& $1$& $*$ \\
\hline
$x^3+1+u, u(x+1)$ & $3$& $2$& $*$ \\
\hline
$x^3+1+u(x+1), u(x^2+1)$ & $2$& $2$&\\
\hline
$x^3+1, u(x+1)$ & $3$& $2$& $*$ \\
\hline
$x^3+1, u(x^2+1)$ & $2$& $2$&\\
\hline
\end{longtable}
\end{example}
\begin{example}
For length $n=6$.
$$x^6-1=(x+1)^2(x^2+x+1)^2 ~\text{over}~ \mathbb{Z}_2.$$ From Theorem \ref{Th1}, the non-zero free module or single generator reversible cyclic codes of length $6$ over $R$ are given below:
\begin{longtable}[h]{|l|l|l|l|l|l|}
\hline
Non-zero Generator & Dimension $k$ &$d(C)$ & MDS \\
Polynomial (s) of $C$ & of $C $ & &\\
\hline
$1$ or $(1+u)$ &$6$ & $1$ & $*$\\
\hline
$x+1$ & $5$& $2$ & $*$\\
\hline
$x+1+u$ & $5$& $2$ & $*$\\
\hline
$x^2+1$ & $4$& $2$ &\\
\hline
$x^2+1, u$ & $6$& $1$ & $*$ \\
\hline
$x^2+1, u(x+1)$ & $5$& $2$ & $*$\\
\hline
$x^2+x+1$ & $4$& $2$ &\\
\hline
$x^2+x+1, u$ & $6$& $1$ & $*$\\
\hline
$x^3+1$ & $3$& $2$ & \\
\hline
$x^3+1, u$ & $6$& $1$ & $*$\\
\hline
$x^3+1, u(x+1)$ & $5$& $2$ & $*$\\
\hline
$x^3+1+u, u(x+1)$ & $5$& $2$ & $*$\\
\hline
\end{longtable}
\end{example}
\begin{example}For length $n=7$.
$$x^7-1=(x + 1)(x^3 + x + 1)(x^3 + x^2 + 1) ~\text{over}~ \mathbb{Z}_2.$$
The only self-reciprocal factors are $(x+1)$ and $(x^6+x^5+x^4+x^3+x^2+x+1)$. By Theorem $\ref{6}$, the reversible cyclic codes of length $7$ over $\mathcal{R}$ are given below:
\begin{longtable}[h]{|l|l|l|l|}
\hline
Non-zero Generator & Dimension $k$ &$d(C)$ & MDS \\
Polynomial (s) of $C$ & of $C $ & &\\
\hline
$1$ or $1+u$ & $7$ & $1$& $*$\\
\hline
$(1+u)(x+1)$ & $6$ &$2$ &$*$\\
\hline
$(x+1),u$ & $7$ & $1$& $*$\\
\hline
$(x^6+x^5+x^4+x^3+x^2+x+1)$ & $1$ & $7$& $*$\\
\hline
$(x^6+x^5+x^4+x^3+x^2+x+1)$, $u$ & $7$ & $1$& $*$\\
\hline
\end{longtable}
\end{example}
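The factorisation used above is easily confirmed by machine; a short Python check (coefficient lists over $\mathbb{Z}_2$, lowest degree first) multiplies the three factors back together:

```python
def poly_mul(a, b, p):
    """Multiply coefficient lists (lowest degree first) over Z_p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

# (x+1)(x^3+x+1)(x^3+x^2+1) over Z_2:
prod = poly_mul(poly_mul([1, 1], [1, 1, 0, 1], 2), [1, 0, 1, 1], 2)
assert prod == [1, 0, 0, 0, 0, 0, 0, 1]   # x^7 + 1 = x^7 - 1 over Z_2
```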
\begin{example} For length $n=10$.
$$x^{10}-1=(x+1)^5(x+4)^5 ~\text{over}~ \mathbb{Z}_5.$$ From Theorem \ref{Th1}, the non-zero free module or single generator reversible cyclic codes of length $10$ over $\mathbb{Z}_5+u \mathbb{Z}_5$ are given below:
\begin{longtable}[h]{|l|l|l|l|}
\hline
Non-zero Generator & Dimension $k$ &$d(C)$ & MDS \\
Polynomial (s) of $C$ & of $C $ & &\\
\hline
$1$ or $1+u$ & $10$ &$1$& $*$\\
\hline
$(x+1)^j, 1 \leq j \leq 5 $ & $9,8,7,6,5$ & $2 $& $*$, for $j=1$ \\
\hline
$(x+4)^{2j}, ~ \text{where}~ j=1,2$ & $8,6$ &$2$&\\
\hline
$x+1, u$ & $10$ & $1$ & $*$\\
\hline
$(x+1)^2, u$ &$10$ & $1$& $*$\\
\hline
$(x+1)^2, u(x+1) $ &$9$ & $2$& $*$\\
\hline
$(x+1)^3, u$ & $10$ & $1$& $*$\\
\hline
$(x+1)^3, u(x+1)^j$ where $ 1 \leq j \leq 2$ & $9,8$ &$2$& $*$, for $j=1$\\
\hline
$(x+1)^4, u$ &$10$ & $1$& $*$\\
\hline
$(x+1)^4, u(x+1)$ &$9$ & $2$& $*$\\
\hline
$(x+4)^2, u $ &$10$ & $1$& $*$\\
\hline
$(x+1)^2+ux$ & $8$ & $2$& \\
\hline
$(x+4)^2+ u x$ & $8$ &$2$&\\
\hline
$(x+1)^i(x+4)^{2}, $ where $ i=1,2$ & $7,6$ &$3$&\\
\hline
$(x+1)^3(x+4)^{2} $ & $5$ &$4$&\\
\hline
$(x+1)^4(x+4)^{2} $ & $4$ &$5$&\\
\hline
$(x+1)^5(x+4)^{2} $ & $3$ &$6$&\\
\hline
$(x+1)(x+4)^{4} $ & $5$ &$4$&\\
\hline
$(x+1)^i(x+4)^{4}, $ ~\text{where}~ $ 2 \leq i \leq 4$ & $4,3,2$ &$5$&\\
\hline
$(x+1)^5(x+4)^{4} $ & $1$ &$10$& $*$\\
\hline
$(x+1)^4+ u x^2, u$ & $10$ &$1$& $*$\\
\hline
$(x+1)^4+ u x^2, u(x+1)$ & $9$ &$2$& $*$\\
\hline
$(x+1)^2+u x, u(x+1)^2$ & $8$ &$2$&\\
\hline
$(x+1)^2(x+4)^2+u(x^3+x)$ & $6$ &$3$&\\
\hline
$(x+1)^2(x+4)^2+u(x^3+x^2+x)$ & $6$ &$3$&\\
\hline
$(x+1)(x+4)^2+u(x^2+x)$ & $7$ &$3$&\\
\hline
\end{longtable}
\end{example}
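As with the binary examples, the factorisation $x^{10}-1=(x+1)^5(x+4)^5$ over $\mathbb{Z}_5$ can be verified directly; a small Python check (illustration only, coefficient lists with lowest degree first):

```python
def poly_mul(a, b, p):
    """Multiply coefficient lists (lowest degree first) over Z_p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_pow(a, e, p):
    """e-th power of a polynomial over Z_p."""
    out = [1]
    for _ in range(e):
        out = poly_mul(out, a, p)
    return out

p = 5
lhs = [p - 1] + [0] * 9 + [1]    # x^10 - 1 = x^10 + 4 over Z_5
rhs = poly_mul(poly_pow([1, 1], 5, p), poly_pow([4, 1], 5, p), p)
assert lhs == rhs
```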
\hspace{-.7cm}\textbf{Remark:} In the above examples, $*$ marks the optimal (MDS) codes, as verified using the Magma software \cite{Bosma}.
\section{Conclusion}
In this article, we studied reversible cyclic codes of arbitrary length $n$ over the ring $ R = \mathbb{F}_q + u \mathbb{F}_q$, where $u^2=0$. We provided a unique set of generators for these codes as ideals in the ring $R[x]/\langle x^n-1\rangle$. Moreover, in Section $5$, we gave conditions under which the dual of a reversible cyclic code is itself reversible. In Section $6$, we presented examples in support of our results.
\section{Introduction}
Recent measurements by ALICE~\cite{ALICE:2017jyt} show a smooth increase of strange-to-non-strange particle ratios across pp, p--Pb and Pb--Pb collisions as a function of charged-particle multiplicity at the LHC. This universal scaling with particle multiplicity may point towards a common underlying physics mechanism across collision systems. However, no onset of jet-quenching effects has been observed so far in the smaller pp and p--Pb systems~\cite{Nagle:2018nvi}. To disentangle the phenomena of soft (underlying event) and hard (jet-induced) particle production, the relative transverse activity classifier ($R_{\rm{T}}$), an event-shape observable, can be exploited as a powerful tool. Here, the production of light-flavor charged hadrons in different classes of $R_{\rm{T}}$ in pp, p--Pb and Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 $\textrm{TeV}$ is reported, together with a search for jet-quenching behavior in small collision systems.
\section{Relative Transverse Activity Classifier ($R_{\rm{T}}$)}
Using $R_{\rm{T}}$, proposed in~\cite{Martin:2016igp}, final-state particle production can be studied as a function of the underlying-event activity. To define $R_{\rm{T}}$, the analysed events are required to have a leading trigger particle above a certain $p_{\rm T}$. Relative to the leading trigger particle, an event can be divided into three azimuthal regions. Denoting by $\phi_{\rm t}$ the azimuthal angle of the trigger particle and by $\phi_{\rm a}$ the azimuthal angle of the associated particles, the regions are defined as follows:
\begin{itemize}
\item Near side: $|\phi_{\rm t} - \phi_{\rm a}| < \frac{\pi}{3}$
\item Away side: $|\phi_{\rm t} - \phi_{\rm a}| > \frac{2\pi}{3}$
\item Transverse side: $\frac{\pi}{3} \leq |\phi_{\rm t} - \phi_{\rm a}| \leq \frac{2\pi}{3}$
\end{itemize}
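The classification above can be sketched as a short Python helper (an illustration; the function name and the folding of $|\phi_{\rm t}-\phi_{\rm a}|$ into $[0,\pi]$ are choices made here, not part of the published analysis):

```python
import math

def region(phi_trig, phi_assoc):
    """Classify an associated particle relative to the trigger particle
    by the azimuthal separation, folded into [0, pi]."""
    dphi = abs(phi_trig - phi_assoc) % (2 * math.pi)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    if dphi < math.pi / 3:
        return "near"
    if dphi > 2 * math.pi / 3:
        return "away"
    return "transverse"

assert region(0.0, 0.2) == "near"
assert region(0.0, math.pi) == "away"
assert region(0.0, math.pi / 2) == "transverse"
```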
The near side is dominated by jet activity related to the trigger particle. Since jets are typically produced back-to-back in azimuthal angle, the away side contains the recoiling jets. The transverse side is dominated by particle production from the underlying event (UE). The near and away sides also contain a UE contribution similar to that in the transverse side; it can therefore be removed by subtracting the transverse-side yield, see Section~\ref{res}. The leading-particle selection of 8 $< p_{\rm T}^{\rm trig.} <$ 15 GeV/$c$ ensures that the number density in the transverse region remains almost independent of the leading-particle $p_{\rm T}$~\cite{Acharya:2019nqn} and reduces the impact of possible elliptic flow on the measurements. $R_{\rm T}$ is defined as~\cite{Martin:2016igp,Ortiz:2017jaz},
\begin{eqnarray}
R_{\rm T} = \frac{N_{\rm ch}^{\rm TS}}{\langle N_{\rm ch}^{\rm TS} \rangle},
\label{eq2}
\end{eqnarray}
where $N_{\rm ch}^{\rm TS}$ is the charged-particle multiplicity in the transverse side. Events with $R_{\rm T} \rightarrow 0$ are expected to be dominated by jet fragmentation.
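As an illustration, Eq.~\ref{eq2} amounts to the following toy calculation (hypothetical multiplicities, not ALICE data):

```python
def r_t(n_ch_ts, sample):
    """R_T of one event: its transverse-side multiplicity N_ch^TS divided
    by the event-sample mean <N_ch^TS>."""
    return n_ch_ts / (sum(sample) / len(sample))

sample = [2, 4, 6, 8]            # toy N_ch^TS values; mean is 5
assert r_t(0, sample) == 0.0     # R_T -> 0: jet-dominated event
assert r_t(10, sample) == 2.0    # large underlying-event activity
```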
\section{Results and Discussion}
\label{res}
\begin{figure}[ht!]
\centering
\includegraphics[width=29pc]{2020-05-22-Figure2a.png}
\caption{\label{fig1} System size dependence of $\langle p_{\rm T}\rangle$ for charged-particles as a function of $R_{\rm T}$ in the near (left), away (middle), and transverse (right) sides.}
\end{figure}
Figure~\ref{fig1} shows the $\langle p_{\rm T}\rangle$ of charged particles as a function of $R_{\rm T}$ in the near (left), away (middle), and transverse (right) sides for pp, p--Pb and Pb--Pb collisions. The measurement of charged particles follows a procedure similar to that described in Ref.~\cite{Acharya:2018qsh}. The near- and away-side $\langle p_{\rm T}\rangle$ for pp and p--Pb collisions decreases at low $R_{\rm T}$ and saturates at high $R_{\rm T}$. This behavior indicates that the contribution from the near- and away-side jets dominates at low $R_{\rm T}$, while soft particle production starts contributing in the high-$R_{\rm T}$ region. It is also interesting to note that the values of $\langle p_{\rm T}\rangle$ are similar for all systems as $R_{\rm T}\rightarrow 0$; one would naively expect this behavior, as this region has very little contribution from soft particles, i.e. the UE. The transverse-side $\langle p_{\rm T}\rangle$ increases with $R_{\rm T}$, as the UE activity grows with increasing $R_{\rm T}$. For large $R_{\rm T}$, the $\langle p_{\rm T}\rangle$ approaches a similar value in all three topological regions for a given system, as they are all dominated by the UE.
\begin{figure}[ht!]
\centering
\includegraphics[width=14pc]{2020-05-22-Figure4_IAA_NS.png}
\includegraphics[width=14pc]{2020-05-22-Figure4_IAA_AS.png}
\caption{\label{fig2} $I_{\rm pp,p-Pb,Pb-Pb}$ as a function of $\langle N_{\rm ch}^{\rm TS} \rangle$ in different V0M/V0A multiplicity classes for the near (left) and away (right) side in pp, p--Pb, and Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV.}
\end{figure}
To investigate the presence of jet-quenching effects in small collision systems, $I_{\rm pp,p-Pb,Pb-Pb}$, an observable calculated from the yields in the different topological regions, is measured as a function of $\langle N_{\rm ch}^{\rm TS} \rangle$ for different V0M (V0A) multiplicity classes of pp and Pb--Pb (p--Pb) collisions. $I_{\rm pp,p-Pb,Pb-Pb}$ is analogous to the quantity $I_{\rm AA}$ calculated in Ref.~\cite{Aamodt:2011vg} and is expected to be highly sensitive to medium effects. A suppression of this observable on the away side would indicate the presence of jet quenching, while an enhancement on the near side would indicate a bias due to the trigger-particle selection and/or the presence of medium effects. It is defined as the ratio of the yield in the near or away region (after subtraction of the transverse-side yield) in a given collision system to the corresponding yield in minimum-bias pp collisions. It can be expressed as,
\begin{equation}
I_{\rm pp,p-Pb,Pb-Pb} = \frac{Y^{\rm pp,p-Pb,Pb-Pb} - Y^{\rm pp,p-Pb,Pb-Pb}_{\rm TS}}{Y^{\rm pp~min. bias} - Y^{\rm pp~min. bias}_{\rm TS}}.
\end{equation}
Here, $Y$ represents the integrated yield of charged particles in a particular topological region. A direct selection on $N_{\rm ch}^{\rm TS}$ is not made, as it would bias the near- and away-side yields~\cite{Ortiz:2020dph}. Instead, events are selected using a forward-rapidity estimator (V0M for pp and Pb--Pb collisions and V0A for p--Pb collisions), and the corresponding $N_{\rm ch}^{\rm TS}$ is calculated for each multiplicity class. Figure~\ref{fig2} shows the $I_{\rm pp,p-Pb,Pb-Pb}$ in the range 4 $< p_{\rm T}^{a} < $ 6 GeV/$c$ as a function of $\langle N_{\rm ch}^{\rm TS} \rangle$ in different V0M/V0A multiplicity classes for the near (left) and away (right) side in pp, p--Pb, and Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV. Here, $p_{\rm T}^{a}$ is the transverse momentum of the associated particles with respect to the leading trigger particle. The values of $I_{\rm PbPb}$ for the most central and most peripheral Pb--Pb collisions show similar trends to those reported by ALICE in Ref.~\cite{Aamodt:2011vg} for Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV. In small collision systems, no enhancement (suppression) of $I_{\rm pp,p-Pb}$ is observed on the near (away) side for pp or p--Pb collisions within uncertainties. This indicates either the absence of jet-quenching effects in small collision systems or jet-quenching effects too small to be detected in the measured $\langle N_{\rm ch}^{\rm TS} \rangle$ ranges.
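For illustration, the UE subtraction and normalisation defining $I_{\rm pp,p-Pb,Pb-Pb}$ reduce to a one-line ratio (toy yields, not ALICE data):

```python
def i_ratio(y_region, y_ts, y_pp_mb_region, y_pp_mb_ts):
    """UE-subtracted near- or away-side yield, normalised to the same
    quantity in minimum-bias pp collisions."""
    return (y_region - y_ts) / (y_pp_mb_region - y_pp_mb_ts)

# Toy yields: identical UE-subtracted jet yields give I = 1.
assert i_ratio(5.0, 2.0, 4.0, 1.0) == 1.0
# A suppressed away-side yield gives I < 1, the jet-quenching signature.
assert i_ratio(3.5, 2.0, 4.0, 1.0) == 0.5
```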
\section{Summary}
In summary, $R_{\rm T}$ allows one to vary the magnitude of the underlying-event contribution and to study final-state particle production in different topological regions. The system-size dependence of charged-particle production indicates that the contribution from the near- and away-side jets dominates at low $R_{\rm T}$. At high $R_{\rm T}$, the $\langle p_{\rm T}\rangle$ approaches a similar value in all three topological regions for a given collision system. In contrast to Pb--Pb collisions, no suppression of $I_{\rm pp,p-Pb}$ is observed on the away side for pp and p--Pb collisions, indicating either the absence of jet-quenching effects in small collision systems or jet-quenching effects too small to be detected in the measured $\langle N_{\rm ch}^{\rm TS} \rangle$ ranges.
\section*{Acknowledgements}
S.T. acknowledges the support from CONACyT under the Grant No.A1-S-22917 and postdoctoral fellowship of DGAPA UNAM.
\section{$A$ Series Quiver Constructions}
\label{sec:ASeries}
\subsection{Quiver Types}
\label{subsec:AQuivers}
The constructions for the Slodowy slices of $A$ series nilpotent orbits draw upon the same two quiver types as the constructions for the closures of the nilpotent orbits. These are shown in figure \ref{fig:A1}:
\begin{enumerate}
\item Linear quivers based on partitions. These quivers ${\cal L}_A (\rho)$ consist of a $SU(N_0)$ flavour node connected to a linear chain of $U(N_{i})$ gauge nodes, where the decrements between nodes, $\rho_i = N_{i-1} - N_{i}$, constitute an ordered partition of $N_0$, $\rho \equiv \{\rho_1,\ldots,\rho_{{k}}\}$, where $\rho_i \ge \rho_{i+1}$ and $\sum \nolimits_{i = 1}^{k} {\rho _i} = N_0$.
\item Balanced quivers based on Dynkin diagrams. These quivers ${\cal B}_A ({\mathbf N_f})$ consist of a linear chain of $U(N_i)$ gauge nodes (in the form of an $A$ series Dynkin diagram), with each gauge node connected to a flavour node of rank $N_{f_i}$, where $N_{f_i} \ge 0$. The ranks of the gauge nodes are chosen such that each gauge node is balanced (as explained below), after taking account of any attached flavour nodes.
\end{enumerate}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{genAquivers.png}\\
\caption[$A$ Series Linear and Balanced Quiver Types.]{$A$ Series linear and balanced quiver types. In the canonical linear quiver, the unitary gauge nodes (blue) are in descending order with decrements in non-increasing order. In the balanced quiver, the unitary gauge nodes are all balanced by their attached gauge nodes (blue) and flavour nodes (red).}
\label{fig:A1}
\end{figure}
On the Higgs branch, the flavour nodes of both types of quiver define an overall $S(\mathop \otimes \limits_i {U_{N_{f_i}}})$ global symmetry, while on the Coulomb branch, the global symmetry group follows from the Dynkin diagram formed by any balanced gauge nodes in the quiver.\footnote{The concept of balance was used in \cite{Gaiotto:2008sa}, in order to distinguish between (a) those Coulomb branch monopole operators that are ``good", with unit conformal dimension and act as root space operators, (b) those that are ``ugly" with half-integer conformal dimension and act as weight space operators, and (c) those that are ``bad" with zero or negative conformal dimension, which lead to divergences.}
The balance of a unitary gauge node is defined as the sum of the ranks of its adjacent gauge nodes, plus the number of attached flavours, less twice its rank:
\begin{equation}
\label{eq:Aquivers1}
\begin{aligned}
\text{Balance(i)} \equiv {N_{f_i}} + \sum\limits_{j = i \pm 1} {N_j} - 2{N_i} &.
\end{aligned}
\end{equation}
For the $A$ series balanced theories, the balance condition is ${\bf B} \equiv \{\text{Balance}(i)\}= \bf{0}$, and \ref{eq:Aquivers1} can be simplified as:
\begin{equation}
\label{eq:Aquivers2}
\begin{aligned}
{\mathbf N_f} ={\mathbf A} \cdot {\mathbf N},
\end{aligned}
\end{equation}
where the flavour and gauge nodes have been written as vectors ${\mathbf N_f} \equiv (N_{f_1},\ldots,N_{f_{k}})$ and ${\mathbf N} \equiv (N_{1},\ldots,N_{k})$, and $\mathbf A$ is the Cartan matrix of $A_{k}$.
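The balance condition \ref{eq:Aquivers2} is straightforward to check numerically; a pure Python sketch (the example ranks $(3,2,1)$ with $4$ flavours on the first node are those of a $T(SU(4))$-type quiver):

```python
def cartan_A(k):
    """Cartan matrix of A_k as a list of rows."""
    return [[2 if i == j else -1 if abs(i - j) == 1 else 0
             for j in range(k)] for i in range(k)]

def balance(Nf, N):
    """B = N_f - A.N for a linear chain of unitary gauge nodes."""
    A = cartan_A(len(N))
    return [nf - sum(aij * nj for aij, nj in zip(row, N))
            for nf, row in zip(Nf, A)]

# Gauge ranks (3,2,1) with 4 flavours on the first node: balanced.
assert balance([4, 0, 0], [3, 2, 1]) == [0, 0, 0]
# Removing a flavour spoils the balance of the first node.
assert balance([3, 0, 0], [3, 2, 1]) == [-1, 0, 0]
```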
$A$ series nilpotent orbits are in bijective correspondence with the partitions of $N$, and the linear quivers provide a complete set of Higgs branch constructions. The balanced quivers also provide a complete set of Coulomb branch constructions under the unitary monopole formula. Both types of quiver are thus in bijective correspondence with $A$ series orbits and can be related by $3d$ mirror symmetry \cite{Intriligator:1996ex}.
For Slodowy slices, the roles of these quiver types are reversed: the linear $A$ series quivers provide a complete set of Coulomb branch constructions, while the balanced $A$ series quivers provide a complete set of Higgs branch constructions.
When quivers of linear type are used to calculate Slodowy slices, via their Coulomb branches, the lack of balance of such quivers generally breaks the symmetry of $SU(N_0)$ to a subgroup, which becomes the isometry group of the Slodowy slice; this subgroup is in turn defined by the Dynkin diagram of the subset of gauge nodes in the linear quiver that remain balanced.
The identification of quivers for Slodowy slices follows directly from the partition data discussed in section \ref{sec:SS_Dim}. For the $A$ series, it is convenient to write the $SU(2)$ partition of the fundamental representation under $\rho$ as:
\begin{equation}
\label{eq:Aquivers3}
\begin{aligned}
\rho {\left[ {1,0, \ldots } \right]_{A}} =\left( {{N^{{N_{{f_N}}}}}, \ldots ,{n^{{N_{{f_n}}}}}, \ldots ,{1^{{N_{{f_1}}}}}} \right),
\end{aligned}
\end{equation}
%
so that the multiplicities of partition elements, which may be zero, are mapped to the flavour vector ${\mathbf N_{f}}$. The \textit{linear} quiver ${\cal L}_A (\rho)$ can be extracted simply by writing $\rho [fund.]$ in long form. The ranks $\bf N$ of the gauge nodes of the \textit{balanced} quiver ${\cal B}_A ({\mathbf N_f}(\rho))$ can be found from $\mathbf N_f$ by inverting \ref{eq:Aquivers2}. Alternatively, the balanced quivers ${\cal B}_A ({\mathbf N_f}(\rho))$ can be obtained by applying $3d$ mirror symmetry transformations to the linear quivers ${\cal L}_A (\rho)$, and vice versa.
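The map from a partition to the balanced quiver can be sketched in a few lines of Python (an illustration, restricted to partitions whose parts do not exceed $k$; the inversion of \ref{eq:Aquivers2} uses the closed-form inverse of the $A_k$ Cartan matrix, $(\mathbf{A}^{-1})_{ij}=\min(i,j)\,(k+1-\max(i,j))/(k+1)$):

```python
from fractions import Fraction

def flavours_from_partition(rho, k):
    """N_f(rho): the n-th entry is the multiplicity of the part n in rho."""
    return [rho.count(n) for n in range(1, k + 1)]

def gauge_ranks(Nf):
    """Solve N_f = A.N using the closed-form inverse of the A_k Cartan
    matrix: (A^-1)_ij = min(i,j) (k+1-max(i,j)) / (k+1)."""
    k = len(Nf)
    return [int(sum(Fraction(min(i, j) * (k + 1 - max(i, j)), k + 1) * Nf[j - 1]
                    for j in range(1, k + 1)))
            for i in range(1, k + 1)]

# Trivial orbit of SU(4): rho = (1,1,1,1) gives N_f = (4,0,0), and the
# balanced quiver has gauge ranks (3,2,1).
assert flavours_from_partition([1, 1, 1, 1], 3) == [4, 0, 0]
assert gauge_ranks([4, 0, 0]) == [3, 2, 1]
```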
We can use the notation above to express the key relationships and dualities involving $A$ series quivers for the Slodowy slices of nilpotent orbits:
\begin{equation}
\label{eq:Aquivers4}
\begin{aligned}
{{\cal S}}_{{{\cal N}},\rho } = Higgs \left[ {{\cal B}_A( {{\mathbf N_f}( \rho)})} \right] &= Coulomb\left[ {{\cal L}_A( \rho)} \right],\\
{\bar {\cal O}}_\rho = Higgs\left[ {{\cal L}_A(\rho ^T)} \right] &= Coulomb \left[ {{\cal B}_A( {{\mathbf N_f}( {{\rho ^T}} )} )} \right],\
\end{aligned}
\end{equation}
or, taking the transpose of $\rho$:
\begin{equation}
\label{eq:Aquivers4aa}
\begin{aligned}
{{\cal S}}_{{{\cal N}},{\rho^T} } = Higgs \left[ {{\cal B}_A( {{\mathbf N_f}( \rho ^T)})} \right] &= Coulomb\left[ {{\cal L}_A( \rho^T)} \right],\\
{\bar {\cal O}}_{\rho^T}= Higgs\left[ {{\cal L}_A(\rho) } \right] &= Coulomb \left[ {{\cal B}_A( {{\mathbf N_f}( {{\rho}} )} )} \right].\\
\end{aligned}
\end{equation}
The quivers for $A$ series slices are related to the quivers for the underlying orbits simply by the transpose of the partition $\rho$, combined with exchange of Coulomb and Higgs branches. This transposition of partitions, which is an order reversing involution on the poset of $A$ series orbits, is known as the Lusztig-Spaltenstein map \cite{Baohua-Fu:2015nr}.
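The Lusztig-Spaltenstein transposition itself is elementary to compute; a minimal Python sketch:

```python
def transpose(rho):
    """Conjugate partition, as in the Lusztig-Spaltenstein map."""
    return [sum(1 for part in rho if part >= i)
            for i in range(1, max(rho) + 1)]

assert transpose([2, 1, 1]) == [3, 1]      # an involution:
assert transpose(transpose([2, 1, 1])) == [2, 1, 1]
assert transpose([1, 1, 1, 1]) == [4]      # trivial orbit <-> regular orbit
```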
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{Aquiversgrid.png}\\
\caption[Quivers for $A_1$ to $A_4$ Slodowy Slices.]{{Quivers for $A_1$ to $A_4$ Slodowy slices.} The Higgs quivers are of type ${{\cal B}_{A}} \left({\mathbf N_f}(\rho) \right)$ and the Coulomb quivers are of type ${{{\cal L}}_{A}}\left(\rho \right)$.}
\label{fig:A2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{A5quiversgrid.png}\\
\caption[Quivers for $A_5$ Slodowy Slices.]{{Quivers for $A_5$ Slodowy slices.} The Higgs quivers are of type ${{\cal B}_{A}} \left({\mathbf N_f}(\rho) \right)$ and the Coulomb quivers are of type ${{{\cal L}}_{A}}\left(\rho \right)$.}
\label{fig:A3}
\end{figure}
\FloatBarrier
These linear or balanced quiver types correspond to the limiting cases of $T_\sigma ^\rho \left( {SU\left( N \right)} \right)$ theories \cite{Gaiotto:2008ak, Cremonesi:2014uva}, where one of the partitions $\sigma$ or $\rho$ is taken as the trivial partition:
\begin{equation}
\label{eq:Aquivers4a}
\begin{aligned}
{{\cal L}_A}(\rho ) \Leftrightarrow T_\rho ^{(1,1, \ldots ,1)}\left( {SU\left( N \right)} \right),\\
{{\cal B}_A}({{\bf{N}}_f}(\rho )) \Leftrightarrow T_{(1,1, \ldots ,1)}^\rho \left( {SU\left( N \right)} \right).\\
\end{aligned}
\end{equation}
Those quivers, whose Higgs or Coulomb branches yield Slodowy slices of $A$ series groups up to rank 5, are tabulated in figures \ref{fig:A2} and \ref{fig:A3}, labelled by the nilpotent orbit, giving the partition $\rho$ of the fundamental, the dimensions of the Slodowy slice, and the residual symmetry group\footnote{ We describe a $U(1)$ symmetry as $D_1$ if the characters $q^n$ of $U(1)$ irreps always appear paired with their conjugates in representations $(q^n+q^{-n})$.}. The balanced quivers used in the Higgs branch construction always have gauge nodes equal in number to the rank of $G= SU(N)$, while the linear quivers used in the Coulomb branch constructions always have a number of flavours equal to the fundamental dimension of $G = SU(N)$. The quivers ${\cal L}_A ((1^N))$ and ${\cal B}_A ({\mathbf N_f}(1^N))$ for the Higgs and Coulomb branch constructions of the Slodowy slice to the trivial nilpotent orbit are identical.
\subsection{Higgs Branch Constructions}
\label{subsec:AHiggs}
The calculation of Higgs branch Hilbert series from the balanced quivers draws on similar methods to those used in the calculation of the Higgs branches of the linear quivers for $A$ series nilpotent orbits, as elaborated in \cite{Hanany:2016gbz}. Pairs of bi-fundamental fields (and their complex conjugates) connect adjacent gauge nodes and, in addition, each non-trivial flavour node gives rise to a pair of bi-fundamental fields connected to its respective gauge node. The characters of all these fields are included in the PE symmetrisations. A HyperK\"ahler quotient is taken once for each gauge node, exactly as in the case of a linear quiver, and the Weyl integrations are then carried out over the gauge groups. The order of Weyl integrations can be chosen to facilitate computation.
The general Higgs branch formula for $A$ series Slodowy slices is:
\begin{equation}
\small
\label{eq:Aquivers5}
\begin{aligned}
g_{HS}^{Higgs[{\cal B}_A({{\bf{N}}_{\bf{f}}}(\rho ))]} =&\\
\oint\limits_{U\left( {{N_1}} \right) \otimes \ldots U\left( {{N_{k}}} \right)}{d\mu}{~}& \prod\limits_{n = 1}^{k} {\frac{{PE\left[ {{{[ {fund.} ]}_{U( {{N_n}} )}} \otimes {{[ {anti.} ]}_{U( {{N_{{f_n}}}} )}} + {{[ {anti.} ]}_{U( {{N_n}} )}} \otimes {{[ {fund.} ]}_{U( {{N_{{f_n}}}} )}},t} \right]}}{{PE\left[ {{{[ {adjoint} ]}_{U( {{N_n}} )}},{t^2}} \right]}}} \\
\times & \prod\limits_{n = 1}^{k - 1} {PE\left[ {{{[ {fund.} ]}_{U( {{N_n}} )}} \otimes {{[ {anti.} ]}_{U( {{N_{n + 1}}} )}} + {{[ {anti.} ]}_{U( {{N_n}} )}} \otimes {{[ {fund.} ]}_{U( {{N_{n + 1}}} )}},t} \right]},
\end{aligned}
\end{equation}
where $ {d\mu}$ is the Haar measure for the ${U\left( {{N_1}} \right) \otimes \ldots U\left( {{N_{k}}} \right)}$ product group. Note that the bifundamental fields are symmetrised with the fugacity $t$, while the HyperK\"ahler quotient (``HKQ") is symmetrised with $t^2$.
The Higgs branch formula can be simplified, by drawing on the dimensions of the bi-fundamentals and the gauge groups, to give a rule for the dimensions of an $A$ series Slodowy slice, and this can be simplified further by the balance condition \ref{eq:Aquivers2}:
\begin{equation}
\label{eq:Aquivers6}
\begin{aligned}
\left| g_{HS}^{Higgs[{\cal B}_A ({{\bf{N}}_{\bf{f}}}(\rho ))]} \right| & = 2 {{\bf{N}}_f}\left( \rho \right)\cdot {\bf{N}}\left( \rho \right) - {\bf{N}}\left( \rho \right) \cdot {\bf{A}} \cdot {\bf{N}}\left( \rho \right)\\
& = {{\bf{N}}_f}\left( \rho \right)\cdot {\bf{N}}\left( \rho \right).
\end{aligned}
\end{equation}
For further details of the calculation methodology, the reader is referred to the Plethystic Program literature. The same Hilbert series can in principle also be obtained algebraically by working with matrix generators and relations, as in section \ref{subsec:Agenerators}.
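The dimension formula \ref{eq:Aquivers6} can be checked numerically; a pure Python sketch (the example data are the ranks $(3,2,1)$ and flavours $(4,0,0)$ of a $T(SU(4))$-type quiver, for which the slice dimension should be $12$):

```python
def slice_dim(Nf, N):
    """|S| = 2 N_f.N - N.A.N, which reduces to N_f.N when balanced."""
    k = len(N)
    A = [[2 if i == j else -1 if abs(i - j) == 1 else 0
          for j in range(k)] for i in range(k)]
    NAN = sum(N[i] * A[i][j] * N[j] for i in range(k) for j in range(k))
    return 2 * sum(f * n for f, n in zip(Nf, N)) - NAN

# Ranks (3,2,1), flavours (4,0,0): slice dimension 12.
assert slice_dim([4, 0, 0], [3, 2, 1]) == 12
# Balanced, so this agrees with the simpler N_f.N:
assert slice_dim([4, 0, 0], [3, 2, 1]) == sum(
    f * n for f, n in zip([4, 0, 0], [3, 2, 1]))
```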
\subsection{Coulomb Branch Constructions}
\label{subsec:ACoulomb}
The monopole formula, which was introduced in \cite{Cremonesi:2013lqa}, provides a systematic method for the construction of the Coulomb branches of particular SUSY quiver theories, being ${\cal N}=4$ superconformal gauge theories in $2+1$ dimensions. The Coulomb branches of these theories are HyperK\"ahler manifolds. The formula draws upon a lattice of monopole charges, defined by the linked system of gauge and flavour nodes in a quiver diagram.
Each gauge node carries adjoint valued fields from the SUSY vector multiplet and the links between nodes correspond to complex bi-fundamental scalars within SUSY hypermultiplets. The monopole formula generates the Coulomb branch of the quiver by projecting charge configurations from the monopole lattice into the root space lattice of $G$, according to the monopole flux over each gauge node, under a grading determined by the conformal dimension of each overall monopole flux $q$.
The \textit{conformal dimension} (equivalent to R-charge or the highest weight of the $SU(2)_R$ global symmetry) of a monopole flux is given by applying the following general schema \cite{Gaiotto:2008ak} to the quiver diagram:
\begin{equation}
\label{eq:mon0}
\Delta \left( q \right) = \underbrace {\frac{1}{2}\sum\limits_{i = 1}^r {\sum\limits_{{\rho _i} \in R_i}^{} {\left| {{\rho _i}(q)} \right|} } }_{\scriptstyle contribution~of~N = 4 \atop
\scriptstyle hyper~multiplets} - \underbrace {\sum\limits_{\alpha \in \Phi_+ }^{}{\left| {\alpha (q)} \right|}}_{\scriptstyle contribution~of~N = 4\atop
\scriptstyle vector~multiplets}.
\end{equation}
The positive R-charge contribution in the first term comes from the bi-fundamental matter fields that link adjacent nodes in the quiver diagram. The second term captures a negative R-charge contribution from the vector multiplets, which arises due to symmetry breaking, whenever the monopole flux $q$ over a gauge node contains a number of different charges.
The calculation of Hilbert series for Coulomb branches of $A$ type quivers draws on the \textit{unitary} monopole formula, which follows from specialising \ref{eq:mon0} to unitary gauge groups. Each $U(N_i)$ gauge node carries a \textit{monopole flux} $q_i \equiv (q_{i,1}, \ldots ,q_{i,N_i})$ comprising one or more \textit{monopole charges} $q_{i,m}$. The fluxes are assigned the collective coordinate $q \equiv (q_1, \ldots, q_r)$. Each flavour node carries $N_{f_i}$ flavours of zero charge.\footnote{Flavour nodes may also carry non-zero charges, although these are not required by the Slodowy slice (or nilpotent orbit) constructions.}
With these specialisations, the conformal dimension $\Delta(q)$ associated with a flux $q$ yields the formula:
\begin{equation}
\label{eq:mon3}
\begin{aligned}
\Delta \left( {q } \right) = \frac{1}{2}\underbrace {\sum\limits_{j > i \ge 1}^r {\sum\limits_{m,n} {\left| {{q_{i,m}}{A_{ij}} - {q_{j,n}}{A_{ji}}} \right|} } }_{\text{gauge - gauge hypers}}
& + \frac{1}{2} \underbrace {\sum\limits_{i} {\sum\limits_{m} {N_{f_i} }{\left| {{q_{i,m}}} \right|} } }_{{\text{gauge - flavour hypers}}}\\
& - \underbrace {\sum\limits_{i = 1}^r {\sum\limits_{m > n}^{} {\left| {{q_{i,m}} - {q_{i,n}}} \right|} } }_{\text{gauge vector multiplets}},
\end{aligned}
\end{equation}
where (i) the summations are taken over all the monopole charges within the flux $q$ and (ii) the linking pattern between nodes is defined by the $A_{ij}$ off-diagonal $A_r$ Cartan matrix terms, which are only non-zero for linked nodes.\footnote{For theories with simply laced quivers of $ADE$ type, $A_{ij} = 0$ or $-1$, for $i \neq j$.}
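For a single $U(2)$ gauge node with $N_f$ fundamental flavours and no adjacent gauge nodes, \ref{eq:mon3} reduces to $\Delta(q_1,q_2)=\tfrac{N_f}{2}(|q_1|+|q_2|)-|q_1-q_2|$; a minimal Python sketch:

```python
def delta_u2(q1, q2, nf):
    """Conformal dimension of a U(2) flux (q1, q2) with nf fundamental
    flavours and no adjacent gauge nodes."""
    return nf * (abs(q1) + abs(q2)) / 2 - abs(q1 - q2)

assert delta_u2(1, 0, 4) == 1.0    # a "good" monopole operator
assert delta_u2(1, 1, 4) == 4.0    # equal charges: no vector-multiplet term
assert delta_u2(1, 0, 2) == 0.0    # zero dimension signals a "bad" theory
```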
With conformal dimension defined as above, the \textit{unitary monopole formula} for a Coulomb branch HS is given by the schema \cite{Cremonesi:2013lqa}:
\begin{equation}
\label{eq:mon1}
g_{HS}^{{\rm{Coulomb}}}\left( {z,t^2} \right) \equiv \sum\limits_q {} {}P_q^{U\left( N \right)}\left( t^2 \right){z^q}{~}{t^{2 \Delta \left( q \right)}},
\end{equation}
where:
\begin{enumerate}
\item The limits of summation for the monopole charges are $\infty \ge {q_{i,1}} \ge \ldots {q_{i,m}} \ge \ldots {q_{i,{N_i}}} \ge - \infty $ for $i=1,\ldots r$.
\item The monopole flux over the gauge nodes is counted by the fugacity $z \equiv (z_1, \ldots, z_r)$, where the $z_i$ are fugacities for the simple roots of $A_r$.
\item The monomial $z^q$ combines the monopole fluxes $q_i$ into total charges for each $z_i$ and is expanded as ${z^q} \equiv \prod\limits_{i = 1}^r {z_i^{\sum\limits_{m = 1}^{{N_i}} {{q_{i,m}}} }}$.
\item The term $P_{q}^{U\left( N \right)}$ encodes the degrees $d_{i,j}$ of the Casimirs of the residual $U(N)$ symmetries that remain at the gauge nodes under a monopole flux $q$:
\begin{equation}
\label{eq:mon2}
P_{q}^{U\left( N \right)}(t^2) \equiv \prod\limits_{i,j} {\frac{1}{{\left( {1 - {t^{{2 d_{i,j}(q)}}}} \right)}}} = \prod\limits_{i = 1}^r {\prod\limits_{j = 1}^{{N_i}} {\prod\limits_{k = 1}^{{\lambda _{ij}}\left( {{q_i}} \right)} {\frac{1}{{1 - {t^ {2k}}}}} } }.
\end{equation}
Recalling that a $U(N)$ group has Casimirs of degrees 1 through $N$, the residual symmetries can be determined as in \cite{Cremonesi:2013lqa}.\footnote{We construct a partition of $N_i$ for each node, which counts how many of the charges $q_{i,m}$ are equal, such that $\lambda(q_i)=(\lambda_{i,1},\ldots,\lambda_{i,N_i})$, where $\sum\limits_{m = 1}^{{N_i}} {{\lambda _{i,m}}} = {N_i}$ and ${\lambda _{i,m}} \ge {\lambda _{i,m+ 1}} \ge 0 $. The non-zero terms $\lambda_{i,j}$ in the partition give the ranks of the residual $U(N)$ symmetries associated with each node, so that it is a straightforward matter to compound the terms in the degrees of Casimirs. For example, if $q_{i,m}= q_{i,n}$ for all $m ,n$, then $\{d_{i,1},\ldots d_{i,N_i}\}=\{1,\ldots, N_i$\} and if $q_{i,m}\neq q_{i,n}$ for all $m, n$, then $\{d_{i,1},\ldots d_{i,N_i}\}=\{1,\ldots, 1\}$.} Alternatively, the residual symmetries for a flux $q_i$ can be fixed from the sub-group of $U(N_i)$ identified by the Dynkin diagram formed by those monopole charges that equal their successors $\{q_{i,m}: q_{i,m}=q_{i,m+1}\}$, (or equivalently, correspond to zero Dynkin labels).
\end{enumerate}
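The residual-symmetry factor \ref{eq:mon2} is determined entirely by the runs of equal charges within a flux; a minimal Python sketch returning the Casimir degrees $d_{i,j}$ for a single node:

```python
def casimir_degrees(q):
    """Degrees d_{i,j} of the Casimirs of the residual symmetry left by a
    flux q over one U(N) node: a run of m equal charges contributes the
    degrees 1..m (a residual U(m) factor)."""
    q = sorted(q, reverse=True)
    degrees, run, prev = [], 0, None
    for charge in q:
        run = run + 1 if charge == prev else 1
        degrees.append(run)
        prev = charge
    return degrees

assert casimir_degrees([2, 2, 0]) == [1, 2, 1]   # residual U(2) x U(1)
assert casimir_degrees([0, 0, 0]) == [1, 2, 3]   # unbroken U(3)
assert casimir_degrees([3, 1, 0]) == [1, 1, 1]   # broken to U(1)^3
```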
The exact calculation of a Coulomb branch HS can be carried out by evaluating \ref{eq:mon1} as a geometric series over each sub-lattice of monopole charges $q$, for which both conformal dimension $\Delta(q)$ and the symmetry factors $P_{q}^{U\left( N \right)}$ are linear (rather than piecewise or step) functions, and then summing the many resulting polynomial quotients. These sub-lattices of monopole charges form a hypersurface and care needs to be taken to avoid duplications at edges and intersections.
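For the simplest case, $U(1)$ with $N_f$ flavours (where $\Delta(q)=N_f|q|/2$ and $P_q=1/(1-t^2)$ for every flux), the sum \ref{eq:mon1} can be evaluated term by term; the following Python sketch reproduces the known Coulomb branch $\mathbb{C}^2/\mathbb{Z}_2$ for $N_f=2$, with Hilbert series $(1+t^2)/(1-t^2)^2$:

```python
def coulomb_hs_u1(nf, order):
    """Series coefficients (in t, up to t^order) of the Coulomb branch HS
    of U(1) with nf flavours: sum over fluxes q of t^(2 Delta(q)) P_q,
    with 2 Delta(q) = nf |q| and P_q = 1/(1 - t^2)."""
    coeff = [0] * (order + 1)
    for q in range(-order, order + 1):
        lead = nf * abs(q)                   # leading power t^(2 Delta)
        for k in range(lead, order + 1, 2):  # times 1 + t^2 + t^4 + ...
            coeff[k] += 1
    return coeff

hs = coulomb_hs_u1(2, 8)
assert hs[0::2] == [1, 3, 5, 7, 9]    # (1 + t^2)/(1 - t^2)^2 = C^2/Z_2
assert all(c == 0 for c in hs[1::2])
```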
\subsection{Hilbert Series}
\label{subsec:AHilbert}
The Hilbert series of the Slodowy slices of algebras $A_1$ to $A_4$, calculated as above, are summarised in table \ref{tab:A1}. Both the Higgs and Coulomb branch calculations lead to identical refined Hilbert series, up to choice of CSA coordinates or fugacities.
The Hilbert series are presented in terms of their generators, or $PL[HS]$, using character notation $[n_1,\ldots, n_r]$ to label $A_r$ irreps. Symmetrisation of these generators using the $PE$ recovers the refined Hilbert series. The underlying adjoint maps \ref{eq:SS9} can readily be recovered from the generators by inverting \ref{eq:SS11}. The HS can be unrefined by replacing representations of the global symmetry groups by their dimensions.
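The symmetrisation step can be checked directly; the following Python sketch implements the plethystic exponential as a formal power series (exact rational arithmetic) and confirms that the unrefined $A_1$ generators $3t^2-t^4$ reproduce the Hilbert series $(1-t^4)/(1-t^2)^3$, whose even coefficients are $1,3,5,7,\ldots$:

```python
from fractions import Fraction

def pe_series(f, order):
    """Plethystic exponential of f(t) = sum_m f[m] t^m (with f[0] = 0):
    PE[f] = exp( sum_{k>=1} f(t^k)/k ), expanded to order t^order."""
    g = [Fraction(0)] * (order + 1)
    for k in range(1, order + 1):
        for m, fm in enumerate(f):
            if fm and m * k <= order:
                g[m * k] += Fraction(fm, k)
    h = [Fraction(0)] * (order + 1)
    h[0] = Fraction(1)
    for n in range(1, order + 1):           # exp via the recursion h' = g' h
        h[n] = sum(m * g[m] * h[n - m] for m in range(1, n + 1)) / n
    return h

# Unrefined A_1 slice generators: [2]t^2 - t^4  ->  3 t^2 - t^4.
hs = pe_series([0, 0, 3, 0, -1], 8)
assert hs[0::2] == [1, 3, 5, 7, 9]          # matches (1-t^4)/(1-t^2)^3
assert all(c == 0 for c in hs[1::2])
```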
\begin{sidewaystable}[htp]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
${\begin{array}{c} \text{Nilpotent}\\\text{Orbit} \end{array}}$
&${\begin{array}{c} \text{Dimension}\\ {| {\cal S}_{\cal N,\rho}|} \end{array}}$
&${\begin{array}{c} \text{Symmetry} \\ F\end{array}}$
&$ \text{Generators of HS} \equiv \text{PL[HS]}$
&$ \text{Unrefined HS} $\\
\hline
$[0]$&$ 2 $&
$A_1$&
$ [2]t^2-t^4 $&$\frac {(1 - t^4)}{(1 - t^2)^3} $\\
$[2]$&$ 0 $& $\emptyset$ &
$ 0 $&$ 1 $\\
\hline
$[00]$&$ 6 $& $A_2$&
$ [1,1]t^2-t^4-t^6 $&$ \frac{ (1 - t^4)}{(1 - t^2)^3 } $\\
$[11]$&$ 2 $& $D_1$&
$ t^2+(1)_{q_1/q_2} t^3-t^6 $&$ \frac{ (1 - t^6)}{(1 - t^2) (1 - t^3)^2} $\\
$[22]$&$ 0 $& $\emptyset$&
$ 0 $&$ 1 $\\
\hline
$[000]$&$ 12 $& $A_3$&
$ [1,0,1]t^2-t^4-t^6-t^8 $&$ \frac{(1 - t^4) (1 - t^6) (1 - t^8)}{(1 - t^2)^{15}} $\\
$[101]$&$ 6 $& $A_1 \otimes D_1$&
$t^2+[2]t^2+[1](1)_{q_1/q_2} t^3-t^6-t^8 $&$ \frac {(1 - t^6) (1 - t^8)}{(1 - t^2)^4 (1 - t^3)^4} $\\
$[020]$&$ 4 $& $A_1$&
$ [2]t^2+[2]t^4-t^6-t^8 $&$\frac {(1 - t^6) (1 - t^8)}{(1 - t^2)^3 (1 - t^4)^3} $\\
$[202]$&$ 2 $& $D_1$&
$t^2+(1)_{q_1/q_3} t^4-t^8 $&$ \frac {(1 - t^8)}{(1 - t^2) (1 - t^4)^2} $\\
$[222]$&$ 0 $& $\emptyset$&
$ 0 $&$ 1 $\\
\hline
$[0000]$&$ 20 $& $A_4$&
$ [1,0,0,1]t^2-t^4-t^6-t^8-t^{10} $&$\frac {(1 - t^4) (1 - t^6) (1 - t^8) (1 - t^{10})}{(1 - t^2)^{24}} $\\
$[1001]$&$ 12 $& $A_2\otimes U(1)$&
$t^2+[1,1]t^2+[1,0]{q_1/q_2} t^3+[0,1]{q_2/q_1} t^3-t^6-t^8-t^{10} $&$\frac{(1 - t^6) (1 - t^8) (1 - t^{10})}{(1 - t^2)^9 (1 - t^3)^6} $\\
$[0110]$&$ 8 $& $A_1 \otimes D_1$&
$t^2+ [2]t^2+[2]t^4+[1](1)_{q_1/q_2} t^3-t^6-t^8-t^{10} $&$\frac{(1 - t^6) (1 - t^8) (1 - t^{10})}{(1 - t^2)^4 (1 - t^3)^4 (1 - t^4)^3} $\\
$[2002]$&$ 6 $& $A_1 \otimes D_1$&
$t^2+[2]t^2+[1](1)_{q_1/q_3} t^4-t^8-t^{10} $&$\frac {(1 - t^8) (1 - t^{10})}{(1 - t^2)^4 (1 - t^4)^4} $\\
$[1111]$&$ 4 $& $D_1$&
$t^2+(1)_{q_2/q_3} t^3+t^4+(1)_{q_2/q_3} t^5-t^8-t^{10} $&$\frac {(1 - t^8) (1 - t^{10})}{(1 - t^2) (1 - t^3)^2 (1 - t^4) (1 - t^5)^2} $\\
$[2112]$&$ 2 $& $D_1$&
$t^2+(1)_{q_1/q_4} t^5-t^{10} $&$ \frac{(1 - t^{10})}{(1 - t^2) (1 - t^5)^2} $\\
$[2222]$&$ 0 $& $\emptyset$&
$ 0 $&$ 1 $\\
\hline
\end{tabular}
\text{N.B. $(n)_q$ denotes the character of the $D_1 \equiv SO(2)$ reducible representation $q^n+q^{-n}$ of $U(1)$.}
\caption{Hilbert Series for Slodowy Slices of $A_1$, $A_2$, $A_3$ and $A_4$.}
\label{tab:A1}
\end{sidewaystable}
\begin{sidewaystable}[htp]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
${\begin{array}{c} \text{Nilpotent}\\\text{Orbit} \end{array}}$
&${\begin{array}{c} \text{Dimension}\\ {| {\cal S}_{\cal N,\rho}|} \end{array}}$
&${\begin{array}{c} \text{Symmetry} \\ F\end{array}}$
&$ \text{Generators of HS} \equiv \text{PL[HS]}$
&$ \text{Unrefined HS} $\\
\hline
$[00000]$&$ 30 $& $A_5$&
$ [1,0,0,0,1]t^2 - t^4-t^6 -t^8-t^{10} - t^{12}$
&$\frac{(1-t^4) (1-t^6) (1-t^8) (1-t^{10}) (1-t^{12})}{(1-t^2)^{35}} $\\
\hline
$[10001]$&$ 20 $& $A_3\otimes U(1)$&
$\begin{array}{c}t^2+[1,0,1]t^2+([1,0,0]q_1/q_2+[0,0,1]q_2/q_1)t^3\\
-t^6-t^8-t^{10}-t^{12} \end{array}$
&$\frac{(1-t^6) (1-t^8) (1-t^{10}) (1-t^{12})}{(1-t^2)^{16} (1-t^3)^8} $\\
\hline
$[01010]$&$ 14 $& $A_1 \otimes A_1 \otimes D_1$&
$\begin{array}{c}t^2+[2][0]t^2+[0][2]t^2+[1][1](1)_{q_1/q_2}t^3\\+[2][0]t^4-t^6-t^8-t^{10}-t^{12}\end{array}$
&$\frac{(1-t^6) (1-t^8) (1-t^{10}) (1-t^{12})}{(1-t^2)^7 (1-t^3)^8 (1-t^4)^3} $\\
\hline
$[00200]$&$ 12 $& $A_2$&
$[1,1]t^2+[1,1]t^4-t^6-t^8-t^{10}-t^{12} $
&$\frac{ (1-t^6 ) (1-t^8 ) (1-t^{10} ) (1-t^{12} )}{ (1-t^2 )^8 (1-t^4 )^8} $\\
\hline
$[20002]$&$ 12$& $A_2 \otimes U(1)$&
$\begin{array}{c}t^2+[1,1]t^2+[1,0] q_1/q_3 t^4+[0,1]q_3/q_1 t^4\\-t^8-t^{10}-t^{12} \end{array} $
&$\frac{ (1-t^8 ) (1-t^{10} ) (1-t^{12} )}{ (1-t^2 )^9 (1-t^4 )^6} $\\
\hline
$[11011]$&$ 8$& $U(1) \otimes U(1) $&
$\begin{array}{c} 2t^2 + ((1)_{q_1/q_2} + (1)_{q_2/q_3})t^3 + t^4 + (1)_{q_1/q_3}t^4 \\+ (1)_{q_2/q_3}t^5 - t^8 - t^{10} - t^{12} \end{array} $
&$\frac{ (1-t^8 ) (1-t^{10} ) (1-t^{12} )}{ (1-t^2 )^2 (1-t^3 )^4 (1-t^4 )^3 (1-t^5 )^2} $\\
\hline
$[02020]$&$ 6 $& $A_1$&
$[2]t^2+[2]t^4+[2]t^6-t^8-t^{10}-t^{12} $
&$\frac{ (1-t^8 ) (1-t^{10} ) (1-t^{12} )}{ (1-t^2 )^3 (1-t^4 )^3 (1-t^6 )^3} $\\
\hline
$[21012]$&$ 6 $& $A_1 \otimes D_1 $&
$t^2+[2]t^2+[1](1)_{q_1/q_4} t^5-t^{10}-t^{12} $
&$\frac{ (1-t^{10} ) (1-t^{12} )}{ (1-t^2 )^4 (1-t^5 )^4} $\\
\hline
$[20202]$&$ 4 $& $D_1$&
$t^2+t^4+(1)_{q_2/q_4}t^4+(1)_{q_2/q_4} t^6-t^{10}-t^{12} $
&$\frac{ (1-t^{10} ) (1-t^{12} )}{ (1-t^2 ) (1-t^4 )^3 (1-t^6 )^2} $\\
\hline
$[22022]$&$ 2 $& $D_1$&
$t^2+(1)_{q_1/q_5} t^6-t^{12}$
&$\frac{1-t^{12}}{ (1-t^2 ) (1-t^6 )^2} $\\
\hline
$[22222]$&$ 0 $& $\emptyset$& $ 0 $&$ 1 $\\
\hline
\end{tabular}
\text{N.B. $(n)_q$ denotes the character of the $D_1 \equiv SO(2)$ reducible representation $q^n+q^{-n}$ of $U(1)$.}
\caption{Hilbert Series for Slodowy Slices of $A_5$.}
\label{tab:A5}
\end{sidewaystable}
Several observations can be made about the Hilbert series.
\begin{enumerate}
\item As expected, (i) the Slodowy slice to the trivial nilpotent orbit $\mathcal{S}_{\mathcal{N},(1^N)}$ has the same Hilbert series as the nilpotent cone, (ii) the slice to the sub-regular orbit $\mathcal {S}_{\mathcal {N},(N-1,1)}$ has the Hilbert series of a Kleinian singularity of type $\hat A_{N-1}$, and (iii) the slice to the maximal nilpotent orbit $\mathcal {S}_{\mathcal {N},(N)}$ is trivial.
\item As expected, the Slodowy slices $\mathcal S_{\mathcal N,\rho}$ are all complete intersections.
\item The global symmetry groups of the Slodowy slice generators include mixed $SU$ and unitary groups, and descend in rank as the dimension of the Slodowy slice reduces. Sometimes different Slodowy slices share the same symmetry group, with inequivalent embeddings of $F$ into $G$.
\item Complex representations always appear combined with their conjugates to give real representations.
\item The adjoint maps often contain singlet generators at even powers of $t$, up to twice the degree of the highest Casimir of $\mathfrak g$; these generators may be cancelled by one or more Casimir relations.
\end{enumerate}
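The second observation can be checked dimensionally: for a complete intersection, the dimension of the slice equals the number of generators minus the number of relations. A minimal Python sketch, with counts transcribed by hand from table \ref{tab:A1}:

```python
# (generators, relations, dimension) for sample rows of the table,
# counting characters by their dimensions
rows = {
    '[0]    of A1': (3, 1, 2),   # [2]t^2 - t^4
    '[101]  of A3': (8, 2, 6),   # (1+[2])t^2 + [1](1)t^3 - t^6 - t^8
    '[020]  of A3': (6, 2, 4),   # [2]t^2 + [2]t^4 - t^6 - t^8
    '[1111] of A4': (6, 2, 4),   # t^2 + (1)t^3 + t^4 + (1)t^5 - t^8 - t^10
}
for name, (gens, rels, dim) in rows.items():
    # complete intersection: dim = #generators - #relations
    assert gens - rels == dim, name
```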
Many of these observations have counterparts amongst the Slodowy slices of $BCD$ series, although these also raise several new intricacies, as will be seen in section \ref{sec:BCDSeries}.
\subsection{Matrix Generators for Unitary Quivers}
\label{subsec:Agenerators}
A Hilbert series over the class functions of a Classical group can be viewed in terms of matrix generators (or operators), and this perspective makes it possible to identify the generators of a Slodowy slice directly from the partition data or its Higgs branch quiver.
\subsubsection{Fundamental Decomposition}
\label{sec:AFD}
From \ref{eq:Aquivers3}, it follows that the character of the fundamental representation of $G$ decomposes into fundamental representations of a unitary product group:
\begin{equation}
\label{eq:Agens1}
\begin{aligned}
\rho :\chi _{fund.}^G \to { \bigoplus \limits_{[n]}}{[n]_{\rho}}{~}\chi _{fund.}^{SU({N_{{f_{n+1}}}})}{~}{q_{n+1} },
\end{aligned}
\end{equation}
where ${[n]_{\rho}}$ are irreps of the $SU(2)$ associated with the nilpotent orbit embedding $\rho$, and the $U(1)$ charges $q_i$ on the flavour nodes satisfy the overall gauge condition $\prod\limits_{i=1}^k {q_i}^{\,i\, N_{f_i}} = 1$.\footnote{This corresponds to viewing the fields in a centre of mass frame.} Once this decomposition has been identified, the mapping of the adjoint of $G$ into matrix generators follows, by taking the product of the fundamental and antifundamental characters and eliminating a singlet. This can be checked against the adjoint partition $\rho :\chi _{adjoint}^G$.
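For instance, for the embedding $\rho=(2,1^2)$ into $SU(4)$ used in Example 1 below, the consistency of the fundamental and adjoint decompositions can be verified at the level of characters. A short {\it sympy} sketch, with $a$ and $b$ our labels for the $SU(2)_{\rho}$ and $SU(2)$ fugacities:

```python
import sympy as sp

a, b, q = sp.symbols('a b q', positive=True)

def ch1(x): return x + 1/x              # character of [1] of SU(2)
def ch2(x): return x**2 + 1 + x**-2     # character of [2] of SU(2)

# fundamental of SU(4) under SU(2)_rho x SU(2) x U(1)
fund = ch1(a)*sp.sqrt(q) + ch1(b)/sp.sqrt(q)
antifund = fund.subs({a: 1/a, b: 1/b, q: 1/q})

# adjoint = fundamental x antifundamental minus a singlet
adjoint = sp.expand(fund*antifund - 1)
expected = sp.expand((ch2(b) + 1) + ch1(a)*ch1(b)*(q + 1/q) + ch2(a))
assert sp.simplify(adjoint - expected) == 0
assert adjoint.subs({a: 1, b: 1, q: 1}) == 15   # dim of the SU(4) adjoint
```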
\FloatBarrier
\subsubsection{Generators from Quiver Paths}
\label{sec:OfQA}
Alternatively the operators can be read from a quiver of type ${B_A}({{\bf{N}}_{\bf{f}}}(\rho ))$, following the prescription:
\begin{enumerate}
\item Draw the chiral multiplets explicitly as arrows in the quiver:
\begin{equation}
\node{\fdu{}{N_{f_1}}}{\,\,N_1\,\,}{\scriptstyle\rightleftarrows}\node{\fdu{}{N_{f_2}}}{\,\,N_2\,\,}{\scriptstyle\rightleftarrows}\node{\fdu{}{N_{f_3}}}{\,\,N_3\,\,}{\scriptstyle\rightleftarrows}\cdots{\scriptstyle\rightleftarrows}\node{\fdu{}{N_{f_k}}}{\,\,N_k\,\,}
\end{equation}
\item Every path in the quiver that starts and ends on a flavor node corresponds to an operator in the chiral ring of the Higgs branch.
\item There is a one to one correspondence between paths that appear as generators in the PL[HS] of the Higgs branch and the paths of the type $\mathcal{P}_{ij}(a)$, defined as below.
\item The operator $\mathcal P_{ij}(a)$ transforms under the fundamental representation of $U(N_{f_i})$ and the antifundamental representation of $U(N_{f_j})$ and sits on an irrep of $SU(2)_R$ with spin $s=A/2$, where $A$ is the number of arrows in the path that defines $\mathcal P_{ij}(a)$. This means that it appears in the plethystic logarithm of the refined Hilbert series as the character of the corresponding representation multiplied by the fugacity $t^{A}$.
\item Therefore, there is a one to one correspondence between operators $\mathcal P_{ij}(a)$ and irreducible representations in the decomposition of the adjoint representation of $A_k$ in \ref{eq:SS9}.
\end{enumerate}
\begin{definition} {\it $\mathbf{\mathcal P_{ij}(a)}$: Let $\mathcal P_{ij}(a)$ be an operator with $i,j \in \{1,2,\dots , k \}$ and $a \in \{1,2,\dots, \min(i,j)\}$. $\mathcal{P}_{ij}(1)$ is defined as the operator formed by products of the operators represented by arrows in the shortest possible path that starts at node $N_{f_i}$ and ends at node $N_{f_j}$ (note that $i$ and $j$ may be equal). $\mathcal{P}_{ij}(2)$ represents a path that differs from $\mathcal{P}_{ij}(1)$ only in that it has been extended to incorporate the arrows between the gauge nodes $N_{\min(i,j)}$ and $N_{\min(i,j)-1}$. $\mathcal{P}_{ij}(3)$ differs from $\mathcal{P}_{ij}(2)$ in that it also includes the arrows between the gauge nodes $N_{\min(i,j)-1}$ and $N_{\min(i,j)-2}$. In this way $\mathcal P_{ij}(a)$ is defined recursively as an extension of $\mathcal P_{ij}(a-1)$.}
\end{definition}
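This prescription fixes the $t$-degree of every operator: the shortest flavour-to-flavour path $\mathcal P_{ij}(1)$ contains $|i-j|+2$ arrows, and each extension $a \to a+1$ adds two more. A minimal sketch (the function name is ours), checked against the paths appearing in the examples below:

```python
def path_degree(i, j, a):
    """Number of arrows in P_ij(a), i.e. the exponent of t carried by the
    operator: |i-j|+2 for the shortest path, plus 2 per extension."""
    return abs(i - j) + 2 * a

# paths for the slice to A[101]
assert path_degree(1, 1, 1) == 2    # P_11(1): [2] t^2
assert path_degree(1, 2, 1) == 3    # P_12(1): [1] q t^3
assert path_degree(2, 2, 2) == 4    # P_22(2): t^4

# paths for the slice to A[1111]
assert path_degree(2, 3, 2) == 5    # P_23(2): (q_2/q_3) t^5
assert path_degree(3, 3, 3) == 6    # P_33(3): t^6
```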
\paragraph {Example 1.} Let us start with the balanced $A_3$ quiver based on the fundamental partition $\rho=(2,1^2)$, whose Higgs branch is the Slodowy slice $\mathcal S_{{\cal N},(2,1^2)}$ to the nilpotent orbit $A[101]$. The quiver is:
\begin{equation} \label{eq:Agens2}
{B_A}({{\bf{N}}_{\bf{f}}}(2,1^2 )) ={~} \node{\wver{}{\,\,2}}{2}- \node{\wver{}{\,\,1}}{2}- \node{}{1}.
\end{equation}
From table \ref{tab:A1}, the Hilbert series is:
\begin{equation}\label{eq:Agens3}
g_{HS}^{Higgs[{B_A}({{\bf{N}}_{\bf{f}}}(2,{1^2})) ] } = PE[t^2+[2]t^2+[1](q+1/q)t^3-t^6-t^8].
\end{equation}
To obtain this using the prescription in section \ref{sec:AFD}, we first identify the fugacity map for the group decomposition using \ref{eq:Agens1}:
\begin{equation} \label{eq:Agens4}
\begin{aligned}
SU(4) & \rightarrow SU(2)_{\rho} \otimes SU(2)\otimes U(1),\\
{[1,0,0]} & \rightarrow [1]_{\rho} q^{1/2} + [1] q^{-1/2},\\
{[0,0,1]} & \rightarrow [1]_{\rho} q^{-1/2} +[1] q^{1/2},\\
{[1,0,1]} & \rightarrow ([2]+1)[0]_{\rho} + [1](q+1/q) [1]_{\rho}+[2]_{\rho}.
\end{aligned}
\end{equation}
Next the irreps $[n]_{\rho}$ of $SU(2)_{\rho}$ are mapped to the fugacity $t^{n+2}$, giving the generators:
\begin{equation} \label{eq:Agens5}
\begin{aligned}
{[1,0,1]} & \rightarrow [2] t^2+ t^2+ [1](q+1/q) t^3 +t^4.
\end{aligned}
\end{equation}
Subtracting the relations $-t^4-t^6-t^8$, corresponding to Casimirs of $A_3$, we obtain:
\begin{equation} \label{eq:Agens6}
PL[g_{HS}^{Higgs[{{\cal B}_{A} ({ \bf N_f} (2,1^2) ) }]}]=[2]t^2+t^2+[1](q+1/q)t^3-t^6-t^8.
\end{equation}
The generators in \ref{eq:Agens6} can be understood as operators from paths in the quiver \ref{eq:Agens2}:
\begin{table}[htp]
\centering
\begin{tabular}{|c|c|c|}
\hline
$ {\cal P}_{ij}(a) $& Quiver Path & Generator \\
\hline
$ {\cal P}_{1,1}(1) $&$ \ \ \ \node {\fdu {}{\,\,2}}{2}\ \node{}{2} \ \ \node{}{1} $&$ [2] t^2 $\\
\hline
$ {\cal P}_{2,2}(1) $&$ \ \ \ \node{}{2}\ \node {\fdu {}{\,\,1}}{2}\ \ \node{}{1} $&$ t^2 $\\
\hline
$ {\cal P}_{1,2}(1) $&$ \ \ \ \node{\fd{}{\,\,2}}{2} {\scriptstyle\rightarrow} \node{\fu {}{\,\,1}}{2}\ \node{}{1} $&$ [1] q t^3 $\\
\hline
$ {\cal P}_{2,1}(1) $&$ \ \ \ \node{\fu{}{\,\,2}}{2} {\scriptstyle\leftarrow} \node{\fd {}{\,\,1}}{2}\ \node{}{1} $&$ [1] {q^{-1}} t^3 $\\
\hline
$ {\cal P}_{2,2}(2) $&$ \ \ \ \node{}{2} {\scriptstyle\rightleftarrows} \node{\fdu {}{\,\,1}}{2}\ \node{}{1} $&$ t^4 $\\
\hline
\end{tabular}
\caption{Generators for Slodowy Slice to $A[101]$.}
\label{tab:A2}
\end{table}
%
The irrep of each generator corresponds with the flavor nodes where the path starts and ends. The $U(1)$ fugacity $q \equiv q_1/q_2$. The exponent of the fugacity $t$ corresponds to the length of the path.
\paragraph {Example 2.} Now consider the balanced quiver based on the $A_4$ partition $(3,2)$, whose Higgs branch is the Slodowy slice $\mathcal S_{{\cal N},(3,2)}$ to the nilpotent orbit $A[1111]$:
\begin{equation}
{{\cal B}_{A}{({\bf N_f}(3,2)})} ={~} \node{}{1}- \node{\wver{}{\,\,1}}{2}- \node{\wver{}{\,\,1}}{2}- \node{}{1}.
\end{equation}
The group decomposition is:
\begin{equation} \label{eq:ex2}
SU(5) \rightarrow SU(2)_{\rho} \otimes S(U(1)\otimes U(1)).
\end{equation}
The paths in the quiver can be used to predict the generators in table \ref{tab:A3}.
\begin{table}[htp]
\centering
\begin{tabular}{|c|c|c|}
\hline
$ {\cal P}_{ij}(a) $& Quiver Path & Generator \\
\hline
$ {\cal P}_{2,2}(1) $&$ \node{}{1}\ \ \node{\fdu{}{\,\,1}}{2}\ \ \node{}{2}\ \ \node{}{1} $&$ t^2 $\\
\hline
$ {\cal P}_{3,3}(1) $&$ \node{}{1}\ \ \node{}{2}\ \ \node{\fdu{}{\,\,1}}{2}\ \ \node{}{1} $&$ t^2 $\\
\hline
$ {\cal P}_{2,3}(1) $&$ \node{}{1}\ \ \node{\fd{}{\,\,1}}{2}{\scriptstyle\rightarrow} \node{\fu{}{\,\,1}}{2}\ \ \node{}{1} $&$ {q_2/q_3} t^3 $\\
\hline
$ {\cal P}_{3,2}(1) $&$ \node{}{1}\ \ \node{\fu{}{\,\,1}}{2}{\scriptstyle\leftarrow} \node{\fd{}{\,\,1}}{2}\ \ \node{}{1} $&$ {q_3/q_2} t^3 $\\
\hline
$ {\cal P}_{2,2}(2) $&$ \node{}{1}{\scriptstyle\rightleftarrows} \node{\fdu{}{\,\,1}}{2}\ \ \node{}{2}\ \ \node{}{1} $&$ t^4 $\\
\hline
$ {\cal P}_{3,3}(2) $&$ \node{}{1}\ \ \node{}{2}{\scriptstyle\rightleftarrows} \node{\fdu{}{\,\,1}}{2}\ \ \node{}{1} $&$ t^4 $\\
\hline
$ {\cal P}_{2,3}(2) $&$ \node{}{1}{\scriptstyle\rightleftarrows} \node{\fd{}{\,\,1}}{2}{\scriptstyle\rightarrow} \node{\fu{}{\,\,1}}{2}\ \ \node{}{1} $&$ {q_2/q_3} t^5 $\\
\hline
$ {\cal P}_{3,2}(2) $&$\node{}{1}{\scriptstyle\leftrightarrows} \node{\fu{}{\,\,1}}{2}{\scriptstyle\leftarrow} \node{\fd{}{\,\,1}}{2}\ \ \node{}{1} $&$ {q_3/q_2} t^5 $\\
\hline
$ {\cal P}_{3,3}(3) $&$\node{}{1}{\scriptstyle\rightleftarrows} \node{}{2}{\scriptstyle\rightleftarrows} \node{\fdu{}{\,\,1}}{2}\ \ \node{}{1} $&$ t^6 $\\
\hline
\end{tabular}
\caption{Generators for Slodowy Slice to $A[1111]$.}
\label{tab:A3}
\end{table}
Subtracting the relations $-\sum_{i=1}^{5}t^{2i}$, where $-t^2$ implements the special condition in \ref{eq:ex2} that eliminates one of the $U(1)$ symmetries and $-t^4-t^6-t^8-t^{10}$ correspond to the Casimirs of $A_4$, and writing $q \equiv {q_2/q_3}$, gives the expected $PL[HS]$:
\begin{equation}
PL[g_{HS}^{Higgs[{{\cal B}_{A} ({ \bf N_f} (3,2))}]}]=t^2+\left( q+\frac{1}{q}\right) t^3+t^4+\left( q+\frac{1}{q}\right) t^5-t^8-t^{10},
\end{equation}
in accordance with table \ref{tab:A1}.
\subsubsection{Matrices and Relations}
\label{Arelations}
In this section we offer a reinterpretation of the previous results for Slodowy slices $\mathcal S_{\mathcal N,\rho}$ as sets of matrices that satisfy specific relations. The aim of this analysis is to build a bridge between the algebraic definition of the nilpotent cone $\mathcal S_{\mathcal N,(1^N)} = \mathcal N$ and that of the Kleinian singularity $\mathcal S_{\mathcal N,(N-1,1)} = \mathbb{C}^2/\mathbb{Z}_N$.
First, let us remember that the Kleinian singularity $\mathcal S_{\mathcal N,(N-1,1)} = \mathbb{C}^2/\mathbb{Z}_N$ can be defined as the set of points parametrized by three complex variables $x,y,z\in \mathbb{C}$, subject to one relation:
\begin{equation}
x^N=yz.
\end{equation}
Secondly, the nilpotent cone $\mathcal S_{\mathcal N,(1^N)} = \mathcal N$ can be defined as the set of complex variables arranged in an $N\times N$ matrix $M\in \mathbb{C}^{N\times N}$, subject to the following relations:
\begin{equation}\label{eq:nilpCone}
\tr(M^p)=0\ \ \ \forall p =1,2,\dots, N.
\end{equation}
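These trace relations are easy to check numerically: any strictly upper triangular matrix is nilpotent and so furnishes a point of $\mathcal N$. A quick {\it numpy} sanity check:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# a random strictly upper triangular (hence nilpotent) matrix
M = np.triu(rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N)), k=1)

# the trace relations of the nilpotent cone are satisfied ...
for p in range(1, N + 1):
    assert abs(np.trace(np.linalg.matrix_power(M, p))) < 1e-12

# ... and, by Newton's identities, they force nilpotency: M^N = 0
assert np.allclose(np.linalg.matrix_power(M, N), 0)
```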
We want to show that a Slodowy slice $\mathcal S_{\mathcal N,\rho}$ can be viewed as an intermediate case between these two descriptions. In order to do this we build examples of varieties described by sets of complex matrices, choose relations among them and compute the (unrefined) Hilbert series of their coordinate rings, utilizing the algebraic software {\it Macaulay2} \cite{M2}. These Hilbert series can be checked to be the same as those in table \ref{tab:A1}.
The specific matrices that generate the coordinate rings are chosen according to the operators $\mathcal P_{ij}(a)$ found in the balanced quivers $\mathcal B_A(\mathbf N_f(\rho))$. For example, let us study the balanced quiver whose Higgs branch is the Slodowy slice $\mathcal S_{\mathcal N,(2,1^3)}$:
\begin{equation}
\mathcal B_A(\mathbf N_f(2,1^3)) = \ \node{\wver{}{\,\, 3}}{3} - \node{\wver{}{\,\, 1}}{3} - \node{}{2} - \node{}{1}.
\end{equation}
One can assemble the generators $\mathcal{P}_{ij}(a)$ into three different complex matrices $M$, $A$ and $B$ of dimensions $3\times 3$, $1\times 3$ and $3\times 1$ respectively. Let us show how this can be done explicitly. There are five paths of the form $\mathcal{P}_{ij}(a)$: $\mathcal P_{11}(1)$, $\mathcal P_{22}(1)$, $\mathcal P_{22}(2)$, $\mathcal P_{12}(1)$, $\mathcal P_{21}(1)$. Of these five sets of operators, $\mathcal P_{22}(1)$ can be removed by the relation $-t^2$ that removes the centre of mass, and $\mathcal P_{22}(2)$ by the first Casimir invariant relation $-t^4$. This means that there is a remaining set of generators transforming in the following irreps:
\begin{equation}
\begin{aligned}
\mathcal P_{11}(1) &\rightarrow \ ([1,1] + [0,0])t^2,\\
\mathcal P_{12}(1) &\rightarrow \ ([1,0]q)t^3,\\
\mathcal P_{21}(1) &\rightarrow \ ([0,1]\frac{1}{q})t^3.\\
\end{aligned}
\end{equation}
One can now assemble these generators in three complex matrices that transform in the usual way under the global symmetry $U(3)$:
\begin{equation}
\begin{aligned}
([1,1] + [0,0])t^2 &\rightarrow \ M_{3\times 3},\\
([1,0]q)t^3 &\rightarrow \ A_{1\times 3},\\
([0,1]\frac{1}{q})t^3 &\rightarrow \ B_{3\times 1}.\\
\end{aligned}
\end{equation}
The chiral ring is then parametrized by the set of all matrices $\{M,A,B\}$, subject to one relation at order $t^6$, another relation at order $t^8$ and a final relation at order $t^{10}$. These relations are invariant under the global $U(3)$ symmetry. One can choose the following set of relations:
\begin{align}
\tr(M^3)&=AB,\\
\tr(M^4)&=AMB,\\
\tr(M^5)&= AM^2 B.
\end{align}
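The invariance of these relations under the global $U(3)$ symmetry, acting as $M \to gMg^{\dagger}$, $A \to Ag^{\dagger}$, $B \to gB$, can be spot-checked numerically with a random unitary matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j*rng.standard_normal((3, 3))
A = rng.standard_normal((1, 3)) + 1j*rng.standard_normal((1, 3))
B = rng.standard_normal((3, 1)) + 1j*rng.standard_normal((3, 1))

# a random unitary g in U(3) from a QR decomposition
g, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j*rng.standard_normal((3, 3)))
Mg, Ag, Bg = g @ M @ g.conj().T, A @ g.conj().T, g @ B

# each relation tr(M^k) = A M^(k-3) B is invariant under the U(3) action
for k in (3, 4, 5):
    before = np.trace(np.linalg.matrix_power(M, k)) \
        - (A @ np.linalg.matrix_power(M, k - 3) @ B)[0, 0]
    after = np.trace(np.linalg.matrix_power(Mg, k)) \
        - (Ag @ np.linalg.matrix_power(Mg, k - 3) @ Bg)[0, 0]
    assert abs(before - after) < 1e-9
```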
Note that these look like corrections to the equations of the nilpotent cone \ref{eq:nilpCone}. The Hilbert series of the coordinate ring is then computed using {\it Macaulay2} to be:
\begin{equation}
HS=\frac{(1-t^6)(1-t^8)(1-t^{10})}{(1-t^2)^9(1-t^3)^6}.
\end{equation}
This is the same Hilbert series as that of the variety $\slod{2,1^3}$ computed in table \ref{tab:A1}.
In tables \ref{tab:A456} and \ref{tab:A7} we provide a set of algebraic varieties described by matrices such that their HS have been computed to be identical to those of the corresponding Slodowy slices $\mathcal S_{\mathcal N,\rho}$. Note that we rewrite the Kleinian singularity in terms of $1\times 1$ matrices, to clarify the connection with the algebraic description of the other Slodowy slices.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Orbit & Partition & Dimension & Generators; Degree & Relations \\
\hline
\multirow{4}{*}{[0]}&\multirow{4}{*}{$(1^2)$}&\multirow{4}{*}{2}& $\begin{array}{rc} M_{2\times 2}; & 2 \\ \end{array}$&$\begin{array}{rl}tr(M) &= 0 \\ tr(M^2) &=0 \end{array}$\\ \cline{4-5}
&&&$\begin{array}{rc} M_{1\times 1}; & 2 \\ {A}_{1\times 1} ; & 2 \\ {B}_{1\times 1} ; & 2\end{array}$&$\begin{array}{rl}tr(M^2) &= {AB} \end{array}$\\ \hline
[2] & $(2)$ & 0 & - & - \\
\hline
\hline
[00] & $(1^3)$ & 6 & $\begin{array}{rc} M_{3\times 3}; & 2\end{array}$ & $\begin{array}{rl}tr(M) &=0 \\ tr(M^2) &=0 \\ tr(M^3) &=0 \end{array}$ \\ \hline
[11] & $(2,1)$ & 2 & $\begin{array}{rc}M_{1\times 1} ; & 2 \\ {A}_{1\times 1} ; & 3 \\ {B}_{1\times 1} ; & 3 \\\end{array}$ & $\begin{array}{rl} tr(M^3) & ={AB} \\ \end{array}$ \\ \hline
[22] & $(3)$ & 0 & - & - \\
\hline
\hline
[000] & $(1^4)$ & 12 & $\begin{array}{rc} M_{4\times 4}; & 2\end{array}$ & $\begin{array}{rl}tr(M) &=0 \\ tr(M^2) &=0 \\ tr(M^3) &=0 \\ tr(M^4) &=0 \end{array}$ \\ \hline
[101] & $(2,1^2)$ & 6 & $\begin{array}{rc} M_{2\times 2}; & 2 \\ {A}_{1\times 2} ; & 3 \\ B _{2\times 1} ; & 3\\ \end{array}$ & $\begin{array}{rl} tr(M^3) &= AB \\ tr(M^4) &=AMB \end{array} $ \\ \hline
[020] & $(2^2)$ & 4 & $\begin{array}{rc}M_{2\times 2} ; & 2 \\ N_{2\times 2} ; & 4 \\ \end{array}$ & $\begin{array}{rl} tr(M) &= 0 \\tr(N) &= 0 \\tr(M^3) &= tr(MN) \\ tr(M^4)&=tr(N^2) \end{array}$ \\ \hline
[202] & $(3,1)$ & 2 & $\begin{array}{rc}M_{1\times 1} ; & 2 \\ {A}_{1\times 1} ; & 4 \\ {B}_{1\times 1} ; & 4 \\\end{array}$ & $\begin{array}{rl} tr(M^4) & ={AB} \\ \end{array}$ \\ \hline
[222] & $(4)$ & 0 & - & - \\ \hline
\end{tabular}
\caption[$A_1$, $A_2$ and $A_3$ Slodowy Slice Varieties]{$A_1$, $A_2$ and $A_3$ varieties, generated by complex matrices $M$, $A$ and $B$ and their relations, with Hilbert series calculated by {\it Macaulay2} to match Slodowy slices $\mathcal S_{\mathcal N,\rho}$. Note that $\mathcal S_{\mathcal N,(1^2)}$ has two alternative descriptions, one as the nilpotent cone and one as the subregular Kleinian singularity.}
\label{tab:A456}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Orbit & Partition & Dimension & Generators; Degree & Relations \\ \hline
[0000] & $(1^5)$ & 20 & $\begin{array}{rc} M_{5\times 5}; & 2\end{array}$ & $\begin{array}{rl}tr(M) &=0 \\ tr(M^2) &=0 \\ tr(M^3) &=0 \\ tr(M^4) &=0 \\ tr(M^5) &=0 \\ \end{array}$ \\ \hline
[1001] & $(2,1^3)$ & 12 & $\begin{array}{rc} M_{3\times 3}; & 2 \\ {A}_{1\times 3} ; & 3 \\ B _{3\times 1} ; & 3\\ \end{array}$ & $\begin{array}{rl} tr(M^3) &= AB \\ tr(M^4) &=AMB \\ tr(M^5) &={AM^2B} \\ \end{array} $ \\ \hline
[0110] & $(2^2,1)$ & 8 & $\begin{array}{rc}M_{2\times 2} ; & 2 \\ {A}_{1\times 2} ; & 3 \\ {B}_{2\times 1} ; & 3 \\ N_{2\times 2} ; & 4 \\ \end{array}$ & $\begin{array}{rl} tr(M^3) &= {AB} \\ tr(M^4) + tr(N^2) &={AMB} \\ tr(M^5) &={A(M^2+N)B} \\ tr(N)&=0 \\ \end{array}$ \\ \hline
[2002] & $(3,1^2)$ & 6 & $\begin{array}{rc}M_{2\times 2} ; & 2 \\ {A}_{1\times 2} ; & 4 \\ {B}_{2\times 1} ; & 4 \\ \end{array}$ & $\begin{array}{rl}tr(M^4) &= {AB} \\ tr(M^5) &= {AMB} \\ \end{array}$ \\ \hline
[1111] & $(3,2)$ & 4 & $\begin{array}{rc}M_{1\times 1} ; & 2 \\ {A}_{1\times 1} ; & 3 \\ {B}_{1\times 1} ; & 3 \\ N_{1\times 1} ; & 4 \\ {C}_{1\times 1} ; & 5 \\ {D}_{1\times 1} ; & 5 \\\end{array}$ & $\begin{array}{rl} tr(M^4) + tr(N^2) &={A MB} + {AD} \\
& +{BC} \\ tr(M^5) &= {CD}\end{array}$ \\ \hline
[2112] & $(4,1)$ & 2 & $\begin{array}{rc}M_{1\times 1} ; & 2 \\ {A}_{1\times 1} ; & 5 \\ {B}_{1\times 1} ; & 5 \\\end{array}$ & $\begin{array}{rl} tr(M^5) & ={AB} \\ \end{array}$ \\ \hline
[2222] & $(5)$ & 0 & - & - \\ \hline
\end{tabular}
\caption[$A_4$ Slodowy Slice Varieties]{$A_4$ varieties, generated by complex matrices $M$, $A$ and $B$ and their relations, with Hilbert series calculated by {\it Macaulay2} to match Slodowy slices $\mathcal S_{\mathcal N,\rho}$.}
\label{tab:A7}
\end{table}
\FloatBarrier
\section{$BCD$ Series Quiver Constructions}
\label{sec:BCDSeries}
\subsection{Quiver Types}
\label{subsec:BCDQuivers}
The constructions for the Slodowy slices of $BCD$ algebras draw upon a different set of quiver types from those used for the $A$ series.
\begin{enumerate}
\item Linear orthosymplectic quivers. These quivers ${{\cal L}_{B/C/D}} (\sigma)$ consist of a $B$, $C$ or $D$ series flavour node of vector irrep dimension $N_0$ connected to an alternating linear chain of $(S)O/USp({N_i})$ gauge nodes of non-increasing vector dimension. For a subset of these linear quivers, the decrements, $\sigma_i = N_{i-1} - N_{i}$, between nodes constitute an ordered partition of $N_0$, $\sigma \equiv \{\sigma_1,\ldots,\sigma_{{k}}\}$, where $\sigma_i \ge \sigma_{i+1}$ and $\sum\nolimits_{i =1}^{k} {{\sigma _i}} = N_0$. More generally, however, the $\sigma_i$ form a sequence of non-negative integers, subject to $\sum\nolimits_{i=1}^{k} {\sigma _i} = N_0$, and to selection rules, such that $USp$ nodes of odd dimension do not arise.
\item Balanced orthosymplectic quivers. These quivers ${{\cal B}_{B/C/D}} ({\mathbf N_f})$ consist of an alternating linear chain of $O/USp(N_{i})$ nodes, with each gauge node connected to a flavour node $O/USp(N_{f_i})$, where $N_{f_i} \ge 0$. The ranks of the gauge nodes are chosen such that, taking account of any attached flavour nodes, each gauge node inherits its balance ${\bf B}$ (via \ref{eq:BCDquivers1}) from that of a canonical quiver (as defined below).
\item Dynkin diagram quivers. These quivers ${\cal D}_G ({\mathbf N_f})$ consist of a chain of $U(N_i)$ gauge nodes in the form of a simply laced Dynkin diagram, with each gauge node connected to $N_{f_i}$ flavours, where $N_{f_i} \ge 0$. ${\mathbf N_f}$ matches the Characteristic $G[\ldots]$ of a nilpotent orbit, and the ranks of the gauge nodes are chosen such that each is balanced (similarly to the A series quivers in section \ref{subsec:AQuivers}). These constructions are limited to certain Slodowy slices of $ADE$ algebras, as the Higgs branch construction is not available on non-simply laced Dynkin diagrams.
\end{enumerate}
Recall, the \textit{nilpotent orbits} of a $BCD$ algebra correspond to a subset of the partitions $\rho$ of $N$, once these have been subjected to selection rules,\footnote{In a valid $B$ or $D$ partition $\rho$ each even integer appears at an even multiplicity; in a valid $C$ partition each odd integer appears at an even multiplicity \cite{Collingwood:1993fk}.} and linear quivers ${{\cal L}_{B/C/D}} (\rho^T)$ provide a complete set of Higgs branch constructions.
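These selection rules on partitions are simple to encode; a minimal Python sketch (function names ours):

```python
from collections import Counter

def is_bd_partition(parts):
    """B/D selection rule: every even part occurs with even multiplicity."""
    return all(m % 2 == 0 for p, m in Counter(parts).items() if p % 2 == 0)

def is_c_partition(parts):
    """C selection rule: every odd part occurs with even multiplicity."""
    return all(m % 2 == 0 for p, m in Counter(parts).items() if p % 2 == 1)

assert is_bd_partition([3, 1, 1])    # valid orbit of SO(5)
assert not is_bd_partition([4, 1])   # even part 4 occurs only once
assert is_c_partition([2, 2])        # valid orbit of USp(4)
assert not is_c_partition([3, 1])    # odd parts 3 and 1 each occur once
```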
Also, balanced quivers ${{\cal B}_{B/C/D}} ({\mathbf N_f})$ provide Coulomb branch constructions, using the $O/USp$ monopole formula, for the unrefined Hilbert series of certain nilpotent orbits of orthogonal groups, as discussed in \cite{Cabrera:2017ucb}. The linear and balanced quivers can partially be related by $3d$ mirror symmetry, as discussed further in section \ref{sec:Conclusions}.
Many of these linear quivers have ``Higgs equivalent" quivers, ${{\cal L}_{B/C/D}} (\sigma)$, with a different choice of orthogonal gauge node dimensions, but the same Higgs branches; these are generally described by sequences $\sigma$ rather than partitions $\rho^T$: a $USp-O-USp$ subchain with the sub-partition $(\ldots, n,n,\ldots )$ has the Higgs equivalent sequence $(\ldots, \sigma_i,\sigma_{i+1},\ldots ) = (\ldots,n-1, n+1,\ldots )$, in which the vector dimension of the central $O$ node is increased by 1 \cite{Hanany:2016gbz}.
Returning to \textit{Slodowy slices}, the roles of these quiver types are essentially reversed: balanced quivers ${{\cal B}_{B/C/D}}$ provide a \textit{complete} set of Higgs branch \textit{refined} Hilbert series constructions, while linear quivers ${{\cal L}_{B/C/D}}$ provide Coulomb branch constructions for the \textit{unrefined} HS of \textit{certain} Slodowy slices. Within the general classes of linear and balanced quiver types, those that are most relevant to the construction of Slodowy slices are shown in figure \ref{fig:BCD1}.
\begin{figure}[htbp]
\includegraphics[scale=0.42]{genBCDquivers.png}\\
\caption[$BCD$ Series Linear and Balanced Quiver Types]{$BCD$ linear and balanced quiver types. In the linear quivers ${\cal L}_{BC}$, ${\cal L}_{CD}$ and ${\cal L}_{DC}$, the ranks and fundamental dimensions of the gauge nodes (blue) are in non-increasing order L to R and the quivers are in the form of alternating $B-C$ or $D-C$ chains. In the balanced quivers, ${\cal B}_{B/C/D}$, the gauge nodes (blue) inherit their balance, taking account of attached gauge and flavour nodes (red), from a quiver for the nilpotent cone. Nodes labelled $C_r$ represent the group $USp(2r)$. Nodes labelled $B_r$ and $D_r$ represent $SO/O(2r+1)$ and $SO/O(2r)$ respectively. Nodes labelled $BC$, $BD$ or $DC$ indicate a group of one of the two types, subject to the alternation rule and to balance.}
\label{fig:BCD1}
\end{figure}
We refer to the quivers of type ${\cal L}_{BC}$, ${\cal L}_{CD}$ or $ {\cal L}_{DC}$, which contain pure $B-C$, $C-D$ or $D-C$ chains, as \textit{canonical} linear quivers. On the Higgs branch, the flavour nodes (of either type of quiver) identify the overall global symmetry, although it is necessary to distinguish within the $B$ and $D$ series between $O$ and $SO$ groups. However, it is not easy to identify the global symmetry of the Coulomb branch of an $O/USp$ quiver.
It is important to explain how the specific quivers used in the construction of the Hilbert series for $BCD$ Slodowy slices arise from the partition of the vector representation of $G$ under the homomorphism $\rho$.
The balanced quivers ${{\cal B}_{B/C/D}} ({\mathbf N_f(\rho)})$ are found via a modification of the $A$ series method explained in section \ref{subsec:AQuivers}. Firstly, the $SU(2)$ partition of a $BCD$ series vector representation under $\rho$ can be used to define a vector ${\mathbf N_f}(\rho)$ of alternating $O/USp$ flavour nodes, similarly to \ref{eq:Aquivers3}:
\begin{equation}
\label{eq:BCDquivers3}
\begin{aligned}
\rho {\left[ {1,0, \ldots } \right]_{{B/C/D}}} =\left( {{N^{{N_{{f_N}}}}}, \ldots ,{n^{{N_{{f_n}}}}}, \ldots ,{1^{{N_{{f_1}}}}}} \right).
\end{aligned}
\end{equation}
Next, consider linear quivers, whose Higgs branches match the nilpotent cone $\cal N$. In the case of $BCD$ groups, these quivers can be chosen, using Higgs equivalences, to be of canonical type. The balances ${\mathbf B}$ of their gauge nodes can be calculated by applying \ref{eq:Aquivers1} to vectors ${\mathbf N_f}$ and ${\mathbf N}$ defined from the vector/fundamental dimensions of the fields, as shown in table \ref{tab:BCD1}.
\begin{table}[htp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Group & Canonical Linear Quiver for $\cal N$ & $ \begin{array}{c} \text{Gauge Node}\\ \text{Balance} \end{array} $\\
\hline
$ A $ & $\ \node{\wver{}{\,\, SU(N)}}{U(N-1)}-\node{}{U(N-2)}- \ldots - \node{}{U(2)}-\node{}{U(1)}$ &$0$ for all\\
\hline
$ B $ & $\ \node{\wver{}{\,\, SO(2n+1)}}{USp(2n)}-\node{}{O(2n-1)}- \ldots - \node{}{USp(2)}-\node{}{O(1)}$ &$0$ for all\\
\hline
$ C $&$\ \node{\wver{}{\,\, USp(2n)}}{O(2n)}-\node{}{USp(2n-2)}- \ldots - \node{}{USp(2)}-\node{}{O(2)}$ &$\left\{ \begin{array}{c} USp: + 2\\ O({even}): - 2 \end{array} \right. $ \\
\hline
$ D $&$\ \node{\wver{}{\,\, SO(2n)}}{USp(2n-2)}-\node{}{O(2n-2)}- \ldots - \node{}{USp(2)}-\node{}{O(2)}$&$\left\{ \begin{array}{c} USp: + 2\\ O({even}): - 2 \end{array} \right. $ \\
\hline
\end{tabular}
\end{center}
\caption{Higgs Branch Quivers for Nilpotent Cones.}
\label{tab:BCD1}
\end{table}
These canonical quivers obey the generalisation of \ref{eq:Aquivers2}:
\begin{equation}
\label{eq:BCDquivers1}
\begin{aligned}
{\mathbf N_f} ={\mathbf A} \cdot {\mathbf N} + {\mathbf B},
\end{aligned}
\end{equation}
where ${\mathbf B}=(0,\ldots,0)$ for the $A$ and $B$ series canonical quivers, ${\mathbf B}=(-2,2,\ldots,-2)$ for the $C$ series, and ${\mathbf B}=(2,-2,\ldots,-2)$ for the $D$ series canonical quivers.
By fixing ${\mathbf B}$, the gauge node balance condition \ref{eq:BCDquivers1} can be extended from $\cal N$ to general Slodowy slices $\mathcal{S}_{\mathcal{N},\rho}$, permitting the calculation of each gauge node vector ${\mathbf N}$ from its flavour node vector ${\mathbf N_f}$. In effect, the quivers ${{\cal B}_{B/C/D}} ({\mathbf N_f(\rho)})$ descend from the canonical linear quivers for $\cal N$, through a series of transitions that leave the balance vector ${\mathbf B}$ invariant. These canonically balanced quivers provide Higgs branch constructions for $BCD$ Slodowy slices. They are tabulated in figures \ref{fig:BCD3a} to \ref{fig:BCD5b}, along with the partitions of the fundamental, the dimensions of the Slodowy slices, and their residual symmetry groups.\footnote{Note that other quivers whose Higgs branches match $\cal N$ could be taken to define ${\mathbf B}$; each leads to a different family of quivers, whose Higgs branches match the Slodowy slice Hilbert series. The canonical choice, however, best illustrates the Higgs-Coulomb quiver dualities.}
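The balance condition \ref{eq:BCDquivers1} is mechanical to verify: with $\mathbf A$ acting as the Cartan-type matrix of the linear chain, the balance of node $i$ reads $B_i = N_{f_i} + N_{i-1} + N_{i+1} - 2N_i$ in terms of vector/fundamental dimensions. A minimal sketch (our notation):

```python
def balance(Nf, N):
    """B = Nf - A.N for a linear chain of gauge nodes:
    B_i = N_f_i + N_(i-1) + N_(i+1) - 2 N_i (vector/fundamental dimensions)."""
    k = len(N)
    return [Nf[i]
            + (N[i-1] if i > 0 else 0)
            + (N[i+1] if i < k - 1 else 0)
            - 2*N[i] for i in range(k)]

# A_3 canonical quiver: SU(4) flavour over U(3)-U(2)-U(1); fully balanced
assert balance([4, 0, 0], [3, 2, 1]) == [0, 0, 0]

# C_3 canonical quiver: USp(6) flavour over O(6)-USp(4)-O(4)-USp(2)-O(2)
assert balance([6, 0, 0, 0, 0], [6, 4, 4, 2, 2]) == [-2, 2, -2, 2, -2]

# balanced quiver B_A(N_f(2,1^2)) of Example 1: flavours (2,1,0), gauge (2,2,1)
assert balance([2, 1, 0], [2, 2, 1]) == [0, 0, 0]
```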
On the other hand, the identification of linear quivers ${{\cal L}_{B/C/D}} (\sigma)$ for Coulomb branch constructions of $BCD$ series Slodowy slices poses a number of complications.
\begin{enumerate}
\item There is no bijection between partitions of $N$ and nilpotent orbits of $O(N)$ or $USp(N)$. So the quiver ${{\cal L}_{B/C/D}(\rho)}$ is valid only for partitions $\rho$ of \textit{special} nilpotent orbits; in the other cases ${{\cal L}_{B/C/D}(\rho)}$ (unlike ${{\cal L}_{B/C/D}(\rho^T)}$) would contain $USp$ gauge nodes of odd vector dimension.
\item In the case of Coulomb branch constructions, GNO duality \cite{Goddard:1976qe} is relevant. This indicates that, since the non-simply laced $B$ and $C$ groups are GNO dual to each other, partitions of $B$ type will be necessary to produce quivers whose Coulomb branches generate Slodowy slices of $C$ algebras, and vice versa.
\item A quiver ${{\cal L}_{B/C/D}( {\rho ^T})}$ may have several Higgs equivalent quivers ${{\cal L}_{B/C/D}( {\sigma})}$, in which $\sigma$ is a sequence of non-negative integers, rather than an ordered partition. Such quivers have the same Higgs branch refined HS, but generally have different ranks of gauge groups, and therefore different Coulomb branch dimensions.
\item Any candidate quiver for a Slodowy slice must have the correct Hilbert series dimension. Since the Coulomb branch monopole construction leads to a HS with complex dimension equal to twice the sum of the gauge group ranks in the quiver, this limits the candidates amongst Higgs equivalent quivers.
\item The Coulomb branches of quivers with $O$ gauge groups differ from those with $SO$ gauge groups; a correct choice of orthogonal gauge groups needs to be made \cite{Cabrera:2017ucb}.
\item When the orthosymplectic Coulomb branch monopole formula is applied to a quiver, the conformal dimension of all monopole operators must be positive for the Hilbert series to be well formed.
\end{enumerate}
Leaving the discussion of conformal dimension to section \ref{subsec:BCDCoulomb}, it is remarkable that a procedure exists for a partial resolution of these complexities, and indeed forms the basis for Coulomb branch constructions for the \textit{unrefined} Hilbert series of nilpotent orbits of special orthogonal groups in \cite{Cabrera:2017ucb}. The method draws on the Barbasch-Vogan map\footnote{A particularly clear description of this map is given in equation (5) of \cite{Achar:2002p}. A summary of dual maps between partitions and their appearance in the literature can be found in \cite[sec.~4]{Cabrera:2017njm}.} $d_{BV}(\rho)$ \cite{barbasch_vogan_1985}, which provides a bijection between the partitions of real vector representations associated with $B$ series special nilpotent orbits and those of pseudo-real vector representations associated with $C$ series special nilpotent orbits. By making use of Higgs equivalences, to select canonical linear quivers of type ${\cal L}_{BC}$, ${\cal L}_{CD}$ or ${\cal L}_{DC}$, which can be done for all \textit{special} nilpotent orbits, the $d_{BV}(\rho)$ map can be extended to identify candidates for Coulomb branch constructions of Hilbert series of $BCD$ Slodowy slices, in each case starting from a homomorphism $\rho$.
The specific transformations from the partitions $\rho^T$ to the sequences $\sigma$ are summarised in table \ref {tab:BCD1a}.
\begin{table}[htp]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$\text {Group}$&${\cal \bar O}_{\rho}$& Transformation &${\cal S}_{{\cal N}, \rho}$\\
\hline
$ A $&$ {Higgs \left[{\cal L}_A (\rho ^T)\right]} $&$ \rho = {\left( {{\rho ^T}} \right)^T} $&$ {Coulomb \left[ {{{\cal L}_A} ( \rho)} \right]} $\\
$ B $&$ {Higgs \left[{\cal L}_B (\rho ^T)\right]} $&$ \sigma \equiv {\left. {{{\left( {{{\left( {{{\left( { {\rho ^T}} \right)}_{N \to N - 1}}} \right)}_C}} \right)}^T}} \right|_{CD}} $&$ {Coulomb \left[ {{{\cal L}_{CD}} (\sigma)} \right]} $\\
$ C $&$ {Higgs \left[{\cal L}_C (\rho ^T)\right]} $&$\sigma \equiv {\left. {{{\left( {{{\left( {{{\left( { {\rho ^T}} \right)}_{N \to N + 1}}} \right)}_B}} \right)}^T}} \right|_{BC}} $&$ {Coulomb \left[ {{{\cal L}_{BC}} (\sigma)} \right]} $\\
$ D $&$ {Higgs \left[{\cal L}_D (\rho ^T)\right]} $&$ \sigma = {\left. {{{\left( {{{\left( {{\rho ^T}} \right)}_D}} \right)}^T}} \right|_{DC}} $&$ {Coulomb\left[ {{{\cal L}_{DC}} ( \sigma)} \right]} $\\
\hline
\end{tabular}
\end{center}
\caption{Coulomb Branch Quiver Candidates for Slodowy Slices}
\label{tab:BCD1a}
\end{table}
In these transformations, $\rho^T$ denotes the transpose of a partition; $\rho_{N \to N + 1}$ denotes incrementing the first term of a partition by 1, and $\rho_{N \to N - 1}$ decrementing its last term by 1; $\rho_B$, $\rho_C$, or $\rho_D$ denotes \textit{collapsing} a partition to the largest lower partition that is a valid $B$, $C$, or $D$ partition \cite{Collingwood:1993fk}; and $|_{BC}$ or $|_{CD}$ denotes shifting $D$ or $B$ nodes in a linear quiver to a `Higgs equivalent' quiver that consists purely of $B-C$ or of $C-D$ pairs of nodes. The transformations can be written more concisely as $\sigma = {\left. {d_{BV}{{\left( \rho \right)}^T}} \right|_{canonical}}$.
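The partition manipulations in table \ref{tab:BCD1a} can be mechanised. The Python sketch below implements the transpose, together with a brute-force stand-in for the $B/C/D$ collapse: the collapse of $\lambda$ is the dominance-greatest valid $B/C/D$ partition dominated by $\lambda$ \cite{Collingwood:1993fk}, so for small $N$ it can be found by enumeration (the efficient collapse algorithm is not reproduced here).

```python
def transpose(p):
    """Transpose (conjugate) of a partition, e.g. (3,1) -> (2,1,1)."""
    return [sum(1 for x in p if x > i) for i in range(p[0])] if p else []

def partitions(n, cap=None):
    """All partitions of n, parts weakly decreasing."""
    cap = n if cap is None else cap
    if n == 0:
        yield []
        return
    for first in range(min(n, cap), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def valid(p, series):
    """B/D: even parts have even multiplicity; C: odd parts do."""
    bad = (lambda x: x % 2 == 0) if series in "BD" else (lambda x: x % 2 == 1)
    return all(p.count(x) % 2 == 0 for x in set(p) if bad(x))

def dominated(mu, lam):
    """mu <= lam in dominance order (equal totals assumed)."""
    s_mu = s_lam = 0
    for i in range(max(len(mu), len(lam))):
        s_mu += mu[i] if i < len(mu) else 0
        s_lam += lam[i] if i < len(lam) else 0
        if s_mu > s_lam:
            return False
    return True

def collapse(lam, series):
    """X-collapse: dominance-greatest valid X-partition below lam (brute force)."""
    cands = [p for p in partitions(sum(lam)) if valid(p, series) and dominated(p, lam)]
    return max(cands, key=lambda p: [sum(p[:i + 1]) for i in range(len(p))])

assert transpose([3, 1]) == [2, 1, 1]
assert collapse([3, 1], "C") == [2, 2]   # C-collapse of an invalid C partition
```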
The resulting linear quivers, ${\cal L}_{CD }\left(\sigma \right)$, ${\cal L}_{BC }\left(\sigma \right)$ and ${\cal L}_{DC }\left(\sigma \right)$, whose Coulomb branches are candidates for Slodowy slices of $BCD$ groups up to rank 4, are included in figures \ref{fig:BCD3a} through \ref{fig:BCD5b}.
The quivers ${\cal L}_{DC} ((1^N))$ and ${\cal B}_D ({\mathbf N_f}(1^N))$ for the Higgs and Coulomb branch constructions of the Slodowy slice to the trivial nilpotent orbit are the same. These figures also include identified quivers of type ${\cal D}_G ({\mathbf N_f}([{d_{BV}}( \rho ) ] ))$, whose Higgs branch Hilbert series match those of ${{\cal B}_{B/C/D}} ({\mathbf N_f(\rho)})$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.45]{Bquiversgrid1.png}\\
\caption[Quivers for $B_1$ to $B_3$ Slodowy Slices.]{{Quivers for ${ B_1}$ to ${ B_3}$ Slodowy Slices}. The Higgs quivers are of type ${{\cal B}_{B/C/D}} \left({\mathbf N_f}(\rho) \right)$ and the Coulomb quivers are of type ${{\cal L}_{CD}}\left( {{{\left. {d_{BV}{\left( \rho \right)}^T} \right|}_{CD}}} \right)$. Gauge nodes of $B$ or $D$ type are evaluated as $O$ nodes on the Higgs branch and $SO$ nodes on the Coulomb branch. $\Delta=0$ indicates a diagram for which the monopole formula contains zero conformal dimension.}
\label{fig:BCD3a}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.35]{Bquiversgrid2.png}\\
\caption[Quivers for $B_4$ Slodowy Slices.]{{Quivers for $B_4$ Slodowy slices.} The Higgs quivers are of type ${{\cal B}_{B/C/D}} \left({\mathbf N_f}(\rho) \right)$ and the Coulomb quivers are of type ${{{\cal L}}_{CD}}\left( {{{\left. {d_{BV}^{}{{\left( \rho \right)}^T}} \right|}_{CD}}} \right)$. Gauge nodes of $B$ or $D$ type are evaluated as $O$ nodes on the Higgs branch and $SO$ nodes on the Coulomb branch. $\Delta=0$ indicates a diagram for which the monopole formula contains zero conformal dimension.}
\label{fig:BCD3b}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.40]{Cquiversgrid1.png}\\
\caption[Quivers for $C_1$ to $C_3$ Slodowy Slices.]{{Quivers for $C_1$ to $C_3$ Slodowy slices.} The Higgs quivers are of type ${{\cal B}_{B/C/D}} \left({\mathbf N_f}(\rho) \right)$ and the Coulomb quivers are of type ${{{\cal L}}_{BC}}\left( {{{\left. {d_{BV}^{}{{\left( \rho \right)}^T}} \right|}_{BC}}} \right)$. Gauge nodes of $B$ or $D$ type are evaluated as $O$ nodes on the Higgs branch and $SO$ nodes on the Coulomb branch. $\Delta=0$ indicates a diagram for which the monopole formula contains zero conformal dimension.}
\label{fig:BCD4a}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.35]{Cquiversgrid2.png}\\
\caption[Quivers for $C_4$ Slodowy Slices.]{{Quivers for $C_4$ Slodowy slices.} The Higgs quivers are of type ${{\cal B}_{B/C/D}} \left({\mathbf N_f}(\rho) \right)$ and the Coulomb quivers are of type ${{{\cal L}}_{BC}}\left( {{{\left. {d_{BV}^{}{{\left( \rho \right)}^T}} \right|}_{BC}}} \right)$. Gauge nodes of $B$ or $D$ type are evaluated as $O$ nodes on the Higgs branch and $SO$ nodes on the Coulomb branch. $\Delta=0$ indicates a diagram for which the monopole formula contains zero conformal dimension.}
\label{fig:BCD4b}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.4]{Dquiversgrid1.png}\\
\caption[Quivers for $D_2$ to $D_3$ Slodowy Slices.]{{Quivers for $D_2$ to $D_3$ Slodowy slices.} The Higgs balanced quivers are of type ${{\cal B}_{B/C/D}} \left({\mathbf N_f}(\rho) \right)$ and the Coulomb quivers are of type ${{{\cal L}}_{DC}}\left( {{{\left. {d_{BV}^{}{{\left( \rho \right)}^T}} \right|}_{DC}}} \right)$. The Dynkin type quivers ${\cal D}_D ({\mathbf N_f}')$ are identified by $A$ series isomorphisms. Gauge nodes of $B$ or $D$ type are evaluated as $O$ nodes on the Higgs branch and $SO$ nodes on the Coulomb branch. $\Delta=0$ indicates a diagram for which the monopole formula contains zero conformal dimension.}
\label{fig:BCD5a}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.35]{Dquiversgrid2.png}\\
\caption[Quivers for $D_4$ Slodowy Slices.]{{Quivers for $D_4$ Slodowy slices.} The Higgs balanced quivers are of type ${{\cal B}_{B/C/D}} \left({\mathbf N_f}(\rho) \right)$ and the Coulomb quivers are of type ${{{\cal L}}_{DC}}\left( {{{\left. {d_{BV}^{}{{\left( \rho \right)}^T}} \right|}_{DC}}} \right)$. The Dynkin quivers of type ${\cal D}_D ({\mathbf N_f}([{d_{BV}}( \rho ) ] ))$, are those that have Higgs branches matching the balanced quivers. Gauge nodes of $B$ or $D$ type are evaluated as $O$ nodes on the Higgs branch and $SO$ nodes on the Coulomb branch. $\Delta=0$ indicates a diagram for which the monopole formula contains zero conformal dimension.}
\label{fig:BCD5b}
\end{center}
\end{figure}
\FloatBarrier
\paragraph{Type IIB String Theory Brane Systems}
Note that all the resulting quivers presented in figures \ref{fig:BCD3a} through \ref{fig:BCD5b} represent $3d \ \mathcal N=4$ gauge theories that admit an embedding in Type IIB string theory. They correspond to the effective gauge theory living on the world-volume of D3-branes suspended along one spatial direction between NS5-branes and D5-branes. This is achieved by taking the construction of \cite{Hanany:1996ie} and introducing O3-planes \cite{Feng:2000eq}. This type of system was further explored in \cite{Gaiotto:2008ak}, where the Coulomb and Higgs branches were described in terms of nilpotent orbits and Slodowy slices, and the label $T_\sigma^\rho (G)$ was introduced to denote the SCFT at the superconformal fixed point. These brane systems and $3d$ quivers were also studied in \cite{Cabrera:2017njm}, which found the physical realisation of transverse slices between closures of nilpotent orbits that are adjacent in their Hasse diagrams; this phenomenon has been named the \textit{Kraft-Procesi transition}.
\subsection{Higgs Branch Constructions}
\label{subsec:BCDHiggs}
In the case of the balanced unitary quivers ${\cal D}_D ({\mathbf N_f})$, based on $D$ series Dynkin diagrams, the calculation of Higgs branch Hilbert series proceeds similarly to the $A$ algebras. This leads to a Higgs branch formula that is comparable to \ref{eq:Aquivers5}, modified to include the connection of three pairs of bifundamental fields to the central node. The dimension formula \ref{eq:Aquivers6} remains unchanged.
In the case of orthosymplectic quivers of type ${{\cal B}_{B/C/D}} ({\mathbf N_f})$, modifications to the $A$ series Higgs branch formula are required. The $O/USp$ alternating chains are taken to comprise bifundamental (half) hypermultiplet fields that transform in vector representations $[1,0,\ldots,0]_{B/D} \otimes [1,0,\ldots,0]_{C}$. Also, it is necessary to average the integrations over the disconnected $SO$ and $O^-$ components of the $O$ gauge groups; this requires precise choices both of the character for the vector representation of $O^-$ and of the HKQ associated with the integration over $O^-$, as explained in \cite{Hanany:2016gbz}.\footnote{The effect of non-connected $O$ group components is also discussed in \cite{Bourget:2017tmt}.}
In other respects, the calculation of the Higgs branch of a balanced $BCD$ quiver follows a similar Weyl integration to the $A$ series. The general Higgs branch formula for $BCD$ series Slodowy slices is:
\begin{equation}
\label{eq:BCDquivers1a}
\begin{aligned}
g_{HS}^{Higgs[{\cal B}_{B/C/D}({{\bf{N}}_{\bf{f}}}(\rho ))]} &=\\
\frac{1}{{{2^{\# O}}}}\sum\limits_{O \pm } {~}& \oint\limits_{G_1 \left( {{N_1}} \right) \otimes \ldots G_k \left( {{N_{k}}} \right)}{d\mu}{~}\prod\limits_{n = 1}^{k} {\frac{{PE\left[ {{{[vector]}_{G_n({N_n})}} \otimes {{[vector]}_{G_{f_n}({N_{{f_n}}})}},t} \right]}}{{HKQ\left[ {G_n({N_n}),t} \right]}}} \\
\times & \prod\limits_{n = 1}^{k - 1} {PE\left[ {{{[vector]}_{G_n({N_n})}} \otimes {{[vector]}_{G_{n+1}({N_{n + 1}})}},t} \right]}.
\end{aligned}
\end{equation}
In \ref{eq:BCDquivers1a}, $G_n$ alternates between $O(N)$ and $USp(N)$, ${d\mu}$ is the Haar measure for the ${G_1\left( {{N_1}} \right) \otimes \ldots G_k \left( {{N_{k}}} \right)}$ product group, $HKQ\left[ {G_n ({N_n}),t} \right]$ is the hyperK\"ahler quotient for a gauge node, and the summation indicates that the calculation is averaged over the non-connected $SO$ and $O^-$ components of $O$ gauge groups \cite{Hanany:2016gbz}.
The character $[vector]_{O(2r)^ {-}}=[vector]_{O(2)^ {-}} \oplus [vector]_{USp(2r-2)} $, where $[vector]_{O(2)^ {-}}$ is (the trace of) a diagonal matrix with eigenvalues $\{1,-1\}$. The HKQ is given by $HKQ \left[ {G_n ({N_n}),t} \right] = PE \left[ [adjoint ]_{G_n},t^2 \right]$, where for the $O(2r)^-$ component of an $O(2r)$ group, $[adjoint]_{O(2r)^ {-}} \equiv {\Lambda ^2} \left[ [vector]_{O(2r)^{-}} \right]$.\footnote{For further detail, see \cite{Hanany:2016gbz}.}
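The averaging over disconnected group components can be illustrated in miniature. The toy Python sketch below (an assumed example, separate from the quiver computations above) evaluates the Molien series of $O(2)$ acting on its vector representation: the $SO(2)$ Weyl integral is approximated by quadrature over the circle, while the $O(2)^-$ component is a single conjugacy class with eigenvalues $\{1,-1\}$. The average reproduces $1/(1-t^2)$, the Hilbert series of the single invariant $x^2+y^2$.

```python
import math

def molien_O2_vector(t, samples=4000):
    """Molien series of O(2) on its 2d vector rep, averaging both components."""
    # SO(2) component: average 1/det(1 - t R(theta)) over the circle;
    # det(1 - t R(theta)) = 1 - 2 t cos(theta) + t^2
    so2 = sum(1.0 / (1 - 2 * t * math.cos(2 * math.pi * k / samples) + t * t)
              for k in range(samples)) / samples
    # O(2)^- component: single conjugacy class with eigenvalues {+1, -1}
    o2minus = 1.0 / ((1 - t) * (1 + t))
    return 0.5 * (so2 + o2minus)

t = 0.1
assert abs(molien_O2_vector(t) - 1 / (1 - t * t)) < 1e-9  # matches 1/(1-t^2)
```

The quadrature converges rapidly because the integrand is a smooth periodic function; the same averaging structure underlies the $\frac{1}{2^{\# O}}\sum_{O\pm}$ in \ref{eq:BCDquivers1a}.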
The structure of the Higgs branch formula can be used to identify the dimensions of the Hilbert series. In essence, each bifundamental field contributes HS generators according to its dimensions (being the product of the dimensions of the $O$ and $USp$ vectors), and each gauge group offsets the generators by HS relations numbering twice the dimension of the gauge group (once for the Weyl integration and once for the HKQ). This gives a rule for the dimensions of a Slodowy slice calculated from a balanced ${{\cal B}_{B/C/D}} ({\mathbf N_f}(\rho))$ quiver:
\begin{equation}
\label{eq:BCDquivers2}
\begin{aligned}
\left| g_{HS}^{Higgs[{\cal B}({{\bf{N}}_{\bf{f}}}(\rho ))]} \right| & = {{\bf{N}}_f}( \rho ) \cdot {\bf{N}}( \rho ) - \frac{1}{2}{\bf{N}}( \rho ) \cdot {\bf{A}} \cdot {\bf{N}}( \rho ) + {\bf{N}}( \rho ) \cdot {\bf{K}},\\
\end{aligned}
\end{equation}
where $K_n = \left\{ \begin{array}{l} + 1{~}\text{ if}{~}{{ G_n = B/D}}\\- 1{~}\text{ if}{~}{{G_n = C}}\end{array} \right.$ and \ref{eq:BCDquivers1} is used to calculate ${\bf{N}}( \rho)$.
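The dimension rule \ref{eq:BCDquivers2} is a one-line computation once the vectors are fixed. The Python sketch below uses hypothetical inputs (the matrix and vectors are illustrative only, not drawn from the figures); note that ${\bf N} \cdot {\bf A} \cdot {\bf N}$ is even for a Cartan-type ${\bf A}$, so the integer division is exact.

```python
def slice_dim(N, Nf, A, K):
    """Evaluate |HS| = Nf.N - (1/2) N.A.N + N.K for integer vectors."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    quad = dot(N, [dot(row, N) for row in A])   # N.A.N, even for Cartan-type A
    return dot(Nf, N) - quad // 2 + dot(N, K)

# hypothetical alternating D-C-D chain, so K = (+1, -1, +1)
A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
print(slice_dim([2, 4, 2], [2, 6, 2], A, [1, -1, 1]))  # prints 24
```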
\subsection{Coulomb Branch Constructions}
\label{subsec:BCDCoulomb}
While the $O/USp$ version of the monopole formula \ref{eq:monOSp} derives from \ref{eq:mon0} by following similar general principles to the unitary monopole formula \ref{eq:mon1}, there are several aspects and subtleties that require discussion:
\begin{equation}
\label{eq:monOSp}
\begin{aligned}
g_{HS}^{{\text{Coulomb}}}( {{\bf f},t^2} ) \equiv \sum\limits_{{\bf o},{\bf s}} {P_{o/s}^{G}( t^2 )}{~}{{\bf f}^{o/s}}{~}{t^{2 \Delta ({\bf o},{\bf s})}}.
\end{aligned}
\end{equation}
\begin{enumerate}
\item Monopole lattice. The lattice of monopole charges depends on the symmetry group. For $SU$ and $USp$ groups, points in the monopole charge lattice correspond to sets of ordered integers and are in bijective correspondence with highest weight Dynkin labels. However, the monopole charge lattices of orthogonal groups only span the vector sub-lattices and exclude weight space states whose spinor Dynkin labels sum to an odd number. Labelling monopole charges as ${\bf q} \equiv (q_1,\ldots,q_r)$ for unitary nodes, ${\bf s} \equiv (s_1,\ldots, s_r)$ for symplectic nodes and ${\bf o} \equiv (o_1,\ldots,o_r)$ for orthogonal nodes, the relationships between monopole and integer weight space lattices can be summarised as in table \ref{tab:BCD1b}.
\begin{table}[htp]
\footnotesize
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Group & Monopole Lattice & Basis Transformations & Dynkin Labels \\
$ $&$ $&$ $&$ [n_1, \ldots, n_r] $\\
\hline
$ {{U(r)}} $&
$ {\infty > {q_1} \ge \ldots \ge {q_i} \ge \ldots \ge {q_r} > - \infty } $&
$ {{q_i} \equiv \sum\nolimits_{j = i}^r {{n_j}} } $&
${\left\{ \begin{array}{l} \infty > {n_{i < r}} \ge 0\\ \infty > {n_r} > - \infty \end{array} \right.} $\\
\hline
$ {{A_r}} $&
$ {\infty > {q_1} \ge \ldots \ge {q_i} \ge \ldots \ge {q_r} \ge 0} $&
$ {{q_i} \equiv \sum\nolimits_{j = i}^r {{n_j}} } $&
$ {\infty > {n_i} \ge 0} $\\
\hline
$ {{B_r}} $&
$ {\infty > {o_1} \ge \ldots \ge {o_i} \ge \ldots \ge {o_r} \ge 0} $&
$ {{o_i} \equiv \sum\nolimits_{j = i}^{r - 1} {{n_j} + {n_r}/2} } $&
$ {\left\{ \begin{array}{l} \infty > {n_i} \ge 0\\ {n_r} = 2k \end{array} \right.} $\\
\hline
$ {{C_r}} $&
$ {\infty > {s_1} \ge \ldots \ge {s_i} \ge \ldots \ge {s_r} \ge 0} $&
$ {{s_i} \equiv \sum\nolimits_{j = i}^r {{n_j}} } $&
$ {\infty > {n_i} \ge 0} $\\
\hline
$ {{D_r}} $&
$ {\infty > {o_1} \ge \ldots \ge {o_i} \ge \ldots \ge \left| {{o_r}} \right| \ge 0 } $&
$ {\left\{ \begin{array}{l} {o_{i < r}} \equiv \sum\nolimits_{j = i}^{r - 2} {{n_j} + \left( {{n_{r - 1}} + {n_r}} \right)/2} \\ {o_r} = \left( { - {n_{r - 1}} + {n_r}} \right)/2 \end{array} \right.} $&
$ {\left\{ \begin{array}{l} \infty > {n_i} \ge 0\\ {n_{r - 1}} + {n_r} = 2k \end{array} \right.} $\\
\hline
\end{tabular}
\end{center}
\caption{Monopole and Dynkin Label Lattices}
\label{tab:BCD1b}
\end{table}
\item Characters. The definition of conformal dimension draws on the characters of the bifundamental scalar fields in the hypermultiplets and of the adjoint scalars in the vector multiplets: the weights of the fugacities in the characters become coefficients of the monopole charges $q$ in $\Delta(q)$. These characters take a relatively simple form in the monopole lattice basis, compared with the weight space integer (Dynkin label) basis, as shown in tables \ref{tab:BCD1c} and \ref{tab:BCD1d}. CSA fugacities are taken as $\{x_1,\ldots,x_r\}$ in the weight space integer (Dynkin label) basis, or $\{y_1,\ldots,y_r\}$ in the monopole basis.
\begin{table}[htp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Group &$ \begin{array}{c} \text{Monopole Basis} \\ \text{Vector/Fundamental} \end{array} $&$ \begin{array}{c} \text{Weight Space Basis}\\ \text{Vector/Fundamental}\end{array} $ \\
\hline
$ {{U(r)}} $&
$ {\sum\nolimits_{i = 1}^r {{y_i}} } $&
$ {{x_1} + \sum\nolimits_{i = 2}^r {{{{x}}_i}{{/}}{{{x}}_{i - 1}}} } $\\
\hline
$ {{A_r}} $&
$ {\sum\nolimits_{i = 1}^r {{y_i}} + \prod\nolimits_{i = 1}^r {1/{y_i}} } $&
$ {{x_1} + \sum\nolimits_{i = 2}^r {{{{x}}_i}{{/}}{{{x}}_{i - 1}} + 1/{{{x}}_r}} } $\\
\hline
$ {{B_r}} $&
$ {1 + \sum\nolimits_{i = 1}^r {{y_i} + \sum\nolimits_{i = 1}^r {1/{y_i}} } } $&
$ \begin{array}{c} 1 + 1/{x_1} + {x_1}\\ + \sum\nolimits_{i = 2}^{r - 1} {\left( {{{x}_{i - 1}}{/}{{x}_i} + {{x}_i}{/}{{x}_{i - 1}}} \right)} + {{x}_{r - 1}}/{x_r^2} +{ x_r^2}/{{x}_{r - 1}}\end{array} $\\
\hline
$ {{C_r}} $&
$ {\sum\nolimits_{i = 1}^r {{y_i} + \sum\nolimits_{i = 1}^r {1/{y_i}} } } $&
$ {1/{x_1} + {x_1} + \sum\nolimits_{i = 2}^r {\left( {{{x}_{i - 1}}{/}{{x}_i} + {{x}_i}{/}{{x}_{i - 1}}}\right)}}$\\
\hline
$ {{D_r}} $&
$ {\sum\nolimits_{i = 1}^r {{y_i} + \sum\nolimits_{i = 1}^r {1/{y_i}} } } $&
$ \begin{array}{c}
1/{x_1} + {x_1} + \sum\nolimits_{i = 2}^{r - 2} {\left( {{{{x}}_{i - 1}}{{/}}{{{x}}_i} + {{{x}}_i}{{/}}{{{x}}_{i - 1}}} \right)} \\
{{ + }}{{{x}}_{r - 2}}{{/(}}{{{x}}_{r - 1}}{{{x}}_r}{{) + }}{{{x}}_{r - 1}}{{/}}{{{x}}_r}{{ + }}{{{x}}_r}{{/}}{{{x}}_{r - 1}}{{ + (}}{{{x}}_{r - 1}}{{{x}}_r}{{)/}}{{{x}}_{r - 2}}
\end{array} $\\
\hline
\end{tabular}
\end{center}
\caption{Vector/Fundamental Characters}
\label{tab:BCD1c}
\end{table}
\begin{table}[htp]
\begin{center}
\begin{tabular}{|c|c|}
\hline
Group & Monopole Basis \\
\hline
$ {{U(r)}} $&
$ {r + \sum\nolimits_{i \ne j}^{} {{y_i}/{y_j}} } $\\
\hline
$ {{A_r}} $&
$ {r + \left( {\prod\nolimits_{i = 1}^r {1/{y_i}} } \right)\sum\nolimits_{j = 1}^r {1/{y_j}} + \left( {\prod\nolimits_{i = 1}^r {{y_i}} } \right)\sum\nolimits_{j = 1}^r {{y_j}} + \sum\nolimits_{i \ne j}^{} {{y_i}/{y_j}} } $\\
\hline
$ {{B_r}} $&
$ {r + \sum\nolimits_{i = 1}^r {\left( {{y_i} + 1/{y_i}} \right) + \sum\nolimits_{i < j}^{} {\left( {{y_i}{y_j} + {y_i}/{y_j} + {y_j}/{y_i} + 1/( {{y_i}{y_j}} )} \right)} } } $\\
\hline
$ {{C_r}} $&
$ {r + \sum\nolimits_{i = 1}^r {\left( {y_i^2 + 1/y_i^2} \right) + \sum\nolimits_{i < j}^{} {\left( {{y_i}{y_j} + {y_i}/{y_j} + {y_j}/{y_i} + 1/( {{y_i}{y_j}} )} \right)} } } $\\
\hline
$ {{D_r}} $&
$ {r + \sum\nolimits_{i < j}^{} {\left( {{y_i}{y_j} + {y_i}/{y_j} + {y_j}/{y_i} + 1/( {{y_i}{y_j}} )} \right)} } $\\
\hline
\end{tabular}
\end{center}
\caption{Adjoint Characters}
\label{tab:BCD1d}
\end{table}
\FloatBarrier
\item Conformal dimension. The contributions to conformal dimension of the $O/USp$ bifundamental fields linking gauge or flavour nodes, and of the $O/USp$ gauge nodes, follow from \ref{eq:mon0} in a similar manner to the unitary case \ref{eq:mon3}, starting from the relevant characters: the coefficients $\{0, \pm 1, \pm 2 \}$ of the monopole charges $\{{\bf o},{\bf s}\}$ in the conformal dimension formula match the weights (exponents) of the ${\bf y}$ fugacities in the characters of the respective bifundamental or adjoint representations in the monopole basis. Tables \ref{tab:BCD1e} and \ref{tab:BCD1f} show the resulting contributions from the various types of gauge node and bifundamental field.
\begin{table}[htp]
\begin{center}
\begin{tabular}{|c|c|}
\hline
Gauge Group & $\Delta(\text{Node})$ \\
\hline
$ {{U(r)}} $&
$ { - \sum\nolimits_{1 \le i < j \le r} {| {{q_i} - {q_j}} |} }$\\
\hline
$ {{B_r}} $&
$ { - \sum\nolimits_{i = 1}^r {| {{o_i}} |} - \sum\nolimits_{1 \le i < j \le r} {| {{o_i} \pm {o_j}} |} } $\\
\hline
$ {{C_r}} $&
$ { - 2\sum\nolimits_{i = 1}^r {| {{s_i}} |} - \sum\nolimits_{1 \le i < j \le r} {| {{s_i} \pm {s_j}} |} } $\\
\hline
$ {{D_r}} $&
$ { - \sum\nolimits_{1 \le i < j \le r} {| {{o_i} \pm {o_j}}|} } $\\
\hline
\end{tabular}
\end{center}
\caption{Gauge Node Conformal Dimensions}
\label{tab:BCD1e}
\end{table}
\begin{table}[htp]
\begin{center}
\begin{tabular}{|c|c|}
\hline
Gauge Groups & $\Delta(\text{Bifundamental})$ \\
\hline
$ {{U({{r_1}})} - {U({{r_2}})}} $&
$ {\frac{1}{2}\sum\limits_{i = 1}^{{r_1}} {\sum\limits_{j = 1}^{{r_2}} {| {q_{1,i}} - {q_{2,j}}|} } }$\\
\hline
$ {{B_{{r_1}}} - {C_{{r_2}}}} $&
$ {\frac{1}{2}\sum\limits_{j = 1}^{{r_2}} {| {{s_j}} |} + \frac{1}{2}\sum\limits_{i = 1}^{{r_1}} {\sum\limits_{j = 1}^{{r_2}} {| {{o_i} \pm {s_j}} |} } } $\\
\hline
$ {D_{{r_1}}} - {C_{{r_2}}} $&
$ {\frac{1}{2}\sum\limits_{i = 1}^{{r_1}} {\sum\limits_{j = 1}^{{r_2}} {| {{o_i} \pm {s_j}} |} } }$\\
\hline
\end{tabular}
\end{center}
\caption{Bifundamental Conformal Dimensions}
\label{tab:BCD1f}
\end{table}
\FloatBarrier
\item Symmetry factors. The residual symmetries for a flux (whether $\bf{o}$, $\bf{s}$, or $\bf{q}$) over a gauge node can be fixed from the sub-group of the $O/USp/U$ gauge group identified by the Dynkin diagram formed by those monopole charges that equal their successors (or, equivalently, correspond to zero Dynkin labels). Note that the symmetry factors may belong to a sub-group from a different series to the gauge node.
\item $O$ vs $SO$ gauge nodes. Both the characters of vector irreps and symmetry factors depend on whether a $D$ series gauge node is taken as $SO$ or as $O$. As noted in \cite{Cremonesi:2014uva}, the Casimirs of an $O(2n)$ symmetry group are the same as those of $SO(2n+1)$, due to the absence of a Pfaffian in $O(2n)$ (since the determinant of representation matrices can be of either sign). The Coulomb branch calculations for Slodowy slices herein are based entirely on $SO$ gauge nodes. This is a choice consistent with the results in \cite{Cabrera:2017ucb}. When these results are translated to the brane configurations, the action of Lusztig's canonical quotient ${\cal \bar A}(\mathcal O)$ associated to each quiver can be seen in terms of \textit{collapse transitions} \cite{Cabrera:2017njm} performed on the branes. Each time a collapse transition moves two half D5-branes away from each other, the magnetic lattices of all the orthogonal gauge nodes in between are acted upon by a diagonal $\mathbb{Z}_2$ action. The brane configurations \cite{Feng:2000eq,Gaiotto:2008ak,Cabrera:2017njm} for linear quivers ${{\cal L}_{B/C/D}} (\sigma)$ do not exhibit this effect, and therefore all gauge nodes are $SO$.
\item Fugacities. In the unitary monopole formula, $\bf{z}$ in \ref{eq:mon1} can be treated as a fugacity for the simple roots of the group for which the quiver is a balanced Dynkin diagram. As discussed in \cite{Cremonesi:2014kwa}, such a treatment cannot be extended to the $O/USp$ monopole formula due to the non-unitary gauge groups involved. Thus, while it can be helpful to include fugacities ${\bf f} \equiv ({f_1,\ldots,f_r})$ during the calculation of Coulomb branches, their interpretation is unclear. Such issues do not affect the validity of the \textit{unrefined} Hilbert series ultimately obtained by setting $\forall f_i: f_i \to 1$.
\end{enumerate}
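The contributions in tables \ref{tab:BCD1e} and \ref{tab:BCD1f} can be assembled programmatically. The Python sketch below (illustrative; the $\pm$ shorthand is expanded into both signs) evaluates the net conformal dimension of unit flux on the central node of a three-node chain, reproducing entries of table \ref{tab:BCD1g}, e.g. $r_1 + r_2 - 2r + 2 = 1$ for $C_2 - D_2 - C_1$.

```python
def delta_gauge(kind, q):
    """Vector multiplet contribution of a B/C/D gauge node (negative)."""
    pairs = sum(abs(a + b) + abs(a - b)            # |q_i + q_j| + |q_i - q_j|
                for i, a in enumerate(q) for b in q[i + 1:])
    single = sum(abs(x) for x in q)
    return {"B": -single - pairs, "C": -2 * single - pairs, "D": -pairs}[kind]

def delta_bifund(o_kind, o, s):
    """Half-hypermultiplet contribution of a B-C or D-C link; s are USp charges."""
    d = sum(abs(a + b) + abs(a - b) for a in o for b in s) / 2
    if o_kind == "B":                              # extra (1/2) sum |s_j| for B-C
        d += sum(abs(x) for x in s) / 2
    return d

def chain_delta(kinds, fluxes):
    """Net Delta for a three-node chain (flank, centre, flank) of O/USp nodes."""
    d = delta_gauge(kinds[1], fluxes[1])
    for side in (0, 2):
        o_idx, s_idx = (side, 1) if kinds[side] in "BD" else (1, side)
        d += delta_bifund(kinds[o_idx], fluxes[o_idx], fluxes[s_idx])
    return d

# C_2 - D_2 - C_1 with unit flux on the central node: r1 + r2 - 2r + 2 = 1
assert chain_delta(("C", "D", "C"), ([0, 0], [1, 0], [0])) == 1
```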
In order for a Coulomb branch Hilbert series not to diverge when the fugacities $\bf f$ are set to unity, no sub-lattice of the monopole lattice (other than the origin) should have zero conformal dimension; this ensures that the fugacities $\bf f$ only appear as generators when coupled with $t^k$, where $k > 0$. A necessary (albeit not always sufficient) condition on $O/USp$ quivers can be formulated by examining unit shifts away from the origin of the monopole lattice. This is similar to the ``good or ugly, but not bad'' balance condition on unitary quivers \cite{Gaiotto:2008ak}.
\begin{table}[htp]
\begin{center}
\begin{tabular}{|c|c|}
\hline
Gauge Group Chain & ${{\Delta _r}\left( {1,0,\ldots,0} \right)}$ \\
\hline
$ {{C_{{r_1}}} - {D_r} - {C_{{r_2}}}} $&
$ {{r_1} + {r_2} - 2r + 2} $\\
\hline
$ {{C_{{r_1}}} - {B_r} - {C_{{r_2}}}} $&
$ {{r_1} + {r_2} - 2r + 1} $\\
\hline
$ {{B_{{r_1}}} - {C_r} - {B_{{r_2}}}} $&
$ {{r_1} + {r_2} - 2r+1 } $\\
\hline
$ {{B_{{r_1}}} - {C_r} - {D_{{r_2}}}} $&
$ {{r_1} + {r_2} - 2r +1/2 } $\\
\hline
$ {{D_{{r_1}}} - {C_r} - {D_{{r_2}}}} $&
$ {{r_1} + {r_2} - 2r } $\\
\hline
\end{tabular}
\end{center}
\caption{Quiver Chain Unit Conformal Dimensions}
\label{tab:BCD1g}
\end{table}
In table \ref{tab:BCD1g} we collect the conformal dimensions that result, based on tables \ref{tab:BCD1e} and \ref{tab:BCD1f}, from setting a single monopole charge ($o_1$ or $s_1$) on the central gauge node of a chain of three nodes to unity, as a function of the ranks of the nodes involved. This table can be used to check that no gauge node in a quiver is necessarily ``bad''. For example, the central gauge node in the chain $D_2 - C_1 - D_1$ has conformal dimension $1$ under unit flux and is a ``good'' node. Quivers with zero conformal dimension are identified as such in figures \ref{fig:BCD3a} through \ref{fig:BCD5b}; their Hilbert series clearly do not match those of the Higgs branch constructions for Slodowy slices, and are not tabulated here.
Provided that (i) the nilpotent orbit ${\cal O}_{\rho}$ is special (so that the Barbasch-Vogan map can be uniquely applied), and (ii) the quiver ${{\cal L}_{BC/CD/DC}} (\sigma(\rho))$ does not suffer from zero conformal dimension, the $O/USp$ monopole formula \ref{eq:monOSp} can be used to calculate \textit{unrefined} Hilbert series for Slodowy slices; these match those calculated on the Higgs branch of ${{\cal B}_{B/C/D}} ({\mathbf N_f}(\rho))$ using \ref{eq:BCDquivers1a}.
\subsection{Hilbert Series}
\label{subsec:BCDHilbert}
The Hilbert series of the Slodowy slices of algebras $B_1$ to $B_4$, $C_1$ to $C_4$ and $D_2$ to $D_4$, calculated as above, are summarised in tables \ref{tab:BCD2}, \ref{tab:BCD3}, \ref{tab:BCD4}, \ref{tab:BCD5} and \ref{tab:BCD6}. The refined Hilbert series are based on the Higgs branches of the balanced quivers ${{\cal B}_{B/C/D}} ({\mathbf N_f}(\rho))$.
Whenever the flavour symmetry groups are from the $B$ or the $D$ series, a choice has to be made between the characters of $SO(N)$ or $O(N)^-$. In the tables, $B/D$ flavour nodes have been taken as $SO$ type, with the exception of $B_0$, where the $O(1)$ fugacity $k_i= \pm 1$ has been used (with indices dropped where no ambiguity arises).\footnote{Note that if one wishes to read the generators of the chiral ring from the quiver as described in Section \ref{sec:OfQBCD}, then all fugacities $k_i$ need to be set to $1$.}
The Hilbert series are presented in terms of their generators, or $PL[HS]$, using character notation $[n_1,\ldots, n_r]_G$ to label irreps. Symmetrisation of these generators using the $PE$ recovers the refined Hilbert series. The underlying adjoint maps \ref{eq:SS9} can readily be recovered from the generators by inverting \ref{eq:SS11}. The HS can be unrefined by replacing irreps of the global symmetry groups by their dimensions.
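For unrefined Hilbert series, the symmetrisation by the $PE$ is a routine power series expansion. The Python sketch below (illustrative; exact integer arithmetic) expands $PE$ of (degree, multiplicity) generator data, with negative multiplicities standing for relations, and reproduces the leading coefficients of $(1-t^4)/(1-t^2)^3$, the first entry of table \ref{tab:BCD2} (three generators $[2]t^2$ of $B_1$ and one relation at $t^4$).

```python
def series_mul(a, b, order):
    """Truncated product of two power series given as coefficient lists."""
    out = [0] * (order + 1)
    for i, x in enumerate(a[:order + 1]):
        for j, y in enumerate(b[:order + 1 - i]):
            out[i + j] += x * y
    return out

def pe_unrefined(terms, order):
    """PE[sum_k m_k t^{w_k}] = prod_k (1 - t^{w_k})^{-m_k}; m_k < 0 is a relation."""
    hs = [1] + [0] * order
    for weight, mult in terms:
        for _ in range(abs(mult)):
            if mult > 0:        # multiply by 1/(1 - t^w): geometric series
                geo = [1 if i % weight == 0 else 0 for i in range(order + 1)]
                hs = series_mul(hs, geo, order)
            else:               # multiply by (1 - t^w)
                hs = series_mul(hs, [1] + [0] * (weight - 1) + [-1], order)
    return hs

# B_1 slice to the trivial orbit: 3 generators at t^2, one relation at t^4
print(pe_unrefined([(2, 3), (4, -1)], 8))   # prints [1, 0, 3, 0, 5, 0, 7, 0, 9]
```

Replacing each irrep by its dimension in this way unrefines any of the tabulated generating functions.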
\begin{sidewaystable}[htp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
${\begin{array}{c} \text{Nilpotent}\\\text{Orbit} \end{array}}$
&${\begin{array}{c} \text{Dimension}\\ {| {\cal S}_{\cal N,\rho}|} \end{array}}$
&${\begin{array}{c} \text{Symmetry} \\ F\end{array}}$
&$ \text{Generators of HS} \equiv \text{PL[HS]}$
&$ \text{Unrefined HS} $\\
\hline
$[0]$&$ 2 $&$B_1 $&$ [2] t^2-t^4 $&$\frac { (1 - t^4)}{(1 - t^2)^3} $\\
$[2]$&$ 0 $&$ \emptyset $&$ 0 $&$ 1 $\\
\hline
$[00]$&$ 8 $&$ B_2 $&$ [0,2] t^2-t^4-t^8 $ & $\frac{(1 - t^4) (1 - t^8)}{(1 - t^2)^{10}}$\\
$[01]$&$ 4 $&$ C_1 \otimes B_0 $&$[2] t^2+[1] k t^3-t^8 $ & $ \frac{ (1 - t^8)}{(1 - t^2)^3 (1 - t^3)^2 } $\\
$[20]$&$ 2 $&$ D_1 \otimes B_0 $&$ t^2+(1) k t^4- t^8 $ & $ \frac{ (1 - t^8)}{(1 - t^2) (1 - t^4)^2} $\\
$[22]$&$ 0 $&$ \emptyset $&$ 0 $ & $ 1 $\\
\hline
$[000]$&$ 18 $&$B_3$ & $[0,1,0] t^2-t^4-t^8-t^{12} $&$ \frac{(1 - t^4) (1 - t^8) (1 - t^{12})}{(1 - t^2)^{21}} $\\
$[010]$&$ 10 $&$B_1 \otimes C_1$ & $[2]_B t^2+[2]_C t^2+[2]_B [1]_C t^3 -t^8-t^{12} $&$ \frac {(1 - t^8) (1 - t^{12})}{(1 - t^2)^6 (1 - t^3)^6} $\\
$[200]$&$ 8 $&$ D_2 \otimes B_0$&$ [2,0] t^2 + [0,2] t^2+[1,1] k t^4-t^8-t^{12} $&$\frac {(1 - t^8) (1 - t^{12})}{(1 - t^2)^6 (1 - t^4)^4} $\\
$[101]$&$ 6 $&$ C_1 \otimes B_0 $& $[2] t^2+[1] k t^3+t^4+[1] k t^5-t^8-t^{12} $&$ \frac {(1 - t^8) (1 - t^{12})}{(1 - t^2)^3 (1 - t^3)^2 (1 - t^4) (1 - t^5)^2} $\\
$[020]$&$ 4 $&$ D_1 \otimes B_0$ & $t^2+(1) k t^4+(2) t^4 +t^6-t^8-t^{12} $&$ \frac {(1 - t^8) (1 - t^{12})}{(1 - t^2) (1 - t^4)^4 (1 - t^6)} $\\
$[220]$&$ 2 $&$ D_1 \otimes B_0 $& $t^2+(1) k t^6-t^{12} $&$ \frac {(1 - t^{12})}{(1 - t^2) (1 - t^6)^2} $\\
$[222]$&$ 0 $&$ \emptyset $&$ 0 $&$ 1 $\\
\hline
\end{tabular}
\end{center}
\text{N.B. $(n)$ denotes the character of the $D_1 \equiv SO(2)$ reducible representation $q^n+q^{-n}$ of $U(1)$.}\\
\text{$k$ denotes the character $\pm 1$ if $B_0 \to O(1)$ or $1$ if $B_0 \to SO(1)$.}
\caption{Hilbert Series for Slodowy Slices of $B_1$, $B_2$, and $B_3$}
\label{tab:BCD2}
\end{sidewaystable}
\begin{sidewaystable}[htp]
\footnotesize
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
${\begin{array}{c} \text{Nilpotent}\\\text{Orbit} \end{array}}$
&${\begin{array}{c} \text{Dimension}\\ {| {\cal S}_{\cal N,\rho}|} \end{array}}$
&${\begin{array}{c} \text{Symmetry} \\ F\end{array}}$
&$ \text{Generators of HS} \equiv \text{PL[HS]}$
&$ \text{Unrefined HS} $\\
\hline
$[0000]$&$ 32 $&$ B_4 $&$ [0,1,0,0] t^2-t^4-t^8-t^{12}-t^{16} $&$\frac {(1 - t^4) (1 - t^8) (1 - t^{12}) (1 - t^{16})}{(1 - t^2)^{36}} $\\
$[0100]$&$ 20 $&$ B_2 \otimes C_1 $&$[0,2] t^2 + [2] t^2+[1,0] [1] t^3 -t^8-t^{12}-t^{16} $&$\frac{(1 - t^8) (1 - t^{12}) (1 - t^{16})}{(1 - t^2)^{13} (1 - t^3)^{10}} $\\
$[2000]$&$ 18 $&$D_3 \otimes B_0 $&$[0,1,1] t^2+ [1,0,0] k t^4-t^{8}-t^{12}-t^{16} $&$\frac{(1- t^8) (1-t^{12}) (1-t^{16})}{(1-t^2)^{15} (1 - t^4)^6} $\\
$[0001]$&$ 16 $&$ C_2 \otimes B_0 $&$[2,0] t^2+[1,0] k t^3+[0,1] t^4-t^8-t^{12}-t^{16}$&$\frac {(1 - t^8) (1 - t^{12}) (1 - t^{16})}{(1 - t^2)^{10} (1 - t^3)^4 (1-t^4)^5} $\\
$[1010]$&$ 12 $&$ C_2 \otimes D_1 \otimes B_0 $&$[2] t^2+[1] (1) t^3+[1] k t^3 +(1) k t^4+ t^4+[1] k t^5 +t^2 -t^8-t^{12}-t^{16} $&$\frac {(1 - t^8) (1 - t^{12}) (1 - t^{16})}{(1 - t^2)^4 (1 - t^3)^6 (1 -
t^4)^3 (1 - t^5)^2} $\\
$[0200]$&$ 10 $&$ B_1 \otimes D_1 $&$[2]t^2+(2)t^4+[2](1)t^4+t^2+t^6-t^8-t^{12}-t^{16} $&$ \frac{(1 - t^8) (1 - t^{12}) (1 - t^{16})}{(1 - t^2)^4 (1 - t^4)^8 (1 - t^6)} $\\
$[0020]$&$ 8 $&$ B_1 $&$[2]t^2+[4]t^4+[2]t^6-t^8-t^{12}-t^{16} $&$ \frac{(1 - t^8) (1 - t^{12}) (1 - t^{16})}{(1 - t^2)^3 (1 - t^4)^5 (1 - t^6)^3} $\\
$[2200]$&$ 8 $&$ D_2 \otimes B_0 $&$[2,0]t^2+[0,2]t^2+[1,1] k t^6 -t^{12}-t^{16}$&$ \frac{(1 - t^{12}) (1 - t^{16})}{(1 - t^2)^6 (1 - t^6)^4} $\\
$[0201]$&$ 6 $&$ C_1 \otimes B_0 $&$[2]t^2+[1] k t^5+[2] t^6 -t^{12}-t^{16} $&$ \frac{(1 - t^{12}) (1 - t^{16})}{(1 - t^2)^3 (1 - t^5)^2 (1 - t^6)^3} $\\
$[2101]$&$ 6 $&$ C_1 \otimes B_0 $&$ [2] t^2+[1] k t^5+[1] k t^7+t^4-t^{12}-t^{16} $&$ \frac{(1 - t^{12}) (1 - t^{16})}{(1 - t^2)^3 (1 - t^4) (1 - t^5)^2 (1 - t^7)^2} $\\
$[2020]$&$ 4 $&$ B_0 \otimes B_0 \otimes B_0 $&$(k_1 k_3+k_3 k_5) t^4 +(k_1 k_5+k_3 k_5) t^6+ k_3 k_5 t ^8+t^4 - t^{12}-t^{16} $&$ \frac{(1 - t^{12}) (1 - t^{16})}{(1 - t^4)^3 (1 - t^6)^2 (1 - t^8)} $\\
$[2220]$&$ 2 $&$ D_1 \otimes B_0 $&$(1) k t^8+t^2-t^{16} $&$\frac{1-t^{16}}{(1-t^2) (1-t^8)^2} $\\
$[2222]$&$ 0 $&$ \emptyset $ &$ 0 $&$ 1 $\\
\hline
\end{tabular}
\end{center}
\text{N.B. $(n)$ denotes the character of the $D_1 \equiv SO(2)$ reducible representation $q^n+q^{-n}$ of $U(1)$.}\\
\text{$k$ denotes the character $\pm 1$ if $B_0 \to O(1)$ or $1$ if $B_0 \to SO(1)$.}
\caption{Hilbert Series for Slodowy Slices of $B_4$.}
\label{tab:BCD3}
\end{sidewaystable}
\begin{sidewaystable}[htp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
${\begin{array}{c} \text{Nilpotent}\\\text{Orbit} \end{array}}$
&${\begin{array}{c} \text{Dimension}\\ {| {\cal S}_{\cal N,\rho}|} \end{array}}$
&${\begin{array}{c} \text{Symmetry} \\ F\end{array}}$
&$ \text{Generators of HS} \equiv \text{PL[HS]}$
&$ \text{Unrefined HS} $\\
\hline
$[0]$&$ 2 $&$C_1 $&$ [2] t^2-t^4 $&$\frac { (1 - t^4)}{(1 - t^2)^3} $\\
$[2]$&$ 0 $&$ \emptyset $&$ 0 $&$ 1 $\\
\hline
$[00]$&$ 8 $&$ C_2 $&$ [2,0] t^2-t^4-t^8 $ & $\frac{(1 - t^4) (1 - t^8)}{(1 - t^2)^{10}}$\\
$[10]$&$ 4 $&$ C_1 \otimes B_0 $&$[2] t^2+[1] k t^3-t^8 $ & $ \frac{ (1 - t^8)}{(1 - t^2)^3 (1 - t^3)^2 } $\\
$[02]$&$ 2 $&$ D_1 $&$ t^2+(1) t^4- t^8 $ & $ \frac{ (1 - t^8)}{(1 - t^2) (1 - t^4)^2} $\\
$[22]$&$ 0 $&$ \emptyset $&$ 0 $ & $ 1 $\\
\hline
$[000]$&$ 18 $&$C_3$ & $[2,0,0]t^2-t^4-t^8-t^{12} $&$\frac{(1-t^4) (1-t^8) (1-t^{12})}{(1-t^2)^{21}} $\\
$[100]$&$ 12 $&$C_2 \otimes B_0$ & $[2,0] t^2+[1,0] k t^3-t^8-t^{12} $&$\frac{(1-t^8) (1-t^{12})}{(1-t^2)^{10} (1-t^3)^4} $\\
$[010]$&$ 8 $&$ C_1 \otimes D_1$&$ [2]t^2+[1](1)t^3+(2)t^4+t^2-t^8-t^{12} $&$\frac{(1-t^8) (1-t^{12})}{(1-t^2)^4 (1-t^3)^4 (1-t^4)^2} $\\
$[002]$&$ 6 $&$ B_1 $& $[2]t^2+[4]t^4-t^8-t^{12} $&$\frac{(1-t^8) (1-t^{12})}{(1-t^2)^3 (1-t^4)^5} $\\
$[020]$&$ 4 $&$ C_1 $ & $[2]t^2+[2]t^6-t^8-t^{12} $&$\frac{(1-t^8) (1-t^{12})}{(1-t^2)^3 (1-t^6)^3}$\\
$[210]$&$ 4 $&$ C_1 \otimes B_0 $& $[2]t^2+[1] k t^5-t^{12} $&$\frac{1-t^{12}}{(1-t^2)^3 (1-t^5)^2} $\\
$[202]$&$ 2 $&$ B_0 \otimes B_0 $& $t^4+k_2 k_4 t^4+k_2 k_4 t^6-t^{12} $&$\frac{1-t^{12}}{(1-t^4)^2 (1-t^6)} $\\
$[222]$&$ 0 $&$ \emptyset $&$ 0 $&$ 1 $\\
\hline
\end{tabular}
\end{center}
\text{N.B. $(n)$ denotes the character of the $D_1 \equiv SO(2)$ reducible representation $q^n+q^{-n}$ of $U(1)$.}\\
\text{$k$ denotes the character $\pm 1$ if $B_0 \to O(1)$ or $1$ if $B_0 \to SO(1)$.}
\caption{Hilbert Series for Slodowy Slices of $C_1$, $C_2$, and $C_3$}
\label{tab:BCD4}
\end{sidewaystable}
\begin{sidewaystable}[htp]
\footnotesize
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
${\begin{array}{c} \text{Nilpotent}\\\text{Orbit} \end{array}}$
&${\begin{array}{c} \text{Dimension}\\ {| {\cal S}_{\cal N,\rho}|} \end{array}}$
&${\begin{array}{c} \text{Symmetry} \\ F\end{array}}$
&$ \text{Generators of HS} \equiv \text{PL[HS]}$
&$ \text{Unrefined HS} $\\
\hline
$[0000]$&$ 32 $&$ C_4 $&$ [2,0,0,0] t^2-t^4-t^8-t^{12}-t^{16} $&$\frac {(1 - t^4) (1 - t^8) (1 - t^{12}) (1 - t^{16})}{(1 - t^2)^{36}} $\\
$[1000]$&$ 24 $&$ C_3 \otimes B_0 $&${[2,0,0]}t^2+[1,0,0] k t^3-t^8-t^{12}-t^{16} $&$\frac{(1-t^8) (1-t^{12}) (1-t^{16})}{(1-t^2)^{21} (1-t^3)^6}$\\
$[0100]$&$ 18 $&$C_2 \otimes D_1 $&${[2,0]}t^2+[1,0](1)t^3+(2)t^4+t^2-t^8-t^{12}-t^{16}$&$\frac{(1-t^8) (1-t^{12}) (1-t^{16})}{(1-t^2)^{11} (1-t^3)^8 (1-t^4)^2} $\\
$[0010]$&$ 14 $&$ C_1 \otimes B_1 $&${[2]_B}t^2+[2]_C t^2+[2]_B [1]_C t^3+[4]_B t^4 -t^8-t^{12}-t^{16}$&$\frac{(1-t^8) (1-t^{12}) (1-t^{16})}{(1-t^2)^6 (1-t^3)^6 (1-t^4)^5} $\\
$[0002]$&$ 12 $&$ D_2 $&${[2,0]}t^2+[0,2]t^2+[2,2]t^4-t^8-t^{12}-t^{16} $&$\frac{(1-t^8) (1-t^{12}) (1-t^{16})}{(1-t^2)^6 (1-t^4)^9} $\\
$[2100]$&$ 12 $&$ C_2 \otimes B_0 $&$[2,0] t^2+[1,0] k t^5 -t^{12} - t^{16}$&$ \frac{(1-t^{12}) (1-t^{16})}{(1-t^2)^{10} (1-t^5)^4} $\\
$[0200]$&$ 10 $&$ C_1 \otimes C_1 $&$[2]t^2+[2]t^2+[1][1]t^4+[2]t^6-t^8-t^{12}-t^{16} $&$\frac{(1-t^8) (1-t^{12}) (1-t^{16})}{(1-t^2)^6 (1-t^4)^4 (1-t^6)^3} $\\
$[0110]$&$ 8 $&$ C_1 \otimes B_0 $&$[2]t^2+[1] k t^3+[1] k t^5+ [2]t^6+t^4-t^8-t^{12}-t^{16}$&$\frac{(1-t^8) (1-t^{12}) (1-t^{16})}{(1-t^2)^3 (1-t^3)^2 (1-t^4) (1-t^5)^2 (1-t^6)^3} $\\
$[2010]$&$ 8 $&$ C_1 \otimes B_0 \otimes B_0 $&$[2] t^2+[1] k_2 t^3+[1] k_4 t^5+k_2 k_4 t^4+k_2 k_4t^6+t^4-t^{12}-t^{16} $&$\frac{(1-t^{12}) (1-t^{16})}{(1-t^2)^3 (1-t^3)^2 (1-t^4)^2 (1-t^5)^2 (1-t^6)} $\\
$[2002]$&$ 6 $&$ D_1 \otimes B_0 $&$ (1)k t^4+(2) t^4+(1)k t^6+t^2+t^4-t^{12}-t^{16} $&$ \frac{(1-t^{12}) (1-t^{16})}{(1-t^2) (1-t^4)^5 (1-t^6)^2} $\\
$[0202]$&$ 4 $&$D_1 $&$(2)t^4+(2)t^8+t^6-t^{12}-t^{16} $&$\frac{(1-t^{12}) (1-t^{16})}{(1-t^2) (1-t^4)^2 (1-t^6) (1-t^8)^2} $\\
$[2210]$&$ 4 $&$ C_1 \otimes B_0 $ & $[2] t^2+[1] k t^7-t^{16} $&$\frac{1-t^{16}}{(1-t^2)^3 (1-t^7)^2} $\\
$[2202]$&$ 2 $&$ B_0 \otimes B_0 $&$k_2 k_6 t^6+k_2 k_6 t^8+t^4-t^{16} $&$\frac{1-t^{16}}{(1-t^4) (1-t^6) (1-t^8)} $\\
$[2222]$&$ 0 $&$ \emptyset $ &$ 0 $&$ 1 $\\
\hline
\end{tabular}
\end{center}
\text{N.B. $(n)$ denotes the character of the $D_1 \equiv SO(2)$ reducible representation $q^n+q^{-n}$ of $U(1)$.}\\
\text{$k$ denotes the character $\pm 1$ if $B_0 \to O(1)$ or $1$ if $B_0 \to SO(1)$.}
\caption{Hilbert Series for Slodowy Slices of $C_4$.}
\label{tab:BCD5}
\end{sidewaystable}
\begin{sidewaystable}[htp]
\footnotesize
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
${\begin{array}{c} \text{Nilpotent}\\\text{Orbit} \end{array}}$
&${\begin{array}{c} \text{Dimension}\\ {| {\cal S}_{\cal N,\rho}|} \end{array}}$
&${\begin{array}{c} \text{Symmetry} \\ F\end{array}}$
&$ \text{Generators of HS} \equiv \text{PL[HS]}$
&$ \text{Unrefined HS} $\\
\hline
$[00]$&$ 4 $&$ D_2 $&$[2,0] t^2+[0,2] t^2-2 t^4 $ & $\frac{(1-t^4)^2}{(1-t^2)^6}$\\
$[20]$&$ 2 $&$ C_1 \cong A_1 $&$[2]t^2-t^4 $ & $\frac{1-t^4}{(1-t^2)^3}$\\
$[02]$&$ 2 $&$ C_1 \cong A_1 $&$[2]t^2-t^4 $ & $\frac{1-t^4}{(1-t^2)^3}$\\
$[22]$&$ 0 $&$ \emptyset $&$ 0 $ & $ 1 $\\
\hline
$[000]$&$ 12 $&$D_3$ & $[0,1,1]t^2-t^4-t^6-t^8$&$\frac{(1-t^4) (1-t^6) (1-t^8)}{(1-t^2)^{15}} $\\
$[011]$&$ 6 $&$C_1 \otimes D_1$ & $[2]t^2+[1](1) t^3 +t^2-t^6-t^8 $&$\frac{(1-t^6) (1-t^8)}{(1-t^2)^4 (1-t^3)^4} $\\
$[200]$&$ 4 $&$ B_1 \otimes B_0$&$[2] t^2+[2] k t^4-k t^6-t^8 $&$\frac{(1-t^6) (1-t^8)}{(1-t^2)^3 (1-t^4)^3}$\\
$[022]$&$ 2 $&$ D_1 $& $(2) t^4+ t^2 - t^8 $&$\frac{1-t^8}{(1-t^2) (1-t^4)^2} $\\
$[222]$&$ 0 $&$ \emptyset $ & $ 0$ &$1$\\
\hline
$[0000]$&$ 24 $&$D_4$ & $[0,1,0,0] t^2-t^4-2 t^8-t^{12}$ & $\frac{(1-t^4) (1-t^8)^2 (1-t^{12})}{(1-t^2)^{28}} $\\
$[0100]$&$ 14 $&$D_2 \otimes C_1$ & $[2,0] t^2+ [0,2] t^2+[2] t^2 +[1,1][1] t^3- 2 t^8-t^{12} $&$\frac{(1-t^8)^2 (1-t^{12})}{(1-t^2)^9 (1-t^3)^8} $\\
$[0002]$&$ 12 $&$ C_2$&$[2,0]t^2+[0,1]t^4 -2t^8-t^{12} $&$\frac{(1-t^8)^2 (1-t^{12})}{(1-t^2)^{10} (1-t^4)^5}$\\
$[0020]$&$ 12 $&$ C_2$&$[2,0]t^2+[0,1]t^4 -2t^8-t^{12} $&$\frac{(1-t^8)^2 (1-t^{12})}{(1-t^2)^{10} (1-t^4)^5}$\\
$[2000]$&$ 12 $&$ B_2 \otimes B_0 $ & $ [0,2] t^2+[1,0] k t^4- k t^8-t^8-t^{12}$ &$\frac{(1-t^8)^2 (1-t^{12})}{(1-t^2)^{10} (1-t^4)^5}$\\
$[1011]$&$ 8 $&$ C_1 \otimes B_0 \otimes B_0 $ & $ [2]t^2+[1](k_1+k_3) t^3+ [1] k_1 t^5+ t^4+k_1 k_3 t^4-k_1 k_3 t^8 -t^8-t^{12}$ &$\frac{(1-t^8)^2 (1-t^{12})}{(1-t^2)^3 (1-t^3)^4 (1-t^4)^2 (1-t^5)^2}$\\
$[0200]$&$ 6 $&$ D_1 \otimes D_1 $ & $2 t^2+(1)(1)t^4+(2)t^4+t^6-2 t^8-t^{12}$ &$\frac{(1-t^8)^2 (1-t^{12})}{(1-t^2)^2 (1-t^4)^6 (1-t^6)}$\\
$[0202]$&$ 4 $&$ C_1 $ & $ [2]t^2+[2]t^6-t^8-t^{12}$ &$\frac{(1-t^8) (1-t^{12})}{(1-t^2)^3 (1-t^6)^3}$\\
$[0220]$&$ 4 $&$ C_1 $ & $ [2]t^2+[2]t^6-t^8-t^{12}$ &$\frac{(1-t^8) (1-t^{12})}{(1-t^2)^3 (1-t^6)^3}$\\
$[2200]$&$ 4 $&$B_1 \otimes B_0 $ & $ [2]t^2+[2] k t^6-k t^8-k t^{12}$ &$\frac{(1-t^8) (1-t^{12})}{(1-t^2)^3 (1-t^6)^3}$\\
$[2022]$&$ 2 $&$ B_0 \otimes B_0 $ & $ k_3 k_5 t^4+t^4+k_3 k_5 t^6-t^{12}$ &$\frac{1-t^{12}}{(1-t^4)^2 (1-t^6)}$\\
$[2222]$&$ 0 $&$ \emptyset $ & $ 0$ &$1$\\
\hline
\end{tabular}
\end{center}
\text{N.B. $(n)$ denotes the character of the $D_1 \equiv SO(2)$ reducible representation $q^n+q^{-n}$ of $U(1)$.}\\
\text{$k$ denotes the character $\pm 1$ if $B_0 \to O(1)$ or $1$ if $B_0 \to SO(1)$.}
\caption{Hilbert Series for Slodowy Slices of $D_2$, $D_3$, and $D_4$}
\label{tab:BCD6}
\end{sidewaystable}
Many observations can be made about these Hilbert series.
\begin{enumerate}
\item As expected, (i) the Slodowy slice to the trivial nilpotent orbit $\mathcal{S}_{\mathcal{N},(1^N)}$ has the same Hilbert series as the nilpotent cone, (ii) the slice to the sub-regular orbit has the Hilbert series of a Kleinian singularity of type $\hat A_{2r-1}$ for the $B$ series, $\hat D_{r+1}$ for the $C$ series, and $\hat D_{r}$ for the $D$ series, and (iii) the slice to the maximal nilpotent orbit is trivial.
\item The Slodowy slices $\mathcal{S}_{\mathcal{N},\rho}$ are all complete intersections, giving a good answer to the question posed in \cite{Hanany:2011db}.
\item The adjoint maps can contain singlet generators at even powers of $t$ up to (twice) the degree of the highest Casimir of $G$; these generators may be cancelled by one or more Casimir relations.
\item The global symmetry groups of the Slodowy slice generators include mixed $BCD$ Lie groups (or $A$ series isomorphisms), as well as finite groups of type $B_0$, and descend in rank as the dimension of the Slodowy slice reduces. Different Slodowy slices may share the same symmetry group, while having inequivalent embeddings into $G$.
\item The sub-regular Slodowy slices of non-simply laced algebras match those of specific simply laced algebras, in accordance with their Kleinian singularities, as listed in table \ref{table:SS1}. In the case of Slodowy slices of $C_n$ nilpotent orbits with vector partitions of type $(2n-k,k)$, it was identified in \cite{henderson_licata_2014} that these isomorphisms with $D_{n+1}$ extend further down the Hasse diagram: ${\cal S}_{{\cal N},C (2n-k,k)} \equiv {\cal S}_{{\cal N},D (2n-k+1,k+1)}$. This occurs due to matching chains of Kraft-Procesi transitions \cite{Kraft:1982fk} within such slices.
\item We have not attempted an exhaustive analysis of $Z_2$ factors associated with the choice of $SO$ vs $O$ flavour groups and the ensuing subtleties.
For example, the slices ${\cal S}_{{\cal N},B[20]}$ and ${\cal S}_{{\cal N},C[02]}$ have the global symmetries $D_1 \otimes B_0$ and $D_1$, respectively, with the $B$ series Slodowy slice having an extra $B_0$ fugacity ($k = \pm 1$), notwithstanding the isomorphism between the $B$ and $C$ Lie algebras.
Similarly, in the case of $D_{4}$, the spinor pair slices, respectively ${\cal S}_{{\cal N},D[0020]}$/${\cal S}_{{\cal N},D[0002]}$ or ${\cal S}_{{\cal N},D[0220]}$/${\cal S}_{{\cal N},D[0202]}$, only carry a $C_{1 \text{ or } 2}$ series symmetry, while the corresponding vector slices of the same dimension, ${\cal S}_{{\cal N},D[2000]}$ or ${\cal S}_{{\cal N},D[2200]}$, carry a $B_0 \otimes B_{1 \text{ or } 2}$ symmetry.
\end{enumerate}
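Observation 2 above can be verified numerically: for a complete intersection, the plethystic logarithm of the unrefined Hilbert series terminates, with positive terms counting generators and negative terms counting relations. The following is a minimal sketch in Python (using \texttt{sympy}; not part of the original computations), applied to the unrefined HS of ${\cal S}_{{\cal N},C[010]}$ from table \ref{tab:BCD4}:

```python
from sympy import Rational, log, series, symbols
from sympy.ntheory import mobius

t = symbols('t')
# Unrefined HS of S_{N,C[010]} from the C_3 table (a complete intersection)
hs = (1 - t**8) * (1 - t**12) / ((1 - t**2)**4 * (1 - t**3)**4 * (1 - t**4)**2)

# Plethystic logarithm: PL[hs](t) = sum_{k>=1} mu(k)/k * log(hs(t^k))
order = 14
pl = sum(Rational(mobius(k), k) * log(hs.subs(t, t**k)) for k in range(1, order))
pl_poly = series(pl, t, 0, order).removeO().expand()
print(pl_poly)  # terminating PL: 4 generators at t^2, 4 at t^3, 2 at t^4,
                # minus relations at t^8 and t^12
```

The result, $4t^2 + 4t^3 + 2t^4 - t^8 - t^{12}$, matches the dimensions of the generators $[2]t^2+t^2+[1](1)t^3+(2)t^4$ and the Casimir relations listed in table \ref{tab:BCD4}.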
Whilst Higgs branch constructions based on the balanced quivers of type ${{\cal B}_{B/C/D}} ({\mathbf N_f}(\rho))$ are available for all Slodowy slices, Coulomb branch constructions based on ${{\cal L}_{BC/CD/DC}}$ quivers or Higgs branch constructions based on the quivers of type ${\cal D}_{G} ({\mathbf N_f})$ are not generally available:
\begin{enumerate}
\item In the cases calculated, the slice to a sub-regular nilpotent orbit always has a Coulomb branch construction.
\item Many $BCD$ Slodowy slices do not have Coulomb branch constructions as ${{\cal L}_{BC/CD/DC}}$ quivers, either because their underlying nilpotent orbits are not special, or due to zero conformal dimension problems under the $O/USp$ monopole formula. While the issue of zero conformal dimension ($\Delta = 0$) is less prevalent for low dimension Slodowy slices, the problem is inherent in maximal $B_r - C_r - B_{r-1}$ sub-chains, and so affects many $C$ series Slodowy slices; certain other quivers are also problematic.
\item Other than $A$ series isomorphisms, the quivers of type ${\cal D}_{G} ({\mathbf N_f})$ only provide Higgs branch constructions for $D$ series Slodowy slices of low dimension. The nilpotent orbits underlying these Slodowy slices are dual, under the Barbasch-Vogan map, to (minimal or near-to-minimal) nilpotent orbits of Characteristic height 2, for which Coulomb branch constructions using the unitary monopole formula are known \cite{Hanany:2017ooe}, plus some others, such as ${\cal S}_{{\cal N},D[0200]}$. These Dynkin diagram quivers have $S(U \otimes \ldots U )$ flavour nodes and their refined Hilbert series may not replicate all the possible combinations of orthogonal group characters.
\end{enumerate}
These matters are discussed further in the concluding section.
\FloatBarrier
\subsection{Matrix Generators for Orthosymplectic Quivers}
\label{subsec:BCDgenerators}
In the case of $BCD$ series, prescriptions are similarly available for obtaining the generators of the chiral ring corresponding to a Slodowy slice directly from the partition data or from the Higgs branch quiver.
\subsubsection{Vector Decomposition}
\label{sec:BCDVD}
From \ref{eq:BCDquivers3} and the alternating nature of the quiver, it follows that the character of the vector representation of $G$ decomposes into vector representations of an $O/USp$ product group, tensored with the $SU(2)$ embedding:
\begin{equation}
\label{eq:BCDgens1}
\begin{aligned}
\rho :\chi _{vector}^{O(N)} \to \mathop \oplus \limits_{\scriptstyle [n] \atop
\scriptstyle bosonic} [n]_{\rho}{~}\chi _{vector}^{O \left(N_{{f_{n + 1}}}\right)}\mathop \oplus \limits_{\scriptstyle [n] \atop
\scriptstyle fermionic} [n]_{\rho}{~}\chi _{vector}^{USp \left(N_{{f_{n + 1}}}\right)},\\
\rho :\chi _{vector}^{USp(N)} \to \mathop \oplus \limits_{\scriptstyle[n]\atop
\scriptstyle bosonic} [n]_{\rho}{~}\chi _{vector}^{USp \left(N_{{f_{n + 1}}} \right)}\mathop \oplus \limits_{\scriptstyle [n] \atop
\scriptstyle fermionic} [n]_{\rho}{~}\chi _{vector}^{O \left( N_{{f_{n + 1}}} \right)},\\
\end{aligned}
\end{equation}
where ${[n]_{\rho}}$ are bosonic (odd dimension) or fermionic (even dimension) irreps of the $SU(2)$ associated with the nilpotent orbit embedding $\rho$. The requirement that the partition $\rho$ obeys the $BCD$ selection rules ensures that the $USp$ irreps are all of even dimension. Once this decomposition has been identified, the mapping of the adjoint of $G$ into matrix generators \ref{eq:SS7} follows, either by symmetrising the $USp$ vector, or by antisymmetrising the $O$ vector. This can be checked against the adjoint partition $\rho :\chi _{adjoint}^G$. Note that a choice can be made whether to use the $SO$ form of orthogonal group characters or the $O^{-}$ form.
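The assignment of $O$ versus $USp$ flavour factors in \ref{eq:BCDgens1} follows mechanically from the part multiplicities of the partition. A minimal sketch in Python (illustrative only; the helper function is ours), applied to the partition $(2^2,1^4)$ of $SO(8)$:

```python
from collections import Counter

def flavour_decomposition(partition, group='O'):
    """Map each part n of the partition to the SU(2) irrep [n-1] paired with
    an O- or USp-type flavour factor of rank N_{f_n} (the multiplicity of n).
    For G of O-type, odd (bosonic) parts pair with O and even (fermionic)
    parts with USp; for G of USp-type the assignment is reversed."""
    mult = Counter(partition)
    swap = (group == 'USp')
    return {n: ('USp' if (n % 2 == 0) != swap else 'O', m)
            for n, m in sorted(mult.items())}

decomp = flavour_decomposition([2, 2, 1, 1, 1, 1], group='O')
print(decomp)  # {1: ('O', 4), 2: ('USp', 2)}
# dimension check: the pieces reassemble the SO(8) vector
print(sum(n * m for n, (_, m) in decomp.items()))  # 8
```

This reproduces the decomposition $SO(8) \rightarrow SU(2)_{\rho} \otimes O(4) \otimes USp(2)$ used in the worked example below; for a $USp$-type group the roles of odd and even parts are exchanged, as in the second line of \ref{eq:BCDgens1}.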
\subsubsection{Generators from Quiver Paths}
\label{sec:OfQBCD}
For orthosymplectic quivers, the method in section \ref{sec:OfQA} can be applied, with a few changes. An operator ${\mathcal P}_{ij}(a)$ formed from a path in the quiver is defined identically. However, for orthosymplectic quivers, $\mathcal P_{ij}(a) = \mathcal P_{ji}(a)^T$, and a path yields only one generator when $i \neq j$. Other differences follow from the irreducible representations of the operators $\mathcal P_{ij}(a)$ and the gauge group invariants. There are two cases:
\begin{enumerate}
\item $i\neq j$. The operator transforms in the defining representation of the initial flavour group and the defining representation of the final flavour group. For example, if the flavour node at position $i$ is $O(7)$ and the flavour node at position $j$ is $USp(4)$, $\mathcal P_{ij}(a)$ transforms in the bifundamental irrep of dimension $7\times 4$.
\item $i=j$. The operator has two indices that transform under the flavour group at position $i$. They are symmetrized if the gauge node at the mid point of the path is of $O$-type, or antisymmetrized if the gauge node is of $USp$-type.
\end{enumerate}
The set of operators $\mathcal P_{ij}(a)$ gives us all the generators of the chiral ring. The relations are inherited from those of the \textit{nilpotent cone} $\mathcal N$, and for $\mathcal S_{\mathcal N,\rho}$ are always the Casimir invariants of $G$.
Now, an $O(N_{f_i})$ flavour node (of rank $>0$) always contributes (at least) a path $\mathcal P_{ii}(1)$ of length 2 that starts at $O(N_{f_i})$, goes to the gauge node $USp(N_i)$ and comes back to $O(N_{f_i})$. Since the gauge node in the middle of the path is $USp$, the operator transforms in the second antisymmetrization $\Lambda^2[fund.]_O=[adjoint]_O$. Similarly, a $USp(N_{f_i})$ flavour node always contributes (at least) a path $\mathcal P_{ii}(1)$ of length 2 that starts at $USp(N_{f_i})$, goes to the gauge node $O(N_i)$ and comes back to $USp(N_{f_i})$. Since the gauge node in the middle of the path is $O$, the operator transforms in the second symmetrization $Sym^2[fund.]_{USp}=[adjoint]_{USp}$. Consequently, the adjoint of every flavour group appears as a generator at path length 2.
\paragraph {Example} Consider the balanced quiver based on the partition $(2^2,1^4)$, whose Higgs branch is the Slodowy slice $\mathcal S_{{\cal N},(4,2)}$ to the nilpotent orbit $D[0100]$:
\begin{equation}
{{\cal B}_{D}{({\bf N_f}(2^2,1^4)})} ={~} \ \node{\wver{}{\,\, O(4)}}{USp(4)}-\node{\wver{}{\,\,USp(2)}}{O(6)}-\node{}{USp(4)}-\node{}{O(4)}-\node{}{USp(2)}-\node{}{O(2)}.
\end{equation}
The decomposition of $G$ to $SU(2) \otimes F$ is:
\begin{equation} \label{eq:exbcd1}
SO(8) \rightarrow SU(2)_{\rho} \otimes O(4) \otimes USp(2).
\end{equation}
The Hilbert series of the chiral ring of operators in the Higgs branch has generators ${\mathcal P}_{ij}(a)$ given by the quiver paths in table \ref{tab:BCD7}.
\begin{table}[htp]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$ {\cal P}_{ij}(a) $& Quiver Path & Generator \\
\hline
$ {\cal P} _{1,1}(1) $& $ \node{\fdu{}{\,\, O(4)}}{USp(4)}\ \ \node{}{O(6)}\node{}{USp(4)}\node{}{O(4)}\node{}{USp(2)}\node{}{O(2)} $ & $ \Lambda^2([1,1]t) = [2,0]t^2+[0,2]t^2 $ \\
\hline
$ {\cal P} _{2,2}(1) $&$ \node{}{USp(4)}\ \ \node{\fdu{}{\,\,USp(2)}}{O(6)}\node{}{USp(4)}\node{}{O(4)}\node{}{USp(2)}\node{}{O(2)} $ & $ Sym^2([1]t) = [2]t^2 $ \\
\hline
$ {\cal P} _{2,2}(2) $& $ \node{}{USp(4)}{\scriptstyle\rightleftarrows}\node{\fdu{}{\,\,USp(2)}}{O(6)}\node{}{USp(4)}\node{}{O(4)}\node{}{USp(2)}\node{}{O(2)} $ & $ \Lambda^2([1]t^2) = [0]t^4 $ \\
\hline
$ {\cal P} _{1,2}(1) $&$ \node{\fd{}{\,\, O(4)}}{USp(4)}{\scriptstyle\rightarrow}\node{\fu{}{\,\,USp(2)}}{O(6)}\node{}{USp(4)}\node{}{O(4)}\node{}{USp(2)}\node{}{O(2)} $ & $ [1,1][1] t^3 $ \\
\hline
\end{tabular}
\end{center}
\caption{Generators for Slodowy Slice to $D[0100]$.}
\label{tab:BCD7}
\end{table}
For $D_4$ the Casimir relations contribute $-t^4 - 2t^8 - t^{12}$; the $t^4$ relation cancels against the singlet generator ${\cal P}_{2,2}(2)$, so the PL[HS] read directly from the quiver is:
\begin{equation}
PL[g_{HS}^{Higgs[{{\cal B}_{D} ({ \bf N_f} (2^2,1^4))}]}] = [2,0]t^2+[0,2]t^2 + [2]t^2 + [1,1][1] t^3- 2t^8 - t^{12}.
\end{equation}
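As a consistency check, taking dimensions of the characters and applying the plethystic exponential to this PL reproduces the unrefined Hilbert series of ${\cal S}_{{\cal N},D[0100]}$ listed in table \ref{tab:BCD6}. A minimal sketch in Python (using \texttt{sympy}; not part of the original computations):

```python
from sympy import exp, series, symbols

t = symbols('t')
# Unrefined PL from the quiver: dim([2,0]+[0,2]+[2]) = 9 at t^2,
# dim([1,1][1]) = 4*2 = 8 at t^3, relations at t^8 (twice) and t^12
pl = 9*t**2 + 8*t**3 - 2*t**8 - t**12

# Plethystic exponential: PE[f](t) = exp(sum_{k>=1} f(t^k)/k)
order = 12
pe = exp(sum(pl.subs(t, t**k) / k for k in range(1, order)))

# Unrefined HS of S_{N,D[0100]} from the D_4 table
hs = (1 - t**8)**2 * (1 - t**12) / ((1 - t**2)**9 * (1 - t**3)**8)
diff = series(pe - hs, t, 0, order).removeO()
print(diff.expand())  # 0
```

The vanishing difference confirms that the generators and relations read from the quiver paths reproduce the table entry order by order.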
\FloatBarrier
\subsubsection{Matrices and Relations}
\label{BCDrelations}
Finally, in tables \ref{tab:BCD8910} to \ref{tab:BCD12} we provide a set of algebraic varieties, described by matrices and relations, whose Hilbert series have been computed to be identical to those of the corresponding Slodowy slices $\mathcal S_{\mathcal N,\rho}$ of nilpotent orbits of $B_1$ to $B_3$, $C_1$ to $C_3$ and $D_2$ to $D_3$. The analysis can in principle be continued to higher rank.
\begin{table}[h]
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|}
\hline
Orbit & Partition & Dim. & $\begin{array}{c}\text{Generators;}\\ \text{Degree}\end{array}$ & Relations \\ \hline
B[0]&$(1^3)$&2& $\begin{array}{rc} M_{3\times 3}; & 2 \\ \end{array}$&$\begin{array}{rl} tr(M^2) &=0 \end{array}$ \\ \hline
B[2] & $(3)$ & 0 & - & - \\
\hline
\hline
B[00] & $(1^5)$ & 8 & $\begin{array}{rc} M_{5\times 5}; & 2 \end{array}$ & $\begin{array}{rl} tr(M^2) &= 0 \\ tr(M^4)&=0 \end{array}$ \\ \hline
B[01] & $(2^2,1)$ & 4 & $\begin{array}{rc} N_{2\times 2}; & 2 \\ A_{2\times 1} ; & 3 \end{array}$ & $\begin{array}{rl} tr((N\Omega)^4) &=A^T\Omega N\Omega A \end{array}$ \\ \hline
B[20] & $(3,1^2)$ & 2 & $\begin{array}{rc} M_{2\times 2}; & 2 \\ A_{2\times 1} ; & 4 \end{array}$ & $\begin{array}{rl} tr(M^4) &= A^TA \end{array}$ \\ \hline
B[22] & $(5)$ & 0 & - & - \\ \hline
\hline
B[000] & $(1^7)$ & 18 & $\begin{array}{rc} M_{7\times 7}; & 2 \end{array}$ & $\begin{array}{rl} tr(M^2) &= 0 \\ tr(M^4)&=0 \\ tr(M^6)&=0 \end{array}$ \\ \hline
{B[010]} & $(2^2,1^3)$ & 10 & $\begin{array}{rc} M_{3\times 3}; & 2 \\ N_{2\times 2}; & 2 \\ A_{3\times 2} ; & 3 \end{array}$ & $\begin{array}{rl} tr(M^4) + tr((N\Omega) ^{4}) &=tr(A\Omega A^TM) +tr(A^TA\Omega N\Omega)\\ tr(M^6) + tr((N\Omega)^6) &= tr((A\Omega A^T)^2) \end{array}$ \\ \hline
B[200] & $(3,1^4)$ & 8 & $\begin{array}{rc} M_{4\times 4}; & 2 \\ A_{4\times 1} ; & 4 \end{array}$ & $\begin{array}{rl} tr(M^4) &=A^T A \\ tr(M^6) &= A ^T M^2 A \end{array}$ \\ \hline
B[101] & $(3,2^2)$ & 6 & $\begin{array}{rc} N_{2\times 2}; & 2\\ A_{2\times 1} ; & 3 \\ M_{2\times 2}; & 4 \\ B_{2\times 1} ; & 5 \end{array}$ & $\begin{array}{rl} tr((N\Omega)^4 +(M\Omega)^2)&= B^T\Omega A\\ tr((N\Omega)^6 +(M\Omega)^3)&= B^T \Omega M\Omega A \end{array}$ \\ \hline
B[020] & $(3^2,1)$ & 4 & $\begin{array}{rc} M_{2\times 2}; & 2\\ A_{2\times 1} ; & 4 \\ N_{2\times 2}; & 4 \\ O_{2\times 2} ; & 6 \end{array}$ & $\begin{array}{rl} tr(N)&=0\\ tr(M^4+ N^2) &= A^T A \\ tr(M^6+N^3+O^2) &=A^T N A \end{array}$ \\ \hline
B[220] & $(5,1^2)$ & 2 & $\begin{array}{rc} M_{2\times 2}; & 2 \\ A_{2\times 1} ; & 6 \end{array}$ & $\begin{array}{rl} tr(M^6) & = A^T A \end{array}$ \\ \hline
B[222] & $(7)$ & 0 & - & - \\ \hline
\end{tabular}
\caption[$B_1$, $B_2$ and $B_3$ Slodowy Slice Varieties]
{$B_1$, $B_2$ and $B_3$ varieties, generated by complex matrices $M$, $N$, $O$, $A$ and $B$ and their relations, which have Hilbert series matching Slodowy slices $\mathcal S_{\mathcal N,\rho}$. The matrices $M=-M^T$ and $O=-O^T$ are antisymmetric, $N=N^T$ is symmetric and $\Omega$ represents a square matrix that is antisymmetric and invariant under the action of $USp(2n)$.}
\label{tab:BCD8910}
\end{table}
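As a spot check of the first entry: for a generic complex antisymmetric $3\times 3$ matrix, $tr(M^2)$ is a single non-degenerate quadric in the three independent entries. A minimal sketch in Python (using \texttt{sympy}; illustrative only):

```python
from sympy import Matrix, expand, symbols

a, b, c = symbols('a b c')
# Generic complex antisymmetric 3x3 matrix: the degree-2 generators of B[0]
M = Matrix([[0, a, b],
            [-a, 0, c],
            [-b, -c, 0]])
rel = expand((M * M).trace())
print(rel)  # -2*a**2 - 2*b**2 - 2*c**2
```

The single quadric $tr(M^2)=0$ cuts the three-dimensional space of generators down to a two-dimensional variety, in agreement with the dimension and the Hilbert series $(1-t^4)/(1-t^2)^3$ expected for the B[0] slice.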
\begin{table}[h]
\centering
\small
\begin{tabular}{|c|c|c|c|c|}
\hline
Orbit & Partition & Dim. & $\begin{array}{c}\text{Generators;}\\ \text{Degree}\end{array}$ & Relations \\ \hline
C[0]&$(1^2)$&2& $\begin{array}{rc} N_{2\times 2}; & 2 \\ \end{array}$&$\begin{array}{rl} tr(N^2) &=0 \end{array}$ \\ \hline
C[2] & $(2)$ & 0 & - & - \\ \hline
\hline
C[00] & $(1^4)$ & 8 & $\begin{array}{rc} N_{4\times 4}; & 2 \end{array}$ & $\begin{array}{rl} tr((N\Omega)^2) &= 0 \\ tr((N\Omega)^4)&=0 \end{array}$ \\ \hline
C[10] & $(2,1^2)$ & 4 & $\begin{array}{rc} N_{2\times 2}; & 2 \\ A_{2\times 1} ; & 3 \end{array}$ & $\begin{array}{rl} tr((N\Omega)^4) &=A^T\Omega N\Omega A \end{array}$ \\ \hline
C[02] & $(2^2)$ & 2 & $\begin{array}{rc} M_{2\times 2}; & 2 \\ N_{2\times 2} ; & 4 \end{array}$ & $\begin{array}{rl}tr(N)&=0 \\ tr(M^4) &= tr(N^2) \end{array}$ \\ \hline
C[22] & $(4)$ & 0 & - & - \\ \hline
\hline
C[000] & $(1^6)$ & 18 & $\begin{array}{rc} N_{6\times 6}; & 2 \end{array}$ & $\begin{array}{rl} tr((N\Omega)^2) &= 0 \\ tr((N\Omega)^4)&=0 \\ tr((N\Omega)^6)&=0 \end{array}$ \\ \hline
C[100] & $(2,1^4)$ & 12 & $\begin{array}{rc} N_{4\times 4}; & 2 \\ A_{4\times 1} ; & 3 \end{array}$ & $\begin{array}{rl} tr((N\Omega)^4) &= A^T \Omega N \Omega A\\ tr((N\Omega)^6) & = A^T \Omega (N {\Omega})^3 A \end{array}$ \\
\hline
C[010] & $(2^2,1^2)$ & 8 & $\begin{array}{rc} N_{2\times 2}; & 2 \\ M_{2\times 2};& 2\\ A_{2\times 2};& 3 \\P_{2\times2} ; & 4 \end{array}$ & $\begin{array}{rl} tr(P)&=0\\tr((N\Omega)^4+M^4+P^2) &=tr(A^T\Omega N\Omega A) \\ tr((N\Omega)^6+M^6+P^3) &= tr(A ^T\Omega ( N\Omega) ^2\Omega A) \end{array}$ \\ \hline
C[002] & $(2^3)$ & 6 & $\begin{array}{rc} M_{3\times 3}; & 2\\ N_{3\times 3}; & 4 \\ \end{array}$ & $\begin{array}{rl} tr(N)&=0 \\tr(M^4)&=tr(N^2)\\tr(M^6)&=tr(N^3) \end{array}$ \\ \hline
C[020] & $(3^2)$ & 4 & $\begin{array}{rc} N_{2\times 2}; & 2\\ M_{2\times 2}; & 4 \\ P_{2\times 2} ; & 6 \end{array}$ & $\begin{array}{rl} tr(M \Omega)&=0\\ tr(P \Omega)&=0 \\tr((N\Omega)^4+ (M\Omega )^2) &= 0 \\ tr((N\Omega )^6+(M\Omega )^3+(P\Omega )^2) &=0 \end{array}$ \\ \hline
C[210] & $(4,1^2)$ & 4 & $\begin{array}{rc} N_{2\times 2}; & 2 \\ A_{2\times 1} ; & 5 \end{array}$ & $\begin{array}{rl} tr((N\Omega)^6) &= A^T\Omega N \Omega A \end{array}$ \\ \hline
C[202] & $(4,2)$ & 2 & $\begin{array}{rc} N_{1\times 1}; & 4 \\ A_{1\times 1} ; & 4\\ P_{1\times 1} ; & 6 \end{array}$ & $\begin{array}{rl} tr(N A^2) &= tr(P^2) \end{array}$ \\
\hline
C[222] & $(6)$ & 0 & - & - \\
\hline
\end{tabular}
\caption[$C_1$, $C_2$ and $C_3$ Slodowy Slice Varieties]
{$C_1$, $C_2$ and $C_3$ varieties, generated by complex matrices $M$, $N$, $P$ and $A$ and their relations, which have Hilbert series matching Slodowy slices $\mathcal S_{\mathcal N,\rho}$. The matrix $M=-M^T$ is antisymmetric, $N=N^T$ and $P=P^T$ are symmetric and $\Omega$ represents a square matrix that is antisymmetric and invariant under the action of $USp(2n)$.}
\label{tab:BCD11}
\end{table}
\begin{table}[h]
\centering
\small
\begin{tabular}{|c|c|c|c|c|}
\hline
Orbit & Partition & Dim. & $\begin{array}{c}\text{Generators;}\\ \text{Degree}\end{array}$ & Relations \\ \hline
D[00] & $(1^4)$ & 4 & $\begin{array}{rc} M_{4\times 4}; & 2 \end{array}$ & $\begin{array}{rl} tr(M^2) &= 0 \\ pf(M)&=0 \end{array}$ \\ \hline
D[20] & $(2^2)$ & 2 & $\begin{array}{rc} N_{2\times 2}; & 2 \\ \end{array}$ & $\begin{array}{rl} tr((N\Omega)^2) &=0 \end{array}$ \\ \hline
D[02] & $(2^2)$ & 2 & $\begin{array}{rc} N_{2\times 2}; & 2 \\ \end{array}$ & $\begin{array}{rl} tr((N\Omega)^2) &=0 \end{array}$ \\ \hline
D[22] & $(4)$ & 0 & - & - \\ \hline
\hline
D[000] & $(1^6)$ & 12 & $\begin{array}{rc} M_{6\times 6}; & 2 \end{array}$ & $\begin{array}{rl} tr(M^2) &= 0 \\ tr(M^4)&=0 \\ pf(M)&=0 \end{array}$ \\ \hline
D[011] & $(2^2,1^2)$ & 6 & $\begin{array}{rc} M_{2\times 2}; & 2 \\ N_{2\times 2}; & 2 \\ A_{2\times 2} ; & 3 \end{array}$ & $\begin{array}{rl} tr(A\Omega A^T\Omega) &= 0\\ tr(M^4) + tr((N\Omega)^4) &= tr(A\Omega A^T M +A\Omega N \Omega A^T) \end{array}$ \\ \hline
D[200] & $(3,1^3)$ & 4 & $\begin{array}{rc} M_{3\times 3}; & 2 \\ A_{3\times 1} ; & 4 \end{array}$ & $\begin{array}{rl} \epsilon^{ijk}M_{ij}A_k &= 0 \\ tr(M^4) &= A ^T A \end{array}$ \\ \hline
D[022] & $(3^2)$ & 2 & $\begin{array}{rc} M_{2\times 2}; & 2 \\ N_{2\times 2}; & 4 \end{array}$ & $\begin{array}{rl} tr(N)&=0\\ tr(M^4)&=tr(N^2)\end{array}$ \\ \hline
D[222] & $(5,1)$ & 0 & - & - \\ \hline
\end{tabular}
\caption[$D_2$ and $D_3$ Slodowy Slice Varieties]
{$D_2$ and $D_3$ varieties, generated by complex matrices $M$, $N$, and $A$ and their relations, which have Hilbert series matching Slodowy slices $\mathcal S_{\mathcal N,\rho}$. The matrix $M=-M^T$ is antisymmetric, $N=N^T$ is symmetric and $\Omega$ represents a square matrix that is antisymmetric and invariant under the action of $USp(2n)$. $pf()$ denotes the Pfaffian.}
\label{tab:BCD12}
\end{table}
\FloatBarrier
\section{Introduction}
\label{sec:intro}
The relationships between supersymmetric (``SUSY") quiver gauge theories, the Hilbert series (``HS") of their Higgs and Coulomb branches, and the nilpotent orbits (``NO") of simple Lie algebras $\mathfrak{g}$ were analysed in two recent papers \cite{Hanany:2016gbz, Hanany:2017ooe}. Closures of classical nilpotent orbits appear as Higgs branches of ${\cal N}=2$ quiver theories in $4d$, and also as Coulomb branches of ${\cal N}=4$ quiver theories in $2+1$ dimensions. Both types of theory have 8 supercharges.
The aim herein is to examine systematically the relationships between these SUSY quiver gauge theories and the spaces transverse to nilpotent orbits, known as Slodowy slices. The focus is on the Slodowy slices of the nilpotent orbits of Classical algebras, which are associated with a rich array of $3d$ ${\mathcal N}=4$ quiver theories and dualities. The relationships between SUSY quiver gauge theories and the Slodowy slices of nilpotent orbits of Exceptional algebras remain to be treated.
The mathematical study of Slodowy slices has its roots in \cite{slodowy_1980}, which built on earlier work by Brieskorn \cite{brieskorn1970singular}, Grothendieck and Dynkin \cite{Dynkin:1957um}. This showed that each nilpotent orbit $\cal O_\rho$ of a Lie algebra $\mathfrak g$ of a Classical group $G$ has a transverse slice, or Slodowy slice $\cal S_\rho$, lying within the algebra $\mathfrak g$.\footnote{$\rho$ identifies the embedding of $\mathfrak{su(2)}$ into $\mathfrak g$ that defines the nilpotent orbit.} There is a variety defined by the intersection between the Slodowy slice and the nilpotent cone ${\cal N}$ of the algebra: ${\cal S}_{{\cal N},\rho} \equiv {\cal N} \cap \cal S_{\rho}$. In this paper, we deal almost entirely with these intersections ${\cal S}_{{\cal N},\rho}$ and refer to them loosely as Slodowy slices (except where the context requires otherwise). Each such slice is a singularity that can be characterised by a sub-algebra $\mathfrak f$ of $\mathfrak g$ that commutes with (or stabilises) the $\mathfrak{su(2)}$. In the case of the sub-regular nilpotent orbit, $\mathcal{S}_{\mathcal N,\rho}$ is a Kleinian singularity of type $ADE$.\footnote{For general background on nilpotent orbits the reader is referred to \cite{Collingwood:1993fk}.}
The connection between nilpotent orbits and their Slodowy slices, and instanton moduli spaces, i.e. the solutions of self-dual Yang-Mills equations, was made in \cite{Kronheimer:1990ay}. The use of Dynkin diagrams and quiver varieties to define instantons on ALE spaces was discussed in \cite{nakajima_1994}. The relevance of nilpotent orbits and Slodowy slices to $3d$ ${\cal N} = 4$ quiver theories was later explored in detail in \cite{Gaiotto:2008sa} and \cite{Gaiotto:2008ak}. In this context, they appear as effective gauge theories describing the brane dynamics of a system in Type IIB string theory. Brane systems of the type of \cite{Hanany:1996ie} are relevant for the $A$ series and systems with three-dimensional orientifold planes \cite{Feng:2000eq} for the $BCD$ series\footnote{Note that these brane systems can explicitly realize the transverse slices developed by Brieskorn and Slodowy \cite{brieskorn1970singular,slodowy_1980}. A systematic analysis of transverse slices was carried out by Kraft and Procesi \cite{Kraft:1982fk} and the physics realization was studied in \cite{Cabrera:2016vvv, Cabrera:2017njm}. The concept of transverse slices can be further extended as an operation of subtractions between two quivers \cite{Cabrera:2018ann}.}.
In the course of these latter papers, a class of superconformal field theories (``SCFT") was proposed, with moduli spaces defined by the intersections between Slodowy slices and nilpotent orbits. These are termed $T_{\sigma}^{\rho} (G)$ theories, where $G$ is a Lie group. Several types of Classical quiver theories were identified, along with associated brane configurations, including theories whose Higgs or Coulomb branches correspond to certain varieties ${\cal S}_{{\cal N},\rho}$, and a relationship between S-duality and dualities arising from the $3d$ mirror symmetry \cite{Intriligator:1996ex} of Classical quiver theories was conjectured\footnote{In the case of nilpotent orbits of C and D type, the precise match between quivers and orbits was given in \cite{Benini:2010uu}. Subsequently, \cite{Chacaltana:2012zy} described the relation for B type and unified all classical cases via the introduction of the Barbasch-Vogan map \cite{barbasch_vogan_1985}.}.
For example, in the case of an $A$ series nilpotent orbit $\cal O_{\rho}$, where $\rho$ describes the embedding of $\mathfrak{su(2)}$ into $\mathfrak{su(n)}$ that defines the nilpotent orbit, and $\rho = (1^N)$ corresponds to the trivial nilpotent orbit, these dualities entail that the \textit{Higgs} branch of a linear quiver based on a partition $\rho^T$ yields the closure of the nilpotent orbit $\cal {\bar O}_{\rho}$, while the \textit{Coulomb} branch of a linear quiver based on the partition $\rho$ gives its Slodowy slice ${\cal S}_{{\cal N},\rho}$. The application of $3d$ mirror symmetry to this pair of linear quivers yields a pair of ``balanced" quivers, with the \textit{Coulomb} branch of the former yielding $\cal {\bar O}_{\rho}$ and the \textit{Higgs} branch of the latter yielding ${\cal S}_{{\cal N},\rho}$.\footnote{The notation in the literature regarding partitions and their dual maps has changed a great deal; see \cite[sec.~4]{Cabrera:2017njm} for a summary of the different maps that are relevant to our study and an explicit review of the different conventions used in mathematics and physics.}
More recently, in \cite{Chacaltana:2012zy} and \cite{Mekareeya:2012tn}, nilpotent orbits and Slodowy slices have been used in the study of $6d$ ${\cal N} =(2, 0)$ theories on Riemann surfaces. Relationships between diagram automorphisms of quiver varieties and Slodowy slices are explored in \cite{henderson_licata_2014}. In \cite{sole_kac_valeri_2016} the algebras of polynomial functions on Slodowy slices were shown to be related to classical (finite and affine) W-algebras.
Each Slodowy slice of a sub-algebra ${\mathfrak f}$ of ${\mathfrak g}$ has a ring of holomorphic functions transforming in irreps of the sub-group $F$ of $G$. Our approach is to compute the Hilbert series of these rings. Presented in refined form, such Hilbert series faithfully encode the class function content of Slodowy slices, and can be subjected to further analysis using the tools of the Plethystics Program \cite{Benvenuti:2006qr, Feng:2007ur, Hanany:2014dia}.
Importantly, following a result in \cite{slodowy_1980}, the Hilbert series of Slodowy slices $\mathcal{S}_{\mathcal{N},\rho}$ are always complete intersections, i.e. quotients of geometric series. It was shown in \cite{Cremonesi:2014kwa} how the HS of the Slodowy slices of $A$ series and certain $BCD$ series algebras can be calculated from the Coulomb branches of linear quivers (or from the Higgs branches of their $3d$ mirror duals). \cite{Cremonesi:2014kwa} also identified a relationship between Slodowy slices and the (modified) Hall Littlewood polynomials of $\mathfrak g$, under the mapping $\mathfrak g \to \mathfrak{su(2)} \otimes \mathfrak{f}$.
Methods of calculating Hilbert series for $T_{\sigma}^{\rho} (G)$ theories with multi-flavoured quivers of unitary or alternating $O/USp$ type were developed in \cite{Cremonesi:2014uva}, using both Coulomb branch and Higgs branch methods. As elaborated in \cite{Cabrera:2017ucb}, the calculation of Coulomb branches of quivers of $O/USp$ type requires close attention to the distinction between $SO$ and $O$ groups.
This paper builds systematically on such methods to calculate the Hilbert series of Slodowy slices of closures of nilpotent orbits of low rank Classical Lie algebras and to identify relevant generalisations to arbitrary rank.
In Section \ref{sec:Slodowy} we summarise some facts about nilpotent orbits and review the relationship between a Slodowy slice ${\cal S}_{{\cal N},\rho}$ and the homomorphism $\rho$ defining the embedding of $\mathfrak {su(2)}$ into ${\mathfrak g}$ (and thus of the mapping of irreps of $G$ into the irreps of $SU(2)$) associated with a nilpotent orbit $\cal O_{\rho}$. We give some simple representation theoretic formulae for calculating the dimensions and Hilbert series of a Slodowy slice, given a homomorphism $\rho$.
In Section \ref{sec:ASeries} we treat $A$ series Slodowy slices, summarising the relevant Higgs branch and Coulomb branch formulae, describing the quivers upon which they act, and tabulating the commutant global symmetry group and the Hilbert series of Slodowy slices for all nilpotent orbits up to rank 5. We also build upon the language of $T_{\sigma}^{\rho}(SU(N))$ theories to summarise the known exact $A$ series dualities between quiver theories for Slodowy slices and nilpotent orbits. We find matrix formulations for the generators of $A$ series Slodowy slices and their relations, which explicate the mechanism of symmetry breaking and the residual symmetries of the parent group.
In Section \ref{sec:BCDSeries} we extend this analysis to Slodowy slices of $BCD$ series algebras up to rank 4. We find a complete set of refined Hilbert series, by working with the Higgs branches of multi-flavoured alternating $O/USp$ quivers with appropriately balanced gauge nodes. As in the case of $BCD$ nilpotent orbits \cite{Hanany:2016gbz}, calculation of these Higgs branches requires taking ${\mathbb Z}_2$ averages over the $SO$ and $O^-$ forms of $O$ group characters. We also identify a limited set of Higgs branch constructions based on $D$ series Dynkin diagrams. We summarise the restricted set of Coulomb branch monopole constructions that are available for ${\cal S}_{{\cal N},\rho}$, which are based on alternating $\textit{SO}/USp$ linear quivers. We highlight apparent restrictions on $3d$ mirror symmetry between Higgs and Coulomb branches of $BCD$ quiver theories; these include the requirements that the nilpotent orbit $\cal O_{\rho}$ should be \textit{special}, and that the $O/USp$ quivers should not be ``bad" \cite{Gaiotto:2008ak} due to containing monopole operators with zero conformal dimension. We find matrix formulations for the Higgs branch generators of $BCD$ series Slodowy slices, and their relations, which explicate the mechanism of symmetry breaking and the residual symmetries of the parent group.
Taken together with other recent studies \cite{Hanany:2016gbz, Cabrera:2017ucb}, this analysis of Hilbert series is relevant for the understanding of $T_\sigma^\rho(G)$ theories of type $BCD$. It highlights the difference between orthogonal $O(n)$ and special orthogonal $SO(n)$ nodes and the surrounding problems associated with $3d$ mirror symmetry between orthosymplectic quivers.
The final Section summarises conclusions, discusses open questions and identifies areas for further work. Some notational conventions are detailed in Appendix \ref{apx:1}.
\input{SlodowySlices.tex}
\input{ASeriesQuivers.tex}
\input{BCDSeriesQuivers.tex}
\input{Conclusions.tex}
\paragraph{Acknowledgements}
We would like to thank Stefano Cremonesi and Benjamin Assel for helpful conversations during the development of this project.
S.C. is supported by an EPSRC DTP studentship EP/M507878/1. A.H. is supported by STFC Consolidated Grant ST/J0003533/1, and EPSRC Programme Grant EP/K034456/1.
\section{Discussion and Conclusions}
\label{sec:Conclusions}
\paragraph{Higgs Branch}
We have presented constructions for quivers whose Higgs branches yield Hilbert series corresponding to the Slodowy slices of the nilpotent orbits of $A_1$ to $A_5$ plus $BCD$ algebras up to rank 4. There are essentially two families of quivers, the balanced unitary type $\{{{\cal B}_{A}} = {\cal D}_A, {\cal D}_D \}$ and the canonically balanced orthosymplectic type $\{ {{\cal B}_{B/C/D}} \}$. The balanced unitary quivers have gauge nodes in the pattern of the parent algebra Dynkin diagram and yield constructions for Slodowy slices of simply laced algebras, including all $A$ series slices and $D$ series slices of low dimension. The orthosymplectic quivers yield constructions of all $BCD$ Slodowy slices.
The global symmetry $F$ of a Slodowy slice descends from that of the parent group $G$ (in the case of the slice to the trivial nilpotent orbit), via subgroups of $G$, down to ${\mathbb Z}_2$ symmetries (for the slices of some near maximal nilpotent orbits). The grading of the Hilbert series is such that (i) the sets of Slodowy slices and nilpotent orbits intersect at the nilpotent cone and at the origin and (ii) the sub-regular slices match the known singularities \cite{brieskorn1970singular,slodowy_1980, Cabrera:2017njm}. In between, we have shown how the Slodowy slice symmetry groups and mappings of $G$ representations to $SU(2) \otimes F$ follow, via the Higgs branch formula, from the $SU(2)$ homomorphisms into $G$ of the associated nilpotent orbits.
We anticipate that these results generalise to Classical groups of arbitrary rank.
\paragraph{Coulomb Branch}
As is known, in the case of the $A$ series, the existence of a bijection between partitions and their transposes (the Lusztig-Spaltenstein map) leads to a complete set of Coulomb branch constructions for Slodowy slices; these yield the same set of Hilbert series as the Higgs branch constructions. The Coulomb branch constructions are based on applying the unitary monopole formula to linear quivers ${{\cal L}_{A}}$, which are not generally balanced.
In the case of the $BCD$ series, however, other than for accidental isomorphisms with the $A$ series, this study has clarified that (i) the existence of suitable linear orthosymplectic quivers $\{ {{\cal L}_{BC}}, {{\cal L}_{CD}}, {{\cal L}_{DC}} \}$ is limited to the Slodowy slices of special nilpotent orbits, (ii) within these, the applicability of the Coulomb branch orthosymplectic monopole formula is restricted to those quivers that have positive conformal dimension, and (iii) the resulting Hilbert series are only available in unrefined form.
\paragraph{Slodowy Slice Formula}
The refined Hilbert series of a Slodowy slice can also be obtained directly from the mapping of the adjoint representation of $G$ into $SU(2) \otimes F$, using \ref{eq:SS11}. This mapping follows from the decomposition of the fundamental/vector of $G \rightarrow SU(2) \otimes F$ under \ref{eq:Agens1} or \ref{eq:BCDgens1}.
\paragraph{Dualities and $3d$ Mirror Symmetry}
The $A$ series findings verify the known $3d$ mirror symmetry relations \ref{eq:Aquivers4} and \ref{eq:Aquivers4aa}. Under these, linear or balanced quivers based on partitions $\rho$ can be used either for Higgs branch or Coulomb branch constructions; one combination yields a Slodowy slice and the other combination yields a (generally different) dual nilpotent orbit under the Lusztig-Spaltenstein map $\rho^T$, as illustrated in figure \ref{fig:msA}.
\begin{figure}[htp]
\centering
\begin{displaymath}
\xymatrix{
& {{\cal L}_A(\rho ^T)} \ar[dl]|{Higgs} \ar[dr]|{Coulomb} & & & {{\cal L}_A(\rho)} \ar[dl]|{Higgs} \ar[dr]|{Coulomb} & \\
{\bar {\cal O}}_\rho &\text{\scriptsize \it 3d~Mirror Symmetry} \ar[d] \ar[u] & {{\cal S}}_{{{\cal N}},{\rho^T} } & {\bar {\cal O}}_{\rho^T} & \text{\scriptsize \it 3d~Mirror Symmetry} \ar[d] \ar[u] & {{\cal S}}_{{{\cal N}},{\rho} } \\
& {{\cal B}_A( {{\mathbf N_f}( \rho ^T)})} \ar[ul]|{Coulomb} \ar[ur]|{Higgs} & & & {{\cal B}_A( {{\mathbf N_f}( \rho)})} \ar[ul]|{Coulomb} \ar[ur]|{Higgs} & }
\end{displaymath}
\caption[A Series 3d Mirror Symmetry]{A Series 3d Mirror Symmetry. All constructions give refined Hilbert series for a partition $\rho$ and its dual $\rho^T$ under the Lusztig-Spaltenstein map.}
\label{fig:msA}
\end{figure}
The analysis of $BCD$ series quivers shows, however, that such a picture of dualities \cite{Gaiotto:2008ak} does not extend to the $BCD$ series, other than in a limited way, due to the various restrictions on Coulomb branch constructions, discussed above. The refined (i.e. faithful) HS relationships for nilpotent orbits of the $BCD$ series can be summarised:
\begin{equation}
\label{eq:conc2}
\begin{aligned}
{{\cal S}}_{{{\cal N}},\rho } & = Higgs \left[ {{\cal B}_{B/C/D}( {{\mathbf N_f}( \rho)})} \right],\\
{\bar {\cal O}}_\rho & = Higgs\left[ {{\cal L}_{B/C/D}(\rho ^T)} \right],
\end{aligned}
\end{equation}
and, for $D$ series Dynkin type quivers of Characteristic height 2:
\begin{equation}
\label{eq:conc3}
\begin{aligned}
{ \cal \bar O}_\rho = Coulomb\left[ {{{\cal D}_D}([\rho] )} \right],\\
S_{ {\cal N} , {d_{BV}(\rho)} } = Higgs\left[ {{{\cal D}_D}([\rho ])} \right],
\end{aligned}
\end{equation}
where $d_{BV}(\rho)$ is the dual partition to $\rho$ under the $D$ series Barbasch-Vogan map.
If we restrict ourselves (i) to \textit{special} nilpotent orbits, (ii) to quivers with positive conformal dimension, and (iii) to \textit{unrefined} Hilbert series, then we can summarise the more limited $3d$ mirror symmetry for the $BCD$ series as in figure \ref{fig:msBCD}.
\begin{figure}[htp]
\centering
\scriptsize
\begin{displaymath}
\xymatrix{
& {{\cal L}_{BC/CD/DC}(\rho ^T)} {\ar[dl]|{\color{black} {\mathop {Higgs}\limits_{(O)}}}} \ar@{.>}[dr]|{\mathop {Coulomb}\limits_{(SO)} } & & & {{\cal L}_{BC/CD/DC}({d_{BV}(\rho)}^T)} \ar@{-->}[dl]|{{\mathop {Higgs}\limits_{(O)}}} \ar@{.>}[dr]|{\mathop {Coulomb}\limits_{(SO)} } & \\
{\bar {\cal O}}_\rho & \text{\scriptsize \it 3d~Mirror Symmetry} \ar[d] \ar[u] & {{\cal S}}_{{{\cal N}},{d_{BV}(\rho)} } & {\bar {\cal O}}_{d_{BV}(\rho)} & \text{\scriptsize \it 3d~Mirror Symmetry} \ar[d] \ar[u] & {{\cal S}}_{{{\cal N}},{\rho} } \\
& {{\cal B}_{B/C/D}( {{\mathbf N_f}( d_{BV}(\rho))})} \ar@{.>}[ul]|{\mathop {Coulomb}\limits_{(O/SO)} } \ar@{-->}[ur]|{{\mathop {Higgs}\limits_{(O)}}} & & & {{\cal B}_{B/C/D}( {{\mathbf N_f}( \rho)})} \ar@{.>}[ul]|{\mathop {Coulomb}\limits_{(O/SO)} } \ar[ur]|{{\mathop {Higgs}\limits_{(O)}}} & }
\end{displaymath}
\caption[BCD Series 3d Mirror Symmetry]{BCD Series 3d Mirror Symmetry. Solid arrows indicate Higgs branches which give \textit{refined} Hilbert series for a partition $\rho$. Dashed arrows indicate Higgs branches which give refined Hilbert series for the Barbasch-Vogan dual partition $d_{BV}(\rho)$ of a \textit{special} nilpotent orbit. Dotted arrows indicate Coulomb branches which give \textit{unrefined} Hilbert series for those \textit{special} nilpotent orbits whose quivers have positive conformal dimension.}
\label{fig:msBCD}
\end{figure}
Note that even for these cases there is a further obstruction: the difference between $SO$ and $O$ nodes in the quiver \cite{Cremonesi:2014uva,Cabrera:2017ucb}. For the $A$ series, $3d$ mirror symmetry involves a pair of quivers for which the Coulomb branch and Higgs branch are swapped. In the $BCD$ series, however, once the gauge algebra of the quiver is specified there is still the question of whether the gauge groups are orthogonal or special orthogonal. As shown in figure \ref{fig:msBCD}, a different choice needs to be made depending on the branch of the quiver.
This is not quite the same as $3d$ mirror symmetry.
On the other hand, there is a pair of SCFTs, $T_\sigma^\rho(G)$ and $T^\sigma_\rho(G^\vee)$ \cite{Gaiotto:2008ak,Benini:2010uu,Chacaltana:2012zy}, which are predicted to have precisely the two different gauge algebras depicted in one of the diagrams of figure \ref{fig:msBCD}: if $T_\sigma^\rho(G)$ corresponds to quiver ${\cal L}_{BC/CD/DC}(\rho ^T)$, then $T^\sigma_\rho(G^\vee)$ has the quiver ${\cal B}_{B/C/D}( {{\mathbf N_f}( d_{BV}(\rho))})$, along with the Higgs and Coulomb branches depicted in the same diagram. However, the present results, together with \cite{Cremonesi:2014uva,Hanany:2016gbz,Cabrera:2017ucb}, show that this cannot be the case, since there are factors of $\mathbb{Z}_2$ in the gauge group of the quiver for $T_\sigma ^\rho (G)$ that differ depending on the branch being computed. This is a very intriguing point that needs to be addressed in future studies, especially since it has consequences for the way effective gauge theories can be employed to understand the dynamics of Dp-branes in the presence of Op-planes.
Thus, it is the Higgs branch that provides the means to conduct a refined analysis of the HS of $BCD$ series nilpotent orbits and Slodowy slices. These represent only a subset of the $BCD$ series moduli spaces, $S_{\rho_1,\rho_2}\equiv \bar{\mathcal{O}}_{\rho_1}\cap S_{\rho_2}$, which include nilpotent orbits $S_{\rho,trivial}$ and Slodowy slices $S_{{\cal N},\rho}$ as limiting cases.\footnote{Such $BCD$ series moduli spaces $S_{\rho_1,\rho_2}$ generalise naturally to any pair of nilpotent orbits (unlike $T_\sigma^\rho(O/USp)$ theories, which are restricted to special orbits).} The indications are that Higgs branch methods should provide a fruitful means of analysing such spaces.
\paragraph{Further Work}
Besides a study of quivers for $S_{\rho_1,\rho_2}$ moduli spaces, it would be interesting to extend this analysis to the Slodowy slices of Exceptional groups. While Higgs branch quiver constructions are not available for nilpotent orbits of Exceptional groups, a limited number of Coulomb branch quiver constructions are known. For Slodowy slices, where the situation is somewhat reversed by dualities, some Higgs branch constructions should be available, based, for example, on Dynkin diagrams of the E series.
With respect to the Coulomb branch, it would be interesting to understand (i) whether some non-linear fugacity map can be developed for the orthosymplectic monopole formula in order to obtain \textit{refined} Hilbert series, and (ii) whether a modified monopole formula can be found that avoids the zero conformal dimension problem associated with many orthosymplectic quivers. A recent advance has been made on this front in \cite{Assel:2018exy}, where Coulomb branches of \textit{bad} quivers with a single $C_r$ gauge node have been computed. A case that also appears in our study is the quiver $[D_{2r}]-(C_r)$, where the expected Slodowy slices are formed in quite a surprising way\footnote{\cite{Assel:2018exy} computes that there are two most singular points in this Coulomb branch, related by a $\mathbb{Z}_2$ action. Crucially, at each point, an SCFT denoted $T_{USp(2r),2r}$ has a Coulomb branch identical to the expected Slodowy slice (identified in \cite{Assel:2018exy} as the Higgs branch of the corresponding ${\cal D}_{G} ({\mathbf N_f})$ quiver). }. It remains a challenge to develop such techniques to obtain Coulomb branch calculations for the Slodowy slices of the other quivers with $\Delta = 0$ in our tables.
More generally, the family of transverse spaces and symmetry breaking associated with Slodowy slices provides a rich basis set of quivers that can be extended or used as building blocks to understand the relationships between a wide array of quiver theories and their Higgs and/or Coulomb branches.
\section{Slodowy Slices}
\label{sec:Slodowy}
\subsection{Relationship to Nilpotent Orbits}
\label{sec:Slodowy1}
Each nilpotent orbit $\cal O_{\rho}$ of a Lie algebra $\mathfrak g$ is defined by the conjugacy class $\mathfrak g^X$ of nilpotent elements $X \in \mathfrak g$ under the group action \cite{Collingwood:1993fk}. Each nilpotent element $X$ forms part of a standard $\mathfrak{su(2)}$ triple $\{ X,Y,H \}$ and, following the Jacobson-Morozov theorem, the conjugacy classes are in one to one correspondence with the equivalence classes of embeddings of $\mathfrak{su(2)}$ into $\mathfrak g$, described by some homomorphism $\rho$. The closure of each orbit $\cal {\bar O}_{\rho}$ can, as discussed in \cite{Hanany:2016gbz, Hanany:2017ooe}, be described as a moduli space, by a refined Hilbert series of representations of $G$, graded according to the degree of symmetrisation of the underlying nilpotent element.
The closures ${\cal {\bar O}}_{\rho}$ of the nilpotent orbits of $\mathfrak g$ form a poset, ordered according to their inclusion relations\footnote{See for example the Hasse diagrams in \cite{Kraft:1982fk}.}. The closure of the \textit{maximal} (also termed \textit{principal} or \textit{regular}) nilpotent orbit is called the nilpotent cone $\cal N$; it contains all the orbits $\cal {O}_{\rho}$ and has dimension $| {\cal N} |$ equal to that of the rootspace of $\mathfrak g$. The poset of nilpotent orbits contains a number of canonical orbits. These include the trivial nilpotent orbit (described by the Hilbert series $1$ with dimension zero), a minimal (lowest dimensioned non-trivial) nilpotent orbit, a sub-regular orbit of dimension $\left| {\cal N} \right|-2$ and the maximal nilpotent orbit:
\begin{equation}
\label{eq:SS_NO}
\{ 0\} = {{\cal O}_{trivial}} \subset {{\cal{\bar O}}_{minimal}} \ldots \subset {{\cal{\bar O}}_{sub - regular}} \subset {{\cal{\bar O}}_{maximal}}=\cal N.
\end{equation}
All nilpotent orbits have an even (complex) dimension and are HyperK\"ahler cones.
The closure of the minimal nilpotent orbit of $\mathfrak g$ corresponds to the reduced single G-instanton moduli space \cite{Kronheimer:1990ay, Benvenuti:2010pq}. As discussed in \cite{Hanany:2016gbz}, the Hilbert series of the nilpotent cone has a simple expression in terms of the symmetrisations of the adjoint representation of $G$, modulo Casimir operators, or equivalently in terms of (modified) Hall Littlewood polynomials:
\begin{equation}
\label{eq:SS1a}
\begin{aligned}
{g_{HS}^{\cal N}}& {{ = PE}}\left[ {\chi _{adjoint}^G{t^2} - \sum\limits_{i = 1}^r {t^{2{d_i}}} } \right],\\
{g_{HS}^{\cal N}}& = mHL_{singlet}^G \left[ {{t^2}} \right],
\end{aligned}
\end{equation}
where $t$ is a counting fugacity, $\chi _{adjoint}^G$ is the character of the adjoint representation and $\{d_1,\ldots,d_r\}$ are the degrees of the symmetric Casimirs of $G$, which is of rank $r$.
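For the simplest case, $A_1$, the formula \ref{eq:SS1a} reduces to $PE[3t^2 - t^4]$, and the expansion can be checked directly: the coefficient of $t^{2k}$ should be $2k+1$, the dimension of the $SU(2)$ irrep $[2k]$, as expected for the $A_1$ nilpotent cone ${\mathbb C}^2/{\mathbb Z}_2$. The following minimal sketch (assuming a Python environment with sympy; illustrative only, not part of any calculation in the text) performs this check.

```python
import sympy as sp

t = sp.symbols('t')

# Unrefined nilpotent-cone Hilbert series for A_1 (su(2)):
# PE[ dim(adjoint) t^2 - t^(2 d_1) ] with dim(adjoint) = 3 and Casimir
# degree d_1 = 2, i.e. PE[3 t^2 - t^4] = (1 - t^4) / (1 - t^2)^3.
hs = (1 - t**4) / (1 - t**2)**3

ser = sp.series(hs, t, 0, 12).removeO()
# The A_1 nilpotent cone is C^2/Z_2; the coefficient of t^(2k) is the
# dimension 2k+1 of the SU(2) irrep [2k].
coeffs = [ser.coeff(t, 2*k) for k in range(6)]
print(coeffs)  # [1, 3, 5, 7, 9, 11]
```

The same pattern, a plethystic exponential of the graded adjoint minus Casimir relations, extends verbatim to higher rank.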
Slodowy slices are defined as \textit{slices} $\cal S_\rho\subseteq \mathfrak{g}$ that are \textit{transverse} in the sense of \cite{slodowy_1980} to the orbit $\mathcal O _\rho$.
The varieties $\cal S_{\cal N,\rho}$ that concern the present study are slices inside the nilpotent cone $\mathcal N$. They can be constructed as:
\begin{equation}
\label{eq:SS2}
{\cal S}_{{\cal N},\rho} \equiv \cal S_\rho \cap {{\cal N}}.
\end{equation}
Naturally, the slice $\cal S_{\cal N,\rho}$ transverse to the trivial nilpotent orbit is the entire nilpotent cone $\cal N$ and the slice $\cal S_{\cal N,\rho}$ transverse to the maximal nilpotent orbit is trivial. In between these limiting cases, however, the Slodowy slices do not match any nilpotent orbit. Consequently we have a complementary poset of Slodowy slices:
\begin{equation}
\label{eq:SS3}
{{\cal N}} = {{{\cal S}}_{trivial}} > {{{\cal S}}_{minimal}} \ldots > {{{\cal S}}_{sub - regular}} > {{{\cal S}}_{maximal}} = \{ 0\} .
\end{equation}
\subsection{Dimensions and Symmetry Groups}
\label{sec:SS_Dim}
The dimensions of a Slodowy slice ${{\cal S}_{{\cal N},\rho}}$ plus those of the nilpotent orbit $\cal {O}_{\rho}$ combine to the dimensions of the nilpotent cone $\cal {N}$:
\begin{equation}
\label{eq:SS4}
\left| {{\cal S}_{{\cal N},\rho}} \right| + \left| {{{{\cal O}}_\rho }} \right| = \left| {{\cal N}} \right| = \left| {\mathfrak g} \right| - rank[ {\mathfrak g} ].
\end{equation}
The elements of the Slodowy slice ${{\cal S}_{{\cal N},\rho}}$ lie in a subalgebra $ \mathfrak f$, which is the centraliser of the nilpotent element $X \in \mathfrak g$, so that $\left[ {X,{ \mathfrak f}} \right] = 0$, and $ \mathfrak f$ is often termed the commutant of $ \mathfrak{su(2)}$ in $ \mathfrak g$. The structure of $ \mathfrak f$ and the dimensions of ${{\cal S}_{{\cal N},\rho}}$ and $\cal {O}_{\rho}$ can be determined by analysing the embedding of ${ \mathfrak {su(2)}} \rightarrow \mathfrak g$.
Following \cite{Dynkin:1957um}, a homomorphism $\rho$ can be described by a root space map from $ \mathfrak g$ to $\mathfrak {su(2)}$, and this is conveniently encoded in a Characteristic set of Dynkin labels.\footnote{A Characteristic $G[\ldots]$ is distinct from highest weight Dynkin labels $[\ldots,\ldots]_G$.} The Characteristic $[q_1\ldots q_r]$ provides a map from the simple root fugacities $\{z_1,\ldots,z_r\}$ of $ \mathfrak g$ to the simple root fugacity $\{z\}$ of $\mathfrak {su(2)}$:
\begin{equation}
\label{eq:SS5}
\rho \left[ {{q_1} \ldots {q_r}} \right]:\left\{ {{z_1}, \ldots ,{z_r}} \right\} \to \left\{ {{z^{\frac{{{q_1}}}{2}}}, \ldots ,{z^{\frac{{{q_r}}}{2}}}} \right\},
\end{equation}
where the labels $q_i \in \{0,1,2\}$. This induces corresponding weight space maps under which each representation of $G$ of dimension $N$ branches to representations $[n]$ of $SU(2)$ \textit{at some multiplicity} $m_n$. This branching is conveniently described using partition notation, $\left( |[N-1]|^{m_{N-1}}, \ldots ,|[ n ]|^{m_n}, \ldots ,1^{m_0} \right)$, which lists the dimensions of the $SU(2)$ irreps, using exponents to track multiplicities. These partitions are tabulated in \cite{Hanany:2016gbz} for the key irreps of Classical groups up to rank 5, identifying each nilpotent orbit by its Characteristic.
For example, the homomorphism $\rho$ with Characteristic $[202]$, which generates the 10 dimensional nilpotent orbit of $A_3$, induces the following maps:
\begin{equation}
\label{eq:SS6}
\begin{aligned}
\rho \left[ {202} \right]:& \left\{ {{z_1},{z_2},{z_3}} \right\} \to \left\{ {z,1,z} \right\}, & & \\
\rho \left[ {202} \right]:& \left[ {1,0,1} \right] \to \left[ 4 \right] + 3 \otimes \left[ 2 \right] + \left[ 0 \right] & \iff & \chi _{adjoint}^{{A_3}} \to \left( {5,{3^3},1} \right),\\
\rho \left[ {202} \right]:& \left[ {1,0,0} \right] \to \left[ 2 \right] + \left[ 0 \right] & \iff & \chi _{fundamental}^{{A_3}} \to \left( {3,1} \right).
\end{aligned}
\end{equation}
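The consistency of the maps in \ref{eq:SS6} can be verified at the level of $SU(2)$ characters, using the fact that the adjoint of $A_3$ is the product of the fundamental with its conjugate, minus a singlet, and that $SU(2)$ irreps are self-conjugate. A short sympy sketch (assuming a Python environment; illustrative only):

```python
import sympy as sp

z = sp.symbols('z')

def chi(n):
    """SU(2) character of the irrep with Dynkin label [n] (dimension n + 1)."""
    return sum(z**(n - 2*j) for j in range(n + 1))

# Under rho[202], the fundamental [1,0,0] of A_3 branches to [2] + [0].
fund = chi(2) + chi(0)
# Adjoint of A_3 = fundamental x antifundamental - singlet;
# under SU(2) the antifundamental branches identically.
adj = sp.expand(fund * fund - 1)
# Expected branching: [4] + 3 [2] + [0], i.e. the partition (5, 3^3, 1).
expected = sp.expand(chi(4) + 3*chi(2) + chi(0))
print(sp.expand(adj - expected) == 0)  # True
```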
These $SU(2)$ partitions are invariant under the symmetry group $F \subseteq G$ of the Slodowy slice and hence the multiplicities encode representations of $F$.
Under the branching, the adjoint representation of $G$ decomposes to representations of the product group $SU(2) \otimes F$ with branching coefficients $a_{nm}$:
\begin{equation}
\label{eq:SS7}
\begin{aligned}
\chi _{adjoint}^G & \to \bigoplus \limits_{[n][m]} {a_{nm}} \left( {\chi _{[n]}^{SU(2)}\mathop \bigotimes \chi _{[m]}^F} \right).
\end{aligned}
\end{equation}
Other than for the trivial nilpotent orbit (in which the adjoint of $G$ branches to itself times an $SU(2)$ singlet), the adjoint of $SU(2)$ and the adjoint (if any) of $F$ each appear separately in the decomposition, so that $rank[G]\ge rank[F]\ge 0$. Along with the requirement that any multiplicities $m_n$ appearing in a partition must be dimensions of representations of $F$, this often makes it possible to determine the Lie algebra $\mathfrak f$ of the Slodowy slice directly from the partition data. In the example \ref{eq:SS6} the presence of a single $SU(2)$ singlet in the partition of the adjoint of $A_3$ entails that the symmetry group of the Slodowy slice to the $[202]$ orbit is simply $U(1)$.
The adjoint partition data also permits direct calculation of the complex dimensions of a Slodowy slice or nilpotent orbit, by summing multiplicities of SU(2) irreps or, equivalently, dimensions of $F$ irreps:
\begin{equation}
\label{eq:SS8}
\begin{aligned}
\left| {{S_\rho }} \right| & = \sum\limits_{[n][m]} {{a_{nm}}\left| {\chi _{[m]}^F} \right|},\\
\left| {{{{\cal O}}_\rho }} \right| & = \left| G \right| - \left| {{\cal S}_{\rho}} \right|,\\
\left| {{\cal S}_{{\cal N},\rho}} \right| & = \left| {{S_\rho }} \right|{}- rank[G].
\end{aligned}
\end{equation}
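Applied to the $\rho[202]$ example of \ref{eq:SS6}, the bookkeeping in \ref{eq:SS8} is elementary enough to script. The sketch below (plain Python, illustrative only) recovers $|{S}_\rho| = 5$, $|{{\cal O}}_\rho| = 10$, matching the stated dimension of the orbit, and $|{\cal S}_{{\cal N},\rho}| = 2$.

```python
# Dimension bookkeeping for the rho[202] example in A_3, using the adjoint
# partition (5, 3^3, 1), i.e. the SU(2) content [4] + 3 [2] + [0].
multiplicities = {4: 1, 2: 3, 0: 1}   # SU(2) irrep [n] -> multiplicity m_n

dim_G, rank_G = 15, 3                 # |A_3| = 15, rank[A_3] = 3
dim_S = sum(multiplicities.values())  # |S_rho|: total multiplicity
dim_O = dim_G - dim_S                 # |O_rho| = |G| - |S_rho|
dim_SN = dim_S - rank_G               # |S_{N,rho}| = |S_rho| - rank[G]

print(dim_S, dim_O, dim_SN)  # 5 10 2
```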
\subsection{Hilbert Series}
\label{sec:SS_HS}
The branching of the adjoint representation of $G$ determines the generators of the Slodowy slice. If the decomposition \ref{eq:SS7} is known, the Hilbert series for the Slodowy slice can be derived from the HS of the nilpotent cone by substitution under a particular choice of grading. Consider the map $\tilde \rho$ of the adjoint that is obtained from \ref{eq:SS7} by the replacement of $SU(2)$ irreps by their highest weight fugacities $\chi _{[n]}^{SU(2)} \to {t^n}$, taking the particular choice of $t$ from \ref{eq:SS1a} as the counting variable:
\begin{equation}
\label{eq:SS9}
\begin{aligned}
\tilde \rho :\chi _{adjoint}^G &\to \bigoplus \limits_{[n][m]} {a_{nm}} { \chi _{[m]}^{{F}} {t^n}}.
\end{aligned}
\end{equation}
When the adjoint map \ref{eq:SS9} is applied to the generators of the nilpotent cone \ref{eq:SS1a}, the replacement of the $SU(2)$ representations $[n]$ by the counting fugacity $t^n$ entails that the resulting Hilbert series only transforms in the symmetry group of the Slodowy slice. Thus, $g_{HS}^{{\cal S}_{{\cal N},\rho}} =g_{HS}^{ {\left.{{\cal N}} \right|_{\tilde \rho }}}$,
or, written more explicitly:
\begin{equation}
\label{eq:SS11}
\begin{aligned}
g_{HS}^{{\cal S}_{{\cal N},\rho}}(x,t) &=PE\left[ {{\left. {\chi _{adjoint}^G} \right|_{\tilde \rho}}{~} {t^2} - \sum\limits_{i = 1}^r {t^{2{d_i}}} } \right]\\
&= PE\left[ {\mathop \bigoplus \limits_{[n][m]}^{} {a_{nm}} \chi _{[m]}^F{t^{n + 2}} - \sum\limits_{i = 1}^r {{t^{2{d_i}}}} } \right].
\end{aligned}
\end{equation}
The expression \ref{eq:SS11} gives the \textit{refined} Hilbert series of the Slodowy slice in terms of its generators, which are representations of the Slodowy slice symmetry group $F$, at some counting degree in $t$, less its relations, which are set by the degrees of the Casimirs of $G$.\footnote{This construction for Slodowy slices is simpler, but equivalent to the Hall Littlewood method presented in \cite{Cremonesi:2014uva}.}
Importantly, an \textit{unrefined} Hilbert series, with representations of $F$ replaced by their dimensions, ${m_n} = \sum\limits_m {{a_{nm}}} |\chi _{[m]}^F|$, can be calculated directly from the adjoint partition under $\rho$, without knowledge of the precise details of the embedding \ref{eq:SS7}:
\begin{equation}
\label{eq:SS12}
\begin{aligned}
g_{HS}^{{\cal S}_{{\cal N},\rho}} (1,t) & = PE\left[ {\sum\limits_n^{} {{m_n}} {t^{n+2}} - \sum\limits_{i = 1}^r {{t^{2{d_i}}}} } \right].
\end{aligned}
\end{equation}
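Continuing the $\rho[202]$ example: the adjoint partition $(5,3^3,1)$ gives generators at $t^2$, $3t^4$ and $t^6$, while the Casimir degrees $\{2,3,4\}$ of $A_3$ give relations at $t^4$, $t^6$ and $t^8$, so \ref{eq:SS12} collapses to $PE[t^2 + 2t^4 - t^8]$. Since $[202]$ labels the sub-regular orbit of $A_3$, this should, and does, agree with the $A_3$ entry of table \ref{table:SS1}. A sympy sketch (assuming a Python environment; illustrative only):

```python
import sympy as sp

t = sp.symbols('t')

def PE(terms):
    """Plethystic exponential of sum_k c_k t^k: the product (1 - t^k)^(-c_k)."""
    expr = sp.Integer(1)
    for power, coeff in terms.items():
        expr *= (1 - t**power)**(-coeff)
    return expr

# Unrefined HS of S_{N,rho[202]} in A_3: generators t^(n+2) with m_0 = 1,
# m_2 = 3, m_4 = 1; relations from Casimir degrees {2, 3, 4} -> t^4, t^6, t^8.
slice_hs = PE({2: 1, 4: 3, 6: 1}) * PE({4: -1, 6: -1, 8: -1})
# Sub-regular table entry for A_r with r = 3: PE[2 t^4 + t^2 - t^8].
table_hs = PE({2: 1, 4: 2}) * PE({8: -1})
print(sp.cancel(slice_hs - table_hs) == 0)  # True
```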
Finally, the freely generated Hilbert Series for the canonical Slodowy slices ${\cal S}_{\rho}$ are related to those of their nilpotent intersections ${\cal S}_{{\cal N},\rho}$ simply by the exclusion of the Casimir relations:
\begin{equation}
\label{eq:SS13}
\begin{aligned}
g_{HS}^{{S_\rho }}(x,t){} \equiv g_{HS}^{{S_{N,\rho }}}(x,t)~PE\left[ {\sum\limits_{i = 1}^r {{t^{2{d_i}}}} } \right]= PE\left[ {\mathop \bigoplus \limits_{[n][m]}^{} {a_{nm}} \chi _{[m]}^F{t^{n + 2}} } \right].
\end{aligned}
\end{equation}
In Sections \ref{sec:ASeries} and \ref{sec:BCDSeries} we set out the quiver constructions that provide a comprehensive method for identifying the decomposition \ref{eq:SS7} and for calculating the refined Hilbert series of the Slodowy slices ${\cal S}_{{\cal N},\rho}$.
\subsection{Sub-Regular Singularities}
\label{sec:SS_Singular}
As shown in \cite{brieskorn1970singular,slodowy_1980}, the Slodowy slices of sub-regular orbits $ {{\cal S}_{{\cal N},subregular}} $ take the form of ADE type singularities, ${\mathbb C}^2 / \Gamma $, where $\Gamma$ is a finite group of type ADE. Under the nilpotent orbit grading by $t^2$ used herein, these take the forms in table \ref{table:SS1}. The intersection $ {{\cal S}_{{\cal N},subregular}} $ is an example of a transverse slice between adjacent nilpotent orbits; all such transverse slices of Classical algebras were classified by Kraft and Procesi in \cite{Kraft:1982fk}.
\begin{table}[htp]
\begin{tabular}{|c|c|c|c|}
\hline
$\text{Group}$&$ \text{Singularity}$&$ \text{Dimension} $&$ \text{Hilbert Series} $\\
\hline
$A_r$ & ${{{\hat A}_r} \equiv {{\mathbb C}^2}/{{\mathbb Z}_{r + 1}}}$&$ 2 $ & ${PE \left[ 2{t^{r + 1}} + {t^2} - {t^{2r + 2}} \right]} $\\
$B_r$ & ${{{\hat A}_{2r-1}} \equiv {{\mathbb C}^2}/{{\mathbb Z}_{2r }}}$&$ 2 $ & ${{PE \left[ 2{t^{2r}} + {t^2} - {t^{4r}} \right] }} $\\
$C_{r>1}$ & ${{{\hat D}_{r+1}} \equiv {{\mathbb C}^2}/{{ Dic}_{r - 1}}}$&$ 2 $ & ${PE \left[{{t^{2r - 2}} + {t^{2r}} + {t^4} - {t^{4r}}} \right]} $\\
$D_{r>2}$ & ${{{\hat D}_r} \equiv {{\mathbb C}^2}/{{ Dic}_{r-2}}}$&$ 2 $ & ${PE \left[ {{t^{2r - 4}} + {t^{2r - 2}} + {t^4} - {t^{4r - 4}}} \right]} $\\
\hline
\end{tabular}
\footnotesize \text{The dicyclic group of order $4k$ is denoted as $Dic_k$.}
\caption{Sub-regular Slodowy Slices of Classical Groups}
\label{table:SS1}
\end{table}
This known pattern of singularities amongst the Slodowy slices of subregular orbits, along with the known forms of trivial and maximal Slodowy slices and dimensions, provide consistency checks on the grading methods and constructions given herein.
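One of these checks can be made concrete. For the $A_r$ entry of table \ref{table:SS1}, the invariant ring of ${\mathbb C}^2/{\mathbb Z}_{r+1}$, with ${\mathbb Z}_{r+1}$ acting on coordinates $(u,v)$ as $(u,v) \to (\omega u, \omega^{-1} v)$ and with $u,v$ assigned $t$-degree 1 (an assumption chosen to reproduce the table's grading), is spanned by monomials $u^a v^b$ with $a \equiv b \pmod{r+1}$. Counting these directly should reproduce the series expansion of $PE[2t^{r+1} + t^2 - t^{2r+2}]$. A Python sketch (illustrative only):

```python
import sympy as sp

t = sp.symbols('t')

def invariant_count(n, order):
    """Hilbert series coefficients of C[u,v]^{Z_n}, where Z_n acts as
    u -> w u, v -> w^(-1) v: count monomials u^a v^b with a = b (mod n)."""
    return [sum(1 for a in range(m + 1) if (2*a - m) % n == 0)
            for m in range(order)]

def table_coeffs(r, order):
    """Series coefficients of the A_r entry PE[2 t^(r+1) + t^2 - t^(2r+2)]."""
    hs = (1 - t**(2*r + 2)) / ((1 - t**(r + 1))**2 * (1 - t**2))
    s = sp.series(hs, t, 0, order).removeO()
    return [s.coeff(t, m) for m in range(order)]

for r in (1, 2, 3):
    assert invariant_count(r + 1, 12) == table_coeffs(r, 12)
print("A_r sub-regular slices match C^2 / Z_(r+1) invariants for r = 1, 2, 3")
```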
\FloatBarrier
\section{Introduction}
Hypernuclei are nuclear systems in which one or more nucleons are replaced by one (or more) hyperons \cite{uno}. The best known, and studied for a long time (almost 60 years), are $\Lambda$-hypernuclei, in which a single $\Lambda$-hyperon replaces a nucleon of the nucleus. In Fig.~\ref{f:fig1}a) a simplified representation of the single-particle states of the nucleons for a $^{12}$C nucleus is shown. When the $\Lambda$ replaces a neutron [Fig.~\ref{f:fig1}b)] it can occupy different states [see Fig.~\ref{f:fig1} c) to f)]. When it has the same quantum numbers as the neutron it has replaced, the state is called ``{\it substitutional}''.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\textwidth]{fig1}
\caption{a) Simplified representation of the single-particle states of the nucleons for a $^{12}$C nucleus. c) to f) Same as a) when a $\Lambda$ has replaced a neutron. The state shown in e), in which the $\Lambda$ has the same quantum numbers as the neutron it replaced, is called the {\it substitutional} state. Representations taken from \cite{bressani.report}.}
\label{f:fig1}
\end{center}
\end{figure}
Hypernuclei provide a unique laboratory suitable not only for studying nuclear structure in the presence of a strange quark but also for probing weak interactions between baryons. Indeed, hyperon-nucleon (YN) scattering experiments are difficult to perform and data are very limited; on the other hand, information about the YN interaction can be extracted from hypernuclear energy levels and theoretical models. Hypernuclei are also important to investigate dynamical changes of the nuclear structure induced by the added hyperon. In fact the $\Lambda$, reaching deep inside levels, can attract the surrounding nucleons toward the interior (``{\it glue-like}'' or ``{\it core contraction}'' effect), especially in light nuclei. Indeed the $\Lambda$-hyperon, since it does not suffer from Pauli blocking by the other nucleons, can penetrate into the nuclear interior and form deeply bound hypernuclear states.
\section{Hypernuclei production}
There are different ways to bring {\it strangeness} inside a nucleus. Up to now three different reactions have been used:
\begin{center}
\begin{tabular}{l l l}
1a) & ${\mathrm K}^-_{in flight/stop} + n \rightarrow \Lambda + \pi^- $ & \multirow{2}{*}{\it strangeness exchange reaction} \\
\vspace{0.3 cm}
1b) & ${\mathrm K}^-_{in flight/stop} + p \rightarrow \Lambda + \pi^o$ & \\
\vspace{0.3 cm}
2) & $\pi^+ + n \rightarrow \Lambda + {\mathrm K}^+$ & {\it associated production} \\
\vspace{0.3 cm}
3) & $e + p \rightarrow e' + \Lambda + {\mathrm K}^+$ & {\it electroproduction} \\
\end{tabular}
\label{t:reac}
\end{center}
Each reaction has its own advantages and plays its role in a complete program of hypernuclear spectroscopy. Reaction 1a) at rest was the first to be used \cite{faessler}; reaction 2) then followed at BNL \cite{BNL1} and at KEK \cite{KEK1}, while reaction 3) is relatively new \cite{Jlab1}. The most important parameter in determining the differences among the reactions is the momentum transfer. Fig.~\ref{f:fig4} shows the relation between the beam momentum and the recoil/transfer momentum, and between the momentum transfer and the hypernuclear cross section. A low recoil momentum, as in the (K$^-_{inflight}$, $\pi^-$) reaction, populates substitutional states, in which a nucleon is converted into a $\Lambda$ hyperon in the same orbit with no orbital angular momentum transfer; in this way it is difficult to populate the ground state. A large recoil momentum, on the other hand, can excite high-spin hypernuclear states with a nucleon hole having large angular momentum and a $\Lambda$ hyperon having small angular momentum, with the advantage of accessing more states.
The advantage of the (K$^-_{stop}$,$\pi^-$) reaction, compared with the others, is that it populates many states and has a {\it high} formation probability compared to the associated production. On the other hand, the kaon beam suffers energy straggling while being slowed down by {\it thick} absorbers, and thus the target must also be relatively {\it thick} (some g/cm$^2$). This determines the principal drawbacks: a large background and a limit on the energy resolution (the outgoing pions suffer multiple scattering inside the targets).
A complete list of experiments on $\Lambda$ hypernuclear spectroscopy is shown in Fig.~\ref{f:fig3} \cite{hashimoto}. It can be seen that the K$^-_{stop}$ reaction was not so commonly used. Before the FINUDA experiment, whose results will be presented in the following, only two other experiments had produced hypernuclei with the ${\mathrm K}^-_{stop} + n \rightarrow \Lambda + \pi^- $ reaction \cite{faessler,hayano}. Recently another experiment used the ${\mathrm K}^-_{stop} + p \rightarrow \Lambda + \pi^o$ reaction \cite{ahmed}. All these experiments used spectrometers designed for other purposes and modified to fit the needs of a hypernuclear apparatus. The FINUDA experiment, on the other hand, was specifically designed to produce and study hypernuclei.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.9\textwidth]{hyplist}
\caption{Experiments on $\Lambda$ hypernuclear spectroscopy (from \cite{hashimoto} with the addition of FINUDA).}
\label{f:fig3}
\end{center}
\end{figure}
\section{FINUDA hypernuclear spectroscopy}
FINUDA took data for a few months in 2003--2004 and in 2006--2007 at the DA$\Phi$NE $e^+$-$e^-$ collider at the national laboratory of the Italian Institute of Nuclear Physics (INFN) in Frascati. The $e^+$-$e^-$ collisions create the $\Phi$ meson at rest, which decays, about 50 $\%$ of the time, into two kaons with low kinetic energy ($\sim$ 16 MeV). The basic principle of FINUDA \cite{bressani} was to use such a monochromatic source of low-energy K$^-$'s for the production of hypernuclei. Since it is impossible to obtain such low-energy beams in other ways (for example with fixed-target experiments as done previously), FINUDA represented a real breakthrough for stopped-kaon experiments. FINUDA was characterized by important features; in particular it could: mount very thin targets (a few tenths of g/cm$^2$ compared to some g/cm$^2$ of previous experiments), install up to 8 different targets in the same data taking (thus minimizing the systematics in comparing results from different nuclei), completely reconstruct an event with large acceptance (studying both the production and the decay of hypernuclei), and simultaneously track also the $\mu^+$ from the decay of the K$^+$ (which is generated in a pair with the K$^-$), calibrating in this way the apparatus both for energy and rate measurements. Details about the FINUDA experimental setup can be found in \cite{C12, MWD, NMWD} and references therein.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.9\textwidth]{momTransfer}
\caption{a) Recoil momentum as a function of the beam momentum (from \cite{hashimoto}). When the momentum transfer is 0 ({\it magic (beam) momentum}), the hypernucleus production is called recoilless. b) Hypernuclear cross section as a function of the momentum transfer (from \cite{bressani.report}). The production of hypernuclei by particles at rest is in reality defined by a capture rate (or formation probability) and not by cross section. Suitable normalization has been used.}
\label{f:fig4}
\end{center}
\end{figure}
The FINUDA apparatus was able to reconstruct charged particles coming out of the targets. Hypernuclear candidate events were selected by requiring the simultaneous presence of a K$^-$ stopped inside a target and of a $\pi^-$ originating from the same target. Details about the event selection and data analysis can be found elsewhere \cite{germano, C12}. Imposing energy and momentum conservation, the tracking of the emerging $\pi^-$ makes it possible to calculate the hypernucleus binding energy, defined as the difference between the sum of the masses of the core nucleus (the original nucleus without a neutron) and of the $\Lambda$, and the mass of the hypernucleus. Regarding the background, some other reactions between the K$^-$ and the target can produce an emerging negative pion [$K^- (np) \rightarrow \Sigma^- p$ (followed by the $\Sigma^- \rightarrow n \pi^-$ decay), $K^- n \rightarrow \Lambda \pi^- $ (the so-called {\it $\Lambda$-Quasi Free} reaction), $K^- p \rightarrow \Sigma^- \pi^+$ (followed by the $\Sigma^- \rightarrow n \pi^-$ decay)]. Another process that proved able to generate a $\pi^-$ momentum distribution overlapping with that of hypernuclear formation is the in-flight K$^-$ decay. All these reactions have been simulated with the FINUDA Monte Carlo in order to account for the background under our hypernuclear signal. A sum of the distributions of the background reactions and of Gaussians, for the signal, was used to reproduce the overall behaviour of the data (see \cite{germano} for details). The outputs of the fit were the weights of the various contributions, the numbers of events and the means of the Gaussians. The positions of the peaks give directly the binding energy values of the hypernuclear states created, with a total error of 0.3 MeV. Even more important information can be extracted from the number of events in the peaks.
Taking into account all the efficiencies involved in the detection and reconstruction, the formation probability per stopped K$^-$ can be calculated.
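The kinematics just described can be made explicit. For a kaon absorbed at rest the total energy is $m_K + M_A$, and the hypernucleus and pion recoil back to back, so the measured pion momentum alone fixes the hypernucleus mass; the binding energy then follows from the standard definition $B_\Lambda = M_{core} + m_\Lambda - M_{HY}$. A minimal Python sketch (the masses in the usage example are illustrative round numbers, not FINUDA calibration values):

```python
import math

def hypernucleus_mass(p_pi, m_K, M_A, m_pi):
    """Invert energy-momentum conservation for K^-_stop + A -> HY + pi^-.
    The kaon is absorbed at rest, so the total energy is m_K + M_A and the
    pion and hypernucleus recoil back to back with momentum p_pi (MeV units)."""
    E_tot = m_K + M_A
    E_pi = math.hypot(p_pi, m_pi)        # pion total energy
    E_HY = E_tot - E_pi                  # energy left for the hypernucleus
    return math.sqrt(E_HY**2 - p_pi**2)  # invariant mass of the recoil

def binding_energy(M_HY, M_core, m_Lambda):
    """B_Lambda = (core mass + free Lambda mass) - hypernucleus mass."""
    return M_core + m_Lambda - M_HY

# Illustrative only: a ~94 MeV/c pion from a K^- stopped on a nucleus of
# mass 11174.9 MeV corresponds to a recoiling system of roughly 11500 MeV.
M_HY = hypernucleus_mass(93.9, 493.7, 11174.9, 139.6)
```

The roundtrip (choose a hypernucleus mass, compute the two-body pion momentum, invert it back) is exact by construction, which is what makes the measured $\pi^-$ momentum spectrum a direct binding-energy spectrum.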
\section{Results and discussions}
The values of the formation probabilities measured by the FINUDA experiment for $^7_{\Lambda}$Li, $^9_{\Lambda}$Be, $^{13}_{\Lambda}$C and $^{16}_{\Lambda}$O are reported in a recent publication \cite{germano}. Only a few measurements of the formation probability had been performed previously. Following the first experiment on a $^{12}$C stopping target \cite{faessler}, measurements on some other nuclei ($^4$He, $^{12}$C and $^{16}$O) were subsequently performed \cite{hayano}. A low-statistics measurement of the ($K^-_{stop}, \pi^o$) reaction on $^{12}$C was later published \cite{ahmed}. FINUDA also reported a first result on a $^{12}$C target \cite{C12}. Based on these measurements the formation probability was a decreasing function of the atomic mass number A, but some discrepancies appeared, for example between the ground-state formation probabilities measured by \cite{C12,hayano} and \cite{ahmed}. The new FINUDA results \cite{germano}, along with the one reported previously by FINUDA \cite{C12}, offer a complete set of measurements that can be compared with one another to extract the formation probability as a function of the atomic mass number A. Since they were measured in the same experiment and using the same experimental and reconstruction techniques, the effect of systematic errors is minimized.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\textwidth]{figHadron11}
\caption{Formation probabilities from FINUDA \cite{germano} (a) and cross section from E336 \cite{hashimoto} (b) for bound states (see text for details) as a function of A. Ratio between the two c). Theoretical prediction by \cite{cieply} (e). Formation probabilities for the ground states (d).}
\label{f:fig5}
\end{center}
\end{figure}
Since each target has more than one hypernuclear state (the ground state plus some excited states), it is not easy to extract the A dependence from the results. The ground-state formation probability, as shown in Fig.~\ref{f:fig5} d), can be used, but for some hypernuclei (namely $^7_{\Lambda}$Li and $^{12}_{\Lambda}$C) the first excited state is too close to be isolated. In order to consider only well-defined hypernuclides, hypernuclear states below the threshold for decay by proton emission have been selected. The results are summarized in Fig.~\ref{f:fig5} a). A smoothly decreasing behaviour appears, with the exception of a strong enhancement corresponding to the formation of $^{12}_{\Lambda}$C. For comparison, Fig.~\ref{f:fig5} b) also shows the cross section measurements for the ($\pi^+$, K$^+$) production reaction \cite{hashimoto}. The ratio between those values and the (K$^-_{stop}$, $\pi^-$) formation probabilities [Fig.~\ref{f:fig5} c)] changes by a factor of 5 from $^7$Li to $^{16}$O. Two conclusions can then be drawn. First, the two production reactions (K$^-_{stop}$, $\pi^-$) and ($\pi^+$, K$^+$) show a distinct A dependence. Second, hypernuclear production on $^{12}$C deviates from the overall behaviour, being higher than for all neighbouring nuclei.
The new results \cite{germano} triggered a new study \cite{cieply} that reproduces the experimental data in order to extract information on the K$^-$ nuclear potential V$_K$, important for kaon condensation in neutron-star matter, quasibound K-nuclear clusters, self-bound strange hadronic matter, etc. The authors used the Distorted-Wave Impulse Approximation (DWIA) to calculate the formation probabilities for the target nuclei reported in \cite{germano}. Two different potentials were tested, namely a shallow (SH) and a deep (DD) one, with $Re$ V$_K$ $\sim$ 50 MeV and $Re$ V$_K$ $\sim$ 180 MeV respectively. The dependence on the nuclear density has also been taken into account. Although the calculated rates were about 15 $\%$ of the measured ones, the overall behaviour could be reproduced, the comparison slightly preferring the deep K$^-$ nuclear potential over the shallow one.
\section{Conclusions}
The ${\mathrm K}^-_{stop} + n \rightarrow \Lambda + \pi^- $ strangeness exchange reaction was the first \cite{faessler} to be used for the creation of hypernuclei in {\it modern} times (the post emulsion/bubble-chamber era). After having been used at BNL \cite{BNL1} and KEK \cite{KEK1}, it found its {\it best configuration} at the INFN-LNF with the FINUDA experiment. Hypernuclear formation probabilities for stopped kaons have been measured for p-shell nuclei and for the first time a study as a function of A has been performed \cite{germano}. The new results gave new inputs to the theory to extract useful information about the K$^-$ nuclear potential.
No new experiment using the (K$^-_{stop}$, $\pi^-$) reaction is foreseen at the moment, since the future hypernuclear physics programs at J-PARC in Japan and at GSI in Germany will use different production methods. Complete reviews of hypernuclear physics can be found in \cite{bressani.report, hashimoto, chrien, bando}.
\section{Introduction}
\label{sec:intro}
It is undeniable that there exists a massive reservoir of multi-phase
gas that resides around star-forming galaxies \citep{tumlinson17}. A
large collection of works provide evidence that outflows and accretion
are ongoing processes that continuously change the properties of the
circumgalactic medium (CGM) and the host galaxies. Low ionization
level ions within the CGM show strong kinematic signatures that are
consistent with large-scale outflows
\citep{bouche06,tremonti07,martin09,weiner09,
nestor11,noterdaeme10,coil11,kacprzak10,kacprzak14,rubin10,menard12,
martin12,noterdaeme12,krogager13,peroux13,rubin14,crighton15,
nielsen15,nielsen16,lan18}. Furthermore, low angular momentum and
co-rotating gas around galaxies and orientation-dependent absorption
velocity widths point to signatures of gas accretion \citep{steidel02,
kacprzak10,ho17,kacprzak17,martin19,zabl19}.
The CGM, as traced by {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}} absorption, appears to have a preference
to exist along the major and minor axes of galaxies
\citep{bouche12,kacprzak12a,schroetter19}, while the equivalent width
of the absorption is highest along the galaxy minor axis
\citep{bordoloi11,kacprzak12a,lan14,lan18}. This geometric dependence
could be additional evidence for outflows and accretion. Furthermore,
the metallicity distribution bimodality found for $z<0.4$ Lyman limit systems
(LLS) and partial Lyman limit systems (pLLS) shows a high
([X/H]$\sim-0.4$) and a low ([X/H]$\sim-1.7$) metallicity peak that
could be attributed to outflows and accretion
\citep{lehner13,lehner19,wotta16,wotta19}. Thus the spatial
distribution of metallicity around galaxies would seem to be a likely
key to understanding the origins of the CGM.
Numerous studies have obtained the CGM metallicity associated with a
known galaxy in an effort to determine the origin and history of the
absorption. These CGM metallicities generally reflect the
metallicity bimodality where systems near galaxies are either
metal-poor with metallicities between $-2<$[X/H]$<-1$
\citep{tripp05,cooksey08,kacprzak10b,ribaudo11,thom11,churchill12,bouche13,crighton13,stocke13,kacprzak14,crighton15,muzahid15,bouche16,fumagalli16b,peroux16}
or metal-enriched with metallicities of [X/H]$>-0.5$
\citep{chen05,peroux11,krogager13,stocke13,crighton15,muzahid15,muzahid16,peroux16}. However,
little is known about the host galaxy geometry with respect to the
quasar sight-line in most cases. Using a sample of 47 galaxies with
measured morphologies/orientations and CGM metallicities,
\citet{pointon19} has shown that CGM metallicities do not
correlate with azimuthal angle or inclination of the galaxy
regardless of impact parameter and N({\hbox{{\rm H}\kern 0.1em{\sc i}}}). Thus, it is possible
that the spatial azimuthal dependence and the metallicity bimodality
are unrelated.
However, we do not fully understand how the host galaxy-ISM
metallicities relate to the CGM metallicities. Since CGM galaxies
span a range of stellar mass and given that there is a well-known
galaxy stellar mass and ISM metallicity relation found at all
redshifts
\citep[e.g,][]{tremonti04,sanders14,steidel14,zahid14,kacprzak15b},
then it is possible that the difference between the ISM and CGM
metallicities would be more telling of the origins of the
CGM.
Initial work from \citet{prochaska17} has shown that for $\sim20$
systems, the CGM metallicity does not correlate with the ISM
metallicity of host galaxies. In addition, work by \citet{peroux16}
examined the metallicity difference between the galaxy ISM
metallicity and CGM metallicity for nine systems. They found at low
azimuthal angles, there are a range of ISM-CGM metallicity
differences which would be unexpected for accreting gas. They only
had two lower ISM-CGM metallicity systems along the minor axis,
which is also unexpected for an outflow model. Fully exploring the
relative galaxy ISM-CGM metallicities could provide additional
insight into the relationship between galaxies and ongoing processes
within the CGM.
We aim to further explore the relationship between the galaxy ISM
and CGM metallicities and how they relate to the expectations of
accretion/outflow models. We have acquired Keck/ESI spectra for 25
star-forming galaxies to obtain their ISM metallicities; their
CGM metallicities are derived in \citet{pointon19}. We examine the
stellar mass-metallicity relation for the galaxies and the CGM and
test if the relative metallicity difference, defined to be the
difference between the ISM and CGM metallicities, is dependent on
hydrogen column density and/or galaxy properties such as azimuthal
angle, inclination angle, and impact parameter. In
Section~\ref{sec:data} we present our sample, data and data
reduction. In Section~\ref{sec:results} we present our observational
results. In Section~\ref{sec:discussion}, we discuss what can be
inferred from the results and concluding remarks are offered in
Section~\ref{sec:conclusion}. Throughout we adopt an H$_{\rm
0}=70$~\hbox{~km~s$^{-1}$} Mpc$^{-1}$, $\Omega_{\rm M}=0.3$,
$\Omega_{\Lambda}=0.7$ cosmology.
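The relative metallicity difference can be computed directly from the quantities listed in Table~\ref{tab:morph}. A minimal sketch (which assumes a solar oxygen abundance of $12+\log({\rm O/H})_\odot = 8.69$ to place the ISM values on the same log-solar scale as the [Si/H] CGM values; that zero point is an assumption of this illustration):

```python
SOLAR_LOG_OH = 8.69  # assumed solar 12 + log(O/H) zero point

def ism_metallicity(log_oh_12):
    """Convert a gas-phase abundance 12 + log(O/H) to [O/H], in log solar units."""
    return log_oh_12 - SOLAR_LOG_OH

def relative_difference(log_oh_12, cgm_metallicity):
    """ISM minus CGM metallicity, both in log solar units."""
    return ism_metallicity(log_oh_12) - cgm_metallicity

# e.g. the J085334 system in the table: 12+log(O/H) = 8.86, [Si/H] = -1.70,
# giving an ISM-CGM difference of 0.17 - (-1.70) = 1.87 dex
delta = relative_difference(8.86, -1.70)
```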
\begin{deluxetable*}{llcrcrclrrrrr}
\tabletypesize{\scriptsize}
\tablecaption{Absorption and host galaxy properties\label{tab:morph}}
\tablecolumns{13}
\tablewidth{0pt}
\tablehead{
\colhead{Quasar\tablenotemark{ a}}&
\colhead{$z_{\rm gal}$\tablenotemark{ b}} &
\colhead{M$_{r}$} &
\colhead{R$_{vir}$} &
\colhead{log(M$_{h}$)} &
\colhead{log(M$_{*}$)} &
\colhead{log(O$/$H)$+$12\tablenotemark{b}} &
\colhead{$i$} &
\colhead{$\Phi$} &
\colhead{$D$} &
\colhead{log N({\hbox{{\rm H}\kern 0.1em{\sc i}}})} &
\colhead{log N({\hbox{{\rm H}\kern 0.1em{\sc i}}})} &
\colhead{log($Z_{CGM}$)}\\
\colhead{field }&
\colhead{ } &
\colhead{(AB)} &
\colhead{(kpc)} &
\colhead{(M$_{\odot}$)} &
\colhead{(M$_{\odot}$)} &
\colhead{} &
\colhead{(degree) } &
\colhead{(degree) } &
\colhead{(kpc) } &
\colhead{$_{Measured}$ (cm$^{-2}$) } &
\colhead{$_{Modeled}$ (cm$^{-2}$) } &
\colhead{[Si/H] ($Z_{\odot}$) }
}
\startdata
J012528 &0.398525 & $-$21.99 & $285.5_{-32}^{+37}$&$12.5_{-0.2}^{+0.2}$& $10.9_{-0.2}^{+0.2}$ & 8.69 & $63.2_{-2.6}^{+1.7}$ & $73.4_{-4.7}^{+4.6}$ & $163.0$&$ [18.85,19.00] $ & $18.85^{+0.04}_{-0.01}$ &$ -1.56 ^{+0.03 }_{-0.03} $ \\[+0.35ex]
J035128 &0.356992 & $-$20.86 & $190.9_{-26}^{+48}$&$12.0_{-0.2}^{+0.3}$& $10.4_{-0.2}^{+0.3}$ & 8.63 & $28.5_{-12.5}^{+19.8}$ & $4.9_{-40.2}^{+33.0}$ &$72.3$ &$ 16.86 \pm0.03 $ &$16.86^{+0.03}_{-0.03}$ & $ -0.38 ^{+0.04 }_{-0.04} $ \\[+0.35ex]
J040748 &0.495164 & $-$19.73 &$124.4_{-18}^{+52}$&$11.4_{-0.2}^{+0.5}$& $9.7_{-0.2}^{+0.5}$ &8.46 &$67.2_{-7.5}^{+7.6}$ & $21.0_{-3.7}^{+5.3}$ & 107.6 &$ 14.34 \pm0.56 $ &$14.35^{+0.35}_{-0.35}$ & $ -1.10 ^{+0.49 }_{-0.55} $ \\[+0.35ex]
J045608 &0.277938 & $-$19.12 & $122.0_{-18}^{+57}$&$11.4_{-0.2}^{+0.5}$& $ 9.8_{-0.2}^{+0.5}$ &8.14 & $71.2_{-2.6}^{+2.6}$ & $78.4_{-2.1}^{+2.1}$ & 50.7 &$ [15.06,19.00] $ &$15.71^{+1.55}_{-0.73}$ & $ < -1.40 $ \\[+0.35ex]
J045608 &0.381511 & $-$20.87 & $192.3_{-26}^{+48}$&$12.0_{-0.2}^{+0.3}$& $10.3_{-0.2}^{+0.3}$ & 8.67 & $57.1_{-2.4}^{+19.9}$ & $63.8_{-2.7}^{+4.3}$ & $103.4$&$ 15.10 \pm0.39 $ &$15.13^{+0.38}_{-0.35}$ & $ -0.06 ^{+0.03 }_{-1.01} $ \\[+0.35ex]
J045608 &0.48382 & $-$21.91 & $241.8_{-27}^{+38}$&$12.3_{-0.2}^{+0.2}$& $10.6_{-0.2}^{+0.2}$ &8.67 & $42.1_{-3.1}^{+3.1}$ & $85.2_{-3.7}^{+3.7}$ & 108.0 &$ [16.53,19.00] $ &$17.65^{+0.18}_{-0.17}$ & $ -1.32 ^{+0.15 }_{-0.15} $ \\[+0.35ex]
J085334 &0.163403 & $-$20.56 & $167.6_{-24}^{+48}$&$11.9_{-0.2}^{+0.3}$& $10.3_{-0.2}^{+0.3}$ &8.86 & $70.1_{-0.8}^{+1.4}$ & $56.0_{-0.8}^{+0.8}$ & 26.2 &$ 19.93 \pm0.01 $ &$19.93^{+0.01}_{-0.01}$ & $ -1.70 ^{+0.06 }_{-0.05} $ \\[+0.35ex]
J091440 &0.244312 & $-$20.55 & $170.7_{-24}^{+49}$&$11.9_{-0.2}^{+0.3}$& $10.3_{-0.2}^{+0.3}$ & 8.52 & $39.0_{-0.2}^{+0.4}$ & $18.2_{-1.0}^{+1.1}$ & 105.9&$ 15.55 \pm0.03 $ &$15.55^{+0.04}_{-0.03}$ & $ -0.78 ^{+0.09 }_{-0.10} $ \\[+0.35ex]
J094331 &0.2284$^b$& $-$21.34 & $216.5_{-27}^{+42}$&$12.2_{-0.2}^{+0.2}$& $10.6_{-0.2}^{+0.2}$ &8.94$^c$& $52.3_{-0.3}^{+0.3}$ & $30.4_{-0.4}^{+0.3}$ & 123.3 &$ 16.03 \pm0.67 $ &$16.04^{+0.66}_{-0.48}$ & $ -1.33 ^{+0.66 }_{-0.71} $ \\[+0.35ex]
J094331 &0.353052 & $-$19.88 & $146.8_{-22}^{+54}$&$11.7_{-0.2}^{+0.4}$& $10.0_{-0.2}^{+0.4}$ & 8.53 & $44.4_{-1.2}^{+1.1}$ & $8.2_{-5.0}^{+3.0}$ & $96.5$&$ 16.46 \pm0.03 $ &$16.38^{+0.11}_{-0.01}$ & $ < -1.69 $ \\[+0.35ex]
J095000 &0.211866 & $-$21.73 & $246.9_{-29}^{+36}$&$12.4_{-0.2}^{+0.2}$& $10.8_{-0.2}^{+0.2}$ & 8.19 & $47.7_{-0.1}^{+0.1}$ & $16.6_{-0.1}^{+0.1}$ & $93.6$&$ [16.28,19.00] $ &$19.00^{+0.01}_{-0.09}$ & $ -1.48 ^{+0.04 }_{-0.02} $ \\[+0.35ex]
J100902 &0.227855 & $-$20.19 & $154.5_{-23}^{+51}$&$11.8_{-0.2}^{+0.4}$& $10.1_{-0.2}^{+0.4}$ & 8.52 & $66.3_{-0.9}^{+0.6}$ & $89.6_{-1.3}^{+1.3}$ & $64.0$ &$ [17.51,19.00] $ &$18.26^{+0.10}_{-0.13}$ & $ -2.00 ^{+0.07 }_{-0.04} $ \\[+0.35ex]
J113327 &0.154599 & $-$19.84 & $138.8_{-21}^{+52}$&$11.6_{-0.2}^{+0.4}$& $10.0_{-0.2}^{+0.4}$ &8.19 & $23.5_{-0.2}^{+0.4}$ & $56.1_{-1.3}^{+1.7}$ & 55.6 &$ [15.82,17.00] $ &$16.11^{+0.42}_{-0.29}$ & $ < -1.98 $ \\[+0.35ex]
J113910 &0.204194 & $-$19.99 & $146.1_{-22}^{+52}$&$11.7_{-0.2}^{+0.4}$& $10.1_{-0.2}^{+0.4}$ & 8.67 & $81.6_{-0.5}^{+0.4}$ & $5.8_{-0.5}^{+0.4}$ & $93.2$&$ [16.04,17.00] $ &$16.04^{+0.04}_{-0.01}$ & $ -0.35 ^{+0.03 }_{-0.07} $ \\[+0.35ex]
J113910 &0.219724 & $-$17.67 & $88.7 _{-14}^{+52}$&$11.0_{-0.2}^{+0.6}$& $ 9.4_{-0.2}^{+0.6}$ &8.37 & $85.0_{-8.5}^{+5.0}$ & $44.9_{-8.1}^{+8.9}$ & 122.0 &$ 14.20 \pm0.07 $ &$14.30^{+0.01}_{-0.28}$ & $ < 0.63 $ \\[+0.35ex]
J113910 &0.319255 & $-$20.48 & $170.4_{-24}^{+51}$&$11.9_{-0.2}^{+0.3}$& $10.2_{-0.2}^{+0.3}$ & 8.61 & $83.4_{-1.1}^{+1.4}$ & $39.1_{-1.7}^{+1.9}$ & $73.3$ & $ 16.19 \pm0.03 $ & $16.19^{+0.03}_{-0.03}$ &$ -2.59 ^{+0.58 }_{-0.04} $ \\[+0.35ex]
J123304 &0.318757 & $-$20.62 & $176.6_{-25}^{+50}$&$11.9_{-0.2}^{+0.3}$& $10.3_{-0.2}^{+0.3}$ &8.57 & $38.7_{-1.8}^{+1.6}$ & $17.0_{-2.3}^{+2.0}$ & 88.9 &$ 15.72 \pm0.02 $ &$15.72^{+0.02}_{-0.02}$ & $ -1.14 ^{+0.13 }_{-0.09} $ \\[+0.35ex]
J124154 &0.205267 & $-$19.83 & $140.2_{-21}^{+52}$&$11.6_{-0.2}^{+0.4}$& $10.0_{-0.2}^{+0.4}$ & 8.64 & $56.4_{-0.5}^{+0.3}$ & $77.6_{-0.4}^{+0.3}$ & $21.1$ &$ [16.63,19.00] $ &$17.43^{+0.02}_{-0.03}$ & $ -0.32 ^{+0.05 }_{-0.03} $ \\[+0.35ex]
J124154 &0.217905 & $-$19.77 & $138.7_{-21}^{+52}$&$11.6_{-0.2}^{+0.4}$& $10.0_{-0.2}^{+0.4}$ &8.62 & $17.4_{-1.6}^{+1.4}$ & $63.0_{-2.1}^{+1.8}$ & 94.6 &$ 15.59 \pm0.12 $ &$15.72^{+0.09}_{-0.11}$ & $ -0.57 ^{+0.16 }_{-0.09} $ \\[+0.35ex]
J132222 &0.214431 & $-$21.18 & $204.8_{-26}^{+44}$&$12.1_{-0.2}^{+0.3}$& $10.5_{-0.2}^{+0.3}$ & 8.80 & $57.9_{-0.2}^{+0.1}$ & $13.9_{-0.2}^{+0.2}$ & $38.6$& $ [16.97,19.00] $ &$19.00^{+0.01}_{-0.12}$ & $ -1.90 ^{+0.04 }_{-0.03} $ \\[+0.35ex]
J134251 &0.0708$^b$& $-$18.89 & $109.1_{-16}^{+54}$&$11.4_{-0.2}^{+0.5}$& $ 9.8_{-0.2}^{+0.5}$ &8.86$^c$& $57.7_{-0.3}^{+0.3}$ & $13.9_{-0.2}^{+0.2}$ & 39.4 &$ 14.61 \pm0.47 $ &$15.33^{+0.26}_{-0.69}$ & $ -0.02 ^{+0.57 }_{-0.33} $ \\[+0.35ex]
J134251 &0.227042 & $-$21.77 & $251.8_{-29}^{+36}$&$12.4_{-0.2}^{+0.2}$& $10.8_{-0.2}^{+0.2}$ & 8.72 & $10.1_{-10.1}^{+0.6}$ & $13.2_{-0.4}^{+0.5}$ & $35.3$& $ 18.83 \pm0.05 $ &$18.88^{+0.06}_{-0.04}$ & $ -0.36 ^{+0.04 }_{-0.05} $ \\[+0.35ex]
J155504 &0.189201 & $-$21.03 & $193.7_{-25}^{+45}$&$12.1_{-0.2}^{+0.3}$& $10.5_{-0.2}^{+0.3}$ & 8.67 & $51.8_{-0.7}^{+0.7}$ & $47.0_{-0.8}^{+0.3}$ & $33.4$ & $ [16.37,19.00] $ &$18.04^{+0.01}_{-0.90}$ & $ -1.43 ^{+0.71 }_{-0.04} $ \\[+0.35ex]
J213135 &0.430200 & $-$21.47 & $199.8_{-25}^{+42}$&$12.0_{-0.2}^{+0.3}$& $10.4_{-0.2}^{+0.3}$ & 8.65 & $48.3_{-3.7}^{+3.5}$ & $14.9_{-4.9}^{+6.0}$ & $48.4$& $ 19.88 \pm0.10 $ &$19.78^{+0.01}_{-0.01}$ & $ -1.96 ^{+0.03 }_{-0.03} $ \\[+0.35ex]
J225357 &0.352787 & $-$20.67 & $180.3_{-25}^{+50}$&$11.9_{-0.2}^{+0.3}$& $10.3_{-0.2}^{+0.3}$ & 8.58 & $36.7_{-4.6}^{+6.9}$ & $88.7_{-4.8}^{+4.6}$ & $203.2$& $ 14.53 \pm0.05 $ &$14.56^{+0.02}_{-0.19}$ & $ < -0.22 $
\enddata
\tablenotetext{a}{The full quasar name along with the quasar and galaxy RA
and DEC can be found in \citet{pointon19}.}
\tablenotetext{b}{Keck ESI galaxy redshifts and metallicites derived
from this work and from
\citet{kacprzak19,pointon19,kacprzak10}.}
\tablenotetext{c}{Galaxy redshifts and metallicites obtained from \cite{werk12}.}
\end{deluxetable*}
\section{SAMPLE AND DATA ANALYSIS}
\label{sec:data}
We have obtained galaxy ISM and CGM metallicities for 25 of the 47
systems selected from \citet{pointon19}, having a redshift range of
0.07$<$$z$$<$0.50 within $\sim200$~kpc (21$<$$D$$<$203~kpc) of
background quasars. The \citet{pointon19} absorption systems were
selected based on the presence of hydrogen having a column density
range of log(N({\hbox{{\rm H}\kern 0.1em{\sc i}}}))$=14-20$; the presence of metal lines was not
required, but spectral coverage of them had to exist. Our subset of
25 galaxies were selected to be star-forming such that we are able to
obtain emission-line metallicities from Keck/ESI
spectra. \citet{pointon19} selected galaxies that are isolated such
that there are no other galaxies within 100~kpc and with velocity
separations less than 500~{\hbox{~km~s$^{-1}$}}. From our survey, and in the
literature, the quasar fields have been surveyed to the equivalent of
a sensitivity of $\geq0.1L_*$ out to at least 350~kpc at $z=0.2$.
These {\it HST} imaged galaxy--absorber systems were identified as
part of our ``Multiphase Galaxy Halos'' Survey [from PID 13398
\citep{kacprzak15, kacprzak19, muzahid15, muzahid16,
nielsen17,pointon17, pointon19, ng19} and from the literature
\citep{chen01a, chen09, prochaska11, werk12, werk13, johnson13}]. We
discuss the data and analysis below.
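For reference, the impact parameters $D$ quoted above follow from the angular quasar-galaxy separation and the adopted flat $\Lambda$CDM cosmology ($H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M}=0.3$, $\Omega_{\Lambda}=0.7$). A small numerical sketch (the 10 arcsec separation in the example is hypothetical):

```python
import math

H0 = 70.0            # km/s/Mpc
OM, OL = 0.3, 0.7    # flat LambdaCDM density parameters
C_KMS = 299792.458   # speed of light, km/s

def angular_diameter_distance(z, n=10000):
    """D_A in Mpc for a flat universe: the comoving distance (midpoint-rule
    integral of (c/H0) dz'/E(z')) divided by (1 + z)."""
    dz = z / n
    dc = sum(dz / math.sqrt(OM * (1.0 + (i + 0.5) * dz)**3 + OL)
             for i in range(n))
    return (C_KMS / H0) * dc / (1.0 + z)

def impact_parameter_kpc(theta_arcsec, z):
    """Projected separation D = theta * D_A, converted to kpc."""
    theta_rad = theta_arcsec * math.pi / (180.0 * 3600.0)
    return angular_diameter_distance(z) * theta_rad * 1000.0

# hypothetical example: a 10 arcsec separation at z = 0.2 is ~33 kpc
D = impact_parameter_kpc(10.0, 0.2)
```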
\subsection{Quasar Spectroscopy and Models}
The {\it HST}/COS quasar spectra have a resolution of $R\sim$20,000
and cover a range of hydrogen and metal absorption lines associated
with the targeted galaxies. Details of the {\it HST}/COS observations
are found in \citet{kacprzak15} and \citet{pointon19}. The data were
reduced using the {\sc CALCOS} software. Individual grating
integrations were aligned and co-added using the {\sc IDL} code
`coadd\_x1d' created by
\citet{danforth10}\footnote{http://casa.colorado.edu/danforth/science/cos/costools.html}.
Since the COS FUV spectra are over-sampled, we binned the spectra by
three pixels to increase the signal-to-noise and all of our analysis
was performed on the binned spectra. Continuum normalization was
performed by fitting the absorption-free regions with smooth low-order
polynomials.
We further use Keck/HIRES or VLT/UVES quasar spectra when available to
complement our COS spectra by including coverage of {\hbox{{\rm Mg}\kern 0.1em{\sc i}}}, {\hbox{{\rm Mg}\kern 0.1em{\sc ii}}},
{\hbox{{\rm Fe}\kern 0.1em{\sc ii}}}, {\hbox{{\rm Mn}\kern 0.1em{\sc ii}}} and {\hbox{{\rm Ca}\kern 0.1em{\sc ii}}} absorption, which provide additional
metallicity constraints for absorbers with $z_{\rm abs} >$ 0.2. HIRES
spectra were reduced using either the Mauna Kea Echelle Extraction
(MAKEE) package or IRAF. The UVES spectra were reduced using the
European Southern Observatory (ESO) pipeline \citep{dekker00} and the
UVES Post-Pipeline Echelle Reduction (UVES POPLER) software
\citep{murphy19}.
We adopted the CGM metallicities modeled from \citet{pointon19}. In
summary, the CGM metallicities were modeled using a combination of
either {\it HST}/COS or {\it HST}/COS+ Keck/HIRES or VLT/UVES spectra.
The column densities were obtained from Voigt profile fits modeled
using VPFIT \citep{carswell14}. \citet{pointon19} account for a
non-Gaussian line spread function (LSF) of the COS spectrograph by
using its wavelength-dependent LSF \citep{kriss11} convolved with the
model profile during the fitting process. They assumed Gaussian LSF
for the HIRES and UVES data. When fitting the absorption profiles,
they fit the minimum number of components to obtain a satisfactory fit
with reduced $\chi^2 \sim 1$.
The CGM metallicities are calculated in \citet{pointon19} by fitting a
grid of ionization properties generated by the ionization modeling
suite Cloudy to the calculated column densities \citep{ferland13}. We
assume a uniform single-phase layer of gas, with no dust, having solar
abundance that is irradiated by a background UV spectrum. We adopt the
HM05 UV background to generate the grids to be consistent with
previous surveys \citep{lehner13,lehner19,wotta16,wotta19}. We used
the Markov Chain Monte Carlo (MCMC) technique described by
\citet{crighton13} to find the best-fit metallicity (quoted as the
[Si/H] ratio) and ionization parameter to the measured column
densities. The modeled N({\hbox{{\rm H}\kern 0.1em{\sc i}}}) and CGM metallicities adopted from
\citet{pointon19} are shown in Table~\ref{tab:morph}.
\subsection{HST Imaging and Galaxy Models}
All galaxy inclination angles and galaxy-quasar azimuthal angles were
adopted from \citet{kacprzak15} and \citet{pointon19}. All
quasar/galaxy fields have been imaged with {\it HST} using either ACS,
WFC3 or WFPC2. Details of the observations are found in
\citet{kacprzak15}. ACS and WFC3 data were reduced using the
DrizzlePac software \citep{gonzaga12} and cosmic rays were removed
during the multidrizzle process when enough frames were available,
otherwise L.A.Cosmic was used \citep{vandokkum01}. WFPC2 data were
previously reduced using the WFPC2 Associations Science Products
Pipeline (WASPP) \citep[see][]{kacprzak11b}. Galaxy morphological
parameters were modeled with a two-component disk$+$bulge model using
GIM2D \citep{simard02}, where the disk component has an exponential
profile while the bulge has a S{\'e}rsic profile with $0.2\leq n\leq
4.0$. We apply the standard convention of an azimuthal angle
$\Phi=0^{\circ}$ defined to be along the galaxy projected major axis
and $\Phi=90^{\circ}$ defined to be along the galaxy projected minor
axis.
Galaxy photometry was adopted from \citet{kacprzak15}, who used the
Source Extractor software \citep[SExtractor;][]{bertin96} with a
detection criterion of 1.5~$\sigma$ above background. The $m_{HST}$
magnitudes in each filter are quoted in the AB system and are listed
in Table~\ref{tab:morph}. We adopt calculated halo masses and virial
radii from \citet{ng19}, who applied halo abundance matching methods
in the Bolshoi N-body cosmological simulation \citep{klypin11}; see
\citet{churchill13a,churchill13b} for further details. We then
calculate stellar masses using abundance matching models from
\citet{moster10} as described by \citet{stewart11b}.
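The halo-to-stellar-mass step can be sketched with the double power-law abundance matching relation of \citet{moster10}. This is an indicative sketch, not the exact Ng et al.\ or Stewart et al.\ implementation; the parameter values quoted in the comments follow the published Moster et al.\ (2010) fit and should be treated as approximate.

```python
import numpy as np

def stellar_mass_from_halo(log_Mh):
    """Return log10(M*/Msun) for a halo mass log10(Mh/Msun).

    Double power-law stellar-to-halo mass ratio (Moster et al. 2010 form);
    parameter values are the published best fit, quoted here as indicative.
    """
    Mh = 10.0 ** np.asarray(log_Mh, dtype=float)
    M1 = 10.0 ** 11.884         # characteristic halo mass
    norm = 0.0282               # normalization of the peak ratio
    beta, gamma = 1.057, 0.556  # low- and high-mass slopes
    ratio = 2.0 * norm / ((Mh / M1) ** (-beta) + (Mh / M1) ** gamma)
    return np.log10(ratio * Mh)

# A ~10^12 Msun halo hosts roughly a Milky Way-mass galaxy
log_mstar = stellar_mass_from_halo(12.0)
```

The efficiency peaks near the characteristic halo mass and falls off toward both lower and higher halo masses, which is why the stellar masses in Table~\ref{tab:morph} span a narrower range than the halo masses.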
\subsection{Galaxy Spectroscopy}
Galaxy spectra were obtained using the Keck Echelle Spectrograph and
Imager, ESI, \citep{sheinis02}. Details of the ESI/Keck observations
are presented in \citet{kacprzak19} and \citet{pointon19}. We binned
the CCD by two in the spatial directions resulting in pixel scales of
$0.27-0.34''$ over the echelle orders of interest. Also, we binned the
CCD by two in the spectral direction resulting in a resolution of
22~\hbox{~km~s$^{-1}$}~pixel$^{-1}$ (${\rm FWHM}\sim90$~{\hbox{~km~s$^{-1}$}}) for a $1''$ slit. ESI
has a wavelength coverage of 4000--10,000~{\AA}, which allows for the
detection of multiple emission lines such as the {\hbox{[{\rm O}\kern 0.1em{\sc ii}]}} doublet,
$\rm{H}\beta$, the {\hbox{[{\rm O}\kern 0.1em{\sc iii}]}} doublet, $\rm{H}\alpha$, and the [\hbox{{\rm N}\kern 0.1em{\sc ii}}] doublet.
All ESI data were reduced using IRAF. Galaxy spectra are both vacuum
and heliocentric velocity corrected to provide a direct comparison
with the quasar spectra. The derived wavelength solution was verified
against a catalog of known sky-lines, which resulted in an RMS
difference of $\sim0.03$~{\AA} ($\sim2$~{\hbox{~km~s$^{-1}$}}). The Gaussian fitting
software \citep[FITTER: see][]{archiveI} was used to simultaneously
fit the {$\rm{H} \alpha$} and [\hbox{{\rm N}\kern 0.1em{\sc ii}}] emission lines to determine their total
flux. The line centers and velocity widths were tied together for the
two lines. We compute a gas-phase oxygen abundance for each galaxy
using the N2 relation of \citet{pettini04}, where
12+log(O/H)=8.90+0.57$\times$N2 (N2$\equiv$log(\hbox{{\rm N}\kern 0.1em{\sc ii}}/{$\rm{H} \alpha$})). Galaxy
ISM metallicities are shown in Table~\ref{tab:morph}.
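The N2 calibration quoted above is a one-line calculation; the sketch below applies it to hypothetical line fluxes and normalizes to the solar oxygen abundance used in Figure~\ref{fig:MMR}.

```python
import math

def oxygen_abundance_N2(f_nii, f_halpha):
    """Gas-phase 12+log(O/H) from the Pettini & Pagel (2004) N2 relation:
    12 + log(O/H) = 8.90 + 0.57 * N2, with N2 = log([NII]/Halpha)."""
    n2 = math.log10(f_nii / f_halpha)
    return 8.90 + 0.57 * n2

# Example with hypothetical fluxes: [NII]/Halpha = 0.1 gives N2 = -1,
# so 12+log(O/H) = 8.33
abund = oxygen_abundance_N2(1.0, 10.0)

# Normalize to the solar oxygen abundance of 8.69 (Asplund et al. 2009)
logZ = abund - 8.69  # = -0.36
```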
\begin{figure*}
\begin{center}
\includegraphics[angle=0,scale=0.95]{AllMZ.eps}
\caption{(Left) The stellar mass and ISM metallicity relation
normalized to the solar oxygen abundance of 8.69
\citep{asplund09}. The dashed line is the expected stellar
mass-metallicity relation at the mean redshift of our sample
$z=0.28$ (see text for details). The CGM metallicity [Si/H] is also
shown as a function of stellar mass, which exhibits large scatter
relative to the ISM metallicities at fixed stellar mass. (Middle)
ISM and CGM metallicities as a function of azimuthal angle and
  (Right) inclination angle. As expected, the ISM metallicities are
  flat as a function of azimuthal and inclination angle, while the CGM
  metallicity exhibits a large $\sim2$~dex scatter.}
\label{fig:MMR}
\end{center}
\end{figure*}
\begin{figure}[!h]
\begin{center}
\includegraphics[angle=0,scale=1.2]{AlldzvM.eps}
\caption{The difference between the CGM and galaxy ISM
  metallicities as a function of host galaxy stellar mass. All but
  one system has a CGM metallicity lower than the galaxy ISM
  metallicity. The 21 CGM metallicity measurements are offset from
  the galaxy metallicity by a mean of log($dZ)=-1.17\pm0.11$, where
  the error is quoted as the standard
error in the mean. The scatter in this difference can be expressed
by the standard deviation of $1\sigma=0.72$. This metallicity
difference is independent of stellar mass over the small range
examined here.}
\label{fig:dZvsM}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[angle=0,scale=1.2]{AlldZvsD.eps}
\caption{The difference between the CGM and galaxy ISM
  metallicities shown as a function of impact parameter (left) and as a
function of the fraction of the virial radius (right). All of the
CGM measurements reside within 200~kpc and most within 1~R$_{vir}$
of their host galaxies. Arrows represent limits on the CGM
metallicities. Note the large scatter at all distances away from the
galaxy with no obvious metallicity gradient.}
\label{fig:ZvsD}
\end{center}
\end{figure*}
\section{Results}\label{sec:results}
In this section we explore the metallicities of both the galaxy ISM
and of the CGM to determine if there is a relationship between distant
CGM gas and its host galaxies.
In Figure~\ref{fig:MMR}, we present galaxy ISM metallicities as
determined from the {$\rm{H} \alpha$} and {\hbox{{\rm N}\kern 0.1em{\sc ii}}} line ratios normalized to oxygen
solar abundance of 8.69 \citep{asplund09}. Here log($Z$) is defined
for galaxies as the ratio of the mass of oxygen in the gas-phase and
the hydrogen gas mass. Our sample of galaxies spans a stellar mass
range of 9.4$\leq$log(M$_*$/M$_{\odot}$)$\leq$10.9, with roughly
0.5~dex errors on the stellar masses. The dashed line shows the galaxy
mass-metallicity relation obtained from the formalism of
\citet{zahid14} evaluated at our mean galaxy redshift of $z=0.28$ and
then normalized to the solar oxygen abundance. We also normalize the
\citet{zahid14} relation to the N2 \citet{pettini04} calibration used
to calculate our galaxy ISM metallicities following the methods of
\citet{kewley08}\footnote{ Note \citet{zahid14} used the
\citet{kobulnicky04} calibration and the difference between these
calibration methods can lead to offset of $\sim$0.3~dex in
metallicity.}.
We find that our galaxy ISM metallicities agree with the expectations
and follow the general trend of increasing metallicity with increasing
mass having a $1\sigma$ scatter of 0.19~dex about the relation. This
scatter could be further reduced if the mass-metallicity relation was
computed for the full range of galaxy redshifts observed here, however
this is beyond the scope of this paper and not necessary for our
analysis.
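The mass-metallicity parameterization referenced above can be sketched as follows. The functional form follows \citet{zahid14}, but the parameter values below are indicative choices for $z\sim0.3$ (the normalization and turnover mass evolve with redshift in the original formalism); they are assumptions here, not the fitted values used in this paper.

```python
import numpy as np

def mzr(log_mstar, Z0=8.75, log_M0=9.8, gamma=0.6):
    """Mass-metallicity relation in the Zahid et al. (2014) form:
       12 + log(O/H) = Z0 + log10(1 - exp(-(M*/M0)**gamma)).
    Parameter values here are illustrative, not the paper's fit."""
    m = 10.0 ** (np.asarray(log_mstar, dtype=float) - log_M0)
    return Z0 + np.log10(1.0 - np.exp(-(m ** gamma)))

masses = np.linspace(9.4, 10.9, 6)   # the sample's stellar mass range
oh = mzr(masses)                     # 12+log(O/H); subtract 8.69 for solar
```

The relation rises with stellar mass and saturates at the asymptotic abundance $Z_0$ at high mass, reproducing the flattening of the dashed curve in Figure~\ref{fig:MMR}.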
While the galaxy metallicities exhibit a tight relation with stellar
mass (0.19~dex scatter), the metallicity of the CGM shows no
dependence on stellar mass and exhibits a scatter that ranges over
2~dex. This clearly shows that the CGM is more complex and the
metallicity is likely driven by a range of processes compared to that
of the ISM. In all cases, except for a poorly constrained limit, the
CGM metallicity is always lower than the galaxy ISM
metallicity. Figure~\ref{fig:dZvsM} shows the difference between the
CGM and galaxy ISM metallicities as a function of host galaxy stellar
mass. Using a survival analysis with all of the data, we find that the
CGM metallicity is offset from the galaxy metallicity by
log($dZ)=-1.17\pm0.11$, where log($dZ$) is quoted as the mean offset
from the galaxy metallicity while the error is quoted as the standard
error in the mean. The scatter in this difference can be expressed by
the standard deviation of $1\sigma=0.72$. This metallicity difference
is independent of stellar mass over the small range examined here.
The relative CGM and ISM metallicities for the stellar mass range of
9.7$\leq$M$_*<$10.3 and 10.3$\leq$M$_*\leq$10.8 exhibit metallicity
differences of $-1.27\pm0.14$ (1$\sigma$ scatter of 0.91) and
$-1.09\pm0.17$ (1$\sigma$ scatter of 0.67), respectively, with values
quoted as the mean difference while the error is quoted as the
standard error in the mean. We have also applied generalized Kendall
and Spearman rank correlation tests, which account for measured
limits in the sample \citep{feigelson85}, between the stellar mass and
log($dZ$). We find no strong supporting evidence for trends between
stellar mass and log($dZ$) ($2.1\sigma$ -- Kendall, 2.3$\sigma$ --
Spearman).
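A simplified version of these correlation tests can be sketched as below. Note the assumption: this sketch applies the plain Kendall and Spearman tests to detections only, whereas the quoted significances come from the generalized versions of \citet{feigelson85}, which also incorporate the upper limits. The sample values are synthetic.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr, norm

# Synthetic stand-in for the 21 measured systems (no built-in trend)
rng = np.random.default_rng(0)
log_mstar = rng.uniform(9.4, 10.9, 21)    # hypothetical stellar masses
log_dZ = rng.normal(-1.17, 0.72, 21)      # hypothetical CGM - ISM offsets

tau, p_tau = kendalltau(log_mstar, log_dZ)
rho, p_rho = spearmanr(log_mstar, log_dZ)

# Convert a two-sided p-value into a Gaussian-equivalent significance,
# the convention used when quoting, e.g., "2.1 sigma"
sigma_tau = norm.isf(p_tau / 2.0)
```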
Given that our sample is at low redshift ($\left<z\right>=0.28$), where
one could expect metal-poor accretion to be minimal and metals within
the CGM to be well mixed or enhanced by Gyr of ongoing outflows,
there is still a significant metallicity difference between the host
galaxy and the CGM. Furthermore, this difference is independent of
stellar mass.
The middle and right panels of Figure~\ref{fig:MMR} show both the
galaxy and CGM metallicities as a function of the azimuthal and
inclination angles, respectively. As expected, the galaxy ISM
metallicity is independent of the galaxy orientation with respect to
the quasar sight-line as well as the galaxy's inclination angle. The
CGM however exhibits large scatter as a function of azimuthal angle
and inclination angle as previously shown by \citet{pointon19} using a
larger sample of 47 galaxy-absorber pairs. \citet{pointon19} explored
how the CGM metallicity behaves relative to the galaxy inclination and
azimuthal angles and found no apparent trend, which conflicts with a
scenario of planar accretion and bi-polar outflows
\citep[e.g.,][]{nelson19}. However, the Pointon et al.\ study did not
address how the relative galaxy-CGM metallicity behaves as a function
of orientation or impact parameter.
\begin{figure*}
\begin{center}
\includegraphics[angle=0,scale=1.2]{AlldZvsAz.eps}
\caption{The difference between the CGM and galaxy ISM
metallicities as a function of azimuthal angle (left) and galaxy
inclination (right). In a simple CGM model, we would expect low
metallicity gas to accrete along the major axis (bottom left corner
of the plot) while higher metallicity gas outflows along the minor
axis (upper right corner of the plot). When taking the galaxy
  metallicity into account, we do not find a correlation with
  azimuthal angle as expected from the simple model. There exists a
  range of metallicities at all azimuthal angles. (Right) As a galaxy
becomes more edge-on, it is expected that outflows and accretion
signatures would be more apparent than for near face-on galaxies. We
do find that highly inclined galaxies have a range of metallicities
that could arise from outflow and/or accretion signatures.}
\label{fig:ZvsOri}
\end{center}
\end{figure*}
It is unclear how we expect the CGM metallicity to behave as a
function of impact parameter. Simulations predict co-planar accretion
with bi-conical outflows \citep[e.g.,][]{nelson19} and negative radial
metallicity gradients for both outflow and accretion models
\citep[e.g.,][]{freeke12}. Furthermore, simulations of extended disk
ISM metallicities show that galaxies have negative metallicity
gradients extending out to 10--20~kpc
\citep[e.g.,][]{kobayashi11,pilkington12}, while observations show
either flat or negative gradients with significant scatter in their
slopes \citep[e.g.,][]{wuyts16,sanchez18}.
\begin{figure*}
\begin{center}
\includegraphics[angle=0,scale=1.2]{AlldZvsAz_c.eps}
\caption{Same as the left panel of Figure~\ref{fig:ZvsOri} except
  now the data are color-coded by high and low inclination
angles (left) and high and low impact parameters (right). Note that
regardless of low or high inclination, or low and high impact
parameter, there is no correlation with the relative ISM-CGM
metallicity and azimuthal angle.}
\label{fig:ZvsOriC}
\end{center}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[angle=0,scale=1.2]{AlldZvsAz_c2.eps}
\caption{Same as Figure~\ref{fig:ZvsOriC} except now the data
are color-coded as a function of low (log N({\hbox{{\rm H}\kern 0.1em{\sc i}}})<17.2) and high
(log N({\hbox{{\rm H}\kern 0.1em{\sc i}}})$\geq$17.2) modeled {\hbox{{\rm H}\kern 0.1em{\sc i}}} column densities.}
\label{fig:ZvsOriC_NH}
\end{center}
\end{figure}
In the left panel of Figure~\ref{fig:ZvsD}, we present the difference
in metallicity between the CGM and galaxy as a function of impact
parameter. All of our absorption systems are within 200~kpc of the
host galaxy. We find no apparent metallicity gradient ($0.78\sigma$ --
Kendall, 0.92$\sigma$ -- Spearman), with a large scatter of low and
high metallicity systems at all distances from the galaxy. It is
interesting to note that the two lowest metallicity systems reside
within 75~kpc (or within 0.5$R_{vir}$) of the host galaxy, which is
counter-intuitive since one might expect these more unpolluted systems
to reside further away from their host galaxies (unless metal-poor
accretion is really reaching low impact parameters without mixing).
It is more meaningful to show the difference in ISM/CGM metallicities
as a function of the fraction of the virial radius, given that
these galaxies cover a range of halo masses. The right panel of
Figure~\ref{fig:ZvsD} shows the difference between CGM and ISM
metallicities as a function of the fraction of the virial
radius. Almost all of our absorption systems, except for two limits,
reside within 1~R$_{vir}$ of their host galaxies. Again, we find no
strong trend between metallicity and the fraction of the virial radius
($1.63\sigma$ -- Kendall, 1.51$\sigma$ -- Spearman). While the most
metal poor absorbers reside near to the galaxy, there is significant
scatter at all radii. The scatter seen here could be a result of gas
being enriched by outflows, while metal-poor gas could come from
accretion. All of these might be expected to have an orientation
dependence.
In the left panel of Figure~\ref{fig:ZvsOri} we present the difference
between the CGM and galaxy ISM metallicities as a function of
azimuthal angle. In a simple CGM scenario, one may expect that
metal-poor gas relative to the galaxy should accrete along the major
axis of the galaxy disk, which should populate the lower left corner
of the plot. On the other hand, metal-enriched outflows relative to
the host galaxy should occur along the galaxy minor axis, which should
populate the upper right corner of the plot. However, it is clear from
the figure that there is a large range in log($dZ$) from $0$ to $-$2
at all azimuthal angles. We find that the mean metallicity difference
between the CGM and host galaxy for both along the major and minor
axes, bifurcated at 45~degrees, are $-$1.13$\pm$0.18 (1$\sigma$ scatter
of 0.76) and $-$1.23$\pm$0.11 (1$\sigma$ scatter of 0.65),
respectively.
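The major/minor axis bifurcation used above can be sketched as follows. Assumption: this is a simple detections-only calculation on synthetic numbers; the quoted values come from the full sample including limits.

```python
import numpy as np

# Synthetic stand-in for the measured azimuthal angles and offsets
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 90.0, 21)        # azimuthal angle [deg]
log_dZ = rng.normal(-1.17, 0.72, 21)    # CGM - ISM metallicity offsets

# Bifurcate at 45 degrees: major-axis vs minor-axis subsamples
major = log_dZ[phi < 45.0]
minor = log_dZ[phi >= 45.0]

def mean_and_sem(x):
    """Mean, standard error of the mean, and 1-sigma scatter."""
    sd = x.std(ddof=1)
    return x.mean(), sd / np.sqrt(x.size), sd

major_stats = mean_and_sem(major)   # cf. the quoted -1.13 +/- 0.18 (0.76)
minor_stats = mean_and_sem(minor)   # cf. the quoted -1.23 +/- 0.11 (0.65)
```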
It does seem clear that there is no significant relative metallicity
dependence as a function of azimuthal angle. This is consistent with
the results of \citet{pointon19}, who showed that the CGM metallicity
alone does not depend on azimuthal angle. Taking into account the
host galaxy metallicity does not unveil a new result. This
is consistent with the first suggestive results of \citet{peroux16}
using 9 galaxy absorber pairs. Given that there exists a large scatter
in CGM metallicities, while ISM metallicities show very little
scatter, it is not surprising that no additional relationships are
discovered here. It is also interesting to note that the most metal
enriched systems are along the major axis of the galaxy and not the
minor axis, where outflows are expected to dominate. So it is unclear
what the source of these high metallicity systems is and/or whether
they are part of a very extended {\hbox{{\rm H}\kern 0.1em{\sc i}}} disk. It is plausible that recycled
metal-enriched gas could fall back towards the galaxy and reaccrete
along the major axis, which could explain the large scatter seen at
low azimuthal angles. However, it is puzzling that there exist
absorption systems that have much lower metallicities than their host
galaxies along the minor axis.
The right hand panel of Figure~\ref{fig:ZvsOri} shows the relative
metallicity as a function of galaxy inclination. We do not have many
near face-on galaxies in our sample, so we are unable to comment on
the distribution of metallicities here. However, in a simple
inflow/outflow scenario, one would assume that these gas flows may be
more distinguishable for edge-on galaxies. For intermediate to highly
inclined galaxies, we find significant scatter, with log($dZ$) ranging
from $0$ to $-2$. This further indicates that there is likely
no metallicity dependence on galaxy inclination.
It is possible that a combination of inclination angles and/or
impact parameters could dilute any correlation between the relative
galaxy ISM and CGM metallicities as a function of azimuthal angle. In
Figure~\ref{fig:ZvsOriC} we explore the azimuthal dependence of the
relative metallicity bifurcated by high and low galaxy inclination
angles at 57 degrees, which splits the sample roughly equally into two
subsets. For highly inclined galaxies, we would expect to see the
strongest relation between relative metallicity and azimuthal angle
since the quasar sight-line should only pass through individual outflow
and accretion structures and not a blend of the two in projection. We
find a similar scatter in metallicity for both low and high
inclination galaxies. Interestingly, we find the lowest metallicities
relative to their host galaxies tend to be highly inclined and exist
over the full range of azimuthal angles. Low inclination galaxies have
fewer very metal-poor CGM systems which could be due to an averaging
of structures and gas metallicities along the quasar sight-line (i.e.,
passing through both outflows and accreting gas).
The right panel of Figure~\ref{fig:ZvsOriC} also shows the azimuthal
metallicity dependence as a function of low ($D<80$~kpc) and high
($D>80$~kpc) impact parameters. Since it has been shown that outflows
may only extend out to 50--100~kpc \citep[e.g.,][]{bordoloi11,lan18},
one could expect the highest metallicity systems, or at least
metal-enriched systems (at or above the galaxy ISM metallicity), to
exist at low impact parameters and possibly along the galaxy minor
axis. We find that along the minor axis, both low and high impact
parameter systems have a range of metallicities. In fact, the three
low metallicity systems are at low impact parameters, which is
unexpected. Again, along the major axis, both low and high impact
parameter systems have a range of relative CGM to galaxy
metallicity. Therefore, impact parameter does not seem to play a
critical role in the azimuthal dependence of the metallicity
difference between the CGM and the host galaxies. We do find that
intermediate azimuthal angles are dominated by $D<80$~kpc systems,
where possibly extended disks or interactions may also contribute to
the absorption detected here.
It is further possible that any azimuthal dependence could be driven
by the hydrogen column density since the CGM metallicity bi-modality
is only shown for pLLSs and LLSs \citep{wotta16,wotta19}. In
Figure~\ref{fig:ZvsOriC_NH} we show the ISM-CGM metallicity difference
versus azimuthal angle separated into high N({\hbox{{\rm H}\kern 0.1em{\sc i}}})
(logN({\hbox{{\rm H}\kern 0.1em{\sc i}}})$\geq17.2$ -- purple) and low N({\hbox{{\rm H}\kern 0.1em{\sc i}}}) (logN({\hbox{{\rm H}\kern 0.1em{\sc i}}})$<17.2$
-- red). In this figure, high column density systems tend to have
lower metallicities, which is consistent with previous work
\citep[see][]{pointon19}. The vast majority of low column density
systems (10/15 -- red points in Figure~\ref{fig:ZvsOriC_NH}) reside
along the galaxy major axis and only four low column density systems
with metal-line measurements are found at greater than
$\sim60$~degrees. This could suggest that low N({\hbox{{\rm H}\kern 0.1em{\sc i}}}) systems
are better tracers of accretion if the accreting gas has a range of
metallicities; however, more data are required. It is also possible
that low N({\hbox{{\rm H}\kern 0.1em{\sc i}}}) gas along the major axis of galaxies occurs as both
metal-poor gas accretion and metal-enriched recycling.
We also find that 6/10 high column density systems (purple points in
Figure~\ref{fig:ZvsOriC_NH}) reside above an azimuthal angle of
40~degrees, suggesting that high column density systems could better
trace outflows. However, the metallicities along the major and minor
axes are consistent with each other.
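The bifurcation value used in the preceding comparison, log~N({\hbox{{\rm H}\kern 0.1em{\sc i}}})$=17.2$, is the Lyman limit system threshold: the column density at which the gas becomes optically thick ($\tau=1$) to ionizing photons at 912~{\AA}. The short calculation below reproduces it from the hydrogen photoionization cross section at threshold.

```python
import math

# H I photoionization cross section at the Lyman limit (912 A), in cm^2
sigma_912 = 6.3e-18

# Column density at which the optical depth to ionizing photons is unity:
# tau = N * sigma = 1  =>  N = 1 / sigma
N_HI = 1.0 / sigma_912
log_N = math.log10(N_HI)   # ~17.2, the LLS threshold used in the text
```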
\section{Discussion}\label{sec:discussion}
Simulations clearly show that the CGM is complex, yet observations
have shown that the spatial distributions of high and low ions are
azimuthally dependent
\citep{bouche12,kacprzak12a,lan14,kacprzak15,lan18}. Even the internal
dispersion of the CGM absorption for low ions points to accretion and
outflow scenarios \citep{nielsen15}. Furthermore, relative gas and
galaxy kinematics show that low ions are kinematically connected to
their host galaxy by aligning with their rotation curves and being
modelled well by accreting+corotating gas
\citep{steidel02,kacprzak10,kacprzak11a,ho17,martin19,zabl19}. On the other
hand, minor axis gas also seems to be well modelled by outflowing gas
\citep{bouche12,gauthier12,schroetter16}. Finally, the metallicity
distribution of LLS and pLLS appears bimodal, which also suggests that
outflows and accretion are dominant phenomena within the CGM
\citep{wotta16,wotta19}. Thus the spatial distribution of metallicity
around galaxies seemed to be key to understanding the origins of the
CGM.
However, as \citet{pointon19} has shown, CGM metallicity alone has no
correlation with azimuthal angle or inclination regardless of impact
parameter, N({\hbox{{\rm H}\kern 0.1em{\sc i}}}), etc. This is quite disappointing given the simple
picture presented by observations. However, we do not know how the
relative galaxy-ISM and CGM metallicities affect these results given
the well-known galaxy stellar mass and ISM metallicity relation found
at all redshifts
\citep[e.g,][]{tremonti04,sanders14,steidel14,zahid14,kacprzak15b}. Thus
accounting for the galaxy metallicity could enhance any possible
relationship with metallicity and galaxy orientation. Here we examine
these relationships using 25 systems with both galaxy ISM and CGM
metallicities.
We find that although host galaxies follow a stellar mass metallicity
relation (0.19~dex scatter over the mass range
9.4$\leq$log(M$_*$/M$_{\odot}$)$\leq$10.9), the CGM is quite scattered
as a function of stellar mass spanning 2~dex in metallicity. This is
expected as galaxy ISM metallicities are driven by stellar evolution
and gas accretion and are averaged over entire galaxy disks while the
CGM detected along point-like quasar sightlines may originate from IGM
gas accretion, from nearby galaxies/satellites, or from recycled and
outflowing gas generated from within the galaxy. We find that the
mean of the CGM metallicities is lower than the mean galaxy
metallicity by $-1.17\pm0.11$. This offset is independent of stellar
mass over the small range examined here. The mean CGM metallicities
for stellar mass ranges of 9.7$\leq$log(M$_*/$M$_{\odot}$)$<$10.3 and
10.3$\leq$log(M$_*/$M$_{\odot}$)$\leq$10.8 are lower than the
galaxy metallicity by $-1.27\pm0.14$ (1$\sigma$ scatter of 0.91) and
$-1.09\pm0.17$ (1$\sigma$ scatter of 0.67), respectively. Thus there
is a significant difference between the host galaxy and the CGM
metallicities, which is stellar mass independent. There may be a
small hint of a correlation with stellar mass and log($dZ$) but it is
not highly significant ($2.1\sigma$ -- Kendall, 2.3$\sigma$ --
Spearman).
These results are consistent with the findings of \citet{prochaska17}
who has shown that the CGM metallicity does not correlate with the ISM
metallicity of host galaxies nor does the CGM metallicity correlate
with stellar mass. However, it is difficult to compare our works
directly since they use the \citet{haardt12} (HM12) ionizing
background. Previous works have shown that the harder spectrum of
ionizing photons in the HM12 background, due to a lower escape
fraction of radiation from galaxies compared to the HM05 background,
leads to higher metallicity estimates and an anti-correlation between
N({\hbox{{\rm H}\kern 0.1em{\sc i}}}) and metallicity \citep{howk09,werk14,wotta16,wotta19,chen17,
zahedy19, pointon19}.
All of our CGM metallicities are lower than the galaxy ISM
metallicities. This could imply that the CGM may originate from a
nearby satellite galaxy, or its outflow or tidal debris. However, the
halo gas cross-section of satellite galaxies is predicted to be
extremely small \citep{gauthier10,martin12,tumlinson13} and thus an unlikely
contributor to the bulk of the detected absorption.
The lower CGM metallicities could imply that gas ejected from galaxies
is diluted with metal poor gas within the CGM or metals ejected from
host galaxies have taken a long time to travel out into the CGM. If
the gas does take a long time to travel out into the CGM, an
interesting experiment is to ask at what age of the Universe a
$z=0.28$, M$_{*}\sim$10$^{10.5}$~M$_{\odot}$ galaxy had an ISM
metallicity that was $-$1.2~dex lower than its current value. We
estimate this gas must have been ejected prior to $z=3$, given the
limits of the stellar mass evolution of a $\sim$10$^{10.5}$~M$_{\odot}$
galaxy \citep{papovich15} combined with the evolution of the
mass-metallicity relation \citep{mannucci09,zahid14}. Thus, the gas
would have to have been ejected roughly $>$8~Gyr prior to $z=0.28$ in
order for the galaxy to have had such a low metallicity. This large time-scale
provides ample time for ejected gas to travel out into the CGM and
also recycle back to the disk since this is estimated to take at least
1 Gyr \citep[e.g.,][]{oppenheimer09,oppenheimer10}. However, this also
assumes no gas mixing, which would likely further change the
metallicity of the ejected material. So it seems possible that any
metal poor gas that was ejected at early times should have been
enriched several times over a $>$8~Gyr time-frame. Thus, in order to
find low metallicity systems along the minor axis of galaxies,
outflowing gas would have to be well-mixed with its metal-poor
surroundings within the CGM or maybe the cool CGM is not a good tracer
of galactic outflows.
\citet{peroux16} first looked into the difference between the galaxy
ISM and CGM metallicities as a function of azimuthal angle with nine
galaxies and suggested that there seems to be a large scatter along
the major axis while they only had two lower relative metallicity
systems along the minor axis. We further find that the mean
metallicity differences along the major and minor axes, bifurcated at
45~degrees, are $-$1.13$\pm$0.18 (1$\sigma$ scatter of 0.76) and
$-$1.23$\pm$0.11 (1$\sigma$ scatter of 0.65), respectively.
Regardless of whether we examine our sample by low/high inclination or
low/high impact parameter, or low/high column density (or any
combination of these), we do not find any significant relationship
with relative metallicity and azimuthal angle.
So what is going on here and should we be focusing on metallicity when
examining modes of accretion and outflows? Outflows do occur and there
is plenty of evidence that they likely occur along the minor axis and
this gas has to be metal-enriched. Also, some form of accretion must
happen given all the kinematic evidence found for CGM-galaxy pairs and
that fact that galaxies continue to form stars. Yet it is unclear what
the metallicity of that accreting gas could be. Cosmological
simulations predict that gas accretion metallicities range between
$10^{-3}$ and $10^{-0.5}$~$Z_\odot$, depending on redshift and halo mass
\citep{keres05,fumagalli11b,oppenheimer12,freeke12,shen13,kacprzak16},
however this range does overlap with the expected metallicities of
recycled/outflowing gas. Also, the complexity of outflowing gas makes
things worse, given that there is typically hot outflowing material
containing cool entrained clouds. Thus maybe metallicity alone is a
poor indicator of the origins of the CGM gas or the metallicity of low
ions might be a poor indicator of the metallicity of hot outflowing
gas.
Analysis of cosmological simulations from \citet{ford14} showed that
low-ionization metal absorbers tend to arise within inflowing gas,
while high-ionization metal absorbers trace ancient outflowing gas
deposited in galaxy haloes many Gyr ago. \citet{muzahid15} showed a
galaxy having both a metal-poor, low ionization component ($\sim-1.5$)
and a high ionization, metal-rich component ($>0.3$). They concluded
that the low ionization metal-poor phase was consistent with being
recycled material in the galaxy halo and that the high-ionization,
metal-enriched, low density gas presumably originated from
star-formation driven outflows from the host-galaxy. Thus it is
possible that different gas phases have different origins and given
this example, more work is needed to further model the multi-phase CGM
metallicities.
It is still puzzling, however, that pLLSs and LLSs have a bimodal
metallicity distribution, and this needs to be explored within
simulations to determine whether the bimodality is caused by
internal CGM properties or by environmental effects. So far,
cosmological simulations have been unable to reproduce the CGM
metallicity bimodality
\citep{hafen17,hafen18,rahmati18,lehner19}. Furthermore environmental
effects may not be the likely mechanism producing the bimodality
either \citep{pointon19b}. It seems that properties such as
velocities, column densities, and equivalent widths, which are
straightforward to measure, provide the most fruitful evidence for gas flows.
On the other hand, modelling the total metallicity along a given
sight-line is not straightforward either and can lead to confusing
results. We know that the CGM metallicity must vary along the
sight-lines \citep{churchill15,peeples18}, however most studies model
a global single-phase metallicity since it is difficult in most cases
to assign the correct amount of hydrogen to given metal features from
different gas phases in a single spectrum. \citet{lehner19} has shown
that $\sim30$ closely redshift-separated absorbers (separations of
50--400~\hbox{~km~s$^{-1}$}) have metallicity differences ranging from 0 to 1.7~dex.
Only a smaller number of absorption-line systems have been modeled as
multi-phase and with cloud-to-cloud metallicities
\citep[e.g.,][]{prochter10,tripp11,crighton13b,muzahid15,muzahid16,rosenwasser18,zahedy19}. Furthermore,
it is expected that absorption arising from outflows would have large
cloud-to-cloud variations in metallicity and ionization level
\citep[e.g.,][]{veilleux05,rosenwasser18,zahedy19}, which we are
averaging over. We are also metal-biased, in that detecting some metals
at a given velocity does not imply there is no metal-poor gas at that
same velocity in some other spatial location along the sight-line that
is masked by those other metal lines. Maybe the CGM is not well mixed,
metallicity is not a great indicator of dynamic processes, and we
would do best to focus our efforts on dynamic/kinematic measurements to
study gas flows.
Either way, larger and well targeted samples may provide future
insight to the metallicity distribution around galaxies.
\section{Conclusions}\label{sec:conclusion}
We present galaxy ISM and CGM metallicities for 25 absorption systems
associated with isolated star-forming galaxies ($0.07\leq
z\leq0.50$). Galaxy ISM metallicities were measured using {$\rm{H} \alpha$} and
[\hbox{{\rm N}\kern 0.1em{\sc ii}}] emission lines obtained from Keck/ESI spectra. The CGM
metallicities were adopted from \citet{pointon19}, which were modeled
using an MCMC analysis along with Cloudy. We examine the galaxy mass
metallicity relation for our galaxies and their absorption systems. We
also explore whether the relative galaxy ISM and CGM metallicity
correlates with galaxy orientation with respect to the quasar. Our
results are summarized as follows:
\begin{enumerate}
\item We find that our galaxy ISM metallicities agree with
  expectations, following the general trend of increasing
  metallicity with increasing stellar mass with a $1\sigma$ scatter
  of 0.19~dex about the relation determined at
$\left<z\right>=0.28$. This scatter could be further reduced if the
mass-metallicity relation was computed for the full range of galaxy
redshifts in our sample.
\item CGM metallicity shows no dependence on stellar mass ($<2.3
  \sigma$ significance) and exhibits a scatter that ranges over 2~dex.
  The CGM and galaxy metallicity differences for stellar mass
  ranges of 9.7$\leq$M$_*<$10.3 and 10.3$\leq$M$_*\leq$10.8 are
  $-1.27\pm0.14$ (1$\sigma$ scatter of 0.91) and $-1.09\pm0.17$
  (1$\sigma$ scatter of 0.67), respectively. Thus, even at low
  redshift, where one might expect the global metallicities to be more
  homogenized, there is still a significant difference between the
  host galaxy ISM and the CGM metallicities, and this difference is
  independent of stellar mass.
\item The CGM metallicities are always lower than
  the galaxy ISM metallicities and are offset by log($dZ$)$=-1.17\pm0.11$,
  where log($dZ$) is the mean offset from the galaxy metallicity
  and the quoted uncertainty is the standard error in the mean. The
  scatter in this offset is characterized by a standard deviation of
  $1\sigma=0.72$.
\item All of our CGM measurements reside within 200~kpc and
1.5~R$_{vir}$ of their host galaxies. We find no obvious metallicity
gradient as a function of impact parameter or virial radius ($<1.6
  \sigma$ significance). Any such gradient could be diluted by the range
  of galaxy orientations within the sample. Ideally, this sort of work
  would best be done with a large sample of edge-on galaxies.
\item There is no trend in the relative CGM--galaxy metallicity as a
  function of azimuthal angle. We find that the mean metallicity differences along
the major and minor axes, bifurcated at 45~degrees, are
$-$1.13$\pm$0.18 (1$\sigma$ scatter of 0.76) and $-$1.23$\pm$0.11
(1$\sigma$ scatter of 0.65), respectively.
\item Regardless of whether we examine our sample by low/high
inclination or low/high impact parameter, or low/high {\hbox{{\rm H}\kern 0.1em{\sc i}}} column
  density (or any combination of these), we do not find any
  significant relationship between the relative CGM--galaxy metallicity
  and azimuthal angle.
\item
The majority of low column density systems (10/15 --
logN({\hbox{{\rm H}\kern 0.1em{\sc i}}})$<17.2$) reside along the galaxy major axis and only two
low column density systems with metal-line measurements are found at
$\sim60$~degrees. We also find that 6/10 high column density systems
(logN({\hbox{{\rm H}\kern 0.1em{\sc i}}})$\geq 17.2$) reside above an azimuthal angle of
40~degrees, suggesting that high column density systems could better
trace outflows. However, the metallicities along the major and minor
  axes are consistent. This could suggest that low N({\hbox{{\rm H}\kern 0.1em{\sc i}}})
  systems are better tracers of accretion if the accreting gas has a
  range of metallicities. More data are required to determine whether
  these trends really exist.
\end{enumerate}
It is undoubtedly true that the CGM is complex. The community has put
forth a large body of work showing evidence for accretion and
outflows; however, a clear confirmation of cosmological accretion
remains elusive. CGM metallicities and metallicity differences between
the galaxy ISM and the CGM do not help illuminate our understanding of
the CGM, at least with current sample sizes. We further need to address
how assuming averaged line-of-sight metallicities and/or single-phase
metallicities truly affects our results.
An additional issue is that our point-source quasars probe through
individual galaxy halos, which could give rise to large variations in
metallicity within the halo and along the sight-line. Hopefully in the
future, we will be able to use background galaxies, or gravitational
lenses \citep[e.g.,][]{lopez18} to obtain a better sampling of the
halo metallicities and to be less susceptible to line-of-sight
variations. For now, it seems that properties such as velocities,
column densities and equivalent widths that are easy to measure
provide the most fruitful evidence for gas flows.\\
\acknowledgments We thank Roberto Avila (STScI) for his help and
advice with modeling PSFs with ACS and WFC3. GGK and NMN acknowledge
the support of the Australian Research Council through a Discovery
Project DP170103470. Parts of this research were supported by the
Australian Research Council Centre of Excellence for All Sky
Astrophysics in 3 Dimensions (ASTRO 3D), through project number
CE170100012. CWC and JCC are supported by NASA through grant HST
GO-13398 from the Space Telescope Science Institute, which is operated
by the Association of Universities for Research in Astronomy, Inc.,
under NASA contract NAS5-26555. CWC and JCC are further supported by
NSF AST-1517816. Most of the data presented here were obtained at the
W. M. Keck Observatory, which is operated as a scientific partnership
among the California Institute of Technology, the University of
California, and the National Aeronautics and Space Administration. The
Observatory was made possible by the generous financial support of the
W. M. Keck Foundation. Observations were supported by Swinburne Keck
programs 2016A\_W056E, 2015B\_W018E, 2014A\_W178E and
2014B\_W018E. The authors wish to recognize and acknowledge the very
significant cultural role and reverence that the summit of Mauna Kea
has always had within the indigenous Hawaiian community. We are most
fortunate to have the opportunity to conduct observations from this
mountain. Based on observations made with the NASA/ESA Hubble Space
Telescope, and obtained from the Hubble Legacy Archive, which is a
collaboration between the Space Telescope Science Institute
(STScI/NASA), the Space Telescope European Coordinating Facility
(ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
{\it Facilities:} \facility{Keck II (ESI)}
\facility{HST (COS, WFPC2, ACS, WFC3)}.
\section{Introduction}
The unit disk $\mathbb{D}=\{z\in\mathbb{C} \colon \abs{z}<1\}$ can be endowed with the hyperbolic metric
\[d\sigma=\frac{\abs{dz}}{1-\abs{z}^2}.\]
The Schwarz-Pick lemma (e.g., ~\cite{Abook}) implies that
any holomorphic map $f\colon \mathbb{D}\to\mathbb{D}$ does not increase distances in the hyperbolic metric. This is no longer true
for harmonic maps, which verify the Laplace equation $\partial \bar \partial f=0$ but not necessarily the Cauchy-Riemann equation $\bar\partial f=0$.
The harmonic version of the Schwarz lemma (\cite{He}, see also~\cite{CGH}) states
that any harmonic map $f\colon\mathbb{D}\to\mathbb{D}$ with normalization $f(0)=0$ satisfies
\[\abs{f(z)}\leqslant \frac{4}{\pi}\arctan\abs{z},\quad z\in\mathbb{D}.\]
This inequality is sharp~\cite[p. 77]{Dub}. More precisely, for any $r\in (0,1)$ and any small $\epsilon>0$ there is a
bijective harmonic map $f\colon \mathbb{D}\to\mathbb{D}$ such that $f(0)=0$ and
\[f(r)=-f(-r)=\frac{4}{\pi}\arctan r -\epsilon.\]
This map is not a contraction in either Euclidean or hyperbolic metric. With respect to either
metric, the diameter of the disk $\mathbb{D}_r =\{z\in\mathbb{C} \colon \abs{z}<r\}$ is strictly less than the diameter of $f(\mathbb{D}_r)$.
In this note we prove that a bijective harmonic map $f\colon \mathbb{D}\to\mathbb{D}$ does not increase the area of $\mathbb{D}_r$ for any $0<r<1$.
We write $\abs{E}$ for the area (i.e., planar Lebesgue measure) of a set $E$.
\begin{theorem}\label{main}
Let $f\colon \mathbb{D}\to\mathbb{D}$ be a bijective harmonic map. Then
\begin{equation}\label{main1}
\abs{f(\mathbb{D}_r)} \leqslant \abs{\mathbb{D}_r},\qquad 0< r <1.
\end{equation}
If~\eqref{main1} turns into an equality for some $r\in (0,1)$, then $f$ is an isometry.
\end{theorem}
It should be noted that the class of harmonic automorphisms of $\mathbb{D}$ is much wider than
the class of holomorphic automorphisms, which consists of M\"obius maps only. Harmonic homeomorphisms of $\mathbb{D}$
form an interesting and much-studied class of planar maps, see~\cite{CSS,Ka,Pa} or the monograph~\cite{Dub}.
Theorem~\ref{main} is different from most known estimates for harmonic maps in that it remains sharp
when specialized to the holomorphic case.
An immediate consequence of~\eqref{main1} is
$\abs{f(\mathbb{D}\setminus \mathbb{D}_r)} \geqslant \abs{\mathbb{D}\setminus\mathbb{D}_r}$. If $f$ is sufficiently smooth,
we can divide by $1-r$ and let $r\to 1$ to obtain the following.
\begin{corollary}\label{cor}
Let $f\colon \mathbb{D}\to\mathbb{D}$ be a bijective harmonic map that is continuously differentiable in the closed
disk $\overline{\mathbb{D}}$. Then
\begin{equation*}
\int_{\abs{z}=1} \abs{\det Df}\,\abs{dz} \geqslant 2\pi,
\end{equation*}
where $\det Df=\abs{\partial f}^2-\abs{\bar\partial f}^2$ is the Jacobian determinant of $f$.
\end{corollary}
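To make the limit explicit: since $f$ is injective, the change of variables formula gives
$\abs{f(\mathbb{D}\setminus \mathbb{D}_r)}=\int_{r<\abs{z}<1}\abs{\det Df}\,dA$, and for $f\in C^1(\overline{\mathbb{D}})$
\[
\frac{\abs{f(\mathbb{D}\setminus \mathbb{D}_r)}}{1-r} \to \int_{\abs{z}=1}\abs{\det Df}\,\abs{dz},
\qquad
\frac{\abs{\mathbb{D}\setminus \mathbb{D}_r}}{1-r}=\pi(1+r)\to 2\pi
\]
as $r\to 1$.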
Corollary~\ref{cor} was proved in a different way in~\cite{IKO2} where it serves as an important part of the
proof of Nitsche's conjecture on the existence of harmonic homeomorphisms between doubly-connected domains.
In fact, Corollary~\ref{cor} is what led us to think that~\eqref{main1} might be true.
If $f\colon \mathbb{D}\to\mathbb{D}$ is holomorphic, then~\eqref{main1} holds without the assumption of $f$ being bijective.
Indeed, in this case $f(\mathbb{D}_r)$ is contained in a hyperbolic disk $D$ of the same hyperbolic radius as $\mathbb{D}_r$. Since the
density of the hyperbolic metric increases toward the boundary, it follows that the Euclidean radius of $D$ is at most $r$,
which implies~\eqref{main1}.
\begin{question} Does the area comparison~\eqref{main1} hold for general harmonic maps $f\colon \mathbb{D}\to\mathbb{D}$?
Does it hold in higher dimensions?
\end{question}
We conclude the introduction by comparing the behavior of $\abs{f(\mathbb{D}_r)}$ for holomorphic and harmonic maps.
If $f\colon \mathbb{D}\to \mathbb{C}$ is holomorphic and injective, one can use the power series $f(z)=\sum c_n z^n$ to compute
\[\abs{f(\mathbb{D}_r)} = \pi\sum_{n=1}^{\infty} n \abs{c_n}^2 r^{2n}.\]
Since the right-hand side is a convex function of $r^2$, it follows that
\begin{equation}\label{general}
\abs{f(\mathbb{D}_r)}\leqslant r^2 \abs{f(\mathbb{D})},
\end{equation}
which includes~\eqref{main1} as a special case. However, ~\eqref{general} fails for harmonic maps.
Indeed, let $f(z)=z+c\bar z^2$ where $0<\abs{c}<1/2$. It is easy to see that $f\colon \mathbb{D}\to \mathbb{C}$ is harmonic and one-to-one, but
\[\abs{f(\mathbb{D}_r)} = \pi\left(r^2-2\abs{c}^2 r^4\right)\]
is a strictly concave function of $r^2$. Therefore, $\abs{f(\mathbb{D}_r)}>r^2 \abs{f(\mathbb{D})}$ for $0<r<1$.
This example does not contradict Theorem~\ref{main} since $f(\mathbb{D})$ is not a disk.
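For completeness, the injectivity of $f(z)=z+c\bar z^2$ can be verified directly: since
$\abs{\bar z_1^2-\bar z_2^2}=\abs{z_1+z_2}\,\abs{z_1-z_2}<2\abs{z_1-z_2}$ for distinct $z_1,z_2\in\mathbb{D}$, we have
\[
\abs{f(z_1)-f(z_2)}\geqslant \abs{z_1-z_2}-\abs{c}\,\abs{\bar z_1^2-\bar z_2^2}>(1-2\abs{c})\,\abs{z_1-z_2}>0.
\]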
\section{Preliminaries}
Let $f$ be as in Theorem~\ref{main}. We may assume that $f$ is orientation-preserving; otherwise consider $f(\bar z)$ instead.
In this section we derive an identity that relates the area of $f(\mathbb{D}_r)$ with the boundary values of $f$, which
exist a.e. in the sense of nontangential limits.
The Poisson kernel for $\mathbb{D}$ will be denoted $P_{r}(t)$,
\[P_r (t) = \frac{1-r^2}{1-2r\cos t+r^2},\quad 0\leqslant r<1, \ t\in\mathbb{R}.\]
We represent $f$ by the Poisson integral
\begin{equation}\label{Pintegral}
f(r e^{i\theta}) = \frac{\omega}{2\pi} \int_{0}^{2\pi} e^{i\xi(t)}P_r(\theta-t) \, dt,
\end{equation}
where $\xi \colon [0,2\pi)\to [0,2\pi)$ is a nondecreasing function and $\omega$ is a unimodular constant.
By Green's formula we have
\begin{equation*}
\abs{f(\mathbb{D}_r)}= \frac{1}{2}\int_{0}^{2\pi} \im\big(\overline{f(re^{i\theta})} f_{\theta}(re^{i\theta})\big)\,d\theta,
\end{equation*}
where $f_\theta$ indicates the derivative with respect to $\theta$. Since
\begin{equation*}
f_\theta(r e^{i\theta}) = \frac{\omega}{2\pi} \int_{0}^{2\pi} e^{i\xi(t)}P_r'(\theta-t)\, dt,
\end{equation*}
it follows that
\begin{equation}\label{ar2}
\overline{f(re^{i\theta})} f_{\theta}(re^{i\theta}) =
\frac{1}{4\pi^2}\int_{0}^{2\pi} \int_{0}^{2\pi} e^{-i\xi(t)} e^{i\xi(s)} P_r(\theta-t) P_r'(\theta-s) \, dt\,ds.
\end{equation}
Integrating~\eqref{ar2} with respect to $\theta$ and reversing the order of integration, we find
\begin{equation}\label{ar3}
\abs{f(\mathbb{D}_r)}=
\frac{1}{4\pi}\int_{0}^{2\pi} \int_{0}^{2\pi} \mathcal{K}_r \, \sin(\xi(s)-\xi(t)) \, dt\,ds
\end{equation}
where $\mathcal{K}_r$ is a function of $r$, $s$, and $t$,
\[
\mathcal{K}_r = \frac{1}{2\pi} \int_{0}^{2\pi} P_r(\theta-t) P_r'(\theta-s)\, d\theta.
\]
Recall that the Poisson kernel has the semigroup property~\cite[p.62]{St},
\begin{equation}\label{Pconv}
P_{r \sigma}(t) = \frac{1}{2\pi} \int_{0}^{2\pi} P_r(s)P_\sigma(t-s) \, ds,\quad 0\leqslant r,\sigma<1.
\end{equation}
We will only use~\eqref{Pconv} with $\sigma=r$. Differentiation with respect to $t$ yields
\begin{equation}\label{Pconv2}
\frac{1}{2\pi} \int_{0}^{2\pi} P_r(s)P'_r(t-s)\, ds = P_{r^2}'(t)
= -\frac{2r^2 (1-r^4) \sin t}{(1-2r^2\cos t+r^4)^2}.
\end{equation}
Identity~\eqref{Pconv2} provides an explicit formula for $\mathcal{K}_r$,
\begin{equation}\label{kernel1}
\mathcal{K}_r = \mathcal{K}_r(s-t)= \frac{2r^2 (1-r^4) \sin (s-t)}{(1-2r^2\cos (s-t)+r^4)^2}.
\end{equation}
Now we can rewrite~\eqref{ar3} as
\begin{equation}\label{ar4}
\abs{f(\mathbb{D}_r)}=
\frac{1}{4\pi}\int_{0}^{2\pi} \int_{0}^{2\pi} \mathcal{K}_r(s-t)\, \sin(\xi(s)-\xi(t))\, dt\,ds.
\end{equation}
In the next section we will estimate~\eqref{ar4} from above.
\section{Proof of Theorem~\ref{main}}
We continue to use the Poisson representation~\eqref{Pintegral}.
The function $\xi$, originally defined on $[0,2\pi)$, can be extended
to $\mathbb{R}$ so that $\xi(t+2\pi)=\xi(t)+2\pi$ for all $t\in\mathbb{R}$. By~\eqref{ar4} we have
\begin{equation}\label{ar5}
\abs{f(\mathbb{D}_r)}=
\frac{1}{4\pi}\int_{0}^{2\pi} \int_{0}^{2\pi} \mathcal{K}_r(s-t)\, \sin(\xi(s)-\xi(t))\, dt\,ds.
\end{equation}
When $f$ is the identity map,~\eqref{ar5} tells us that
\[
\frac{1}{4\pi}\int_{0}^{2\pi} \int_{0}^{2\pi} \mathcal{K}_r(s-t)\, \sin(s-t)\, dt\,ds = \abs{\mathbb{D}_r}.
\]
The desired inequality $\abs{f(\mathbb{D}_r)}\leqslant \abs{\mathbb{D}_r}$ now takes the form
\begin{equation}\label{ar7}
\int_{0}^{2\pi} \int_{0}^{2\pi} \mathcal{K}_r(s-t)\, \big\{\sin(s-t)-\sin(\xi(s)-\xi(t))\big\}\, dt\,ds \geqslant 0.
\end{equation}
Neither the kernel $\mathcal{K}_r$, which is defined by~\eqref{kernel1}, nor the other factor in the integrand are nonnegative.
We will have to transform the integral in~\eqref{ar7} before effective pointwise estimates can be made.
It will be convenient to use the notation
\begin{equation}\label{alga}
\alpha=s-t, \quad \text{and }\ \gamma=\gamma(\alpha,t)=\xi(\alpha+t)-\xi(t),
\end{equation}
so that the integral in~\eqref{ar7} becomes
\[
\int_{0}^{2\pi} \int_{-\pi-t}^{\pi-t} \mathcal{K}_r(\alpha)\, (\sin \alpha-\sin \gamma )\, d\alpha\,dt.
\]
Since the integrand is $2\pi$-periodic with respect to $\alpha$, our goal can be equivalently stated as
\begin{equation}\label{ar17}
\int_{0}^{2\pi} \int_{0}^{2\pi} \mathcal{K}_r(\alpha)\, (\sin \alpha-\sin \gamma)\, d\alpha\,dt \geqslant 0.
\end{equation}
Note that $\gamma\in [0,2\pi]$ for all $\alpha,t\in [0,2\pi]$.
\textbf{Step 1.} We claim that
\begin{equation}\label{s1t0}
\int_{0}^{2\pi} \int_{0}^{2\pi} \mathcal{K}_r(\alpha)(\gamma - \alpha)\cos\alpha \,d\alpha\,dt =0.
\end{equation}
Indeed, the function $\zeta(t):=\xi(t)-t$ is $2\pi$-periodic, which implies
\begin{equation}\label{s1t1}
\int_{0}^{2\pi} \{\zeta(\alpha+t)-\zeta(t)\}\,dt=0
\end{equation}
for every $\alpha\in\mathbb{R}$. Multiplying~\eqref{s1t1} by $\mathcal{K}_r(\alpha)\cos \alpha$ and integrating over $\alpha\in [0,2\pi]$, we obtain
\[
\int_{0}^{2\pi} \int_{0}^{2\pi}\mathcal{K}_r(\alpha)\{\zeta(\alpha+t)-\zeta(t)\}\cos \alpha \,d\alpha\,dt=0.
\]
It remains to note that $\zeta(\alpha+t)-\zeta(t)=\gamma - \alpha$, completing the proof of~\eqref{s1t0}.
We take advantage of~\eqref{s1t0} by adding it to~\eqref{ar17}, which reduces our task to proving that
\begin{equation}\label{ar8}
\int_{0}^{2\pi} \int_{0}^{2\pi} \mathcal{K}_r(\alpha)\left\{\sin\alpha + (\gamma - \alpha)\cos\alpha - \sin\gamma\right\}\,d\alpha\,dt \geqslant 0.
\end{equation}
\textbf{Step 2.} Let us now consider the function
\begin{equation}\label{tangent}
H(\alpha,\beta):=\sin\alpha + (\beta - \alpha)\cos\alpha - \sin\beta, \quad (\alpha,\beta)\in [0,2\pi]\times [0,2\pi]
\end{equation}
which appears in~\eqref{ar8}. It has a simple geometric interpretation in terms of the graph of the sine function $y = \sin x$.
Indeed, the tangent line to this graph at $x=\alpha$ has equation $y=\sin\alpha + (x-\alpha)\cos\alpha$.
The quantity $H(\alpha,\beta)$ represents the difference in the $y$-values of the tangent line and the graph at $x=\beta$.
Since the sine curve is strictly concave on $[0,\pi]$, it follows that
\begin{equation}\label{hpos1}
H(\alpha,\beta)\geqslant 0,\qquad 0\leqslant \alpha,\beta\leqslant \pi,
\end{equation}
with equality only when $\alpha=\beta$. The upper bound on $\beta$ in~\eqref{hpos1} can be weakened
to $\beta\leqslant 2\pi-\alpha$ thanks to the monotonicity with respect to $\beta$,
\[
\frac{\partial H}{\partial \beta}=\cos\alpha - \cos\beta \geqslant 0,\quad 0\leqslant\alpha\leqslant \pi, \ \alpha\leqslant \beta\leqslant 2\pi-\alpha.
\]
Note that the product $\mathcal{K}_r(\alpha)H(\alpha,\beta)$ is invariant under the central symmetry of the square
$[0,2\pi]\times [0,2\pi]$, i.e., the transformation $(\alpha,\beta)\mapsto (2\pi-\alpha,2\pi-\beta)$. Hence
\begin{equation}\label{pointwise}
\mathcal{K}_r(\alpha)H(\alpha,\beta)\geqslant 0, \qquad (\alpha,\beta) \in \big([0,2\pi]\times [0,2\pi]\big) \setminus (T_1\cup T_2)
\end{equation}
where
\begin{align*}
T_1&=\{(\alpha,\beta)\colon 0< \alpha < \pi, \ 2\pi-\alpha < \beta \leqslant 2\pi\}; \\
T_2&=\{(\alpha,\beta)\colon \pi < \alpha < 2\pi, \ 0 \leqslant \beta < 2\pi-\alpha\}.
\end{align*}
Within the triangles $T_1$ and $T_2$ the product $\mathcal{K}_r(\alpha)H(\alpha,\beta)$ may be negative.
However, for all $(\alpha,\beta)\in [0,2\pi]\times [0,2\pi]$ the following holds.
\begin{equation}\label{symsum}
\mathcal{K}_r(\alpha)H(\alpha,\beta) + \mathcal{K}_r(2\pi-\alpha)H(2\pi-\alpha,\beta) = 2\mathcal{K}_r(\alpha)H(\alpha,\pi) \geqslant 0,
\end{equation}
\end{equation}
where the last inequality follows from~\eqref{pointwise}. We will use~\eqref{symsum} to control
the contribution of triangles $T_1$ and $T_2$ to the integral~\eqref{ar8}.
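The identity in~\eqref{symsum} follows from the symmetries $\mathcal{K}_r(2\pi-\alpha)=-\mathcal{K}_r(\alpha)$ and
\[
H(\alpha,\beta)-H(2\pi-\alpha,\beta)=2\sin\alpha+2(\pi-\alpha)\cos\alpha=2H(\alpha,\pi),
\]
the last equality because $\sin\pi=0$.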
\textbf{Step 3.} For each fixed $t$ the function $\alpha\mapsto \gamma(\alpha,t)$ defined by~\eqref{alga}
is nondecreasing and it maps the interval $[0,2\pi]$ onto itself. Thus,
inequality~\eqref{ar8} will follow once we show that for any nondecreasing function
$\Gamma \colon [0,2\pi]\to [0,2\pi]$
\begin{equation}\label{ar10}
\int_{0}^{2\pi} \mathcal{K}_r(\alpha) H(\alpha,\Gamma(\alpha))\, d\alpha \geqslant 0.
\end{equation}
The integral in~\eqref{ar10} remains unchanged if we replace $\Gamma(\alpha)$ with
$\widetilde{\Gamma}(\alpha)=2\pi-\Gamma(2\pi-\alpha)$. Thus we lose no generality in assuming that
$\Gamma(\pi)\leqslant \pi$. By virtue of~\eqref{pointwise} the integrand in~\eqref{ar10} is nonnegative outside of the interval $[\pi,\alpha_0]$, where
\[
\alpha_0=\sup\{\alpha\in [\pi,2\pi]\colon \alpha+\Gamma(\alpha)\leqslant 2\pi\}.
\]
We claim that
\begin{equation}\label{comp1}
\mathcal{K}_r(\alpha) H(\alpha,\Gamma(\alpha)) \geqslant \mathcal{K}_r(\alpha) H(\alpha,\Gamma(\pi)), \quad
2\pi-\alpha_0< \alpha < \alpha_0.
\end{equation}
Indeed, the inequality
\[
\frac{\partial H}{\partial \beta}=\cos\alpha - \cos\beta \leqslant 0,\quad \abs{\alpha-\pi}\leqslant \abs{\beta-\pi}\leqslant \pi,
\]
implies
\begin{equation}\label{hmot}
H(\alpha,\beta_1) \geqslant H(\alpha,\beta_2),\quad
0\leqslant \beta_1\leqslant \beta_2 \leqslant \min(\alpha,2\pi-\alpha).
\end{equation}
To see that~\eqref{hmot} applies in our situation, note that
$\Gamma(\alpha)\leqslant 2\pi-\alpha_0$ for $\alpha<\alpha_0$.
Inequality~\eqref{hmot} yields
\begin{equation}\label{comp2}
\begin{split}
H(\alpha,\Gamma(\alpha)) &\leqslant H(\alpha,\Gamma(\pi)), \quad \pi\leqslant \alpha < \alpha_0; \\
H(\alpha,\Gamma(\alpha)) &\geqslant H(\alpha,\Gamma(\pi)), \quad 2\pi-\alpha_0 < \alpha \leqslant \pi.
\end{split}
\end{equation}
Multiplying~\eqref{comp2} by $\mathcal{K}_r(\alpha)$, we arrive at~\eqref{comp1}.
Finally, we combine~\eqref{pointwise}, \eqref{comp1}, and~\eqref{symsum} to obtain
\begin{equation}\label{punchline} \begin{split}
\int_{0}^{2\pi} \mathcal{K}_r(\alpha) H(\alpha,\Gamma(\alpha))\, d\alpha
&\geqslant \int_{2\pi-\alpha_0}^{\alpha_0} \mathcal{K}_r(\alpha) H(\alpha,\Gamma(\alpha))\, d\alpha \\
&\geqslant \int_{2\pi-\alpha_0}^{\alpha_0} \mathcal{K}_r(\alpha) H(\alpha,\Gamma(\pi))\, d\alpha \\
&= 2 \int_{\pi}^{\alpha_0} \mathcal{K}_r(\alpha) H(\alpha,\pi)\, d\alpha \geqslant 0,
\end{split}
\end{equation}
completing the proof of~\eqref{ar8}.
\textbf{Step 4.} It remains to prove the equality statement in Theorem~\ref{main}.
Suppose that $\Gamma \colon [0,2\pi]\to [0,2\pi]$ is a nondecreasing function such that $\Gamma(\pi)\leqslant \pi$, and
equality holds everywhere in~\eqref{punchline}. Returning to
the geometric interpretation of $H(\alpha,\gamma)$ in~\eqref{tangent}, we note that
\[\mathcal{K}_r(\alpha) H(\alpha,\pi)>0,\quad 0<\abs{\alpha-\pi}<\pi.\]
This forces $\alpha_0=\pi$, which by definition of $\alpha_0$ implies
\begin{equation}\label{equal1}
\mathcal{K}_r(\alpha) H(\alpha,\Gamma(\alpha))\geqslant 0,\quad 0\leqslant\alpha\leqslant 2\pi.
\end{equation}
Hence, ~\eqref{equal1} must turn into an equality for almost all $\alpha\in [0,2\pi]$.
In view of~\eqref{hpos1} and of the monotonicity of $\Gamma$ this is only possible if
$\Gamma(\alpha)=\alpha$ for all $\alpha\in [0,2\pi]$.
If $\abs{f(\mathbb{D}_r)}=\abs{\mathbb{D}_r}$, then equality holds in~\eqref{ar8}. Then for almost all $t\in [0,2\pi]$
the function $\Gamma(\alpha)=\xi(\alpha+t)-\xi(t)$, or its reflection
$\widetilde{\Gamma}(\alpha)=2\pi-\Gamma(2\pi-\alpha)$, turns~\eqref{punchline} into an equality.
Hence $\xi(\alpha+t)-\xi(t)=\alpha$ for almost all $t\in [0,2\pi]$ and all $\alpha\in [0,2\pi]$.
Thus $\xi$ is the identity function and $f\colon\mathbb{D}\to\mathbb{D}$ is an isometry. Theorem~\ref{main} is proved.
\section*{Acknowledgements}
We thank Tadeusz Iwaniec and Jani Onninen for valuable discussions on the subject of this paper.
\bibliographystyle{amsplain}
\section{Introduction}
Precise shaping of laser beams is crucial to their application as confining potentials for ultracold atoms. Notable among the applications for such trapped cold atomic systems is the quantum simulation of many-particle condensed matter systems in periodic potentials~\cite{lewenstein_ultracold_2007}. Beam shaping is increasingly important too in the creation, as well as subsequent manipulation, of a Bose-Einstein condensate (BEC). Following the initial demonstration of crossing the BEC transition by purely optical means~\cite{BarrettAllOptical}, increasingly precise schemes have been developed by which to optically cool and compress an atomic cloud, including a proposed scheme whereby dynamic beam shaping techniques transform the potential between a sequence of power-laws~\cite{bruce_holographic_2011}. Minimising perturbative effects in any such experiment requires the potential experienced by the atoms to smoothly and accurately conform to a target intensity. However, accurate beam shaping becomes more challenging the greater the deviation from the diffraction-limited Laguerre-Gaussian or Hermite-Gaussian propagation modes. Additional considerations include restricting the formation of interference fringes, caused either by rapid phase variations across the beam profile or high beam coherence in an imperfect optical system. Furthermore, underlying potentials associated with the surrounding experiment may affect the confinement experienced by trapped atoms.
A prominent starting point for many quantum simulation experiments is the experimental realisation of the Bose-Hubbard Hamiltonian achieved by loading quantum degenerate bosonic atoms into an optical lattice~\cite{jaksch_cold_1998}:
\begin{equation}
\label{eqn:BHM}
\mathcal{H}=-J \sum_{\langle i,j \rangle} a_i^\dagger a_j+\frac{U}{2} \sum_i \hat{n}_i \left( \hat{n}_i - 1 \right)+\sum_i \epsilon_i \hat{n}_i
\end{equation}
Here $J$ is the tunnelling amplitude between neighbouring sites, $U$ the on-site interaction energy, and $\hat{n}_i=a_i^\dagger a_i$ the number operator on site $i$. This provides an experimental framework within which to simulate the electron gas in solids, with the freedom to tune interparticle interactions by manipulating lattice parameters. Investigations may be performed into the effects of disorder and, within the Mott insulator regime whereby atoms are localised onto individual lattice sites, into topological states in the fractional quantum Hall effect, or even as a starting point for a quantum register~\cite{ref:BillyAndersonLoc, greiner_quantum_2002, ref:JakschFQHE, ref:ScalableComp}. However, the validity of the cold atom system as a quantum simulator strongly depends on the form of the trapping potential. The last term in Eq.~(\ref{eqn:BHM}) denotes an energy offset on individual lattice sites arising from a typically harmonic external confinement. The spatial dependence of the system density arising from the external potential~\cite{jaksch_cold_1998}, associated with alternating superfluid and Mott-insulating regions, is colloquially referred to as the `wedding-cake' structure. The resultant blurring of phase transitions makes comparison with theory less direct; eliminating this effect would allow answers to open questions such as the phase diagram of the fermionic Hubbard model~\cite{campo_quantitative_2007}. Precise control of the intensity distribution and elimination of external effects is of benefit to many other experimental situations, including the observation of wave dynamics and quantum chaos using a Bose-Einstein condensate confined in an optical corral-type potential~\cite{ref:corral}.
Compensation for the harmonic trapping term can be achieved either by direct cancellation with a compensatory potential to produce a uniform potential landscape, or by modifying the form of the trapping potential of experimental interest. Flat-topped beams hold particular appeal in this regard, used either as a square-well potential or to directly form standing wave optical lattices without a Gaussian envelope profile. Whatever the precise form of the chosen potential, an accurate and smoothly-varying beam profile is imperative to confining, manipulating and probing the trapped atoms.
The freedom to meet the constraints imposed on the optical potential by experimental applications is granted by relaxing restrictions on the trapping plane phase. Optical trapping of ultracold atoms is facilitated by the dipole force, associated with a potential $U_{dip}\left(\vec{r}\right) \propto I(\vec{r})/\delta $ with $I(\vec{r})$ the spatially dependent laser intensity and $\delta$ the detuning of laser light from resonance. The dipole force therefore depends on the intensity gradient of the laser light. Any phase gradient affects only the scattering force, negligible under detuning far from the atomic resonance due to its $I/\delta^{2}$ dependence in comparison to the $I/\delta$ dependence of the dipole force.
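The favourable scaling with detuning can be made concrete with a short sketch (ours, in arbitrary units, not tied to any particular experiment): holding the dipole potential depth $U_{dip}\propto I/\delta$ fixed while increasing the detuning suppresses the scattering rate, which scales as $I/\delta^{2}$.

```python
# Illustrative scaling only, in arbitrary units: at fixed dipole potential
# depth U ~ I/delta, the residual scattering rate ~ I/delta**2 falls as 1/delta.
def intensity_for_depth(depth, delta):
    """Intensity needed to hold a given dipole potential depth: I ~ U * delta."""
    return depth * delta

def scattering_rate(intensity, delta):
    """Scattering-force rate: ~ I / delta**2."""
    return intensity / delta ** 2

depth = 1.0
rates = [scattering_rate(intensity_for_depth(depth, d), d)
         for d in (1.0, 2.0, 4.0)]
# Each doubling of the detuning at fixed trap depth halves the scattering rate.
assert rates[0] / rates[1] == 2.0 and rates[1] / rates[2] == 2.0
```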
Diffractive optical elements can be used to provide the necessary beam shaping precision and versatility. Using these techniques, we can obtain both continuous arbitrary intensity distributions and exotic lattice configurations inaccessible with standing wave interference alone, such as circular distributions corresponding to an infinite 1D lattice~\cite{ref:ferriswheel, ref:ringlattices}. Our approach centres upon diffracting an incident laser beam using an acousto-optic deflector (AOD). The diffractive acoustic wave established in the AOD crystal is determined by a multiplexed input acoustic frequency signal generated using an arbitrary waveform synthesiser (AWS). The relative amplitudes within the multiplexed input determine the proportion of the total light diverted into the first diffracted order corresponding to the appropriate frequency, such that the total intensity pattern corresponds to the sum of the constituent diffracted beams. We thereby achieve precise control over both the position and amplitude of each diffracted beam. The application to discrete lattice patterns is evident, but by calculating the effect of neighbouring beam sites on each other, this approach can be easily extended to produce arbitrary composite continuous patterns including flat-topped beams. Alternatively, rather than the superposition of static frequencies, rapid deflection of a single beam such that trapped atoms experience a time-averaged potential has been successfully demonstrated in red-detuned potentials with minimal heating of trapped atoms~\cite{henderson_experimental_2009}. Dark optical lattices have also been realised by scanning around lattice sites~\cite{Arnold_BlueDetunedScanning}. A combination of time-averaging and the composite beam approach presented here can yield a smoothly dynamically-varying potential, additionally enhancing the scalability of both methods.
AOD-induced rotation and expansion of an optical lattice loaded with a BEC has been previously demonstrated~\cite{Accordion_OptExpress, AccordionAtoms}; dynamic shaped composite potentials would be a straightforward extension to this.
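The multiplexing described above can be sketched in a few lines (illustrative only; the sample rate, tone frequencies and amplitudes below are hypothetical placeholders, not values from the experiment). Summing one tone per beam and reading each tone's amplitude back from the spectrum mirrors how the relative drive amplitudes set the power diverted into each diffracted order:

```python
# Illustrative sketch only: synthesise a multiplexed AOD drive as a sum of
# tones, one tone per diffracted beam. All numerical values are placeholders.
import math

def multiplexed_drive(freqs_hz, amps, sample_rate_hz, n_samples):
    """Sample the sum of sinusoidal tones; each tone steers one diffracted
    beam, and its amplitude sets that beam's share of the optical power."""
    return [sum(a * math.sin(2 * math.pi * f * i / sample_rate_hz)
                for f, a in zip(freqs_hz, amps))
            for i in range(n_samples)]

def tone_amplitude(samples, freq_hz, sample_rate_hz):
    """Recover one tone's amplitude from the DFT evaluated at its frequency."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / sample_rate_hz)
             for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / sample_rate_hz)
             for i, s in enumerate(samples))
    return 2 * math.hypot(re, im) / n

fs = 400e6                    # synthesiser sample rate (placeholder)
freqs = [70e6, 80e6]          # one deflection tone per beam (placeholders)
amps = [1.0, 0.5]             # relative beam amplitudes
sig = multiplexed_drive(freqs, amps, fs, 4000)
```

Because each diffracted beam inherits a distinct drive frequency, neighbouring beams in the composite pattern are mutually frequency-shifted, which is the mechanism that suppresses static interference fringes between them.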
A popular alternative approach to generating arbitrary and dynamic potentials is to use a computer-generated hologram imposed on an incident laser beam using a spatial light modulator (SLM), an array of either liquid crystal or micro-mirror pixels programmatically altering the beam phase. Improved versatility in the range of accessible potentials is granted the higher the phase resolution of each pixel, though this increases the complexity of the required numerical phase profile calculation. Iterative Fourier transform algorithms (IFTAs), of which multiple variants exist~\cite{ref:pasienski}, are extensively used to perform these high-resolution phase calculations. Pixel switching frequency is an important consideration if dynamic manipulation of trapped atoms is an experimental goal. The Texas Instruments digital micro-mirror device SLM has a switching frequency on the order of 50 \si{\kilo \hertz}, and a Boulder Nonlinear Systems ferroelectric liquid crystal SLM around 1 \si{\kilo \hertz}. However, these are both binary devices; with a phase resolution of $2\pi/256$, a Boulder Nonlinear Systems nematic liquid crystal SLM is capable of a far more versatile range of truly arbitrary potentials, but the switching frequency of hundreds of Hz could limit their applicability to dynamic manipulation of optical trapping potentials. AODs have an update frequency on the order of 10 \si{\mega \hertz}, facilitating almost seamless switching between dynamic frames, thus combining the versatility and switching rate necessary for an arbitrary dynamic manipulation sequence.
In experimental situations without restrictions on the image plane phase, a significant advantage of using an AOD to form large continuous patterns is that the resultant potential is composed of multiple beams of different frequencies, the precise frequency separation dependent on the desired beam location and the details of the optical system. This frequency difference circumvents interference effects that arise when sculpting a single, highly coherent beam. Furthermore, unwanted beams arising from diffraction into the zeroth and higher orders are easily eliminated from the trapping plane intensity distribution, albeit with some loss of overall power. In contrast, achieving the highest-accuracy reproduction of large continuous arbitrary targets using spatial light modulation requires introduction of limited amplitude freedom in the trapping plane~\cite{ref:pasienski}, resulting in a significant noise accumulation near the trapping potential which can detrimentally perturb the experimental system.
We illustrate below the accuracy and versatility of the composite beam method for a range of continuous trapping potentials, and demonstrate a process by which an external potential can be compensated to produce a trapping potential tailored to specific experimental requirements. The first example compensates a harmonic term in the case of a square-well target potential; the applicability of the AOD beam shaping method to arbitrary continuous potentials is then illustrated, indicating the utility of the method in both creating and compensating arbitrary continuous potentials as required by the experimental conditions. Details of the experimental methods follow these examples.
\section{Beam shaping using an acousto-optic deflector}
\subsection{Compensation potential}
\label{section:comp}
The effect of the additional harmonic confinement term is perhaps most immediately obvious with regard to flat-topped target potentials, although the principle of applying a compensation potential is identical in other cases. As discussed above, such flat-topped beams are experimentally applicable both in their own right and as a starting point for building other arbitrary potentials.
To create a flat-topped beam using the superposition of diffracted beams, we initially consider the Sparrow resolution criterion~\cite{sparrow, ResolutionSurvey}. This refinement of the Rayleigh criterion, popular in astronomy, states that multiple beams become indistinguishable (their composite intensity distribution maximally flat) when both the first and second derivatives of the composite distribution vanish at the midpoint between the beams. The spacing $a$ between adjacent beams required to achieve a flat-topped composite potential is therefore chosen such that:
\begin{equation}
\label{eqn:sparrow}
\frac{d}{dx} \big\{f(x)+f(x+a)\big\}=0 \quad \mbox{and} \quad \frac{d^2}{dx^2} \big\{f(x)+f(x+a)\big\}=0
\end{equation}
In one dimension, our intensity distribution $f(x)$ is the sum of the $N$ constituent Gaussian beams, with $1/e^{2}$ waist $w$, and relative amplitudes $A_{n}$ and positions $x_{n}$:
\begin{equation}
\label{eqn:GaussSum}
f(x) = \sum\limits_{n}^{N} A_{n} e^{-2(x-x_{n})^{2}/w^{2}}
\end{equation}
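As an illustrative check (not the experimental routine), the Sparrow condition can be solved numerically for the simplest case of two equal-amplitude Gaussians. The pure-Python sketch below, with beam waist $w = 1$ in arbitrary units, bisects the midpoint curvature of the composite profile as a function of the spacing $a$:

```python
import math

def composite_second_deriv(a, w=1.0):
    """Second derivative at x=0 of f(x-a/2)+f(x+a/2) for f(x)=exp(-2x^2/w^2)."""
    def d2(x):
        return (16.0 * x**2 / w**4 - 4.0 / w**2) * math.exp(-2.0 * x**2 / w**2)
    return d2(a / 2.0) + d2(-a / 2.0)

def sparrow_spacing(w=1.0, lo=0.1, hi=3.0, tol=1e-10):
    """Bisect for the spacing a at which the midpoint curvature vanishes."""
    flo = composite_second_deriv(lo, w)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = composite_second_deriv(mid, w)
        if abs(fmid) < tol or hi - lo < tol:
            return mid
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For two isolated beams this recovers $a = w$; including the additional overlapping neighbours of a longer chain reduces the optimal spacing, consistent with the $a \approx 0.527 w$ quoted below for the ten-beam case.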
The beam spacings calculated using the Sparrow criterion provide a good starting point for a feedback process that iteratively optimises beam spacings and relative amplitudes based on intensity variations measured across the composite beam profile; optimisation changes the frequency separations between beams by less than 10\% from their starting values in the case considered here.
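Such a feedback process can be sketched in a simplified form. The toy loop below (pure Python, a noise-free "measurement" of the composite profile sampled at the beam centres, fixed spacings, and a hypothetical gain and iteration count, unlike the joint spacing-and-amplitude optimisation described above) nudges each beam amplitude against the local deviation from the mean intensity:

```python
import math

def composite_profile(x, amps, centers, w=1.0):
    """Composite intensity: sum of Gaussians with 1/e^2 waist w."""
    return sum(a * math.exp(-2.0 * (x - c)**2 / w**2)
               for a, c in zip(amps, centers))

def flatten_amplitudes(centers, w=1.0, gain=0.8, n_iter=500):
    """Toy feedback loop: reduce each beam's amplitude where the measured
    profile sits above its mean over the beam centres, and raise it where
    the profile sits below."""
    amps = [1.0] * len(centers)
    for _ in range(n_iter):
        vals = [composite_profile(c, amps, centers, w) for c in centers]
        mean = sum(vals) / len(vals)
        amps = [a * (1.0 - gain * (v - mean) / mean)
                for a, v in zip(amps, vals)]
    return amps
```

Starting from equal amplitudes at the Sparrow-like spacing, the edge beams are boosted and the interior beams slightly suppressed, strongly reducing the intensity spread sampled at the beam centres.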
Figure~\ref{fig:flattop} shows the experimental realisation of a flat-topped intensity profile as the sum of 10 deflected beam components. In this case, the Sparrow criterion suggests a separation between adjacent beams of $a = 0.527 w$, with $w$ the beam waist. After optimisation, the experimental error is 1.4\% over the flat region of the intensity profile and 2.3\% over the full distribution. In this and subsequent figures, the corrugations visible on the compensation potential arise from dust specks on the imaging camera rather than being features of the potential itself.
\begin{figure}[ht]
\centering
\includegraphics[width=.8\textwidth]{coflatTiffanySep.pdf}
\caption{Intensity distribution measured using a CCD camera (top) for a flat-topped beam comprising 10 individual Gaussians of identical amplitude, with a line profile (bottom, solid line). Superposition of the dash-dot Gaussians yields the calculated target intensity distribution (dashed line).}
\label{fig:flattop}
\end{figure}
Using a sequence of equal-amplitude constituent beams, the composite potential follows a power-law dependence near its centre, with the order of the leading correction to flatness scaling approximately with the number of beams used. The general form of the intensity profile is:
\begin{equation}
\label{eqn:powerlaw}
I(x) = \sum_{n} c_{n} x^{n}
\end{equation}
where the polynomial coefficients $c_{n}$ are distinct from the beam amplitudes $A_{n}$ above.
The beam shown in Fig.~\ref{fig:flattop} is associated with a potential proportional to $x^{10}$. Whilst the example shown is for a single row of Gaussians, this principle can be readily extended to constructing arbitrary-shaped two-dimensional potentials by deflecting the beam in both x- and y-directions with separations calculated as above (see the Experimental Techniques section for discussion of the dual-axis AOD). The only subtlety arising from additional rows of beams is that if the frequency spacing is equal in both x- and y-directions, then beams lying along diagonal lines have the same frequency and thus interfere. Such undesirable interference is easily avoided by using different frequency spacings and elliptical spots to fulfill the Sparrow criterion in both directions; elliptical distributions occur anyway in the focal plane of an optical system with a high numerical aperture and linearly polarised light~\cite{ref:ellipticalspot}. This extension is simple in comparison to a similar extension of the target output of a computer-generated hologram. With an IFTA used to improve the range and versatility of accessible patterns, hologram calculation becomes more complicated for large continuous potentials due to the appearance of optical vortices in the calculation process~\cite{ref:senthilkumaran}. Furthermore, limited by a finite pixel array, spatial light modulators find it difficult to realise a sharp edge to a flat-topped beam due to the high Fourier-space frequencies required. A super-Lorentzian target array is therefore often used, the order of which is a compromise between flatness and calculation accuracy. In contrast, the AOD composite beam approach has an intensity falloff limited only by the beam waist. 
For example, using a composite flat-top consisting of 10 beams as in Fig.~\ref{fig:flattop}, the intensity falls from 95\% of its maximum value to 5\% over $1.6 w$, whereas an eighth-order super-Lorentzian as demonstrated in~\cite{liang_high-precision_2010}, of identical width, has the same intensity falloff over $2.0 w$. The accuracy of exotic patterns calculated using an IFTA can be improved by incorporating Helmholtz propagation into hologram calculation~\cite{ref:gaunt}; although this does not as yet match the approximately 1\% RMS error setting the current accuracy limit on a flat-top generated by binary spatial light modulation~\cite{liang_high-precision_2010}, binary devices are associated with intensity profiles of restricted complexity. This SLM accuracy limit is slightly higher than the accuracy obtained using the AOD composite beams above, but this small difference should be balanced against the improved edge definition and greater complexity possible with the AOD as opposed to a binary SLM.
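The quoted edge sharpness can be verified numerically. The hedged sketch below (pure Python, unit waist, idealised equal-amplitude beams rather than the optimised experimental amplitudes) evaluates a ten-beam composite at the spacing $0.527 w$ and measures the distance over which the intensity falls from 95\% to 5\% of its maximum:

```python
import math

def composite(x, centers, w=1.0):
    """Sum of unit-amplitude Gaussians with 1/e^2 waist w."""
    return sum(math.exp(-2.0 * (x - c)**2 / w**2) for c in centers)

def edge_falloff(n_beams=10, spacing=0.527, w=1.0, dx=1e-3):
    """Distance over which the composite intensity drops from 95% to 5%
    of the plateau maximum, scanning outward past the last beam."""
    centers = [i * spacing for i in range(n_beams)]
    xs = [i * dx for i in range(int((centers[-1] + 4 * w) / dx))]
    peak = max(composite(x, centers, w) for x in xs)
    x95 = x05 = None
    x = centers[-1] - w          # start just inside the plateau edge
    while x < centers[-1] + 4 * w:
        val = composite(x, centers, w)
        if x95 is None and val < 0.95 * peak:
            x95 = x
        if val < 0.05 * peak:
            x05 = x
            break
        x += dx
    return x05 - x95
```

For the idealised beam parameters assumed here this yields a falloff of roughly $1.6 w$, in line with the value quoted in the text.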
However, these flat-topped beams are unlikely to be used in isolation. A harmonic term arising from the external potential of a magnetic trap or additional dipole trapping beam will typically dominate over higher-order power-law terms unless compensated. The flexibility of the composite beam AOD approach allows such compensation to be implemented straightforwardly, with a potential that cancels out the dominant low-order terms of the power-law Taylor expansion.
An arbitrary potential along the x-axis can be expressed as a Taylor series up to order $n_{max}$, the highest order we wish to cancel:
\begin{equation}
\label{eqn:extTaylor}
V_{tot}(x)=\sum_{n=0}^{n_{max}} \frac{V_{tot}^{(n)}(x_0)}{n!}(x-x_0)^n+\mathcal{O}\left((x-x_0)^{n_{max}+1}\right)
\end{equation}
We fit the functional form of this potential using a sum of equal-width Gaussians by adjusting their relative positions and amplitudes. The accuracy increases with the number of beams used, and depends on the complexity of the target distribution. Along the same axis this AOD-generated composite potential has the form:
\begin{equation}
V_{dip}(x)=\sum_{i=1}^N a_i V_{beam}(x-s_i)
\end{equation}
where $N$ is the number of constituent beams, $a_i$ the amplitude of each beam, $s_i$ the displacement along the x-axis and
$V_{beam}(x)$ the Gaussian profile produced by each constituent deflected beam. The optimal set of parameters $\{a_{i}, s_{i}\}$ to cancel the external potential is determined using an optimisation routine.
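Since the amplitudes enter linearly, one plausible form of such a routine (a sketch, not necessarily the routine used here) fixes trial positions $s_i$ and solves for the amplitudes by linear least squares. The pure-Python example below uses a toy harmonic coefficient in arbitrary units and fits six equally spaced Gaussians to the inverted harmonic target $C - \tfrac{1}{2}kx^2$, so that the total potential is flat over the compensation region; note it does not enforce the positivity a physical repulsive beam would require:

```python
import math

def gauss_profile(x, s, w=1.0):
    return math.exp(-2.0 * (x - s)**2 / w**2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def fit_compensation(k=1.0, n_beams=6, w=1.0, half_width=1.5, n_pts=201):
    """Least-squares amplitudes a_i so that sum_i a_i V_beam(x - s_i)
    approximates C - 0.5*k*x^2 over [-half_width, half_width]."""
    s = [-half_width + 2 * half_width * i / (n_beams - 1)
         for i in range(n_beams)]
    xs = [-half_width + 2 * half_width * j / (n_pts - 1)
          for j in range(n_pts)]
    C = 0.5 * k * half_width**2      # offset keeping the target non-negative
    target = [C - 0.5 * k * x**2 for x in xs]
    # normal equations G a = d for the linear amplitude fit
    G = [[sum(gauss_profile(x, s[i], w) * gauss_profile(x, s[j], w)
              for x in xs) for j in range(n_beams)] for i in range(n_beams)]
    d = [sum(gauss_profile(x, s[i], w) * t for x, t in zip(xs, target))
         for i in range(n_beams)]
    a = solve(G, d)
    resid = [sum(a[i] * gauss_profile(x, s[i], w)
                 for i in range(n_beams)) - t for x, t in zip(xs, target)]
    rms = math.sqrt(sum(r * r for r in resid) / len(resid))
    return a, rms
```

Because the trial Gaussians overlap strongly, the residual of this linear fit is small across the compensation window; in practice the positions $s_i$ would also be varied.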
Figure~\ref{fig:compensation} illustrates the use of 6 beams to cancel a second-order (harmonic) term, with matching performed using the Taylor expansion of the potentials. These beams would have a blue frequency detuning to create a repulsive potential. For this example, the experimental error over the entire pattern is 1.8\%.
\begin{figure}[ht]
\centering
\includegraphics[width=.8\textwidth]{otherTiffanySep.pdf}
\caption{Intensity distribution measured using a CCD camera (top) and the corresponding line profile (bottom, solid line) for a harmonic compensation potential, with the target intensity distribution (dashed line) the sum of the dash-dot Gaussians.}
\label{fig:compensation}
\end{figure}
Smoothing the nonuniform density distribution of an ultracold gas cloud trapped in an optical box has been experimentally demonstrated using repulsive spots of laser light positioned using an acousto-optic modulator at points along the axis of the cloud~\cite{meyrath_bose-einstein_2005}. This illustrates the viability of the compensation method in correcting small-scale beam imperfections that can fragment the cloud. However, the current approach and potential generated in Fig.~\ref{fig:compensation} focusses primarily on offering compensation for large-scale external or residual continuous harmonic potentials perturbing the overall form of the trapping potential. As in~\cite{meyrath_bose-einstein_2005}, these may be superimposed onto a dipole potential to smooth out the residual confinement terms and provide a uniform potential landscape, but the combination of this compensation with continuous beam shaping also allows the trapping laser beams to be directly modified.
\subsection{Arbitrary continuous potentials}
\label{section:buchleitner}
As indicated by the ability to modify the target to incorporate a compensation term, this beam sculpting approach can be applied to generating arbitrary discrete or continuous trapping potentials. The example illustrated in Fig.~\ref{fig:buchleitner} extends the flat-topped beam of Fig.~\ref{fig:flattop} by the addition of a spatially separated single Gaussian, to form a potential analogous to a single well connected to a reservoir of variable size. This type of potential has been used for theoretical studies of tunneling and decoherence~\cite{ref:Buchleitner} and our method is well-suited to realising this in practice.
\begin{figure}[ht]
\centering
\includegraphics[width=.8\textwidth]{BuchleitnerTiffanySep.pdf}
\caption{Intensity distribution measured using a CCD camera (top) and corresponding line profile (bottom, solid line) for an extension of the flat-topped beam creating a broad reservoir connected to a single well, with applications to studying decoherence in quantum systems. The target intensity (dashed line) is the sum of the dash-dot Gaussians.}
\label{fig:buchleitner}
\end{figure}
The flat-topped reservoir of Fig.~\ref{fig:buchleitner} consists of 9 deflected beams, optimised from the starting point of equal amplitude and a Sparrow separation of $0.552 w$. The well depth is controlled by a low-amplitude Gaussian midway between the reservoir and a spatially separated single well. The parameters defining the potential and the interplay between reservoir and single well are sufficiently flexible that the intensity distribution can be easily and precisely modified, allowing dynamic real-time manipulation of trapped atoms. The illustrated experimental realisation has an error of 2.1\% over the entire pattern region.
This example illustrates the utility of this method in generating both arbitrary, non-symmetric continuous potentials, and single diffraction-limited points that could be arranged in a discrete lattice structure. Although this method is most suitable for shapes that can be expressed as sums of Gaussians, this is not a significant limitation: numerous arbitrary intensity distributions can be generated with a high level of accuracy. The reproduction accuracy of all patterns could be significantly improved with further optimisation.
\section{Conclusion}
The power of the composite beam method lies in its simplicity. With rudimentary optimisation based on the measured intensity profile, large continuous potentials, or indeed discrete patterns, may be reproduced without loss of accuracy resulting from interference or increased numerical calculation complexity. Arbitrary patterns, most notably flat-topped beams, can be produced with an error of around 2\%, comparing favourably to results demonstrated using SLM techniques, and without compromising either versatility or frame update rate. Systematic optimisation would further enhance this accuracy. Although approached from the opposite direction, superposition of constant frequencies is comparable to the rapid painting of a single beam presented in~\cite{henderson_experimental_2009} in terms of the access to arbitrary continuous potentials with high reproduction accuracy; furthermore, the scalability of each of these methods could be enhanced by their combination.
Alongside other examples, the versatility of the approach has been utilised in producing a repulsive compensation potential, which could be used either to directly shape a target potential, or to create a flat background on which to construct additional potentials. This could significantly improve the relation of quantum simulation experiments to theoretical calculations, allowing phase transitions to be investigated with greater clarity. The compensation process could also be applied in SLM-based experiments, although measures would have to be employed to restrict vortex formation in large continuous distributions. Whilst this investigation focussed upon flat-topped and compensatory potentials, and a uniform continuous potential for decoherence investigations, the precise shaping inherent in the composite beam approach holds universal appeal in creating arbitrary discrete and continuous trapping potentials for a wide range of processes.
\section{Introduction}
\label{sec:intro}
Galaxy formation and evolution is the culmination of competing forces and processes over each galaxy's lifetime. These processes can be internal to the galaxy, such as the energy generated by the central super-massive black hole (SMBH) or the winds generated by star formation. They can also have external origins, such as the gravitational potential of other galaxies \citep{toomre1972} or cosmic gas filaments. Due to this `superposition' of evolutionary processes, it is difficult to isolate the impact of any single one of them on the galaxy, especially when many are still occurring. The environment which a galaxy inhabits has long been suspected of altering its evolutionary path \citep{gunn1972, dressler1980, postman1984, ryden1993}, but with conflicting results on the exact impact. Field environments are relatively simple and provide a control sample for comparison with higher-density environments such as groups and clusters. This comparison is not straightforward, however, since cluster environments are a complex mixture of many, often dramatic, physical processes such as gravitational disruption (owing to the significantly deeper gravitational potential), hydrodynamic effects due to the hot intra-cluster medium (ICM), and thermodynamic effects such as shocks due to the high relative velocities that a galaxy can experience when it first encounters the ICM during in-fall.\par
A number of correlations have been observed between galactic observables and some metric for the local environment. Historically, the projected density of galaxies or \(N\)-th nearest neighbour measurements of the local density have been found to correlate with visual morphology \citep[e.g.][]{dressler1980, cappellari2011a, oh2018, gargiulo2019} and invoked to explain morphological transformations \citep[e.g.][]{bekki2002, kauffmann2004, blanton2005, donofrio2015, coccato2020}. Yet morphology has also been observed to correlate with stellar mass at fixed local density \citep{vanderwel2008}, and so the underlying cause is difficult to discern. This problem permeates through most observed correlations. Some works have shown that galaxies exhibit a lower net angular momentum for a higher local density \citep[e.g.][]{cappellari2011, cortese2019, graham2019, cole2020}, while others find that there is no additional dependence on the environment once the correlation between the angular momentum and stellar mass is accounted for \citep{brough2017}. Finally, the stellar population parameters also suffer from conflicting correlations. Some observations indicate reduced star-formation activity \citep[e.g.][]{balogh2004, poggianti2006, allen2016, owers2019}, a higher stellar metallicity \citep[e.g.][]{schaefer2019}, older stellar ages \citep[e.g.][]{thomas2005, mcdermid2015}, and a lower gas content \citep[e.g.][]{zabel2019} for higher local density, while others indicate that stellar mass is the driver instead of environment \citep{alpaslan2015, goddard2017}. Many of these correlations have also been found in recent cosmological hydrodynamical simulations \citep[e.g.][]{choi2018a, wang2018c, wang2018}. 
More broadly, it is not straightforward to disentangle the effects of mass and environment, and it is likely that both play a role \citep{peng2010, smith2012, mcdermid2015, wang2020b}; joint analyses over all available parameters are therefore needed, such as those applied by \cite{christlein2005} to global galaxy properties. The morphology, mass, and other galactic properties are intricately connected through each galaxy's unique assembly history. It is therefore clear that to uncover what impact the environment has, if any, the complete assembly history must be investigated directly as a function of the environment.\par
The dynamical memory of galaxies plays an important role in attempting to disentangle such assembly histories, assisted by the (often long) dynamical times of galactic systems. As such, the stellar kinematics can provide insight into this history. Dynamical models of stellar kinematics have been employed to measure constraints on galaxy formation for a variety of morphological types and environments, based on a number of different principles. The Jeans equations have been readily applied owing to their relative simplicity and computational efficiency \citep[e.g.][and \protect\citealt{cappellari2016} for a review]{cappellari2013, watkins2013, zhu2016, zhu2016a, poci2017, bellstedt2018, nguyen2019, nitschai2020, li2020e}, though with specific assumptions about the intrinsic velocity distributions of galaxies. Distribution-function models \citep[e.g.][]{cole2017, taranu2017, pascale2018} can be quite general and computationally-efficient, but usually use parametric expressions which may not provide enough freedom. Finally, the \cite{schwarzschild1979} orbit-superposition method provides a general approach without the assumption of specific distribution functions or density distributions, while also providing a wealth of information on the intrinsic properties of the model. Though it is far more computationally-expensive, it has seen a growing diversity of applications \citep[e.g.][]{vandermarel1998, cretton1999, verolme2002, gebhardt2003, valluri2004, cappellari2006, krajnovic2009, krajnovic2015, vasiliev2013, vasiliev2019, leung2018, zhu2018, zhu2018a, vasiliev2020}. Through these models, a galaxy's merger history can be traced through the potentially-complex observed kinematics, but only when confronted with a sufficiently-sophisticated dynamical model which can access the underlying intrinsic properties \citep[e.g.][]{vandenbosch2008, lyubenova2013, krajnovic2015}. 
However, purely-dynamical models can not produce a chronological assembly history, since they lack information about the ages of the stars and where they might have originated.\par
This work is part of the \ftd\ survey; an observational programme to study the Fornax galaxy cluster with the Multi-Unit Spectroscopic Explorer ({\rm MUSE}) at {\rm VLT}. In total, the survey observed \(31\) members of the Fornax cluster with \(m_B < 15\ \si{mag}\), at or interior to the Virial radius \citep[\(R_{\rm vir} \sim 0.7\ \si{Mpc}\);][]{drinkwater2001}. Fornax is a well-surveyed \citep{drinkwater2001, jordan2007, davies2013, munoz2015, iodice2016, pota2018, sarzi2018, zabel2019, scott2020} galaxy cluster at a distance \(D \sim 20\ \si{\mega\parsec}\), and with a total halo mass of \(\logM[{\rm halo}] \sim 13.85\) \citep{jordan2007}. The application of the \shw\ models to \ftd\ data was showcased in \cite{sarzi2018}, and a qualitative comparison to the stellar populations was made in \cite{martin-navarro2019}. In this work, we aim to measure complete chronological assembly histories of three edge-on \SZ\ galaxies - FCC~153, FCC~170, and FCC~177 - by quantitatively combining these sophisticated dynamical modelling techniques with the measured stellar populations. They are discussed in conjunction with a previous application of this method \citep{poci2019} to a massive \([\logM[\star] \sim 11]\) field \SZ, NGC~3115, to probe any potential impact of the cluster environment.\par
This work is organised as follows: the data and target selection are briefly outlined in \cref{sec:data}, and the combined dynamical and population modelling is detailed in \cref{sec:methods}. Results for each galaxy are presented in \cref{sec:res}. The implications of these results in the context of the Fornax cluster and specific quantitative correlations are investigated in \cref{sec:discussion}.
\section{Data and targets}\label{sec:data}
\subsection{Photometry}
The photometric data for this work are taken from the Fornax Deep Survey \citep[FDS;][]{iodice2016, venhola2018}, which acquired deep photometry of the Fornax cluster in the \(u\), \(g\), \(r\), and \(i\) bands using the Very Large Telescope ({\rm VLT}) Survey Telescope ({\rm VST}). We utilise the \(r\)-band photometry to model the surface brightness distribution of these galaxies. We also make use of the \(g-i\) colour to characterise the mass distribution beyond the field-of-view (FOV) of the spectroscopy (see \cref{ssec:massModel}). FDS data extend down to a surface brightness of \(\mu_r \sim 28\ \si{mag\ arcsec^{-2}}\) in the \(r\) band.\par
Distances to these galaxies were measured in \cite{blakeslee2009} via surface-brightness fluctuations. We adopt those measurements here, given in \cref{tab:masses}.
\subsection{Spectroscopy}\label{ssec:spec}
The spectral data are taken from the \ftd\ project \citep{sarzi2018}. In this work, all data products are computed on the spectral range \(\lambda \in[4600, 6700]\ \si{\angstrom}\). This range avoids the problematic sky emission lines and telluric effects. It is wide enough, however, to include many of the important absorption features for the stellar population analyses. Moreover it encapsulates the bandwidth of the \(r\) filter of {\rm VST} which is utilised in conjunction with the spectroscopy to describe the luminosity density of the stellar kinematic tracer. To prepare the data products, the data-cubes are spatially binned to a target signal-to-noise ratio \((S/N)\) of \(100\) using the Python implementation\footnote{Available at \href{https://pypi.org/project/vorbin/}{https://pypi.org/project/vorbin/}} of the Voronoi binning technique \citep{cappellari2003}. This ensures that the kinematic and stellar-population measurements can achieve measurement errors \(\lesssim 5\%\) (shown in \cref{app:schwarz}). \par
Kinematics are extracted for each binned spectrum using the {\tt pPXF} \citep{cappellari2004, cappellari2017} Python package\footnote{Available at \href{https://pypi.org/project/ppxf/}{https://pypi.org/project/ppxf/}}, which determines the line-of-sight velocity distribution (LOSVD) through moments of the Gauss-Hermite series. We extract the first six moments of the LOSVD in each bin: the mean velocity \(V\), velocity dispersion \(\sigma\), skewness \(h3\), kurtosis \(h4\), and higher-order deviations \(h5\) and \(h6\). {\tt pPXF} is run with the {\rm MILES} empirical stellar library \citep{falcon-barroso2011}, and with an additive polynomial of degree \(10\) in order to accurately reproduce the line shapes. Naturally, spectra are dominated by the brightest components of the observed galaxies through the LOS, and so the extracted kinematics are effectively luminosity-weighted.\par
Star formation histories (SFH) and their mean stellar population properties are extracted by running {\tt pPXF} with the {\rm E-MILES} single stellar population (SSP) templates \citep{vazdekis2016} using the `BaSTI' isochrone models \citep{pietrinferni2004}. A multiplicative polynomial of degree \(10\) is included in order to account for the continuum without affecting the relative line shapes. The SSP models are normalised such that we measure luminosity-weighted stellar populations, in order to maintain consistency with the stellar kinematics and subsequent dynamical model (described in \cref{ssec:schwarz}). The stellar-population fits use a first-derivative linear regularisation with \(\Delta=1.0\), which prefers a smoother solution in the case of degeneracy between the SSP models. We assume a fixed \cite{kroupa2002} galaxy initial mass function (IMF). The canonical \cite{salpeter1955} IMF has been shown to disagree with the mass-to-light ratios from stellar dynamics \citep{lyubenova2016}, while the low central velocity dispersion of these galaxies \citep{iodice2019} is consistent with an IMF which is relatively deficient of dwarf stars \citep[e.g.][]{thomas2011, cappellari2012, wegner2012}. In this work, we explore the projected distribution of mean stellar age \((t)\) and metallicity \((\text{total metal abundance, }\chemZH)\). Representative spectral fits are presented in \cref{img:spectra}.\par
\begin{figure}
\centerline{\includegraphics[width=\columnwidth]{spectra.pdf}}
\caption{Fits ({\em red}) to spectra ({\em black}) from the centre and outer regions ({\em top} and {\em bottom} of each pair of spectra, respectively) for our galaxy sample, as labelled on the right. Residuals are shown in green, offset for presentation. The grey bands show regions which are masked during the fit. All spectra are normalised, but vertically offset for presentation. It can be seen that the outer spectra are more noisy, as expected, but that in all cases the data are reproduced well by the fit.}
\label{img:spectra}
\end{figure}
The solutions from the stellar-population run of {\tt pPXF} and the predictions from E-MILES \citep{vazdekis2010} then enable the derivation of the \(R\)-band stellar mass-to-light ratio \((\ml[{\star}]_R)\) for each spectrum, using the mass in stars and stellar remnants for the assumed IMF. This is utilised for the dynamical modelling (\cref{ssec:massModel}). We generate Monte Carlo fits to the spectra by adding random noise within the variance spectra in the data-cubes. Each spectrum is re-fit \(100\) times to generate a new distribution of SSP weights. Luminosity-weighted properties are re-derived for each weight distribution. The `uncertainty' in a given aperture is then estimated from the variance of luminosity-weighted properties across all Monte Carlo simulations in that aperture. These uncertainty maps (shown in \cref{app:schwarz}) are utilised to gauge the stability of our final results.\par
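The Monte Carlo scheme can be sketched in miniature. The toy below is pure Python with two mock "templates" of our own construction standing in for real E-MILES SSPs, with hypothetical ages of 1 and 10 Gyr and a per-pixel noise level corresponding to the \(S/N \sim 100\) binning target; it perturbs a synthetic spectrum within its variance, refits the linear template weights each time, and takes the spread of the luminosity-weighted age as the uncertainty estimate:

```python
import math
import random

random.seed(42)

# two mock templates standing in for SSP spectra (hypothetical shapes/ages)
N = 300
wave = [4600 + i * 7 for i in range(N)]
young = [1.0 + 0.3 * math.sin(w / 150.0) for w in wave]   # "1 Gyr" template
old = [0.8 + 0.3 * math.cos(w / 120.0) for w in wave]     # "10 Gyr" template
ages = [1.0, 10.0]

true_w = [0.7, 0.3]
clean = [true_w[0] * y + true_w[1] * o for y, o in zip(young, old)]
sigma = 0.01   # per-pixel noise, i.e. S/N ~ 100

def fit_weights(spec):
    """Closed-form 2x2 linear least squares for the template weights."""
    a11 = sum(y * y for y in young)
    a12 = sum(y * o for y, o in zip(young, old))
    a22 = sum(o * o for o in old)
    b1 = sum(y * s for y, s in zip(young, spec))
    b2 = sum(o * s for o, s in zip(old, spec))
    det = a11 * a22 - a12 * a12
    return [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]

def lw_age(w):
    """Luminosity-weighted age for luminosity-normalised weights."""
    return sum(wi * t for wi, t in zip(w, ages)) / sum(w)

samples = []
for _ in range(100):
    noisy = [s + random.gauss(0.0, sigma) for s in clean]
    samples.append(lw_age(fit_weights(noisy)))
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean)**2 for s in samples) / len(samples))
```

The recovered mean luminosity-weighted age clusters around the input value of \(3.7\ \si{\giga yr}\), and the scatter of the 100 realisations plays the role of the per-aperture uncertainty described above.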
Similar data products have already been measured for these galaxies as part of \ftd\ \citep{pinna2019a, pinna2019, iodice2019}. The motivation for re-extracting them in this work is to achieve higher \(S/N\) for higher-precision stellar population parameters \citep[see, for instance,][]{asad2020} and to minimise the impact of measuring \(\sigma_{\rm los} \lesssim \sigma_{\rm inst}\) \citep[for the line-of-sight velocity dispersion and instrumental velocity resolution \(\sigma_{\rm los}\) and \(\sigma_{\rm inst}\), respectively;][]{cappellari2017}, albeit on larger spatial bins. Moreover, we fit the specific wavelength range, as discussed above. Finally, luminosity-weighted stellar populations are required for the analyses in this work as described above, while previous measurements are mass-weighted \citep{pinna2019a, pinna2019}. The new kinematics from this work are consistent with previous measurements. The luminosity-weighted ages are systematically younger than the mass-weighted determinations while the metallicities are consistent, as expected \citep{serra2007, mcdermid2015}.
\subsection{Targets}\label{ssec:target}
For this work, due to the nature of our dynamical and population orbital analysis (\cref{sec:methods}), we selected a sub-sample of three galaxies: FCC~153, FCC~170, and FCC~177. These galaxies are all approximately edge-on, and have \SZ\ morphology. They are suitable targets for our analysis because they show no signs of dust or spiral arms. This is important because the dynamical model assumes a steady-state gravitational potential while spiral arms are transient, and dust would impact the inferences of the stellar populations. Additionally, our methodology (\cref{sec:methods}) is most robust for edge-on systems. Each galaxy has a central and outer pointing from the \ftd\ survey, ensuring that the vast majority of the stellar body is covered while retaining the high spatial resolution of {\rm MUSE}. The FDS \(r\)-band images of the three galaxies are shown in \cref{img:photo}, with the MUSE outline shown in dashed brown.
\begin{figure*}
\centerline{\includegraphics[width=\textwidth]{photo}}
\caption{Full \(r\)-band images from FDS, overlaid with the MUSE FOV in dashed brown for FCC~153 ({\em left}), FCC~170 ({\em middle}), and FCC~177 ({\em right}).}
\label{img:photo}
\end{figure*}
As measured from the FDS data, FCC~153, FCC~170, and FCC~177 are at a projected distance of \(1.17\si{\degree}\), \(0.42\si{\degree}\), and \(0.79\si{\degree}\) from the cluster core, respectively \citep{iodice2019b}. These galaxies have integrated \(g-i\) colours of \(0.77 \pm 0.07\), \(1.07 \pm 0.02\), and \(1.80\pm 0.03\), and \(r\)-band surface-brightness radial profiles which extend down to \(28.9\), \(29.2\), and \(29.5\ \si{mag\ arcsec^{-2}}\), respectively, as derived from the FDS photometry \citep{iodice2019b}.\par
These galaxies are the focus of the spectral analyses presented in \cite{pinna2019a, pinna2019}, where their SFH are discussed in the context of the Fornax cluster. Those works conclude that FCC~170 matured more rapidly, having plausibly evolved in an earlier group environment in the initial stages of the Fornax cluster assembly. Conversely, FCC~153 and FCC~177 are seen to exhibit relatively smooth SFH in their thin disk regions. In the study on stellar accretion fractions in members of the Fornax cluster, \cite{spavone2020} find that it is difficult to photometrically disentangle the various components of these three galaxies, since they have indistinguishable surface brightness profiles. They find that low accretion fractions \((\lesssim 50\%)\) are typical for other galaxies at cluster-centric radii similar to FCC~153 and FCC~177. Conversely, for galaxies in the region close to FCC~170, higher accretion fractions \((\gtrsim 50 \%)\) are derived. We aim, as part of this work, to place constraints on this fraction even for the galaxies which are photometrically degenerate.
\section{Stellar content of galaxies}\label{sec:methods}
We endeavour to consider the complete stellar information content available through observations. The model we fit to these data was the self-consistent combination of \shw\ orbit-superposition dynamical models, using a triaxial implementation \citep{vandeven2008, vandenbosch2008}, and stellar-population measurements derived from full spectral fitting. We employed the method described in \cite{poci2019} for this combination. We therefore refer to that work and references therein for details, but lay out the basic structure of the method and the differences with that work in this section.\par
We also ensured that our data are tracing the galaxies themselves, and not components of the cluster environment. The FDS data show that FCC~170 is within the large-scale intra-cluster light (ICL) detected towards the cluster centre \citep{iodice2019}. This ICL component was measured to have total integrated magnitudes of \(12.1 \pm 0.3\) and \(11.4 \pm 0.3\) in the \(g\) and \(r\) bands, respectively, over an area of \(\sim 432\ \si{arcmin^2}\) \citep[assuming a uniform surface brightness distribution;][]{iodice2017b}. At the distance and direction from the cluster centre to FCC~170, the ICL has an \(r\)-band surface brightness of \(\sim 27.5\ \si{mag\ arcsec^{-2}}\) \citep{iodice2017b}. In contrast, the spectroscopic data from the \ftd\ survey have an \(r\)-band target depth of \(25\ \si{mag\ arcsec^{-2}}\) in the faintest regions covered by the FOV. For our sample, the FOVs extend to \(4.90\), \(5.63\), and \(9.61\ \si{\kilo\parsec}\) along the major axis for FCC~153, FCC~170, and FCC~177, respectively, at our adopted distances (see \cref{tab:masses}). In all three cases, therefore, we expect the impact of the ICL on the measured properties to be negligible, being at least \(\sim 100\) times fainter than the faintest regions of the galaxies within our spectroscopic FOV.
\subsection{Stellar mass model}\label{ssec:massModel}
One of the most crucial aspects of a dynamical model of stellar kinematics is the input mass model in which the observed tracer population resides. This is often derived from the observed photometry. We begin by fitting a multi-Gaussian Expansion \citep[MGE;][]{monnet1992, emsellem1994} to the \(r\)-band photometry from FDS using a Python implementation\footnote{Available at \href{https://pypi.org/project/mgefit/}{https://pypi.org/project/mgefit/}} \citep{cappellari2002}. This produces a projected surface-brightness model (\mgeS), which serves as the luminous tracer of the gravitational potential of the galaxy. These models are shown in \cref{app:massMGE}.\par
To reconstruct the mass, the surface brightness must be converted into surface mass density. While standard implementations of \shw\ (and indeed dynamical) models assume a spatially-constant conversion from luminosity to mass, we exploit the spatially-resolved map of stellar \(\ml[{\star}]_R\) in order to account for the prominent structures and variations in the stellar populations that are resolved by the high-quality spectroscopy. We make additional use of the deep FDS photometry to constrain the stellar populations outside of the spectroscopic FOV, thereby constraining the dynamical model well beyond the measured kinematics. We use the predictions from the E-MILES SSP models to derive a relation between \(g-i\) colours and \(\ml[{\star}]_R\), which we assume to be of the form \(\log_{10}(\ml[{\star}]_R) \propto (g-i)\) as found empirically \citep{tortora2011, wilkins2013, mcgaugh2014, du2020d}. The smaller spectroscopic FOV, which is used where available, is thus augmented by the larger photometric FOV to generate \(\ml[{\star}]_R\) on the same extent as the photometry. While the spectroscopic measurements of \(\ml[{\star}]_R\) reach \(\lesssim 60\si{\arcsecond}\) for the three galaxies, the depth of the FDS survey allows this coverage to be extended to \(\sim 150\si{\arcsecond}\), providing a dramatic improvement to the constraints on the mass model. Using this large-scale combined (spectroscopic and photometric) \(\ml[{\star}]_R\) map, \mgeS\ is then converted to a map of surface mass density, to which a mass density MGE (\mgeT) is fit. The fits and results for all \mgeT\ are given in \cref{app:massMGE}. \cref{img:153massMGE,img:170massMGE,img:177massMGE} show deviations of up to \(\pm 30\%\) compared to a spatially-constant \(\ml[{\star}]_R\) (in projection). This approach takes into account not just these deviations in the absolute scale, but also the structures of the stellar populations, producing a more accurate mass model and subsequent dynamical model.\par
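As an illustration of the colour-to-\(M_\star/L\) calibration described above, the following sketch fits the assumed linear form \(\log_{10}(\ml[{\star}]_R) = a + b\,(g-i)\) to a grid of SSP predictions. The grid values here are hypothetical stand-ins, not the actual E-MILES predictions:

```python
def fit_colour_ml_relation(g_i, log_ml):
    """Least-squares fit of log10(M/L_r) = a + b*(g - i),
    the empirical linear form assumed in the text."""
    n = len(g_i)
    mx = sum(g_i) / n
    my = sum(log_ml) / n
    sxx = sum((x - mx) ** 2 for x in g_i)
    sxy = sum((x - mx) * (y - my) for x, y in zip(g_i, log_ml))
    b = sxy / sxx
    a = my - b * mx
    return a, b

def ml_from_colour(a, b, g_i):
    """Predict the stellar M/L_r for a given g-i colour."""
    return 10.0 ** (a + b * g_i)

# Hypothetical SSP grid points (g-i colour, log10 M/L_r); illustrative only.
grid = [(0.6, -0.15), (0.8, 0.05), (1.0, 0.25), (1.2, 0.45)]
a, b = fit_colour_ml_relation([c for c, _ in grid], [m for _, m in grid])
```

Once calibrated on the SSP grid, the relation can be applied pixel-by-pixel to the photometric colour map to extend the \(\ml[{\star}]_R\) coverage beyond the spectroscopic FOV.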
The photometric measurements of \(\ml[{\star}]_R\) are effectively `SSP-equivalent', while the spectroscopic values are derived from the full SFH. To mitigate any systematic offsets this may cause, the photometrically-derived values are re-scaled to match the spectroscopic values in the overlapping regions. We emphasise that the rapidly-varying spatial structures in the stellar populations --- caused primarily by the thin edge-on disks --- are captured by the spectroscopy, while the photometry is utilised only in the region where variations are mild. Coupled with the intrinsic symmetry of the MGE fitting, the photometric \(\ml[{\star}]_R\) serves to extend the range of the MGE model and stabilise the shape of the gravitational potential in that region. It can be seen in \cref{img:153massMGE,img:170massMGE,img:177massMGE} that there are no systematic offsets at the transition from spectroscopically- to photometrically-derived \(\ml[{\star}]_R\), and the level of noise in the colour-derived region is no greater than the pixel-to-pixel scatter in images to which MGE is typically applied. Overall, this procedure allows for the stellar populations to be more robustly accounted for.
\subsection{\shw\ dynamical models}\label{ssec:schwarz}
The basic premise of the \shw\ method is to numerically integrate a large number of permitted orbits within a model for the gravitational potential, then measure their kinematics and compare to observations. For real observations, the gravitational potential is of course unknown and must be iteratively fit for. To achieve this, we used a triaxial implementation of the \shw\ method that has been robustly developed and validated \citep{vandenbosch2008, vandeven2008, zhu2018, zhu2018a, zhu2020, jin2019}. In this implementation, a single model is described by seven parameters: \begin{enumerate*}[label=(\emph{\alph*})] \item the three parameters describing the intrinsic shape and viewing direction of the stellar mass distribution, \(q=C/A\), \(p=B/A\), and \(u=A^\prime/A\), where \(A\), \(B\), and \(C\) are the intrinsic major, intermediate, and minor axes, respectively, and \(A^\prime\) is the projected major axis; \item the mass of the central SMBH, \(M_\bullet\); \item the parameters of the dark matter (DM) profile, which is implemented as a spherical Navarro-Frenk-White (NFW) model \citep{navarro1996}, namely the concentration \(C_{\rm DM}\) and the dark mass fraction at \(r_{200}\), \(f_{\rm DM}\); and \item a global dynamical mass-to-light ratio, which we denote \(\Upsilon\). This parameter can shift the global depth of the potential in order to better match the observed kinematics, but does not change its shape nor therefore which orbital families can reside within it. \(\Upsilon\) is included to account for any deviations in the absolute depth of the gravitational potential due to the assumption of the IMF when computing the \(M_\star/L\) and/or systematics in the assumed DM halo model.\end{enumerate*}\par
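The NFW component of the mass model can be sketched as follows. The conversion of \(f_{\rm DM}\) into an absolute halo mass assumes \(f_{\rm DM} = M_{\rm DM}/(M_{\rm DM}+M_\star)\) within \(r_{200}\), which is one plausible reading of the parameterisation rather than the exact convention of the modelling code:

```python
import math

def nfw_enclosed_mass(r, r200, c_dm, m_dm_r200):
    """DM mass enclosed within spherical radius r for an NFW halo,
    normalised such that the enclosed mass at r200 equals m_dm_r200."""
    rs = r200 / c_dm  # scale radius set by the concentration
    mu = lambda x: math.log(1.0 + x) - x / (1.0 + x)
    return m_dm_r200 * mu(r / rs) / mu(c_dm)

def dm_mass_from_fraction(m_star_r200, f_dm):
    """Absolute DM mass at r200 from the dark-mass fraction, assuming
    f_DM = M_DM / (M_DM + M_star) within r200 (an assumed convention)."""
    return m_star_r200 * f_dm / (1.0 - f_dm)

# Illustrative numbers only (masses in Msun, radii in kpc).
r200, c_dm = 200.0, 8.0
m_dm = dm_mass_from_fraction(3.0e10, 0.8)
m_half = nfw_enclosed_mass(0.5 * r200, r200, c_dm, m_dm)
```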
We streamlined the search through this large parameter-space by making reasonable assumptions about some of these parameters. The masses of the SMBH were fixed according to the empirical \(M_\bullet-\sigma_\eff\) relation of \cite{kormendy2013}, using the \(\sigma_\eff\) measurements for these galaxies reported in \cite{iodice2019}. In addition, we estimated the sphere of influence \(r_i\) of each SMBH, which is utilised by the model but is not a free parameter, using the relation of \cite{vandenbosch2015}. This is expected to have little impact on the model however --- for the galaxy with the largest central velocity dispersion, FCC~170, \(r_i \approx 0.08\si{\arcsecond}\), which is below the pixel scale of MUSE. In addition, the stellar shape parameter \(u\) was fixed to \(u=(1.0-\epsilon)\) for some small number \(\epsilon\) to avoid numerical issues. This assumption is reasonable since regular fast-rotator galaxies are found to be consistent with oblate intrinsic shapes \citep{weijmans2014}. We note that mild triaxiality is still permitted in these models, with the condition that the potential must be axisymmetric in projection. The parameter-space is thus reduced to five dimensions; \(q, p, C_{\rm DM}, f_{\rm DM}, \Upsilon\).\par
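The fixing of \(M_\bullet\) can be illustrated with an \(M_\bullet-\sigma\) power law of the \cite{kormendy2013} type. The normalisation and slope below are approximate published values quoted for illustration, and the sphere-of-influence definition \(r_i = G M_\bullet/\sigma^2\) is a common convention, not necessarily the exact relation of \cite{vandenbosch2015} used in the text:

```python
G_KPC = 4.301e-6  # G in kpc (km/s)^2 / Msun

def mbh_from_sigma(sigma):
    """M-sigma scaling of the Kormendy & Ho (2013) type,
    M_BH ~ 0.309e9 Msun * (sigma / 200 km/s)^4.38; coefficients
    are approximate and intended to be illustrative."""
    return 0.309e9 * (sigma / 200.0) ** 4.38

def sphere_of_influence(m_bh, sigma):
    """r_i = G * M_BH / sigma^2 in kpc, a common definition of the
    SMBH sphere of influence."""
    return G_KPC * m_bh / sigma ** 2

sigma = 150.0                           # km/s, illustrative
m_bh = mbh_from_sigma(sigma)            # roughly 1e8 Msun
r_i = sphere_of_influence(m_bh, sigma)  # a few tens of pc
```

For any plausible \(\sigma_\eff\) of these galaxies, \(r_i\) comes out far below the MUSE pixel scale, consistent with the statement above that the SMBH has little impact on the model.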
Each \shw\ model corresponds to a unique intrinsic gravitational potential. The orbital families which can reside within each gravitational potential are therefore also unique. Thus, each location in the hyper-parameter-space is accompanied by its own library of numerically-integrated orbits. These orbits are characterised by the integrals of motion which they conserve, namely the binding energy \(E\), angular momentum \(I_2\), and the third non-classical conserved integral \(I_3\). Each library of orbits was generated by sampling these integrals in \((E, I_2, I_3) = (30, 20, 10)\) steps \citep[logarithmically for \(E\) and linearly for \(I_2\) and \(I_3\); see][for details of the integral sampling]{cretton2000}. The region around the best-fit model was re-computed with a higher orbit sampling of \((E, I_2, I_3) = (60, 30, 15)\) to increase the resolution of the resulting intrinsic properties. To avoid discreteness in these libraries, each orbit was dithered by a factor of \(5\), creating a cloud of orbits around each \((E, I_2, I_3)\). Using a Non-Negative Least-Squares \citep[NNLS;][]{lawson1995} fit, the model selects the best sub-set of orbits from each library which reproduces the observed kinematics in projection. It simultaneously fits a boundary constraint, which in this work is the projected luminosity distribution, such that the weights assigned to the orbits during NNLS are luminosity weights. Thus, each unique gravitational potential has a corresponding unique set of best-fit orbits.\par
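The NNLS step can be sketched with a toy solver. Real implementations use the Lawson-Hanson active-set algorithm; this minimal stand-in uses projected gradient descent on two mock orbit observables, with all numbers illustrative:

```python
def nnls_fit(A, b, n_iter=5000):
    """Minimal projected-gradient solver for min ||A w - b||^2 with
    w >= 0; a toy stand-in for Lawson-Hanson NNLS. A[k][i] is the
    contribution of orbit i to observable k, b is the observation."""
    m, n = len(A), len(A[0])
    # Normal-equation quantities A^T A and A^T b.
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Conservative step size from the largest row-sum of A^T A.
    step = 1.0 / max(sum(abs(v) for v in row) for row in ata)
    w = [0.0] * n
    for _ in range(n_iter):
        grad = [sum(ata[i][j] * w[j] for j in range(n)) - atb[i]
                for i in range(n)]
        # Gradient step followed by projection onto w >= 0.
        w = [max(0.0, w[i] - step * grad[i]) for i in range(n)]
    return w

# Two mock 'orbit' observables (columns) and an observation built
# from them with true non-negative weights [0.3, 0.7].
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [0.3, 0.7, 1.0]
w = nnls_fit(A, b)
```

In the real models the columns of \(A\) are the projected kinematic and luminosity contributions of each (dithered) orbit bundle, and the recovered \(w\) are the luminosity weights described above.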
By construction, \(\Upsilon\) does not change the shape of the gravitational potential. In a gravitational potential with a fixed shape but varying \(\Upsilon\), the families of orbits do not change. Rather, the velocities of these orbits are simply scaled up or down to reflect a deeper or more shallow potential, respectively, and the NNLS fit is repeated for the scaled orbits. Therefore only four parameters require the computationally-expensive numerical integration of an orbit library. The five free parameters were optimised using an adaptive grid search, whose direction and step-size depend on the existing set of evaluated models, with a large initial spread to avoid local minima. The search terminated once all surrounding models were worse fits to the data. The kinematic fits are shown in the top seven rows of \cref{img:2dmap153,img:2dmap170,img:2dmap177}. The parameter-space searches and best-fit parameters are presented in \cref{app:schwarz}.\par
To avoid artificial bias in the model due to systematic asymmetries in the data, the even (odd) kinematic moments were point-(anti)symmetrised to be consistent with the intrinsic model symmetry\footnote{using the {\tt plotbin} package, available at \href{https://pypi.org/project/plotbin/}{https://pypi.org/project/plotbin/}}. These asymmetries present deviations of up to \(\sim 6\ \si{\kilo\metre\per\second}\) in velocity and velocity dispersion with respect to the symmetrised kinematics, which is of order the measurement uncertainties on the kinematics. The `raw' un-symmetrised kinematics and their Monte Carlo-derived errors are shown in \cref{app:schwarz}.\par
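A minimal version of the point-(anti)symmetrisation, as a stand-in for the {\tt plotbin} routines cited above, might look like:

```python
def point_symmetrise(field, odd=True):
    """Point-(anti)symmetrise a 2D kinematic map about its centre.
    Odd moments (V, h3, h5): V'(x, y) = [V(x, y) - V(-x, -y)] / 2.
    Even moments (sigma, h4, h6): the '+' combination instead."""
    ny, nx = len(field), len(field[0])
    sign = -1.0 if odd else 1.0
    out = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            mirror = field[ny - 1 - j][nx - 1 - i]
            out[j][i] = 0.5 * (field[j][i] + sign * mirror)
    return out

# Mock velocity map with a small asymmetry; the output is exactly
# anti-symmetric about the map centre by construction.
vel = [[-10.0, -5.0, 1.0],
       [-6.0, 0.5, 6.0],
       [-1.0, 5.0, 10.0]]
vel_sym = point_symmetrise(vel, odd=True)
```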
The \shw\ models allow us to investigate the distribution of mass within these galaxies. Enclosed mass profiles are presented in \cref{img:encm}, where the maximum extent of the spectroscopy is marked by \(R_{\rm max}\), while useful quantities are provided in \cref{tab:masses}.
\begin{figure}
\centerline{\includegraphics[width=\columnwidth]{enclosedMasses.pdf}}
\caption{Enclosed-mass profiles of the total (dynamical) mass ({\em solid line}), stellar mass ({\em dashed line}), and DM ({\em dot-dashed line}) for the three galaxies. The effective radii are denoted by the small arrows, and the radial extent of each spectroscopic FOV is shown by the vertical dotted line. The lower radial bound of the figure is set to half the width of the point-spread function from the spectroscopic observations.}
\label{img:encm}
\end{figure}
\begin{table*}
\newcommand\Tstrut{\rule{0pt}{2.6ex}}
\newcommand\Bstrut{\rule[-0.9ex]{0pt}{0pt}}
\(\begin{tabu}{cc|ccc|ccc}
\hline\hline
{\rm Galaxy} & D & R_\eff & M_{\eff}^{\star} & M_{\eff}^{\rm DM} & R_{\rm enc} & M_{\rm enc}^{\star} & M_{\rm enc}^{\rm DM}\Tstrut\Bstrut\\\relax
& [\si{\mega\parsec}] & & [\log_{10}\si{\Msun}] & [\log_{10}\si{\Msun}] & & [\log_{10}\si{\Msun}] & [\log_{10}\si{\Msun}]\\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8)\Tstrut\Bstrut\\\hline
\multirow{2}{*}{\rm FCC 153} & \multirow{2}{*}{20.8} & 19.80\si{\arcsecond} & \multirow{2}{*}{9.55} & \multirow{2}{*}{9.63} & 165.78\si{\arcsecond} & \multirow{2}{*}{10.06} & \multirow{2}{*}{11.43}\\
& & 2.00\ \si{\kilo\parsec} & & & 16.72\ \si{\kilo\parsec} & &\Bstrut\\\hline
\multirow{2}{*}{\rm FCC 170} & \multirow{2}{*}{21.9} & 15.90\si{\arcsecond} & \multirow{2}{*}{10.33} & \multirow{2}{*}{8.83} & 148.39\si{\arcsecond}& \multirow{2}{*}{10.67} & \multirow{2}{*}{10.74}\\
& & 1.69\ \si{\kilo\parsec} & & & 15.76\ \si{\kilo\parsec} & &\Bstrut\\\hline
\multirow{2}{*}{\rm FCC 177} & \multirow{2}{*}{20.0} & 35.90\si{\arcsecond} & \multirow{2}{*}{9.43} & \multirow{2}{*}{9.71} & 133.347\si{\arcsecond} & \multirow{2}{*}{9.73} & \multirow{2}{*}{10.84}\\
& & 3.48\ \si{\kilo\parsec} & & & 12.93\ \si{\kilo\parsec} & &\Bstrut\\\hline
\end{tabu}\)
\caption{Physical properties of the galaxy sample. \((1)\) galaxy name; \((2)\) distance to the galaxy measured by \protect\cite{blakeslee2009} using surface-brightness fluctuations; \((3)\) \(r\)-band effective radius taken from \protect\cite{iodice2019b} and converted into physical units at our adopted distances; \((4)-(5)\) stellar and DM masses enclosed within \(R_\eff\), respectively; \((6)\) radius which encloses \(98\%\) of the stellar mass; \((7)-(8)\) stellar and DM masses enclosed within \(R_{\rm enc}\), respectively.}
\label{tab:masses}
\end{table*}
It can be seen that FCC~170 is baryon-dominated within the spectroscopic FOV, while FCC~153 and FCC~177 transition to DM-dominated at or below their effective radii (given in \cref{tab:masses}). We also define \(R_{\rm enc}\), the spherical radius which encloses \(98\%\) of the stellar mass (derived by integrating the stellar mass profile). This reduces the dependence of the mass profile on the lowest surface-brightness (most uncertain) regions. These radii are in good quantitative agreement with the maximum extent of the surface brightness profiles of \cite{spavone2020}. The corresponding stellar and DM masses within \(R_{\rm enc}\) are denoted by \(M_{\rm enc}^\star\) and \(M_{\rm enc}^{\rm DM}\), respectively. These are given in \cref{tab:masses}. The amount of stellar mass outside of the spectroscopic FOV can be estimated as \(\log_{10}\left[M_\star(R=R_{\rm enc})/M_\star(R=R_{\rm max})\right]\). This gives \(0.12\), \(0.09\), and \(0.07\ \si{dex}\), for FCC~153, FCC~170, and FCC~177, respectively. While the mass in this region \((R_{\rm max} < R < R_{\rm enc})\) is not directly constrained by the kinematics, it is still constrained by the mass model described in \cref{ssec:massModel}. We explore these mass distributions further in the sections below.
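The derivation of \(R_{\rm enc}\) by integrating the stellar mass profile can be sketched as follows, here for an assumed discretely-sampled spherical density profile:

```python
import math

def radius_enclosing(frac, radii, rho):
    """Spherical radius enclosing the fraction `frac` of the total mass,
    from a discretely-sampled density profile rho(r): trapezoidal
    cumulative integral of 4*pi*r^2*rho(r)."""
    cum = [0.0]
    for k in range(1, len(radii)):
        f0 = 4.0 * math.pi * radii[k - 1] ** 2 * rho[k - 1]
        f1 = 4.0 * math.pi * radii[k] ** 2 * rho[k]
        cum.append(cum[-1] + 0.5 * (f0 + f1) * (radii[k] - radii[k - 1]))
    target = frac * cum[-1]
    # First sampled radius whose cumulative mass reaches the target.
    for r, m in zip(radii, cum):
        if m >= target:
            return r
    return radii[-1]

# Sanity check on a uniform-density sphere, where M(<r) ~ r^3 and so
# r(98%) = 0.98**(1/3) ~ 0.993 of the outer radius.
radii = [i / 1000.0 for i in range(1001)]
rho = [1.0] * len(radii)
r98 = radius_enclosing(0.98, radii, rho)
```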
\subsection{Dynamical decomposition}\label{ssec:ddec}
From the best-fit dynamical model, we used the phase-space of circularity \citep{zhu2018a}, \(\lambda_z\), and cylindrical radius, \(R\), in order to conduct a dynamical decomposition. This radius represents the time-averaged cylindrical radius of each orbit over its orbital period. The circularity is a normalised measure of the intrinsic orbital angular momentum, and we used it here to divide the \shw\ model into orbits with varying degree of rotation \((\left|\lambda_z\right| \sim 1)\) or pressure \((\lambda_z \sim 0)\) support. In order to account for the structure in the kinematics and stellar-population maps simultaneously, and motivated by tests conducted in \cite{poci2019}, we divided the phase-space into many \((\sim10^2)\) `components'. This was achieved by imposing a log-linear grid on the circularity phase-space. The radial axis was sampled logarithmically, but with a floor on the grid size. This preserves the orbital sampling from the \shw\ model\footnote{The binding energy \(E\), which is sampled logarithmically in the \shw\ models, is equivalent to the radius for a circular orbit} but avoids generating cells in the circularity phase-space which are below the spatial resolution of the data. The circularity axis was sampled linearly. This phase-space and corresponding dynamical decompositions are presented in \cref{img:decomp153,img:decomp170,img:decomp177} for FCC~153, FCC~170, and FCC~177, respectively.
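The log-linear gridding of the radial axis with a floor on the cell size might be implemented as below (all numbers illustrative):

```python
import math

def loglinear_grid(r_min, r_max, n, floor):
    """Radial bin edges sampled logarithmically between r_min and r_max,
    but with a floor on the bin width so that no cell falls below the
    spatial resolution of the data."""
    edges = [r_min]
    log_step = (math.log10(r_max) - math.log10(r_min)) / n
    while edges[-1] < r_max:
        nxt = 10.0 ** (math.log10(edges[-1]) + log_step)
        nxt = max(nxt, edges[-1] + floor)  # enforce the minimum width
        edges.append(min(nxt, r_max))      # clip the final edge
    return edges

# Illustrative grid: inner cells are floored, outer cells grow
# logarithmically (radii in kpc, floor at the data resolution).
edges = loglinear_grid(0.1, 20.0, 30, floor=0.2)
```

The circularity axis would then be sampled linearly, and the two samplings combined into the 2D grid of `components' shown in the figures.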
\begin{figure}
\centerline{
\includegraphics[width=\columnwidth]{{L_z_decomp_448_FOV}.png}}
\caption{Phase-space of circularity \(\lambda_z\) as a function of cylindrical radius \(R\) for the best-fit model of FCC~153. The colour represents the orbital weight from the \shw\ model, which has been normalised to an integral of unity. The dynamical decomposition is overlaid in black, where only those components which have non-zero contribution to the original model are defined. The figure is shown on the radial extent of the spectroscopy for clarity, but the decomposition is conducted over the full \shw\ model. The black dashed line is the half-mass radius, derived from \mgeT, shown for scale. The distribution indicates the prevalence of high-angular-momentum (cold disk-like) co-rotating orbits in this galaxy, with very little contribution from hot \((\lambda_z \sim 0)\) or counter-rotating \((\lambda_z < 0)\) orbits.}
\label{img:decomp153}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[width=\columnwidth]{{L_z_decomp_455_FOV}.png}}
\caption{Same as \protect\cref{img:decomp153}, but for FCC~170. This galaxy has a large contribution from hot central orbits, with most of the cold orbits appearing at larger radius.}
\label{img:decomp170}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[width=\columnwidth]{{L_z_decomp_449_FOV}.png}}
\caption{Same as \protect\cref{img:decomp153}, but for FCC~177. Similarly to FCC~153, this galaxy is dominated by co-rotating cold orbits.}
\label{img:decomp177}
\end{figure}
This sampling in \(\lambda_z-R\) was used for all three galaxies; however, the final distribution of `components' depends on the circularity distribution of each galaxy's best-fitting \shw\ model. \par
A single component is composed of a unique subset of the orbit library of its parent \shw\ model. The decomposition is an effective way of simply bundling orbits of similar properties --- in this case, angular momentum and radius. Kinematics, masses, and mass densities are computed for each component individually, based on its specific subset of orbits and those orbits' relative contribution to the original dynamical model. Thus, each component has fixed projected kinematics and spatial distributions.
\subsection{Adding stellar populations}\label{ssec:dynPop}
We now describe the extension beyond the standard \shw\ approach for the inclusion of the stellar population measurements. In order to self-consistently combine the kinematics and stellar populations, we exploited the fact that both the derivation of the SFH from full spectral fitting and the construction of the \shw\ model are based on the same principle; they are weighted integrations over many distinct populations, integrated through the line-of-sight (LOS). Specifically, the measured stellar populations are luminosity-weighted by construction as described in \cref{ssec:spec}, and the orbital weights are constrained by the surface brightness even though their dynamical properties are computed in the total gravitational potential. Therefore, we assume that the distributions of stellar and dynamical populations are the same. The weight distributions from the dynamical models were then used to derive the distributions of stellar populations that reproduce their observed maps. The result is that each orbit which contributes to the dynamical model now has an associated age and metallicity. Each dynamical component can thus be considered a mono-abundance population. By fitting age and metallicity independently, we avoided the possibility of degeneracies between them, as well as having to assume a specific age-metallicity relation. Instead, regularisation was utilised for each stellar-population fit, and is analogous to what is routinely used for spectral-fitting analyses such as in \cref{ssec:spec}. The specific implementation is detailed in \cite{poci2019}. We tested this approach using mock data from the Auriga simulations \citep{grand2016}, presented in \cref{sec:mockvalid}, and find that the main results of this work are accurate to \(\lesssim 10\%\) (\cref{img:mockCosmoDisp}). An alternative approach which uses a chemical-evolution model to derive the age-metallicity relation is presented in \cite{zhu2020}.\par
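The forward operation shared by the spectral fit and the dynamical model, luminosity-weighted averaging of per-component population values through the LOS, can be sketched as follows (the component maps and ages below are hypothetical):

```python
def projected_mean(component_maps, values):
    """Luminosity-weighted line-of-sight mean of a population property.
    component_maps[c][s] is the luminosity of component c in spatial
    bin s; values[c] is that component's single (mono-abundance) age
    or metallicity. This is the forward operation the fit inverts."""
    n_spax = len(component_maps[0])
    out = []
    for s in range(n_spax):
        lum = sum(cm[s] for cm in component_maps)
        out.append(sum(cm[s] * v for cm, v in zip(component_maps, values)) / lum)
    return out

# Two mock components (a young disk and an old spheroid) over three
# spatial bins; masses/ages are illustrative only.
maps = [[0.8, 0.5, 0.2], [0.2, 0.5, 0.8]]
ages = [4.0, 12.0]
mean_age = projected_mean(maps, ages)  # -> [5.6, 8.0, 10.4]
```

The population fit inverts this relation: with the component maps fixed by the dynamical model, the `values` are adjusted (with regularisation) until the projected means reproduce the observed age and metallicity maps.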
The subsequent integration through the LOS of the stellar orbits reproduces all measured kinematic and stellar-population maps. Fits to all maps are shown in \cref{img:2dmap153,img:2dmap170,img:2dmap177}.
\begin{figure}
\centerline{
\includegraphics[width=\columnwidth]{{chemDynGrid_9_448_azReg0.1550_0.2000_SN100_BaSTI_KB1.30_LW_reg1.000}.png}}
\caption{Best-fitting \shw\ model for FCC~153. The data ({\em left}), fits ({\em middle}), and residuals ({\em right}) of, from top to bottom, the dynamical model (surface brightness, velocity, velocity dispersion, and \(h_3\)--\(h_6\)), and the subsequent stellar-population fitting (age and metallicity). The outline of the MUSE mosaic is shown in dashed brown. All residual panels show the absolute differences (data - model), but are offset such that green is zero. The stellar-population maps share a common colour-bar between galaxies for comparison. We note that for FCC~153, since the observed metallicity map reaches \(0.4\ \si{dex}\) along the major axis, and is itself an average through the LOS, we extend the upper bound for the individual components during the stellar-population fitting to \(1.0\ \si{dex}\).}
\label{img:2dmap153}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[width=\columnwidth]{{chemDynGrid_9_455_azReg0.0450_0.0450_SN100_BaSTI_KB1.30_LW_reg1.000}.png}}
\caption{Same as \protect\cref{img:2dmap153}, but for FCC~170.}
\label{img:2dmap170}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[width=\columnwidth]{{chemDynGrid_9_449_azReg0.3650_0.2176_SN100_BaSTI_KB1.30_LW_reg1.000}.png}}
\caption{Same as \protect\cref{img:2dmap153}, but for FCC~177.}
\label{img:2dmap177}
\end{figure}
We also conducted Monte Carlo simulations by re-fitting the stellar-population maps \(100\) times after randomly perturbing them within their measurement errors. These fits re-distribute the dynamical components in the \(t-\chemZH\) plane (without changing their kinematics), and so were used to estimate the uncertainties of our results. Using all available information -- kinematics, ages, metallicities, and density distributions -- we can now investigate the formation events that built up each galaxy.
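The Monte Carlo error estimation can be sketched generically as below; the toy `fit' is simply the mean of a mock map, standing in for the full stellar-population re-fit:

```python
import random

def monte_carlo_spread(fit_func, data, errors, n_mc=100, seed=0):
    """Re-run a fit after perturbing each datum within its Gaussian
    error, returning the per-realisation results from which the
    scatter of any derived quantity can be measured."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    results = []
    for _ in range(n_mc):
        perturbed = [d + rng.gauss(0.0, e) for d, e in zip(data, errors)]
        results.append(fit_func(perturbed))
    return results

# Toy example: the 'fit' is the mean of a flat mock map, whose MC
# scatter should be ~ err / sqrt(N) = 1.0 / sqrt(100) = 0.1.
data, errs = [10.0] * 100, [1.0] * 100
runs = monte_carlo_spread(lambda d: sum(d) / len(d), data, errs, n_mc=200)
```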
\section{Combined dynamical and stellar populations}\label{sec:res}
Combining the dynamical and stellar-population information is imperative for decoding the integrated assembly history into its constituent events. We do this using the diagnostic power of \cref{img:mfh153,img:mfh170,img:mfh177} for FCC~153, FCC~170, and FCC~177, respectively.
\begin{figure*}
\centerline{
\includegraphics[width=\textwidth]{{mah_z_R_448_age06_metal04_azReg0.1550_0.2000_SN100_BaSTI_KB1.30_LW_reg1.000}.png}}
\caption{Mass assembly history for FCC~153. The panels are ordered by increasing mean stellar age ({\em left to right}) and decreasing mean stellar metallicity ({\em top to bottom}). The value given at the top and right of each column and row, respectively, denotes its upper bound (inclusive). Each panel is composed of a radial profile of the vertical stellar velocity dispersion \(\sigma_z\) ({\em black/white curve}), the surface brightness distribution at the best-fitting projection ({\em top-right}) with the outline of the MUSE mosaic shown in dashed brown, and the total stellar mass within the FOV for that panel. The \(\sigma_z(R)\) profiles are coloured according to the stellar mass in that panel at that radius (sampled within the logarithmic radial bins). This indicates the spatial region in which each curve contributes most (white regions), and which regions may be impacted by numerical noise (black regions). The grey shaded regions show the spread of velocity dispersion profiles for \(100\) Monte Carlo fits to the stellar-population maps. This galaxy exhibits a dominant disk-like, metal-rich component that has steadily formed over the last \(\sim 10\ \si{\giga\year}\).}
\label{img:mfh153}
\end{figure*}
\begin{figure*}
\centerline{
\includegraphics[width=\textwidth]{{mah_z_R_455_age06_metal04_azReg0.0450_0.0450_SN100_BaSTI_KB1.30_LW_reg1.000}.png}}
\caption{Same as \protect\cref{img:mfh153}, but for FCC~170. This galaxy is dominated by an old central pressure-supported spheroidal component spanning \(\sim 1\ \si{dex}\) in metallicity. It has a secondary contribution from a progressively thinner and younger disk-like component, and a potential minor contribution from a warm metal-poor halo-like component.}
\label{img:mfh170}
\end{figure*}
\begin{figure*}
\centerline{
\includegraphics[width=\textwidth]{{mah_z_R_449_age06_metal04_azReg0.3650_0.2176_SN100_BaSTI_KB1.30_LW_reg1.000}.png}}
\caption{Same as \protect\cref{img:mfh153}, but for FCC~177. This galaxy appears to have begun forming late. It is dominated by a young, thin disk, with contributions from dynamically-warmer and slightly older stars.}
\label{img:mfh177}
\end{figure*}
These figures show radial profiles of the intrinsic vertical stellar velocity dispersion \(\sigma_z(R)\) and the projected surface brightness distributions for each galaxy as a function of both age and metallicity. This combination of kinematic and population constraints effectively produces star-formation and accretion histories simultaneously, resulting in genuine mass assembly histories. The vertical velocity dispersion is a useful metric for discriminating between different dynamical structures, as well as being comparable to a variety of different observations (explored below). However, for dissecting the model into different dynamical regimes, we used the intrinsic orbital circularity to determine dynamical temperature as this property is inherently connected to the intrinsic orbital phase-space. Before exploring each galaxy individually, we first qualitatively discuss how various features of these figures are interpreted.\par
The presence of cold kinematics and flattened (`disk'-like) mass distributions are interpreted as in situ star formation, especially (though not necessarily) at high metallicity. Metal-rich and metal-poor stars in this regime would indicate that the gas likely originated from internal (recycling) and external (accretion) sources, respectively. This selection is in principle independent of age.\par
Centralised spheroidal distributions which are dynamically hot are interpreted as the in situ core or `bulge'. There is no strict selection on the stellar populations, since a large diversity has been observed in this region, especially if a stellar bar is or was present in the galaxy \citep{morelli2008, morelli2016, coelho2011, zhao2012, florido2015, seidel2015, corsini2018, barsanti2021}. Orbits at large radius with hot kinematics and with metallicities towards the metal-poor tail of the host galaxy's distribution are interpreted as the result of stellar accretion from many lower-mass systems. Such accretion is expected to be at least dynamically `warm'. This is because, although the impact of satellites may be preferentially along a particular axis \citep{shao2019}, accreted stars would nevertheless be on dynamically hotter orbits compared to the in situ cold disk. In the event of minor merging, the accreted systems will, by definition, be lower mass than the host, and via the mass-metallicity relation will thus have lower metallicities on average. Since the age of the accreted stars depends critically on the SFH of the satellites, we make no selection on age for the `accreted' stars. It is possible that some orbits in this regime have an in situ origin, from either past major mergers or significant external perturbations \citep[since low-mass accretion events themselves are not expected to perturb the existing disk significantly;][]{hopkins2008}. We nevertheless interpret this region as accretion under the assumption that it is dominated by ex situ material, subject to possible contamination by in situ material.\par
In the remainder of this section, the results are discussed briefly for each galaxy in the context of their individual assembly histories. We constrain the origin of dominant structures in each galaxy, which includes identifying the fraction of likely accreted material. Since the condition described above for the selection of accreted material favours orbits at larger radii, the limited spectroscopic FOV can bias these estimates. We instead estimate the accretion fraction as \(f_{\rm acc} = M^\star_{\rm acc}\big/M^\star_{\rm enc}\), where the accreted stellar mass \(M^\star_{\rm acc}\) is approximated for each galaxy below, and the total enclosed stellar mass \(M^\star_{\rm enc}\) is given in \cref{tab:masses}. This proposed accreted fraction is discussed further in \cref{sec:discussion}.
\subsection{FCC~153}
FCC~153 is suspected of being an `intermediate in-faller' to the Fornax cluster (\(4 < t_{\rm in-fall} < 8\ \si{\giga\year}\); as estimated from the cluster projected phase-space diagram in \citealt{iodice2019}). \cref{img:mfh153} exhibits the largest spread of metallicity of the three galaxies studied here. FCC~153 also shows the strongest recent star-formation activity, having formed the most stellar mass \((\sim 3\times 10^9\ \si{\Msun})\) in recent times \((<6\ \si{\giga\year})\), and in a kinematically-cold configuration. In fact, our model reveals that it has retained cold kinematics over all redshifts with even the oldest bins containing stars with \(\sigma_z \sim 50\ \si{\kilo\metre\per\second}\). The combination of late-time star-formation and persistent cold kinematics implies that the integrated assembly history of this galaxy (all mergers and interactions combined) has had a minimal impact, at least in the region covered by the spectroscopy. There is a suggestion of stellar accretion through the old, kinematically-warm, metal-poor population forming part of the stellar `halo'. We can use the model to estimate the mass in accreted stars by quantitatively isolating the orbits which meet the qualitative criteria discussed above. Specifically, orbits are selected with \(\chemZH \leq -0.5\ \si{dex}\) (lower half of \cref{img:mfh153}), \(\left|\lambda_z\right| \leq 0.5\), and mean guiding radius \(\overline{r} \geq 2\ \si{\kilo\parsec}\ (\sim 20\si{\arcsecond})\) to exclude any potential in situ `bulge'-like orbits (see \cref{img:decomp153}). This selection results in \(f_{\rm acc} \sim 10\%\). Under the assumption that these criteria isolate the accreted stars, we estimate an accreted mass of \(M^\star_{\rm acc} \sim 1 \times 10^{9}\ \si{\Msun}\). This selection has a luminosity-weighted average age and metallicity of \(t = 11.8\ \si{\giga\year}\) and \(\chemZH = -1.3\ \si{dex}\), respectively. 
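The accreted-orbit selection can be expressed compactly as below. The orbit records are hypothetical; only the cuts (\(\chemZH \leq -0.5\ \si{dex}\), \(\left|\lambda_z\right| \leq 0.5\), \(\overline{r} \geq 2\ \si{\kilo\parsec}\)) are taken from the text:

```python
def accreted_fraction(orbits, m_enc):
    """Apply the accretion criteria used in the text to a list of
    orbit records (mass in Msun, [Z/H] in dex, circularity lam_z,
    mean guiding radius r_bar in kpc); return (M_acc, f_acc)."""
    m_acc = sum(o["mass"] for o in orbits
                if o["zh"] <= -0.5          # metal-poor tail
                and abs(o["lam_z"]) <= 0.5  # dynamically warm/hot
                and o["r_bar"] >= 2.0)      # excludes the in situ 'bulge'
    return m_acc, m_acc / m_enc

# Hypothetical orbit bundles; masses and labels are illustrative only.
orbits = [
    {"mass": 9.0e9, "zh": 0.1, "lam_z": 0.9, "r_bar": 3.0},   # in situ disk
    {"mass": 1.5e9, "zh": -0.2, "lam_z": 0.1, "r_bar": 0.5},  # bulge
    {"mass": 1.0e9, "zh": -1.2, "lam_z": 0.2, "r_bar": 4.0},  # 'accreted'
]
m_acc, f_acc = accreted_fraction(orbits, m_enc=1.15e10)
```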
Compared to the other two galaxies in this work, FCC~153 has the highest \(f_{\rm acc}\), which may be explained by its relatively late in-fall. This is supported by the average age of the tentative accreted material, which implies that the main accretion events (those which dominate the luminosity-weighted average) occurred \(\lesssim 11.8\ \si{\giga\year}\) ago.
\subsection{FCC~170}
FCC~170 is believed to be an ancient in-faller to the Fornax cluster (\(t_{\rm in-fall} > \SI{8}{\giga\year}\); as estimated from the cluster projected phase-space diagram in \citealt{iodice2019}). It is the most distinct galaxy of those modelled in this work, being the most massive (\cref{tab:masses}). It is also believed to be situated closest to the cluster core. FCC~170 appears to have ceased the majority of its star formation the earliest, and its early in-fall and current position in the cluster likely played a role. The galaxy is very old, but we see evidence of recent star-formation, again in a cold configuration seemingly in spite of its environment (\cref{img:mfh170}). Overall, FCC~170 has relatively-high velocity dispersion everywhere with respect to the other two galaxies. Yet the central regions (where we probe with the spectroscopy) have remained heavily rotationally-supported over its history, with \(\sigma_z \lesssim 100\ \si{\kilo\metre\per\second}\). Applying the same accretion criteria as for FCC~153, we estimate an accretion fraction of \(f_{\rm acc} \sim 7\%\), implying \(M^\star_{\rm acc} \sim 3 \times 10^{9}\ \si{\Msun}\) with a luminosity-weighted average age and metallicity of \(t = 13.3\ \si{\giga\year}\) and \(\chemZH = -1.3\ \si{dex}\), respectively. With respect to FCC~153, this implies that FCC~170 experienced more accretion events of lower mass (lower metallicity).
\subsection{FCC~177}
FCC~177 is also believed to be an ancient in-faller \citep{iodice2019}, as for FCC~170. It has the lowest stellar mass and highest DM fraction of the three galaxies studied here (\cref{tab:masses}). It exhibits low velocity dispersion \((\sigma_z < 100\ \si{\kilo\metre\per\second})\) everywhere and at all times, with the younger, metal-rich populations reaching \(\sigma_z \lesssim 20\ \si{\kilo\metre\per\second}\) (\cref{img:mfh177}). We find evidence for a delayed formation, with only a small fraction of old populations \((t \gtrsim 12\ \si{\giga\year})\) and without any clear spatial structures. At later times, FCC~177 appears to have sustained modest and roughly-constant star-formation for \(t \lesssim 10\ \si{\giga\year}\). This combination of prolonged star-formation and cold kinematics is especially surprising given its early in-fall, and poses problems for expectations of group pre-processing and cluster quenching. The mass budget of FCC~177 is more complicated to disentangle, especially due to the relatively diffuse mass at old ages. In fact, FCC~177 has formed the largest percentage of its stellar mass in recent times, compared to the other two galaxies. Moreover, our assembly history indicates that at lookback times greater than \(10\ \si{\giga\year}\), FCC~177 had just \(\logM[\star] \sim 8\), implying that the in situ component formed during that time would be of lower metallicity with respect to the other two galaxies during the same period. This caveat notwithstanding, applying the same criteria as for the other galaxies, we estimate \(f_{\rm acc} \sim 6\%\). This results in \(M_{\rm acc}^\star \sim 3 \times 10^8\ \si{\Msun}\), with luminosity-weighted average age and metallicity of \(t = 12.8\ \si{\giga\year}\) and \(\chemZH = -1.5\ \si{dex}\), respectively.
\section{Mass assembly histories in context}\label{sec:discussion}
In this section we review all the evidence afforded by this technique in the context of the Fornax cluster in order to investigate the dominant processes that built up the stellar mass in these galaxies. By analysing the trends in \cref{img:mfh153,img:mfh170,img:mfh177}, and exploring them more quantitatively throughout this section, we can constrain certain formation mechanisms.\par
Interestingly, we see a diversity in the assembly histories of the three galaxies studied here via the different distributions of mass between \cref{img:mfh153,img:mfh170,img:mfh177}. Yet the persistence of kinematically-cold orbits is common throughout all of the galaxies for all stellar ages. The observation of such kinematics for old populations places constraints on both internal and external disruption processes. Owing to the archaeological nature of the methodology employed here, all stars are observed in their present-day, not formation, configurations. It is clear, therefore, that in order for these orbits to remain kinematically-cold, those stars need to not only form as such, but also experience little-to-no subsequent disruption until the epoch of observation. This implies that neither internal instabilities nor the cluster potential (or other members) can cause significant perturbations to the kinematics of the central regions of these galaxies (though this is discussed further in \cref{ssec:angmom,ssec:sz}). For the same reason, we argue that these galaxies have likely not experienced any high-mass-ratio mergers, as they would have similarly disrupted these old cold orbits \citep{hopkins2008}.\par
There seems to be no lack of historic star-formation activity in these galaxies. This is perhaps most surprising for FCC~170 which exhibits by far the oldest mean stellar age, and is purported to reside in the central region of the cluster. We find evidence for the continued formation of stars in all three galaxies down to relatively young ages, and at super-solar metallicity. These episodes occurred comfortably after each galaxy is suspected to have entered the cluster. Their metallicity is consistent with self-enrichment, and thus in conjunction with their kinematics, these stars very likely formed in situ from recycled gas.\par
The accretion of low-mass stellar systems is expected to deposit material into the outer stellar `halo' regions of galaxies. It is also expected to contribute significantly to the present-day stellar mass of galaxies \citep{oser2010}. We have estimated, however, low accretion fractions \((< 10\%)\) for the galaxies studied here. There are two sources of uncertainty in the \(f_{\rm acc}\) estimates in this work: contamination by in situ stars in the region we consider `accreted', and the exclusion of some accreted material which resides at lower radius. We can not strictly exclude an in situ contribution to these accretion fractions, but such contamination would imply an intrinsic accreted fraction even lower than estimated here. Without major mergers, any in situ stars that satisfy the proposed criteria for accretion are difficult to explain, unless external perturbations from the cluster have caused dramatic transformations. Moreover, \cite{karademir2019} find that mergers with smaller mass ratios deposit stars at larger radii. Once again, since we have argued against major (or even a significant amount of minor) mergers, it is plausible that at least the majority of accretion for these galaxies resides at large radius. \cite{davison2020} similarly find that for galaxies in the {\rm EAGLE} simulation, most of the accreted mass is deposited beyond the half-mass radius \(r_{1/2}\) for host stellar masses within the range of our Fornax galaxies. We nevertheless caution that \(f_{\rm acc}\) is subject to these uncertainties, and highlight that the other main conclusions of this work do not depend on the measurements of accretion. While the mass models are constrained over the full extent of the galaxies using the FDS photometry (\cref{ssec:massModel}), we can not exclude higher accretion fractions being found at larger radii as inferred by, for instance, \cite{pulsoni2018} for the stellar mass range probed by our sample.\par
A lack of accretion can be explained by the high relative motions of member galaxies within a cluster, and reduced merging has been seen previously for cluster members with respect to the field \citep{berrier2008, pipino2014}. Specifically for the Fornax cluster, its members and their globular cluster (GC) populations have been analysed previously \citep{jordan2015, fahrion2020}. \cite{fahrion2020} finds that FCC~170 has a significantly reduced number of GC for its stellar mass, and those that it has are notably metal-poor. While the numbers of GC for FCC~153 and FCC~177 are less unusual, since the hosts are themselves of lower stellar mass, their GC are also more metal-poor compared to the stellar body by \(\sim 1\ \si{dex}\). This implies that the GC originated in lower-mass systems. Once again, this suggests a lack of major mergers, and low incidence of minor merging for the three galaxies studied here. Low accretion fractions for these three galaxies were also inferred from the analysis of \cite{pinna2019a}. These galaxies appear to have been shut off from sources of external material by the cluster environment, which has likely stifled their growth. Their stellar mass assembly was able to continue through in situ star formation, but has ceased in the present day likely due to the exhaustion of internal gas in conjunction with the lack of replenishment.
\subsection{The stellar age--velocity-dispersion relation}\label{ssec:angmom}
Here we quantitatively explore some of the correlations alluded to in the assembly histories. To this end, we investigate the vertical component of the intrinsic stellar velocity dispersion, \(\sigma_z\), as a function of formation time of the stars \citep[converted to redshift assuming the cosmology of][as implemented in {\tt astropy}]{ade2016}. This stellar age--velocity-dispersion relation (AVR) has been studied previously in the Local Group \citep{wielen1977, nordstrom2004, rocha-pinto2004, seabroke2007, martig2014, sharma2014, beasley2015, hayden2017, grieves2018, bhattacharya2019, mackereth2019} and a number of cosmological and idealised simulations \citep{bird2013, aumer2016, grand2016, kumamoto2017}. The gas-phase AVR is also well-studied \citep[e.g.][]{wisnioski2015}. The stellar AVR is derived from the properties of stars of different ages within individual galaxies; the gas-phase AVR, in contrast, is measured via the global properties of different galaxies observed directly at different redshifts. In all cases, \(\sigma_z\) is seen to decrease towards the present day, but with competing explanations as to the physical driver of this relation. It is often thought to be the result of either internal instabilities whose cumulative effects have disturbed older stars more \citep{saha2010, aumer2016, grand2016, yu2018}, or that populations of stars which formed at high redshift inherited higher random motion from their surroundings, which has been decreasing towards the present day as conditions stabilise \citep{noguchi1998, bournaud2009, bird2013, leaman2017, ma2017}. Since the AVR pertains to the conditions which lead to star-formation, the measurements of these properties are restricted to the disk plane, as this is where in situ star-formation is expected to occur.\par
The stellar AVR measured in this work for the three Fornax galaxies are presented in \cref{img:cosmoDisp}, with comparisons to literature measurements.
\begin{figure}
\centerline{
\includegraphics[width=\columnwidth]{{cosmo_disp_cutFull_z_Age}.pdf}
}
\caption{Stellar disk AVR as derived from our models. The coloured stars are the galaxies modelled in this work \protect\citep[and][for NGC~3115]{poci2019}. The symbol size is proportional to the fractional stellar mass in each age bin, for each galaxy independently. The horizontal error-bars denote the width of the age bin. The vertical error bars are computed as the weighted standard deviation within each age bin, for the best-fit model. The shaded regions show the spread in \(\sigma_z\) for \(100\) Monte Carlo fits to the stellar-population maps. The dashed curves show the stellar AVR of the four \SZ\ galaxies when all orbits are included (no selection on orbital circularity). The box-whisker plots are literature measurements of cold gas disks, from {\rm HERACLES} \protect\citep{leroy2009}, {\rm DYNAMO} \protect\citep{green2014}, {\rm GHASP} \protect\citep{epinat2010}, {\rm PHIBBS} \protect\citep{tacconi2013}, {\rm MASSIV} \protect\citep{epinat2012}, {\rm OSIRIS} \protect\citep{law2009}, {\rm AMAZE-LSD} \protect\citep{gnerucci2011}, {\rm SINS} \protect\citep{schreiber2009} and {\rm zC-SINF} \protect\citep{schreiber2014}, {\rm KMOS}\(^{3{\rm D}}\) \protect\citep{wisnioski2015}, and {\rm KDS} \protect\citep{turner2017}. The black dots and crosses are Milky-Way stellar measurements \protect\citep{yu2018} for stars on \((|z| < 270\ \si{\parsec})\) and off \((|z| > 270\ \si{\parsec})\) the plane, respectively. Galaxy disks become dynamically colder towards the present day. The cluster \SZ\ galaxies have a higher contribution from warmer orbits at more recent times compared to the field galaxy (comparing the full and disk-only \(\sigma_z\)). The Milky-Way, despite its higher stellar mass, is dynamically colder than the \SZ\ galaxies studied here.}
\label{img:cosmoDisp}
\end{figure}
We track the velocity dispersion as a function of formation redshift, marginalised over metallicity and radius. That is, at fixed age, the metallicities are averaged according to their luminosity-weighted contribution to the model, then similarly for radius. This maintains the appropriate weighting such that the final \(\sigma_z\) measurements are also luminosity-weighted. For consistency with literature measurements, we measure these properties for the `disk'-like orbits of the models; that is, we consider only those orbits with \(\left|\lambda_z\right| \geq 0.8\). The resulting relations are shown by the large stars in \cref{img:cosmoDisp}. Additionally, the relations for all orbits (with no selection on circularity) are given by the dashed curves.\par
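The marginalisation just described amounts to a luminosity-weighted average over the metallicity and radius axes at fixed age. A minimal {\tt numpy} sketch, with a synthetic (age, metallicity, radius) grid standing in for the model output:

```python
import numpy as np

# Synthetic grid of sigma_z values on (age, metallicity, radius) bins,
# with matching luminosity weights; stand-ins for the model output.
rng = np.random.default_rng(1)
n_age, n_met, n_rad = 8, 6, 10
sigma_z = rng.uniform(20.0, 120.0, (n_age, n_met, n_rad))  # km/s
lum = rng.random((n_age, n_met, n_rad))                    # luminosity weights

# Average over metallicity and radius at fixed age, weighting each bin
# by its luminosity, so sigma_z(age) remains luminosity-weighted.
avr_sigma_z = (sigma_z * lum).sum(axis=(1, 2)) / lum.sum(axis=(1, 2))
# avr_sigma_z has one entry per age bin
```

Restricting the same average to orbits with \(\left|\lambda_z\right| \geq 0.8\) before summing gives the disk-only relation shown by the stars in the figure.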
All of these galaxies show the same general trend of decreasing \(\sigma_z\) with decreasing stellar age. The trends we measure for the stellar \(\sigma_z\) are flatter than those for direct gas measurements, as the stars do not reach the coldest dynamical temperatures observed in gas in the present day. This is in agreement with predictions from simulations \citep{pillepich2019}. We also see that the field galaxy NGC~3115, despite being twice as massive as our most massive Fornax object, exhibits comparable vertical velocity dispersion. By comparing the disk AVR to the full-orbit AVR, it can be seen that all three Fornax galaxies show a measurable contribution from non-disk-like orbits at all ages. Together, these observations imply that we are likely measuring the impact of the cluster on the dynamics of its galaxies due to so-called `harassment' \citep{moore1996}: frequent though minor gravitational interactions between galaxies in close proximity. External perturbations have also been seen to cause heating coincident with the time of the interaction \citep{grand2016}.\par
Our results are also compared in \cref{img:cosmoDisp} to data from the Milky-Way, both on \((|z| < 270\ \si{\parsec})\) and off \((|z| > 270\ \si{\parsec})\) the disk plane \citep{yu2018}. For comparison, the physical pixel scale of the data used in this work is \(\sim 20\ \si{\parsec\per pixel}\), but with a real physical resolution of \(\sim 70\ \si{\parsec}\) due to the point-spread function of the observations. So while our models are not separated based on height above the plane, they still probe the most dynamically-cold physical scales, meaning that any differences between these results are not due to spatial resolution effects. All four \SZ\ galaxies exhibit a systematic increase of \(\sigma_z\) with respect to the Milky-Way, which is expected since galaxies that can support spiral arms should be dynamically colder. In this case, the offset is likely a combination of the different morphology and environment, yet the general shape of the relation is preserved despite these differences.\par
As discussed above, each galaxy retains a significant portion of mass with cold kinematics and disk-like morphology at the oldest age. Specifically, these oldest age bins (as seen in the present day) exhibit \(\sigma_z \sim 50\ \si{\kilo\metre\per\second}\) on the disk plane at intermediate radii and high metallicity. This is inconsistent with internal heating whose effect should be maximal for the oldest stars. For instance, the simulations of \cite{aumer2016} show that for the oldest stars, internal heating will increase \(\sigma_z\) by \(\sim 15-20\ \si{\kilo\metre\per\second}\) above the value at birth. For the old stars we measure in the present day which have \(\sigma_z \sim 50\ \si{\kilo\metre\per\second}\), this would imply that they were born with \(\sigma_z \sim 30-35\ \si{\kilo\metre\per\second}\) at \(z=4-5\), which is significantly lower than the gas measurements at that epoch. Therefore, we conclude that the AVR for these galaxies is the result of hotter dynamical temperatures at early times, while further minor heating is contributed by the cluster interactions.\par
Closer inspection of \cref{img:mfh153,img:mfh170,img:mfh177} indicates a \ND{3} correlation between mean \(\sigma_z\), stellar age, and stellar metallicity, yet the results of \cref{img:cosmoDisp} marginalise over metallicity. We therefore compute these relations without such marginalisation, to investigate the impact of age and metallicity independently. These are presented in \cref{img:fixedSPDisp}.
\begin{figure}
\centerline{
\includegraphics[width=\columnwidth]{fixedSPDisp_jet.pdf}
}
\caption{Correlations of \(\sigma_z\) with stellar age at fixed metallicity ({\em left}), and stellar metallicity at fixed age ({\em right}) for, from top to bottom, the three galaxies studied in this work, and the field \SZ\ NGC~3115 \protect\citep{poci2019}. The curves are coloured by their age/metallicity bin corresponding to those of \cref{img:mfh153,img:mfh170,img:mfh177}. The Spearman rank coefficient \(r\) and the associated \(p\)-value, computed for all curves collectively in a given panel, are inset. The shaded regions correspond to the variations derived from \(100\) Monte Carlo fits to the stellar population maps. The stellar AVR at fixed metallicity exhibits lower significance than the \(\chemZH-\sigma_z\) relation at fixed age.}
\label{img:fixedSPDisp}
\end{figure}
In order to avoid numerical noise which could be introduced through the increasingly-complex selection criteria, we conduct this analysis on the full diversity of orbits (without selecting on circularity). The curves in \cref{img:fixedSPDisp} are constructed by collecting individual rows and columns of \cref{img:mfh153,img:mfh170,img:mfh177}. Each panel of those figures is integrated along the radial profile, preserving the luminosity weighting at each point, to produce a single \(\sigma_z\) measurement for that panel. Each row in the assembly histories corresponds to a single curve of the AVR at fixed metallicity (left column of \cref{img:fixedSPDisp}), while each column in the assembly histories corresponds to a single curve of the \(\chemZH-\sigma_z\) relation at fixed age (right column of \cref{img:fixedSPDisp}). The Spearman rank correlation coefficient \(r\), which indicates the strength and direction of a trend, is computed using the {\tt scipy} implementation for all curves in a given panel. The corresponding \(p\)-value is also shown for each panel, which indicates the probability that the two axes are uncorrelated.\par
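The correlation statistics quoted in the panels can be reproduced with the {\tt scipy} implementation mentioned above. The short series below is synthetic, chosen only to illustrate the sign convention of \(r\) and the meaning of the \(p\)-value for a single curve (the paper computes these over all curves in a panel collectively).

```python
import numpy as np
from scipy import stats

# Synthetic sigma_z trend with stellar age (one 'curve' of an AVR panel);
# the values are illustrative, not taken from the models.
age = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])          # Gyr
sigma_z = np.array([25.0, 31.0, 29.0, 45.0, 52.0, 60.0])  # km/s

# Spearman rank coefficient r (strength and direction of a monotonic
# trend) and its p-value (probability that the axes are uncorrelated).
r, p = stats.spearmanr(age, sigma_z)
# r ~ 0.94 with p well below 0.05: a strong positive correlation
```

A positive \(r\) with small \(p\) corresponds to the AVR-like trends in the left column of the figure, while a negative \(r\) would correspond to the \(\chemZH-\sigma_z\) anti-correlation in the right column.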
We observe a significant \(\chemZH-\sigma_z\) correlation at fixed age, such that the more metal-poor stars are dynamically hotter. Similar correlations between the metallicity and vertical velocity dispersion have been seen previously for the Milky-Way \citep[for the iron abundance \(\chemFeH\), and typically with non-trivial selection functions;][]{meusinger1991, ness2013a, minchev2014, grieves2018, arentsen2020} and for {\rm M 31} \citep{dorman2015} but those results are marginalised over age. Similarly, all previous studies of the stellar AVR have been marginalised over metallicity --- with the exception of \cite{sharma2020}, discussed below. Interestingly, \cite{guiglion2015} see the inverse trend of \(\sigma_z\) with \(\chemMgFe\) at fixed \(\chemFeH\) for the Milky-Way. \cref{img:fixedSPDisp} shows that the AVR is a weak correlation once metallicity is accounted for, quantified by the correlation coefficients in each case. At fixed age, the \(\chemZH-\sigma_z\) relation is significantly more correlated than the AVR at fixed metallicity. We emphasise that the stellar AVR in \cref{img:cosmoDisp} (even the dashed full-orbit curves) exhibits a correlation which is consistent with previous measurements when metallicity is not taken into account. This means that the result in \cref{img:fixedSPDisp} can not be due to any degeneracy between age and metallicity in our models. Furthermore, age and metallicity are fit independently in \cref{ssec:dynPop}, and the spatial coherence of the dynamical components (each spatial bin is not independent) is exploited to reduce possible degeneracies within each fit. At face value, this result implies that \(\chemZH-\sigma_z\) is the underlying physical correlation, while the impact of age (or formation redshift) is of secondary importance. In this scenario, the stellar AVR would manifest through the age-metallicity relation and its scatter. 
Finally, while the results in \cref{img:fixedSPDisp} include the full diversity of orbits from our models, we confirmed, by removing the suspected accretion components (via the same selection identified in \cref{sec:res}) that neither the direction nor the relative significance of these correlations change. This implies that the results of \cref{img:fixedSPDisp} are not merely driven by the fact that accreted material is often dynamically hotter and relatively metal-poor, but rather that it is inherent to what we identify as the in situ component.\par
We posit that the \(\chemZH-\sigma_z\) relation is driven by the successive `generations' of star formation, each becoming more enriched and more dynamically-cold than those before (in the absence of accretion which would result in the chemical and dynamical mixing of the populations). This could be the case if, for instance, mass segregation of metals occurs vertically as well as radially. Alternatively, this would be the result if higher-metallicity gas requires colder kinematics before star formation is possible, or if the cooling effects of metals naturally produces more dynamically-cold disks if the gas is more metal-rich. \cite{choi2009} show that metal cooling can significantly increase the star-formation efficiency of the inter-stellar medium, though there is no direct link in that work to dynamics.\par
Yet measurements of the gas-phase AVR show clear trends with redshift, and the physical interpretation in that case explicitly includes a redshift dependence. However this redshift dependence is via gas depletion through the cosmic specific SFR \citep[such as in][]{whitaker2014}; in that scenario, galaxies with larger gas fractions experience larger inter-stellar medium turbulence, and higher \(\sigma_z\) is imparted to the stars upon star formation \citep{leaman2017}. So in much the same way as the scenario proposed here for the stellar AVR, the gas-phase AVR is tied to episodes of star formation, which happen to decline on average with redshift. This subtle difference is especially important when analysing individual galaxies with individual assembly histories. There also remains significant scatter at fixed redshift within the gas-phase AVR that needs to be accounted for, which indicates a potential additional dimension to this issue. Naturally, these star-formation episodes lead to enrichment of the gas over cosmic time \citep{daigne2006, kobayashi2007}. Therefore, if metallicity is the underlying physical driver of the gas-phase AVR, it would still manifest as an observed redshift dependence without intrinsically depending on redshift directly. But this is only, at present, a circumstantial argument in lieu of an explicit experiment for gas disks.\par
In any case, a testable prediction of this hypothesis is that at fixed present-day stellar mass (and without significant ex situ contributions or perturbations), galaxies with higher SFR (that is, faster chemical enrichment) should achieve dynamically-colder orbits at fixed age --- or alternatively, the stellar AVR should have a steeper slope. This is because in such a scenario, the absolute cosmic time is not the driver of the AVR, but rather the time it takes for a particular galaxy to achieve a particular degree of enrichment. Since the stellar metallicity and \(\sigma_z\) will depend on both stellar mass and accretion history, it is imperative to control for those parameters to test this prediction. This is at present not possible for the sample of galaxies for which our analysis has been performed, but should be accessible to theoretical models and simulations. In fact, \cite{just2010a} explicitly investigate the effect of SFR on the stellar AVR, tailored to fit the Milky-Way, through a series of models. That work finds that for similar forms of the SFH, the model with a higher SFR has lower vertical velocity dispersion, despite peaking at earlier epochs.\par
The outlier in this respect from \cref{img:fixedSPDisp} is NGC~3115. We have already established that accretion has played a minor role in the stellar mass assembly of the three Fornax galaxies. Conversely, \cite{poci2019} conclude that NGC~3115 assembled \(\sim 68\%\) of its present-day stellar mass from external sources. Moreover, given the higher stellar mass of NGC~3115, these accreted systems could be higher mass, and therefore more enriched on average, compared to lower-mass satellites accreted onto lower-mass hosts. Thus the age and metallicity trends would be significantly phase-mixed, as is seen by the reduced correlation coefficients. The persistence of an AVR, only at high metallicity, may be indicative of secular evolution following the last accretion event.\par
A similar analysis has been performed for the Milky-Way using a combination of many of the recent photometric and spectroscopic surveys \citep{sharma2020}. That work finds a strong \(\chemZH-\sigma_z\) relation at fixed age. Yet they also find a persistent stellar AVR at fixed metallicity, but only at young ages. The AVR then flattens and correlates solely with metallicity at old ages. \cite{yu2018} see similar trends with two bins of metallicity. \cite{sharma2020} interpret the \(\chemZH-\sigma_z\) correlation as a connection between \(\sigma_z\) and the stellar birth radius. However, in the case of the Fornax galaxies, we see no clear (monotonic) radial gradients of metallicity in \cref{img:2dmap153,img:2dmap170,img:2dmap177}. Comparisons to that work are complicated, however, by the selection functions of the Milky-Way data sets, and so these results may be tracing different physical regimes.
\subsection{\SZ\ formation}\label{ssec:sz}
All of our results indicate that the Fornax galaxies have undergone mild transformations due to the cluster potential, primarily in their outer regions. They have been able to retain their cold central kinematics, yet in a thicker configuration compared to the field. We posit, thus, that their \SZ\ morphology is a result of these interactions. This is neither of the explanations typically invoked to explain the transformations of galaxies into \SZ; mergers \citep{chilingarian2009, querejeta2015, tapia2017, poci2019} or the `fading' of spiral galaxies \citep{larson1980, bekki2002, donofrio2015, mishra2018, rizzo2018}. While cluster environments are common in the faded-spiral scenario, it is predicated on the supposed spiral progenitor first ceasing star-formation due to the environment \citep[for instance,][]{boselli2006, book2010, peng2012, mendel2013, bekki2014}, allowing it to subsequently transform into an \SZ\ galaxy. Yet we see evidence for significant star formation activity well beyond the suspected time of in-fall to the cluster for all three galaxies. Gas was therefore readily available until relatively recently. The evidence for a lack of stellar accretion has been discussed, in agreement with other cluster studies, rendering this formation path unlikely as well. This is also consistent with the deductions of \cite{comeron2019} and \cite{pinna2019a} who find that accretion does not play a major role in the formation of `thick' disks in local galaxies.\par
An alternative scenario proposed by \cite{diaz2018} states that at high redshift, gas-rich satellite accretion onto compact elliptical galaxies leads to the formation of the disk component of the resulting \SZ. Our models impose the constraint that if this scenario occurred for the three Fornax galaxies, such accretion would have had to occur \(\gtrsim 12\ \si{\giga\year}\) ago for FCC~153 and FCC~170, and \(\gtrsim 10\ \si{\giga\year}\) ago for FCC~177, since it must precede the formation of the dynamically-cold disk. However, this scenario supposes that the compact elliptical, which goes on to form the `bulge' of the subsequent \SZ, is responsible for the suppression of spiral arms in the disk. Conversely, our data suggest that only FCC~170 has a significant contribution from a central pressure-supported structure. More broadly, a diversity of \SZ\ properties is emerging \citep{fraser-mckelvie2018, coccato2020, deeley2020, tous2020}, and it is unlikely that a single formation path is responsible for this diversity.
\subsection{The Fornax galaxy cluster population}
The photometric catalogue of \cite{ferguson1989}, covering \(40\) sq. degrees, contains \(35\) \SZ-like galaxies (some of which have uncertain classification), with \(20\) being brighter than the magnitude limit of the \ftd\ survey \((m_B \leq 15)\). Of these, \(12\) were observed by \ftd, accounting for \(60\%\ (34\%)\) of bright (all) lenticular galaxies in the Fornax cluster. Our analysis of the sub-sample of three galaxies, chosen for the reasons discussed above, can not therefore account for the expected diversity of evolutionary pathways within the cluster's galaxy population. So while we infer no major mergers for these galaxies, for instance, we can not exclude this formation path for some of the other cluster \SZ\ galaxies. We have, however, probed the relative extremes in terms of assembly histories, with FCC~170 and FCC~177 forming the majority of their stellar content at early and late times, respectively. This analysis has simultaneously uncovered properties which appear to be approximately independent of assembly history, namely the stellar AVR, which we thus expect to hold for all but the most violent histories. To confidently infer the histories of the remaining galaxies from \ftd, this analysis must be applied to each of them individually. This is the goal of future work.
\section{Conclusions}
In this work we modelled three edge-on \SZ\ galaxies in the Fornax cluster as part of the \ftd\ project. We applied sophisticated dynamical and stellar-population techniques to self-consistently model the entire stellar information content. These models were used to infer how each galaxy formed, and allowed us to place strong constraints on some of the hypothesised processes that affect galaxy formation and evolution, in particular in the cluster environment. These findings are summarised here:
\begin{itemize}[label=\(\bullet\)]
\item All three galaxies retain a strongly-rotating component that has persisted for many dynamical times. These structures can be composed of both young and old stars, implying that they have survived the galaxy's entry into the cluster and subsequent evolution therein (\cref{img:mfh153,img:mfh170,img:mfh177}).
\item There is evidence of continued star-formation in all three galaxies, to varying degrees. Owing to the metallicity and kinematics, we suggest this star formation is almost exclusively in situ through recycled gas, as there is no evidence of gas accretion (\cref{img:mfh153,img:mfh170,img:mfh177}).
\item Our results are suggestive of a suppression of stellar accretion. We postulate that this is driven by the relative motions of galaxies within the cluster (\cref{img:mfh153,img:mfh170,img:mfh177}), as proposed in previous works.
\item There is evidence against internal heating as the cause of the stellar age--velocity-dispersion relation, suggesting that older stars were created with inherently-higher \(\sigma_z\), in agreement with the results of \cite{poci2019}. There is tentative evidence that the relations for the cluster galaxies are elevated with respect to the field (\cref{img:cosmoDisp}), implying that harassment may be responsible for mild dynamical heating.
\item We find tentative observational evidence of a potential fundamental stellar \(\chemZH-\sigma_z\) relation which we argue may contribute significantly to the observed age--velocity-dispersion relation (\cref{img:fixedSPDisp}).
\end{itemize}\par
We endeavour in future work to incorporate more detailed stellar-population analyses, including variable IMF, while continuing to apply this methodology to a variety of galaxies. Deriving these histories for other galaxies will enable a more thorough understanding of how galaxies piece together their mass, and which processes have dominant effects in various regimes.
\begin{acknowledgements}
We thank Lorenzo Morelli, Thomas Spriggs, and Adrian Bittner for discussions on this work. AP acknowledges financial support from Macquarie University and the ESO Studentship Programme. RMM is the recipient of an Australian Research Council Future Fellowship (project number FT150100333). LZ acknowledges the support from National Natural Science Foundation of China under grant No. Y945271001, and the National Key R$\&$D Program of China under grant No. 2018YFA0404501. GvdV acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement no. 724857 (Consolidator Grant ArcheoDyn). EMC is supported by MIUR grant PRIN 2017 20173ML3WW{\_}001 and by Padua University grants DOR1715817/17, DOR1885254/18, and DOR1935272/19. JF-B, IMN, and FP acknowledge support through the RAVET project by the grant PID2019-107427GB-C32 from The Spanish Ministry of Science and Innovation.
\par
Based on observations collected at the European Southern Observatory under ESO programme 296.B-5054(A). This work makes use of the \tfo{SciGar} compute cluster at ESO, and the \tfo{OzStar} supercomputer at Swinburne University. The work also makes use of existing software packages for data analysis and presentation, including \tso{AstroPy} \citep{astropycollaboration2013}, \tso{Cython} \citep{behnel2011}, \tso{IPython} \citep{perez2007}, \tso{matplotlib} \citep{hunter2007}, \tso{NumPy} \citep{harris2020a}, the \tso{SciPy} ecosystem \citep{virtanen2020}, and \tso{statsmodels} \citep{seabold2010}. We finally thank the anonymous referee, whose feedback greatly improved the depth and clarity of this work.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
In 2021, the Muon $g-2$ Collaboration at Fermilab
\cite{Muong-2:2021ojo}
succeeded in confirming and improving the result
of the E821/BNL measurement from 2006 \cite{Muong-2:2006rrc}
for the anomalous magnetic moment of the
muon \cite{Jegerlehner:2017gek},
and it is working to increase the accuracy further.
The existing uncertainties in the disagreeing theoretical Standard Model
result \cite{Aoyama:2020ynm} therefore need to be scrutinized and
reduced.
Whereas QED \cite{Aoyama:2012wk,*Aoyama:2017uqe,*Aoyama:2019ryr} and electroweak contributions \cite{Czarnecki:2002nt,Gnendiger:2013pva} are
sufficiently under control, the theoretical uncertainty
is dominated by hadronic effects
\cite{Melnikov:2003xd,Prades:2009tw,Kurz:2014wya,Colangelo:2014qya,Pauk:2014rta,Davier:2017zfy,Masjuan:2017tvw,Colangelo:2017fiz,Keshavarzi:2018mgv,Colangelo:2018mtw,Hoferichter:2019gzf,Davier:2019can,Keshavarzi:2019abf,Hoferichter:2018dmo,*Hoferichter:2018kwz,Gerardin:2019vio,Bijnens:2019ghy,Bijnens:2020xnl,Colangelo:2019lpu,Colangelo:2019uex,Danilkin:2019mhd,Blum:2019ugy,Chao:2021tvp,Hoferichter:2021wyj,Danilkin:2021icn}.
The largest contribution by far is the hadronic vacuum polarization (HVP),
where a recent lattice calculation \cite{Borsanyi:2020mff}
is at variance with the result of the 2020 White Paper (WP) of the Muon $g-2$ Theory Initiative \cite{Aoyama:2020ynm} beyond the
respective estimated errors, leading to a less strong deviation
from the experimental result if the lattice result is used
in place of the data-driven one obtained in the WP.
Once this discrepancy is resolved, it will be important to
also reduce the uncertainty in the contribution from hadronic light-by-light scattering (HLBL), whose present errors at the level of 20\%
are, in absolute numbers, comparable to the small errors aimed for in the case
of HVP.
Besides the dominant pion pole contribution to HLBL, which by now
seems to be well understood and for which data-driven approaches and
lattice evaluations agree perfectly, and the similarly well determined contributions from $\eta$ and $\eta'$ mesons, other single-meson contributions
are much less under control. An important contribution is
expected in particular from axial vector mesons, which like
pseudoscalars have anomalous couplings to photons. However,
theoretical predictions from various hadronic models vary
considerably \cite{Bijnens:2001cq,Melnikov:2003xd,Pauk:2014rta,Jegerlehner:2017gek,Roig:2019reh,Masjuan:2020jsf}, which has led to a WP estimate
of the axial vector contribution with 100\% error.
Holographic QCD models motivated by the AdS/CFT correspondence \cite{Maldacena:1997re,Witten:1998zw,Aharony:1999ti}
have proved to be remarkably successful in qualitatively and also
quantitatively describing hadronic observables,
even in versions with a minimal set of
parameters and the simplest geometry of anti-de Sitter space with a hard-wall (HW) cutoff.
Such AdS/QCD models are not good enough to help with the
current discrepancy between different predictions for the HVP contribution, where sub-percent accuracy is required. However,
they are certainly of interest for the HLBL contributions.
In Ref.~\cite{Leutgeb:2019zpq}, we have revisited a previous
study \cite{Cappiello:2010uy} of the pion pole contribution to HLBL and its consequences
for the value of $a_\upmu=(g-2)_\upmu/2$ using simple bottom-up AdS/QCD models
in the chiral limit and we have found a satisfactory agreement
with the data-driven and lattice approaches. The transition
form factors obtained in AdS/QCD involve infinite towers of
vector mesons, realizing vector meson dominance (VMD) in a form
that is consistent with the asymptotic behavior derived
from perturbative QCD \cite{Hoferichter:2020lap}
for both the singly and the doubly virtual case.
In \cite{Leutgeb:2019gbz,Cappiello:2019hwh}, also the
contribution from the infinite tower of axial vector mesons and
their anomalous coupling to photons has been calculated,
and it could be shown that this takes care of the
long-standing problem that simpler hadronic models had
with the Melnikov-Vainshtein (MV) constraint \cite{Melnikov:2003xd}
on the HLBL scattering
amplitude (see \cite{Colangelo:2021nkr} for
a review assessing its impact on $a_\upmu$).
In \cite{Leutgeb:2021mpu}, we have more recently
extended these calculations to include finite quark masses
in the flavor-symmetric case. Besides demonstrating that
the saturation of the MV constraint is entirely due to
axial vector mesons also away from the chiral limit, we have
confirmed the relatively large contribution obtained in
the chiral model.
In the present paper, we consider a minimal extension of the original
hard-wall AdS/QCD model \cite{Erlich:2005qh} due to Katz and Schwartz \cite{Katz:2007tf} for solving the U(1)$_A$ problem associated with
the relatively large $\eta'$ mass.
Going slightly beyond the setup of \cite{Katz:2007tf} by
including a nonvanishing gluon condensate, we find that a very accurate
match of the masses of $\eta$ and $\eta'$ mesons as well as their
coupling strength to photons can be achieved.
We then use this model to evaluate all contributions of
pseudoscalar and
axial vector meson excitations, and thereby also the effect of the MV short-distance constraint, to the HLBL contribution to $a_\upmu$.
\section{Katz-Schwartz model: Hard-wall AdS/QCD with solved $U(1)_A$ problem}
The model proposed by Katz and Schwartz \cite{Katz:2007tf,Schafer:2007qy} for solving the $U(1)_A$ problem builds upon the original HW AdS/QCD models of Ref.~\cite{Erlich:2005qh,DaRold:2005mxj} which have turned out to
provide a remarkably good approximation to the physics of light hadrons
while introducing a minimal set of parameters.
In these models, one keeps the background geometry of pure anti-de Sitter space
with metric
\begin{equation}
ds^2=z^{-2}(\eta_{\mu\nu}dx^\mu dx^\nu - dz^2),
\end{equation}
cut off by a hard wall at a finite value of the holographic radial coordinate at $z=z_0$ with suitable boundary conditions for the five-dimensional fields
that at the conformal boundary at $z=0$ represent sources for a set of
QCD operators of interest.
In addition to five-dimensional Yang-Mills fields $\mathcal{B}^{L,R}$ dual to left and right chiral
quark currents, a bifundamental scalar $X$ representing quark-antiquark bilinears is introduced for spontaneous symmetry breaking of $U(N_f)\times U(N_f)\to U(N_f)_V$.
Confinement is implemented by
a cutoff at some finite value of the radial coordinate $z_0$,
where boundary conditions for the five-dimensional fields are imposed.
The five-dimensional Yang-Mills action
\begin{eqnarray}
S_{\rm YM} &=& -\frac{1}{4g_5^2} \int d^4x \int_0^{z_0} dz\
\sqrt{g}\, g^{PR}g^{QS}
\nonumber\\
&&\qquad\text{tr}\left(\mathcal{F}^\mathrm{L}_{PQ}\mathcal{F}^\mathrm{L}_{RS}
+\mathcal{F}^\mathrm{R}_{PQ}\mathcal{F}^\mathrm{R}_{RS}\right),
\end{eqnarray}
where $P,Q,R,S=0,\dots,3,z$ and $\mathcal{F}_{MN}=\partial_M \mathcal{B}_N-\partial_N \mathcal{B}_M-i[\mathcal{B}_M,\mathcal{B}_N]$, is
augmented by
a Chern-Simons action $S_{\rm CS}=S_{\rm CS}^\mathrm{L}-S_{\rm CS}^\mathrm{R}$
to account for flavor anomalies, reading (in
differential form notation)
\begin{equation}\label{SCS}
S_{\rm CS}^\mathrm{L,R}=\frac{N_c}{24\pi^2}\int\text{tr}\left(\mathcal{B}\mathcal{F}^2-\frac{i}2 \mathcal{B}^3\mathcal{F}
-\frac1{10}\mathcal{B}^5\right)^\mathrm{L,R},
\end{equation}
(up to a boundary term at $z_0$ that needs to be subtracted \cite{Grigoryan:2007wn,Leutgeb:2021mpu}).
The bifundamental bulk scalar $X$ is
parametrized as
\cite{Abidin:2009aj}
\begin{equation}
X=e^{i\eta^a(x,z) t^a}X_0e^{i\eta^a(x,z)t^a},
\end{equation}
where $\eta^a$, $a=0,\ldots,8$, is a nonet of pseudoscalar excitations.
The five-dimensional mass of $X$ is fixed at $M_X^2=-3$ by the scaling
dimension of the dual operator $\bar q_L q_R$,\footnote{In \cite{Leutgeb:2021mpu}
we have also studied the generalization to other values of $M_X$ as
proposed in \cite{Domenech:2010aq}.}
leading to a vacuum solution
\begin{equation}
X_{0ij}=\frac12 m_{ij}z+\frac12 \sigma_{ij}z^3.
\end{equation}
Choosing $N_f=3$, we restrict ourselves to the isospin symmetric case
$m_u=m_d=m_q\not=m_s$ with
$X_0=\frac12\text{diag}(v_q,v_q,v_s)$.
For taking care of the $U(1)_A$ problem, a massless complex field $Y$ is
introduced, representing the gluon field strength squared $\alpha_s G_{\mu\nu}^2$ by its modulus and $\alpha_s G\tilde G$ by its phase, such that the Lagrangian of scalars reads
\begin{eqnarray}
&&\mathcal{L}_{X,Y}/\sqrt{g}=
\text{tr}\left[ |DX|^2+3|X|^2\right]\nonumber\\
&&+\frac1{2(\ln z\Lambda)^2}|DY|^2+\frac{\kappa_0}{4}\left[\bar{Y}^{N_f}
\det(X)+\text{h.c.}\right],\label{LXY}
\end{eqnarray}
where the logarithm in front of the kinetic term for $Y$ accounts for
the fact that its dual operators approach scaling dimension 4 only
asymptotically.
The complex scalar field $Y$ is charged only under the singlet axial vector field and hence its coupling is given by
\begin{eqnarray}
D_M Y= \partial_M Y+\frac{i}{\sqrt{2 N_f}}(\mathcal{B}^{L,0}_M-\mathcal{B}^{R,0}_M)Y.
\end{eqnarray}
Without the logarithm in \eqref{LXY}, the field equations for $Y$
would give a background $\langle Y\rangle = C + \Xi z^4$, where
$\Xi$ represents a gluon condensate.
After absorbing some numerical factors into $C$, the authors of \cite{Katz:2007tf} use the axial anomaly relation and the QCD operator product expansion (OPE) of the flavor-singlet axial vector correlation function to find
\begin{equation}
C=\frac{\alpha_s}{2\pi^2}\sqrt{2N_f}.
\end{equation}
In QCD $\alpha_s$ is a running coupling and since in holography energy is identified with $z^{-1}$, they argue that $\alpha_s$ should be made $z$-dependent.
Matching the one-loop QCD $\beta$ function, $\mu\,\partial_\mu\alpha_s(\mu)\simeq -\beta_0\alpha_s^2$
with $\beta_0=9/(2\pi)$ for $N_c=N_f=3$ and $\mu=z^{-1}$,
gives $\alpha_s^{-1}=\beta_0\ln\big(1/(\Lambda z)\big)$, which
is adopted for all $z<z_0=\Lambda^{-1}$.
Making $\alpha_s$ and therefore $C$ depend on $z$ is of course inconsistent with the equations of motion without the logarithm; hence we use the modified version \eqref{LXY}, which includes it.
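As a quick numerical illustration (a sketch, not code from the paper; the sample points are arbitrary), the $z$-dependent coupling with $\alpha_s^{-1}=\beta_0\ln\big(1/(z\Lambda)\big)$, i.e. in the sign convention that keeps $\alpha_s$ positive inside the bulk, decreases toward the UV boundary $z\to0$ and grows toward the hard wall, as expected from asymptotic freedom:

```python
import math

BETA0 = 9.0 / (2.0 * math.pi)   # one-loop coefficient for Nc = Nf = 3

def alpha_s(z, lam):
    """One-loop running coupling with the energy scale identified as 1/z;
    positive throughout the bulk 0 < z < 1/lam."""
    return 1.0 / (BETA0 * math.log(1.0 / (z * lam)))

lam = 0.3225  # GeV, identified with z0^-1 further below

a_uv = alpha_s(0.01 / lam, lam)  # close to the UV boundary
a_ir = alpha_s(0.90 / lam, lam)  # close to the hard wall
```

With these illustrative points, $a_{\rm uv}\approx0.15$ while $a_{\rm ir}\approx6.6$, so the coupling blows up as the wall is approached.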
The presence of the logarithm in the action modifies the field equations for $Y$, leading to the general solution%
\footnote{Here we also deviate from Ref.~\cite{Katz:2007tf},
where a gluon condensate, here parametrized by $D_1$, was neglected and only $C$ of the background solution had to be modified.}
\begin{equation}\label{yvev}
\langle Y\rangle=D_0 +D_1 z^4 \left[(\ln z\Lambda)^2 -\frac12\ln z\Lambda+\frac{1}{8} \right].
\end{equation}
For later convenience we define\footnote{This is the same redefinition that \cite{Katz:2007tf} use, except for the logarithm.} $\tilde{Y}_0=\frac{2}{\sqrt{2 N_f}}(-\ln z \Lambda)^{-1}\langle Y\rangle
$, which is parameterized as
\begin{equation}
\tilde{Y}_0=\frac{C_0}{-\ln z \Lambda}-\Xi_0 z^4 \big( (\ln z\Lambda) -\frac{1 }{2}+\frac{1}{8\ln z \Lambda } \big)
\end{equation}
where $C_0=\sqrt{2N_f}/(2\pi^2\beta_0)=\sqrt{2/3}/(3\pi)$.
This background now naturally incorporates the running of $\alpha_s$ consistently and permits a nonvanishing gluon condensate through nonzero values of $\Xi_0$.
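As a small cross-check (a sketch, not code from the paper), the closed-form value of $C_0$ quoted above indeed follows from $C_0=\sqrt{2N_f}/(2\pi^2\beta_0)$ with $\beta_0=9/(2\pi)$ and $N_f=3$:

```python
import math

Nf = 3
beta0 = 9.0 / (2.0 * math.pi)

# C0 from the anomaly/OPE matching vs. the closed form quoted in the text
C0_from_matching = math.sqrt(2 * Nf) / (2 * math.pi**2 * beta0)
C0_closed_form = math.sqrt(2.0 / 3.0) / (3.0 * math.pi)  # = sqrt(6)/(9 pi) ~ 0.0866
```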
The coupling constant $g_5$ can be fixed by the OPE of the
vector current correlator as
\begin{equation}\label{g5LO}
g_5^2=12\pi^2/N_c=(2\pi)^2\quad \text{(OPE fit)},
\end{equation}
but we shall alternatively consider matching the
decay constant of the $\rho$ meson,
which in the hard-wall model leads to \cite{Leutgeb:2022cvg}
\begin{equation}\label{g5Frho}
g_5^2=0.894\,(2\pi)^2\quad \text{($F_\rho$-fit)}.
\end{equation}
The latter leads to a significant improvement of the holographic
result for the hadronic vacuum polarization:
With the leading-order OPE fit \eqref{g5LO}, there is a deviation
of 14\% from $N_f=2$ dispersive results, which is reduced to
about 5\% with \eqref{g5Frho} \cite{Leutgeb:2022cvg}.
A reduction of $g_5^2$ by about 10\% appears to be
warranted also by comparing with next-to-leading order QCD results
for the vector correlator at moderately large $Q^2$ values \cite{Shifman:1978bx,Melic:2002ij,Bijnens:2021jqo}.
It also brings our $N_f=2$ results for the pion pole contribution to $a_\upmu$
\cite{Leutgeb:2021mpu} in line with the WP result.
With $g_5$ and $C_0$ fixed by the UV behavior, the free parameters of the model
are i) the location of the hard wall, $z_0$, which can be identified with $\Lambda^{-1}$, and will be set by the $\rho$ meson mass,
ii) quark masses in $m_{ij}=\text{diag}(m_q,m_q,m_s)$,
iii) chiral condensates $\sigma_{ij}$, which we shall assume to be
given by a single parameter, $\sigma_{ij}=\sigma\delta_{ij}$,
and iv) $\Xi_0$, which corresponds to the gluon condensate $\alpha_s\langle G^2\rangle$.
The coupling constant $\kappa_0$, on the other hand, can be
set to some sufficiently large value, since it turns out that
for $\kappa_0\gg1$ all results depend only weakly on $\kappa_0$ \cite{Katz:2007tf}.
\section{Meson modes and transition form factors}
Vector meson dominance is naturally part of this model by
relating a nonzero boundary value $\mathcal{B}^V_\mu(0)=e\mathcal{Q}A_\mu^\mathrm{e.m.}$
to the background electromagnetic potential and setting
$\mathcal{Q}=\text{diag}(\frac23,-\frac13,-\frac13)$ according to the charges of up, down, and strange quarks.
Normalizable modes of $\mathcal{B}_V$ and $\mathcal{B}_A$
correspond to vector and axial vector mesons, the longitudinal polarizations
of the latter mixing with the pseudoscalars $\eta^a$ in $X$.
Ignoring scalar excitations, which in this model do not
couple to photons,\footnote{See Ref.~\cite{Cappiello:2021vzi} for extensions of the HW model where further interactions are switched on to have scalars couple to photons in order to study their potential contribution to $a_\upmu$ in AdS/QCD.} the additional $Y$ field also involves a
pseudoscalar $a$ through its phase
\begin{equation}
Y=\langle Y\rangle \exp\left[{i2a(x,z)/\sqrt{2N_f}}\right],
\end{equation}
which corresponds to a pseudoscalar glueball in the boundary theory ($G$) that
eventually couples to photons through mixing with the flavor-singlet pseudoscalar mesons.
To determine the pseudoscalar eigenmodes in the mixed $a=(0,8)$-sector, we consider the equations of motion\footnote{We assume a summation over $a=(0,8)$ for contracted flavor indices and work in the $A_z=0$ gauge.}
\begin{eqnarray}
\label{eq:phi}
&&\partial_z\left(\frac{1}{z} \partial_z \varphi^a_n \right) + g_5^2 \frac{M_{ab}^2}{z^3} \left(\eta^b_n-\varphi^b_n\right) \nonumber\\
&&\qquad+\delta^{a0} g_5^2 \frac{\tilde{Y}_0^2}{z^3} \left(a_n-\varphi^0_n\right)=0,
\end{eqnarray}
\begin{eqnarray}
\label{eq:aeq}
&&\partial_z\left(\frac{\tilde{Y}_0^2}{z^3} \partial_z a_n \right) + m_n^2 \frac{\tilde{Y}_0^2}{z^3} \left(a_n-\varphi^0_n\right) \nonumber\\
&&\qquad+ { \kappa N_f} \frac{v_q^2 v_s }{z^5 } \left(\frac{ \tilde{Y}_{00}}{4}\right)^{N_f} \left(\eta^0_n-a_n\right)=0,
\end{eqnarray}
\begin{eqnarray}
&&\frac{m_n^2}{g_5^2} \frac{1}{z} \partial_z \varphi^a_n- \delta^{a0}\frac{{\tilde{Y}_0^2}}{z^3} \partial_z a_n -\frac{M^2_{ab}}{z^3}\partial_z\eta^b_n=0,
\end{eqnarray}
with the longitudinal component of the axial gauge field $\partial_\mu \varphi^a=A^{a\parallel}_\mu$ and an effective 5-dimensional mass term
\begin{equation}
M^2_{ab} = \frac{1}{3}
\begin{pmatrix}
2 v_q^2+ v_s^2 & \sqrt{2}( v_q^2-v_s^2)\\
\sqrt{2}(v_q^2-v_s^2) & v_q^2+ 2 v_s^2
\end{pmatrix},
\end{equation}
with $v_{q,s}=m_{q,s} z + \sigma z^3$.
Here we have also absorbed numerical constants in $\kappa_0$ and renamed it to $\kappa$, and we defined $\tilde{Y}_{00}=-{\tilde{Y}_0 \ln z \Lambda}/{C_0}$.
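A few lines suffice to check the structure of $M^2_{ab}$ (the parameter values below are illustrative, not the fitted ones): the matrix is symmetric, its trace is $v_q^2+v_s^2$, and the $0$--$8$ mixing entry vanishes in the flavor-symmetric limit $v_q=v_s$.

```python
import math

def v(m, sigma, z):
    # v_{q,s}(z) = m z + sigma z^3
    return m * z + sigma * z**3

def M2(vq, vs):
    # effective mass-squared matrix in the (0,8) sector
    s2 = math.sqrt(2.0)
    return [[(2 * vq**2 + vs**2) / 3.0, s2 * (vq**2 - vs**2) / 3.0],
            [s2 * (vq**2 - vs**2) / 3.0, (vq**2 + 2 * vs**2) / 3.0]]
```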
The fields $\varphi^a$ are dual to the QCD operators $-\partial_{\mu}J_A^{\mu, a}$, and the glueball field $a$ is dual to $-\sqrt{2N_f}K$, where $K=\frac{\alpha_s}{8 \pi}G^a_{\mu \nu}\tilde{G}^{a\mu \nu}$ is the instanton density. This new field then allows one, among other things, to compute overlaps of the instanton density with the pseudoscalar modes $\eta, \eta',\dots$ as well as the topological susceptibility.
All fields of the normalizable modes have Dirichlet boundary conditions in the UV at $z=0$, Neumann boundary conditions in the IR at $z=z_0$ and are canonically normalized by
\begin{equation}
\int dz \left( \frac{M^2_{ab}}{z^3}\,\eta^a_n\left(\eta^b_m-\varphi^b_m\right)+\frac{\tilde{Y}_0^2}{z^3}\,a_n\left(a_m-\varphi^0_m\right) \right) = \delta_{nm}.
\end{equation}
From the Chern-Simons term \eqref{SCS} we obtain the transition form factor (TFF)
\begin{eqnarray}
&&F_n(Q_1^2, Q_2^2)=\text{tr}(t^a \mathcal{Q}^2) F_n^{a}(Q_1^2, Q_2^2),
\end{eqnarray}
with
\begin{eqnarray}
&&F_n^a(Q_1^2, Q_2^2)= -\frac{N_c}{2 \pi^2} \bigg( \int dz \varphi'^a_n(z) \mathcal{J}(Q_1,z) \mathcal{J}(Q_2,z) \nonumber\\
&&\qquad-\left.\left(\varphi^a_n(z)-\eta^a_n(z)\right) \mathcal{J}(Q_1,z) \mathcal{J}(Q_2,z)\right|_{z_0} \bigg),
\end{eqnarray}
where the vector bulk-to-boundary propagator
\begin{equation}\label{HWVF}
\mathcal{J}(Q,z)=
Qz \left[ K_1(Qz)+\frac{K_0(Q z_0)}{I_0(Q z_0)}I_1(Q z) \right]
\end{equation}
describes virtual photons
with spacelike momentum $q^2=-Q^2$.
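Since $\mathcal{J}(Q,z)$ enters all TFF integrals, its two defining properties, $\mathcal{J}(Q,0)=1$ at the conformal boundary and Neumann behavior $\partial_z\mathcal{J}(Q,z)|_{z_0}=0$ at the wall, can be verified numerically. The stdlib-only sketch below (not code from the paper; quadrature settings are illustrative) evaluates the modified Bessel functions from their standard integral representations:

```python
import math

def bessel_i(n, x, steps=2000):
    # I_n(x) = (1/pi) int_0^pi exp(x cos t) cos(n t) dt  (midpoint rule)
    h = math.pi / steps
    s = sum(math.exp(x * math.cos((k + 0.5) * h)) * math.cos(n * (k + 0.5) * h)
            for k in range(steps))
    return s * h / math.pi

def bessel_k(n, x, tmax=20.0, steps=4000):
    # K_n(x) = int_0^inf exp(-x cosh t) cosh(n t) dt  (truncated at tmax)
    h = tmax / steps
    s = sum(math.exp(-x * math.cosh((k + 0.5) * h)) * math.cosh(n * (k + 0.5) * h)
            for k in range(steps))
    return s * h

def J(Q, z, z0):
    # hard-wall vector bulk-to-boundary propagator quoted above
    ratio = bessel_k(0, Q * z0) / bessel_i(0, Q * z0)
    return Q * z * (bessel_k(1, Q * z) + ratio * bessel_i(1, Q * z))

z0 = 1.0 / 0.3225  # GeV^-1, hard-wall position used later in the fits
```

The Neumann property follows analytically from $\frac{d}{dx}[xK_1(x)]=-xK_0(x)$ and $\frac{d}{dx}[xI_1(x)]=xI_0(x)$, so the numerical derivative at $z_0$ should vanish up to discretization error.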
We can also generalize the
sum relations obtained in \cite{Leutgeb:2021mpu} to the $a=0,3,8$-sector. Most importantly, we can derive the anomaly equation
\begin{equation}
\sum_n f_n^a F_n^{a}(0,0) = \frac{N_c}{2 \pi^2} \text{tr}(t^a \mathcal{Q}^2), \quad a=(0,3,8)
\end{equation}
and the short-distance constraint (SDC) ($Q \gg 1$)
\begin{equation}
F_n(Q^2 (1+w), Q^2 (1-w))=\frac{N_c}{2 \pi^2}\text{tr}(t^a \mathcal{Q}^2) g_5^2 f_n^a \frac{1}{Q^2} f(w),
\end{equation}
with the asymmetry function
\begin{equation}
f(w)=\frac{1}{w^2}-\frac{1-w^2}{2w^3}\ln\frac{1+w}{1-w},
\end{equation}
and the pseudoscalar decay constants
\begin{equation}
f_n^a=\left.-g_5^{-2}\partial_z\varphi^{a}_n/z\right|_{z\to0}.
\end{equation}
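These relations admit a direct numerical check (a sketch with assumed inputs $g_5^2=(2\pi)^2$, $f_\pi=92.21$ MeV, and $\mathrm{tr}(t^3\mathcal{Q}^2)=1/6$ for the $\pi^0$): the asymmetry function tends to $f(w\to0)=2/3$, so the SDC reproduces the OPE value $Q^2F_{\pi^0\gamma^*\gamma^*}(Q^2,Q^2)\to 2f_\pi/3$.

```python
import math

def f(w):
    # asymmetry function of the pseudoscalar short-distance constraint
    return 1.0 / w**2 - (1.0 - w**2) / (2.0 * w**3) * math.log((1.0 + w) / (1.0 - w))

Nc = 3
fpi = 0.09221                                        # GeV (input of the fits)
g5sq = (2.0 * math.pi)**2                            # OPE fit
tr_t3_Q2 = 0.5 * ((2.0 / 3.0)**2 - (1.0 / 3.0)**2)   # tr(t^3 Q^2) = 1/6

w = 1e-3                                             # near the symmetric limit
Q2F_sym = Nc / (2.0 * math.pi**2) * tr_t3_Q2 * g5sq * fpi * f(w)
```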
Note that the decay constant associated with the glueball field,
\begin{equation}
\left. f_G^n=\tilde{Y}_0^2 \partial_z a_n / z^3\right|_{z\to0}
\end{equation}
does not contribute to the TFF. In QCD, $f_G^n$ corresponds to $(-\sqrt{2 N_f})\langle \Omega|K|P_n\rangle$, where $P_n$ is the respective pseudoscalar particle.
Comparing the axial vector sector to \cite{Leutgeb:2021mpu}, the $a=(0,8)$ equations of motion are changed to
\begin{eqnarray}\label{psiAnHW}
\partial_z\left(\frac1z \partial_z \psi^a_{A,n}(z)\right)+\frac1z m_{A,n}^2 \psi^a_{A,n}(z) \nonumber
\\
-\frac{g_5^2( M_{ab}^2+\delta_{0a} \delta_{0b}\tilde{Y}_0^2)}{z^3} \psi^b_{A,n}(z)=0,
\end{eqnarray}
and we obtain the corresponding asymptotic behavior of the axial vector TFF%
\footnote{See Ref.~\cite{Leutgeb:2019gbz} for the precise relation to the amplitude $\gamma^*\gamma^*\to A$.}
\begin{eqnarray}
&&A_n(Q_1^2, Q_2^2)=\text{tr}(t^a \mathcal{Q}^2) A_n^{a}(Q_1^2, Q_2^2),
\end{eqnarray}
with
\begin{equation}\label{TFFAn}
A_n^a(Q_1^2,Q_2^2) = \frac{2g_5}{Q_1^2} \int_0^{z_0} \!\!\! dz \left[ \frac{d}{dz} \mathcal{J}(Q_1,z) \right]
\mathcal{J}(Q_2,z) \psi^{A,a}_n(z)
\end{equation}
and the SDC
\begin{equation}
A_n^a(Q^2 (1+w), Q^2 (1-w)) \to \frac{g_5^2 F^a_{A,n}}{Q^4} f_A(w) ,
\end{equation}
with the decay constants
\begin{equation}
F_{A,n}^a=\left.-g_5^{-2}\partial_z\psi^{a}_{A,n}/z\right|_{z\to0}
\end{equation}
and the asymmetry function
\begin{equation}
f_A(w) = \frac1{w^4} \left[ w(3-2w) + \frac12 (w+3)(1-w)\ln\frac{1-w}{1+w}\right],
\end{equation}
in agreement with the asymptotic form derived from QCD in \cite{Hoferichter:2020lap}.
The most general expression for axial vector amplitudes has actually one further asymmetric structure function \cite{Pascalutsa:2012pr,Roig:2019reh,Zanke:2021wiq}, which is set to zero in the holographic model and whose phenomenological importance has not yet been established; see Ref.~\cite{Zanke:2021wiq} for
a compilation of the available phenomenological information.
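The axial asymmetry function can be checked the same way (a sketch, not the paper's code); a series expansion gives $f_A(w)=2/3-4w/15+\mathcal{O}(w^2)$, so the symmetric doubly virtual limit is again $2/3$:

```python
import math

def f_A(w):
    # axial vector asymmetry function of the short-distance constraint
    return (w * (3.0 - 2.0 * w)
            + 0.5 * (w + 3.0) * (1.0 - w) * math.log((1.0 - w) / (1.0 + w))) / w**4
```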
\section{Results}
\subsection{Parameter settings}
As one of the input data which we fit, we take the $\rho$ meson mass\footnote{A shortcoming of the minimal HW models considered here is that the strange quark mass modifies the vector meson masses too little compared to reality: $\rho$, $\omega$ and $\phi$ mesons are degenerate, the mass of $K^*$ is raised to only 0.79 GeV.}
$m_\rho=\gamma_{0,1}z_0^{-1}=2.40483\ldots z_0^{-1}$, where $\gamma_{0,1}$
is the first zero of the Bessel function $J_0$. Following Ref.~\cite{Abidin:2009aj}, we have chosen $z_0^{-1}=0.3225$ GeV corresponding to $m_\rho=775.556$ MeV. This fixes the location
of the hard wall, $z_0$, and $\Lambda$ in
the expression for $\alpha_s$.
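The numbers just quoted are easy to reproduce with standard-library tools only (a sketch, not the paper's code; $J_0$ is evaluated from its integral representation and its first zero located by bisection):

```python
import math

def j0(x, steps=2000):
    # J_0(x) = (1/pi) int_0^pi cos(x sin t) dt  (midpoint rule)
    h = math.pi / steps
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(steps)) * h / math.pi

def first_j0_zero(a=2.0, b=3.0, iters=60):
    # bisection: J_0 changes sign between 2 and 3
    fa = j0(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if j0(m) * fa > 0.0:
            a, fa = m, j0(m)
        else:
            b = m
    return 0.5 * (a + b)

gamma01 = first_j0_zero()     # first zero of J_0, ~ 2.40483
m_rho = gamma01 * 0.3225      # GeV, with z0^-1 = 0.3225 GeV
```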
The coupling $g_5$ is either set by the leading-order OPE result \eqref{g5LO} or the slightly reduced value \eqref{g5Frho} obtained by fitting the $\rho$ meson decay constant, where the TFFs reach only 89.4\% of the OPE and Brodsky-Lepage limits,
thereby coming closer to next-to-leading order results at moderately large, experimentally relevant energy scales.
The isospin-symmetric quark mass parameter $m_q$ and the chiral condensate parameter $\sigma$ are chosen such that $m_\pi=134.97$ MeV and $f_\pi=92.21$ MeV
\cite{FlavourLatticeAveragingGroupFLAG:2021npn}; the strange quark mass
parameter $m_s$ is chosen such that \cite{Brunner:2015oga}
\begin{eqnarray}
m_K^2&=&\frac12(m_{K^\pm}^2+m_{K^0}^2)-
\frac12(m_{\pi^\pm}^2-m_{\pi^0}^2)\nonumber\\
&=& (495.007\,\mathrm{MeV})^2
\end{eqnarray}
in order to minimize isospin-breaking contributions.
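This input is reproduced by current PDG meson masses (the values below, in MeV, are assumed):

```python
# isospin-averaged kaon mass squared with the leading isospin-breaking
# pion-mass difference subtracted, as in the equation above
mK_pm, mK_0 = 493.677, 497.611     # charged and neutral kaon (PDG, assumed)
mpi_pm, mpi_0 = 139.570, 134.977   # charged and neutral pion (PDG, assumed)

mK2 = 0.5 * (mK_pm**2 + mK_0**2) - 0.5 * (mpi_pm**2 - mpi_0**2)
mK = mK2**0.5                      # ~ 495.01 MeV
```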
For the two choices of $g_5$, we consider the model with and without a gluon condensate parameter $\Xi_0$. When $\Xi_0=0$ (referred to as model version v0 in the following), we obtain predictions for $m_\eta$
and $m_{\eta'}$ that are around 10\% lower than the real-world values, in accordance
with Ref.~\cite{Katz:2007tf}, where a nonzero $\Xi_0$ was not turned on.
Fitting $\Xi_0$ such that $(1-m^{exp}_{\eta}/m^{th}_{\eta})^2+(1-m^{exp}_{\eta'}/m^{th}_{\eta'})^2$ is minimized (model v1), $m_\eta$
and $m_\eta'$ can be matched at the percent level, as shown in
Table \ref{tab:etas}.
In the four versions of our model, we have chosen a large value of $\kappa=700$, in order to be in the regime where the dependence on $\kappa$
is rather weak.
\subsection{Decay constants and photon coupling}
Up to the slightly different choice of $f_\pi$, the results for the mesons in the isotriplet sector, where $\Xi_0$ does not play a role, are identical to the
HW1m model presented in \cite{Leutgeb:2021mpu} for $g_5=2\pi$. Table \ref{tab:pia1}
generalizes this to the case where $g_5$ is fitted to match $F_\rho$.
In Tables \ref{tab:etas} and \ref{tab:AV}, detailed results for the two versions v0 and v1 are given for the first few pseudoscalar and axial vector modes in the isosinglet sector, showing their mixing behavior in the decay constants $f^8$, $f^0$, $f_G$ for the $\eta$'s, and $F_A^8$, $F_A^0$ for the $f_1$'s, as well
as in the coupling to real photons given by $F(0,0)$ and $A(0,0)$, respectively.
All results are given in units of GeV raised to the appropriate power [note that
the mass dimension of $f^8$ and $f^0$ is 1, but that of $f_G$ is 3; $F(0,0)$ and
$A(0,0)$ have mass dimensions $-1$ and $-2$, respectively].
The pseudoscalars $\eta$, $\eta'$ and a third ground-state $\eta''$ meson arise from mixing of flavor octet and flavor singlet degrees of freedom with the pseudoscalar glueball $G$, each followed by an infinite tower of excited states.
The ground state modes are dominantly flavor octet, singlet, and
glueball judging from the corresponding decay constants evaluated at $z\to0$,
while the first excited triplet $\eta^{(3)}$ to $\eta^{(5)}$ shows a more
involved mixing behavior.
The decay constants for $\eta$ and $\eta'$ agree reasonably well with the
recent lattice results of Ref.~\cite{Bali:2021qem}, where also pseudoscalar matrix
elements have been evaluated. Our results for $f_G$ correspond to $\sqrt{N_f/2}a$ in \cite{Bali:2021qem} and also agree reasonably well. The ratio $a_{\eta'}/a_\eta$ is between 2 and 2.5 depending on the renormalization scale. The latter is more in line with our model v1 that includes a nonzero gluon condensate, where $f_{G,\eta'}/f_{G,\eta}=2.60$ and 2.66 for the two choices of $g_5$, while
model v0 has 1.46 and 1.30.
Without gluon condensate (v0), the results for $F(0,0)$ show rather poor agreement
with experimental results for the $\eta$ meson with deviations of around 30\%, while
those for $F_{\eta'\to\gamma\gamma}(0,0)$ are much better. With gluon condensate (model v1), where the masses of $\eta$ and $\eta'$ agree with experimental data
at the percent level,
both couplings turn out to agree remarkably well with the experimental values.
For isosinglet axial vector mesons (Table \ref{tab:AV}), both model versions predict
$f_1$ and $f_1'$ masses that are generally too high (+8\% to +28\% compared to PDG data \cite{ParticleDataGroup:2022pth}). The $f_1$ and $f_1'$ mesons are obtained as
dominantly flavor octet and flavor singlet, respectively.
In the holographic model, the mixing angle is an energy or $z$ dependent
quantity. In the case of the $f_1$ mesons, it is usually
extracted from equivalent photon decay rates at zero virtuality, where the experimental results from
the L3 experiment read \cite{Achard:2001uu,Achard:2007hm}
\begin{equation}
\tilde\Gamma_{\gamma\gamma}=\left\{
\begin{array}{cc}
3.5(8) \;\text{keV} & \text{for}\;f_1=f_1(1285)\\
3.2(9) \;\text{keV} & \text{for}\;f_1'=f_1(1420)
\end{array}
\right.
.
\end{equation}
With the definition
\begin{equation}
f_1=\cos\theta_A f^0+\sin\theta_A f^8
\end{equation}
and the assumption that
$\tilde\Gamma_{\gamma\gamma}\propto m_A$, one has
\begin{equation}
\tan^2(\theta_A-\arcsin\frac13)=\frac{m_{f_1}\tilde\Gamma_{\gamma\gamma}^{f_1'}}{m_{f_1'}\tilde\Gamma_{\gamma\gamma}^{f_1}},
\end{equation}
leading to \cite{Hoferichter:2020lap}
$\theta_A=62(5)^\circ$, superficially agreeing with model version v0.
However, in the holographic model we have $\tilde\Gamma_{\gamma\gamma}\sim m_A(m_A/\Lambda)^4$, resulting in $\theta_A=56(5)^\circ$
for the experimental value, which
does not fit the results for either v0 or v1, the latter
disagreeing even more than the former.\footnote{It would be interesting to revisit this issue in other holographic
QCD models, in particular ones that are closer to a string-theoretic top-down construction such as the models of Ref.~\cite{Casero:2007ae,Arean:2013tja}.}
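Both angle determinations quoted in this paragraph follow from the same formula once the scaling of $\tilde\Gamma_{\gamma\gamma}$ with the mass is specified (a sketch; the masses are PDG values and the widths the L3 results, all assumed; the exponent 5 implements $\tilde\Gamma_{\gamma\gamma}\sim m_A^5/\Lambda^4$, i.e. the squared photon coupling taken as $\tilde\Gamma_{\gamma\gamma}/m_A^5$ instead of $\tilde\Gamma_{\gamma\gamma}/m_A$):

```python
import math

m_f1, m_f1p = 1.2819, 1.4262   # GeV: f1(1285), f1(1420) masses (PDG, assumed)
G_f1, G_f1p = 3.5, 3.2         # keV: L3 equivalent photon widths

def theta_A(power):
    # tan^2(theta_A - arcsin(1/3)) = (m_f1/m_f1')^power * Gamma_f1'/Gamma_f1
    t2 = (m_f1 / m_f1p)**power * G_f1p / G_f1
    return math.degrees(math.atan(math.sqrt(t2)) + math.asin(1.0 / 3.0))

theta_linear = theta_A(1)      # Gamma ~ m_A            -> ~ 62 deg
theta_holo = theta_A(5)        # Gamma ~ m_A^5/Lambda^4 -> ~ 56 deg
```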
While the mixing angle depends rather strongly on $\Xi_0$,
the combination $\sqrt{[A^8(0,0)]^2+[A^0(0,0)]^2}$ changes only slightly
between models v0 and v1, and it
is also close to the value of $A(0,0)$ in the isotriplet sector,
as well as to the same quantity in the chiral hard-wall model
\cite{Leutgeb:2019gbz}, $(21.04\,\text{GeV})^{-2}$.
Matching $A(0,0)$ with $\tilde\Gamma_{\gamma\gamma}\propto m_A(m_A/\Lambda)^4$
to the L3 results
leads to a value of 15.2(2.0) GeV$^{-2}$ so that the holographic results, which
read 20--21 GeV$^{-2}$ when $g_5=2\pi$ and 19--20 GeV$^{-2}$ for the reduced $g_5$, are somewhat too high for $f_1$ and $f_1'$, but
not excluded for $a_1$, for which Ref.~\cite{Roig:2019reh} has a concordant
estimate of 19.3(5.0) GeV$^{-2}$.
\subsection{Transition form factors}
For the HLBL contribution of single mesons to $a_\upmu$, their singly and doubly virtual TFFs are of
critical importance.
As in the chiral HW model \cite{Leutgeb:2019zpq}, we find excellent agreement
of the singly virtual result for the pion TFF with available experimental data, see Fig.~\ref{fig53comp}. At virtualities relevant for $a_\upmu$, the
results with $g_5$ fitted to $F_\rho$, where the asymptotic limit
is 89.4\% of the Brodsky-Lepage value, seem to give the best match.
\begin{figure}
\bigskip
\centerline{$Q^2 F_{P\gamma^*\gamma}(Q^2,0)$ [GeV]\hfill}
\includegraphics[width=0.38\textwidth]{pi0TFF+.pdf}
\centerline{$Q^2$ [GeV$^2$]}
\caption{Holographic results for the single virtual TFF $Q^2 F(Q^2,0)$ for $\pi^0$, plotted on top of experimental data as compiled in Fig.~53 of Ref.~\cite{Aoyama:2020ynm}
for $g_5=2\pi$ (OPE fit, blue) and the reduced value (red)
corresponding to a fit of $F_\rho$.
(For $\pi^0$ results for the model with and without gluon condensate coincide.)}
\label{fig53comp}
\end{figure}
For the symmetric doubly virtual TFF the comparison is made with
the dispersive result of Ref.~\cite{Hoferichter:2018kwz} and the lattice result of Ref.~\cite{Gerardin:2019vio}
in Fig.~\ref{figpi0TFFd}. Both choices of $g_5$ are within the error band
of the dispersive result, while the result for the reduced $g_5$ is
also within the error band of the lattice result and moreover
happens to coincide with
the central values of the dispersive approach within line thickness of the plot
throughout the entire range of $Q^2$.
\begin{figure}
\bigskip
\includegraphics[width=0.45\textwidth]{pi0TFFd+l.pdf}
\caption{Holographic results for the doubly virtual $F_{\pi^0\gamma^*\gamma^*}$ compared to the dispersive result of Ref.~\cite{Hoferichter:2018kwz} (green band) and the lattice result of Ref.~\cite{Gerardin:2019vio} (yellow band); the OPE limit is given by the dashed horizontal line. Full lines are with gluon condensate (version v1), dashed lines without (v0); blue color corresponds to $g_5=2\pi$ (OPE fit) and red to the reduced value $g_5$ ($F_\rho$-fit).
}
\label{figpi0TFFd}
\end{figure}
For the $\eta$ and $\eta'$ mesons, there is a rather strong
dependence on the parameter $\Xi_0$ representing the gluon condensate.
With this parameter turned on, the masses of $\eta$ and $\eta'$ can be
matched to percent level accuracy, and the resulting prediction for
$F_{P\gamma\gamma}(0,0)$ is then in complete agreement with experiment for $g_5=2\pi$ (see Table \ref{tab:etas}), while with reduced $g_5$ this value is
slightly underestimated in the case of $\eta'$.
For the singly virtual TFF of $\eta$, only the results with nonzero $\Xi_0$ are
close to the experimental data, see Fig.~\ref{fig54comp}. They match those at low $Q^2$ quite well,
but are generally larger at higher virtualities.
In the case of $\eta'$, all model versions agree with the low-$Q^2$ data
due to L3, while at higher $Q^2$ the results without gluon condensate
agree with more of the data points, but only with unreduced $g_5=2\pi$.
\begin{figure}
\bigskip
\centerline{$Q^2 F_{P\gamma^*\gamma}(Q^2,0)$ [GeV]\hfill}
\includegraphics[width=0.38\textwidth]{etaTFF+.pdf}
\includegraphics[width=0.38\textwidth]{etaprimeTFF+.pdf}
\centerline{$Q^2$ [GeV$^2$]}
\caption{Holographic results for the single virtual TFF $Q^2 F(Q^2,0)$ for $\eta$ and $\eta'$ plotted on top of experimental data as compiled in Fig.~54 of Ref.~\cite{Aoyama:2020ynm}
for $g_5=2\pi$ (OPE fit, blue) and the reduced value (red)
corresponding to a fit of $F_\rho$. Full lines are with gluon condensate (version v1), dashed lines without (v0).
}
\label{fig54comp}
\end{figure}
\begin{figure}
\bigskip
\includegraphics[width=0.45\textwidth]{BBetapggV.pdf}
\caption{Holographic results for the doubly virtual $F_{\eta'\gamma^*\gamma^*}$ compared to BABAR data points (black) and a simple VMD model fitted with singly virtual data (cyan circles) \cite{BaBar:2018zpn}. Full lines are with gluon condensate (version v1), dashed lines without (v0); blue color corresponds to $g_5=2\pi$ (OPE fit) and red to the reduced value $g_5$ ($F_\rho$-fit).
}
\label{figBBetacomp}
\end{figure}
In contrast to the case of $\pi^0$, there are also several experimental data points for the doubly virtual TFF of $\eta'$. Unlike the simple VMD model
considered in \cite{BaBar:2018zpn} and represented by the cyan circles in Fig.~\ref{figBBetacomp}, the holographic results agree with these data within 1--2 standard deviations.
For the lowest virtualities $Q_1^2=Q_2^2=6.48\,\text{GeV}^2$, which are the most significant for $a_\upmu$, all versions of the model come close to the experimental result. With gluon condensate, the agreement is better with the reduced $g_5$, whereas without gluon condensate, a reduction of $g_5$ to fit $F_\rho$ moves the prediction slightly outside the error bar.
All in all, the model with gluon condensate and reduced $g_5$ seems to
be the optimal choice regarding pseudoscalar TFFs.
\begin{table*}[]
\centering
\begin{tabular}{lccccccp{12pt}cccccc}
\toprule
& \multicolumn{6}{c}{$g_5^2=(2\pi)^2$} & & \multicolumn{6}{c}{$g_5^2=0.894(2\pi)^2$} \\
& $\pi^0$ & $\pi^*$ & $a_1$ & $a_1^*$ & {$a_1^{**}$} & $a_1^{***}$ & & $\pi^0$ & $\pi^*$ & $a_1$ & $a_1^*$ & {$a_1^{**}$} & $a_1^{***}$ \\
\hline
$m$ & 0.135$^*$ & 1.891 & 1.363 & 2.137 & {2.987} & 3.935 && 0.135$^*$ & 1.841 & 1.278 & 2.047 & {2.936} & 3.902 \\
$f\;\vee\;F_A/m_A$ & 0.09221$^*$ & 0.00157 & 0.175 & 0.204 & {0.263} &0.311 && 0.09221$^*$ &0.00173&0.173 &0.217&0.280 & 0.329\\
$F(0,0)\vee A(0,0)$ &0.277 &-0.203 & 20.96& 3.31& {-0.336}& 2.16&& 0.276&-0.199&19.46&4.87&-0.413&2.05 \\ \hline
$a_\upmu\times 10^{11}$ & 66.1 & 0.7 & 7.8 & 1.2 & {0.4} & 0.3 &&
63.4 & 0.7 & 7.1 & 1.5 & 0.4 & 0.3 \\
\botrule
\end{tabular}
\caption{Results for pseudoscalar and axial vector mesons in the isotriplet sector (the gluon condensate parameter $\Xi_0$ does not play a role here). All quantities in units of (powers of) GeV.}\label{tab:pia1}
\end{table*}
\begin{table*}
\centering\bigskip
\begin{tabular}{lccccccp{12pt}cccccc}
\toprule
\textbf{(v0)} & \multicolumn{6}{c}{$\Xi_0=0$} && \multicolumn{6}{c}{$\Xi_0=0$} \\
& \multicolumn{6}{c}{$g_5^2=(2\pi)^2$} & & \multicolumn{6}{c}{$g_5^2=0.894(2\pi)^2$} \\
& $\eta$ & $\eta'$ & $G/\eta''$ & $\eta^{(3)}$ & {$\eta^{(4)}$} & $\eta^{(5)}$ && $\eta$ & $\eta'$ & $G/\eta''$ & $\eta^{(3)}$ & $\eta^{(4)}$ & $\eta^{(5)}$ \\ \colrule
$m$ & 0.513 & 0.840 & 1.862 & 1.999 & {2.257} & 2.705 && 0.503 & 0.819 & 1.764 & 1.948 & {2.207} & 2.638 \\
${m}-{m^\mathrm{exp}}$ & -6.4\% & -12.3\% & & & & && -8.2\% & -14.5\% & & \\
$f^8$ & 0.0917 & -0.0565 & 0.00197 & 0.0266 & {0.0121} &0.0080 && 0.0902 & -0.0624 & 0.00405 & 0.0293 & {0.0132}&0.00837 \\
$f^0$ & 0.0394 & 0.0945 & -0.0212 & -0.00823 & {-0.0390} &0.0362 && 0.0446 & 0.0952 & -0.0224 & -0.00802 & {-0.0416} &0.0337 \\
$f_G$ & -0.0264 & -0.0385 & 0.0674 & -0.0400 & {0.154} &-0.310 && -0.0265 & -0.0344 & 0.0600 & -0.0454 & {0.156} &-0.280 \\
$F^8(0,0)$ & 1.46 & -0.674 & 0.177 & -1.18 & {0.00233} &0.236 && 1.41 & -0.737 & 0.0640 & -1.16 & {0.0239}&0.241 \\
$F^0(0,0)$ & 0.776 & 1.42 & 0.169 & 0.0383 & {1.08} &0.229 && 0.828 & 1.34 & 0.00310 & 0.00492 & {1.10}&0.253 \\
$F(0,0)$ & 0.351 & 0.322 & 0.0629 & -0.103 & {0.293} &0.0851 && 0.361 & 0.295 & 0.00700 & -0.110 & {0.302}&0.0922 \\
${F}-{F^\mathrm{exp}}$ & +28(2)\% & -6(2)\% & & & & && +32(2)\% & -14(2)\% & & \\
\hline
$a_\upmu\times 10^{11}$ & 32.8 & 15.7 & 0.055 & 0.14 & {0.79} & 0.16 && 34.0 & 13.3 & 0.003 & 0.16 & 0.85 & 0.16 \\
\botrule
\end{tabular}
\medskip
\begin{tabular}{lccccccp{12pt}cccccc}
\toprule
\textbf{(v1)} & \multicolumn{6}{c}{$\Xi_0=0.01051$} && \multicolumn{6}{c}{$\Xi_0=0.01416$} \\
& \multicolumn{6}{c}{$g_5^2=(2\pi)^2$} & & \multicolumn{6}{c}{$g_5^2=0.894(2\pi)^2$} \\
& $\eta$ & $\eta'$ & $G/\eta''$ & $\eta^{(3)}$ & {$\eta^{(4)}$} & $\eta^{(5)}$ && $\eta$ & $\eta'$ & $G/\eta''$ & $\eta^{(3)}$ & $\eta^{(4)}$ & $\eta^{(5)}$ \\ \colrule
$m$ & 0.557 & 0.950 & 1.992 & {2.390} & 2.954 & 3.214 && 0.561 & 0.947 & 1.943 & 2.428 & 2.914 & 3.317 \\
${m}-{m^\mathrm{exp}}$ & +1.7\% & -0.8\% & & {} & & && +2.4\% & -1.1\% & & \\
$f^8$ & 0.101 & -0.0385 & -0.0267 & {0.0116} & -0.0228 &-0.0049 && 0.103 & -0.0393 & -0.0299 & 0.0112 & -0.0253&-0.00767 \\
$f^0$ & 0.0272 & 0.113 & 0.0049 & {-0.0492} &-0.00115 &-0.0214 && 0.0298 & 0.121 & 0.00761 & -0.0522 & 0.00320&-0.0128 \\
$f_G$ & -0.0298 & -0.0774 & 0.053 & {0.233} & 0.1483 &0.269 && -0.0313 & -0.0821 & 0.048 & 0.260 & 0.1236& 0.214 \\
$F^8(0,0)$ & 1.55 & -0.431 & 1.19 & {-0.0478} &-0.887 & 0.167&& 1.53 & -0.442 & 1.149 & -0.0312 & -0.877&0.129 \\
$F^0(0,0)$ & 0.468 & 1.40 & 0.0051 & {0.904} & 0.0300 &0.0867 && 0.444 & 1.31 & -0.000026 & 0.837 & 0.0307&0.130 \\
$F(0,0)$ & 0.276 & 0.340 & 0.116 & {0.241} & -0.0772 &0.0397 && 0.268 & 0.313 & 0.111 & 0.225 & -0.0760& 0.0477 \\
${F}-{F^\mathrm{exp}}$ & +1(2)\% & -0(2)\% & & {} & & && +2(2)\% & -8(2)\% & & \\
\hline
$a_\upmu\times 10^{11}$ & 19.3 & 16.9 & 0.19 & {0.53} & 0.043 & 0.008 && 17.6 & 14.9 & 0.18 & 0.45 & 0.039 & 0.007 \\
\botrule
\end{tabular}
\caption{Results for the isoscalar pseudoscalar sector, for the model with (v1) and without (v0) gluon condensate, and for two choices of $g_5$: $g_5=2\pi$ corresponding to matching the vector correlator to the LO UV-behavior in QCD, and the reduced value corresponding to a fit of $F_\rho$. All dimensionful quantities in units of (powers of) GeV.}
\label{tab:etas}
\end{table*}
\begin{table}[]
\centering
\begin{tabular}{lccp{12pt}cc}
\toprule
\textbf{(v0)} & \multicolumn{2}{c}{$\Xi_0=0$} && \multicolumn{2}{c}{$\Xi_0=0$} \\
& \multicolumn{2}{c}{$g_5^2=(2\pi)^2$} && \multicolumn{2}{c}{$g_5^2=0.894(2\pi)^2$} \\
& $f_1$ & {$f_1'$} && $f_1$ & {$f_1'$} \\
\colrule
$m$ & 1.460 & {1.651} && 1.388 & {1.598} \\
${m}-{m^\mathrm{exp}}$ & +14\% & +16\% && +8\% & +12\% \\
$F_A^8/m$ & 0.163 & -0.0732 && 0.165 & -0.0627 \\
$F_A^0/m$ & 0.0743 & 0.169 && 0.0690 & 0.180 \\
$A^8(0,0)$ & 19.27 & -8.649 && 18.38 & -7.194 \\
$A^0(0,0)$ & 8.676 & 19.21 && 7.310 & 18.62 \\
$\theta_A$ & $65.8^\circ$ & -24.2$^\circ$ && 68.3$^\circ$ & -21.1$^\circ$ \\
$A(0,0)$ & 4.22 & 4.40 && 3.76 & 4.37 \\
$m^*$ & 2.241 & 2.614 && 2.147 & 2.561 \\
$m^{**}$ & 3.056 & 3.580 && 2.999 & 3.535 \\ \hline
$a_\upmu\times 10^{11}$ & 11.0 & {10.8} && 9.1 & 11.0 \\
$a^*_\upmu\times 10^{11}$ & 0.6 & 1.5 && 0.6 & 1.5 \\
$a^{**}_\upmu\times 10^{11}$ & 0.2 & 1.1 && 0.2 & 1.0 \\
\botrule
\end{tabular}
\medskip
\begin{tabular}{lccp{12pt}cc}
\toprule
\textbf{(v1)} & \multicolumn{2}{c}{$\Xi_0=0.01051$} && \multicolumn{2}{c}{$\Xi_0=0.01416$} \\
& \multicolumn{2}{c}{$g_5^2=(2\pi)^2$} && \multicolumn{2}{c}{$g_5^2=0.894(2\pi)^2$} \\
& $f_1$ & {$f_1'$} && $f_1$ & {$f_1'$} \\
\colrule
$m$ & 1.481 & 1.810 && 1.410 & 1.820 \\
$m-m^\mathrm{exp}$ & +15\% & +27\% && +10\% & +28\% \\
$F_A^8/m_A$ &0.176 &-0.0299 &&0.176 & -0.0167\\
$F_A^0/m_A$ & 0.0365&0.201 && 0.0292&0.219 \\
$A^8(0,0)$ &20.77 &-3.842 && 19.58&-2.556 \\
$A^0(0,0)$ &3.857 &20.07 && 2.690& 19.00\\
$\theta_A$ &79.5$^\circ$ &-10.8$^\circ$ &&82.2$^\circ$ &-7.7$^\circ$ \\
$A(0,0)$ &3.05 & 5.09&& 2.62&4.93 \\
$m^*$ & 2.246 & 2.862 && 2.153 & 2.891 \\
$m^{**}$ & 3.058 & 3.869 && 3.004 & 3.907 \\ \hline
$a_\upmu\times 10^{11}$ & 5.7 & 14.3 && 4.3 & 13.6 \\
$a^*_\upmu\times 10^{11}$ & 0.3 & 0.9 && 0.3 & 0.9 \\
$a^{**}_\upmu\times 10^{11}$ & 0.1 & 1.1 && 0.05 & 1.0 \\
\botrule
\end{tabular}
\caption{Results for the isoscalar axial vector sector, for the model with (v1) and without (v0) gluon condensate, and the two choices $g_5$(OPE fit) and
$g_5$($F_\rho$-fit). Here $\theta_A\equiv\arctan(A^8(0,0)/A^0(0,0))$
for both $f_1$ and $f_1'$. All dimensionful quantities in units of (powers of) GeV. In the $a_\upmu$ contributions, about 58\% are due to the longitudinal part of the axial vector meson propagator which contributes to the MV constraint.}
\label{tab:AV}
\end{table}
\subsection{HLBL contribution to $a_\upmu$}
Tables \ref{tab:pia1} and \ref{tab:etas} also include the individual contributions
of the listed pseudoscalar and axial vector meson modes to $a_\upmu$, which are
collected in Table \ref{tab:total} for the model with nonzero gluon condensate (v1) with $g_5=2\pi$ (OPE-fit)
and the reduced value \eqref{g5Frho} obtained from fitting the $\rho$ meson decay constant. Only with
the extra parameter $\Xi_0$ for the gluon condensate do the predictions for $F_{P\gamma\gamma}(0,0)$ and the masses of $\eta$ and $\eta'$ match experimental data with good accuracy. With reduced $g_5$ ($F_\rho$-fit), the predictions for
$a_\upmu^{\pi^0}$ and $a_\upmu^{\eta'}$ are
extremely close to the central values adopted by the White Paper
\cite{Aoyama:2020ynm}, and those for $\eta$ agree within $1\sigma$.
The holographic model also includes a third ground-state $\eta$ meson, which we called $\eta''$, the result of mixing with the pseudoscalar glueball $G$.
It contributes only $0.2\times 10^{-11}$, but
there is also a whole tower of excited $\eta$ modes, which together with excited pion modes contribute around $1.5\times 10^{-11}$, so that the total pseudoscalar-pole prediction
for model v1($F_\rho$-fit) is close to the upper end of the WP prediction, whereas
the result for model v1(OPE fit) is 2.5$\sigma$ higher.
The main aim of this study is of course the experimentally less well constrained axial vector meson contribution, which in holographic QCD has been shown to
take into account the Melnikov-Vainshtein short-distance constraint \cite{Leutgeb:2019gbz,Cappiello:2019hwh}, also away from the chiral limit \cite{Leutgeb:2021mpu}. The holographic result thus presents an alternative
estimate of the combined contribution of axial vector mesons, for which
the WP estimate is $6(6)\times 10^{-11}$, and of short-distance contributions\footnote{In the symmetric high-energy limit, the holographic results
for the HLBL scattering amplitude have the correct dependence on $Q^2$, but reproduce the OPE value only at the level of 81\% when $g_5=2\pi$, where the asymmetric
MV limit is saturated fully \cite{Cappiello:2019hwh,Leutgeb:2021mpu}.},
estimated in the WP as $15(10)\times 10^{-11}$. With errors added linearly,
the WP value is at $21(16)\times 10^{-11}$.
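Spelled out, the linear addition of central values and uncertainties is
$$\big[(6+15)\pm(6+10)\big]\times 10^{-11}=(21\pm 16)\times 10^{-11}.$$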
It is difficult to estimate errors for any holographic result, but we expect
our results for $a_\upmu$ to be robust despite some deviations in their
ingredients.
The holographic results for axial vector mesons have turned out to overestimate the masses of $f_1$ and $f_1'$ by 8--28\%, with the models including the gluon condensate showing the larger deviations. Moreover, all our models have an equivalent real photon coupling $A(0,0)$
that is 20--28\% too large compared to the value derived from L3 data for $f_1$ and $f_1'$, albeit
in good agreement with the estimate of Ref.~\cite{Roig:2019reh} for $a_1(1260)$.
The mixing angles for $f_1$ and $f_1'$ are poorly predicted, and even worse
when the gluon condensate is turned on. However, the prediction for the amplitude $\sqrt{(A^8)^2+(A^0)^2}$
appears to be fairly robust and only weakly dependent on $\Xi_0$.
A different modeling of the gluon condensate could perhaps lead to better
predictions for the mixing with similar overall amplitude.
Our summary in Table \ref{tab:total} therefore lists the more reliable
combined contribution of $f_1$ and $f_1'$.
Since the contribution to $a_\upmu$ decreases with approximately two inverse powers of the axial vector meson mass while the amplitude $A$ enters quadratically, we expect that the errors in the predictions of both will largely cancel, so that the holographic results can still provide a reasonably good
prediction for the axial vector meson contributions to $a_\upmu$.
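As a purely illustrative estimate (the numbers below are merely representative of the deviations quoted above, not a quantitative error analysis): with a pole contribution scaling as $a_\upmu\sim A(0,0)^2/m_A^2$, a simultaneous overestimate of $A(0,0)$ by $\sim 25\%$ and of $m_A$ by $\sim 20\%$ changes $a_\upmu$ by only a factor
$$\frac{(1.25)^2}{(1.20)^2}\approx 1.09,$$
i.e.\ by less than $10\%$.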
For our favored model v1($F_\rho$-fit), the contribution from the
ground-state axial vector mesons is $a_\upmu^{a_1+f_1+f_1'}=25.0\times 10^{-11}$,
about 4 times the WP estimate. The contribution from $f_1+f_1'$ is 2.5 times that
of $a_1$, somewhat reduced from the flavor-U(3)-symmetric value of 3 that was
assumed in our previous estimates in Ref.~\cite{Leutgeb:2021mpu}. For this contribution, Pauk and Vanderhaeghen \cite{Pauk:2014rta}
have estimated a value of only $6.4(2.0)\times 10^{-11}$, much smaller than
our holographic prediction of $17.9\times 10^{-11}$. A crucial difference is that the TFF assumed in \cite{Pauk:2014rta} is obtained from a factorized ansatz
which, unlike the holographic result, does not have the correct asymptotic
behavior \cite{Hoferichter:2020lap} in the doubly virtual case: it
falls off as $1/Q^{4}$ instead of $1/Q^2$.
In the holographic models, the excited axial vector mesons ensure agreement
with the longitudinal (Melnikov-Vainshtein) short-distance constraint. This
constraint derived from the axial anomaly is satisfied to 100\% in the model v1(OPE fit), and to 89.4\% in the case of v1($F_\rho$-fit). The latter should
provide a better approximation at large but still physically relevant energy
scales, where next-to-leading-order pQCD corrections of typically $\sim 10\%$ apply \cite{Melic:2002ij,Bijnens:2021jqo}.
In the chiral HW1 model and in the U(3)-symmetric massive HW1m model that we have investigated in Refs.~\cite{Leutgeb:2019gbz,Leutgeb:2021mpu}, we have obtained 9.2 and $9.4\times10^{-11}$
from excited axial vectors, where 25\% are due to $a_1$ by U(3) symmetry.
The contribution of excited $a_1$'s in our present models is essentially
the same as in the HW1m model (up to a slightly different fit value of $f_\pi$),
but the excited isoscalars remain below the extra factor of 3 expected from U(3) symmetry.\footnote{In order to approximate the sum of contributions from the infinite tower of axial vector mesons we have used the observation that in the chiral HW models as well as in the HW1m model the infinite series of contributions can be roughly approximated by a geometric one with $a_{n+1}/a_n\approx 0.6$ for $n>2$. The full sum can thus be approximated by multiplying the last contribution of a truncated sum by a factor of 1/(1-0.6)=2.5. In the case of excited pseudoscalars, which do not contribute to the longitudinal short-distance constraint \cite{Leutgeb:2021mpu}, the contributions drop much more quickly. Our results for those are obtained simply from the sum of the first few modes.} Instead, the latter provide only 1.4 and 1.2 times the contributions from excited $a_1$'s in the case of v1(OPE fit) and v1($F_\rho$-fit), respectively.
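The tail estimate described in the footnote amounts to replacing the sum from the last included mode $N$ onward by a geometric series with ratio $0.6$,
$$\sum_{n\geq N}a_n\;\approx\;a_N\sum_{k=0}^{\infty}(0.6)^k\;=\;\frac{a_N}{1-0.6}\;=\;2.5\,a_N.$$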
The total contribution from axial vector
mesons is thus significantly smaller than the estimates we have come up with
in the flavor-symmetric case of Ref.~\cite{Leutgeb:2021mpu}: 34 and 31$\times 10^{-11}$ for the two choices of $g_5$ (instead of 40.8 and 38.8$\times10^{-11}$ for HW1m and HW1m with reduced $g_5$, respectively). Comparing this to the combined estimate of axial vector mesons and short-distance contributions in the WP, $21(16)\times10^{-11}$, we find values that are about 50\% higher, but well within the estimated error.
\begin{table}[t]
\bigskip
\centering
\begin{tabular}{lccc}
\toprule
$a_\upmu^{...}\times 10^{11}$ & v1(OPE fit) & v1($F_\rho$-fit) & WP \\
\hline
$\pi^0$ & 66.1 & 63.4 & 62.6$^{+3.0}_{-2.5}$ \\
$\eta$ & 19.3 & 17.6 & 16.3(1.4) \\
$\eta'$ & 16.9 & 14.9 & 14.5(1.9) \\
$G/\eta''$ & 0.2 & 0.2 \\
$\sum_{PS^*}$ & 1.6 & 1.4 \\[4pt]
\hline
PS poles total & 104 & 97.5 & 93.8(4.0) \\
\hline
$a_1$ & 7.8 & 7.1 \\
$f_1+f_1'$ & 20.0 & 17.9 \\
$\sum_{a_1^*}$ & 2.4 & 2.5 \\
$\sum_{f_1^{(')*}} $ & 3.4 & 2.9 \\[4pt]
\hline
AV+LSDC total & 33.6 & 30.5 & 21(16) \\
\hline
total & 138 & 128 & 115(16.5) \\
\botrule
\end{tabular}
\caption{Summary of the results for the different contributions to $a_\upmu$ in comparison with the White Paper \cite{Aoyama:2020ynm} values.}
\label{tab:total}
\end{table}
\section{Conclusion}
In this paper, we have upgraded our previous studies of the HLBL contribution
in HW AdS/QCD models to 2+1 flavors with strange quark mass $m_s>m_u=m_d$,
plus a Witten-Veneziano mass for the flavor singlet degree of freedom
generated by interaction terms of a pseudoscalar glueball with
the latter, which implement the anomalous Ward identities of the $U(1)_A$ symmetry
along the lines of Refs.~\cite{Katz:2007tf,Schafer:2007qy}.
In holographic QCD, the Melnikov-Vainshtein constraint on the HLBL scattering
amplitude is naturally satisfied, to the same degree that TFFs satisfy the
Brodsky-Lepage and OPE limits. All these are saturated at the level of 100\%
for the standard value of $g_5=2\pi$ in HW1 models.\footnote{The simpler Hirn-Sanz (HW2) model, which omits the bifundamental scalar $X$, reaches 62\% when $f_\pi$ and $m_\rho$ are fitted.} However, because these
models do not involve a running coupling in the UV, the UV-limits of TFFs
are approached too quickly, likely
leading to overestimated HLBL contributions to $a_\upmu$.
Next-to-leading-order gluonic corrections in pQCD suggest a reduction
by about 10\% at large but still experimentally relevant virtualities.
Precisely such a correction is obtained by fitting $g_5$ such that the
decay constant of the $\rho$ meson is matched instead of the OPE result
for the vector correlator. In Ref.~\cite{Leutgeb:2022cvg}, we have found
that this also brings the comparatively crude result of HW AdS/QCD models for the HVP contribution
better in line with dispersive results.
In Refs.~\cite{Leutgeb:2019gbz,Leutgeb:2021mpu} we have shown that the MV short-distance constraint
is realized by the infinite tower of axial vector mesons, with the excited
axial vector mesons adding about a third of the contribution from the ground-state
axial vectors in the flavor-symmetric case. A much smaller contribution
comes from excited pseudoscalars, which do not contribute to the
longitudinal short-distance behavior at leading order.
In our present study with $U(1)_A$ anomaly included, where we have
obtained a remarkably accurate fit of the masses of $\eta$ and $\eta'$ mesons
as well as of their $F_{P\gamma\gamma}(0,0)$ values when including a
nonzero gluon condensate that was omitted in \cite{Katz:2007tf},
we have found a reduction of the ratio 3:1 for the isoscalar:isotriplet
contributions of axial vector mesons to about 2.5:1. For excited
mesons (axial vector as well as pseudoscalar), we have obtained an even
more pronounced reduction, which reduces our prediction for the $a_\upmu$
contribution of axial vector mesons in the U(3)-symmetric case
from around 41 and 39$\times 10^{-11}$ to 34 and 31$\times 10^{-11}$
for $g_5$(OPE) and $g_5$($F_\rho$-fit), respectively.
These values are above the estimate of the White Paper \cite{Aoyama:2020ynm}
for the contribution of (ground-state) axial vector mesons plus
short-distance constraints, but well within the error given there.
The pseudoscalar contributions obtained in our model v1($F_\rho$-fit)
agree completely with the WP results for $\pi^0$, $\eta$, and $\eta'$;
however, this model also has a contribution of $1.6\times 10^{-11}$ from excited pseudoscalars, where the tower of $\eta$'s mixes with a pseudoscalar glueball.
The total obtained by summing pseudoscalar and axial vector contributions
thus turns out to be close to, but below, the upper end of the WP estimate for this model, which we consider our currently best estimate obtained from AdS/QCD.
\begin{acknowledgments}
We would like to thank
Gilberto Colangelo, Martin Hoferichter, and Elias Kiritsis for helpful discussions.
J.~L.\ and J.~M.\ have been supported by the Austrian Science Fund FWF, project no. P 33655,
and by the FWF doctoral program
Particles \& Interactions, project no. W1252-N27.
\end{acknowledgments}
\raggedright
\bibliographystyle{JHEP}
\section{Introduction.}
D. Coronel, A. Navas and M. Ponce have recently given in \cite{cnp:bdd orbits} a generalization of the classical Gottschalk-Hedlund theorem on bounded cocycles to affine isometric actions on a Hilbert space. The present work gives a groupoid version of their result. First, instead of a semigroup $\Gamma$ acting by continuous maps on a space $X$, our dynamical system is given by a locally compact groupoid $G$. Second, instead of a single Hilbert space, we consider a $G$-Hilbert bundle. Going from a dynamical system $(\Gamma, X)$ to a topological groupoid $G$ is only a minor variation. The main difficulty will be to pass from a constant Hilbert bundle to a continuous field of Hilbert spaces.
Let us first recall what the Gottschalk-Hedlund theorem \cite[Chapter 14]{gh:top dyn} says.
\begin{thm} {(\bf Gottschalk-Hedlund)}\label{Gottschalk-Hedlund} Let $T$ be a minimal continuous map on a compact space $X$ and let $f:X\rightarrow{\bf{C}}$ be a continuous function. Then the following properties are equivalent:
\begin{enumerate}
\item there exists a continuous function $g:X\rightarrow{\bf{C}}$ such that for all $x\in X$, $f(x)=g(x)-g(Tx)$;
\item there exists $x\in X$ and $M\in{\bf{R}}$ such that for all $n\in{\bf{N}}$,
$$|\sum_{i=0}^n f(T^i x)|\le M;$$
\item there exists $M\in{\bf{R}}$ such that for all $x\in X$ and for all $n\in{\bf{N}}$,
$$|\sum_{i=0}^n f(T^i x)|\le M.$$
\end{enumerate}
\end{thm}
Let us fix a point of terminology: compact means here quasi-compact and Hausdorff; locally compact means that each point has a compact neighborhood. A locally compact space is not necessarily Hausdorff.
An easy generalization of this theorem is given in \cite{ren:approach} in the language of groupoids and cocycles. For example, given $(T,X)$ as above, one can consider the topological groupoid
$$G(X,T)=\{(x,m-n,y): x,y\in X,\quad m,n\in{\bf{N}},\quad T^mx=T^ny\}$$
with range and source maps $r,s: G(X,T)\rightarrow X$ given respectively by $r(x,k,y)=x$ and $s(x,k,y)=y$, multiplication $(x,k,y)(y,l,z)=(x,k+l,z)$, inverse map $(x,k,y)^{-1}=(y,-k,x)$ and basic open sets of the form
$${\mathcal U}(U,V,m,n)=\{(x,m-n,y): (x,y)\in U\times V,\quad T^mx=T^ny\}.$$
A function $f:X\rightarrow A$, where $A$ is an abelian group, defines $c_f: G(X,T)\rightarrow A$ according to
$$c_f(x, m-n, y)=f(x)+f(Tx)+\ldots+f(T^{m-1}x)-f(T^{n-1}y)-\ldots -f(Ty)-f(y).$$
This is a cocycle (with respect to the trivial action of $G$ on $A$): $c_f(\gamma\gamma')=c_f(\gamma)+c_f(\gamma')$ for all composable pairs $(\gamma,\gamma')$.
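In particular, if $f=g-g\circ T$, a telescoping sum gives $\sum_{i=0}^{m-1}f(T^ix)=g(x)-g(T^mx)$, hence, using $T^mx=T^ny$,
$$c_f(x,m-n,y)=\big(g(x)-g(T^mx)\big)-\big(g(y)-g(T^ny)\big)=g(x)-g(y),$$
that is, $c_f=g\circ r-g\circ s$.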
This cocycle is a coboundary if and only if $f$ is a coboundary, i.e. of the form $f=g-g\circ T$: then $c_f=g\circ r-g\circ s$. Let us assume that $A$ is a topological group. The cocycle is continuous if and only if $f$ is continuous; it is said to be a continuous coboundary if the function $g$ can be chosen continuous. Let us recall that a topological groupoid $G$ with unit space $X$ is said to be minimal if $\emptyset$ and $X$ are the only open invariant subsets of $X$. Here is a groupoid version of the Gottschalk-Hedlund theorem adapted from Theorem 1.4.10 of \cite{ren:approach}.
\begin{thm} \label{LNM 793} Let $G$ be a minimal topological groupoid with compact unit space $X$ and let $A$ be a topological abelian group without non-trivial compact subgroup. For a continuous cocycle $c:G\rightarrow A$, the following properties are equivalent:
\begin{enumerate}
\item there exists a continuous function $g:X\rightarrow A$ such that $c=g\circ r-g\circ s$,
\item there exists $x\in X$ such that $c(G_x)$ (where $G_x=s^{-1}(x)$) is relatively compact,
\item $c(G)$ is relatively compact.
\end{enumerate}
\end{thm}
\begin{rem}
The extra assumption, namely that $G$ admits a cover of continuous $G$-sets, made in \cite{ren:approach} is in fact not needed. However, our standing assumption for topological groupoids is that the range and source maps are open. The above groupoid $G(X,T)$ satisfies this assumption only if $T$ is an open map.
\end{rem}
The above setting is unsatisfactory: the natural data for continuous groupoid cohomology (see for example \cite{tu:cohomology}) consist of:
\begin{itemize}
\item a topological groupoid $G$ over a topological space $X$,
\item a space of coefficients (or $G$-module) $A$, which is a continuous bundle of topological abelian groups $A_x$ over $X$ endowed with a continuous $G$-action, i.e. $G$ acts by isomorphisms $L(\gamma): A_{s(\gamma)}\rightarrow A_{r(\gamma)}$ and the action map $G*A\rightarrow A$ is continuous.
\end{itemize}
The above theorem only covers the case when $A$ is a constant bundle endowed with a trivial action. We refer the reader to \cite{mrw:Morita} for the definition of a groupoid action.
In this framework, a (one-)cocycle is a section $c: G\rightarrow r^*A$ (i.e. $c(\gamma)\in A_{r(\gamma)}$) such that $c(\gamma\gamma')=c(\gamma)+L(\gamma)c(\gamma')$. It is a coboundary if there exists a section $f: X\rightarrow A$ such that $c(\gamma)=f\circ r(\gamma)-L(\gamma)f\circ s(\gamma)$. A cocycle $c: G\rightarrow r^*A$ defines an affine action of $G$ on $A$, given by
$$\gamma a=L(\gamma)a+c(\gamma).$$
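The cocycle identity is exactly what makes this formula an action: for a composable pair $(\gamma,\gamma')$,
$$\gamma(\gamma' a)=L(\gamma)\big(L(\gamma')a+c(\gamma')\big)+c(\gamma)=L(\gamma\gamma')a+L(\gamma)c(\gamma')+c(\gamma)=(\gamma\gamma')a.$$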
Let us denote by $A(c)$ this $G$-affine space. More generally, a $G$-affine space over $A$ is a space $Z$ endowed with a left action of $G$ and a right principal action of $A$ (written additively) such that
\begin{itemize}
\item $r: Z\rightarrow G^{(0)}$ identifies $Z/A$ with $G^{(0)}$,
\item $\gamma(z+a)=\gamma z+L(\gamma)a$.
\end{itemize}
The choice of a section $f$ for $r: Z\rightarrow G^{(0)}$ gives a cocycle
$$c(\gamma)=\gamma f\circ s(\gamma)-f\circ r(\gamma)$$
which identifies $Z$ with $A(c)$: $f$ becomes the zero section. Up to a coboundary, the cocycle depends only on the isomorphism class of the $G$-affine space $Z$. A $G$-affine space over $A$ is said to be trivial if it is isomorphic to $A$; this is equivalent to the existence of a $G$-equivariant section $f$.
\vskip 5mm
We are interested in the case when the space of coefficients is a $G$-Hilbert bundle. We shall assume here that our Hilbert spaces are complex Hilbert spaces but our proofs and results hold for real Hilbert spaces as well. The scalar product, which is chosen to be linear in the second variable, is denoted by $(u|v)$ and the associated norm is denoted by $\|u\|$. We shall deal with a family $(E_x)_{x\in X}$ of Hilbert spaces; when there is no ambiguity, we shall omit the index $x$ in $(u|v)_x$ and in $\|u\|_x$. We shall use \cite{dd:dixmier-douady} and \cite{fd:representations} as references to Hilbert bundles. Let us recall the definition of a Hilbert bundle given in \cite[II.13.5]{fd:representations} as a particular case of a Banach bundle:
\begin{defn}\label{Banach bundle} Let $X$ be a topological space. A Banach [resp. Hilbert] bundle over $X$ is a pair $<E,\pi>$ where $E$ is a topological space called the bundle space, and $\pi:E\rightarrow X$ is a continuous open surjection called the bundle projection, together with operations and norms making each fiber $E_x=\pi^{-1}(x)$ into a Banach [resp. Hilbert] space, and satisfying the following conditions:
\begin{enumerate}
\item $u\mapsto \|u\|$ is continuous on $E$ to ${\bf{R}}$.
\item The operation $+$ is continuous as a map on $E*E\stackrel{\mathrm{def}}{=} \{(u,v)\in E\times E: \pi(u)=\pi(v)\}$ to $E$.
\item For each scalar $\lambda$, the map $u\mapsto \lambda u$ is continuous on $E$ to $E$.
\item If $x\in X$ and $(u_i)$ is a net of elements of $E$ such that $\|u_i\|\to 0$ and $\pi(u_i)\to x$ in $X$, then $u_i\to 0_x$ in $E$, where $0_x$ is the zero element of $E_x$.
\end{enumerate}
\end{defn}
It is assumed in \cite{fd:representations} that $X$ is Hausdorff. We shall make the weaker assumption that $X$ is locally Hausdorff (every point has a Hausdorff neighborhood) and shall apply results of \cite{fd:representations} to the reduction of $E$ to Hausdorff subspaces of $X$. Elements of $E$ will be denoted by $u$ or by $(x,u)$ where $x=\pi(u)$. An important result (see Appendix C of \cite{fd:representations}) says that when $X$ is paracompact, there are sufficiently many continuous sections, i.e. for all $(x,u)\in E$, there is a continuous section $f:X\rightarrow E$ such that $f(x)=u$. The related notion of continuous field of Banach spaces developed in \cite{dd:dixmier-douady} (see also \cite[10.1]{dix:C*}) privileges continuous sections. Both notions -- Banach bundle and continuous field of Banach spaces -- are equivalent when the base space $X$ is paracompact. One can recover the topology of $E$ from the space $C(X,E)$ of continuous sections as follows: $g\in C(X,E)$, $V$ open subset of $X$ and $\epsilon>0$ define a ``tube''
$$T(g,V,\epsilon)=\{(x,u)\in E: x\in V,\quad \|u-g(x)\|<\epsilon\}.$$
The family of these tubes forms a base for the topology of $E$ (\cite[Theorem II.13.18]{fd:representations}). When $X$ is locally paracompact (every point has a paracompact neighborhood), we consider continuous sections $g:V\rightarrow E$, where $V$ is open and paracompact, instead of global continuous sections. For the sake of simplicity, we shall always assume that the base space of the bundle is locally compact.
When $E$ is a Hilbert bundle over a locally compact Hausdorff space $X$, the space ${\mathcal E}=C_0(X,E)$ of continuous sections vanishing at infinity is a Hilbert $C_0(X)$-module, where $C_0(X)$ is the C*-algebra of complex-valued continuous functions vanishing at infinity endowed with the sup-norm: given $h\in C_0(X)$ and $f\in C_0(X,E)$, we define $fh\in C_0(X,E)$ by $(fh)(x)=f(x)h(x)$ and given $f,g\in C_0(X,E)$, we define $<f,g>\in C_0(X)$ by $<f,g>(x)=(f(x)|g(x))$.
Let us turn to the definition of a $G$-Hilbert bundle. From now on $G$ designates a topological groupoid with unit space $X$.
\begin{defn} A $G$-Hilbert bundle is a Hilbert bundle $\pi:E\rightarrow X$ together with a continuous action $G*E\rightarrow E$, where as usual, $G*E=\{(\gamma, u)\in G\times E: s(\gamma)=\pi(u)\}$, sending $(\gamma, u)$ to $L(\gamma)u$ and such that for all $\gamma \in G$, $L(\gamma): E_{s(\gamma)}\rightarrow E_{r(\gamma)}$ is a linear isometry.\end{defn}
The theory of induced representations provides a justification for studying $G$-Hilbert bundles. For example, if $E$ is an $H$-Hilbert space, where $H$ is a closed subgroup of a locally compact group $G$, then the quotient $(G\times E)/H$, where $H$ acts by the diagonal action $h(g,e)=(gh^{-1}, L(h)e)$, is an equivariant $G$-Hilbert bundle over $G/H$; equivalently, it is a $G\,\lsd G/H$-Hilbert bundle, where $G\,\lsd G/H$ is the groupoid of the left action of $G$ on $G/H$. A trivialization of this bundle would often require a continuous section of the quotient map $G\rightarrow G/H$, which may not exist.
We can now state our theorem.
\begin{thm}\label{main} Let $G$ be a minimal locally compact groupoid on a compact metrizable space $X$, let $E$ be a continuous $G$-Hilbert bundle and let $c:G\rightarrow r^*E$ be a continuous cocycle. Assume that $E$ is second countable. Then the following properties are equivalent:
\begin{enumerate}
\item $c$ is a continuous coboundary,
\item there exists $x\in X$ such that $\|c(G_x)\|$ is bounded,
\item $\|c(G)\|$ is bounded.
\end{enumerate}
\end{thm}
These properties have a nice interpretation in terms of the $G$-affine space $E(c)$. As we have seen earlier, condition $(i)$ says that $E(c)$ is trivial or, equivalently, admits a $G$-equivariant continuous section. Condition $(ii)$ says that there exists a bounded $G$-orbit in $E(c)$. Condition $(iii)$ says that all $G$-orbits in $E(c)$ are bounded. The implications $(i)\Rightarrow (iii) \Rightarrow (ii)$ are obvious.
When $G$ is a group, this theorem is a well-known result (see for example \cite[Proposition 2.2.9]{bhv:T}). In fact, it is valid for a much larger class of Banach spaces than Hilbert spaces (we still assume that the action is isometric!). U. Bader, T. Gelander and N. Monod have recently shown in \cite{bgm:fixed} that it is true for Banach spaces which are $L$-embedded.
When $G=G(X,T)$ as above and $E=X\times{\bf{C}}$ with the trivial $G$-action, this is \thmref{Gottschalk-Hedlund}.
The situation studied by Coronel, Navas and Ponce is essentially the case when $E=X\times F$, where $F$ is a fixed Hilbert space, is a constant Hilbert bundle (but on which $G$ acts non-trivially). More precisely, they consider a skew action of a semigroup $\Gamma$ on $X\times F$ where $g\in\Gamma$ acts continuously on $X\times F$ according to
$g(x,v)=(g(x), I(g,x)v)$, where $I(g,x)$ is an isometry of $F$ and satisfies $$I(gh, x)=I(g, h(x))I(h,x).$$
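This identity is precisely the condition for the skew maps to compose as an action:
$$g\big(h(x,v)\big)=g\big(h(x),\,I(h,x)v\big)=\big(gh(x),\,I(g,h(x))I(h,x)v\big)=\big(gh(x),\,I(gh,x)v\big)=(gh)(x,v).$$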
Under additional assumptions on the dynamical system $(\Gamma,X)$, this can be put into the groupoid setting along the lines of \cite{er:semigroups}.
We shall prove that $(ii)\Rightarrow (i)$, namely the existence of a bounded $G$-orbit in $E(c)$ implies the existence of a $G$-equivariant continuous section. Our proof is modelled after \cite[Section 4]{cnp:bdd orbits} and consists of two steps. First, we show that $E(c)$ admits a $G$-equivariant weakly continuous section. Secondly, we show that a $G$-equivariant weakly continuous section is automatically continuous.
\section{Existence of a $G$-equivariant weakly continuous section.}
The main task is to define the weak topology on the bundle space $E$ of a continuous Hilbert bundle over a topological space $X$. It is easy to do when $E=X\times F$ is a constant bundle: then we just consider the product topology $X\times F_\sigma$, where $F_\sigma$ is the Hilbert space $F$ endowed with the weak topology.
\begin{prop} Let $E$ be a Hilbert bundle over a locally compact space $X$ and let $(\underline x,\underline u)\in E$. Then the sets
$$U(V;f_1,\ldots,f_n;\epsilon)=\{(x,u)\in E: x\in V,\, \forall i=1,\ldots, n, |(f_i(x)|u)-(f_i(\underline x)|\underline u)|<\epsilon\},$$
where $V$ is a compact neighborhood of $\underline x$, $f_1,\ldots,f_n:V\rightarrow E$ are continuous sections and $\epsilon>0$,
form a fundamental system of neighborhoods of $(\underline x,\underline u)$ for a topology of $E$.
\end{prop}
This topology is called the weak topology of $E$. When $E$ is endowed with the weak topology, it is denoted by $E_\sigma$. The original Hilbert bundle topology is called the strong topology. We let the reader check that the strong topology is finer than the weak topology.
\begin{proof} One checks that the family ${\mathcal V}(\underline x,\underline u)$ of subsets of $E$ containing some $U(V;f_1,\ldots,f_n;\epsilon)$ satisfies the axioms $(V_I), (V_{II}), (V_{III})$ and $(V_{IV})$ of \cite[Section 1.2]{bbki:topologie}.
\end{proof}
One can also check that $E_\sigma=X\times F_\sigma$ when $E=X\times F$, where $F$ is a fixed Hilbert space.
\begin{prop} Let $E$ be a Hilbert bundle over a compact space $X$. Let ${\mathcal E}=C(X,E)$ be the Banach space of continuous sections of $E$, equipped with the sup-norm. Then, the map $(id_X,j): E_\sigma\rightarrow X\times {\mathcal E}^*_\sigma$, where ${\mathcal E}^*_\sigma$ is the dual Banach space ${\mathcal E}^*$ endowed with the $*$-weak topology and where $j: E\rightarrow {\mathcal E}^*$ is the evaluation map $<j(x,u),f>=(u|f(x))$ for $(x,u)\in E$ and $f\in C(X,E)$, is a homeomorphism onto its image.
\end{prop}
\begin{proof} The map $(id_X,j): E\rightarrow X\times {\mathcal E}^*$ is injective: consider $(x,u)$ and $(x',u')$ in $E$. If $x\not=x'$, they have distinct images. If $x=x'$ and $u\not=u'$, there exists $v\in E_x$ such that $(u|v)\not=(u'|v)$. There exists $f\in C(X,E)$ such that $f(x)=v$. Then $<j(x,u),f>\not=<j(x,u'),f>$ and $j(x,u)\not=j(x,u')$. The map is continuous with respect to the weak topologies: if the net $(x_i,u_i)$ converges to $(x,u)$ in $E_\sigma$, then $x_i$ converges to $x$ in $X$; for $f\in C(X,E)$, $<j(x_i,u_i),f>=(u_i|f(x_i))$ converges to $(u|f(x))=<j(x,u),f>$ by definition of the weak topology of $E$. Conversely, if $x_i$ converges to $x$ in $X$ and $j(x_i,u_i)$ converges to $j(x,u)$ in ${\mathcal E}^*_\sigma$, then by definition, $(x_i,u_i)$ converges to $(x,u)$ in $E_\sigma$.
\end{proof}
\begin{defn} We say that a subset $A$ of the bundle space $E$ of a Hilbert bundle is bounded if the norm function is bounded on $A$.
\end{defn}
\begin{lem}\label{joint continuity} Let $E$ be a Hilbert bundle over a locally compact space $X$. Assume that the net $(x_i,v_i)$ (based on some directed set $J$) is bounded and converges to $(x,v)$ in $E_\sigma$ and that the net $(x_i,e_i)$ (based on the same $J$) converges to $(x,e)$ in $E$. Then, the net $(v_i|e_i)$ converges to $(v|e)$.
\end{lem}
\begin{proof} Choose a compact neighborhood $V$ of $x$. Choose a continuous section $f:V\rightarrow E$ such that $f(x)=e$. Assuming that $\|v_i\|\le a$, we have
$$\begin{array}{cc}
(v_i|e_i)-(v|e)&=(v_i|e_i-f(x_i)) +(v_i|f(x_i))-(v|f(x))\\
|(v_i|e_i)-(v|e)|&\le a\|e_i-f(x_i)\| +|(v_i|f(x_i))-(v|f(x))|.\\
\end{array}$$
By continuity of the addition in $E$, $\|f(x_i)-e_i\|$ tends to 0. By definition of the weak convergence, $|(v_i|f(x_i))-(v|f(x))|$ also tends to 0.
\end{proof}
Note that this lemma gives another definition of the weak convergence of a bounded net $(x_i,v_i)$. The next lemma generalizes a well-known characterization of strongly convergent nets in Hilbert spaces.
\begin{lem}\label{weak/strong} Let $E$ be a Hilbert bundle over a locally compact space $X$. Let $(x_i,u_i)$ be a net in $E$ and let $(x,u)$ be an element of $E$. Then the following conditions are equivalent:
\begin{enumerate}
\item $(x_i,u_i)\to (x,u)$ strongly;
\item $(x_i,u_i)\to (x,u)$ weakly and $\|u_i\|\to \|u\|$;
\end{enumerate}
\end{lem}
\begin{proof} The implication $(i)\Rightarrow (ii)$ is clear. Suppose that $(ii)$ holds. Choose a compact neighborhood $V$ of $x$ and choose a continuous section $f:V\rightarrow E$ such that $f(x)=u$. By definition of the weak convergence, $(f(x_i)|u_i)\to (f(x)|u)=\|u\|^2$. Therefore:
$$\|f(x_i)-u_i\|^2=\|f(x_i)\|^2+\|u_i\|^2-2{\rm Re}(f(x_i)|u_i)$$
tends to 0. According to \cite[Proposition 13.12]{fd:representations}, this implies that $(x_i,u_i)$ tends to $(x,u)$ strongly.
\end{proof}
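A finite-dimensional truncation illustrates why the norm condition in $(ii)$ cannot be dropped. In the sketch below (our own illustration, with ${\bf{R}}^N$ for large $N$ standing in for $\ell^2$), the standard basis behaves like a weakly null sequence of norm one: its inner products with a fixed vector tend to 0, yet it stays at distance 1 from the origin.

```python
import math

# Standard basis vectors e_n in R^N as a stand-in for a weakly null
# sequence in l^2: (e_n | v) -> 0 for fixed v, while ||e_n|| = 1.
N = 200
def e(n): return [1.0 if i == n else 0.0 for i in range(N)]
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))

v = [1.0 / (i + 1) for i in range(N)]     # a fixed test vector
assert abs(dot(e(150), v)) < 0.01         # "weak" convergence to 0
assert norm(e(150)) == 1.0                # but no norm convergence to 0
```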
We are interested in continuity properties of sections of a Hilbert bundle $E$. Given an arbitrary section $f:X\rightarrow E$, we can define a map $<f|$ which sends a section $g$ to the scalar function $<f,g>(x)=(f(x)|g(x))$ and a map $\tilde f: E\rightarrow {\bf{C}}$ which sends $(x,u)\in E$ to $(f(x)|u)$. We have:
\begin{prop}\label{weak section} Let $E$ be a Hilbert bundle over a compact metrizable space $X$. Let $f:X\rightarrow E$ be a section. Then the following conditions are equivalent:
\begin{enumerate}
\item $f$ is weakly continuous;
\item $<f|$ sends $C(X,E)$ into $C(X)$;
\item $\tilde f: E\rightarrow {\bf{C}}$ is continuous with respect to the strong topology.
\end{enumerate}
\end{prop}
\begin{proof} The equivalence of $(i)$ and $(ii)$ results directly from the definition of the weak topology. Let us show that $(i)\Rightarrow (iii)$. Let $f:X\rightarrow E$ be weakly continuous. According to \cite[Proposition 1.1.9]{laf:gpd} (this is the only place where the metrizability of $X$ is used), $\|f(x)\|_x$ is bounded on $X$. Suppose that $(x_i,u_i)$ converges to $(x,u)$ in $E$. According to \lemref{joint continuity}, $(f(x_i)|u_i)$ converges to $(f(x)|u)$. This proves the continuity of $\tilde f$. The implication $(iii)\Rightarrow (ii)$ is clear, because, for $g\in C(X,E)$, $<f,g>=\tilde f\circ g$.
\end{proof}
There is an analogous characterization of (strongly) continuous sections:
\begin{prop}\label{strong section} Let $E$ be a Hilbert bundle over a compact space $X$. Let $f:X\rightarrow E$ be a section. Then the following conditions are equivalent:
\begin{enumerate}
\item $f$ is strongly continuous;
\item $<f|$ is an adjointable $C(X)$-linear map from $C(X,E)$ into $C(X)$;
\item $\tilde f: E\rightarrow {\bf{C}}$ is continuous with respect to the weak topology.
\end{enumerate}
\end{prop}
\begin{proof} The only possible candidate for the adjoint of $<f|: C(X,E)\rightarrow C(X)$ is the map $|f>:C(X)\rightarrow C(X,E)$ sending $h\in C(X)$ to $hf$. This map exists if and only if $f\in C(X,E)$. This proves the equivalence of $(i)$ and $(ii)$. The implication $(i)\Rightarrow (iii)$ results from the definition of the weak topology. Suppose that $(iii)$ holds. Let us show that $f$ is strongly continuous. The continuity of $\tilde f: E_\sigma\rightarrow {\bf{C}}$ implies the continuity of $\tilde f: E\rightarrow {\bf{C}}$; according to \propref{weak section}, $f$ is weakly continuous. Therefore, if $x_i$ tends to $x$, then $f(x_i)$ tends to $f(x)$ weakly. But then, $\|f(x_i)\|^2=\tilde f(f(x_i))$ tends to $\tilde f(f(x))=\|f(x)\|^2$. According to \lemref{weak/strong}, this implies that $f(x_i)$ tends to $f(x)$ strongly.
\end{proof}
Assume that $X$ is compact metrizable. The space $C(X,E_\sigma)$ of weakly continuous sections can be identified with the space of $C(X)$-linear bounded maps from $C(X,E)$ to $C(X)$. It agrees with $C(X,E)$ if and only if $C(X,E)$ is a self-dual Hilbert module. This is the case for example when $E$ is a vector bundle in the usual sense (i.e. locally trivial finite dimensional) but also when $X$ is reduced to a point.
\begin{prop} Let $E$ be a Hilbert bundle over a topological space $X$. For $0<R<\infty$, we define the cylinder $C_R=\{(x,u)\in E: \|u\|_x\le R\}$. Then
\begin{enumerate}
\item if $X$ is compact, $C_R$ is a compact subset of $E_\sigma$;
\item if $X$ is locally compact, $C_R$ is a closed subset of $E_\sigma$.
\end{enumerate}
\end{prop}
\begin{proof} Let us assume that $X$ is compact. Since $(id_X,j)(C_R)$ is contained in $X\times B_R$, where $B_R$ is the closed ball of radius $R$ of ${\mathcal E}^*$ which is $*$-weakly compact, it suffices to show that $(id_X,j)(C_R)$ is closed in $X\times {\mathcal E}^*_\sigma$. Let $(x_i,u_i)$ be a net in $C_R$ such that $(x_i,j(x_i,u_i))$ converges to $(x,\varphi)$, where $\varphi\in {\mathcal E}^*$. Then $x_i$ converges to $x$. Let us show that $\varphi=j(x,u)$ for some $u\in E_x$. For that, it suffices to show that $<\varphi,f>=0$ for all $f\in C(X,E)$ such that $f(x)=0$. Let $f$ and $\epsilon>0$ be given. There exists a neighborhood $V$ of $x$ such that $\|f(y)\|\le \epsilon/R$ for all $y\in V$. For $i$ large enough, $x_i$ belongs to $V$, thus
$$|(u_i | f(x_i))|\le \|u_i\|\|f(x_i)\|\le R(\epsilon/R)=\epsilon$$
which implies $|<\varphi,f>|\le \epsilon$; since $\epsilon$ is arbitrary, $<\varphi, f>=0$.
Let us assume that $X$ is locally compact. Let $(x_i,u_i)$ be a net in $C_R$ which converges weakly to $(x,u)$. The point $x$ has a compact neighborhood $V$. For $i$ large enough, $x_i$ belongs to $V$ and $(x_i,u_i)$ belongs to the cylinder $C_R(E_V)$ of the reduction $E_V$ of $E$ to $V$, which is compact. Since a net which is weakly convergent in $E_V$ is also weakly convergent in $E$, the limit $(x,u)$ belongs to the cylinder $C_R(E_V)$.
\end{proof}
\begin{cor}\label{compact} Let $E$ be a Hilbert bundle over a compact space $X$. Then bounded subsets are relatively compact in $E_\sigma$.
\end{cor}
\begin{defn} We say that a subset $A$ of the bundle space $E$ of a Hilbert bundle over $X$ is fiberwise convex if for every $x\in X$, $A_x\stackrel{\mathrm{def}}{=} A\cap E_x$ is convex.
\end{defn}
\begin{prop} Let $E$ be a Hilbert bundle over a locally compact space $X$. Given a bounded subset $A$ of $E$, there exists a smallest fiberwise convex and weakly closed subset of $E$ containing $A$. It is bounded. This subset will be called the convex hull of $A$ and denoted by ${\rm conv}(A)$.
\end{prop}
\begin{proof} By assumption $A$ is contained in some cylinder $C_R$, which is fiberwise convex and weakly closed. The intersection of all fiberwise convex and weakly closed subsets containing $A$, which is fiberwise convex and weakly closed, is the sought-after set.
\end{proof}
Given a continuous map $p:Y\rightarrow X$ and a Hilbert bundle $\pi:E\rightarrow X$, the pull-back bundle $p^*E$ is the Hilbert bundle over $Y$ defined as
$$p^*E=\{(y, u)\in Y\times E: p(y)=\pi(u)\}.$$
Its bundle projection is the restriction of the first projection. Its topology is the subspace topology. We denote by $P:p^*E\rightarrow E$ the map defined by $P(y,u)=(p(y),u)$. We leave to the reader to check that, when $X$ and $Y$ are locally compact spaces, $(p^*E)_\sigma=p^*E_\sigma$, i.e. the weak topology of $p^*E$ agrees with the subspace topology of $Y\times E_\sigma$.
\begin{prop}\label{conv} Let $X,Y$ be locally compact spaces and let $p:Y\rightarrow X$ be continuous and open. Let $E$ be a Hilbert bundle over $X$ and let $F=p^*E$ be its pull-back over $Y$. If $A$ is a bounded subset of $E$, then $B=P^{-1}(A)$ is a bounded subset of $F$ and ${\rm conv}(B)=P^{-1}({\rm conv}(A))$.
\end{prop}
\begin{proof} It is clear that the range of the norm function is the same on $A$ and on $B$. Suppose that $C$ is a fiberwise convex and weakly closed set containing $B$. Consider its contraction
$$C'=\{(y,u)\in F: \quad \forall y'\in Y,\ p(y')=p(y) \Rightarrow (y',u)\in C\}.$$
It is fiberwise convex, because an intersection of convex sets is convex. I claim that it is weakly closed. Suppose that the net $(y_i, u_i)$ in $C'$ converges to $(y,u)$. Let $y'\in Y$ such that $p(y')=p(y)$. Since $p$ is open, there is a net $(y'_i)$ converging to $y'$ such that $p(y'_i)=p(y_i)$ for all $i$. Then $(y'_i,u_i)$ belongs to $C$. Let us show that $(y'_i,u_i)$ converges to $(y',u)$. Let $V$ be a compact neighborhood of $y'$. According to \cite[Proposition II.14.1]{fd:representations}, the sums of continuous sections of the form $f(y)=h(y)g\circ p(y)$ where $h\in C(V)$ and $g\in C(p(V),E)$ are dense in $C(V,p^*E)$ in the sup-norm topology; therefore, it suffices to check the convergence on such a section $f=h g\circ p$. Then
$(u_i|f(y'_i))=h(y'_i)(u_i|g\circ p(y_i))$ converges to $h(y')(u| g\circ p(y))=(u|f(y'))$. Since $C$ is weakly closed, $(y',u)$ belongs to $C$. Therefore $(y,u)$ belongs to $C'$ as claimed. Note that $P(C')$ is fiberwise convex and bounded. It is also weakly closed: let $(y_i, u_i)$ be a net in $C'$ such that $(p(y_i), u_i)$ converges to $(x, u)$. Let $y\in Y$ such that $p(y)=x$. There exists a net $(y'_i)$ converging to $y$ such that $p(y'_i)=p(y_i)$ for all $i$. Then $(y'_i,u_i)$ also belongs to $C'$ and the net $(y'_i, u_i)$ converges to $(y,u)$. Thus $(y,u)$ belongs to $C'$ and $(x,u)$ belongs to $P(C')$. Since $P(C')$ contains $A$, it contains ${\rm conv}(A)$. Therefore $C\supset P^{-1}(P(C'))$ contains $P^{-1}({\rm conv}(A))$. This gives the inclusion
${\rm conv}(B)\supset P^{-1}({\rm conv}(A))$. On the other hand $P^{-1}({\rm conv}(A))$ is a subset of $F$ containing $B$ which is fiberwise convex and weakly closed. Hence it contains ${\rm conv}(B)$.
\end{proof}
Let us assume now that $E$ is a $G$-Hilbert bundle, where $G$ is a topological groupoid over $X$. Recall that we assume that for all $\gamma\in G$, $L(\gamma):E_{s(\gamma)}\rightarrow E_{r(\gamma)}$ is a linear isometry and that the action map $G*E\rightarrow E$ is continuous. We are also given a continuous cocycle $c:G\rightarrow r^*E$ which defines the affine isometric action of $G$ on $E$ given by:
$$\gamma u=L(\gamma)u+c(\gamma),\quad\forall (\gamma,u)\in G*E.$$
\begin{prop}\label{weak continuity} Let $(\gamma_i,u_i)$ be a net converging to $(\gamma,u)$ in $G*E_\sigma$. If $(\|u_i\|)$ is bounded, then $(\gamma_i u_i)$ converges to $\gamma u$ in $E_\sigma$.
\end{prop}
\begin{proof}
Let $f\in C(X,E)$. We have
$$\begin{array}{ccc}
(\gamma_iu_i|f\circ r(\gamma_i))&=&(L(\gamma_i)u_i|f\circ r(\gamma_i))+(c(\gamma_i)|f\circ r(\gamma_i))\\
&=&(u_i|L(\gamma^{-1}_i)f\circ r(\gamma_i))+(c(\gamma_i)|f\circ r(\gamma_i))\\
\end{array}$$
By continuity of the action, $L(\gamma^{-1}_i)f\circ r(\gamma_i)$ tends to $L(\gamma^{-1})f\circ r(\gamma)$ in $E$ and by \lemref{joint continuity}, $(u_i|L(\gamma^{-1}_i)f\circ r(\gamma_i))$ tends to $(u|L(\gamma^{-1})f\circ r(\gamma))$. By joint continuity of the scalar product in $E$, $(c(\gamma_i)|f\circ r(\gamma_i))$ tends to $(c(\gamma)|f\circ r(\gamma))$. Hence the result.
\end{proof}
We say that a subset $A$ of $E$ is invariant if for every $(\gamma,u)\in G\times A$ such that $s(\gamma)=\pi(u)$, $\gamma u$ belongs to $A$. Let us introduce the pull-back $s^*E=G*E$ of $E$ along the source map $s: G\rightarrow X$ and the map $W:s^*E\rightarrow s^*E$ defined by $W(\gamma, u)=(\gamma^{-1},\gamma u)$. This map is the fundamental involution of the action, which is ubiquitous in the theory of quantum groups. For its use in a similar context, see \cite[Section 4]{leg:KKG}. Let us also define the map $S: s^*E\rightarrow E$ by $S(\gamma,u)=(s(\gamma),u)$. Then, we have the following convenient criterion for invariance.
\begin{lem} A subset $A$ of $E$ is invariant if and only if $W(S^{-1}(A))=S^{-1}(A)$.
\end{lem}
\begin{prop}\label{inv conv} Let $E$ be a $G$-Hilbert bundle, where $G$ is a locally compact groupoid. We consider the affine isometric action of $G$ on $E$ defined by a continuous cocycle $c:G\rightarrow r^*E$. Let $A$ be a bounded subset of $E$. If $A$ is invariant, then its convex hull ${\rm conv}(A)$ is also invariant.
\end{prop}
\begin{proof} The fundamental involution $W:s^*E\rightarrow s^*E$ sends the fibre $E_\gamma=E_{s(\gamma)}$ onto the fibre $E_{\gamma^{-1}}=E_{r(\gamma)}$ through the affine map $u\mapsto \gamma u$. Therefore, it respects fiberwise convex sets. It also sends bounded weakly closed sets onto weakly closed sets. Suppose indeed that $B\subset s^*E$ is bounded and weakly closed. Let $(\gamma_i, u_i)$ be a net in $W(B)$ converging weakly to $(\gamma, u)$. Then $(\gamma_i)$ converges to $\gamma$; in particular, the net $(\|c(\gamma_i)\|)$ is bounded. The net $(\|\gamma_i u_i\|)$ is also bounded because $(\gamma_i^{-1},\gamma_iu_i)$ belongs to the bounded set $B$. Since
$u_i=L(\gamma_i^{-1})(\gamma_i u_i-c(\gamma_i))$, the net $(\|u_i\|)$ is bounded. According to \propref{weak continuity}, $(\gamma_i u_i)$ converges to $\gamma u$ in $E_\sigma$, therefore $(\gamma_i^{-1},\gamma_i u_i)$ converges to $(\gamma^{-1},\gamma u)$ in $(s^*E)_\sigma$. Since $B$ is weakly closed, $(\gamma^{-1},\gamma u)$ belongs to $B$ and $(\gamma, u)$ belongs to $W(B)$. Therefore, if $B$ is a bounded subset of $s^*E$, $W({\rm conv}(B))$ is a fiberwise convex weakly closed set containing $W(B)$. This shows that the convex hull ${\rm conv}(W(B))$ exists and is contained in $W({\rm conv}(B))$. If moreover $W(B)$ is bounded, the same argument gives ${\rm conv}(B)\subset W({\rm conv}(W(B)))$, hence the equality ${\rm conv}(W(B))=W({\rm conv}(B))$. Applied to $B=S^{-1}(A)$, where $A$ is an invariant bounded subset of $E$, this gives
${\rm conv}(W(S^{-1}(A)))=W({\rm conv}(S^{-1}(A)))$. Since $W(S^{-1}(A))=S^{-1}(A)$, we get ${\rm conv}(S^{-1}(A))=W({\rm conv}(S^{-1}(A)))$. Since, according to \propref{conv}, ${\rm conv}(S^{-1}(A))=S^{-1}({\rm conv}(A))$, this gives the invariance of ${\rm conv}(A)$.
\end{proof}
We are now ready to prove the existence of a weakly continuous equivariant section.
\begin{thm}\label{weak Renault} Let $G$ be a minimal locally compact groupoid on a compact space $X$, let $E$ be a continuous $G$-Hilbert bundle and let $c:G\rightarrow r^*E$ be a continuous cocycle. If there exists $x\in X$ such that $\|c(G_x)\|$ is bounded, then there exists a weakly continuous section $f:X\rightarrow E$ such that
$$\forall \gamma\in G,\quad c(\gamma)=f\circ r(\gamma)-L(\gamma)f\circ s(\gamma).$$
\end{thm}
\begin{proof} This part follows closely \cite{cnp:bdd orbits}. We consider the affine action of $G$ on $E$ defined by the cocycle $c$. The assumption is the existence of a bounded orbit $A$. The conclusion is the existence of a weakly continuous equivariant section. \propref{inv conv} gives the existence of a non-empty weakly closed, fiberwise convex, invariant and bounded subset, namely ${\rm conv}(A)$. Since $X$ is compact, according to \corref{compact}, this set is weakly compact. The family of all weakly compact fiberwise convex and invariant non-empty subsets of $E$ ordered by inclusion is inductive. By Zorn's lemma, there exists a minimal weakly compact fiberwise convex invariant non-empty subset $M$. Then $\pi(M)$ is a non-empty closed invariant subset of $X$. By minimality of $G$, $\pi(M)=X$. This says that for all $x\in X$, the fiber $M_x=M\cap E_x$ has at least one element. We are going to show that $M_x$ has at most one element. This is a classical trick which uses the uniform convexity of the Hilbert spaces $E_x$. Hilbert spaces have the modulus of uniform convexity $\delta=\delta(\epsilon)=1-\sqrt{1-{1\over 4}\epsilon^2}$. Recall that this means that for $u_1,u_2\in E_x$,
$$\|u_1\|\le 1, \|u_2\|\le 1, \|u_1-u_2\|\ge \epsilon\quad \Longrightarrow\quad \|{1\over 2}(u_1+u_2)\|\le 1-\delta.$$
We fix $\epsilon>0$. We let $R=\sup_{u\in M}\|u\|$ and choose $u\in M$ such that $\|u\|>(1-\delta^2)R$. We choose $f\in C(X,E)$ such that $f\circ\pi(u)=u/\|u\|$. Let $V=\{y\in X: \|f(y)\|<1+\delta\}$. It is an open neighborhood of $\pi(u)$ in $X$.
Let $x$ be an arbitrary point in $X$ and assume that $u_1,u_2$ belong to the fiber $M_x$. The midpoint $m={1\over 2}(u_1+u_2)$ also belongs to the convex set $M_x$. Let us show that the orbit of $m$ meets the non-empty weakly open set
$$U=\{(y,e)\in E: y\in V,\quad |(e|f(y))|>(1-\delta^2)R\}.$$
If not, this orbit would be contained in the fiberwise convex weakly closed subset $M\setminus U$ and so would be its convex hull. This would contradict the minimality of $M$ because $M\setminus U$ is strictly contained in $M$: it does not contain $u$. Thus, there exists $\gamma\in G_x$ such that $\gamma m\in U$. Then, since $\|f(y)\|<1+\delta$ for $y=\pi(\gamma m)\in V$, we must have $\|\gamma m\|> (1-\delta^2)R/(1+\delta)=(1-\delta)R$. Since $G$ acts by affine transformations, $\gamma m$ is the midpoint of $\gamma u_1$ and $\gamma u_2$. Moreover, since these points belong to $M$, they satisfy $\|\gamma u_i\|\le R$. The above uniform convexity condition, applied to $\gamma u_1/R$ and $\gamma u_2/R$, implies that $\|\gamma u_1-\gamma u_2\|<\epsilon R$. Since $G$ acts by isometries, this implies that $\|u_1-u_2\|<\epsilon R$. Since $\epsilon$ is arbitrary, this implies $u_1=u_2$.
Thus, the restriction of the bundle projection $\pi_{|M}: M\rightarrow X$ is a bijection. Since it is weakly continuous and $M$ is compact with respect to the weak topology of $E$, its reciprocal map $f: X\rightarrow M$ is weakly continuous. The invariance of $M$ says exactly that for all $\gamma\in G$, $\gamma f\circ s(\gamma)=f\circ r (\gamma)$.
\end{proof}
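The uniform convexity estimate used in the proof above is a direct consequence of the parallelogram law $\|{1\over 2}(u_1+u_2)\|^2={1\over 2}(\|u_1\|^2+\|u_2\|^2)-\|{1\over 2}(u_1-u_2)\|^2$. The following sketch (with ${\bf{R}}^5$ standing in for a fiber $E_x$) checks the resulting bound $\|{1\over 2}(u_1+u_2)\|\le 1-\delta(\epsilon)$ on random pairs of vectors:

```python
import math, random

# delta(eps) = 1 - sqrt(1 - eps^2/4) is the Hilbert-space modulus; we verify
# the midpoint bound on random pairs of vectors of norm at most 1 in R^5.
def norm(u): return math.sqrt(sum(t * t for t in u))

random.seed(0)
eps = 0.5
delta = 1.0 - math.sqrt(1.0 - eps * eps / 4.0)
for _ in range(1000):
    u1 = [random.uniform(-1, 1) for _ in range(5)]
    u2 = [random.uniform(-1, 1) for _ in range(5)]
    n1, n2 = norm(u1), norm(u2)
    u1 = [t / max(n1, 1.0) for t in u1]   # force ||u1|| <= 1
    u2 = [t / max(n2, 1.0) for t in u2]
    if norm([a - b for a, b in zip(u1, u2)]) >= eps:
        mid = norm([(a + b) / 2 for a, b in zip(u1, u2)])
        assert mid <= 1.0 - delta + 1e-12
```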
\section{Continuity of $G$-equivariant weakly continuous sections.}
As before, we assume that the base space $X$ of the Banach bundle $E$ is locally compact.
We are going to show that an equivariant weakly continuous section is norm continuous. Again, it is an adaptation of the proof given in \cite{cnp:bdd orbits}. However, it is no longer possible to use the oscillation function as in \cite{cnp:bdd orbits}: for a section $f:X\rightarrow E$ of a Banach bundle, one cannot compare directly the vectors $f(x)$ and $f(y)$ for $x\not=y$ since they belong to different spaces.
\begin{defn}\label{module of continuity} We define the module of continuity of a section $f:X\rightarrow E$ at $x$ as
$\omega(x)=\inf\{\sup_{y\in V}\|f(y)-g(y)\|\}$, where the infimum is taken over all pairs $(V,g)$, where $V$ is an open neighborhood of $x$ and $g:V\rightarrow E$ is a continuous section over $V$.
\end{defn}
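For the trivial line bundle $E=X\times{\bf{R}}$, sections are ordinary functions and the module of continuity can be estimated numerically. In the sketch below (our own example: a unit jump at $x=1/2$; for simplicity the candidate continuous sections $g$ are restricted to constants, which suffices here) one finds $\omega(1/2)=1/2$, the constant $g=1/2$ splitting the jump evenly.

```python
# f has a unit jump at x = 0.5; on a small neighborhood V of 0.5, the best
# sup-norm approximation of f by a constant section g realizes omega(0.5).
f = lambda x: 0.0 if x < 0.5 else 1.0
V = [0.5 + k * 1e-4 for k in range(-50, 51)]    # small neighborhood of 0.5

def sup_dist(g_const):
    return max(abs(f(x) - g_const) for x in V)

omega = min(sup_dist(c / 1000.0) for c in range(1001))   # inf over constants
assert abs(omega - 0.5) < 1e-6
```

Any continuous $g$ does no better: by continuity, $g$ takes values close to $g(1/2)$ on both sides of the jump, so the sup is at least $1/2$.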
\begin{prop}
Let $f:X\rightarrow E$ be a section of a Banach bundle $\pi:E\rightarrow X$ and $x\in X$. Then the following conditions are equivalent:
\begin{enumerate}
\item $f$ is continuous at $x$;
\item its module of continuity at $x$ vanishes.
\end{enumerate}
\end{prop}
\begin{proof}
This is a corollary of \cite[Proposition II.13.12]{fd:representations}; this can also be seen directly from the comments following \defnref{Banach bundle}.
\end{proof}
\begin{prop} Let $f:X\rightarrow E$ be a section with module of continuity $\omega$ and let $U=\{x\in X: \omega(x)<\epsilon\}$, where $\epsilon>0$. Then,
\begin{enumerate}
\item $U$ is an open subset of $X$.
\item If $f$ is $G$-equivariant with respect to a continuous affine isometric action of a topological groupoid $G$, then $U$ is $G$-invariant.
\end{enumerate}
\end{prop}
\begin{proof} The first assertion results directly from the definition of the module of continuity $\omega(x)$: if $x\in U$, there is a pair $(V,g)$ where $V$ is an open neighborhood of $x$ such that $\|f(y)-g(y)\|<\epsilon$ for all $y\in V$. Then $V$ is contained in $U$.
Let us prove $(ii)$. Let $\underline\gamma\in G$. We assume that $s(\underline\gamma)\in U$ and we will show that $r(\underline\gamma)\in U$. We fix $\epsilon'$ such that $\omega(s(\underline\gamma))<\epsilon'<\epsilon$. Then there exists an open neighborhood $V$ of $s(\underline\gamma)$ and a continuous section $g:V\rightarrow E$ such that for all $y\in V$, $\|f(y)-g(y)\|<\epsilon'$. Let us define $\tilde h:G_V=s^{-1}(V)\rightarrow r^*E$ by $\tilde h(\gamma)=\gamma g\circ s(\gamma)$. Let $\delta={1\over 2}(\epsilon-\epsilon')$. Since $\tilde h$ is a continuous section of the pull-back bundle $r^*E$, according to \cite[Section 5]{dd:dixmier-douady}, there exists an open neighborhood $S\subset G_V$ of $\underline\gamma$ and a continuous section $h:r(S)\rightarrow E$ such that for all $\gamma\in S$, we have $\|\tilde h(\gamma)-h\circ r(\gamma)\|<\delta$. Then, for all $x\in W=r(S)$, choosing $\gamma\in S$ such that $r(\gamma)=x$ and using the equivariance of $f$, we have:
$$\begin{array}{ccc}
\|f(x)-h(x)\|&=&\|f\circ r(\gamma)-h\circ r(\gamma)\| \\
&\le&\|f\circ r(\gamma)-\tilde h(\gamma)\|+ \|\tilde h(\gamma)-h\circ r(\gamma)\|\\
&<&\|\gamma f\circ s(\gamma)-\gamma g\circ s(\gamma)\|+ \delta\\
&=&\|f\circ s(\gamma)-g\circ s(\gamma)\| +\delta\\
&<&\epsilon'+\delta<\epsilon .\\
\end{array}$$
This shows that $\omega(r(\underline\gamma))\le\epsilon'+\delta<\epsilon$ and $r(\underline\gamma)\in U$.
\end{proof}
\begin{cor}Let $f:X\rightarrow E$ be a $G$-equivariant section of a Banach bundle $E$ endowed with an affine isometric continuous action of a topological groupoid $G$. Then the set of points of continuity of $f$ is a countable intersection of open invariant subsets.
\end{cor}
\begin{proof} We define $U_n=\{x\in X: \omega(x)<1/n\}$. Then, we just observe that the set of points of continuity of $f$ is the intersection of the $U_n$'s.
\end{proof}
\begin{cor}\label{inv} Let $f:X\rightarrow E$ be a $G$-equivariant section of a Banach bundle $E$ endowed with an affine isometric continuous action of a topological groupoid $G$. If $G$ is minimal, either $f$ is continuous or has no point of continuity.
\end{cor}
\begin{proof} If $f$ has at least one point of continuity, the $U_n$'s are non-empty. By minimality, they are all equal to $X$. Therefore the set of points of continuity of $f$ is $X$.
\end{proof}
Let us recall a general property of Hilbert C*-modules over C*-algebras.
\begin{prop}\label{approximation}\cite[Theorem 3.1]{ble:C*-modules} Let $\mathcal E$ be a C*-module over a C*-algebra $A$. Then there exists a directed set $I$, a net of integers $(n_i)$ and nets of contractive $A$-linear maps $\varphi_i:{\mathcal E}\rightarrow A^{n_i}$, of the form $\varphi_i(f)=(<g_{i,1},f>,\ldots,<g_{i,n_i},f>)$ with $g_{i,1},\ldots,g_{i,n_i}\in {\mathcal E}$, and $\psi_i: A^{n_i}\rightarrow{\mathcal E}$ such that for all $f\in{\mathcal E}$, $\psi_i\circ\varphi_i(f)$ tends to $f$. Moreover, if $\mathcal E$ is countably generated, one can choose $I={\bf{N}}$.
\end{prop}
Note that the maps $\varphi_i$ and $\psi_i$ of the above proposition are adjointable maps from a C*-module to another C*-module.
\begin{cor}\label{weak-strong} Let $E\rightarrow X$ be a Hilbert bundle. Assume that $X$ is compact and that the bundle space $E$ is second countable. Then the set of points of continuity of each weakly continuous section $f:X\rightarrow E$ is a dense $G_\delta$.
\end{cor}
\begin{proof} We apply \propref{approximation} to the C*-module ${\mathcal E}=C(X,E)$ over the C*-algebra $A=C(X)$. According to \cite[Proposition II.13.21]{fd:representations}, it is countably generated. Because the functor $E\mapsto C(X,E)$ gives an equivalence between the category of Hilbert bundles over $X$ and the category of Hilbert C*-modules over $C(X)$, we obtain a sequence of continuous bundle maps $\varphi_i:E\rightarrow X\times {\bf{C}}^{n_i}$ and $\psi_i: X\times {\bf{C}}^{n_i}\rightarrow E$ such that for all $u\in E$, $\psi_i\circ\varphi_i(u)$ tends to $u$.
Let $f$ be a weakly continuous section of $E$. Then $\varphi_i\circ f: X\rightarrow X\times {\bf{C}}^{n_i}$, which is of the form $x\mapsto (x,((g_{i,1}(x)|f(x)),\ldots,(g_{i,n_i}(x)|f(x))))$, where $g_{i,1},\ldots,g_{i,n_i}\in C(X,E)$, is continuous. Therefore $f$ is the pointwise limit of the sequence of continuous sections $f_i=\psi_i\circ(\varphi_i\circ f)$. Our assumption implies that $X$ is compact and second countable, hence metrizable. The space $E$ is also metrizable since, according to \cite{dd:dixmier-douady}, it can be embedded into a trivial bundle $X\times F$, where $F$ is a Hilbert space. According to a theorem of Baire, the set of points of continuity of $f:X\rightarrow E$ is a dense $G_\delta$.
\end{proof}
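The scalar prototype behind this Baire-category argument is worth recalling: a pointwise limit of continuous functions into a metric space has a dense $G_\delta$ set of continuity points. The sketch below (the standard example $f_n(x)=x^n$ on $[0,1]$, our choice) exhibits a pointwise limit whose continuity set is $[0,1)$, a dense $G_\delta$ in $[0,1]$:

```python
# f_n(x) = x^n are continuous on [0,1]; the pointwise limit f vanishes on
# [0,1) and equals 1 at x = 1, so f is continuous exactly on [0,1).
f = lambda x: 1.0 if x == 1.0 else 0.0
fn = lambda n, x: x ** n

for x in [0.0, 0.3, 0.9, 1.0]:          # pointwise convergence at samples
    assert abs(fn(400, x) - f(x)) < 1e-6
```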
\begin{cor} Let $E\rightarrow X$ be a Hilbert bundle endowed with an affine isometric continuous action of a topological groupoid $G$. Assume that $X$ is compact, the bundle space $E$ is second countable and $G$ is minimal. Then each $G$-equivariant section $f:X\rightarrow E$ which is weakly continuous is necessarily strongly continuous.
\end{cor}
\begin{proof} \corref{weak-strong} shows that $f$ has at least one point of continuity. \corref{inv} says then that $f$ is continuous at all $x\in X$.
\end{proof}
\section{ Concluding remarks}
C. Anantharaman-Delaroche gives in \cite[Theorem 3.19]{ana:T} a measure theoretic version of \thmref{main}. Her proof is based on the ``lemma of the centre'' \cite[Lemma 2.2.7]{bhv:T}. Let us assume that $E$ is a continuous $G$-Hilbert bundle and $c:G\rightarrow r^*E$ is a bounded continuous cocycle. Then we can define for all $x\in X$, $f(x)$ as the centre of $c(G^x)$. Then, since $L(\gamma)$ is an isometry from $E_{s(\gamma)}$ to $E_{r(\gamma)}$, we have $c(\gamma)=f\circ r(\gamma)-L(\gamma)f\circ s(\gamma)$ for all $\gamma\in G$. In the measure theoretical framework, $f$ is measurable; however, its continuity is problematic (see \cite[Example 17]{cnp:bdd orbits}).
J.-L. Tu shows in \cite[3.3]{tu:conjecture} that, just as in the case of groups, every conditionally negative type continuous function $\psi: G\rightarrow {\bf{R}}$ on a topological groupoid $G$ is of the form $\psi(\gamma)=\|c(\gamma)\|^2$, where $c: G\rightarrow r^*E$ is a continuous cocycle of a continuous $G$-Hilbert bundle $E$. \thmref{main} shows that the condition that every conditionally negative type continuous function is bounded has the same cohomological interpretation for groupoids as for groups.
\vskip 5mm
{\it Acknowledgements.} I thank C.~Anantharaman-Delaroche and E.~Blanchard for their help in eliminating some obscurities in a preliminary draft of the manuscript.
\vskip3mm
\section{Conclusion}
In this work, we pointed out the need to approach the task of detecting social groups in crowds from a learning perspective.
Many existing methods rely on specifically tuned parameters that limit their applicability in real-world scenarios.
Our intuition is that different crowds preserve the same concept of social group, but in many cases this concept cannot be distilled from spatial considerations alone. We thus defined a set of socially-inspired, strongly motivated features able to capture and characterize the peculiarities of different groups.
To learn a socially meaningful clustering rule for grouping pedestrians, we relied on the Structural SVM framework and designed a dedicated loss function able to account for singleton as well as group errors.
Even though the algorithm was originally designed to work with exact trajectories, we replicated the experiments on noisy tracklets extracted by a detector/tracker system, obtaining state-of-the-art results.
Moreover, we proposed an online training version of the method, able to achieve superior generalization performances on crowds with variable density.
We did note, however, that as we consider wider portions of the scene, the chance that many different densities groups coexist in different locations increases, leading to the necessity to learn more than one clustering rule per scene. To resolve this problem we plan, as future work, to learn a set of different distance measures and use latent variables to choose the most appropriate given a particular zone. Code and datasets are made publicly available\footnote{\texttt{http://imagelab.ing.unimore.it/group-detection}} in order to reproduce this paper results and allow the community to improve the proposed method.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Experimental Results}
\label{sec:exp}
We designed several experiments to evaluate the algorithm's behavior on well-assessed benchmarks and its connection to the nature of the problem.
All the experiments were carried out on ground truth trajectory data, except for Sec.~\ref{exp:real} where the method is evaluated on tracklets extracted by a modern detector/tracker system.
We also propose new video sequences to stress the algorithm over a variety of challenges in real world scenarios. Since the method works on ground plane (metric) data, we also provide homography information for all the employed sequences.
\subsubsection*{Datasets}
We selected two publicly available datasets, namely the \emph{BIWI Walking Pedestrians} dataset~\cite{pellegrini_youll_2009} and the \emph{Crowds-By-Examples (CBE)} dataset~\cite{lerner_crowds_2007}. The former dataset records two scenes with low crowd density, outside a university and at a bus stop (\verb+eth+ and \verb+hotel+ in Tab.~\ref{tab:dataset}). The \emph{CBE} dataset records a medium density crowd outside another university (\verb+student003+, briefly \verb+stu003+) and provides two main challenges: a significantly higher pedestrian density and the presence of multiple entry and exit points. While \emph{BIWI} and \emph{CBE} are standard datasets in crowd analysis, we also use the more recent \emph{Vittorio Emanuele II Gallery (GVEII)} dataset~\cite{BanGorVizPRLGallery}, from which we extracted a five-minute subsequence, \verb+gal1+, particularly interesting due to the fast and continuous change in crowd density.
We also propose a new dataset to cope with the increasing variety of applications in dense-crowd management, \emph{MPT-$20$x$100$}, composed of 20 sequences of 100 frames in which we manually annotated trajectories and social groups. The dataset comprises different videos~\cite{bolei_2014}, all characterized by a high number of pedestrians and a heterogeneous set of scene conditions, ranging over density, scale, viewpoint and type of interaction, like walking in a mall, crossing the street or taking part in public events.\\
In Tab.~\ref{tab:dataset} we report some measures useful to characterize the spatial complexity of the datasets:
\begin{itemize}
\item $d_\text{in}$ is the \emph{group compactness}, computed as the mean distance between members of the same group;
\item $d_\text{out}$ is the \emph{group isolation} or the mean distance between each member and its closest unrelated pedestrian;
\item the ratio $d_\text{i/o}\stackrel{\text{\tiny def}}{=}d_\text{in}/d_\text{out}$ measures \emph{crowd collectiveness}: small values mean compact groups in a sparse crowd.
\end{itemize}
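The three measures above can be computed directly from ground-plane positions and group annotations. Below is a minimal sketch for a single frame; the array-based layout is a hypothetical simplification of the datasets' actual format:

```python
import numpy as np

def density_metrics(positions, groups):
    """Compute group compactness d_in, group isolation d_out and the
    d_i/o ratio for one frame.

    positions: (N, 2) ground-plane coordinates in meters.
    groups:    length-N array of group ids (singletons get unique ids).
    """
    positions = np.asarray(positions, dtype=float)
    groups = np.asarray(groups)
    n = len(groups)
    # Pairwise Euclidean distance matrix on the ground plane.
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)

    intra, inter = [], []
    for i in range(n):
        same = (groups == groups[i])
        same[i] = False
        if same.any():                    # mean distance to own group members
            intra.append(dist[i, same].mean())
        other = groups != groups[i]
        if other.any():                   # closest unrelated pedestrian
            inter.append(dist[i, other].min())

    d_in = float(np.mean(intra)) if intra else float("nan")
    d_out = float(np.mean(inter)) if inter else float("nan")
    return d_in, d_out, d_in / d_out
```

For instance, a compact pair far away from a singleton yields a small $d_\text{i/o}$, matching the "compact groups in a sparse crowd" reading above.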
\begin{table}[t!]
\center
\caption{Datasets: number of pedestrians (\#p), groups (\#g) and density metrics.}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
& \#p & \#g & $d_\text{in}~(m)$ & $d_\text{out}~(m) $ & $d_\text{i/o}$ \\
\hline
\verb+student003+ & 406 & 108 & 0.41 & 0.70 & 0.59\\
\hline
\verb+eth+ & 117 & 18 & 0.99 & 2.79 & 0.35\\
\hline
\verb+hotel+ & 107 & 11 & 0.75 & 2.00 & 0.38\\
\hline
\verb+gal1+ & 630 & 207 & 0.77 & 1.66 & 0.46\\
\hline
\verb+MPT-20x100+ & 82 & 10 & 0.63 & 1.45 & 0.48\\
\hline
\end{tabular}
\label{tab:dataset}
\end{table}
\subsubsection*{Evaluation Scheme}
There is no consensus on which metrics should be used to evaluate group correctness: we propose to use the $G$-MITRE precision $\mathcal{P}$ and recall $\mathcal{R}$, since they also account for the correct classification of singletons. This is an important gain, as in crowded scenes the number of people walking alone is rarely negligible.
Each measure is reported in terms of mean and standard deviation over 5 runs to account for the stochastic nature of the training of our algorithm. Unless otherwise specified, we used a 100s window for training and a 10s sliding window with no overlap for feature computation. The regularization parameter $C$ of QP~\eqref{optpro} is fixed to 10.\\
\noindent For the heat-map based feature of Sec.~\ref{sec:heatmaps}, we ran a grid search on the parameters. For all the experiments, the length of the cell edge is fixed to 30cm, $k_s=10^{-5}$ and $k_r=0.5$.
\subsection{Baseline and Benchmark Comparisons}
We compare our method with four recent state-of-the-art group detection algorithms, namely~\cite{ge_vision-based_2012,yamaguchi_who_2011,zanotto12,shao14}, selected on the basis of their reported performances on public datasets and the availability of code.
In addition, we devised a simple baseline version of our solution that performs the group partitioning without the learning framework. The weights are randomly chosen to be the same for all the features, so that the randomness resides in the similarity/dissimilarity ratio.
\begin{table}[t!]
\centering
\caption{Evaluation of our proposal when trained with different loss functions.}
\begin{tabular}{|l|c|c|c|c|}
\hline
& \multicolumn{2}{|c|}{Pairwise $\Delta_{PW}$} & \multicolumn{2}{|c|}{MITRE $\Delta_{M}$} \\
& $\mathcal{P}$ & $\mathcal{R}$ & $\mathcal{P}$ & $\mathcal{R}$ \\
\hline
\verb+hotel+ & \textls{90.1 $\pm$ 2.0} & \textls{84.1 $\pm$ 3.2} & \textls{89.2 $\pm$ 3.0}& \textls{93.2 $\pm$ 1.9} \\
\hline
\verb+eth+ & \textls{88.7 $\pm$ 1.8} & \textls{87.3 $\pm$ 2.6} & \textls{91.9 $\pm$ 0.8}& \textls{92.9 $\pm$ 1.0}\\
\hline
\verb+stu003+ & \textls{68.9 $\pm$ 1.4} & \textls{69.9 $\pm$ 1.5} & \textls{80.1 $\pm$ 2.4}& \textls{80.9 $\pm$ 2.3}\\
\hline
\end{tabular}
\label{tab:loss}
\end{table}
\subsubsection{Quantitative Results}
Quantitative results are given in Tab.~\ref{tab:comparison}. To allow for a broader comparison, results are presented both in terms of $G$-MITRE and of a pairwise loss $\Delta_{PW}^+$~\cite{zanotto12}, which accounts only for positive (intra-group) relations but neglects singletons. The latter loss is not directly optimized by our algorithm; still, our method outperforms the competitors on all the tested sequences. This can be explained by the ability of our algorithm to adapt the concept of group to ever different scenarios by varying the feature importance, and by the use of sociologically inspired similarity functions. The slightly lower performance on the \verb+stu003+ sequence is due to the high complexity of the scene: the high value of the $d_\text{i/o}$ ratio in Tab.~\ref{tab:dataset} suggests the presence of loose groups in a dense crowd, which are, as such, challenging to detect.
\subsubsection{Evaluation of Different Loss Functions}
As structured learning relies upon a definition of \emph{what's wrong} to learn how to classify well, the choice of the loss function can greatly affect the final performance. By fixing the $G$-MITRE measure as a proper scoring scheme, we quantitatively test the influence of the choice of the loss on the \verb+eth+, \verb+hotel+ and \verb+stu003+ datasets (Tab.~\ref{tab:loss}).
As could be expected from its definition, the improvement due to the use of the $G$-MITRE loss (reported in Tab.~\ref{tab:comparison}) is greater in the \verb+eth+ and \verb+hotel+ sequences, where the ratio between the number of singletons and the people walking in groups is higher and, as such, learning to classify singletons correctly becomes crucial. More interestingly, we observe that the pairwise loss obtains outstanding performances when the number of pedestrians is limited, but becomes ineffective when it starts to grow, as in \verb+stu003+.
\subsection{Features Weight Learning on \emph{MPT-${\bf 20}$x${\bf 100}$}}
The \emph{CBE} and \emph{BIWI} datasets expose some interesting challenges of the problem but, with the only exception of the \verb+stu003+ sequence, they feature a limited number of pedestrians in the scene and a low crowd density. Moreover, the scenarios are similar and the variety of interactions underlying group formation is limited. The proposed \emph{MPT-$20$x$100$} dataset, on the other hand, presents different degrees of complexity.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/perf_curve.pdf}
\caption{Comparison against baseline and \cite{ge_vision-based_2012} on \emph{MPT-$20$x$100$}.}
\label{fig:perf_comp_b}
\end{figure}
First, we evaluate the general performance of the algorithm and compare it with both our baseline and the proposal in \cite{ge_vision-based_2012}, where, for the latter method, we manually tuned the thresholds to achieve the best results.
These methods are clustering based and partially consistent with the social group axioms, but no learning is employed.
Results are shown in Fig.~\ref{fig:perf_comp_b} as a \emph{survival curve} plot, which reveals on how many sequences the algorithms were at least able to reach a specific lower-bound performance; per-video scores are in Fig.~\ref{fig:perf_comp_a}. Interestingly, the gap between our method and \cite{ge_vision-based_2012} increases here with respect to the previous datasets by an average of 10\%, suggesting that sequences can differ substantially in the concept of group they embed, and thus learning is mandatory to adapt to these new representations of social groups and keep performances stable.
\subsubsection{The Need for Learning from Examples}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/bar_plot_sequences.pdf}
\caption{Results on \emph{MPT-$20$x$100$} highlight the complexity of each scene.}
\label{fig:perf_comp_a}
\end{figure}
The confusion-like matrix depicted in Fig.~\ref{fig:matrix} presents the F-1 scores obtained by training the algorithm on one sequence of \emph{MPT-$20$x$100$} (row labels) and testing it on all the other sequences (column labels).
By reading the matrix and averaging each row over all the columns, it is possible to grasp how good a particular sequence is for training. At the same time, by observing the average of each column over all the rows, we can get an intuition about how well each sequence is predicted by all the others.\\
We are interested in understanding whether a specific notion of group is shared across sequences and how it is influenced by both scene elements (\emph{e.g.} crowd density) and unobserved aspects (\emph{e.g.} intentions and social hierarchies).
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/clustered_matrix.pdf}
\caption{F-1 scores obtained by all combinations of train/test pair sequences in \emph{MPT-$20$x$100$}. Results were clustered (diagonal blocks C1-C4 from left to right) to highlight similar notion of group among sequences.}
\label{fig:matrix}
\end{figure}
With the purpose of capturing these invariants, we search for the connected components of the matrix using the F-1 score as the affinity value among elements. Clustering is performed through an asymmetric version of spectral clustering~\cite{naumann_combinatorial_2012} based on the Random Walk Laplacian, defined as
\begin{equation}
L = AD^{-1},
\end{equation}
where $A$ is the affinity matrix of Fig.~\ref{fig:matrix} and $D$ is the usual degree matrix. Following the eigen-gap heuristic, we found $4$ distinct clusters in the \emph{MPT-$20$x$100$} dataset, highlighted with black lines in Fig.~\ref{fig:matrix}; for every cluster we computed the $d_\text{in}$, $d_\text{out}$ and $d_\text{i/o}$ spatial measures, displayed in Tab.~\ref{tab:spatial}, to verify whether clusters with a similar notion of group also share a common configuration of distances among pedestrians and, possibly, whether the performance is connected to crowd density.
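The eigen-gap heuristic on this matrix can be sketched as follows. This is a simplified stand-in for the clustering of~\cite{naumann_combinatorial_2012}: since an asymmetric affinity matrix may have complex eigenvalues, the sketch ranks them by magnitude and picks the largest gap.

```python
import numpy as np

def eigen_gap_k(A, k_max=None):
    """Estimate the number of clusters of an (asymmetric) affinity
    matrix A via the eigen-gap heuristic on L = A D^{-1}.
    Assumes strictly positive column degrees."""
    A = np.asarray(A, dtype=float)
    D_inv = np.diag(1.0 / A.sum(axis=0))   # inverse degree matrix
    L = A @ D_inv                          # column-stochastic matrix
    # Eigenvalues may be complex for asymmetric A: rank by magnitude.
    lam = np.sort(np.abs(np.linalg.eigvals(L)))[::-1]
    k_max = k_max or len(lam) - 1
    gaps = lam[:k_max] - lam[1:k_max + 1]  # consecutive spectral gaps
    return int(np.argmax(gaps)) + 1
```

On a block-diagonal affinity with two disconnected blocks, the two leading eigenvalues equal one and the largest gap falls after them, so the heuristic returns two clusters.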
\begin{table}[t!]
\centering
\caption{Spatial depiction, training efficacy and groups predictability of the clusters of sequences of Fig.~\ref{fig:matrix}.}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
cluster & $d_\text{in}$ (m)& $d_\text{out}$ (m)& $d_\text{i/o}$ & $\text{F}_1$ train & $\text{F}_1$ test\\
\hline
\hline
C1 & 0.58 & 1.03 & 0.54 & 0.82 & 0.82 \\
\hline
C2 & 0.59 & 1.28 & 0.47 & 0.85 & 0.84 \\
\hline
C3 & 0.59 & 0.99 & 0.59 & 0.77 & 0.64 \\
\hline
C4 & 0.89 & 3.00 & 0.34 & 0.75 & 0.84 \\
\hline
\end{tabular}
\label{tab:spatial}
\end{table}
Tab.~\ref{tab:spatial} also reports a measure of \emph{training efficacy} (F$_1$ train), computed as the mean accuracy obtained on the whole dataset when only the sequences in that specific cluster were used for training, and, analogously, a group \emph{predictability score} (F$_1$ test), i.e. the mean accuracy obtained on the sequences of that cluster when all the sequences were used for training. They indicate how useful a cluster is during training and how easy it is to predict groups inside its sequences.
A first observation concerns cluster C4, which presents the highest F$_1$ test and the lowest F$_1$ train. We found it was easy to predict groups in these videos, but they were poorly informative as training examples, a result justified by the cluster's small $d_\text{i/o}$.
Nonetheless, clusters C1 and C3 exhibit very similar $d_\text{i/o}$ ratios but perform very differently in terms of both training efficacy and testing score, suggesting that a trivial heuristic based on spatial information only is insufficient to visually discern groups.
Implicit aspects like motion constraints or the cultural and social context also affect the group formation process, supporting our hypothesis that learning is needed to adapt the concept of group to the current data.
\subsubsection{Do we Capture the Essence of Being a Group?}
As previously stated, \emph{MPT-$20$x$100$} comprises very different scenarios and situations and can provide important insights on which elements are most revealing of groups. To this end, recall from Eq.~\eqref{eq:cc_affinity_parametrization} of Sec.~\ref{sec:solution} that the feature vector ${\bf w} = [\boldsymbol\alpha, \boldsymbol\beta] = [w_1,w_2,\dots,w_8]$ is such that the affinity between two trajectories $T_a$ and $T_b$ can be written as:
\begin{equation}
\label{eq:w_decomposed}
\begin{aligned}
W^{ab}_{\bf d} &= {\boldsymbol\alpha}^T ({\bf 1} - {\bf d}(a, b)) - {\boldsymbol\beta}^T{\bf d}(a, b)\\
&= \underbrace{w_1 + w_2 + w_3 + w_4}_{\text{constant term}}
- \underbrace{\begin{aligned}[t]
\big[&(w_5+w_1)d_{ph} + (w_6+w_2)d_{sh}\\
+{}&(w_7+w_3)d_{ca} + (w_8+w_4)d_{he}\big]
\end{aligned}}_{\text{$(a,b)$-dependent term}}
\end{aligned}
\end{equation}
The contribution of each feature to the score, transformed from a distance into an affinity measure by the constant term of Eq.~\eqref{eq:w_decomposed}, is encoded in the absolute value of the feature's coefficient.
As shown in Fig.~\ref{fig:weight_imp_a}, the proxemics-inspired feature $d_{ph}$ dominates all the others, while the importance of the remaining features varies greatly from sequence to sequence.
The two sequences \verb+1manko3+ (Fig.~\ref{fig:more_results}) and \verb+1dawei1+ (Fig.~\ref{fig:crowd}), for example, receive very similar contributions from $d_{he}$ and $d_{sh}$, while the importance assigned to $d_{ph}$ in \verb+1dawei1+ is shifted to $d_{ca}$ in \verb+1manko3+.
The former sequence presents a particularly sparse crowd, making the distance among elements a strong cue for groups; but when the space among pedestrians is reduced, both intra- and inter-group distances (and consequently $d_{ph}$) become less significant. Conversely, the causality feature $d_{ca}$ becomes more important as the density increases, since pedestrians tend to follow each other to avoid getting separated from the rest of the group.
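In code, the affinity of Eq.~\eqref{eq:w_decomposed} is a linear score over the four dissimilarities. The sketch below mirrors the decomposition; the weight and distance values used in the example are purely illustrative:

```python
import numpy as np

def pairwise_affinity(w, d):
    """Affinity W^{ab}_d between two trajectories.

    w: length-8 weight vector [alpha (w1..w4), beta (w5..w8)].
    d: length-4 dissimilarity vector [d_ph, d_sh, d_ca, d_he] in [0, 1].
    """
    alpha = np.asarray(w[:4], dtype=float)
    beta = np.asarray(w[4:], dtype=float)
    d = np.asarray(d, dtype=float)
    # Equivalently: sum(alpha) - sum((alpha_i + beta_i) * d_i), i.e. the
    # constant term minus the (a, b)-dependent term of the equation.
    return float(alpha @ (1.0 - d) - beta @ d)
```

A positive affinity votes for placing the two pedestrians in the same cluster, a negative one against it, which is what the correlation clustering inference consumes.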
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/stacked_features.pdf}
\caption{Features normalized coefficients of Eq.~\eqref{eq:w_decomposed}.}
\label{fig:weight_imp_a}
\end{figure}
The importance of heat maps gains emphasis from comparing \verb+1manko3+ and \verb+3shatian6+ (Fig.~\ref{fig:more_results}), as they are very helpful in decoupling trajectories that stay very close in space but only for a very limited amount of time. In particular, in \verb+1manko3+, people crossing from opposite sides of the road tend to be very close when meeting in the middle, even if they are not in the same group.
\begin{table*}[t!]
\centering
\caption{Performance of detector \cite{DollarPAMI14pyramids}, tracker \cite{Milan:2014:CEM} and group detection algorithms (in terms of G-MITRE) in a fully automatic pipeline.}
\begin{tabular}{|l|c|c||c|c|c|c||c|c|c|c|c|c|c|c|}
\hline
$ $ & \multicolumn{2}{|c||}{Detector} & \multicolumn{4}{|c||}{Tracker} & \multicolumn{2}{|c|}{our proposal} & \multicolumn{2}{|c|}{\cite{ge_vision-based_2012}} & \multicolumn{2}{|c|}{\cite{yamaguchi_who_2011}} & \multicolumn{2}{|c|}{\cite{shao14}}\\
$ $ & $\mathcal{P}$ & $\mathcal{R}$ & MOT(A/P) & MT & IDS & FRG & $\mathcal{P}$ & $\mathcal{R}$ & $\mathcal{P}$ & $\mathcal{R}$ & $\mathcal{P}$ & $\mathcal{R}$ & $\mathcal{P}$ & $\mathcal{R}$\\
\hline
\verb+hotel+ & 43.1 & 52.4 & 66.9 / 0.88 & 18.8 & 120 & 34 & 77.9 & 76.9 & 75.7 & 78.0 & 46.3 & 38.6 & 60.2 & 57.5\\
\verb+eth+ & 68.2 & 53.7 & 92.3 / 0.08 & 75.0 & 0 & 68 & 81.1 & 79.7 & 78.4 & 79.3 & 58.3 & 70.6 & 57.3 & 61.2\\
\verb+student+ & 56.7 & 36.8 & 43.3 / 1.22 & 06.0 & 342 & 876 & 75.0 & 71.3 & 63.2 & 56.4 & 40.2 & 52.4 & 35.1 & 40.2\\
\hline
\end{tabular}
\label{tab:track}
\end{table*}
\subsection{Evaluating the Influence of Density Changes}
In this test setting we evaluate whether the feature weights learned by the Structural SVM of Sec.~\ref{sec:learning} are sufficiently general to deal with crowds at different densities and, at the same time, whether an online version of Alg.~\ref{BCFW} would bring any accuracy improvement. To this end we introduce a new video sequence, \verb+gal1+ from \emph{GVEII}, containing an average of $70$ pedestrians simultaneously present in the scene.
The distribution of pedestrians is not uniform though: both their number and their density, captured by the $d_\text{i/o}$ ratio, increase over time (Fig.~\ref{fig:GVEIIpeople}).
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/peculiarities}
\caption{Temporal evolution of the number of pedestrians and of the $d_\text{i/o}$ ratio in the \texttt{gal1} sequence of \emph{GVEII}.}
\label{fig:GVEIIpeople}
\end{figure}
In order to underline the importance of capturing changes in density, we compare the batch version of the training algorithm (Alg.~\ref{BCFW}) with a sequential and a fully online version (Fig.~\ref{fig:res_online_comp}). In the former case, examples are fed to the supervised training procedure in temporal order, one at a time; in the latter case, the weights are initialized to the batch-learned ones and, at each step, the algorithm learns from its previous prediction, thus without supervision.\\
The plot in Fig.~\ref{fig:res_online_comp} shows that the performance of the batch-trained version tends to decrease as the crowd density increases. While the sequential version of the algorithm performs better, it is slow to respond to sudden density changes, as in time window $15$. Indeed, a non-smooth density variation negatively affects the training process, leading to a performance drop that is only recovered in the subsequent temporal windows. This behavior is partially mitigated in the fully online version. Its higher performance is explained by an implicit regularization: using the prediction as training input discourages the learner from drastically modifying the weight vector, so the weights follow the smooth variation in crowd density by adjusting slightly over time.
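The regularizing effect can be illustrated with a schematic update rule: the fully online regime keeps a running interpolation between the current weights and each new per-window solution. This is only a caricature of the actual BCFW updates of Alg.~\ref{BCFW} (the step rule and `window_solutions` are hypothetical), meant to show why small steps yield the smooth adaptation observed above:

```python
def online_weights(w_init, window_solutions, lr=0.2):
    """Interpolate the current weight vector toward each per-window
    solution. A small lr prevents drastic jumps, mimicking the implicit
    regularization of learning from the method's own predictions."""
    w = list(w_init)
    history = [tuple(w)]
    for w_win in window_solutions:
        # Convex combination of old weights and the new window solution.
        w = [(1 - lr) * a + lr * b for a, b in zip(w, w_win)]
        history.append(tuple(w))
    return history
```

For instance, with lr=0.5, a weight starting at 1.0 that repeatedly sees window solutions of 0.0 decays geometrically (1.0, 0.5, 0.25, ...) instead of collapsing in a single step.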
\subsection{Performances on Real Detector and Tracker}
\label{exp:real}
Our algorithm assumes the availability of correct trajectories to detect groups, but what happens in a fully automatic video surveillance pipeline, where a people detector and a tracker are employed?
We carried out experiments
by extracting pedestrian positions through a state-of-the-art detector~\cite{DollarPAMI14pyramids} and obtaining trajectories by means of a continuous energy minimization method~\cite{Milan:2014:CEM}. We compare with Ge~\emph{et~al.}~\cite{ge_vision-based_2012}, Yamaguchi~\emph{et al.}~\cite{yamaguchi_who_2011} and Shao~\emph{et al.}~\cite{shao14} over the same input data; results are shown in Tab.~\ref{tab:track}. Our proposal outperforms the competitors even in the case of noisy trajectories.
Tracking performance evidences a high number of track fragments (FRG), mainly due to the localization error introduced by the automatic people detector in non-trivial crowded scenes. FRGs are proportional to the number of small new tracks created by the system instead of correctly associating previously tracked objects, with the consequence of splitting ideal tracks into temporally disjoint segments.\\
A high FRG number affects the group detection performance: the $d_{ph}$ and $d_{ca}$ features are computed only when trajectories are simultaneously present in the scene, and thus merging temporally disjoint fragments is strongly discouraged by the correlation clustering algorithm.
Intuitively, by reducing the size of the window we are able to minimize the number of split trajectories in each example and recover most of the original performance, as shown in Fig.~\ref{fig:detector_tracker}(c).
The improvement is achieved through the joint adoption of socially founded features and of structural learning, which weights the features according to the observed noisy trajectories. The experiment allows us to conclude that, even in the case of a real application with imprecise input data, the strengths of the proposed algorithm are maintained, because they are strongly related to the social rules that govern the group formation process; these rules are not data dependent and hold regardless of the applied feature extraction techniques.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/perfOnGVEII.pdf}
\caption{F-1 score comparison between differently trained versions of our method on \texttt{gal1} of \emph{GVEII}.}
\label{fig:res_online_comp}
\end{figure}
\begin{figure*}[t!]
\centering
\subfloat[]{
\includegraphics[width=0.28\textwidth]{images/0205_2}
}
\subfloat[]{
\includegraphics[width=0.28\textwidth]{images/0205_tracker}
}
\subfloat[]{
\includegraphics[width=0.4\textwidth]{images/variable_win.pdf}
}
\caption{Group detection results on \texttt{student003} are displayed when correct tracks are used (a) and when the automatic responses of the people detector and tracker are used as input (b). Regardless of the input noise, most of the groups can still be identified. This is due to the robustness of the features employed during learning and to the decrease in length of the time window (c), which prevents fragmented tracks from being split into different groups.}
\label{fig:detector_tracker}
\end{figure*}
\begin{figure*}[t!]
\centering
\subfloat[\texttt{1airport1}]{
\includegraphics[width=0.235\textwidth]{images/1airport1_000004}
}
\subfloat[\texttt{1manko3}]{
\includegraphics[width=0.235\textwidth]{images/1manko3_2}
}
\subfloat[\texttt{2jiansha5}]{
\includegraphics[width=0.235\textwidth]{images/2jiansha5}
}
\subfloat[\texttt{randomcross3}]{
\includegraphics[width=0.235\textwidth]{images/randomcross3}
}\\
\subfloat[\texttt{3shatian6}]{
\includegraphics[width=0.235\textwidth]{images/3shatian6}
}
\subfloat[\texttt{seq1}]{
\includegraphics[width=0.235\textwidth]{images/GVEII_0017.jpg}
}
\subfloat[\texttt{eth}]{
\includegraphics[width=0.235\textwidth]{images/eth.jpg}
}
\subfloat[\texttt{hotel}]{
\includegraphics[width=0.235\textwidth]{images/hotel.jpg}
}
\caption{Examples of groups detected through our method: sequences (a) to (e) are from \emph{MPT-$20$x$100$}, while (f) is part of \emph{GVEII}, and (g) and (h) belong to the \emph{BIWI} dataset. Groups are identified regardless of the scene context, and errors are visually acceptable, as in (d).}
\label{fig:more_results}
\end{figure*}
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{C}{rowd} phenomena are complex and their logic still escapes formal rules and precise social explanations.
Ultimately, the ambition of crowd analysis is to characterize people's behaviors, predict and prevent potentially dangerous situations and improve the well-being of communities.
This has been traditionally provided by simulation models~\cite{vizzari12} or automatic video analysis~\cite{ge_vision-based_2012}.
Recently, \emph{groups} have been recognized as the
basic elements which compose the crowd~\cite{moussaid_walking_2010},
leading to an intermediate level of abstraction placed between two opposing views: the crowd as a flow of indistinguishable people~\cite{Moore:2011:VCS:2043174.2043192} and its interpretation as a collection of individuals~\cite{PhysRevE.51.4282}.
Identifying groups is consequently a mandatory step to grasp the complex social dynamics ruling collective behaviors in crowds.
This poses new challenges for computer vision, since groups are definitely more difficult to characterize than pedestrians acting alone or than the crowd as a whole.
In this work, we propose a learning based solution for visually detecting groups in low/medium density crowds (Fig.~\ref{fig:crowd}), under the hypothesis that the \emph{concept of group} can be visually discerned and that people's trajectories can be extracted to some extent. The strong novelty of our approach is the joint adoption of sociologically grounded features and of a learning framework able to specialize the concept of group, accounting for different scenarios, motion constraints and crowd densities.
To this end, we adhere to a classical sociological interpretation of groups~\cite{turner81}, which can be formalized as follows.
\begin{definition}
\label{def:group}
A group is defined as two or more people interacting to reach a common goal and perceiving a shared membership, based on both physical and social identity.
\end{definition}
Accordingly, we propose a new formulation of the problem of detecting groups in crowds as a supervised \emph{Correlation Clustering} (CC)~\cite{bansal_correlation_2002}. We solve it through a \emph{Structural Support Vector Machines} (Structural SVM)~\cite{joachims_cutting-plane_2009} framework that learns a context-dependent distance measure based on a set of features inspired by Def.~\ref{def:group}, effective on both ground truth trajectories and automatically obtained tracklets. The design of socially grounded features is one of the main contributions of the work.
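For intuition, Correlation Clustering selects, among all partitions of the pedestrians, the one whose summed intra-cluster affinities are highest: positive affinities reward grouping two people together, negative ones reward separating them. A brute-force toy version, feasible only for a handful of trajectories (the actual inference relies on an approximate solver, not on enumeration):

```python
from itertools import combinations

def partitions(items):
    """All set partitions of a small list (Bell-number growth: toy sizes only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # Place `first` into each existing cluster in turn...
        for i, cluster in enumerate(part):
            yield part[:i] + [cluster + [first]] + part[i + 1:]
        # ...or into a cluster of its own (a singleton).
        yield [[first]] + part

def correlation_clustering(W):
    """Return the partition maximizing the sum of pairwise affinities
    W[a][b] over pairs that share a cluster."""
    n = len(W)
    def score(part):
        return sum(W[a][b] for cl in part for a, b in combinations(cl, 2))
    return max(partitions(list(range(n))), key=score)
```

With three pedestrians where the first two attract each other and both repel the third, the optimum is the pair plus a singleton, illustrating how the formulation needs no threshold on the number or size of groups.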
\begin{figure}[t!]
\centering
\subfloat[\texttt{student003}]{
\includegraphics[width=0.31\columnwidth]{images/university1.png}
}
\subfloat[\texttt{1shatian3}]{
\includegraphics[width=0.31\columnwidth]{images/1shatian3.png}
}
\subfloat[\texttt{1dawei1}]{
\includegraphics[width=0.31\columnwidth]{images/1dawei1.png}
}
\caption{Examples of social groups detected in crowds.}
\label{fig:crowd}
\end{figure}
Moreover, a new socially based \emph{loss function} ($G$-MITRE) is defined for the Structural SVM.
Differently from previous solutions~\cite{pellegrini_improving_2010,ge_vision-based_2012}, our approach does not rely on scene-dependent parameters that would limit the applicability of the method in real-world contexts. Finally, we also propose an online learning procedure that handles smooth variations in crowd composition and density, useful in online surveillance.
We annotated and made publicly available two new datasets: \emph{MPT-${\bf 20}$x${\bf 100}$} and \emph{GVEII} (see Sec.~\ref{sec:exp}). Our results on standard benchmarks, as well as on the proposed datasets, outperform those of current methods.
We strongly believe that an automatic system for group detection will influence future visual surveillance of public areas and will bring benefits to modeling and simulation applications for architectural planning, by providing real and precise observations of crowd phenomena.
\section{Related Work}
The modeling of pedestrian dynamics in crowds is a relatively recent research field. Most works are based on sociological paradigms,
and computer vision approaches have also evolved under the influence of these theories.
\subsubsection*{Modeling and Observing the Crowd}
Most of the research work has tried to tackle the crowd as an exclusively collective phenomenon, where individuality does not exist. This recalls the primitive \emph{Popular Mind Theory}~\cite{bon2003crowd} by Gustave Le Bon, where the crowd was defined as a ``pathological monster with no individual consciousness''. Accordingly, crowds have been analyzed by means of physical models (\emph{e.g.} hydrodynamics~\cite{Moore:2011:VCS:2043174.2043192}), neglecting the existence of single individual purposes and goals. However, these models are effective mainly in extremely dense crowds.
Conversely, many other approaches have been inspired by the 1970s \emph{Social Loafing Theory}~\cite{Ingham1974371}, which stated that individuality is a strong requirement for the pursuit of personal goals. Helbing's \emph{Social Force Model}~\cite{PhysRevE.51.4282}, which asserts that an individual's movements towards her goals are influenced by the surrounding pedestrians, has been the main building block for many crowd modeling and analysis works, ranging from abnormal behavior detection~\cite{5206641} to tracking~\cite{5509779}.
Recently, studies on people attending events have underlined that most people tend to move in groups and that social relations influence the way people behave in crowds~\cite{moussaid_walking_2010,bandini_crowd_2012}.
These empirical observations are supported by Reicher in the recent \emph{Social Identity Model of Deindividuation Effects}~\cite{Reicher95}, which assumes that crowd behavior is regulated by the social rules and behaviors groups choose to adopt. This is the main social paradigm underpinning our research too.
\subsubsection*{Visual Detection of Groups in Crowds}
Only recently has group detection shown promising results. The process is in fact built upon several open challenges in computer vision, from people detection and tracking in crowds~\cite{Rodriguez11} to the analysis and grouping of the extracted trajectories~\cite{solera_structured_2013}.
Some works employ the concept of \emph{F-formations} by Kendon~\cite{kendon1990conducting} to discern the group formation process. Broadly speaking, F-formations can be seen as specific positional and orientational patterns that people must sustain in order to be considered engaged in a social relationship. Despite robust results~\cite{cristani2011social}, this theory suits stationary groups only and is not defined for moving groups, a case which cannot be ignored in crowd analysis.
Thus, complementary approaches analyze pedestrians' motion paths; according to the type of available tracklets, they can be partitioned into group-based, individual-group joint and individual-based approaches.
In \textit{group-based} approaches, groups are considered as atomic entities in the scene, since no higher-level information can be cleanly extracted, typically due to heavy noise or the high complexity of crowded scenes~\cite{Feldmann11, shao14}. Since these models are often too simplistic to support further inference on group behavior, \textit{individual-group joint} approaches try to overcome the lack of finer information by hypothesizing trajectories while tracking groups at a coarser level~\cite{Pang08, Bazzani12}.
Finally, \textit{individual-based} tracking algorithms build upon single pedestrians' trajectories.
This kind of approach has been gaining momentum only recently, since tracking even in high density crowds is becoming a more feasible task every day~\cite{Rodriguez11}.
Pellegrini~\emph{et al.}~\cite{pellegrini_improving_2010} employ a Conditional Random Field to jointly predict trajectories and estimate group memberships, modeled as latent variables, over a short time window.
Yamaguchi~\emph{et al.}~\cite{yamaguchi_who_2011} predict whether two pedestrians are in the same group through a linear SVM on trivial distance, speed difference and time overlap information.
Recently, Chang~\emph{et al.}~\cite{Chang11} proposed a soft segmentation process to partition the crowd by constructing a weighted graph, where the edges represent the probability of individuals to belong to the same group.
An interesting unsupervised approach is Zanotto~\emph{et al.}~\cite{zanotto12}, where a potentially infinite mixture model is fitted on pedestrians, regarded as sampled observations from the mixture. Data and predictions from previous frames are used as prior information for the models (one for each group), but pairwise relations between individuals are neglected, as groups are modeled only through the mean position and velocity of their members.
Above all, we mention Ge~\emph{et al.}~\cite{ge_vision-based_2012}, who suggest the use of an agglomerative approach to cluster trajectories, as we do.
They hierarchically merge clusters by evaluating a well-founded sociological inter-group closeness measure defined on a combination of proximity and velocity features, stopping when a given condition is met.
Conversely, our method relies neither on fixed relative position or velocity thresholds~\cite{ge_vision-based_2012,zanotto12} nor on sequence-dependent parameters~\cite{pellegrini_improving_2010}; it is flexible and general, as the features are not scene-specific~\cite{Chang11} and their contribution is learned from examples. Thanks to the use of a clustering inference rule, solutions proposed by our method are partitions and not coverings of the members of the crowd~\cite{yamaguchi_who_2011}, meaning that pairwise relations are consistent with the overall group structure found. Moreover, the use of a time window to predict groups lets the method recognize that non-trivial behaviors (\emph{e.g.}~neglecting strict proximity) may occur, whereas frame-by-frame methods are limited to short term reasoning~\cite{zanotto12}. Yet, the discriminative nature of the employed framework makes learning appealing in terms of both required data and computational cost, as opposed to graphical models optimizing over a multiple hypothesis space~\cite{pellegrini_improving_2010}.\\
\noindent This work extends our preliminary attempt in~\cite{solera_structured_2013}. Here we prove that our proposal complies with social theories of group formation, we devise and investigate new features to better adhere to the sociological theory underpinning our method and, finally, extend the tests to new, remarkably complex datasets and compare with more recent competing algorithms.
Besides, the experiments further probe the need for learning when dealing with heterogeneous crowds, shedding light on the nature of the problem itself.
\section{Problem Definition}
\label{sec:problem_def}
We cast the group detection task as a clustering problem. Consider a set of pedestrians $M = \{a, b, \dots\}$ and let $\mathcal{Y}(M)$ be the set of all possible ways to partition $M$. Defining $y$ as a subset of pedestrians (also referred to as a group or cluster) in $M$, a generic set of subsets ${\bf y}~=~\{y_1, y_2, \dots\}$ is a valid solution in $\mathcal{Y}(M)$ if the partitioning axioms are satisfied: $\forall a\in M, \exists!\,y\in{\bf y}:a\in y$ and $\cup_{y\in{\bf y}}y = M$.
Here, we call \emph{singletons} those pedestrians whose cluster is composed by themselves only, \emph{i.e.} $|y| = 1$.
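The two partitioning axioms above can be checked mechanically; a minimal sketch (function name and data layout are illustrative, not part of the method):

```python
from itertools import chain

def is_valid_partition(M, y):
    """Check the partitioning axioms: every pedestrian in M belongs to
    exactly one cluster of y, and the clusters jointly cover M."""
    members = list(chain.from_iterable(y))
    return set(members) == set(M) and len(members) == len(M)

M = {"a", "b", "c"}
is_valid_partition(M, [{"a", "b"}, {"c"}])       # a partition: valid
is_valid_partition(M, [{"a", "b"}, {"b", "c"}])  # a covering: "b" twice, invalid
```

Note the second call fails precisely because coverings, unlike partitions, allow a pedestrian to appear in more than one group.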
In crowded contexts, this grouping cannot be solved by exploiting spatial (positional or orientational) information only, as proposed in F-formation theory, due both to confusion and motion. Moreover, it is often the case that the physical distance between a singleton and a member of a cluster is lower than that cluster's mean intra-member distance. This is due to the fact that, in real situations, social aspects heavily intervene in the group formation process.
In order to obtain crowd partitions that are meaningful from a sociological point of view,
the following relevant properties of social groups
must hold.
{\bf Hierarchical Coherence.}
Groups are composed of individuals and sub-groups in a recursive fashion (Fig.~\ref{fig:propertya}). This was first observed in the seminal work of Canetti~\cite{canetti_crowds_1984}, based on the assumption that members within a group cannot erase already settled relationships as the crowd assembles.
{\bf Density Invariance.} To keep their group identities preserved at different crowd densities, members must be willing to change the inner distance among them. Groups in very crowded scenes will be more closed and compact, while groups in open spaces will tend to exhibit more dilated patterns (Fig.~\ref{fig:propertyb}); sociological and empirical evidence can be found in Bandini~\emph{et al.}~\cite{bandini_crowd_2012} and in Moussaid~\emph{et al.}~\cite{moussaid_walking_2010}.
{\bf Transitivity.} Not every member of a group needs to be strictly connected with everyone else; rather, any two members may be part of the same group by means of a sufficiently dense subgroup of pedestrians standing between them (Fig.~\ref{fig:propertyc}). McPhail and Wohlstein's work~\cite{mcphail_using_1982} formalized this idea: to be considered part of a group one will typically have to be connected with at least half of the members.
\begin{figure}[t!]
\centering
\subfloat[]{
\includegraphics[width=0.3\columnwidth]{images/hc}
\label{fig:propertya}
}~
\subfloat[]{
\includegraphics[width=0.3\columnwidth]{images/di}
\label{fig:propertyb}
}~
\subfloat[] {
\includegraphics[width=0.3\columnwidth]{images/t}
\label{fig:propertyc}
}
\caption{Highlights of social groups properties: (a) \emph{hierarchical coherence}, (b) \emph{density invariance} and (c) \emph{transitivity}.}
\end{figure}
\section{Socially Constrained Clustering for Group Detection}
\label{sec:solution}
We propose to solve the crowd partitioning problem employing \emph{Correlation Clustering} (CC)~\cite{bansal_correlation_2002}, and we prove that it is possible to achieve a quasi-optimal crowd partition guaranteed to satisfy the three aforementioned properties of Sec.~\ref{sec:problem_def}.
The CC algorithm takes as input an affinity matrix $W$ where, if $W^{ab}>0$ ($W^{ab}<0$), elements $a$ and $b$ belong to the same (different) cluster with certainty $|W^{ab}|$. The algorithm returns the partition ${\bf y}$ of a set of elements $M=\{a, b, \dots\}$ so that the sum of the affinities between item pairs within the same cluster $y$ is maximized:
\begin{equation}
\label{eq:correlation_clustering_objective}
\text{CC} = \arg\max
_{{\bf y}\in\mathcal{Y}(M)}\sum_{y\in{\bf y}}\sum_{a\neq b\in y}W^{ab}_{\bf d}.
\end{equation}
The pairwise affinity in $W$ is parameterized as a weighted linear combination of a bounded dissimilarity measure and its complement:
\begin{equation}
\label{eq:cc_affinity_parametrization}
W^{ab}_{\bf d} = {\boldsymbol\alpha}^T ({\bf 1} - {\bf d}(a, b)) - {\boldsymbol\beta}^T {\bf d}(a, b).
\end{equation}
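As a minimal sketch of Eq.~\eqref{eq:cc_affinity_parametrization} (the feature vector ${\bf d}$ is defined only later, in Sec.~\ref{sec:features}, so here it is an arbitrary vector in $[0,1]^p$):

```python
def affinity(d, alpha, beta):
    """W^{ab} = alpha^T (1 - d) - beta^T d for a bounded feature vector
    d in [0, 1]^p: positive values vote 'same group', negative 'different'."""
    return sum(a * (1.0 - x) - b * x for a, b, x in zip(alpha, beta, d))

# With alpha = beta = 1 the sign of each feature's contribution
# flips at d = 0.5: perfectly similar pairs score +p, dissimilar pairs -p.
affinity([0.0, 0.0], [1.0, 1.0], [1.0, 1.0])  # -> 2.0
affinity([1.0, 1.0], [1.0, 1.0], [1.0, 1.0])  # -> -2.0
```

Learning amounts to choosing ${\boldsymbol\alpha}$ and ${\boldsymbol\beta}$ so that the zero crossing of each feature lands where the training data says group boundaries lie.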
To be consistent with the definition of groups of Sec.~\ref{sec:intro}, we devise the pairwise distance ${\bf d}(a, b)$ between pedestrians $a$ and $b$ as detailed in Sec.~\ref{sec:features}.
In clustering theory, changing the dissimilarity space results in different partitioning of the domain through the same algorithm.
By tuning the $[{\boldsymbol\alpha}, {\boldsymbol\beta}]$ parameters in Eq.~\eqref{eq:cc_affinity_parametrization} we can evaluate many different groupings, and we show that, under a restricted set of hypotheses, they all satisfy the social properties previously mentioned.
In order to efficiently learn those parameters according to the different peculiarities that groups exhibit in different scenarios, in Sec.~\ref{sec:learning} we introduce Structural SVM~\cite{tsochantaridis_large_2005} with both an approximated inference procedure and a loss function specifically designed to accurately measure the compatibility among possible crowd partitions.
\\
The solution to Eq.~\eqref{eq:correlation_clustering_objective}, given the parametrization introduced in Eq.~\eqref{eq:cc_affinity_parametrization} and subject to a hierarchical inference procedure, guarantees the satisfaction of all the social groups properties:
\begin{theorem}
When the pairwise elements affinity in $W$ is a weighted linear combination of a bounded similarity measure and its complement, a bottom-up approximated solution to CC produces a partition that respects the hierarchical coherence, density invariance and transitivity properties of social groups.
\end{theorem}
\begin{proof}
Let ${\bf d}:M\times M\rightarrow[0, 1]^p$ be a bounded distance on the set of members of a crowd $M$ so that $(M, {\bf d})$ is a dissimilarity space and suppose the affinity matrix of CC is constructed as in Eq.~\eqref{eq:cc_affinity_parametrization}, for some appropriate positive values of ${\boldsymbol\alpha}, {\boldsymbol\beta} \in \mathbb{R}^p$. To demonstrate that the \emph{density invariance} holds for all solutions of CC, consider that when the density increases, both distances between groups and between members of the same group diminish. This phenomenon is a less formal statement of the scale invariance axiom of clustering defined by Kleinberg~\cite{kleinberg_impossibility_2002}, which is known to hold for sum-of-pairs clustering algorithms.
We must thus show that it also holds when maximizing affinities instead of minimizing distances. To this aim let ${\bf d} = \lambda\bar{\bf d}$ and $\bar{\bf d}:M\times M\rightarrow[0, \frac{1}{\lambda}]^p$ so that
\begin{equation}
\begin{aligned}
W_{\bf d} &= {\boldsymbol\alpha}^T ({\bf 1} - \lambda\bar{\bf d}) - {\boldsymbol\beta}^T \lambda\bar{\bf d}\\
&= \lambda[{\boldsymbol\alpha}^T(\frac{{\bf 1}}{\lambda} - \bar{\bf d})-{\boldsymbol\beta}^T\bar{\bf d}] = \lambda W_{\bar{\bf d}},
\end{aligned}
\end{equation}
where the notation for the elements is dropped for clarity. Consequently, CC satisfies the scale invariance axiom since multiplying all distances by a constant results in multiplying the total affinity of each cluster by a constant and hence the maximum affinity clustering solution is not changed. \emph{Transitivity} follows directly from the objective function of CC in Eq.~\eqref{eq:correlation_clustering_objective}:
for two members to be assigned to the same group it suffices that there exist other members such that the net effect of all the involved pairwise relations is non-decreasing.
Last, the \emph{hierarchical coherence} requires a greedy approximation algorithm for CC that initially considers each pedestrian as its own cluster and then iteratively merges the two clusters whose union produces the best clustering score, stopping when joining clusters would decrease the overall affinity.
Hence, elements in the same cluster at lower levels of the hierarchy are also together in higher level clusters.
\end{proof}
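The bottom-up approximation used in the proof can be sketched as follows (pair affinities are assumed precomputed, e.g.~via Eq.~\eqref{eq:cc_affinity_parametrization}; names and the dict-based affinity storage are illustrative):

```python
def greedy_cc(members, W):
    """Bottom-up approximate Correlation Clustering: start from singletons,
    repeatedly merge the pair of clusters with the largest positive affinity
    gain, and stop when every possible merge would decrease the score."""
    clusters = [{m} for m in members]

    def gain(c1, c2):
        # Sum of signed affinities across the two clusters (0 if unspecified).
        return sum(W.get((a, b), W.get((b, a), 0.0)) for a in c1 for b in c2)

    while len(clusters) > 1:
        best, (i, j) = max(
            (gain(c1, c2), (i, j))
            for i, c1 in enumerate(clusters)
            for j, c2 in enumerate(clusters) if i < j)
        if best <= 0:
            break
        clusters[i] |= clusters.pop(j)
    return clusters

# Transitivity in action: a and c end up together despite W[(a,c)] < 0,
# because b provides enough positive affinity; d stays a singleton.
W = {("a", "b"): 2.0, ("b", "c"): 2.0, ("a", "c"): -1.0,
     ("a", "d"): -2.0, ("b", "d"): -2.0, ("c", "d"): -2.0}
greedy_cc(["a", "b", "c", "d"], W)  # -> [{'a', 'b', 'c'}, {'d'}]
```

Since merges are never undone, clusters at lower levels stay together at higher levels, which is exactly the hierarchical coherence argument of the proof.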
\begin{figure*}[t!]
\centering
\subfloat[Physical distance]{
\includegraphics[width=0.22\textwidth]{images/d_ph_print.pdf}
\label{fig:d_ph}
}
~
\subfloat[Motion causality]{
\includegraphics[width=0.22\textwidth]{images/d_ca_print.pdf}
\label{fig:d_ca}
}
~
\subfloat[Trajectory shape]{
\includegraphics[width=0.22\textwidth]{images/d_sh_print.pdf}
\label{fig:d_sh}
}
~
\subfloat[Paths convergence]{
\includegraphics[width=0.22\textwidth]{images/d_he_print.pdf}
\label{fig:d_he}
}
\caption{Features: physical identity (a) and social identity (b,c) provide a computational interpretation of the concept of group membership, while (d) evaluates the likeliness of the existence of a shared goal between pedestrians.}
\label{fig:features}
\vspace{-0.5cm}
\end{figure*}
\section{Social Features for Social Groups}
\label{sec:features}
Given the problem formulation in Sec.~\ref{sec:problem_def} and the CC parametrization of Eq.~\eqref{eq:cc_affinity_parametrization}, here we define the distance function ${\bf d}$ which acts on trajectory pairs.
We consider the pedestrian trajectory $T_a = \{(t, {\bf p}_a^t)\}_t$, projected onto the ground plane, as a multivariate time series of metric (in meters) spatial observations ${\bf p}_a^t$ for pedestrian $a$ at different times $t$.
In order to deal with the continuously changing nature of groups (splitting, merging, switching members, $\dots$) we reduce the observation period to a time window $\mathcal{T}$ of fixed length. As a consequence, groups can be detected differently even between (potentially overlapping) sequential time windows $\mathcal{T}^k$ and $\mathcal{T}^{k+1}$.
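The sliding-window slicing can be sketched as follows; the window length and stride (in frames) are illustrative choices, not values prescribed by the method:

```python
def time_windows(trajectory, length=40, stride=20):
    """Split a trajectory, given as a dict {t: (x, y)}, into fixed-length,
    potentially overlapping time windows T^k; groups are then re-detected
    independently inside each window."""
    if not trajectory:
        return []
    t0, t1 = min(trajectory), max(trajectory)
    windows = []
    start = t0
    while start <= t1:
        win = {t: p for t, p in trajectory.items()
               if start <= t < start + length}
        if win:
            windows.append(win)
        start += stride
    return windows
```

With `stride < length`, consecutive windows share frames, so a group that splits mid-window can still be recovered cleanly in the next one.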
According to Def.~\ref{def:group},
we devise four features able to capture both the pedestrian physical and social identity as well as to discern the presence of a shared goal among them,
namely: \textit{physical identity} $d_\text{ph}$, \textit{trajectories shape-similarity} $d_\text{sh}$, \textit{pedestrians causality} $d_\text{ca}$ and \textit{heat maps} $d_\text{he}$.
A pairwise feature vector ${\bf d}^k(a, b)$ is hence defined for every pair of trajectories $T_a$ and $T_b$ and for every time window $\mathcal{T}^k$, as
\begin{equation}
{\bf d}(a,b)\stackrel{\text{\tiny def}}{=} {\bf d}^k(a,b) = [d_\text{ph}, d_\text{sh}, d_\text{ca}, d_\text{he}]_{a,b}^k.
\end{equation}
\subsection{From Physical Distances to Physical Identity}
\begin{figure}[t]
\center
\subfloat[]{
\includegraphics[width=0.60\columnwidth]{images/prox3.jpg}
}
\subfloat[]{
\includegraphics[width=0.30\columnwidth]{images/GMM.pdf}
}
\caption{Proxemics, modeled by Gaussians (b), reveal physical identity through physical distance (a).}
\label{fig:GMM}
\end{figure}
The \emph{physical identity} can be regarded as a static relation connecting physical distance to group membership.
In his \emph{Proxemic Theory}, Hall~\cite{hall66} focused on the physical interactions between pairs of individuals. More precisely, the theory is about ``the study of ways in which man gains knowledge of the content of other men's minds through judgments of behaviour patterns associated with varying degrees of proximity to them.''
\begin{table}[t!]
\center
\caption{Proxemics characterization as found in Hall's Theory.}
\begin{tabular}{|l|c|l|}
\hline
\multicolumn{1}{|c|}{\bfseries space} & \multicolumn{1}{c|}{\bfseries boundaries ($m$)} & \multicolumn{1}{c|}{\bfseries description}\\
\hline
intimate & 0.0 - 0.5 & unmistakable involvement\\
\hline
personal & 0.5 - 1.2 & familiar interactions\\
\hline
social & 1.2 - 3.7 & formal relationships\\
\hline
public & 3.7 - 7.6 & non-personal interactions\\
\hline
\end{tabular}
\label{tab:prox}
\end{table}
The proxemic model formalizes how people use physical space in interpersonal interactions and
defines a set of concentric bubbles around every individual, as depicted in Fig.~\ref{fig:d_ph}.
Nevertheless, the transition between the four different proxemic zones is abrupt (Tab.~\ref{tab:prox}).
Spatial quantization can be heavily affected by noise or errors, leading to wrong classifications.
Several approaches assign a score to proxemic classes in order to obtain a continuous real-valued similarity measure~\cite{6239351,6113127,vizzari12}.
To grasp the distance-based characteristics of group formation, we relax Hall's original quantization by employing a Gaussian Mixture Model (GMM) on the ground plane, centered on the person's location and with fixed proxemics-inspired covariance matrices.
The resulting GMM is a weighted sum of zero mean Gaussians with diagonal covariance matrices reflecting Hall's boundaries (\emph{i.e.} $\Sigma_1\leftarrow0.5$, $\Sigma_2\leftarrow1.2$, \ldots):
\begin{equation}
\text{GMM}({\bf p}_a^t-{\bf p}_b^t)=\frac{1}{4}\sum\limits_{z=1}^4 \mathcal{N}( {\bf p}_a^t-{\bf p}_b^t \vert 0,\Sigma_z)
\label{eq:GMM}
\end{equation}
Given a pair of trajectories $T_a$ and $T_b$ we evaluate the mixture model of Eq. \eqref{eq:GMM} on the vector of distances at each time instance.
This is equivalent to placing the mixture on ${\bf p}_a^t$ and measuring where the point ${\bf p}_b^t$ lies in the proxemic space at each instant $t$, as shown in Fig.~\ref{fig:GMM} and in Fig.~\ref{fig:prox2}.
The static measure of social cohesion, called $d_\text{ph}$, is then defined by averaging the mixture model responses over the set of time instances where trajectories $T_a$ and $T_b$ are simultaneously present in the current time window, $\overline{\mathcal{T}}\subseteq\mathcal{T}^k$:
\begin{equation}
d_\text{ph}^k(a,b) = \frac{1}{|\overline{\mathcal{T}}|}\sum_{t\in\overline{\mathcal{T}}}\text{GMM}({\bf p}_a^t-{\bf p}_b^t)
\end{equation}
Averaging is required since the physical identity among group members is established in time and must remain coherent in order to be a valid measure of social cohesion.
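A sketch of the proxemic feature, reading the four Hall boundaries as isotropic standard deviations of the zero-mean components (one possible reading of the fixed covariances above; the trajectory encoding is illustrative):

```python
import math

HALL_SIGMAS = [0.5, 1.2, 3.7, 7.6]  # intimate, personal, social, public (m)

def gmm(dx, dy):
    """Equal-weight mixture of zero-mean 2-D isotropic Gaussians evaluated
    on the displacement between two pedestrians."""
    return sum(
        math.exp(-(dx * dx + dy * dy) / (2 * s * s)) / (2 * math.pi * s * s)
        for s in HALL_SIGMAS) / len(HALL_SIGMAS)

def d_ph(Ta, Tb):
    """Average mixture response over the instants where both trajectories
    (dicts {t: (x, y)} in ground-plane meters) are simultaneously present."""
    common = set(Ta) & set(Tb)
    return sum(
        gmm(Ta[t][0] - Tb[t][0], Ta[t][1] - Tb[t][1]) for t in common) / len(common)
```

Note the response grows with proximity, i.e.~it is a similarity; before entering the bounded dissimilarity vector ${\bf d}\in[0,1]^p$ it would be normalized and complemented accordingly.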
\subsection{Motion as an Indicator of Social Identity}
\emph{Social Identity}~\cite{haslam2004psychology,turner81} is a psychological paradigm built on the intuition that group behavior is an emerging dynamic, reflecting a shift in self-conception of the members who start to define themselves in terms of their common membership.
According to~\cite{Oldmeadow05task-groupsas}, social identity
reflects in the way people mutually influence each other and consequently move in groups, suggesting that
social identity can be observed through trajectory shape similarity and path temporal causality.
\subsubsection{Temporal Causality}
Under the hypothesis of sufficiently stationary trajectories, which is typically true for the observation of a time window, we can employ the econometric model of Granger causality~\cite{granger_investigating_1969} to measure to what extent pedestrians are mutually affecting their motion paths~\cite{couzin_effective_2005}. Accordingly, we formalize two requirements:
\begin{enumerate}
\item the causal pedestrian will move before the effect pedestrian, and
\item the motion of the causal pedestrian contains information about the way the effect pedestrian moves that cannot be found in any other pedestrian motion.
\end{enumerate}
A consequence of these statements is that the causal pedestrian's trajectory can help forecast the effect pedestrian's trajectory even after other data has first been used. Let us define $m$ as the lag value for the causality analysis and denote by $P_t(T_a|\bar{T}_a(t-m))$ the optimum least-squares predictor of a stationary trajectory $T_a$ at time $t$ using the set of values $\bar{T}_a(t-m)$. Here $\bar{T}_a(t-m)$ is all the information about trajectory $T_a$ accumulated since time $t-m$ (inside the current time window $\mathcal{T}^k$) up to time $t-1$. The predictive error series is denoted by $\varepsilon_t(T_a|\bar{T}_a(t-m)) = T_a(t) - P_t(T_a|\bar{T}_a(t-m))$, and $\sigma^2(T_a|\bar{T}_a(t-m))$ is defined as the variance of $\varepsilon_t(T_a|\bar{T}_a(t-m))$. Trajectory $T_b$ is said to \emph{Granger-cause} $T_a$, briefly $b\rightarrow a$, if
\begin{equation}
\sigma^2(T_a|\bar{T}_a(t-m)) > \sigma^2(T_a|\bar{T}_a(t-m), \bar{T}_b(t-m))
\end{equation}
\noindent The feature is then derived from a specific testing procedure used to evaluate Granger causality trustworthiness.
Let us introduce the sum of squared residuals for the constrained and unconstrained models as
\begin{equation}
\begin{aligned}
RSS_c &= \sum_{t=1}^{K}\varepsilon_t(T_a|\bar{T}_a(t-m))^2 \quad\text{and}\quad\\
RSS_u &= \sum_{t=1}^{K}\varepsilon_t(T_a|\bar{T}_a(t-m), \bar{T}_b(t-m))^2,
\end{aligned}
\end{equation}
where $K$ is the number of samples considered for the analysis.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/FS_distribution.pdf}
\caption{Visual example of causality probability. The vertical line is the $S$ of Eq.~\eqref{eq:ca_S} while the shaded area is $d_\text{ca}$.}
\label{fig:caus_S}
\end{figure}
We design our feature $d_\text{ca}$ so as to be the critical confidence measure of the hypothesis that Granger causality exists between $T_a$ and $T_b$. To this end, we consider the test statistic
\begin{equation}
\label{eq:ca_S}
S_{b\rightarrow a} = \frac{(RSS_c-RSS_u)/m}{RSS_u/(K-2m-1)}
\end{equation}
and compute the area under the Fisher-Snedecor probability density $\mathcal{F}$ to the left of $S$, as shown in Fig.~\ref{fig:caus_S}. This results in the following closed-form integral~\cite{hazewinkel_encyclopaedia_1989}:
\begin{equation}
d^k_\text{ca}(a,b) = \max_{S\in\{S_{b\rightarrow a}, S_{a\rightarrow b}\}}\int_{0}^{S} \mathcal{F}(x\vert m, K-2m-1)\mathrm{d}x,
\end{equation}
where $S_{b\rightarrow a}$ and $S_{a\rightarrow b}$ are both considered in order to obtain symmetry; but, as we value the existence of causality over its direction, we only keep the one which maximizes the probability.
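The statistic $S$ can be sketched in plain Python on one scalar coordinate with lag $m=1$ (the actual feature uses the 2-D trajectories and integrates the Fisher-Snedecor density; here we only fit the two autoregressive models by least squares on the normal equations and compare the raw statistic):

```python
def _solve(A, b):
    # Gaussian elimination with partial pivoting (tiny normal-equation systems).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def _rss(rows, y):
    # Residual sum of squares of the least-squares fit y ~ rows.
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * v for r, v in zip(rows, y)) for i in range(p)]
    beta = _solve(A, b)
    return sum((v - sum(bk * rk for bk, rk in zip(beta, r))) ** 2
               for r, v in zip(rows, y))

def granger_S(xa, xb, m=1):
    """Test statistic S_{b->a}: does the past of xb reduce the prediction
    error of xa beyond what xa's own past already achieves?"""
    K = len(xa) - m
    y = xa[m:]
    restr = [[1.0] + [xa[t - k] for k in range(1, m + 1)]
             for t in range(m, len(xa))]
    full = [row + [xb[t - k] for k in range(1, m + 1)]
            for row, t in zip(restr, range(m, len(xa)))]
    rss_c, rss_u = _rss(restr, y), _rss(full, y)
    return ((rss_c - rss_u) / m) / (rss_u / (K - 2 * m - 1))
```

On simulated data where pedestrian $b$ leads and $a$ follows, $S_{b\rightarrow a}$ is orders of magnitude larger than the statistic computed against an unrelated pedestrian.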
\subsubsection{Shape Similarity}
Shape similarity may also be useful in describing social identity, as it overcomes the limitation of the punctual and static evaluation of proxemics.
We use Dynamic Time Warping (DTW)~\cite{berndt_using_1994} on Euclidean coordinates to map one time series to another by minimizing the distance between the two. In particular, DTW's flexibility allows two time series that are similar but locally out of phase to align in a non-linear manner.
Suppose we have two trajectories $T_a$ and $T_b$ of lengths $A$ and $B$ respectively.
To align these two sequences using DTW, we first construct a distance matrix $\{D^{ij}_{ab}\}_{ij}\in\mathbb{R}^{A\times B}$ that encodes the squared Euclidean distance between any $i$-th element of $T_a$ and $j$-th element of $T_b$ inside the current time window.
The best alignment can be found by a recursive minimization of the cumulative cost $\gamma_{ab}$ of any path through the distance matrix originating in $D_{ab}^{11}$:
\begin{equation}
\gamma_{ab}(i, j) = D^{ij}_{ab} + \min\{\gamma_{ab}(i\text{-}1, j), \gamma_{ab}(i\text{-}1, j\text{-}1), \gamma_{ab}(i, j\text{-}1)\}.
\end{equation}
In particular, we construct our feature to be the distance of the two sequences once they are optimally aligned, that is, the sum of the squared Euclidean distances of associated points of $T_a$ and $T_b$:
\begin{equation}
d_\text{sh}(a,b) = \gamma_{ab}(A, B)/\max(A,B)
\end{equation}
where the denominator is the optimal warping path length used as a normalization factor.
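The recursion above translates directly into a dynamic program over the cost matrix (a minimal sketch with squared Euclidean ground cost and the $\max(A,B)$ normalization; trajectories are lists of $(x, y)$ points):

```python
def d_sh(Ta, Tb):
    """DTW-based shape dissimilarity: cumulative cost gamma(A, B) of the
    optimal monotone alignment, normalized by the longer sequence length."""
    A, B = len(Ta), len(Tb)
    INF = float("inf")
    g = [[INF] * (B + 1) for _ in range(A + 1)]
    g[0][0] = 0.0
    for i in range(1, A + 1):
        for j in range(1, B + 1):
            cost = ((Ta[i - 1][0] - Tb[j - 1][0]) ** 2
                    + (Ta[i - 1][1] - Tb[j - 1][1]) ** 2)
            # gamma(i, j) = D_ij + min of the three admissible predecessors.
            g[i][j] = cost + min(g[i - 1][j], g[i - 1][j - 1], g[i][j - 1])
    return g[A][B] / max(A, B)

Ta = [(float(t), 0.0) for t in range(10)]
Tb = [(float(t), 0.5) for t in range(10)]   # parallel, 0.5 m apart
Tc = [(float(t), 0.4 * t) for t in range(10)]  # diverging
d_sh(Ta, Tb)  # -> 0.25 (diagonal alignment, cost 0.25 per step)
```

Parallel trajectories score a small, constant dissimilarity, while diverging ones accumulate cost, which is exactly the behavior the feature needs.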
\subsection{Common Goals from People Motion}
\label{sec:heatmaps}
The previously described features focus on both static and dynamic aspects of trajectories when groups are already established, but neglect the smooth process of group formation. People may merge into groups starting from different locations (\emph{e.g.}~a meeting action) or groups may split into subgroups and singletons (according to the \textit{hierarchical coherence} property of group formation).
Meeting or being close for a sufficient amount of time may indicate the presence of a shared goal. Following the results in \cite{lin_heat-map-based_2013}, where heat maps were used to recognize group activities, we also employ a heat map inspired feature to holistically model groups.
A heat map $H_a:\mathbb{N}_R\times\mathbb{N}_C\rightarrow[0,1]$ associated to the trajectory $T_a$ is an $R$-by-$C$ grid of heat sources $h_a$ that partitions the ground plane. The heat source $h_a(i,j)$ activates if trajectory $T_a$ passes through the corresponding grid cell $(i,j)$, and once activated it is subject to thermal decay and thermal diffusion processes:
\begin{equation}
H_a(i,j) = \sum_{p=1}^R\sum_{q=1}^C E_a(p, q) \cdot e^{-k_s\|(p-i,q-j)\|},
\end{equation}
where $k_s$ is a parameter suggesting the relative importance of different patches at different distances and $E_a(p, q)$ is the thermal energy produced by $T_a$ on the patch $(p, q)$. If we let $\bar{E}_a(p, q)$ be the accumulated thermal energy, we have
\begin{equation}
E_a(p, q) = \bar{E}_a(p, q)\cdot e^{-k_rt_\text{int}},
\end{equation}
where $k_r$ is a parameter regulating the slow down of the heat accumulation and dispersion and $t_\text{int}$ is the duration of the interaction between pedestrian $a$ and cell $(p,q)$ inside the current time window $\mathcal{T}^k$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.93\columnwidth]{images/hm2.pdf}
\caption{Intersecting heat maps are generated by converging trajectories, which project on the $xy$ plane their shared goal.}
\label{fig:hm_feature}
\end{figure}
Once we have constructed heat maps for every trajectory, we define a similarity metric between two trajectories $T_a$ and $T_b$ as the volume under the combined heat surface $\Upsilon_{ab}$ obtained as the pointwise product of the two heat maps $H_a$ and $H_b$:
\begin{equation}
d^k_\text{he}(a,b) = \sum_{i=1}^R\sum_{j=1}^C \Upsilon_{ab}(i,j) = \sum_{i=1}^R\sum_{j=1}^C H_a(i,j)H_b(i,j)
\end{equation}
The volume under $\Upsilon_{ab}$ reveals to what extent $T_a$ and $T_b$ have been close in space during the observation period, something that proxemics could already measure. Nevertheless, heat maps relax the constraint that only elements from the same frame can be compared; in practice, this is accomplished through the thermal diffusion process.
At the same time, heat maps also expose the history of their respective trajectories, allowing the metric to capture the temporal aspect of motion similarity.
Proxemics, DTW and Granger causality would rate two pedestrians meeting and two pedestrians parting ways analogously, even if the former case is more likely to represent a group formation process.
Recognizing that motion trajectories also encode temporal information is a great advantage of heat-map-based analysis.
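The heat-map feature can be sketched as follows; taking the accumulated energy $\bar{E}_a$ as $1$ per visited cell is a simplifying assumption of this sketch, and the parameter values are illustrative:

```python
import math

def heat_map(visits, R, C, k_s=1.0, k_r=0.1):
    """Heat map H_a on an R-by-C grid. `visits` maps a cell (p, q) to the
    interaction time t_int of the trajectory with that cell; thermal decay
    e^{-k_r t} and diffusion e^{-k_s ||.||} follow the equations above."""
    E = {pq: math.exp(-k_r * t) for pq, t in visits.items()}
    return [[sum(e * math.exp(-k_s * math.hypot(p - i, q - j))
                 for (p, q), e in E.items())
             for j in range(C)] for i in range(R)]

def d_he(Ha, Hb):
    # Volume under the pointwise product of the two heat maps.
    return sum(a * b for ra, rb in zip(Ha, Hb) for a, b in zip(ra, rb))
```

Two trajectories visiting neighboring cells, even at different times, produce overlapping heat and a large $d_\text{he}$, while spatially disjoint trajectories score close to zero.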
\section{Learning Framework}
\label{sec:learning}
The linear parametrization of the affinity matrix $W_{\bf d}$ of Eq.~\eqref{eq:cc_affinity_parametrization} guarantees to reach a partition of the crowd which is consistent with the social groups properties. The parameters ${\bf w} = [{\boldsymbol\alpha}, {\boldsymbol\beta}]$ govern both the importance of each feature alone and their similarity/dissimilarity optimal combinations, resulting in different clustering rules.
The choice of the best rule should account for all factors affecting the group formation process, such as environmental constraints or cultural influences.
The complexity of explicitly evaluating these factors resides in the impossibility of directly observing them. Still, we can gain important insights by observing the grouping process. On these premises, we adopt a learning framework capable of choosing the most suitable clustering rule by finding a set of feature weights that implicitly embodies these non-observable aspects.
\subsection{Supervised CC Through Structured Learning}
Let us consider the input ${\bf x}_i = \{[{\bf 1} - {\bf d}^i(a,b); {\bf d}^i(a,b)]\}_{a,b}$ to be the set of pairwise features computed on all the possible pairs of trajectories $T_a$ and $T_b$ in the $i$-th temporal window and ${\bf y}_i$ the clustering solution, \emph{i.e.} the set of all social groups appearing in the crowd $M_i$. Since ${\bf y}_i$ cannot be described by a single valued function, we adopt the
Structural SVM~\cite{tsochantaridis_large_2005} framework to model and learn predicting the solution.
The goal is to learn a classification mapping $f:\mathcal{X}\rightarrow\mathcal{Y}$ between input space $\mathcal{X}$ and structured output space $\mathcal{Y}$ given a set of input-output pairs $\{({\bf x}_1, {\bf y}_1),\dots,({\bf x}_n, {\bf y}_n)\}$.
A discriminant score function $F:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}$ is defined over the joint input-output space and $F({\bf x}, {\bf y})$ can be interpreted as measuring the compatibility of ${\bf x}$ and ${\bf y}$. Now, the prediction function $f$ can be defined as
\begin{equation}
\label{eq:pred_fun}
f({\bf x})~=~\arg\max_{{\bf y}\in\mathcal{Y}({\bf x})}F({\bf x}, {\bf y})
\end{equation}
where the maximizer over the label space $\mathcal{Y}({\bf x})$ is the predicted label, \emph{i.e.} the solution of the inference problem.
For simplicity, we choose to restrict the space of $F$ to linear functions over some combined feature representation $\Psi({\bf x}, {\bf y})$ subject to a ${\bf w}$ parametrization. This feature mapping cannot be defined out of the context of the problem, as it is the problem itself that specifies, given a particular input, the nature of the desired solution. Following the definition of correlation clustering in Eq.~\eqref{eq:correlation_clustering_objective} and its parametrization introduced in Eq.~\eqref{eq:cc_affinity_parametrization}, the compatibility of an input-output pair is directly described as
\begin{equation}
F({\bf x}, {\bf y}; {\bf w}) = {\bf w}^T\Psi({\bf x}, {\bf y}) = {\bf w}^T\sum_{y\in{\bf y}}\sum_{a\neq b\in y}{\bf x}^{ab}.
\end{equation}
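A minimal sketch of this compatibility score (the dict-of-frozensets storage for the pairwise features is illustrative; unordered pairs are counted once, a constant factor with respect to the ordered sum in the equation):

```python
from itertools import combinations

def compatibility(x, y, w):
    """F(x, y; w) = w^T Psi(x, y): sum the pairwise feature vectors of all
    pairs lying in the same cluster of y, then project onto w.
    x: dict frozenset({a, b}) -> feature vector; y: list of clusters."""
    psi = [0.0] * len(w)
    for cluster in y:
        for a, b in combinations(sorted(cluster), 2):
            for k, v in enumerate(x[frozenset((a, b))]):
                psi[k] += v
    return sum(wk * pk for wk, pk in zip(w, psi))
```

Inference then amounts to maximizing this score over partitions, which is exactly the CC objective with affinities ${\bf w}^T{\bf x}^{ab}$.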
The problem of learning in structured and interdependent output spaces can be formulated as a maximum-margin problem. We adopt the $n$-slack, margin-rescaling formulation:
\begin{equation}
\begin{aligned}
& \min_{{\bf w}, {\bf \xi}}
& & \frac{1}{2}\|{\bf w}\|^2+\frac{C}{n}\sum_{i=1}^n\xi_i \\
& \,\,\,\,\,\text{s.t.}
& & \forall i:\xi_i\ge0, \\
&&& \forall i,\forall{\bf y}\in\mathcal{Y}({\bf x}_i)\backslash{\bf y}_i:{\bf w}^T\delta\Psi_i({\bf y})\ge\Delta({\bf y}, {\bf y}_i)-\xi_i,
\end{aligned}
\label{optpro}
\end{equation}
where $\delta\Psi_i({\bf y}) \overset{\text{def}}{=} \Psi({\bf x}_i, {\bf y}_i) - \Psi({\bf x}_i, {\bf y})$, $\xi_i$ are the slack variables introduced in order to accommodate margin violations, $\Delta({\bf y}_i, {\bf y})$ is the loss function further defined in Sec.~\ref{sec:loss_score} and $C$ is the regularization trade-off. Intuitively, we want to maximize the margin and jointly guarantee that, for a given input, every possible output is considered worse than the correct one by at least a margin of $\Delta({\bf y}_i, {\bf y})-\xi_i$, where $\Delta({\bf y}_i, {\bf y})$ is bigger when the two predictions are known to be more different.
Remarkably, correlation clustering does not need to know in advance how many groups are present in the scene. Moreover, a positive overall cluster score can group two elements even if their affinity measure is negative, implicitly modeling the transitive property of relationships in groups, as stated in Sec.~\ref{sec:problem_def}.
\subsection{Batch Sequential Optimization}
The quadratic program (QP)~\eqref{optpro} introduces a constraint for every possible wrong clustering of the $n$ examples, more precisely $\sum_{i=1}^n(|\mathcal{Y}({\bf x}_i)|-1)$. Unfortunately, the number of ways to partition a set $M$ scales more than exponentially with the number of items according to the Bell sequence~\cite{rota_number_1964}
\begin{equation}
|\mathcal{Y}(M)| = \sum_{i=0}^{|M|}\frac{1}{i!}\sum_{j=0}^i(-1)^{i-j}{i\choose j} j^{|M|},
\end{equation}
making the optimization intractable. As an example, for a crowd composed of 20 pedestrians the number of potential solutions would be about $5.2\cdot 10^{13}$. In order to deal with this high number of constraints many approximation schemes have been proposed, where cutting plane algorithms or subgradient methods
are among the most commonly used. In particular, all the constraints of QP~\eqref{optpro} can be replaced by $n$ piecewise-linear ones by defining the structured hinge-loss:
\begin{equation}
\widetilde{H}({\bf x}_i) \overset{\text{def}}{=} \max_{{\bf y}\in\mathcal{Y}}\Delta({\bf y}_i, {\bf y}) - {\bf w}^T\delta\Psi_i({\bf y}).
\label{eq:maxoracle}
\end{equation}
The computation of the structured hinge-loss for each element $i$ of the training set, described in Sec.~\ref{sec:oracle}, amounts to finding the most ``violating'' output ${\bf y}$ for a given input ${\bf x}_i$ and its correct associated output ${\bf y}_i$.
We only have $n$ constraints of the form $\xi_i \geq \tilde{H}({\bf x}_i)$ and the non-smooth version of QP~\eqref{optpro} reduces to
\begin{equation}
\begin{aligned}
& \min_{{\bf w}}
& & \frac{1}{2}\|{\bf w}\|^2+\frac{C}{n}\sum_{i=1}^n\widetilde{H}({\bf x}_i).
\end{aligned}
\label{optpro_unconstrained}
\end{equation}
Given access to a maximization oracle, \emph{i.e.} a solver for Eq.~\eqref{eq:maxoracle} returning a solution ${\bf y}^*$,
subgradient methods can easily be applied to QP~\eqref{optpro_unconstrained}, since $\partial_{\bf w}\widetilde{H}({\bf x}_i) = -\delta\Psi_i({\bf y}^*)$.
\begin{algorithm}
\setstretch{1.35}
\caption{Block-Coordinate Frank-Wolfe Algorithm}
\label{BCFW}
\begin{algorithmic}[1]
\STATE Let ${\bf w}^{(0)}, {\bf w}_i^{(0)} := {\bf 0} $ and $ l^{(0)}, l_i^{(0)} := 0$
\FOR{$\text{it} := 0$ \TO $\text{maxIterations}$ }
\STATE Pick $i$ at random in $\{1, \dots, n\}$
\STATE Solve ${\bf y}^* := \arg\max_{{\bf y}\in\mathcal{Y}}\Delta({\bf y}_i, {\bf y}) - {\bf w}^T\delta\Psi_i({\bf y})$
\STATE Let ${\bf w}_s := \frac{C}{n}\delta\Psi_i({\bf y}^*)$ and $l_s := \frac{C}{n}\Delta({\bf y}_i, {\bf y}^*)$
\STATE Let $\gamma := \frac{({\bf w}_i^{(\text{it})}-{\bf w}_s)^T{\bf w}^{(\text{it})}+l_s-l_i^{(\text{it})}}{\|{\bf w}_i^{(\text{it})}-{\bf w}_s\|^2}$ and clip to $[0,1]$
\STATE Update ${\bf w}_i^{(\text{it}+1)} := (1-\gamma){\bf w}_i^{(\text{it})} + \gamma {\bf w}_s$ \\$\quad$ and $l_i^{(\text{it}+1)}:= (1-\gamma)l_i^{(\text{it})}+\gamma l_s$
\STATE Update ${\bf w}^{(\text{it}+1)}:= {\bf w}^{(\text{it})} + {\bf w}_i^{(\text{it}+1)} - {\bf w}_i^{(\text{it})}$\\ $\quad$ and $l^{(\text{it}+1)} := l^{(\text{it})} + l_i^{(\text{it}+1)}-l_i^{(\text{it})}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
To exploit the domain separability of the constraints and limit the number of oracle calls needed to converge to the optimal solution, we choose to adopt a Block-Coordinate version of the Frank-Wolfe algorithm (BCFW)~\cite{julien_block_coordinate_2012}, delineated in Alg.~\ref{BCFW}.
The algorithm minimizes the objective function of Eq.~\eqref{optpro_unconstrained} restricted to a single random example at each iteration. By calling the max oracle on the selected training sample (line 4), we obtain a new sub-optimal parameter set ${\bf w}_s$ by simple differentiation (line 5).
The best update is then found through a closed-form line search (line~6), greatly reducing convergence time compared to other subgradient methods.\\
\noindent In order to solve QP~\eqref{optpro_unconstrained} effectively, it is important to choose an appropriate loss function, as the learning ability of Structural SVM strongly depends on it. In Sec.~\ref{sec:loss_score} we introduce and discuss different potential loss functions and their respective descriptive abilities. Given the loss function, Sec.~\ref{sec:oracle} describes an efficient method to compute the maximization oracle (line 4 of Alg.~\ref{BCFW}).
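To make the procedure concrete, the updates of Alg.~\ref{BCFW} can be sketched on a toy structured problem; the feature map, task loss and problem sizes below are invented stand-ins for the crowd-clustering quantities, and the closed-form line search follows the dual derivation of~\cite{julien_block_coordinate_2012}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy structured problem: each input x has three candidate outputs,
# a linear joint feature map psi and a 0/1 task loss.  All names and
# sizes are illustrative, not the quantities used in the paper.
n, C, d = 4, 10.0, 6
X = rng.normal(size=(n, 2))
Y = rng.integers(0, 3, size=n)
OUTS = (0, 1, 2)

def psi(x, y):                    # joint feature map Psi(x, y)
    f = np.zeros(d)
    f[2 * y:2 * y + 2] = x
    return f

def delta(y_true, y):             # task loss Delta(y_i, y)
    return float(y_true != y)

def oracle(w, x, y_true):         # loss-augmented decoding (line 4)
    return max(OUTS, key=lambda y: delta(y_true, y)
               - w @ (psi(x, y_true) - psi(x, y)))

def objective(w):                 # primal objective of the QP
    return 0.5 * w @ w + (C / n) * sum(
        max(delta(Y[i], y) - w @ (psi(X[i], Y[i]) - psi(X[i], y))
            for y in OUTS) for i in range(n))

w, w_blk, l_blk = np.zeros(d), np.zeros((n, d)), np.zeros(n)
for it in range(300):
    i = int(rng.integers(n))                       # pick a block
    y_star = oracle(w, X[i], Y[i])
    w_s = (C / n) * (psi(X[i], Y[i]) - psi(X[i], y_star))
    l_s = (C / n) * delta(Y[i], y_star)
    diff = w_blk[i] - w_s
    den = diff @ diff                              # line search (line 6)
    gamma = 0.0 if den == 0.0 else float(
        np.clip((diff @ w + l_s - l_blk[i]) / den, 0.0, 1.0))
    w_new = (1 - gamma) * w_blk[i] + gamma * w_s   # block update
    w += w_new - w_blk[i]
    w_blk[i] = w_new
    l_blk[i] = (1 - gamma) * l_blk[i] + gamma * l_s
```

After a few hundred block updates the primal objective is no worse than at the zero initialization, illustrating the convergence behavior discussed above.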
\subsection{Loss Function and Scoring Procedure}
\label{sec:loss_score}
One common choice of loss function for clustering is the \emph{pairwise loss} $\Delta_{PW}({\bf y}_i, {\bf y})$, a generalization of the Rand coefficient~\cite{rand_objective_1971}, defined as the ratio between the number of pairs on which ${\bf y}_i$ and ${\bf y}$ disagree on cluster membership and the number of all possible pairs of elements in the set.
Due to the quadratic number of connections among crowd members, this measure tends to be imprecise when dealing with large crowds: as the crowd density increases, the number of positive links connecting group members becomes negligible with respect to the total number of links. As a consequence, erroneous solutions are not strongly penalized.\\
The \emph{MITRE~loss}~\cite{vilain_model-theoretic_1995}, $\Delta_{M}({\bf y}_i, {\bf y})$, founded on the observation that connected components are sufficient to describe groups, partially mitigates this problem by representing groups as spanning trees instead of complete graphs, inducing a linear (rather than quadratic) number of both positive and negative links among members.
For any crowd partitioning, a spanning forest is a representative of an equivalence class, as many trees describing the same group configuration may exist.
The final score is obtained by counting the number of links that need to be removed or added to recover a spanning forest of the correct solution.
Nonetheless, problems arise when working on relations rather than directly on members, as singletons have no connections at all but should still be counted positively when correctly classified.
\begin{figure}[t!]
\centering
\subfloat[${\bf y}_i$ PAIRWISE links]{
\includegraphics[width=0.45\columnwidth]{images/loss_PAIRWISE.pdf}
}
\subfloat[${\bf y}, \Delta_{PW}({\bf y}_i, {\bf y})=0.27$]{
\includegraphics[width=0.45\columnwidth]{images/loss_PAIRWISE_error.pdf}
}\\
\subfloat[${\bf y}_i$ MITRE links]{
\includegraphics[width=0.45\columnwidth]{images/loss_MITRE.pdf}
}
\subfloat[${\bf y}, \Delta_{M}({\bf y}_i, {\bf y})=0.6$]{
\includegraphics[width=0.45\columnwidth]{images/loss_MITRE_error.pdf}
}\\
\subfloat[${\bf y}_i$ $G$-MITRE links]{
\includegraphics[width=0.45\columnwidth]{images/loss_GMITRE.pdf}
}
\subfloat[${\bf y}, \Delta_{GM}({\bf y}_i, {\bf y})=0.75$]{
\includegraphics[width=0.45\columnwidth]{images/loss_GMITRE_error.pdf}
}
\caption{Differences in the way losses account for errors. Singletons are white. Figures (a, c, e) depict solution ${\bf y_i}$ and the links considered by the respective losses, while (b, d, f) color pedestrians according to solution ${\bf y}$ and show the links on which the two solutions ${\bf y}_i$ and ${\bf y}$ disagree.}
\label{fig:losses}
\end{figure}
For this reason, we propose a loss function, the \emph{GROUP-MITRE loss} ($G$-MITRE) $\Delta_{GM}({\bf y}_i, {\bf y})$, that overcomes this limitation by adding, for each pedestrian described by trajectory $T_i$, a fake counterpart $\alpha_{T_i}$ to which only singletons are connected.
Through this device we can take singletons into account as well when computing the discrepancy between two solutions. The particular design choice of linking only singleton members to their fake counterparts generates two discrepancies whenever an error involves a singleton, and is thus a further step toward generating more plausible hierarchical groups in the solution, as depicted in Fig.~\ref{fig:losses}.
More formally, consider two clustering solutions ${\bf y}_i$, ${\bf y}$ and a representative of their respective spanning forests $Q$ and $R$. The connected components of $Q$ and $R$ are identified respectively by the set of trees $Q_{1}, Q_{2}, \dots$ and $R_1,R_2,\dots$. Note that if the number of elements in $Q_j$ is $|Q_j|$, then only $c(Q_j)\stackrel{\text{\tiny def}}{=}|Q_j|-1$ links are needed in order to create a spanning tree. Let us define $\pi_{\scriptscriptstyle R}(Q_j)$ as the partition of a tree $Q_j$ with respect to the forest $R$, that is the set of subtrees obtained by considering only the membership relations in $Q_j$ also found in $R$. Besides, if $R$ partitions $Q_j$ in $|\pi_{\scriptscriptstyle R}(Q_j)|$ subtrees then $v(Q_j)\stackrel{\text{\tiny def}}{=}|\pi_{\scriptscriptstyle R}(Q_j)| - 1$ links are sufficient to restore the original tree. It follows that the recall error for $Q_j$ can be computed as the number of missing links divided by the minimum number of links needed to create that spanning tree. Accounting for all trees $Q_j$ the global recall measure of $Q$ is:
\begin{equation}
\label{eq:recall_mitre}
\begin{aligned}
\mathcal{R}_{Q} = 1 - \frac{\sum_{j} v(Q_j)}{\sum_{j} c(Q_j)} = \frac{\sum_{j} |Q_j|- |\pi_{\scriptscriptstyle R}(Q_j)|}{\sum_{j}|Q_j|-1}
\end{aligned}
\end{equation}
The precision of $Q$ (i.e., the recall of $R$) can be computed by exchanging the roles of $Q$ and $R$. Given precision and recall, and employing the standard $F$-score $F_1$, the loss is defined as
\begin{equation}
\Delta_{GM}=1-F_1.
\end{equation}
The complete algorithm for the computation of the $G$-MITRE loss is reported in Alg.~\ref{alg:G_MITRE}. We employ disjoint-set arrays due to the efficiency of checking whether two pedestrians belong to the same group. Recall that {\footnotesize UNION} and {\footnotesize FIND} are the standard functions defined over disjoint-set arrays, denoting the operations of merging two clusters and finding an element's membership, respectively. In the pseudo-code we use the notation ${\bf y}_i/{\bf y}$ to indicate that the algorithm first works on the solution ${\bf y}_i$ and then, analogously, on ${\bf y}$.
\begin{algorithm}[t!]
\setstretch{1.35}
\caption{$G$-MITRE loss $\Delta_{GM}({\bf y}_i, {\bf y})$ computation}
\label{alg:G_MITRE}
\begin{algorithmic}[1]
\REQUIRE ${\bf y}_i$ and ${\bf y}$ as \emph{disjoint-set data structures}
\STATE $\varphi(x)$ are the unique roots of connected components $x$
\STATE $\Gamma(x)$ is the size of the connected component with root $x$
\FORALL{$T \in {\bf y}_i/{\bf y}$}
\STATE ${\bf y}_i/{\bf y}= {\bf y}_i/{\bf y}\cup\alpha_{T}$
\IF{$\Gamma(\text{\footnotesize FIND}({\bf y}_i/{\bf y}(T))) = 1$}
\STATE $\text{\footnotesize UNION}({\bf y}_i/{\bf y}(T), {\bf y}_i/{\bf y}(\alpha_{T}))$
\ENDIF
\ENDFOR
%
\FORALL{$q \in \varphi({\bf y}_i/{\bf y})$}
\STATE $v_{{\bf y}_i/{\bf y}}~{+=}~|\varphi(\bigcup_{\text{\scriptsize FIND}({\bf y}_i/{\bf y}(T)) = q}{\bf y}/{\bf y}_i(T))| - 1$
\STATE $c_{{\bf y}_i/{\bf y}}~{+=}~\Gamma(q) - 1$
\ENDFOR
%
\STATE $\mathcal{R}_{{\bf y}_i/{\bf y}} = 1 - v_{{\bf y}_i/{\bf y}} / c_{{\bf y}_i/{\bf y}}$
\STATE $\Delta({\bf y}_i, {\bf y}) = 1 - 2\mathcal{R}_{{\bf y}_i}\mathcal{R}_{{\bf y}} / (\mathcal{R}_{{\bf y}_i}+\mathcal{R}_{{\bf y}})$
\end{algorithmic}
\end{algorithm}
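For reference, the same computation can be sketched in a few lines using plain dictionaries (item $\to$ cluster id) in place of disjoint-set arrays; the helper names are our own, and the sketch favors clarity over the efficiency of Alg.~\ref{alg:G_MITRE}:

```python
from itertools import count

def augment(y):
    """Attach the fake counterpart of each pedestrian, joining it to
    its owner only when the owner is a singleton (the G-MITRE trick)."""
    sizes = {}
    for c in y.values():
        sizes[c] = sizes.get(c, 0) + 1
    fresh = count(max(y.values()) + 1)   # unused cluster ids for fakes
    aug = dict(y)
    for item, c in y.items():
        aug[('fake', item)] = c if sizes[c] == 1 else next(fresh)
    return aug

def mitre_recall(q, r):
    """MITRE-style recall of clustering q w.r.t. r (the R_Q formula)."""
    clusters = {}
    for item, c in q.items():
        clusters.setdefault(c, []).append(item)
    num = den = 0
    for members in clusters.values():
        if len(members) < 2:
            continue                     # lone fake nodes: no links
        num += len(members) - len({r[m] for m in members})
        den += len(members) - 1
    return num / den if den else 1.0

def g_mitre_loss(y_true, y_pred):
    """Delta_GM = 1 - F1 on the augmented clusterings."""
    a, b = augment(y_true), augment(y_pred)
    rec, prec = mitre_recall(a, b), mitre_recall(b, a)
    return 1.0 - (2 * prec * rec / (prec + rec) if prec + rec else 0.0)
```

On a toy scene where pedestrians A and B form a group and C is a singleton, predicting all three as singletons yields a loss of $0.6$: the augmented fake nodes make the misclassified singleton-vs-group errors count, as intended.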
\subsection{Approximate Oracle}
\label{sec:oracle}
Despite the simplicity of the algorithm, the intrinsic complexity of the optimization is hidden in the search for the most violating solution ${\bf y}^*$ for the $i$-th example (line 4 of Alg.~\ref{BCFW}): finding the most violated constraint requires solving the loss-augmented decoding subproblem. Note that the original prediction problem of Eq.~\eqref{eq:pred_fun} is NP-hard, and the insertion of a non-linear loss in the maximization is not likely to help.
Nevertheless, thanks to its iterative nature, the inference scheme introduced in Sec.~\ref{sec:solution} can be adapted to approximate the oracle as well. Starting from the trivial solution having each pedestrian of the $i$-th example in its own cluster, the algorithm repeatedly merges the two clusters that yield the largest increase in the structured hinge-loss $\widetilde{H}({\bf x}_i)$ of Eq.~\eqref{eq:maxoracle}, until a local maximum is found.
Of course, following a greedy procedure, there is no guarantee of selecting the most violated constraint. Interestingly enough, Lacoste-Julien~\emph{et al.}~\cite{julien_block_coordinate_2012} show that all convergence results known for exact maximizers of the loss-augmented problem also hold for approximate maximizers, provided the algorithm is allowed to iterate longer toward convergence.
For further details, please refer to their original work.
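The greedy scheme just described can be written generically, with a black-box set function standing in for the loss-augmented score $\Delta({\bf y}_i,{\bf y}) - {\bf w}^T\delta\Psi_i({\bf y})$; the toy affinities below are invented:

```python
from itertools import combinations

def greedy_oracle(items, objective):
    """Greedy ascent over partitions: start from all singletons and
    repeatedly apply the merge that most increases the objective,
    stopping at a local maximum (an approximate max oracle)."""
    partition = [{i} for i in items]
    best = objective(partition)
    while len(partition) > 1:
        gain, merge = 0.0, None
        for a, b in combinations(range(len(partition)), 2):
            cand = [c for k, c in enumerate(partition) if k not in (a, b)]
            cand.append(partition[a] | partition[b])
            if objective(cand) - best > gain:
                gain, merge = objective(cand) - best, cand
        if merge is None:
            break                        # no merge improves: local max
        partition, best = merge, best + gain
    return partition, best

# Toy stand-in objective: a correlation-clustering score where 0 and 1
# attract while 2 repels both.
aff = {(0, 1): 2.0, (0, 2): -1.0, (1, 2): -1.0}

def cc_objective(partition):
    return sum(aff[pair]
               for cluster in partition
               for pair in combinations(sorted(cluster), 2))
```

Running `greedy_oracle([0, 1, 2], cc_objective)` merges 0 and 1 and then stops, since absorbing 2 would decrease the score: the procedure finds a local maximum, not necessarily the global one, exactly the approximation discussed above.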
\section{Conclusion}
In this work, we pointed out the need to approach the task of detecting social groups in crowds from a learning perspective.
Many existing methods rely on specifically tuned parameters that limit their applicability in real world scenarios.
Our intuition is that different crowds preserve the same concept of social group, but in many cases this concept cannot be distilled
from spatial considerations alone. We thus defined a set of socially-inspired and strongly motivated features able to capture and characterize the peculiarities of different groups.
To learn a socially meaningful clustering rule to group pedestrians, we relied on the Structural SVM framework and designed a novel loss function able to account for
singletons as well as for group errors.
Even though the algorithm was originally designed to work with exact trajectories, we replicated the experiments on noisy tracklets extracted by a detector/tracker obtaining state-of-the-art results.
Moreover, we proposed an online training version of the method, able to achieve superior generalization performance on crowds with variable density.
We did note, however, that as we consider wider portions of the scene, the chance that groups of different densities coexist in different locations increases, leading to the necessity of learning more than one clustering rule per scene. To address this problem we plan, as future work, to learn a set of different distance measures and use latent variables to choose the most appropriate one for a given zone. Code and datasets are made publicly available\footnote{\texttt{http://imagelab.ing.unimore.it/group-detection}} in order to reproduce this paper's results and allow the community to improve the proposed method.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Experimental Results}
\label{sec:exp}
We designed several experiments to evaluate the algorithm behavior on well-assessed benchmarks and its connections to the nature of the problem.
All the experiments were carried out on ground truth trajectory data, except for Sec.~\ref{exp:real}, where the method is evaluated on tracklets extracted by a modern detector/tracker system.
We also propose new video sequences to stress the algorithm over a variety of challenges in real-world scenarios. Since the method works on ground-plane (metric) data, we also provide homography information for all the employed sequences.
\subsubsection*{Datasets}
We selected two publicly available datasets, namely the \emph{BIWI Walking Pedestrians} dataset~\cite{pellegrini_youll_2009} and the \emph{Crowds-By-Examples (CBE)} dataset~\cite{lerner_crowds_2007}. The former records two sparsely crowded scenes, outside a university and at a bus stop (\verb+eth+ and \verb+hotel+ in Tab.~\ref{tab:dataset}). The \emph{CBE} dataset records a medium-density crowd outside another university (\verb+student003+, briefly \verb+stu003+) and provides additional challenges: the pedestrian density is significantly higher and there are multiple entry and exit points. While \emph{BIWI} and \emph{CBE} are standard datasets in crowd analysis, we also use the more recent \emph{Vittorio Emanuele II Gallery (GVEII)} dataset~\cite{BanGorVizPRLGallery}, from which we extracted a five-minute subsequence, \verb+gal1+, particularly interesting due to the fast and continuous change in crowd density.
We also propose a new dataset to cope with the increasing variety of applications in dense-crowd management, \emph{MPT-$20$x$100$}, composed of 20 sequences of 100 frames in which we manually annotated trajectories and social groups. The dataset comprises different videos~\cite{bolei_2014}, all characterized by a high number of pedestrians and a heterogeneous set of scene conditions, varying in density, scale, viewpoint and type of interaction, such as walking in a mall, crossing the street or taking part in public events.\\
In Tab.~\ref{tab:dataset} we report some measures useful to characterize the spatial complexity of the datasets:
\begin{itemize}
\item $d_\text{in}$ is the \emph{group compactness}, computed as the mean distance between members of the same groups;
\item $d_\text{out}$ is the \emph{group isolation} or the mean distance between each member and its closest unrelated pedestrian;
\item the ratio $d_\text{i/o}\stackrel{\text{\tiny def}}{=}d_\text{in}/d_\text{out}$ measures \emph{crowd collectiveness}: small values mean compact groups in a sparse crowd.
\end{itemize}
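On ground-plane data these measures can be sketched as follows; this is a single-frame version with helper names of our own (in the table the measures are, we assume, averaged over the whole sequence):

```python
import numpy as np

def spatial_metrics(pos, groups):
    """Single-frame group compactness (d_in), isolation (d_out) and
    their ratio; pos maps pedestrian id -> 2-d ground-plane point,
    groups maps pedestrian id -> group label."""
    d_in, d_out = [], []
    for pid, p in pos.items():
        mates = [np.linalg.norm(p - q) for qid, q in pos.items()
                 if qid != pid and groups[qid] == groups[pid]]
        strangers = [np.linalg.norm(p - q) for qid, q in pos.items()
                     if groups[qid] != groups[pid]]
        if mates:                       # mean distance to group mates
            d_in.append(np.mean(mates))
        if strangers:                   # closest unrelated pedestrian
            d_out.append(min(strangers))
    din, dout = float(np.mean(d_in)), float(np.mean(d_out))
    return din, dout, din / dout

# Toy frame: a pair 1 m apart plus a lone pedestrian 2 m away.
pos = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 0.0]),
       2: np.array([3.0, 0.0])}
groups = {0: 'g1', 1: 'g1', 2: 'g2'}
```

A small $d_\text{i/o}$ ratio, as in this toy frame, indicates compact groups in a sparse crowd.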
\begin{table}[t!]
\center
\caption{Datasets: number of pedestrians (\#p), groups (\#g) and density metrics.}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
& \#p & \#g & $d_\text{in}~(m)$ & $d_\text{out}~(m) $ & $d_\text{i/o}$ \\
\hline
\verb+student003+ & 406 & 108 & 0.41 & 0.70 & 0.59\\
\hline
\verb+eth+ & 117 & 18 & 0.99 & 2.79 & 0.35\\
\hline
\verb+hotel+ & 107 & 11 & 0.75 & 2.00 & 0.38\\
\hline
\verb+gal1+ & 630 & 207 & 0.77 & 1.66 & 0.46\\
\hline
\verb+MPT-20x100+ & 82 & 10 & 0.63 & 1.45 & 0.48\\
\hline
\end{tabular}
\label{tab:dataset}
\end{table}
\subsubsection*{Evaluation Scheme}
There is no consensus on which metrics should be used to evaluate group correctness: we propose to use the $G$-MITRE precision $\mathcal{P}$ and recall $\mathcal{R}$, since they account for the correct classification of singletons as well. This is an important gain, as in crowded scenes the number of people walking alone is rarely negligible.
Each measure is reported in terms of mean and standard deviation over 5 runs to account for the stochastic nature of the training of our algorithm. Where not differently specified, we used 100s of video for training and a 10s sliding window with no overlap for feature computation. The regularization parameter $C$ of QP~\eqref{optpro} is fixed to 10.\\
\noindent For the heat-map based feature of Sec.~\ref{sec:heatmaps}, we ran a grid search on the parameters. For all the experiments, the length of the cell edge is fixed to 30cm, $k_s=10^{-5}$ and $k_r=0.5$.
\subsection{Baseline and Benchmark Comparisons}
We compare our method with four recent state-of-the-art group detection algorithms, namely~\cite{ge_vision-based_2012,yamaguchi_who_2011,zanotto12,shao14}, selected on the basis of their reported performances on public datasets and the availability of code.
In addition, we devised a simple baseline version of our solution that performs the group partitioning without the learning framework. The weights are randomly chosen, but equal across all the features, so that the randomness resides in the similarity/dissimilarity ratio.
\begin{table}[t!]
\centering
\caption{Evaluation of our proposal when trained with different loss functions.}
\begin{tabular}{|l|c|c|c|c|}
\hline
& \multicolumn{2}{|c|}{Pairwise $\Delta_{PW}$} & \multicolumn{2}{|c|}{MITRE $\Delta_{M}$} \\
& $\mathcal{P}$ & $\mathcal{R}$ & $\mathcal{P}$ & $\mathcal{R}$ \\
\hline
\verb+hotel+ & \textls{90.1 $\pm$ 2.0} & \textls{84.1 $\pm$ 3.2} & \textls{89.2 $\pm$ 3.0}& \textls{93.2 $\pm$ 1.9} \\
\hline
\verb+eth+ & \textls{88.7 $\pm$ 1.8} & \textls{87.3 $\pm$ 2.6} & \textls{91.9 $\pm$ 0.8}& \textls{92.9 $\pm$ 1.0}\\
\hline
\verb+stu003+ & \textls{68.9 $\pm$ 1.4} & \textls{69.9 $\pm$ 1.5} & \textls{80.1 $\pm$ 2.4}& \textls{80.9 $\pm$ 2.3}\\
\hline
\end{tabular}
\label{tab:loss}
\end{table}
\subsubsection{Quantitative Results}
Quantitative results are given in Tab.~\ref{tab:comparison}. To highlight our algorithm's advantages, results are presented both in terms of $G$-MITRE and of a pairwise loss accounting only for positive (intra-group) relations while neglecting singletons, $\Delta_{PW}^+$~\cite{zanotto12}. The latter loss is not directly optimized by our algorithm; still, our method outperforms the competitors on all the tested sequences. This can be explained by the ability of our algorithm to adapt the concept of group to ever different scenarios by varying the feature importance, and by the use of sociologically inspired similarity functions. The slightly lower performances on the \verb+stu003+ sequence are due to the high complexity of the scene: the high value of the $d_\text{i/o}$ ratio in Tab.~\ref{tab:dataset} suggests the presence of loose groups in a dense crowd, which are challenging to detect.
\subsubsection{Evaluation of Different Loss Functions}
As structured learning relies upon a definition of \emph{what's wrong} to learn how to classify well, the choice of the loss function can greatly affect the final performances. By fixing the $G$-MITRE measure as a proper scoring scheme, we quantitatively test the influence of the choice of the loss on the \verb+eth+, \verb+hotel+ and \verb+stu003+ datasets (Tab.~\ref{tab:loss}).
As expected from its definition, the improvement due to the use of the $G$-MITRE loss (reported in Tab.~\ref{tab:comparison}) is greater in the \verb+eth+ and \verb+hotel+ sequences, where the ratio between the number of singletons and the number of people walking in groups is higher and, as such, learning to classify singletons as well becomes crucial. More interestingly, we observe that the pairwise loss obtains outstanding performances when the number of pedestrians is limited, but becomes ineffective when it starts to grow, as in \verb+stu003+.
\subsection{Features Weight Learning on \emph{MPT-${\bf 20}$x${\bf 100}$}}
The \emph{CBE} and \emph{BIWI} datasets expose some interesting challenges but, with the only exception of the \verb+stu003+ sequence, they feature a limited number of pedestrians in the scene and a low crowd density. Moreover, the scenarios are similar and the variety of interactions underlying group formation is limited. The proposed \emph{MPT-$20$x$100$} dataset, on the other hand, presents different degrees of complexity.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/perf_curve.pdf}
\caption{Comparison against baseline and \cite{ge_vision-based_2012} on \emph{MPT-$20$x$100$}.}
\label{fig:perf_comp_b}
\end{figure}
First, we evaluate the general performance of the algorithm and compare it with both our baseline and the proposal in \cite{ge_vision-based_2012} where, for the latter method, we manually tuned the thresholds to achieve the best results.
These methods are clustering based and partially consistent with the social group axioms, but no learning is employed.
Results are shown in Fig.~\ref{fig:perf_comp_b} as a \emph{survival curve} plot, which reveals on how many sequences the algorithms were at least able to reach a specific lower-bound performance; per-video scores are in Fig.~\ref{fig:perf_comp_a}. Interestingly, the gap between our method and \cite{ge_vision-based_2012} increases here by an average of 10\% with respect to the previous datasets, suggesting that sequences can differ substantially in the concept of group they embed, and thus learning is mandatory to adapt to these new representations of social groups and keep performances stable.
\subsubsection{The Need for Learning from Examples}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/bar_plot_sequences.pdf}
\caption{Results on \emph{MPT-$20$x$100$} highlight the complexity of each scene.}
\label{fig:perf_comp_a}
\end{figure}
The confusion-like matrix depicted in Fig.~\ref{fig:matrix} presents the F-1 scores obtained by training the algorithm on one sequence of \emph{MPT-$20$x$100$} (row labels) and testing it on all the other sequences (column labels).
By reading the matrix and averaging each row over all the columns, it is possible to grasp how good a particular sequence is for training. At the same time, by observing the average of each column over all the rows, we can get an intuition of how well each sequence is predicted by all the others.\\
We are interested in understanding whether a specific notion of group is shared across sequences and how it is influenced by both scene elements (\emph{e.g.} crowd density) and unobserved aspects (\emph{e.g.} intentions and social hierarchies).
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/clustered_matrix.pdf}
\caption{F-1 scores obtained by all combinations of train/test pair sequences in \emph{MPT-$20$x$100$}. Results were clustered (diagonal blocks C1-C4 from left to right) to highlight similar notion of group among sequences.}
\label{fig:matrix}
\end{figure}
With the purpose of capturing these invariants, we search for the connected components of the matrix using the $\text{F-1 score}$ as the affinity value among elements. Clustering is performed through an asymmetric version of spectral clustering~\cite{naumann_combinatorial_2012}, based on the Random Walk Laplacian defined as
\begin{equation}
L = AD^{-1},
\end{equation}
where $A$ is the affinity matrix defined as in Fig.~\ref{fig:matrix} and $D$ is the usual degree matrix. Following the eigen-gap heuristic, we found $4$ distinct clusters in the \emph{MPT-$20$x$100$} dataset, highlighted with black lines in Fig.~\ref{fig:matrix}; for every cluster we computed the $d_\text{in}$, $d_\text{out}$ and $d_\text{i/o}$ spatial measures, displayed in Tab.~\ref{tab:spatial}, to verify whether clusters with a similar notion of group also share a common configuration of distances among pedestrians and, possibly, whether performance is connected to crowd density.
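A sketch of this step on a toy affinity matrix (symmetric here only for simplicity; the heuristic applies unchanged to the asymmetric case):

```python
import numpy as np

def eigengap_num_clusters(A):
    """Cluster count suggested by the eigen-gap heuristic applied to
    the random-walk matrix L = A D^{-1}: sort the eigenvalue magnitudes
    and return the index of the largest consecutive gap."""
    L = A / A.sum(axis=0)                 # scale columns by node degree
    vals = np.sort(np.abs(np.linalg.eigvals(L)))[::-1]
    gaps = vals[:-1] - vals[1:]
    return int(np.argmax(gaps)) + 1

# Toy affinity: two blocks of three mutually similar sequences,
# standing in for the F-1 score matrix of the figure.
A = np.kron(np.eye(2), np.ones((3, 3)))
```

On this block-structured toy matrix the heuristic recovers the two blocks; on the actual F-1 matrix it suggested the four clusters reported above.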
\begin{table}[t!]
\centering
\caption{Spatial depiction, training efficacy and groups predictability of the clusters of sequences of Fig.~\ref{fig:matrix}.}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
cluster & $d_\text{in}$ (m)& $d_\text{out}$ (m)& $d_\text{i/o}$ & $\text{F}_1$ train & $\text{F}_1$ test\\
\hline
\hline
C1 & 0.58 & 1.03 & 0.54 & 0.82 & 0.82 \\
\hline
C2 & 0.59 & 1.28 & 0.47 & 0.85 & 0.84 \\
\hline
C3 & 0.59 & 0.99 & 0.59 & 0.77 & 0.64 \\
\hline
C4 & 0.89 & 3.00 & 0.34 & 0.75 & 0.84 \\
\hline
\end{tabular}
\label{tab:spatial}
\end{table}
Tab.~\ref{tab:spatial} also reports a measure of \emph{training efficacy} (F$_1$ train), computed as the mean accuracy obtained on the whole dataset when only sequences in that specific cluster were used for training and, analogously, a group \emph{predictability score} (F$_1$ test), i.e. the mean accuracy obtained on the sequences of that cluster when all the sequences were used for training. They indicate, respectively, how useful a cluster is during training
and how easy it is to predict groups inside its sequences.
A first observation concerns cluster C4, which presents the highest F$_1$ test and the lowest F$_1$ train. We found it easy to predict groups in these videos, but they were poorly informative as training examples, a result justified by their small $d_\text{i/o}$.
Nonetheless, clusters C1 and C3 exhibit very similar $d_\text{i/o}$ ratios but perform very differently in terms of both training efficacy and testing score, suggesting that a trivial heuristic based on spatial information only is insufficient to visually discern groups.
Implicit aspects like motion constraints or cultural and social context also affect the group formation process, supporting our hypothesis that learning is needed to adapt the concept of group to the current data.
\subsubsection{Do we Capture the Essence of Being a Group?}
As previously stated, \emph{MPT-$20$x$100$} comprises very different scenarios and situations, and can provide important insights on which elements most reveal groups. To this end, recall from Eq.~\eqref{eq:cc_affinity_parametrization} of Sec.~\ref{sec:solution} that the weight vector ${\bf w} = [\boldsymbol\alpha, \boldsymbol\beta] = [w_1,w_2,\dots,w_8]$ is such that the affinity between two trajectories $T_a$ and $T_b$ can be written as:
\begin{equation}
\label{eq:w_decomposed}
\begin{aligned}
&W^{ab}_{\bf d} &= {\boldsymbol\alpha}^T ({\bf 1} - {\bf d}(a, b)) - {\boldsymbol\beta}^T&{\bf d}(a, b)\\
&&= w_1 + w_2 + w_3 + w_4 - &[(w_5+w_1)d_{ph} + \dots\\
&& &~(w_6+w_2)d_{sh} + \dots\\
&& &~(w_7+w_3)d_{ca} + \dots\\
&&\underbrace{\hspace{3cm}\vphantom{]}}_\text{constant term}~&\underbrace{(w_8+w_4)d_{he}}_\text{$(a,b)$-dependent term}]
\end{aligned}
\end{equation}
The contribution of each feature to the score, transformed from a distance into an affinity measure by the constant term of Eq.~\eqref{eq:w_decomposed}, is encoded in the absolute value of the feature's coefficient.
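The rearrangement in Eq.~\eqref{eq:w_decomposed} is just the identity ${\boldsymbol\alpha}^T({\bf 1}-{\bf d})-{\boldsymbol\beta}^T{\bf d} = \sum_k \alpha_k - ({\boldsymbol\alpha}+{\boldsymbol\beta})^T{\bf d}$; a quick numerical check on toy values:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = rng.random(4), rng.random(4)  # toy [w1..w4] and [w5..w8]
d_ab = rng.random(4)                         # the four pairwise distances

lhs = alpha @ (1 - d_ab) - beta @ d_ab       # affinity as first written
rhs = alpha.sum() - (alpha + beta) @ d_ab    # constant - dependent term
assert np.isclose(lhs, rhs)
```

The check confirms that the $(a,b)$-dependent part is weighted by $\alpha_k+\beta_k$, which is why the sum of the two coefficients measures each feature's importance.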
As shown in Fig.~\ref{fig:weight_imp_a}, the proxemics-inspired feature $d_{ph}$ dominates all the others, while the importance of the remaining features varies greatly from sequence to sequence.
The two sequences \verb+1manko3+ (Fig.~\ref{fig:more_results}) and \verb+1dawei1+ (Fig.~\ref{fig:crowd}), for example, present very similar contributions from $d_\text{hm}$ and $d_\text{sh}$, while the importance assigned to $d_\text{ph}$ in \verb+1dawei1+ is shifted to $d_\text{ca}$ in \verb+1manko3+.
The former sequence presents a particularly sparse crowd, making distance among elements a strong cue for groups; but when the space among pedestrians is reduced, both intra- and inter-group distances (and consequently $d_\text{ph}$) become less significant. Conversely, the causality feature $d_{ca}$ becomes more important as the density increases, since pedestrians tend to follow each other to avoid getting separated from the rest of the group.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/stacked_features.pdf}
\caption{Features normalized coefficients of Eq.~\eqref{eq:w_decomposed}.}
\label{fig:weight_imp_a}
\end{figure}
The importance of heat maps is emphasized by comparing \verb+1manko3+ and \verb+3shatian6+ (Fig.~\ref{fig:more_results}), as they are very helpful in decoupling trajectories that stay very close in space but only for a very limited amount of time. In particular, in \verb+1manko3+, people crossing from opposite sides of the road tend to be very close when meeting in the middle, even if they are not in the same group.
\begin{table*}[t!]
\centering
\caption{Performance of detector \cite{DollarPAMI14pyramids}, tracker \cite{Milan:2014:CEM} and group detection algorithms (in terms of G-MITRE) in a fully automatic pipeline.}
\begin{tabular}{|l|c|c||c|c|c|c||c|c|c|c|c|c|c|c|}
\hline
$ $ & \multicolumn{2}{|c||}{Detector} & \multicolumn{4}{|c||}{Tracker} & \multicolumn{2}{|c|}{our proposal} & \multicolumn{2}{|c|}{\cite{ge_vision-based_2012}} & \multicolumn{2}{|c|}{\cite{yamaguchi_who_2011}} & \multicolumn{2}{|c|}{\cite{shao14}}\\
$ $ & $\mathcal{P}$ & $\mathcal{R}$ & MOT(A/P) & MT & IDS & FRG & $\mathcal{P}$ & $\mathcal{R}$ & $\mathcal{P}$ & $\mathcal{R}$ & $\mathcal{P}$ & $\mathcal{R}$ & $\mathcal{P}$ & $\mathcal{R}$\\
\hline
\verb+hotel+ & 43.1 & 52.4 & 66.9 / 0.88 & 18.8 & 120 & 34 & 77.9 & 76.9 & 75.7 & 78.0 & 46.3 & 38.6 & 60.2 & 57.5\\
\verb+eth+ & 68.2 & 53.7 & 92.3 / 0.08 & 75.0 & 0 & 68 & 81.1 & 79.7 & 78.4 & 79.3 & 58.3 & 70.6 & 57.3 & 61.2\\
\verb+student+ & 56.7 & 36.8 & 43.3 / 1.22 & 06.0 & 342 & 876 & 75.0 & 71.3 & 63.2 & 56.4 & 40.2 & 52.4 & 35.1 & 40.2\\
\hline
\end{tabular}
\label{tab:track}
\end{table*}
\subsection{Evaluating the Influence of Density Changes}
In this test setting we evaluate whether the feature weights learned by the Structural SVM of Sec.~\ref{sec:learning} are sufficiently general to deal with crowds at different densities and, at the same time, whether an online version of Alg.~\ref{BCFW} would bring any accuracy improvement. To this end we introduce a new video sequence, \verb+gal1+ from \emph{GVEII}, containing an average of $70$ pedestrians simultaneously present in the scene.
The distribution of pedestrians is not uniform though: their number increases over time, and so does their density, represented by the $d_\text{i/o}$ ratio (Fig.~\ref{fig:GVEIIpeople}).
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{images/peculiarities}
\caption{Pedestrians number and $d_\text{i/o}$ ratio temporal evolution in the \texttt{gal1} sequence of \emph{GVEII}.}
\label{fig:GVEIIpeople}
\end{figure}
In order to underline the importance of capturing changes in density, we compare the batch version of the training algorithm (Alg.~\ref{BCFW}) with a sequential and a fully online version (Fig.~\ref{fig:res_online_comp}). In the former case, examples are fed to the supervised training procedure in temporal order, one at a time; in the latter, the weights are initialized to the ones learned in batch and the algorithm learns at each step from its previous prediction, thus without supervision.\\
The plot in Fig.~\ref{fig:res_online_comp} shows the performance of the batch training version tends to decrease as the crowd density increases. While the sequential version of the algorithm performs better, it is slow to respond to sudden density changes like in time windows $15$. Indeed, a non-smooth density variation affects negatively the training process, leading to a performance drop further recovered in the subsequent temporal windows. Eventually, this behavior is partially mitigated in the fully online version. The higher performances are motivated by the implicit regularization: using the prediction as training input discourages the learner to drastically modify the weights vector and mimic the smooth variation in crowd density slightly adjusting in time.
\subsection{Performance with a Real Detector and Tracker}
\label{exp:real}
Our algorithm assumes the availability of correct trajectories to detect groups, but what happens in a fully automatic video surveillance pipeline, where a people detector and a tracker are employed?
We carried out experiments
by extracting pedestrian positions through a state-of-the-art detector~\cite{DollarPAMI14pyramids} and obtaining trajectories by means of a continuous energy minimization method~\cite{Milan:2014:CEM}. We compare with Ge~\emph{et~al.}~\cite{ge_vision-based_2012}, Yamaguchi~\emph{et al.}~\cite{yamaguchi_who_2011} and Shao~\emph{et al.}~\cite{shao14} on the same input data; results are shown in Tab.~\ref{tab:track}. Our proposal outperforms the competitors even in the case of noisy trajectories.
Tracking results exhibit a high number of track fragments (FRG), mainly due to the localization errors introduced by the automatic people detector in non-trivial crowded scenes. FRG counts the small new tracks created by the system instead of being correctly associated with previously tracked objects, with the consequence of splitting ideal tracks into temporally disjoint segments.\\
A high FRG number affects group detection performance because the $d_\text{ph}$ and $d_\text{ca}$ features are computed only when the trajectories are simultaneously present in the scene; merging temporally disjoint fragments is thus strongly discouraged by the correlation clustering algorithm.
Intuitively, by reducing the size of the time window we minimize the number of split trajectories in each example and recover most of the original performance, as shown in Fig.~\ref{fig:detector_tracker}(c).
The improvement comes from the joint adoption of socially founded features and structural learning, which weights the features according to the observed noisy trajectories. The experiment allows us to conclude that the strengths of the proposed algorithm are maintained even in a real application with imprecise input data, because they are rooted in the social rules that govern the group formation process; these rules are not data dependent and hold regardless of the feature extraction technique applied.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{images/perfOnGVEII.pdf}
\caption{F-1 score comparison between differently trained versions of our method on \texttt{gal1} of \emph{GVEII}.}
\label{fig:res_online_comp}
\end{figure}
\begin{figure*}[t!]
\centering
\subfloat[]{
\includegraphics[width=0.28\textwidth]{images/0205_2}
}
\subfloat[]{
\includegraphics[width=0.28\textwidth]{images/0205_tracker}
}
\subfloat[]{
\includegraphics[width=0.4\textwidth]{images/variable_win.pdf}
}
\caption{Group detection results on \texttt{student003} when correct tracks are used (a) and when the input consists of automatic people detector and tracker responses (b). Regardless of the input noise, most of the groups can still be identified. This is due to the robustness of the features employed during learning and to the reduced length of the time window (c), which prevents fragmented tracks from being split into different groups.}
\label{fig:detector_tracker}
\end{figure*}
\begin{figure*}[t!]
\centering
\subfloat[\texttt{1airport1}]{
\includegraphics[width=0.235\textwidth]{images/1airport1_000004}
}
\subfloat[\texttt{1manko3}]{
\includegraphics[width=0.235\textwidth]{images/1manko3_2}
}
\subfloat[\texttt{2jiansha5}]{
\includegraphics[width=0.235\textwidth]{images/2jiansha5}
}
\subfloat[\texttt{randomcross3}]{
\includegraphics[width=0.235\textwidth]{images/randomcross3}
}\\
\subfloat[\texttt{3shatian6}]{
\includegraphics[width=0.235\textwidth]{images/3shatian6}
}
\subfloat[\texttt{seq1}]{
\includegraphics[width=0.235\textwidth]{images/GVEII_0017.jpg}
}
\subfloat[\texttt{eth}]{
\includegraphics[width=0.235\textwidth]{images/eth.jpg}
}
\subfloat[\texttt{hotel}]{
\includegraphics[width=0.235\textwidth]{images/hotel.jpg}
}
\caption{Examples of groups detected through our method: sequences (a) to (e) are from \emph{MPT-$20$x$100$}, (f) is part of \emph{GVEII}, and (g) and (h) belong to the \emph{BIWI} dataset. Groups are identified regardless of the scene context, and errors are visually acceptable, as in (d).}
\label{fig:more_results}
\end{figure*}
\section{Introduction}
\label{sec:intro}
\IEEEPARstart{C}{rowd} phenomena are complex and their logic still escapes formal rules and precise social explanations.
Ultimately, the ambition of crowd analysis is to characterize people's behaviors, predict and prevent potentially dangerous situations, and improve the well-being of communities.
This has been traditionally provided by simulation models~\cite{vizzari12} or automatic video analysis~\cite{ge_vision-based_2012}.
Recently, \emph{groups} have been recognized as the
basic elements which compose the crowd~\cite{moussaid_walking_2010},
leading to an intermediate level of abstraction placed between two opposing views: the crowd as a flow of indistinguishable people~\cite{Moore:2011:VCS:2043174.2043192} and its interpretation as a collection of individuals~\cite{PhysRevE.51.4282}.
Identifying groups is consequently a mandatory step to grasp the complex social dynamics ruling collective behaviors in crowds.
This poses new challenges for computer vision, since groups are considerably more difficult to characterize than pedestrians acting alone or than the crowd as a whole.
In this work, we propose a learning-based solution for visually detecting groups in low/medium density crowds (Fig.~\ref{fig:crowd}), under the hypothesis that the \emph{concept of group} can be visually discerned and that people trajectories can be extracted to some extent. The main novelty of our approach is the joint adoption of sociologically grounded features and a learning framework able to specialize the concept of group to account for different scenarios, motion constraints and crowd densities.
To this end, we adhere to a classical sociological interpretation of groups~\cite{turner81}, which can be formalized as follows.
\begin{definition}
\label{def:group}
A group is defined as two or more people interacting to reach a common goal and perceiving a shared membership, based on both physical and social identity.
\end{definition}
Accordingly, we propose a new formulation of the problem of detecting groups in crowds as a supervised \emph{Correlation Clustering} (CC)~\cite{bansal_correlation_2002}. We solve it through a \emph{Structural Support Vector Machines} (Structural SVM)~\cite{joachims_cutting-plane_2009} framework that learns a context dependent distance measure, based on a set of features inspired by Def.~\ref{def:group} effective on both ground truth trajectories and automatically obtained tracklets. The design of socially grounded features is one of the main contributions of the work.
\begin{figure}[t!]
\centering
\subfloat[\texttt{student003}]{
\includegraphics[width=0.31\columnwidth]{images/university1.png}
}
\subfloat[\texttt{1shatian3}]{
\includegraphics[width=0.31\columnwidth]{images/1shatian3.png}
}
\subfloat[\texttt{1dawei1}]{
\includegraphics[width=0.31\columnwidth]{images/1dawei1.png}
}
\caption{Examples of social groups detected in crowds.}
\label{fig:crowd}
\end{figure}
Moreover, a new socially based \emph{loss function} ($G$-MITRE) is defined for the Structural SVM.
Differently from previous solutions~\cite{pellegrini_improving_2010} and~\cite{ge_vision-based_2012}, our approach does not rely on scene-dependent parameters that would limit its applicability in real world contexts. Finally, we also propose an online learning procedure that handles smooth variations in crowd composition and density, useful in online surveillance.
We annotated and made publicly available two new datasets: \emph{MPT-${\bf 20}$x${\bf 100}$} and \emph{GVEII} (see Sec.~\ref{sec:exp}). Results on standard benchmarks, as well as on the proposed datasets, outperform current methods.
We strongly believe that an automatic system for group detection will influence future visual surveillance of public areas and will benefit modeling and simulation applications for architectural planning by providing real and precise observations of crowd phenomena.
\section{Related Work}
The modeling of pedestrian dynamics in crowds is a relatively recent research field. Most of the works are based on sociological paradigms,
and computer vision approaches have also evolved under the influence of these theories.
\subsubsection*{Modeling and Observing the Crowd}
Most of the research work has tried to tackle the crowd as an exclusively collective phenomenon, where individuality does not exist. This recalls the primitive \emph{Popular Mind Theory}~\cite{bon2003crowd} by Gustave Le Bon, where the crowd was defined as a ``pathological monster with no individual consciousness''. Accordingly, crowds have been analyzed by means of physical models (\emph{e.g.} hydrodynamics~\cite{Moore:2011:VCS:2043174.2043192}), neglecting the existence of single individual purposes and goals. However, these models are effective mainly in extremely dense crowds.
Conversely, many other approaches have been inspired by the 1970s \emph{Social Loafing Theory}~\cite{Ingham1974371}, which stated that individuality is a strong requirement for the pursuit of personal goals. Helbing's \emph{Social Force Model}~\cite{PhysRevE.51.4282}, which asserts that one's movements towards her goals are influenced by the surrounding pedestrians, has been the main building block for many crowd modeling and analysis works, ranging from abnormal behavior detection~\cite{5206641} to tracking~\cite{5509779}.
Recently, studies on people attending events have underlined that most of the people tend to move in groups and social relations influence the way people behave in crowds~\cite{moussaid_walking_2010,bandini_crowd_2012}.
These empirical observations are supported by Reicher's recent \emph{Social Identity Model of Deindividuation Effects}~\cite{Reicher95}, which assumes that crowd behavior is regulated by the social rules and behaviors groups choose to adopt. This is the main social paradigm underpinning our research as well.
\subsubsection*{Visual Detection of Groups in Crowds}
It was only recently that group detection showed promising results. The process is in fact built upon several open challenges in computer vision, starting from people detection and tracking in crowds~\cite{Rodriguez11} to analyzing and grouping extracted trajectories~\cite{solera_structured_2013}.
Some works employ the concept of \emph{F-formations} by Kendon~\cite{kendon1990conducting} to discern the group formation process. Broadly speaking, F-formations can be seen as specific positional and orientational patterns that people must sustain in order to be considered engaged in a social relationship. Despite robust results~\cite{cristani2011social}, this theory suits stationary groups only and is not defined for moving groups, a case which cannot be ignored in crowd analysis.
Thus, complementary approaches analyze pedestrian motion paths; according to the type of available tracklets, they can be partitioned into group-based, individual-group joint and individual-based approaches.
In \textit{group-based} approaches, groups are considered as atomic entities in the scene, since no higher level information can be cleanly extracted, typically due to heavy noise or the high complexity of crowded scenes \cite{Feldmann11, shao14}. Since these models are often too simplistic to further infer on group behavior, \textit{individual-group joint} approaches try to overcome the lack of finer information by hypothesizing trajectories while tracking groups at a coarser level \cite{Pang08, Bazzani12}.
Finally, \textit{individual-based} tracking algorithms build upon single pedestrians' trajectories.
This kind of approach has been gaining momentum only recently, since tracking even in high density crowds is becoming a more feasible task every day~\cite{Rodriguez11}.
Pellegrini~\emph{et al.}~\cite{pellegrini_improving_2010} employ a Conditional Random Field to jointly predict trajectories and estimate group memberships, modeled as latent variables, over a short time window.
Yamaguchi~\emph{et al.}~\cite{yamaguchi_who_2011} predict whether two pedestrians are in the same group through a linear SVM on trivial distance, speed difference and time overlap information.
Recently, Chang~\emph{et al.}~\cite{Chang11} proposed a soft segmentation process to partition the crowd by constructing a weighted graph, where the edges represent the probability of individuals to belong to the same group.
An interesting unsupervised approach is that of Zanotto~\emph{et al.}~\cite{zanotto12}, where a potentially infinite mixture model is fitted on pedestrians, regarded as sampled observations from the mixture. Data and predictions from previous frames are used as prior information for the models (one for each group), but pairwise relations between individuals are neglected, as groups are modeled only through the mean position and velocity of their members.
Above all, we mention Ge~\emph{et al.}~\cite{ge_vision-based_2012}, who suggest the use of an agglomerative approach to cluster trajectories, as we do.
They hierarchically merge clusters by evaluating a well-founded sociological inter-group closeness measure defined on a combination of proximity and velocity features, stopping when a given condition is met.
Conversely, our method relies neither on fixed relative position or velocity thresholds~\cite{ge_vision-based_2012,zanotto12} nor on sequence-dependent parameters~\cite{pellegrini_improving_2010}; it is flexible and general, as the features are not scene-specific~\cite{Chang11} and their contribution is learned from examples. Thanks to the use of a clustering inference rule, the solutions proposed by our method are partitions, not coverings, of the members of the crowd~\cite{yamaguchi_who_2011}, meaning that pairwise relations are consistent with the overall group structure found. Moreover, the use of a time window to predict groups lets the method recognize that non-trivial behaviors (\emph{e.g.}~temporary departures from strict proximity) may occur, whereas frame-by-frame methods are limited to short term reasoning~\cite{zanotto12}. Yet, the discriminative nature of the employed framework makes learning efficient in terms of both required data and computational cost, as opposed to graphical models optimizing over a multiple hypothesis space~\cite{pellegrini_improving_2010}.\\
\noindent This work extends our preliminary attempt in~\cite{solera_structured_2013}. Here we prove that our proposal complies with social theories of group formation, we devise and investigate new features to better adhere to the sociological theory underpinning our method and, finally, we extend the tests to new, remarkably complex datasets and compare with more recent competing algorithms.
Besides, the experiments further probe the need for learning when dealing with heterogeneous crowds, shedding light on the nature of the problem itself.
\section{Problem Definition}
\label{sec:problem_def}
We cast the group detection task as a clustering problem. Consider a set of pedestrians $M = \{a, b, \dots\}$ and let $\mathcal{Y}(M)$ be the set of all possible ways to partition $M$. Defining $y$ as a subset of pedestrians (also referred to as a group or cluster) in $M$, a generic set of subsets ${\bf y}~=~\{y_1, y_2, \dots\}$ is a valid solution in $\mathcal{Y}(M)$ if the partitioning axioms are satisfied: $\forall a\in M, \exists!y\in{\bf y}:a\in y$ and $\cup_{y\in{\bf y}}y = M$.
Here, we call \emph{singletons} those pedestrians whose cluster is composed of themselves only, \emph{i.e.} $|y| = 1$.
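For concreteness, the two partitioning axioms can be verified programmatically. The following sketch is illustrative only (the function and variable names are hypothetical, not part of the paper's implementation): it checks that a candidate solution covers the whole set $M$ and that every pedestrian belongs to exactly one cluster.

```python
def is_partition(groups, M):
    """Check the two partitioning axioms: every pedestrian of M belongs
    to exactly one cluster, and the clusters jointly cover M."""
    members = [a for y in groups for a in y]
    unique = len(members) == len(set(members))   # exists-unique axiom
    covers = set(members) == set(M)              # union-equals-M axiom
    return unique and covers

# {'c'} and {'d'} are singletons, i.e. clusters with |y| = 1
valid = is_partition([{'a', 'b'}, {'c'}, {'d'}], ['a', 'b', 'c', 'd'])
```

Note that overlapping clusters (a covering rather than a partition) fail the uniqueness check, which is precisely the distinction between partitions and coverings drawn in the comparison with competing methods.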
In crowded contexts, this grouping cannot be solved by exploiting spatial (positional or orientational) information only, as proposed in F-formation theory, due both to clutter and to motion. Moreover, it is often the case that the physical distance between a singleton and a member of a cluster is lower than that cluster's intra-member mean distance. This is because, in real situations, social aspects heavily intervene in the group formation process.
In order to obtain crowd partitions that are meaningful from a sociological point of view,
the following relevant properties of social groups
must hold.
{\bf Hierarchical Coherence.}
Groups are composed of individuals and sub-groups in a recursive fashion (Fig.~\ref{fig:propertya}). This was first observed in the seminal work of Canetti~\cite{canetti_crowds_1984}, based on the assumption that members within a group cannot erase already settled relationships as the crowd assembles.
{\bf Density Invariance.} To keep their group identities preserved at different crowd densities, members must be willing to change the inner distance among them. Groups in very crowded scenes will be more closed and compact, while groups in open spaces will tend to exhibit more dilated patterns (Fig.~\ref{fig:propertyb}); sociological and empirical evidence can be found in Bandini~\emph{et al.}~\cite{bandini_crowd_2012} and in Moussaid~\emph{et al.}~\cite{moussaid_walking_2010}.
{\bf Transitivity.} Not every member of a group needs to be strictly connected with everyone else: any two members may be part of the same group by means of a sufficiently dense subgroup of pedestrians standing between them (Fig.~\ref{fig:propertyc}). McPhail and Wohlstein's work~\cite{mcphail_using_1982} formalized this idea: to be considered part of a group, one typically has to be connected with at least half of its members.
\begin{figure}[t!]
\centering
\subfloat[]{
\includegraphics[width=0.3\columnwidth]{images/hc}
\label{fig:propertya}
}~
\subfloat[]{
\includegraphics[width=0.3\columnwidth]{images/di}
\label{fig:propertyb}
}~
\subfloat[] {
\includegraphics[width=0.3\columnwidth]{images/t}
\label{fig:propertyc}
}
\caption{Highlights of social groups properties: (a) \emph{hierarchical coherence}, (b) \emph{density invariance} and (c) \emph{transitivity}.}
\end{figure}
\section{Socially Constrained Clustering for Groups Detection}
\label{sec:solution}
We propose to solve the crowd partitioning problem by employing \emph{Correlation Clustering} (CC)~\cite{bansal_correlation_2002}, and we prove it is possible to achieve a quasi-optimal crowd partition guaranteed to satisfy the three properties of Sec.~\ref{sec:problem_def}.
The CC algorithm takes as input an affinity matrix $W$ where, if $W^{ab}>0$ ($W^{ab}<0$), elements $a$ and $b$ belong to the same (a different) cluster with certainty $|W^{ab}|$. The algorithm returns the partition ${\bf y}$ of a set of elements $M=\{a, b, \dots\}$ such that the sum of the affinities between item pairs within the same cluster $y$ is maximized:
\begin{equation}
\label{eq:correlation_clustering_objective}
\text{CC} = \arg\max
_{{\bf y}\in\mathcal{Y}(M)}\sum_{y\in{\bf y}}\sum_{a\neq b\in y}W^{ab}_{\bf d}.
\end{equation}
The pairwise affinity in $W$ is parameterized as a weighted linear combination of a bounded dissimilarity measure and its complement:
\begin{equation}
\label{eq:cc_affinity_parametrization}
W^{ab}_{\bf d} = {\boldsymbol\alpha}^T ({\bf 1} - {\bf d}(a, b)) - {\boldsymbol\beta}^T {\bf d}(a, b).
\end{equation}
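As a small illustration (a sketch, not the paper's code), the affinity of Eq.~\eqref{eq:cc_affinity_parametrization} can be computed directly from a bounded dissimilarity vector: with positive weights, pairs with small dissimilarity attract ($W^{ab}>0$) and pairs with large dissimilarity repel ($W^{ab}<0$).

```python
import numpy as np

def affinity(d_ab, alpha, beta):
    """W^{ab} = alpha^T (1 - d(a,b)) - beta^T d(a,b), with d(a,b) a
    bounded dissimilarity vector in [0, 1]^p and alpha, beta positive."""
    d_ab = np.asarray(d_ab, dtype=float)
    return float(alpha @ (np.ones_like(d_ab) - d_ab) - beta @ d_ab)

alpha = beta = np.ones(2)
w_close = affinity([0.1, 0.2], alpha, beta)  # small dissimilarity -> W > 0
w_far   = affinity([0.9, 0.8], alpha, beta)  # large dissimilarity -> W < 0
```

The sign change as the dissimilarity grows is what lets a single linear parametrization encode both attraction and repulsion between pedestrian pairs.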
To be consistent with the definition of groups of Sec.~\ref{sec:intro}, we devise the pairwise distance ${\bf d}(a, b)$ between pedestrians $a$ and $b$ as detailed in Sec.~\ref{sec:features}.
In clustering theory, changing the dissimilarity space results in different partitioning of the domain through the same algorithm.
By tuning the $[{\boldsymbol\alpha}, {\boldsymbol\beta}]$ parameters in Eq.~\eqref{eq:cc_affinity_parametrization} we can evaluate many different groupings, and we will show that, under a restricted set of hypotheses, they all satisfy the social properties previously mentioned.
In order to efficiently learn those parameters according to the different peculiarities groups exhibit in different scenarios, in Sec.~\ref{sec:learning} we introduce a Structural SVM~\cite{tsochantaridis_large_2005} with both an approximate inference procedure and a loss function specifically designed to accurately measure the compatibility among possible crowd partitions.
\\
The solution to Eq.~\eqref{eq:correlation_clustering_objective}, given the parametrization introduced in Eq.~\eqref{eq:cc_affinity_parametrization} and subject to a hierarchical inference procedure, guarantees the satisfaction of all the social group properties:
\begin{theorem}
When the pairwise affinity in $W$ is a weighted linear combination of a bounded dissimilarity measure and its complement, a bottom-up approximated solution to CC produces a partition that respects the hierarchical coherence, density invariance and transitivity properties of social groups.
\end{theorem}
\begin{proof}
Let ${\bf d}:M\times M\rightarrow[0, 1]^p$ be a bounded distance on the set of members of a crowd $M$, so that $(M, {\bf d})$ is a dissimilarity space, and suppose the affinity matrix of CC is constructed as in Eq.~\eqref{eq:cc_affinity_parametrization}, for some appropriate positive values of ${\boldsymbol\alpha}, {\boldsymbol\beta} \in \mathbb{R}^p$. To demonstrate that \emph{density invariance} holds for all solutions of CC, consider that when the density increases, both the distances between groups and those between members of the same group diminish. This phenomenon is a less formal statement of the scale invariance axiom of clustering defined by Kleinberg~\cite{kleinberg_impossibility_2002}, which is known to hold for sum-of-pairs clustering algorithms.
We must thus show that it also holds when maximizing affinities instead of minimizing distances. To this aim let ${\bf d} = \lambda\bar{\bf d}$ with $\bar{\bf d}:M\times M\rightarrow[0, \frac{1}{\lambda}]^p$, so that
\begin{equation}
\begin{aligned}
W_{\bf d} &= {\boldsymbol\alpha}^T ({\bf 1} - \lambda\bar{\bf d}) - {\boldsymbol\beta}^T \lambda\bar{\bf d}\\
&= \lambda[{\boldsymbol\alpha}^T(\frac{1}{\lambda} - \bar{\bf d})-{\boldsymbol\beta}^T\bar{\bf d}] = \lambda W_{\bar{\bf d}},
\end{aligned}
\end{equation}
where the notation for the elements is dropped for clarity. Consequently, CC satisfies the scale invariance axiom, since multiplying all distances by a constant results in multiplying the total affinity of each cluster by a constant, and hence the maximum affinity clustering solution is unchanged. \emph{Transitivity} follows directly from the objective function of CC in Eq.~\eqref{eq:correlation_clustering_objective}:
for two members to be assigned to the same group, it suffices that there exists any number of members between them such that the net effect of all the involved pairwise relations is non-decreasing.
Last, \emph{hierarchical coherence} requires a greedy approximation algorithm to optimize the CC objective that initially considers each pedestrian as its own cluster and then iteratively merges the two clusters whose union produces the best clustering score, stopping when joining clusters would decrease the overall affinity.
Hence, elements in the same cluster at lower levels of the hierarchy are also together in higher level clusters.
\end{proof}
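The greedy bottom-up approximation invoked in the proof can be sketched as follows (an illustrative implementation under the stated assumptions, not the authors' optimized code): starting from singletons, the pair of clusters whose merge yields the largest affinity gain is joined, until no merge increases the objective; since clusters are only ever merged and never split, elements grouped at lower levels stay together at higher levels.

```python
import numpy as np
from itertools import combinations

def greedy_cc(W):
    """Bottom-up approximation of Correlation Clustering: start with
    singleton clusters, repeatedly merge the pair of clusters whose
    union adds the largest positive inter-cluster affinity, and stop
    when every possible merge would decrease the overall score."""
    clusters = [{i} for i in range(W.shape[0])]
    while len(clusters) > 1:
        best_gain, best_pair = 0.0, None
        for (i, ci), (j, cj) in combinations(enumerate(clusters), 2):
            gain = sum(W[a, b] for a in ci for b in cj)  # merge gain
            if gain > best_gain:
                best_gain, best_pair = gain, (i, j)
        if best_pair is None:          # no merge improves the objective
            break
        i, j = best_pair
        clusters[i] |= clusters[j]     # hierarchical coherence: clusters
        del clusters[j]                # are only merged, never split
    return clusters
```

On a toy affinity matrix where pedestrians $0,1$ and $2,3$ attract within pairs and repel across pairs, the procedure returns the partition $\{\{0,1\},\{2,3\}\}$ and stops, since the remaining cross-pair merge gain is negative.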
\begin{figure*}[t!]
\centering
\subfloat[Physical distance]{
\includegraphics[width=0.22\textwidth]{images/d_ph_print.pdf}
\label{fig:d_ph}
}
~
\subfloat[Motion causality]{
\includegraphics[width=0.22\textwidth]{images/d_ca_print.pdf}
\label{fig:d_ca}
}
~
\subfloat[Trajectory shape]{
\includegraphics[width=0.22\textwidth]{images/d_sh_print.pdf}
\label{fig:d_sh}
}
~
\subfloat[Paths convergence]{
\includegraphics[width=0.22\textwidth]{images/d_he_print.pdf}
\label{fig:d_he}
}
\caption{Features: physical identity (a) and social identity (b,c) provide a computational interpretation of the concept of group membership, while (d) evaluates the likeliness of the existence of a shared goal between pedestrians.}
\label{fig:features}
\vspace{-0.5cm}
\end{figure*}
\section{Social Features for Social Groups}
\label{sec:features}
Given the problem formulation in Sec.~\ref{sec:problem_def} and the CC parametrization of Eq.~\eqref{eq:cc_affinity_parametrization}, here we define the distance function ${\bf d}$, which acts on trajectory pairs.
We consider the pedestrian trajectory $T_a = \{(t, {\bf p}_a^t)\}_t$, projected onto the ground plane, as a multivariate time series of metric (in meters) spatial observations ${\bf p}_a^t$ for pedestrian $a$ at different times $t$.
In order to deal with the continuously changing nature of groups (splitting, merging, switching members, $\dots$), we reduce the observation period to a time window $\mathcal{T}$ of fixed length. As a consequence, groups can be detected differently even between (potentially overlapping) sequential time windows $\mathcal{T}_k$ and $\mathcal{T}_{k+1}$.
According to Def.~\ref{def:group},
we devise four features able to capture both the pedestrians' physical and social identity as well as to discern the presence of a shared goal among them,
namely: \textit{physical identity} $d_\text{ph}$, \textit{trajectory shape-similarity} $d_\text{sh}$, \textit{pedestrians causality} $d_\text{ca}$ and \textit{heat-maps} $d_\text{he}$.
A pairwise feature vector ${\bf d}^k(a, b)$ is hence defined for every pair of trajectories $T_a$ and $T_b$ and for every time window $\mathcal{T}_k$, as
\begin{equation}
{\bf d}(a,b)\stackrel{\text{\tiny def}}{=} {\bf d}^k(a,b) = [d_\text{ph}, d_\text{sh}, d_\text{ca}, d_\text{he}]_{a,b}^k.
\end{equation}
\subsection{From Physical Distances to Physical Identity}
\begin{figure}[t]
\center
\subfloat[]{
\includegraphics[width=0.60\columnwidth]{images/prox3.jpg}
}
\subfloat[]{
\includegraphics[width=0.30\columnwidth]{images/GMM.pdf}
}
\caption{Proxemics, modeled by Gaussians (b), reveals physical identity through physical distance (a).}
\label{fig:GMM}
\end{figure}
The \emph{physical identity} can be regarded as a static relation connecting physical distance to group membership.
In his \emph{Proxemic Theory}, Hall~\cite{hall66} focused on the physical interactions between pairs of individuals. More precisely, the theory is about ``the study of ways in which man gains knowledge of the content of other men's minds through judgments of behaviour patterns associated with varying degrees of proximity to them.''
\begin{table}[t!]
\center
\caption{Proxemics characterization as found in Hall's Theory.}
\begin{tabular}{|l|c|l|}
\hline
\multicolumn{1}{|c|}{\bfseries space} & \multicolumn{1}{c|}{\bfseries boundaries ($m$)} & \multicolumn{1}{c|}{\bfseries description}\\
\hline
intimate & 0.0 - 0.5 & unmistakable involvement\\
\hline
personal & 0.5 - 1.2 & familiar interactions\\
\hline
social & 1.2 - 3.7 & formal relationships\\
\hline
public & 3.7 - 7.6 & non-personal interactions\\
\hline
\end{tabular}
\label{tab:prox}
\end{table}
The proxemic model formalizes how people use physical space in interpersonal interactions and
defines a set of concentric bubbles around every individual, as depicted in Fig.~\ref{fig:d_ph}.
Nevertheless, the transition between the four proxemic zones is abrupt (Tab.~\ref{tab:prox}), and spatial quantization can be heavily affected by noise or errors, leading to wrong classifications.
Several approaches assign a score to proxemic classes in order to obtain a continuous real-valued similarity measure~\cite{6239351,6113127,vizzari12}.
To grasp the distance-based characteristics of group formation, we relax Hall's original quantization by employing a Gaussian Mixture Model (GMM) on the ground plane, centered on the person's location and with fixed proxemics-inspired covariance matrices.
The resulting GMM is a weighted sum of zero-mean Gaussians with diagonal covariance matrices reflecting Hall's boundaries (\emph{i.e.} $\Sigma_1\leftarrow0.5$, $\Sigma_2\leftarrow1.2$, \ldots):
\begin{equation}
\text{GMM}({\bf p}_a^t-{\bf p}_b^t)=\frac{1}{4}\sum\limits_{z=1}^4 \mathcal{N}( {\bf p}_a^t-{\bf p}_b^t \vert 0,\Sigma_z)
\label{eq:GMM}
\end{equation}
Given a pair of trajectories $T_a$ and $T_b$, we evaluate the mixture model of Eq.~\eqref{eq:GMM} on the vector of distances at each time instant.
This is equivalent to placing the mixture on ${\bf p}_a^t$ and measuring where the point ${\bf p}_b^t$ lies inside the proxemic space at each instant $t$, as shown in Fig.~\ref{fig:GMM} and in Fig.~\ref{fig:prox2}.
The static measure of social cohesion, called $d_\text{ph}$, is then defined by averaging the mixture model responses over the set of time instants where trajectories $T_a$ and $T_b$ are simultaneously present in the current time window, $\overline{\mathcal{T}}\subseteq\mathcal{T}_k$:
\begin{equation}
d_\text{ph}^k(a,b) = \frac{1}{|\overline{\mathcal{T}}|}\sum_{t\in\overline{\mathcal{T}}}\text{GMM}({\bf p}_a^t-{\bf p}_b^t)
\end{equation}
Averaging is required since the physical identity among group members is established over time and must remain coherent in order to be a valid measure of social cohesion.
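To make the computation concrete, the $d_\text{ph}$ feature of Eq.~\eqref{eq:GMM} can be sketched as below (an illustrative implementation with hypothetical helper names; the exact covariances and any normalization of the response are design choices of the actual system). Hall's boundaries serve as the standard deviations of the four zero-mean components.

```python
import numpy as np

HALL_SIGMAS = [0.5, 1.2, 3.7, 7.6]   # proxemic boundaries in meters

def gmm_response(delta):
    """Evaluate the equally weighted, zero-mean 4-component GMM on a
    2-D displacement vector p_a^t - p_b^t (diagonal covariances
    sigma_z^2 * I)."""
    delta = np.asarray(delta, dtype=float)
    r2 = float(delta @ delta)
    dens = [np.exp(-r2 / (2 * s ** 2)) / (2 * np.pi * s ** 2)
            for s in HALL_SIGMAS]
    return sum(dens) / 4.0

def d_ph(Ta, Tb):
    """Average the GMM response over the frames where both trajectories
    exist; Ta and Tb map frame index -> 2-D ground-plane position."""
    common = sorted(set(Ta) & set(Tb))
    return sum(gmm_response(np.subtract(Ta[t], Tb[t]))
               for t in common) / len(common)
```

Two pedestrians walking shoulder to shoulder score higher than two far-apart ones, so the raw response behaves as a cohesion score; mapping it into the bounded-dissimilarity convention of Sec.~\ref{sec:solution} is a normalization detail omitted here.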
\subsection{Motion as an Indicator of Social Identity}
\emph{Social Identity}~\cite{haslam2004psychology,turner81} is a psychological paradigm built on the intuition that group behavior is an emerging dynamic, reflecting a shift in self-conception of the members who start to define themselves in terms of their common membership.
According to~\cite{Oldmeadow05task-groupsas}, social identity is reflected in the way people mutually influence each other and consequently move in groups, suggesting that social identity can be observed through trajectory shape similarity and path temporal causality.
\subsubsection{Temporal Causality}
Under the hypothesis of sufficiently stationary trajectories, which typically holds within the observation of a time window, we can employ the econometric model of Granger causality~\cite{granger_investigating_1969} to measure to what extent pedestrians mutually affect their motion paths~\cite{couzin_effective_2005}. Accordingly, we formalize two requirements:
\begin{enumerate}
\item the causal pedestrian will move before the effect pedestrian, and
\item the motion of the causal pedestrian contains information about the way the effect pedestrian moves that cannot be found in any other pedestrian motion.
\end{enumerate}
A consequence of these statements is that the causal pedestrian's trajectory can help forecast the effect pedestrian's trajectory even after other data has first been used. Let us define $m$ as the lag value for the causality analysis and denote by $P_t(T_a|\bar{T}_a(t-m))$ the optimum least-squares predictor of a stationary trajectory $T_a$ at time $t$ using the set of values $\bar{T}_a(t-m)$. Here $\bar{T}_a(t-m)$ is all the information about trajectory $T_a$ accumulated from time $t-m$ (inside the current time window $\mathcal{T}_k$) up to time $t-1$. The prediction error series is denoted by $\varepsilon_t(T_a|\bar{T}_a(t-m)) = T_a(t) - P_t(T_a|\bar{T}_a(t-m))$, and $\sigma^2(T_a|\bar{T}_a(t-m))$ is defined as the variance of $\varepsilon_t(T_a|\bar{T}_a(t-m))$. Trajectory $T_b$ is said to \emph{Granger cause} $T_a$, briefly $b\rightarrow a$, if
\begin{equation}
\sigma^2(T_a|\bar{T}_a(t-m)) > \sigma^2(T_a|\bar{T}_a(t-m), \bar{T}_b(t-m))
\end{equation}
\noindent The feature is then derived from a testing procedure that evaluates the statistical significance of the Granger causality.
Let us introduce the sums of squared residuals for the constrained and unconstrained models as
\begin{equation}
\begin{aligned}
RSS_c &= \sum_{t=1}^{K}\varepsilon_t(T_a|\bar{T}_a(t-m))^2 \quad\text{and}\quad\\
RSS_u &= \sum_{t=1}^{K}\varepsilon_t(T_a|\bar{T}_a(t-m), \bar{T}_b(t-m))^2,
\end{aligned}
\end{equation}
where $K$ is the number of samples considered for the analysis.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/FS_distribution.pdf}
\caption{Visual example of causality probability. The vertical line is the $S$ of Eq.~\eqref{eq:ca_S} while the shaded area is $d_\text{ca}$.}
\label{fig:caus_S}
\end{figure}
We design our feature $d_\text{ca}$ so as to be the critical confidence measure of the hypothesis that Granger causality exists between $T_a$ and $T_b$. To this end, we consider the test statistic
\begin{equation}
\label{eq:ca_S}
S_{b\rightarrow a} = \frac{(RSS_c-RSS_u)/m}{RSS_u/(K-2m-1)}
\end{equation}
and compute the area under the Fisher-Snedecor probability density function $\mathcal{F}$ to the left of $S$, as shown in Fig.~\ref{fig:caus_S}. This yields the closed-form integral~\cite{hazewinkel_encyclopaedia_1989}:
\begin{equation}
d^k_\text{ca}(a,b) = \max_{S\in\{S_{b\rightarrow a}, S_{a\rightarrow b}\}}\int_{0}^{S} \mathcal{F}(x\vert m, K-2m-1)\mathrm{d}x,
\end{equation}
where $S_{b\rightarrow a}$ and $S_{a\rightarrow b}$ are both considered in order to obtain symmetry; but, as we value the existence of causality over its direction, we only keep the one which maximizes the probability.
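As a concrete illustration, the feature above can be sketched in pure Python: the snippet fits the restricted and unrestricted lag models by ordinary least squares on synthetic 1-D trajectories where $b$ leads $a$, and computes the test statistic of Eq.~\eqref{eq:ca_S}. This is only a minimal sketch under illustrative assumptions (1-D positions, $m=1$, hand-rolled OLS); a full implementation would obtain $d_\text{ca}$ by evaluating the Fisher-Snedecor CDF at $S$, e.g.\ with \texttt{scipy.stats.f.cdf}.

```python
import random

def ols_rss(X, y):
    """Residual sum of squares of the least-squares fit y ~ [1, X]."""
    A = [[1.0] + list(row) for row in X]
    n, p = len(A), len(A[0])
    # Normal equations (A^T A | A^T y), solved by Gaussian elimination.
    M = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(p)]
         + [sum(A[k][i] * y[k] for k in range(n))] for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, p):
            f = M[r][c] / M[c][c]
            M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    w = [0.0] * p
    for c in reversed(range(p)):
        w[c] = (M[c][p] - sum(M[c][j] * w[j] for j in range(c + 1, p))) / M[c][c]
    return sum((y[k] - sum(wj * A[k][j] for j, wj in enumerate(w))) ** 2
               for k in range(n))

def granger_stat(Ta, Tb, m=1):
    """Test statistic S_{b->a}: does adding Tb's lags help predict Ta?"""
    K = len(Ta) - m                                   # usable samples
    y = Ta[m:]
    Xc = [[Ta[t - l] for l in range(1, m + 1)] for t in range(m, len(Ta))]
    Xu = [Xc[t - m] + [Tb[t - l] for l in range(1, m + 1)]
          for t in range(m, len(Ta))]
    rss_c, rss_u = ols_rss(Xc, y), ols_rss(Xu, y)
    return ((rss_c - rss_u) / m) / (rss_u / (K - 2 * m - 1))

# Synthetic 1-D trajectories: pedestrian a follows b with one step of delay.
random.seed(0)
b = [random.gauss(0.0, 1.0)]
for _ in range(199):
    b.append(0.5 * b[-1] + random.gauss(0.0, 1.0))
a = [0.0] + [0.8 * b[t - 1] + random.gauss(0.0, 0.1) for t in range(1, 200)]
print(granger_stat(a, b) > granger_stat(b, a))  # b -> a is the stronger link
```

As expected, the statistic for the true direction $b\rightarrow a$ dominates the reverse one, so the feature keeps the former.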
\subsubsection{Shape Similarity}
Shape similarity may also be useful in describing social identity, as it overcomes the punctual and static nature of the proxemic evaluation.
We use Dynamic Time Warping (DTW)~\cite{berndt_using_1994} on Euclidean coordinates to map one time series to another by minimizing the distance between the two. In particular, DTW's flexibility allows two time series that are similar but locally out of phase to align in a non-linear manner.
Suppose we have two trajectories $T_a$ and $T_b$ of lengths $A$ and $B$ respectively.
To align these two sequences using DTW, we first construct a distance matrix $\{D^{ij}_{ab}\}_{ij}\in\mathbb{R}^{A\times B}$ that encodes the squared Euclidean distance between any $i$-th element of $T_a$ and $j$-th element of $T_b$ inside the current time window.
The best alignment can be found by a recursive minimization of the cumulative cost $\gamma_{ab}$ of any path through the distance matrix originating in $D_{ab}^{11}$:
\begin{equation}
\gamma_{ab}(i, j) = D^{ij}_{ab} + \min\{\gamma_{ab}(i\text{-}1, j), \gamma_{ab}(i\text{-}1, j\text{-}1), \gamma_{ab}(i, j\text{-}1)\}.
\end{equation}
In particular, we construct our feature to be the distance of the two sequences once they are optimally aligned, that is the sum of the Euclidean distances of associated points of $T_a$ and $T_b$:
\begin{equation}
d_\text{sh}(a,b) = \gamma_{ab}(A, B)/\max(A,B),
\end{equation}
where the denominator acts as a normalization factor accounting for the warping path length.
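For reference, a minimal Python sketch of the feature follows. It fills the cumulative cost matrix $\gamma_{ab}$ of the recursion above (with the squared Euclidean cell costs of $D_{ab}$) and normalizes by the longer sequence length; trajectories are assumed to be lists of $(x,y)$ points inside the current window.

```python
def d_sh(Ta, Tb):
    """DTW-based shape similarity: gamma_ab(A, B) / max(A, B)."""
    A, B = len(Ta), len(Tb)
    INF = float("inf")
    g = [[INF] * (B + 1) for _ in range(A + 1)]        # gamma with a border
    g[0][0] = 0.0
    for i in range(1, A + 1):
        for j in range(1, B + 1):
            d = ((Ta[i - 1][0] - Tb[j - 1][0]) ** 2    # squared Euclidean cost
                 + (Ta[i - 1][1] - Tb[j - 1][1]) ** 2)
            g[i][j] = d + min(g[i - 1][j], g[i - 1][j - 1], g[i][j - 1])
    return g[A][B] / max(A, B)

walk = [(float(t), 0.0) for t in range(10)]            # straight walk
parallel = [(float(t), 1.0) for t in range(10)]        # same path, 1 m aside
far = [(float(t), 5.0) for t in range(10)]
print(d_sh(walk, walk), d_sh(walk, parallel))  # → 0.0 1.0
```

Two pedestrians walking side by side score much lower (more similar) than two walking far apart, regardless of small local phase shifts.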
\subsection{Common Goals from People Motion}
\label{sec:heatmaps}
Previously described features focus on both static and dynamic aspects of trajectories once groups are already established, but neglect the smooth process of group formation. People may merge into groups starting from different locations (\emph{e.g.} a meeting action) or groups may split into subgroups and singletons (according to the \textit{hierarchical coherence} property of group formation).
Meeting or being close for a sufficient amount of time may indicate the presence of a shared goal. Following the results in \cite{lin_heat-map-based_2013}, where heat maps were used to recognize group activities, we also employ a heat map inspired feature to holistically model groups.
A heat map $H_a:\mathbb{N}_R\times\mathbb{N}_C\rightarrow[0,1]$ associated to the trajectory $T_a$ is a $R$-by-$C$ grid of heat sources $h_a$ that partitions the ground plane. The heat source $h_a(i,j)$ activates if the pedestrian following trajectory $T_a$ walks through the corresponding grid cell $(i,j)$; once activated, it is subject to thermal decay and thermal diffusion processes:
\begin{equation}
H_a(i,j) = \sum_{p=1}^R\sum_{q=1}^C E_a(p, q) \cdot e^{-k_s\|(p-i,q-j)\|},
\end{equation}
where $k_s$ is a parameter controlling the relative importance of patches at different distances and $E_a(p, q)$ is the thermal energy produced by $T_a$ on the patch $(p, q)$. If we let $\bar{E}_a(p, q)$ be the accumulated thermal energy, we have
\begin{equation}
E_a(p, q) = \bar{E}_a(p, q)\cdot e^{-k_rt_\text{int}},
\end{equation}
where $k_r$ is a parameter regulating the slow-down of the heat accumulation and dispersion, and $t_\text{int}$ is the duration of the interaction between pedestrian $a$ and cell $(p,q)$ inside the current time window $\mathcal{T}^k$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.93\columnwidth]{images/hm2.pdf}
\caption{Intersecting heat maps are generated by converging trajectories, which project on the $xy$ plane their shared goal.}
\label{fig:hm_feature}
\end{figure}
Once we have constructed heat maps for every trajectory, we define a similarity metric between two trajectories $T_a$ and $T_b$ as the volume under the combined heat surface $\Upsilon_{ab}$ obtained as the pointwise product of the two heat maps $H_a$ and $H_b$:
\begin{equation}
d^k_\text{he}(a,b) = \sum_{i=1}^R\sum_{j=1}^C \Upsilon_{ab}(i,j) = \sum_{i=1}^R\sum_{j=1}^C H_a(i,j)H_b(i,j)
\end{equation}
The volume under $\Upsilon_{ab}$ reveals to what extent $T_a$ and $T_b$ have been close in space during the observation period, something that proxemics could already measure. Nevertheless, heat maps relax the constraint by which only elements from the same frame can be compared; in practice, this is accomplished through the thermal diffusion process.
At the same time, heat maps also expose the history of their respective trajectories, allowing the metric to capture the temporal aspect of motion similarity.
Proxemics, DTW and Granger causality would rate two pedestrians meeting and parting ways analogously, even if the former case is more likely to represent a group formation process.
Recognizing that motion trajectories also encode temporal information is a key advantage of heat-map-based analysis.
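A compact sketch of the heat-map feature follows, with illustrative parameter values and the accumulated energy $\bar{E}_a$ fixed to 1 (the $[0,1]$ normalization of $H_a$ is omitted for brevity). Each trajectory is summarized by its per-cell dwell times; energies decay with $k_r$, diffuse over the grid with $k_s$, and $d_\text{he}$ is the volume under the pointwise product of two maps.

```python
import math

R, C = 8, 8            # grid resolution (illustrative)
k_s, k_r = 1.0, 0.1    # diffusion and decay parameters (illustrative values)

def heat_map(cells):
    """cells: {(p, q): t_int} dwell times of one trajectory per grid cell."""
    # E_a(p, q) = E̅_a(p, q) * exp(-k_r * t_int), with E̅_a set to 1 here.
    E = {pq: math.exp(-k_r * t) for pq, t in cells.items()}
    H = [[0.0] * C for _ in range(R)]
    for i in range(R):
        for j in range(C):          # thermal diffusion over the whole grid
            H[i][j] = sum(e * math.exp(-k_s * math.hypot(p - i, q - j))
                          for (p, q), e in E.items())
    return H

def d_he(Ha, Hb):
    """Volume under the pointwise product of the two heat maps."""
    return sum(Ha[i][j] * Hb[i][j] for i in range(R) for j in range(C))

# Two trajectories converging on cell (4, 4) vs. one walking far away.
Ha = heat_map({(0, 4): 1.0, (2, 4): 1.0, (4, 4): 3.0})
Hb = heat_map({(4, 0): 1.0, (4, 2): 1.0, (4, 4): 3.0})
Hc = heat_map({(7, 0): 1.0, (7, 1): 1.0, (7, 2): 1.0})
print(d_he(Ha, Hb) > d_he(Ha, Hc))  # the converging pair scores higher → True
```

Even though the two converging pedestrians never share a frame position until the meeting point, their diffused heat overlaps, which is exactly the relaxation discussed above.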
\section{Learning Framework}
\label{sec:learning}
The linear parametrization of the affinity matrix $W_{\bf d}$ of Eq.~\eqref{eq:cc_affinity_parametrization} guarantees a partition of the crowd which is consistent with the social group properties. The parameters ${\bf w} = [{\boldsymbol\alpha}, {\boldsymbol\beta}]$ govern both the importance of each feature alone and their optimal similarity/dissimilarity combinations, resulting in different clustering rules.
The choice of the best rule should account for all factors affecting the group formation process, such as environmental constraints or cultural influences.
The complexity of explicitly evaluating these factors lies in the impossibility of observing them directly. Still, we can gain important insights by observing the grouping process. On these premises, we adopt a learning framework capable of choosing the most suitable clustering rule by finding a set of feature weights that implicitly embodies these non-observable aspects.
\subsection{Supervised CC Through Structured Learning}
Let us consider the input ${\bf x}_i = \{[{\bf 1} - {\bf d}^i(a,b); {\bf d}^i(a,b)]\}_{a,b}$ to be the set of pairwise features computed on all the possible pairs of trajectories $T_a$ and $T_b$ in the $i$-th temporal window and ${\bf y}_i$ the clustering solution, \emph{i.e.} the set of all social groups appearing in the crowd $M_i$. Since ${\bf y}_i$ cannot be described by a single valued function, we adopt the
Structural SVM~\cite{tsochantaridis_large_2005} framework to model and learn predicting the solution.
The goal is to learn a classification mapping $f:\mathcal{X}\rightarrow\mathcal{Y}$ between input space $\mathcal{X}$ and structured output space $\mathcal{Y}$ given a set of input-output pairs $\{({\bf x}_1, {\bf y}_1),\dots,({\bf x}_n, {\bf y}_n)\}$.
A discriminant score function $F:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}$ is defined over the joint input-output space and $F({\bf x}, {\bf y})$ can be interpreted as measuring the compatibility of ${\bf x}$ and ${\bf y}$. Now, the prediction function $f$ can be defined as
\begin{equation}
\label{eq:pred_fun}
f({\bf x})~=~\arg\max_{{\bf y}\in\mathcal{Y}({\bf x})}F({\bf x}, {\bf y})
\end{equation}
where the maximizer over the label space $\mathcal{Y}({\bf x})$ is the predicted label, \emph{i.e.} the solution of the inference problem.
For simplicity we choose to restrict the space of $F$ to linear functions over some combined feature representation $\Psi({\bf x}, {\bf y})$ subject to a ${\bf w}$ parametrization. This feature mapping cannot be defined out of the context of the problem, as it is the problem itself that specifies, given a particular input, the nature of the desired solution. Following the definition of correlation clustering in Eq.~\ref{eq:correlation_clustering_objective} and its parametrization introduced in Eq.~\ref{eq:cc_affinity_parametrization}, the compatibility of an input-output pair is directly described as
\begin{equation}
F({\bf x}, {\bf y}; {\bf w}) = {\bf w}^T\Psi({\bf x}, {\bf y}) = {\bf w}^T\sum_{y\in{\bf y}}\sum_{a\neq b\in y}{\bf x}^{ab}.
\end{equation}
The problem of learning in structured and interdependent output spaces can be formulated as a maximum-margin problem. We adopt the $n$-slack, margin-rescaling formulation:
\begin{equation}
\begin{aligned}
& \min_{{\bf w}, {\bf \xi}}
& & \frac{1}{2}\|{\bf w}\|^2+\frac{C}{n}\sum_{i=1}^n\xi_i \\
& \,\,\,\,\,\text{s.t.}
& & \forall i:\xi_i\ge0, \\
&&& \forall i,\forall{\bf y}\in\mathcal{Y}({\bf x}_i)\backslash{\bf y}_i:{\bf w}^T\delta\Psi_i({\bf y})\ge\Delta({\bf y}, {\bf y}_i)-\xi_i,
\end{aligned}
\label{optpro}
\end{equation}
where $\delta\Psi_i({\bf y}) \overset{\text{def}}{=} \Psi({\bf x}_i, {\bf y}_i) - \Psi({\bf x}_i, {\bf y})$, $\xi_i$ are the slack variables introduced in order to accommodate margin violations, $\Delta({\bf y}_i, {\bf y})$ is the loss function further defined in Sec.~\ref{sec:loss_score} and $C$ is the regularization trade-off. Intuitively, we want to maximize the margin and jointly guarantee that, for a given input, every possible output is considered worse than the correct one by at least a margin of $\Delta({\bf y}_i, {\bf y})-\xi_i$, where $\Delta({\bf y}_i, {\bf y})$ is bigger when the two predictions are known to be more different.
Remarkably, correlation clustering does not require knowing in advance how many groups are present in the scene. Moreover, a positive overall cluster score can group two elements even if their affinity measure is negative, implicitly modeling the transitive property of relationships in groups, as stated in Sec.~\ref{sec:problem_def}.
\subsection{Batch Sequential Optimization}
The quadratic program (QP)~\eqref{optpro} introduces a constraint for every possible wrong clustering of the $n$ examples, more precisely $\sum_{i=1}^n(|\mathcal{Y}({\bf x}_i)|-1)$. Unfortunately, the number of ways to partition a set $M$ scales more than exponentially with the number of items according to the Bell sequence~\cite{rota_number_1964}
\begin{equation}
|\mathcal{Y}(M)| = \sum_{i=0}^{|M|}\frac{1}{i!}\sum_{j=0}^i(-1)^{i-j}{i\choose j} j^{|M|},
\end{equation}
making the optimization intractable. As an example, for a crowd composed of 20 pedestrians the number of potential solutions would be about $5.2\cdot 10^{13}$. In order to deal with this high number of constraints, many approximation schemes have been proposed, where cutting plane algorithms and subgradient methods
are among the most commonly used. In particular, all the constraints of QP~\eqref{optpro} can be replaced by $n$ piecewise-linear ones by defining the structured hinge-loss:
\begin{equation}
\widetilde{H}({\bf x}_i) \overset{\text{def}}{=} \max_{{\bf y}\in\mathcal{Y}}\Delta({\bf y}_i, {\bf y}) - {\bf w}^T\delta\Psi_i({\bf y}).
\label{eq:maxoracle}
\end{equation}
The computation of the structured hinge-loss for each element $i$ of the training set, described in Sec.~\ref{sec:oracle}, amounts to finding the most ``violating'' output ${\bf y}$ for a given input ${\bf x}_i$ and its correct associated output ${\bf y}_i$.
We only have $n$ constraints of the form $\xi_i \geq \tilde{H}({\bf x}_i)$ and the non-smooth version of QP~\eqref{optpro} reduces to
\begin{equation}
\begin{aligned}
& \min_{{\bf w}}
& & \frac{1}{2}\|{\bf w}\|^2+\frac{C}{n}\sum_{i=1}^n\widetilde{H}({\bf x}_i).
\end{aligned}
\label{optpro_unconstrained}
\end{equation}
Given a maximization oracle, \emph{i.e.} a solver for Eq.~\eqref{eq:maxoracle} returning a solution ${\bf y}^*$,
subgradient methods can easily be applied to QP~\eqref{optpro_unconstrained}, since $\partial_{\bf w}\widetilde{H}({\bf x}_i) = -\delta\Psi_i({\bf y}^*)$.
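On toy instances the loss-augmented decoding of Eq.~\eqref{eq:maxoracle} can even be solved exactly by enumerating all partitions, which makes the role of the oracle concrete. In the sketch below a simple pairwise-disagreement loss stands in for the $G$-MITRE loss defined later, and a scalar feature/weight replaces the full feature vector; since the number of partitions grows as the Bell numbers, this brute force is feasible only for a handful of pedestrians.

```python
from itertools import combinations

def partitions(items):
    """All set partitions (Bell-number many: 5 for 3 items, 15 for 4, ...)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for k in range(len(part)):                 # join an existing cluster
            yield part[:k] + [part[k] + [first]] + part[k + 1:]
        yield [[first]] + part                     # or open a new singleton

def psi(x, y):
    """Joint feature map: sum of pairwise features inside each cluster."""
    return sum(x[frozenset(p)] for c in y for p in combinations(c, 2))

def pairwise_loss(y_true, y):
    """Stand-in for the G-MITRE loss: fraction of disagreeing pairs."""
    members = sorted(m for c in y_true for m in c)
    def same(part, u, v):
        return any(u in c and v in c for c in part)
    pairs = list(combinations(members, 2))
    return sum(same(y_true, u, v) != same(y, u, v) for u, v in pairs) / len(pairs)

def max_oracle(w, x, y_true):
    """Most violating y: argmax Δ(y_true, y) + w * (Ψ(x, y) - Ψ(x, y_true))."""
    members = sorted(m for c in y_true for m in c)
    return max(partitions(members),
               key=lambda y: pairwise_loss(y_true, y) + w * (psi(x, y) - psi(x, y_true)))

# Toy scene: pedestrians 0 and 1 walk together, 2 alone; scalar feature/weight.
x = {frozenset(p): v for p, v in {(0, 1): 0.9, (0, 2): -0.7, (1, 2): -0.8}.items()}
y_true = [[0, 1], [2]]
y_star = max_oracle(w=0.1, x=x, y_true=y_true)     # weak weight: loss dominates
print(sorted(map(sorted, y_star)))  # → [[0, 1, 2]]
```

With a weak weight the loss term dominates and the oracle returns a high-loss clustering, i.e.\ a violated constraint; with a large weight the data term wins and the ground truth itself maximizes the objective.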
\begin{algorithm}
\setstretch{1.35}
\caption{Block-Coordinate Frank-Wolfe Algorithm}
\label{BCFW}
\begin{algorithmic}[1]
\STATE Let ${\bf w}^{(0)}, {\bf w}_i^{(0)} := {\bf 0} $ and $ l^{(0)}, l_i^{(0)} := 0$
\FOR{$\text{it} := 0$ \TO $\text{maxIterations}$ }
\STATE Pick $i$ at random in $\{1, \dots, n\}$
\STATE Solve ${\bf y}^* := \arg\max_{{\bf y}\in\mathcal{Y}}\Delta({\bf y}_i, {\bf y}) - {\bf w}^T\delta\Psi_i({\bf y})$
\STATE Let ${\bf w}_s := \frac{C}{n}\delta\Psi_i({\bf y}^*)$ and $l_s := \frac{C}{n}\Delta({\bf y}_i, {\bf y}^*)$
\STATE Let $\gamma := \frac{({\bf w}_i^{(\text{it})}-{\bf w}_s)^T{\bf w}^{(\text{it})}+\frac{C}{n}(l_s-l_i^{(\text{it})})}{\|{\bf w}_i^{(\text{it})}-{\bf w}_s\|^2}$ and clip to $[0,1]$
\STATE Update ${\bf w}_i^{(\text{it}+1)} := (1-\gamma){\bf w}_i^{(\text{it})} + \gamma {\bf w}_s$ \\$\quad$ and $l_i^{(\text{it}+1)}:= (1-\gamma)l_i^{(\text{it})}+\gamma l_s$
\STATE Update ${\bf w}^{(\text{it}+1)}:= {\bf w}^{(\text{it})} + {\bf w}_i^{(\text{it}+1)} - {\bf w}_i^{(\text{it})}$\\ $\quad$ and $l^{(\text{it}+1)} := l^{(\text{it})} + l_i^{(\text{it}+1)}-l_i^{(\text{it})}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
To exploit the domain separability of the constraints and limit the number of oracle calls needed to converge to the optimal solution, we choose to adopt a Block-Coordinate version of the Frank-Wolfe algorithm (BCFW)~\cite{julien_block_coordinate_2012}, delineated in Alg.~\ref{BCFW}.
The algorithm works by minimizing the objective function of Eq.~\eqref{optpro_unconstrained} restricted to a single random example at each iteration. By calling the max oracle upon the selected training sample (line 4) we obtain a new sub-optimal parameter set ${\bf w}_s$ by simple differentiation (line 5).
The best update is then found through a closed-form line search (line~6), greatly reducing convergence time compared to other subgradient methods.\\
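The closed-form step can be sketched as follows (lines 5--8 of Alg.~\ref{BCFW}, with \texttt{Cn} standing for $C/n$ and plain Python lists as vectors); this is a schematic fragment of the block update, not the full solver:

```python
def bcfw_step(w, w_i, l_i, w_s, l_s, Cn):
    """One block update with the closed-form line search of line 6."""
    diff = [a - b for a, b in zip(w_i, w_s)]           # w_i - w_s
    den = sum(d * d for d in diff)                     # ||w_i - w_s||^2
    num = sum(d * wk for d, wk in zip(diff, w)) + Cn * (l_s - l_i)
    gamma = 0.0 if den == 0.0 else min(1.0, max(0.0, num / den))  # clip [0,1]
    w_i_new = [(1 - gamma) * a + gamma * b for a, b in zip(w_i, w_s)]
    l_i_new = (1 - gamma) * l_i + gamma * l_s
    w_new = [wk + a - b for wk, a, b in zip(w, w_i_new, w_i)]  # line 7
    return w_new, w_i_new, l_i_new, gamma

# From a cold start the block fully adopts the oracle direction (gamma = 1).
w_new, w_i_new, l_i_new, gamma = bcfw_step([0.0], [0.0], 0.0, [0.5], 1.0, 0.5)
print(gamma, w_new)  # → 1.0 [0.5]
```

Only the selected block's contribution to ${\bf w}$ changes, which is what keeps each iteration cheap compared to batch subgradient updates.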
\noindent In order to solve QP~\eqref{optpro_unconstrained} effectively, it is important to choose an appropriate loss function as the learning ability of Structural SVM highly depends on it. In Sec.~\ref{sec:loss_score} we introduce and discuss different potential loss functions and their respective descriptive ability. Given the loss function, in Sec.~\ref{sec:oracle} an efficient method to compute the maximization oracle (line 4 of Alg.~\ref{BCFW}) is described.
\subsection{Loss Function and Scoring Procedure}
\label{sec:loss_score}
One common choice of loss function for clustering is the \emph{pairwise loss} $\Delta_{PW}({\bf y}_i, {\bf y})$, which is a generalization of the Rand coefficient~\cite{rand_objective_1971}, and is defined as the ratio between the number of pairs on which ${\bf y}_i$ and ${\bf y}$ disagree on their cluster membership and the number of all possible pairs of elements in the set.
Due to the quadratic number of connections that exist among crowd members, this measure tends to be imprecise when dealing with large crowds: as the crowd size increases, the number of positive links connecting group members becomes negligible with respect to the total number of links. As a consequence, erroneous solutions will not be strongly penalized.\\
The \emph{MITRE~loss}~\cite{vilain_model-theoretic_1995}, $\Delta_{M}({\bf y}_i, {\bf y})$, founded on the understanding that connected components are sufficient to describe groups, partially mitigates this problem by representing groups as spanning trees, instead of complete graphs, inducing a linear amount of both positive and negative links among members (and not quadratic as in the pairwise case).
For any crowd partitioning, a spanning forest is defined up to an equivalence class, as many trees that describe the same group configuration may exist.
The final score is obtained by accounting for the number of links that need to be removed or added to recover a spanning forest of the correct solution.
Nonetheless, problems arise when working on relations and not directly on members, as singletons have no connections at all but should still be considered positively when correctly classified.
\begin{figure}[t!]
\centering
\subfloat[${\bf y}_i$ PAIRWISE links]{
\includegraphics[width=0.45\columnwidth]{images/loss_PAIRWISE.pdf}
}
\subfloat[${\bf y}, \Delta_{PW}({\bf y}_i, {\bf y})=0.27$]{
\includegraphics[width=0.45\columnwidth]{images/loss_PAIRWISE_error.pdf}
}\\
\subfloat[${\bf y}_i$ MITRE links]{
\includegraphics[width=0.45\columnwidth]{images/loss_MITRE.pdf}
}
\subfloat[${\bf y}, \Delta_{M}({\bf y}_i, {\bf y})=0.6$]{
\includegraphics[width=0.45\columnwidth]{images/loss_MITRE_error.pdf}
}\\
\subfloat[${\bf y}_i$ $G$-MITRE links]{
\includegraphics[width=0.45\columnwidth]{images/loss_GMITRE.pdf}
}
\subfloat[${\bf y}, \Delta_{GM}({\bf y}_i, {\bf y})=0.75$]{
\includegraphics[width=0.45\columnwidth]{images/loss_GMITRE_error.pdf}
}
\caption{Differences in the way losses account for errors. Singletons are white. Figures (a, c, e) depict solution ${\bf y_i}$ and the links considered by the respective losses, while (b, d, f) color pedestrians according to solution ${\bf y}$ and show the links on which the two solutions ${\bf y}_i$ and ${\bf y}$ disagree.}
\label{fig:losses}
\end{figure}
For this reason, we propose a loss function, the \emph{GROUP-MITRE loss} ($G$-MITRE) $\Delta_{GM}({\bf y}_i, {\bf y})$, that overcomes this limitation by adding, for each pedestrian described by the trajectory $T_i$, a fake counterpart $\alpha_{T_i}$ to which only singletons are connected.
Through this expedient we can now take singletons into consideration as well when computing the discrepancy between two solutions. The particular design choice of linking only singleton members to their fake counterparts generates two discrepancies when committing errors involving singletons, and is thus a further effort toward generating more plausible hierarchical groups in the solution, as depicted in Fig.~\ref{fig:losses}.
More formally, consider two clustering solutions ${\bf y}_i$, ${\bf y}$ and a representative of their respective spanning forests $Q$ and $R$. The connected components of $Q$ and $R$ are identified respectively by the set of trees $Q_{1}, Q_{2}, \dots$ and $R_1,R_2,\dots$. Note that if the number of elements in $Q_j$ is $|Q_j|$, then only $c(Q_j)\stackrel{\text{\tiny def}}{=}|Q_j|-1$ links are needed in order to create a spanning tree. Let us define $\pi_{\scriptscriptstyle R}(Q_j)$ as the partition of a tree $Q_j$ with respect to the forest $R$, that is the set of subtrees obtained by considering only the membership relations in $Q_j$ also found in $R$. Besides, if $R$ partitions $Q_j$ in $|\pi_{\scriptscriptstyle R}(Q_j)|$ subtrees then $v(Q_j)\stackrel{\text{\tiny def}}{=}|\pi_{\scriptscriptstyle R}(Q_j)| - 1$ links are sufficient to restore the original tree. It follows that the recall error for $Q_j$ can be computed as the number of missing links divided by the minimum number of links needed to create that spanning tree. Accounting for all trees $Q_j$ the global recall measure of $Q$ is:
\begin{equation}
\label{eq:recall_mitre}
\begin{aligned}
\mathcal{R}_{Q} = 1 - \frac{\sum_{j} v(Q_j)}{\sum_{j} c(Q_j)} = \frac{\sum_{j} \left(|Q_j| - |\pi_{\scriptscriptstyle R}(Q_j)|\right)}{\sum_{j}\left(|Q_j|-1\right)}
\end{aligned}
\end{equation}
The precision of $Q$ (recall of $R$) can be computed by exchanging $Q$ and $R$. Given the definitions of precision and recall, and employing the standard $F$-score $F_1$, the loss is defined as
\begin{equation}
\Delta_{GM}=1-F_1.
\end{equation}
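The recall of Eq.~\eqref{eq:recall_mitre} and the resulting loss can be sketched in a few lines of Python. This is a simplified version using direct membership lookups rather than the disjoint-set arrays of Alg.~\ref{alg:G_MITRE}; clusters are plain lists of member ids, and fake counterparts are attached only to singletons, as prescribed above.

```python
def gm_augment(y):
    """Attach a fake counterpart only to singleton members (G-MITRE trick)."""
    return [c + [("fake", c[0])] if len(c) == 1 else list(c) for c in y]

def recall(Q, R):
    """Eq. (recall): 1 - sum_j v(Q_j) / sum_j c(Q_j)."""
    v = c = 0
    for q in Q:
        # Partition of q induced by R: members sharing an R-cluster stay together;
        # members absent from R (e.g. fake nodes) each form their own subtree.
        sub = {}
        for m in q:
            owner = next((i for i, r in enumerate(R) if m in r), ("alone", m))
            sub.setdefault(owner, []).append(m)
        v += len(sub) - 1                  # links to restore the tree
        c += len(q) - 1                    # links of a spanning tree of q
    return 1.0 if c == 0 else 1 - v / c

def g_mitre_loss(y_true, y):
    Q, R = gm_augment(y_true), gm_augment(y)
    rq, rr = recall(Q, R), recall(R, Q)    # recall and precision
    return 1.0 if rq + rr == 0 else 1 - 2 * rq * rr / (rq + rr)

# Splitting the pair {1, 2} into singletons is penalized twice via fake nodes.
print(round(g_mitre_loss([[1, 2], [3]], [[1], [2], [3]]), 3))  # → 0.6
```

Identical solutions yield zero loss, while breaking a pair into singletons is penalized both on the missing group link and on the spurious singleton links, matching the behavior illustrated in Fig.~\ref{fig:losses}.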
The complete algorithm for the computation of the $G$-MITRE loss is reported in Alg.~\ref{alg:G_MITRE}. We employ disjoint-set arrays due to the efficiency of checking whether two pedestrians belong to the same group. Recall that {\footnotesize UNION} and {\footnotesize FIND} are the standard functions defined over disjoint-set arrays, denoting respectively the operations of merging two clusters and finding an element's membership. In the pseudo-code we use the notation ${\bf y}_i/{\bf y}$ to indicate that the algorithm first works on the solution ${\bf y}_i$ and then, analogously, on ${\bf y}$.
\begin{algorithm}[t!]
\setstretch{1.35}
\caption{$G$-MITRE loss $\Delta_{GM}({\bf y}_i, {\bf y})$ computation}
\label{alg:G_MITRE}
\begin{algorithmic}[1]
\REQUIRE ${\bf y}_i$ and ${\bf y}$ as \emph{disjoint-set data structures}
\STATE $\varphi(x)$ are the unique roots of connected components $x$
\STATE $\Gamma(x)$ is the size of the connected component with root $x$
\FORALL{$T \in {\bf y}_i/{\bf y}$}
\STATE ${\bf y}_i/{\bf y}= {\bf y}_i/{\bf y}\cup\alpha_{T}$
\IF{$\Gamma(\text{\footnotesize FIND}({\bf y}_i/{\bf y}(T))) = 1$}
\STATE $\text{\footnotesize UNION}({\bf y}_i/{\bf y}(T), {\bf y}_i/{\bf y}(\alpha_{T}))$
\ENDIF
\ENDFOR
%
\FORALL{$q \in \varphi({\bf y}_i/{\bf y})$}
\STATE $v_{{\bf y}_i/{\bf y}}~{+=}~|\varphi(\bigcup_{\text{\scriptsize FIND}({\bf y}_i/{\bf y}(T)) = q}{\bf y}/{\bf y}_i(T))| - 1$
\STATE $c_{{\bf y}_i/{\bf y}}~{+=}~\Gamma(q) - 1$
\ENDFOR
%
\STATE $\mathcal{R}_{{\bf y}_i/{\bf y}} = 1 - v_{{\bf y}_i/{\bf y}} / c_{{\bf y}_i/{\bf y}}$
\STATE $\Delta({\bf y}_i, {\bf y}) = 1 - 2\mathcal{R}_{{\bf y}_i}\mathcal{R}_{{\bf y}} / (\mathcal{R}_{{\bf y}_i}+\mathcal{R}_{{\bf y}})$
\end{algorithmic}
\end{algorithm}
\subsection{Approximate Oracle}
\label{sec:oracle}
Despite the simplicity of the algorithm, the intrinsic complexity of the optimization is hidden in the search for the most violating solution ${\bf y}^*$ for the $i$-th example (line 4 of Alg.~\ref{BCFW}): finding the most violated constraint requires solving the loss augmented decoding subproblem. Note that the original prediction problem of Eq.~\eqref{eq:pred_fun} is NP-hard and the insertion of a non-linear loss in the computation of the maximum is not likely to help.
Nevertheless, thanks to its iterative nature, the inference scheme introduced in Sec.~\ref{sec:solution} can be adapted to approximate the oracle as well. Starting from the trivial solution having each pedestrian of the $i$-th example in its own cluster, the algorithm repeatedly merges the two clusters which yield the highest increment in the structured hinge-loss $\tilde{H}({\bf x}_i)$ of Eq.~\eqref{eq:maxoracle}, until a local maximum is found.
Of course, by following a greedy procedure there is no guarantee of selecting the most violated constraint. Interestingly enough, Lacoste-Julien~\emph{et al.}~\cite{julien_block_coordinate_2012} show that all convergence results known for the exact maximizer of the loss augmented problem also hold for approximate maximizers, provided the algorithm is allowed to iterate longer toward convergence.
For further details, please refer to their original work. |
\section{Introduction} \label{s:intro}
\par Recently, social media platforms, such as Twitter, have been extensively used by users across the world to share opinions, promote products, spread awareness and post updates on events and disasters. It has often been observed that an event is reported on social media platforms before it is covered on any mainstream media~\cite{wold2016twitter}. Similarly, the active participation of users on Twitter has made it an important source for continuous updates on any disaster~\cite{priya2019should}. However, due to the high volume of Twitter messages and the continuous generation of real-time updates, it is very difficult to get a holistic view of an ongoing event, like a disaster, which continuously generates new information. Continuous monitoring and summarization of messages related to a disaster are required by government and volunteer organizations for effective disaster response. Prior studies~\cite{imran2015towards, roy2020classification,imran2016twitter} show that messages during a disaster belong to different categories~\footnote{A category consists of information related to the same topic/sub-event with respect to any disaster.}~\cite{imran2015towards}, like \textit{infrastructure damage}~\cite{priya2020endea, priya2020taqe}, \textit{victim needs}~\cite{basu2018automatic, dutt2019utilizing}, \textit{volunteer operations}~\cite{basu2019extracting, imran2020using}, \textit{emotional response}~\cite{lifang2020effect}, \textit{affected population}~\cite{ghosh2018exploitation} and many others. An effective summary of the disaster should include all the relevant information related to all the categories while ensuring information diversity.
\par Therefore, there are two major steps for tweet summarization related to disasters: identification of the different groups/categories of tweets, such that each group/category comprises tweets related to the same topic/sub-event, and subsequent selection of representative tweets from each group/category to form a summary~\cite{rudra2015extracting, roy2020classification, rudra2018classifying, nguyen2015tsum4act}. Existing disaster summarization approaches have proposed unsupervised and supervised learning based approaches for identifying the categories/groups. Existing unsupervised approaches, such as graph-based approaches~\cite{dutta2015graph,dutta2019community, dutta2018ensemble} and topic-based approaches~\cite{modha2017summarizing}, utilize the content similarity of the tweets to identify different categories. However, the high vocabulary diversity, the presence of inherent noise and the high number of overlapping keywords in tweets make automatic identification of categories by unsupervised approaches challenging, as these approaches fail to capture the inherent semantic meaning of tweets. To address this, Rudra et al.~\cite{rudra2016summarizing, rudra2019summarizing} proposed a supervised approach which explicitly identifies the categories of tweets by AIDR~\cite{imran2014aidr} and then creates a summary by selecting appropriate tweets from each category. However, the proposed approach is dependent on AIDR~\cite{imran2014aidr} for category identification of a tweet, which requires human intervention for each dataset~\cite{imran2014coordinating}. Therefore, while unsupervised approaches fail to automatically identify categories, supervised approaches require significant human intervention, which is highly difficult to obtain.
\par In order to select the representative tweets from each category/group in the second step of summarization, existing research works~\cite{rudra2015extracting, dutta2018ensemble, roy2020classification} have proposed Integer Linear Programming (ILP)~\cite{rudra2018extracting}, LUHN~\cite{luhn1958automatic}, PageRank~\cite{page1999pagerank}, Centrality~\cite{borgatti2005centrality}, maximum degree~\cite{dutta2015graph}, maximum length~\cite{dutta2019community} and maximum marginal relevance~\cite{carbonell1998use}. However, existing disaster summarization approaches assign equal importance to each category given a disaster, whereas all categories might not be equally important~\cite{castillo2016big}. Furthermore, none of these approaches considers that the information present in the same category varies across disasters (as shown in Figure~\ref{fig:Wordcloud} and Table~\ref{table:Wiki}). Understanding the importance of each category with respect to a disaster guides us in selecting the appropriate number of representative tweets from each category to create the summary. However, to ensure real-time summarization, we need to minimize human intervention.
\begin{figure}[t!]
\centering
\subfigure[]{\includegraphics[width=1.7in]{Figures/USFlood_AP.eps} \label{fig:USFlood}}
\subfigure[]{\includegraphics[width=1.7in]{Figures/Harda_AP.eps} \label{fig:Harda}}
\subfigure[]{\includegraphics[width=1.7in]{Figures/Hagupit_AP.eps} \label{fig:Hagupit}}
\caption{We show the word cloud of category \textit{Affected Population} of $3$ different disasters, such as \textit{U.S. Floods} in Figure~\ref{fig:USFlood}, \textit{Harda Twin Train Derailment} in Figure~\ref{fig:Harda} and \textit{Hagupit Typhoon} in Figure~\ref{fig:Hagupit}.}
\label{fig:Wordcloud}
\end{figure}
\begin{table}[ht]
\centering
\caption{We show the presence of $5$ categories in $6$ disasters, like \textit{Chile Earthquake, 2010} (${\bf D_{1}}$), \textit{Italy Earthquake, 2016} (${\bf D_{2}}$), \textit{India-Pakistan Flood, 2014} (${\bf D_{3}}$), \textit{South Sulawesi Flood, 2019} (${\bf D_{4}}$), \textit{Haiyan Typhoon, 2013} (${\bf D_{5}}$), and \textit{Megi Typhoon, 2010} (${\bf D_{6}}$) based on the information from Wikipedia.}
\label{table:Wiki}
\begin{tabular} {|>{\centering\arraybackslash}p{0.04\linewidth}|p{0.23\linewidth}|>{\centering\arraybackslash}p{0.05\linewidth}|>{\centering\arraybackslash}p{0.05\linewidth}|>{\centering\arraybackslash}p{0.05\linewidth}|>{\centering\arraybackslash}p{0.05\linewidth}|>{\centering\arraybackslash}p{0.05\linewidth}|>{\centering\arraybackslash}p{0.05\linewidth}|}
\hline
{\bf SNo} & {\bf Category} & ${\bf D_{1}}$ & ${\bf D_{2}}$ & ${\bf D_{3}}$ & ${\bf D_{4}}$ & ${\bf D_{5}}$ & ${\bf D_{6}}$ \\
\hline
1 & Affected Population & Yes & Yes & Yes & Yes & Yes & Yes \\\hline
2 & Infrastructure Damage & Yes & Yes & Yes & Yes & Yes & Yes \\\hline
3 & Aftershocks & Yes & Yes & No & No & No & No \\\hline
4 & Donations & No & No & Yes & No & Yes & Yes \\\hline
5 & International Aid & No & No & Yes & No & Yes & Yes \\\hline
\end{tabular}
\end{table}
\par In this paper, we propose an ontology based real-time tweet summarizer, \textit{OntoRealSumm}, to generate a summary automatically, given the tweets related to a disaster, with minimal human intervention. \textit{OntoRealSumm} explicitly captures the importance of a category with respect to the disaster and further ensures information coverage of each category in the final summary through a three-phase system. In the first phase, \textit{OntoRealSumm} identifies the category of a tweet by an ontology based pseudo-relevance feedback approach. It follows self-supervised learning by utilizing an existing disaster ontology, namely \textit{Empathi}~\cite{gaur2019empathi}, for automatic classification of tweets into categories and then utilizes these classified tweets as feedback to determine the categories of the remaining tweets. Therefore, by using an existing ontology, it can handle both the information diversity of each category and the vocabulary gap, by utilizing semantic similarity~\cite{wu2003ontology} without any human intervention. Furthermore, by integrating feedback based information from already classified tweets, it can handle the inherent issues of tweets, like short length, presence of noise and the vocabulary difference with the ontology, to identify the category of a tweet given a disaster with high precision. In the second phase, we determine the specific importance of each category with respect to the disaster. For this, we identify disasters which are similar on the basis of categories, so as to automatically identify the importance of a category with respect to each type of disaster. Therefore, identifying a disaster which has similar information content in its categories as that of the given disaster aids in the automatic determination of the importance of each category of the disaster and thus, in determining the number of tweets to be selected from each category.
Finally, \textit{OntoRealSumm} selects the number of tweets from each category that maximizes the information coverage of the category and ensures diversity in the summary through Maximal Marginal Relevance based selection~\cite{carbonell1998use}. Therefore, it can automatically identify the category of tweets, capture category importance, and ensure both information coverage of each category and diversity in the summary with minimal human intervention.
\par We evaluate the effectiveness of \textit{OntoRealSumm} on $10$ disaster events and compare our results with existing research works in terms of ROUGE-N~\cite{lin2004rouge} scores. Our results indicate that \textit{OntoRealSumm} outperforms existing research works by $2.69$\% to $31.05$\% in ROUGE-1 F1-score, $5.51$\% to $41.93$\% in ROUGE-2 F1-score, and $4.08$\% to $17.55$\% in ROUGE-L F1-score on average. Additionally, we perform experiments to analyze the performance of \textit{OntoRealSumm} in identifying the category of a tweet and study the role of its different components. We finally perform a failure analysis to highlight the shortcomings of \textit{OntoRealSumm}. We discuss related works in Section~\ref{s:rworks} and the dataset details in Section~\ref{s:data}. In Section~\ref{s:pstat}, we present the problem definition, and we discuss the details of \textit{OntoRealSumm} in Section~\ref{s:prop}. We discuss the experiment details in Section~\ref{s:expt} and the results in Section~\ref{s:res}. Finally, we conclude in Section~\ref{s:con}.
\section{Related Works} \label{s:rworks}
\par There is a plethora of existing tweet summarization approaches which differ based on their application, like sports events~\cite{goyal2019multilevel, huang2018event, gillani2017post}, political events~\cite{panchendrarajan2021emotion, kim2014tweet}, social events~\cite{narmadha2016survey, schinas2015visual}, disasters~\cite{saini2019multiobjective,saini2021microblog} and news events~\cite{zheng2021tweet, duan2019across, chakraborty2017network}. Based on the application, these tweet summarization approaches utilize temporal information~\cite{wang2019microblog}, content diversity~\cite{chakraborty2019tweet,mallick2019graph,ali2020topic}, or both to select the summary tweets. However, as discussed in Section~\ref{s:intro}, there is high content diversity across the different categories of a disaster, and the presence and importance of categories differ from one disaster to another. Therefore, these existing summarization approaches for other applications cannot be directly applied to tweets related to a disaster.
\par During a disaster, a huge number of tweets that comprise real-time and situational information are posted by eye-witnesses and affected people~\cite{zahra2020automatic}. Therefore, to ensure immediate help, there is a need for automated techniques that can identify, extract, and summarize the relevant information from this huge information overload~\cite{tapia2011seeking, basu2019extracting, kaufhold2020rapid}. Existing disaster tweet summarization approaches can be primarily categorized into abstractive~\cite{lin2021preserve, rudra2016summarize} or extractive~\cite{saini2021microblog, dusart2021issumset} summarization approaches. In this paper, we focus on developing an extractive summarization approach. Most existing extractive tweet summarization approaches can further be segregated by their methodology into graph based~\cite{dutta2018ensemble}, content based~\cite{sharma2019going}, deep learning based~\cite{li2021twitter, dusart2021tssubert}, or hybrid~\cite{saini2020mining} approaches. We discuss each of these approaches in detail next.
\par Existing content based disaster summarization approaches exploit the high variance in the frequency and presence of keywords related to a disaster~\cite{rudra2015extracting, rudra2018extracting} to generate the summary. Additionally, several approaches initially classify each tweet as relevant or non-relevant by semi-supervised learning~\cite{chen2015search} or supervised learning~\cite{rudra2018classifying, roy2020classification, madichetty2020detection} on the basis of the tweet content, and then select representative tweets from the relevant tweets by techniques such as ILP~\cite{rudra2018extracting}, LSA~\cite{gong2001generic}, LUHN~\cite{luhn1958automatic} or PageRank~\cite{page1999pagerank}. Recent research works utilize neural network based techniques on the tweet contents; for example, Dusart et al.~\cite{dusart2021tssubert} propose the use of a BERT model~\cite{liu2019text} to identify the importance of a tweet, while Li et al.~\cite{li2021twitter} propose the use of graph convolutional neural networks on a tweet similarity graph to calculate tweet importance and thereby select the most important tweets to create a summary. However, both of these approaches require huge amounts of training data. Moreover, these summarization approaches do not consider the categories and therefore fail to ensure the information coverage of each category in the summary~\cite{vieweg2014integrating}.
\par In order to handle these challenges, several research works have proposed graph based tweet summarization approaches~\cite{dutta2019community, dutta2019summarizing} which initially create a tweet similarity graph, where tweets are nodes and an edge represents the similarity between a pair of tweets, and then group similar tweets together by identifying communities in the graph. Finally, these approaches select representative tweets from each group based on length, degree, or centrality measures to generate the summary~\cite{borgatti2005centrality}. Therefore, these approaches integrate similar information together through the edge relationships in the graph, implicitly identify categories, and ensure information coverage and reduced redundancy by selecting representative tweets from each category. For example, Dutta et al.~\cite{dutta2015graph} propose a community based approach to identify the different sub-groups from the tweet similarity graph and select representative tweets by centrality measures to create a summary. However, these approaches rely on community based measures to inherently identify the categories of the disaster, which is very challenging due to the high vocabulary overlap across categories in a disaster. Since they consider only content based similarity to identify the category, they cannot handle the inherent issues of tweets~\cite{chakraborty2019tweet}. Additionally, these approaches do not consider the difference in the importance of categories and their information content across different disasters.
\par Therefore, to cater to these requirements, recent disaster summarization approaches initially identify the categories from the tweets and then create a summary from each of these categories. For example, Rudra et al.~\cite{rudra2016summarizing,rudra2018identifying} use an existing category identification classifier, i.e., AIDR~\cite{imran2014aidr}, to identify the categories. In~\cite{rudra2016summarizing}, the authors create a word graph for each category and then select representative tweets from each word graph on the basis of the presence of the most important words while ensuring information coverage in the summary. Similarly, in~\cite{rudra2018identifying}, the authors initially identify the sub-events of each category for a disaster and then select the representative tweets from each category such that each selected tweet has the maximum number of content words (i.e., nouns, verbs, and numerals) to create a summary. However, AIDR requires human intervention for each new disaster event and is applicable only to real-time disaster events. Furthermore, none of these approaches considers the difference in category vocabulary and importance across disasters. Therefore, there is a need for a system that can automatically identify the categories of a disaster with minimal human intervention and capture the specific, implicit importance of each category in the disaster summary.
\par However, there are several challenges in identifying the category of a tweet: the presence of a category and the content diversity within a category vary across disasters. Therefore, we propose \textit{OntoRealSumm}, which utilizes an existing disaster ontology, \textit{Empathi}~\cite{gaur2019empathi}, to effectively identify the category of a tweet irrespective of the disaster in the first phase, and then ensures category based representation and information coverage to generate the summary. We discuss dataset details next.
\section{Dataset} \label{s:data}
\par In this Section, we discuss the datasets, pre-processing details and gold standard summary.
\subsection{Dataset Details and Pre-processing}
\par We evaluate the performance of \textit{OntoRealSumm} on $10$ disaster datasets which are as follows. An overview of these datasets is shown in Table \ref{table:dataset}.
\begin{enumerate}
\item \textit{$D_1$}: This dataset is prepared based on the \textit{Sandy Hook Elementary School Shooting}~\footnote{https://en.wikipedia.org/wiki/Sandy\_Hook\_Elementary\_School\_shooting} in which around $26$ people, including $20$ children and $6$ adults were killed in December, $2012$. This dataset is taken from~\cite{dutta2018ensemble}.
\item \textit{$D_2$}: This dataset is prepared based on the \textit{Uttarakhand Flood}~\footnote{https://en.wikipedia.org/wiki/2013\_North\_India\_floods} which caused dreadful floods and landslides in Uttarakhand, India in June, $2013$. This dataset is taken from~\cite{dutta2018ensemble}.
\item \textit{$D_3$}: This dataset is prepared based on the devastating impact of the strong cyclone, \textit{Hagupit Typhoon}~\footnote{https://en.wikipedia.org/wiki/Typhoon\_Hagupit\_(2014)} on Philippines in December, $2014$ which led to the death of around $18$ people and evacuation of $916$ people. This dataset is taken from~\cite{dutta2018ensemble}.
\item \textit{$D_4$}: This dataset is prepared based on the \textit{Hyderabad Blast, India}~\footnote{https://en.wikipedia.org/wiki/2013\_Hyderabad\_blasts} in which two consecutive bomb blasts killed $17$ people and injured $119$ people in February, $2013$. This dataset is taken from~\cite{dutta2018ensemble}.
\item \textit{$D_5$}: This dataset is prepared based on the \textit{Harda Twin Train Derailment, India}~\footnote{https://en.wikipedia.org/wiki/Harda\_twin\_train\_derailment} in which $31$ people died, and $100$ got injured. The incident happened in August, $2015$. This dataset is taken from~\cite{rudra2018extracting}.
\item \textit{$D_6$}: This dataset is prepared based on the \textit{Los Angeles International Airport Shooting}~\footnote{https://en.wikipedia.org/wiki/2013\_Los\_Angeles\_International\_Airport\_shooting} in which around $15$ people were injured and $1$ person was killed. The incident happened in November, $2013$. This dataset is taken from~\cite{olteanu2015expect}.
\item \textit{$D_7$}: This dataset is prepared based on the devastating impact of the terrible hurricane, \textit{Hurricane Matthew}~\footnote{https://en.wikipedia.org/wiki/Hurricane\_Matthew} on Haiti in October, $2016$ which led to the death of $603$ people and evacuation of $1.5$ million people. This dataset is taken from~\cite{Alam2021humaid}.
\item \textit{$D_8$}: This dataset is prepared based on the \textit{Puebla Mexico Earthquake}~\footnote{https://en.wikipedia.org/wiki/2017\_Puebla\_earthquake} in which $370$ people died, and $6011$ got injured. The incident happened in September, $2017$. This dataset is taken from~\cite{Alam2021humaid}.
\item \textit{$D_9$}: This dataset is prepared based on the \textit{Pakistan Earthquake}~\footnote{https://en.wikipedia.org/wiki/2019\_Kashmir\_earthquake} in which $40$ people died, and $850$ got injured. The incident happened in September, $2019$. This dataset is taken from~\cite{Alam2021humaid}.
\item \textit{$D_{10}$}: This dataset is prepared based on the \textit{Midwestern U.S. Floods}~\footnote{https://en.wikipedia.org/wiki/2019\_Midwestern\_U.S.\_floods} which caused dreadful floods and massive damages in Midwestern United States in March $2019$ to December $2019$. This dataset is taken from~\cite{Alam2021humaid}.
\end{enumerate}
\begin{table}[ht]
\centering
\caption{We show the details of the $10$ datasets, including dataset number, year, number of tweets, type of disaster, country, and continent.}
\label{table:dataset}
\begin{tabular}{|c|c|c|>{\centering\arraybackslash}p{0.09\linewidth}|>{\centering\arraybackslash}p{0.12\linewidth}|c|c|}
\hline
{\bf SNo} & {\bf Dataset} & {\bf Year} & {\bf Number of tweets} & {\bf Type of disaster} & {\bf Country} & {\bf Continent} \\ \hline
1 & $D_1$ & 2012 & 2080 & Man-made & United States & North America \\\hline
2 & $D_2$ & 2013 & 2069 & Natural & India & Asia \\\hline
3 & $D_3$ & 2014 & 1461 & Natural & Philippines & Asia \\\hline
4 & $D_4$ & 2013 & 1413 & Man-made & India & Asia \\\hline
5 & $D_5$ & 2015 & 1676 & Man-made & India & Asia \\\hline
6 & $D_6$ & 2013 & 1409 & Man-made & United States & North America \\\hline
7 & $D_7$ & 2016 & 1654 & Natural & Haiti & North America \\\hline
8 & $D_8$ & 2017 & 2015 & Natural & Mexico & North America \\\hline
9 & $D_9$ & 2019 & 1958 & Natural & Pakistan & Asia \\\hline
10 & $D_{10}$ & 2019 & 1880 & Natural & United States & North America \\\hline
\end{tabular}
\end{table}
{\textit{Pre-processing and Gold Standard Summary}:} As we consider only the tweet text in \textit{OntoRealSumm}, we perform pre-processing to remove \textit{URLs}, \textit{usernames}, \textit{emoticons}, \textit{punctuation marks}, and \textit{noise} from the tweet text. We use the gold standard summaries provided by Dutta et al.~\cite{dutta2018ensemble} for $D_1$-$D_4$ and by Rudra et al.~\cite{rudra2018extracting} for $D_5$. For $D_6$-$D_{10}$, we ask $3$ annotators to prepare a summary of $40$ tweets for each dataset and follow the procedure of Dutta et al.~\cite{dutta2018ensemble} to combine the individual summaries into the final gold standard summary.
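\par The pre-processing step can be sketched with simple regular expressions. The exact cleaning rules are not specified beyond the listed artifact types, so the patterns below and the function name \texttt{preprocess} are illustrative assumptions rather than the implementation used here.

```python
import re

def preprocess(tweet_text):
    """Clean a tweet before summarization: drop URLs, @usernames,
    non-ASCII symbols/emoticons, and punctuation, then normalize case."""
    text = re.sub(r"http\S+|www\.\S+", " ", tweet_text)   # URLs
    text = re.sub(r"@\w+", " ", text)                     # usernames
    text = text.encode("ascii", "ignore").decode()        # emoticons / symbols
    text = re.sub(r"[^\w\s]", " ", text)                  # punctuation marks
    return re.sub(r"\s+", " ", text).strip().lower()
```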
\section{Problem Statement} \label{s:pstat}
\par Given a disaster event, $D$, that comprises $n$ tweets, $\mathcal{T}=\{ \mathcal{T}_1,\mathcal{T}_2,\ldots,\mathcal{T}_n \}$, we intend to create a summary, $\mathcal{S}$, of $\mathcal{T}$. As in most summarization applications, we assume that the length of the summary, $m$, is provided. As discussed in Section \ref{s:intro}, a disaster tweet summarization approach must ensure information coverage of all the categories present in $\mathcal{T}$, where information coverage of a category refers to the representation of all the important aspects of that category in $\mathcal{S}$~\cite{yan2011evolutionary}. As there are many different mechanisms to represent aspects, such as topics, keywords, or a combination of content and context based information~\cite{rudra2016summarize}, we do not prescribe a specific method to calculate information coverage and only provide an intuition for it next. We refer to the information of $C_i$, i.e., the aspects present in $C_i$, as $In(C_i)$, and measure the information coverage provided by a tweet $\mathcal{T}_j$ of $C_i$ as $ICov(\mathcal{T}_j,In(C_i))$, i.e., the number of aspects present in $C_i$ that are covered by $\mathcal{T}_j$. We show all the notations used in \textit{OntoRealSumm} and their corresponding descriptions in Table~\ref{table:notation}.
\begin{table}[ht]
\centering
\caption{We show notations and their corresponding description used in \textit{OntoRealSumm}.}
\label{table:notation}
\begin{tabular}{|c|c|}
\hline
{\bf Notation} & {\bf Description} \\ \hline
$D$ & Given a disaster event dataset \\\hline
$n$ & Number of tweets in $D$ \\\hline
$T$ & Set of tweets in $D$ \\\hline
$T_j$ & $j^{th}$ indexed tweet \\ \hline
$m$ & Desired summary length (number of tweets) \\\hline
$K$ & Total number of categories \\\hline
$S$ & Generated summary \\ \hline
$C_i$ & $i^{th}$ category in $D$ \\ \hline
$\alpha, \beta$ & Tunable parameters \\ \hline
$I(C_i)$ & Importance of a category $C_i$ \\ \hline
$ICov(\mathcal{T}_j,In(C_i))$ & Information coverage provided by tweet $T_j$ of $C_i$ \\ \hline
$\mathcal{D}(T_j,S)$ & Diversity when tweet $T_j$ is added to $S$ \\ \hline
$SemSIM(T_j,C_i)$ & Semantic Similarity score between $T_j$ and $C_i$ \\ \hline
$ConSIM(T_j,C_i)$ & Content based similarity between $T_j$ and $C_i$ \\ \hline
$W(C_i)$ & Weight score of $C_i$ \\ \hline
$Kw({T}_j)$ & Keywords of $T_j$, such as nouns, verbs, and adjectives \\ \hline
$Kw(C_i)$ & Keywords of $C_i$ \\ \hline
$MaxSIM(T_j)$ & Highest Semantic Similarity score among all $C_i$ for $T_j$ \\ \hline
\end{tabular}
\end{table}
\par As previously highlighted by existing summarization approaches, the tweets selected in the summary, $\mathcal{S}$, must be diverse among each other, i.e., no two tweets selected in $\mathcal{S}$ convey the same information and every tweet in $\mathcal{S}$ adds novel information~\cite{carbonell1998use,yan2011evolutionary}. Diversity, therefore, minimizes redundancy in $\mathcal{S}$. To ensure diversity, we must select into $\mathcal{S}$ the tweet that maximizes the novelty of information and reduces redundancy in $\mathcal{S}$. Existing research works measure diversity, i.e., the novelty of information, by the presence of keywords, aspects, or content or contextual information that has not been covered by the tweets already selected in the summary~\cite{rudra2015extracting, rudra2018identifying}. We use $\mathcal{D}(\mathcal{T}_j,S)$ to measure the diversity provided by selecting $\mathcal{T}_j$ into $\mathcal{S}$. Therefore, to create $\mathcal{S}$, we need to iteratively select tweets that simultaneously maximize both the information coverage of each $C_i$, $ICov(\mathcal{T}_j,In(C_i))$, and the diversity of the summary, $\mathcal{D}(\mathcal{T}_j,S)$.
\par However, this requires knowledge of the categories of $\mathcal{T}$, which are not known a priori, as discussed in Section \ref{s:intro}. Additionally, the importance of a category, i.e., the number of tweets to be selected from that category into $\mathcal{S}$, is required to create $\mathcal{S}$. We have previously shown that the importance of a category varies from disaster to disaster, which calls for an automated system that can determine the importance of each category. We refer to the list of categories as $\mathcal{C}= \{ \mathcal{C}_1,\mathcal{C}_2,\ldots, \mathcal{C}_K\}$, such that there are $K$ categories present in $\mathcal{T}$, and to the importance of a category $C_i$ as $I(C_i)$, which determines the number of tweets to be selected from $C_i$. Thus, we intend to select $m$ tweets from $\mathcal{T}$ that maximize the coverage of the information present in each category, $In(C_i)$, and the diversity in $\mathcal{S}$, on the basis of $I(C_i)$. We formally define the problem as
\begin{equation}
\begin{aligned}
&S= \bigcup\limits_{i=1}^{K} \bigcup\limits_{j=1}^{I_i} \max_{\mathcal{T}_j \in \mathcal{C}_i,\, 1\leq j \leq n} \left(\alpha \cdot ICov(\mathcal{T}_j,In(C_i)) + \beta \cdot \mathcal{D}(\mathcal{T}_j,S)\right) \\
&\textrm{s.t.} \quad \sum\limits_{i=1}^{K}{I_i}=m\\
\label{eq:probForm1}
\end{aligned}
\end{equation}
where $\mathcal{I}=\{ \mathcal{I}_1,\mathcal{I}_2,\mathcal{I}_3,\ldots, \mathcal{I}_K\}$ represents the importance of each category, i.e., the number of tweets to be selected from each category, and $\alpha$ and $\beta$ are tunable parameters that weight information coverage and diversity, respectively~\cite{carbonell1998use}. Therefore, we intend to propose an automatic tweet summarization approach that fulfils all of these objectives.
\begin{itemize}
\item \textit{Identification of the category of each tweet, Phase-I :} We propose a self-supervised approach to identify the category, $C_i$ of a tweet, $\mathcal{T}_j$. The approach identifies the category of $\mathcal{T}_j$ on the basis of semantic similarity of $\mathcal{T}_j$ with an existing ontology and then, uses the relevant knowledge of the identified tweets as feedback to identify the category of tweets which could not be resolved directly by semantic similarity.
\item \textit{Determination of importance of each category, Phase-II :} We determine the importance of each $C_i$, $I(C_i)$, with respect to $D$ using a Linear Regression model. On the basis of $I(C_i)$, we select the number of tweets from $C_i$ in $\mathcal{S}$.
\item \textit{Representative tweets selection from $C_i$, Phase-III :} We select the representative tweets from each $C_i$ to ensure $ICov(\mathcal{T}_j,In(C_i))$ and $\mathcal{D}(T_j,S)$ for each $C_i$ in $\mathcal{S}$.
\end{itemize}
\section{Proposed Approach}\label{s:prop}
In this Section, we discuss the proposed approach in detail.
\subsection{Overview} The proposed approach, \textit{OntoRealSumm}, comprises three phases: Phase-I, where we identify the category of each tweet in $\mathcal{T}$; Phase-II, where we compute the importance of each $C_i$ for $D$; and Phase-III, where we select representative tweets from each $C_i$ to ensure information coverage of each category while maintaining the importance of the category and diversity in the summary. For Phase-I, we propose an ontology based pseudo-relevance feedback approach to identify the category of a tweet. For Phase-II, we propose a metric to compare the similarity between two disaster events considering the content and the probability distribution of tweets across categories; by identifying a similar disaster, we predict the number of tweets to be selected from each category in the final summary. Finally, in Phase-III, we select the predicted number of tweets from each category so as to ensure information coverage and diversity of each category in the final summary.
\par Although this sequential summarization process requires some data pre-processing and a knowledge base in the form of an ontology, it does not require human intervention to produce an effective summary in real-time. We show the overview of \textit{OntoRealSumm} in Figure~\ref{figure:flowchart}. We next describe each of these steps of \textit{OntoRealSumm} in detail.
\begin{figure}
\centering
\includegraphics[width=\textwidth] {Figures/pp.eps}
\caption{An overview of \textit{OntoRealSumm} is shown}
\label{figure:flowchart}
\end{figure}
\subsection{Phase-I} \label{s:phase1} To identify the category of a tweet, we propose an ontology based pseudo-relevance feedback approach. Although several disaster-specific ontologies are available~\cite{limbu2012management, Moi2016ontology, sermet2019towards, yahya2020ontology}, we choose \textit{Empathi}~\cite{gaur2019empathi} as it provides the maximum number of categories and covers information related to different types of disasters. The \textit{Empathi} ontology comprises $70$ categories along with the vocabulary of each category. We identify the category of those tweets which have high similarity with \textit{Empathi} by the \textit{Semantic Similarity score}~\cite{wu2003ontology}, $SemSIM(T_j,C_i)$, of a category $C_i$ with the tweet $T_j$, as shown in Equation~\ref{eq:simScore}.
\begin{align}
SemSIM(T_j,C_i) = ConSIM(T_j,C_i) * W(C_i)
\label{eq:simScore}
\end{align}
$SemSIM(T_j,C_i)$ is the product of content based similarity of the tweet with the category, $ConSIM(T_j,C_i)$, and the weight of the category, $W(C_i)$ that represents the number of keywords in $C_i$. We calculate $ConSIM(T_j,C_i)$ as the overlap of the tweet keywords, $Kw({T}_j)$, with the category keywords, $Kw({C}_i)$. We consider only nouns, verbs, and adjectives of $T_j$ as $Kw(T_j)$~\cite{khan2013multi} and $W(C_i)$ as the normalised number of keywords in $C_i$ as shown in Equation~\ref{eq:IScore}.
\begin{align}
W(C_i) = \frac{|Kw(C_i)|}{\sum_{k=1}^{K} |Kw(C_k)|}
\label{eq:IScore}
\end{align}
where $K$ is the total number of categories. On the basis of $SemSIM(T_j,C_i)$, we assign to $T_j$ the category with the \textit{highest Semantic Similarity score}, $MaxSIM(T_j)$, among all categories.
\begin{align}
MaxSIM(T_j) = \underset{1 \leq i \leq K}{\operatorname{arg\,max}}\; SemSIM(T_j,C_i)
\label{eq:MaxSim}
\end{align}
Our observations indicate that around $10$-$30$\% of the total tweets remain unclassified, as they have no overlap with any of the category keywords, owing to the high vocabulary diversity in tweets~\cite{pradhan2019event, hasan2019real}. Therefore, we use the information from the classified tweets of a category as feedback to extend the existing vocabulary of that category. We create an extended vocabulary of the category by including only those keywords of the already classified tweets with frequency greater than $3$~\cite{silva2003importance}, and use this extended vocabulary to determine the category of the unclassified tweets. Although we cannot ensure the classification of all tweets, we observe that only a small number, around $1$-$10$\% of the total, remain unclassified after this step. We show in Table~\ref{table:ClasifyTweet} the fraction of tweets classified in each of the steps and the fraction that remain unclassified for the $10$ disaster events. We do not consider the tweets whose category could not be determined in Phase-I.
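\par The two-pass classification of Phase-I can be sketched as follows, assuming each tweet has already been reduced to its keyword list. The category vocabularies and the names \texttt{classify} and \texttt{sem\_sim} are toy placeholders for illustration, not the \textit{Empathi} ontology itself.

```python
from collections import Counter

def sem_sim(tweet_kw, cat_kw, weight):
    """SemSIM(T_j, C_i) = ConSIM (keyword overlap) * W(C_i)."""
    return len(set(tweet_kw) & set(cat_kw)) * weight

def classify(tweets_kw, categories, min_freq=3):
    """Two-pass Phase-I: classify with the ontology vocabulary first,
    then retry unresolved tweets with a feedback-extended vocabulary."""
    total = sum(len(kw) for kw in categories.values())
    weights = {c: len(kw) / total for c, kw in categories.items()}
    labels, unresolved = {}, []
    for j, kw in enumerate(tweets_kw):                     # Pass-I
        scores = {c: sem_sim(kw, categories[c], weights[c]) for c in categories}
        best = max(scores, key=scores.get)
        if scores[best] > 0:
            labels[j] = best
        else:
            unresolved.append(j)
    # Feedback: extend each category's vocabulary with frequent keywords
    # of the tweets already assigned to it.
    extended = {c: set(kw) for c, kw in categories.items()}
    for c in categories:
        freq = Counter(w for j, lab in labels.items() if lab == c
                       for w in tweets_kw[j])
        extended[c] |= {w for w, f in freq.items() if f > min_freq}
    for j in unresolved:                                   # Pass-II
        scores = {c: sem_sim(tweets_kw[j], extended[c], weights[c])
                  for c in categories}
        best = max(scores, key=scores.get)
        if scores[best] > 0:
            labels[j] = best
    return labels
```

Tweets that score zero in both passes stay unlabeled, mirroring the small remaining fraction reported above.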
\begin{table}[ht]
\centering\caption{We show the \% of classified tweets using ontology vocabulary (Pass-I), extended vocabulary (Pass-II), and remaining tweets after Pass-I and Pass-II for $10$ datasets.}
\label{table:ClasifyTweet}
\begin{tabular}{|>{\centering\arraybackslash}p{0.06\linewidth}|>{\centering\arraybackslash}p{0.07\linewidth}|>{\centering\arraybackslash}p{0.09\linewidth}|>{\centering\arraybackslash}p{0.13\linewidth}||>{\centering\arraybackslash}p{0.06\linewidth}|>{\centering\arraybackslash}p{0.07\linewidth}|>{\centering\arraybackslash}p{0.09\linewidth}|>{\centering\arraybackslash}p{0.13\linewidth}|}
\hline
{\bf Event} & {\bf Pass-I} & {\bf Pass-II} & {\bf Remaining tweets} & {\bf Event} & {\bf Pass-I} & {\bf Pass-II} & {\bf Remaining tweets} \\ \hline
$D_1$ & 72.64 & 27.15 & 0.19 & $D_6$ & 80.57 & 14.41 & 5.01 \\\hline
$D_2$ & 89.72 & 7.43 & 2.84 & $D_7$ & 99.51 & 0.36 & 0.12 \\\hline
$D_3$ & 67.22 & 22.35 & 10.41 & $D_8$ & 99.20 & 0.44 & 0.29 \\\hline
$D_4$ & 73.58 & 17.55 & 8.85 & $D_9$ & 98.05 & 1.38 & 0.56 \\\hline
$D_5$ & 76.95 & 11.29 & 11.75 & $D_{10}$ & 97.17 & 2.34 & 0.47 \\\hline
\end{tabular}
\end{table}
\subsection{Phase-II} \label{s:phase2} After identifying the categories of the tweets related to a disaster, we determine the importance of each category with respect to the disaster (say $D_i$). The importance of a category $C_j$, $I_j$, determines the number of tweets to be selected from $C_j$ for the final disaster summary. We use a linear regression model to predict this number. The regression model is trained on a disaster dataset (say $D_j$) which is very similar to $D_i$ in terms of content and tweet distribution across categories. To identify $D_j$ for a given $D_i$, we propose a metric, the \textit{disaster similarity score}.
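\par As a rough illustration of the Phase-II idea, the least-squares sketch below regresses per-category summary counts on the category tweet fractions of the most similar training disaster, then rescales the predictions so the quotas sum to the desired summary length $m$. The feature choice and function names are assumptions, since the exact regression inputs are not detailed here.

```python
import numpy as np

def fit_importance(train_fracs, train_counts):
    """Fit counts ~ a * fraction + b on the most similar training disaster."""
    X = np.column_stack([train_fracs, np.ones(len(train_fracs))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(train_counts, float), rcond=None)
    return coef                                   # (slope, intercept)

def predict_importance(coef, fracs, m):
    """Predict I(C_i) and rescale so the quotas sum to summary length m."""
    raw = np.clip(coef[0] * np.asarray(fracs) + coef[1], 0, None)
    quota = raw / raw.sum() * m
    counts = np.floor(quota).astype(int)
    # Hand any remainder to the categories with largest fractional parts.
    for i in np.argsort(quota - counts)[::-1][: m - counts.sum()]:
        counts[i] += 1
    return counts
```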
\par \textbf{Disaster Similarity Score :} \label{s:sim}
The content of the tweets within categories and the distribution of the tweets across categories vary with disasters. We propose a \textit{disaster similarity score}, $DisSIM(D_i,D_j)$, to compute the similarity between any pair of disasters, $D_i$ and $D_j$. We define $DisSIM(D_i,D_j)$ as the weighted sum of the similarity in the information content of the categories, $C_{IC}$, and the similarity in the probability distribution across categories, $C_p$. We compute $C_p$ as $(1-C_{ds})$, where $C_{ds}$ is the Jensen-Shannon divergence of the two events. $DisSIM(D_i,D_j)$ is calculated as:
\begin{align}
DisSIM(D_i,D_j) = w_1 * C_{IC} + w_2 * C_{p} \\
\textrm{s.t.} \quad w_1 + w_2 =1 \\
w_1, w_2 \in (0, 1)
\label{eq:Cweight1}
\end{align}
where $w_1$ and $w_2$ are the weights of $C_{IC}$ and $C_{p}$, respectively. We calculate $C_{IC}^i$ as the cosine similarity~\cite{nguyen2010cosine} between the most frequently occurring keywords of $D_i$ and $D_j$ for a category $i$, and $C_{IC}$ as the average cosine similarity over all categories, as shown in Equation~\ref{eq:Csim}.
\begin{align}
C_{IC} = \frac{1}{K}\sum_{i=1}^{K} C_{IC}^i
\label{eq:Csim}
\end{align}
where $K$ is the total number of categories. We calculate $C_{ds}$ as the Jensen-Shannon divergence~\footnote{https://en.wikipedia.org/wiki/Jensen\%E2\%80\%93Shannon\_divergence} of the disaster pair $D_i$ and $D_j$; thus, $C_{p}$ measures the similarity of the probability distributions of the categories between $D_i$ and $D_j$. We select $D_q$ as the disaster with maximum similarity to $D_i$ on the basis of $DisSIM(D_i,D_e)$, where $e \in \{1,2,\ldots,Q\}$ and $Q$ is the total number of disasters available. We discuss this experiment and our observations in detail in Subsection~\ref{s:similar}.
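\par A minimal sketch of the disaster similarity score, assuming per-category keyword vectors and category probability distributions are given as input; the base-2 Jensen-Shannon divergence is computed directly so that $C_{ds}$ lies in $[0,1]$.

```python
import numpy as np

def js_divergence(p, q):
    """Base-2 Jensen-Shannon divergence, bounded in [0, 1]."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0                      # skip zero-probability terms
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def dis_sim(vecs_i, vecs_j, p_i, p_j, w1=0.5, w2=0.5):
    """DisSIM(D_i, D_j) = w1 * C_IC + w2 * C_p, with C_IC the mean
    per-category cosine similarity and C_p = 1 - C_ds."""
    cos = [float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
           for a, b in zip(vecs_i, vecs_j)]
    c_ic = float(np.mean(cos))
    c_p = 1.0 - js_divergence(p_i, p_j)
    return w1 * c_ic + w2 * c_p
```

Identical keyword vectors and identical category distributions yield a score of $1$, while disjoint category distributions drive the $C_p$ term to $0$.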
\subsection{Phase-III} \label{s:phase3} We finally select $I_j$ representative tweets from each category using a selection approach that captures the maximum information coverage and ensures diversity in the summary. Based on an experiment comparing different selection approaches, namely \textit{Semantic similarity score based}~\cite{wu2003ontology}, \textit{Eigenvector centrality based}~\cite{bonacich1972factoring}, \textit{PageRank based}~\cite{brin1998anatomy}, \textit{K-means clustering based}~\cite{macqueen1967some}, and \textit{Maximal Marginal Relevance (MMR) based}~\cite{carbonell1998use} selection, we adopt MMR in \textit{OntoRealSumm}. We show the experiment and our observations in detail in Subsection~\ref{s:regression}. MMR follows an incremental approach, comparing each candidate tweet with the tweets already in the summary to decide its inclusion. We discuss the experiments next.
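\par The MMR selection loop can be sketched as below, assuming tweets and the category are represented as vectors (e.g., TF-IDF); the trade-off parameter $\lambda$ and the function name \texttt{mmr\_select} are illustrative choices.

```python
import numpy as np

def mmr_select(tweet_vecs, query_vec, k, lam=0.7):
    """Maximal Marginal Relevance: iteratively pick the tweet that balances
    relevance to the category (query) against redundancy with tweets
    already chosen, keeping the selection informative yet diverse."""
    def cos(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    selected, candidates = [], list(range(len(tweet_vecs)))
    while candidates and len(selected) < k:
        def score(j):
            redundancy = max((cos(tweet_vecs[j], tweet_vecs[s])
                              for s in selected), default=0.0)
            return lam * cos(tweet_vecs[j], query_vec) - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With a low $\lambda$, an exact duplicate of an already selected tweet is skipped in favor of a novel one, which is the behavior the diversity term is meant to enforce.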
\section{Experiments}\label{s:expt}
\par In this Section, we first discuss the existing research works related to disaster summarization that we use as baseline methods for comparison. We then compare the performance of these baseline methods with the proposed approach, \textit{OntoRealSumm}.
\subsection{Baselines} \label{s:base}
\par We compare \textit{OntoRealSumm} with the following state-of-the-art summarization approaches:
\begin{enumerate}
\item \textit{$B_1$}: Rudra et al.~\cite{rudra2019summarizing} propose a summarization framework that initially identifies the tweets which comprise the most important disaster-specific keywords, such as numerals, nouns, verbs, and locations. The authors then create a graph whose nodes are the words of the identified tweets, with an edge between two words if they occur as a bigram in any of the tweets. Finally, they select the paths from the graph that ensure maximum information coverage to generate the summary.
\item \textit{$B_2$}: Dutta et al.~\cite{dutta2018ensemble} propose an ensemble graph based summarization approach which initially generates summaries using $9$ existing text summarization algorithms. The authors create a tweet similarity graph whose nodes are the tweets present in the summary of any of the $9$ algorithms and whose edges represent their content and context similarity. Finally, the authors apply a community detection algorithm to automatically identify the categories and then select representative tweets from each category on the basis of length, informativeness, and centrality scores to create the summary.
\item \textit{$B_3$}: Rudra et al.~\cite{rudra2018identifying} propose a sub-event based summarization approach that initially identifies the sub-events and then generates a summary by selecting representative tweets through Integer Linear Programming based selection, ensuring maximum information coverage of the sub-events.
\end{enumerate}
\subsection{Comparison with Existing Research Works} \label{s:res}
\par To evaluate the performance of \textit{OntoRealSumm}, we compare the summaries generated by \textit{OntoRealSumm} and the existing research works with the ground truth summary on the basis of ROUGE-N scores~\cite{lin2004rouge}. The ROUGE-N score computes the overlap of words between the generated summary and a set of ground truth summaries. We calculate precision, recall, and F1-score for $3$ different variants of the ROUGE-N score, i.e., N=$1$, $2$, and L, respectively. Our observations from Table~\ref{table:Result} indicate that \textit{OntoRealSumm} ensures better ROUGE-N precision, recall, and F1-scores in comparison with the baselines. The improvement in ROUGE-1 F1-score ranges from $1.96$\% to $62.74$\%, in ROUGE-2 F1-score from $4.34$\% to $73.33$\%, and in ROUGE-L F1-score from $3.57$\% to $20.68$\%. The improvement is highest over the $B_3$ baseline and lowest over the $B_2$ baseline. The performance of \textit{OntoRealSumm} is the best for $D_5$, with ROUGE-1, ROUGE-2, and ROUGE-L F1-scores ranging from $0.58$ to $0.29$, and worst for $D_4$, with scores ranging from $0.47$ to $0.28$.
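The ROUGE-N computation described above can be sketched as follows; this is a simplified illustration of n-gram overlap precision, recall, and F1-score, and omits refinements (stemming, multi-reference aggregation) that toolkit implementations of~\cite{lin2004rouge} provide.

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Simplified ROUGE-N: n-gram overlap precision, recall and F1-score."""
    def ngrams(text):
        toks = text.split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

For example, comparing a candidate summary sentence against a reference sentence sharing three of five unigrams yields precision, recall, and F1-score of $0.6$ each.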
\begin{table*}[ht]
\centering
\caption{Precision, recall and F1-score of ROUGE-1, ROUGE-2 and ROUGE-L score of \textit{OntoRealSumm} and baselines on $10$ datasets is shown.}
\label{table:Result}
\resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
\textbf{Dataset} & \textbf{Approaches} & \multicolumn{3}{c|}{\textbf{ROUGE-1}} & \multicolumn{3}{c|} {\textbf{ROUGE-2}} & \multicolumn{3}{c|}{\textbf{ROUGE-L}} \\ \cline{3-11}
& & \textbf{Precision} & \textbf{Recall} & \textbf{F1-score} & \textbf{Precision} & \textbf{Recall} & \textbf{F1-score} & \textbf{Precision} & \textbf{Recall} & \textbf{F1-score} \\ \hline
& $OntoRealSumm$ & 0.56 & 0.48 & 0.51 & 0.21 & 0.18 & 0.20 & 0.31 & 0.27 & 0.29 \\ \cline{2-11}
${D_1}$ & $B_1$ & 0.49 & {\bf0.58} & {\bf0.53} & 0.24 & {\bf0.29} & {\bf0.26} & 0.31 & {\bf0.36} & {\bf0.33}\\ \cline{2-11}
& $B_2$ & {\bf 0.59} & 0.47 & 0.52 & {\bf 0.25} & 0.20 & 0.22 & {\bf0.32} & 0.27 & 0.29\\ \cline{2-11}
& $B_3$ & 0.44 & 0.53 & 0.48 & 0.18 & 0.22 & 0.20 & 0.25 & 0.29 & 0.27 \\ \cline{1-11} \cline{1-11}
& $OntoRealSumm$ & {\bf0.46} & {\bf0.45} & {\bf0.46} & 0.17 & {\bf0.17} & {\bf0.17} & 0.27 & {\bf0.26} & {\bf0.27} \\ \cline{2-11}
${D_2}$ & $B_1$ & 0.32 & 0.35 & 0.33 & 0.14 & 0.15 & 0.14 & 0.23 & 0.25 & 0.24 \\ \cline{2-11}
& $B_2$ & 0.44 & 0.32 & 0.37 & {\bf 0.18} & 0.14 & 0.16 & {\bf 0.28} & 0.22 & 0.25 \\ \cline{2-11}
& $B_3$ & 0.39 & 0.44 & 0.41 & 0.11 & 0.13 & 0.12 & 0.23 & 0.26 & 0.24 \\ \cline{1-11} \cline{1-11}
& $OntoRealSumm$ & {\bf0.50} & {\bf0.48} & {\bf0.49} & 0.19 & {\bf0.18} & {\bf0.19} & 0.28 & 0.27 & 0.28 \\ \cline{2-11}
${D_3}$ & $B_1$ & 0.36 & 0.36 & 0.36 & 0.14 & 0.14 & 0.14 & 0.29 & {\bf0.29} & {\bf0.29} \\ \cline{2-11}
& $B_2$ & 0.50 & 0.34 & 0.40 & {\bf0.23} & 0.15 & 0.18 & {\bf0.32} & 0.23 & 0.27 \\ \cline{2-11}
& $B_3$ & 0.42 & 0.47 & 0.44 & 0.17 & 0.18 & 0.17 & 0.23 & 0.25 & 0.24 \\ \cline{1-11} \cline{1-11}
& $OntoRealSumm$ & {\bf0.49} & 0.44 & {\bf0.47} & {\bf0.19} & 0.17 & {\bf0.18} & {\bf0.29} & {\bf0.27} & {\bf0.28} \\ \cline{2-11}
${D_4}$ & $B_1$ & 0.29 & 0.36 & 0.32 & 0.12 & 0.14 & 0.13 & 0.22 & 0.27 & 0.24 \\ \cline{2-11}
& $B_2$ & 0.48 & 0.35 & 0.41 & 0.19 & 0.14 & 0.16 & 0.29 & 0.23 & 0.26 \\ \cline{2-11}
& $B_3$ & 0.43 & {\bf0.48} & 0.45 & 0.16 & {\bf0.18} & 0.17 & 0.24 & 0.26 & 0.25 \\ \cline{1-11} \cline{1-11}
& $OntoRealSumm$ & {\bf0.58} & 0.58 & {\bf0.58} & {\bf0.29} & 0.29 & {\bf0.29} & {\bf0.31} & 0.30 & {\bf0.31} \\ \cline{2-11}
${D_5}$ & $B_1$ & 0.35 & {\bf0.62} & 0.44 & 0.18 & {\bf 0.32} & 0.23 & 0.24 & {\bf 0.40} & 0.29 \\ \cline{2-11}
& $B_2$ & 0.48 & 0.54 & 0.51 & 0.22 & 0.25 & 0.23 & 0.26 & 0.29 & 0.28 \\ \cline{2-11}
& $B_3$ & 0.48 & 0.58 & 0.53 & 0.23 & 0.28 & 0.25 & 0.25 & 0.29 & 0.27 \\ \cline{1-11} \cline{1-11}
& $OntoRealSumm$ & {\bf0.57} & {\bf0.56} & {\bf0.56} & {\bf0.23} & {\bf0.23} & {\bf0.23} & {\bf0.29} & {\bf0.28} & {\bf0.29} \\ \cline{2-11}
${D_6}$ & $B_1$ & 0.50 & 0.47 & 0.49 & 0.23 & 0.22 & 0.22 & 0.29 & 0.28 & 0.29 \\ \cline{2-11}
& $B_2$ & 0.57 & 0.41 & 0.48 & 0.21 & 0.15 & 0.18 & 0.29 & 0.22 & 0.25 \\ \cline{2-11}
& $B_3$ & 0.50 & 0.55 & 0.52 & 0.20 & 0.22 & 0.21 & 0.25 & 0.27 & 0.23 \\ \cline{1-11} \cline{1-11}
& $OntoRealSumm$ & 0.54 & 0.45 & {\bf0.49} & {\bf0.17} & {\bf0.14} & {\bf0.15} & 0.25 & {\bf0.22} & {\bf0.23} \\ \cline{2-11}
${D_7}$ & $B_1$ & 0.45 & {\bf 0.51} & 0.48 & 0.12 & 0.14 & 0.13 & 0.24 & 0.21 & 0.22 \\ \cline{2-11}
& $B_2$ & {\bf 0.57} & 0.40 & 0.47 & 0.17 & 0.12 & 0.14 & {\bf0.27} & 0.21 & 0.22 \\ \cline{2-11}
& $B_3$ & 0.49 & 0.40 & 0.44 & 0.14 & 0.11 & 0.12 & 0.24 & 0.20 & 0.22 \\ \cline{1-11} \cline{1-11}
& $OntoRealSumm$ & 0.48 & {\bf0.49} & {\bf0.48} & {\bf0.15} & {\bf0.16} & {\bf0.16} & 0.25 & {\bf0.24} & {\bf0.25} \\ \cline{2-11}
${D_8}$ & $B_1$ & 0.43 & 0.48 & 0.45 & 0.13 & 0.14 & 0.13 & 0.22 & 0.24 & 0.23 \\ \cline{2-11}
& $B_2$ & {\bf0.51} & 0.43 & 0.46 & 0.15 & 0.13 & 0.14 & {\bf0.26} & 0.22 & 0.24 \\ \cline{2-11}
& $B_3$ & 0.43 & 0.46 & 0.44 & 0.13 & 0.14 & 0.14 & 0.22 & 0.24 & 0.23 \\ \cline{1-11} \cline{1-11}
& $OntoRealSumm$ & {\bf0.56} & {\bf0.45} & {\bf0.50} & {\bf0.17} & {\bf0.14} & {\bf0.15} & {\bf0.25} & {\bf0.21} & {\bf0.23} \\ \cline{2-11}
${D_9}$ & $B_1$ & 0.26 & 0.16 & 0.20 & 0.05 & 0.03 & 0.04 & 0.25 & 0.17 & 0.20 \\ \cline{2-11}
& $B_2$ & 0.50 & 0.44 & 0.47 & 0.15 & 0.13 & 0.14 & 0.22 & 0.20 & 0.21 \\ \cline{2-11}
& $B_3$ & 0.45 & 0.45 & 0.45 & 0.11 & 0.11 & 0.11 & 0.21 & 0.21 & 0.21 \\ \cline{1-11} \cline{1-11}
& $OntoRealSumm$ & {\bf0.51} & {\bf0.50} & {\bf0.51} & {\bf0.13} & {\bf0.12} & {\bf0.13} & {\bf0.22} & {\bf0.22} & {\bf0.22} \\ \cline{2-11}
${D_{10}}$ & $B_1$ & 0.26 & 0.15 & 0.19 & 0.06 & 0.03 & 0.04 & 0.22 & 0.15 & 0.18 \\ \cline{2-11}
& $B_2$ & 0.51 & 0.45 & 0.48 & 0.10 & 0.09 & 0.10 & 0.22 & 0.19 & 0.20 \\ \cline{2-11}
& $B_3$ & 0.50 & 0.49 & 0.50 & 0.12 & 0.12 & 0.12 & 0.22 & 0.22 & 0.22 \\ \cline{1-11} \cline{1-11}
\end{tabular} }
\end{table*}
\subsection{Identification of Category of a Tweet} \label{s:catdeter}
\par In this Subsection, we evaluate the effectiveness of Phase-I of \textit{OntoRealSumm}, the proposed self-supervised tweet category identification approach, by comparing it with an existing unsupervised approach (\textit{UnSuA}). We select the approach followed by Dutta et al.~\cite{dutta2019community} for identifying categories to summarize disaster tweets as \textit{UnSuA}. Although there are several existing unsupervised approaches, we found~\cite{dutta2019community} to be the most successful. In~\cite{dutta2019community}, the authors propose the utilization of the Louvain community detection algorithm~\cite{traag2015faster} to identify communities, which inherently represent categories related to the disaster. For this experiment, we randomly select $20\%$ of the tweets from each category for Phase-I of \textit{OntoRealSumm} and \textit{UnSuA} from $3$ disaster events, i.e., $D_2$, $D_3$, and $D_6$. We ask $3$ volunteers with a good knowledge of English to annotate these tweets.
On comparing the performance of Phase-I of \textit{OntoRealSumm} with \textit{UnSuA}, as shown in Table~\ref{table:catdet}, we observe that Phase-I of \textit{OntoRealSumm} performs better by at least $43\%$ in F1-score. Based on our observations, we find that unsupervised approaches cannot directly segregate the tweets into different categories due to the high vocabulary overlap across categories and the immense vocabulary diversity within each category. Utilization of the semantic information from the ontology in Phase-I helps in the categorization of tweets and, therefore, increases the performance.
\begin{table*}[ht]
\centering
\caption{F1-score of Phase-I of \textit{OntoRealSumm} and the unsupervised tweet category identification approach (\textit{UnSuA}) with respect to the volunteers' annotations for $3$ disasters, i.e., $D_2$, $D_3$, and $D_6$.}
\label{table:catdet}
\begin{tabular}{|c|c|c|} \hline
\textbf{Dataset} & \textbf{Approach} & \textbf{F1-score} \\ \hline
${D_2}$ & Phase-I of \textit{OntoRealSumm} & \bf{67.07} \\ \cline{2-3}
& \textit{UnSuA} & 38.72 \\ \hline
${D_3}$ & Phase-I of \textit{OntoRealSumm} & \bf{64.58} \\ \cline{2-3}
& \textit{UnSuA} & 23.37 \\ \hline
${D_6}$ & Phase-I of \textit{OntoRealSumm} & \bf{62.99} \\ \cline{2-3}
& \textit{UnSuA} & 29.93 \\ \hline
\end{tabular}
\end{table*}
\subsection{Determining Relevance of Category Importance} \label{s:catIm}
\par In Phase-II of \textit{OntoRealSumm}, we propose a linear regression based model to calculate the importance of each category for a given disaster, based on the assumption that the importance of a category varies across disasters. Although we motivate this assumption intuitively in Section~\ref{s:intro}, we validate it through an experiment next. For our experiments, we compare the summaries generated by \textit{OntoRealSumm} with those generated by an approach that gives each category equal importance (\textit{EqCatSumm}) on $6$ datasets, i.e., $D_2$, $D_3$, $D_4$, $D_5$, $D_6$, and $D_9$. To generate the summary by \textit{EqCatSumm}, we follow Phase-III of \textit{OntoRealSumm} for the selection of tweets from each category. We show the F1-scores for $3$ different variants of the ROUGE-N score, i.e., N=$1$, $2$, and L, in Table~\ref{table:cat_IMP}, which indicates that \textit{OntoRealSumm} performs consistently better than \textit{EqCatSumm}. Therefore, we have empirically shown that identifying the importance of each category for a given disaster is essential for effective disaster tweet summarization.
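The Phase-II idea of training a regression model on a similar disaster to predict per-category tweet counts can be sketched as below. This is an illustrative sketch under our own assumptions: the single scalar category feature and the rescaling of predictions to the desired summary length are simplifications we introduce, not the paper's exact feature set.

```python
def fit_linear(x, y):
    """Ordinary least squares for one feature: y ~ a*x + b (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def predict_counts(features, a, b, summary_length):
    """Predict a per-category tweet count from a category feature, then
    rescale so the counts add up to the desired summary length."""
    raw = [max(a * f + b, 0.0) for f in features]
    total = sum(raw) or 1.0
    return [round(summary_length * v / total) for v in raw]
```

In use, the model would be fit on the category features and ground-truth per-category counts of a similar disaster $D_j$, and then applied to the category features of the target disaster $D_i$.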
\begin{table*}[ht]
\centering
\caption{F1-scores of ROUGE-1, ROUGE-2, and ROUGE-L on comparing \textit{OntoRealSumm}, which identifies the importance of each category, with an approach that gives each category equal importance (\textit{EqCatSumm}).}
\label{table:cat_IMP}
\begin{tabular}{|c|c|c|c|c|} \hline
\textbf{Dataset} & \textbf{Approach} & \textbf{ROUGE-1} & \textbf{ROUGE-2} & \textbf{ROUGE-L} \\ \cline{3-5}
& & \textbf{F1-score} & \textbf{F1-score} & \textbf{F1-score} \\ \hline
${D_2}$ & \textit{OntoRealSumm} & \bf{0.46} & \bf{0.17} & \bf{0.27}\\ \cline{2-5}
& \textit{EqCatSumm} & 0.42 & 0.14 & 0.24 \\ \hline
${D_3}$ & \textit{OntoRealSumm} & \bf{0.49} & \bf{0.19} & \bf{0.28}\\ \cline{2-5}
& \textit{EqCatSumm} & 0.47 & 0.17 & 0.27 \\ \hline
${D_4}$ & \textit{OntoRealSumm} & \bf{0.47} & \bf{0.18} & \bf{0.28}\\ \cline{2-5}
& \textit{EqCatSumm} & 0.42 & 0.13 & 0.24 \\ \hline
${D_5}$ & \textit{OntoRealSumm} & \bf{0.58} & \bf{0.29} & \bf{0.31}\\ \cline{2-5}
& \textit{EqCatSumm} & 0.50 & 0.20 & 0.26 \\ \hline
${D_6}$ & \textit{OntoRealSumm} & \bf{0.56} & \bf{0.23} & \bf{0.29}\\ \cline{2-5}
& \textit{EqCatSumm} & 0.53 & 0.19 & 0.25 \\ \hline
${D_9}$ & \textit{OntoRealSumm} & \bf{0.50} & \bf{0.15} & \bf{0.23}\\ \cline{2-5}
& \textit{EqCatSumm} & 0.48 & 0.14 & 0.21 \\ \hline
\end{tabular}
\end{table*}
\subsection{Comparing Disaster Similarity} \label{s:similar}
\par On comparing $DisSIM(D_i,D_j)$ for every disaster pair, we observe that each disaster has the highest similarity score with a disaster that belongs to the same continent (USA or Asia) and is of the same type, i.e., man-made or natural. For example, \textit{Uttarakhand Flood} ($D_2$) has the highest score with \textit{Pakistan Earthquake} ($D_9$) among all the disasters. Similarly, \textit{Sandy Hook Elementary School Shooting} ($D_1$) has the highest score with \textit{Los Angeles International Airport Shooting} ($D_6$) among all the disasters. Therefore, we term $D_i$ and $D_j$ \textit{homogeneous disasters} if they belong to the same continent and are of the same type. Subsequently, we term $D_i$ and $D_j$ \textit{heterogeneous disasters} if they do not satisfy either of these two conditions. We next validate the impact of training on a \textit{homogeneous disaster} or a \textit{heterogeneous disaster} on the summary quality. We show the similarity scores in Table~\ref{table:Combined-similarity}.
\par For each disaster $D_i$, we predict the number of tweets to be selected from each category using the linear regression model trained on a \textit{homogeneous disaster} or a \textit{heterogeneous disaster}. We then select the representative tweets from each category based on the predicted number of tweets to create the summary and, finally, compare the generated summary with the ground-truth summary on the basis of ROUGE-N scores. For the experiments, we randomly select $5$ disasters, i.e., $D_2$, $D_4$, $D_5$, $D_7$, and $D_8$. We show the F1-scores for $3$ different variants of the ROUGE-N score, i.e., N=$1$, $2$, and L, respectively, in Table~\ref{table:cat_train}. Our observations indicate that there is around a $7-21\%$ and $6-9\%$ increase in ROUGE-2 F1-score for disasters in Asia and the USA, respectively, for both man-made and natural disasters. Therefore, our observations indicate that an effective summary for $D_i$ can be ensured if we identify the importance of categories from a $D_j$ which is of the same type and from the same continent as $D_i$.
\begin{table}[ht]
\centering
\caption{We show the disaster similarity scores for every disaster pair. The first row and first column of this Table represent the various disasters.}
\label{table:Combined-similarity}
\begin{tabular} {|p{0.04\linewidth}|p{0.05\linewidth}|p{0.05\linewidth}|p{0.05\linewidth}|p{0.05\linewidth}|p{0.05\linewidth}|p{0.05\linewidth}|p{0.05\linewidth}|p{0.05\linewidth}|p{0.05\linewidth}|p{0.05\linewidth}|} \hline
{} & \textbf{$D_1$} & \textbf{$D_2$} & \textbf{$D_3$} & \textbf{$D_4$} & \textbf{$D_5$} & \textbf{$D_6$} & \textbf{$D_7$} & \textbf{$D_8$} & \textbf{$D_9$} & \textbf{$D_{10}$}\\ \hline
$D_1$ & & 0.4 & 0.43 & 0.44 & 0.44 & \bf{0.52} & 0.43 & 0.39 & 0.45 & 0.4 \\\hline
$D_2$ & 0.4 & & 0.54 & 0.5 & 0.48 & 0.44 & 0.5 & 0.5 & \bf{0.55} & 0.5 \\\hline
$D_3$ & 0.43 & \bf{0.54} & & 0.44 & 0.4 & 0.47 & 0.46 & 0.46 & 0.51 & 0.44 \\\hline
$D_4$ & 0.44 & 0.5 & 0.44 & & \bf{0.51} & 0.45 & 0.47 & 0.45 & 0.48 & 0.45 \\\hline
$D_5$ & 0.44 & 0.48 & 0.4 & \bf{0.51} & & 0.45 & 0.46 & 0.45 & 0.48 & 0.47 \\\hline
$D_6$ & \bf{0.52} & 0.44 & 0.47 & 0.45 & 0.45 & & 0.44 & 0.45 & 0.48 & 0.43 \\\hline
$D_7$ & 0.43 & 0.5 & 0.46 & 0.47 & 0.46 & 0.44 & & 0.52 & 0.48 & \bf{0.55} \\\hline
$D_8$ & 0.39 & 0.5 & 0.46 & 0.45 & 0.45 & 0.45 & \bf{0.52} & & 0.49 & \bf{0.52} \\\hline
$D_9$ & 0.45 & \bf{0.55} & 0.51 & 0.48 & 0.48 & 0.48 & 0.48 & 0.49 & & 0.46 \\\hline
$D_{10}$ & 0.4 & 0.5 & 0.44 & 0.45 & 0.47 & 0.43 & \bf{0.55} & 0.52 & 0.46 & \\\hline
\end{tabular}
\end{table}
\begin{table*}[ht]
\centering
\caption{F1-score of ROUGE-1, ROUGE-2 and ROUGE-L score of \textit{OntoRealSumm} for training a linear regression model on a dataset from homogeneous and heterogeneous disasters on $5$ datasets is shown.}
\label{table:cat_train}
\begin{tabular}{|c|c|c|c|c|} \hline
\textbf{Dataset} & \textbf{Training} & \textbf{ROUGE-1} & \textbf{ROUGE-2} & \textbf{ROUGE-L} \\ \cline{3-5}
& \textbf{disaster} & \textbf{F1-score} & \textbf{F1-score} & \textbf{F1-score} \\ \hline
& {$D_5$} & 0.50 & 0.19 & 0.29 \\ \cline{2-5}
\scalebox{1.1}{\bm{${D_1}$}} & \scalebox{1.1}{\bm{$D_6$}} & {\bf0.51} & {\bf0.20} & {\bf0.30} \\ \cline{2-5}
& {$D_9$} & 0.49 & 0.18 & 0.28 \\ \cline{2-5}
& $D_{10}$ & 0.49 & 0.19 & 0.28 \\ \hline
& {$D_1$} & 0.43 & 0.14 & 0.25 \\ \cline{2-5}
\scalebox{1.1}{\bm{${D_2}$}} & {$D_5$} & 0.43 & 0.15 & 0.25 \\ \cline{2-5}
& {$D_8$} & 0.44 & 0.15 & 0.25 \\ \cline{2-5}
& \scalebox{1.1}{\bm{$D_9$}} & {\bf0.46} & {\bf0.17} & {\bf0.27} \\ \hline
& \scalebox{1.1}{\bm{$D_4$}} & {\bf0.58} & {\bf0.29} & {\bf0.30} \\ \cline{2-5}
\scalebox{1.1}{\bm{${D_5}$}} & {$D_6$} & 0.56 & 0.27 & 0.28 \\ \cline{2-5}
& {$D_7$} & 0.56 & 0.27 & 0.29 \\ \cline{2-5}
& {$D_9$} & 0.53 & 0.23 & 0.28 \\ \hline
& {$D_1$} & 0.47 & 0.12 & 0.21 \\ \cline{2-5}
\scalebox{1.1}{\bm{${D_7}$}} & {$D_5$} & 0.46 & 0.11 & 0.21 \\ \cline{2-5}
& {$D_9$} & 0.47 & 0.13 & 0.22 \\ \cline{2-5}
& \scalebox{1.1}{\bm{$D_{10}$}} & {\bf0.49} & {\bf0.15} & {\bf0.23} \\ \hline
& {$D_1$} & 0.45 & 0.12 & 0.23 \\ \cline{2-5}
\scalebox{1.1}{\bm{${D_8}$}} & {$D_2$} & 0.46 & 0.15 & 0.24 \\ \cline{2-5}
& {$D_4$} & 0.46 & 0.14 & 0.24 \\ \cline{2-5}
& \scalebox{1.1}{\bm{$D_7$}} & {\bf0.48} & {\bf0.16} & {\bf0.25} \\ \hline
\end{tabular}
\end{table*}
\begin{comment}
\begin{table*}[ht]
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
\multirow{\textbf{Dataset}} & \multirow{\textbf{Category}} & \multicolumn{3}{c|}{\textbf{ROUGE-1}} & \multicolumn{3}{c|} {\textbf{ROUGE-2}} & \multicolumn{3}{c|}{\textbf{ROUGE-L}} \\ \cline{3-11}
& & \textbf{Precision} & \textbf{Recall} & \textbf{F1-score} & \textbf{Precision} & \textbf{Recall} & \textbf{F1-score} & \textbf{Precision} & \textbf{Recall} & \textbf{F1-score} \\ \hline
& $C_4$ & {\bf0.56} & {\bf0.48} & {\bf0.51} & {\bf0.21} & {\bf0.18} & {\bf0.20} & {\bf0.32} & {\bf0.28} & {\bf0.30} \\ \cline{2-11}
${D_1}$ & $C_1$ & 0.52 & 0.47 & 0.49 & 0.20 & 0.18 & 0.18 & 0.30 & 0.27 & 0.28 \\ \cline{2-11}
& $C_2$ & 0.54 & 0.47 & 0.50 & 0.21 & 0.18 & 0.19 & 0.31 & 0.27 & 0.29 \\ \cline{2-11}
& $C_3$ & 0.53 & 0.46 & 0.49 & 0.21 & 0.18 & 0.19 & 0.30 & 0.27 & 0.28 \\ \cline{1-11} \cline{1-11}
& $C_1$ & {\bf0.46} & {\bf0.45} & {\bf0.46} & {\bf0.17} & {\bf0.17} & {\bf0.17} & {\bf0.27} & {\bf0.26} & {\bf0.27} \\ \cline{2-11}
${D_2}$ & $C_2$ & 0.45 & 0.41 & 0.43 & 0.15 & 0.15 & 0.15 & 0.26 & 0.25 & 0.25 \\ \cline{2-11}
& $C_3$ & 0.45 & 0.43 & 0.44 & 0.15 & 0.15 & 0.15 & 0.26 & 0.25 & 0.25 \\ \cline{2-11}
& $C_4$ & 0.44 & 0.42 & 0.43 & 0.14 & 0.13 & 0.14 & 0.26 & 0.25 & 0.25 \\ \cline{1-11} \cline{1-11}
& $C_2$ & {\bf0.58} & 0.58 & {\bf0.58} & {\bf0.29} & {\bf0.29} & {\bf0.29} & {\bf0.31} & {\bf0.30} & {\bf0.30} \\ \cline{2-11}
${D_5}$ & $C_1$ & 0.52 & 0.54 & 0.53 & 0.27 & 0.23 & 0.23 & 0.28 & 0.29 & 0.28 \\ \cline{2-11}
& $C_3$ & 0.54 & 0.59 & 0.56 & 0.26 & 0.29 & 0.27 & 0.28 & 0.30 & 0.29 \\ \cline{2-11}
& $C_4$ & 0.53 & {\bf0.60} & 0.56 & 0.25 & 0.29 & 0.27 & 0.27 & 0.30 & 0.28 \\ \cline{1-11} \cline{1-11}
& $C_3$ & {\bf0.54} & 0.45 & {\bf0.49} & {\bf0.17} & {\bf0.14} & {\bf0.15} & {\bf0.25} & {\bf0.22} & {\bf0.23} \\ \cline{2-11}
${D_7}$ & $C_1$ & 0.53 & 0.42 & 0.47 & 0.14 & 0.12 & 0.13 & 0.25 & 0.21 & 0.22 \\ \cline{2-11}
& $C_2$ & 0.49 & 0.43 & 0.46 & 0.12 & 0.11 & 0.11 & 0.22 & 0.20 & 0.21 \\ \cline{2-11}
& $C_4$ & 0.49 & {\bf0.46} & 0.47 & 0.12 & 0.12 & 0.12 & 0.22 & 0.21 & 0.21 \\ \cline{1-11} \cline{1-11}
& $C_3$ & {\bf0.48} & {\bf0.49} & {\bf0.48} & {\bf0.15} & {\bf0.16} & {\bf0.16} & {\bf0.25} & {\bf0.24} & {\bf0.25} \\ \cline{2-11}
${D_8}$ & $C_1$ & 0.44 & 0.48 & 0.46 & 0.14 & 0.16 & 0.15 & 0.24 & 0.25 & 0.24 \\ \cline{2-11}
& $C_2$ & 0.46 & 0.46 & 0.46 & 0.14 & 0.14 & 0.14 & 0.24 & 0.24 & 0.24 \\ \cline{2-11}
& $C_4$ & 0.45 & 0.46 & 0.45 & 0.12 & 0.13 & 0.12 & 0.22 & 0.23 & 0.23 \\ \cline{1-11} \cline{1-11}
\end{tabular} }
\caption{Precision, recall and F1-score of ROUGE-1, Rouge-2 and Rouge-L score of \textit{OntoRealSumm} for training a regression model on a dataset from different categories, i.e., \textit{Asian natural} ($C_1$), \textit{Asian man-made} ($C_2$), \textit{USA natural} ($C_3$), and \textit{USA man-made} ($C_4$). \RC{replace with 3heatmaps for each rouge score}}
\label{table:cat_train}
\end{table*}
\end{comment}
\subsection{Comparing different Regression Models} \label{s:regression}
\par In this Subsection, we compare $3$ different types of regression models, i.e., the \textit{Linear regression} model ($R_1$), the \textit{Ridge regression} model ($R_2$), and the \textit{Bayesian regression} model ($R_3$), to understand which model should be selected to predict the number of tweets from each category to include in the summary for a given $D_i$. For this, given a disaster $D_j$ similar to $D_i$, we train $R_1$, $R_2$, and $R_3$ on the categories of $D_j$ and predict the number of tweets from each category for $D_i$. We evaluate the performance of $R_1$, $R_2$, and $R_3$ through the mean squared error, as shown in Equation~\ref{eq:MSE}.
\begin{align}
MSE(R_i) = \frac{1}{K} \sum_{s=1}^{K} \left(N_g^s - N_r^s\right)^2
\label{eq:MSE}
\end{align}
where $N_g^s$ and $N_r^s$ are the number of tweets from category $s$ in the ground truth summary and as predicted by the regression model, respectively, $K$ is the number of categories in $D_i$, and $i$ can be $1$, $2$, or $3$. Therefore, the lower the value of $MSE(R_i)$, the better the performance of $R_i$. For the experiments, we randomly select $5$ different disasters, i.e., $D_1$, $D_4$, $D_5$, $D_6$, and $D_9$, and compare the performance of $R_1$, $R_2$, and $R_3$ on the basis of $MSE(R_i)$. We show our observations in Table~\ref{table:MSE}, which indicate that $R_1$ performs the best, followed by $R_2$ and $R_3$. Therefore, based on these observations, we select $R_1$, i.e., the \textit{Linear regression} model, in \textit{OntoRealSumm} to automatically identify the number of tweets to be selected from a category given a disaster.
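The MSE metric above is straightforward to compute; a minimal sketch over per-category tweet counts (the function and variable names are our own):

```python
def mse(n_ground, n_predicted):
    """MSE(R_i): mean squared error between per-category tweet counts in the
    ground truth summary and those predicted by a regression model."""
    assert len(n_ground) == len(n_predicted)
    k = len(n_ground)  # number of categories in the disaster
    return sum((g - p) ** 2 for g, p in zip(n_ground, n_predicted)) / k
```

For instance, ground-truth counts $(5, 3, 2)$ against predictions $(4, 3, 4)$ give an MSE of $(1 + 0 + 4)/3 = 5/3$.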
\begin{table}[ht]
\centering
\caption{We compare the Mean Squared Error ($MSE$) of $3$ different regression models, i.e., the \textit{Linear}, \textit{Ridge}, and \textit{Bayesian} models.}
\label{table:MSE}
\begin{tabular} {|c|c|c|c|c|c|c|c|}
\hline
\textbf{Dataset} & \textbf{Linear} & \textbf{Ridge} & \textbf{Bayesian} \\ \hline
$D_1$ & {\bf2.84} & 2.93 & 3.22 \\\hline
$D_4$ & {\bf1.12} & 1.14 & 1.15 \\\hline
$D_5$ & {\bf1.37} & 2.19 & 2.28 \\\hline
$D_6$ & {\bf1.66} & 1.85 & 2.88 \\\hline
$D_9$ & {\bf4.13} & {\bf4.13} & 4.40 \\\hline
\end{tabular}
\end{table}
\subsection{Identifying Representative Tweets from a Category} \label{s:selection}
\par In this Subsection, we compare the performance of \textit{Semantic similarity score based}~\cite{wu2003ontology} ($S_1$), \textit{Eigenvector centrality based}~\cite{bonacich1972factoring} ($S_2$), \textit{PageRank based}~\cite{brin1998anatomy} ($S_3$), \textit{K-means clustering based}~\cite{macqueen1967some} ($S_4$), and \textit{Maximal Marginal Relevance based} selection~\cite{carbonell1998use} ($S_5$) for \textbf{Phase-III}. For the experiments, we randomly select $4$ different disasters, i.e., $D_2$, $D_3$, $D_5$, and $D_6$. We compute the F1-scores for $3$ different variants of the ROUGE-N score, i.e., N=$1$, $2$, and L, respectively. Our observations from Figure~\ref{fig:IdenRep} indicate that $S_5$ consistently performs the best, followed by $S_1$ and $S_2$, while $S_3$ performs the worst irrespective of the dataset. Therefore, based on our observations, we select $S_5$, i.e., the \textit{MMR} based selection approach, in \textit{OntoRealSumm}.
\begin{figure}[t!]
\subfigure[]{\includegraphics[width=2.6in]{Figures/ROUGE-1.eps} \label{fig:Rouge_1}}
\subfigure[]{\includegraphics[width=2.6in]{Figures/ROUGE-2.eps} \label{fig:Rouge_2}}
\subfigure[]{\includegraphics[width=2.6in]{Figures/ROUGE-L.eps} \label{fig:Rouge_l}}
\caption{F1-scores of ROUGE-1 (Figure~\ref{fig:Rouge_1}), ROUGE-2 (Figure~\ref{fig:Rouge_2}), and ROUGE-L (Figure~\ref{fig:Rouge_l}) of \textit{OntoRealSumm} for $5$ different representative tweet selection methods, i.e., \textit{Semantic similarity score based} ($S_1$), \textit{Eigenvector centrality based} ($S_2$), \textit{PageRank based} ($S_3$), \textit{K-means clustering based} ($S_4$), and \textit{Maximal Marginal Relevance (MMR) based} ($S_5$), on $4$ disasters.
}
\label{fig:IdenRep}
\end{figure}
\begin{comment}
\begin{table}[ht]
\centering
\caption{F1-score of ROUGE-1, Rouge-2 and Rouge-L score of \textit{OntoRealSumm} for $5$ different representative tweets selection methods, i.e., \textit{Semantic similarity score based} ($S_1$), \textit{Eigenvector centrality based} ($S_2$), \textit{PageRank based} ($S_3$), \textit{K-mean clustering based} ($S_4$), and \textit{Maximal Marginal Relevance (MMR) based} ($S_5$) on $4$ disasters is shown. \RC{make a bargraph for this table; 3 figs for 3 diff types of rouge scores; in each fig, x axis represents the selection methods and y axis the rouge scores; u can change xaxis n y axis as u feel suitable } }
\label{table:selection1}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{\textbf{Datatset}} & \multirow{\textbf{Selection}} & \textbf{ROUGE-1} & \textbf{ROUGE-2} & \textbf{ROUGE-L} \\ \cline{3-5}
& \textbf{methods} & \textbf{F1-score} & \textbf{F1-score} & \textbf{F1-score} \\ \hline
& $S_1$ & 0.44 & 0.16 & 0.26 \\ \cline{2-5}
& $S_2$ & 0.43 & 0.16 & 0.26 \\ \cline{2-5}
$D_2$ & $S_3$ & 0.43 & 0.15 & 0.25 \\ \cline{2-5}
& $S_4$ & 0.46 & 0.17 & 0.27 \\ \cline{2-5}
& $S_5$ & {\bf0.46} & {\bf0.17} & {\bf0.27} \\ \cline{1-5}
& $S_1$ & 0.46 & 0.17 & 0.26 \\ \cline{2-5}
& $S_2$ & 0.46 & 0.17 & 0.27 \\ \cline{2-5}
$D_3$ & $S_3$ & 0.41 & 0.15 & 0.26 \\ \cline{2-5}
& $S_4$ & 0.47 & 0.17 & 0.26 \\ \cline{2-5}
& $S_5$ & {\bf0.49} & {\bf0.19} & {\bf0.28} \\ \cline{1-5}
& $S_1$ & 0.52 & 0.26 & 0.30 \\ \cline{2-5}
& $S_2$ & 0.49 & 0.22 & 0.27 \\ \cline{2-5}
$D_5$ & $S_3$ & 0.47 & 0.20 & 0.24 \\ \cline{2-5}
& $S_4$ & 0.52 & 0.27 & 0.30 \\ \cline{2-5}
& $S_5$ & {\bf0.58} & {\bf0.29} & {\bf0.31} \\ \cline{1-5}
& $S_1$ & 0.55 & 0.22 & 0.28 \\ \cline{2-5}
& $S_2$ & 0.55 & 0.23 & 0.28 \\ \cline{2-5}
$D_6$ & $S_3$ & 0.50 & 0.20 & 0.28 \\ \cline{2-5}
& $S_4$ & 0.55 & 0.23 & 0.29 \\ \cline{2-5}
& $S_5$ & {\bf0.56} & {\bf0.23} & {\bf0.29} \\ \cline{1-5}
\end{tabular}
\end{table}
\end{comment}
\subsection{Limitations of \textit{OntoRealSumm}} \label{s:fail}
\par We next discuss the limitations of \textit{OntoRealSumm} that we have observed.
\begin{enumerate}
\item \textit{Dependence on Existing Ontology}: As \textit{OntoRealSumm} relies on an existing ontology to identify the category of a tweet, it is not directly applicable to other summarization applications, such as news events or user opinions regarding products, unless an ontology for that application is available. As a future direction, we are working towards automatically developing an ontology from publicly available resources for any application so that \textit{OntoRealSumm} does not depend on an existing ontology.
\item \textit{Identification of the Category of a Tweet}:
\textit{OntoRealSumm} cannot identify the categories of all tweets. For example, among all the disasters, \textit{OntoRealSumm} performs the worst for $D_5$, where it could not identify the category of around $12\%$ of the tweets. Therefore, we believe Phase-I of \textit{OntoRealSumm} could be further improved such that the categories of more tweets can be identified.
\item \textit{Generalizability of \textit{OntoRealSumm}}: As we currently do not have any dataset from regions other than Asia and the USA, we cannot ensure the generalizability of \textit{OntoRealSumm} irrespective of location. To resolve this, as a future direction, we are working towards collecting tweets related to disasters from different locations and preparing ground truth summaries for them so that we can validate the performance of \textit{OntoRealSumm}.
\end{enumerate}
\section{Conclusions and Future works} \label{s:con}
\par In this paper, we propose \textit{OntoRealSumm}, which can generate a real-time tweet summary for a disaster with minimal human intervention. \textit{OntoRealSumm} utilizes a three-phase approach to explicitly handle multiple challenges, such as improving categorization quality, finding the importance of each category, and ensuring information coverage and diversity for each category in the final summary. Our experimental analysis shows that \textit{OntoRealSumm} ensures a $6-42\%$ increase in ROUGE-N F1-scores over existing research works. Through experiments, we show the effectiveness of each phase of \textit{OntoRealSumm} in generating a summary with minimal human intervention. As future work, we plan to extend \textit{OntoRealSumm} so that it does not depend on an existing ontology by creating the ontology automatically, and we are also working towards improving the categorization of Phase-I.
\begin{comment}
\begin{table}[ht]
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{\textbf{Datatset}} & \multirow{\textbf{Regression}} & \textbf{ROUGE-1} & \textbf{ROUGE-2} & \textbf{ROUGE-L} & \multirow{\textbf{Datatset}} & \multirow{\textbf{Regression}} & \textbf{ROUGE-1} & \textbf{ROUGE-2} & \textbf{ROUGE-L} \\ \cline{3-5} \cline{8-10}
& \textbf{methods} & \textbf{F1-score} & \textbf{F1-score} & \textbf{F1-score} & & \textbf{methods} & \textbf{F1-score} & \textbf{F1-score} & \textbf{F1-score} \\ \hline
& Linear & {\bf0.53} & {\bf0.22} & {\bf0.31} & & Linear & {\bf0.57} & {\bf0.25} & {\bf0.29} \\ \cline{2-5} \cline{7-10}
${D_1}$ & Ridge & 0.51 & 0.21 & 0.30 & ${D_6}$ & Ridge & 0.55 & 0.22 & 0.27 \\ \cline{2-5} \cline{7-10}
& Bayesian & 0.51 & 0.21 & 0.29 & & Bayesian & 0.57 & 0.22 & 0.28 \\ \hline
& Linear & 0.44 & 0.16 & 0.26 & & Linear & 0.46 & 0.13 & 0.22 \\ \cline{2-5} \cline{7-10}
${D_2}$ & Ridge & 0.44 & 0.15 & 0.25 & ${D_7}$ & Ridge & 0.49 & 0.15 & 0.24 \\ \cline{2-5} \cline{7-10}
& Bayesian & {\bf0.45} & {\bf0.17} & {\bf0.26} & & Bayesian & {\bf0.49} & {\bf0.15} & {\bf0.24} \\ \hline
& Linear & 0.46 & 0.16 & 0.26 & & Linear & 0.45 & 0.14 & 0.24 \\ \cline{2-5} \cline{7-10}
${D_3}$ & Ridge & 0.47 & 0.16 & 0.26 & ${D_8}$ & Ridge & 0.43 & 0.12 & 0.22 \\ \cline{2-5} \cline{7-10}
& Bayesian & {\bf0.47} & {\bf0.16} & {\bf0.26} & & Bayesian & {\bf0.46} & {\bf0.15} & {\bf0.24} \\ \hline
& Linear & {\bf0.47} & {\bf0.18} & {\bf0.28} & & Linear & {\bf0.51} & {\bf0.17} & {\bf0.23} \\ \cline{2-5} \cline{7-10}
${D_4}$ & Ridge & 0.47 & 0.17 & 0.26 & ${D_9}$ & Ridge & 0.48 & 0.14 & 0.22 \\ \cline{2-5} \cline{7-10}
& Bayesian & 0.47 & 0.18 & 0.27 & & Bayesian & 0.51 & 0.16 & 0.23 \\ \hline
& Linear & {\bf0.58} & {\bf0.29} & 0.29 & & Linear & 0.50 & 0.12 & 0.22 \\ \cline{2-5} \cline{7-10}
${D_5}$ & Ridge & 0.56 & 0.26 & 0.29 &${D_{10}}$& Ridge & 0.53 & 0.13 & 0.22 \\ \cline{2-5} \cline{7-10}
& Bayesian & 0.57 & 0.29 & {\bf0.30} & & Bayesian & {\bf0.53} & {\bf0.16} & {\bf0.24} \\ \hline
\end{tabular} }
\caption{We compare F1-score of ROUGE-N (N= 1, 2, and L) of the summaries generated by \textit{OntoRealSumm} for $3$ different regression models, like Linear, Ridge, and Bayesian regression.}
\label{table:Regression}
\end{table}
\PG{another table for the compare of different regression models}
\end{comment}
\begin{comment}
\par Existing disaster summarization approaches initially identify the groups of similar tweets followed by the selection of representative tweets from each group to form a summary~\cite{rudra2015extracting, roy2020classification, rudra2018classifying, nguyen2015tsum4act}. The identification of similar groups are further categorized into supervised and unsupervised methods. Several approaches~\cite{} proposed to identify the group of similar tweets using unsupervised approaches, such as graph-based~\cite{}, topic-based~\cite{}, cluster-based~\cite{}. These approaches utilize the content similarity of the tweets, and further, based on the similarity among tweets, they identify the similar groups of tweets. However, these approaches do not explicitly consider the representation of each group in summary. Furthermore, the importance of each group differs with respect to disasters. To handle this, Rudra et al.~\cite{rudra2016summarizing, rudra2019summarizing} propose a supervised approach that explicitly identifies the categories of disaster tweets and further, summarize each category to generate the disaster summary. However, these approaches are dependent on AIDR~\cite{imran2014aidr} for category identification of a tweet, thus being applicable only or real-time disaster events and requires human intervention for each dataset~\cite{imran2014coordinating}. Therefore, automatic summarization of disasters tweets requires both automatic identification of the category of a tweet followed by understanding of the importance of each category with respect to a disaster and then, summarization of each of these categories to ensure representation and information coverage of each category in the summary.
\end{comment}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Driven by the large-scale pre-training, today's NLP models have become much more powerful~\citep{Devlin2019BERT,Yang2019XLNet,Lan2020ALBERT,Raffel2020T5,Sun2020Colake,Brown2020GPT3,Qiu2020survey}. As a consequence of this drastic increase in performance, these pre-trained language models (PLMs) are notorious for becoming more and more computationally expensive due to the increasing number of parameters. Therefore, rather than pre-training a larger model to achieve a new state-of-the-art (SOTA) accuracy,
most studies are pursuing improvement on other dimensions such as the number of parameters or FLOPs~\citep{Gordon2020Compressing,Sanh2019DistilBERT,Jiao2020TinyBERT,Lan2020ALBERT,Shen2020QBERT}. For these works, the goal has shifted from simple SOTA to ``Pareto SOTA''. A Pareto SOTA model is one for which there is no other model that is currently better on all the dimensions of interest. For example, a model may claim to be Pareto SOTA as long as it achieves the best accuracy under the same number of parameters or FLOPs. For these efficient models with fewer parameters or FLOPs, it is unfair to evaluate them on accuracy-centric benchmarks such as GLUE~\cite{Wang2019GLUE} and rank them among many large-scale models.
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{source/pareto.png}
\caption{An illustration of our motivation: building the Pareto frontier can help recognize whether and by how much a method achieves a Pareto improvement.\vspace{-0.5cm}}
\label{fig:pareto}
\end{figure}
The shifted goal has outpaced the existing benchmarks, which cannot provide a comprehensive and intuitive comparison for efficient methods. In the absence of a proper benchmark, measures of efficiency in different studies cannot be standardized, and different methods cannot be fairly compared. As a result, it is difficult to say \textit{whether} and \textit{how much} a method achieves Pareto improvement. To that end, we aim to build the Pareto frontier for various tasks with standard evaluation for both performance and efficiency. Our motivation can be briefly illustrated by Figure~\ref{fig:pareto}.
\paragraph{Need for a standard evaluation}
As the goal has shifted, a new benchmark is urgently needed to comprehensively compare NLP models in multiple dimensions. Currently, this multi-dimensional comparison is done in individual papers, resulting in the following issues: \textbf{(a) Incomprehensive comparison.} The comparison is usually point-to-point, e.g. comparing model performance under the same FLOPs. A comparison over a broader range is usually missing, especially for works in conditional computation where the model performance varies with FLOPs. \textbf{(b) Inaccessible results.} Even if a comprehensive line-to-line comparison is conducted, the results are usually presented in the form of figures, in which the data points are not accessible for follow-up work. As a result, follow-up work has to reproduce or estimate the results (e.g. \citet{Xin2021BERxiT} estimate values from the figures of \citet{Zhou2020PABEE}). \textbf{(c) Non-standard measurements.} Different works may adopt different metrics such as physical elapsed time, FLOPs, and executed model layers, making them hard to compare directly. Even if the adopted metrics are the same, there is no guarantee that they will be calculated in the same way (e.g. the hardware infrastructure, or the software used to calculate FLOPs, can be very different\footnote{We find that the FLOPs of Transformers calculated by different libraries (\texttt{thop}, \texttt{ptflops}, and \texttt{torchstat}) can differ. Besides, all of them miss FLOPs in some operations such as self-attention and layer normalization.}). \textbf{(d) Inconvenience.} Recent studies usually choose GLUE~\cite{Wang2019GLUE} as the main benchmark, which, however, is not suitable for dynamic methods due to its submission limit, which is designed to avoid overfitting on test sets.
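As the footnote suggests, off-the-shelf profilers disagree partly because the attention products are easy to miss, while the matrix-multiplication cost of one encoder layer can be counted analytically. The sketch below is our own back-of-the-envelope simplification (it ignores embeddings, layer normalization, softmax, and biases) and is not the metric ELUE itself reports:

```python
def transformer_layer_flops(seq_len, hidden, ffn_hidden):
    """Rough FLOPs (2 x multiply-accumulates) for one Transformer encoder
    layer, including the attention score and context products that some
    profiling libraries miss. Ignores layer norm, softmax, and biases."""
    qkv = 3 * seq_len * hidden * hidden       # Q, K, V projections
    attn = 2 * seq_len * seq_len * hidden     # QK^T and (scores x V)
    out = seq_len * hidden * hidden           # attention output projection
    ffn = 2 * seq_len * hidden * ffn_hidden   # two feed-forward matmuls
    return 2 * (qkv + attn + out + ffn)
```

For hidden size 768 and FFN size 3072 at sequence length 128, this yields about 1.86 GFLOPs per layer.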
\paragraph{Need for a strong baseline}
Currently, there are roughly two branches of efficient methods in NLP: static methods (e.g. distillation, pruning, quantization, etc.) and dynamic methods (e.g. early exiting). \textbf{(a) Static models} are obtained given an expected number of parameters or inference latency. These methods often use the first few layers (to keep the same number of parameters or FLOPs) of some pre-trained model followed by a classification head as their baseline, which, however, is too weak. \textbf{(b) Dynamic models} usually add multiple internal classifiers to pre-trained LMs, and therefore allow flexible inference conditioned on the input. Nevertheless, the injected internal classifiers introduce a gap between pre-training and fine-tuning. Training the internal classifiers on downstream tasks often degrades the performance of the entire model~\cite{Xin2021BERxiT}. Thus, static models need a strong baseline, and dynamic models need a strong backbone.
\paragraph{Contributions} In this work, we address the above needs by contributing the following:
\begin{itemize}
\item \textbf{ELUE}(\textbf{E}fficient \textbf{L}anguage \textbf{U}nderstanding \textbf{E}valuation) -- a standard benchmark for efficient NLP models. (1) ELUE supports online evaluation for model performance, FLOPs, and number of parameters. (2) ELUE is also an open-source platform that can facilitate future research. We reproduce and evaluate multiple compressed and early exiting methods on ELUE. All of the results are publicly accessible on ELUE. (3) ELUE provides an online leaderboard that uses a specific metric to measure how much a model oversteps the current Pareto frontier. ELUE leaderboard also maintains several separate tracks for models with different sizes. (4) ELUE covers six NLP datasets spanning sentiment analysis, natural language inference, similarity and paraphrase tasks. The ELUE benchmark is publicly available at~\url{http://eluebenchmark.fastnlp.top/}.
\item \textbf{ElasticBERT} -- a strong baseline (backbone) for static (dynamic) models. ElasticBERT is a multi-exit Transformer~\cite{Vaswani2017Attention} pre-trained on $\sim$160GB corpus. The pre-training objectives, MLM and SOP~\cite{Lan2020ALBERT}, are applied to multiple Transformer layers instead of only the last layer. Gradient equilibrium~\cite{Li2019Improve} is adopted to alleviate the conflict of the losses at different layers. For static models, ElasticBERT is a strong baseline that can reach or even outperform distilled models. For dynamic models, ElasticBERT is a robust backbone that closes the gap between pre-training and fine-tuning. We release the pre-trained model weights of ElasticBERT\footnote{\href{https://huggingface.co/fnlp}{https://huggingface.co/fnlp}} as well as code\footnote{\href{https://github.com/fastnlp/ElasticBERT}{https://github.com/fastnlp/ElasticBERT}}.
\end{itemize}
\section{Related Work}
\paragraph{NLP Benchmarks}
Evaluating the quality of language representations on multiple downstream tasks has become a common practice in the community. These evaluations have measured and pushed the progress of NLP in recent years. SentEval~\cite{Conneau2018SentEval} introduces a standard evaluation toolkit for multiple NLP tasks.
Further, GLUE~\cite{Wang2019GLUE} and SuperGLUE~\cite{Wang2019Superglue} provide a set of more difficult datasets for model-agnostic evaluation. Another line of work is multi-dimensional evaluations. EfficientQA~\cite{Min2020EfficientQA} is an open-domain question answering challenge that evaluates both accuracy and system size. The system size is measured as the number of bytes required to store a Docker image that contains the submitted system. Dynabench~\cite{Kiela2021Dynabench}, an open-source benchmark for dynamic dataset creation and model evaluation, also supports multi-dimensional evaluation. In particular, Dynabench measures model performance, throughput, memory use, fairness, and robustness. Both EfficientQA and Dynabench require the user to upload the model along with the required environment to the server, which is costly for users to upload and also for the server to evaluate. In contrast, ELUE adopts a cheaper way to evaluate performance and efficiency of the model. Recently, Long-Range Arena (LRA)~\cite{Tay2021LRA} is proposed to evaluate models under the long-context scenario. Different from ELUE, LRA mainly focuses on Xformers~\cite{lin2021transformers}. Besides, some tasks included in LRA are not NLP tasks, or even not real-world tasks, while ELUE consists of common language understanding tasks. In addition, ELUE is also inspired by other well-known benchmarks, such as SQuAD~\cite{Rajpurkar2016SQuAD}, MultiNLI~\cite{Williams2018MNLI}, DecaNLP~\cite{McCann2018DecaNLP}, CLUE~\cite{Xu2020CLUE}, HotpotQA~\cite{Yang2018HotpotQA}, GEM~\cite{gem2021Gehrmann}, etc.
\paragraph{Efficient NLP Models} Current efficient NLP models can be roughly categorized as two streams: model compression (static methods) and conditional computation (dynamic methods). Model compression is to reduce the number or precision of model parameters to achieve faster training and inference. Currently, there are several ways to achieve model compression: (1) \textit{Knowledge Distillation}, which is to learn a compact student model that learns from the output distribution of a large-scale teacher model~\cite{Sanh2019DistilBERT,Jiao2020TinyBERT}, (2) \textit{Model Pruning}, which is to remove parts of parameters that are less important~\cite{Gordon2020Compressing}, (3) \textit{Weight Sharing} across different parts of the model~\cite{Lan2020ALBERT} is also a common technique to significantly reduce parameters, (4) \textit{Quantization}, which is to use low bit precision to store parameters and accelerate inference with low bit hardware operations~\cite{Shen2020QBERT}, and (5) \textit{Module Replacing}, which is to replace the modules of a big model with more compact substitutes~\cite{Xu2020BERTTheseus}. In contrast, conditional computation is to selectively execute only parts of the model conditioned on a given input~\cite{Bengio2013Estimating,Davis2013Lowrank}. As a representative, an end-to-end halting approach, Adaptive Computation Time (ACT)~\cite{Graves2016Adaptive}, is developed to perform input-adaptive computation for recurrent networks. The idea of ACT is later adopted in Universal Transformer~\cite{Dehghani2019Universal}. Recently, with the rise of deep models for natural language processing, early exiting is widely used to speed up the inference of Transformer models~\cite{Liu2020FastBERT,Xin2020DeeBERT,Schwartz2020Right,Zhou2020BERT,Elbayad2020Depth,Liao2021Global,Xin2021BERxiT,Sun2021Early,Zhu2021Leebert,Li2021cascadebert}.
\section{ELUE: A Standard Benchmark for Efficient NLP Models}
ELUE aims to offer a standard evaluation for various efficient NLP models, such that they can be fairly and comprehensively compared. In Section~\ref{sec:design}, we list the design considerations to achieve this motivation. In Section~\ref{sec:task}, we describe the tasks and datasets included in ELUE. In Section~\ref{sec:eval}, we illustrate how to make a submission on ELUE, and how the submission is evaluated. In Section~\ref{sec:leaderboard}, we discuss the design of our leaderboard.
\subsection{Design Considerations}
\label{sec:design}
Now we enumerate the main considerations in the design of ELUE to ensure that it meets the needs mentioned earlier.
\paragraph{Multi-dimensional Evaluation}
The evaluation of ELUE should be multi-dimensional for comprehensive comparison. Instead of point-to-point comparison, methods can be compared in a line-to-line style in ELUE, where the ``line'' is a performance-efficiency trade-off curve.
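For concreteness, the non-dominated (Pareto) set that such curves are compared against can be computed with a single sweep. The sketch below is our own illustration, treating lower FLOPs and higher performance as better; it is not part of the ELUE codebase:

```python
def pareto_frontier(points):
    """Return the non-dominated subset of (flops, performance) points,
    sorted by increasing FLOPs; lower FLOPs and higher performance win."""
    frontier, best_perf = [], float("-inf")
    # Sort by FLOPs ascending; break ties by higher performance first,
    # so a point is kept only if it beats every cheaper (or equal) model.
    for flops, perf in sorted(points, key=lambda p: (p[0], -p[1])):
        if perf > best_perf:
            frontier.append((flops, perf))
            best_perf = perf
    return frontier
```

A submitted curve achieves a Pareto improvement exactly when some of its points survive this filter against the current frontier.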
\paragraph{Public Accessible}
All data points in ELUE should be publicly accessible such that the following work does not need to reproduce or estimate results from previous work. To facilitate future research, some representative methods should be reproduced and evaluated in ELUE.
\paragraph{Standard Evaluation}
The measurement of model efficiency should be standardized in ELUE such that this line of methods can be fairly compared. Current studies usually use the number of parameters~\cite{Lan2020ALBERT,Jiao2020TinyBERT}, FLOPs~\cite{Jiao2020TinyBERT,Liu2020FastBERT,Li2020Accelerating}, actual inference time~\cite{Sanh2019DistilBERT,Schwartz2020Right}, or the number of executed layers~\cite{Zhou2020PABEE,Sun2021Early} to measure model efficiency. Among these metrics, measuring actual inference time is costly for both users and the server and highly depends on the computation infrastructure and software implementation, while the number of executed layers ignores the shapes of the input and hidden layers and is therefore inaccurate. Thus, ELUE adopts the number of parameters and FLOPs as the metrics for model efficiency.
\paragraph{Easy-to-Use}
ELUE should be friendly to users, which means that the submission should be as simple as possible. Roughly speaking, there are currently two ways of submission: (1) submitting the trained model, as in SQuAD~\cite{Rajpurkar2016SQuAD} and Dynabench~\cite{Kiela2021Dynabench}, and (2) submitting the predicted test files, as in GLUE~\cite{Wang2019GLUE}, SuperGLUE~\cite{Wang2019Superglue}, and CLUE~\cite{Xu2020CLUE}. ELUE adopts the latter. Nevertheless, to evaluate the number of parameters and FLOPs, the submitted test files should conform to a specific format, and besides, a Python file defining the used model is also required. For more details about submission and evaluation, see Appendix~\ref{sec:eval}.
\subsection{Task and Dataset Selection}
\label{sec:task}
Following GLUE~\cite{Wang2019GLUE}, SuperGLUE~\cite{Wang2019Superglue}, and CLUE~\cite{Xu2020CLUE}, we collect tasks that can be formatted as single sentence classification or sentence pair classification. Since ELUE mainly focuses on efficient models, the difficulty of dataset is not a primary consideration. Instead, we collect tasks and datasets that are commonly used and publicly available in the community. The statistics of the collected datasets are listed in Table~\ref{tab:dataset}.
\begin{table}[h]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{llrrr}
\toprule
\textbf{Tasks} & \textbf{Datasets} & \textbf{|Train|} & \textbf{|Dev|} & \textbf{|Test|} \\ \midrule
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Sentiment\\ Analysis\end{tabular}} & SST-2 & 8,544 & 1,101 & 2,208 \\
& IMDb & 20,000 & 5,000 & 25,000 \\ \midrule
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Natural Language\\ Inference\end{tabular}} & SNLI & 549,367 & 9,842 & 9,824 \\
& SciTail & 23,596 & 1,304 & 2,126 \\ \midrule
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Similarity and\\ Paraphrase\end{tabular}} & MRPC & 3,668 & 408 & 1,725 \\
& STS-B & 5,749 & 1,500 & 1,379 \\ \bottomrule
\end{tabular}
}
\caption{Statistics of datasets in ELUE.\vspace{-0.5cm}}
\label{tab:dataset}
\end{table}
\paragraph{Sentiment Analysis}
Sentiment analysis, which is to classify the polarity of a given text, is a fundamental task in NLP. We select two well-known movie review datasets, Stanford Sentiment Treebank (SST)~\cite{Socher2013SST} and IMDb~\cite{Maas2011IMDb}. For SST, we use the two-way class split, i.e. SST-2. Different from GLUE, SST-2 samples in ELUE are complete sentences instead of phrases. For IMDb, we randomly select 2.5k positive samples and 2.5k negative samples from the training set to construct a development set.
\paragraph{Natural Language Inference}
Natural language inference (NLI) is a task to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither. NLI is often formulated as a sentence pair classification task~\cite{Devlin2019BERT,Sun2021Paradigm}. We select two NLI datasets, SNLI~\cite{Bowman2015SNLI} and SciTail~\cite{Khot2018SciTail}. SNLI is a crowd-sourced collection of sentence pairs with balanced labels: \textit{entailment}, \textit{contradiction}, and \textit{neutral}. We use the spell-checked version of the test and development sets\footnote{\href{https://nlp.stanford.edu/projects/snli/}{https://nlp.stanford.edu/projects/snli/}}. The hard samples, which do not have golden labels due to the disagreement of annotators, are removed from the dataset and left for model diagnostic. SciTail is a two-way (\textit{entail} or \textit{neutral}) entailment classification dataset, which is derived from multiple-choice science exams and web sentences.
\paragraph{Similarity and Paraphrase}
For similarity and paraphrase tasks, we also select two datasets, Microsoft Research Paraphrase Corpus (MRPC)~\cite{Dolan2005MRPC}, and Semantic Textual Similarity Benchmark (STS-B)~\cite{Cer2017SemEval2017}, both of which are also included in GLUE. MRPC is a collection of automatically extracted sentence pairs, each manually-labeled with a judgment to indicate whether the pair constitutes a paraphrase. STS-B is a corpus of sentence pairs, each of which is labeled with a score from 0 to 5 to represent the degree to which two sentences are semantically equivalent.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{source/elue_score.png}
\caption{An illustration to show how ELUE score is computed.\vspace{-0.5cm}}
\label{fig:elue_score}
\end{figure}
\subsection{Leaderboard}
\label{sec:leaderboard}
Following prior work~\cite{Yang2018HotpotQA,Wang2019GLUE,Xu2020CLUE}, we also integrate a leaderboard in ELUE. For dynamic models that have multiple performance-FLOPs coordinates on each dataset, we need to aggregate these coordinates into a single score. A critical problem is to measure how good a coordinate is. In other words, to measure a coordinate $(p, f)$, where $p$ is performance and $f$ is FLOPs, we need a baseline performance under the same FLOPs. We choose ElasticBERT as the baseline curve. We evaluate different layers of ElasticBERT and obtain 12 coordinates $(p^{EB}_i, f^{EB}_i)_{i=1}^{12}$, which are then interpolated to obtain a performance-FLOPs function $p^{EB}(f)$. With the baseline curve at hand, we can score a submission curve as
\begin{equation}
\text{ELUEScore} = \frac{1}{n}\sum_{i=1}^n [p_i - p^{EB}(f_i)].
\label{eq:elue_score}
\end{equation}
Note that the coordinates of ElasticBERT are separately interpolated on different datasets. The final ELUE score is an unweighted average of the scores on all 6 datasets. Figure~\ref{fig:elue_score} gives an illustration of how the ELUE score is computed. The ELUE score reflects the extent to which the submission oversteps ElasticBERT.
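Eq.~(\ref{eq:elue_score}) can be realized with a short routine; the sketch below is a minimal illustration using piecewise-linear interpolation between the layer-wise ElasticBERT coordinates (with hypothetical numbers in the usage below), not the evaluation server's implementation:

```python
import bisect

def interp(f, xs, ys):
    """Piecewise-linear interpolation of the baseline curve at FLOPs f,
    clamped to the endpoints outside the measured range."""
    if f <= xs[0]:
        return ys[0]
    if f >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, f)
    t = (f - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def elue_score(submission, baseline):
    """ELUE score on one dataset: the mean gap between a submission's
    (flops, performance) points and the interpolated baseline curve,
    e.g. the 12 layer-wise ElasticBERT results on the same dataset."""
    pts = sorted(baseline)
    xs = [f for f, _ in pts]
    ys = [p for _, p in pts]
    gaps = [p - interp(f, xs, ys) for f, p in submission]
    return sum(gaps) / len(gaps)
```

The overall ELUE score would then be the unweighted mean of this per-dataset quantity over the six datasets.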
In addition, following EfficientQA~\cite{Min2020EfficientQA}, ELUE leaderboard also maintains four additional separate tracks, corresponding to models below 40M, 55M, 70M, 110M parameters. Models in these tracks are ranked by the average performance on all the datasets.
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{source/ElasticBERT.pdf}
\caption{ElasticBERT is pre-trained with multiple pre-training heads attached at the intermediate layers. For static usage (left), it can be pruned on demand while outperforming previous pre-trained models with the same size. For dynamic usage (right), it can serve as the backbone for early exiting methods, achieving better performance-efficiency trade-off than early exiting models with other backbones.}
\label{fig:elasticity of elasticbert}
\end{figure*}
\section{ElasticBERT: A Strong Baseline for Efficient Inference}
Despite the encouraging results achieved by existing efficient models, we argue that a strong baseline (backbone) is needed for both static methods and dynamic methods. Static methods often choose the first few layers of some pre-trained models as their baseline (e.g. \citet{Sun2019PKD,Jiao2020TinyBERT}), which can be weak. Dynamic methods that enable early exiting by training multiple internal classifiers usually introduce a gap between pre-training and fine-tuning, and therefore hurt the performance of the entire model~\cite{Xin2021BERxiT}. Thus, as illustrated in Figure~\ref{fig:elasticity of elasticbert}, we present ElasticBERT, which bridges the gap between static and dynamic methods, and can therefore serve as a strong baseline for static methods and a strong backbone for dynamic methods.
ElasticBERT is a multi-exit pre-trained language model with the following training objective:
\begin{equation}
\mathcal{L} = \sum_{l=1}^{L}(\mathcal{L}^{\text{MLM}}_l + \mathcal{L}^{\text{SOP}}_l),
\label{eq:loss}
\end{equation}
where $L$ is the total number of layers, $\mathcal{L}^{\text{MLM}}$ is the $n$-gram masked language modeling loss, and $\mathcal{L}^{\text{SOP}}$ is the sentence order prediction loss~\cite{Lan2020ALBERT}. The two losses are applied to each layer of the model, such that the number of layers can be flexibly scaled on downstream tasks; hence the name ``ElasticBERT''.
\paragraph{Bridge the Gap Between Static and Dynamic Methods}
As a baseline for static methods, the depth of ElasticBERT can be flexibly reduced on demand. Compared with the first $l$ layer of BERT~\cite{Devlin2019BERT}, the $l$-layered ElasticBERT is a complete model~\cite{Turc2019BERTComplete,Li2021cascadebert} and can achieve better performance. It is worth noticing that ElasticBERT can be regarded as a special instance of LayerDrop~\cite{Fan2020LayerDrop} where the dropped layers are constrained to the top consecutive layers. As a backbone for dynamic methods, training classifiers injected in intermediate layers would be consistent with pre-training. Therefore, ElasticBERT can not only be used as a static complete model, but also be used as a backbone model of dynamic early exiting.
\paragraph{Gradient Equilibrium}
Pre-training with the simply summed loss in Eq. (\ref{eq:loss}) could lead to a \textit{gradient imbalance} issue~\cite{Li2019Improve}. In particular, due to the overlap of subnetworks, the variance of the gradient may grow overly large, leading to unstable training. To address this issue, we follow \citet{Li2019Improve} and adopt the gradient equilibrium (GE) strategy\footnote{The reader is referred to the original paper for more details. In brief, the gradients of $\mathcal{L}_j$ w.r.t. the parameters of the $i$-th layer ($i<j$) would be properly rescaled.} in the pre-training of ElasticBERT.
\paragraph{Grouped Training}
In our preliminary experiments, we found that summing up the losses at all layers could slow down pre-training and increase the memory footprint. To alleviate this, we divide the $L$ exits into $G$ groups. During training, we optimize the losses of the exits within one group per batch, cycling through the groups across consecutive batches:
\begin{equation}
\mathcal{L} = \sum_{l\in \mathcal{G}_i} (\mathcal{L}^{\text{MLM}}_l + \mathcal{L}^{\text{SOP}}_l).
\label{eq:group}
\end{equation}
In Section~\ref{sec:ablation} we explore the performance of different grouping methods. As a result, we group the 12 exits of ElasticBERT\textsubscript{BASE} into $\mathcal{G}_1$=\{1, 3, 5, 7, 9, 11, 12\} and $\mathcal{G}_2$=\{2, 4, 6, 8, 10, 12\}, and group the 24 exits of ElasticBERT\textsubscript{LARGE} into $\mathcal{G}_1$=\{1, 4, 7, ..., 22, 24\}, $\mathcal{G}_2$=\{2, 5, 8, ..., 23, 24\}, and $\mathcal{G}_3$=\{3, 6, 9, ..., 21, 24\}. Our experiments demonstrate that grouped training can significantly speedup the process of pre-training without a loss in performance.
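The alternation described above can be sketched as follows, where \texttt{loss\_at\_exit} is a hypothetical stand-in for the actual per-exit MLM+SOP loss computation:

```python
GROUPS_BASE = [
    [1, 3, 5, 7, 9, 11, 12],   # G1 for the 12-exit BASE model
    [2, 4, 6, 8, 10, 12],      # G2; the final exit appears in every group
]

def grouped_loss_schedule(num_batches, loss_at_exit, groups=GROUPS_BASE):
    """Batch b optimizes only the exits in groups[b % len(groups)],
    cycling through the exit groups across consecutive batches."""
    totals = []
    for b in range(num_batches):
        group = groups[b % len(groups)]
        totals.append(sum(loss_at_exit(b, layer) for layer in group))
    return totals
```

Only a subset of exit heads contributes to each backward pass, which is what reduces the per-batch compute and memory relative to summing all $L$ exits.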
\section{Experiments}
\label{sec:exp}
\subsection{Experimental Setup}
\paragraph{Pre-training Setup}
Following BERT~\cite{Devlin2019BERT}, we train ElasticBERT in two different configurations: ElasticBERT\textsubscript{BASE} and ElasticBERT\textsubscript{LARGE}, which have the same model sizes with BERT\textsubscript{BASE} and BERT\textsubscript{LARGE}, respectively. The detailed description can be found in Appendix \ref{sec:trainingsetup}.
\paragraph{Downstream Evaluation}
We evaluate ElasticBERT on the ELUE benchmark, as a static model and as a dynamic model. As a static model, we evaluate different layers of ElasticBERT, denoted as ElasticBERT-$n$L. As a dynamic model, we inject and train internal classifiers in ElasticBERT\textsubscript{BASE} and adopt two strategies, entropy~\cite{Xin2020DeeBERT} and patience~\cite{Zhou2020PABEE}, to enable early exiting, denoted as ElasticBERT\textsubscript{entropy} and ElasticBERT\textsubscript{patience}. To compare with previous work, we also evaluate ElasticBERT on the GLUE benchmark~\cite{Wang2019GLUE}. The comparison results are shown in Appendix \ref{sec:elasticBERTonGLUE}. For static usage, we fine-tune ElasticBERT and our baseline models for 10 epochs with early stopping, using the AdamW optimizer~\cite{Ilya2019AdamW} with learning rates of \{1e-5, 2e-5, 3e-5\} and a batch size of 32, and warm up the learning rate for the first 6 percent of the total steps. For dynamic usage, we train models that use two-stage training methods for 3 epochs per stage, and other models for 5 epochs. Other optimization configurations are the same as those in the static scenario.
\begin{table*}[t!]
\centering
\resizebox{.95\linewidth}{!}{
\begin{tabular}{lccccccccc}
\toprule
\textbf{Model} & \textbf{\#Params} & \textbf{\#FLOPs} & \textbf{SST-2} & \textbf{IMDb} & \textbf{MRPC} & \textbf{STS-B} & \textbf{SNLI} & \textbf{SciTail} & \textbf{Average} \\
\midrule
\multicolumn{10}{c}{\textit{BASE Models}} \\
\midrule
BERT\textsubscript{BASE} & 109M & 13399M & 85.1 & 93.0 & 83.1 & 84.2 & 90.4 & 93.2 & 88.2 \\
ALBERT\textsubscript{BASE} & 12M & 13927M & 86.6 & 92.9 & 87.8 & 88.3 & 90.1 & 93.4 & 89.9 \\
RoBERTa\textsubscript{BASE} & 125M & 13103M & 88.3 & \textbf{94.9} & 88.0 & \textbf{89.6} & 91.3 & 92.8 & \textbf{90.8} \\
LayerDrop\textsubscript{BASE} & 125M & 13103M & 88.5 & 94.2 & \textbf{88.2} & 87.1 & 90.7 & 92.8 & 90.3 \\
\textbf{ElasticBERT}\textsubscript{BASE} & 109M & 13399M & \textbf{88.6} & 93.9 & 87.9 & 87.6 & \textbf{91.3} & \textbf{93.8} & 90.5 \\
\midrule
BERT\textsubscript{BASE}-6L & 67M & 6700M & 83.3 & 91.0 & 82.6 & 82.5 & 88.9 & 90.7 & 86.5 \\
ALBERT\textsubscript{BASE}-6L & 12M & 6972M & 84.7 & 92.0 & 85.3 & 83.5 & 89.3 & 92.3 & 87.9 \\
RoBERTa\textsubscript{BASE}-6L & 82M & 6552M & 86.8 & 92.6 & 86.7 & 84.5 & \textbf{90.2} & 91.3 & 88.7 \\
LayerDrop\textsubscript{BASE}-6L & 82M & 6552M & 86.3 & \textbf{92.9} & 86.3 & 86.1 & 89.5 & 90.3 & 88.6 \\
HeadPrune-BERT\textsubscript{BASE} & 86M & 9249M & 84.8 & 84.7 & 77.8 & 74.8 & 87.8 & 88.3 & 83.0 \\
DistilBERT & 67M & 6700M & 84.8 & 92.0 & 83.8 & 81.7 & 89.2 & 89.7 & 86.9 \\
TinyBERT-6L & 67M & 6700M & 85.3 & 89.0 & 86.2 & 85.7 & 89.3 & 90.0 & 87.6 \\
BERT-of-Theseus & 67M & 6700M & 84.4 & 90.7 & 82.4 & 85.0 & 89.4 & 92.1 & 87.3 \\
\textbf{ElasticBERT}\textsubscript{BASE}-6L & 67M & 6700M & \textbf{87.0} & 92.7 & \textbf{87.3} & \textbf{86.9} & 90.1 & \textbf{92.5} & \textbf{89.4} \\
\midrule
\multicolumn{10}{c}{\textit{LARGE Models}} \\
\midrule
BERT\textsubscript{LARGE} & 335M & 47214M & 87.9 & 94.0 & 85.9 & 86.7 & 90.8 & 93.9 & 89.9 \\
ALBERT\textsubscript{LARGE} & 18M & 48876M & 87.7 & 93.8 & 88.1 & 89.3 & 90.2 & 93.6 & 90.5 \\
RoBERTa\textsubscript{LARGE} & 355M & 46042M & \textbf{90.5} & \textbf{95.7} & \textbf{89.9} & 90.5 & \textbf{91.6} & \textbf{95.8} & \textbf{92.3} \\
LayerDrop\textsubscript{LARGE} & 355M & 46042M & 90.4 & 95.3 & 89.5 & \textbf{91.0} & 91.4 & 95.2 & 92.1 \\
\textbf{ElasticBERT}\textsubscript{LARGE} & 335M & 47214M & 89.8 & 95.0 & 89.8 & 90.9 & 91.4 & 95.7 & 92.1 \\
\midrule
BERT\textsubscript{LARGE}-6L & 108M & 11922M & 80.4 & 89.6 & 74.3 & 70.5 & 87.4 & 84.4 & 81.1 \\
ALBERT\textsubscript{LARGE}-6L & 18M & 12397M & 84.5 & 92.0 & 84.7 & 85.1 & 89.4 & 90.8 & 87.8 \\
RoBERTa\textsubscript{LARGE}-6L & 129M & 11664M & 83.5 & 91.7 & 77.9 & 72.7 & 88.6 & 84.7 & 83.2 \\
LayerDrop\textsubscript{LARGE}-6L & 129M & 11664M & 85.4 & 92.5 & 77.3 & 75.9 & 88.8 & 84.1 & 84.0 \\
\textbf{ElasticBERT}\textsubscript{LARGE}-6L & 108M & 11922M & \textbf{86.8} & \textbf{92.9} & \textbf{86.2} & \textbf{86.3} & \textbf{89.8} & \textbf{92.4} & \textbf{89.1} \\
\bottomrule
\end{tabular}
}
\caption{ElasticBERT and static baseline performance on ELUE task test sets. We report the mean of accuracy and F1 for MRPC, the mean of Pearson and Spearman correlation for STS-B, and accuracy for the other tasks. The reported FLOPs are averaged over all the datasets.\vspace{-0.5cm}}
\label{tab:elue_static}
\end{table*}
\begin{figure}[t]
\centering
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture} [scale=0.8]
\begin{axis}[
enlargelimits=0.25,
legend style={at={(0.5,-0.35)},
anchor=north,legend columns=-1},
ylabel={Average Perf},
symbolic x coords={3L,4L,6L},
xtick=data,
ybar=3pt,
bar width=7pt,
font=\small,
grid=major,
width=\linewidth,
height=.5\linewidth,
]
\addplot
coordinates {(3L,81.8) (4L,84.1) (6L,86.5)};
\addplot
coordinates {(3L,84.6) (4L,85.8) (6L,87.9)};
\addplot
coordinates {(3L,81.4) (4L,85.6) (6L,88.7)};
\addplot
coordinates {(3L,83.4) (4L,85.7) (6L,88.6)};
\addplot
coordinates {(3L,87.3) (4L,88.2) (6L,89.4)};
\legend{BERT, ALBERT, RoBERTa, LayerDrop, ElasticBERT}
\end{axis}
\end{tikzpicture}
\caption{BASE models.}
\end{subfigure}
\\
\begin{subfigure}{\linewidth}
\centering
\begin{tikzpicture} [scale=0.8]
\begin{axis}[
enlargelimits=0.25,
legend style={at={(0.5,-0.35)},
anchor=north,legend columns=-1},
ylabel={Average Perf},
symbolic x coords={6L,8L,12L},
xtick=data,
ybar=3pt,
bar width=7pt,
grid=major,
font=\small,
width=\linewidth,
height=.5\linewidth,
]
\addplot
coordinates {(6L,81.1) (8L,83.5) (12L,85.8)};
\addplot
coordinates {(6L,87.9) (8L,89.4) (12L,90.0)};
\addplot
coordinates {(6L,83.2) (8L,85.4) (12L,90.9)};
\addplot
coordinates {(6L,84.0) (8L,87.0) (12L,90.9)};
\addplot
coordinates {(6L,89.1) (8L,89.7) (12L,90.9)};
\legend{BERT, ALBERT, RoBERTa, LayerDrop, ElasticBERT}
\end{axis}
\end{tikzpicture}
\caption{LARGE models.}
\end{subfigure}
\caption{Comparison of the average performance on ELUE test sets between ElasticBERT and baselines.\vspace{-0.5cm}}
\label{fig:elue_layer}
\end{figure}
\paragraph{Baselines}
We compare ElasticBERT with three types of baselines: \textbf{(1)} Directly fine-tuning pre-trained models and their first $n$ layers. We choose BERT~\cite{Devlin2019BERT}, ALBERT~\cite{Lan2020ALBERT}, RoBERTa~\cite{Liu2019roberta} and LayerDrop~\cite{Fan2020LayerDrop} as our baselines. For the use of the first $n$ layers, we simply add a linear classifier on top of the truncated model. \textbf{(2)} Compressed models. We choose two distilled models, DistilBERT~\cite{Sanh2019DistilBERT} and TinyBERT~\cite{Jiao2020TinyBERT}, one pruned model, HeadPrune~\cite{michel19headprune}, and one model obtained by using \textit{module replacing}, BERT-of-Theseus~\cite{Xu2020BERTTheseus}, as our baseline models. \textbf{(3)} Dynamic early exiting models. To verify the effectiveness of ElasticBERT as a strong backbone for dynamic early exiting methods, we compare ElasticBERT\textsubscript{entropy} and ElasticBERT\textsubscript{patience}, which adopt the same early exiting strategies as DeeBERT~\cite{Xin2020DeeBERT} and PABEE~\cite{Zhou2020PABEE} respectively, with four representative early exiting models: DeeBERT~\cite{Xin2020DeeBERT}, FastBERT~\cite{liu20fastbert}, PABEE~\cite{Zhou2020PABEE}, and CascadeBERT~\cite{Li2021cascadebert}.
\begin{figure*}[t!]
\centering
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{source/SST-2-EE.png}
\caption{SST-2}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{source/IMDB-EE.png}
\caption{IMDb}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{source/SNLI-EE.png}
\caption{SNLI}
\end{subfigure}
\\
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{source/SciTail-EE.png}
\caption{SciTail}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{source/MRPC-EE.png}
\caption{MRPC}
\end{subfigure}
\begin{subfigure}{0.32\linewidth}
\centering
\includegraphics[width=\linewidth]{source/STS-B-EE.png}
\caption{STS-B}
\end{subfigure}
\caption{Performance-FLOPs trade-offs on ELUE task test sets. Because STS-B is a regression task, for which the entropy-based methods are not applicable, we only evaluate patience-based methods, i.e., PABEE and ElasticBERT\textsubscript{patience}. }
\label{fig:elue_dynamic}
\end{figure*}
\subsection{Evaluating ElasticBERT on ELUE}
ElasticBERT and our baselines are evaluated on ELUE tasks. For the BASE version of ElasticBERT, BERT, ALBERT, RoBERTa and LayerDrop, we evaluate the first 3/4/6/12 layers. For the LARGE version of the models, we evaluate the first 6/8/12/24 layers. For dynamic methods, we fine-tune ElasticBERT along with the injected internal classifiers using the gradient equilibrium (GE) strategy~\cite{Li2019Improve}, and adopt two different early exiting strategies: entropy-based strategy~\cite{Xin2020DeeBERT} and patience-based strategy~\cite{Zhou2020PABEE}.
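The two exiting criteria can be sketched as follows; this is a minimal illustration in which the probability inputs, function names and threshold/patience values are hypothetical, not taken from the actual implementations of DeeBERT or PABEE:

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_exit(layer_probs, threshold=0.1):
    """Entropy-based strategy: exit at the first internal classifier
    whose prediction entropy falls below the threshold."""
    for layer, probs in enumerate(layer_probs, start=1):
        if entropy(probs) < threshold:
            return layer, max(range(len(probs)), key=probs.__getitem__)
    probs = layer_probs[-1]  # otherwise, use the final classifier
    return len(layer_probs), max(range(len(probs)), key=probs.__getitem__)

def patience_exit(layer_preds, patience=2):
    """Patience-based strategy: exit once `patience` consecutive
    internal classifiers agree on the predicted label."""
    streak, prev = 0, None
    for layer, pred in enumerate(layer_preds, start=1):
        streak = streak + 1 if pred == prev else 1
        prev = pred
        if streak >= patience:
            return layer, pred
    return len(layer_preds), layer_preds[-1]
```

Under either rule, inputs whose early-layer predictions are already confident or stable skip the remaining layers, which is what produces the FLOPs savings.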
\begin{table*}[t!]
\centering
\small
\resizebox{.85\linewidth}{!}{
\begin{tabular}{lccccccc}
\toprule
Model & \textbf{SST-2} & \textbf{IMDb} & \textbf{MRPC} & \textbf{STS-B} & \textbf{SNLI} & \textbf{SciTail} & \textbf{Average} \\
\midrule
\textbf{ElasticBERT}\textsubscript{BASE} & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\midrule
\multicolumn{8}{c}{\textit{Static Models}} \\
\midrule
BERT\textsubscript{BASE} & -4.55 & -2.15 & -5.88 & -4.75 & -1.50 & -3.35 & -3.70 \\
ALBERT\textsubscript{BASE} & -2.41 & -1.08 & -2.34 & -2.81 & -1.55 & -1.50 & \textbf{-1.95} \\
RoBERTa\textsubscript{BASE} & \textbf{-0.89} & \textbf{-0.11} & -2.95 & -5.38 & \textbf{-0.66} & -3.32 & -2.22 \\
LayerDrop\textsubscript{BASE} & -1.17 & -0.13 & \textbf{-2.17} & -2.98 & -1.36 & -4.14 & -1.99 \\
HeadPrune-BERT\textsubscript{BASE} & -3.81 & -8.61 & -9.73 & -11.9 & -2.89 & -4.18 & -6.85 \\
DistilBERT & -2.20 & -0.70 & -3.50 & -5.20 & -0.90 & -2.80 & -2.55 \\
TinyBERT-6L & -1.70 & -3.70 & -2.60 & -1.90 & -0.80 & -2.50 & -2.20 \\
BERT-of-Theseus & -4.21 & -2.61 & -5.13 & \textbf{-1.67} & -1.29 & \textbf{-0.38} & -2.55 \\
\midrule
\multicolumn{8}{c}{\textit{Dynamic Models}} \\
\midrule
PABEE & -1.33 & -0.23 & -2.93 & -2.13 & -0.85 & -0.43 & -1.50 \\
DeeBERT & -12.1 & -14.0 & -4.88 & - & -8.35 & -6.19 & - \\
FastBERT & -1.51 & 0.16 & -3.70 & - & -0.22 & -1.23 & - \\
CascadeBERT & -2.13 & -0.12 & -4.05 & - & -0.23 & 0.14 & - \\
\textbf{ElasticBERT}\textsubscript{patience} & 0.40 & 0.20 & -1.00 & \textbf{-0.44} & \textbf{0.03} & 0.36 & \textbf{-0.08} \\
\textbf{ElasticBERT}\textsubscript{entropy} & \textbf{0.97} & \textbf{1.02} & \textbf{-0.14} & - & 0.02 & \textbf{0.64} & - \\
\bottomrule
\end{tabular}
}
\caption{ELUE scores calculated using Eq. (\ref{eq:elue_score}) for static and dynamic baseline models. ``-'' denotes that the dataset/metric is not applicable to the model.\vspace{-0.5cm}}
\label{tab:elue scores}
\end{table*}
\paragraph{Results of Static Models}
The performance of ElasticBERT and our baseline models on ELUE task test sets is shown in Table~\ref{tab:elue_static}, where we find that ElasticBERT\textsubscript{BASE} and ElasticBERT\textsubscript{LARGE} outperform BERT and ALBERT with the same number of layers, but are slightly weaker than RoBERTa\textsubscript{BASE} and RoBERTa\textsubscript{LARGE}. Besides, we find that the superiority of ElasticBERT over the baselines becomes more pronounced with fewer layers (see Figure~\ref{fig:elue_layer} for the results with 3/4 (6/8) layers of the BASE (LARGE) models).
\paragraph{Results of Dynamic Models}
We compare ElasticBERT\textsubscript{entropy} and ElasticBERT\textsubscript{patience} with four dynamic models: DeeBERT~\cite{Xin2020DeeBERT}, FastBERT~\cite{liu20fastbert}, PABEE~\cite{Zhou2020PABEE}, and CascadeBERT~\cite{Li2021cascadebert}. The performance-FLOPs trade-offs of the dynamic models on ELUE task test sets are shown in Figure~\ref{fig:elue_dynamic}, which demonstrates that ElasticBERT achieves better performance-FLOPs trade-offs.
\paragraph{Evaluating ELUE Scores}
According to Eq. (\ref{eq:elue_score}), we also evaluate the ELUE scores of these baselines. As shown in Table \ref{tab:elue scores}, the ELUE score of ElasticBERT\textsubscript{BASE} is zero on all tasks by construction. Among the other baselines, we find that ElasticBERT\textsubscript{patience} achieves the best ELUE score, while HeadPrune achieves the worst. In addition, we find that dynamic models perform better than static models on average.
\section{Conclusion and Future Work}
In this work, we present ELUE, a public benchmark and platform for efficient models, and ElasticBERT, a strong baseline (backbone) for efficient static (dynamic) models. Both contributions aim to establish the Pareto frontier for NLU tasks, so that the position of existing work can be clearly recognized and future work can be easily and fairly measured.
Our future work mainly covers four aspects: (1) including more baselines in ELUE, (2) supporting the evaluation of more frameworks such as TensorFlow~\cite{Abadi2016TensorFlow}, (3) supporting diagnostics for submissions, and (4) supporting the evaluation of more types of tasks.
\section*{Acknowledgment}
This work was supported by the National Key Research and Development Program of China (No.2020AAA0106702) and National Natural Science Foundation of China (No.62022027).
\section*{Ethical Considerations}
The proposed ELUE benchmark aims to standardize efficiency measurement of NLP models. The collected datasets are widely used in previous work and, to our knowledge, do not have any attached privacy or ethical issues. Our proposed ElasticBERT is a pre-trained model to reduce computation cost and carbon emission. The pre-training data is public resources adopted in previous work, and therefore would not introduce new ethical concerns.
\section{A categorical generalization of Fra\"iss\'e's theorem}
In this section we present a categorical generalization of Fra\"iss\'e's construction in model theory.
Our result is technically similar to (though more general than) the categorical theorem in \cite{DG1}, but follows as an application of the theory developed by Kubi\'s in \cite{Kubis}.\\
First, let us introduce the relevant terminology.
\begin{definition}
A category $\cal C$ is said to satisfy the \emph{amalgamation property} (AP) if for all objects $a,b,c\in {\cal C}$ and morphisms $f:a\rightarrow b$, $g:a\rightarrow c$ in $\cal C$ there exist an object $d\in \cal C$ and morphisms $f':b\rightarrow d$, $g':c\rightarrow d$ in $\cal C$ such that $f'\circ f=g'\circ g$:
\[
\xymatrix {
a \ar[d]_{g} \ar[r]^{f} & b \ar@{-->}[d]^{f'} \\
c \ar@{-->}[r]_{g'} & d }
\]
\end{definition}
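As a concrete illustration (not needed for the sequel), the category of finite sets and injections satisfies AP: an amalgam of two injections can be computed by gluing the two codomains along the common domain. The following sketch, with an ad hoc tagged-pair representation of the amalgam, shows one such construction:

```python
def amalgamate(A, B, f, C, g):
    """Amalgamate injections f: A -> B and g: A -> C (given as dicts)
    in the category of finite sets and injections.  Returns (D, fp, gp)
    where fp: B -> D and gp: C -> D are injections satisfying
    fp(f(a)) == gp(g(a)) for every a in A."""
    finv = {f[a]: a for a in A}  # partial inverse of f on its image
    ginv = {g[a]: a for a in A}  # partial inverse of g on its image
    # Points coming from A are identified; all other points stay apart.
    fp = {b: ('A', finv[b]) if b in finv else ('B', b) for b in B}
    gp = {c: ('A', ginv[c]) if c in ginv else ('C', c) for c in C}
    D = set(fp.values()) | set(gp.values())
    return D, fp, gp
```

For instance, amalgamating $f,g:\{0\}\rightarrow\{0,1\}$ with $f(0)=0$ and $g(0)=1$ yields a three-element amalgam, the two codomains being glued along a single point.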
Notice that $\cal C$ satisfies the amalgamation property if and only if ${\cal C}^{\textrm{op}}$ satisfies the right Ore condition. So if $\cal C$ satisfies AP then we may equip ${\cal C}^{\textrm{op}}$ with the atomic topology. This point will be the basis of our topos-theoretic interpretation described in the next section.
\begin{definition}
A category $\cal C$ is said to satisfy the \emph{joint embedding property} (JEP) if for every pair of objects $a,b\in {\cal C}$ there exist an object $c\in \cal C$ and morphisms $f:a\rightarrow c$, $g:b\rightarrow c$ in $\cal C$:
\[
\xymatrix {
& a \ar@{-->}[d]^{f} \\
b \ar@{-->}[r]_{g} & c }
\]
\end{definition}
Notice that if $\cal C$ has a weakly initial object $w$ then AP on $\cal C$ implies JEP on $\cal C$: given objects $a$ and $b$, it suffices to amalgamate any two arrows $w\rightarrow a$ and $w\rightarrow b$. In general, however, the two notions are quite distinct from one another.
\begin{definition}
Given an embedding $i:{\cal C}\rightarrow {\cal D}$, an object $u\in {\cal D}$ is said to be \emph{$\cal C$-homogeneous} if for all objects $a,b \in {\cal C}$ and arrows $j:a\rightarrow b$ in ${\cal C}$ and $\chi:a\rightarrow u$ in $\cal D$ there exists an arrow $\tilde{\chi}:b\rightarrow u$ such that $\tilde{\chi}\circ j=\chi$:
\[
\xymatrix {
a \ar[d]_{j} \ar[r]^{\chi} & u \\
b \ar@{-->}[ur]_{\tilde{\chi}} & }
\]
$u$ is said to be \emph{$\cal C$-ultrahomogeneous} if for all objects $a,b \in {\cal C}$ and arrows $j:a\rightarrow b$ in ${\cal C}$ and $\chi_{1}:a\rightarrow u$, $\chi_{2}:b\rightarrow u$ in $\cal D$ there exists an isomorphism $\check{j}:u\rightarrow u$ such that $\check{j}\circ \chi_{1}=\chi_{2}\circ j$:
\[
\xymatrix {
a \ar[d]_{j} \ar[r]^{\chi_{1}} & u \ar@{-->}[d]^{\check{j}}\\
b \ar[r]_{\chi_{2}} & u }
\]
$u$ is said to be \emph{$\cal C$-universal} if it is $\cal C$-cofinal, that is for every $a\in {\cal C}$ there exists an arrow $\chi:a\rightarrow u$ in $\cal D$:
\[
\xymatrix {
a \ar@{-->}[r]^{\chi} & u }
\]
\end{definition}
\begin{rmks}
\emph{It is easy to see that if $u$ is $\cal C$-ultrahomogeneous and $\cal C$-universal then $u$ is $\cal C$-homogeneous. Also, to verify that an object $u$ in $\cal D$ is $\cal C$-ultrahomogeneous one can clearly suppose, without loss of generality, that the arrow $j$ in the definition is an identity.}
\end{rmks}
Let us recall the following definitions from \cite{Kubis}.\\
Given a category $\cal C$ and a collection of arrows ${\cal F}\subseteq arr(\cal C)$, $\cal F$ is said to be dominating in $\cal C$ if the family $Dom(\cal F)$ of objects which are domains of an arrow in $\cal F$ is cofinal in $\cal C$ and satisfies the following property: for every $a\in Dom(\cal F)$ and every arrow $f:a\rightarrow x$ in $\cal C$ there exists an arrow $g:x\rightarrow cod(g)$ in $\cal C$ such that $g\circ f\in \cal F$.\\
Notice that $arr(\cal C)$ is always dominating in $\cal C$, and if $\cal C'$ is a skeleton of $\cal C$, $arr(\cal C')$ is dominating in $\cal C$.\\
Given a category $\cal C$ and an ordinal $\kappa>0$, an inductive $\kappa$-sequence (or $\kappa$-chain) in $\cal C$ is a functor $\vec{u}:\kappa\rightarrow \cal C$, where $\kappa$ is regarded as a poset category. For $i\in \kappa$ we denote $\vec{u}(i)$ by $u_{i}$ and for $i,j\in \kappa$ such that $i\leq j$ we denote $\vec{u}(i\rightarrow j):u_{i}\rightarrow u_{j}$ by $u_{i}^{j}$. $\vec{u}$ is said to be a Fra\"iss\'e sequence of length $\kappa$ (or, briefly, a $\kappa$-Fra\"iss\'e sequence) in $\cal C$ if it satisfies the following conditions:\\
(1) For every $a\in \cal C$ there exists $i\in \kappa$ and an arrow $\chi:a\rightarrow u_{i}$ in $\cal C$;\\
(2) For every $i\in \kappa$ and for every arrow $f:u_{i}\rightarrow cod(f)$ in $\cal C$, there exists $j \in \kappa$ with $j\geq i$ and an arrow $g:cod(f)\rightarrow u_{j}$ such that $u_{i}^{j}=g\circ f$.\\
$\vec{u}$ is said to have the extension property if it satisfies the following condition:\\
For every arrows $f:a\rightarrow b, g:a\rightarrow u_{i}$ in $\cal C$ where $i\in \kappa$, there exists $j \in \kappa$ with $j\geq i$ and an arrow $h:b\rightarrow u_{j}$ such that $u_{i}^{j}\circ g = h\circ f$.\\
Of course, every sequence satisfying the extension property satisfies property (2) in the definition of Fra\"iss\'e sequence.\\
A category $\cal C$ is said to be $\kappa$-bounded if every chain in $\cal C$ of length $\lambda<\kappa$ has a cocone in $\cal C$ over it. Clearly, every category is $\omega$-bounded.\\
A $\kappa$-chain $\vec{u}:\kappa\rightarrow \cal C$ is said to be continuous if for each limit ordinal $j\in \kappa$, $u_{j}$ is the colimit of the $j$-chain obtained as the restriction of $\vec{u}$ to $j$ with universal colimit arrows given by the arrows $u_{i}\rightarrow u_{j}$ ($i<j$) of the chain.\\
Given an infinite cardinal $\kappa$ and an embedding $i:{\cal C}\rightarrow {\cal D}$, we denote by ${\cal D}_{\kappa}$ the full subcategory of $\cal D$ on the objects that can be expressed as colimits of $\kappa$-chains in $\cal C$ and by ${\cal D}_{\kappa}^{c}$ the full subcategory of $\cal D$ on the objects that can be expressed as colimits of continuous $\kappa$-chains in $\cal C$. We will say that an embedding $i:{\cal C}\rightarrow {\cal D}$ is $\kappa$-continuous if ${\cal D}_{\kappa}={\cal D}_{\kappa}^{c}$. Obviously, every embedding is $\omega$-continuous.\\
Following the terminology in \cite{DG1}, we will say that an object $a$ in $\cal C$ is $\kappa$-small in $\cal D$ if the functor $Hom_{\cal D}(i(a),- ):{\cal D}\rightarrow \Set$ preserves all colimits of $\kappa$-chains in $\cal D$; in particular, every finitely presentable object in $\cal C$ is $\kappa$-small.\\
Notice that, given an embedding $i:{\cal C}\rightarrow {\cal D}$ such that all the objects in $\cal C$ are $\kappa$-small in $\cal D$, for $i$ to be $\kappa$-continuous it suffices that $\cal C$ is closed under colimits of $\lambda$-chains in $\cal D$ for each $\lambda < \kappa$; indeed, given an inductive $\kappa$-sequence $\vec{u}$ in $\cal C$ with colimit $u$ we can construct (by transfinite recursion) a continuous $\kappa$-chain $\vec{v}$ in $\cal C$ with a universal colimiting cone $D$ to $u$ (cf. also the proof of Lemma 1 in \cite{rosicky}); more precisely, denoted by $j_{i}:u_{i}\rightarrow u$ (for $i<\kappa$) the universal colimit arrows for $\vec{u}$, we define $\vec{v}$ as follows:\\
$\vec{v}(0)=\vec{u}(0)$ and $D(0)=j_{0}$;\\
given $\vec{v}(i)$ and $D(i):v_{i}\rightarrow u$, $v_{i}$ being $\kappa$-small in $\cal D$, there exists $j>i$ and an arrow $h:v_{i}\rightarrow u_{j}$ such that $D(i)=j_{j}\circ h$; we put $\vec{v}(i+1)=u_{j}$ and $D(i+1)=h$;\\
if $i<\kappa$ is a limit ordinal then we define $\vec{v}(i)$ and $D(i)$ respectively as the colimit $\varinjlim_{j<i}\vec{v}(j)$ and the unique arrow $\varinjlim_{j<i}\vec{v}(j)\rightarrow u$ induced via the universal property of the colimit by the arrows $D(j):\vec{v}(j)\rightarrow u$ (for $j<i$).\\
The sequence $\vec{v}$ is defined on the arrows in the obvious way.\\
If $i$ is the embedding of the full subcategory on the $\kappa$-presentable objects of a $\kappa$-accessible category $\cal C$ having directed colimits into $\cal C$, then, denoted by ${\cal C}^{{\kappa}^{+}}$ the full subcategory of $\cal C$ on the ${\kappa}^{+}$-presentable objects, by the proof of Lemma 1 in \cite{rosicky} we have that ${\cal C}^{{\kappa}^{+}}={\cal C}_{\kappa}={\cal C}_{\kappa}^{c}$; in particular $i$ is $\kappa$-continuous.
\begin{theorem}\label{teofond}
Let $\kappa$ be an infinite regular cardinal and $\cal C$ be a $\kappa$-bounded category satisfying the amalgamation and the joint embedding properties. If there exists a dominating family of arrows $\cal F$ in $\cal C$ such that $|{\cal F}|\leq \kappa$, then for any embedding $i:{\cal C}\rightarrow {\cal D}$ such that $\cal D$ has all colimits of $\kappa$-chains in $\cal C$ and all the objects in $\cal C$ are $\kappa$-small in $\cal D$, there exists in ${\cal D}_{\kappa}$ a $\cal C$-homogeneous and $\cal C$-universal object; if moreover all the morphisms in ${\cal D}_{\kappa}^{c}$ are monic (as arrows in ${\cal D}_{\kappa}^{c}$) then every $\cal C$-homogeneous and $\cal C$-universal object in ${\cal D}_{\kappa}^{c}$ is $\cal C$-ultrahomogeneous and unique (up to isomorphism) with these properties in ${\cal D}_{\kappa}^{c}$.\\
Conversely, given an embedding $i:{\cal C}\rightarrow {\cal D}$ such that all the morphisms in ${\cal D}_{\kappa}$ are monic, if there exists in ${\cal D}_{\kappa}$ an object which is $\cal C$-homogeneous and $\cal C$-universal, then the category $\cal C$ satisfies the amalgamation and joint embedding properties.
\end{theorem}
\begin{proofs}
Let $u$ be the colimit in $\cal D$ of an inductive $\kappa$-sequence $\vec{u}$ in $\cal C$. Then the following facts hold:\\
(1) If $u$ is $\cal C$-homogeneous and $\cal C$-universal and all the morphisms in ${\cal D}_{\kappa}$ (respectively, in ${\cal D}_{\kappa}^{c}$ if $u$ belongs to ${\cal D}_{\kappa}^{c}$) are monic then $\vec{u}$ is a Fra\"iss\'e sequence.\\
(2) If $\vec{u}$ is a $\kappa$-Fra\"iss\'e sequence then $u$ is $\cal C$-homogeneous and $\cal C$-universal; moreover, if $\vec{u}$ is continuous then $u$ is $\cal C$-ultrahomogeneous.\\
To prove (1), let us suppose that $u$ is $\cal C$-homogeneous and $\cal C$-universal and all the morphisms in ${\cal D}_{\kappa}$ (respectively, in ${\cal D}_{\kappa}^{c}$ if $u$ belongs to ${\cal D}_{\kappa}^{c}$) are monic. Condition (1) in the definition of Fra\"iss\'e sequence trivially follows from the fact that $u$ is $\cal C$-universal and every object of $\cal C$ is $\kappa$-small in ${\cal D}$. To verify condition (2), we prove that $\vec{u}$ satisfies the extension property. Since $u$ is $\cal C$-homogeneous, given arrows $f:a\rightarrow b$ and $g:a\rightarrow u_{i}$ in $\cal C$ where $i\in \kappa$, and the colimit map $j_{i}:u_{i}\rightarrow u$, there exists an arrow $h:b\rightarrow u$ such that $h\circ f=j_{i}\circ g$. Now, $b$ being $\kappa$-small in ${\cal D}$, $h$ factors as $b\stackrel{h_{j}}{\rightarrow}u_{j}\stackrel{j_{j}}{\rightarrow}u$ for a sufficiently large $j$. If we take $j\geq i$ then we clearly have $u_{i}^{j}\circ g=h_{j}\circ f$, $j_{j}$ being monic.\\
Let us now prove fact (2). By condition (1) in the definition of Fra\"iss\'e sequence, $u$ is clearly $\cal C$-universal. Let us now prove that $u$ is $\cal C$-homogeneous.\\
Given objects $a,b \in {\cal C}$ and arrows $f:a\rightarrow b$ in ${\cal C}$ and $\chi:a\rightarrow u$ in $\cal D$, we want to prove that there exists an arrow $\tilde{\chi}:b\rightarrow u$ such that $\tilde{\chi}\circ f=\chi$:
\[
\xymatrix {
a \ar[d]_{f} \ar[r]^{\chi} & u \\
b \ar@{-->}[ur]_{\tilde{\chi}} & }
\]
Since $a$ is $\kappa$-small in $\cal D$, $\chi$ factors as $a\stackrel{\chi_{i}}{\rightarrow}u_{i}\stackrel{j_{i}}{\rightarrow}u$ for some $i\in \kappa$. From the fact that $\cal C$ satisfies AP we obtain an object $d\in {\cal C}$ and two arrows $h:u_{i}\rightarrow d$ and $l:b\rightarrow d$ such that $l\circ f=h\circ \chi_{i}$. Now by condition (2) in the definition of Fra\"iss\'e sequence we get a $j\in \kappa$ with $j\geq i$ and an arrow $m:d\rightarrow u_{j}$ such that $u_{i}^{j}=m\circ h$. Hence the arrow $\tilde{\chi}:=j_{j}\circ m\circ l$ satisfies the required property.\\
Let us now prove the following fact, to which we will refer as to fact (3):\\
If $\vec{u}$ and $\vec{v}$ are two continuous $\kappa$-Fra\"iss\'e sequences in $\cal C$ and $f:u_{k}\rightarrow v_{l}$ is an arrow between ``elements'' respectively of $\vec{u}$ and $\vec{v}$ there exists in $\cal D$ an isomorphism $\tilde{f}:\varinjlim \vec{u}\rightarrow \varinjlim \vec{v}$ such that $\tilde{f}\circ j_{k}=j'_{l}\circ f$ (where $j_{k}:u_{k}\rightarrow \varinjlim \vec{u}$ and $j'_{l}:v_{l}\rightarrow \varinjlim \vec{v}$ are the obvious colimit arrows).\\
To this end, let us establish the following fact: given any $k, l \in \kappa$ and any arrow $f:u_{k}\rightarrow v_{l}$ there exist two strictly increasing functions $k,l:\kappa\rightarrow \kappa$ and two natural transformations $F:\vec{u}\circ k\rightarrow \vec{v}\circ l$ and $G:\vec{v}\circ l\rightarrow \vec{u}\circ k^{+}$, where $k^{+}$ is the function defined by $k^{+}(i)=k(i+1)$ for each $i<\kappa$, with the following properties:\\
$k(0)=k$ and $l(0)=l$,\\
$F(0)=f$ and $F(i+1)\circ G(i)=v^{l(i+1)}_{l(i)}, G(i)\circ F(i)=u^{k(i+1)}_{k(i)}$ (for each $i\in \kappa$).\\
We define $k(i), l(i), F(i), G(i)$ (and prove that they satisfy the required properties) by transfinite induction on $i<\kappa$.\\
For $i=0$ we put $k(0)=k$, $l(0)=l$, $F(0)=f$ and define $k(1)$ and $G(0):v_{l}\rightarrow u_{k(1)}$ as follows: by condition (2) in the definition of Fra\"iss\'e sequence applied to $\vec{u}$ there exist $j\in \kappa$ with $j>k$ and an arrow $s:v_{l}\rightarrow u_{j}$ such that $s\circ f=u_{k}^{j}$; we put $k(1)=j$ and $G(0)=s$.\\
Given $k(i), k(i+1), l(i), F(i), G(i)$ we define $k(i+2), l(i+1), F(i+1), G(i+1)$ as follows: by condition (2) in the definition of Fra\"iss\'e sequence applied to $\vec{v}$ there exist $j\in \kappa$ with $j>l(i)$ and an arrow $s:u_{k(i+1)}\rightarrow v_{j}$ such that $s\circ G(i)=v_{l(i)}^{j}$; we put $l(i+1)=j$ and $F(i+1)=s$. Again, by condition (2) in the definition of Fra\"iss\'e sequence applied to $\vec{u}$ there exist $j'\in \kappa$ with $j'>k(i+1)$ and an arrow $s':v_{l(i+1)}\rightarrow u_{j'}$ such that $s'\circ F(i+1)=u_{k(i+1)}^{j'}$; we put $k(i+2)=j'$ and $G(i+1)=s'$.\\
If $i=\sup_{j<i}j$ is a limit ordinal we put $k(i)=\sup_{j<i}k(j)\in \kappa$, $l(i)=\sup_{j<i}l(j)\in \kappa$ and define both $F(i)$ and $G(i)$ by taking colimits. More precisely, since the restriction of $k$ to $i$ is strictly increasing and the chain $\vec{u}$ is continuous, $u_{k(i)}$ is the colimit of the restriction of the chain $\vec{u}\circ k$ to $i$; analogously, $v_{l(i)}$ is the colimit of the restriction of the chain $\vec{v}\circ l$ to $i$; then we define $F(i):u_{k(i)}\rightarrow v_{l(i)}$ to be the unique arrow, given by the universal property of the colimit, such that for each $j<i$, $F(i)\circ u_{k(j)}^{k(i)}=v_{l(j)}^{l(i)}\circ F(j)$. $G(i)$ is defined similarly.\\ The verification that all the required properties are satisfied is easily done by induction on $i<\kappa$.\\
Now, since $k$ and $l$ are strictly increasing functions, they are cofinal when regarded as functors $\kappa \rightarrow \kappa$. This implies that $u=\varinjlim \vec{u}=\varinjlim (\vec{u}\circ k)$ and $v=\varinjlim \vec{v}=\varinjlim (\vec{v}\circ l)$. Hence the natural transformations $F$ and $G$ respectively induce arrows $\tilde{f}:u\rightarrow v$ and $g:v\rightarrow u$ such that, denoted by $j_{k(i)}:u_{k(i)}\rightarrow u$ and $j'_{l(i)}:v_{l(i)}\rightarrow v$ the colimit arrows, $\tilde{f}\circ j_{k(i)}=j'_{l(i)}\circ F(i)$ and $g\circ j'_{l(i)}=j_{k(i+1)}\circ G(i)$ for each $i\in \kappa$; in particular $\tilde{f}\circ j_{k}=j'_{l}\circ f$. We have $g\circ \tilde{f}=1_{u}$ and $\tilde{f}\circ g=1_{v}$ in $\cal D$, from which it follows that $\tilde{f}$ is an isomorphism with the required property. Indeed, let us for example prove the first equality; the second follows similarly. By the universal property of the colimit $u=\varinjlim (\vec{u}\circ k)$, it is equivalent to check that $g\circ \tilde{f}\circ j_{k(i)}=j_{k(i)}$ for each $i\in \kappa$. Now by the equalities above we obtain $g\circ \tilde{f}\circ j_{k(i)}=g\circ j'_{l(i)}\circ F(i)=j_{k(i+1)}\circ G(i)\circ F(i)=j_{k(i+1)}\circ u^{k(i+1)}_{k(i)}=j_{k(i)}$, as required. This completes the proof of fact (3).\\
Coming back to our Fra\"iss\'e sequence $\vec{u}$, by taking $\vec{v}=\vec{u}$ in fact (3), we see that if $\vec{u}$ is continuous then $u$ satisfies the property of ultrahomogeneity with respect to any arrow between elements of the Fra\"iss\'e sequence $\vec{u}$; it remains to extend this result to hold for any arrow $f:a\rightarrow b$ in $\cal C$.
So we have to prove that for any arrows $s:a\rightarrow u$ and $t:b\rightarrow u$ in $\cal D$ there exists an automorphism $\tilde{f}:u\rightarrow u$ such that $\tilde{f}\circ s=t\circ f$:
\[
\xymatrix {
a \ar[d]_{f} \ar[r]^{s} & u \ar@{-->}[d]^{\tilde{f}}\\
b \ar[r]_{t} & u }
\]
Since $a$ is $\kappa$-small in $\cal D$, $s$ factors as $a\stackrel{s_{i}}{\rightarrow}u_{i}\stackrel{j_{i}}{\rightarrow}u$ for some $i\in \kappa$. From the fact that $u$ is $\cal C$-homogeneous (which we observed above), it follows that there exists an arrow $h:u_{i}\rightarrow u$ such that $h\circ s_{i}=t\circ f$. Now fact (3) implies the existence of an automorphism $\tilde{f}$ of $u$ such that $\tilde{f}\circ j_{i}=h$; then $\tilde{f}\circ s=\tilde{f}\circ j_{i}\circ s_{i}=h\circ s_{i}=t\circ f$, that is $\tilde{f}$ satisfies the required condition.\\
So far we have proved facts (1), (2) and (3).\\
Now, if $\cal C$ is $\kappa$-bounded, satisfies AP, JEP and has a dominating family of arrows $\cal F$ such that $|{\cal F}|\leq \kappa$, then by Theorem 3.5 in \cite{Kubis} there exists a $\kappa$-Fra\"iss\'e sequence in $\cal C$; hence by fact (2) there exists in ${\cal D}_{\kappa}$ a $\cal C$-homogeneous and $\cal C$-universal object $u$.\\
Let us now suppose that all the morphisms in ${\cal D}_{\kappa}^{c}$ are monic. If $u$ is a $\cal C$-universal and $\cal C$-homogeneous object in ${\cal D}_{\kappa}^{c}$ then by facts (1) and (2) above $u$ is $\cal C$-ultrahomogeneous. Now, suppose that $u,v\in {\cal D}_{\kappa}^{c}$ are both $\cal C$-ultrahomogeneous and $\cal C$-universal. Then by writing $u=\varinjlim \vec{u}$ and $v=\varinjlim \vec{v}$ where $\vec{u}$ and $\vec{v}$ are continuous inductive $\kappa$-sequences in $\cal C$, from fact (1) above we deduce that both $\vec{u}$ and $\vec{v}$ are continuous $\kappa$-Fra\"iss\'e sequences in $\cal C$; then by fact (3) there is an isomorphism $u\cong v$.\\
It remains to prove the last part of the theorem. From the proof of fact (2) above, it follows that there exists in $\cal C$ a $\kappa$-Fra\"iss\'e sequence satisfying the extension property; the claim then follows from Proposition 3.1 in \cite{Kubis}.\\
\end{proofs}
\begin{rmk}
\emph{One can relax the condition in the second part of the theorem that all the morphisms in ${\cal D}_{\kappa}^{c}$ are monic to the weaker condition that all the universal colimit arrows to the colimits of continuous $\kappa$-chains in $\cal D$ are monic in ${\cal D}_{\kappa}$, which is all one actually needs in the proof of the theorem; however, in case all the morphisms in $\cal C$ are monic, this weaker condition turns out to be equivalent to the original one.}
\end{rmk}
\begin{rmk}
\emph{The categorical theorem in Droste and G\"obel \cite{DG1} can be obtained as the particular case of Theorem \ref{teofond} when $i$ is the embedding of the category ${\cal C}_{<\kappa}$ of $\kappa$-small objects of a $\kappa$-algebroidal category $\cal C$ whose morphisms are all monic into $\cal C$ and $\cal F$ is the collection of arrows of some skeleton of ${\cal C}_{<\kappa}$. Fra\"iss\'e's theorem is already a particular case of Droste and G\"obel's result (as observed in \cite{DG}), hence it is \emph{a fortiori} a particular case of our theorem.}
\end{rmk}
Let us note that given a category $\cal C$ as in Theorem \ref{teofond}, there is always an embedding $i:{\cal C}\rightarrow {\cal D}$ satisfying the hypotheses of the first part of the theorem, that is such that $\cal D$ has all colimits of $\kappa$-chains in $\cal C$ and all the objects in $\cal C$ are $\kappa$-small in $\cal D$; in fact, one can take as $\cal D$ the ind-completion $\Ind{\cal C}$ of $\cal C$ or the completion $(\Ind{\cal C})_{\kappa}$ of $\cal C$ in $\Ind{\cal C}$ under colimits of $\kappa$-chains. Recall that in case $\cal C$ is Cauchy-complete, $\cal C$ can be recovered from $\Ind{\cal C}$ as the full subcategory on the finitely presentable objects; also, we have seen above that $(\Ind{\cal C})_{\omega}$ can be identified with the full subcategory of $\Ind{\cal C}$ on the $\omega^{+}$-presentable objects.\\
Let us now apply Theorem \ref{teofond} in the context of first-order theories.
\begin{corollary}\label{cor_fo}
Let $\Sigma$ be a one-sorted signature, $\mathbb T$ a first-order theory over $\Sigma$ and $\kappa$ an infinite cardinal such that $\kappa > card(\Sigma)$. Let ${{\mathbb T}\textrm{-mod}}_{e}$ be the category of $\mathbb T$-models and elementary embeddings between them and $i_{\kappa}:{{\mathbb T}\textrm{-mod}}_{e}^{\kappa}\rightarrow {{\mathbb T}\textrm{-mod}}_{e}$ be the embedding of the full subcategory ${{\mathbb T}\textrm{-mod}}_{e}^{\kappa}$ of ${{\mathbb T}\textrm{-mod}}_{e}$ on the $\kappa$-presentable objects into ${{\mathbb T}\textrm{-mod}}_{e}$. Then if ${{\mathbb T}\textrm{-mod}}_{e}^{\kappa}$ satisfies AP, JEP and has a dominating family of arrows in it of cardinality at most $\kappa$, $\mathbb T$ has a model of cardinality $\leq \kappa$ which is ${{\mathbb T}\textrm{-mod}}_{e}^{\kappa}$-ultrahomogeneous and ${{\mathbb T}\textrm{-mod}}_{e}^{\kappa}$-universal; moreover, a $\mathbb T$-model with these properties is unique (up to isomorphism) among the $\mathbb T$-models of cardinality $\leq \kappa$.
\end{corollary}
\begin{proofs}
This immediately follows from Theorem \ref{teofond}, Proposition 1 in \cite{rosicky} and the remarks preceding Theorem \ref{teofond}.
\end{proofs}
Finally, some cardinality considerations. If $\cal C$ is a category structured over $\Set$, or more generally over a functor category $[I,\Set]$ (where $I$ is a set, regarded here as a discrete category) via a ``forgetful'' functor $U:{\cal C}\rightarrow [I,\Set]$, then one can naturally define a notion of cardinality for objects of $\cal C$. Indeed, one can define the cardinality of an object $c\in \cal C$ by the formula $card(c)=|\coprod_{i\in I}U(c)(i)|=\coprod_{i\in I}|U(c)(i)|$. These definitions apply for instance to the case of models of a many-sorted (geometric) theory (in this case $\cal C$ is the category of such models while $I$ is the set of sorts of the theory), giving a notion of cardinality for such models that generalizes the definition of cardinality of a model in classical model theory. Suppose $i:{\cal C}\rightarrow{\cal D}$ is an embedding as in Theorem \ref{teofond}; if $\cal D$ is structured over a functor category $[I,\Set]$ via a functor $U:{\cal D}\rightarrow [I,\Set]$ then we have a notion of cardinality for objects of $\cal D$ and in particular of $\cal C$, and we might want to estimate the cardinality of the ultrahomogeneous universal object given by Theorem \ref{teofond} in terms of the cardinality of the objects of $\cal C$. This is particularly easy to do in case the functor $U$ creates colimits of $\kappa$-chains; in fact we know that the colimits in $[I,\Set]$ are computed pointwise and we have a particularly elegant description of filtered colimits (in particular colimits of $\kappa$-chains) in $\Set$ (see for example p. 77 in \cite{borceux}). Specifically, if $u=\varinjlim \vec{u}$ is the colimit in $\cal D$ of an inductive $\kappa$-sequence with values in $\cal C$, we have $card(u)=card(\varinjlim_{\cal D} \vec{u})=card(\varinjlim_{[I,Set]}(U\circ \vec{u}))=\coprod_{i\in I}|\varinjlim_{\Set}(U\circ \vec{u})(i)|$. Notice that for each $i\in I$, $(U\circ \vec{u})(i)$ defines a $\kappa$-chain in $\Set$. 
From this expression one can then deduce that if $|I|\leq \kappa$ and $|(U\circ \vec{u})(i)(j)|\leq \kappa$ for each $i\in I$ and $j\in \kappa$, then $card(u)\leq \kappa$. Thus, for example, if all the objects in $\cal C$ have cardinality $\leq \kappa$ and $|I|\leq \kappa$ then every object in ${\cal D}_{\kappa}$ has cardinality $\leq \kappa$. This is for instance the case of the classical Fra\"iss\'e construction, where in fact the Fra\"iss\'e limit is always at most countable.
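To make the last remark concrete in the classical case $\kappa=\omega$: for the category of finite linear orders and order embeddings, a Fra\"iss\'e sequence can be realized by iterating one-point extensions, and the union of the resulting chain is a countable dense linear order without endpoints, i.e. a copy of the Fra\"iss\'e limit $(\mathbb{Q},<)$. The following sketch is purely illustrative, encoding each finite order as a finite set of rationals:

```python
from fractions import Fraction

def extend(order):
    """One step of the chain: realise every one-point extension of the
    finite linear order `order` by adding a point below everything,
    between each pair of consecutive points, and above everything."""
    xs = sorted(order)
    new = [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 1]
    return sorted(set(xs) | set(new))

def fraisse_chain(steps):
    """Finite approximations u_0 <= u_1 <= ... whose union is dense
    and without endpoints, hence order-isomorphic to the rationals."""
    u = [Fraction(0)]
    chain = [u]
    for _ in range(steps):
        u = extend(u)
        chain.append(u)
    return chain
```

Each stage has cardinality $2n+1$, where $n$ is the cardinality of the previous one, so every stage, and hence the colimit, is at most countable, in accordance with the estimate above.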
\section{The topos-theoretic interpretation}
A remark on notation: all the toposes in this section will be Grothendieck toposes, if not otherwise stated.\\
Let us recall that there exists an initial object in the category of toposes and geometric morphisms, which is given by the terminal category $1$ having just one object and the identity morphism on it; in fact, this category is a (coherent, atomic) Grothendieck topos, being the category of sheaves on the empty category with respect to the atomic topology on it (another presentation of it is obtained by taking the sheaves on $1$ with respect to the maximal Grothendieck topology on it, that is the topology in which all sieves cover). We will say that a topos $\cal E$ is trivial if it is naturally equivalent to $1$; of course, this is the same as saying that $\cal E$ is degenerate, that is $0_{\cal E}\cong 1_{\cal E}$.\\
Let us recall that a topos $\cal E$ is said to have enough points if the inverse image functors of the geometric morphisms $\Set\rightarrow \cal E$ are jointly conservative; every coherent topos has enough points (see for example \cite{El2}).
\begin{lemma}\label{teo1}
Let $\cal E$ be a topos with enough points. Then $\cal E$ is trivial if and only if it has no points.
\end{lemma}
\begin{proofs}
In one direction, let us suppose $\cal E$ trivial. Then $\cal E$ has no points, because if $f:\Set\rightarrow \cal E$ were a point then we would have $0_{\Set}\cong f^{\ast}(0_{\cal E})\cong f^{\ast}(1_{\cal E})\cong 1_{\Set}$, which is absurd. Conversely, if $\cal E$ has no points then, taking the unique arrow $0:0_{\cal E}\rightarrow 1_{\cal E}$ in $\cal E$, we trivially have that for each point $f$ of $\cal E$, $f^{\ast}(0)$ is an isomorphism; from the fact that $\cal E$ has enough points we can thus conclude that $0$ is an isomorphism, that is $\cal E$ is trivial.
\end{proofs}
\begin{lemma}\label{teo2}
Let $\cal C$ be a category satisfying the right Ore condition, and $J_{at}$ the atomic topology on it. Then $\Sh({\cal C},J_{at})$ is trivial if and only if $\cal C$ is the empty category.
\end{lemma}
\begin{proofs}
Recall that $1_{\Sh({\cal C},J_{at})}$ is given by the constant functor $\Delta{1_{\Set}}:{\cal C}^{\textrm{op}}\rightarrow \Set$, while $0_{\Sh({\cal C},J_{at})}$ is given by the result of applying the associated sheaf functor $a:[{\cal C}^{\textrm{op}}, \Set]\rightarrow \Sh({\cal C},J_{at})$ to the initial object of $[{\cal C}^{\textrm{op}}, \Set]$, that is the constant functor $\Delta{\emptyset}:{\cal C}^{\textrm{op}}\rightarrow \Set$. But this functor is trivially a sheaf with respect to the atomic topology on $\cal C$, since all its covering sieves are non-empty, so $a(\Delta{\emptyset})\cong \Delta{\emptyset}$. Now, clearly, $\Delta{\emptyset}\cong \Delta{1_{\Set}}$ if and only if $\cal C$ is the empty category.
\end{proofs}
\begin{lemma}\label{teo3}
Let $\cal C$ be a category satisfying the right Ore condition, and $J_{at}$ the atomic topology on it. Then if $[{\cal C}^{\textrm{op}}, \Set]$ is coherent, $\Sh({\cal C},J_{at})$ is coherent.
\end{lemma}
\begin{proofs}
From \cite{flatcoh} we know that if $[{\cal C}^{\textrm{op}}, \Set]$ is coherent, then we can axiomatize the theory of flat functors on $\cal C$ with coherent axioms in the language of presheaves on $\cal C$. Then, to obtain a coherent axiomatization for the theory of flat $J_{at}$-continuous functors on $\cal C$, it suffices to add to these axioms, for each arrow $f:c\rightarrow d$, the following (coherent) axiom:
\[
\top \: \vdash_{y}\: (\exists x\in c)(f(x)=y).
\]
\end{proofs}
{\flushleft
We recall that in \cite{flatcoh} Beke, Karazeris and Rosick\'y have introduced a notion of category having all fc finite limits and proved the following result: $[{\cal C}^{\textrm{op}}, \Set]$ is coherent if and only if $\cal C$ has all fc finite limits. Without going into details, we just remark that this fact can be profitably applied in connection with Lemma \ref{teo3} (see for example Theorem \ref{teocons} below).\\
We recall that a geometric theory $\mathbb T$ is said to be of presheaf type if its classifying topos is a presheaf topos (equivalently, the topos $[{\cal C},\Set]$, where ${\cal C}:=(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$ is the category of finitely presentable $\mathbb T$-models in $\Set$). We will say that two geometric theories are Morita-equivalent if they have the same category of models (up to natural equivalence) in every Grothendieck topos $\cal{E}$, naturally in $\cal{E}$; equivalently, if they have the same classifying topos.\\
We recall from \cite{OC} that if $\mathbb T$ is a theory of presheaf type such that the category $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))^{\textrm{op}}$ satisfies the right Ore condition (equivalently $\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set)$ satisfies AP), then the topos $\Sh((\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))^{\textrm{op}}, J_{at})$ classifies the homogeneous $\mathbb T$-models. We note that the notion of homogeneity of a model of $\mathbb T$ in $\Set$ defined in \cite{OC} coincides with the notion of $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$-homogeneous object of the category ${\mathbb T}\textrm{-mod}(\Set)$ with respect to the embedding $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))\hookrightarrow {\mathbb T}\textrm{-mod}(\Set)$ that we defined in the first section of this paper.\\
We will sometimes identify theories with their Morita-equivalence classes; the theory of flat $J_{at}$-continuous functors on $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))^{\textrm{op}}$, which can be taken as the ``canonical'' representative for the Morita-equivalence class of theories classified by the topos $\Sh((\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))^{\textrm{op}}, J_{at})$, will be called ``the theory of homogeneous $\mathbb T$-models''.}\\
A geometric theory is said to be consistent if it has at least one model in $\Set$.\\
The previous lemmas combine to give the following consistency result.
\begin{theorem}\label{teocons}
Let $\mathbb T$ be a theory of presheaf type such that the category $\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set)$ has the amalgamation property. If the theory of homogeneous $\mathbb T$-models is Morita-equivalent to a coherent theory (for example when the category $\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set)$ has all fc finite colimits) and there is at least one $\mathbb T$-model in $\Set$, then there exists at least one homogeneous $\mathbb T$-model in $\Set$.
\end{theorem}
\begin{proofs}
The theory $\mathbb T'$ of homogeneous $\mathbb T$-models is Morita-equivalent to a coherent theory if and only if its classifying topos $\Sh((\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))^{\textrm{op}}, J_{at})$ is a coherent topos. Notice that for any category $\cal C$, $\cal C$ is empty if and only if $\Ind{{\cal C}}$ is empty; so if $\mathbb T$ is a theory of presheaf type then $\mathbb T$ has a model in $\Set$ if and only if it has a finitely presentable model in $\Set$. Then, since $\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set)$ is not the empty category, it follows from Lemma \ref{teo2} that the topos $\Sh((\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))^{\textrm{op}}, J_{at})$ is not trivial. Hence, by Lemma \ref{teo1}, it has a point. This point corresponds to a $\mathbb T'$-model in $\Set$, that is, to a homogeneous $\mathbb T$-model in $\Set$.
The fact that when the category $\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set)$ has all fc finite colimits, $\mathbb T'$ is Morita-equivalent to a coherent theory follows from Lemma \ref{teo3}.
\end{proofs}
A (many-sorted) geometric theory is said to be atomic if it is classified by an atomic topos. Of course, the property of atomicity for a theory is stable under Morita-equivalence. A geometric theory $\mathbb T$ over a signature $\Sigma$ is said to be complete if every sentence over $\Sigma$ is $\mathbb T$-provably equivalent to $\top$ or $\bot$, but not both. It is well-known that if $\mathbb T$ is atomic then $\mathbb T$ is complete if and only if its classifying topos $\Set[\mathbb T]$ is connected (equivalently, two-valued; see the proof of Theorem \ref{teo4} below). Recall that if a theory is coherent then its completeness implies its consistency (cf. for example Lemma \ref{teo1}), but this implication does not hold for a general geometric theory; in fact, there exist connected atomic toposes without points (see for example \cite{El2}). We also remark that the property of completeness for a geometric theory is stable under Morita-equivalence, being equivalent to a categorical property (being two-valued) of the corresponding classifying topos.\\
\begin{theorem}\label{teo4}
Let $\cal C$ be a category satisfying the right Ore condition, and $J_{at}$ the atomic topology on it. Then the atomic topos $\Sh({\cal C},J_{at})$ is connected if and only if $\cal C$ is a connected category.
\end{theorem}
\begin{proofs}
Recall that a topos $\cal E$ is said to be locally connected if the geometric morphism $\gamma:{\cal E}\rightarrow \Set$ is essential, that is, the inverse image functor $\gamma^{\ast}:\Set \rightarrow \cal E$ has a left adjoint $\gamma_{!}:{\cal E}\rightarrow \Set$. An object $A$ of a locally connected topos $\cal E$ is said to be connected if $\gamma_{!}(A)\cong 1_{\Set}$. Every atomic topos $\cal E$ is locally connected (see for example p. 684 of \cite{El2}), and the objects of $\cal E$ which are connected are also called atoms.\\
We observe that an object $A$ of an atomic topos $\cal E$ is an atom if and only if the only subobjects of $A$ in $\cal E$ are $0_{A}:0\rightarrow A$ and $1_{A}:A\rightarrow A$ and they are distinct from each other. Indeed, this easily follows from the bijection $\Sub_{\cal E}(A)\cong \Sub_{\Set}(\gamma_{!}(A))$ (cf. p. 685 of \cite{El2}). Hence, since every atomic topos is locally connected, Lemma C.3.3.3 in \cite{El2} gives the following characterization, to which we shall refer as $(\ast)$: an atomic topos $\cal E$ is connected if and only if the only subobjects of $1_{\cal E}$ in $\cal E$ are $0_{1}:0\rightarrow 1$ and $1_{1}:1\rightarrow 1$ and they are distinct from each other. We use this criterion to prove our theorem.\\
We can identify the subterminals in $\Sh({\cal C},J_{at})$ with $J_{at}$-ideals on $\cal C$ (see p. 576 of \cite{El2}). By recalling (from the proof of Lemma \ref{teo2}) that $0_{\Sh({\cal C},J_{at})}$ is the constant functor $\Delta{\emptyset}:{\cal C}^{\textrm{op}}\rightarrow \Set$, condition $(\ast)$ can thus be rephrased as follows:\\
$\cal C$ is non-empty and every non-empty subset $I\subseteq ob({\cal C})$ which is a sieve (that is, for each arrow $f:a\rightarrow b$ in $\cal C$, $b\in I$ implies $a\in I$) and satisfies the property $(\forall R\in J_{at}(U))((\forall f_{i}:U_{i}\rightarrow U \in R,\; U_{i}\in I)\imp (U\in I))$ is the whole of $ob({\cal C})$.\\
Since $J_{at}$ is the atomic topology on $\cal C$, this condition simplifies to:\\
$\cal C$ is non-empty and every non-empty subset $I\subseteq ob({\cal C})$ which is a sieve and satisfies the property that for every arrow $f:V\rightarrow U$ in $\cal C$, $V\in I$ implies $U\in I$, is the whole of $ob({\cal C})$; but this is clearly equivalent to saying that $\cal C$ is connected.
\end{proofs}
\begin{theorem}\label{teo5}
Let $\cal C$ be a non-empty category satisfying the amalgamation property. Then $\cal C$ satisfies the joint embedding property if and only if it is a connected category.
\end{theorem}
\begin{proofs}
If $\cal C$ satisfies JEP then for any objects $a,b\in {\cal C}$ there exists an object $c\in \cal C$ and morphisms $f:a\rightarrow c$, $g:b\rightarrow c$ in $\cal C$:
\[
\xymatrix {
& a \ar[d]^{f} \\
b \ar[r]_{g} & c }
\]
Then we have the following zig-zag between $a$ and $b$:
\[
\xymatrix {
& a \ar[dl]_{1_{a}} \ar[dr]^{f} & & b \ar[dl]_{g} \ar[dr]^{1_{b}} \\
a & & c & & b. }
\]
Conversely, we prove that for any objects $a,b\in {\cal C}$ there exist an object $c\in \cal C$ and morphisms $f:a\rightarrow c$, $g:b\rightarrow c$ in $\cal C$ by induction on the length $n$ of a zig-zag that connects $a$ and $b$. If $n=1$ then the claim follows immediately from the amalgamation property. If $n>1$ we have a zig-zag
\[
\xymatrix {
& \ldots & \ldots & d'_{n} \ar[dl]_{f_{n}} \ar[dr]^{g_{n}} \\
d_{0}=a & \ldots & d_{n-1} & & d_{n}=b. }
\]
By applying the induction hypothesis to the pair $a, d_{n-1}$ one gets an object $d\in \cal C$ and morphisms $h:a\rightarrow d$, $k:d_{n-1}\rightarrow d$ in $\cal C$. The amalgamation property applied to the pair of morphisms $k\circ f_{n}$ and $g_{n}$ then gives an object $c$ and two morphisms $s:d\rightarrow c$ and $t:b\rightarrow c$. Then we have morphisms $f:=s\circ h:a\rightarrow c$ and $g:=t:b\rightarrow c$, as required.
\end{proofs}
From Theorems \ref{teo4} and \ref{teo5} we thus deduce that given a consistent theory of presheaf type $\mathbb T$ such that the category $\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set)$ satisfies the amalgamation property, the condition that $\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set)$ satisfies JEP is exactly what makes the theory $\mathbb T'$ of homogeneous $\mathbb T$-models complete. Indeed, $\mathbb T'$ is complete if and only if $\Set[\mathbb T']\simeq \Sh((\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))^{\textrm{op}}, J_{at})$ is connected, if and only if $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))^{\textrm{op}}$ is connected, if and only if $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$ is connected, if and only if $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$ satisfies JEP.\\
A geometric theory $\mathbb T$ is said to be countably categorical if any two countable models of $\mathbb T$ in $\Set$ are isomorphic (where by `countable' we mean either finite or denumerable). Notice that, by our definition, any geometric theory having no models in $\Set$ is (vacuously) countably categorical. We recall from \cite{OC5} that every complete atomic geometric theory is countably categorical; so, by the remarks above, we obtain the following result.
\begin{theorem}\label{teo6}
Let $\mathbb T$ be a consistent theory of presheaf type such that the category $\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set)$ has the amalgamation and joint embedding properties. If $\mathbb T'$ is a geometric theory which is Morita-equivalent to the theory of homogeneous $\mathbb T$-models then $\mathbb T'$ is complete and countably categorical.
\end{theorem}
\pushright{$\square$}\penalty-700 \smallskip
\begin{rmk}
\emph{Concerning the existence of homogeneous $\mathbb T$-models in $\Set$, we note that if the theory $\mathbb T$ in Theorem \ref{teo6} is coherent then, by Lemma \ref{teo3} and Theorem \ref{teocons}, there exists a homogeneous $\mathbb T$-model in $\Set$. If moreover the signature of $\mathbb T$ is countable then, by the results in \cite{OC5}, there is a countable homogeneous $\mathbb T$-model in $\Set$.}
\end{rmk}
The usefulness of Theorem \ref{teo6} lies in the fact that it is generally not difficult to see, given a theory of presheaf type $\mathbb T$, if a certain theory is Morita-equivalent to the theory of homogeneous $\mathbb T$-models. In fact, one can use Corollary 4.7 in \cite{OC} and the explicit description of the homogeneous models given in \cite{OC}. For example, in \cite{OC} we saw that, given the theory $\mathbb T$ of linear orders, the dense linear orders without endpoints corresponded precisely to the homogeneous $\mathbb T$-models. By using similar methods, one can also show that, given the theory of decidable objects, the infinite decidable objects are exactly the homogeneous decidable objects and that, given the algebraic theory of Boolean algebras, the atomless Boolean algebras are exactly the homogeneous Boolean algebras. This leads, via Theorem \ref{teo6}, to an alternative proof that the theory of dense linear orders without endpoints and the theory of atomless Boolean algebras are complete and countably categorical.\\
Moreover, we know from \cite{OC5} that, under the hypotheses of Theorem \ref{teo6}, the Booleanization of the theory $\mathbb T$ axiomatizes the $\mathbb T$-homogeneous models, and hence we may deduce that any two countable $\mathbb T$-homogeneous models in $\Set$ are isomorphic (cf. Theorem 3.3 of \cite{OC5}).\\
We also recall from \cite{OC5} that if $\mathbb T$ is an atomic, complete countable geometric theory with infinite models in $\Set$ then, denoting by $M$ the unique countable model of $\mathbb T$ (up to isomorphism), we have the following representation for the classifying topos $\Set[\mathbb T]$ of $\mathbb T$:
\[
\Set[\mathbb T]\simeq \Cont(Aut(M)),
\]
where $\Cont(Aut(M))$ is the topos of continuous $Aut(M)$-sets, $Aut(M)$ being endowed with the topology of pointwise convergence.\\
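For instance, taking the example of linear orders recalled above, the theory of dense linear orders without endpoints is atomic, complete and countable, and its unique countable model is $(\mathbb{Q},<)$; writing $\mathbb D$ for this theory, we thus obtain
\[
\Set[\mathbb D]\simeq \Cont(Aut(\mathbb{Q},<)),
\]
where $Aut(\mathbb{Q},<)$ is endowed with the topology of pointwise convergence.\\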
Let us now apply the categorical theorem of the first section in the context of theories of presheaf type.
\begin{theorem}
Let $\mathbb T$ be a consistent theory of presheaf type such that the category $\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set)$ satisfies AP and JEP. If there exists in $\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set)$ a dominating family of arrows of finite or countable cardinality then there exists in $\Set$ a $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$-homogeneous and $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$-universal ${\mathbb T}$-model; also, given a $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$-homogeneous and $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$-universal ${\mathbb T}$-model $M$, if $M$ can be written as the colimit in ${\mathbb T}\textrm{-mod}(\Set)$ of an $\omega$-chain of finitely presentable $\mathbb T$-models (equivalently, is $\omega^{+}$-presentable) then, provided that all the morphisms in $({\mathbb T}\textrm{-mod}(\Set))_{\omega}$ are monic, $M$ is $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$-ultrahomogeneous and unique (up to isomorphism) with this property among the $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$-universal and $\omega^{+}$-presentable $\mathbb T$-models.\\ If $\mathbb T'$ is a geometric theory whose models (in any Grothendieck topos) are the homogeneous $\mathbb T$-models, then $\mathbb T'$ is complete and countably categorical. In particular, if $\mathbb T$ is countable and has infinite models in $\Set$ then there exists a unique (up to isomorphism) countable homogeneous $\mathbb T$-model $M$, and
\[
\Set[\mathbb T']\simeq \Sh((\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))^{\textrm{op}}, J_{at}) \simeq \Cont(Aut(M)),
\]
$Aut(M)$ being endowed with the topology of pointwise convergence.
\end{theorem}
\begin{proofs}
This is immediate from Theorem \ref{teofond}, the remarks following it, Theorem \ref{teo6} and the remark above.
\end{proofs}
Let us now introduce the following notions.\\
Given an embedding $i:{\cal C}\hookrightarrow {\cal D}$ and an object $u\in {\cal D}$ together with a choice of an arrow $f_{c}:c\rightarrow u$ in $\cal D$ for each object $c$ of $\cal C$, we can consider a category $\tilde{\cal C}$, defined as the full subcategory of $({\cal C}\downarrow u)$ on the arrows $f:a\rightarrow u$ in $\cal D$ such that there exists an automorphism $\alpha$ of $u$ (that is, an isomorphism $\alpha:u\rightarrow u$ in the category $\cal D$) such that $f=\alpha \circ f_{a}$. Then we can define a functor $\chi:\tilde{\cal C}^{\textrm{op}}\rightarrow Subgr(Aut(u))$, where $Subgr(Aut(u))$ is the collection of the subgroups of $Aut(u)$ regarded as a poset category with respect to the inclusion, in the following way: $\chi$ sends an object $f:a\rightarrow u$ in $\tilde{\cal C}$ to the subgroup $Aut_{f}(u)$ of $Aut(u)$ formed by the automorphisms $\alpha$ of $u$ such that $\alpha\circ f=f$, and an arrow $h:f\rightarrow g$ in $\tilde{\cal C}$ to the inclusion $Aut_{g}(u)\subseteq Aut_{f}(u)$. If $\chi$ is full and faithful and reflects identities (that is, for each pair of arrows $h,k$ in $\tilde{\cal C}^{\textrm{op}}$, $\chi(h)=\chi(k)$ implies $h=k$) we say that $u$ satisfies the Galois property with respect to $\tilde{\cal C}$; notice that if $\cal C$ is skeletal and $\chi$ is full and faithful then $u$ satisfies the Galois property with respect to $\tilde{\cal C}$. Also, we can endow the group $Aut(u)$ with a topology $\cal U$ by saying that the subgroups in the image of the functor $\chi$ form a base of neighbourhoods of the identity.\\
In the context of these notions, the following proposition holds.
\begin{proposition}
Given an embedding $i:{\cal C}\hookrightarrow {\cal D}$ such that all the arrows $f_{c}$ (for $c\in {\cal C}$) are monic, let $u$ be a $\cal C$-ultrahomogeneous object in $\cal D$ which satisfies the Galois property with respect to $\tilde{\cal C}$. Then the category $\cal C$ satisfies the amalgamation property and there is a natural equivalence
\[
\Sh({\cal C}^{\textrm{op}},J_{at})\simeq \Cont(Aut(u))
\]
where $\Cont(Aut(u))$ is the topos of continuous $Aut(u)$-sets, $Aut(u)$ being endowed with the topology $\cal U$.
\end{proposition}
\begin{proofs}
From Theorem 2 p. 154 \cite{MM} we deduce that $\Cont(Aut(u))$ is naturally equivalent to $\Sh({\bf S}_{\cal U}(Aut(u)),J_{at})$, where ${\bf S}_{\cal U}(Aut(u))$ is the category having as objects the continuous $Aut(u)$-sets of the form $Aut(u)\slash \chi(f)$ for $f\in \tilde{\cal C}$ and as arrows $Aut(u)\slash \chi(f)\rightarrow Aut(u)\slash \chi(g)$ the cosets $\chi(g)\alpha$ with the property that $\chi(f)\subseteq \alpha^{-1}\chi(g)\alpha$ (see \cite{MM} for more details). To prove our proposition it is therefore enough to show that there is an equivalence of categories between ${\bf S}_{\cal U}(Aut(u))$ and ${\cal C}^{\textrm{op}}$. We explicitly define a functor $F:{\bf S}_{\cal U}(Aut(u))\rightarrow {\cal C}^{\textrm{op}}$ and prove that it is an equivalence of categories.
Let us first define $F$ on objects: $F$ sends an object $Aut(u)\slash \chi(f)$ of ${\bf S}_{\cal U}(Aut(u))$ to $dom(f)\in {\cal C}$; this is well-defined since $\chi$ reflects identities. Given an arrow $Aut(u)\slash \chi(f)\rightarrow Aut(u)\slash \chi(g)$, represented by a coset $ \chi(g)\alpha$, we have that $\chi(f)\subseteq \alpha^{-1}\chi(g)\alpha$, equivalently $\alpha \chi(f)\alpha^{-1}\subseteq \chi(g)$. This means that $\alpha\circ \beta \circ \alpha^{-1}\circ g = g$ (equivalently, $\beta \circ (\alpha^{-1}\circ g)=(\alpha^{-1}\circ g)$) for each $\beta\in Aut(u)$ such that $\beta \circ f=f$, which is in turn equivalent to saying that $\chi(f)\subseteq \chi(\alpha^{-1}\circ g)$. This implies, by our hypothesis that $\chi$ is full and faithful, that there exists a unique arrow $z:dom(g)\rightarrow dom(f)$ in $\cal C$ such that $f\circ z=\alpha^{-1}\circ g$. We put $F(\chi(g)\alpha)=z$; this is well-defined since $\chi(g) \alpha =\chi(g) \alpha'$ if and only if $\alpha \circ \alpha'^{-1}\in \chi(g)$, if and only if $\alpha^{-1} \circ g=\alpha'^{-1} \circ g$, if and only if $f\circ F(\chi(g) \alpha)=f\circ F(\chi(g)\alpha')$ if and only if $F(\chi(g)\alpha)=F(\chi(g)\alpha')$, where the last equivalence follows from the fact that $f$ is monic. This also proves that $F$ is faithful. $F$ is full because $u$ is ${\cal C}$-ultrahomogeneous, and it is surjective by definition of $\cal U$. Therefore, $F$ is an equivalence of categories.
\end{proofs}
\vspace{7 mm}
{\bf Acknowledgements.} I am very grateful to my Ph.D. supervisor Peter Johnstone for his support and encouragement. Thanks also to Martin Hyland for suggesting that I investigate Fra\"iss\'e's construction topos-theoretically.\\
\newpage
Let $\mathcal{X} = \{x_1,\ldots,x_n\}$ be a finite set, and let $\mathcal{E} = \{E_1,\ldots,E_s\}$
be a family of distinct subsets of $\mathcal{X}$. The pair $\mathcal{H} = (\mathcal{X},\mathcal{E})$ is called a {\bf hypergraph}
if $E_i \neq \emptyset$ for each $i$. The elements
of $\mathcal{X}$ are called the {\bf vertices}, while the elements of $\mathcal{E}$ are called the {\bf edges}
of $\mathcal{H}$. A hypergraph $\mathcal{H}$ is {\bf simple} if: (1) $\mathcal{H}$ has no loops, i.e., $|E| \ge 2$ for all
$E \in \mathcal{E}$, and (2) $\mathcal{H}$ has no multiple edges, i.e., whenever $E_i,E_j \in \mathcal{E}$
and $E_i \subseteq E_j$, then $i = j$. A hypergraph generalizes the classical notion of a graph;
a graph is a hypergraph for which every $E \in \mathcal{E}$ has cardinality two.
Let $k$ be a field. By identifying the vertex $x_i$
with the variable $x_i$ in the ring $R = k[x_1,\ldots,x_n]$, we
can associate to every simple hypergraph $\mathcal{H} = (\mathcal{X},\mathcal{E})$ a squarefree monomial ideal
\[\mathcal{I}(\mathcal{H}) =
\left( \left\{\left. x^E = \prod_{x \in E} x ~\right|~ E \in \mathcal{E}\right\} \right)
\subseteq R = k[x_1,\ldots,x_n].\]
We call the ideal $\mathcal{I}(\mathcal{H})$ the {\bf edge ideal} of $\mathcal{H}$.
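To illustrate the definition, consider for instance the following small hypergraph (chosen here purely for illustration): let $\mathcal{H}$ have vertex set $\mathcal{X} = \{x_1,\ldots,x_5\}$ and edges $\mathcal{E} = \{\{x_1,x_2\},\{x_2,x_3,x_4\},\{x_4,x_5\}\}$. Since no edge is contained in another, $\mathcal{H}$ is simple, and
\[
\mathcal{I}(\mathcal{H}) = (x_1x_2,\; x_2x_3x_4,\; x_4x_5) \subseteq k[x_1,\ldots,x_5].
\]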
In this paper we
study the minimal graded free resolution of $\mathcal{I}(\mathcal{H})$.
Since there is a natural bijection between the sets
\[
\left\{
\begin{array}{c}
\mbox{simple hypergraphs $\mathcal{H} =(\mathcal{X},\mathcal{E})$} \\
\mbox{with $\mathcal{X} = \{x_1,\ldots,x_n\}$}
\end{array}
\right\}
\leftrightarrow
\left\{
\begin{array}{c}
\mbox{squarefree monomial} \\
\mbox{ideals $I \subseteq R = k[x_1,\ldots,x_n]$}
\end{array}
\right\}
\]
we are in fact studying a fundamental problem in commutative algebra which
asks for the minimal graded free resolution of a monomial ideal
(for an introduction see \cite{MillerSturmfels2004}).
The edge ideal approach allows us to study this problem from a new angle;
the standard approach is to use the Stanley-Reisner
dictionary to associate to a squarefree monomial
ideal $I$ a simplicial complex $\Delta$ where the generators of $I$ correspond to the minimal nonfaces of $\Delta$. Instead, we associate
to $I$ a new combinatorial object, namely, a hypergraph.
The theme of this work is to understand how
the algebraic invariants of $I=\mathcal{I}(\mathcal{H})$ encoded in its minimal free resolution relate
to the combinatorial properties of $\mathcal{H}$.
The edge ideal of a hypergraph was first introduced by Villarreal \cite{V1}
in the special case that $\mathcal{H} = G$ is a simple graph.
Subsequently, many people, including
\cite{Barile,FH,FV,HHTZ,HHZ2,HHZ,HHZ1,S,SturmfelsSullivant2005,SVV,V,V2},
have been working on a program to build a dictionary between the algebraic
properties of $\mathcal{I}(G)$ and the combinatorial structure of $G$. Of particular relevance
to this paper, the minimal graded resolution of $\mathcal{I}(G)$ was investigated in
\cite{CN,EisenbudGreenHulekPopescu2004,EV,J,JK,K,HaVanTuyl2005,RothVanTuyl2005,Zheng2004}
(see also \cite{HaVanTuyl2006}
for a survey). In this paper we shall extend some of these results to the hypergraph
case, most notably, the results of \cite{HaVanTuyl2005}, thereby extending our understanding
of quadratic squarefree monomial ideals to arbitrary squarefree monomial ideals. At the same
time, we shall also derive new results which, even when restricted to graphs, give new
and interesting corollaries.
The edge ideal $\mathcal{I}(\mathcal{H})$ of an arbitrary hypergraph was first
studied by Faridi \cite{Faridi2002} but from a slightly different perspective.
Recall that $\Delta$ is a {\bf simplicial complex} on the vertex set $\mathcal{X}$ if $\{x_i\} \in \Delta$
for all $i$, and if $F \in \Delta$ then all subsets of $F$ belong
to $\Delta$. The
{\bf facets} of $\Delta$ are the maximal elements of $\Delta$ under inclusion. The
{\bf facet ideal} of $\Delta$ is then defined to be the ideal
$\mathcal{I}(\Delta) = \left( \left\{\left. x^F = \prod_{x \in F} x ~\right|~
\mbox{$F$ is a facet of $\Delta$}\right\} \right)
\subseteq R$. Note, however, that if $\mathcal{F}(\Delta) = \{F_1,\ldots,F_t\}$
denotes the set of facets of $\Delta$, then $\mathcal{H}(\Delta) = (\mathcal{X},\mathcal{F}(\Delta))$
is a hypergraph. In fact, what Caboara, Faridi and Selinger \cite{CFS} call
a {\bf facet complex} is a hypergraph. It is immediate that $\mathcal{I}(\mathcal{H}(\Delta)) = \mathcal{I}(\Delta)$.
Conversely, given any hypergraph $\mathcal{H} = (\mathcal{X},\mathcal{E})$, we can associate to $\mathcal{H}$
the simplicial
complex $\Delta(\mathcal{H}) = \{F \subseteq \mathcal{X} ~|~ F \subseteq E_i ~~\mbox{for some $E_i \in \mathcal{E}$}\}.$
It is again easy to verify that $\mathcal{I}(\mathcal{H}) = \mathcal{I}(\Delta(\mathcal{H}))$.
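As a small illustration of this correspondence (an example of our own choosing): if $\mathcal{H}$ has edges $\{x_1,x_2,x_3\}$ and $\{x_3,x_4\}$, then $\Delta(\mathcal{H})$ is the simplicial complex whose facets are exactly these two edges, and
\[
\mathcal{I}(\mathcal{H}) = (x_1x_2x_3,\; x_3x_4) = \mathcal{I}(\Delta(\mathcal{H})).
\]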
One may therefore take the viewpoint that the generators of a
squarefree monomial ideal correspond to either
the edges of a hypergraph or the facets of a simplicial complex. In this paper,
we have chosen to take the first option for at least two reasons: first,
the language of hypergraphs is more natural to describe our results; and
second, we only require the edge structure of the hypergraph
and never make use of the simplicial complex structure. (A hypergraph point of view
is also taken in the recent paper \cite{HHTZ}.)
Of course, all our results
could be reinterpreted as statements about the facet ideal of some simplicial complex.
The starting point of this paper
is to determine how the splitting technique used in \cite{HaVanTuyl2005}
to study the resolution of edge ideals of graphs can be extended to hypergraphs. Recall that
Eliahou and Kervaire \cite{EliahouKervaire1990} call a monomial ideal $I$
{\bf splittable} if $I = J + K$ for
two monomial ideals $J$ and $K$ such that the minimal generators of $J,K$ and $J \cap K$
satisfy a technical condition (see Definition \ref{defn: split} for the
precise statement). When an ideal is splittable, the minimal
resolutions (specifically the graded Betti numbers) of $I, J, K$ and $J \cap K$ are
then related. Given a hypergraph $\mathcal{H}$, we therefore want to split $\mathcal{I}(\mathcal{H})$
so that the ideals $J,K$, and $J \cap K$ correspond to edge ideals of sub-hypergraphs
of $\mathcal{H}$. This allows us to derive recursive-type formulas to relate the graded Betti numbers of $\mathcal{I}(\mathcal{H})$ to
those of sub-hypergraphs of $\mathcal{H}$. These formulas provide a systematic approach to investigating algebraic invariants and properties of $\mathcal{I}(\mathcal{H})$.
We now summarize the results of this paper.
In Section 3 we extend the notion of a
splitting edge of a graph as defined in \cite{HaVanTuyl2005} to the hypergraph
setting. Precisely, let $E$ be an edge of the hypergraph $\mathcal{H}$. If $\mathcal{H} \backslash E$
denotes the hypergraph with the edge $E$ removed, then it is clear that $\mathcal{I}(\mathcal{H}) =
(x^E) + \mathcal{I}(\mathcal{H}\backslash E)$. We call $E$ a {\bf splitting edge} precisely when
$\mathcal{I}(\mathcal{H}) = (x^E) + \mathcal{I}(\mathcal{H}\backslash E)$ is a splitting of the ideal $\mathcal{I}(\mathcal{H})$.
Our main result in Section 3 is the following classification of
splitting edges, thus answering a question raised
in \cite{HaVanTuyl2006}.
\begin{theorem}[Theorem \ref{characterize: splitting facets}]
Let $\mathcal{H}$ be a hypergraph
with two or more edges.
Then an edge $E$ is a splitting edge of $\mathcal{H}$ if and only if there exists
a vertex $z \in E$ such that
\[(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E) \subseteq (x^E) \cap \mathcal{I}(\mathcal{H}\backslash \{z\}).\]
Here, $\mathcal{H}\backslash \{z\}$ denotes the sub-hypergraph of $\mathcal{H}$ where every
edge containing $z$ is removed.
\end{theorem}
To make use of our classification of splitting edges, we need to be
able to describe the resolution of $J \cap K = (x^E) \cap \mathcal{I}(\mathcal{H} \backslash E)$.
This resolution was described when $\mathcal{H} =G$ is a simple graph in
\cite{HaVanTuyl2006}. However, this is a difficult problem
for an arbitrary $\mathcal{H}$. We are therefore interested in
families of hypergraphs, which includes all simple graphs, where one can say
something about $J\cap K$.
In Section 4 we
introduce one such family which we call {\bf properly-connected} hypergraphs.
A hypergraph $\mathcal{H} = (\mathcal{X},\mathcal{E})$ is properly-connected if all its edges have the same cardinality, and
furthermore, if $E,H \in \mathcal{E}$ with $E \cap H \neq \emptyset$, then
the distance $\operatorname{dist}_{\mathcal{H}}(E,H)$ between $E$ and $H$, that is, the length of the shortest
path between $E$ and $H$
in $\mathcal{H}$, is determined by $|E \cap H|$. It is easy to see that all simple graphs are
properly-connected.
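Indeed, as a quick check: in a simple graph every edge has cardinality $d=2$, and two distinct intersecting edges $E$ and $H$ share exactly one vertex and satisfy
\[
\operatorname{dist}_G(E,H) = 1 \quad \mbox{whenever} \quad |E \cap H| = 1,
\]
so the distance between intersecting edges is indeed determined by $|E\cap H|$.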
In fact, a re-examination of the results of \cite{HaVanTuyl2005} reveals that the
properly-connected property of graphs is an essential ingredient implicitly used in the
proofs. A properly-connected hypergraph is in some sense a natural generalization of a
simple graph.
When $\mathcal{H}$ is properly-connected,
we can describe the resolution of $J \cap K$ in terms of edge ideals of
sub-hypergraphs of $\mathcal{H}$.
Therefore, for any splitting edge $E \in \mathcal{H}$,
we can derive the following recursive-type formula for $\beta_{i,j}(\mathcal{I}(\mathcal{H}))$.
\begin{theorem}[Theorem \ref{theoremsplitting}]\label{introtheorem2}
Let $\mathcal{H}$ be a properly-connected hypergraph and let
$E$ be a splitting edge of $\mathcal{H}$. Suppose $d = |E|$, $\mathcal{H}' = \{H \in \mathcal{H} ~|~ \operatorname{dist}_{\mathcal{H}}(E,H) \geq d+1\}$,
and $t= |N(E)|$, where
$$N(E) = \bigcup_{\{H \in \mathcal{H} ~|~ \operatorname{dist}_{\mathcal{H}}(E,H) = 1\}} H\backslash E.$$
Then for all $i \geq 1$
\[\beta_{i,j}(\mathcal{I}(\mathcal{H})) = \beta_{i,j}(\mathcal{I}(\mathcal{H}\backslash E))
+ \sum_{l=0}^i \binom{t}{l}
\beta_{i-1-l,j-d-l}(\mathcal{I}(\mathcal{H}')).\]
Here, $\beta_{-1,j}(\mathcal{I}(\mathcal{H}')) = 1$ if $j =0$ and $0$ if $j \neq 0$.
\end{theorem}
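When $d=2$, that is, when $\mathcal{H} = G$ is a simple graph, the formula of Theorem \ref{introtheorem2} specializes to
\[\beta_{i,j}(\mathcal{I}(G)) = \beta_{i,j}(\mathcal{I}(G\backslash E))
+ \sum_{l=0}^i \binom{t}{l} \beta_{i-1-l,j-2-l}(\mathcal{I}(G')),\]
where $G'$ consists of the edges of $G$ at distance at least $3$ from the splitting edge $E$ and $t = |N(E)|$; this is (up to notation) the recursive formula for edge ideals of graphs obtained in \cite{HaVanTuyl2005}.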
The sub-hypergraphs $\mathcal{H}\backslash E$ and $\mathcal{H}'$ in Theorem \ref{introtheorem2}
may fail to have splitting edges, thus preventing us from recursively computing
$\beta_{i,j}(\mathcal{I}(\mathcal{H}))$. However, in \cite{HaVanTuyl2005} (see also \cite{J,JK} in the case of
forests), it is proved that when $\mathcal{H}$ is a hyperforest
(i.e., a simplicial forest in the sense of \cite{Faridi2002})
then $\beta_{i,j}(\mathcal{I}(\mathcal{H}))$ can be computed recursively.
The goal of Section 5 is to introduce a subclass of properly-connected hypergraphs,
which we call {\bf triangulated} hypergraphs, for which
Theorem \ref{introtheorem2} can be used
to completely resolve the graded Betti numbers of $\mathcal{I}(\mathcal{H})$ recursively.
Triangulated hypergraphs generalize the notion
of {\bf chordal} graphs, which has attracted considerable attention lately
(cf. \cite{FH,FV,HHZ,HHZ1}).
In fact, triangulated graphs
are precisely chordal graphs. As a consequence
of Theorem \ref{introtheorem2}, we show also that the graded Betti numbers of a triangulated hypergraph
are independent of the characteristic of the ground field (Corollary \ref{cor.hyperchar}). Restricted to
simple graphs, we obtain the following interesting corollary, which extends a result of
\cite{J,JK} (who proved the result for forests).
\begin{corollary}[Corollary \ref{cor.graphchar}]
Suppose that $G$ is a chordal graph. Then the graded Betti numbers of $\mathcal{I}(G)$ are independent of the
characteristic of the ground field and can be computed recursively.
\end{corollary}
In Section 6 we study $\operatorname{reg}(\mathcal{I}(\mathcal{H}))$,
the Castelnuovo-Mumford regularity of $\mathcal{I}(\mathcal{H})$,
when $\mathcal{H}$ is properly-connected. Again, the
key idea we need here is the notion of distance between edges.
We say two edges $E,H \in \mathcal{H}$ are {\bf $t$-disjoint} if $\operatorname{dist}_{\mathcal{H}}(E,H) \geq t$.
When $\mathcal{H}$ is a properly-connected hypergraph and $d$ is the common cardinality of the edges,
then $d$-disjoint edges are disjoint edges in the usual sense.
We then show the following:
\begin{theorem}\label{introtheorem3}
Let $\mathcal{H}$ be a properly-connected hypergraph. Suppose $d$ is the common cardinality of the edges in $\mathcal{H}$. Let $c$ be the maximal number of pairwise $(d+1)$-disjoint edges of $\mathcal{H}$.
Then
\begin{enumerate}
\item[$(i)$] {\em (Theorem \ref{regtheorem})} $\operatorname{reg}(\mathcal{I}(\mathcal{H})) \ge (d-1)c+1$.
\item[$(ii)$] {\em (Theorem \ref{regularitytheorem})}
if $\mathcal{H}$ is also triangulated, then $\operatorname{reg}(\mathcal{I}(\mathcal{H})) = (d-1)c+1$.
\end{enumerate}
\end{theorem}
By a {\bf matching} of a hypergraph $\mathcal{H}$, we mean any
subset $\mathcal{E}' \subseteq \mathcal{E}$ of edges in $\mathcal{H}$ which are pairwise disjoint.
The {\bf matching number} of $\mathcal{H}$, denoted by $\alpha'(\mathcal{H})$, is the maximum size of a
matching of $\mathcal{H}$.
For simple graphs, we also obtain a particularly nice upper bound for the regularity of $\mathcal{I}(G)$.
This addresses a question J. Herzog had asked us.
\begin{theorem}[Theorem \ref{cor.matching}]\label{introcor}
Let $G$ be a finite simple graph. Then
$$\operatorname{reg}(R/\mathcal{I}(G)) \leq \alpha'(G)$$
where $\alpha'(G)$ is the matching number of $G$.
\end{theorem}
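The bound in this theorem is easy to test on small graphs because $\alpha'(G)$ can be computed by brute force. A minimal sketch (our own illustration; the function name and data layout are assumptions):

```python
from itertools import combinations

def matching_number(edges):
    # largest number of pairwise disjoint edges, found by brute force
    edges = [frozenset(e) for e in edges]
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            if all(not (a & b) for a, b in combinations(sub, 2)):
                return k
    return 0
```

For the path on four vertices (edges $ab$, $bc$, $cd$) this returns $\alpha' = 2$, so the theorem bounds $\operatorname{reg}(R/\mathcal{I}(G))$ by $2$ for that graph.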
Using Theorem \ref{introcor}, we can compare
the regularity and projective dimension of $\mathcal{I}(G)$ to those of $\mathcal{I}(G)^{\vee}$,
the {\bf Alexander dual} of $\mathcal{I}(G)$.
\begin{theorem}[Theorem \ref{thm.Konig}] \label{intro.Konig}
Let $G$ be a simple graph.
\begin{enumerate}
\item If $G$ is unmixed (i.e., all the minimal vertex covers have the same cardinality), then
$$\operatorname{reg}(\mathcal{I}(G)) \le \operatorname{ht} \mathcal{I}(G) + 1 \le \operatorname{reg}(\mathcal{I}(G)^\vee)+1 \mbox{ and } \operatorname{pdim}(\mathcal{I}(G)^\vee) \le \operatorname{ht} \mathcal{I}(G) \le \operatorname{pdim}(\mathcal{I}(G)) + 1.$$
\item If $G$ is not unmixed, then
$$\operatorname{reg}(\mathcal{I}(G)) \le \operatorname{ht} \mathcal{I}(G) + 1 \le \operatorname{reg}(\mathcal{I}(G)^\vee) \mbox{ and } \operatorname{pdim}(\mathcal{I}(G)^\vee) \le \operatorname{ht} \mathcal{I}(G) \le \operatorname{pdim}(\mathcal{I}(G)).$$
\end{enumerate}
\end{theorem}
When restricted to simple graphs, Theorem \ref{introtheorem3} $(ii)$ also gives an interesting corollary, which was first proved by Zheng \cite{Zheng2004} in the special case that $G$ is a forest.
\begin{corollary}[Corollary \ref{cor.regchordal}] \label{introcor4}
Let $G$ be a chordal graph. Then
$$\operatorname{reg}(\mathcal{I}(G)) = c + 1$$
where $c$ is the maximal number of 3-disjoint edges in $G$.
\end{corollary}
Finally, in Section 7 we show that the first syzygy module of $\mathcal{I}(\mathcal{H})$
when $\mathcal{H}$ is properly-connected
is generated by linear syzygies
if and only if the diameter of the hypergraph $\mathcal{H}$ is small enough (Theorem \ref{linearsyzygies}).
By {\bf diameter} we mean the maximum distance between any two edges of $\mathcal{H}$.
This result can be seen as the first step towards generalizing Fr\"oberg's result \cite{Fr}
characterizing graphs whose edge ideals have a linear resolution.
As an interesting corollary, if $\mathcal{H}$ is a triangulated hypergraph, and
if $\mathcal{I}(\mathcal{H})$ only has linear first syzygies, then
the resolution of $\mathcal{I}(\mathcal{H})$ must in fact be linear (Corollary \ref{linsyzcor}).
\section{Preliminaries}
We recall the relevant results concerning hypergraphs,
resolutions, and splittable ideals.
\subsection{Hypergraphs and edge ideals} Our reference for
the hypergraph material is Berge \cite{Berge1989}.
Throughout this paper we shall assume that our hypergraphs $\mathcal{H} = (\mathcal{X},\mathcal{E})$ are simple,
i.e., $|E| \ge 2$ for all $E \in \mathcal{E}$, and there is no element of $\mathcal{E}$ which contains another. When there is no danger of confusion, we sometimes specify a hypergraph by describing only its set of edges.
If each $E \in \mathcal{E}$ has the same cardinality $d$, then we call $\mathcal{H}$
a {\bf $d$-uniform} hypergraph. Note that a simple graph is a simple $2$-uniform
hypergraph. If $\mathcal{H}$ is $d$-uniform, then the associated simplicial complex
$\Delta(\mathcal{H})$ is a {\bf pure} simplicial complex, that is, all its facets have the
same dimension.
If $E$ is an edge of a hypergraph $\mathcal{H}$, then we let $\mathcal{H} \backslash E$ denote the
hypergraph formed by removing the edge $E$ from $\mathcal{H}$. Similarly, if $x$ is a vertex of $\mathcal{H}$,
we shall write $\mathcal{H} \backslash \{x\}$ to denote the hypergraph formed by removing $x$ and all
edges $E \in \mathcal{E}$ with the property that $x \in E$. Note that $x$ is then an isolated vertex of $\mathcal{H} \backslash \{x\}$; alternatively, one can take
the vertex set of $\mathcal{H} \backslash \{x\}$ to be $\mathcal{X} \backslash \{x\}$. If $\mathcal{Y} \subset \mathcal{X}$,
then the {\bf induced hypergraph on $\mathcal{Y}$}, denoted $\mathcal{H}_{\mathcal{Y}}$, is the
sub-hypergraph of $\mathcal{H}$ whose edge set is $\{E \in \mathcal{E} ~|~ E \subseteq \mathcal{Y}\}$.
If there is no edge $E \in \mathcal{E}$ such that $E\subseteq \mathcal{Y}$, then we view
$\mathcal{H}_{\mathcal{Y}}$ as the hypergraph consisting of the isolated vertices $\mathcal{Y}$.
The notion of distance between edges in a hypergraph will play a fundamental
role in later discussions. We introduce the relevant definitions here.
\begin{definition}
A {\bf chain of length $n$} in $\mathcal{H}$ is a sequence
$(E_0,x_1,E_1,\ldots,x_n,E_n)$ such that
\begin{enumerate}
\item[$(1)$] $x_1,\ldots,x_n$ are all distinct vertices of $\mathcal{H}$,
\item[$(2)$] $E_0,\ldots,E_n$ are all distinct edges of $\mathcal{H}$, and
\item[$(3)$] $x_1 \in E_0$, $x_n \in E_n$, and $x_k,x_{k+1} \in E_k$ for each $k =1,\ldots,n-1$.
\end{enumerate}
We sometimes denote the chain by $(E_0,\ldots,E_n)$ when the vertices in the chain are not relevant to the discussion. Note that $(3)$ implies that $E_i \cap E_{i+1} \neq \emptyset$ for
$i = 0,\ldots,n-1$.
If $E$ and $E'$ are two edges, then $E$ and $E'$ are {\bf connected} if there
exists a chain $(E_0,\ldots,E_n)$ where $E = E_0$ and $E' = E_n$.
If $|E| \geq |E'|$,
then the chain connecting $E$ to $E'$ is a {\bf proper chain} if
$|E_i \cap E_{i+1}| = |E_{i+1}|-1$ for all $i = 0,\ldots,n-1$.
The (proper) chain is an {\bf (proper) irredundant chain} of length $n$
if no proper subsequence is a (proper) chain from $E$ to $E'$.
\end{definition}
\begin{definition}
If $E$ and $E'$ are two edges of a hypergraph $\mathcal{H}$ with $|E| \geq |E'|$, then we define the
{\bf distance} between $E$
and $E'$, denoted by $\operatorname{dist}_{\mathcal{H}}(E,E')$, to be
\[\operatorname{dist}_{\mathcal{H}}(E,E') = \min\{\ell ~|~ (E= E_0,\ldots,E_{\ell}=E') ~\mbox{is a proper
irredundant chain}\}.\]
If no proper irredundant chain between the two edges exists, we
set $\operatorname{dist}_{\mathcal{H}}(E,E') = \infty$.
\end{definition}
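A shortest proper chain can be found by breadth-first search over the edges, since any proper chain of minimal length is automatically irredundant. The following sketch (our own illustration; the function name and data layout are assumptions, and it ignores the distinctness condition on the connecting vertices, which is harmless in the small examples below) computes $\operatorname{dist}_{\mathcal{H}}(E,E')$ for a uniform hypergraph given as a list of vertex sets:

```python
from collections import deque

def proper_distance(edges, E, F):
    # Shortest proper chain from E to F: consecutive edges must satisfy
    # |E_i ∩ E_{i+1}| = |E_{i+1}| - 1.  Returns None if no proper chain exists.
    edges = [frozenset(e) for e in edges]
    E, F = frozenset(E), frozenset(F)
    queue, seen = deque([(E, 0)]), {E}
    while queue:
        cur, dist = queue.popleft()
        if cur == F:
            return dist
        for nxt in edges:
            if nxt not in seen and len(cur & nxt) == len(nxt) - 1:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None
```

For the path graph with edges $ab$, $bc$, $cd$, this gives $\operatorname{dist}(ab,bc)=1$ and $\operatorname{dist}(ab,cd)=2$, while two disjoint edges with no connecting chain are at distance $\infty$ (returned as `None`).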
As in the introduction, the {\bf edge ideal} of $\mathcal{H} = (\mathcal{X},\mathcal{E})$
is the squarefree monomial ideal
\[\mathcal{I}(\mathcal{H}) = \left(\left\{\left. x^E = \prod_{x \in E} x ~\right|~ E \in \mathcal{E}\right\} \right)
\subseteq R = k[x_1,\ldots,x_n].\]
We often abuse notation and write $x^E$ for both
the edge $E$ and the corresponding monomial.
\subsection{Resolutions and splittable ideals}
Let $M$ be a graded $R$-module where $R = k[x_1,\ldots,x_n]$.
Associated to $M$ is a {\bf minimal
graded free resolution} of the form
\[
0 \rightarrow \bigoplus_j R(-j)^{\beta_{l,j}(M)}
\rightarrow \bigoplus_j R(-j)^{\beta_{l-1,j}(M)}
\rightarrow \cdots
\rightarrow \bigoplus_j R(-j)^{\beta_{0,j}(M)}
\rightarrow M \rightarrow 0
\]
where $l \leq n$ and $R(-j)$ is the $R$-module obtained by shifting
the degrees of $R$ by $j$. The number $\beta_{i,j}(M)$, the $ij$th
{\bf graded Betti number} of $M$, equals the number of minimal generators
of degree $j$ in the $i$th syzygy module of $M$.
Of particular interest are the following invariants which
measure the ``size'' of the minimal graded free resolution of $I$.
The {\bf regularity} of $I$, denoted $\operatorname{reg}(I)$, is defined
by
\[\operatorname{reg}(I) := \max\{j-i ~|~ \beta_{i,j}(I) \neq 0\}.\]
The {\bf projective dimension} of $I$, denoted $\operatorname{pdim}(I)$, is defined to be
\[\operatorname{pdim}(I):= \max\{i ~|~ \beta_{i,j}(I) \neq 0 \}.\]
An ideal $I$ generated
by elements of degree $d$ is said to have a {\bf linear resolution}
if $\beta_{i,j}(I) = 0$ for all $j \neq i+d$.
We now recall some results concerning splittable ideals. We use
$\mathcal{G}(I)$ to denote the unique minimal set of generators of a monomial
ideal $I$.
\begin{definition}[see \cite{EliahouKervaire1990}]\label{defn: split}
A monomial ideal $I$ is {\bf splittable} if $I$ is the sum
of two nonzero monomial ideals $J$ and $K$, that is, $I = J+K$, such
that
\begin{enumerate}
\item $\mathcal{G}(I)$ is the disjoint union of $\mathcal{G}(J)$ and $\mathcal{G}(K)$.
\item there is a {\bf splitting function}
\begin{eqnarray*}
\mathcal{G}(J\cap K) &\rightarrow &\mathcal{G}(J) \times \mathcal{G}(K) \\
w & \mapsto & (\phi(w),\psi(w))
\end{eqnarray*}
satisfying
\begin{enumerate}
\item for all $w \in \mathcal{G}(J \cap K), ~~ w = \operatorname{lcm}(\phi(w),\psi(w))$.
\item for every subset $S \subset \mathcal{G}(J \cap K)$, both
$\operatorname{lcm}(\phi(S))$ and $\operatorname{lcm}(\psi(S))$
strictly divide $\operatorname{lcm}(S)$.
\end{enumerate}
\end{enumerate}
If $J$ and $K$ satisfy the above properties, then
we shall say $I = J + K$ is a {\bf splitting} of $I$.
\end{definition}
When $I = J + K$ is a splitting, then there is a relation between $\beta_{i,j}(I)$
and the graded Betti numbers of the ``smaller'' ideals.
This relation was first observed for the total Betti numbers by Eliahou and
Kervaire \cite{EliahouKervaire1990} and extended to the graded case by
Fatabbi \cite{Fatabbi2001}.
\begin{theorem}
\label{prop: ekf}
Suppose $I$ is a
splittable monomial ideal with splitting $I = J+K$. Then
\[\beta_{i,j}(I) = \beta_{i,j}(J) + \beta_{i,j}(K) +
\beta_{i-1,j}(J\cap K) ~~\mbox{for all $i, j \geq 0$} \]
where $\beta_{i-1,j}(J \cap K) = 0$ if $i = 0$.
\end{theorem}
When $I$ is a splittable ideal, Theorem \ref{prop: ekf} gives us
the following corollary.
\begin{corollary} \label{reg pdim}
If $I$ is a splittable monomial ideal with splitting $I = J+K$, then
\begin{enumerate}
\item[$(i)$] $\operatorname{reg}(I) = \max\{\operatorname{reg}(J),\operatorname{reg}(K),\operatorname{reg}(J\cap K) - 1\}$.
\item[$(ii)$] $\operatorname{pdim}(I) = \max\{\operatorname{pdim}(J),\operatorname{pdim}(K),\operatorname{pdim}(J\cap K) + 1\}$.
\end{enumerate}
\end{corollary}
Our goal is to study the numbers $\beta_{i,j}(\mathcal{I}(\mathcal{H}))$.
It follows directly from the definition of $\mathcal{I}(\mathcal{H})$
that $\beta_{0,j}(\mathcal{I}(\mathcal{H}))$ is simply the number of edges $E \in \mathcal{H}$ with $|E| = j$.
We can therefore restrict to investigating the
numbers $\beta_{i,j}(\mathcal{I}(\mathcal{H}))$ with $i \geq 1$. When
$\mathcal{H}$ is a $d$-uniform hypergraph, the following result
implies that we only need to consider a finite range of values of $j$ for each $i$.
\begin{theorem}
Suppose that $\mathcal{H}$ is a $d$-uniform hypergraph. If $\beta_{i,j}(\mathcal{I}(\mathcal{H})) \neq 0$,
then $i+d \leq j \leq \min\{n,d(i+1)\}$.
\end{theorem}
\begin{proof}
Because $\mathcal{H}$ is a $d$-uniform hypergraph, $\mathcal{I}(\mathcal{H})$ is generated by monomials
of degree $d$. So, $\beta_{i,j}(\mathcal{I}(\mathcal{H})) = 0$ for $j < i+d$, thus giving us
the lower bound. For the upper bound, the Taylor resolution implies that
$\beta_{i,j}(\mathcal{I}(\mathcal{H})) = 0$ if $j > d(i+1)$. On the other hand, Hochster's
formula implies that $\beta_{i,j}(\mathcal{I}(\mathcal{H})) = 0$ if $j > n$. The conclusion
now follows.
\end{proof}
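In concrete terms, for fixed $i$ the theorem confines the possibly nonzero graded Betti numbers to a short window of degrees. A one-line sketch (our own illustration; the function name is an assumption):

```python
def betti_degree_window(d, i, n):
    # degrees j with possibly nonzero beta_{i,j}(I(H)) for a
    # d-uniform hypergraph H on n vertices: i + d <= j <= min(n, d(i+1))
    return list(range(i + d, min(n, d * (i + 1)) + 1))
```

For a graph ($d = 2$) on $5$ vertices, the first syzygies can only live in degrees $3$ and $4$.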
\section{Splitting edges}
Let $I$ be any squarefree monomial ideal, and suppose that $\mathcal{H}$ is the
hypergraph associated to $I$, i.e., $I = \mathcal{I}(\mathcal{H})$. We would like to
find splittings of $I$ so that we can make use of Theorem \ref{prop: ekf}.
In this section we describe one possible splitting of $\mathcal{I}(\mathcal{H})$.
One of the simplest ways to partition $\mathcal{G}(I)$ is to pick any $m \in \mathcal{G}(I)$,
and set $\mathcal{G}(J) = \{m\}$ and $\mathcal{G}(K) = \mathcal{G}(I) \backslash \{m\}$. Note
that this is equivalent to picking any edge
$E$ of $\mathcal{H}$, and setting
\[J = (x^E) ~~\text{and} ~~ K = \mathcal{I}(\mathcal{H}\backslash E).\]
It is immediate that $I = \mathcal{I}(\mathcal{H}) = J + K$, and furthermore, $J$ and $K$
satisfy condition $(1)$ of Definition \ref{defn: split}. However, for an arbitrary
edge $E$, $J$ and $K$ may fail to satisfy condition $(2)$ of Definition \ref{defn: split}.
If $E$ is chosen so that $J$ and $K$ satisfy this condition, then
we give this edge the following name.
\begin{definition} Let $\mathcal{H}$ be a hypergraph.
An edge $E$ is a {\bf splitting edge} of $\mathcal{H}$ if
\[\mathcal{I}(\mathcal{H}) = (x^E) + \mathcal{I}(\mathcal{H}\backslash E)\] is a splitting of $\mathcal{I}(\mathcal{H})$.
\end{definition}
To make use of Theorem \ref{prop: ekf}, one would therefore like a means to identify
the splitting edges of a hypergraph. The main result of this section is the following theorem which
provides a classification of the splitting edges of a hypergraph.
This theorem answers Question 5.4.2 of \cite{HaVanTuyl2006},
which asked the equivalent question of which facets of a simplicial complex can be splitting facets.
\begin{theorem} \label{characterize: splitting facets}
Let $\mathcal{H}$ be a hypergraph
with two or more edges.
Then an edge $E$ is a splitting edge of $\mathcal{H}$ if and only if there exists
a vertex $z \in E$ such that
\[(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E) \subseteq (x^E) \cap \mathcal{I}(\mathcal{H}\backslash \{z\}).\]
\end{theorem}
\begin{proof} Let $E$ be an edge of $\mathcal{H}$, and
set $J = (x^E)$ and $K = \mathcal{I}(\mathcal{H}\backslash E)$.
To prove the ``only if'' direction, we prove the contrapositive.
So, suppose that for every vertex $z \in E$, we have
\[(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E) \not\subseteq (x^E) \cap \mathcal{I}(\mathcal{H}\backslash \{z\}).\]
Thus, for each $z \in E$, there exists a minimal generator $x^{L_z}$ of $J \cap K$
such that $x^{L_z} \not\in (x^E) \cap \mathcal{I}(\mathcal{H}\backslash \{z\})$.
Set $S = \{x^{L_z} ~|~ z \in E\} \subseteq \mathcal{G}(J \cap K)$.
We will now show that no splitting function can exist. Suppose there was a splitting
function $s:\mathcal{G}(J \cap K) \rightarrow \mathcal{G}(J) \times
\mathcal{G}(K)$ given by $s(w) = (\phi(w),\varphi(w)).$
Then, since $J = (x^E)$, for each $x^{L_z} \in S$, we have $\phi(x^{L_z}) = x^E$.
For each $z \in E$, let $x^{G_z} = \varphi(x^{L_z}) \in \mathcal{G}(K)$. So
$G_z$ is an edge of $\mathcal{H}$, and $\operatorname{lcm}(x^{E},x^{G_z}) = x^{E \cup G_z} =x^{L_z}$.
We claim that for each $z \in E$, we have $z \in G_z$. Indeed,
if $z' \not\in G_{z'}$ for some $z' \in E$, then
$G_{z'}$ is an edge of $\mathcal{H} \backslash \{z'\}$. But
then $x^{L_{z'}} = \operatorname{lcm}(x^E,x^{G_{z'}}) = x^{E\cup G_{z'}}$
is an element of $(x^E) \cap \mathcal{I}(\mathcal{H}\backslash \{z'\})$,
a contradiction to the choice of $x^{L_{z'}}$.
Now, since $z \in G_z$ for each $z \in E$, we have
\begin{eqnarray*}
\operatorname{lcm}(\varphi(S)) &=& \operatorname{lcm}(\{x^{G_z} ~|~ z \in E\}) = x^{\cup_{z \in E} G_z} = x^{(\cup_{z \in E} G_z) \cup E} = x^{\cup_{z \in E} (G_z \cup E)} \\
&=& x^{\cup_{z \in E} L_z} = \operatorname{lcm}(\{x^{L_z} ~|~ z \in E\}) = \operatorname{lcm}(S).
\end{eqnarray*}
But this contradicts the fact that we have a
splitting function. This proves the ``only if'' direction.
Conversely, suppose that there exists a vertex $z$ of $E$ such that
\[(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E) \subseteq (x^E) \cap \mathcal{I}(\mathcal{H}\backslash \{z\}).\]
This implies that $\mathcal{G}(J \cap K) \subseteq
\{x^{E \cup H} ~|~ H \in \mathcal{H}\backslash \{z\}\}$.
We will construct a splitting function
$s = (\phi,\varphi): \mathcal{G}(J \cap K) \rightarrow \mathcal{G}(J) \times \mathcal{G}(K)$ which satisfies
the
conditions of Definition \ref{defn: split}.
For any $x^L \in \mathcal{G}(J \cap K)$ we define $\phi(x^L) = x^E \in \mathcal{G}(J)$. For each $x^L \in
\mathcal{G}(J \cap K)$, $\varphi(x^L)$ is defined as follows:
by our hypothesis, we have $L \in \{E \cup H ~|~ H \in \mathcal{H}\backslash \{z\}\}$.
Thus, $\mathbb{A} = \{H \in \mathcal{H}\backslash \{z\} ~|~ L = E \cup H\}$ is not the empty set.
We regard $\mathcal{X}$ as an alphabet (with a fixed total order on its elements) and identify each
element of $\mathbb{A}$ with the word formed by listing its
vertices in increasing order. Let $G_L$ be the unique maximal element of $\mathbb{A}$
with respect to the lexicographic word ordering (which is a total order). Observe that,
by construction, $z \not\in G_L$ and $E \cup G_L = L$. Define $\varphi(x^L) = x^{G_L}$.
It is easy to see that $s = (\phi,\varphi)$ is a well defined function on $\mathcal{G}(J\cap K)$
and that condition (a) of Definition \ref{defn: split} is satisfied. To show that
condition (b) of Definition \ref{defn: split} is satisfied, we observe that for any
$x^L \in \mathcal{G}(J \cap K)$, by construction, $z$ does not divide $\varphi(x^L)$. Observe further
that for any subset $S \subseteq \mathcal{G}(J \cap K)$, $z$ divides $x^E$ which strictly divides
$\operatorname{lcm}(S)$. Thus, since $\operatorname{lcm}(\phi(S)) = x^E$ and since $z$ does not divide $\operatorname{lcm}(\varphi(S))$, we must
have that $\operatorname{lcm}(\phi(S))$ and $\operatorname{lcm}(\varphi(S))$ both strictly divide $\operatorname{lcm}(S)$.
The ``if'' direction is proved.
\end{proof}
\begin{remark}
Theorem \ref{characterize: splitting facets} could be reinterpreted
as describing when a squarefree monomial ideal $I = (m_1,\ldots,m_s)$
in $R = k[x_1,\ldots,x_n]$
has a splitting $I = (m_i) + (m_1,\ldots,\hat{m}_i,\ldots,m_s)$ for some
$i$. Precisely, $I = (m_i) + (m_1,\ldots,\hat{m}_i,\ldots,m_s)$
is a splitting
if and only if there exists a variable $x_j$ such that $x_j|m_i$ and
$(m_i) \cap (m_1,\ldots,\hat{m}_i,\ldots,m_s) \subseteq (m_i) \cap I'R$
where by $I'R$ we mean the ideal $I' = I \cap k[x_1,\ldots,\hat{x}_j,\ldots,x_n]$,
viewed as an ideal of $R$. The result
follows from the fact that $\mathcal{I}(\mathcal{H}\backslash \{x_j\}) =I'R$. This reformulation
nicely illustrates that in some cases the hypergraph point of view is conceptually
easier (at least to us) to grasp.
\end{remark}
\begin{example}
The following example illustrates that a hypergraph may not have a splitting
edge. Let $\mathcal{H}$ be the hypergraph on vertex set $\mathcal{X} = \{a,b,c,d,e\}$
with edge set $\mathcal{E} = \{abe,ade,bce,cde\}$. The edge ideal is
then $\mathcal{I}(\mathcal{H}) = (abe, ade, bce, cde).$
By symmetry it suffices to show that any one of the edges is not a splitting edge.
So, consider the edge $E = abe$. Then
\[(x^E) \cap \mathcal{I}(\mathcal{H} \backslash E) = (abde, abce,abcde) = (abde, abce)\]
while
\[(x^E) \cap \mathcal{I}(\mathcal{H} \backslash\{a\}) = (abce), \hspace{.25cm}
(x^E) \cap \mathcal{I}(\mathcal{H} \backslash\{b\}) = ( abde ), ~~\text{and}~~
(x^E) \cap \mathcal{I}(\mathcal{H} \backslash\{e\}) = (0).\]
Thus, there is no vertex $z \in E$ with the property that
$(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E) \subseteq (x^E) \cap \mathcal{I}(\mathcal{H}\backslash \{z\})$.
\end{example}
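Since everything here is squarefree, this computation can be automated by identifying each monomial with its support. The following sketch (our own illustration; all names are assumptions) tests the criterion of Theorem \ref{characterize: splitting facets} for every edge, confirming both that the hypergraph above has no splitting edge and that a leaf edge of a path is one:

```python
def minimal_gens(monomials):
    # minimal generators of a squarefree monomial ideal; monomials as vertex sets
    gens = set(monomials)
    return {g for g in gens if not any(h < g for h in gens)}

def is_contained(A, B):
    # ideal(A) ⊆ ideal(B): every generator in A must be divisible by
    # (i.e. contain the support of) some generator in B
    return all(any(b <= a for b in B) for a in A)

def has_splitting_edge(edges):
    edges = [frozenset(e) for e in edges]
    for E in edges:
        # generators of (x^E) ∩ I(H \ E): lcms of E with the other edges
        left = minimal_gens({E | H for H in edges if H != E})
        for z in E:
            # generators of (x^E) ∩ I(H \ {z}): lcms with edges avoiding z
            right = minimal_gens({E | H for H in edges if z not in H})
            if is_contained(left, right):
                return True
    return False
```

Running this on $\mathcal{E} = \{abe,ade,bce,cde\}$ reports no splitting edge, in agreement with the hand computation above.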
There is a nice class of edges of a simple hypergraph that are easy to identify and
also have the property that they are splitting edges. We now define this class.
\begin{definition} \label{defn: v-leaf}
Let $\mathcal{H}$ be a simple hypergraph. An edge $E$ is a {\bf $v$-leaf} if $E$ contains
a free vertex, that is, $E$ contains a vertex $v \in \mathcal{X}$ such that $v$ does not belong to any
other edge of $\mathcal{H}$.
\end{definition}
\begin{remark} If $\mathcal{H} = G$ is a simple graph, then $v$-leaves are precisely the leaves in the usual sense.
\end{remark}
\begin{corollary} \label{splitting v-leaf}
Suppose $E$ is a $v$-leaf of a hypergraph $\mathcal{H}$. Then $E$
is a splitting edge of $\mathcal{H}$.
\end{corollary}
\begin{proof} If $v$ is the free vertex in $E$, then $\mathcal{H}\backslash E = \mathcal{H}\backslash\{v\}$.
Now apply Theorem \ref{characterize: splitting facets}.
\end{proof}
S. Faridi \cite{Faridi2002} introduced the notion of a leaf for a simplicial
complex $\Delta$. Precisely, a facet $F$ of $\Delta$ is a {\bf leaf} if $F$ is the only
facet of $\Delta$, or there exists a facet $G \neq F$ in $\Delta$ such that
$F \cap F' \subseteq F \cap G$ for all facets $F' \neq F$ in $\Delta$.
We can translate Faridi's definition into hypergraph language; we call the
translated version of Faridi's leaf an $f$-leaf to distinguish it from a $v$-leaf.
\begin{definition} An edge $E$ of a hypergraph $\mathcal{H}$ is an {\bf $f$-leaf} if $E$
is the only edge of $\mathcal{H}$, or if there exists an edge $H \neq E$ of $\mathcal{H}$ such that
$E \cap E' \subseteq E \cap H$ for all edges $E' \neq E$ of $\mathcal{H}$.
\end{definition}
We introduce two types of hypertrees and hyperforests based upon the two notions of leaves.
\begin{definition}
A hypergraph $\mathcal{H}$ is a {\bf $v$-forest}, respectively, an $f$-{\bf forest}, if every
induced sub-hypergraph of $\mathcal{H}$, including $\mathcal{H}$ itself, contains a $v$-leaf, respectively,
an $f$-leaf. If $\mathcal{H}$ is connected, we call $\mathcal{H}$ a {\bf $v$-tree}, respectively,
an {\bf $f$-tree}. When $\mathcal{H}$ is an $f$-forest, the associated simplicial
complex $\Delta(\mathcal{H})$ is called a {\bf simplicial forest}.
\end{definition}
Notice that when $\mathcal{H} = G$ is a simple graph, the notions of $v$-leaf and $f$-leaf coincide.
So, for simple graphs, the notions of a $v$-forest and an $f$-forest coincide with the usual
notion of a forest.
These definitions, however, are not equivalent in a general hypergraph, as illustrated below.
\begin{example}\label{ex: v-leaf}
An $f$-leaf must always contain a free vertex (cf. \cite[Remark 2.3]{Faridi2002}), thus
every $f$-leaf is a $v$-leaf. However, a $v$-leaf need not be an $f$-leaf. For example,
consider the hypergraph $\mathcal{H}$ on $\mathcal{X} = \{a,b,c,d,e,f\}$ with the edge set
$\mathcal{E} = \{abf, bcd, def\} = \{E_1,E_2,E_3\}$.
Each edge is a $v$-leaf since each edge has a vertex not in the other two edges.
However, $\mathcal{H}$ has no $f$-leaf. By symmetry, it is enough to show that $E_1 = abf$ cannot
be an $f$-leaf. Indeed, $E_1 \cap E_2 \not\subseteq E_1 \cap E_3$ and $E_1 \cap E_3 \not\subseteq
E_1 \cap E_2$.
The hypergraph $\mathcal{H}$ is thus an example of a $v$-tree that is not an $f$-tree: $\mathcal{H}$
itself has no $f$-leaf, even though every proper induced sub-hypergraph of $\mathcal{H}$ has one.
\end{example}
Because an $f$-leaf is a $v$-leaf, Corollary \ref{splitting v-leaf}
immediately gives:
\begin{corollary} \label{splitting leaf}
If $E$ is an $f$-leaf of a hypergraph $\mathcal{H}$, then $E$ is a splitting edge of $\mathcal{H}$.
\end{corollary}
\section{Properly-connected hypergraphs}
Given a hypergraph $\mathcal{H}$, we would like to express the numbers $\beta_{i,j}(\mathcal{I}(\mathcal{H}))$
in terms of the graded Betti numbers of edge ideals associated to subgraphs of $\mathcal{H}$;
this would lead to recursive-type formulas.
When $E$ is a splitting edge of a hypergraph $\mathcal{H}$, Theorem
\ref{prop: ekf} implies that $\beta_{i,j}(\mathcal{I}(\mathcal{H}))$ can
be computed from the graded Betti numbers of the ideals $(x^E)$, $\mathcal{I}(\mathcal{H}\backslash E)$,
and $L = (x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)$. The Betti numbers of $(x^E)$ are trivial
to compute, while those of $\mathcal{I}(\mathcal{H}\backslash E)$ already
correspond to the edge ideal of a sub-hypergraph of $\mathcal{H}$. Thus one only
needs to relate the numbers $\beta_{i,j}(L)$
to the Betti numbers of an edge ideal of some other sub-hypergraph. For a general hypergraph,
this appears to be a difficult problem.
The goal of this section is to introduce a family of $d$-uniform hypergraphs,
which we call properly-connected, that among other things enables us to relate the
graded Betti numbers of $L$ to those of an edge ideal associated to a sub-hypergraph
of $\mathcal{H}$.
\begin{definition}
A $d$-uniform hypergraph $\mathcal{H} = (\mathcal{X},\mathcal{E})$ is said to be {\bf properly-connected} if
for any two edges $E$ and $E'$ of
$\mathcal{H}$ with the property that $E \cap E' \neq \emptyset$,
then
\[\operatorname{dist}_{\mathcal{H}}(E,E') = d - |E\cap E'|.\]
Otherwise, we say $\mathcal{H}$
is {\bf not properly-connected}.
\end{definition}
\begin{remark}
Our definition of properly-connected is similar to (but not equivalent to) what
Zheng \cite[Definition 3.14]{Zheng2004} called the {\bf intersection property}
for a simplicial complex. If $\Delta$ is a pure simplicial forest, then $\Delta$ has
the intersection property if for any two facets $F,F' \in \Delta$
the distance between $F$ and $F'$ (defined in terms
of the lengths of chains between the two facets) is determined by $|F \cap F'|$.
\end{remark}
\begin{example}
Consider the $4$-uniform hypergraph $\mathcal{H}$ with edge set
$$\mathcal{E} = \{x_1x_2x_3x_4, x_1x_2x_3x_7, x_1x_2x_6x_7, x_1x_5x_6x_7,
x_1x_5x_6x_8\}.$$
There is a proper irredundant chain of length $4$ from the
edge $E = x_1x_2x_3x_4$ to $E' = x_1x_5x_6x_8$ (to
form the chain, just take the edges as listed in $\mathcal{E}$). Furthermore,
there is no shorter such chain. But $E$ and $E'$ have
a nonempty intersection. So $\mathcal{H}$ is not properly-connected
since $4 = \operatorname{dist}_{\mathcal{H}}(E,E') \neq 4 - |E \cap E'| = 3$.
\end{example}
\begin{example} Every finite simple graph $G$ is properly-connected.
To see this, note that a graph is clearly a $2$-uniform hypergraph.
If $E,E'$ are two edges of $G$ such that $E \cap E' \neq \emptyset$, then
either $E$ and $E'$ are the same edge,
or $E$ and $E'$ share exactly one vertex. In the first case,
$\operatorname{dist}_{G}(E,E') = 2 - |E \cap E'| = 2-2 =0$, while in the second case
$\operatorname{dist}_{G}(E,E') = 2 - |E \cap E'| = 1$.
So, in this sense, properly-connected hypergraphs generalize simple graphs.
\end{example}
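The definition can be checked mechanically: compute the proper-chain distance between every pair of intersecting edges and compare it with $d - |E \cap E'|$. The sketch below (our own illustration, with assumed names; it reuses a breadth-first search over proper chains, ignoring the vertex-distinctness condition, which is harmless here) confirms that a path graph is properly-connected while the $4$-uniform hypergraph of the earlier example is not:

```python
from collections import deque
from itertools import combinations

def proper_distance(edges, E, F):
    # shortest proper chain: |E_i ∩ E_{i+1}| = |E_{i+1}| - 1 at every step
    queue, seen = deque([(E, 0)]), {E}
    while queue:
        cur, dist = queue.popleft()
        if cur == F:
            return dist
        for nxt in edges:
            if nxt not in seen and len(cur & nxt) == len(nxt) - 1:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def is_properly_connected(edges):
    edges = [frozenset(e) for e in edges]
    d = len(edges[0])
    if any(len(e) != d for e in edges):  # must be d-uniform
        return False
    return all(proper_distance(edges, E, F) == d - len(E & F)
               for E, F in combinations(edges, 2) if E & F)
```

In the $4$-uniform example, the only failing pair is $E = x_1x_2x_3x_4$ and $E' = x_1x_5x_6x_8$, exactly as computed by hand.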
Properly-connected hypergraphs are appealing combinatorial objects to study
because within this family, the notions of $v$-leaf and $f$-leaf become equivalent.
As well, splitting edges of properly-connected hypergraphs can be described
combinatorially. We prove both of these assertions.
\begin{theorem}
Suppose $\mathcal{H}$ is a $d$-uniform properly-connected hypergraph, and $E$ is an edge of $\mathcal{H}$. Then
$E$ is a $v$-leaf if and only if $E$ is an $f$-leaf.
\end{theorem}
\begin{proof}
Because we know an $f$-leaf is a $v$-leaf, it suffices to prove the converse.
If $\mathcal{H}$ has only one edge then we are done. So, suppose that $\mathcal{H}$ has
at least two edges. Let $E$ be a $v$-leaf with free vertex $v$.
If no edge of $\mathcal{H}$ meets $E$, then $E$ is automatically an $f$-leaf;
so let $H$ be an edge of $\mathcal{H}$ with $H \cap E \neq \emptyset$.
Since $\mathcal{H}$ is properly-connected, there is a proper
chain $E_0 = E, E_1,\ldots, E_k =H$ from $E$ to $H.$ Because $|E| = |E_1| = d$ and $|E \cap E_1| =
d-1,$ $E \cap E_1 = E \backslash \{v\}.$
To see that $E$ is an $f$-leaf, let $G$ be any other edge of $\mathcal{H}.$
Since $v$ is a free vertex, $E \cap G \subseteq E\backslash\{v\} = E \cap E_1.$
\end{proof}
Let $E$ be an edge of a $d$-uniform properly-connected hypergraph $\mathcal{H}$. If $H$ is
any edge of $\mathcal{H}$ with $\operatorname{dist}_{\mathcal{H}}(E,H)=1$, then $|H \backslash E| = 1$, or
in other words, $H \backslash E = \{z\}$ for some vertex $z$.
Before classifying splitting edges, we introduce the following definition.
\begin{definition}
If $E$ is an edge of a $d$-uniform properly-connected hypergraph $\mathcal{H}$, then
the {\bf vertex neighbor set of $E$} is the following subset of $\mathcal{X}$:
\[N(E) = \bigcup_{\{H \in \mathcal{H} ~|~ \operatorname{dist}_{\mathcal{H}}(E,H) =1\}} H \backslash E.\]
\end{definition}
\begin{example}
When $G$ is a finite simple graph, and $x$ is a vertex, then $N(x)$ denotes
all the neighbors of $x$. If $E = \{u,v\}$ is any edge of $G$, then
$N(E) = (N(u) \cup N(v)) \backslash \{u,v\}$.
\end{example}
\begin{theorem}\label{char: split edge}
Let $E$ be an edge of a $d$-uniform properly-connected hypergraph $\mathcal{H}$,
and suppose $N(E) = \{z_1,\ldots,z_t\}$. Then
$E$ is a splitting edge if and only if there exists a vertex $z \in E$
such that $(E \backslash \{z\}) \cup \{z_i\} \in \mathcal{H}$ for each
$z_i \in N(E)$.
\end{theorem}
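For a simple graph ($d = 2$), the condition $\operatorname{dist}_{\mathcal{H}}(E,H) = 1$ simply means $|E \cap H| = 1$, so the criterion can be checked directly. A sketch (our own illustration; the function name is an assumption):

```python
def is_splitting_edge(edges, E):
    # Criterion for a graph: some z in E has the property that
    # (E \ {z}) ∪ {w} is an edge for every vertex neighbor w of E
    edges = {frozenset(e) for e in edges}
    E = frozenset(E)
    neighbors = {w for H in edges if len(E & H) == 1 for w in H - E}
    return any(all((E - {z}) | {w} in edges for w in neighbors) for z in E)
```

On the path with edges $ab$, $bc$, $cd$, the two leaves are splitting edges, while the middle edge $bc$ fails the criterion: $N(bc) = \{a,d\}$, and no choice of $z \in \{b,c\}$ makes both required edges present. This is consistent with Corollary \ref{splitting v-leaf}.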
The proof of this theorem depends upon the following two lemmas.
\begin{lemma}\label{lem: chains}
Let $\mathcal{H}$ be a $d$-uniform properly-connected hypergraph.
Suppose that $E= E_0 = \{x_1, \ldots, x_d \}$ and $E'$ are edges in $\mathcal{H}$ with
$\operatorname{dist}_{\mathcal{H}}(E,E') = t \leq d$. Then, after relabelling,
there exist edges $E_1, \ldots, E_t$ such that
$E_i = \{y_1, \ldots, y_i, x_{i+1}, \ldots, x_d\},
$ $E_t = E',$ and $y_i \notin E_{j}$ for all $j < i.$
\end{lemma}
\begin{proof}
Since $\operatorname{dist}_{\mathcal{H}}(E,E') = t$, there must be a proper irredundant chain of
edges $E_0 = E, \ldots, E_t = E'$. Since $E_i$ differs from $E_{i+1}$ by exactly one vertex,
for each $i,$ $|E \cap E_i| \geq d-i$ because at most one vertex changes at each stage.
Since $(E_0, \dots, E_t)$ is an irredundant chain and $\mathcal{H}$ is properly-connected, for $i < d$, we must have
\[i = \operatorname{dist}_{\mathcal{H}}(E_0, E_i) = d - |E_0 \cap E_i|.\]
Hence,
$|E_0 \cap E_i| = d-i$ for all $i \leq t$ with $i < d$.
Moreover, if $i = t = d$, then $\operatorname{dist}_\mathcal{H}(E_0,E_i) = d,$ and we have $E_0 \cap E_i =
\emptyset$. That is, $|E_0 \cap E_i| = 0 = d-i$.
We will prove the result using induction on $i$.
Let $E =E_0 = \{x_1,\ldots,x_d\}$, and assume the vertices are labeled so that
$x_1 \notin E_1$. We know that $|E_0 \cap E_1| = d-1$ which implies that $E_1 =
\{y_1, x_2, \ldots, x_d\}$ where $y_1 \notin E_0$, thus
proving the base case.
Now assume that $E_0, \ldots, E_i$ satisfy the claim, i.e., that
$E_i = \{y_1, \ldots, y_i, x_{i+1}, \ldots, x_d\}$ with $y_i \notin E_j$ for all $j < i$.
We know that $|E_i \cap E_{i+1}| = d-1,$ so that $E_{i+1}$ is constructed from $E_i$ by
removing some vertex and adding a vertex that we will call $y_{i+1}$ which is not in $E_i.$
First, we claim that the vertex removed from $E_i$ cannot be one of the $y_j$. If we were to
replace some $y_j$ with the vertex $y_{i+1}$, then $|E_0 \cap E_{i+1}| \geq |E_0 \cap E_i| = d-i$,
which contradicts the fact established above that $|E_0 \cap E_{i+1}| = d-i-1$.
So, after relabelling, we may assume that $y_{i+1}$ replaces $x_{i+1}$. If $y_{i+1} = x_j$ for some $j \le i$, then
$|E_0 \cap E_{i+1}| =
|E_0 \cap E_i|$, which is a contradiction as before. Therefore,
$y_{i+1} \notin E_j$ for any $j \leq i.$
\end{proof}
\begin{lemma}\label{intersectionideal}
Let $E$ be any edge of a $d$-uniform
properly-connected hypergraph $\mathcal{H}$. Then
\begin{eqnarray*}
(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)& =& (\{\operatorname{lcm}(x^E,x^H) ~|~ H \in \mathcal{H} ~\mbox{and}~
\operatorname{dist}_{\mathcal{H}}(E,H) =1\}) + \\
&&(\{\operatorname{lcm}(x^E,x^H) ~|~ H \in \mathcal{H} ~\mbox{and}~\operatorname{dist}_{\mathcal{H}}(E,H) \geq d+1\}).
\end{eqnarray*}
\end{lemma}
\begin{proof} Set
\begin{eqnarray*}
A & =& (\{\operatorname{lcm}(x^E,x^H) ~|~ H \in \mathcal{H} \backslash E ~\mbox{and}~\operatorname{dist}_{\mathcal{H}}(E,H) \leq d \}) ~~\mbox{and}\\
B & =&(\{\operatorname{lcm}(x^E,x^H) ~|~ H \in \mathcal{H} ~\mbox{and}~\operatorname{dist}_{\mathcal{H}}(E,H) \geq d+1\}).
\end{eqnarray*}
By definition $(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E) = A + B$. Thus, if
we set
\[C = (\{\operatorname{lcm}(x^E,x^H) ~|~ H \in \mathcal{H} ~\mbox{and}~\operatorname{dist}_{\mathcal{H}}(E,H) =1\}),\] then
it suffices to show that
$A = C$. Since $C \subseteq A$ is clear, we now show the reverse containment.
Let $x^{E \cup H} = \operatorname{lcm}(x^E,x^H)$ be a generator of $A$, i.e., suppose $H \in \mathcal{H} \backslash E$ and
$t = \operatorname{dist}_{\mathcal{H}}(E,H) \leq d$. Note that we can assume that $2 \leq t \leq d$
because if $t= \operatorname{dist}_{\mathcal{H}}(E,H) = 1$, then $x^{E \cup H} \in C$. So there exists
a proper irredundant chain $E = H_0,H_1,H_2,\ldots,H_t = H$ whose length is minimal
among all proper irredundant chains from $E$ to $H$.
Now if $E = \{x_1,\ldots,x_d\}$,
then $H_1 = \{x_1,\ldots,\hat{x}_i,\ldots,x_d,z\}$ where by $\hat{x}_i$ we mean
the
vertex $x_i$ is removed, and $z$ is not one of $x_1,\ldots,x_d$.
From this observation, we have
\[\operatorname{lcm}(x^E,x^{H_1}) = x^{E \cup \{z\}} = x^Ez.\]
Now $x^Ez$ is a generator of $C$.
To finish the proof, Lemma \ref{lem: chains} implies that $z \in H_i$
for $i = 2,\ldots,t$. Therefore, $\operatorname{lcm}(x^E,x^{H_i}) = x^{E \cup H_i}$ is divisible
by $x^Ez$, and thus is in $C$. In particular $x^{E \cup H} \in C$.
\end{proof}
\noindent{\it Proof of Theorem \ref{char: split edge}.}
Suppose that $E$ is a splitting edge. By Theorem \ref{characterize: splitting facets}
there is a vertex $z \in E$ such that
$(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E) \subseteq
(x^E) \cap \mathcal{I}(\mathcal{H}\backslash\{z\})$.
Let $z_i \in N(E)$. We will show that $(E \backslash \{z\}) \cup \{z_i\}$ is an edge of
$\mathcal{H}\backslash \{z\} \subseteq \mathcal{H}$. Since $z_i \in N(E)$, there exists an edge $H$
with $\operatorname{dist}_{\mathcal{H}}(E,H) = 1$ such that $H \backslash E = \{z_i\}$.
Thus, $x^{E \cup H}$ is a generator of $(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)$.
We thus must have $x^{E \cup H} \in (x^E) \cap \mathcal{I}(\mathcal{H}\backslash\{z\})$.
Hence there is an edge $H' \in \mathcal{H}\backslash \{z\}$ such that $E \cup H = E \cup H'$. Because
$|E \cap H| = d-1$, we must have that $|E \cap H'| = d-1$.
Since $z \not\in H'$ and $z_i \not\in E$, we must have
$H' = (E \backslash \{z\}) \cup \{z_i\}$. So, $(E \backslash \{z\}) \cup \{z_i\}
\in \mathcal{H}\backslash\{z\}$ as desired.
Conversely, suppose there exists a vertex $z \in E$ such
that $(E \backslash \{z\}) \cup \{z_i\} \in \mathcal{H}$ for each
$z_i \in N(E)$. Let $x^L$ be any minimal generator of $(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)$.
By Lemma \ref{intersectionideal}, we have $L = E \cup H$ with $\operatorname{dist}_{\mathcal{H}}(E,H) = 1$
or $L = E \cup H$ with $\operatorname{dist}_{\mathcal{H}}(E,H) \geq d+1$. If $\operatorname{dist}_{\mathcal{H}}(E,H) \geq d+1$,
then $z \not\in H$ since $E \cap H = \emptyset$. So $H \in \mathcal{H}\backslash \{z\}$,
and hence $x^L \in (x^E) \cap \mathcal{I}(\mathcal{H}\backslash \{z\})$. So,
suppose $L = E \cup H$ with $\operatorname{dist}_{\mathcal{H}}(E,H)=1$. Then there exists
$z_i \in N(E)$ such that $E \cup H = E \cup \{z_i\}$. By our hypothesis, the
edge $E' = (E \backslash \{z\}) \cup \{z_i\} \in \mathcal{H}$. But then $E' \in \mathcal{H}\backslash \{z\}$.
Furthermore, $L = E \cup H = E \cup E'$. So $x^L \in (x^E) \cap \mathcal{I}(\mathcal{H}\backslash \{z\})$.
We have now shown that $(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E) \subseteq
(x^E) \cap \mathcal{I}(\mathcal{H}\backslash \{z\})$, so by Theorem \ref{characterize: splitting facets}
the edge $E$ must be a splitting edge.
\qed
\begin{example}
We give an example of a $3$-uniform properly-connected hypergraph
which has a splitting edge that is not a $v$-leaf. Let $\mathcal{H}$ be the hypergraph
with edge set
\[\mathcal{E} = \{x_1x_2x_3,x_1x_2x_4,x_1x_3x_5,x_2x_3x_4,x_2x_3x_5,x_3x_4x_5\}.\]
One can verify that $\mathcal{H}$ is properly connected by showing that
$\operatorname{dist}_{\mathcal{H}}(E,E') = 3 - |E \cap E'|$ for every pair of edges in $\mathcal{E}$.
Now $E = x_1x_2x_3$ is not a $v$-leaf because it does not contain a free vertex.
We can use Theorem \ref{char: split edge} to verify that $E$ is a splitting edge.
In this case $N(E) = \{x_4,x_5\}$ since the edges of distance one from
$E$ are $\{x_1x_2x_4,x_1x_3x_5,x_2x_3x_4,x_2x_3x_5\}$. Then $E$
is a splitting edge since $(E\backslash\{x_1\}) \cup \{x_4\} = x_2x_3x_4$ and $(E\backslash\{x_1\}) \cup \{x_5\} = x_2x_3x_5$ are both edges of $\mathcal{H}$. Note that even when
$E$ is a splitting edge, the hypergraph $\mathcal{H}\backslash E$ may fail to
be properly-connected. In this case, if we remove $E$ from $\mathcal{H}$, the resulting
hypergraph fails to be properly-connected because edges $E_1= x_1x_2x_4$ and $E_2= x_1x_3x_5$
intersect at $x_1$, but there is no proper chain of length $2 = 3 -|E_1 \cap E_2|$
in $\mathcal{H}\backslash E$ between these two edges.
\end{example}
\begin{notation} \label{H'}
Suppose $E$ is an edge of a $d$-uniform properly-connected hypergraph $\mathcal{H}$.
For simplicity of notation,
throughout the rest of the paper, when not specified,
$\mathcal{H}'$ refers to the sub-hypergraph
\[\mathcal{H}' = \{H \in \mathcal{H} ~|~ \operatorname{dist}_\mathcal{H}(E,H) \ge d+1\}.\]
\end{notation}
The following lemma tells us that the properly-connected property is passed on to $\mathcal{H}'$.
\begin{lemma}\label{H'prop-con}
If $E$ is an edge of a $d$-uniform properly-connected hypergraph $\mathcal{H}$, then
$\mathcal{H}'$ is also a $d$-uniform properly-connected hypergraph.
\end{lemma}
\begin{proof}
Because it is clear that $\mathcal{H}'$ is a $d$-uniform hypergraph, it suffices
to show that $\mathcal{H}'$ is properly connected. So, suppose that the edges $H,H' \in \mathcal{H}'$
have the property that $H \cap H' \neq \emptyset$. Because they are also edges of $\mathcal{H}$,
there exists a chain $H = H_0,H_1,\ldots,H_t =H'$ in $\mathcal{H}$
such that $t = \operatorname{dist}_{\mathcal{H}}(H,H') = d - |H\cap H'|$. If all the
edges $H_i$ for $i=1,\ldots,t-1$ are also in $\mathcal{H}'$,
then it is clear that $t = \operatorname{dist}_{\mathcal{H}'}(H,H') = d-|H\cap H'|$.
So, suppose there is an edge $H_i$ in the chain with $i \in \{1,\ldots,t-1\}$
and $H_i \not\in \mathcal{H}'$. Then $s = \operatorname{dist}_{\mathcal{H}}(E,H_i) \leq d$.
Let $E=E_0,E_1,\ldots,E_s=H_i$ be the proper irredundant chain
in $\mathcal{H}$ between $E$ and $H_i$. Then $\operatorname{dist}_{\mathcal{H}}(E_1,H_i) = s-1 < d$.
But this means that $E_1 \cap H_i \neq \emptyset$. Let $x \in E_1 \cap H_i$.
By Lemma \ref{lem: chains} the vertex $x$ must be in either $H$ or $H'$.
Without loss of generality, assume $x \in H$. But then $\operatorname{dist}_{\mathcal{H}}(E_1,H) =
d - |H \cap E_1| \leq d-1$. But since $E$ is distance one from $E_1$,
this means there is a proper chain of length $d$ from $E$ to $H$, contradicting
the fact that $H \in \mathcal{H}'$.
\end{proof}
As a byproduct of Lemma \ref{intersectionideal}, we can rewrite
$(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)$ in terms of the edge ideal of $\mathcal{H}'$.
\begin{corollary}
Let $E$ be any edge of a $d$-uniform properly-connected hypergraph $\mathcal{H}$,
and suppose $N(E) = \{z_1,\ldots,z_t\}$. Then
\[(x^E) \cap \mathcal{I}(\mathcal{H} \backslash E) = x^E((z_1,\ldots,z_t) + \mathcal{I}(\mathcal{H}')).\]
\end{corollary}
\begin{proof}
It is straightforward to verify that
\[x^E(z_1,\ldots,z_t) = (\{\operatorname{lcm}(x^E,x^H) ~|~ H\in \mathcal{H} ~\mbox{and}~\operatorname{dist}_{\mathcal{H}}(E,H) =1\}).\]
If $H \in \mathcal{H} \backslash E$ with $\operatorname{dist}_{\mathcal{H}}(E,H) \geq d+1$, then
because $\mathcal{H}$ is properly-connected, $E \cap H = \emptyset$.
So
\[x^E\mathcal{I}(\mathcal{H}')=
(\{\operatorname{lcm}(x^E,x^H) ~|~ H \in \mathcal{H} ~\mbox{and}~\operatorname{dist}_{\mathcal{H}}(E,H) \geq d+1\}).\]
The result now follows from Lemma \ref{intersectionideal}.
\end{proof}
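As a sanity check (our illustration, reusing the $3$-uniform hypergraph of the earlier example, where $E = x_1x_2x_3$ and $N(E) = \{x_4,x_5\}$), note that every other edge of that hypergraph meets $E$, so every edge is at distance at most $d = 3$ from $E$; hence $\mathcal{H}'$ is empty and $\mathcal{I}(\mathcal{H}') = (0)$. The corollary then gives:

```latex
% Our verification of the corollary on the hypergraph of the earlier
% example, with E = x_1x_2x_3, N(E) = {x_4, x_5}, and H' empty:
\begin{align*}
(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)
  &= x_1x_2x_3\big((x_4,x_5) + (0)\big)
   = (x_1x_2x_3x_4,\ x_1x_2x_3x_5).
\end{align*}
% Direct computation agrees: the lcm's of x_1x_2x_3 with the edges
% x_1x_2x_4, x_1x_3x_5, x_2x_3x_4, x_2x_3x_5, x_3x_4x_5 are
% x_1x_2x_3x_4, x_1x_2x_3x_5, x_1x_2x_3x_4, x_1x_2x_3x_5, x_1x_2x_3x_4x_5,
% whose minimal generators are exactly x_1x_2x_3x_4 and x_1x_2x_3x_5.
```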
When $E$ is an edge of a properly-connected hypergraph, we can also
describe the graded Betti numbers of $(x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)$
in terms of those of $\mathcal{I}(\mathcal{H}')$.
\begin{lemma} \label{betti intersection}
Let $E$ be any edge of a $d$-uniform properly-connected hypergraph $\mathcal{H}$.
Set $t = |N(E)|$. Then
\[\beta_{i-1,j}((x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)) =
\sum_{l=0}^i \binom{t}{l}
\beta_{i-1-l,j-d-l}(\mathcal{I}(\mathcal{H}'))\]
where $\beta_{-1,j}(\mathcal{I}(\mathcal{H}')) = 1$ if $j = 0$ and $0$ otherwise.
\end{lemma}
\begin{proof} If $N(E) = \{z_1,\ldots,z_t\}$,
then by the previous corollary,
\begin{eqnarray*}
\beta_{i-1,j}((x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)) &= &\beta_{i-1,j}(x^E((z_1,\ldots,z_t)+
\mathcal{I}(\mathcal{H}'))) \\
&=& \beta_{i-1,j-d}((z_1,\ldots,z_t)+ \mathcal{I}(\mathcal{H}'))\\
&=&
\beta_{i,j-d}(R/((z_1,\ldots,z_t)+ \mathcal{I}(\mathcal{H}'))).
\end{eqnarray*}
None of the generators of $\mathcal{I}(\mathcal{H}')$ are divisible by $z_i$ for
$i = 1,\ldots,t$. To see this, suppose that $x^H \in \mathcal{I}(\mathcal{H}')$
is divisible by some $z_i$, i.e., $z_i$ is a vertex of the edge $H$.
Now there is an edge $H_i$ with $z_i \in H_i$ and $\operatorname{dist}_{\mathcal{H}}(E,H_i)=1$.
Since $H \cap H_i \neq \emptyset$
and because
$\mathcal{H}$ is properly-connected, $p =\operatorname{dist}_{\mathcal{H}}(H,H_i) = d - |H \cap H_i| < d$.
So there is a proper irredundant chain $H_i = H'_0,\ldots,H'_p = H$. But
then $E,H_i=H'_0,\ldots,H'_p=H$ forms a proper irredundant chain of length $p+1 \leq d$,
and thus $\operatorname{dist}_{\mathcal{H}}(E,H) \leq d$, contradicting the fact that $\operatorname{dist}_{\mathcal{H}}(E,H)\geq d+1$.
We modify our notation and write $R = k[z_1,\ldots,z_t,x_1,\ldots,x_s]$ where
$\{x_1,\ldots,x_s\} = \mathcal{X} \backslash N(E)$.
Then
\[ R/((z_1,\ldots,z_t) + \mathcal{I}(\mathcal{H}')) \cong R_1/(z_1,\ldots,z_t) \otimes_k
R_2/\mathcal{I}(\mathcal{H}')\]
where $R_1 = k[z_1,\ldots,z_t]$ and $R_2 = k[x_1,\ldots,x_s]$, and
where we view $\mathcal{I}(\mathcal{H}')$ as an ideal of $R$ and as the ideal of $R_2$
generated by the same elements.
By tensoring the resolutions of $R_1/(z_1,\ldots,z_t)$ and $R_2/\mathcal{I}(\mathcal{H}')$
together we get (see, for example, Lemma 2.1 and Corollary 2.2 of \cite{JK})
\small
\[\beta_{i,j-d}(R/L) =
\sum_{l_1=0}^i\sum_{l_2 =0}^{j-d} \beta_{l_1,l_2}(R_1/(z_1,\ldots,z_t))
\beta_{i-l_1,j-d-l_2}(R_2/\mathcal{I}(\mathcal{H}'))\]
\normalsize
where $L = (z_1,\ldots,z_t) + \mathcal{I}(\mathcal{H}')$.
Since $z_1,\ldots,z_t$ is a regular sequence on $R_1$,
\[
\beta_{l_1,l_2}(R_1/(z_1,\ldots,z_t)) =
\left\{
\begin{array}{ll}
0 & \mbox{if $l_2 \neq l_1$} \\
\binom{t}{l} & \mbox{if $l = l_2 = l_1$.}
\end{array}
\right.\]
As a consequence, the previous expression reduces to
\[\beta_{i,j-d}(R/L) = \sum_{l=0}^i \binom{t}{l}
\beta_{i-l,j-d-l}(R_2/\mathcal{I}(\mathcal{H}')).\]
We are now done since
\[
\beta_{i-l,j-d-l}(R_2/\mathcal{I}(\mathcal{H}')) = \beta_{i-l,j-d-l}(R/\mathcal{I}(\mathcal{H}'))
= \beta_{i-l-1,j-d-l}(\mathcal{I}(\mathcal{H}'))\]
for all $l$ (where we adopt the convention that
$\beta_{-1,j}(\mathcal{I}(\mathcal{H}')) = 1$ if $j=0$ and $0$ if $j \neq 0$).
\end{proof}
When $\mathcal{H}$ is a properly-connected hypergraph,
we obtain the following recursive like formula for $\beta_{i,j}(\mathcal{I}(\mathcal{H}))$.
This result generalizes a similar result for simple
graphs found in \cite{HaVanTuyl2005}.
\begin{theorem} \label{theoremsplitting}
Let $\mathcal{H}$ be a $d$-uniform properly-connected hypergraph and let
$E$ be a splitting edge of $\mathcal{H}$. Suppose
$\mathcal{H}' = \{H \in \mathcal{H} ~|~ \operatorname{dist}_{\mathcal{H}}(E,H) \geq d+1\}$, and $t= |N(E)|$.
Then for all $i \geq 1$
\[\beta_{i,j}(\mathcal{I}(\mathcal{H})) = \beta_{i,j}(\mathcal{I}(\mathcal{H}\backslash E))
+ \sum_{l=0}^i \binom{t}{l}
\beta_{i-1-l,j-d-l}(\mathcal{I}(\mathcal{H}')).\]
Here, $\beta_{-1,j}(\mathcal{I}(\mathcal{H}')) = 1$ if $j =0$ and $0$ if $j \neq 0$.
\end{theorem}
\begin{proof} Since $E$ is a splitting edge, by Theorem \ref{prop: ekf} we have
\[\beta_{i,j}(\mathcal{I}(\mathcal{H})) = \beta_{i,j}((x^E))+ \beta_{i,j}(\mathcal{I}(\mathcal{H}\backslash E))
+ \beta_{i-1,j}((x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)).\]
When $i \geq 1$, $\beta_{i,j}((x^E)) = 0$. Now substitute
the formula of Lemma \ref{betti intersection} into the last expression.
\end{proof}
\section{Triangulated properly-connected hypergraphs}
If $\mathcal{H}$ is a properly-connected hypergraph with splitting edge $E$,
the sub-hypergraphs $\mathcal{H}\backslash E$ and $\mathcal{H}'$ in Theorem \ref{theoremsplitting}
may or may not have a splitting edge. In fact, $\mathcal{H}\backslash E$
may not even be a properly-connected hypergraph.
These facts prevent us from
using Theorem \ref{theoremsplitting} to recursively compute $\beta_{i,j}(\mathcal{I}(\mathcal{H}))$
for an arbitrary hypergraph. One is led to ask whether there is a subfamily of properly-connected
hypergraphs for which the formula is recursive. In this section, we
introduce one such family which generalizes the notion of a chordal graph.
In \cite{HaVanTuyl2005} it was shown that
hyperforests (i.e., simplicial forests in the sense of \cite{Faridi2002}) form a family
of hypergraphs
for which the graded Betti numbers can be computed
recursively. Since a hyperforest need not be properly-connected, the results
of this section give a partial generalization of \cite{HaVanTuyl2005}.
We begin by recalling the definition of a chordal graph.
\begin{definition}
A graph $G$ is called {\bf chordal} if every cycle of length 4 or larger has
a chord, that is, an edge joining two nonadjacent vertices in the
cycle.
\end{definition}
An alternative characterization for chordal graphs can be found in \cite{PTW}
(due to Dirac \cite{D}). This characterization will prove more
suitable when generalizing to properly-connected hypergraphs.
\begin{theorem} \label{chordal-triangulated}
A graph $G$ is chordal if and only if every induced subgraph of $G$ contains
a vertex $v$ whose neighborhood $N(v)$ is a complete graph.
\end{theorem}
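To illustrate the criterion with a small (non)example of our own, consider the smallest non-chordal graph:

```latex
% Our (non)example of the criterion: the 4-cycle.
Let $C_4$ be the $4$-cycle with edges $\{a,b\},\{b,c\},\{c,d\},\{d,a\}$.
Then $N(a)=\{b,d\}$, and $b,d$ are nonadjacent, so $N(a)$ does not
induce a complete graph; by symmetry the same holds at every vertex.
Thus $C_4$ fails the criterion, in agreement with the fact that $C_4$
is a chordless cycle of length $4$ and hence not chordal. Adding the
chord $\{b,d\}$ makes $N(a)$ (and likewise $N(c)$) induce a complete
graph, and the resulting graph is chordal.
```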
In the above theorem, because $v$ is adjacent to every vertex in $N(v)$,
the induced graph on $N(v) \cup \{v\}$ is also a complete graph.
To extend this definition, we first introduce
an analog of complete graphs.
\begin{definition}
The {\bf $d$-complete hypergraph of order $n$}, denoted by $\mathcal{K}_n^d$,
is the hypergraph consisting of all the $d$-subsets of
the vertex set $\mathcal{X}$, where
$|\mathcal{X}| = n$. When $d = 2$, then $\mathcal{K}_n^2$ is the usual complete
graph $\mathcal{K}_n$. When $n < d$, we consider $\mathcal{K}_n^d$ as the
hypergraph with $n$ isolated vertices.
If $n=0$, then $\mathcal{K}_0^d$ is the empty graph, which
we view as the $d$-complete hypergraph of order $0$.
\end{definition}
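For instance (a small illustration of ours):

```latex
% The 3-complete hypergraph of order 4 on X = {x_1, x_2, x_3, x_4}
% consists of all binom(4,3) = 4 of the 3-subsets of X:
\[
\mathcal{K}_4^3 = \{\, x_1x_2x_3,\ x_1x_2x_4,\ x_1x_3x_4,\ x_2x_3x_4 \,\}.
\]
% In general, K_n^d has binom(n,d) edges whenever n >= d.
```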
\begin{definition}
Two distinct vertices $x,y \in \mathcal{X}$ are {\bf neighbors} if there is an edge $E \in \mathcal{H}$
such that $x,y \in E$. For any vertex $x \in \mathcal{X}$, the
{\bf neighborhood of $x$}, denoted $N(x)$, is the set
\[N(x) = \{y \in \mathcal{X} ~|~ \mbox{$y$ is a neighbor of $x$}\}.\]
\end{definition}
Observe that if $E$ is any edge of $\mathcal{H}$ and $x \in E$, then $E \subseteq N(x) \cup \{x\}$.
\begin{definition}
A $d$-uniform properly-connected hypergraph $\mathcal{H}$ is
said to be {\bf triangulated} if for every nonempty subset $\mathcal{Y} \subseteq \mathcal{X}$, the
induced subhypergraph $\mathcal{H}_\mathcal{Y}$ contains a vertex $x \in \mathcal{Y} \subseteq \mathcal{X}$
such that the induced hypergraph of $\mathcal{H}_\mathcal{Y}$ on
$N(x) \cup \{x\}$ is a $d$-complete hypergraph of order $|N(x)|+1$.
\end{definition}
By virtue of Theorem \ref{chordal-triangulated}, the simple graphs that are triangulated are precisely
the chordal graphs.
We shall show that properly-connected hyperforests are triangulated hypergraphs.
\begin{theorem}
Suppose that $\mathcal{H}$ is a $d$-uniform properly-connected hypergraph that is a
$v$-forest (or equivalently, $f$-forest). Then $\mathcal{H}$ is a triangulated hypergraph.
\end{theorem}
\begin{proof}
For any $\mathcal{Y} \subseteq \mathcal{X}$, the induced hypergraph $\mathcal{H}_{\mathcal{Y}}$ must contain a $v$-leaf, say $E$.
Since $E$ is a $v$-leaf,
$E$ contains a free vertex, say $x$. Suppose $E = \{x,x_2,\ldots,x_d\}$. Then
$N(x) = \{x_2,\ldots,x_d\}$. But the induced hypergraph of $\mathcal{H}_{\mathcal{Y}}$ on $N(x) \cup\{x\}$ is
simply the edge $E$, which is the
$d$-uniform complete hypergraph $\mathcal{K}_{d}^d$. So $\mathcal{H}$ is a triangulated hypergraph.
\end{proof}
The following lemma is the key result needed to prove that Theorem \ref{theoremsplitting}
is recursive for triangulated hypergraphs.
\begin{lemma} \label{triangulatedlemma}
Let $\mathcal{H}$ be a triangulated hypergraph. Then there exists
an edge $E \in \mathcal{H}$ such that
\begin{enumerate}
\item[$(a)$] $E$ is a splitting edge, and
\item[$(b)$] the sub-hypergraphs $\mathcal{H}\backslash E$ and
$\mathcal{H}'$ are triangulated hypergraphs.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $\mathcal{H}$ is a triangulated hypergraph, there exists a vertex $x \in \mathcal{X}$ such
that the induced hypergraph on $N(x) \cup \{x\}$ is a $d$-complete hypergraph.
Let $E$ be any edge of $\mathcal{H}$ that contains $x$. We will show that $E$ is
an edge that satisfies $(a)$ and $(b)$.
$(a)$
Suppose that $N(E) = \{z_1,\ldots,z_t\}$. For each $z_i \in N(E)$, there
must be an edge $E_i \in \mathcal{H}$ such that $\operatorname{dist}_{\mathcal{H}}(E,E_i) = 1$
and $E \cup E_i = E \cup \{z_i\}$. For each $i$,
either $x \in E_i$ or $x \not\in E_i$. If $x \not\in E_i$, then
$(E \backslash \{x\}) \cup \{z_i\} = E_i \in \mathcal{H}$.
Now, suppose $x \in E_i$.
Since $z_i \in E_i$, we have $z_i \in N(x)$. If $E = \{x,x_2,\ldots,x_d\}$, then
$\{x_2,\ldots,x_d,z_i\} \subseteq N(x)$ is a subset of size $d$ in $N(x) \cup \{x\}$.
But since the induced hypergraph on $N(x) \cup \{x\}$ is a $d$-complete hypergraph,
that means that $\{x_2,\ldots,x_d,z_i\}$ is an edge of $\mathcal{H}$. This edge
is simply $(E \backslash \{x\}) \cup \{z_i\}$. So, $E$ is a splitting edge
by Theorem \ref{char: split edge}.
$(b)$ We consider $\mathcal{H}\backslash E$ first.
We begin by showing that $\mathcal{H}\backslash E$ is properly-connected.
If $H,H' \in \mathcal{H}\backslash E$ with $H \cap H' \neq \emptyset$,
then in $\mathcal{H}$ we also have $H \cap H' \neq \emptyset$. Since
$\mathcal{H}$ is properly-connected, we can find a proper irredundant chain
$H=E_0,E_1,\ldots,E_t=H'$ where $t = \operatorname{dist}_{\mathcal{H}}(H,H') = d-|H\cap H'|$.
If $E \not\in \{E_1,\ldots,E_{t-1}\}$, then this chain remains
a proper irredundant chain in $\mathcal{H}\backslash E$ giving
us $t = \operatorname{dist}_{\mathcal{H}\backslash E}(H,H')=d-|H\cap H'|$.
So suppose $E \in \{E_1,\ldots,E_{t-1}\}$. Let $x \in E$ be
the vertex such that the induced hypergraph on $N(x) \cup
\{x\}$ is a $d$-complete hypergraph. Let $E_{i-1}$ and $E_{i+1}$
be the edges that appear immediately before and after $E$, respectively,
in the chain $E_0,\ldots,E_t$. There then exists a vertex $z_{i-1} \in E_{i-1}$
such that $\{z_{i-1}\} = E_{i-1} \setminus E$, and a vertex
$z_{i+1} \in E_{i+1}$ such that $\{z_{i+1}\} = E_{i+1} \setminus E$.
By Lemma \ref{lem: chains} there are three cases to consider:
(i) $x \in E_{i-1},E,$ and $E_{i+1}$, (ii) $x \in E_{i-1}$ and $E$, but $x \not\in E_{i+1}$,
or (iii) $x \not\in E_{i-1}$ but $x \in E$ and $E_{i+1}$ (Lemma \ref{lem: chains}
shows that when moving through the chain, one removes one vertex from
an edge and replaces it with another vertex, and furthermore,
once you add a vertex to a chain, this vertex appears in all later edges
in the chain.) In case (i), let $E' = E_{i-1} \cap E_{i+1}$. Note
that $|E'|=d-2$. Then $E' \cup \{z_{i-1},z_{i+1}\}$ is a subset of $N(x) \cup \{x\}$
of size $d$, and because the induced graph on $N(x) \cup \{x\}$ is
a $d$-complete hypergraph, this means that
$E'' = E' \cup \{z_{i-1},z_{i+1}\}$ is an edge of $\mathcal{H}$. The edge $E''$ is
distance 1 from $E_{i-1}$ and $E_{i+1}$. We can replace $E$ in
the chain $E_0,\ldots,E_t$ with $E''$ and still have a proper chain of length
$t$ in $\mathcal{H}\backslash E$ from $H$ to $H'$. Moreover, this chain must be irredundant,
because if it were shorter, then this would give rise to a shorter chain in $\mathcal{H}$,
contradicting the fact that $t$ is the length of the shortest chain.
In case $(ii)$, let $z$ be the vertex of $E$ such that $\{z\} = E \setminus E_{i-1}$.
Then $E_{i-1} \setminus \{x\}$ and $z$ are in $N(x) \subseteq N(x) \cup \{x\}$.
Thus $E' = (E_{i-1} \setminus \{x\})\cup \{z\}$ is also an edge of $\mathcal{H}$.
Furthermore, $E'$ is distance one away from $E_{i-1}$ and
$E_{i+1}$ (because $z$ is added to $E$, we have $z \in E_{i+1}$). So,
we can replace $E$ in the chain by $E'$ and get a chain of the correct
length in $\mathcal{H}\backslash E$. Finally, in case $(iii)$, let $z$ be the
vertex in $E$ such that $\{z\} = E \setminus E_{i+1}$. Then
$z \in N(x)$ and $(E_{i+1} \setminus\{x\}) \subseteq N(x)$.
This means that $E' = (E_{i+1}\setminus\{x\}) \cup \{z\}$
is an edge of $\mathcal{H}$. But this edge is distance one from both $E_{i-1}$
and $E_{i+1}$, so, as we did before, we can replace $E$ with $E'$ to
get a chain of the desired length.
We can now show that $\mathcal{H} \backslash E$ is also triangulated.
If the vertex $x \in E$ only appears in $E$, then $E$ is
a $v$-leaf. Then $\mathcal{H} \backslash E
= \mathcal{H} \backslash \{x\} = \mathcal{H}_{\mathcal{X}\backslash \{x\}}$, and it is clear that
$\mathcal{H}_{\mathcal{X} \backslash \{x\}}$ is
a triangulated hypergraph. So, suppose that
there are two or more edges that contain
$x$. If $\mathcal{Y} \subseteq \mathcal{X}$
with $x \not\in \mathcal{Y}$, then the induced hypergraph of $\mathcal{H}\backslash E$ on $\mathcal{Y}$
is the same as the induced hypergraph of $\mathcal{H}$ on $\mathcal{Y}$, so there exists
a vertex $z \in \mathcal{Y}$ such that the induced hypergraph on $N(z) \cup \{z\}$ is a $d$-complete
hypergraph. It remains to consider the case when $x \in \mathcal{Y}$. Let $N_\mathcal{Y}(x)$ denote the neighbors
of $x$ in $(\mathcal{H}\backslash E)_{\mathcal{Y}}$. Note that $N_\mathcal{Y}(x)\cup \{x\} \subseteq N(x) \cup \{x\}$. Since
the induced hypergraph on $N(x) \cup \{x\}$ is a $d$-complete hypergraph, any
induced hypergraph on a subset of $N(x) \cup \{x\}$ is also a $d$-complete hypergraph. So
the induced hypergraph $(\mathcal{H}\backslash E)_{N_\mathcal{Y}(x) \cup \{x\}}$ is a $d$-complete hypergraph.
Thus $\mathcal{H} \backslash E$ is triangulated.
Finally, by Lemma \ref{H'prop-con} we know that $\mathcal{H}'$ is properly-connected.
The reason that $\mathcal{H}'$ is triangulated follows from the fact that
\[ \mathcal{H}' = \mathcal{H} \backslash \{x,x_2,\ldots,x_d,z_1,\ldots,z_t\}
= \mathcal{H}_{\mathcal{X} \backslash \{x,x_2,\ldots,x_d,z_1,\ldots,z_t\}}\]
where
$E = \{x,x_2,\ldots,x_d\}$ and $N(E) = \{z_1,\ldots,z_t\}$.
\end{proof}
We come to the main result of this section.
\begin{theorem}
Suppose that $\mathcal{H}$ is a $d$-uniform triangulated hypergraph. Then the
graded Betti numbers of $\mathcal{I}(\mathcal{H})$ can be computed recursively
using the formula
\[\beta_{i,j}(\mathcal{I}(\mathcal{H})) = \beta_{i,j}(\mathcal{I}(\mathcal{H}\backslash E))
+ \sum_{l=0}^i \binom{t}{l}
\beta_{i-1-l,j-d-l}(\mathcal{I}(\mathcal{H}'))\]
where $E$ is a splitting edge, $t = |N(E)|$, and
$\mathcal{H}'$ and $\mathcal{H} \backslash E$ are also $d$-uniform triangulated hypergraphs.
Here, $\beta_{-1,j}(\mathcal{I}(\mathcal{H}')) = 1$ if $j =0$ and $0$ if $j \neq 0$.
\end{theorem}
\begin{proof} By Lemma \ref{triangulatedlemma}, the triangulated hypergraph
$\mathcal{H}$ has a splitting edge $E$. Furthermore, since both hypergraphs $\mathcal{H}\backslash E$
and $\mathcal{H}'$ are triangulated hypergraphs, they also have splitting edges. Thus,
by repeatedly using the formula of Theorem \ref{theoremsplitting} we get the
recursive formula.
\end{proof}
It is well known that the graded Betti numbers for an arbitrary monomial ideal
may depend
upon the characteristic of $k$. However, as a consequence of the above formula
we obtain the following corollary.
\begin{corollary}\label{cor.hyperchar}
Suppose that $\mathcal{H}$ is a triangulated hypergraph. Then the graded Betti numbers of $\mathcal{I}(\mathcal{H})$ are
independent of the characteristic of the ground field and can be computed recursively.
\end{corollary}
When restricted to simple graphs, we get a particularly nice corollary.
\begin{corollary}\label{cor.graphchar}
Suppose that $G$ is a chordal graph. Then the graded Betti numbers
of $\mathcal{I}(G)$ are independent of the characteristic of the ground field and can be computed recursively.
\end{corollary}
\noindent Jacques \cite{J} and Jacques and Katzman \cite{JK} first proved
Corollary \ref{cor.graphchar} in the special case that $G$ is a forest, a subclass of chordal graphs.
\section{Properly-connected hypergraphs and regularity}
In this section we investigate the Castelnuovo-Mumford regularity of
the edge ideal $\mathcal{I}(\mathcal{H})$ associated to a properly-connected hypergraph $\mathcal{H}$.
For such a hypergraph, we bound $\operatorname{reg}(\mathcal{I}(\mathcal{H}))$ below
by combinatorial invariants of the hypergraph. When $\mathcal{H}=G$ is a simple
graph, we also provide an upper bound.
In the case
that $\mathcal{H}$ is also triangulated, we explicitly compute $\operatorname{reg}(\mathcal{I}(\mathcal{H}))$. Our exact formula
for $\operatorname{reg}(\mathcal{I}(\mathcal{H}))$
generalizes Zheng's formula \cite{Zheng2004} for the regularity of $\mathcal{I}(\mathcal{H})$
when $\mathcal{H} = G$ is a forest.
We begin by relating the regularity of $\mathcal{I}(\mathcal{H})$ to
the regularity of edge ideals associated to sub-hypergraphs of $\mathcal{H}$. We produce
similar results for the projective dimension of $\mathcal{I}(\mathcal{H})$. We first make the convention
that $\operatorname{reg}(0) = 1$ and if $\mathcal{H}$ has no edges, we set $\operatorname{pdim}(\mathcal{I}(\mathcal{H})) = -1$.
\begin{lemma} \label{regL}
Let $E$ be any edge of a $d$-uniform properly-connected hypergraph $\mathcal{H}$ such that $\mathcal{H} \backslash E$
is nonempty.
Let $t = |N(E)|$ and
$\mathcal{H}' = \{H \in \mathcal{H} ~|~ \operatorname{dist}_{\mathcal{H}}(H,E) \geq d+1\}$. If $L = (x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)$, then
\begin{enumerate}
\item[$(a)$] $\operatorname{reg}(L) = \operatorname{reg}(\mathcal{I}(\mathcal{H}'))+d$, and
\item[$(b)$] $\operatorname{pdim}(L) = \operatorname{pdim}(\mathcal{I}(\mathcal{H}')) + t.$
\end{enumerate}
\end{lemma}
\begin{proof}
We shall prove both results using Lemma \ref{betti intersection}. For $(a)$
suppose $s = \operatorname{reg}(L)$. So, there exists $a$ such that $\beta_{a,a+s}(L) \neq 0$.
By Lemma \ref{betti intersection},
\[\beta_{a,a+s}(L) = \sum_{l=0}^{a+1}\binom{t}{l}\beta_{a-l,a+s-d-l}(\mathcal{I}(\mathcal{H}')).\]
Since $\beta_{a,a+s}(L) \neq 0$ and every term in the summation on the right-hand side is nonnegative, there
exists some $l$ such that $\beta_{a-l,a+s-d-l}(\mathcal{I}(\mathcal{H}')) \neq 0$. Hence,
$\operatorname{reg}(\mathcal{I}(\mathcal{H}')) \geq s-d$, or equivalently, $\operatorname{reg}(\mathcal{I}(\mathcal{H}')) + d \geq \operatorname{reg}(L)$.
Conversely, if $r = \operatorname{reg}(\mathcal{I}(\mathcal{H}'))$, then there exists $b$ such that
$\beta_{b,b+r}(\mathcal{I}(\mathcal{H}'))\neq 0$. But then, by
Lemma \ref{betti intersection},
\[0 \neq \beta_{b,b+r}(\mathcal{I}(\mathcal{H}')) \leq
\sum_{l=0}^{b+1}\binom{t}{l}\beta_{b-l,b+r-l}(\mathcal{I}(\mathcal{H}')) = \beta_{b,b+r+d}(L).\]
So $\operatorname{reg}(\mathcal{I}(\mathcal{H}'))+d \geq \operatorname{reg}(L) \geq \operatorname{reg}(\mathcal{I}(\mathcal{H}'))+d$, as desired.
To prove $(b)$, suppose $N(E) = \{z_1,\ldots,z_t\}$.
In the proof of Lemma \ref{betti intersection} it was shown that
\[R/L \cong R_1/(z_1,\ldots,z_t) \otimes _k R_2/\mathcal{I}(\mathcal{H}'),\]
where $R_1 = k[z_1,\ldots,z_t]$ and $R_2 = k[x_1,\ldots,x_s]$,
with $\{x_1,\ldots,x_s\} = \mathcal{X} \backslash N(E)$. By tensoring
the resolutions of $R_1/(z_1,\ldots,z_t)$ and $R_2/\mathcal{I}(\mathcal{H}')$, we get
\begin{eqnarray*}
\operatorname{pdim}(L)+1 = \operatorname{pdim}(R/L) &=& \operatorname{pdim}(R_1/(z_1,\ldots,z_t))+ \operatorname{pdim}(R_2/\mathcal{I}(\mathcal{H}'))\\
&=& t + \operatorname{pdim}(R/\mathcal{I}(\mathcal{H}')) = t + \operatorname{pdim}(\mathcal{I}(\mathcal{H}'))+1.
\end{eqnarray*}
The desired identity is obtained by comparing the first and last values of the above equality.
\end{proof}
\begin{theorem}\label{reg pdim 2}
Let $E$ be any edge of a $d$-uniform properly-connected hypergraph $\mathcal{H}$
such that $\mathcal{H} \backslash E$ is nonempty.
Let $t =|N(E)|$.
Then
\begin{enumerate}
\item[$(a)$] $\operatorname{reg}(\mathcal{I}(\mathcal{H}))\leq \max\{\operatorname{reg}(\mathcal{I}(\mathcal{H} \backslash E)), \operatorname{reg}(\mathcal{I}(\mathcal{H}')) + d -1\}.$
\item[$(b)$] $\operatorname{pdim}(\mathcal{I}(\mathcal{H}))\leq \max\{\operatorname{pdim}(\mathcal{I}(\mathcal{H}\backslash E)),\operatorname{pdim}(\mathcal{I}(\mathcal{H}'))+t+1\}.$
\end{enumerate}
Furthermore, if $E$ is a splitting edge, then we have equality in both $(a)$ and $(b)$.
\end{theorem}
\begin{proof}
Set $L = (x^E) \cap \mathcal{I}(\mathcal{H}\backslash E)$.
The two inequalities then follow by using the short exact sequence
\[0 \rightarrow L \rightarrow (x^E) \oplus \mathcal{I}(\mathcal{H}\backslash E)
\rightarrow \mathcal{I}(\mathcal{H}) \rightarrow 0\]
and Lemma \ref{regL}
to bound $\operatorname{reg}(\mathcal{I}(\mathcal{H}))$ and $\operatorname{pdim}(\mathcal{I}(\mathcal{H}))$, noting that since $\mathcal{H} \backslash E$ is nonempty,
$\operatorname{reg}(\mathcal{I}(\mathcal{H} \backslash E)) \ge d$.
When $E$ is a splitting edge, the equalities are a result of
the formulas of Corollary \ref{reg pdim}.
\end{proof}
We now focus our attention on using combinatorial information from
$\mathcal{H}$ to bound $\operatorname{reg}(\mathcal{I}(\mathcal{H}))$. More precisely, the regularity
will be expressed using the following terminology.
\begin{definition}
Let $\mathcal{H}$ be a $d$-uniform properly-connected hypergraph. Two edges $E,H$ of
$\mathcal{H}$ are {\bf $t$-disjoint} if $\operatorname{dist}_{\mathcal{H}}(E,H) \geq t$. A set of edges $\mathcal{E}' \subseteq \mathcal{E}$
is {\bf pairwise $t$-disjoint} if every pair of edges of $\mathcal{E}'$ is $t$-disjoint.
(We thank Jeremy Martin
for suggesting this name.)
\end{definition}
\begin{remark}
When $\mathcal{H}$ is a $d$-uniform properly-connected hypergraph, then two edges $E$ and $H$ are
$d$-disjoint if and only if $E \cap H = \emptyset$; that is, $E$ and $H$ are disjoint in the usual
sense. When $\mathcal{H} =G$ is a simple graph, Zheng's definition \cite[Definition 2.15]{Zheng2004} for two
edges to be {\bf disconnected} is equivalent to our definition that the two edges be 3-disjoint in $G$.
\end{remark}
We come to the first main result of this section.
\begin{theorem} \label{regtheorem}
Let $\mathcal{H}$ be a $d$-uniform properly-connected hypergraph. Then $\beta_{i-1,di}(\mathcal{I}(\mathcal{H}))$ equals the
number of sets of $i$ pairwise $(d+1)$-disjoint edges of $\mathcal{H}$. In particular, if $c$ is the maximal
number of pairwise $(d+1)$-disjoint edges of $\mathcal{H}$ then
$$\operatorname{reg}(\mathcal{I}(\mathcal{H})) \ge (d-1)c+1.$$
\end{theorem}
\begin{proof} The first statement of the theorem implies that $\beta_{c-1,dc}(\mathcal{I}(\mathcal{H})) \not= 0$.
Thus, $dc-(c-1) \le \operatorname{reg}(\mathcal{I}(\mathcal{H}))$ and the second statement is proved. We shall prove the first
statement of the theorem. In the case $d=2$, this is the content of \cite[Lemma 2.2]{K}. We generalize
Katzman's arguments to the more general situation.
Recall that $\mathcal{E} = \{E_1, \dots, E_s\}$ and let
$\mathbb{T}: 0 \rightarrow T_s \stackrel{\partial_s}{\rightarrow} \dots \stackrel{\partial_2}{\rightarrow}
T_1 \stackrel{\partial_1}{\rightarrow} \mathcal{I}(\mathcal{H}) \rightarrow 0$ be the Taylor resolution of $\mathcal{I}(\mathcal{H})$.
Then $T_i$ is a free $R$-module with generators $e_{j_1, \dots, j_i}$, for
$1 \le j_1 < \dots < j_i \le s$, and the boundary map $\partial_i$ is defined by
$$\partial_i(e_{j_1, \dots, j_i}) =
\sum_{k=1}^i (-1)^k \mu_k e_{j_1, \dots, \widehat{j_k}, \dots, j_i},$$
where $\widehat{j_k}$ indicates the removal of $j_k$ and $\mu_k = x^{E_{j_k}
\backslash (\cup_{l \not= k} E_{j_l})}$. Let $\frak{m} = (x_1, \dots, x_n)$ be the maximal homogeneous
ideal in $R$. It is well-known that the graded Betti numbers of $\mathcal{I}(\mathcal{H})$ are given by
$$\beta_{i-1,j}(\mathcal{I}(\mathcal{H})) = \dim_k H_i(\mathbb{T} \otimes_R R/\frak{m})_j.$$
Observe that generators of degree $di$ of $T_i$ are $e_{j_1, \dots, j_i}$'s where
$E_{j_1}, \dots, E_{j_i}$ are pairwise disjoint. Consider one such generator
$e_{j_1, \dots, j_i}$. Let $\mathcal{H}_1$ be the induced sub-hypergraph of $\mathcal{H}$ on the
vertices in $\bigcup_{k=1}^i E_{j_k}$. It can be seen that for $1 \le k \le i$, $E_{j_k}$ is
disjoint from $\bigcup_{l \not= k} E_{j_l}$ and hence, $\mu_k \in \frak{m}$. Thus, the image of
$\partial_i(e_{j_1, \dots, j_i})$ in $\mathbb{T} \otimes_R R/\frak{m}$ is 0. Also, if $\mathcal{H}_1$ contains an
edge $E_t$ different from $E_{j_1}, \dots, E_{j_i}$, then since
$E_t \subseteq \bigcup_{k=1}^i E_{j_k}$, we have $\partial_{i+1}(e_{j_1, \dots, j_i, t}) =
e_{j_1, \dots, j_i}$. That is, if $\mathcal{H}_1$ contains an edge different from $E_{j_1}, \dots, E_{j_i}$
then the image of $e_{j_1, \dots, j_i}$ in $H_i(\mathbb{T} \otimes_R R/\frak{m})$ is 0. Furthermore, the image
of $e_{j_1, \dots, j_i}$ in $\mathbb{T} \otimes_R R/\frak{m}$ is in the image of $\partial_{i+1}$ if and only
if it is the image of $\partial_{i+1}(e_{l_1, \dots, l_{i+1}})$, where
$\{l_1, \dots, l_{i+1}\} = \{j_1, \dots, j_i\} \cup \{t\}$ for some $t$. This implies that in the
expansion of $\partial_{i+1}(e_{l_1, \dots, l_{i+1}})$, we must have $\mu_t = 1$, i.e.,
$\mathcal{H}_1$ contains the edge $E_t$ different from $E_{j_1}, \dots, E_{j_i}$.
It remains to show that $E_{j_1}, \dots, E_{j_i}$ are pairwise disjoint edges of $\mathcal{H}$ such that
the induced sub-hypergraph of $\mathcal{H}$ on the vertices of $\bigcup_{k=1}^i E_{j_k}$ contains no other
edges if and only if $E_{j_1}, \dots, E_{j_i}$ are pairwise $(d+1)$-disjoint edges in $\mathcal{H}$.
Suppose first that $E_{j_1}, \dots, E_{j_i}$ are pairwise disjoint edges of $\mathcal{H}$ such that the
induced sub-hypergraph $\mathcal{H}_1$ on the vertices of $\bigcup_{k=1}^i E_{j_k}$ contains no other edges.
Clearly, since $E_{j_k} \cap E_{j_l} = \emptyset$ for $k \not= l$, we have
$\operatorname{dist}_{\mathcal{H}}(E_{j_k},E_{j_l}) \ge d$. Now, suppose there exist $k \not= l$ so that
$\operatorname{dist}_{\mathcal{H}}(E_{j_k},E_{j_l}) = d$. Then there is a proper chain $E_{j_k} = F_0, F_1, \dots, F_d =
E_{j_l}$. By Lemma \ref{lem: chains}, the vertices of $F_1$ are in $E_{j_k} \cup E_{j_l}$, so $F_1$ is
an edge in $\mathcal{H}_1$. This implies that $F_1$ has to be one of the $\{E_{j_1}, \dots, E_{j_i}\}
\backslash \{E_{j_k}, E_{j_l}\}$. This is a contradiction since $F_1 \cap E_{j_k} \not= \emptyset$.
Conversely, suppose that $E_{j_1}, \dots, E_{j_i}$ are pairwise $(d+1)$-disjoint edges of $\mathcal{H}$.
Let $\mathcal{H}_1$ be the induced sub-hypergraph of $\mathcal{H}$ on the vertices of $\bigcup_{k=1}^i E_{j_k}$.
By contradiction, suppose $\mathcal{H}_1$ contains an edge $E$ different from $E_{j_1}, \dots, E_{j_i}$.
Then $E \subseteq \bigcup_{k=1}^i E_{j_k}$. Without loss of generality, we may assume that $E \cap
E_{j_1} \not= \emptyset$. Then there is a proper chain $E_{j_1} = F_0, F_1, \dots, F_l = E$ for
some $l < d$. By Lemma \ref{lem: chains}, the vertices of $F_1$ are in $E_{j_1} \cup E$. Thus,
$F_1$ is also an edge of $\mathcal{H}_1$. This implies that there exists $j_k \not= j_1$ so that $F_1$ has a
nonempty intersection with $E_{j_k}$ (otherwise, $F_1 \subseteq E_{j_1}$, which is a contradiction).
However, we now have $\operatorname{dist}_{\mathcal{H}}(F_1, E_{j_k}) = d - |F_1 \cap E_{j_k}| \le d-1$, whence
$\operatorname{dist}_{\mathcal{H}}(E_{j_1},E_{j_k}) \le d$, which is again a contradiction.
\end{proof}
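For simple graphs ($d=2$), the count in Theorem \ref{regtheorem} is easy to check by brute force: the distance between two edges is the length of a shortest path in the line graph. The following Python sketch (an illustration only; the helper names are ours, and the paper's own computations used {\tt CoCoA}) counts pairwise $3$-disjoint edge sets for the five-cycle $C_5$, recovering $\beta_{0,2}(\mathcal{I}(C_5)) = 5$ and $\beta_{1,4}(\mathcal{I}(C_5)) = 0$.

```python
from itertools import combinations

def edge_distance(edges, e, h):
    """Shortest proper chain length between edges e and h (d = 2):
    consecutive edges in a chain share a vertex, so this is a BFS
    in the line graph."""
    if e == h:
        return 0
    dist = {e: 0}
    frontier = [e]
    while frontier:
        nxt = []
        for f in frontier:
            for g in edges:
                if g not in dist and set(f) & set(g):
                    dist[g] = dist[f] + 1
                    if g == h:
                        return dist[g]
                    nxt.append(g)
        frontier = nxt
    return float('inf')

def count_t_disjoint_sets(edges, i, t):
    """Number of i-subsets of edges that are pairwise t-disjoint."""
    return sum(
        all(edge_distance(edges, e, h) >= t for e, h in combinations(s, 2))
        for s in combinations(edges, i)
    )

c5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(count_t_disjoint_sets(c5, 1, 3))  # 5 -> beta_{0,2} = 5
print(count_t_disjoint_sets(c5, 2, 3))  # 0 -> beta_{1,4} = 0
```

No two edges of $C_5$ are $3$-disjoint, so $c = 1$ and the theorem gives the lower bound $\operatorname{reg}(\mathcal{I}(C_5)) \ge 2$.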
When $\mathcal{H}$ is a graph we also obtain an especially appealing upper bound for the regularity of $\mathcal{I}(\mathcal{H})$.
\begin{definition}
Let $\mathcal{H} = (\mathcal{X},\mathcal{E})$ be a hypergraph. A {\bf matching} of $\mathcal{H}$ is defined to be a
subset $\mathcal{E}' \subseteq \mathcal{E}$ consisting of pairwise disjoint edges.
The {\bf matching number} of $\mathcal{H}$, denoted $\alpha'(\mathcal{H})$, is the maximum size
of a matching in $\mathcal{H}$.
\end{definition}
\begin{theorem} \label{cor.matching}
Let $G$ be a finite simple graph. Then
$$\operatorname{reg}(R/\mathcal{I}(G))\leq \alpha'(G)$$
where $\alpha'(G)$ is the matching number of $G$.
\end{theorem}
\begin{proof} It can be seen from the Taylor resolution that
$$\operatorname{reg}(\mathcal{I}(G)) \le
\max \{ \deg \operatorname{lcm}(x^{E_1}, \dots, x^{E_i}) - i ~|~ \{E_1, \dots, E_i\} \subseteq \mathcal{E} \} + 1.$$
Since any edge of $G$ has 2 vertices, we have
$\deg \operatorname{lcm}(x^{E_1}, \dots, x^{E_i}) \le 2i$. Subsets with
$\deg \operatorname{lcm}(x^{E_1}, \dots, x^{E_i}) \le i$ contribute at most $1 \le \alpha'(G)+1$
to the right-hand side, so we may assume $\deg \operatorname{lcm}(x^{E_1}, \dots, x^{E_i}) = i+k$
for some $1 \le k \le i$. It suffices to show that we can always find a matching of size
$k$ among $\{E_1, \dots, E_i\}$. To this end, we shall use induction on $i+k$.
If $i+k = 2$, i.e., $i = k = 1$, then the statement is clear. Suppose now that
$i+k > 2$. If $k = 1$ or $k=i$ then the statement is also clear.
Assume that $1 < k < i$. If $E_i$ is disjoint from $E_j$ for all $j < i$,
then $\deg \operatorname{lcm}(x^{E_1}, \dots, x^{E_{i-1}}) = i+k-2 = (i-1) + (k-1)$. By induction, there exists a
matching $S \subset \{E_1, \dots, E_{i-1}\}$ of size $(k-1)$. It is easy to see that $S \cup \{E_i\}$
is now a matching of size $k$. It remains to consider the case that at least a vertex of $E_i$ is
also a vertex of $E_j$ for some $j < i$. In this case, we have
$\deg \operatorname{lcm}(x^{E_1}, \dots, x^{E_{i-1}}) \ge i+k-1 = (i-1)+k$. By induction, there is a
matching $S \subset \{E_1, \dots, E_{i-1}\}$ of size $k$, and the statement is proved.
\end{proof}
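For small graphs the matching number in Theorem \ref{cor.matching} can be found by exhaustive search. A Python sketch (illustrative only; not part of the paper's toolchain):

```python
from itertools import combinations

def matching_number(edges):
    """Largest size of a set of pairwise disjoint edges (brute force)."""
    for size in range(len(edges), 0, -1):
        for s in combinations(edges, size):
            if all(not (set(e) & set(h)) for e, h in combinations(s, 2)):
                return size
    return 0

c5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(matching_number(c5))  # 2, so reg(R/I(C5)) <= 2 by the theorem
```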
Theorem \ref{cor.matching} seems to give an interesting bound for the regularity of edge ideals
with a simple proof which may have been overlooked.
When $\mathcal{H}$ is a triangulated hypergraph, the lower bound of Theorem \ref{regtheorem} turns out to be the exact formula for the regularity of $\mathcal{I}(\mathcal{H})$.
\begin{theorem}\label{regularitytheorem}
Suppose that $\mathcal{H}$ is a $d$-uniform properly-connected triangulated hypergraph.
If $c$ is the maximum number of pairwise $(d+1)$-disjoint edges of $\mathcal{H}$, then
\[\operatorname{reg}(\mathcal{I}(\mathcal{H})) = (d-1)c + 1.\]
\end{theorem}
\begin{proof} The proof is similar to the one given by \cite{HaVanTuyl2005} in the case for forests.
We proceed by induction on the number of edges of $\mathcal{H}$.
If $\mathcal{H}$ only has one edge $E$, then $\mathcal{I}(\mathcal{H}) = (x^E)$.
Because $\mathcal{I}(\mathcal{H})$ is principal, it is clear that $\operatorname{reg}(\mathcal{I}(\mathcal{H})) = d$.
But then it is clear that the formula holds since $1$ is the maximal number
of pairwise $(d+1)$-disjoint edges.
So, suppose $\mathcal{H}$ has at least two edges. Since $\mathcal{H}$
is triangulated, by Lemma \ref{triangulatedlemma} there is a splitting edge $E \in \mathcal{H}$ ($\mathcal{H} \backslash E$ is nonempty in this case) such that $\mathcal{H}\backslash E$ and $\mathcal{H}'$ are also
$d$-uniform properly-connected triangulated hypergraphs.
Since $E$ is a splitting edge, by Theorem \ref{reg pdim 2} we have
\[\operatorname{reg}(\mathcal{I}(\mathcal{H})) = \max\{\operatorname{reg}(\mathcal{I}(\mathcal{H}\backslash E)), \operatorname{reg}(\mathcal{I}(\mathcal{H}'))+d-1\}.\]
By induction $\operatorname{reg}(\mathcal{I}(\mathcal{H}\backslash E)) = (d-1)c_1+1$ where $c_1$ is the maximal number
of pairwise $(d+1)$-disjoint edges of $\mathcal{H}\backslash E$, and
$\operatorname{reg}(\mathcal{I}(\mathcal{H}')) = (d-1)c_2+1$
where $c_2$ is the maximal number of pairwise $(d+1)$-disjoint edges of $\mathcal{H}'$.
So
\[\operatorname{reg}(\mathcal{I}(\mathcal{H})) = \max\{(d-1)c_1+1,(d-1)c_2+d\}.\]
If we let $c$ denote the maximal number of pairwise $(d+1)$-disjoint edges of $\mathcal{H}$,
then since $(d-1)c_2 + d = (d-1)(c_2 + 1) + 1$
to complete the proof it suffices for us to show that $c = \max\{c_1,c_2+1\}$.
Let $\mathcal{E}_1$ be a set of $c_1$ pairwise $(d+1)$-disjoint edges of $\mathcal{H}\backslash E$. The
edges of $\mathcal{E}_1$ also form a set of pairwise $(d+1)$-disjoint edges of $\mathcal{H}$.
To see this fact, suppose that $H,H'$ are two $(d+1)$-disjoint edges in $\mathcal{H}\backslash E$
that are not $(d+1)$-disjoint in $\mathcal{H}$. That is, $\operatorname{dist}_{\mathcal{H}}(H,H') \leq d$. But
because $H \cap H' = \emptyset$, we must have $\operatorname{dist}_{\mathcal{H}}(H,H') = d$.
Let $H = E_0,\ldots,E_d=H'$ be the proper irredundant chain of length $d$ in $\mathcal{H}$.
Since this chain is not in $\mathcal{H}\backslash E$, we must have $E = E_i$ for some $i \in \{1,\ldots,d-1\}$.
Consider the edges $E_{i-1}$ and $E_{i+1}$ in the chain that
occur before and after, respectively, the edge $E$. The splitting edge $E$ of Lemma
\ref{triangulatedlemma} is picked so that it contains a vertex $x$ such
that the induced graph on $N(x) \cup \{x\}$ is a $d$-complete hypergraph.
We can now adapt the proof given in Lemma \ref{triangulatedlemma}
that showed that $\mathcal{H}\backslash E$ was properly-connected to show that $E$ can be replaced
by an edge $E' \in \mathcal{H}\backslash E$. As a consequence, we get a path of length $d$ from $H$ to $H'$
in $\mathcal{H}\backslash E$.
But this contradicts the fact that $\operatorname{dist}_{\mathcal{H}\backslash E}(H,H') \geq d+1$.
Thus $|\mathcal{E}_1| = c_1 \leq c$.
If $\mathcal{E}_2$
is a set of $c_2$ pairwise $(d+1)$-disjoint edges of $\mathcal{H}'$, we claim that
$\mathcal{E}_2 \cup \{E\}$ is a set of pairwise $(d+1)$-disjoint edges of $\mathcal{H}$.
Indeed, for any edge $H \in \mathcal{H}'$, $\operatorname{dist}_{\mathcal{H}}(E,H) > d$,
and so in particular, $E$ and $H$ are $(d+1)$-disjoint for every edge $H \in \mathcal{E}_2$.
Thus $|\mathcal{E}_2 \cup \{E\}| = c_2 + 1 \leq c$. Thus $c \geq \max\{c_1,c_2+1\}$.
Suppose that $c > \max\{c_1,c_2+1\}$. Let $\mathcal{E}_3$ be a set of $c$
pairwise $(d+1)$-disjoint edges of $\mathcal{H}$. If $E \not\in\mathcal{E}_3$,
then $\mathcal{E}_3$ is also a set of pairwise $(d+1)$-disjoint edges of $\mathcal{H}\backslash E$,
and so $c = |\mathcal{E}_3| \leq c_1$, a contradiction. If $E \in \mathcal{E}_3$,
then $\mathcal{E}_3 \backslash \{E\}$ is a set of pairwise $(d+1)$-disjoint
edges of $\mathcal{H}'$ since any other edge $H \in \mathcal{E}_3$
must have $\operatorname{dist}_{\mathcal{H}}(E,H) > d$. But this would imply that $c-1 \leq c_2$,
again a contradiction. Hence $c = \max\{c_1,c_2+1\}$.
\end{proof}
Theorem \ref{regularitytheorem} gives the following interesting corollary for simple graphs, which was first proved by Zheng \cite{Zheng2004} in the special case that $G$ was a forest.
\begin{corollary} \label{cor.regchordal}
Suppose that $G$ is a chordal graph. If $c$ is the maximum number of
pairwise $3$-disjoint edges of $G$, then
\[\operatorname{reg}(\mathcal{I}(G)) = c + 1.\]
\end{corollary}
\begin{example} \label{upperbound}
The bounds for the regularity in Theorems \ref{regtheorem} and \ref{cor.matching}
are sharp. If $\mathcal{H}$ is any triangulated hypergraph, then the lower bound in Theorem \ref{regtheorem}
is achieved by Theorem \ref{regularitytheorem}. To show that
the upper bound in Theorem \ref{cor.matching} is achieved, consider the edge ideal of $C_5$, the
five-cycle. So $\mathcal{I}(G) = (x_1x_2,x_2x_3,x_3x_4,x_4x_5,x_5x_1)$. Then
$\alpha'(G) = 2$ (for example, take edges $E_1 = x_1x_2$ and $E_2 = x_3x_4$). So $\operatorname{reg}(\mathcal{I}(G)) \leq 3$.
In fact we have equality since the resolution of $\mathcal{I}(G)$ is
\[0 \rightarrow R(-5) \rightarrow R^5(-3) \rightarrow R^5(-2) \rightarrow \mathcal{I}(G) \rightarrow 0. \]
\end{example}
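As a quick check, the regularity can be read off the displayed resolution as the largest difference $j-i$ over the nonzero graded Betti numbers $\beta_{i,j}$; in Python:

```python
# Graded Betti numbers of I(C5), read off the displayed resolution:
# R^5(-2) at homological degree 0, R^5(-3) at degree 1, R(-5) at degree 2
betti = {(0, 2): 5, (1, 3): 5, (2, 5): 1}

reg = max(j - i for (i, j) in betti)  # regularity = max internal degree minus homological degree
print(reg)  # 3
```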
In the study of squarefree monomial ideals, the theory of Alexander duality has proved to be
significant in many ways. We round out this section by
relating some algebraic invariants of
edge ideals and their Alexander duals.
\begin{definition} \label{Alexander dual}
Let $I = (x_{11} \cdots x_{1i_1}, \dots, x_{r1} \cdots x_{ri_r}) \subseteq k[x_1, \dots, x_n]$ be
a squarefree monomial ideal. Then the {\bf Alexander dual} of $I$ is defined to be
$$I^\vee = (x_{11}, \dots, x_{1i_1}) \cap \dots \cap (x_{r1}, \dots, x_{ri_r}).$$
\end{definition}
\begin{definition} \label{vertex cover}
Let $G$ be a graph. A subset $V$ of the vertices of $G$ is called a {\bf vertex cover} if every
edge in $G$ is incident to at least one vertex in $V$; a {\bf minimal vertex cover} is a vertex cover
$V$ with the property that no proper subset of $V$ is a vertex cover.
The smallest size of a minimal vertex cover of $G$ is denoted by $\nu(G)$.
The graph $G$ is {\bf unmixed} if all its minimal vertex covers have the same cardinality $\nu(G)$.
\end{definition}
\begin{remark} \label{Alexander gens}
The operation of taking the Alexander dual of a squarefree monomial ideal brings generators to
primary components. The minimal generators of $\mathcal{I}(G)^\vee$ correspond to
minimal vertex covers of $G$.
\end{remark}
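For small graphs the minimal vertex covers, and hence the minimal generators of $\mathcal{I}(G)^\vee$, can be enumerated directly. The following Python sketch (illustrative; names are ours) lists them for $C_5$, showing that $C_5$ is unmixed with $\nu(C_5) = 3$, so $\mathcal{I}(C_5)^\vee$ has five generators of degree 3.

```python
from itertools import combinations

def minimal_vertex_covers(vertices, edges):
    """All vertex covers of which no proper subset is also a cover (brute force)."""
    def is_cover(vs):
        return all(set(e) & vs for e in edges)
    all_covers = [set(s) for size in range(1, len(vertices) + 1)
                  for s in combinations(vertices, size) if is_cover(set(s))]
    return [c for c in all_covers if not any(other < c for other in all_covers)]

c5_edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
mvc = minimal_vertex_covers([1, 2, 3, 4, 5], c5_edges)
print(sorted(sorted(c) for c in mvc))  # five covers, each of size 3
```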
\begin{theorem} \label{thm.Konig}
Let $G$ be a simple graph.
\begin{enumerate}
\item If $G$ is unmixed, then
\[\operatorname{reg}(\mathcal{I}(G)) \le \operatorname{ht} \mathcal{I}(G) + 1 \le \operatorname{reg}(\mathcal{I}(G)^\vee)+1
~\text{and}~ \operatorname{pdim}(\mathcal{I}(G)^{\vee}) \leq \operatorname{ht} \mathcal{I}(G) \leq \operatorname{pdim}(\mathcal{I}(G))+1.\]
\item If $G$ is not unmixed, then
\[\operatorname{reg}(\mathcal{I}(G)) \le \operatorname{ht} \mathcal{I}(G) +1\le \operatorname{reg}(\mathcal{I}(G)^\vee)
~\text{and}~\operatorname{pdim}(\mathcal{I}(G)^{\vee}) \le \operatorname{ht} \mathcal{I}(G) \le \operatorname{pdim}(\mathcal{I}(G)).\]
\end{enumerate}
\end{theorem}
\begin{proof} It suffices to prove the inequalities involving the regularity,
since the bounds on the projective dimension follow from the
identities $\operatorname{reg}(\mathcal{I}(G)) = \operatorname{pdim}(R/\mathcal{I}(G)^\vee)$ and
$\operatorname{reg}(\mathcal{I}(G)^\vee) = \operatorname{pdim}(R/\mathcal{I}(G))$ (see, for example, \cite[Theorem 5.59]{MillerSturmfels2004}).
Observe that if $\mathcal{E}'$ is a matching in $G$ then any vertex cover must contain at least one vertex of every edge in $\mathcal{E}'$. Thus, $\alpha'(G) \le \nu(G) = \operatorname{ht} \mathcal{I}(G)$. It follows from Theorem \ref{cor.matching} that $\operatorname{reg}(\mathcal{I}(G)) \le \operatorname{ht} \mathcal{I}(G) + 1$. Since $\nu(G)$ is the least
generating degree of $\mathcal{I}(G)^\vee$, we have $\nu(G) \le \operatorname{reg}(\mathcal{I}(G)^\vee)$ and thus (1) follows. To
prove (2) observe that when $G$ is not unmixed, $\operatorname{reg}(\mathcal{I}(G)^\vee)$ is at least the largest
generating degree of $\mathcal{I}(G)^\vee$, which is at least $\nu(G)+1$.
\end{proof}
\section{Properly-connected hypergraphs and linear first syzygies}
In \cite{Fr} Fr\"oberg gave a characterization of edge ideals
of simple graphs with linear resolutions. In this section,
we obtain a partial generalization of Fr\"oberg's result to the
class of properly-connected hypergraphs. Specifically,
we describe when $\mathcal{I}(\mathcal{H})$ has linear first syzygies.
Let us first recall Fr\"oberg's result. If $G$ is a simple graph,
then the {\bf complement of $G$}, denoted $G^c$, is the
graph whose vertex set is the same as $G$, but
whose edge set is defined by the rule $E \in G^c$ if and only if
$E \not\in G$. Fr\"oberg then showed:
\begin{theorem} \label{froberg}
Let $G$ be a simple graph. Then
$\mathcal{I}(G)$ has a linear resolution if and only if $G^c$ is a chordal graph.
\end{theorem}
When $\mathcal{H}$ is a $d$-uniform properly-connected hypergraph,
we define the {\bf complement of $\mathcal{H}$}, denoted $\mathcal{H}^c$, as
\[\mathcal{H}^c = \{E \subseteq \mathcal{X} ~\big|~ |E| = d ~~\text{and}~~ E \not\in \mathcal{H}\}.\]
So, one might expect Theorem \ref{froberg}
generalizes to $d$-uniform properly-connected hypergraphs as follows:
$\mathcal{I}(\mathcal{H})$ has a linear resolution if and only if $\mathcal{H}^c$ is a triangulated
hypergraph. Unfortunately, this is not the case, as shown below,
since $\mathcal{H}^c$
need not be properly-connected.
\begin{example}
Let $\mathcal{X} = \{x_1,x_2,x_3,x_4,x_5\}$. Let $\mathcal{H} = \mathcal{K}^3_5 \backslash \{x_1x_2x_3,x_3x_4x_5\}$,
i.e., $\mathcal{H}$ is the $3$-uniform complete hypergraph of order $5$ with
two edges removed. Then $\mathcal{H}^c = \{x_1x_2x_3,x_3x_4x_5\}$
is not properly-connected since the two edges intersect at $x_3$, but
there is no proper irredundant chain of length 2 between the two edges.
Because $\mathcal{H}^c$ is not even properly-connected, the notion of a triangulated
hypergraph is undefined. However, the ideal $\mathcal{I}(\mathcal{H})$ has the linear
resolution
\[0 \rightarrow R^4(-5) \rightarrow R^{11}(-4) \rightarrow R^8(-3) \rightarrow \mathcal{I}(\mathcal{H}) \rightarrow
0.\]
\end{example}
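The complement in this example is a one-line computation; a Python sketch:

```python
from itertools import combinations

# H = K^3_5 minus the edges {x1, x2, x3} and {x3, x4, x5}
all_triples = set(combinations([1, 2, 3, 4, 5], 3))
removed = {(1, 2, 3), (3, 4, 5)}
H = all_triples - removed

H_complement = all_triples - H
print(sorted(H_complement))             # [(1, 2, 3), (3, 4, 5)]
print(set((1, 2, 3)) & set((3, 4, 5)))  # {3}: the two edges meet in one vertex
```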
We take the first step towards generalizing Theorem \ref{froberg}
by asking when $\mathcal{I}(\mathcal{H})$ must have linear first syzygies. As in
our previous results, the
distance between edges plays a key role.
\begin{definition}
The {\bf edge diameter} of a $d$-uniform properly-connected
hypergraph $\mathcal{H}$ is
\[ \operatorname{diam}(\mathcal{H}) = \max \{ \operatorname{dist}_{\mathcal{H}}(E,H) \mid E, H \in \mathcal{H}\},\]
where the diameter is infinite if there exist two edges not connected by any proper chain.
\end{definition}
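For a simple graph ($d = 2$), consecutive edges of a proper chain share a vertex, so the edge diameter is the diameter of the line graph. A brute-force Python sketch (illustrative only):

```python
def edge_diameter(edges):
    """Edge diameter of a simple graph (d = 2): largest shortest-chain
    length between edges, via Floyd-Warshall on the line graph."""
    n = len(edges)
    INF = float('inf')
    dist = [[0 if i == j else
             (1 if set(edges[i]) & set(edges[j]) else INF)
             for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return max(dist[i][j] for i in range(n) for j in range(n))

c5 = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
print(edge_diameter(c5))  # 2
```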
Since $\mathcal{I}(\mathcal{H})$ is a monomial ideal, we know that its first syzygy module is generated by
syzygies $S(x^E, x^H)$, for $E, H \in \mathcal{E}$.
Moreover, it is clear that $S(x^E, x^H)$ is a linear syzygy if and only if
$\operatorname{dist}_{\mathcal{H}}(E,H) = 1$. We shall see that these syzygies generate all of the syzygies on $\mathcal{I}(\mathcal{H})$ if
the diameter of $\mathcal{H}$ is small enough. Indeed, a short enough proper chain will give us a way of
writing $S(x^E, x^H)$ as a telescoping sum of linear syzygies. The next theorem
generalizes \cite[Theorem 3.17]{Zheng2004}.
\begin{theorem}\label{linearsyzygies}
Suppose that $\mathcal{H}$ is a $d$-uniform properly-connected hypergraph.
Then $\mathcal{I}(\mathcal{H})$ has linear first syzygies if and only if diam$(\mathcal{H}) \leq d$.
\end{theorem}
\begin{proof}
Assume first that diam$(\mathcal{H}) \le d$. It follows from the Taylor resolution that the first syzygy
module of $\mathcal{I}(\mathcal{H})$ is generated by syzygies $S(x^E,x^H)$, where $E,H \in \mathcal{E}$.
We shall show that $S(x^E,x^H)$ is generated by linear syzygies. Let $t = \operatorname{dist}_\mathcal{H}(E,H)$. Then,
since diam$(\mathcal{H}) \le d$, we have $t \le d$.
If $(E_0,\ldots,E_t)$ is a proper irredundant chain, then
by Lemma \ref{lem: chains} we can write
$E = E_0= \{ z_1, \ldots, z_d\},$ $E_i = \{y_1, \ldots, y_i, z_{i+1}, \ldots, z_d\}$ where
$y_i \notin E_j$ for $j < i,$ and $E_t = H$.
It can be seen that $S(x^E,x^H)$ is given by the
equality $y_1 \cdots y_t x^{E_0} - z_1\cdots z_t x^{E_t} = 0.$
Furthermore,
$$y_1 \dots y_tx^{E_0} - z_1 \dots z_t x^{E_t} =
\sum_{k=0}^{t-1} \left(\prod_{i=1}^k z_i \prod_{j=k+2}^t y_j\right)
(y_{k+1}x^{E_k} - z_{k+1}x^{E_{k+1}}).$$
Thus, $S(x^E,x^H)$ is generated by linear syzygies.
Conversely, suppose that $\mathcal{I}(\mathcal{H})$ has linear first syzygies, that is,
$\beta_{1,j}(\mathcal{I}(\mathcal{H})) = 0$ for $j \neq d+1$. If
$\operatorname{diam}(\mathcal{H}) \geq d+1$, then there exist at least
two edges $E,H$ with $\operatorname{dist}_{\mathcal{H}}(E,H) \geq d+1$, i.e., $\{E,H\}$ is
a set of pairwise $(d+1)$-disjoint edges of $\mathcal{H}$. By Theorem \ref{regtheorem}
this implies that $\beta_{1,2d}(\mathcal{I}(\mathcal{H})) \neq 0$. But this contradicts
the fact that $\mathcal{I}(\mathcal{H})$ has linear first syzygies.
\end{proof}
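The telescoping identity in the proof can be verified mechanically by viewing each syzygy as a vector of polynomial coefficients indexed by the chain edges $E_0, \dots, E_t$. A Python sketch for $d = t = 3$ (an illustration; the encoding of monomials is ours):

```python
from collections import Counter, defaultdict

def mono(variables):
    """Encode a monomial as a sorted tuple of (variable, exponent) pairs."""
    return tuple(sorted(Counter(variables).items()))

def add_term(vec, idx, variables, coef):
    """Add coef * (monomial in variables) to component idx of a syzygy vector."""
    m = mono(variables)
    vec[idx][m] = vec[idx].get(m, 0) + coef

def clean(vec):
    """Drop zero coefficients and identically zero components."""
    return {i: {m: c for m, c in p.items() if c}
            for i, p in vec.items() if any(p.values())}

d = t = 3
y = ['y1', 'y2', 'y3']
z = ['z1', 'z2', 'z3']
E = [y[:i] + z[i:] for i in range(t + 1)]  # E_i = {y_1..y_i, z_{i+1}..z_d}

# S(x^E, x^H) = y_1...y_t e_{E_0} - z_1...z_t e_{E_t},
# written as a vector of coefficient polynomials on e_{E_0},...,e_{E_t}
S = defaultdict(dict)
add_term(S, 0, y[:t], +1)
add_term(S, t, z[:t], -1)

# Telescoping sum: linear syzygy y_{k+1} e_{E_k} - z_{k+1} e_{E_{k+1}},
# scaled by z_1...z_k * y_{k+2}...y_t
rhs = defaultdict(dict)
for k in range(t):
    scale = z[:k] + y[k + 1:t]
    add_term(rhs, k, scale + [y[k]], +1)
    add_term(rhs, k + 1, scale + [z[k]], -1)

print(clean(S) == clean(rhs))  # True: the intermediate components cancel
```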
\begin{example} Even when $\operatorname{diam}(\mathcal{H}) \leq d$,
$\mathcal{I}(\mathcal{H})$ may still have nonlinear second syzygies. For example,
if $G = C_5$ is the 5-cycle, then $\operatorname{diam}(G) = 2$. However
$\mathcal{I}(G) = (x_1x_2,x_2x_3,x_3x_4,x_4x_5,x_5x_1)$ has nonlinear
second syzygies since $\beta_{2,5}(\mathcal{I}(G)) = 1$, as shown in Example
\ref{upperbound}.
\end{example}
Interestingly, if $\mathcal{H}$ is triangulated, knowing that $\mathcal{I}(\mathcal{H})$ has linear
first syzygies is enough to know that the entire resolution
of $\mathcal{I}(\mathcal{H})$ is linear.
\begin{corollary} \label{linsyzcor}
Suppose that $\mathcal{H}$ is a $d$-uniform properly-connected hypergraph that is
also triangulated. Then the following are equivalent:
\begin{enumerate}
\item[$(a)$] $\mathcal{I}(\mathcal{H})$ has a linear resolution.
\item[$(b)$] $\mathcal{I}(\mathcal{H})$ has linear first syzygies.
\item[$(c)$] $\operatorname{diam}(\mathcal{H}) \leq d$.
\end{enumerate}
\end{corollary}
\begin{proof}
The implication $(a) \Rightarrow (b)$ is immediate, and $(b) \Rightarrow (c)$
is a consequence of Theorem \ref{linearsyzygies}. To show that $(c)
\Rightarrow (a)$, the bound on $\operatorname{diam}(\mathcal{H})$ implies that $\mathcal{H}$
cannot have two or more
pairwise $(d+1)$-disjoint edges (otherwise diam$(\mathcal{H}) > d$). By
Theorem \ref{regularitytheorem} this implies that $\operatorname{reg}(\mathcal{I}(\mathcal{H})) = (d-1)+1 =d$.
Since $\mathcal{I}(\mathcal{H})$ is generated in degree $d$, this forces $\mathcal{I}(\mathcal{H})$ to have a linear
resolution.
\end{proof}
Restricted to simple graphs, Corollary \ref{linsyzcor} gives the following result.
\begin{corollary} Suppose that $G$ is a chordal graph. Then the following are equivalent:
\begin{enumerate}
\item[$(a)$] $\mathcal{I}(G)$ has a linear resolution.
\item[$(b)$] $\mathcal{I}(G)$ has linear first syzygies.
\item[$(c)$] $\operatorname{diam}(G) \leq 2$.
\end{enumerate}
\end{corollary}
\section*{ Acknowledgements} The authors would especially like to thank Jessica Sidman
who made some contributions to this paper in its preliminary stages, and with whom
we had many useful discussions. The authors would like to thank J. Herzog and X. Zheng
for stimulating discussions on the regularity of edge ideals. Part of this research was carried out while
the second author visited the first at Tulane University. The authors acknowledge the support from
Louisiana Board of Regents for this visit. The second author also thanks Tulane University
for its hospitality during his visit. The second author further acknowledges the research
support received from
NSERC. The computer algebra system {\tt CoCoA} \cite{Co} was used to generate examples.
We would also like to thank the two anonymous referees for their suggestions and comments.
\section*{Main}
\noindent
Extensive research has been carried out on the development of sensitive THz detectors utilizing many different technologies. For example, transition edge sensors\cite{TES_biblio} (TESs), kinetic inductance detectors\cite{KIDs_microwave_biblio} (KIDs), quantum dots\cite{QD_THz_detector}, and qubit-based detectors\cite{Nakamura_mw_detector_2018,Wallraff_mw_detector_2018} have been explored. TESs and KIDs, in particular, have reached high technological maturity and are widely applied in astronomy, such as in observations of the cosmic microwave background\cite{stevens2019_CMB_TES}.
However, detectors for itinerant single microwave photons are still in their early stage of development, mainly due to orders of magnitude lower photon energies requiring higher sensitivity.
Qubit-based detectors have been successfully demonstrated to operate in the single-microwave-photon regime but they have a relatively narrow absorption bandwidth, typically of the order of $10\,\rm{MHz}$, and a dynamic range limited to single photons. In contrast, thermal detectors may provide a large detection bandwidth and dynamic range, and even an energy-resolving detection mode\cite{Pekola2015}.
Advancing thermal detectors towards the single-microwave-photon regime is of great interest to the field of circuit quantum electrodynamics. They could be used, for example, in qubit readout\cite{govia2014high,Opremcak1239} or parity measurement\cite{parity_joonas}. Thermal detectors for qubit readout would be especially beneficial since their readout frequency can be engineered independently of the detection frequency, and consequently they may provide relief from the frequency-crowding challenge in large-scale multiplexing of qubit readout signals: qubit readout signals even at equal carrier frequencies may be detected with bolometers utilizing frequency multiplexing in their readout.
In entanglement experiments, photon number eigenstates offer advantages over coherent fields, owing to opportunities in mitigating the effects of loss in the transmission channel\cite{PhysRevX.6.031036,PhysRevLett.114.080503, michael2016new}.
Such accurate single-shot experiments require a single-photon detector, whereas coherent fields can be detected with linear amplifiers. Furthermore, a simple thermal detector would greatly decrease the overhead related to the characterization of microwave components\cite{yeh2017microwave, PhysRevX.5.041020, kokkoniemi2017flux} at the single-photon regime. Such characterization is necessary for many components operated at ultralow powers, for example, in quantum computers.
The sensitivity of radiation detectors is often quantified in terms of noise equivalent power (NEP), which is typically defined as the noise in the readout signal in units of the input power of the detector. Mature technologies, such as TESs and KIDs, have been able to reach NEP in the range of a few hundred zW/$\sqrt{\textrm{Hz}}$. We recently introduced a Josephson-junction-based bolometer\cite{joonas_zJ_biblio,Roope_JPA_biblio} exhibiting NEP of 20 zW/$\sqrt{\textrm{Hz}}$ when operated with a nearly quantum-limited amplifier\cite{vesterinen2017lumped}. Furthermore, qubit-based quantum-capacitance detectors\cite{echternach2018single} have recently been reported to have NEP below 10 zW/$\sqrt{\textrm{Hz}}$. Even lower NEP has been expected from semiconducting charge sensors\cite{komiyama2010single}, but full experimental characterization is lacking. Very recently, a calorimeter based on a superconductor--normal-metal--insulator--superconductor junction has been shown to reach the limit of fundamental temperature fluctuations in thermometry, and hence holds great potential for detection of single microwave photons\cite{karimi_reaching_2020}.
A sensitive bolometer typically relies on maximizing the temperature changes induced by absorption of incident photons. To this end, one may minimize the volume of the absorber and fabricate it from a material with low specific heat. In addition, decreasing the thermal conductance from the absorber to its bath increases the low-frequency response at the cost of decreasing the readout speed. Graphene is a two-dimensional (2D) material with unusual thermal properties, which renders it a promising candidate for the realization of a single-microwave-photon bolometer\cite{graphene_review_X_Du}.
Among its many attractive properties, graphene has a low electronic density of states, which leads to a low heat capacity and fast response. At a relatively high temperature of 5 K, an extremely fast thermal relaxation time of 35 ps has been reported\cite{efetov_nature} for a graphene-based bolometer. Another recent study on a graphene-Josephson-junction-based bolometer\cite{efetov_graphene_JJ_bolometer} carried out at 0.19~K found NEP of 700 zW/$\sqrt{\textrm{Hz}}$ with a theoretical thermal time constant down to 0.6 ns. Together these suggest potential for an energy resolution down to a single 32-GHz photon, although such an extreme resolution was not measured. In addition, graphene has a low electrical resistance compared with other two-dimensional materials. Importantly, the resistance can be tuned with an electric field, which enables the possibility of precise impedance matching with a planar antenna or a waveguide using, for example, the detector design of refs.~\cite{joonas_zJ_biblio, Roope_JPA_biblio}.
In this article, we introduce and demonstrate a hot-electron bolometer based on a superconductor--graphene--superconductor (SGS) junction (Fig.\ref{scheme_bolometer}a). We couple this graphene Josephson junction to on-chip capacitors forming a temperature-dependent $LC$ oscillator (Fig.\ref{scheme_bolometer}b). Incident radiation absorbed in the graphene modifies the resonance frequency of the oscillator, which serves as our thermometer. For example, Fig.\ref{scheme_bolometer}c shows a megahertz-level redshift of the resonance frequency for a heater power of a few attowatts.
\begin{figure*}[t]
\centering
\includegraphics[height=7.2cm]{image/scheme_bolometer.png}
\caption{\textbf{Bolometer and its operation principle.} \textbf{a}, False-colour scanning electron microscope (SEM) image of the graphene bolometer. The scale bar denotes $10\, \text{\textmu} \rm{m}$. The gate voltage is applied via port G whereas the heater and probe signals couple through port P to the superconductor--graphene--superconductor (SGS) junction located below the narrowest part of the gate electrode G. Aluminium parts are denoted by blue colour and the gate insulator by red colour. \textbf{b}, Circuit diagram of the detector and a simplified measurement setup. The heater and probe signals, denoted by subscripts h and p, respectively, are combined at room temperature. The microwave reflection coefficient for the probe tone is denoted by $\Gamma$. \textbf{c}, Reflected fraction of the probe power $P_{\rm{p}}$ as a function of the probe frequency $f_{\rm{p}}$ for the indicated gate voltages $V_{\rm{g}}$ and heater powers $P_{\rm{h}}$ at the bath temperature $T_\textrm{b}=55$~mK. \textbf{d}, Considered thermal model. The electrons in the graphene are coupled to the cryostat phonons through an effective thermal conductance $\tilde{G} = G_{\rm{e-p}}+G_{\rm{diff}}+G_{\rm{photon}}$, which is a sum of the phononic ($G_{\rm{e-p}}$), electron diffusion ($G_{\rm{diff}}$), and photonic ($G_{\rm{photon}}$) thermal conductances (see Methods).}
\label{scheme_bolometer}
\end{figure*}
We place a gate electrode on top of the SGS junction, allowing us to optimize the charge carrier density in the graphene with an electric field. This technique enables us to obtain a low NEP of $30\, \rm{zW}/\sqrt{\rm{Hz}}$ at a thermal time constant of $500\, \rm{ns}$. These results indicate an energy resolution down to a single $30$-GHz photon, which exceeds the performance suggested in ref.~\cite{efetov_graphene_JJ_bolometer}. Importantly, we obtain the NEP and the time constant from direct measurements.
Furthermore, our device exhibits a weak thermal-energy exchange with its environment, with a differential thermal conductance between the graphene bolometer and the phonon bath of the cryostat as low as $0.8\, \rm{fW/K}$, which is less than $2\, \%$ of the quantum of thermal conductance $G_{\rm{Q}}$ at $50\, \rm{mK}$.
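As a quick numerical cross-check of the quoted figures (a sketch using standard constants, not part of the original analysis), the measured $0.8\, \rm{fW/K}$ can be compared with the quantum of thermal conductance at 50 mK:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s

def G_Q(T):
    """Quantum of thermal conductance, pi^2 k_B^2 T / (3 h), in W/K."""
    return math.pi**2 * k_B**2 * T / (3 * h)

G_quantum = G_Q(0.050)       # ~4.7e-14 W/K at 50 mK
ratio = 0.8e-15 / G_quantum  # measured conductance relative to G_Q
```

With these values the measured conductance comes out near 1.7\% of $G_{\rm{Q}}$, consistent with the "less than $2\,\%$" statement.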
The properties of the detector can be tuned with the electric field and with the probe signal power and frequency, providing us with three degrees of freedom to optimize the performance of the detector. The readout frequency can be tuned by roughly 80 MHz (see Extended Data Fig.~\ref{fig:2D_map_Vg}), and the thermal time constant varies from 200~ns to several microseconds with the probe power and electric field.
Let us discuss in detail our measurements of the differential thermal conductance between the graphene bolometer and the phonon bath of the cryostat.
In a steady state, the heat transfer from the electrons in the graphene at temperature $T_{\rm{e}}$ to the cryostat bath at temperature $T_{\rm{b}}$ is $P_{{\rm{e-b}}} (T_{\rm{e}}, T_{\rm{b}}) = P_{\rm{appl}} + P_{\rm{x}}$, where $P_{\rm{x}}$ is referred to as the parasitic heating, $P_{\rm{appl}} = (1-\abs{\Gamma}^2)P_{\rm{p}}$ is the microwave probe power absorbed by the graphene flake, $\abs{\Gamma}^2$ is the microwave reflection coefficient at the gate capacitor $C_{\rm{g}}$ shown in Fig.\ref{scheme_bolometer}b, and $P_{\rm{p}}$ is the probe power incident on the capacitor. We define the differential thermal conductance by
$$ \tilde{G} = - \partial_{T_{\rm{b}}} P_{{\rm{e-b}}} (T_{\rm{e}}, T_{\rm{b}}) $$
and measure it by changing the bath temperature and compensating for the resulting change in the resonance frequency, and hence the electron temperature, by changing the applied power as shown in
Figure \ref{thermal_conductance_bolometer}a.
Since the electron temperature is constant, the change in the applied power fully flows to the bath, and we obtain the differential thermal conductance in
Figure \ref{thermal_conductance_bolometer}b as the derivative of the applied power with respect to the bath temperature from Figure \ref{thermal_conductance_bolometer}a.
We observe that $\tilde{G}$ scales at most linearly with $T_{\rm{b}}$, as does the quantum of thermal conductance $G_{\rm{Q}} = \pi^2 k_{\rm{B}}^2 T_{\rm{b}}/(3h)$, where $k_{\rm{B}}$ is the Boltzmann constant and $h$ is the Planck constant. This scaling is of significantly lower power in temperature than suggested by studies of electron--phonon coupling in monolayer graphene\cite{phononic_coupling_graphene_Antti,Song_e-phonon_coupling_graphene,Betz_cooling_graphene} which have found $\tilde{G} \propto T_{\rm{b}}^{\delta}$
with $\delta \simeq 2-4$ depending on the charge density and the phonon temperature.
This discrepancy tends to indicate that the phononic coupling is not the dominant heat conduction mechanism in our sample.
The observed behaviour is similarly unlikely to arise from electron diffusion since the use of superconducting leads to the graphene flake suppresses this effect\cite{Peltonen2010}. Other processes such as multiple Andreev reflections
may contribute to the heat conduction through the leads\cite{G_diff_with_superconductor}, but their effect is greatly suppressed at the vanishing voltage bias applied across the SGS junction.
\begin{figure}
\centering
\includegraphics[height=6.8cm]{image/resonance_all_Vg.pdf}
\includegraphics[height=6.8cm]{image/G_NEP_TEF.pdf}
\caption{\textbf{Differential thermal conductance and the thermal-fluctuation-limited noise equivalent power.} \textbf{a}, Measured points of constant resonance frequency (markers) in the plane of the cryostat phonon temperature, $T_{\rm{b}}$, and the absorbed power in the graphene, $P_\textrm{appl}$, for the indicated gate voltages and probe frequencies. The dashed lines represent polynomial fits to the data. \textbf{b}, Differential thermal conductance of the graphene electron system (black markers), $\tilde{G}$, as a function of $T_{\rm{b}}$ obtained from the slope of the dashed lines in \textbf{a} at the temperature points of the measured data. The black solid lines are fits to the markers linear on the logarithmic scale. The grey dashed line shows 2.0\% of the quantum of the thermal conductance $G_{\rm{Q}}$. The red markers and lines show the thermal-fluctuation-limited noise equivalent power $\rm{NEP_{TEF}}$ (right vertical axis) corresponding to the differential thermal conductance shown in black colour. The error bars denote $1\sigma$ confidence intervals.}
\label{thermal_conductance_bolometer}
\end{figure}
However, the observed linear temperature dependence of the thermal conductance may be explained by photonic coupling $G_{\rm{photon}} \propto T_{\rm{b}}$ \cite{electron-photon_conduction_graphene}. This photonic thermal conductance should dominate below the crossover temperature $T_{\text{cr}} = \left[ r_0 \pi^2 k_{\text{B}}^2/(15 h \Sigma S_{\text{graphene}}) \right]^{1/2}$ \cite{electron-photon_conduction_Guichard}, where $r_0$ corresponds to the impedance matching between the detector and its electromagnetic environment, the electron-phonon coupling constant $\Sigma$ is a characteristic of the material, and $S_{\rm{graphene}}$ is the area of the electron gas. Considering a typical value\cite{graphene_Betz} for graphene of $\Sigma = 10^{-15}\, \rm{W \text{\textmu} m^{-2}K^{-4}}$ and an impedance matching $r_0>10^{-2}$, one obtains $T_{\rm{cr}} \geqslant 300\, \rm{mK}$. Thus, when operating at $50\, \rm{mK}<\mathit{T}< 200\, \rm{mK}$, the photonic coupling is likely dominant.
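The crossover temperature can be reproduced numerically. This is only a sketch: the flake area $S_{\rm{graphene}} \approx 21\, \text{\textmu}\rm{m}^2$ is an assumed value chosen for illustration (it is not stated in this passage), while $\Sigma$ and $r_0$ are the values given in the text.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s

Sigma = 1e-15        # electron-phonon coupling, W / (um^2 K^4), typical for graphene
S_graphene = 21.0    # flake area in um^2 -- ASSUMED here for illustration
r0 = 1e-2            # impedance-matching factor (lower bound used in the text)

# T_cr = sqrt(r0 * pi^2 * k_B^2 / (15 * h * Sigma * S_graphene))
T_cr = math.sqrt(r0 * math.pi**2 * k_B**2 / (15 * h * Sigma * S_graphene))
```

With these inputs $T_{\rm{cr}}$ comes out near 0.30 K; since $T_{\rm{cr}} \propto \sqrt{r_0}$, any $r_0 > 10^{-2}$ only raises it, consistent with the bound in the text.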
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{image/nep_tau_eres.pdf}
\caption{\textbf{Key characteristic properties of the bolometer.} \textbf{a--c}, Measured noise equivalent power (NEP) (black markers, left axis) and thermal relaxation time $\tau$ (red markers, right axis) of the detector for gate voltages, $V_{\rm{g}} = -2.5\, \rm{V}$ (\textbf{a}), $V_{\rm{g}} = 0\, \rm{V}$ (\textbf{b}), and $V_{\rm{g}} = 2.5\, \rm{V}$ (\textbf{c}), bath temperature $T_\textrm{b}= 55$~mK, and probe power $P_\textrm{p}=530$~aW (\textbf{a}), $P_\textrm{p}=140$~aW (\textbf{b}), and $P_\textrm{p}=370$~aW (\textbf{c}). The dashed horizontal lines indicate the thermal-fluctuation-limited NEPs obtained from the thermal conductance given by Fig.~\ref{thermal_conductance_bolometer}b. The error bars denote $1\sigma$ confidence intervals. \textbf{d-f}, Energy resolution of the bolometer obtained from the NEP and time constant experiments of panels \textbf{a}--\textbf{c}, respectively. See text for details.}
\label{NEP}
\end{figure*}
We define the NEP to be the noise density in the readout signal in units of the absorbed power. The NEP is obtained in practice as the voltage noise in the readout signal divided by the voltage responsivity of the detector to the absorbed power. For convenience, we only measure the quasistatic responsivity and divide it by $\sqrt{1+\left(2\pi \tau f_{\rm{n}}\right)^2}$, which takes into account the thermal cut-off in the responsivity for noise frequencies $f_{\rm{n}}$ higher than the inverse thermal relaxation time $1/\tau$. This is a generally accepted method for obtaining the responsivity, justified by our observations of exponential thermal relaxation dominated by a single time constant (see Extended Data Fig.~\ref{fig:example_time_trace}).
Figures~\ref{NEP}a--\ref{NEP}c show the experimentally obtained NEP and time constant as functions of the probe frequency at three different gate voltages.
The minimum NEP occurs at a gate voltage of $V_{\rm{g}} = 0\, \rm{V}$ for a probe frequency of $f_{\rm{p}}=503\, \rm{MHz}$ and equals $\rm{NEP} = 30\, \rm{zW}/\sqrt{\rm{Hz}}$. Fortunately, this low NEP coincides with an exceptionally short thermal time constant, $500\, \rm{ns}$. The minimum observed time constant is 200~ns. Note that the thermal time constant yields the speed at which the bolometer exchanges energy with its environment, but does not pose a fundamental limit on the operation speed of the device in detecting energy packets. Namely, if the internal thermalization of the electrons in the bolometer is fast, the rising edge of the readout signal can be orders of magnitude faster than the falling edge set by the thermal time constant.
Thus the measured thermal time constant seems promising for applications in circuit quantum electrodynamics (cQED) with state-of-the-art readout times of the order of 100~ns.
Random exchange of energy quanta with the environment of the bolometer leads to fluctuations in the local electron temperature, and hence may significantly contribute to the total noise in bolometers\cite{Penttil__Phonon_noise,karimi_reaching_2020}.
The thermal-fluctuation-limited NEP is given by\cite{doi:10.1063/1.357128} $\rm{NEP}_{\rm{TEF}} = \sqrt{4 \mathit{k}_{\rm{B}}\mathit{T}_{\rm{e}}^2 \mathit{\tilde{G}} }$. Using $T_{\rm{e}} = 55\, \rm{mK}$, $V_{\rm{g}}=0\, \rm{V}$, and the data of Fig.~\ref{thermal_conductance_bolometer}b, we obtain $\rm{NEP}_{\rm{TEF}} = 12\, \rm{zW}/\sqrt{\rm{Hz}}$. This indicates that our bolometer operates close to the thermal bound. Thus improvements of roughly a factor of two in the minimum NEP may be obtained by technical changes of the measurement setup, as was obtained in ref.~\cite{Roope_JPA_biblio} by introducing a nearly quantum-limited amplifier. However, major further progress calls for development of the device itself, such as a redesign of the sample or the use of new materials. For example, reduction of the photonic heat conduction by advanced filtering schemes is likely to improve the NEP.
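The thermal-fluctuation limit can be checked directly from the numbers quoted in the text; a minimal sketch, using the measured conductance of $0.8\, \rm{fW/K}$:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T_e = 0.055          # electron temperature, K
G_tilde = 0.8e-15    # differential thermal conductance, W/K

# NEP_TEF = sqrt(4 k_B T_e^2 G)
NEP_TEF = math.sqrt(4 * k_B * T_e**2 * G_tilde)  # W / sqrt(Hz)
NEP_TEF_zW = NEP_TEF / 1e-21                     # in zW / sqrt(Hz)
```

This evaluates to roughly $12\, \rm{zW}/\sqrt{\rm{Hz}}$, matching the value in the text.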
In order to maximise the absorption coefficient, we operate the heater signal at a frequency close to the resonance frequency, typically around $500\, \rm{MHz}$. On the other hand, this optimization prevents us from measuring directly the single-photon calorimetric energy resolution of the device which would require much higher energies for the absorbed photons. However, we extract the energy resolution using the NEP\cite{Moseley1984Thermal, enss2005cryogenic}, $\Delta E = \left( \int_0^{\infty} \frac{4 \textrm{d}f_n}{\text{NEP}(f_n)^2} \right)^{-1/2}$, and show the results in Figs.~\ref{NEP}d--\ref{NEP}f. The finest energy resolution of $20\, \rm{yJ}$ is obtained at $55\, \rm{mK}$. This corresponds to the energy of a single $30$-GHz photon, or alternatively five 6-GHz photons which is a satisfactory frequency and photon number scale for the usual readout of superconducting qubits.
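For the single-pole roll-off $\rm{NEP}(f_{\rm{n}}) = \rm{NEP}_0 \sqrt{1+(2\pi\tau f_{\rm{n}})^2}$ described earlier, the energy-resolution integral has the closed form $\Delta E = \rm{NEP}_0 \sqrt{\tau}$. The following sketch (an illustrative consistency check, not the authors' analysis pipeline) verifies this and recovers the quoted $\sim$20-yJ figure:

```python
import math

NEP0 = 30e-21     # zero-frequency NEP, W/sqrt(Hz)
tau = 500e-9      # thermal time constant, s

# Analytic: integral_0^inf 4 df / (NEP0^2 (1 + (2 pi tau f)^2)) = 1 / (NEP0^2 tau)
dE_analytic = NEP0 * math.sqrt(tau)   # J, ~2.1e-23 J = 21 yJ

# Crude numerical check of the same integral (midpoint rule)
n, f_max = 200000, 2e9
df = f_max / n
integral = sum(4.0 / (NEP0**2 * (1 + (2 * math.pi * tau * (i + 0.5) * df)**2))
               for i in range(n)) * df
dE_numeric = integral**-0.5
```

Both routes give about 21 yJ, consistent with the "20 yJ" resolution and with a single $30$-GHz photon energy $h \times 30\,\rm{GHz} \approx 20\,\rm{yJ}$.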
The bolometer also seems an ideal detector candidate for an alternative qubit readout scheme demonstrated in ref.~\cite{Opremcak1239}, where the total energy subject to the first-stage measurement device may be even higher than in the usual qubit readout, for example, $100\textrm{ yJ}=h\times 150\textrm{ GHz}$ in ref.~\cite{Opremcak1239}.
Alternatively, a lower bound of the energy resolution can be estimated from the thermal conductance and time constant. A simple thermal model with a single thermal relaxation time, as shown in Fig.\ref{scheme_bolometer}d, is enough to describe our observations. Thus we may estimate the heat capacity of the graphene as $C_{\rm{e}} = \tilde{G} \tau$. For $f_{\rm{p}}=503\, \rm{MHz}$, $V_{\rm{g}}=0\, \rm{V}$, $\tau=500\, \rm{ns}$ and $T_{\rm{b}} = 50\, \rm{mK}$, we obtain $C_{\rm{e}} = 2.5 \times 10^{-22} \rm{JK}^{-1}$, which corresponds to $1.2\times 10^{-23} \, \rm{JK}^{-1}\text{\textmu}m^{-2} =0.87\times k_\textrm{B}\text{\textmu}m^{-2}$. It is in rather good agreement with the literature value of roughly $2\mathit{k}_{\rm{B}}\, \text{\textmu} m^{-2}$ at $50\, \rm{mK}$ assuming a linear temperature dependence of the heat capacity\cite{specific_heat_graphene}. From this heat capacity we estimate the standard deviation of energy owing to the fundamental thermal fluctuations using the equation $\Delta E_\textrm{th} = \sqrt{\mathit{k}_{\rm{B}} \mathit{T}_{\text{e}}^{2} \mathit{C}_{\rm{e}} } = 3\, \rm{yJ}= \mathit{h} \times 4.4\, \rm{GHz}$ at $50\, \rm{mK}$.
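The thermal-fluctuation energy uncertainty follows directly from the quoted heat capacity; a minimal numerical check:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s

C_e = 2.5e-22        # heat capacity estimated in the text, J/K
T_e = 0.050          # K

dE_th = math.sqrt(k_B * T_e**2 * C_e)  # J, ~3 yJ
f_equiv = dE_th / h                    # equivalent single-photon frequency, Hz
```

This reproduces $\Delta E_\textrm{th} \approx 3\, \rm{yJ} = h \times 4.4\, \rm{GHz}$.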
This article experimentally demonstrates an ultrafast low-noise graphene bolometer operating in the microwave range: measured thermal conductance as low as $0.8\, \rm{fW/K}$, thermal relaxation time $\tau$ down to $200\, \rm{ns}$, and NEP as low as $30\, \rm{zW}/\sqrt{\rm{Hz}}$ which is close to the corresponding thermal-fluctuation limit $\rm{NEP}_{\rm{TEF}} = 12\, \rm{zW}/\sqrt{\rm{Hz}}$.
The achieved low noise level and fast response surpass the threshold for applications in circuit quantum electrodynamics where ultralow powers are used and detected in time scales orders of magnitude shorter than the typical state-of-the-art coherence times of roughly 100~{\textmu}s\cite{cQEd_coherence_time_100us}.
Our experiments indicate an energy resolution of our device in the yoctojoule range in the calorimeter mode. It sets the detection threshold at a single $30$-GHz microwave-photon level which is, to the best of our knowledge, the finest resolution reported for a thermal detector.
Interestingly, the estimated low heat capacity of the bolometer implies a thermal energy uncertainty of only $h\times 4.4$~GHz, which suggests that such a fine energy resolution
may be achievable already with the present device by technical improvements in the measurement setup, such as the integration of a nearly quantum-limited amplifier into the readout circuit\cite{Roope_JPA_biblio}. In the future, we aim to use bolometers in the framework of circuit quantum electrodynamics to study quantum phenomena with photon-number-based detection, free of quantum noise stemming from the Heisenberg uncertainty relations. Furthermore, unusual Josephson physics\cite{Wiedenmann2016} seems interesting for utilisation in bolometers.
\bibliographystyle{naturemag}
\section{Introduction}
\label{Sec::Introduction}
When traveling in the heliosphere, energetic charged particles are spatially diffused, magnetically drifted,
advected and decelerated by the solar wind and its embedded magnetic field.
Due to these effects, the observed energy spectrum of Galactic cosmic rays (GCRs) inside the heliosphere
is significantly different from the local interstellar spectrum (LIS) outside the heliosphere.
Moreover, the modifications of the GCR intensities and energy spectra
are time dependent and follow the Sun's variability.
This phenomenon is referred to as \emph{solar modulation} of GCRs.
Understanding solar modulation is very important in GCR physics,
either to infer the origin of GCRs or to investigate the dynamics of charged particles in the heliospheric turbulence \cite{Moraal2013,Potgieter2013}.
Modeling the evolution of the GCR radiation in the heliosphere is also important for crewed space missions and for assessing the
radiation hazard to electronic components during long-duration missions.
Along with the Voyager-1 data beyond the heliosphere \cite{Cummings2016},
the new precise data from AMS-02 \cite{Aguilar2018PHeVSTime,Aguilar2018LeptonVSTime} and PAMELA \cite{Adriani2013,Martucci2018}
experiments offer a unique possibility to study the solar modulation over a long period of time.
\section{The Numerical Model}
\label{Sec::Model}
The propagation of GCRs in the heliosphere is governed by the Parker equation for their phase space density $f(t,R)$\,\cite{Moraal2013}:
\begin{equation}
\label{Eq::Parker}
\frac{\partial f}{\partial t}
= \nabla\cdot [\mathbf{K}^{S}\cdot\nabla f ]
- (\vec{V}_{sw} + \vec{V}_D) \cdot\nabla f
+ \frac{1}{3}(\nabla \cdot\vec{V}_{sw})\frac{\partial f}{\partial \ln R}
\end{equation}
where $R=p/Z$ is the rigidity of GCRs (momentum/charge ratio),
$\vec{V}_{sw}$ is the speed of the solar wind, $\vec{V}_{D}$ is the drift speed,
and $\mathbf{K}^{S}$ is the symmetric component of the GCR diffusion tensor.
The particle flux $J=J(t,R)$ is eventually given by $J=\frac{\beta{c}}{4\pi}n$, where $n=4{\pi}R^{2}f$ is the GCR number density.
In this work, the equation is solved using the \emph{stochastic differential equation} (SDE) method
in steady-state conditions ($\partial/\partial{t}=0$) \cite{Strauss2017},
based on a customized version of the \emph{Solarprop} framework \cite{Kappl2016,Tomassetti2017BCUnc}.
We implemented a 2D model of heliosphere described by radius $r$ and heliolatitude $\theta$ \citep{Fiandrini2021}.
The heliosphere is modeled as a spherical cavity centered on the Sun, from which the wind flows radially.
The wind speed follows a parameterization $V_{sw}(r,\theta,t)$ where, in particular, the latitudinal profile
is time-dependent, \ie, it evolves with solar activity.
The speed is nearly independent of the heliocentric radius, but it drops to subsonic values across the termination shock, at $r_{\rm{TS}}=85$\,AU,
and then vanishes at the heliopause $r_{\rm{HP}}=122$\,AU. The Earth lies in the equatorial plane, at $r_{0}=$1\,AU from the Sun.
The wind carries a frozen-in Heliospheric Magnetic Field (HMF) which is wound up in a rotating spiral structure.
From the solar rotation with a characteristic tilt angle $\alpha$ between magnetic and rotational axis,
a waving Heliospheric Current Sheet (HCS) is generated.
The HCS is a rotating structure which divides the HMF into two hemispheres of opposite polarity.
The $\alpha$-angle has been measured in real time by the Wilcox Solar Observatory (WSO) since the 1970s, on a 10-day basis \cite{Hoeksema1995}.
It ranges from $\sim{0-10}^{\circ}$ during solar minimum (flat HCS) to $\sim{80-90}^{\circ}$ during maximum and reversal (wavy HCS).
The propagation of GCRs in the HMF involves various processes occurring at different spatial scales:
diffusion arises from the erratic random-walk scattering of the particles off the small-scale HMF turbulence.
Drift is due to the large-scale regular component of the HMF, arising from spatial gradients, curvature, and the proximity of the HCS.
Diffusion and drift are included in the symmetric and antisymmetric parts of the diffusion tensor $\mathbf{K}$, respectively:
$\mathbf{K}=\mathbf{K}^S+\mathbf{K}^A$, with $K_{ij}^S = K_{ji}^S$ and $K_{ij}^A = -K_{ji}^A$.
The $\mathbf{K}^{S}$ tensor can also be divided into parallel and perpendicular diffusion coefficients $K_{\parallel}$ and $K_{\perp}$,
or expressed in terms of the mean free paths $\lambda_{\parallel}$ and $\lambda_{\perp}$, such that
$K_{\parallel} = \beta c \lambda_{\parallel}/3$, where $\beta=v/c$ is the particle speed in units of the speed of light.
The perpendicular diffusion length $\lambda_{\perp}$
is assumed proportional to the parallel one, $\lambda_{\perp}= \xi \lambda_{\parallel}$, with $\xi \cong\,0.02$ \cite{Giacalone1999}.
The rigidity dependence of the GCR diffusion coefficients arises from the
cyclotron resonance condition of GCR scattering on the HMF irregularities, occurring when the Larmor radius $r_{L}=r_{L}(R)$
is comparable with the spatial scale size of the irregularities $\hat{\lambda}$.
From the condition $r_{L} \sim \hat{\lambda}$, it turns out that GCRs with rigidity $R$ resonate at wave number $k_{\rm{res}} \sim 1/R$.
The spatial scale of irregularities however follows a turbulence spectrum of the type $w(k) \propto k^{-\eta}$,
in terms of wave number $k=2\pi/\lambda$.
The index $\eta$ depends on type and spatial scales of the turbulence energy cascade.
Thus, $\lambda_{\parallel}$ will depend on rigidity as $\lambda_{\parallel} \sim R^{2-\eta}$.
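The scaling just derived maps a turbulence spectral index directly onto a rigidity slope; a trivial sketch for two standard spectra (illustrative values, not fit results):

```python
def lambda_parallel_slope(eta):
    """Rigidity exponent of the parallel mean free path under cyclotron
    resonance: lambda_par ~ R^(2 - eta) for a spectrum w(k) ~ k^-eta."""
    return 2.0 - eta

kolmogorov = lambda_parallel_slope(5.0 / 3.0)  # Kolmogorov cascade -> 1/3
kraichnan = lambda_parallel_slope(3.0 / 2.0)   # Iroshnikov-Kraichnan -> 1/2
```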
On a wide range of scale, various regimes can be distinguished for the HMF power spectrum \cite{Kiyani2015}.
A good parameterization for the rigidity dependence of $\lambda_{\parallel}$
is the \emph{double power-law} function, defined by two spectral indices $a$ and $b$ and a critical rigidity value $R_{k}$ \cite{Potgieter2013}.
For $K_{\parallel}$ we have adopted the following description:
\begin{equation}\label{Eq::Par_diff}
K_{\parallel} = \frac{K_{0}}{3}\beta \left(\frac{B_0}{B}\right) \left(\frac{R}{R_0}\right)^a
\times \left[ \frac{(R/R_0)^h + (R_k/R_0)^h }{1 + (R_k/R_0)^h} \right]^{\frac{b-a}{h}}
\end{equation}
where $K_{0}$ is a normalization constant in units of $10^{23}\,\rm{cm^{2}\,s^{-1}}$ and $R_{0}\equiv$\,1\,GV sets the rigidity scale.
The HMF magnitude is $B$, while $B_{0}$ is the \emph{local} field value at $r_{0}=$\,1\,AU.
The parameters $a$ and $b$ set the two slopes of the rigidity dependence below and above $R_{k}$, respectively.
The smoothness of the transition is regulated by the parameter $h$.
The perpendicular mean free path follows from $\lambda_{\perp}= \xi \lambda_{\parallel}$, with the
addition of polar corrections \cite{Heber1998}.
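Eq.\,\ref{Eq::Par_diff} can be sketched in a few lines. The parameter values below are illustrative placeholders, not the fitted ones; the low-rigidity factor is written as $(R/R_0)^a$ so that $a$ and $b$ are the asymptotic log-slopes below and above $R_k$, as described in the text.

```python
import math

def K_parallel(R, beta, B, K0=1.0, a=0.7, b=1.3, h=3.0, Rk=4.0, B0=5.0, R0=1.0):
    """Double power-law parallel diffusion coefficient.
    R, Rk, R0 in GV; B, B0 in nT; K0 in units of 1e23 cm^2/s.
    All parameter values are illustrative placeholders."""
    bracket = ((R / R0)**h + (Rk / R0)**h) / (1 + (Rk / R0)**h)
    return (K0 / 3.0) * beta * (B0 / B) * (R / R0)**a * bracket**((b - a) / h)

# Numerical log-slopes well below and well above Rk
slope_low = math.log(K_parallel(0.02, 1.0, 5.0) / K_parallel(0.01, 1.0, 5.0)) / math.log(2)
slope_high = math.log(K_parallel(400.0, 1.0, 5.0) / K_parallel(200.0, 1.0, 5.0)) / math.log(2)
```

The recovered slopes approach $a$ and $b$ in the two regimes, with the parameter $h$ controlling the smoothness of the transition around $R_k$.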
The parameters regulating GCR diffusion are subjected to temporal evolution following the Solar Cycle \cite{Manuel2014}.
The temporal evolution of the diffusion parameter set $\{K_{0}, a, b\}$ is determined by a global fit to the monthly data of
AMS-02 and PAMELA \cite{Aguilar2018PHeVSTime,Adriani2013,Martucci2018}.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.85\textwidth,scale=0.45]{./ccFig_FluxDataVSTime.pdf}
\caption{\footnotesize{BR averaged flux $J_{0}$ evaluated in the reference energy range between 0.49-0.62 GeV
from PAMELA (open squares)\citep{Adriani2013Protons,Martucci2018}
and AMS-02 (filled circles) \citep{Aguilar2018PHeVSTime,Aguilar2018LeptonVSTime}.
The vertical dashed line shows the epoch of the HMF polarity inversion, along with the shaded area indicating the reversal epoch.}}
\label{Fig::ReferenceFlux}
\end{figure*}
The intensities of the GCR proton fluxes in the energy range between 0.49 and 0.62\,GeV
are shown in Fig.\,\ref{Fig::ReferenceFlux} as a function of time for both the PAMELA and AMS-02 data sets.
Along with the three GCR \emph{diffusion parameters}, we identify a set of three \emph{heliospheric parameters} that describe the
status of the modulation region at a given epoch. They are the HCS tilt angle $\alpha$, the local value of the HMF $B_{0}$,
and the magnetic polarity $A$, where the latter is defined as the sign of the Sun's magnetic field in the outgoing direction from its North pole.
The parameter set $\{\alpha, B_{0}, A\}$ is also time dependent.
Finally, to compute the modulation according to Eq.\,\ref{Eq::Parker}, an input LIS model should be specified as boundary condition.
Models of LIS include Galactic astrophysics processes such as acceleration and interstellar propagation.
To compute the GCR proton LIS, we employ calculations from recent works \cite{Tomassetti2015TwoHalo,Tomassetti2012Hardening,Feng2016,Tomassetti2018PHeVSTime}.
Our proton LIS was tightly constrained with low-energy interstellar data from Voyager-1 at $\sim$\,100\,--\,500\,MeV of kinetic energy \cite{Cummings2016},
and with AMS-02 high-energy data at $E\gtrsim$\,100\,GeV \cite{Aguilar2018PHeVSTime,Aguilar2015Proton,Aguilar2015Helium}.
The resulting proton LIS agrees fairly well with other recent models
\cite{Boschini2017,Corti2019,Tomassetti2017TimeLag,Tomassetti2015PHeAnomaly,Tomassetti2017Universality}.
\section{The parameter extraction}
\label{Sec::Extraction}
We model the time-dependence of the problem
by making use of a continuous series of equilibrium solution of Eq.\,\ref{Eq::Parker},
where each solution is obtained for a given set of six input parameters.
The three \emph{heliospheric parameters} are obtained using a backward moving average (BMA) of
observations by WSO observatory and by \emph{in situ} measurements of the ACE space probe.
For a given epoch $t$, the average is calculated within a time window $[t-\Delta{T}, t]$, with $\Delta{T}=6-12$\,months.
The window is chosen so that the BMA values of $\hat{\alpha}$ (from WSO) and $\hat{B}_{0}$ (from ACE) reflect
the average HMF conditions sampled by GCRs arriving at Earth \cite{Fiandrini2021,Tomassetti2017TimeLag}.
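A minimal sketch of the backward moving average, with synthetic numbers standing in for the WSO tilt-angle series (the data values below are illustrative only; the 10-day cadence matches the WSO sampling quoted earlier):

```python
def backward_moving_average(times, values, t, window):
    """Average of all samples falling in the backward window [t - window, t]."""
    sel = [v for ti, v in zip(times, values) if t - window <= ti <= t]
    return sum(sel) / len(sel)

# Synthetic 10-day cadence over two years (toy numbers, not WSO data)
times = [10 * i for i in range(73)]           # days
alphas = [20 + 0.5 * i for i in range(73)]    # degrees, synthetic trend
alpha_hat = backward_moving_average(times, alphas, t=720, window=270)  # ~9 months
```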
The remaining \emph{diffusion parameters} $K_{0}$, $a$, and $b$ have been determined with a global fit on the GCR proton measurements from AMS-02 and PAMELA.
To fit the GCR data, we have built a six-dimensional grid. Each node of the grid corresponds to a configuration of the
vector $\vec{q}=$ ($\alpha$, $B_0$, $A$, $K_0$, $a$, $b$). The grid has a total number of 938,400 nodes.
Using the stochastic technique, the GCR proton spectrum $J_{m}(E, \vec{q})$ was evaluated for each node of the grid
at kinetic energies from 20 MeV to 200 GeV.
This task required the simulation of 14 billion trajectories, corresponding to several months of CPU time.
For each trajectory, the pseudoparticles were propagated backward in time from Earth to the heliopause and then re-weighted according to the LIS.
Once the proton grid was completed, the parameters were inferred using the GCR proton data.
Using the measured fluxes $J_{d}(E,t)$ made at epoch $t$ and the model calculation $J_{m}(E,\vec{q})$ with the
heliospheric parameters $\{\hat{\alpha},\hat{B}_{0},\hat{A}\}$ fixed by the BMA procedure,
a global $\chi^{2}$ function was calculated as follows:
\begin{equation} \label{Eq::ChiSquare}
\chi^{2}(K_{0},a,b)= \sum_{i} \frac{\left[ J_{d}(E_{i},t) - J_{m}(E_{i}, \vec{q}) \right]^{2}}{\sigma^{2}(E_{i},t)}
\end{equation}
The best-fit diffusion parameters were then obtained by the minimization of the $\chi^{2}$ function.
In Eq.\,\ref{Eq::ChiSquare}, the errors are given by $\sigma^{2}(E_{i},t) = \sigma_{d}^{2}(E_{i},t) + \sigma_{mod}^{2}(E_{i},t)$, \ie,
by the sum in quadrature of several contributions: experimental uncertainties in the data, theoretical uncertainties
of the model, and errors associated with the minimization procedure.
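Conceptually, the fit of Eq.\,\ref{Eq::ChiSquare} reduces to a brute-force scan over the precomputed grid; a toy sketch (the grid nodes and flux values below are made up for illustration, not actual model output):

```python
def chi_square(J_data, sigma, J_model):
    """Global chi^2 for one epoch: data vs model flux on a common energy grid."""
    return sum((d - m)**2 / s**2 for d, m, s in zip(J_data, J_model, sigma))

def best_fit(J_data, sigma, model_grid):
    """Brute-force minimisation over precomputed nodes (K0, a, b)."""
    return min(model_grid, key=lambda q: chi_square(J_data, sigma, model_grid[q]))

# Toy grid with three nodes mapping (K0, a, b) -> model flux
grid = {(1.0, 0.7, 1.2): [10.0, 5.0],
        (2.0, 0.8, 1.3): [12.0, 6.0],
        (3.0, 0.9, 1.4): [14.0, 7.0]}
q_best = best_fit([12.1, 5.9], [0.5, 0.3], grid)
```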
\begin{figure*}[hbt]
\centering
\includegraphics[width=0.85\textwidth,scale=0.50]{./ccFig_TransportParameters.pdf}
\caption{\footnotesize{Results for the best-fit model parameters $K_{0}$, $a$, and $b$ determined using the time-resolved proton flux measurements
from PAMELA (open squared) and AMS-02 (filled circles).
In panel (d), the monthly averaged and smoothed SSN is shown. The vertical dashed line indicates the reversal epoch $T_{\rm{rev}}$
and the shaded area around it shows the transition epoch where the HMF polarity is weakly defined.}}
\label{Fig::BestFitParametersVSTime}
\end{figure*}
\section{Results and discussion}
\label{Sec::Results}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.85\textwidth,scale=0.45]{./ccFig_MeanFreePath.pdf}
\caption{\footnotesize{Envelope of the diffusion mean free paths $\lambda_{\parallel}$ as function of GCR rigidity inferred in the examined period (pink band).
The shaded green box corresponds to the Palmer consensus for reference to observational data on $\lambda_{\parallel}$ \citep{Palmer1982}.}}
\label{Fig::MeanFreePath}
\end{figure*}
For the diffusion parameters, our least-square minimization procedure returned a time-series of
best-fit values and their corresponding uncertainties. The results of the fitting are shown in Fig.\,\ref{Fig::BestFitParametersVSTime}.
In the figure, we plot the temporal dependence of the parameters $K_{0}(t)$ (a), $a(t)$ (b), $b(t)$ (c), along with
the corresponding evolution of the monthly/smoothed sunspot number (SSN) (d) as a proxy of the solar activity cycle.
The color codes represent the data used to make the fits, \ie, the time-series of GCR fluxes from AMS-02 (green filled circles) and PAMELA (blue open squares).
The considered period covered a significant fraction of the Solar Cycle, including the magnetic reversal phase around $T=T_{\rm{rev}}$, indicated by the shaded band,
where the HMF polarity $A$ switched from positive to negative.
From the figure, it can be seen that the diffusion parameters show a remarkable temporal dependence, and such a dependence is well correlated with solar activity.
The diffusion normalization parameter $K_{0}$ shows a clear temporal dependence and a marked anti-correlation with the monthly SSN.
The parameter appears to be maximum in the $A<0$ epoch before
reversal ($t\ll{T_{\rm{rev}}}$), and in particular during the long solar minimum of 2009-2010.
The minimum of $K_{0}$ is reached during solar maximum of 2014, about one year after the reversal.
Physically, larger $K_{0}$ values imply faster GCR diffusion, thereby causing a minor modification of the LIS, \ie, a higher GCR flux at the GeV scale.
In contrast, lower $K_{0}$ values imply slower diffusion and a stronger attenuation of the GeV flux.
This behavior can be interpreted within the Force-Field model where, in fact, one has $\phi\propto 1/K_{0}$ \citep{Tomassetti2017BCUnc}.
Within the Force-Field model, the parameter $\phi$ is interpreted as the average kinetic energy loss of GCR protons inside the heliosphere.
Thus, one expects a positive correlation between $K_{0}(t)$ of Fig.\,\ref{Fig::BestFitParametersVSTime} and the
reference GCR flux $J_{0}=J(t,E)$ of Fig.\,\ref{Fig::ReferenceFlux}.
Our findings are in agreement with earlier works \citep{Manuel2014,Tomassetti2017TimeLag,Corti2019}.
Interestingly, the diffusion index $b$ shows a distinct time dependence, while the index $a$ has milder variations.
This suggests that the turbulence spectrum in the inertial range
evolves as a function of the solar activity, with a clear delayed peak at the solar maximum.
The inferred spectral index of the turbulence in the energy-containing range is about $\nu_{ec}=0.79\pm0.13$ in the examined period.
In the inertial range, the index evolves from $\nu_{in}= 0.74\pm0.08$ at solar minimum to $\approx{1.3}\pm0.15$ during the solar maximum.
In both ranges, our parameters are in agreement with the measured slopes of the HMF power spectrum on Jan-Feb 2007 \cite{Kiyani2015}.
In most numerical models of solar modulation, these parameters are usually assumed to be time-independent.
Variations in these parameters imply changes in the HMF turbulence spectrum \cite{Usoskin2019,Horbury2005}.
Figure\,\ref{Fig::MeanFreePath} shows the envelope of the mean free paths $\lambda_{\parallel}$ for parallel diffusion inferred in the examined period.
It can be seen that our results are in agreement with the Palmer consensus, \ie, the
large collection of observational measurements on the scattering mean free path \citep{Palmer1982}.
\section{Acknowledgements}
\label{Sec::Acknowledgements}
We acknowledge the support of ASI
under agreement \emph{ASI-UniPG 2019-2-HH.0}.
\section{Introduction}
The power sum method of Tur\'an (see Tur\'an \cite{Turan} or Montgomery \cite{Montgomery} Chapter 5) allows us to obtain lower bounds for power sums
\begin{gather}
\max_{\nu=N(n),\ldots,M(n)} \abs{g(\nu)}, \\ \intertext{where}
g(\nu)=\sum_{k=1}^n b_k z_k^\nu \\ \intertext{for $z_k$ and $b_k$ complex numbers, and $M(n)-N(n) \geq n$, where $M(n)$ and $N(n)$ are functions of $n$. We will henceforth assume that the $b_k > 0$ are positive real numbers. In particular we are interested in the case of {\em pure} power sums ($b_k=1$) }
S(\nu)= \sum_{k=1}^n z_k^\nu,
\end{gather} and the minimum norm $\min_{k} |z_k| = 1$. We will also assume that $N(n)=1$. In this case a number of results have been proved.
\begin{align*}
\max_{\nu=1,\ldots,n} \abs{S(\nu)}&\geq 1, \qquad &\text{(Tur\'an \cite{Turan2})} \\
\max_{\nu=1,\ldots,2nm-m(m+1)+1} \abs{S(\nu)}&\geq \sqrt{m}. \qquad (1 \leq m \leq n) \qquad &\text{(Andersson \cite{Andersson})}
\\ \intertext{Under the minimum norm condition it seems reasonable that the minimal systems
$(z_1,\ldots,z_n)$ which minimize these expressions actually lie on, or very close to, the unit circle. This has been difficult to prove, and in fact in the case when the $z_k$ are unimodular, Newman, Cassels and Szalay have independently proved the stronger result}
\max_{1 \leq \nu \leq c n} \left| \sum_{k=1}^n z_k^\nu \right|&\geq \sqrt{\frac {cn-n+ 1} {c}}. \qquad &\text{(\cite{Turan}, Theorem 7.3)}
\end{align*}
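Tur\'an's lower bound of $1$ is sharp: taking $z_k$, $k=1,\ldots,n$, to be all $(n+1)$-th roots of unity except $1$, every pure power sum $S(\nu)$ with $1 \leq \nu \leq n$ equals $-1$, since the full set of $(n+1)$-th roots sums to zero. A quick numerical check of this classical extremal system:

```python
import cmath

def S(z, nu):
    """Pure power sum S(nu) = sum_k z_k^nu."""
    return sum(zk**nu for zk in z)

n = 8
# z_k = exp(2 pi i k/(n+1)), k = 1..n: all (n+1)-th roots of unity except 1
z = [cmath.exp(2j * cmath.pi * k / (n + 1)) for k in range(1, n + 1)]
max_abs = max(abs(S(z, nu)) for nu in range(1, n + 1))  # equals 1
```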
\section{One sided bounds}
We will denote
\begin{gather} \label{star3}
g(\nu)=\sum_{k=1}^n b_k e(\theta_k \nu),
\end{gather}
where $\theta_k$ are real numbers and $b_k >0$. We will let
\begin{gather*}
A= g(0)=\sum_{k=1}^n b_k, \qquad \text{and} \qquad B=\sum_{k=1}^n b_k^2.
\end{gather*}
In this section we will furthermore assume that $g$ is real valued. In particular this implies that
\begin{gather} \label{hhh}
g(\nu)=g(-\nu).
\end{gather}
We will also let
\begin{gather*}
g^+(\nu)=\begin{cases} g(\nu), & g(\nu)>0, \\ 0, & \text{otherwise,} \end{cases} \hskip -13pt \qquad \text{and} \hskip -13pt \qquad g^-(\nu)=\begin{cases} g(\nu), & g(\nu)<0, \\ 0, & \text{otherwise.}
\end{cases}
\end{gather*}
It is clear that
\begin{gather} \notag
g(\nu)= g^+(\nu)+g^-(\nu), \\ \intertext{and}
\abs{g(\nu)}= g^+(\nu)-g^-(\nu). \label{oj23}
\end{gather}
Our method of proof will use the Fej\'er kernel
\begin{gather} \label{star1}
F_{m+1}(x)=\sum_{\nu=-m}^{m}\p{1-\frac{\abs{\nu}}{m+1}} e(\nu x).
\\ \intertext{The Fej\'er kernel can be written as}
\label{nonneg}
F_{m+1}(x) =\frac 1 {m+1} \left( \frac{\sin \pi (m+1)x}{\sin \pi x} \right)^2,
\end{gather}
and is thus non-negative. We will let
\begin{gather} \label{starr2}
\alpha= \frac 2 {m} \sum_{\substack{g^+(\nu)>0 \\ 1 \leq \nu \leq m}} \p{1-\frac{\nu}{m+1}}, \qquad \text{and} \qquad \beta= \frac 2 {m} \sum_{\substack{g^-(\nu)<0 \\ 1 \leq \nu \leq m}} \p{1-\frac{\nu}{m+1}}.
\end{gather}
From the representation \eqref{nonneg} it follows that $F_{m+1}(0)=m+1$. From this it is clear that
\begin{gather} \label{abbb}
\sum_{\nu=1}^m \p{1-\frac {\nu}{m+1}} = \frac m 2,
\end{gather}
and thus also
\begin{gather} \label{star2}
\alpha+\beta \leq 1.
\end{gather}
\subsection{The first method}
\begin{lem}
One has that
\begin{gather*}
\sum_{\nu=1}^{m} \p{1-\frac {\nu}{m+1}} \abs{g(\nu)}^2 \geq \frac{(m+1) B-A^2} 2.
\end{gather*}
\end{lem}
\begin{proof} We have that
\begin{gather*}
\sum_{\nu=-m}^{m} \p{1-\frac{\abs{\nu}}{m+1}} \abs{g(\nu)}^2 = \sum_{k,l=1}^n b_k b_l F_{m+1}(\theta_{k}-\theta_{l}), \\
\intertext{which by the contribution of the diagonal $k=l$, and the non-negativity of the Fej\'er kernel, eq.~\eqref{nonneg}, implies that}
\sum_{\nu=-m}^{m} \p{1-\frac{\abs{\nu}}{m+1}} \abs{g(\nu)}^2 \geq \sum_{k=1}^n b_k^2 F_{m+1}(0).
\end{gather*}
The result follows by subtracting the term $\nu=0$ and using equation \eqref{hhh}.
\end{proof}
\begin{lem} \label{lem2} Suppose that $\abs{g(\nu)} \leq M$ for $\nu=1,\ldots,m$. Then
\begin{gather*}
\sum_{\nu=1}^{m} \p{1-\frac {\nu}{m+1}} g^+(\nu) \geq \frac {B(m+1) - AM-A^2} {4M}.
\end{gather*}
\end{lem}
\begin{proof}
Since $g^+(\nu)=(\abs{g(\nu)}+g(\nu))/2$ and
obviously $\abs{g(\nu)} \geq \abs{g(\nu)}^2/M$ when $\nu \neq 0$ we have that
\begin{gather*}
\sum_{\nu=1}^{m} \p{1-\frac {\nu}{m+1}} g^+(\nu) \geq \frac 1 {2} \sum_{\nu=1}^{m} \left( 1-\frac{\nu}{m+1} \right) \left(\frac{\abs{g(\nu)}^2} {M} + g(\nu) \right) .
\end{gather*}
The first term can be estimated by Lemma 1 and gives the contribution
\begin{gather*}
\frac {B(m+1) -A^2} {4M}.
\end{gather*}
The second term can be investigated by use of the Fej{\'e}r kernel. By the non-negativity of the Fej\'er kernel, equation \eqref{nonneg}, we have that
\begin{gather*}
\sum_{\nu=-m}^{m} \left( 1-\frac{\abs{\nu}}{m+1} \right) g(\nu) \geq 0.
\end{gather*}
By the fact that $g(0)=A$ and using equation \eqref{hhh} we find that
\begin{gather}
\sum_{\nu=1}^{m} \left( 1-\frac{\nu}{m+1} \right) g(\nu) \geq - \frac A 2,
\end{gather}
which gives the remaining contribution to our Lemma.
\end{proof}
We now prove the following Theorem.
\begin{thm}
Suppose that $\abs{g(\nu)} \leq M$ for $\nu=1,\ldots,m$. Then one has that
\begin{gather*}
\max_{\nu=1,\ldots,m} g^+(\nu) \geq \frac {B(m+1)-AM-A^2} {2M m}.
\end{gather*}
\end{thm}
\begin{proof}
This follows from Lemma 2 and equation \eqref{abbb}.
\end{proof}
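As a numerical illustration (not needed for the proof), Theorem 1 can be checked on a randomly generated real-valued exponential sum: pairing frequencies $\pm\varphi_j$ with equal positive weights makes $g$ real, after which the bound may be compared with the actual maximum of $g^+$. The Python sketch below uses arbitrarily chosen parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
K, m = 3, 100
phis = rng.uniform(0.05, 0.45, K)   # frequencies theta_k, paired as +phi, -phi
c = rng.uniform(0.5, 2.0, K)        # positive weights, each used twice
b = np.repeat(c, 2)
A, B = b.sum(), (b ** 2).sum()      # A = sum b_k, B = sum b_k^2

nus = np.arange(1, m + 1)
# g(nu) = sum_k b_k e(theta_k nu) is real-valued by the +/- pairing
g = 2.0 * (c[:, None] * np.cos(2 * np.pi * np.outer(phis, nus))).sum(axis=0)
M = np.abs(g).max()                 # a valid choice of M in Theorem 1
gplus_max = max(g.max(), 0.0)       # max of g^+(nu) over nu = 1,...,m
bound = (B * (m + 1) - A * M - A ** 2) / (2 * M * m)
```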
\subsection{An improvement for large values of $m$}
As $m$ tends to infinity Theorem 1 gives us
\begin{gather*}
\max_ {\nu=1,\ldots,m} g^+(\nu) \geq \frac {B} {2M}-o(1).
\end{gather*}
We will prove a stronger result which allows us to obtain
\begin{gather*}
\max_ {\nu=1,\ldots,m} g^+(\nu) \geq M+2-\frac{2M^2} B-o(1).
\end{gather*}
This will give sharper results when $M \asymp \sqrt B$ and for large $m$.
\begin{thm}
Suppose that $\abs{g(\nu)}\leq M$ for $\nu=1,\ldots,m$. If
$B(m+1)-A^2-mM^2 \geq 0$ then one has that
\begin{gather*}
\max_{\nu=1,\ldots,m} g^+(\nu) \geq \frac{B(m+1)-A^2}{mM}. \\ \intertext{In case $B(m+1)-A^2-m M^2 \leq 0$ one has that}
\max_{\nu=1,\ldots,m} g^+(\nu) \geq M+2 \times \frac{B(m+1)-A^2-mM^2} {B(m+1)-A^2-AM}
\end{gather*}
under the assumption that the denominator in the last fraction is positive.
\end{thm}
\begin{proof}
By equations \eqref{starr2} and \eqref{star2}
it is clear that
\begin{gather} \label{oj22}
\frac 2 m \sum_{\nu=1}^{m} \p{1-\frac{\nu} {m+1}} g^-( \nu) \geq - M (1-\alpha).
\end{gather}
By equation \eqref{oj23} and the fact that $\abs{g(\nu)} \leq M$ we get the inequality
\begin{gather*}
g^+(\nu) \geq \frac{\abs{g(\nu)}^2} M+g^-(\nu).
\end{gather*} By combining this with equation \eqref{oj22} we see that
\begin{align*}
\frac 2 m \sum_{\nu=1}^{m} \p{1-\frac{\nu} {m+1}} g^+( \nu) &\geq \frac 2 m \sum_{\nu=1}^{m} \p{1-\frac{\nu} {m+1}} \frac{\abs{g( \nu)}^2} M - M (1-\alpha),
\\ \intertext{which by Lemma 1 can be estimated by}
&\geq \frac{B (m+1)- A^2} {Mm} - M (1-\alpha). \end{align*}
This together with the definition of $\alpha$, equation \eqref{starr2} implies that
\begin{gather} \label{ttt}
g^+(\nu) \geq
\frac{1} {\alpha} \cdot \p{ \frac{B (m+1)- A^2} {Mm} - M (1-\alpha)} =
\frac {B(m+1)-A^2-mM^2} {\alpha M m}+M
\end{gather}
for some $\nu=1,\ldots,m$.
We see that if $B(m+1)-A^2-mM^2 \geq 0$ then the function is decreasing in $\alpha$ and the minimum over $0 < \alpha \leq 1$ is attained for $\alpha=1$. This gives us case 1. In the case when $B(m+1)-A^2-mM^2 \leq 0$ the function is increasing in $\alpha$ and we use the following estimate
\begin{gather*}
\alpha \geq \frac {B(m+1)-AM-A^2}{2 m M},
\end{gather*}
which follows from Lemma 2. Substituting this value into the right hand side of \eqref{ttt} gives a lower bound, which yields the second part of our theorem. We remark that we also need $B(m+1)-AM-A^2$ to be positive, since otherwise we would get $\alpha<0$.
\end{proof}
\section{A lower bound for power sums}
We will now use our one sided theorems to obtain improved lower bounds for the absolute values of power sums.
\begin{thm}
Let
\begin{gather*}
B_\nu=\sum_{k=1}^n b_k^\nu, \qquad A=B_1^2-B_2, \qquad \text{and} \qquad B=B_2^2-B_4.
\end{gather*}
Then one has that
\begin{gather} \label{i0} \max_{\nu=1,\ldots,m} \abs{g(\nu)}^2 \geq B_2+\frac{B (1+1/m)}{2B_2}- \frac {AB_2+A^2} {2B_2 m}. \\ \intertext{One also has that}
\label{i1}\max_{\nu=1,\ldots,m} \abs{g(\nu)}^2 \geq 2B_2 -2 \times \frac{A^2-B+ m B_4}{B(m+1)-AB_2-A^2}
\end{gather}
when $m \geq (B-A^2)/B_4$ and both the numerator and the denominator in the last fraction are positive (this is true for $m$ sufficiently large).
\end{thm}
\begin{proof}
Let $g(\nu)$ be defined by equation \eqref{star3}. Then
\begin{gather*} \begin{split}
\abs{g(\nu)}^2 &= \sum_{k=1}^n b_k^2+ \sum_{k=1}^{n^2-n} c_k e(\lambda_k \nu) \\ &= B_2+h(\nu), \end{split}
\end{gather*}
where $c_k=b_ib_j$ and $\lambda_k=\theta_i-\theta_j$ for $i \neq j$. It is clear that $h(\nu)$ is real valued and hence we can use the methods of section 2.
Let us now assume that
$|h(\nu)| \leq B_2$ for $\nu=1,\ldots,m$. By using Theorem 1 with $M=B_2$ we have that there exists a $\nu$ with $\nu=1,\ldots,m$ such that
\begin{gather*}
h^+(\nu) \geq \frac {B(m+1)-AB_2-A^2} {2B_2 m}.
\end{gather*}
This implies \eqref{i0} in case $|h(\nu)| \leq B_2$.
We have by the definition of $B$ that
\begin{gather*}
B(m+1)-A^2-mB_2^2 =B-A^2-m B_4
\end{gather*}
which is non-positive if $m \geq (B-A^2)/B_4$, and it follows from Theorem 2 with $M=B_2$ that
\begin{gather*}
h(\nu) \geq B_2 -\frac{2A^2-2B+2mB_4}{B(m+1)-AB_2-A^2}
\end{gather*}
for some $\nu=1,\ldots,m$. This implies \eqref{i1} in case $|h(\nu)| \leq B_2$.
Let us now assume that $|h(\nu)| > B_2$ for some $\nu=1,\ldots,m$.
Since $\abs{g(\nu)}^2=B_2+h(\nu) \geq 0$ this means that $h(\nu) > B_2$ and $\abs{g(\nu)}^2 \geq 2 B_2$. We see that this implies \eqref{i0} since $2 B_2 \geq B_2+(1+1/m)B/(2B_2)$ and the third term on the right hand side in \eqref{i0} is negative. Likewise it implies \eqref{i1} since the last term on the right hand side in \eqref{i1} is negative.
\end{proof}
\section{The pure power sum case}
In the pure power sum case we have that $B_k=n$, $A=B=n^2-n$ in Theorem 3 and it follows that
\begin{cor} One has that
\begin{enumerate}[(i)]
\item
$ \displaystyle \max_{\nu=1,\ldots,m} \abs{S(\nu)}^2 \geq n+ \frac {(-1+n)(1+m-n^2)} {2 m},$
\item $\displaystyle
\max_{\nu=1,\ldots,m} \abs{S(\nu)}^2 \geq 2n -\frac{2(1+m-2n^2+n^3)}{(-1+n)(1+m-n^2)}. \qquad (m>n^2)$
\end{enumerate}
\end{cor}
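Corollary 1 can be sanity-checked numerically for randomly chosen unimodular $z_k$. The following Python sketch (illustrative only, with arbitrary parameters $n=6$, $j=10$) verifies part $(i)$ for one such system.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 6 * 6 + 10                                # m = n^2 + j with j = 10
z = np.exp(2j * np.pi * rng.uniform(0.0, 1.0, n))   # unimodular z_k
# |S(nu)|^2 for the pure power sum S(nu) = sum_k z_k^nu, nu = 1,...,m
S2 = np.array([abs((z ** nu).sum()) ** 2 for nu in range(1, m + 1)])
bound_i = n + (n - 1) * (1 + m - n ** 2) / (2.0 * m)  # Corollary 1 (i)
```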
Corollary 1 $(i)$ improves upon known results for $m$ bigger than $n^2$. In fact it is convenient to write $m=n^2+j$, and we obtain
\begin{thm}
One has for $j \geq 0$ that
\begin{gather*}
\max_{\nu=1,\ldots,n^2+j} \abs{S(\nu)} \geq \sqrt{n + \frac {(1+j)(n-1)}{2(j+n^2)}}.
\end{gather*}
\end{thm}
For $j=0$ and in the case of unimodular numbers $z_k$ it improves slightly on the general lower bound
\begin{gather*}
\max_{\nu=1,\ldots,n^2} \abs{S(\nu)} \geq \sqrt{n}
\end{gather*}
in Tur\'an's problem 10 from Andersson \cite{Andersson}. We obtain
\begin{cor} One has that
\begin{gather*}
\inf_{\abs{z_k}=1} \max_{\nu=1,\ldots,n^2} \abs{S(\nu)} \geq \sqrt{n+\frac 1 {2n}-\frac 1 {2 n^2}}.
\end{gather*}
\end{cor}
Another result, where the lower bound follows from Theorem 4 and the upper bound follows from Montgomery's construction (see Montgomery \cite{Montgomery}, page 101, Example 6), is the following
\begin{cor} Suppose that $n+1$ is a prime number. One then has that
\begin{gather*}
\sqrt{n + \frac 1 2 -\frac {2n-1} {2(n^2+n-1)}}\leq \inf_{\abs{z_k}=1} \max_{\nu=1,\ldots,n^2+n-1} \abs{S(\nu)} \leq \sqrt{n+1}.
\end{gather*}
\end{cor}
We remark that the lower bound holds in general. We see that this approximately halves the previous gap between the upper and lower bound. For further discussions of explicit constructions that yield similar upper bounds in power sum problems, see our paper Andersson \cite{Andersson3}.
In our paper Andersson \cite{Andersson2}, page 17, we considered functions $\Lambda$ that fulfil
\begin{gather*}
\sqrt n \p{\Lambda(\alpha)-o(1)} \leq \inf_{\abs{z_k}=1} \max_{\nu=1,\ldots,\lfloor \alpha n^2 \rfloor} \abs{S(\nu)}.
\end{gather*}
We proved that we can choose $\Lambda(\alpha)=1$ for $\alpha>0$, and furthermore that for $0<\alpha \leq 1$ this choice is best possible. We asked whether the function must be identically $1$ or must be bounded. While we cannot answer whether there exists such an unbounded function, it follows from Corollary 1 that we can choose a $\Lambda$ such that $\lim_{\alpha \to \infty} \Lambda(\alpha)=\sqrt 2$. More specifically we obtain the following Theorem.
\begin{thm} Let $\alpha \geq 1$ be a constant. One then has that
\begin{gather*}
\p{\sqrt{\Phi(\alpha)}-o(1)} \sqrt n \leq \inf_{\abs{z_k}=1} \max_{\nu=1,\ldots,\lfloor \alpha n^2 \rfloor} \abs{S(\nu)} \leq \p{\sqrt{\lceil \alpha \rceil}+o(1)} \sqrt n,
\\ \intertext{where}
\Phi(\alpha)= \begin{cases} \frac 3 2-\frac 1 {2\alpha}, & 1 \leq \alpha \leq 3, \\
2-\frac 2 \alpha, & \alpha \geq 3. \end{cases}
\end{gather*}
\end{thm}
\begin{proof}
The lower bound for $1 \leq \alpha\leq 3$ follows from Corollary 1 $(i)$ with $m = \lfloor \alpha n^2 \rfloor$.
The lower bound for $3 \leq \alpha$ follows from Corollary 1 $(ii)$ with $m = \lfloor \alpha n^2 \rfloor$.
The upper bound follows from Theorem 6 in Andersson \cite{Andersson2}.
\end{proof}
In particular this will give us
\begin{gather*}
\p{\sqrt{\frac 5 4}-o(1)} \sqrt n \leq \inf_{\abs{z_k}=1} \max_{\nu=1,\ldots,2n^2} \abs{S(\nu)} \leq \p{\sqrt 2 +o(1)} \sqrt n.
\end{gather*}
We see that the lower and upper bounds are not the same and we do not yet have the true asymptotics. This contrasts with the case when we take the maximum over the interval $\nu=1,\ldots,n^2$, where we proved (see Andersson \cite{Andersson2})
\begin{gather*}
\inf_{\abs{z_k}=1} \max_{\nu=1,\ldots,n^2} \abs{S(\nu)} \sim \sqrt n.
\end{gather*}
\bibliographystyle{alpha}
\section{Introduction}
Chirality is the property of an object to not be superimposable
with its mirror image through a combination of translations and rotations.
In molecular systems, the mirrored forms, called enantiomers, have almost
entirely identical physical properties and interact indistinguishably
with non-chiral probes.
At the same time, enantiomers can behave very differently in their interaction
with other chiral objects, as evidenced by the role of chirality in
many biochemical and medical processes. The development of better techniques for chiral
discrimination is therefore a very active field of research both from a theoretical
and experimental point of view.
Characterisation of chirality can be achieved \emph{via} chiral observables, \emph{i.e.}
properties which take on different values
for each enantiomer. These techniques either rely on the interaction of the
sample with a \emph{chiral probe} or on the construction of a \emph{chiral setup}
to record the response\cite{Ordonez2018}.
A prototypical chiral observable is circular dichroism (CD), which
has been the subject of many theoretical \cite{Gunde1996,Jansik2005,
Stener2006,Ma2006,Rizzo2008,Horsch2011,Kroner2015} and experimental
\cite{Pulm1997,Boesl2006,Li2006,Bornschlegl2007,Breunig2009} studies.
It is defined as the difference in absorption of circularly
polarised light (acting as the chiral probe) by the two enantiomeric forms of a chiral molecule.
To leading
order, circular dichroism is formed by the interplay between electric dipole and
magnetic dipole transitions.
Due to the generally low magnitude of magnetic dipole transition moments, CD
is a comparatively weak effect, amounting to less than 1\% of the total
absorption signal. In recent years a lot of effort has been invested in the
description and measurement of chiral observables which do not require
involvement of the weak magnetic transition dipole moments, two notable
examples being the photoelectron circular dichroism (PECD) and rotational spectroscopy
with microwave three-wave mixing (M3WM). The enantiomeric contrast
obtained with these techniques reaches values of several percent even with
transform-limited pulses.
Recently, it has been shown that optimal control theory can be used to increase
this value even further by exploiting interference between various photoionization
pathways\cite{Goetz2019}.
For instance,
perfect anisotropy in the photoelectron angular distribution of a randomly
oriented ensemble can be generated by exploiting interferences between
single-photon pathways and a manifold of resonantly enhanced
two-photon pathways \cite{Goetz2019a}.
Prospects for control are even more promising for M3WM where complete
enantiomer-specific population transfer is possible if a suitable combination of
frequencies and polarization for the electric fields driving the three-wave
mixing process is chosen\cite{Leibscher2020}.
These recent
successes in enhancing chiral signatures with shaped pulses
strongly suggest that interference between different excitation
pathways may be a promising avenue to increase the contrast also in CD experiments.
However,
PECD and M3WM are pure electric dipole effects to first order, which leads to strong
transitions and the possibility to attain high contrast with moderate laser
intensities.
Conversely, CD relies on small magnetic dipole transition moments.
This raises the question whether interference effects between different excitation paths can
also be exploited to increase the contrast of the overall much weaker CD signals.
In the pursuit of understanding how to maximise the dichroic signal, the magnitude of
CD both as a function of laser pulse frequency \cite{Horsch2011,Kroner2015}, as
well as duration and envelope \cite{Ma2006} have been theoretically
investigated. The majority of these studies focused on the leading-order
contribution to CD which involves only the electric dipole and magnetic dipole
transition moment in the absorption process. However, it has long been
established that all multipolar terms in the light-matter interaction contribute
to CD \cite{Meath1987}. Indeed,
the electric quadrupole has a noticeable effect in the absorption signatures for
the 1,2-propylene oxide molecule when multiphoton excitations are considered\cite{Kroner2015}.
Even beyond chiral observables, there is recent interest in the study of
nondipole effect for many different physical processes, for example in
photoionisation \cite{Brennecke2018,Brennecke2018a, Hartung2021,Maurer2021} and
high-harmonic generation \cite{Gorlach2020,Jensen2021}.
When attempting to increase the CD signal in an experiment, the final puzzle piece is to transfer the optimal pulses from theory to the lab. This step requires disentangling the physically relevant pulse properties from purely numerical features that are often introduced by optimisation algorithms.
It also relies on an appropriate correspondence between the theoretical figure
of merit used in the optimization and the experimentally measured quantity.
Although experimental determination of absorption CD in the liquid
phase is well-established\cite{Berova2000}, optimal control of chiral
signatures for molecules in the gas phases presents a more adequate framework to
compare theory and experiment.
This is because emerging gas phase techniques allow for
measurements under collision- and interaction-free conditions
\cite{Zehnacker2010,Patterson2014} also in table-top setups, which
therefore serve as the focal point of our investigations.
One way to assess CD is
mapping into the ionisation continuum: by using resonance-enhanced multiphoton
ionisation (REMPI), the helicity-dependent population of the optically active
electronic state is translated into ion yields \cite{Boesl2006,Li2006}. In combination with
time-of-flight laser mass spectrometry it is possible not only to measure the CD
of the parent ion but also of the fragment ions. Successful experiments have
recently been reported for several chiral molecules \cite{Horsch2011,Boesl2013,Hong2014} exploiting the
advent of advanced techniques such as the measurement of differential photoion
CD \cite{Fehre2021} or twin-peak setups for improved statistics adapted to
femtosecond laser pulses \cite{Ring2021}. For resonant processes, ion-yield CD
and absorption CD are closely connected: the normalised difference in ion
yields at a specific resonance is equal to the normalised CD in
extinction \cite{Boesl2013}.
In this paper we investigate, for the first time, to what extent
optimal control can exploit the interaction of the molecule with light via the
transition electric dipole,
magnetic dipole, and electric quadrupole as well as permanent electric dipole
and quadrupole moments to enhance circular dichroism. In order to avoid concealment
of the magnetic-dipole dependent CD signal by strong electric dipole
transitions, we focus on the A--band $n\rightarrow\pi^{\ast}$ transition in fenchone.
This transition is electric dipole-forbidden to first
order\cite{Pulm1997} which allows multipolar signatures to come to the forefront.
By using an effective two-level description together with a physically motivated
parametrisation of the laser pulse, we are able to elucidate the role of different
multipole orders in the optimised protocols.
Moreover, we examine how the optimised
pulses address different molecular orientations
when
maximising CD for an orientationally averaged ensemble.
To stay close to experimental realisation, we also ensure that the pulse parameters
are feasible in state-of-the-art table-top experiments in the femtosecond regime.
This paper is organised as follows: Section 2 introduces our theoretical model
of fenchone and the molecule's interaction with a laser pulse as well as
our control functional and algorithm. Section 3 presents the results from our
optimisations with a particular focus on the role of the permanent electric
dipole and electric quadrupole transition moments for the control protocols.
Finally, Section 4 concludes and presents an outlook for future investigations.
\section{Theoretical Framework}
Setting the stage for an optimal control problem boils down to three main
questions: How do we represent the relevant physical states and model the dynamics of the
molecule under study? How do we encode the physical control target in
a mathematical functional? And finally, which algorithm do we use to minimise,
respectively maximise, the target functional? We begin by addressing the
question of representation and dynamics. To this end, in Section~\ref{ssec:light_chiral} we
introduce the description of the light-matter interaction of a laser field
with a chiral molecule beyond the electric dipole
approximation\cite{Krems2018,Milonni2019}. Then, we discuss the
most important features of the A--band transition in fenchone\cite{Pulm1997} in
Section~\ref{ssec:model}, which allows us to employ a minimal description for
the molecule that still contains all of the relevant physics. Specifically, we
motivate a model involving only two electronic states (the ground state and the
first excited state) and neglecting any additional degrees of freedom. Such
a two-level description does not account for continuum dynamics, but the
absorption step serves as an important first step towards optimising ion-yield
CD experiments -- a high contrast during the absorption step will lead to high
contrast in the ion yield.
Finally in Section~\ref{ssec:oct} we detail how to account for orientational averaging in
optimisations \cite{Goerz2014a},
introduce an optimisation functional
specifically adapted to the task of maximising CD, and discuss which algorithm is particularly
suitable for computing optimised pulses.
\subsection{Light-matter interaction in chiral molecules}\label{ssec:light_chiral}
Within the Born-Oppenheimer approximation, the Hamiltonian describing the
interaction of a molecule with an electromagnetic field using minimum coupling
is given by\cite{Meath1987, Bernadotte2012a, Krems2018, Milonni2019},
\begin{align}
\begin{split}
\hat{H}=&%
\sum_{j=1}^{N}
\frac{1}{2m_e}\left(\hat{\bm{p}}_j-e\bm{A}(\hat{\bm{r}}_j,t)\right)^{2}
-\frac{ge}{2m_e}\sum_{j=1}^{N}\bm{B}(\hat{\bm{r}}_j,t)\cdot \hat{s}_j\\
& -\sum_i\sum_j \frac{Z_ie^2}%
{4\pi\epsilon_0\left|\hat{\bm{R}}_i-\hat{\bm{r}}_j\right|}
+\sum_i\sum_{j>i} \frac{e^2}%
{4\pi\epsilon_0\left|\hat{\bm{r}}_i-\hat{\bm{r}}_j\right|}
\end{split}\\
\begin{split}
=&\sum_{j=1}^{N}\frac{\hat{\bm{p}}_j^{2}}{2m_e}-\sum_{j=1}^{N} \frac{e}{m_e}\bm{A}(\hat{\bm{r}}_j,t)\cdot\hat{\bm{p}}_j\\
& +\frac{e^2}{2m_e}\sum_{j=1}^{N}\bm{A}^2(\hat{\bm{r}}_j,t)
-\frac{ge}{2m_e}\sum_{j=1}^{N}\bm{B}(\hat{\bm{r}}_j,t)\cdot \hat{s}_j\\
& -\sum_i\sum_j \frac{Z_ie^2}%
{4\pi\epsilon_0\left|\hat{\bm{R}}_i-\hat{\bm{r}}_j\right|}
+\sum_i\sum_{j>i} \frac{e^2}%
{4\pi\epsilon_0\left|\hat{\bm{r}}_i-\hat{\bm{r}}_j\right|}.
\end{split}
\label{eq:full_molH}
\end{align}
\textcolor{black}{In Eq.~\eqref{eq:full_molH}, $\hat{\bm{p}}_j$, $\hat{\bm{r}}_j$ and $\hat{s}_j$
are the momentum, position and spin operators for the $j$th electron,
$\hat{\bm{R}}_i$ and $Z_i$ are the position operator and nuclear
charge for the $i$th nuclei,
$\bm{A}(\hat{\bm{r}}_j,t)$ is the vector potential, and
$\bm{B}(\hat{\bm{r}}_j,t)$ the magnetic field. Moreover, the constants $e$,
$m_e$, $g$ and $\epsilon_0$ correspond to the charge and mass of
the electron, the spin $g$-factor and the vacuum permittivity.}
The terms containing squares of the vector potential $\bm{A}$ can be safely
neglected outside the strong-field regime. More specifically,
for optical or near UV wavelengths, this approximation
is well-motivated for intensities $I<10^{18} \mathit{W}/\mathit{cm^2}$\cite{Ludwig2014}.
Introducing the expansion of the electric field (see Eqs~\eqref{eq:E_decomp}
and~\eqref{eq:Ex_appr} in Appendix~\ref{sec:light}), and performing a suitable
gauge transformation, the multipolar form of the light-matter
interaction Hamiltonian becomes\cite{Buckingham1959,Milonni2019} for an incident light field
propagating in z direction,
\begin{align}
\begin{split}
\hat{H}=\hat{H}_0&-|\varepsilon_x(t)|e^{i\varphi_x(t)}\hat{\mu}_x -|\varepsilon_y(t)|e^{i\varphi_y(t)}\hat{\mu}_y\\
& -\frac{\hat{Q}_{xz}}{c}\frac{d|\varepsilon_x(t)|e^{i\varphi_x(t)}}{dt}
-\hat{m}_yB_y(t)\\
& - \frac{\hat{Q}_{yz}}{c}\frac{d|\varepsilon_y(t)|e^{i\varphi_y(t)}}{dt}+\hat{m}_xB_x(t)\\
&+\sum_{j=1}^{N}\bm{B}(\hat{\bm{r}}_j,t)\cdot \hat{s}_j,
\end{split}
\label{eq:multip_H}
\end{align}
where we collected the field-free terms into the time-independent Hamiltonian $\hat{H}_0$,
and used the definitions:
\begin{align}
\hat{\mu}_{\alpha}&=\sum_{j=1}^{N}e\hat{\alpha}_j\\
\hat{m}_{\beta}&=\sum_{j=1}^{N}\frac{e}{2m_e}\left(\hat{p}_{\alpha,j}\hat{\gamma}_j-\hat{\alpha}_j\hat{p}_{\gamma,j}\right)\\
\color{black}\hat{Q}_{\alpha,\beta}&\color{black}=\sum_{j=1}^{N}\frac{e}{3}\left(\hat{\alpha}_j\hat{\beta}_j-\hat{r}_j^{2}\,\delta_{\alpha,\beta}\right)
\end{align}
for the electric dipole, magnetic dipole, and electric quadrupole operators,
with $\alpha, \beta, \gamma \in \{x,y,z\}$ and $(\alpha,\beta,\gamma)$ a cyclic permutation of $(x,y,z)$ in the magnetic dipole definition.
The first line in Eq.~\eqref{eq:multip_H} corresponds to the well-known dipole
approximation. It is equivalent to neglecting
the spatial dependence of the electric field entirely, such that only a function
of time remains.
Note that the dipole approximation removes any information
concerning the direction of propagation,
$\bm{k}$, hence the handedness of circularly
polarised light is lost in such a model. As such, the only spatial
information encoded in the dipole approximation is the transition dipole moment $\bm{\mu}$ (a
molecular vector) and the plane of polarisation of light (a field pseudovector).
In order to get a chiral observable in the dipole approximation it is necessary
to introduce another vector in the process, so that we can define a pseudoscalar
that codifies the handedness of the molecule\cite{Ordonez2018}.
For instance, the photoelectron angular
distribution of a randomly oriented sample of chiral molecules presents a
forward--backward asymmetry, known as \emph{Photoelectron Circular Dichroism}
(PECD)\cite{Ritchie1976}. The high contrast of the signal (up to 10\% between both enantiomers)
has motivated extensive theoretical and experimental studies.
Although PECD measurements provide comparatively high signal strengths, the
description of the corresponding observable is more complex than CD from a theoretical
point of view due to the necessity to describe the electronic continuum.
Conversely, chiral signatures from light absorption - (conventional) CD
and ion-yield CD - primarily rely on bound-state
electronic properties.
Nevertheless, a chiral signature due to CD requires the helicity of
light to explicitly enter the interaction \emph{via} the propagation
vector $\bm{k}$. For this reason our model includes the next-higher
order term of the multipole expansion beyond the electric dipole, cf.~ the
second and third line of Eq.~\eqref{eq:multip_H}.
\subsection{System under study: A--band of fenchone}\label{ssec:model}
Electric dipole transitions are typically much stronger
than the corresponding magnetic dipole transitions. As a result, CD signatures
can easily be concealed by the electric dipole, leading to low-contrast signals in experiments. In order to avoid
such concealment, we focus
on the A--band of fenchone. This transition is electric dipole forbidden to
first order, since its main component is a symmetry
forbidden $n\rightarrow\pi^{\ast}$ transition\cite{Pulm1997}, and therefore
features
electric and magnetic transition dipole moments of the same order of
magnitude.
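This comparability can be checked as a rough order-of-magnitude estimate from the transition moments reported later in Table~\ref{tab:dalton} (atomic units, $c\approx 137$): for $x$-polarised light the electric and magnetic contributions to the coupling are $\mu_x$ and $m_y/c$, respectively. The Python sketch below is purely illustrative; the numbers are taken from our CCSD results.

```python
# Atomic units: electric dipoles in e*a0, magnetic dipoles in e*hbar/m_e
C_AU = 137.036                       # speed of light in atomic units
mu_01 = (0.0033, 0.0002, 0.0037)     # transition electric dipole <0|mu|1>
m_01 = (0.0851, 0.958, 0.426)        # transition magnetic dipole <0|m|1>

electric_part = abs(mu_01[0])        # mu_x coupling for x-polarised light
magnetic_part = abs(m_01[1]) / C_AU  # m_y / c coupling
ratio = electric_part / magnetic_part
```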
We seek to optimise laser pulses as used in table-top experiments, i.e.
pulse lengths of the order of 100~fs and laser wavelengths
of $300$~nm. This timescale is significantly shorter than the
rotational periods of fenchone, which are of the order of 1~ns\cite{Loru2016}.
Therefore we can safely assume that the molecule
remains at a single fixed orientation during the full length of the pulse.
Conversely, the main vibrational modes of the fenchone molecule have periods of the order of
50~fs. These small-amplitude motions, however, correspond to vibrations of
individual \ce{C-H} and \ce{C-C} bonds in the molecular backbone, and are not
expected to play a significant role in the electronic dynamics\cite{Longhi2006}. Therefore, we
will restrict the modeling to the electronic degree of freedom.
Representing the Hamiltonian, Eq.~\eqref{eq:multip_H}, in the
basis of electronic eigenstates of fenchone, $\ket{\Psi_n}=\ket{n}$, we
obtain the expression
\begin{align}
\begin{split}
\Braket{m|\hat{H}|n}=\Braket{m|\hat{H}_0|n}&\textcolor{black}{-}|\varepsilon_x(t)|e^{-i\varphi_x(t)}
\left(\Braket{m|\hat{\mu}_x|n}+\frac{1}{c}\Braket{m|\hat{m}_y|n}\right)\\
&\textcolor{black}{-}|\varepsilon_y(t)|e^{-i\varphi_y(t)}\left(\Braket{m|\hat{\mu}_y|n}-\frac{1}{c}\Braket{m|\hat{m}_x|n}\right)\\
&\textcolor{black}{-}\frac{1}{c}\frac{\dif|\varepsilon_x(t)|}{\dif t}e^{i\varphi_x(t)}\Braket{m|\hat{Q}_{xz}|n}\\
&-\frac{1}{c}\frac{\dif|\varepsilon_y(t)|}{\dif t}e^{i\varphi_y(t)}\Braket{m|\hat{Q}_{yz}|n}\\
\label{eq:Hmatrix}
\end{split}
\end{align}
where we have used the fact that, for symmetry reasons, any contribution
due to spin vanishes for real-valued wave functions of singlet
states\cite{Bernadotte2012a,Krems2018}. \textcolor{black}{In Eq.~\eqref{eq:Hmatrix},
$|\varepsilon_{\alpha}(t)|e^{-i\varphi_{\alpha}(t)}$ is the $\alpha$ component of the
complex-valued Fourier transform of the electric field (see Appendix~\ref{sec:light}
for details on the expansion of the electric field).}
We have calculated electronic state energies as well as permanent and transition moments
with the \textsc{Dalton 2020} software package
\cite{Aidas2014,DALTON2020} at Coupled Cluster Singles Doubles
(CCSD) level with a 6-31G basis set, employing the Linear Response theory
implementations described in Refs.~\citenum{Christiansen1996,Christiansen1998a,
Halkier1998}.
Due to the localised nature of the two states involved in the A--band transition (the ground and the
first electronic excited state), a more extended basis set
describing the strong Rydberg nature of higher excited states\cite{Goetz2017}
was not necessary.
Furthermore, in order to guarantee a good representation of the two states
in our minimal model, we included up to the fifth electronic excited
state when calculating electronic energies and multipole moments.
All computed quantities relevant for the optimisations
are provided in Table~\ref{tab:dalton}. Note that the permanent magnetic dipole moment is
neglected due to the singlet nature of the electronic states considered.
\begin{table*}
\centering
\caption{Energies, permanent electric dipole and transition
multipole moments for the ground and first electronic excited state of
fenchone obtained at CCSD/6-31G level with
\textsc{Dalton2020.0}.}
\label{tab:dalton}
\begin{tabular}{c|cccc|cccc}
&\multicolumn{4}{c}{$\Ket{0}$}&\multicolumn{4}{c}{$\Ket{1}$}\\\hline
&Energy & El. dip.& Mag. dip. & El. quad.& Energy & El. dip.& Mag. dip. & El. quad.\\
&$\mathit{eV}$ &$\mathit{e a_0}$& $\mathit{e\hbar m_e^{-1}}$& $\mathit{e a_0^{2}}$&$\mathit{eV}$ &$\mathit{e a_0}$& $\mathit{e\hbar m_e^{-1}}$& $\mathit{e a_0^{2}}$\\\hline
$\Bra{0}$& 0 &
{$\!\begin{aligned}
\begin{pmatrix}
-0.047 \\
-1.061 \\
-0.414
\end{pmatrix}
\end{aligned}$}
&
&
{$\!\begin{aligned}
\begin{pmatrix}
4.159 & 0.012 & -0.241\\
-0.012 & -5.841 & -3.411 \\
-0.240 & -3.411 & 1.682
\end{pmatrix}
\end{aligned}$}
&
&
{$\!\begin{aligned}
\begin{pmatrix}
0.0033 \\
0.0002 \\
0.0037
\end{pmatrix}
\end{aligned}$}
&
{$\!\begin{aligned}
\begin{pmatrix}
0.0851 \\
0.958 \\
0.426
\end{pmatrix}
\end{aligned}$}
&
{$\!\begin{aligned}
\begin{pmatrix}
0.003 & 0.110 & 0.221\\
0.110 & 0.026 & 0.052 \\
0.221 & 0.052 & 0.029
\end{pmatrix}
\end{aligned}$}
\\
$\Bra{1}$& &
{$\!\begin{aligned}
\begin{pmatrix}
0.003 \\
0.0002 \\
0.004
\end{pmatrix}
\end{aligned}$}
&
{$\!\begin{aligned}
\begin{pmatrix}
0.085 \\
0.958 \\
0.426
\end{pmatrix}
\end{aligned}$}
&
{$\!\begin{aligned}
\begin{pmatrix}
0.003 & 0.110 & 0.221\\
0.110 & 0.026 & 0.052 \\
0.221 & 0.052 & 0.029
\end{pmatrix}
\end{aligned}$}
&
4.01
&
{$\!\begin{aligned}
\begin{pmatrix}
-0.076 \\
-0.823 \\
-0.343
\end{pmatrix}
\end{aligned}$}
&
&
{$\!\begin{aligned}
\begin{pmatrix}
4.060 &-0.561 & 0.157\\
-0.561 & -4.512 & -1.943 \\
0.157 & -1.943 & 0.453
\end{pmatrix}
\end{aligned}$}
\\
\end{tabular}
\end{table*}
\subsection{Optimal Control of Circular Dichroism}\label{ssec:oct}
Since we neglect rotational motion, our Hamiltonian
(Eqs.~\eqref{eq:multip_H} and \eqref{eq:Hmatrix})
only describes a single orientation of the chiral molecule with respect
to the light pulse. However, experiments are typically carried out
with a statistical ensemble of randomly oriented molecules, which has to be
accounted for in our model.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fenchone_dips_cmp.png}
\caption{Reference geometry of fenchone, as obtained after optimisation at
CCSD/6-31G level with \textsc{Dalton2020.0}, superimposed with the transition electric dipole
moment (scaled $\times400$, orange) and transition magnetic dipole (scaled
$\times 4$, blue). The coordinate system indicates the orientation of the
molecular frame, with the RGB axes corresponding to the $x$, $y$, $z$ Cartesian
coordinates.}\label{fig:fenchone}
\end{figure}
Averaging over all Euler angles, defined with respect to the orientation shown
in Figure~\ref{fig:fenchone} in the $y-z-y$ convention, we obtain
for the excited state population of a single enantiomer
\begin{align}
\begin{split}
\left|\Braket{\Psi^1|\Psi_R(T)}\right|^{2}=\frac{1}{8\pi^2}\int_{0}^{2\pi}\int_{0}^{\pi}\int_{0}^{2\pi}
&\left|\Braket{\Psi^1(\alpha,\beta,\gamma)|\Psi_R(\alpha,\beta,\gamma,T)}\right|^2\\
&\sin{\beta}\dif\alpha\dif\beta\dif\gamma,
\label{eq:rotaver}
\end{split}
\end{align}
with $\ket{\Psi^{1}(\alpha,\beta,\gamma)}$ the excited state electronic eigenfunction, and
$\ket{\Psi_R(\alpha,\beta,\gamma,T)}$ the
state of the $R$ enantiomer wave function at time $T$.
With the representation of the system thus fully specified,
we can now turn to the description of the system dynamics. To this end, we employ the
time-dependent Schr\"odinger equation,
\begin{align}
\frac{\dif \Ket{\Psi(\bm{x},t)}}{\dif
t}=\frac{1}{i\hbar}\hat{H}(t)\Ket{\Psi(\bm{x},t)}.\label{eq:TDSE}
\end{align}
Although this equation of motion only describes coherent dynamics, such
a treatment is justified by the fact that any decoherence or decay
is expected to occur on much longer time scales than the femtosecond
pulse durations considered here.
As is commonly done in the field of optimal control, we have separated the
Hamiltonian in Eq.~\eqref{eq:multip_H} into a field-free, time-independent \emph{system}
Hamiltonian, $\hat{H}_0$, (the so-called \emph{drift})
and the time-dependent Hamiltonian due to the interaction
of the chiral molecule with the light field (the so-called \emph{control}
\textcolor{black}{$\varepsilon$}),
\begin{align}
\hat{H}(t)=\hat{H}_0+\sum_{k=1}^N \varepsilon_k(t)\hat{H}_k.
\end{align}
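To make this structure concrete, the following minimal sketch (purely illustrative, and not part of the actual calculations of this work) propagates a generic two-level system under a Hamiltonian of the form $\hat{H}(t)=\hat{H}_0+\varepsilon(t)\hat{H}_1$ with a fixed-step Runge--Kutta integrator; the transition frequency, coupling strength, and pulse shape are placeholder values, not the fenchone parameters.

```python
import math

# Illustrative two-level sketch (atomic units, hbar = 1): H(t) = H0 + eps(t)*H1.
# omega_r and mu are assumed placeholder values, not molecular data.
omega_r = 0.147   # transition frequency (a.u., roughly 4 eV)
mu = 0.01         # coupling strength (a.u.)

def H(t, T):
    """Total Hamiltonian H0 + eps(t)*H1 as a nested list (2x2 matrix)."""
    eps = math.sin(math.pi * t / T) ** 2 * math.cos(omega_r * t)  # enveloped field
    return [[0.0, -mu * eps],
            [-mu * eps, omega_r]]

def deriv(t, psi, T):
    """Right-hand side of the TDSE: dpsi/dt = -i H(t) psi."""
    h = H(t, T)
    return [-1j * (h[0][0] * psi[0] + h[0][1] * psi[1]),
            -1j * (h[1][0] * psi[0] + h[1][1] * psi[1])]

def propagate(T=1000.0, n_steps=20000):
    """Fixed-step RK4 propagation starting from the ground state."""
    psi = [1.0 + 0j, 0.0 + 0j]
    dt = T / n_steps
    t = 0.0
    for _ in range(n_steps):
        k1 = deriv(t, psi, T)
        k2 = deriv(t + dt/2, [psi[i] + dt/2 * k1[i] for i in range(2)], T)
        k3 = deriv(t + dt/2, [psi[i] + dt/2 * k2[i] for i in range(2)], T)
        k4 = deriv(t + dt, [psi[i] + dt * k3[i] for i in range(2)], T)
        psi = [psi[i] + dt/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
        t += dt
    return psi

psi_T = propagate()
# coherent propagation conserves the norm |psi_0|^2 + |psi_1|^2
norm = abs(psi_T[0])**2 + abs(psi_T[1])**2
```

In the actual simulations the drift is diagonal in the electronic basis and the control couples through the multipole operators of Table~\ref{tab:dalton}; here a single off-diagonal coupling stands in for all of them.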
The coupling to an external field provides a means to steer the dynamics of the system
towards a specific target, in our case by shaping the incident laser pulse.
Next, we address the question of how to encode the physical target in terms of a functional, $J_T$.
This functional quantifies how well the control pulses implement the optimisation goal.
In our case we target a high contrast in electronic state populations of
the two enantiomers of fenchone. We also expect this to serve as a precursor
to obtaining high contrast in ion-yield CD signals, since these experiments
only rely on the measurement of the number of ions obtained after absorption and ionisation.
As seen in the previous section, the two enantiomeric forms share the
same drift Hamiltonian; however, they feature a different relation between the
electric and magnetic dipole transition moments: for one enantiomer the product
of both quantities is positive, while for the other it is negative. This
relative arrangement of the multipole moments in the light-matter interaction term
is the source of the different dynamics in a given enantiomer when exposed to
a light source with different helicity, and thus the origin of circular
dichroism.
Note that in this work we use a complementary (and equivalent) point of view
to evaluate the CD: instead of changing the helicity of the pulse, we consider
how a specific light field interacts with each of the enantiomers.
Therefore, starting from chiral molecules in the ground state, we seek a pulse which
selectively excites one of the enantiomers while leaving the opposite form
in the ground state.
Once a difference in electronic state population between the
two enantiomers is established, a second pulse can be used to selectively ionise from
the higher energy level, thus obtaining an increased ionisation CD signal.
For a single orientation this goal can be encoded by a so-called
state-to-state optimisation \emph{via}
the following functional,
\begin{align}
J_T=1-\frac{1}{2}\left(\left|\Braket{\Psi^1|\Psi_R(T)}\right|^2+\left|\Braket{\Psi^0|\Psi_S(T)}\right|^2\right)\label{eq:JT}.
\end{align}
Similar functionals aiming to increase the distinguishability of two systems
with a single control are also prominent, e.g., in
quantum discrimination of magnetic fields \cite{Ansel2018, vanDamme2018,Basilewitsch2020}.
In Eq.~\eqref{eq:JT}, $\Psi^0$ and $\Psi^1$ refer respectively to the ground
and electronic excited state of the chiral molecule. $\Psi_R(T)$
($\Psi_S(T)$) denotes the state of the $R$ ($S$) enantiomer at final time $T$.
This functional takes on its minimal value of 0 when the
$R$ enantiomer is completely excited and the $S$ enantiomer remains entirely in the
ground state, and its maximal value of 1 in the opposite scenario. Note that both extrema
correspond to perfect distinguishability, while a vanishing chiral
signal corresponds to a functional value of 0.5. Thus, increasing the distance
to this middle point, which can be achieved by either minimisation or maximisation,
improves the realisation of our physical goal.
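The structure of the functional can be read off directly from a minimal numerical sketch (illustrative only; the population arguments are hypothetical):

```python
def J_T(pop_R_excited, pop_S_ground):
    """State-to-state functional of Eq. (JT):
    1 - (|<Psi^1|Psi_R(T)>|^2 + |<Psi^0|Psi_S(T)>|^2)/2."""
    return 1.0 - 0.5 * (pop_R_excited + pop_S_ground)

# Perfect discrimination (R fully excited, S entirely in the ground state):
assert J_T(1.0, 1.0) == 0.0
# The opposite scenario is the other extremum:
assert J_T(0.0, 0.0) == 1.0
# Identical dynamics for both enantiomers -> vanishing chiral signal at 0.5:
assert abs(J_T(0.3, 0.7) - 0.5) < 1e-12
```

Both extrema thus correspond to perfect distinguishability, and the optimisation can move in either direction away from the midpoint 0.5.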
The linearly polarised components of the electric field, $E_x(t)$
and $E_y(t)$, can be represented as two different control pulses which are optimised
independently, allowing for arbitrary elliptical polarisation. Moreover, due to
the small absorption amplitude in the A--band transition of fenchone,
perturbative considerations are suitable to predict
which processes will be the most relevant in our simulations\cite{Meath1987}.
This suggests employing a parametrisation for the control pulses which reduces
the dimensionality of the optimisation landscape.
Specifically, we expect three main contributions for each control field:
\textcolor{black}{DC components ($E_{x/y}^{(0)}$),
which primarily couple to the permanent dipole, one-photon components
($E_{x/y}^{(1)}$) with frequency $\omega^{(1)}$, which primarily couple to the electric
and magnetic dipole transitions, and two-photon components ($E_{x/y}^{(2)}$)
with frequency $\omega^{(2)}$ which primarily couple to the electric quadrupole
moments\cite{Meath1987}.
The interference between these couplings leads
to different excitation pathways, which will be the main resource exploited
by the optimised pulses.
Following this physical intuition we parametrise our control field as
a superposition of the three aforementioned contributions:
\begin{align}
E_x(t)&=s(t)\left(E_x^{(0)}+E_x^{(1)}\sin{(\omega^{(1)} t)}+E_x^{(2)}\sin{(\omega^{(2)} t)}\right)\label{eq:pulse_params1}\\
E_y(t)&=s(t)\left(E_y^{(0)}+E_y^{(1)}\sin{(\omega^{(1)} t+\varphi)}+E_y^{(2)}\sin{(\omega^{(2)} t+\varphi)}\right),
\label{eq:pulse_params2}
\end{align}
with $\varphi$ the relative phase between the $x$ and $y$ components of the electric
field.
In Eqs.~\eqref{eq:pulse_params1} and~\eqref{eq:pulse_params2} $s(t)$ is an
envelope function ensuring that the pulse is smoothly turned on and off. Here we
choose a squared sine as a good approximation to experimental pulse
shapes\cite{Barth2009},
\begin{align}
s(t)=\sin^2\left(\frac{\pi t}{T}\right).
\end{align}
}
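The parametrisation of Eqs.~\eqref{eq:pulse_params1} and \eqref{eq:pulse_params2} can be sketched numerically as follows (illustrative only; the conversion of 4.01\,eV to an angular frequency in rad/fs is an assumption of this sketch, and the two-photon carrier is fixed at $\omega^{(1)}/2$ as in the text):

```python
import math

def envelope(t, T):
    """Squared-sine envelope s(t) = sin^2(pi t / T)."""
    return math.sin(math.pi * t / T) ** 2

def field_xy(t, T, Ex, Ey, w1, phi):
    """DC + one-photon (w1) + two-photon (w1/2) components for E_x and E_y.
    Ex and Ey are triples (E0, E1, E2); units follow the supplied amplitudes."""
    w2 = w1 / 2.0
    s = envelope(t, T)
    ex = s * (Ex[0] + Ex[1] * math.sin(w1 * t) + Ex[2] * math.sin(w2 * t))
    ey = s * (Ey[0] + Ey[1] * math.sin(w1 * t + phi) + Ey[2] * math.sin(w2 * t + phi))
    return ex, ey

# Circularly polarised guess-pulse parameters quoted in the text:
T = 100.0                   # fs
Ex = (0.0, 25.7, 0.0)       # GV/m: (DC, one-photon, two-photon)
Ey = (0.0, 25.7, 0.0)
w1 = 2 * math.pi * 0.9695   # rad/fs, corresponding to ~4.01 eV (assumed conversion)
ex, ey = field_xy(T / 2, T, Ex, Ey, w1, math.pi / 2)
```

The envelope vanishes at $t=0$ and $t=T$ and peaks at $t=T/2$, so the field amplitude never exceeds the sum of the component amplitudes.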
Note that we keep $\omega^{(1)}$ as an optimisation parameter and do not fix it
to the resonant frequency of the electronic excitation ($\omega_r=4.01$~eV).
This is done to allow for non-resonant processes to be considered as candidates
for the optimal solutions and permits flexibility in view of potential DC and AC
Stark shifts. Conversely, we keep the frequency for the two-photon pathway to the
excited state fixed at $\omega^{(2)}=\omega^{(1)}/2$ to explicitly target
a bichromatic control mechanism using interference between a one- and two-photon
pathway.
To account for orientational averaging
we perform an ensemble
optimisation\cite{Goerz2014a}: We propagate a set of differently oriented
molecules, each described by its own Hamiltonian, under the effect of the same
control pulses, and minimise the \emph{averaged} functional,
\begin{align}
\begin{split}
J_T^{\mathit{aver}}=\frac{1}{8\pi^2}\sum_{i=1}^{N_{\alpha}}\sum_{j=1}^{N_{\beta}}\sum_{k=1}^{N_{\gamma}}&
\left[1-\frac{1}{2}\left(\left|\Braket{\Psi^1(\alpha,\beta,\gamma)|\Psi_R(\alpha,\beta,\gamma,T)}\right|^2\right.\right.\\
&\left.\left.+\left|\Braket{\Psi^0(\alpha,\beta,\gamma)|\Psi_S(\alpha,\beta,\gamma,T)}\right|^2\right)\right]\\
&\sin{\beta}\Delta\alpha\Delta\beta\Delta\gamma.
\end{split}
\label{eq:JT_aver}
\end{align}
Note that we have replaced the integrals from Eq.~\eqref{eq:rotaver}
by sums, to account for the numerical necessity of discretising the set of orientations.
We have chosen $N_{\alpha}=N_{\gamma}=2N_{\beta}=14$
to sample the orientations equidistantly and with identical spacing for all three Euler angles, i.e.,
$\Delta=\Delta\alpha=\Delta\beta=\Delta\gamma$.
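The normalisation of this discretisation can be checked numerically: with the quoted grid sizes and a common spacing $\Delta$, the quadrature weights $\sin\beta\,\Delta^3/(8\pi^2)$ should sum to approximately one. The sketch below assumes mid-point sampling of the Euler angles, which the text does not specify:

```python
import math

# Discrete orientational average of Eq. (JT_aver):
# N_alpha = N_gamma = 2*N_beta = 14, identical spacing Delta for all angles.
N_alpha, N_beta, N_gamma = 14, 7, 14
delta = 2 * math.pi / N_alpha   # so N_alpha*delta = 2*pi and N_beta*delta = pi

total = 0.0
for i in range(N_alpha):
    for j in range(N_beta):
        beta = (j + 0.5) * delta        # assumed mid-point rule on [0, pi]
        for k in range(N_gamma):
            total += math.sin(beta) * delta**3
total /= 8 * math.pi**2
# total is close to 1, i.e. the discrete weights reproduce the normalised average
```

The small residual deviation from unity reflects the finite grid resolution, not an error in the weights.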
Since we can describe our control field with very few parameters,
cf.~Eqs.~\eqref{eq:pulse_params1} and \eqref{eq:pulse_params2}, gradient-free optimisation methods are
particularly suitable. We have used \textcolor{black}{a combination of the Multi--Level Single
Linkage (MLSL) approach~\cite{RinnooyKan1987, RinnooyKan1987a, Kucherenko2005} and } the generalised simplex (or Nelder-Mead)
\cite{Nelder1965,Box1965,Richardson1973} algorithm as implemented in the python
\emph{NLopt} library\cite{Johnson}. \textcolor{black}{The MLSL algorithm
stochastically samples the parameter space of the optimisation. This global scan of the
optimisation landscape complements the local nature of the generalised simplex method and makes it
possible to find the optimal solution even when several local minima are present. All
propagations have been performed using the \emph{QDYN} library\cite{qdyn}. }
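The global-plus-local strategy can be caricatured with standard-library tools alone (the actual work used the MLSL and generalised-simplex implementations of the \emph{NLopt} library; the objective function and parameter bounds below are hypothetical toy choices):

```python
import math
import random

random.seed(1)

def objective(p):
    """Toy multi-modal stand-in for J_T^aver over two pulse parameters."""
    x, y = p
    return (x - 1.0) ** 2 + (y + 0.5) ** 2 + 0.3 * math.sin(5 * x) ** 2

def local_refine(p, step=0.1, iters=200):
    """Crude pattern search standing in for the generalised simplex."""
    best, fbest = list(p), objective(p)
    for _ in range(iters):
        improved = False
        for d in range(len(best)):
            for sign in (+1, -1):
                trial = list(best)
                trial[d] += sign * step
                f = objective(trial)
                if f < fbest:
                    best, fbest, improved = trial, f, True
        if not improved:
            step /= 2.0          # shrink, like a contracting simplex
            if step < 1e-6:
                break
    return best, fbest

# Global stage (MLSL-like): random samples within bounds, refine the best starts.
bounds = [(-3.0, 3.0), (-3.0, 3.0)]
starts = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(50)]
starts.sort(key=objective)
best, fbest = min((local_refine(p) for p in starts[:5]), key=lambda r: r[1])
```

The random sampling protects against the local stage getting trapped in a poor minimum, which is the same division of labour as in the MLSL/simplex combination.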
Optimal control algorithms \emph{per se} do
not impose any restriction on the calculated pulses. However, in gradient-free
methods it is straightforward to restrict the domain of the parameters to be optimised.
These constraints should be chosen
in order to obtain control pulses apt for experimental
applications, which are usually limited by the total pulse duration, and
the maximum field strength that can be generated.
In table-top setups, pulses with 30-40~fs duration and
peak electric field strengths of the order of $\mathrm{GV/m}$ can be routinely obtained.
However, due to the small transition moments of the A--band
of fenchone, such pulses result in populations of the
excited state of around 1\%. These low values are insufficient to increase
the contrast in the CD signal with a high signal-to-noise ratio.
Preliminary simulations showed that we can obtain population transfer of
$\approx 10\%$ by using pulses of 100~fs and peak electric field
strength of 25.7~$\frac{\textrm{GV}}{\textrm{m}}$ (corresponding to a value of
0.05 atomic units).
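The quoted conversion can be cross-checked directly: 0.05 atomic units of electric field strength corresponds to about 25.7\,GV/m, and the associated peak intensity $I=\tfrac{1}{2}c\varepsilon_0 E^2$ is of the order of $10^{14}\,\mathrm{W/cm^2}$ (constants from CODATA; the check itself is illustrative):

```python
# Cross-check of the quoted field strength and the corresponding intensity.
E_au = 5.14220675e11      # 1 atomic unit of electric field, in V/m (CODATA)
c = 2.99792458e8          # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

E_peak = 0.05 * E_au                   # V/m; ~25.7 GV/m as quoted in the text
I_peak = 0.5 * c * eps0 * E_peak ** 2  # W/m^2; ~1e14 W/cm^2

print(E_peak / 1e9, I_peak / 1e4)      # in GV/m and W/cm^2
```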
These restrictions are at the upper limit of experimental feasibility in
table top setups but are still possible, albeit challenging, to implement.
\textcolor{black}{Due to the dipole-forbidden nature of the A--band, peak
intensities of the order of $10^{14}\,\mathrm{W/cm^2}$ result in a comparatively weak light-matter
interaction, as can be seen from the fact that they only produce $\approx
10\%$ population transfer to the excited state. Therefore the pulses can still be considered
to lie outside the strong-field regime, and we expect the assumptions of our two-state model to hold.
} It should be noted that a more complex model beyond a
two-level description would increase the dimension of the parameter space, where
gradient-based methods show their strengths. A more detailed discussion on which
optimisation algorithm is most suitable to a particular problem can be found for
example in Ref.~\citenum{Goerz2019}.
\section{Results and discussion}\label{sec:results}
\subsection{Circular dichroism of randomly oriented ensembles}\label{ssec:CDens}
As a guess for the optimisation we
choose a 50~fs FWHM circularly polarised pulse
with a single-frequency component at $\omega=4.01~\textrm{eV}$ and
$E_x^{(1)}=E_y^{(1)}=25.7\frac{\textrm{GV}}{\textrm{m}}$.
Due to the electric dipole forbidden nature of the
transition, population transfer to the excited state with this pulse reaches
maximum values of 8\%, with a difference in excited state population
between the R and S enantiomers
around 0.5\%. We can quantify the dichroic signal with the anisotropy factor
$g$, defined as
\textcolor{black}{the ratio of the difference in absorption of circularly
polarised light between the left and right enantiomers to the absorption of
non-polarised light for that band, taken as the average absorption of both
enantiomers\cite{Kuhn1930}:
\begin{align}
g=\frac{I_{\mathit{left}}-I_{\mathit{right}}}{\frac{1}{2}\left(I_{\mathit{left}}+I_{\mathit{right}}\right)},
\end{align}
}
where $I_\mathit{left}$ and $I_\mathit{right}$ refer to the absorption of
the respective enantiomer\textcolor{black}{, which in our system can be taken as
equivalent to the excited state populations.}
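A minimal numerical reading of this definition (the population values are hypothetical, chosen only to be consistent with the numbers quoted below: a difference of roughly 0.5\% at roughly 8\% excitation):

```python
def anisotropy(I_left, I_right):
    """Kuhn anisotropy factor g = (I_l - I_r) / ((I_l + I_r)/2)."""
    return (I_left - I_right) / (0.5 * (I_left + I_right))

# Illustrative excited-state populations for the two enantiomers under the
# guess pulse: a ~0.5% difference on a ~8% mean gives g close to 6.25e-2.
g = anisotropy(0.0825, 0.0775)   # g is approximately 0.0625
```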
Even though this definition is usually used for monochromatic, circularly
polarised pulses, we will also employ it for our optimised field.
In the case of our guess circularly polarised pulse we obtain $g=6.25\cdot 10^{-2}$,
which compares very well with the $5\cdot 10^{-2}$ value reported in the literature\cite{Pulm1997}.
The generalised simplex
(or Nelder-Mead) algorithm minimises the value of the rotationally
averaged functional $J_T^{\mathit{aver}}$ (cf.~Eq.~\eqref{eq:JT_aver})
by independently varying the different components of the pulse:
the intensity of the DC component ($E_{x/y}^{(0)}$, coupling primarily to the
permanent electric dipole moment), the one-photon component
($E_{x/y}^{(1)}$, coupling primarily to
the electric and magnetic dipole transitions), the two-photon component
($E_{x/y}^{(2)}$, coupling primarily to the
electric quadrupole moment), as well as the frequency $\omega^{(1)}$.
The optimised pulses in time and frequency domain as well as the resulting population
dynamics are displayed in Figure~\ref{fig:CDopt_orientaver_NM100fs}. The
optimised values for the pulse parameters, cf.
Eqs.~\eqref{eq:pulse_params1} and~\eqref{eq:pulse_params2}, are shown in Table~\ref{tab:NM100fs_params}.
\begin{figure*}
\centering
\includegraphics{CD_orient_aver_NM_100fs2.pdf}
\caption{Results for the optimisation of circular dichroism of a rotational
ensemble for the A--band
transition of fenchone. a)
Evolution of the excited state population as a function of time for the
$R$ (green) and $S$ (purple) enantiomer of fenchone\textcolor{black}{, as well
as the corresponding value of the anisotropy parameter $g$ (yellow, on the right $y$ axis)}. The dashed line
corresponds to the circularly polarised guess pulse, while the solid line
corresponds to the optimised control
fields. \textcolor{black}{The oscillations of $g$ at short times are a numerical artifact due
to the near-zero absorption of the excited states during the first
femtoseconds.} b) Optimised pulses in time domain. c) Optimised pulses in frequency
domain.}\label{fig:CDopt_orientaver_NM100fs}
\end{figure*}
\textcolor{black}{
\begin{table*}
\centering
\caption{Parameters of the circularly polarised guess pulse (anisotropy $g=6.25\cdot 10^{-2}$ after orientational averaging) and
the optimised pulse (anisotropy $g=1.0$ after orientational averaging).}
\label{tab:NM100fs_params}
\begin{tabular}{c|ccc|cc|c|ccc|cc|c}
&\multicolumn{6}{c}{Optimised pulse}&\multicolumn{6}{c}{Guess pulse}\\\hline
&$E^{(0)}$&$E^{(1)}$&$E^{(2)}$&$\omega^{(1)}$ &$\omega^{(2)}$& $\varphi$
&$E^{(0)}$&$E^{(1)}$&$E^{(2)}$&$\omega^{(1)}$ &$\omega^{(2)}$& $\varphi$\\
&\multicolumn{3}{c|}{$\frac{GV}{m}$}&\multicolumn{2}{c|}{$\mathit{eV}$}&
&\multicolumn{3}{c|}{$\frac{GV}{m}$}&\multicolumn{2}{c|}{$\mathit{eV}$}
&\\\hline
$E_x$ & 4.95$\cdot 10^{-3}$ & 27.71&3.26 &3.97 &1.99 &$\pi/2$ & 0.0 & 25.70& 0.0 & 4.01
& - & $\pi/2$\\
$E_y$ &25.71& 25.71 & 12.86 &3.97 &1.99 & & 0.0 & 25.70 & 0.0 & 4.01 & - & \\
\end{tabular}
\end{table*}
}
\textcolor{black}{
Remarkably, the anisotropy obtained with the guess circularly polarised pulse reaches its
maximum value in the first few femtoseconds, and remains constant throughout the
rest of the dynamics. This stands in sharp contrast to the behaviour under the
optimised pulse, which shows a gradual increase of the anisotropy throughout the
whole pulse.
The parameters in Table~\ref{tab:NM100fs_params} also show that the optimisation
slightly alters the frequency compared to the resonance frequency $\omega_r$
from the guess pulse. We attribute this feature to the combined effects of AC
and DC Stark shifts induced by the field}. Moreover, the optimised pulse not
only addresses dipolar transitions at the frequency $\omega_r$ but also DC field
contributions ($\omega = 0$) due to the permanent electric dipole, and
two-photon \textcolor{black}{($\omega = \omega^{(1)}/2$)} contributions arising
from coupling to the electric quadrupole \textcolor{black}{(see also
Figure~\ref{fig:CDopt_orientaver_NM100fs}c). Interestingly, only the $y$ field
contributes to the DC component. This can be attributed to the symmetry of the
rotationally averaged system, which causes the maximum anisotropy to arise when
the DC component is aligned with only one polarisation vector.}
\textcolor{black}{
The resulting difference in excited-state population between the two
enantiomers, and thus the chiral contrast, increases by a factor of 2.5 compared to
the guess field. Moreover, the overall signal strength in the A--band is also
reduced from $\approx 9\%$ for the circularly polarised pulse to $\approx 2\%$
for the optimised one. Given that the field strength of both sets of pulses is
similar, we can attribute the difference in dynamics to interference effects
between the different excitation pathways. All in all, the combination of
increase of chiral contrast and decrease of overall absorption results in
an increase of the anisotropy parameter to almost $g=1.0$, i.e., absorption in
one enantiomer is almost doubled compared to its mirror image when using the
optimised pulse.}
Despite the fact that the leading order for circular dichroism is usually given
by electric and magnetic dipole transitions, our optimisation results for the
(to first order) dipole-forbidden transition in fenchone reveal the significance
of multipolar terms beyond the electric and magnetic dipole transition moments.
To further illustrate and investigate their significance we employ
the following two approaches: On the one hand, we use a \emph{restricted
model} by removing the corresponding coupling operator from the Hamiltonian and
perform another optimisation of the pulse parameters. On the other hand,
we optimise a \emph{restricted pulse} by simulating the dynamics using the full
Hamiltonian, yet constraining
$E^{(0)}=0$ (respectively $E^{(2)}=0$) for the control fields.
The comparison of the simulations with these different schemes is shown in
Figure~\ref{fig:restr_comparison}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{compare_plot_aver2.pdf}
\caption{\textcolor{black}{Comparison of optimisations with a fully
parametrised pulse (top), a restricted pulse without
DC contribution (middle) and a restricted pulse without $\omega^{(2)}$ contribution (bottom).
Left to right: excited
state population for the $R$ (solid green) and $S$ (solid purple) enantiomer and anisotropy factor $g$ (dashed yellow, on the right $y$ axis); envelope of the optimised pulses
in time domain; optimised pulses in frequency domain ($E_x$ blue, $E_y$ orange).
}}\label{fig:restr_comparison}
\end{figure*}
\textcolor{black}{From these different setups we can clearly see that
the permanent electric dipole, through its interaction with the
DC component of the electric field, is the critical ingredient for the
success of the ensemble optimisation:
An optimisation restricting the pulse to a vanishing zero-frequency component
does not significantly increase the anisotropy
with respect to the guess. Conversely, an optimisation using a restricted pulse
without two-photon contributions still offers a significant increase of the anisotropy
factor, but only reaches around 0.75 anisotropy in contrast to the almost 1.00
when utilising all possible pathways.
Evidently, all multipolar terms in our model provide a
significant optimisation resource, since they open multiphoton excitation pathways
which make it possible to exploit interference effects towards the desired objective.
}
\subsection{Oriented Circular Dichroism}\label{ssec:OCD}
For typical experiments on circular dichroism in the gas phase the chiral signal is
orientationally averaged over all possible orientations of the molecular target.
From a theoretical point of view it is nevertheless interesting to also consider
how control pulses can induce different absorption between two enantiomers
for a single, space-fixed orientation.
Specifically, we first investigate optimisations for single orientations of
fenchone with respect to the light field. Then, we analyse how the optimal controls
obtained from the ensemble optimisation act on individual orientations. The
comparison of these two sets of simulations helps to gain
insight into the underlying control mechanism.
We analyse the population
dynamics for the two enantiomeric forms for a given orientation.
For one of the enantiomers the optimised pulse aims to minimise excited state
population transfer altogether, or at least to return all intermittent
population in the excited state back to the ground state at the end of the
pulse. At the same time, for the mirror image, the optimised field tries to maximise
population transfer to the excited state. The latter process (maximisation of population
in the excited state for one enantiomer) is limited by the available fluence
in the pulse. Specifically, taking into account our restrictions
on field strength and the limited pulse length (cf. Sec.~\ref{ssec:oct}),
the optimisation does not have enough resources to get a complete population transfer to the
excited state.
Remarkably, not all orientations are equally easy to control
in terms of distinguishability. Every pulse-molecule geometry yields
different values for the components of the permanent dipole and
transition multipole moments in the control Hamiltonian, and small values of
these moments can prevent the optimisation of CD altogether.
Figure~\ref{fig:EulerSurvey_indopt} shows the fidelity $F$ of the optimisation
for different individual orientations of the system. Using the functional
$J_T$ defined in Eq.~\eqref{eq:JT}, this quantity is defined
as
\begin{align}
F=2|0.5-J_T|\label{eq:F}
\end{align}
and takes the value 1 for perfectly distinguishable systems, and 0 for
completely indistinguishable ones, cf. our discussion in Sec.~\ref{ssec:oct}.
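Mapping the functional values onto this fidelity is a one-line folding of the interval around its midpoint (illustrative sketch):

```python
def fidelity(J_T):
    """F = 2|0.5 - J_T| of Eq. (F): distance of the functional from its midpoint."""
    return 2.0 * abs(0.5 - J_T)

assert fidelity(0.0) == 1.0   # perfect distinguishability (minimisation)
assert fidelity(1.0) == 1.0   # perfect distinguishability (maximisation)
assert fidelity(0.5) == 0.0   # indistinguishable enantiomers
```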
\begin{figure*}
\includegraphics[width=\textwidth]{EulerAng_ind_opt2.pdf}
\caption{Value of the fidelity (Eq.~\eqref{eq:F}) optimised for individual orientations
of fenchone as a function of the Euler angles $\alpha$, $\beta$ and $\gamma$.
Lighter areas correspond to higher fidelities, \emph{i.e.} better
chiral distinguishability.}\label{fig:EulerSurvey_indopt}
\end{figure*}
A closer look at the values of the transition moments for different orientations
shows that the possibility for improvement via optimal control depends strongly on the
interplay of the different components of the vectors: For instance, in the
orientation $\alpha=1.35$, $\beta=2.62$, $\gamma=5.38$, where the anisotropy
appears not to be improvable, the value of the
electric dipole transition moment $x$ component is one order of magnitude
smaller than in the neighboring optimisable orientation $\alpha=0.45$,
$\beta=2.62$, $\gamma=5.38$. Similar relations, with one or more relevant
transition moments becoming small, can be observed for several other areas
that only show negligible improvement through optimisation.
The fidelity obtained by considering the action of the pulse optimised for a rotational average
(Figure~\ref{fig:CDopt_orientaver_NM100fs} and Eq~\eqref{eq:JT_aver}) on individual
orientations is shown in Figure~\ref{fig:EulerSurvey_OrAverPulse}.
\begin{figure*}
\includegraphics[width=\textwidth]{EulerAng_OrAverPulse2.pdf}
\caption{Value of the fidelity (Eq.~\eqref{eq:F}) after irradiation with the ensemble optimised
pulse (Table~\ref{tab:NM100fs_params} and
Figure~\ref{fig:CDopt_orientaver_NM100fs}) as a function of the Euler angles
$\alpha$, $\beta$ and $\gamma$.
Lighter areas correspond to higher fidelities, \emph{i.e.} better
chiral distinguishability.}\label{fig:EulerSurvey_OrAverPulse}
\end{figure*}
\textcolor{black}{We can clearly see that
the ensemble optimised pulse
increases the distinguishability for a subset of orientations in the region $1.5<\beta<2.7$
while having close to no effect on the rest. The reason is that, due to
the weighting factor $\sin\beta$ arising from the rotational
averaging (\emph{cf.}~Eq.~\eqref{eq:rotaver}), orientations in that region
have an above-average contribution to the ensemble.
This incentivises the optimisation algorithm to focus on this domain.
By comparison to Figure~\ref{fig:EulerSurvey_indopt} we can see that
the optimisation also targets those orientations which are
intrinsically more favourable in terms of distinguishability.}
\section{Summary and Conclusions}
We have shown that optimal control can be used to
increase the absorption contrast in the A--band of the two
enantiomers of fenchone by independently shaping the $x$ and $y$ components of
the incident light field. In order to do so, we have developed a minimal
molecular model, including only the electronic ground and first excited state of
the molecule.
Our model consistently includes all light-matter interaction terms up to one order
beyond the dipole approximation, \emph{i.e.} the electric and magnetic dipole
transition moments (which are the leading-order contribution to CD), electric
quadrupole moment, and permanent electric dipole moment.
The magnetic and electric dipole moments, including the permanent electric
dipole, contribute appreciably to the excitation dynamics, while the
electric quadrupole only has a minor effect.
We have obtained optimised pulses that increase the orientationally averaged
contrast in the excited state population between the two enantiomers by
\textcolor{black}{almost a factor of twenty compared to
a monochromatic circularly polarised pulse, while also decreasing the overall
absorption to around a quarter compared to the guess pulse. These effects are
a result of the interferences between the different excitation paths generated
by the optimised pulses, which feature spectral contributions with frequency
$\omega^{(1)}$ and $\omega^{(1)}/2$
with $\omega^{(1)}\approx \omega_r$,
as well as a DC field component for the electric field. The DC component proves to
be critical for the optimisation, while the $\omega_r/2$ contribution,
coupling primarily to the quadrupole, has a smaller yet still clearly noticeable effect.}
As a result, we have shown that it is possible to achieve
control for CD signatures by exploiting different multipolar contributions
of the light-matter interaction, even in a basic two-level description.
While such a description simplifies the electronic structure to only the ground
and a single excited state, our model still captures most of the
relevant dynamics for table-top pulses in the femtosecond regime.
To rationalise the results of the ensemble optimisation, we have studied how the
optimised pulse affects specific orientations of the fenchone molecule. We have observed
that only a subset of geometries shows an increase in the population difference
between ground and excited state compared to the guess pulse.
In order to explain this behaviour, we have performed full optimisations on
individual orientations sampling the whole rotational space. The optimisation results
show that the regions where the rotational ensemble optimised pulse performs
better correspond to domains in which the optimisation of individual orientations
is more favourable. This is related to a stronger coupling, and hence an enhanced
addressability, by virtue of larger overlaps between the molecular
transition moments and the electromagnetic field.
In a next step, this knowledge is to be transferred to the experiment. Instrumental
restrictions will influence the implementation of the optimised pulses:
Our pulse lengths and peak intensities are, albeit challenging, attainable in
state-of-the-art table-top setups, but our optimised solutions also prominently
feature a DC component for the electric field which may be problematic for
an experimental implementation.
Our optimisations show that attempting to
increase the CD signal with a more restricted protocol (\emph{i.e.} removing the
DC component of the field) leads to only a marginal increase of the
distinguishability, pointing towards the critical role the DC field plays for
our optimisations.
Several further avenues towards obtaining more easily
realisable yet efficient pulses can be considered. A first option is to add more electronic levels in
our model. This would add more excitation pathways that can be addressed
simultaneously by a multicolored laser pulse. The interference between these
pathways is expected to lead to better control mechanisms, similar to the case
of PECD\cite{Goetz2019, Goetz2019a}.
Secondly, we have observed that the excited state population difference can be easily
increased for particular orientations of the molecule with respect to the light
pulse.
This suggests that a pre-pulse which
induces a partial orientation of the molecular ensemble might be a promising
strategy\cite{Koch2019}. Moreover, it is conceivable to engineer an optimised pulse
which both orients and excites the chiral molecules. Although such a study would
require a description of different timescales to account for rotational dynamics,
recent advances in controlling the rotational state of chiral molecules show
a lot of promise in that direction\cite{Tutunnikov2021}.
\section*{Conflicts of interest}
There are no conflicts to declare.
\section*{Acknowledgements}
We would like to thank Thomas Baumert and Andr\'es Ordo\~nez
for helpful discussions, and Marec Heger for providing the molecular model from
Figure~\ref{fig:fenchone}. Financial support by the Deutsche
Forschungsgemeinschaft (DFG, German Research Foundation)—Projektnummer
328961117—SFB ELCH 1319 is gratefully acknowledged.
The current transition in the European energy sector towards climate neutrality requires detailed and reliable energy system modeling. The quality and relevance of the energy system modeling highly depend on the availability and quality of model input datasets. However, detailed and reliable datasets, especially for energy infrastructure, are still missing. In this contribution, we present our approach to developing an open-source and open-data model of the gas transport network in Europe. Various freely available data sources were used to collect gas transport datasets and their attributes. The resulting datasets of the various data sources were processed, and unique elements were merged together. Statistical and heuristic methods were used to generate missing network element attributes.
\newline \indent As a result, we successfully created a gas transport network model using only open-source data. The SciGRID\_gas model contains 237.000\,km of pipeline data, which is in very good agreement with known length values. In addition, datasets of compressor stations, LNG terminals, storages, production sites, gas power plants, border points, and demand time series are provided. Finally, we discuss data gaps and how they can potentially be closed.
\section{Introduction}
The most significant shares of gross energy consumption in Europe in 2019 were held by oil and petroleum products (34.5\,\%), followed by natural gas (23.1\,\%), together representing almost 60\,\% of energy consumption \cite{Eurostat-NRG_BAL_C}. The high share of natural gas in the energy mix reflects its essential role as an energy carrier and the need to decarbonize sectors where it is used as a fuel source. This will be achieved mainly by ramping up the integration of renewable energy sources (RES) in the energy system. In this context, new flexibility concepts are needed to integrate higher RES shares while maintaining energy supply stability and reliability. Such flexibilities are the Power-to-X (P2X) technologies combined with energy storage \cite{sternberg2015power}. P2X refers to various processes to convert and ``store'' electricity, using surplus RES electric power to balance energy seasonally and spatially. A promising P2X technology is power-to-gas (P2G), which has a vast potential in decarbonizing different energy sectors such as heating and transport. P2G uses surplus electrical power to produce hydrogen via water electrolysis \cite{stockl2021optimal, kakoulaki2021, schiebahn2015power}. Hydrogen can then be used as a fuel directly or converted to LPG, syngas, or methane. The produced gas can then be transported using the current gas transport network.\\
Despite the importance of modeling and analysis of the gas sector and its interactions with other energy sectors, no reliable datasets describing the gas transport grid exist. Examples of available datasets are limited to single countries and do not provide details of the grid components. Such examples are the LKD-EU dataset for Germany \cite{kunz2017electricity} and the National Grid dataset for the UK \cite{GB_NationalGrid}. The lack of datasets motivated us to initiate the SciGRID\_gas project \cite{SciGRID_gas_ws} at the DLR Institute of Networked Energy Systems. The goal of the SciGRID\_gas project is to derive a reliable and detailed dataset for the gas transport grid in Europe which can be used for modeling and analysis purposes. In practice, the source code of the data model, the geo-referenced datasets describing the gas transport grid as well as the documentation, are made available under open source licences. In order to use SciGRID\_gas in the simulation of energy systems, the integration of the datasets in existing energy system models such as \textit{open\_eGo}, \textit{PyPSA}, and \textit{pandapipes} is planned.
With the SciGRID\_gas data model, we would like to answer the following research questions:
\begin{itemize}
\item Can we build a reliable data model for the European gas transport grid using only publicly available data?
\item Is the amount of available parameter data sufficient to estimate missing parameter data via heuristics and statistical methods?
\end{itemize}
This contribution is structured as follows: In Chapter\,\ref{dataorigin}, we discuss the data sources used for constructing the open-source gas transport network model. This is followed by the discussion of the model architecture in Chapter\,\ref{model}. Chapter\,\ref{sec:methods} gives a short overview of the methods used for creating the model. Owing to the page limit, more detailed information is available in the respective model documentation, which is accessible online\cite{SciGRID_gas_ws}.
Chapter\,\ref{sec:results} presents our model’s graphical and statistical results. This is followed by the discussion in Chapter\,\ref{sec:discussion} and the conclusion and outlook in Chapter\,\ref{sec:conclusion}.
\section{Data sources} \label{dataorigin}
Obtaining reliable open-source data of the gas transport system is a challenging task. The grid data of gas Transmission System Operators (TSOs\footnote{The European operators of the gas transmission (transport) grid are associated in the European Network of Transmission System Operators for Gas (EntsoG)}) are commonly not standardized, nor are they freely accessible \cite{EMap_MainRef}. Data are generally not geo-referenced and mainly available as PDF maps. Most individual TSOs are not willing to share their data due to competitive reasons. Within the SciGRID\_gas project, we have gathered freely available data from different sources.
The most relevant are presented below. In each subsection title, we first indicate the source of the dataset (e.g., Web Search) followed by the name (e.g., INET) we gave the dataset in the SciGRID\_gas project.
\subsection{Web Search - INET}
We carried out a web search on all gas network components and compiled the gathered data into the INET dataset. The data stems from TSO press releases, TSO transparency platforms, and TSO public data. Some TSO information had to be made available due to EU regulations \cite{EU543013}. Other information has been made public as part of a company's self-presentation and advertisement. The collected information covers the network components: their positions as well as parameters relevant for energy modeling, such as diameter, capacity, power, and pressure.
\subsection{German gas model - LKD}
The \textit{long-term planning and short-term optimization} dataset (LKD)\cite{LKD_MainRef} contains geo-referenced data on gas facilities in Germany. It was created by several German research institutes and includes information on gas pipelines, production sites, storage, compressor locations, and nodes. The SciGRID\_gas project was granted the right to use, change and redistribute the LKD data under an open license from the LKD project members.
\subsection{EntsoG - EMAP}
The development of the European gas transport network is coordinated by the European Network of Transmission System Operators for Gas (EntsoG)\footnote{https://www.entsog.eu/}. The EntsoG is an association of 44 European TSOs, three associated partners, and nine observers. EntsoG members are required to publish certain information according to EU directives. A significant amount of this information is incorporated in the freely available and regularly updated map of the gas pipelines, drilling platforms, and storage facilities. The SciGRID\_gas project extracted the rough course and location of the depicted gas pipelines, storage, and production facilities using the map\footnote{https://www.entsog.eu/maps} of 2019.
\subsection{Eurostat - Cons}
The European Statistical Office (Eurostat) collects and publishes data on energy supply, transformation, and consumption on a monthly and yearly basis. Eurostat statistics — like the \textit{complete energy balances} dataset \cite{EuroStat}, and others — provided the data foundation for one of our studies regarding the European gas demand. We derived gas demand time series with a daily resolution covering the years 2010-2019 with a NUTS 3 spatial resolution\footnote{NUTS is a geographical system to divide the EU territory into hierarchical levels. In Germany, NUTS 1 are federal States (Bundesländer), NUTS 2 are governmental regions known as Regierungsbezirke, and NUTS 3 are the districts (Kreise).} for 27 European countries. To provide detailed information for modelers and dataset flexibility, the time series distinguish between the sectors \textsc{households}, \textsc{commercial} and \textsc{industry}. Figure\,\ref{fig:img_mask_pred3} provides an exemplary data plot for the annually averaged residential gas demand in Europe (2010-2019) disaggregated into NUTS 3 regions. The data derivation techniques showed good benchmarking results against three existing time series of gas demand in Germany \cite{Sandoval_COMS_2021} originating from the DemandRegio project \cite{demandregio}. Significant insights were obtained when analyzing the time series concerning the seasonal, geographical, and sector-specific variability of the gas demand in Europe \cite{Sandoval_COMS_2021}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{figures/cons_industrial.png}
\caption{Annually averaged gas demand in the residential sector between 2010-2019, disaggregated into NUTS 3 regions \cite{javier}.}
\label{fig:img_mask_pred3}
\end{figure}
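The derivation of such regionalized daily time series can be sketched roughly as follows. This is only a minimal illustration, not the method of \cite{Sandoval_COMS_2021}: the NUTS 3 codes, weights, and the cosine-shaped seasonal profile are placeholder assumptions.

```python
# Sketch: split an annual national gas demand across NUTS 3 regions by a
# weight (e.g. population), then spread each regional share over the year
# with a simple seasonal profile. All figures are illustrative placeholders.
import math

def disaggregate(national_demand, region_weights):
    """Split a national total proportionally to normalized regional weights."""
    total = sum(region_weights.values())
    return {r: national_demand * w / total for r, w in region_weights.items()}

def daily_profile(annual_demand, days=365):
    """Crude seasonal shape: higher demand in winter, lower in summer."""
    raw = [1.0 + 0.5 * math.cos(2 * math.pi * d / days) for d in range(days)]
    scale = annual_demand / sum(raw)          # preserve the annual total
    return [x * scale for x in raw]

weights = {"DE111": 5.3, "DE112": 3.9, "DE113": 1.8}   # hypothetical weights
regional = disaggregate(1000.0, weights)               # e.g. GWh per year
series = {r: daily_profile(v) for r, v in regional.items()}
```

By construction, the regional shares sum to the national total and each daily series sums to its regional share; the real derivation additionally distinguishes the three demand sectors.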
\subsection{OpenStreetMap - OSM}
OpenStreetMap (OSM) \cite{OSM} is a freely modifiable and accessible geo-data database with steadily increasing data coverage and data quality. In the past, OSM data has contributed to the field of energy system modeling for example in the creation of power grid models \cite{MEDJROUBI201714} or the optimization of flexibility options for urban areas \cite{Alhamwi_2017, Alhamwi_2017a}.
With the \texttt{esy-osmfilter} \cite{Pluta2020esyosmfilterA} we created a Python library to easily access and filter data from OpenStreetMap. We used \texttt{esy-osmfilter} to analyze the European gas pipeline data in OSM. Our analysis showed that gas transport pipeline data are well represented and rapidly growing. However, some countries show significant data gaps. Moreover, data relating to system-relevant components like compressor stations or storages is clearly missing. With \texttt{osmscigrid}\cite{osmscigrid} we created another library to convert OSM pipeline data directly to the SciGRID\_gas data format for easier integration of OSM datasets.
OSM data has the downside of being licensed under the ODbL, which is not compatible with the CC-BY license, a hurdle that can be overcome using a collective database. However, we have decided that the current OSM data will not be used directly in our model but only to validate the topology.
\subsection{Landsat 5}
In order to address missing data, we conducted a feasibility study on gas pipeline detection using remote sensing and Artificial Intelligence (AI) methods. Our approach is based on recognizing the 16-28\,m wide construction lane of gas pipelines in order to detect pipeline routes. We then trained a convolutional neural network to discriminate between pixels labeled as \textsc{Pipeline} and pixels labeled as \textsc{Background}. For this purpose, we have used Landsat 5 imagery. Training and tests on the British gas transmission network and the NEL pipeline showed good evaluation scores. They proved the concept of using AI and remote sensing methods on historic open-source satellite imagery to detect pipeline pathways \cite{Dasenbrock-energies-2021}.\\
\section{Model Architecture}\label{model}
The SciGRID\_gas data model \textbf{network} consists of several \textbf{component classes}, each representing a list of objects. The following component classes have been implemented:
\textsc{PipeSegments} (PS),
\textsc{BorderPoints} (BP),
\textsc{Compressors} (CS),
\textsc{LNGs} (LNG),
\textsc{PowerPlants} (PP),
\textsc{Productions} (PO),
\textsc{Consumers} (CO),
\textsc{Storages} (ST). Any object which is a member of a component class is defined as an \textbf{element} of that respective class and can therefore be described by a common component-specific set of attributes. To keep track of the data processing steps, we have also included parameter information for each attribute of a specific element.
To make this data structure suitable for gas networks, we restructure the data using \textbf{nodes} and \textbf{edges}, which are connected to the elements in our dataset by a unique ID. All components except \textsc{PipeSegments} are implemented as nodes. \textsc{PipeSegments}, in contrast, connect a start node to an end node and are therefore implemented as edges. Intermediate pipeline points, reflecting the geographical course of \textsc{PipeSegments}, are stored in separate lists.
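The structure just described can be illustrated with a minimal sketch; the IDs, coordinates, and attribute names below are made up and do not follow the exact SciGRID\_gas naming scheme.

```python
# Minimal sketch of the network structure: every component is a node except
# PipeSegments, which are edges between a start and an end node. Intermediate
# points store the geographical course; a per-attribute "method" entry keeps
# track of data provenance. All IDs and values are illustrative.

nodes = {
    "N_0001": {"component": "Compressors", "lat": 52.5, "long": 13.4},
    "N_0002": {"component": "Storages",    "lat": 53.1, "long": 8.8},
}

pipe_segments = [
    {
        "id": "PS_0001",
        "node_id": ["N_0001", "N_0002"],     # start and end node
        "lat":  [52.5, 52.8, 53.1],          # intermediate course points
        "long": [13.4, 11.0, 8.8],
        "param": {"diameter_mm": 900, "max_pressure_bar": None},
        "method": {"diameter_mm": "raw", "max_pressure_bar": None},
    },
]

def degree(node_id):
    """Number of pipe segments attached to a node."""
    return sum(node_id in ps["node_id"] for ps in pipe_segments)
```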
\section{Methodology}\label{sec:methods}
In this section, we describe the methodology we developed to create the gas network dataset. This addresses, in particular, the release of datasets from various sources, the creation of a merged network dataset, the post-processing, and the visualization.
\subsection{Data Basis}
We have converted the datasets from the various data sources mentioned in Chapter$\,$\ref{dataorigin} from their original format to the SciGRID\_gas format described in Chapter$\;$\ref{model}, and released them on the project website\cite{SciGRID_gas_ws} under the section \textit{downloads}.
Table\,\ref{tab:data origin2} gives an overview of the data sources constituting the IGGIELGNC-1 dataset.
\begin{table}[ht]
\caption{Overview of the available data sources for different gas transport components.}
\begin{center}
\begin{tabular}{l | l}
\textbf{component} & \textbf{data source} \\
\hline
\textsc{PipeSegments} & INET, EMAP, LKD, (GB, NO) \\
\textsc{Nodes} & INET, EMAP, LKD, GIE, CONS \\
\textsc{Lngs} & INET, GIE \\
\textsc{Storages} & GIE, GSE, LKD, EMAP\\
\textsc{PowerPlants} & INET, CONS\\
\textsc{Productions} & INET, EMAP, LKD \\
\textsc{Compressors} & INET, LKD\\
\textsc{Borderpoints} & INET\\
\textsc{Consumers} & INET, CONS\\
\end{tabular}
\end{center}
\label{tab:data origin2}
\end{table}
\subsection{Data Merging Process}
In the next step, we have merged the various datasets we obtained. This task required identifying duplicate elements that exist in more than one dataset. For this task, we rely on the criteria of spatial and name similarity using the \texttt{fuzzywuzzy} python package \cite{fuzzy}. The algorithm assigns each pair of objects a similarity score between 0 and 100, where 0 indicates no similarity and 100 an exact match. Components are merged if their score exceeds a component-dependent threshold between 80 and 95. If likely duplicates do not share the same attribute values, the attributes of the subjectively most trustworthy source from Chapter$\;$\ref{dataorigin} are adopted.
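The duplicate criterion for node components can be sketched as follows. The standard-library \texttt{difflib} stands in for \texttt{fuzzywuzzy} here, and the element names, coordinates, and thresholds are illustrative.

```python
# Sketch of the node-merging criterion: two elements are treated as one if
# their names are similar AND their positions are close. difflib is used as a
# stand-in for fuzzywuzzy; names, coordinates and thresholds are illustrative.
import difflib
import math

def name_score(a, b):
    """Similarity score between 0 and 100 (stand-in for fuzzywuzzy)."""
    return 100.0 * difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def distance_km(p, q):
    """Haversine distance between two (lat, long) pairs in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def is_duplicate(e1, e2, name_threshold=85, max_km=10.0):
    """Merge candidates must pass both the name and the distance check."""
    return (name_score(e1["name"], e2["name"]) >= name_threshold
            and distance_km(e1["pos"], e2["pos"]) <= max_km)

a = {"name": "Compressor Station Werne",   "pos": (51.66, 7.63)}
b = {"name": "Compressor Station Werne 1", "pos": (51.67, 7.64)}
```

Requiring both criteria avoids merging distinct stations that share a common naming pattern, as well as nearby elements of different facilities.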
This process works for all components which are implemented as nodes. The process of merging pipelines is more complex.
For edges, the process is built around a similarity check of the start and end node positions and comparisons of the diameter, pressure, capacity, and length values. The respective algorithm is described in more detail in the documentation of the final dataset on the website\cite{SciGRID_gas_ws}. In terms of data, we have used the pipeline data from the EMAP dataset as a basis for the pipeline system and combined it with pipeline data from the INET and LKD datasets.
\subsection{Attribute Generation}
Once a merged dataset has been compiled, we focus on predicting missing data on the attribute level. Depending on the specific attribute, various approaches produced different results regarding their suitability to estimate missing values.
One approach would be to use heuristics to determine missing values. For example, in the case of missing pipeline capacity values, one could use the capacity of an adjacent compressor station to derive this value under the consideration of all other incoming and outgoing pipelines. This approach, of course, requires the construction of meaningful heuristics and sufficient data.
Another approach exploits linear relations between different parameters of the same element. For example, the maximal power of a compressor correlates linearly with its maximal capacity. For this purpose, we have used the Lasso linear regression method from \texttt{scikit-learn}\cite{scikit_Reg}.
However, a meaningful linear correlation is not identifiable in most cases, or the data density is insufficient. Thus, we must rely on a statistical approach and calculate mean or median values.
We have also used some of our unused data from Chapter$\;$\ref{dataorigin} to derive statistical correlations and heuristics in some particular situations. For all derived values, the data generation process will also store the applied method in the corresponding metadata of this value. This is especially useful if a user wants to distinguish raw and generated data. The user can identify further heuristics and develop other data generation methods based only on the original attribute data.
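A minimal sketch of this fallback chain is given below, with plain ordinary least squares standing in for the Lasso regression from \texttt{scikit-learn}; all numbers are illustrative, not values from the dataset.

```python
# Sketch of the attribute-generation fallback chain: fit a linear relation
# between two attributes where enough (x, y) pairs are known, otherwise fall
# back to the median, and record the applied method as metadata.
from statistics import median

def fit_linear(xs, ys):
    """Ordinary least squares y = a*x + b (assumes xs are not all equal)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def fill_power(compressors):
    known = [(c["capacity"], c["power"]) for c in compressors
             if c["capacity"] is not None and c["power"] is not None]
    a, b = fit_linear(*zip(*known))
    powers = [c["power"] for c in compressors if c["power"] is not None]
    for c in compressors:
        if c["power"] is None:
            if c["capacity"] is not None:
                c["power"], c["method"] = a * c["capacity"] + b, "regression"
            else:                          # no predictor available at all
                c["power"], c["method"] = median(powers), "median"
    return compressors

data = [{"capacity": 10.0, "power": 5.0}, {"capacity": 20.0, "power": 9.0},
        {"capacity": 30.0, "power": 13.0}, {"capacity": 25.0, "power": None},
        {"capacity": None, "power": None}]
filled = fill_power(data)
```

The stored \texttt{method} entry lets a user separate raw values from generated ones, as described above.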
\subsection{Post-Processing and Visualization}
Finally, we have added our artificial \textsc{Consumer} (or \textsc{demand}) nodes to the network and connected them to the nearest pipeline. We have compiled the final dataset for the three consumer aggregation levels NUTS 1, NUTS 2, and NUTS 3, which resulted in the datasets IGGIELGNC-1 \cite{IGGIELGNC1}, IGGIELGNC-2 \cite{IGGIELGNC2}, and IGGIELGNC-3 \cite{IGGIELGNC}, respectively. Further, some cleanup routines have been implemented, e.g., for removing isolated elements or connecting them to the network in order to create a coherent network. Also, during post-processing, the elevation of each node was determined with the help of the \textit{Bing Maps Elevation API}\cite{BingAPI}. Additionally, we have released \texttt{qplot}\cite{qplot}, a \texttt{matplotlib}\cite{Hunter:2007} based visualization library for SciGRID\_gas data. The library was used for the creation of Fig.\,\ref{fig:ES-OSM}.
\section{Results} \label{sec:results}
We have released our final results under the name IGGIELGNC-1 on our website\cite{SciGRID_gas_ws}, where it is linked to its respective \textsc{Zenodo} repository. The data is licensed under CC-BY, available in CSV and GeoJSON formats, and accompanied by methodological documentation. We want to emphasize that the following results refer to version 1.1 of the dataset and are subject to change in future updates.
We have merged the pipeline systems of INET with a total length of about 60.000\,km, EMAP with a total length of about 207.000\,km, and LKD with a total length of about 27.000\,km into a network that finally contains 237.000\,km of gas transport pipelines. This pipeline system is plotted in Fig.\,\ref{fig:IGG3_1}. For comparison, the extrapolated pipeline validation dataset from OSM in 2020 only contains a total length of 108.000\,km.
In Fig.\,\ref{fig:IGG3_2} we show the pipeline system with all other components, which sum up to 109 BP, 248 CS, 32 LNG, 314 PP, 102 PO, 108 CO, and 294 ST (refer to Section\,\ref{model} for the nomenclature). A country-wise overview of all components for some EU countries is shown in Table\,\ref{tab:pipe country}.
\begin{table}[htbp]
\caption{Total pipeline (PS) length in km and counts of compressor stations (CS), LNG terminals (LNG), border points (BP), storages (ST), and consumers (CO) for selected countries. Data source: IGGIELGNC-1.}
\begin{center}
\begin{tabular}{c | c | c |c | c | c | c}
country & PS & CS & LNG & BP & ST & CO\\
code & length & count &count&count&count&count \\
\hline
AT & 2451 & 7 & 0 & 4 & 15 & 3 \\
BE & 2312 & 6 & 1 & 6 & 1 & 3 \\
CH & 1012 & 1 & 0 & 2 & 0 & 1 \\
CZ & 2159 & 6 & 0 & 3 & 10 & 1 \\
DE & 27708 & 35 & 0 & 15 & 68 & 15 \\
DK & 841 & 1 & 0 & 1 & 3 & 1 \\
ES & 8389 & 18 & 7 & 4 & 8 & 5 \\
FR & 15424 & 40 & 4 & 7 & 23 & 12 \\
GB & 6836 & 28 & 4 & 4 & 22 & 11 \\
IT & 12053 & 14 & 4 & 6 & 20 & 4 \\
\end{tabular}
\end{center}
\label{tab:pipe country}
\end{table}
Next, we looked at the attribute data for \textsc{PipeSegments}, \textsc{Storages}, \textsc{LNGs} and \textsc{Compressors}. For this purpose, we have chosen up to three of the most relevant attributes for each component and determined their respective data density. Tab.\,\ref{tab:parameter} shows the result in percent. We have not considered \textsc{BorderPoints} or \textsc{Consumers} for this analysis as these components are based on data aggregation and have no real physical counterpart.
\begin{table}[ht]
\caption{Density of raw attribute data in the IGGIELGNC-1 dataset, in percent.}
\begin{center}
\begin{tabular}{l | c | c| c}
\textbf{component} & \textbf{density} & \textbf{density} & \textbf{density} \\
\hline
\multirow{ 2}{*}{\textsc{PipeSegments}}& capacity &diameter& pressure \\
& 13\% & 32\% & 19\% \\
\hline
\multirow{ 2}{*}{ \textsc{Lngs}} & capacity &size& \multirow{ 2}{*}{-}\\
& 94\% & 69\% & \\
\hline
\multirow{ 2}{*}{ \textsc{Storages}} &capacity& power & pressure\\
& 63\% & 28\% & 35\% \\
\hline
\multirow{ 2}{*}{\textsc{Compressors}} &capacity & power & pressure\\
& 7\% & 15\% & 7\% \\
\hline
\multirow{ 2}{*}{\textsc{PowerPlants}} &energy &\multirow{ 2}{*}{-} & \multirow{ 2}{*}{-}\\
& 100\% & & \\
\hline
\multirow{ 2}{*}{\textsc{Productions}} &supply &\multirow{ 2}{*}{-} &\multirow{ 2}{*}{-} \\
& 5\% & & \\
\end{tabular}
\end{center}
\label{tab:parameter}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{figures/IGG3_pipes.png}
\caption{The pipeline system of the European gas transport grid in the IGGIELGNC-1 dataset. Pipelines are colored according to their diameter values.}
\label{fig:IGG3_1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{figures/IGG1.png}
\caption{Extract of the European gas network of the IGGIELGNC-1 dataset with all its components.}
\label{fig:IGG3_2}
\end{figure}
We have visually validated the topology of our network datasets with the OSM pipeline data. This process is illustrated in Fig.\,\ref{fig:ES-OSM} for the validation of the INET data for the region of Spain. Our overall impression was that the topology of all major pipelines is in good agreement with the currently available OSM data.
\begin{figure}[ht]
\centering
\includegraphics[width=1\textwidth]{figures/ES_OSM_INET3.png}
\caption{Comparison of pipelines from OSM (black) and the INET dataset (colored with diameter [mm]).}
\label{fig:ES-OSM}
\end{figure}
\section{Discussion} \label{sec:discussion}
We presented our approach to create a gas transport network data model of Europe from publicly available data sources. In terms of pipelines, our data model has a total length of about 237.000\,km, which is roughly in accordance with the commonly assumed length of 200.000\,km \cite{CONNOLLY2014475}. The slight overestimation is probably a direct result of our broader definition of the European gas network, which also incorporates the western part of Russia, some North African states, and Turkey. However, the total length is a good indicator for the overall success of modeling the European gas transport network from open-source data.
Further, we have used OpenStreetMap data to validate our network's topology visually. Our data show good agreement in this regard. Nevertheless, the OSM dataset currently contains only about 45\,\% of all pipelines due to significant data gaps in some regions. Also, this method cannot be used to validate parallel pipelines because they are not explicitly specified in OSM. The examination of other components like \textsc{Powerplants} and \textsc{Productions} has shown data gaps in some regions stemming from incomplete data sources. We have further noticed that some countries still have missing \textsc{Borderpoints}. Since our algorithm generates these artificial nodes, this will be corrected in future dataset versions.
Further validation of our dataset is currently not feasible due to a lack of relevant open-source validation datasets. This is also why we have avoided evaluating our attribute generation methods, which are described in more detail in the dataset documentation. Instead, we want to discuss the potential accuracy of such methods. The results of any attribute generation method, designed to predict partially unknown data, will scale in accuracy with the percentage of known data. We have analyzed our data regarding important parameters of different network components. We can therefore state that the generated attributes for \textsc{LNGs}, \textsc{Storages} and \textsc{PowerPlants} are more trustworthy than those for \textsc{PipeSegments}, \textsc{Compressors} and \textsc{Productions}.
From our perspective, more focus needs to be put into the data acquisition for these components. Such data might be provided by OpenStreetMap (OSM) soon. Pluta and Lünsdorf\cite{Pluta2020esyosmfilterA} have stated that the gas transport data content was rapidly growing between 2014-2019. If this trend continues or is even supported by TSOs, OSM might become a good source for this data. At some point, it might even be possible to create an entire network from OSM data, as was done for the open-source power transmission dataset SciGRID\_power \cite{SciGRID_power_ws}. The reason why this is currently not possible for the gas grid is mainly rooted in the fact that gas transport pipelines are buried underground. This makes the direct identification of their position and additional properties difficult for OpenStreetMap mappers.
\section{Conclusion and Outlook} \label{sec:conclusion}
This contribution shows that creating a European gas transport network model is possible using only open-source data sources. Further, we have used statistical and heuristic methods to generate missing network element attributes. Nevertheless, a complete data validation was not possible due to the lack of verification data. However, our analysis of the underlying component attribute data showed that gas transport pipelines and compressor stations have a low data density in terms of their main attributes. Since both components are critical in modeling gas flows, future work will focus on closing these data gaps. We have discussed how some data gaps can potentially be closed either by a steady growth of OSM data or by remote sensing methods. However, some data, especially on pipeline materials and roughness values, are not accessible without the assistance of TSOs.
We believe that our data model is a valid approximation of the current European gas network. It can potentially encourage TSOs to make their data open-source, which in the long term will result in a more precise representation of the gas grid and better-suited energy scenarios.
\section*{Acknowledgment}
The authors like to acknowledge the contribution of Ontje Lünsdorf and Alaa Alhamwi.
\section*{Funding}
This research was funded under the “SciGRID\_gas” project by the German Federal Ministry for Economic Affairs and Energy (BMWi) within the funding of the 6. Energieforschungsprogramm der Bundesregierung. Funding Code: 03ET4063.
\bibliographystyle{alpha}
\section{Introduction}
\subsection{Minkowski's bound for polynomial automorphisms.}\label{par:Minkowski_Schur}
\paragraph{Rational numbers.--} Let $p$ be a prime. A finite \emph{$p$-group} is a group of size $p^\alpha$ for some integer $\alpha \geq 0$.
For $d\in \mathbf{Z}_+$, define $M_\mathbf{Q}(d,p)$ to be the integer
\[ M_\mathbf{Q}(d,p) = \ent{\frac{d}{p-1}} + \ent{\frac{d}{p(p-1)}} + \ent{\frac{d}{p^2 (p-1)}} + \cdots
\]
(Here $M$ stands for Minkowski).
Let $v_p$ be the $p$-adic valuation; then $M_\mathbf{Q}(d,p) = \ent{\frac{d}{p-1}} + v_p \left( \ent{\frac{d}{p-1}} !
\right)$.
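For instance, for $d = 2$ and $p = 2$ the bound evaluates to
\[
M_\mathbf{Q}(2,2) = \ent{\frac{2}{1}} + \ent{\frac{2}{2}} + \ent{\frac{2}{4}} + \cdots = 2 + 1 + 0 = 3,
\]
and this bound is attained by the dihedral group of order $2^3 = 8$, which acts on $\mathbf{Z}^2$ as the symmetry group of a square and thus embeds in $\mathrm{GL}_2(\mathbf{Q})$.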
\begin{thm}[Minkowski 1887, see \cite{SerreBoundsOrder}]\label{MinkowskiBoundLinear} Let $d$ be a natural
number and let $p$ be a prime. If $G$ is a finite $p$-subgroup of $\mathrm{GL}_d (\mathbf{Q})$, then $v_p(\vert G\vert)\leq M_\mathbf{Q}(d,p)$, and this upper bound is
optimal: there are groups of order $p^{M_\mathbf{Q}(d,p)}$ in $\mathrm{GL}_d(\mathbf{Q})$. \end{thm}
\paragraph{Number fields.--} Schur extended Minkowski's result to the case of number fields. To state Schur's result,
let us introduce some notation for cyclotomic extensions. Consider a number field $\k$ and fix an
algebraic closure ${\overline{\k}}$ of $\k$. Denote by $z_a\in {\overline{\k}}$ any primitive $a$-th root of unity, for
$a$ any positive integer; for instance $z_4={\mathsf{i}}$,
a square root of $-1$.
\begin{itemize}
\item If $p \geq 3$, set $t(\k;p) = [\k(z_p) : \k]$ and let $m(\k;p)$
be the maximal integer $a$ such that $\k(z_p)$ contains $z_{p^a}$; note that $m(\k; p)$ is finite because $\k$ is a finite extension of $\mathbf{Q}$. Then, define
\[
M_\k(d,p) := m(\k; p) \cdot \ent{\frac{d}{t(\k;p)}} + \ent{\frac{d}{p \cdot t(\k;p)}} +
\ent{\frac{d}{p^2t(\k;p)}} + \cdots .
\]
\item If $p=2$, set $t(\k; 2) =[\k(z_4): \k]$ and let $m(\k; 2)$ be the largest integer $a$ such that $z_{2^a} \in \k(z_4)$. Define \[
M_\k (d,2) = d + (m(\k; 2)-1) \ent{\frac{d}{t(\k;2)}} + \ent{\frac{d}{2t(\k;2)}} +
\ent{\frac{d}{4t(\k;2)}} + \cdots .
\]
\end{itemize}
This definition is consistent with the definition of $M_\mathbf{Q} (d,p)$ given above.
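To illustrate how the arithmetic of $\k$ enters the bound, take $\k = \mathbf{Q}(z_4)$ and $p = 2$: then $t(\k;2) = 1$ and $m(\k;2) = 2$, since $z_4 \in \k$ but $z_8 \notin \k$. For $d = 1$ this gives
\[
M_\k(1,2) = 1 + (2-1)\cdot\ent{\frac{1}{1}} + \ent{\frac{1}{2}} + \cdots = 2,
\]
attained by the cyclic group $\langle z_4 \rangle \subset \mathrm{GL}_1(\k)$ of order $4$. Over $\mathbf{Q}$, where $t(\mathbf{Q};2) = 2$, the same formula gives $M_\mathbf{Q}(1,2) = 1$, attained by $\{\pm 1\}$.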
\begin{thm}[\cite{schur1973klasse}, \cite{SerreBoundsOrder}]\label{SchurBound} Let $d$ be a natural
number, and let $p$ be a prime.
If $G$ is a finite $p$-subgroup of $\mathrm{GL}_d (\k)$, then $v_p(\vert G\vert)\leq M_\k(d,p)$ and
this bound is optimal.
\end{thm}
It is not difficult to find a subgroup $G \subset \mathrm{GL}_d (\k)$ such that $\abs G =
p^{M_\k(d,p)}$. We recall how to do so in Proposition \ref{PropBorneOptimale}.
\paragraph{Polynomial automorphisms.--}
Our first goal is to extend the theorem of Minkowski and Schur to an algebraic,
but nonlinear context. Let $\Aut(\mathbf{A}_\k^d)$ be the group of polynomial automorphisms of the affine space
$\mathbf{A}^d$, over some number field $\k$. This group contains $\mathrm{GL}_d (\k)$ but it
is much more complicated. Surprisingly, we are able to show that the Minkowski-Schur bound still
holds for subgroups of $\Aut (\mathbf{A}_\k^d)$ and in fact the same finite subgroups appear.
\begin{bigtheorem}\label{SchurBoundPolynomial}
Let $\k$ be a number field, let $d$ a natural number, and let $p \geq 3$ be a prime. If $G$ is a finite
$p$-subgroup of $\Aut(\mathbf{A}_\k^d)$, then there exists a group embedding $ G \hookrightarrow \mathrm{GL}_d(\k).$
In particular, Schur's bound still holds:
\[
v_p (\abs G) \leq M_\k(d,p),
\]
and this bound is optimal.
\end{bigtheorem}
The proof first establishes the bound on the cardinality of the group $G$; we then obtain the group embedding $G \hookrightarrow
\mathrm{GL}_d(\k)$ using a Sylow argument.
\begin{rmq} \label{remark:CasPImpairPasOptimal} The case $p=2$ is
also dealt with in Section 2. But we don't get an optimal bound. For example for $p=2$ and $\k = \mathbf{Q}$, we show that any
$2$-subgroup $G$ of $\Aut(\mathbf{A}_\mathbf{Q}^d)$ can be embedded into $\mathrm{GL}_d(\mathbf{Q}(z_4))$ and therefore satisfies
$v_2(\abs G) \leq M_\mathbf{Q}(d,2) + \ent{\frac{d}{2}}$. More
precisely, Proposition \ref{ImageCaractereCyclotomique} defines three cases $(a), (b)$ and $(c)$ when
$p=2$. We get an embedding into $\mathrm{GL}_d(\k)$
in cases $(a)$ and $(b)$ (this is the case, for example, if $\k$ contains $z_4$), but in case $(c)$ we can
only get an embedding of $G$ into $\mathrm{GL}_d(\k(z_4))$ and therefore we get the bound $ v_2(\abs G) \leq
M_\k(d,2) + \ent{\frac{d}{2}} = M_{\k(z_4)}(d,2) $. See Theorem \ref{BigThmSchurBoundPolynomial} page
\pageref{BigThmSchurBoundPolynomial} for the general statement.
In fact, Theorem \ref{SchurBoundPolynomial} still holds when $\k$ is a finitely generated
field over $\mathbf{Q}$, but the proof is less intuitive; we therefore give the proof for $\k$ a number field and
explain how to extend it to finitely generated fields over $\mathbf{Q}$ in Remark \ref{RmqFinitelyGeneratedField}.
We then state the complete theorem for finitely generated fields over $\mathbf{Q}$ in Theorem
\ref{BigThMSchurBoundPolynomialGeneralCase} page \pageref{BigThMSchurBoundPolynomialGeneralCase}.
\end{rmq}
Our method of proof follows \cite{SerreBoundsOrder}, in which Serre bounds the order of the finite subgroups
of ${\mathrm{H}}(\k)$, for ${\mathrm{H}}$ a semi-simple algebraic group; the phenomenon mentioned in Remark
\ref{remark:CasPImpairPasOptimal} also appears for such groups ${\mathrm{H}}$. The general idea is to embed $G$ into a group of
linear automorphisms over a finite field, study the finite field case, and use cyclotomic characters to find the
optimal bound yielded by this method.
\paragraph{Birational transformations.--}
The problem of the existence of uniform bounds on the size of finite $p$-groups or finite simple groups in infinite
dimensional groups such as $\Aut(\mathbf{A}^d)$ or $\Bir(\mathbf{A}^d)$ has been studied extensively during the last decade
(see~\cite{Serre09}). For an arbitrary complex projective variety $X$, one cannot expect uniform
bounds that would only depend on the dimension of $X$, since
every finite group is the group of automorphisms of a complex projective curve (see~\cite{Greenberg}).
But precise results have been obtained when $X$ is rationally connected. Recently, Jinsong Xu showed the following
optimal result:
{\sl{Let $d$ be a natural number and let $p$ be a prime $>d+1$.
If $X$ is a rationally connected variety of dimension $d$ over an
algebraically closed field of characteristic $0$, and $G$ is a finite $p$-subgroup of $\Bir(X)$, then $G$ is abelian
and its rank is at most $d$}} (see~\cite{Jinsong_Xu:CRAS_2020}). Results of this type were first shown by
Prokhorov, Shramov and Birkar in \cite{prokhorov_shramov_2014} for birational transformations of any varieties and
improvements were made for rationally connected varieties in \cite{prokhorov2016jordan}.
These results are deeper than our Theorem \ref{SchurBoundPolynomial}, but our contribution has
a few advantages: it
may serve as an introduction to the work of Prokhorov and Shramov, the techniques are more elementary, the precise
bound we obtain illustrates the interplay between the arithmetic of the field $\k$ and the size of the group, and the proof
shows why the upper bound of Minkowski and Schur is still valid in $\Aut(\mathbf{A}_\k^d)$.
\begin{rmq}
The results of Prokhorov and Shramov rely on the BAB
conjecture, which was proved by Birkar in \cite{birkar2016singularities}. The result of J.
Xu relies on the work of Haution on
equivariant cohomology and fixed points of finite groups (see~\cite{haution_2019}).
\end{rmq}
\subsection{A bound for the action of finitely generated nilpotent groups}
\subsubsection{Nilpotent and solvable groups}\label{par:nilpotent_and_solvable}
Let~$H$~be a group. If~$a,b \in H$, we denote by~$[a,b] := ab \inv a \inv b$~their commutator. If~$H_1,H_2$~are two
subgroups of~$H$, then we denote by~$H_1 H_2$~the subgroup generated by the set~$\{h_1 h_2: h_1 \in H_1, h_2 \in
H_2\}$~and by $[H_1,H_2]$ the subgroup generated by the set~$\{[h_1, h_2]: h_1 \in H_1, h_2 \in H_2\}$.
The lower central (resp. derived) series is defined by~$D^{0}(H) = H$~(resp.~$D_0(H) = H$)~and~$D^{i+1}(H)
= [H, D^{i}(H)]$~(resp. $D_{i+1}(H) = [D_i(H), D_i (H)]$). A group $H$ is \emph{nilpotent} (resp.
\emph{solvable}) when there exists an integer $k$ such that $D^k(H) = \{1\}$ (resp. $D_k(H) = \{1\}$).
If $H$ is nilpotent, its {\emph{nilpotency
class}} $\nilp(H)$~is the least integer $k$ such that $D^k(H) = \{1\}$.
For a solvable group
$H$, denote by $\dl (H)$ its derived length, that is, the least integer $k$ such that $D_k(H) = \{1\}$. The \emph{virtual
derived length} of $H$ is the minimum of $\dl(H_0)$ over the finite index subgroups $H_0$ of $H$. Similar definitions
and notation will be used for Lie algebras.
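To make these definitions concrete, here is a small computational sketch (not part of the argument; the choice of example, the Heisenberg group of upper unipotent $3\times 3$ matrices over $\mathbf{F}_3$, is ours). It computes the first terms of the derived and lower central series by brute force and confirms that this group is nilpotent of class $2$ and solvable of derived length $2$.

```python
from itertools import product

p = 3  # any odd prime works; p = 3 keeps the brute force tiny

def mat(x, y, z):
    # an element of the Heisenberg group: upper unipotent 3x3 matrix over F_p
    return ((1, x, z), (0, 1, y), (0, 0, 1))

I3 = mat(0, 0, 0)

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % p
                       for j in range(3)) for i in range(3))

def inv(A):
    # (I + N)^{-1} = I - N + N^2, since N^3 = 0 for strictly upper triangular N
    N = tuple(tuple((A[i][j] - I3[i][j]) % p for j in range(3)) for i in range(3))
    N2 = mul(N, N)
    return tuple(tuple((I3[i][j] - N[i][j] + N2[i][j]) % p for j in range(3))
                 for i in range(3))

def comm(a, b):
    # [a, b] = a b a^{-1} b^{-1}, the convention used in the text
    return mul(mul(a, b), mul(inv(a), inv(b)))

def generated(S):
    # subgroup generated by S, by naive saturation inside a finite group
    G = {I3} | set(S)
    while True:
        new = {mul(a, b) for a in G for b in G} - G
        if not new:
            return G
        G |= new

H = {mat(x, y, z) for x, y, z in product(range(p), repeat=3)}  # |H| = p^3
D1 = generated({comm(a, b) for a in H for b in H})    # derived subgroup [H, H]
D2 = generated({comm(a, b) for a in D1 for b in D1})  # second derived subgroup
C2 = generated({comm(a, b) for a in H for b in D1})   # lower central term [H, [H, H]]
```

Here `D1` is the center (of order $p$), while `D2` and `C2` are trivial, so both series terminate after two steps.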
\subsubsection{Upper bounds on the virtual derived length}
Finite $p$-groups are nilpotent. We now look at infinite, finitely generated nilpotent groups, and their actions by
automorphisms and birational transformations.
In \cite{cantat2014algebraic}, Cantat and Xie used $p$-adic analysis to give information on group actions on complex
algebraic varieties by birational transformations, and sketched the proof of the following result.
\begin{bigtheorem}\label{BoundNilpotentGroups}
Let $H$ be a finitely generated nilpotent
group acting faithfully on a quasi-projective variety $X$ by algebraic automorphisms over a field of characteristic
zero. Then, \[ \vdl(H)\leq \dim X \] where $\vdl(H)$ is the \emph{virtual derived length} of $H$.
Furthermore, this bound is optimal.
\end{bigtheorem}
Another goal of this paper is to give a complete proof of this result. Again, the main idea is to replace the
initial field of definition by another one, here $\mathbf{Q}_p$, and in fact by $\mathbf{Z}_p$, for a suitable prime $p$.
Then, the initial action of the discrete group $H$ will be extended to an analytic action of a $p$-adic
Lie group on $\mathbf{Z}_p^{\dim X}$, so that
tools from $p$-adic analysis will be available, in particular $p$-adic analytic vector fields and $p$-adic Lie algebras. Thus,
Theorem \ref{BoundNilpotentGroups} will follow from a similar theorem we prove over $\mathbf{Z}_p$. Section
\ref{SecPAdicAnalysis} is dedicated to the construction of $p$-adic analytic tools needed for the proof of Theorem
\ref{BoundNilpotentGroups} such as infinite dimensional $p$-adic Lie groups or Tate-analytic diffeomorphisms and Section
\ref{SecFinitelyGeneratedNilpotentGroups} is dedicated to the proof of Theorem \ref{BoundNilpotentGroups}.
\section{Finite $p$-groups}
\subsection{Preliminaries}
\paragraph{Primes and $p$-adic numbers} In the rest of the article, $p$ is a prime unless mentioned otherwise, $\mathbf{Z}_p$
denotes the ring of $p$-adic integers and $\mathbf{Q}_p$ is the fraction field of $\mathbf{Z}_p$. Recall that Dirichlet's
theorem states that for any integers $a,n$ such that $\gcd(a,n) =1$, there are infinitely many prime numbers
$\ell$ such that $\ell \equiv a \mod n$.
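As an illustration (our own example, not part of the text), the following sketch lists the first primes congruent to $2$ modulo $9$; since $2$ generates $(\mathbf{Z}/9\mathbf{Z})^{\times}$, such primes are exactly the kind used in the arguments below.

```python
def is_prime(n):
    # naive trial division, enough for small illustrations
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def primes_congruent(a, n, count):
    """First `count` primes l with l congruent to a (mod n); assumes gcd(a, n) = 1."""
    found, l = [], 1
    while len(found) < count:
        l += 1
        if l % n == a % n and is_prime(l):
            found.append(l)
    return found

print(primes_congruent(2, 9, 5))  # [2, 11, 29, 47, 83]
```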
\paragraph{Maximal ideals and reduction} If $q$ is a power of a prime, we denote by $\mathbf{F}_q$ the field with $q$ elements.
Let $A$ be a finitely generated $\mathbf{Z}$-algebra. Then for every maximal ideal $\m \subset A$, $A / \m$ is a finite
field. This comes from the Nullstellensatz for Jacobson rings which is proven in \cite{bourbaki2007algebre}, chapter
5, $\S 3$, theorem 3 of section 4.
\subsection{Groups of linear transformations over $\mathbf{Q}$}
To warm up, let us prove the theorem of Minkowski. For a ring $A$, we denote by $A^{\times}$ its group of invertible
elements; for any prime $p$, the group $(\mathbf{Z} / p^2 \mathbf{Z})^{\times}$ is cyclic.
\begin{prop}\label{PropEmbeddingFinite}
Let $G$ be a finite subgroup of $\mathrm{GL}_d (\mathbf{Q})$. For any prime $\ell$ large enough there exists an injective homomorphism
$G \hookrightarrow \mathrm{GL}_d (\mathbf{F}_\ell)$.
\end{prop}
\begin{proof}
Since $G$ is finite, there exists an integer $N$ such that $G \subset \mathrm{GL}_d (\mathbf{Z}[1 / N])$. Now, for each $g \in G
\setminus \left\{ \id \right\}$ denote by $l(g)$ the largest prime factor that appears in the prime decomposition of the rational
numbers given by the coefficients of the matrix $g - \id$; denote by $L$ the maximum of the primes $l(g)$. If $\ell >
\max(N,L)$, the homomorphism of reduction modulo $\ell$ is well defined on $G$ and is injective.
\end{proof}
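A minimal numerical illustration (our example): the order-$4$ rotation matrix $g$ with rows $(0,-1)$ and $(1,0)$ keeps its order after reduction modulo $3$, but not modulo $2$, which divides a coefficient of $g^2 - \id$; so injectivity genuinely requires $\ell$ large.

```python
def mul2(A, B, m):
    # product of 2x2 matrices with entries reduced modulo m
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % m
                       for j in range(2)) for i in range(2))

def order_mod(A, m):
    # multiplicative order of A modulo m; assumes A is invertible mod m
    I = ((1, 0), (0, 1))
    P = tuple(tuple(x % m for x in row) for row in A)
    k = 1
    while P != I:
        P = mul2(P, A, m)
        k += 1
    return k

g = ((0, -1), (1, 0))  # order 4 in GL_2(Q)
print(order_mod(g, 3), order_mod(g, 2))  # 4 2: reduction mod 2 is not injective
```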
Thus, if $G \subset \mathrm{GL}_d ( \mathbf{Q})$ is a finite subgroup, $v_p(\abs G) \leq v_p (\abs{\mathrm{GL}_d(\mathbf{F}_\ell)})$ for any $\ell$
given by Proposition \ref{PropEmbeddingFinite}. We know that
\begin{equation}
\abs{ \mathrm{GL}_d (\mathbf{F}_\ell)} = \ell^{d(d-1)/2} \prod_{i=1}^{d} \left( \ell^i -1 \right)
\label{EqCardinalGLd}
\end{equation}
for any prime $\ell$. Let us compute the $p$-adic valuation of such a product.
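The cardinality formula can be checked by brute force for small values (a verification sketch we add; note that the product runs up to $i = d$).

```python
from itertools import product

def card_gl_formula(d, l):
    # |GL_d(F_l)| = l^{d(d-1)/2} * prod_{i=1}^{d} (l^i - 1)
    n = l ** (d * (d - 1) // 2)
    for i in range(1, d + 1):
        n *= l ** i - 1
    return n

def card_gl2_bruteforce(l):
    # count invertible 2x2 matrices over F_l by checking the determinant
    return sum(1 for a, b, c, d in product(range(l), repeat=4)
               if (a * d - b * c) % l != 0)

print(card_gl_formula(2, 3), card_gl2_bruteforce(3))  # both 48
```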
\begin{lemme} \label{CalculValuationsP-Adiques}
Suppose $p\neq 2$ and let $\ell$ be a generator of $(\mathbf{Z} / p^2 \mathbf{Z})^\times$.
\begin{enumerate}
\item If $p$ divides $\ell^i -1$ then $p-1$ divides $i$;
\item If $p-1$ divides $i$ then $v_p(\ell^i-1) = 1 + v_p(i)$.
\end{enumerate}
\end{lemme}
\begin{proof}
Suppose $p$ divides $\ell^i-1$. Note that $\ell^{ip}-1 = (\ell^i -1) \sum_{j=0}^{p-1} \ell^{ij}$; since
$\ell^i \equiv 1
\mod p$, we have $\sum_{j=0}^{p-1} \ell^{ij} \equiv 0 \mod p$, and then $\ell^{ip} \equiv 1 \mod p^2$.
Since $\ell$ is of
order $p(p-1)$ in $(\mathbf{Z} / p^2 \mathbf{Z})^{\times}$, we have that $p(p-1)$ divides $ip$;
therefore, $p-1$ divides $i$, which proves the first assertion.
We prove assertion 2 by induction on $v_p(i)$. For the base case, assume $v_p(i) =0$. Then
$p$, and therefore $p(p-1)$, does not divide $i$; thus $\ell^i \not \equiv 1 \mod p^2$ because $\ell$ is of order $p(p-1)$.
Thus, $v_p(\ell^i -1) =1$. Now suppose the assertion holds when $v_p(i) = k$ for some $k \geq 0$, and let $i$ be such that $v_p(i) = k+1$.
Since $p-1$ divides $i$, write $i = (p-1) p^{k+1} m$ with $m$ not divisible by $p$.
Let $s:= \ell^{(p-1)m}$, then
\begin{align*}
\ell^{i} -1 = s^{p^{k+1}} -1 = (s^{p^k}-1) \sum_{j=0}^{p-1} s^{j p^k}.
\end{align*}
By the induction hypothesis applied to the exponent $(p-1)p^k m$, $s^{p^k}$ is of the form $s^{p^k} = 1 + u p^{k+1}$ where $u$ is an integer not divisible by $p$. Therefore,
for all $1 \leq j \leq p-1$, $s^{jp^k} = 1 + j p^{k+1} u + p^{2k+2} v_j$ where $v_j$ is some integer, so we can write
\[ \sum_{j=0}^{p-1} s^{jp^k} = p + p^{k+1} \frac{p(p-1)}{2} u + p^{2k+2}V = p \left(1 + p^{k+1} \frac{p-1}{2} u + p^{2k+1}V \right) \]
where $V = \sum_j v_j$. Since $p$ is odd, $\frac{p-1}{2}$ is an integer, and since $k+1 \geq 1$ this sum has $p$-adic valuation exactly $1$. Therefore,
\[ v_p ( \ell^i -1 ) = v_p \left( s^{p^{k+1}} -1 \right) = v_p \left( s^{p^k} -1 \right) + v_p \left( \sum_{j=0}^{p-1} s^{jp^k} \right) = (1+k) + 1 = 1 + v_p(i). \]
\end{proof}
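Both assertions of the lemma are easy to test numerically; the following sketch (our addition) checks them for $p = 5$ and $\ell = 2$, which generates $(\mathbf{Z}/25\mathbf{Z})^{\times}$.

```python
def v_p(n, p):
    # p-adic valuation of a nonzero integer n
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

p, l = 5, 2
# 2 generates (Z/25Z)^x: its order is 20 = p(p-1)
assert pow(l, 20, 25) == 1 and all(pow(l, k, 25) != 1 for k in range(1, 20))

for i in range(1, 41):
    if (pow(l, i, p) - 1) % p == 0:   # p divides l^i - 1 ...
        assert i % (p - 1) == 0       # ... only when p-1 divides i
    if i % (p - 1) == 0:              # and then the valuation is 1 + v_p(i)
        assert v_p(l ** i - 1, p) == 1 + v_p(i, p)
```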
Equation \eqref{EqCardinalGLd} and Lemma \ref{CalculValuationsP-Adiques} provide the following corollary.
\begin{cor}
Let $d$ be an integer, let $p$ be an odd prime, and let $\ell$ be a prime whose image in $(\mathbf{Z} / p^2
\mathbf{Z})^{\times}$ is a generator. Then
\[ v_p (\abs{\mathrm{GL}_d(\mathbf{F}_\ell)}) = M_\mathbf{Q}(d,p). \]
By Sylow's theorems, $\mathrm{GL}_d(\mathbf{F}_\ell)$ contains a $p$-subgroup of order $p^{M_\mathbf{Q}(d,p)}$, which shows that the bound of Theorem \ref{MinkowskiBoundLinear} is optimal for $\mathrm{GL}_d(\mathbf{F}_\ell)$.
\end{cor}
To prove Theorem \ref{MinkowskiBoundLinear}, consider a finite group $G \subset \mathrm{GL}_d (\mathbf{Q})$, then apply Dirichlet's
theorem and Proposition \ref{PropEmbeddingFinite} to embed $G$ in $\mathrm{GL}_d(\mathbf{F}_\ell)$ for some prime $\ell$ generating
$(\mathbf{Z} / p^2 \mathbf{Z})^{\times}$. The corollary gives the desired upper bound.
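Numerically, the corollary can be observed as follows. This sketch (our addition) assumes Minkowski's classical closed form $M_\mathbf{Q}(d,p) = \sum_{k \geq 0} \ent{d / ((p-1)p^k)}$, which is not restated in this section.

```python
def v_p(n, p):
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def minkowski(d, p):
    # classical closed form (assumed here): sum_{k>=0} floor(d / ((p-1) p^k))
    s, q = 0, p - 1
    while q <= d:
        s += d // q
        q *= p
    return s

def vp_card_gl(d, l, p):
    # v_p(|GL_d(F_l)|) = sum_{i=1}^{d} v_p(l^i - 1), since p != l
    return sum(v_p(l ** i - 1, p) for i in range(1, d + 1))

# l = 2 generates (Z/9Z)^x, so for p = 3 the two quantities agree:
for d in range(1, 10):
    assert vp_card_gl(d, 2, 3) == minkowski(d, 3)
```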
\begin{rmq}\label{remarkCasPairLinear}
The case $p=2$ is also treated by Minkowski and in fact the same bound applies. However, the proof is slightly
different, as it is required to embed $G$ into an orthogonal group over a finite field. Indeed, if $\ell$ is an odd prime,
then the best bound one can get is $v_2(\abs{\mathrm{GL}_d(\mathbf{F}_\ell)}) \leq M(d,2) + \lfloor d/2 \rfloor$, with equality for the right
choice of $\ell$ (see Proposition \ref{ConstanteMajoréeParBorneSchur}). To embed a finite group $H$ of
matrices over $\mathbf{Q}$ into an orthogonal group over a finite field, one just needs to look at the positive
definite bilinear form $\psi := \sum_{h \in H} {}^t \! h \, h$. For any prime $\ell$ large enough that
$\ell$ does not divide $\det \psi$, the homomorphism of reduction mod $\ell$ induces an embedding of
$H$ into an orthogonal group over $\mathbf{F}_\ell$; however, this process does not generalize well when looking at
polynomial automorphisms (see Remark \ref{remarkCasPairNeMarchPas}).
\end{rmq}
\subsection{Minkowski's bound for finite groups of polynomial automorphisms with rational coefficients}
To prove Theorem \ref{SchurBoundPolynomial}, we adapt the proof of the Minkowski bound for linear automorphisms.
Actually, to conclude it suffices to show that Proposition \ref{PropEmbeddingFinite} also holds for finite $p$-subgroups of
polynomial automorphisms.
\begin{prop}\label{EmbeddingPolynomialCase}
Let $d$ be an integer. Let $G$ be a finite $p$-subgroup of $\Aut(\mathbf{A}_\mathbf{Q}^d)$.
Then, there exists a prime $\ell$ such that
\begin{enumerate}
\item $\ell$ is a generator of $(\mathbf{Z} / p^2 \mathbf{Z})^{\times}$
\item There is an injective homomorphism $G \hookrightarrow \mathrm{GL}_d (\mathbf{F}_\ell)$
\end{enumerate}
\end{prop}
\begin{lemme}\label{PointFixeEtDifferentielleInj}
Let $d$ be an integer and $p$ a prime. Let $F$ be a finite field with $\Char (F) \neq p$. Let $G$ be a finite
subgroup of $\Aut (\mathbf{A}_F^d)$ of order $p^\alpha$. Then $G$ has a fixed point $x_0 \in \mathbf{A}^d (F) = F^d$ and the homomorphism
\[
\begin{array}{lrcl}
\Phi:& G & \longrightarrow & \mathrm{GL}_d (F)\\
& g & \longmapsto & D_{x_0} g
\end{array}
\]
is injective.
\end{lemme}
\begin{proof}
The group $G$ acts on $F^d$, which is of size $\abs F^d$. Since $\abs G = p^\alpha$ and $p$ does not divide $\abs F$,
the class equation gives the existence of at least one trivial $G$-orbit in $F^d$; hence the existence of a fixed
point $x_0 \in F^d$.
Up to a translation, we can suppose that $x_0 = 0$. Let us now show the injectivity of $\Phi$. Take $g$ in $G$ such that
$D_{0} g = \id$; then \[ g(\mathbf x) = g(x_1,\cdots, x_d) = \mathbf x + \sum_{j \geq 2} A_j (\mathbf x) \]
where $A_j$ is the homogeneous part of $g$ of degree $j$.
Suppose that $g \neq \id$, and let $j_0$ be the lowest index $j \geq 2$ such that $A_j \neq 0$. We rewrite $g$ as $g =
\id + A_{j_0} + B$ where $B = \sum_{j > j_0} A_j$ and compute the second iterate
\begin{align*}
g^2 (\mathbf x) &= g(\mathbf x) + A_{j_0} (g(\mathbf x)) + B (g(\mathbf x)) \\
&= \mathbf x + A_{j_0} (\mathbf x) + B(\mathbf x) + A_{j_0} (\mathbf x + A_{j_0} (\mathbf x) + B(\mathbf x) ) + B(g (\mathbf x)) \\
&= \mathbf x + 2 A_{j_0} (\mathbf x) + (\text{terms of higher degree}).
\end{align*}
And for every $k \geq 1$ we obtain
\[ g^k (\mathbf x) = \mathbf x + k A_{j_0} (\mathbf x) + (\text{terms of higher degree}). \]
Since $g$ is of order $p^t$ for some $t>0$, replacing $k$ by $p^t$ in this formula gives $ p^t A_{j_0} (\mathbf x) =0 $;
as $\Char F \neq p$, this forces $A_{j_0} = 0$, a contradiction.
\end{proof}
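The hypothesis $\Char F \neq p$ is essential. Over $\mathbf{F}_p$ itself, the shear $g(x,y) = (x, y + x^2)$ fixes the origin, has trivial differential there, and yet has order $p$; the following sketch (our example) verifies this for $p = 5$.

```python
from itertools import product

p = 5

def g(v):
    # the shear (x, y) -> (x, y + x^2) over F_p: D_0 g = id but g != id
    x, y = v
    return (x, (y + x * x) % p)

def iterate(f, v, k):
    for _ in range(k):
        v = f(v)
    return v

points = list(product(range(p), repeat=2))
assert any(g(v) != v for v in points)              # g is not the identity
assert all(iterate(g, v, p) == v for v in points)  # but g has order p
```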
\begin{rmq}\label{remarkChar0ResteInj}
If $F$ is of characteristic $0$ and $x_0$ is fixed by $G$, then the proof shows also that $\Phi: g \mapsto
D_{x_0} g$ is injective.
\end{rmq}
\begin{proof}[Proof of Theorem \ref{SchurBoundPolynomial} when $\k = \mathbf{Q}$]
As in the linear case, we can find an integer $N$ such that $G \subset \Aut (\mathbf{A}_{\mathbf{Z}[1 / N]}^d)$. So, for any prime $\ell > N$,
reduction modulo $\ell$ is well defined on $G$. Now, for $\ell$ large enough that $\ell$ does not
divide any coefficient of $g - \id$ for all $g \in G \subset \Aut (\mathbf{A}_{\mathbf{Z} [ 1/N]}^d)$, this homomorphism is injective, and we
can use Dirichlet's theorem to ensure that $\ell$ is a generator of $(\mathbf{Z} / p^2 \mathbf{Z})^{\times}$. $G$ is now embedded in
$\Aut (\mathbf{A}_{\mathbf{F}_\ell}^d)$ and we replace it by its image in $\Aut(\mathbf{A}^d_{\mathbf{F}_\ell})$. By Lemma \ref{PointFixeEtDifferentielleInj},
there is a point $x_0 \in \mathbf{F}_\ell^d$ fixed by $G$ and we have an injective homomorphism $\Phi: G \hookrightarrow \mathrm{GL}_d
(\mathbf{F}_\ell)$. This concludes the proof when $p \neq 2$.
\end{proof}
\subsection{Extension of Minkowski's bound to number fields} \label{SubsecSchurBoundNumberFields}
\paragraph{Strategy.--} This part is dedicated to the proof of Schur's bound for finite $p$-groups of
polynomial automorphisms over arbitrary number fields. We will then prove Theorem
\ref{SchurBoundPolynomial} using a Sylow argument. As in the previous section, we want to show the following.
\begin{thm}\label{theoremPlongementCorpsFiniBonCardinal}
Let $\k$ be a number field, $d$ an integer and $p$ be an odd prime. Let $G$ be a finite $p$-subgroup of $\Aut_\k
(\mathbf{A}^d)$, then there exists a finite field $\mathbf{F}$ with $\Char \mathbf{F} \neq p$ and an injective group homomorphism $G \hookrightarrow \mathrm{GL}_d
(\mathbf{F})$ such that $v_p (\abs{\mathrm{GL}_d(\mathbf{F})}) \leq M_\k (d,p)$.
\end{thm}
Indeed, this would prove that $v_p ( \abs G) \leq v_p (\abs{\mathrm{GL}_d (\mathbf{F})}) \leq M_\k (d,p)$. The natural idea is to adapt the
proof given for $\k =\mathbf{Q}$: replace $\mathbf{Z}$ by the ring of integers $L := \mathcal O_\k$ of $\k$; then for any maximal ideal
$\m$ of $L$ lying over a sufficiently large prime, there is an injective homomorphism $G \hookrightarrow \Aut
(\mathbf{A}_{L / \m}^d)$. By taking differentials at a fixed point over $L / \m$ we would see $G$ as a subgroup of $\mathrm{GL}_d( L / \m)$ and
the order of $\mathrm{GL}_d( L /\m)$ would give a bound $v_p (\abs{G}) \leq \sum_{i=1}^d v_p (\abs{L / \m}^i -1)$. The remaining
part is to choose $\m$ wisely so that we get the lowest bound possible. To do this, we use cyclotomic characters.
\paragraph{Cyclotomic characters.--}In this part, $\k$ is a finitely generated field over $\mathbf{Q}$. We denote
by $\mu_{n}$ the
group of $n$-th roots of unity in $\overline \k$.
Recall that $\Aut (\mu_{n}) = (\mathbf{Z} / n \mathbf{Z})^{\times}$ because every automorphism $\phi$ is of the form $\phi(\omega) =
\omega^a$ where $a \in (\mathbf{Z} / n \mathbf{Z})^\times$.
\begin{dfn}[Cyclotomic character]
Denote by $\Gamma_\k = \Gal(\overline \k / \k)$ the absolute Galois
group of $\k$. For every $n \geq 1, \Gamma_\k$ preserves the group $\mu_{n} \subset \overline \k^{\times}$ of
$n$-th roots of unity, this induces a group homomorphism
\[
\chi_n : \Gamma_\k \rightarrow \Aut (\mu_n) = (\mathbf{Z} / n \mathbf{Z})^\times \]
called the \emph{$n$-th cyclotomic character of $\k$}. In particular, if $p$ is a prime number, since the
inclusion $\mu_{p^n} \subset \mu_{p^{n+1}}$ induces a group homomorphism $\Aut
(\mu_{p^{n+1}}) = (\mathbf{Z} / p^{n+1} \mathbf{Z})^\times \rightarrow \Aut (\mu_{p^n}) = (\mathbf{Z} / p^n \mathbf{Z})^\times$, we have a compatible
family of homomorphisms
\[ \chi_{p^n} : \Gamma_\k \rightarrow \Aut (\mu_{p^n} ). \]
This family of homomorphisms induces the $p^\infty$-\emph{cyclotomic character}
\[ \chi_{p^\infty} : \Gamma_\k \rightarrow \mathbf{Z}_p^{\times} = \lim_{\longleftarrow} (\mathbf{Z} / p^n \mathbf{Z})^{\times} \]
where $\mathbf{Z}_p$ is the ring of $p$-adic integers. This homomorphism is continuous with respect to the profinite topologies
on $\Gamma_\k$ and $\mathbf{Z}_p^\times$.
\end{dfn}
We are interested in the image of $\chi_{p^\infty}$ which is a closed subgroup of $\mathbf{Z}_p^{\times}$. Define $t(\k; p)$ and
$m(\k; p)$ as in Section \ref{par:Minkowski_Schur}. The number $m(\k; p)$ is always finite if $\k$ is
finitely generated over $\mathbf{Q}$ (see \cite{SerreBoundsOrder}, $\S4.3$). If $s$ is an integer, we denote by
$C_s$ the cyclic group of order $s$.
\begin{prop}[\cite{SerreBoundsOrder}, $\S 4$]\label{ImageCaractereCyclotomique} $\phantom{-}$
\begin{enumerate}
\item If $p$ is an odd prime, one has
\[ \mathbf{Z}_p^{\times} \simeq C_{p-1} \times (1 + p \cdot \mathbf{Z}_p). \]
The group $1 + p \cdot \mathbf{Z}_p$ is a procyclic subgroup generated by $1+p$ as
a topological group and isomorphic to the additive group $\mathbf{Z}_p$. Its closed subgroups are the groups $1 + p^j \mathbf{Z}_p$ with
$j\geq 1$.
Furthermore, one has
\[ \im \chi_{p^\infty} = C_{t(\k;p)} \times \left\{ 1 + p^{m(\k;p)} \cdot \mathbf{Z}_p \right\}. \]
\item If $p=2$, then $\mathbf{Z}_2^{\times} = C_2 \times \left\{ 1 + 4 \cdot \mathbf{Z}_2 \right\}$. There are 3
possibilities for $\im \chi_{2^\infty}$:
\begin{enumerate}
\item $\im \chi_{2^\infty} = 1 + 2^{m(\k;2)} \cdot \mathbf{Z}_2$ and then $t(\k;2)=1$.
\item $\im \chi_{2^\infty} = \langle -1 + 2^{m(\k;2)-1} \rangle$ (the closure of the group generated by $-1 +
2^{m(\k;2)-1} $) and then $t(\k;2)=2$.
\item $\im \chi_{2^\infty} = C_2 \times \left\{ 1 + 2^{m(\k;2)} \mathbf{Z}_2 \right\}$ and then $t(\k;2)=2$.
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{rmq}
Those 3 cases are distinct when $m(\k;2) \neq \infty$. We will say that $\k$ is in case (a), (b), or (c) when $\im
\chi_{2^\infty}$ is of the form (a),(b) or (c) of Proposition \ref{ImageCaractereCyclotomique}.
\end{rmq}
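At finite level, the structure described in Proposition \ref{ImageCaractereCyclotomique} can be checked directly: modulo $p^n$, the element $1+p$ has multiplicative order $p^{n-1}$, and for $p=2$ the element $5 = 1+4$ has order $2^{n-2}$ modulo $2^n$. A quick sketch (our addition):

```python
def order_mod(a, m):
    # multiplicative order of a modulo m; assumes gcd(a, m) = 1
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

p, n = 5, 4
assert order_mod(1 + p, p ** n) == p ** (n - 1)  # 1+p topologically generates 1+pZ_p
assert order_mod(5, 2 ** 6) == 2 ** 4            # 1+4 topologically generates 1+4Z_2
print(order_mod(6, 625), order_mod(5, 64))       # 125 16
```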
Recall that an integral domain $L$ is \emph{normal} if every localisation at a prime ideal
of $L$ is integrally closed. Let $L$ be a normal domain that is finitely generated over $\mathbf{Z}$ such that the
fraction field of $L$ is $\k$. For
any maximal ideal $\m \subset L$, the quotient $L / \m$ is finite by the Nullstellensatz for Jacobson
rings and $N(\m) := \abs{L / \m}$ is the \emph{norm} of
$\m$. Recall that for a ring $R$, $\Spec R$ denotes the set of prime ideals of $R$ and $\Specmax R$ the set of its
maximal ideals both with the Zariski topology. The following theorem is proven in \cite[$\S 6$ Theorem
7]{SerreBoundsOrder}.
\begin{thm}\label{OuvertDenseAvecAnneauNormal}
Let $L$ be a normal domain finitely generated over $\mathbf{Z}$ such that the fraction field of $L$ is $\k$. Let $n$
be an integer and $c$ an element of $(\mathbf{Z} / n\mathbf{Z})^{\times}$. Denote by $X_c$ the set of elements $x \in \Specmax(L)$ such
that $N(x) \equiv c \mod n$. Then:
\begin{enumerate}
\item If $c \not \in \im \chi_n$, $X_c = \emptyset$.
\item If $c \in \im \chi_n$, then $X_c$ is Zariski-dense in $\Specmax (L)$. In particular, $X_c$ is infinite.
\end{enumerate}
\end{thm}
In particular, the ring of integers of a number field is normal because it is integrally
closed and this property is stable under localisation. So Theorem \ref{OuvertDenseAvecAnneauNormal} holds
for $L$ the ring of integers of a number field.
\paragraph{Valuations.--}
We define the constant
\[ M'_\k (d,p) = \inf_{u \in \im \chi_{p^\infty}} \sum_{i=1}^d v_p (u^i -1). \]
The next proposition is adapted from Proposition 4, $\S 6$ of \cite{SerreBoundsOrder} to our context.
\begin{prop}\label{ConstanteMajoréeParBorneSchur}
One has
\begin{enumerate}[label=(\alph*)]
\item If $p \neq 2$ or if $p=2$ and $t(\k;p)=1$ ($\k$ is in case (a)), then
\[ M'_\k (d,p) = \sum_{\substack{ i=1 \\ t(\k;p) | i}}^d (m(\k;p)+ v_p(i)) = M_\k (d,p). \]
\item If $p=2$, $t(\k;2)=2$ and $\k$ is in case (b), one has
\[ M'_\k (d,2) = r_1 + (m(\k;2)-1)r_0 + \sum_{i=1}^d v_2 (i) = M_\k (d,2) \]
where $r_1$ is the number of odd integers between $1$ and $d$ and $r_0$ is the number of even integers
in this range.
\item If $p=2$, $t(\k;2)=2$ and $\k$ is in case (c), one has
\[ M'_\k(d,2) = r_1 + m(\k;2)r_0 + \sum_{i=1}^d v_2 (i) = \ent{\frac{d}{2}} + M_\k (d,2) \]
with the same definition for $r_1$ and $r_0$.
\end{enumerate}
\end{prop}
\begin{proof}
Set $t= t(\k;p), m = m(\k;p)$.
We start with the case $p \neq 2$. First if $t$ divides $i$, then $v_p (u^i -1) \geq m + v_p(i)$. This is because $u$
can be written as $zv$ with $z^t =1$ and $v_p (v-1) \geq m$, so $v_p (u^i -1) = v_p (v^i-1)$. So we have an inequality
$M'_\k(d,p) \geq \sum_{\substack{ i=1 \\ t | i}}^d (m+ v_p(i))$. To obtain the reverse inequality, choose $u \in \im
\chi_{p^\infty}$ such that $u = zx$ with $z$ of order $t$ and $v_p(x -1) = m$. This also works for $p=2$ and $t=1$.
Suppose now that $p=2$ and $t=2$, and define $m'= m-1$ in case (b) and $m' = m$ in case (c). Then for every $x \in \im \chi_{2^\infty}$,
\begin{align*}
v_2(x^i -1) &\geq m' + v_2 (i) \text{ if } i \text{ is even.} \\
v_2(x^i -1)& \geq 1 \text{ if } i \text{ is odd.}
\end{align*}
This gives
\[ M'_\k(d,2) \geq \sum_{i \text{ odd}} 1 + \sum_{i \text{ even}} (m' + v_2 (i) ) = r_1 + m' r_0 + \sum_{i \text{
even}} v_2 (i).\]
To show the opposite inequality, we use the fact that $x =-1 + 2^{m'} \in \im \chi_{2^\infty}$ and we check that $\sum_{i=1}^d v_2(x^i-1) = r_1 + m'r_0 + \sum_{i=1}^d v_2 (i)$.
Now, to show the different equalities, notice that for (a):
\[ M'_\k (d,p) = m \cdot \ent{\frac{d}{t}} + \sum_{i=1}^{\ent{\frac{d}{t}}} v_p(ti). \]
Now, since $t$ divides $p-1$, one has $v_p(ti) = v_p(i)$ and the rest of the computation is similar as in the case $\k = \mathbf{Q}$.
For (b) and (c), we have $r_0 = \ent{\frac{d}{2}}$ and $r_1 = d - r_0$, so
\begin{align*}
M'_\k(d,2) &= d - \ent{\frac{d}{2}} + m' \ent{\frac{d}{2}} + \sum_{i=1}^d v_2(i)\\
&= d + (m' -1) \ent{\frac{d}{2}} + \sum_{k \geq 1} \ent{\frac{d}{2^k}} \\
&= d + m' \ent{\frac{d}{2}} + \sum_{k\geq 1} \ent{\frac{d}{2^{k+1}}}.
\end{align*}
\end{proof}
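For $\k = \mathbf{Q}$ (where $\im \chi_{p^\infty} = \mathbf{Z}_p^{\times}$, so $t = p-1$ and $m = 1$), the infimum defining $M'_\k(d,p)$ can be approximated by brute force over the units modulo $p^J$; this sketch (our addition) recovers the value predicted by case (a).

```python
def v_p(n, p):
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def M_prime_Q(d, p, J=10):
    # For k = Q, im(chi_{p^infty}) = Z_p^x, so the infimum ranges over all units.
    # Working modulo p^J is enough as long as all valuations stay below J.
    m = p ** J
    best = None
    for u in range(1, m):
        if u % p == 0:
            continue
        s = 0
        for i in range(1, d + 1):
            r = (pow(u, i, m) - 1) % m
            s += J if r == 0 else v_p(r, p)
        best = s if best is None else min(best, s)
    return best

# For p = 3, d = 6: the sum runs over even i <= 6, giving (1+0)+(1+0)+(1+1) = 4
print(M_prime_Q(6, 3))  # 4
```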
We can now state Theorem \ref{theoremPlongementCorpsFiniBonCardinal} without assuming $p$ odd.
\begin{thm}\label{theoremPlongementCorpsFiniBonCardinalPGeneral}
Let $\k$ be a number field, $d$ an integer and $p$ be prime. Let $G$ be a finite $p$-subgroup of $\Aut_\k
(\mathbf{A}^d)$, then there exists a finite field $\mathbf{F}$ with $\Char \mathbf{F} \neq p$ and an injective group homomorphism $G \hookrightarrow \mathrm{GL}_d
(\mathbf{F})$ such that $v_p (\abs{\mathrm{GL}_d(\mathbf{F})}) \leq M'_\k (d,p)$.
\end{thm}
\paragraph{Proof of Theorem \ref{theoremPlongementCorpsFiniBonCardinalPGeneral}.--}
Take $G$ a finite $p$-subgroup of $\Aut (\mathbf{A}_\k^d)$ with $p$ prime.
\smallskip
{\sl{Step 1. Reduction modulo $\mathfrak l $.-- }}
Set $L = \mathcal O_\k$. For every
element $a \in \k^\times$ the fractional ideal generated by $a$ is of the form (see \cite{Neukirch},
$\S 3$)
\[ a \cdot \mathcal O_\k = (a) = \prod_{\mathfrak l \in \Spec L} \mathfrak l^{v_\mathfrak l (a)} \]
and only finitely many prime ideals $\mathfrak l$ satisfy $v_\mathfrak l (a) \neq 0$. For such an $\mathfrak l$,
there exists a unique prime $\ell \in \mathbf{Z}_+$ such that $(\ell) \subset \mathfrak l$. We define for $g \in \Aut (\mathbf{A}_\k^d)$
\[ \ell_g := \max_{a \in \text{coeff}(g - \id)} \left\{ \text{prime } \ell \in \mathbf{Z}_+ : \exists \mathfrak l \in \Spec L, (\ell) \subset \mathfrak l, v_\mathfrak l (a) \neq 0 \right\} \]
where $\text{coeff} (g- \id)$ is the set of coefficients of the polynomial transformation $g - \id$. Set $M_1 = \max_{g
\in G} \ell_g$ ($M_1 < + \infty$ since $G$ is finite) and $M = \max(M_1,p)$, then for every prime $\ell >M$ and for
every $\m \in \Specmax(L)$ such that $(\ell) \subset \m$, we have a well-defined injective homomorphism
\[ \Psi: G \hookrightarrow
\Aut( \mathbf{A}_{\mathbf{F}}^d),
\]
where $\mathbf{F} = L / \m$. Indeed, the homomorphism of rings $\phi: L \twoheadrightarrow L / \m$ induces the homomorphism $\phi: L_\m :=
\inv{(L \setminus \m)} L \rightarrow L/ \m$. By construction, $G$ is a subgroup of $\Aut (\mathbf{A}_{L_\m}^d)$, so $\phi: G
\rightarrow \Aut (\mathbf{A}_{L / \m}^d)$ is well-defined and it is injective by our definition of $M$.
\smallskip
{\sl{Step 2. The group $\Psi(G)$.--}}
Now, $\Psi(G)$ is a $p$-subgroup of $\Aut (\mathbf{A}_{\mathbf{F}}^d)$. Since $p \not \in \m$, we get $\Char (\mathbf{F}) \neq p$. By
Proposition \ref{PointFixeEtDifferentielleInj}, there is a point $x_0$ in $\mathbf{A}^d(\mathbf{F})$
fixed by $\Psi(G)$ and by taking the differentials at $x_0$, we obtain an injective homomorphism $G \hookrightarrow \Psi(G) \hookrightarrow \mathrm{GL}_d
(\mathbf{F})$. So, we get
\begin{equation}\label{eq1}
v_p (\abs G) \leq v_p \left( N(\m)^{\frac{d(d-1)}{2}} \prod_{i=1}^d (N(\m)^i-1) \right) = \sum_{i=1}^d v_p(N(\m)^i -1).
\end{equation}
Set $X := \left\{ \m \in \Specmax (L) : \m \mid (s) \text{ for some prime } s > M \right\}$, then
(\ref{eq1}) holds for
all $\m \in X$ and we obtain $v_p (\abs G) \leq \inf_{\m \in X} \sum_{i=1}^d v_p(N(\m)^i -1).$ So, to conclude, all we
have to prove is
\begin{equation}
\inf_{\m \in X} \sum_{i=1}^d v_p(N(\m)^i -1) \leq M'_\k(d,p).
\label{eqInegalite}
\end{equation}
\smallskip
{\sl{Step 3. Proof of \eqref{eqInegalite}.--}} The set $X$ is open in $\Specmax L$: indeed, $X = \left( \bigcup_{\ell \leq
M, \ell \text{ prime}} V(\ell) \right)^c$ with $V(\ell) = \left\{ \m \in \Specmax(L) : (\ell) \subset \m \right\}$,
and $V(\ell)$ is closed.
Take $u \in \im \chi_{p^\infty}$. For $j \geq 1$, let $u_j$ be the projection
of $u$ in $(\mathbf{Z} / p^j \mathbf{Z})^{\times}$. By Theorem \ref{OuvertDenseAvecAnneauNormal} the set of maximal ideals $\m$
such that $N(\m) \equiv u_j \mod p^j$ is dense, therefore it intersects the open subset $X$, so for every $j\geq
1$, we can find $\m_j \in X$ such that $ N(\m_j) \equiv u_j \mod p^j$. Then, one has $\lim_{j \rightarrow
\infty} N( \m_j) = u$ in $\mathbf{Z}_p^{\times}$, therefore $v_p (u^i -1) = \lim_{j \rightarrow \infty} v_p( N(\m_j)^i
-1)$ so
\[ \inf_{\m \in X} \sum_{i=1}^d v_p(N(\m)^i -1) \leq \sum_{i=1}^d v_p(u^i-1); \]
and this holds for every $u \in \im \chi_{p^\infty}$. Using Proposition \ref{ConstanteMajoréeParBorneSchur}, we get
\[ \inf_{\m \in X} \sum_{i=1}^d v_p(N(\m)^i -1) \leq \inf_{u \in \im \chi_{p^\infty}} \sum_{i=1}^d v_p(u^i-1)
= M'_\k( d,p). \]
\paragraph{Proof of Theorem \ref{SchurBoundPolynomial} and comments.--}
\begin{bigtheorem}\label{BigThmSchurBoundPolynomial}
Let $\k$ be a number field, let $d$ be a natural number, and let $p$ be a prime. Let $G$ be a finite $p$-subgroup of $\Aut( \mathbf{A}^d_\k)$, then
\begin{enumerate}
\item If $p \geq 3$ or $p=2$ and $\k$ is in case (a) or (b), there exists a group embedding
\[ G \hookrightarrow \mathrm{GL}_d (\k). \]
\item If $p=2$ and $\k$ is in case (c), there exists a group embedding
\[ G \hookrightarrow \mathrm{GL}_d (\k(z_4)). \]
\end{enumerate}
\end{bigtheorem}
\begin{rmq}
We do not state a Sylow-like property saying that $G$ is conjugate to a subgroup of $\mathrm{GL}_d (\k)$; we
only state that we can find an isomorphism of abstract groups from $G$ onto a subgroup of $\mathrm{GL}_d (\k)$.
\end{rmq}
\begin{proof}
For 1, we know that $v_p(\abs G) \leq M_\k (d,p)$ and that there exists a subgroup $H \subset \mathrm{GL}_d
(\k)$ such that $\abs H = p^{M_\k (d,p)}$ by Theorem \ref{SchurBound}. Let $L = \mathcal O_\k$ be the ring of
integers of $\k$. The proof of Theorem \ref{theoremPlongementCorpsFiniBonCardinalPGeneral} shows that there
exist infinitely many maximal
ideals $\m$ of $L$ such that $v_p (\abs{\mathrm{GL}_d (\mathbf{F})}) \leq M_\k (d,p)$ where $\mathbf{F} = L / \m$. So for any such
maximal ideal $\m \subset L$ lying over a sufficiently large prime, there are embeddings $\Psi_H: H
\hookrightarrow \mathrm{GL}_d (\mathbf{F})$ and $\Psi_G : G \hookrightarrow \mathrm{GL}_d (\mathbf{F})$. Looking at the size of $H$, we
deduce that $v_p (\abs{\mathrm{GL}_d (\mathbf{F})}) = M_\k (d,p)$ and that $\Psi_H (H)$ is a $p$-Sylow subgroup of $\mathrm{GL}_d(\mathbf{F})$. By Sylow's
theorems, $\Psi_G(G)$ is conjugate to a subgroup of $\Psi_H(H)$ in $\mathrm{GL}_d(\mathbf{F})$. This implies that $G$
is isomorphic to a subgroup of $H$.
For 2, if $\k$ is in case (c) then one can check that $\k(z_4)$ is in case (a) and that $m(\k(z_4); 2) =
m(\k; 2)$, therefore $M_{\k(z_4)}(d,2) = M_\k (d,2) + \ent{\frac{d}{2}}$ and the same proof as 1 shows
the result.
\end{proof}
\begin{rmq}\label{RmqFinitelyGeneratedField}
Theorems \ref{SchurBoundPolynomial} and \ref{BigThmSchurBoundPolynomial} still hold for $\k$ finitely
generated over $\mathbf{Q}$. We just need to explain how the proof of Theorem
\ref{theoremPlongementCorpsFiniBonCardinalPGeneral} works in that case.
We need to find a normal domain $L$ finitely generated over $\mathbf{Z}$ such that $G$ is defined over $L$ and
to define the open subset $X \subset \Specmax L$ used for equation \eqref{eqInegalite}. Here
is how to proceed: since $G$ is finite, there exists a
finitely generated $\mathbf{Z}$-algebra $R$ such that the elements of $G$ are defined over $R$; we can suppose that $R$
contains $1/p$. By the Noether Normalization Lemma, and more precisely by generic freeness (see
\cite{eisenbud1995commutative}, Theorem 14.4), there exist $t_1, \dots, t_s \in R$ and an integer $N$ such that
$R$ is a finite free module over $\mathbf{Z}[1/N][t_1,\dots, t_s]$. We can then take for $L$ the integral closure of
$\mathbf{Z}[1/N][t_1, \dots, t_s]$ in $\k$; then $L$ is a normal domain over which
$G$ is defined since $R \subset L$. We also have that $L$ is finitely generated over $\mathbf{Z}$ because, by
\cite[Theorem 4.14]{eisenbud1995commutative} it is a finite module over $\mathbf{Z}[1/N] [t_1, \dots, t_s]$.
Now, let $A$ be the set of coefficients of $g - \id$ for $g \in G$. Set $X = \left\{ \m \in \Specmax L :
A \cap \m = \emptyset \right\}$. This is an open subset of $\Specmax L$ as $A$ is finite and $X = \bigcap_{a \in A}
V(a)^c$. For any $\m \in X$ we have an injective group homomorphism $G \hookrightarrow \Aut (\mathbf{A}^d_{L
/ \m})$ and Equation \eqref{eq1} holds. The proof of Equation \eqref{eqInegalite} is the same as in the
case of number fields. This proves Theorem \ref{theoremPlongementCorpsFiniBonCardinalPGeneral} for finitely generated fields over
$\mathbf{Q}$.
\end{rmq}
To prove Theorem \ref{BigThmSchurBoundPolynomial}, the key ingredient is
that there exist subgroups of $\mathrm{GL}_d(\k)$ of size $p^{M_\k(d,p)}$. Since Theorem
\ref{SchurBoundPolynomial} is stated only for number fields, we show for completeness how to construct finite
$p$-subgroups of $\mathrm{GL}_d (\k)$ of size $p^{M_\k(d,p)}$ when $\k$ is finitely generated over $\mathbf{Q}$. The proof
of Theorem \ref{BigThmSchurBoundPolynomial} for finitely generated fields over $\mathbf{Q}$ is then similar to
the case of number fields, using the Noether normalization lemma; we leave the details to the reader.
\begin{prop}
Let $\k$ be a finitely generated field over $\mathbf{Q}$ and let $p$ be a prime. There exists a finite
$p$-subgroup of $\mathrm{GL}_d(\k)$ of size $p^{M_\k(d,p)}$.
\label{PropBorneOptimale}
\end{prop}
\begin{proof} Set $t= t(\k; p), m = m(\k;p)$ and $r = \lfloor d/t \rfloor$.
\paragraph{The case $p \geq 3$.--} Let $\rho = z_{p^m} \in
\k(z_p)$. Then, the group $\mathbf{Z} / p^m \mathbf{Z}$ acts on $\k(z_p)$, with $k \in \mathbf{Z} / p^m \mathbf{Z}$ acting via multiplication by $\rho^k$.
Now take $r$ copies of $\k(z_p)$; this is a $\k$-vector space $V$ of dimension $t \cdot r \leq d$. Let $S_r$ be the
symmetric group on $r$ letters; it acts on $V$ by permuting the $r$ copies of $\k(z_p)$, and therefore the group
\[ G := S_r \ltimes (\mathbf{Z} / p^m \mathbf{Z})^r \]
acts faithfully by linear automorphisms on $V$ and has the desired size. Indeed, $v_p (\vert G \vert) =
m \cdot \left\lfloor \frac{d}{t} \right\rfloor + v_p \left( \left\lfloor \frac{d}{t} \right\rfloor ! \right)$.
\paragraph{The case $p=2$ and $t=1$.--} In that case $\k = \k (z_4)$, so $M_\k(d,2) = m \cdot \left\lfloor
\frac{d}{t} \right\rfloor + v_2 \left( \left\lfloor \frac{d}{t} \right\rfloor ! \right)$. The proof above therefore works
as well, with $\rho = z_{2^m}$ acting on $\k(z_4) = \k$.
\paragraph{The case $p=2$ and $t=2$.--} The construction above
yields that $\mathbf{Z} / 2^m \mathbf{Z}$ acts linearly on $\k(z_4)$. We twist this action by the Galois
automorphism $\sigma$ that sends $z_4$ to $-z_4$; $\sigma$ is an involution that sends $\rho = z_{2^m}$ to another
primitive $2^m$-th root of unity. So we get that the
group $H:= \mathbf{Z} / 2 \mathbf{Z} \ltimes \mathbf{Z} / 2^m \mathbf{Z}$ acts faithfully on $\k(z_4)$. Now set $ r = \lfloor d/2
\rfloor$; then $G := S_r \ltimes H^r$ acts faithfully and linearly on the $\k$-vector space
$V$ consisting of $r$ copies of $\k(z_4)$. The vector space $V$ has dimension $2 \cdot \lfloor d/2
\rfloor \leq d$. Now, we have
\[ v_2 (\vert G \vert) = (m+1) \cdot \lfloor d/2 \rfloor + v_2 (\lfloor d/2 \rfloor !). \]
If $d$ is even this is equal to $M_\k (d,2)$ and we are done. If $d$ is odd then $v_2 (\abs G) = M_\k (d,2) -1$, but
then $V$ is of dimension $d-1$, so the group $G \times \{ \pm 1 \}$ acts faithfully on $V \oplus \k$,
which is of dimension $d$, and this group has the desired size.
\end{proof}
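To make the construction concrete, here is a small worked example (with the notation $t = t(\k;p)$, $m = m(\k;p)$, $r = \lfloor d/t \rfloor$ as above). Take $\k = \mathbf{Q}$, $p = 3$ and $d = 4$: then $t = [\mathbf{Q}(z_3) : \mathbf{Q}] = 2$, $m = 1$ and $r = 2$, so the group
\[ G = S_2 \ltimes (\mathbf{Z} / 3 \mathbf{Z})^2 \]
acts faithfully and linearly on $V = \mathbf{Q}(z_3)^2$, a $\mathbf{Q}$-vector space of dimension $4$, with $v_3 (\vert G \vert) = 1 \cdot 2 + v_3(2!) = 2$; a Sylow $3$-subgroup of $G$ is then a $3$-subgroup of $\mathrm{GL}_4(\mathbf{Q})$ of size $3^{M_{\mathbf{Q}}(4,3)} = 9$.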
We can therefore state:
\begin{bigtheorem}\label{BigThMSchurBoundPolynomialGeneralCase}
Let $\k$ be a finitely generated field over $\mathbf{Q}$, let $d$ be a natural number, and let $p$ be a
prime. Let $G$ be a finite $p$-subgroup of $\Aut( \mathbf{A}^d_\k)$. Then:
\begin{enumerate}
\item If $p \geq 3$, or if $p=2$ and $\k$ is in case (a) or (b), there exists a group embedding
$ G \hookrightarrow \mathrm{GL}_d (\k)$ and $ v_p (\vert G \vert) \leq M_\k (d,p)$.
\item If $p=2$ and $\k$ is in case (c), there exists a group embedding
$ G \hookrightarrow \mathrm{GL}_d (\k(z_4))$ and $v_2(\vert G \vert) \leq M_\k(d, 2) + \lfloor \frac{d}{2} \rfloor $.
\end{enumerate}
\end{bigtheorem}
\begin{rmq}\label{remarkCasPairNeMarchPas}
We get the optimal bounds except when $p=2$ and $\k$ is in case (c) (this includes $\k = \mathbf{Q}$). For that case,
following Remark \ref{remarkCasPairLinear}, to get the optimal bound one would need a result of the following type:
\emph{
Let $\k$ be a number field in case (c) and $G$ a finite subgroup of $\Aut(\mathbf{A}_\k^d)$ of order $2^\alpha$, then for $\m$ in the complement of
a finite set of $\Specmax \mathcal O_\k$ the group $G$ embeds into an orthogonal group over $\mathcal O_\k / \m$.}
We know that for any maximal ideal $\m$ lying over a large
enough prime, there exists an embedding $G \hookrightarrow \mathrm{GL}_d(\mathbf{F})$ and a fixed point $\bar x \in \mathbf{F}^d$ of
$G$, where $\mathbf{F} = \mathcal O_\k / \m$. The problem is to find a symmetric matrix $A$ such that
\[
A_G := \sum_{g \in G} {}^t \! D_{\bar x} g \cdot A \cdot D_{\bar x} g
\]
is non-degenerate. Such an $A$ does not exist for every subgroup of $\mathrm{GL}_d (\mathbf{F})$, precisely because $v_2
(\abs{\mathrm{GL}_d(\mathbf{F})})$ is larger than the 2-adic valuation of the order of any orthogonal group over $\mathbf{F}$. So we have to
use that $G$ comes from a group over $\k$ and choose $\m$ wisely.
Here is one way to attack this problem. Pick a fixed point $\overline x$ of $G$ with coordinates in $\overline \mathbf{Q}$;
such a point exists, for the following reason. Let $(P_i)$ be the system of polynomial equations stating that $G$ has a fixed
point. If this system had no solution over $\overline \mathbf{Q}$, then by Hilbert's Nullstellensatz there would be a relation of
the form $1 = \sum Q_i P_i$ for some polynomials $Q_i$. Now take a number field $\k'$ over which this relation is defined.
By the previous paragraph we can reduce modulo a large enough maximal ideal $\m$ of $\mathcal O_{\k'}$ (i.e.
lying over a large enough prime), and this would yield an
injective group homomorphism $G \hookrightarrow \Aut (\mathbf{A}^d_{\mathbf{F}})$ where $\mathbf{F}$ is a finite field with $\Char \mathbf{F} \neq p$. The relation
$1 = \sum Q_i P_i $ would still hold in $\mathbf{F}$, but this is absurd since we know that $G$ admits a fixed point over $\mathbf{F}$.
Let $\k '$ be the number field generated by the coordinates of $\overline x$
and $\k$. We would like to find $A$ such that $A_G$ is non-degenerate. If $\k ' \subset \mathbf{R}$ we can use arguments of
positive definiteness to do so, but otherwise a first difficulty occurs. Moreover, even if such an $A$ could be found,
the arithmetic of $\k '$ leads to another difficulty: for any maximal ideal $\m ' \subset \mathcal O_{\k '}$ lying over a
large enough maximal ideal $\m \subset \mathcal O_\k$, the image $x'$ of $\overline x$ in $\mathbf{F} ' = \mathcal O_{\k'} / \m'$ is a fixed
point of $G$, and the reduction modulo $\m'$ of $A_G$ is an invertible symmetric matrix over $\mathbf{F}'$. But if the
degree $[ \mathbf{F}' : \mathbf{F}]$ is even, then the 2-adic valuation of the order of any orthogonal group over $\mathbf{F} '$ will be too large to yield
the optimal bound.
\end{rmq}
\section{$p$-adic analysis}\label{SecPAdicAnalysis}
To prove Theorem \ref{BoundNilpotentGroups}, we will show that any
finitely generated nilpotent group acting on a complex quasiprojective variety of dimension~$d$~can be embedded in a
finite-dimensional~$p$-adic Lie group acting analytically on a~$p$-adic manifold of dimension~$d$. The
theorem will follow from a version of Theorem 1.1 of \cite{epstein1979transformation} in a~$p$-adic
context. In this section, we introduce all the tools from~$p$-adic analysis and~$p$-adic Lie groups needed
for the proof.
\subsection{Tate-Analytic Diffeomorphisms}\label{SecAnalyticDiffeo}
\subsubsection{Definitions and topology}
Let~$p$~be a prime. We denote by~$\mathbf{Z}_p$~the completion of~$\mathbf{Z}$~with respect to the~$p$-adic norm, normalized so
that~$\abs p = 1/p$. Denote by~$\mathbf{Q}_p$~the completion of~$\mathbf{Q}$~with respect to this norm. Then~$\mathbf{Q}_p =
\Frac(\mathbf{Z}_p)$~and~$\mathbf{Z}_p$~is the set of elements of~$\mathbf{Q}_p$~of absolute value~$\leq 1$. We extend this norm
to~$\mathbf{Q}_p^d$~by taking the maximum of the absolute values of the coordinates. We will use explicitly the ring~$\mathbf{Z}_p$~and
the field~$\mathbf{Q}_p$, but what follows can be done with any complete valued ring or field of characteristic~$0$. The right
setup would be to consider~$\mathbf{C}_p$, the completion of the algebraic closure of~$\mathbf{Q}_p$, and~$\D_p$, the unit ball of
$\mathbf{C}_p$. For a reference, see \cite{cantat2014algebraic}.
We denote by~$B(x,r) = \left\{ y \in \mathbf{Q}_p^d : \norm{x-y} \leq r \right\}$~the closed ball of radius~$r$~and center
$x$. It is both open and closed. Such sets will be called \emph{clopen}.
\paragraph{Tate analytic maps.--}Classically, a function~$\mathbf{Z}_p^d
\rightarrow \mathbf{Q}_p$~is analytic if it can be written locally as a converging power series; we work instead with
\emph{Tate-analytic} functions, which are power series converging with radius~$\geq 1$~over~$\mathbf{Z}_p^d$.
Take~$\mathbf{Z}_p^d$~with its standard coordinates~$\mathbf x = x_1, \cdots, x_d$.
On~$\mathbf{Q}_p[x_1,\cdots,x_d] =: \mathbf{Q}_p [\mathbf x]$~the Gauss norm is defined by
\[ \forall g \in \mathbf{Q}_p[\mathbf x], \quad g = \sum_{I \in \mathbf{Z}_+^d} a_I
\mathbf x^I, \quad \norm g := \max_{I} \abs{a_I} \]
where $I = (I_1, \cdots, I_d)$ and $\mathbf x^I := x_1^{I_1}\cdots x_d^{I_d}$;
we denote by~$\mathbf{Q}_p \langle x_1,\cdots, x_d \rangle =: \mathbf{Q}_p \langle \mathbf x
\rangle$~the completion of $\mathbf{Q}_p [x_1,\cdots,x_d]$ with respect to the Gauss norm. The space~$\mathbf{Q}_p \langle \mathbf x \rangle$~is the set of
formal power series with coefficients in~$\mathbf{Q}_p$~such that~$a_I \rightarrow 0$~when~$I \rightarrow \infty$~(i.e. when
$\max(I) \rightarrow \infty$). It is also the set
of formal power series with coefficients in~$\mathbf{Q}_p$~converging over~$\mathbf{Z}_p^d$. This shows that~$\mathbf{Q}_p \langle \mathbf x \rangle$
equipped with the Gauss norm is an infinite-dimensional Banach space over~$\mathbf{Q}_p$. For all polynomials~$f,g \in
\mathbf{Q}_p [\mathbf x]$, we have~$\norm{f \cdot g} \leq \norm f \cdot \norm g$, and this is also true in~$\mathbf{Q}_p \langle \mathbf x \rangle$;
therefore~$\mathbf{Q}_p \langle \mathbf x \rangle$~is a Banach algebra over~$\mathbf{Q}_p$: it is the \emph{Tate algebra} over~$\mathbf{Q}_p$~in~$d$~variables
(see \cite{robert2013course}). We also define~$\mathbf{Z}_p \langle
\mathbf x \rangle$, the completion of~$\mathbf{Z}_p [\mathbf x]$~for the Gauss norm; it is in fact the set of elements of~$\mathbf{Q}_p
\langle \mathbf x \rangle$~of norm~$\leq 1$.
\begin{rmq}\label{remarkCoeffEntierAMultiplicationPres}
For each~$f \in \mathbf{Q}_p \langle \mathbf x \rangle$~there exists an
element~$s \in \mathbf{Z}_p$~such that~$s \cdot f \in \mathbf{Z}_p \langle \mathbf x \rangle$, and if~$g \in \mathbf{Q}_p \langle \mathbf x \rangle$~is such
that~$g(0) \in \mathbf{Z}_p$, then there exists an integer~$N>0$~such that~$g (p^N \mathbf x) \in \mathbf{Z}_p \langle \mathbf x \rangle$. Moreover,
if~$g \in \mathbf{Q}_p [ [ \mathbf x ] ]$~is a formal power series with coefficients in~$\mathbf{Q}_p$~with a strictly positive
radius of convergence, then there exists an integer~$N$~such that~$g (p^N \mathbf x)$~belongs to~$\mathbf{Q}_p \langle \mathbf x \rangle$.
\end{rmq}
\begin{rmq}\label{remarkPourquoiCoeffEntiers}
There exist Tate-analytic maps~$f$~with non-integer coefficients such that~$f(\mathbf{Z}_p^d) \subset \mathbf{Z}_p$. For example, take
\[ f(x) = \frac{x^p - x}{p}. \]
Since~$x^p \equiv x \mod p$~for all~$x \in \mathbf{Z}_p$,~$f$~induces a map~$f: \mathbf{Z}_p \rightarrow \mathbf{Z}_p$. However,
every element~$f \in \mathbf{Q}_p \langle \mathbf x \rangle^d$~induces a map~$f: \D_p^d \rightarrow \mathbf{C}_p^d$~and we have~$f(\D_p^d) \subset
\D_p^d \Leftrightarrow f \in \mathbf{Z}_p \langle \mathbf x \rangle^d$. This has to do with the residue field of~$\mathbf{Z}_p$~being finite but
not that of~$\D_p$~(see \cite{robert2013course}, Proposition of page 240).
\end{rmq}
For any~$m \geq 0$, elements of~$\mathbf{Q}_p \langle \mathbf x \rangle^m$~are called \emph{Tate-analytic functions}.
If~$g \in \mathbf{Q}_p \langle \mathbf x \rangle^d$, then
\begin{equation}
\forall x, y \in \mathbf{Z}_p^d, \norm{g(x) - g(y)} \leq \norm g \norm{x-y}.
\label{EqLipschitz}
\end{equation}
In particular,~$g$~is~$\norm g$-Lipschitz.
\begin{prop}[Strassman's Theorem, see \cite{robert2013course}, chapter 6, section 2.1] \label{PropIsolatedZeroPrinciple}
Let~$f \in \mathbf{Q}_p \langle t \rangle$~be a Tate-analytic function in one variable. If~$f$~is
not the zero function, then~$f$~has only finitely many zeros in~$\mathbf{Z}_p$.
\end{prop}
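For instance (an illustration not used later), for~$p \geq 3$~the series
\[ f(t) = \exp(pt) - 1 = \sum_{k \geq 1} \frac{p^k}{k!} t^k \]
belongs to~$\mathbf{Q}_p \langle t \rangle$, since~$v_p (p^k / k!) \geq k - \frac{k}{p-1} \rightarrow \infty$; it is not the zero function, and its only zero in~$\mathbf{Z}_p$~is~$t = 0$, because~$\exp$~is injective on~$p \mathbf{Z}_p$.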
\begin{cor}\label{AnalyticContinuation}
Let~$f \in \mathbf{Q}_p \langle \mathbf x \rangle$. If there exists a non-empty open subset
~$\mathcal U \subset \mathbf{Z}_p^d$~such that~$f_{| \mathcal U} \equiv 0$, then~$f$~is the zero function.
\end{cor}
\begin{rmq}
This is not true for analytic functions over~$\mathbf{Z}_p^d$. For example, define~$g$~by~$g(y) = 1$~if~$\norm y \leq
\abs p$~and~$g(y) = 0$~otherwise. Then~$g$~is analytic at every point of~$\mathbf{Z}_p^d$~because it is locally constant; it
vanishes on the open subset~$\left\{ x \in \mathbf{Z}_p^d : \norm x = 1 \right\}$, but~$g$~is not the zero function.
\end{rmq}
\begin{proof}[Proof of Corollary \ref{AnalyticContinuation}]
Take~$y \in \mathcal U$~and~$x \in \mathbf{Z}_p^d$. Let~$\varphi$~be the
function~$\varphi: t \in \mathbf{Z}_p \mapsto f(tx +
(1-t)y)$. Then~$\varphi$~belongs to~$\mathbf{Q}_p \langle t \rangle$~and it vanishes for all sufficiently small~$t$. By
Proposition \ref{PropIsolatedZeroPrinciple},~$\varphi$~is the zero function; therefore~$f(x) = 0$.
\end{proof}
Let~$f,g \in \mathbf{Q}_p \langle \mathbf x \rangle$~and~$c>0$, we write~$f \equiv g \mod p^c$~if $\norm{f -g} \leq \abs p^c$ and we
extend such notation componentwise for~$\mathbf{Q}_p \langle \mathbf x \rangle^m$~for every~$m \geq 1$.
\begin{ex} \label{ExampleCongruence}
If~$c=1$~and~$f,g \in \mathbf{Z}_p \langle \mathbf x \rangle$, then~$f = \sum_I a_I \mathbf x^I
\equiv \id(\mathbf x) \mod p$~means that~$\overline f := \sum_{I} \overline{a_I} \mathbf x^I
= \id (\mathbf x)$~where~$\overline{a_I} = a_I \mod p$~is the reduction of~$a_I$~mod~$p \mathbf{Z}_p$.
\end{ex}
\paragraph{Tate analytic diffeomorphisms.--}
The composition determines a natural map
\[
\begin{array}{cclll}
\mathbf{Z}_p \langle X_1,\cdots,X_n \rangle^m & \times & \mathbf{Z}_p \langle Y_1,\cdots, Y_s \rangle^n & \longrightarrow &
\mathbf{Z}_p \langle Y_1,\cdots, Y_s \rangle^m \\
(g_1,\cdots,g_m) & &(h_1,\cdots,h_n) & \longmapsto & (g_1(h_1,\cdots,h_n),\cdots, g_m(h_1,\cdots,h_n))
\end{array}
\]
If the three integers~$n,m,s$~are equal to the same integer~$d$,~$(\mathbf{Z}_p \langle \mathbf x \rangle^d, \circ)$~becomes a semigroup. The
invertible elements of this semigroup are called \emph{Tate-analytic diffeomorphisms} and form a group denoted by
$\Diff^{an} (\mathbf{Z}_p^d)$. Using Equation \eqref{EqLipschitz}, we have that~$\Diff^{an}(\mathbf{Z}_p^d)$~acts by isometries on
$\mathbf{Z}_p^d$.
\begin{rmq}
Following Remark \ref{remarkPourquoiCoeffEntiers}, we see that~$\Diff^{an} (\mathbf{Z}_p^d)$~consists exactly of the elements
~$f \in \mathbf{Q}_p \langle \mathbf x \rangle^d$~that induce a Tate-analytic diffeomorphism~$f: \D_p^d \rightarrow \D_p^d$.
\end{rmq}
The next proposition shows an easy way to construct Tate-analytic
diffeomorphisms of small polydisks.
\begin{prop}[Local inversion theorem, see \cite{SerreLieGroupsLieAlgebras}]\label{ExistenceInverse}
Let~$\Phi \in \mathbf{Z}_p [[X_1,\cdots,X_d]]^d$~be a
power series with a strictly positive radius of convergence. Suppose that~$\Phi(0) = 0$~and~$\det (D_0 \Phi) \neq
0$. Then there exists a unique~$\Psi \in \mathbf{Q}_p [[X_1,\cdots,X_d]]^d$, with a strictly positive radius of convergence, such
that~$\Psi(0) = 0$~and \[ \Phi \circ \Psi (\mathbf x) = \Psi \circ \Phi(\mathbf x) = \mathbf x. \]
Furthermore,~$\norm{\Psi_n} \leq \max (1, \norm{ \inv {D_0 \Phi}}^n)$, where~$\Psi_n \in \mathbf{Q}_p
[X_1,\cdots,X_d]^d$~is the homogeneous part of degree~$n$~of~$\Psi$~and~$\norm{\cdot}$~is the Gauss norm over
polynomials. Therefore, if~$\Phi$~belongs to~$\mathbf{Z}_p \langle \mathbf x \rangle^d$, then for any~$k$~such that~$\abs p^k <
\norm{\inv{(D_0 \Phi)}}^{-1}$, we have that~$\frac{1}{p^k} \Phi (p^k \mathbf x)$~and~$\frac{1}{p^k} \Psi (p^k
\mathbf x)$~are Tate-analytic diffeomorphisms and are inverses of each other.
\end{prop}
\paragraph{Group topology.--}
The following proposition shows that~$\Diff^{an} (\mathbf{Z}_p^d)$~is a topological group with respect to the topology
induced by the Gauss norm.
\begin{prop}\label{truc1}
Let~$f,g,h \in \mathbf{Z}_p \langle \mathbf x \rangle^d$, then
\begin{enumerate}
\item~$\norm{g \circ f} \leq \norm g$.
\item If~$f$~is an element of~$\Diff^{an} (\mathbf{Z}_p^d)$~then~$\norm{ g \circ f} = \norm g$.
\item~$\norm{ g \circ (\id +h) - g } \leq \norm h$.
\item~$\norm{\inv f - \id} = \norm{f - \id}$~if~$f$~is a Tate-analytic diffeomorphism.
\end{enumerate}
\end{prop}
\begin{lemme}\label{lemma:PuissanceCongruence}
Let~$f$~be an element of~$\Diff^{an}(\mathbf{Z}_p^d)$. If~$f \equiv \id \mod p$, then~$f^{p^c} \equiv \id \mod p^c$.
\end{lemme}
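As a simple check of Lemma \ref{lemma:PuissanceCongruence}, consider the translation~$f(\mathbf x) = \mathbf x + p v$~with~$v \in \mathbf{Z}_p^d$~and a positive integer~$c$. Then~$f \equiv \id \mod p$~and
\[ f^{p^c} (\mathbf x) = \mathbf x + p^c \cdot p v = \mathbf x + p^{c+1} v \equiv \id(\mathbf x) \mod p^c, \]
as predicted by the lemma.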
\begin{cor}\label{corollary:SubgroupsBasisOfNeighbourhoods}
Let~$c>0$~be a real number. The
subset of~$\Diff^{an} (\mathbf{Z}_p^d)$~consisting of all
elements~$f \in \Diff^{an} (\mathbf{Z}_p^d)$~such that~$f \equiv \id \mod p^c$, denoted by~$\Diff^{an}_c(\mathbf{Z}_p^d)$, is a normal subgroup of
~$\Diff^{an}(\mathbf{Z}_p^d)$.
\end{cor}
Proposition \ref{truc1}, Lemma \ref{lemma:PuissanceCongruence} and Corollary \ref{corollary:SubgroupsBasisOfNeighbourhoods} are
proven in \cite{cantat2014algebraic}, section 2.1.
\subsubsection{Analytic flow and Bell-Poonen theorem}
\paragraph{Flows and vector fields.--}
As in real or complex geometry, we define vector fields and flows. Let~$d$~be an integer:
A \emph{Tate-analytic vector field}~$\mathbf X$~over~$\mathbf{Z}_p^d$~is a vector field of the form
\[ \mathbf X(\mathbf x) = \sum_{i=1}^d u_i (\mathbf x) \partial_i \]
where each~$u_i$~belongs to~$\mathbf{Q}_p \langle \mathbf x \rangle$. The Lie bracket of two vector
fields~$\mathbf X$~and~$\mathbf Y= \sum_{i=1}^d v_i \partial_i$~is the vector field defined by
\[ [ \mathbf X, \mathbf Y] =
\sum_{j=1}^d w_j (\mathbf x) \partial_j \text{ with } w_j = \sum_{i=1}^d \left(u_i \frac{\partial v_j}{\partial x_i} - v_i
\frac{\partial u_j}{\partial x_i}\right).
\]
The~$\mathbf{Q}_p$-Lie algebra of Tate-analytic vector fields over~$\mathbf{Z}_p^d$~is denoted by~$\Theta(\mathbf{Z}_p^d)$; it is a strict
subalgebra of the Lie algebra of analytic vector fields over~$\mathbf{Z}_p^d$. The Gauss norm of a Tate-analytic vector field
~$\mathbf X = \sum u_i (\mathbf x) \partial_i$~is defined as~$\norm \mathbf X = \max_i \norm {u_i}$~and makes~$\Theta(\mathbf{Z}_p^d)$~a complete Lie
algebra over~$\mathbf{Q}_p$, isomorphic as a Banach space to~$\mathbf{Q}_p \langle \mathbf x \rangle^d$.
A \emph{Tate-analytic flow}~$\Phi$~over~$\mathbf{Z}_p^d$~is an element of~$\mathbf{Z}_p \langle X_1, \cdots,
X_d, t \rangle^d = \mathbf{Z}_p \langle \mathbf x,t \rangle^d$~which satisfies the following properties
\begin{enumerate}[label=(\roman*)]
\item~$\forall \mathbf x \in \mathbf{Z}_p^d, \ \forall s,t \in \mathbf{Z}_p, \quad \Phi(\mathbf x, s+t) = \Phi( \Phi(\mathbf x,s), t).$
\item~$\forall \mathbf x \in \mathbf{Z}_p^d,\quad \Phi(\mathbf x,0) = \id(\mathbf x)$.
\end{enumerate}
Set~$\Phi_t := \Phi( \cdot, t) \in \mathbf{Z}_p \langle \mathbf x \rangle^d$. Then~$\Phi_0 = \id$~and~$\Phi_t \in \Diff^{an}(\mathbf{Z}_p^d)$
since~$\inv{\Phi_t} = \Phi_{-t}$. Moreover,~$t \in \mathbf{Z}_p \mapsto \Phi_t \in \Diff^{an}(\mathbf{Z}_p^d)$~is a continuous
homomorphism of topological groups with respect to the Gauss norm. The main point here is that flows are
parametrized by the compact group~$(\mathbf{Z}_p, +)$.
\begin{ex}
If~$\Phi$~is a Tate-analytic flow, then we can define its associated Tate-analytic vector
field~$\mathbf X_\Phi := \frac{\partial \Phi_t}{\partial t}_{|t=0}$. In particular,~$\mathbf X_\Phi$~is~$\Phi_t$-invariant, for all~$t
\in \mathbf{Z}_p$.
\end{ex}
\paragraph{From vector fields to Tate-analytic flows.--}
Since a Tate-analytic vector field~$\mathbf X$~is analytic, it is a general fact that it admits local analytic flows over
$\mathbf{Z}_p^d$~(see \cite{bourbaki2007varietes} for example). The next proposition shows that if the norm
of~$\mathbf X$~is sufficiently small, then it
admits a global Tate-analytic flow.
\begin{prop}\label{PropExistenceGlobalTateAnalyticFlow}
If~$\mathbf X$~is a Tate-analytic vector field over~$\mathbf{Z}_p^d$, then for any sufficiently small~$\lambda \in \mathbf{Z}_p$, there
exists a unique Tate-analytic flow~$\Phi^\lambda \in \mathbf{Z}_p \langle \mathbf x, t \rangle^d$~such that
\[ \frac{\partial \Phi_t^\lambda (\mathbf x)}{\partial t} = \lambda \mathbf X(\Phi_t^\lambda(\mathbf x)). \]
In particular, let $c$ be a real number with $c > \frac{1}{p-1}$; then every Tate-analytic vector
field~$\mathbf X$~such that~$\norm \mathbf X \leq \abs p^c$~admits a global Tate-analytic flow.
\end{prop}
\begin{proof}
The strategy is to solve this differential equation in the space of power series~$\mathbf{Q}_p \left[ \left[ \mathbf x, t \right]
\right]^d$~and then to establish some properties of the radius of convergence of the solution. We first replace~$\mathbf X$~by
~$\mu \mathbf X$~for some~$\mu \in \mathbf{Z}_p$, so that we may assume~$\norm \mathbf X \leq 1$. Write~$\mathbf X (\mathbf x) = \sum_i u_i
(\mathbf x) \partial_i$~with~$u_i \in \mathbf{Z}_p \langle \mathbf x \rangle$. We look at the differential equations
\begin{equation}
\frac{\partial}{\partial t } f_i (\mathbf x, t) = u_i(f(\mathbf x,t))
\label{EqDiff}
\end{equation}
with~$f_i \in \mathbf{Q}_p \left[ \left[ \mathbf x, t \right] \right]$~and~$f = (f_1, \cdots, f_d)$~such that~$f(\mathbf x, 0) = \mathbf x$.
Write
\[
f_i (\mathbf x ,t) = \sum_{k \geq 0 } a_k^{(i)} (\mathbf x) t^k , \quad a_k^{(i)} \in \mathbf{Q}_p \left[ \left[ \mathbf x \right] \right]
\]
then, the unique solution of this equation is formally given by the formulas~$a_k^{(i)} (\mathbf x) = \frac{1}{k !}
\frac{\partial^k f_i}{\partial t^k} (\mathbf x, 0)$. We show that for every integer~$k \geq 0$,~$\frac{\partial^k f_i}{\partial t^k}
(\mathbf x, 0)$~belongs to~$\mathbf{Z}_p
\langle \mathbf x \rangle$~by induction on~$k$. We get~$a_0^{(i)} = x_i$~since~$f(\mathbf x, 0) = \id(\mathbf x)$~and~$a_1^{(i)} (\mathbf x) = u_i(\mathbf x)$~by
Equation \eqref{EqDiff}. Take~$k \geq 2$~and suppose the result to be true for all~$l < k$. By
differentiating both sides of Equation \eqref{EqDiff}~$k-1$~times with respect to~$t$~and taking~$t=0$, we see that~$\frac{\partial^k
f_i}{\partial t^k} (\mathbf x, 0)$~is obtained by sum and compositions of differentials of orders~$\leq k - 1$~of the
Tate-analytic function~$u_i \in \mathbf{Z}_p \langle \mathbf x \rangle$~and the Tate-analytic functions~$\frac{\partial^l}{\partial
t^l} f_i (\mathbf x, 0) \in \mathbf{Z}_p \langle \mathbf x \rangle$~with~$l < k$.
So~$\frac{\partial^k f_i}{\partial t^k} (\mathbf x, 0)$~belongs to~$\mathbf{Z}_p \langle \mathbf x \rangle$~by induction.
The solution~$f$~is then of the form
\[ f(\mathbf x ,t) = \id(\mathbf x) + \sum_{k \geq 1} \frac{\partial^k f}{\partial t^k} (\mathbf x, 0) \frac{t^k}{k !}. \]
Now take $\lambda \in \mathbf{Z}_p$ such that $\abs \lambda \leq \abs p ^c$. Since $v_p(k!) \leq \frac{k}{p-1}$, we have for all $k \geq 0$ that
$\frac{\lambda^k}{k!} \in \mathbf{Z}_p$ and $\lambda^k / k! \rightarrow 0$ in $\mathbf{Z}_p$ when $k \rightarrow \infty$. Then
$\Phi^\lambda_t := f( \cdot, \lambda t)$~is a Tate-analytic flow such that~$\frac{\partial
\Phi^\lambda_t}{\partial t} (\mathbf x) = \lambda \mathbf X(\Phi_t^\lambda (\mathbf x))$.
For the final statement, take a Tate-analytic vector field~$\mathbf X$~such that~$\norm \mathbf X \leq \abs p^c$
and let $s \in \mathbf{Z}_p$ be such that $\abs s = \norm \mathbf X$; then $\mathbf Y := \frac{1}{s} \mathbf X$ has norm $\leq 1$.
The proof above, applied to~$\mathbf Y$~with~$\lambda = s$, shows that there exists a unique Tate-analytic flow~$\Phi$~such that~$\frac{\partial \Phi_t}{\partial t}_{|t=0} = s \mathbf Y = \mathbf X$.
\end{proof}
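As a simple illustration of Proposition \ref{PropExistenceGlobalTateAnalyticFlow} (not used in what follows), take~$d = 1$~and the linear vector field~$\mathbf X(x) = a x \, \partial_1$~with~$a \in \mathbf{Z}_p$~and~$\abs a \leq \abs p^c$~for some~$c > \frac{1}{p-1}$. Its global Tate-analytic flow is
\[ \Phi_t (x) = e^{at} x = \sum_{k \geq 0} \frac{a^k t^k}{k!} \, x \in \mathbf{Z}_p \langle x, t \rangle, \]
since~$v_p \left( \frac{a^k}{k!} \right) \geq k \left( c - \frac{1}{p-1} \right) \geq 0$~tends to~$\infty$; one checks directly that~$\frac{\partial \Phi_t}{\partial t} (x) = a \, \Phi_t(x) = \mathbf X(\Phi_t(x))$.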
\begin{thm}[local linearisation of vector fields]\label{pAdicFrob}
Let~$\mathbf X_1,\cdots,\mathbf X_k$~be Tate-analytic vector fields over ~$\mathbf{Z}_p^d$~such that~$[\mathbf X_i, \mathbf X_j] = 0$~for all~$1 \leq i,j \leq
k$. Suppose that there exists a point~$m \in \mathbf{Z}_p^d$~such that the vectors~$\mathbf X_i (m)$~are linearly independent. Then,
there exists a clopen subset~$\mathcal V \subset \mathbf{Z}_p^d$~containing~$m$~and an analytic
diffeomorphism~$\varphi$~from~$\mathbf{Z}_p^d$~onto~$\mathcal V$~such that~$\varphi^* (\mathbf X_{i|\mathcal V}) = \partial_i$~and such that
~$\varphi^*$~yields an injective Lie algebra homomorphism~$\Theta(\mathbf{Z}_p^d)_{|\mathcal V}
\hookrightarrow \Theta(\mathcal V)$.
\end{thm}
\begin{rmq}
This theorem is well known in~$p$-adic differential geometry with analytic regularity (see
\cite{bourbaki2007varietes});
what is important here is that when changing coordinates we keep the Tate-analytic regularity of vector fields.
\end{rmq}
\begin{proof}
By translation, we can suppose that~$m=0$. We pick a subspace $Y_0 \subset T_0 \mathbf{Z}_p^d$ such that we have the decomposition $T_0 \mathbf{Z}_p^d = \Vect
( \mathbf X_1(0),\cdots, \mathbf X_k (0)) \oplus Y_0$. Let~$e_{1}, \cdots, e_{d-k}$~be a basis of~$Y_0$. Pick local (analytic)
coordinates~$(x_1,\cdots,x_k, y_1,\cdots, y_{d-k})$~such that for all~$1 \leq j \leq d-k, \frac{\partial }{\partial y_j} (0) = e_j$.
Define~$f: \mathbf{Z}_p^{d-k} \rightarrow \mathbf{Z}_p^d$~by \[ f(y_1,\cdots,y_{d-k}) =(0,\cdots,0, y_1,\cdots,y_{d-k}). \] Take the
local analytic flows~$\varphi^1,\cdots, \varphi^k$~associated to~$\mathbf X_1,\cdots, \mathbf X_k$~at~$0$~(here we do not suppose these flows to
be Tate-analytic)~and consider
\[
\begin{array}{crcl}
g&: \mathbf{Z}_p^k \times \mathbf{Z}_p^{d-k} & \longrightarrow & {\mathbf{Z}_p^d}\\
& {(t_1,\cdots,t_k; y)} & \longmapsto& { \varphi^1_{t_1} \circ \cdots \circ \varphi^k_{t_k} (f(y)).}
\end{array}
\]
The function~$g$~belongs to~$\mathbf{Z}_p
\left[ \left[ t_1, \cdots, t_k, \mathbf y \right] \right]^d$~with a radius of convergence~$r_g >0$, satisfies~$g(0) = 0$~and
its differential at the point~$(0,0)$~is
\[
(x_1,\cdots,x_k; z) \mapsto x_1 \mathbf X_1(0) +
\cdots+ x_k \mathbf X_k(0) + \sum_j z_j \frac{\partial}{\partial y_j} (0).
\]
Therefore it is invertible. By Proposition
\ref{ExistenceInverse}~$g$~admits a formal inverse~$h \in \mathbf{Q}_p \left[ \left[ t_1, \cdots, t_k, \mathbf y \right] \right]^d$
with a radius of convergence~$r_h >0$. Denote by~$\mathbf z$~the set of coordinates~$(t_1, \cdots, t_k, y_1, \cdots,
y_{d-k})$. Pick integers~$K, L$~with~$\abs p ^K < r_g$~and~$\abs p^L < r_h$~such that~$g(B(0, \abs p^K)) \subset
B(0, \abs p^L)$. Let $\mathcal V$ denote $g(B(0, \abs p^K))$; it is a clopen subset of~$\mathbf{Z}_p^d$~because~$B(0, \abs
p^K)$~is clopen. Set~$\varphi := \frac{1}{p^L} g(p^K \mathbf z)$~and~$\psi := \frac{1}{p^K} h(p^L \mathbf z)$; they both belong to
~$\mathbf{Q}_p \langle \mathbf z \rangle^d$, are inverses of each other, and we have~$\varphi^* \mathbf X_i = \partial_i$. Finally, since
~$\varphi \in \mathbf{Q}_p \langle \mathbf z \rangle^d$, the map~$\varphi^*$~preserves Tate-analytic vector fields.
\end{proof}
\begin{thm}[$p$-adic version of \cite{epstein1979transformation} Theorem 1.1]\label{theoremPAdicEpsteinThurston}
Let~$\mathfrak h$~be a nilpotent Lie algebra of Tate-analytic vector fields of~$\mathbf{Z}_p^d$, then~$d \geq \dl (\mathfrak h)$.
\end{thm}
\begin{proof}
We follow the proof of \cite{cantat2014mapping} Proposition 3.10 and proceed by induction on the dimension~$d$. If
~$d=0$, there is nothing to prove. Suppose~$d \geq 1$~and that the result is true in dimension~$d-1$. We may assume that~$\mathfrak h \neq 0$. Since~$\mathfrak h$~is
nilpotent, its center is not trivial.
Let~$\mathbf X$~be a nonzero central element of~$\mathfrak h$. Let~$m$~be a point where~$\mathbf X(m) \neq 0$, then by Theorem
\ref{pAdicFrob}, there exists a small clopen subset~$\mathcal V \subset \mathbf{Z}_p^d$~and an analytic diffeomorphism~$\varphi:
\mathcal V \rightarrow \mathbf{Z}_p^d$~that yields coordinates~$x_1, \cdots, x_d$~over~$\mathcal V$~such that~$\varphi_* \mathbf X =
\partial_d$~and such that~$\varphi_*$~maps Tate-analytic vector fields to Tate-analytic vector fields.
By Corollary \ref{AnalyticContinuation}, the restriction morphism~$\mathfrak h \rightarrow \mathfrak h_{|\mathcal V}$~is an
isomorphism of Lie algebras. We replace~$\mathfrak h$~by~$\mathfrak h_{|\mathcal V}$~and work with the coordinates~$x_1, \cdots, x_d$~over
~$\mathcal V$. Every vector field~$\mathbf Y$~of~$\mathfrak h$~must commute
with~$\mathbf X = \partial_d$~so it is of the form \[ \mathbf Y = \sum_{i=1}^d u_i(x_1,\cdots, x_{d-1}) \partial_i. \] Let~$\pi:
\mathcal V \simeq \mathbf{Z}_p^d \rightarrow \mathbf{Z}_p^{d-1}$~be the projection over the first~$d-1$~coordinates. This yields a Lie algebra
homomorphism~$\pi_* : \mathfrak h \rightarrow \Theta(\mathbf{Z}_p^{d-1})$. Denote by~$\mathfrak h_1$~the image of~$\mathfrak h$~under~$\pi_*$~and~$\mathfrak h_0$
its kernel. We have the exact sequence \[ 0 \rightarrow \mathfrak h_0 \rightarrow \mathfrak h \rightarrow \mathfrak h_1 \rightarrow 0. \]
Now,~$\mathfrak h_0$~consists of Tate-analytic vector fields of~$\mathfrak h$~of the form~$u(x_1, \ldots, x_{d-1}) \partial_d$~so it
is abelian and~$\mathfrak h_1$~is nilpotent because~$\mathfrak h$~is. So we get~$\dl(\mathfrak h) \leq \dl(\mathfrak h_1) + 1$~by the exact sequence
and~$\dl(\mathfrak h_1) \leq d-1$~by induction.
\end{proof}
We discuss the optimality of Theorem \ref{theoremPAdicEpsteinThurston} in Section \ref{SubSecOptimality}.
\paragraph{The theorem of Bell and Poonen.--}
The following theorem, first proven by Bell in \cite{Bell05} and then by Poonen in \cite{poonen2014p}, gives us
an easy way to construct flows from analytic transformations. This is a very strong theorem, as it shows
that, contrary to the case of~$\mathbf{R}$, over~$\mathbf{Q}_p$~a lot of analytic diffeomorphisms embed in a flow. See
\cite{Cantat_smf_18} for a more precise discussion of the Bell-Poonen theorem.
\begin{thm}[Bell-Poonen]\label{theoremBellPoonen}
Let~$d \geq 1$~be an integer, and~$f \in \mathbf{Z}_p \langle \mathbf x \rangle^d$. Take~$c > \frac{1}{p-1}$
and suppose that~$f \equiv \id \mod p^c$. Then:
\begin{enumerate}
\item~$f$~is a Tate-analytic diffeomorphism.
\item There exists a unique Tate-analytic flow~$\Phi \in \mathbf{Z}_p \langle \mathbf x,t \rangle^d$~such that
\[ \forall n \in \mathbf{Z}, \quad \Phi(\mathbf x,n) = f^n (\mathbf x). \]
In particular,~$\Phi_1 = f$.
\end{enumerate}
\end{thm}
In fact, Poonen showed this theorem for the valuation ring of any ultrametric field~$\mathbf{K}$. So
the Bell-Poonen theorem also holds over~$\D_p$~or over any finite extension of~$\mathbf{Q}_p$, for example.
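As a basic illustration of Theorem \ref{theoremBellPoonen}, take~$p \geq 3$,~$d = 1$~and~$f(x) = (1+p)x$. Then~$f \equiv \id \mod p$, and the flow interpolating the iterates~$f^n(x) = (1+p)^n x$~is
\[ \Phi(x, t) = (1+p)^t \, x := \exp \big( t \log (1+p) \big) \, x, \]
which belongs to~$\mathbf{Z}_p \langle x, t \rangle$~because~$v_p \big( \log(1+p) \big) = 1 > \frac{1}{p-1}$.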
\begin{cor}\label{NoTorsion}
Let~$p \geq 3$~and let~$H$~be a subgroup of~$\Diff^{an}_1 (\mathbf{Z}_p^d)$. Then~$H$~is torsion-free.
\end{cor}
\begin{proof}
Let~$h \in H$~and suppose that~$h$~has order~$N < \infty$. By Theorem \ref{theoremBellPoonen} (applicable with~$c = 1$~since~$p \geq 3$), there exists a
Tate-analytic flow~$\Phi$~such that~$\Phi_1 = h$. Then for all
~$\mathbf x \in \mathbf{Z}_p^d$~the function~$t \in \mathbf{Z}_p \mapsto \Phi_t (\mathbf x) - \mathbf x \in \mathbf{Z}_p^d$~is Tate-analytic and vanishes on the
infinite set~$N \mathbf{Z}$, so it is zero everywhere by Proposition \ref{PropIsolatedZeroPrinciple}. Therefore~$\Phi_1 (\mathbf x) = h(\mathbf x) = \mathbf x$~and~$h =
\id$.
\end{proof}
The next proposition will not be used in the proof of Theorem \ref{BoundNilpotentGroups}, but it gives useful
information on the dynamics of Tate-analytic flows.
\begin{prop}\label{StableAIndiceFiniPres}
Let~$\Phi \in \mathbf{Z}_p \langle \mathbf x,t \rangle^d$~be a Tate-analytic flow over~$\mathbf{Z}_p^d$.
If~$\mathcal U \subset \mathbf{Z}_p^d$~is a clopen set, then there exists an~$\epsilon >0$~such that \[ \forall t \in \mathbf{Z}_p,
\quad \abs t \leq \epsilon \Rightarrow \Phi_t (\mathcal U) = \mathcal U. \]
\end{prop}
\begin{proof}
Fix~$x \in \mathbf{Z}_p^d$~and~$0 < r \leq 1$. Since~$\Phi_t \rightarrow
\id$~as~$t \rightarrow 0$~in~$\Diff^{an} (\mathbf{Z}_p^d)$, there exists~$\epsilon >0$~such that for all~$t \in \mathbf{Z}_p$,~$\abs t
\leq \epsilon \Rightarrow \norm{\Phi_t - \id} \leq r$. Now for all~$z \in \mathbf{Z}_p^d, \norm{ \Phi_t (z) - z}
\leq \norm{\Phi_t - \id} \leq r$. Then, for all~$y$~such that~$\norm{y -x} \leq r$,
\begin{align*}
\norm{\Phi_t (y) - x}
&= \norm{\Phi_t (y) - y + y -x } \\ &\leq \max( \norm{\Phi_t(y) - y}, \norm{y-x} ) \leq r.
\end{align*}
So if~$\abs t \leq \epsilon$, we have~$\Phi_t (B(x,r)) \subset B(x,r)$~and~$\Phi_{-t}(B(x,r)) \subset B(x,r)$; applying
~$\Phi_t$~to the second inclusion gives~$B(x,r) \subset \Phi_t(B(x,r))$, so~$\Phi_t(B(x,r)) = B(x,r)$.
Since~$\mathcal U$~is clopen, by compactness,~$\mathcal U = \bigcup_{i=1}^T B(x_i, r_i)$~for some finite set~$\left\{
x_1, \cdots, x_T \right\} \subset \mathcal U$~and radii~$r_i \in (0, 1]$. Thus, the result follows from the case of a single
ball, by taking for~$\epsilon$~the minimum of the~$\epsilon_i$~associated with the balls~$B(x_i, r_i)$. \end{proof}
\subsection{Infinite-dimensional analytic manifold over~$\mathbf{Q}_p$}\label{SecInfiniteDimensionalAnalyticmanifolds}
The main goal of the next two sections is to show that the topological group~$\Diff^{an}(\mathbf{Z}_p^d)$~is in fact an infinite-dimensional Lie group over~$\mathbf{Q}_p$.
We refer to \cite{bourbaki2007varietes} for reference on analytic functions and analytic manifolds over a Banach
space. In this section,~$\k$~is a complete ultrametric field and~$E, F$~are Banach spaces over~$\k$~(potentially of infinite
dimension). As we shall see,
taking~$\k = \mathbf{Q}_p$~and~$E, F = \mathbf{Q}_p^d$~allows one to recover the definition of converging power series and analytic
functions over~$\mathbf{Q}_p^d$.
Basically, if~$A$~is a Banach algebra over~$\mathbf{Q}_p$, then any map~$f: A^d \rightarrow A$~such that, locally at
any point~$x \in A^d$,~$f$~admits an expression as a converging power series
\[ f(x + h) = \sum_{I \in \mathbf{Z}_+^d} a_I h^I \]
with~$a_I \in A, a_I \rightarrow 0$, is an analytic map from~$A^d$~to~$A$. The problem is that if~$A$~is not finite-dimensional,
this definition is not enough: for example, a continuous linear map is not necessarily described by an expression of
this form but should still be analytic.
\paragraph{Multi-indices, multi-linear maps.--}
If~$\alpha = (\alpha_1, \cdots, \alpha_d) \in \mathbf{Z}_+^d$~is a multi-index, then~$\abs \alpha := \sum_i \alpha_i$. For~$1 \leq
j \leq \abs \alpha$, we define
\[ \alpha(j) = \max \left\{ k + 1 \in \mathbf{Z}_+ : \alpha_1 + \cdots + \alpha_{k} < j \right\}. \]
The sequence~$(\alpha(j))_{1 \leq j \leq \abs \alpha}$~is the increasing sequence consisting of~$\alpha_1$~times the
number 1,~$\alpha_2$~times the number 2, \ldots,~$\alpha_d$~times the number~$d$. For example, if~$\alpha = (1, 5 ,7)$,
then~$d=3, \abs \alpha = 13$~and
\[ (\alpha(j))_{1 \leq j \leq 13} = (1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3).
\]
For~$1 \leq i \leq d$, we denote by
$p_i: E^d \rightarrow E$~the projection to the~$i$-th coordinate. For a multi-index~$\alpha \in \mathbf{Z}_+^d$, we define
\[
p_\alpha := (p_{\alpha(j)})_{1 \leq j \leq \abs \alpha}: E^d \rightarrow E^{\abs \alpha}.
\]
If~$\beta \in \mathbf{Z}_+^d$~is another multi-index, then we write~$\alpha + \beta$~for the multi-index~$(\alpha_i +
\beta_i)_{1 \leq i \leq d}$.
We write~$\alpha \geq \beta$~if~$\alpha_i \geq \beta_i$~for all~$1 \leq i \leq d$; in that case there is a unique
multi-index~$\gamma$~such that~$\alpha = \beta + \gamma$, and we set~$\alpha - \beta := \gamma$. We also define the
binomial coefficient~$\binom{\alpha}{\beta} := \binom{\alpha_1}{\beta_1} \cdots \binom{\alpha_d}{\beta_d}$. Finally, if
$\mathbf x = (x_1, \cdots, x_d)$, then~$\mathbf x^\alpha := x_1^{\alpha_1} \cdots x_d^{\alpha_d}$~ and if~$\mathbf y = (y_1, \cdots,
y_d)$, one has the identity
\begin{align*}
(\mathbf x + \mathbf y)^\alpha &= (x_1 + y_1)^{\alpha_1} \cdots (x_d + y_d)^{\alpha_d} \\
&= \left(\sum_{\beta_1 = 0}^{\alpha_1} \binom{\alpha_1}{\beta_1}x_1^{\beta_1} y_1^{\alpha_1 - \beta_1} \right) \cdots
\left( \sum_{\beta_d = 0}^{\alpha_d} \binom{\alpha_d}{\beta_d}x_d^{\beta_d} y_d^{\alpha_d - \beta_d} \right) \\
&= \sum_{0 \leq \beta_1 \leq \alpha_1} \cdots \sum_{0 \leq \beta_d \leq \alpha_d} \binom{\alpha_1}{\beta_1} \cdots
\binom{\alpha_d}{\beta_d} x_1^{\beta_1} \cdots x_d^{\beta_d} y_1^{\alpha_1 - \beta_1} \cdots y_d^{\alpha_d - \beta_d} \\
&= \sum_{\beta \leq \alpha }\binom{\alpha}{\beta} \mathbf x^{\beta} \mathbf y^{\alpha - \beta}.
\end{align*}
For an integer~$k$, let~$\mathcal L_k (E, F)$~be the set of continuous multilinear maps from~$E^k$~to~$F$~equipped with
the topology of uniform convergence over bounded subsets. The norm of an element~$\phi \in \mathcal L_k(E,F)$~is defined by
\[ \norm \phi = \inf \left\{ a > 0 : \forall x_1, \cdots, x_k \in E, \norm{\phi(x_1, \cdots, x_k)}_F \leq a \norm{x_1}_E \cdots \norm{x_k}_E \right\}. \]
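\begin{ex}
For instance, if~$E = F = \mathbf{Q}_p$, every~$\phi \in \mathcal L_k(\mathbf{Q}_p, \mathbf{Q}_p)$~is of the form~$\phi(x_1, \cdots, x_k) = a \, x_1 \cdots x_k$~with~$a = \phi(1, \cdots, 1)$, and then~$\norm \phi = \abs a$.
\end{ex}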
\paragraph*{Continuous polynomial maps and power series.--} (\cite{bourbaki2007varietes} Appendix of \S 1-7) A
\emph{continuous homogeneous polynomial map of multi-degree}~$\alpha$~is a map~$f: E^d \rightarrow F$
such that there exists~$u \in \mathcal L_{\abs{\alpha}}(E,F)$~for which~$f = u \circ p_\alpha$.
We denote by~$P_\alpha(E,F)$~the vector space of continuous homogeneous polynomial maps of multi-degree~$\alpha$~equipped
with the quotient topology from~$\mathcal L_{\abs {\alpha}}(E,F)$. The norm of a continuous homogeneous polynomial map~$P \in P_\alpha
(E,F)$~is defined by
\[ \norm P := \inf_{u \in \mathcal L_{\abs{\alpha}} (E,F), P = u \circ p_\alpha } \norm u_{\mathcal L_{\abs{\alpha}}(E,F)}. \]
\begin{ex}
Set~$E, F = \mathbf{Q}_p \langle \mathbf x \rangle$. Let~$P$~be the monomial~$\mathbf x^\alpha$; then the map~$P: g \in \mathbf{Q}_p
\langle \mathbf x \rangle^d \mapsto P(g) \in \mathbf{Q}_p \langle \mathbf x \rangle$~is a
continuous homogeneous polynomial map of multi-degree~$\alpha$. Indeed, let~$k = \abs \alpha$~and
consider the multilinear map
\[
\begin{array}{lccc}
{T_{k}}:& {E^{k}}& \longrightarrow & {F}\\
&{(f_1, \cdots, f_{k})} &\longmapsto &{f_1 \cdots f_{k}};
\end{array}
\]
it is continuous as~$\norm{ T_{k}(f_1, \cdots, f_{k})} \leq \norm{f_1} \cdots \norm{f_{k}}$, and~$P = T_{k} \circ
p_\alpha$.
Furthermore, for a multi-index~$\beta$, define~$\phi_\beta: \mathbf{Q}_p \langle \mathbf x \rangle \rightarrow \mathbf{Q}_p \langle \mathbf x
\rangle$~such that~$\phi_\beta (g)$~is the homogeneous part of multi-degree~$\beta$~of~$g$. Then~$\phi_\beta$~is
linear and continuous; therefore, if~$P(\mathbf x) = \mathbf x^\alpha$, the map~$g \in \mathbf{Q}_p \langle \mathbf x \rangle^d \mapsto
P(\phi_{\beta_1}(g_1), \cdots, \phi_{\beta_d}(g_d))$~is a continuous homogeneous polynomial map of
multi-degree~$\alpha$~for any multi-indices~$\beta_1, \cdots, \beta_d$.
\end{ex}
For an integer~$k$,~$P_k (E^d,F)$~is the direct sum of the spaces~$P_\alpha(E,F)$~for~$\alpha$~such that~$\abs \alpha = k$; the
elements of~$P_k (E^d,F)$~are the \emph{continuous homogeneous polynomial maps of total degree~$k$}.
\begin{ex}\label{ExampleOfhomogeneousContinuousPolynomialMap}
If~$P \in \mathbf{Q}_p [\mathbf x]$~is a homogeneous polynomial of degree~$k$~in~$d$~variables, then the map~$P: g \in \mathbf{Q}_p \langle \mathbf x \rangle^d
\mapsto P(g)$~is a continuous homogeneous polynomial map of total degree~$k$, and for any multi-indices
~$\beta_1, \cdots, \beta_d$, so is the map~$g \in \mathbf{Q}_p \langle \mathbf x \rangle^d \mapsto P(\phi_{\beta_1}(g_1), \cdots,
\phi_{\beta_d}(g_d))$.
\end{ex}
We denote by~$P(E^d, F)$~the direct sum of the spaces~$P_k(E^d, F)$~for~$k \in \mathbf{Z}_+$; its elements are the \emph{continuous polynomial maps in
~$d$~variables}.
\begin{prop}
Set~$E, F = \mathbf{Q}_p \langle \mathbf x \rangle$. Take a polynomial~$P \in \mathbf{Q}_p [\mathbf x ]$. Then,~$P$~induces a
continuous polynomial map~$E^d \rightarrow F$~and the linear embedding~$\mathbf{Q}_p[\mathbf x] \hookrightarrow P(E^d, F)$~is an isometry.
\end{prop}
Finally, the set~$\hat P (E^d, F)$~of \emph{power series} in~$d$~variables over~$E$~is the (infinite) product of the spaces~$P_\alpha(E,
F)$~for~$\alpha \in \mathbf{Z}_+^d$~(equivalently, of the~$P_k(E^d, F)$~for~$k \in \mathbf{Z}_+$), equipped with the product of the discrete
topologies on the factors. Equivalently, if~$f = \sum_\alpha f_\alpha \in \hat P(E^d, F)$, then the order of vanishing at~$0$~of
~$f$~is~$\ord (f) = \min \left\{ \abs \alpha : f_\alpha \neq 0 \right\}$~and this is the topology induced by the norm~$\norm f :=
2^{- \ord(f)}$. The
space~$\hat P(E^d, F)$~is complete and Hausdorff for this topology. A \emph{converging power
series} is an element~$f = \sum_\alpha f_\alpha$~of~$\hat P(E^d, F)$~such that there exists~$R \in (\mathbf{R}_{>0})^d$
satisfying
~$\sup_\alpha R^{\alpha} \norm{f_\alpha}_{P_\alpha(E,F)} < + \infty$. If~$f = \sum_\alpha f_\alpha$, then the \emph{polyradius of
convergence of~$f$} is
\[
r(f) := \sup \left\{ R \in (\mathbf{R}_{>0})^d : R^{\alpha} \norm{f_\alpha} \rightarrow 0 \text{ when } \abs
\alpha \rightarrow \infty \right\}.
\]
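\begin{ex}
For instance, with~$E = F = \mathbf{Q}_p$~and~$d = 1$, the power series~$f(x) = \sum_{n \geq 0} p^n x^n$~satisfies~$\norm{f_n} = \abs{p^n} = p^{-n}$, so that~$r(f) = p$: the series converges at every~$x \in \mathbf{Q}_p$~with~$\abs x \leq 1$, where it sums to~$\frac{1}{1 - px}$.
\end{ex}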
\begin{dfn}
Let~$\mathcal U$~be an open subset of~$E^d$. A map~$f: \mathcal U \rightarrow F$~is \emph{analytic} at a point~$a \in
\mathcal U$~if there exists a converging power series~$f_a$~such that~$f(a + x) = f_a(x)$~for all~$x$~in a small
neighbourhood of~$0$~in~$E^d$. The function~$f$~is analytic if it is analytic at every point of~$\mathcal U$.
For any integer~$m \geq 1$, a map~$f: \mathcal U \rightarrow F^m$~is analytic if each of its coordinates is analytic.
\end{dfn}
\begin{ex}
Every continuous linear map~$u: \mathbf{Q}_p \langle \mathbf x \rangle^d \rightarrow \mathbf{Q}_p \langle \mathbf x \rangle^d$~is analytic: around any
point~$a$~one has~$u(a + h) = u(a) + u(h)$, which is a converging power series in~$h$~with a constant term and a term of total degree~$1$.
\end{ex}
\begin{prop}\label{PropCompositionIsAnalytic}
The map~$\text{\emph{Comp}}: (h, f) \in \mathbf{Z}_p \langle \mathbf x \rangle^d \times \mathbf{Z}_p \langle \mathbf x \rangle^d \mapsto h \circ f \in
\mathbf{Z}_p \langle \mathbf x \rangle^d$~is analytic. Moreover, it is linear in~$h$.
\end{prop}
\begin{proof}
It is enough to show that the map~$\Phi: (h,f) \in \mathbf{Z}_p \langle \mathbf x \rangle \times \mathbf{Z}_p \langle \mathbf x \rangle^d \mapsto h
\circ f \in \mathbf{Z}_p \langle \mathbf x \rangle$~is analytic.
Let~$(h, f) \in \mathbf{Z}_p \langle \mathbf x \rangle \times \mathbf{Z}_p \langle \mathbf x \rangle^d$; we show that~$\Phi$~is analytic at~$(h,
f)$. Let~$g \in \mathbf{Z}_p \langle \mathbf x \rangle^d$~and write~$h(\mathbf x) = \sum_\alpha a_\alpha \mathbf x^\alpha$. Then
\begin{align*}
h \circ (f + g(\mathbf x)) &= \sum_\alpha a_\alpha (f(\mathbf x) + g(\mathbf x))^\alpha \\
&= \sum_\alpha \sum_{\gamma \leq \alpha } a_\alpha \binom{\alpha}{\gamma} f(\mathbf x)^{\alpha - \gamma} g(\mathbf x)^{\gamma} \\
&= \sum_{\beta} \left( \sum_{\alpha \geq \beta} a_\alpha \binom{\alpha}{\beta}f(\mathbf x)^{\alpha - \beta} \right) g(\mathbf x)^\beta \\
&= \sum_\beta Q_{\beta,f}(h)(\mathbf x) \cdot g(\mathbf x)^\beta
\end{align*}
where~$Q_{\beta,f}: \mathbf{Q}_p \langle \mathbf x \rangle \rightarrow \mathbf{Q}_p \langle \mathbf x \rangle$~is a continuous linear
map with~$\norm {Q_{\beta,f}} \leq 1$~for every~$\beta$; this is therefore a
converging power series in the variables~$(h, g)$~of polyradius of convergence~$(+ \infty, 1)$. Therefore~$\Phi$~is
analytic at any point~$(0, f)$, and by linearity in~$h$,~$\Phi$~is analytic at any point~$(h,f)$.
\end{proof}
\paragraph{Analytic manifolds.--}
Let~$\mathbf{K}$~be an ultrametric field and let~$X$~be a topological space. A~$\mathbf{K}$-\emph{chart} of~$X$~is a
homeomorphism~$\phi: U \rightarrow \phi(U) \subset E$~where~$U$~is an open subset of~$X$~and~$E$~a Banach space over~$\mathbf{K}$. We say that two
$\mathbf{K}$-charts~$\phi: U \rightarrow E, \psi: V \rightarrow F$~are \emph{compatible} if
\begin{enumerate}
\item~$\phi (U \cap V)$~is open in~$E$~and~$\psi (U \cap V)$~is open in~$F$.
\item~$\psi \circ \inv \phi : \phi (U \cap V) \rightarrow F$~is analytic.
\item~$\phi \circ \inv \psi: \psi (U \cap V) \rightarrow E$~is analytic.
\end{enumerate}
An analytic manifold~$X$~over~$\mathbf{K}$~is defined classically as a topological space equipped with
an atlas of compatible~$\mathbf{K}$-charts. For a
point~$x \in X$, the tangent space at~$x$~is denoted by~$T_x X$. A
function~$f: X \rightarrow Y$~between two analytic manifolds is analytic if for every chart~$\phi: U \subset X
\rightarrow E, \psi: V \subset Y \rightarrow F$, the map~$\psi \circ f \circ \inv \phi: \inv \phi(U) \rightarrow F$~is
analytic. The differential of~$f$~at a point~$x$~will be denoted~$D_x f$.
\begin{prop}\label{PropDiffAnIsAnAnalyticVariety}
The topological space~$\Diff^{an}(\mathbf{Z}_p^d)$~is an analytic manifold over~$\mathbf{Q}_p$; in fact, it is an open subset of the
Banach space~$\mathbf{Q}_p \langle \mathbf x \rangle^d$. The subgroups~$\Diff^{an}_c(\mathbf{Z}_p^d)$
for~$c > \frac{1}{p-1}$~are diffeomorphic to~$\mathbf{Z}_p \langle \mathbf x \rangle^d$~and they form a basis of neighbourhoods of
~$\id$~in~$\Diff^{an}(\mathbf{Z}_p^d)$. \end{prop}
\begin{proof}
Theorem \ref{theoremBellPoonen} shows that~$\Diff^{an}_c (\mathbf{Z}_p^d)$~is the ball of center~$\id$~and radius~$\abs p^c$~in~$\mathbf{Z}_p
\langle \mathbf x \rangle^d$. Using Proposition \ref{truc1}, we see that for every~$f \in \Diff^{an}(\mathbf{Z}_p^d)$, the ball of
center~$f$~and radius~$\abs p^c$~is included in~$f \circ \Diff^{an}_c(\mathbf{Z}_p^d)$. Therefore~$\Diff^{an}(\mathbf{Z}_p^d)$~is an open subset
of~$\mathbf{Q}_p \langle \mathbf x \rangle^d$, hence an infinite-dimensional analytic manifold over~$\mathbf{Q}_p$.
\end{proof}
\paragraph{The implicit function theorem.--}
Let~$X, Y, Z$~be manifolds over~$\mathbf{K}$~and let~$f: X \times Y \rightarrow Z$~be an analytic map. For~$(a,b) \in X \times Y$, we write~$D_{(a,b)} f$~for the differential of~$f$~at~$(a,b)$,
~$D_{(a,b)}^{(1)} f$~for the differential of the partial map~$x \in X \mapsto f(x, b)$~at~$a$, and~$D_{(a,b)}^{(2)} f$~for the
differential of the partial map~$y \in Y \mapsto f(a, y)$~at~$b$. Then, one has~$T_{(a,b)} (X \times Y) = T_a X \times T_b Y$
and~$D_{(a,b)}f (u,v) = D_{(a,b)}^{(1)}f \cdot u + D_{(a,b)}^{(2)} f \cdot v$.
\begin{thm}[Implicit function theorem, 5.6.1 of \cite{bourbaki2007varietes}]
Suppose that~$D_{(a,b)}^{(2)}f$~is bijective. Then there exist an open neighbourhood~$U$~of~$a$~in~$X$, an open
neighbourhood~$V$~of~$b$~in~$Y$~and a unique analytic map~$g: U \rightarrow V$~such that
\[ \forall x \in U, \quad f(x, g(x)) = f(a,b), \]
and the differential of~$g$~at any~$x \in U$~is given by
\[ D_x g = - \inv{\left( D_{(x,g(x))}^{(2)} f \right)} \circ D_{(x,g(x))}^{(1)} f. \]
\end{thm}
\begin{prop}\label{PropInvIsAnalytic}
The inversion map~$\Inv : f \in \Diff^{an}(\mathbf{Z}_p^d) \mapsto \inv f$~is analytic.
\end{prop}
\begin{proof}
We write~$\mathcal U = \Diff^{an}(\mathbf{Z}_p^d)$; we know that~$\mathcal U$~is an analytic manifold over~$\mathbf{Q}_p$~by
Proposition \ref{PropDiffAnIsAnAnalyticVariety}.
By Proposition \ref{PropCompositionIsAnalytic}, the
composition operation is analytic over~$\mathbf{Z}_p \langle \mathbf x \rangle^d \times \mathbf{Z}_p \langle \mathbf x \rangle^d$, therefore it is
analytic over~$\mathcal U \times \mathcal U$.
To show that~$\Inv$~is analytic we only need to show that it is
analytic at~$\id$. Indeed, take~$f \in \mathcal U$; then~$\Inv = L_{\inv f} \circ \Inv
\circ R_{\inv f}$, where~$R_{\inv f}$~denotes composition on the right by~${\inv f}$~and~$L_{\inv f}$~composition on
the left. Since~$L_{\inv f}$~and~$R_{\inv f}$~are analytic,~$\Inv$~is analytic at~$f$~if and only if it is analytic at~$\id$.
To show that~$\Inv$~is analytic at~$\id$, we use the implicit function theorem. Since the map~$M: (f,g)
\in \mathcal U \times \mathcal U \rightarrow f \circ g \in \mathcal U$
is analytic and the partial differential~$D_{(\id, \id)}^{(2)} M = \id$~is bijective, there exist an open neighbourhood
~$\mathcal V$~of~$\id$~and a unique analytic map~$G:
\mathcal V \rightarrow \mathcal U$~such that~$M(f, G(f)) = \id$~for all~$f \in \mathcal V$. Therefore~$\Inv_{|\mathcal V} = G$~and inversion is analytic
at~$\id$.
\end{proof}
\subsection{$p$-adic Lie groups}
We refer to \cite{Bourbaki06} for more details on the results provided in this section.
A~$p$-adic Lie group~$G$~is a topological group with a structure of a~$p$-adic analytic manifold such that the
multiplication map and the inverse map are analytic. The dimension of~$G$~is its dimension as an analytic manifold; it
can be infinite. Its \emph{Lie algebra}~$\mathfrak g$~is the tangent space of~$G$~at the neutral element~$e$; it is equipped with a
Lie bracket~$[\cdot , \cdot]$~defined as follows. For~$g \in G$, let~$\iota_g: h \in G \mapsto g h \inv g$; then~$\Ad (g)
:= D_e \iota_g \in \mathrm{GL}(\mathfrak g)$~defines the adjoint representation of~$G$. Define~$\ad := D_e \Ad$; then
\[
\forall \mathbf X, \mathbf Y \in \mathfrak g, [\mathbf X, \mathbf Y] := \ad(\mathbf X)(\mathbf Y).
\]
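\begin{ex}
For a concrete finite-dimensional instance, take~$G = \mathrm{GL}_n(\mathbf{Q}_p)$, with Lie algebra~$\mathfrak g = \mathrm{M}_n(\mathbf{Q}_p)$. Then~$\iota_g(h) = g h \inv g$~gives~$\Ad(g) \mathbf Y = g \mathbf Y \inv g$, and differentiating~$g \mapsto \Ad(g) \mathbf Y$~at~$g = e$~in the direction~$\mathbf X$~yields
\[ [\mathbf X, \mathbf Y] = \ad(\mathbf X)(\mathbf Y) = \mathbf X \mathbf Y - \mathbf Y \mathbf X. \]
\end{ex}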
\begin{thm}\label{theoremDiffAnIsALieGroup}
The topological group~$\Diff^{an} (\mathbf{Z}_p^d)$~is an infinite-dimensional Lie group over~$\mathbf{Q}_p$. Its Lie
algebra is~$\Theta(\mathbf{Z}_p^d)$.
Moreover, the subgroups~$\Diff^{an}_c (\mathbf{Z}_p^d)$~are also Lie groups for~$c > \frac{1}{p-1}$~and they form a basis of
neighbourhoods of~$\id$~in~$\Diff^{an}(\mathbf{Z}_p^d)$.
\end{thm}
\begin{proof}
The fact that~$\Diff^{an} (\mathbf{Z}_p^d)$~is a Lie group over~$\mathbf{Q}_p$~follows from Propositions
\ref{PropCompositionIsAnalytic}, \ref{PropDiffAnIsAnAnalyticVariety} and \ref{PropInvIsAnalytic} where
it was shown that it was an analytic manifold and that composition and inversion are analytic maps. The statement for
~$\Diff^{an}_c(\mathbf{Z}_p^d)$~follows from the same propositions.
The tangent space at~$\id$~is~$\mathbf{Q}_p \langle \mathbf x \rangle^d~$~that we identify with~$\Theta(\mathbf{Z}_p^d)$~and under this
identification the Lie bracket between two Tate-analytic vector fields corresponds to the Lie bracket of the Lie
algebra of the Lie group~$\Diff^{an}(\mathbf{Z}_p^d)$~because if~$\mathbf X, \mathbf Y$~are of norm~$\leq \abs p^c$ with $c >
\frac{1}{p-1}$, then they admit global
Tate-analytic flows~$\Phi^\mathbf X$~and~$\Phi^\mathbf Y$~by Proposition \ref{PropExistenceGlobalTateAnalyticFlow} and
\begin{eqnarray*}
[\mathbf X, \mathbf Y] &=& \frac{\partial }{\partial s}_{|s=0} \frac{\partial }{\partial t}_{|t=0} \Phi^\mathbf X_{-s} \circ \Phi^\mathbf Y_t
\circ \Phi^\mathbf X_s \\
&=& \frac{\partial }{\partial s}_{|s=0} \frac{\partial }{\partial t}_{|t=0} \iota_{\Phi^\mathbf X_s}
(\Phi^\mathbf Y_t) \\
&=& D_{\id} \Ad(\mathbf X) (\mathbf Y) = \ad(\mathbf X) (\mathbf Y).
\end{eqnarray*}
On the other hand, if~$f,g \in \Diff^{an}_c(\mathbf{Z}_p^d)$~with~$c > \frac{1}{p-1}$, then~$\frac{\partial}{\partial s}_{|s=0} \frac{\partial}{\partial t}_{|t=0}
\Phi^f_{-s} \circ \Phi^g_t \circ \Phi^f_s = [\mathbf X_f, \mathbf X_g] = \ad(\mathbf X_f) (\mathbf X_g)$.
\end{proof}
\begin{rmq}
Since the Bell-Poonen theorem holds over any ultrametric field, the same proof shows that~$\Diff^{an}(\D_p^d)$~is a Lie
group over~$\mathbf{C}_p$. In fact, for any complete extension~$\mathbf{K}$~of~$\mathbf{Q}_p$~with unit ball~$\mathbf A$, the
group~$\Diff^{an}(\mathbf A^d)$~is a Lie group over~$\mathbf{K}$.
\end{rmq}
\begin{thm}[\cite{Bourbaki06}, \S 8, Theorem 1]\label{PropContinousMorphismIsAnalytic}
Let~$G, H$~be Lie groups over~$\mathbf{Q}_p$~and~$\phi: G \rightarrow H$~be a continuous homomorphism of topological groups.
Then,~$\phi$~is analytic and therefore a homomorphism of Lie groups.
\end{thm}
\begin{rmq}
The proof relies heavily on~$\mathbf{Q}$~being dense in~$\mathbf{Q}_p$, and the theorem is false if we replace
~$\mathbf{Q}_p$~by a nontrivial finite extension of~$\mathbf{Q}_p$. Indeed, suppose for example that~$\mathbf{K} = \mathbf{Q}_p (\sqrt \alpha)$~is
a quadratic extension. Any element~$z$~of~$\mathbf{K}$~is of the form~$z = x + \sqrt \alpha y$~with~$x, y \in \mathbf{Q}_p$. Then, the function
\[ f: z = x + \sqrt \alpha y \mapsto x - \sqrt \alpha y \]
is a continuous group homomorphism; it is~$\mathbf{Q}_p$-analytic but not~$\mathbf{K}$-analytic, as~$f_{|1 \cdot
\mathbf{Q}_p} = \id$~and~$f_{|\sqrt \alpha \cdot \mathbf{Q}_p} = - \id$.
\end{rmq}
Let~$\Gamma$~be a finitely generated group; the pro-$p$ completion~$\Gamma_p$~of~$\Gamma$~is the projective limit of the
quotients of~$\Gamma$~that are finite~$p$-groups; it is a topological group with respect to the profinite topology. In
particular, for any~$\gamma \in \Gamma$, the group homomorphism~$n
\in \mathbf{Z} \mapsto \gamma^n \in \Gamma$~extends uniquely to a continuous group homomorphism~$t \in \mathbf{Z}_p \mapsto \gamma^t \in
\Gamma_p$. In the context of Tate-analytic diffeomorphisms, if~$p \geq 3$~and~$f \equiv \id \mod p$, then the extension
of~$n \in \mathbf{Z} \mapsto f^n \in \Diff^{an}_1 (\mathbf{Z}_p^d)$~is the Tate-analytic flow~$t \in \mathbf{Z}_p \mapsto \Phi_t^f \in
\Diff^{an}(\mathbf{Z}_p^d)$~associated to~$f$~given by the Bell-Poonen theorem.
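\begin{ex}
For instance, if~$\Gamma = \mathbf{Z}$~then~$\Gamma_p = \varprojlim \mathbf{Z} / p^k \mathbf{Z} = \mathbf{Z}_p$, and for the translation~$f(x) = x + p$~one has~$f^n(x) = x + np$, so the extension is the visibly Tate-analytic flow~$\Phi_t^f(x) = x + tp$~for~$t \in \mathbf{Z}_p$.
\end{ex}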
\begin{prop}\label{PropEmbeddingOfProPcompletion}
Let~$p$~be a prime, let~$c > \frac{1}{p-1}$~and let~$G$~be a compact Lie group over~$\mathbf{Q}_p$. Let
$\Gamma$~be a finitely generated subgroup of~$G$~such that~$G$~is the pro-$p$-completion of~$\Gamma$ and let~$\iota:
\Gamma \rightarrow \Diff^{an}_c (\mathbf{Z}_p^d)$~be a group homomorphism, then~$\iota$~extends
uniquely to a Lie group homomorphism~$\iota: G \rightarrow \Diff^{an}_c (\mathbf{Z}_p^d)$~such that for all~$t \in \mathbf{Z}_p$,
all~$g \in \Gamma$,~$\iota(g^t) = \iota(g)^t$~and the map~$(t, \mathbf x) \in \mathbf{Z}_p \times \mathbf{Z}_p^d \mapsto \iota(g)^t (\mathbf x)$~is
analytic.
\end{prop}
\begin{proof}
Theorem 2.11 of \cite{cantat2014algebraic} shows that~$\iota$~extends uniquely to a continuous map. In
\cite{cantat2014algebraic} this is only shown when~$p \geq 3$~and~$c = 1$, but the proof is identical
for~$p \geq 2$~and~$c > \frac{1}{p-1}$, as it is only required that the images of the elements of~$\Gamma$~admit a
Tate-analytic flow.
Since~$G$~and~$\Diff^{an}_c (\mathbf{Z}_p^d)$~are both Lie groups over~$\mathbf{Q}_p$,~$\iota$~is automatically a Lie
group homomorphism by Theorem \ref{PropContinousMorphismIsAnalytic}.
\end{proof}
\begin{thm}[\cite{Bourbaki06}, \S 8, Theorem 2]\label{theoremClosedSubgroupAreLieGroups}
Let~$G$~be a finite-dimensional Lie group over~$\mathbf{Q}_p$, then every closed subgroup of~$G$~is a Lie subgroup of~$G$.
\end{thm}
\begin{prop}[\cite{Bourbaki06}, \S 9, Corollary of Proposition 6]
\label{PropOpenSubgroupDerivedSeries}
Let~$G$~be a finite-dimensional Lie group over~$\mathbf{Q}_p$~with Lie algebra~$\mathfrak g$. There exists an open subgroup
~$G_0$~of~$G$~such that for all~$i \geq 0$, the subgroups~$D^i(G_0)$~and~$D_i(G_0)$~are Lie subgroups with Lie algebras
~$\mathcal D^i (\mathfrak g)$~and~$\mathcal D_i (\mathfrak g)$~respectively.
\end{prop}
\subsection{Nilpotent groups and embedding into~$p$-adic Lie groups.}
\subsubsection{Nilpotent groups}\label{SubSubSecNilpotentGroups}
The main goal of this section is to show that if~$H$~is a finitely generated nilpotent group with generators~$h_1,
\ldots, h_s$, then for any~$m \geq 1$~the subgroup~$H_m$~of~$H$~generated by~$h_1^m, \ldots, h_s^m$~is a finite index subgroup
of~$H$. This will be useful in the proof of Theorem \ref{BoundNilpotentGroups}, where for a subgroup~$H \subset
\Diff^{an}_1 (\mathbf{Z}_p^d)$~we will need to consider such a subgroup~$H_m$~to get the desired result.
Recall the notation introduced in \S~\ref{par:nilpotent_and_solvable} for nilpotent and solvable groups
and Lie algebras. We shall say that an expression that involves~$k$~commutator brackets is a commutator of
length~$k$; for instance~$[ [a, [b,c]], d]$~is a commutator of length 3 and a single element can be viewed
as a commutator of length 0. For~$k \geq 1$, we denote by~$[a_1; \cdots; a_k]$~the commutator~$[a_1, [a_2,
\cdots, [a_{k-1}, a_k] \cdots]$; its length is~$k-1$.
Let~$G, G', G''$~be groups, a map~$\phi: G \times G' \rightarrow G''$~is \emph{bilinear} if for every~$g \in G, g' \in
G'$, the maps~$\phi(g, \cdot)$~and~$\phi( \cdot, g')$~are group homomorphisms. More generally, a map~$G_1 \times \cdots
\times G_m \rightarrow G$~is~$m$-linear if fixing any~$m-1$~of the coordinates yields a group homomorphism in the remaining one. For any
triple of elements~$x,y,z$~in~$G$, we have
\begin{itemize}
\item~$\inv{[x,y]} = [y,x]$.
\item~$[x,yz] = [x,y] [ y, [x,z]] [x,z]$.
\item~$[xy,z] = [x,[y,z]] [y,z] [x,z]$.
\end{itemize}
The map~$(a,b) \mapsto [a,b]$~from~$G \times D^{k-1}(G)$~to~$D^k(G)$~has image generating~$D^k (G)$. It follows from the
last three formulas that, for every~$k \geq 1$, this map induces a bilinear map
\[ \mathrm{co}_k : G \times D^{k-1}(G) \rightarrow D^k(G) / D^{k+1} (G) \]
whose image~$\im \mathrm{co}_k$~generates~$D^k(G) / D^{k+1}(G)$.
\begin{prop}\label{PropGeneratorsOfLowerCentralSeries}
Let~$G$~be a group and~$S$~a set of generators of~$G$.
\begin{enumerate}
\item for every integer~$k \geq 0$, the quotient~$D^{k}(G) / D^{k+1}(G)$~is generated by the images of the
commutators of length~$k$~in elements of~$S$.
\item if~$G$~is finitely generated, then~$D^{k}(G) / D^{k+1}(G)$~is finitely generated for every~$k
\geq 0$.
\item If~$G$~is nilpotent, then~$D^{\nilp(G)-1} (G)$~is generated by the commutators of length~$\nilp(G) -1$~in
elements of~$S$.
\end{enumerate}
\end{prop}
\begin{proof}
Let us prove the first assertion by induction on~$k$. Let~$X_k$~be the set of commutators of length~$k$
in elements of~$S$. The initialization~$k=0$~follows from~$X_0 = S$~and the fact that~$S$~generates
~$G$. Now, suppose~$k \geq 1$~and that~$X_{k-1}$~generates~$D^{k-1} (G) /
D^{k}(G)$. The image of the map~$\mathrm{co}_k$~generates $D^k(G) / D^{k+1} (G)$; by induction and
since~$\mathrm{co}_k (a,b)$~is a homomorphism with respect to~$a$~and with respect to~$b$, the
elements~$[s, x_{k-1}]$~for~$s$~in~$S$~and $x_{k-1} \in X_{k-1}$~generate~$D^k(G) / D^{k+1}(G)$, and
these elements are exactly the commutators of length~$k$~in the elements of~$S$. The second and third
assertions follow from the first one.
\end{proof}
\begin{prop}\label{PropSubgroupIsFinitelyGenerated}
Let~$H$~be a finitely generated nilpotent group, then every subgroup~$H_0$~of~$H$~is finitely generated.
\end{prop}
For a proof see \cite{segal2005polycyclic}, where this is actually shown for polycyclic groups; the result follows since
finitely generated nilpotent groups are polycyclic.
\begin{prop}\label{CorMapNLin}
Let~$H$~be a nilpotent group of nilpotency class~$t$.
\begin{enumerate}
\item the map~$\mathrm{Br}_t : H^t \rightarrow D^{t-1}(H), (h_1, \cdots, h_t) \mapsto [h_1; h_2;
\cdots; h_t]$~is multilinear.
\item If~$\left\{ h_1, \cdots, h_s \right\}$~generates~$H$, then for every~$m \geq 1$, the subgroup generated
by~$\left\{ h_1^m, \cdots, h_s^m \right\}$~is of finite index in~$H$.
\end{enumerate}
\end{prop}
\begin{proof}[Proof of the first assertion] Let us do an induction on~$t$. The case~$t=1$~being
trivial, suppose the result true for
nilpotent groups of class~$t-1$~and consider a nilpotent group~$H$~of class~$t$. Since~$D^t(H)$~is trivial,
the map~$\mathrm{co}_{t-1}: (h_1, h) \in H \times D^{t-2}(H) / D^{t-1}(H) \mapsto [h_1, h] \in
D^{t-1}(H)$~is bilinear; thus,~$\mathrm{Br}_t$~is a homomorphism with respect to the first factor~$h_1
\in H$. Let us show that~$\mathrm{Br}_t$~is a homomorphism in the second coordinate~$h_2$; the other
coordinates are dealt with in the same way. By induction, the map
\[
\mathrm{Br}_{t-1}^{H / D^{t-1}(H)}: (H / D^{t-1}(H))^{t-1}\rightarrow D^{t-2}(H) / D^{t-1}(H)
\]
is~multilinear. Take~$h_1, h_2, h_2', h_3, \cdots, h_{t} \in H$; the multilinearity
of~$\mathrm{Br}_{t-1}^{H / D^{t-1}(H)}$~provides an element~$g \in D^{t-1}(H)$~such that \[ [h_1; h_2
h_2'; h_3; \cdots ; h_{t}] = [h_1, [h_2; \cdots; h_{t}]\cdot [h_2 '; h_3; \cdots;
h_{t}] \cdot g], \] and the bilinearity of~$\mathrm{co}_{t-1}$~gives the result since~$[h_1, g]$~is trivial.
\end{proof}
\begin{proof}[Proof of the second assertion]
We set~$S=\left\{ h_1, \cdots, h_s \right\}$~and we denote
by~$H_{S,m}$~the subgroup of~$H$~generated by the set ~$\left\{ s^m : s \in S \right\}$. We show by
induction on~$t=\nilp(H)$~that~$H_{S,m}$~is of finite index in~$H$.
If~$t=1$~then~$H$~is abelian and
there is a unique surjective group homomorphism~$\mathbf{Z}^s \rightarrow H$~sending the canonical basis
to~$S = (h_1, \cdots, h_s)$. The subgroup~$H_{S,m}$~is the image of~$m \mathbf{Z}^s$. Therefore, there is a
surjective group homomorphism~$\mathbf{Z}^s / m \mathbf{Z}^s \twoheadrightarrow H / H_{S,m}$~and we get that~$H / H_{S,m}$~has at
most~$m^s$~elements.
Now suppose the result true for groups of nilpotency class~$t-1$~and assume~$\nilp (H) = t$, with~$t
\geq 2$. Set~$T := D^{t-1}(H)$; it is central in~$H$~and one has the exact sequence
\[ 1 \rightarrow T \rightarrow H \rightarrow H/ T \rightarrow 1. \]
By induction, the image of~$H_{S,m}$~in~$H / T$~is of finite index; thus, one can fix a finite set~$A
\subset H$~such that~$H = \bigsqcup_{h \in A} h H_{S,m} T$. To conclude, we only need to show that the
index of~$T \cap H_{S,m}$~in~$T$~is finite. Since~$T \cap H_{S,m}$~contains the subgroup
~$D^{t-1}(H_{S,m})$, it suffices to show that the index of~$D^{t-1}(H_{S,m})$
~in~$T$~is finite.
By Proposition \ref{PropGeneratorsOfLowerCentralSeries},~$T$~is generated by the set~$S' = \left\{ [x_1; \cdots;
x_{t}]: x_i \in S \right\}$~and~$D^{t-1}(H_{S,m})$~is generated by the set~$S'' = \left\{[x_1^m; \cdots;
x_{t}^m] : x_i \in S \right\}$. Furthermore, the first assertion shows that~$S''$~consists exactly of the
elements of~$S'$~raised to the power~$m^{t}$. So by the abelian case,~$D^{t-1}(H_{S,m})$~is of finite index
in~$T$.
\end{proof}
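\begin{ex}
For instance, in the discrete Heisenberg group~$H = \langle a, b \rangle$, where~$c := [a,b]$~is central and~$\nilp(H) = 2$, bilinearity of the commutator gives~$[a^m, b^m] = c^{m^2}$; the subgroup generated by~$a^m$~and~$b^m$~therefore contains~$c^{m^2}$~and is of finite index in~$H$.
\end{ex}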
\subsubsection{Malcev's completion of nilpotent torsion-free finitely generated group}
Denote by~$\hat \mathbf{Z}$~the product~$\prod_{p \text{ prime}} \mathbf{Z}_p$~equipped with the product topology (the adelic topology). It is the
profinite completion of~$\mathbf{Z}$.
Let~$H$~be a nilpotent, torsion-free, finitely generated group. It is known that~$H$~embeds into
~$\Tri_1(n, \mathbf{Z})$, the group of upper triangular matrices with integer coefficients and 1's on the diagonal, for some
integer~$n$~(see for example \cite{segal2005polycyclic}, Theorem 2 of Chapter 5). For the rest of this section, we fix an
embedding~$\iota: H \hookrightarrow \Tri_1(n, \mathbf{Z})$. There are two topologies that one can consider on~$\iota(H)$:
first, the adelic topology induced by the inclusion~$\Tri_1(n, \mathbf{Z}) \subset \Tri_1(n, \hat \mathbf{Z})$; second, the profinite
topology, for which a basis of neighbourhoods of the neutral element is given by the subgroups of finite index in~$\iota(H)$.
\begin{prop}\label{PropAdelicTopologyAndProfiniteTopologyAreTheSame}
Let~$G \subset \Tri_1(n, \mathbf{Z})$~be a subgroup of matrices with integer coefficients and 1's on the diagonal, then the
profinite topology and the adelic topology on~$G$~are the same. In particular, the profinite completion of $G$
coincides with the closure of $G$ in $\Tri_1 (n , \hat \mathbf{Z})$.
\end{prop}
\begin{proof}
First, let~$K$~be a subgroup of~$\mathrm{GL}_n(\mathbf{Z})$~of the form~$K =\left\{ A \in \mathrm{GL}_n(\mathbf{Z}) : A \equiv \id \mod m \right\}$~for some
integer~$m$; such groups~$K$~form a basis of open neighbourhoods of~$\id$~for the adelic topology. Each such~$K$~is a normal
subgroup of~$\mathrm{GL}_n(\mathbf{Z})$~with finite quotient, so~$G \cap K$~is a finite index
subgroup of~$G$. Therefore the profinite topology is finer than the adelic topology.
Conversely,~$G$~is a unipotent group of matrices over~$\mathbf{Q}$,
therefore it is arithmetic (see \cite{segal2005polycyclic}, Exercise 13 of Chapter 6). By the affirmative solution to
the congruence subgroup problem for arithmetic soluble groups (see \cite{chahal1980}),~$G$~has the congruence
subgroup property: every finite index subgroup of~$G$
contains a subgroup of the form~$G \cap \left\{ A \in \mathrm{GL}_n (\mathbf{Z}) : A \equiv \id \mod m \right\}$~for some integer~$m$.
Therefore, the adelic topology is finer than the profinite topology; thus, they coincide.
\end{proof}
A consequence of this proposition is that the profinite completion of~$\iota(H)$~is exactly the closure of
~$\iota(H)$~in $\Tri_1(n, \hat \mathbf{Z})$.
\begin{prop}\label{PropCompletionIsProP}
Let~$G$~be a nilpotent subgroup of~$\Tri_1(n, \mathbf{Z})$. The closure of~$G$~in~$\Tri_1(n, \mathbf{Z}_p)$~is the
pro-$p$-completion of~$G$, in particular it is a~$p$-adic Lie group.
\end{prop}
\begin{proof}
Denote by~$\hat G$~the profinite completion of~$G$~and, for a prime~$\ell$, by~$G_\ell$~the pro-$\ell$-completion of~$G$.
Since~$G$~is nilpotent and every finite nilpotent group is the product of its Sylow~$\ell$-subgroups (see
\cite{bourbaki1970algebre} chapter 1, \S 7, Theorem~4), we have that~$\hat G = \prod_\ell G_\ell$. By Proposition
\ref{PropAdelicTopologyAndProfiniteTopologyAreTheSame}, we have a continuous injective homomorphism of topological
groups
\[
\hat G = \prod_\ell G_\ell \hookrightarrow \Tri_1(n, \hat \mathbf{Z}) = \prod_\ell \Tri_1(n, \mathbf{Z}_\ell).
\]
For a prime~$p$, this induces a continuous group homomorphism~$G_p \hookrightarrow \prod_\ell \Tri_1(n, \mathbf{Z}_\ell)$.
But,~$G_p$~is a pro-$p$-group and for every prime~$\ell$,~$\Tri_1(n, \mathbf{Z}_\ell) = \varprojlim \Tri_1(n, \mathbf{Z} /\ell^k
\mathbf{Z})$~is a pro-$\ell$-group. Therefore,~$G_p$~can be identified with the image of~$\hat G$~in~$\Tri_1(n, \mathbf{Z}_p)$;
this image is exactly the closure of~$G$~in~$\Tri_1(n, \mathbf{Z}_p)$, so~$G_p$~is a closed subgroup of the~$p$-adic
Lie group~$\Tri_1(n, \mathbf{Z}_p)$ and hence a Lie group by Theorem \ref{theoremClosedSubgroupAreLieGroups}.
\end{proof}
\begin{thm}\label{BigtheoremPropClosureIsALieGroup}\label{MinorationpAdic2}
Let $c>0$ be such that $c > \frac{1}{p-1}$ and let~$H$~be a finitely generated nilpotent subgroup
of~$\Diff^{an}_c(\mathbf{Z}_p^d)$. Then the closure~$\bar H$~of~$H$~in~$\Diff^{an}(\mathbf{Z}_p^d)$~is a
finite-dimensional nilpotent Lie group.
Furthermore, if~$\mathfrak h$~denotes the Lie algebra of~$\bar H$, then~$\mathfrak h$~is a finite-dimensional nilpotent Lie algebra and
~$\dl(\mathfrak h) \geq \vdl(H)$.
\end{thm}
\begin{proof}
Set~$G = \iota(H)$~and~$\psi := \inv \iota: G \rightarrow \Diff^{an}_c(\mathbf{Z}_p^d)$. By Proposition \ref{PropCompletionIsProP}
and Proposition \ref{PropEmbeddingOfProPcompletion},~$\psi$
extends to a Lie group homomorphism~$\psi: G_p \rightarrow \Diff^{an}_c(\mathbf{Z}_p^d)$~where~$G_p$~is the closure
of~$G$~in~$\Tri_1 (n, \mathbf{Z}_p)$; we show that the image of~$\psi$~is the closure of~$H$~in~$\Diff^{an}(\mathbf{Z}_p^d)$.
Let~$K$~be the image of~$\psi$. Since~$\Tri_1(n,\mathbf{Z}_p)$~is compact and~$G_p$~is closed,~$G_p$~is also compact and
so is~$K$. This implies that the closure~$\overline H$~of~$H$~is contained in~$K$; conversely,~$K$~is
contained in~$\overline H$~by the continuity of~$\psi$. This shows that~$\overline H$~is a finite-dimensional
Lie group isomorphic to~$G_p / \ker \psi$.
Now, we show the statement for~$\mathfrak h$. By Proposition
\ref{PropOpenSubgroupDerivedSeries}, there exists an open subgroup~$H_1$~of~$\overline H$, such that~$D^i (H_1)$~is a
Lie subgroup of~$\overline H$~with Lie algebra~$\mathcal D^i (\mathfrak h)$. Since~$H_1$~is open, by Theorem
\ref{theoremDiffAnIsALieGroup} there exists an integer~$c' >0$~such that~$\Diff^{an}_{c'} (\mathbf{Z}_p^d) \cap H \subset H_1$.
Let~$f_1, \cdots, f_s$~be generators of~$H$. Then by Proposition \ref{CorMapNLin} the
subgroup~$H'$~generated by the~$f_i^{p^{c'}}$'s is a finite index subgroup of~$H$~and it is included in~$H_1$~by Lemma
\ref{lemma:PuissanceCongruence}; therefore~$\dl (\mathfrak h) = \dl(H_1) \geq \dl(H') \geq \vdl(H)$.
\end{proof}
\section{Finitely generated nilpotent groups}\label{SecFinitelyGeneratedNilpotentGroups}
\subsection{Base change from~$\mathbf{C}$~to~$\mathbf{Z}_p$: Good models}
To prove Theorem \ref{BoundNilpotentGroups}, we shall ultimately apply Theorem \ref{BigtheoremPropClosureIsALieGroup}.
Thus, we need a method to transfer problems regarding groups of automorphisms defined over~$\mathbf{C}$~to similar problems on
groups of Tate analytic diffeomorphisms over~$\mathbf{Z}_p$, for certain primes~$p$.
\begin{thm}[Lech, see \cite{Lech53}] \label{theoremLechPlongementPadique} Let~$\mathbf{K}$~be a finitely generated field extension of~$\mathbf{Q}$
and let~$S$~be a finite subset of~$\mathbf{K}$. Then there exist infinitely many prime numbers~$p$~for which there is an
embedding~$\mathbf{K} \hookrightarrow \mathbf{Q}_p$~mapping every element of~$S$~into~$\mathbf{Z}_p$.
\end{thm}
Let~$X$~be an irreducible quasiprojective variety over~$\mathbf{C}$~and~$\Gamma$~a finitely generated subgroup of~$\Aut (X_\mathbf{C})$.
\begin{itemize}
\item Let~$R$~be an integral domain. We say that~$(X,\Gamma)$~is \emph{defined over}~$R$~if there
exist an irreducible separated reduced scheme~$X_R$~over~$R$~and an injective homomorphism~$\Gamma \hookrightarrow \Aut_R (X_R)$
such that~$X = X_R \times_{\Spec R} \Spec \mathbf{C}$~and the action of~$\Gamma$~on~$X$~is obtained from this base change.
\item Let~$p$~be a prime number. A \emph{model} of~$(X,\Gamma)$~over~$\mathbf{Z}_p$~is the data of
\begin{enumerate}[label=(\roman*)]
\item A ring~$R \subset \mathbf{C}$~over which~$(X, \Gamma)$~is defined and an embedding~$R \hookrightarrow \mathbf{Z}_p$.
\item An irreducible variety~$\mathcal X$~over~$\mathbf{Z}_p$~and an injective homomorphism~$\rho: \Gamma \hookrightarrow \Aut_{\mathbf{Z}_p}
(\mathcal X)$~such that \[ \mathcal X \simeq X_R \times_{\Spec R} \Spec \mathbf{Z}_p \] and, for all~$f
\in \Gamma$,~$\rho (f)$~is the base change of~$f$.
\end{enumerate}
\item A \emph{good model} of~$(X,\Gamma)$~over~$\mathbf{Z}_p$~is the
data of a model of~$(X,\Gamma)$~with the additional condition that the special fiber~$\mathcal X_{\mathbf{F}_p} = \mathcal X
\times_{\Spec \mathbf{Z}_p} \Spec \mathbf{F}_p$~is geometrically reduced and irreducible, of dimension \[
\dim_{\mathbf{F}_p} (\mathcal X_{\mathbf{F}_p}) = \dim_{\mathbf{Q}_p} (\mathcal X \times_{\Spec \mathbf{Z}_p} \Spec \mathbf{Q}_p). \]
\end{itemize}
\begin{prop}[Proposition 4.4 of \cite{bell2010dynamical}, Proposition 3.2 of \cite{cantat2014algebraic}]\label{FromCtoZp}
Let~$X$~be an irreducible complex quasi-projective variety,~$\alpha \in X(\mathbf{C})$~and~$\Gamma$~be a finitely generated
subgroup of~$\Aut_\mathbf{C} (X)$. Then, there exists an infinite number of primes~$p \geq 3$~such that~$(X,\Gamma)$~has a
good model~$\mathcal X$~over~$\mathbf{Z}_p$~and such that~$\alpha$~extends to a section~$\alpha: \Spec \mathbf{Z}_p \rightarrow \mathcal X$.
\end{prop}
\begin{ex}
For simplicity, suppose~$X$~is the affine space~$\mathbf{A}^d_\mathbf{C}$~with its standard coordinates~$x_1,\cdots,
x_d$~and~$\Gamma \subset \Aut(\mathbf{A}^d_\mathbf{C})$~is a finitely generated group of polynomial automorphisms. This is already
an interesting example. Let~$S$~be a finite symmetric~$(\inv S = S)$~set of generators of~$\Gamma$. Let~$R$~be the
ring generated by all the coefficients of the elements of~$S$~and the coordinates of~$\alpha$. Then,~$(X,
\Gamma)$~is defined over~$R$. Moreover, by Theorem \ref{theoremLechPlongementPadique} there exist a prime~$p$~and an
embedding~$\iota: R \hookrightarrow \mathbf{Z}_p$. Using this embedding, the base change~$\mathcal X = \mathbf{A}^d_{\mathbf{Z}_p}$~and~$\rho:
\Gamma \hookrightarrow \Aut (\mathbf{A}^d_{\mathbf{Z}_p})$~provide a good model of~$(\mathbf{A}^d, \Gamma)$~over~$\mathbf{Z}_p$, and~$\alpha$~extends to a
~$\mathbf{Z}_p$-point of~$\mathcal X$.
\end{ex}
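Theorem \ref{theoremLechPlongementPadique} is effective in simple situations. As a hypothetical illustration (the ring $R = \mathbf{Z}[1/6, \sqrt{2}]$ and the search bound are our own choices): an embedding $R \hookrightarrow \mathbf{Z}_p$ requires $p \nmid 6$, so that $1/6 \in \mathbf{Z}_p$, and a square root of $2$ in $\mathbf{Z}_p$, which by Hensel's lemma amounts to $2$ being a square modulo~$p$.

```python
from sympy import primerange, sqrt_mod

# Toy version of Lech's theorem: primes p such that Z[1/6, sqrt(2)]
# embeds into Z_p. We need p not to divide 6 (so 1/6 lies in Z_p) and
# 2 to be a square mod p (so sqrt(2) lifts to Z_p by Hensel's lemma).
good_primes = [p for p in primerange(3, 60)
               if 6 % p != 0 and sqrt_mod(2, p) is not None]
print(good_primes)  # [7, 17, 23, 31, 41, 47]
```

As predicted by the theorem, infinitely many such primes exist (here, all odd primes $p \equiv \pm 1 \bmod 8$).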
\subsection{From algebraic automorphisms to analytic diffeomorphisms over~$\mathbf{Z}_p$}
In this section, we consider a scheme
~$\mathcal X$~of dimension~$d$~over~$\mathbf{Z}_p$, where~$p\geq 3$~is a prime number, such that
\begin{itemize}
\item~$\mathcal X$~is a quasi-projective variety over~$\mathbf{Z}_p$, and its generic fiber is geometrically
irreducible over ~$\mathbf{Q}_p$.
\item~$\overline{\mathcal X} = \mathcal X \times_{\Spec \mathbf{Z}_p} \Spec \mathbf{F}_p$~is the special fiber of~$\mathcal X$~and is geometrically
irreducible over~$\mathbf{F}_p$.
\item~$f: \mathcal X \rightarrow \mathcal X$~is an automorphism of~$\mathbf{Z}_p$-schemes.
\item~$\overline f : \overline{\mathcal X} \rightarrow \overline{\mathcal X}$~is the restriction of~$f$~to the special fiber.
\item~$r: \mathcal X (\mathbf{Z}_p) \rightarrow \overline{\mathcal X} (\mathbf{F}_p)$~is the reduction map.
\item~$x$~is a smooth~$\mathbf{F}_p$-point and there exists~$\alpha \in \mathcal X(\mathbf{Z}_p)$~such that~$r(\alpha) = x$.
\end{itemize}
For the next two propositions, we refer to \cite{bell2010dynamical}. They will enable us to pass from
algebraic automorphisms to analytic diffeomorphisms.
\begin{prop}\label{PropExistenceOfIota}
Let~$\mathcal X$~be a quasi-projective scheme over~$\mathbf{Z}_p$. There exists a function~$\iota: \mathbf{Z}_p^d \rightarrow
\mathcal X(\mathbf{Z}_p)$~which induces an analytic bijection between~$\mathbf{Z}_p^d$~and the open subset of~$\mathcal X (\mathbf{Z}_p)$~consisting of
the points~$\beta$~such that~$r(\beta) = x$.
\end{prop}
\begin{prop}\label{PropConjugaisonDiffeoAnalytique}
Suppose that~$\bar f (x) = x$. Let~$\iota: \mathbf{Z}_p^d \rightarrow
\mathcal X (\mathbf{Z}_p)$~be the function defined
in Proposition \ref{PropExistenceOfIota}. Then there exist analytic functions~$F_1,\cdots,F_d \in \mathbf{Z}_p \langle T_1,\cdots,
T_d \rangle$~such that \begin{enumerate}[label = (\roman*)] \item One has \[ \inv \iota \circ f \circ \iota =
(F_1,\cdots,F_d) =: \mathcal F \in \mathbf{Z}_p \langle T_1,\cdots,T_d \rangle^d. \]
\item If~$\bar{\mathcal F}$~is the reduction mod~$p$~of~$\mathcal F$, then~$\bar{\mathcal F} = \mathcal F_0 + \mathcal
F_1$~with~$\mathcal F_0$~a constant in~$(\mathbf{Z}/p\mathbf{Z})^d$~and~$\mathcal F_1$~a linear map in
~$\mathrm{GL}_d(\mathbf{Z} / p \mathbf{Z})$.
\end{enumerate}
Furthermore~$\mathcal F$~is a Tate-analytic diffeomorphism because~$f$~is an
automorphism.
\end{prop}
\begin{ex}
Propositions \ref{PropExistenceOfIota} and \ref{PropConjugaisonDiffeoAnalytique} are proven in
\cite{bell2010dynamical}. We only give the proof in the case~$\mathcal X = \mathbf{A}^d_{\mathbf{Z}_p}$. Take standard coordinates~$\mathbf x = x_1,
\cdots, x_d$~over~$\mathcal X$. Then,~$\mathcal X = \Spec \mathbf{Z}_p[\mathbf x]$~and~$\overline{\mathcal X} = \Spec \mathbf{F}_p[\mathbf x]$. The reduction map~$r:
\mathcal X(\mathbf{Z}_p) = \mathbf{Z}_p^d \rightarrow \overline{\mathcal X}(\mathbf{F}_p) = \mathbf{F}_p^d$~is the reduction mod~$p$~coordinate by coordinate.
Take~$x \in \mathbf{F}_p^d$~and~$z \in \mathbf{Z}_p^d$~such that~$r(z) = x$, then the open subset of~$\mathcal X(\mathbf{Z}_p)$~of elements~$\beta$
such that~$r(\beta) =x$~is the ball of center~$z$~and radius~$1 / p$. The analytic bijection~$\iota$~is given by
~$\iota: m \in \mathbf{Z}_p^d \mapsto z + p \cdot m \in \mathcal X(\mathbf{Z}_p) = \mathbf{Z}_p^d$. This proves Proposition \ref{PropExistenceOfIota}.
Now, take a polynomial automorphism~$f$; the map~$\overline f$~is the polynomial automorphism of~$\mathbf{F}_p^d$~obtained
by reducing the coefficients of~$f \mod p$. Take a point~$x \in \mathbf{F}_p^d$~such that~$\bar f (x) = x$; up to
conjugation by a translation (which does not change the result), we can suppose that~$x = 0 \in \mathbf{F}_p^d$~and~$z = 0$. This means
that~$f$~preserves the ball of center
0 and radius~$1/p$~in~$\mathbf{Z}_p^d$. Writing~$f$~in coordinates, we have
\[
f(\mathbf x) = p a_0 + A_1 (\mathbf x) + A_2(\mathbf x) + \cdots
\]
where~$a_0 \in \mathbf{Z}_p^d$~and~$A_i$~is the homogeneous part of degree~$i$~of~$f$. Then,
\[ \inv \iota \circ f \circ \iota (\mathbf x) = \frac{1}{p} f(p \mathbf x) = a_0 + A_1(\mathbf x) + \sum_{k \geq 2} p^{k-1} A_k (\mathbf x). \]
This is indeed an element of~$\mathbf{Z}_p \langle \mathbf x \rangle^d$, and~$\overline{\frac{1}{p}f(p \mathbf x)}$~is an invertible affine
transformation of~$\mathbf{F}_p^d$; this proves Proposition \ref{PropConjugaisonDiffeoAnalytique}.
\end{ex}
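The computation in this example can be checked symbolically on a concrete automorphism. The following sketch (the Hénon map $f(x,y) = (y,\, x + y^2 + 3)$ and the prime $p = 3$ are our own illustrative choices) verifies that the conjugate $\frac{1}{p} f(p\,\cdot)$ has coefficients in $\mathbf{Z}_p$ and reduces to an invertible affine map modulo~$p$.

```python
from sympy import symbols, expand, Poly, Rational

x, y = symbols('x y')
p = 3

# A Henon automorphism with f(0,0) = (0, 3), i.e. fixing the origin mod p.
f = (y, x + y**2 + 3)

# Conjugation by iota(m) = p*m: g = (1/p) * f(p*x, p*y).
g = tuple(expand(Rational(1, p) * c.subs({x: p*x, y: p*y})) for c in f)
assert g[0] == y and expand(g[1] - (x + 3*y**2 + 1)) == 0

# All coefficients of g are p-adic integers (here even rational integers):
for c in g:
    assert all(q.is_integer for q in Poly(c, x, y).coeffs())

# The reduction mod p is the invertible affine map (x, y) -> (y, x + 1):
for c, e in zip(g, (y, x + 1)):
    diff = expand(c - e)
    assert diff == 0 or all(q % p == 0 for q in Poly(diff, x, y).coeffs())
print(g)
```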
\begin{prop}[Proposition 3.3 of \cite{cantat2014algebraic}]\label{FromAlgAutoToDIffAnal}
Let~$\Gamma$~be a finitely generated subgroup of~$\Aut_{\mathbf{Z}_p} (\mathcal X)$. There
exists a finite index subgroup~$\Gamma_0 \subset \Gamma$~and an open subset~$\mathcal U \subset \mathcal X (\mathbf{Z}_p)$
analytically diffeomorphic to~$\mathbf{Z}_p^d$~such that~$\mathcal U$~is stable under the action of~$\Gamma_0$~on~$\mathcal X$~and this
action on~$\mathcal U$~is conjugate to the action of a subgroup of~$\Diff^{an}_1 (\mathcal U)$.
\end{prop}
\begin{proof}
Since~$r(\alpha) =x \in \overline{\mathcal X} (\mathbf{F}_p)$, the set~$\overline{\mathcal X}(\mathbf{F}_p)$~is not empty, and since~$\overline{\mathcal X}$~has finitely
many~$\mathbf{F}_p$-points, there exists a finite index subgroup~$\Gamma_1
\subset \Gamma$~that acts trivially on~$\overline{\mathcal X} (\mathbf{F}_p)$. In particular, the point~$x$~is fixed by~$\Gamma_1$.
Let~$\iota$~be as in Proposition \ref{PropExistenceOfIota} and~$\mathcal U$~the open subset of~$\mathcal X(\mathbf{Z}_p)$
consisting of the points~$\beta$~such that~$r(\beta) = x$.
Then~$\Gamma_1$~preserves~$\mathcal U$, and by applying Proposition
\ref{PropConjugaisonDiffeoAnalytique} to the elements of~$\Gamma_1$, we get that conjugation by~$\iota$~induces a
group homomorphism~$\Gamma_1 \hookrightarrow \Diff^{an}(\mathbf{Z}_p^d)$. Composing this embedding with the
reduction~$\bmod p$~yields a group homomorphism from~$\Gamma_1$~to the finite group of affine transformations
of~$(\mathbf{Z} / p \mathbf{Z})^d$. Denote by~$\Gamma_0$~the kernel of this homomorphism; the proposition is proven.
\end{proof}
\subsection{Proof of Theorem \ref{BoundNilpotentGroups}}
Let~$H$~be a finitely generated nilpotent group acting by algebraic automorphisms on a quasi-projective
variety~$X$~over a field of characteristic zero.
We first show that we can suppose~$X$~to be irreducible in order to work on a~$\mathbf{Z}_p$-scheme:~$X$~has a
finite number of irreducible components and~$H$~permutes them, so there exists a finite index subgroup~$H' \subset H$
that stabilizes every irreducible component~$X_i$~of~$X$. Call~$H_i$~the restriction of~$H'$~to~$X_i$; then~$H'$~embeds into~$\prod_i
H_i$, each~$H_i$~is a quotient of~$H'$, and~$\vdl (H') = \max_i \vdl (H_i)$. We replace~$X$~by an irreducible
component~$X_i$~realizing this maximum and~$H$~by the restriction of~$H'$~to this component, which is still finitely generated by Proposition
\ref{PropSubgroupIsFinitelyGenerated}.
Let~$\alpha \in X(\mathbf{C})$. The variety~$X$~is then an irreducible complex quasi-projective variety of dimension~$d$; by Proposition
\ref{FromCtoZp}, there exists a prime number~$p \geq 3$~such that~$(X,H)$~admits a good model~$\mathcal X$~over~$\mathbf{Z}_p$~and
such that~$\alpha$~extends to a~$\mathbf{Z}_p$-point of~$\mathcal X$. Now,
by Proposition \ref{FromAlgAutoToDIffAnal}, there exists a finite index subgroup~$H_0 \subset H$~which is isomorphic
to a subgroup of~$\Diff_1^{an} (\mathcal U)$, for~$\mathcal U$~an open subset of~$\mathcal X(\mathbf{Z}_p)$~analytically
diffeomorphic to~$\mathbf{Z}_p^d$. By Proposition \ref{PropSubgroupIsFinitelyGenerated},~$H_0$~is a finitely generated
nilpotent subgroup of~$\Diff^{an}_1(\mathbf{Z}_p^d)$. Using Theorem \ref{MinorationpAdic2}, we get that the Lie
algebra~$\mathfrak h$~of the closure of~${H_0}$~is nilpotent and satisfies~$\dl(\mathfrak h) \geq \vdl(H_0) \geq \vdl(H)$. Applying Theorem
\ref{theoremPAdicEpsteinThurston}, we get~$d \geq \vdl(H)$.
\subsection{Optimality of Theorem \ref{BoundNilpotentGroups}}\label{SubSecOptimality}
\paragraph{An example from \cite{epstein1979transformation}.--} We will use the construction from
\cite{epstein1979transformation} to find groups where Theorem \ref{BoundNilpotentGroups} is optimal.
Let~$n$~be an integer and let~$A$~be the nilpotent matrix such that~$A(e_1) = 0$~and~$A(e_i) = e_{i-1}$~for~$1 <i \leq n$, where~$(e_i)$~is
the canonical basis of~$\mathbf{R}^n$. Consider the group of affine transformations~$G = \left\{ x \in \mathbf{R}^n \mapsto \exp(tA) x + b : t \in \mathbf{R}, b \in \mathbf{R}^n \right\}$; we will write~$(t;b)$~for the element~$(x \mapsto \exp (tA) x +
b)$. This is a real Lie group of dimension~$n+1$, of nilpotency class~$n$~and derived length 2, diffeomorphic to
~$\mathbf{R}^{n+1}$. The group law is given by
\[ (t;b) (s;c) = (t+s; b + e^{tA}c). \]
Notice that the group law is given by polynomials with rational coefficients in~$s,t$~and the coordinates of~$b$~and
~$c$; thus~$G$~is in fact an algebraic group.
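As a sanity check (our own, for $n = 3$), the group law and the polynomiality of commutators can be verified symbolically, with $A$ the nilpotent shift sending $e_i$ to $e_{i-1}$, so that $\exp(tA)$ is polynomial in~$t$.

```python
import sympy as sp

n = 3
t, s = sp.symbols('t s')
b = sp.Matrix(sp.symbols('b1:4'))
c = sp.Matrix(sp.symbols('c1:4'))
x = sp.Matrix(sp.symbols('x1:4'))

# Nilpotent shift matrix (A e_i = e_{i-1}), so exp(tA) is polynomial in t.
A = sp.Matrix(n, n, lambda i, j: 1 if j == i + 1 else 0)
def expA(u):
    return sp.eye(n) + u*A + (u*A)**2 / 2  # A**3 = 0

# The affine maps x -> exp(tA) x + b compose according to the group law:
lhs = sp.expand(expA(t) * (expA(s) * x + c) + b)
rhs = sp.expand(expA(t + s) * x + (b + expA(t) * c))
assert sp.simplify(lhs - rhs) == sp.zeros(n, 1)  # (t;b)(s;c) = (t+s; b + e^{tA}c)

# The commutator [(t;b),(s;c)] is a nonconstant polynomial map:
def mul(g, h):
    return (g[0] + h[0], g[1] + expA(g[0]) * h[1])
inv = lambda g: (-g[0], -expA(-g[0]) * g[1])
comm = mul(mul((t, b), (s, c)), mul(inv((t, b)), inv((s, c))))
assert sp.simplify(comm[0]) == 0
assert sp.expand(comm[1]) != sp.zeros(n, 1)
print(sp.expand(comm[1].T))
```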
\begin{lemme}\label{LemmeCrochetPolynomial}
Recall the notation of Section \ref{SubSubSecNilpotentGroups}. Let~$k < n$~be an integer. The map
\[ \left( (t_0;b_0), \cdots, (t_k; b_k) \right) \in G^{k+1} = \mathbf{R}^{(n+1)(k+1)} \mapsto \mathrm{Br}_{k+1} \left(
(t_0; b_0), \cdots, (t_k; b_k) \right) \in G = \mathbf{R}^{n+1} \]
is a nonconstant polynomial map with rational coefficients from~$\mathbf{R}^{(n+1)(k+1)}$~to~$~\mathbf{R}^{n+1}$.
\end{lemme}
\begin{proof}
The map is polynomial with rational coefficients because the group law is, and it is
not constant because~$\nilp (G) = n > k$.
\end{proof}
Let~$S \subset G$~be the subgroup of translations~$T_s: x \mapsto x+s$~with~$s$~in the span of~$e_2, \ldots, e_n$. The
group~$S$~acts on the variety~$G$~on the left and~$G / S$~is a variety diffeomorphic to~$\mathbf{R}^2$. The
diffeomorphisms are given by
\[ [(t;b)] \in G / S \mapsto (t, b_1) \in \mathbf{R}^2 \]
and
\[ (x,y) \in \mathbf{R}^2 \mapsto \left[ (x; y e_1) \right] \in G / S
\]
where the brackets mean that we take the orbit under the action of~$S$.
The group~$G$~acts by right composition on~$G /S$~and this action is faithful. It is given by
\[ \forall (t;b) \in G, \forall (x,y) \in \mathbf{R}^2 = G / S , \quad (x,y) \cdot (t; b) = \left(x+ t, y +
\sum_{k=1}^n \frac{x^{k-1}}{(k-1)!} b_k \right). \]
We see that the action is therefore by polynomial automorphisms. We will write $(t;b)$ on the left even
though the action is on the right because we view it as a polynomial automorphism of $\mathbf{A}^2_\mathbf{C}$.
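As a consistency check (again for $n = 3$, and entirely our own), one can verify symbolically that the maps $(x,y) \mapsto (x + t,\, y + (e^{xA} b)_1)$ compose according to the group law of~$G$, i.e.\ that they define a right action by polynomial automorphisms of the plane.

```python
import sympy as sp

n = 3
t, s, X, Y = sp.symbols('t s X Y')
b = sp.Matrix(sp.symbols('b1:4'))
c = sp.Matrix(sp.symbols('c1:4'))

A = sp.Matrix(n, n, lambda i, j: 1 if j == i + 1 else 0)  # A e_i = e_{i-1}
def expA(u):
    return sp.eye(n) + u*A + (u*A)**2 / 2

def act(pt, g):
    """Right action of g = (t; b) on a point (x, y) of G/S."""
    px, py = pt
    tt, bb = g
    return (px + tt, sp.expand(py + (expA(px) * bb)[0]))

# Acting with (t;b) and then with (s;c) equals acting with (t;b)(s;c):
prod = (t + s, b + expA(t) * c)
lhs = act(act((X, Y), (t, b)), (s, c))
rhs = act((X, Y), prod)
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
print(lhs[1])  # a polynomial in X, t, s and the coordinates of b, c
```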
\paragraph{A group where Theorem \ref{BoundNilpotentGroups} is optimal.--}
Now, take~$H$~a finitely generated
subgroup of~$G$~such that~$\nilp(H) = n$~and such that~$H$~contains two elements~$(t;b), (s;c)$~for which~$t,s$~and all the
coordinates of~$b,c$~are algebraically independent over~$\mathbf{Q}$. The group~$H$~satisfies the conditions of
Theorem \ref{BoundNilpotentGroups}: it acts faithfully on the quasiprojective variety~$\mathbf{A}^2_\mathbf{C}$~and we
have~$\vdl(H) =2$. Indeed, if~$H$~admitted an abelian finite index subgroup, there would exist an integer
~$N$~such that~$(t;b)^N$~and~$(s;c)^N$~commute. But by Lemma \ref{LemmeCrochetPolynomial} this would give a non-trivial polynomial relation over
~$\mathbf{Q}$~between~$s,t$~and the coordinates of~$b,c$, which is absurd. Thus, the bound
in Theorem \ref{BoundNilpotentGroups} is optimal for~$H$.
\paragraph{Derived length versus nilpotency class.--} In Theorem \ref{BoundNilpotentGroups} we suppose that~$H$~is
nilpotent. One might wonder if the bound can be improved using the virtual nilpotency class, i.e.\ the minimum of~$\nilp
(H')$~over the finite index subgroups~$H'$~of~$H$. We
show that this is not possible, with a counterexample similar to the one above. Take~$H$~a finitely generated subgroup of~$G$
containing elements~$(t_0; b_0), \cdots, (t_{n-1}; b_{n-1}) \in G^{n}$~such that all the~$t_i$'s and
the coordinates of the~$b_i$'s are algebraically independent over~$\mathbf{Q}$. We show that every finite index subgroup~$H'$~of~$H$~has nilpotency
class equal to~$n$. Indeed, there exists an integer~$N$~such that for all~$0 \leq i \leq n-1, h_i := (t_i; b_i)^N \in
H'$. The coordinates of the~$h_i$'s are still algebraically independent over~$\mathbf{Q}$~because the group law is given by
polynomials with rational coefficients, and by Lemma \ref{LemmeCrochetPolynomial} the bracket~$[h_0; \cdots;
h_{n-1}]$~of length~$n$~is not the identity, since that would give a nontrivial polynomial relation between the
coordinates of the~$h_i$'s.
\paragraph{Optimality of Theorem \ref{theoremPAdicEpsteinThurston}.--}
We show that in Theorem \ref{theoremPAdicEpsteinThurston} we cannot replace the derived length with the
nilpotency class, and that the theorem is optimal. In fact, the counterexample of
\cite{epstein1979transformation} can be adapted over~$\mathbf{Z}_p$~as follows. Consider the group~$G$~given by
\[ G := \left\{ \mathbf x \in \mathbf{Z}_p^n \mapsto \exp (p \cdot t A) \mathbf x + b : t \in \mathbf{Z}_p, b \in \mathbf{Z}_p^n \right\}. \]
The group law is now given by polynomials with coefficients in~$\mathbf{Z}_p$~and Lemma \ref{LemmeCrochetPolynomial} still
holds but the polynomials are with coefficients in~$\mathbf{Z}_p$.
Then,~$G /S$~is analytically diffeomorphic to~$\mathbf{Z}_p^2$~and we have an embedding of Lie groups~$G \hookrightarrow
\Diff^{an} (\mathbf{Z}_p^2)$~given by
\[ \forall (t;b) \in G, \quad (t;b) (x,y) = \left(x + t, y + \sum_{k=1}^n \frac{p^{k-1}x^{k-1}}{(k-1)!} b_k \right). \]
Let~$\mathfrak g \subset \Theta (\mathbf{Z}_p^2)$~be the Lie algebra of~$G$;~$\mathfrak g$~is nilpotent and we show that~$\nilp
(\mathfrak g) = n$. Let~$k = \nilp (\mathfrak g)$. By Proposition \ref{PropOpenSubgroupDerivedSeries}, there exists
a subgroup~$G'$~of~$G$~which is a neighbourhood of~$\id$~and such that~$\nilp (G') = k$. Therefore~$k \leq n$;
suppose~$k < n$. By Lemma \ref{LemmeCrochetPolynomial} the map
\[ (t_0; b_0), \cdots, (t_k; b_k), (x,y) \in \mathbf{Z}_p^{(n+1)(k+1)} \times \mathbf{Z}_p^2 \mapsto \mathrm{Br}_{k+1} ( (t_0;
b_0), \cdots, (t_k; b_k)) (x,y) \in \mathbf{Z}_p^2 \]
is polynomial. Let~$P_1 (\mathbf w), P_2(\mathbf w)$~be the first and second coordinates of this map,
where~$\mathbf w$~is a multivariate variable representing all the variables~$t_i, b_i, x,y$. Since~$\nilp (G) > k$, the
polynomials~$Q_1(\mathbf w) = P_1 (\mathbf w)- x$~and~$Q_2 (\mathbf w)= P_2 (\mathbf w)- y$~are not both zero. Notice that if
$(t;b) \in G$, then the Gauss norm of~$(t;b) - \id \in \mathbf{Z}_p \langle x, y \rangle^2$~is bounded by the norm of~$(t;b)
\in \mathbf{Z}_p^{n+1}$; therefore there exists an integer~$N >0$~such that for all~$(t;b) \in G$, $(p^N t; p^N b) \in G'$.
Thus \[ Q_1 (p^N \mathbf w) \equiv 0, \quad Q_2 (p^N \mathbf w) \equiv 0, \] which implies~$Q_1 = 0$~and~$Q_2 =
0$, a contradiction.
By a similar argument, we can show that there is no abelian subgroup~$G' \subset G$~which is a
neighbourhood of the identity; therefore~$\dl(\mathfrak g) = 2$~by Proposition \ref{PropOpenSubgroupDerivedSeries}, and Theorem
\ref{theoremPAdicEpsteinThurston} is also optimal.
\paragraph*{Acknowledgements.--}
I would like to thank my advisor Serge Cantat for his help; he gave me helpful advice whenever I needed
it. I would also like to thank Junyi Xie for his suggestions. Finally, I would like to thank the reviewer for their very
useful observations and detailed advice.
\bibliographystyle{alpha}
\section*{Abstract}
{\bf
In the framework of precision experiments, the search for electric dipole moments and the precise determination of magnetic dipole moments (g-2)
have long been of prime interest.
Hadronic decays offer the best accuracy, since only the kinematic information carried by a single neutrino per decay is lost.
Thus, they more easily reveal precious information on the helicity of the initial tau lepton. However, in contrast
to one- or two-body hadronic final states, the description of hadronic multi-body final states depends on the model for the
hadronic current. In this work, we determine how
the choice of a hadronic model impacts the extraction of the tau electric and magnetic dipole moments.
}
\section{Introduction}
\label{sec:intro}
In light of the recent result on the
anomalous magnetic moment of the muon $(g-2)_\mu$ \cite{fermilab},
the study of the magnetic moment $\mu_\tau$ of the tau lepton
receives new attention, motivated by the mass of the tau lepton
being about 17 times larger than the mass of the muon.
In addition, electric dipole moments like $d_\tau$
are a key observable in the search for effects of new physics
as well.
Both $\mu_\tau$ and $d_\tau$ may be studied
by measuring to high precision
the production and subsequent decays of $\tau^\pm$-pairs in
$e^+$-$e^-$-collisions at $B$-factories.
Since the tau lepton has many different decay modes with none of them being dominant,
the inclusion of the largest number of decay channels
is required to statistically improve
the precision of such measurements.
For most of the dominating decay modes like $(\pi\nu)$ or $(\ell\nu_\ell\nu_\tau)$,
we can construct the decay amplitudes from first principles. However, the amplitudes for
hadronic multi-body final states depend on modelling the hadronic systems.
Hadronic decays are particularly suited since they include only a single
escaping neutrino in contrast to leptonic decays with two neutrinos ($\nu_\tau$ and $\nu_\ell$)
missing in the final state. The latter results in large uncertainties in the
reconstruction of the total event kinematics. The inclusion of hadronic decays
(37\% branching fraction), however, requires their very good understanding in
order to reduce systematic uncertainties connected to their modelling.
This is particularly true for hadronic multi-body ($n>2$) decays, which
make up about $40\%$ of all hadronic decays \cite{PDG}.
Since for the measurement of the electric and
magnetic moments the full $\tau^\pm$-pair
event is studied, the inclusion of multi-body final states
improves the exploitation of available data sets, presently mostly constrained to
final states of $(e^\pm\nu_e\nu_\tau)$, $(\mu^\pm\nu_\mu\nu_\tau)$,
$(\pi^\pm\nu_\tau)$ and $(\rho^\pm\nu_\tau)$, commonly used for such measurements \cite{belle}.
The choice of the model for hadronic multi-body final states
is not unique and we must thus estimate the impact of the differences between the
true model and the analysis model on the measurement of the tauon
electric and magnetic moments.
This article is structured as follows: in Sec.~\ref{sec:formfactors}
we introduce the form factors $F_2$ and $F_3$ and construct the
spin-density matrix for the production of $\tau^\pm$-pairs. In Sec.~\ref{sec:neutrino},
we elaborate on the effects of the escaping neutrinos on the determination of
$F_{2/3}$.
In Sec.~\ref{sec:hadroniccurrentmodel},
we introduce the hadronic model required for hadronic multi-body final states.
In Sec.~\ref{sec:optimalobservables}, we construct so-called optimal observables used to extract the
value of $F_{2/3}$ from data and use them to study the impact of the hadronic model on the
measurement of $F_{2/3}$, as described in Sec.~\ref{sec:studies} using simulated data.
\section{Form factors}
\label{sec:formfactors}
The coupling of $\tau^\pm$-pairs to the photon field
is described by:
\begin{equation}
-e \bar u_{\lambda_-} \Gamma^\mu v_{\lambda_+}
,\end{equation}
where $u_{\lambda^-}$ and $v_{\lambda^+}$ are the usual Dirac-spinors
of the tauons with helicities $\lambda_\pm$ and the $\Gamma^\mu$ is given by:
\begin{equation}\label{eq:formfactors}
\Gamma^\mu = F_1(q^2) \gamma^\mu + \frac{iF_2(q^2)}{2m_\tau} \sigma^{\mu\nu} q_\nu + \frac{F_3(q^2)}{2m_\tau} \sigma^{\mu\nu} \gamma^5 q_\nu
,\end{equation}
where $q^\mu$ is the total four-momentum of the $\tau^\pm$-pair. $F_1(q^2)$ is the Dirac form factor, $F_2(q^2)$ is the Pauli form factor, and $F_3(q^2)$ parametrizes the electric dipole contribution.
$F_{2/3}$ are connected to the electric and magnetic dipole moments via:
\begin{equation}
F_2(q^2=0)+1 = \frac{2m_\tau}{eQ_\tau} \mu_\tau \quad\text{and}\quad F_3(q^2=0) = \frac{2m_\tau}{eQ_\tau} d_\tau
.\end{equation}
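For orientation, converting a value of $F_3$ into the conventional units of $e\cdot\mathrm{cm}$ for $d_\tau$ only involves $m_\tau$ and $\hbar c$; the following back-of-the-envelope sketch (with PDG-rounded numbers, and up to the sign of $Q_\tau$) gives the conversion factor.

```python
# d_tau = F3 * e / (2 m_tau); in natural units 1/GeV = hbar*c = 1.9733e-14 cm.
m_tau = 1.77686      # tau mass in GeV (PDG value, rounded)
hbar_c = 1.9733e-14  # GeV * cm

edm_per_F3 = hbar_c / (2.0 * m_tau)  # e*cm per unit of F3
print(f"d_tau / F3 = {edm_per_F3:.3e} e*cm")  # d_tau / F3 = 5.553e-15 e*cm
```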
The amplitude for the $\tau^\pm$-pair production in $e^+$-$e^-$-collisions is then
given by:
\begin{equation}
\mathcal{A}_{\lambda_{e^-}\lambda_{e^+}\lambda_-\lambda_+} = \frac{e^2}{q^2}\cdot\bar v_{\lambda_{e^+}} \gamma_\mu u_{\lambda_{e^-}}\cdot \bar u_{\lambda_-} \Gamma^\mu v_{\lambda_+}
,\end{equation}
where $\lambda_{e^-}$ and $\lambda_{e^+}$ are the helicities of the beam particles.
From this amplitude, we can construct the spin-density-matrix for the $\tau^\pm$-pair,
which for the case of unpolarized $e^+$ and $e^-$ beams is given by:
\begin{equation}\label{eq:SDMconstruction}
\chi_{\lambda_-\lambda_+\lambda_-^\prime\lambda_+^\prime} = \frac14\sum_{\lambda_{e^\pm}} \mathcal{A}_{\lambda_{e^-}\lambda_{e^+}\lambda_-\lambda_+}^* \mathcal{A}_{\lambda_{e^-}\lambda_{e^+}\lambda_-^\prime\lambda_+^\prime}
.\end{equation}
Non-zero values of the form factors $F_{2/3}$ change the
spin-density matrix and thus the spin-correlations of the produced $\tau^\pm$-pair.
The changes to the spin-density matrix elements related to $\Re/\Im(F_{2/3})$ are shown
as a function of $\cos(\theta)$ in Fig.~\ref{fig:SDM}, where $\theta$ is the production
angle of the $\tau^-$ with respect to the incoming electron. The spin-density matrix elements exhibit
varying symmetry properties, and only $\Re(F_2)$ changes the total
production cross-section\footnote{Comparing the spin-density matrix contributions to the ones
given in Ref.~\cite{NLO}, we find similarities between the contributions from $\mathcal{O}(\alpha^3)$ and
$F_2$, resulting in the bias observed in Ref.~\cite{NLO}.}. For most form factors and spin combinations,
extreme forward and backward angles as well as angles around 90 degrees provide no sensitivity; production angles
around $\pm45$ degrees are most important.
Since tauons decay before crossing any detector element, spin-correlations of the $\tau^\pm$-pair can only be accessed
through the angular distributions of the $\tau^\pm$ decay products. In this work, we focus on
such spin correlations in $\tau^\pm$-pair production\footnote{In this process, the kinematic
range for the measurement of
$F_{2/3}(q^2)$ is limited to $q^2 > 4m_\tau^2$.} and the corresponding intensity distribution
$\mathcal{I}$ of the decay products of both $\tau^\pm$ is constructed via:
\begin{equation}\label{eq:intens}
\mathcal{I} = \sum_{\lambda^{(\prime)}_\pm} \chi_{\lambda_-\lambda_+\lambda_-^\prime\lambda_+^\prime}\cdot D^-_{\lambda_-\lambda_-^\prime}\cdot D^+_{\lambda_+\lambda_+^\prime}
,\end{equation}
where $D^\pm_{\lambda_\pm\lambda_{\pm}^\prime}$ are the
spin-density matrices for the $\tau^\pm$ decays.
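Numerically, Eqs.~(\ref{eq:SDMconstruction}) and~(\ref{eq:intens}) are plain tensor contractions over helicity indices. The following sketch (with random placeholder amplitudes instead of the physical ones) only illustrates the bookkeeping; a real analysis would substitute the actual production and decay amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder production amplitudes A[l_e-, l_e+, l_-, l_+], two helicity
# states per index; a real analysis uses the QED amplitude with F1, F2, F3.
A = rng.normal(size=(2, 2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2, 2))

# Spin-density matrix, averaged over the unpolarized beam helicities:
chi = 0.25 * np.einsum('abij,abkl->ijkl', A.conj(), A)

# Placeholder decay amplitudes and decay spin-density matrices:
amp_m = rng.normal(size=2) + 1j * rng.normal(size=2)  # tau- decay
amp_p = rng.normal(size=2) + 1j * rng.normal(size=2)  # tau+ decay
D_m = np.einsum('i,k->ik', amp_m.conj(), amp_m)
D_p = np.einsum('j,l->jl', amp_p.conj(), amp_p)

# Intensity: full contraction of chi with the decay matrices.
intensity = np.einsum('ijkl,ik,jl', chi, D_m, D_p)
print(intensity.real)  # real and non-negative by construction
```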
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{./SDM_reMDM.png}
\includegraphics[width=0.45\textwidth]{./SDM_imMDM.png}\\
\includegraphics[width=0.45\textwidth]{./SDM_reEDM.png}
\includegraphics[width=0.45\textwidth]{./SDM_imEDM.png}
\caption{Contributions from the form-factors $F_{2/3}$ to the $\tau^\pm$-pair production spin-density
matrix as a function of the production angle $\cos(\theta)$.
Contributions from the real and imaginary parts of $F_{2/3}$ are on the left and right, respectively.
The influence of $F_2$ and $F_3$ are shown on the top and bottom row. Real and imaginary parts of the spin-density matrix
are shown as red and blue lines, respectively.
The vertical axis range is the same for all 10 plots of one contribution and is indicated on the leftmost
sub-plot. Entries below the diagonal are omitted, since they are the complex conjugates of the
upper-diagonal entries.}
\label{fig:SDM}
\end{figure}
\section{Effects of neutrino kinematics}
\label{sec:neutrino}
In principle, all decay modes of the $\tau$-lepton are suitable for the
determination of the form factors $F_{2/3}$. Simple accuracy studies
similar to those presented in Sec.~\ref{sec:studies} show that the
accuracy for the form factors $F_{2/3}$ is similar for all combinations
of the dominant $\tau^\pm$ decay modes. This, however, requires the final-state
kinematic information to be complete, so that the intensity distribution $\mathcal{I}$ given in
Eq.~(\ref{eq:intens}) can simply be calculated.
However, since in every decay at least one neutrino escapes,
calculating the intensity distribution directly is no longer possible and the unmeasurable degrees
of freedom have to be integrated out. In events where only a single neutrino escapes
in each tau decay (two in total), a two-fold kinematic ambiguity arises
for the directions of the tauons, which has to
be averaged over in the calculation of $\mathcal{I}$. For this, both the $\tau^-$ and the
$\tau^+$ must decay hadronically. For every $\tau^\pm$ decaying leptonically, an additional integration
has to be performed:
\begin{equation}\label{eq:integ}
\mathcal{I} \to \iiint\mathcal{I}\,\mathrm{d}\phi\,\mathrm{d}\!\cos\theta\,\mathrm{d}m_{\nu\bar\nu}^2
,\end{equation}
where $m_{\nu\bar\nu}$ is the
invariant mass of the escaping $(\nu\bar\nu)$ system, and $\theta$ and $\phi$ are
the polar and azimuthal angles of the $\tau$ neutrino within this system.
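The marginalization in Eq.~(\ref{eq:integ}) can be sketched numerically; the intensity used below is a purely illustrative stand-in for $\mathcal{I}$, not the distribution of Eq.~(\ref{eq:intens}):

```python
import numpy as np
from scipy.integrate import tplquad

def intensity(phi, cos_theta, m2):
    # Toy stand-in for the intensity: any smooth, positive function of
    # the unmeasured neutrino-system variables (purely illustrative).
    return 1.0 + 0.3 * cos_theta * np.cos(phi) + 0.1 * m2

# Integrate out phi in [0, 2*pi), cos(theta) in [-1, 1] and
# m_nunubar^2 in [0, m2_max] (toy upper limit).
# tplquad integrates func(z, y, x): here z = phi, y = cos(theta), x = m2.
m2_max = 1.0
I_marg, _ = tplquad(intensity,
                    0.0, m2_max,         # x: m2
                    -1.0, 1.0,           # y: cos(theta)
                    0.0, 2.0 * np.pi)    # z: phi
```

For leptonic decays this integration has to be carried out per event, which is what drives the accuracy loss discussed next.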
This loss of kinematic information decreases the accuracy for the
form factors $F_{2/3}$ depending on the particular combination of decay channels used.
This reduction in accuracy is summarized in Table~\ref{tab:resolution}, comparing the
accuracies $\delta$ obtained with integrated and with fully known kinematic information:
\begin{equation}
x_{\delta\Re/\Im(F_{2/3})} = \frac{\delta_\text{integrated}\Re/\Im(F_{2/3})}{\delta_\text{known}\Re/\Im(F_{2/3})}
.\end{equation}
The averaging of the two-fold ambiguity for hadronic decays thus leads to a small decrease
in accuracy, while the integration given in Eq.~(\ref{eq:integ}) for leptonic decays
has a much larger effect, in particular for $\Re(F_3)$.
Thus, enlarging the usable data set of hadronic decays would
improve the accuracies for $F_{2/3}$. In this work, we discuss the inclusion of the
multi-body final state with the highest branching fraction of $9.31\%$ \cite{PDG}:
$\tau^\pm\to\pi^\mp\pi^\pm\pi^\pm + \nu$. Since this decay mode can be
combined with all available decay modes of the opposite-sign $\tau$, its inclusion
would increase the number of available purely hadronic events by a factor of 1.57.
\input{table.tex}
\section{Hadronic current model for tau decays}
\label{sec:hadroniccurrentmodel}
The spin-density matrices $D^\pm_{\lambda_\pm\lambda_{\pm}^\prime}$ used in Eq.~(\ref{eq:intens}) for the decays
of the $\tau^\pm$ are constructed via:
\begin{equation}
D^\pm_{\lambda_\pm\lambda_\pm^\prime} = \mathfrak{A}_{\lambda_\pm}^{\pm*} \mathfrak{A}_{\lambda_\pm^\prime}^\pm
,\end{equation}
where $\mathfrak{A}_{\lambda_\pm}^{\pm}$ is the amplitude for the decay of a $\tau^\pm$ with helicity $\lambda_\pm$
into a particular final-state. For $\tau^-$ decays into hadronic final-states, this amplitude is given by:
\begin{equation}\label{eq:decampl}
\mathfrak{A}_{\lambda_-}^{-} \propto \bar u_\nu \gamma_\mu (1-\gamma^5) u_{\lambda_-}\,J_\text{had}^\mu = \ell_{\lambda_-\mu} J_\text{had}^\mu
,\end{equation}
where $J_\text{had}^\mu$ is the hadronic current describing the hadronic dynamics of the decay.
For decays into a single $\pi^-$ or $\rho^-$ and an escaping $\nu_\tau$, the corresponding hadronic currents are given by:
\begin{equation}
J_{\pi^-}^\mu \propto p_\pi^\mu\quad\text{and}\quad J_{\rho^-}^\mu \propto \text{BW}_\rho(p_\rho^2)\left(\eta^\mu_\nu - \frac{p_\rho^\mu p_{\rho\nu}}{p_\rho^2}\right) \left(p_{\pi^-}^\nu - p_{\pi^0}^\nu\right)
.\end{equation}
$\text{BW}_\rho(s)$ describes the dynamic amplitude of the intermediate $\rho(770)$ resonance,
subsequently decaying into two pions.
Since this only acts as a scalar factor in the hadronic current, it cancels in the construction of the optimal
observables defined in Eq.~(\ref{eq:OO}) and thus does not affect the measurement of $F_{2/3}$.
The formulation of the hadronic current
in terms of final-state particle momenta
for multi-body final states\footnote{The multi-body final states discussed
here contain only three observed hadrons and do not refer to higher
multiplicities, which make up $\approx30\%$ of multi-hadron decays. The
question of hadronic models, however, also arises for these higher
multiplicities.}
is not straightforward and requires modelling of the hadron dynamics.
In this work, we study the decay $\tau^-\to3\pi^\pm + \nu_\tau$ and model
the hadronic current within the isobar model, following previous
analyses \cite{cleo} and \cite{amplPaper}. In the isobar model, the total hadronic current
is composed of several {\it partial waves}, each of which corresponds to a particular set
of quantum numbers $J^{PC}$ of the
three-pion system. The three-pion system subsequently decays into a $\pi^-$ and
a known resonance, hereafter called the isobar, which finally decays into $\pi^+\pi^-$:
\begin{equation}\label{eq:pwa}
J_{3\pi}^\mu = \sum_{w\in\{\text{waves}\}} c_w j_w^\mu
.\end{equation}
The complex-valued coefficients $c_w$ encode the strengths
and relative phases of the individual partial waves, while the partial-wave
currents $j_w^\mu$ encode their specific dependence
on the final-state four-momenta. A detailed formulation of the $j_w^\mu$ can be
found in Ref.~\cite{amplPaper}. Besides the isobar model presented here, there
are other models for $J_{3\pi}^\mu$, e.g.\ resonance chiral theory (R$\chi$T) models \cite{chiral}, which are also commonly used.
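The coherent sum of Eq.~(\ref{eq:pwa}) is straightforward to sketch in code; the coefficients and partial-wave currents below are placeholder values for illustration, not the result of any fit:

```python
import numpy as np

# Hypothetical complex coefficients c_w (magnitude and phase) for three
# of the nine partial waves; purely illustrative values.
c = {"a1[rho pi]_S": 1.0 + 0.0j,
     "a1[rho pi]_D": 0.2 * np.exp(1j * 0.5),
     "pi[rho pi]_P": 0.1 * np.exp(-1j * 1.2)}

def total_current(j_w, c):
    """Coherent sum J^mu = sum_w c_w * j_w^mu over partial-wave currents,
    each given as a length-4 complex Lorentz vector."""
    return sum(c[w] * np.asarray(j, dtype=complex) for w, j in j_w.items())

# Placeholder partial-wave currents evaluated at one phase-space point.
j_w = {"a1[rho pi]_S": [1.0, 0.2, 0.0, 0.1],
       "a1[rho pi]_D": [0.3, 0.0, 0.1, 0.0],
       "pi[rho pi]_P": [0.0, 0.5, 0.0, 0.2]}
J = total_current(j_w, c)
```

In a real analysis the $j_w^\mu$ carry the full dependence on the final-state four-momenta, as formulated in Ref.~\cite{amplPaper}.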
\section{Optimal observables}
\label{sec:optimalobservables}
The tau-lepton form factors $F_{2/3}(q^2)$, which contain the physics observables sought here,
$\mu_\tau$ and $d_\tau$, enter only in the description of the spin-density matrix for $\tau^{\pm}$-pair
production [see Eq.~(\ref{eq:SDMconstruction})]. We may thus single out their effect on
the intensity $\mathcal{I}$ by rewriting Eq.~(\ref{eq:intens}):
\begin{equation}
\mathcal{I} = \mathcal{I}_\text{SM} + \sum_{x\in\{\Re/\Im(F_{2/3})\}} x \cdot \mathcal{I}_x
,\end{equation}
where $\mathcal{I}_\text{SM}$ is the standard-model intensity distribution and
$\mathcal{I}_x$ are the intensity distributions corresponding to non-zero real and imaginary parts
$\Re/\Im(F_{2/3})$. Since the form factors $F_{2/3}$ are known to be small, terms quadratic in the form factors are neglected.
Each observable (the form factors and thus the dipole moments) depends on specific relations among the
measurable quantities of the final-state particles. Using this expansion, we can define four optimal
observables $OO_x$, one for each of the four $x\in\{\Re/\Im(F_{2/3})\}$, that are
optimally sensitive to the form factors \cite{OO}:
\begin{equation}\label{eq:OO}
OO_x = \frac{\mathcal{I}_x}{\mathcal{I}_\text{SM}}
.\end{equation}
Using these observables, the form factors can be
extracted via the expectation values of the corresponding $OO_x$ obtained
for a given data set:
\begin{equation}\label{eq:linear}
\left\langle OO_x\right\rangle = a_x \cdot x + b_x
,\end{equation}
where the coefficients $a_x$ and $b_x$ are determined
from simulations.
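The extraction via Eq.~(\ref{eq:linear}) can be illustrated with a one-dimensional toy example in which $\mathcal{I}_\text{SM}$ is uniform in a single observable $u\in[-1,1]$ and $\mathcal{I}_x \propto u$; then $OO_x = u$, with calibration constants $a_x = 1/3$ and $b_x = 0$. All values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_events(x, n):
    """Draw u in [-1, 1] from I ~ I_SM + x*I_x with toy densities
    I_SM = 1 and I_x = u, via accept-reject sampling."""
    u = rng.uniform(-1.0, 1.0, size=4 * n)
    keep = rng.uniform(0.0, 1.0 + abs(x), size=u.size) < (1.0 + x * u)
    return u[keep][:n]

x_true = 0.05
events = draw_events(x_true, 200_000)

# For these densities OO_x = I_x / I_SM = u, and
# <OO_x> = int u (1 + x u)/2 du = x/3, i.e. a_x = 1/3, b_x = 0.
oo = events
x_hat = np.mean(oo) / (1.0 / 3.0)
```

In the full analysis, $a_x$ and $b_x$ are obtained from simulation rather than analytically, but the linear inversion is the same.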
\section{Studies using simulated data}
\label{sec:studies}
We now study the impact of the hadronic model on the determination
of $F_{2/3}$ using the optimal observables defined in Sec.~\ref{sec:optimalobservables}.
For this we construct a hadronic toy model consisting of the following nine partial waves:
\begin{equation}\label{eq:waveSet}
\begin{array}{ccc}
a_1[\rho\pi]_S&a_1[\rho\pi]_D&a_1[f_2\pi]_P\\
a_1[\sigma\pi]_P&a_1[f_0\pi]_P&\pi_1[\rho\pi]_P\\
\pi[\sigma\pi]_S&\pi[f_0\pi]_S&\pi[\rho\pi]_P
\end{array}
\end{equation}
where the naming scheme $X[\xi\pi]_L$ denotes a three-pion resonance
$X$ (the hadronic system) decaying into an isobar $\xi$ and a pion with
relative orbital angular momentum $L$. The subsequent decay of the isobar $\xi$ into
two pions is implied and in turn described by a set of known decay amplitudes.
Each resonance $X$ represents a set of quantum numbers $J^{PC}$.
For the model, we used partial-wave coefficients $c_w$ loosely inspired
by a partial-wave analysis of the three-pion final state in Ref.~\cite{compass}. The dominant
wave in this model is the $a_1[\rho\pi]_S$ wave, as is expected following previous
analyses \cite{cleo}. Using our toy model,
we generated data sets with $10^6$ $\tau^\pm$-pair events, where the $\tau^-$
decays into $(3\pi^\pm + \nu_\tau)$ according to the model described above,
while the $\tau^+$ decays into $(\pi^++\bar\nu_\tau)$. In total, we generated
four toy data sets, where one of each of the four $\Re/\Im(F_{2/3})$ takes the
value of 0.01, while the other three values remain 0.
In a first study, we analyze the pseudo-data using the same hadronic
model as used for the simulation
and extract the form factors. We find no
bias, and the accuracy is comparable to that obtained for the other hadronic decay modes $(\pi^- + \nu_\tau)$
and $(\rho^- + \nu_\tau)$ with the same number of events. For $10^6$ simulated events,
we find:
\begin{equation}
\begin{array}{ll}
\delta\Re(F_2) = 0.0006;&\delta\Im(F_2) = 0.0007;\\
\delta\Re(F_3) = 0.0009;& \delta\Im(F_3) = 0.0005.
\end{array}
\end{equation}
In a second study, we analyzed the same simulated data sets, but now using
a simplified model for the hadronic current comprising only the $a_1[\rho\pi]_S$
wave.
To quantify the similarity of two hadronic models, we define the model overlap $\omega_{m,m^\prime}$
of two models $m$ and $m^\prime$ for the hadronic current as the normalized product of the total hadronic
currents $J^\mu_m$ and $J^\mu_{m^\prime}$, contracted with the corresponding leptonic current $\ell_{\lambda_-}^\mu$ and integrated
over the full Lorentz-invariant phase space (LIPS):\footnote{The
overlaps are the same for $\lambda_-=\pm1/2$. The normalization factors $\mathcal{N}_m$
ensure $\omega_{m,m} = 100\%$.}
\begin{equation}
\omega_{m,m^\prime} = \left|\int\mathrm{dLIPS}\,\left( J_m^{\mu}\ell_{\lambda_-\mu}\right)^*\left(\ell_{\lambda_-\nu}J_{m^\prime}^{\nu}\right)\right|
\Bigg/ \Big(\mathcal{N}_m\cdot\mathcal{N}_{m^\prime}\Big)
,\end{equation}
with the leptonic current $\ell_{\lambda_-}^\mu$ defined in Eq.~(\ref{eq:decampl}).
The model overlap $\omega_{\text{true},\text{ana}}$ of the simplified model
with the model used for simulation was $78\%$.
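Numerically, such an overlap can be estimated by Monte-Carlo sampling; the one-dimensional complex amplitudes below stand in for the contracted currents $\ell_{\lambda_-\mu}J^\mu_m$ and are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def overlap(amp_m, amp_mp, points):
    """Monte-Carlo estimate of the model overlap: the normalized modulus
    of the inner product of two complex amplitudes sampled over phase
    space; overlap(m, m, ...) = 1 by construction."""
    a = amp_m(points)
    b = amp_mp(points)
    norm_a = np.sqrt(np.mean(np.abs(a) ** 2))
    norm_b = np.sqrt(np.mean(np.abs(b) ** 2))
    return np.abs(np.mean(np.conj(a) * b)) / (norm_a * norm_b)

# Illustrative 1-D stand-ins: a "true" model with an extra component,
# and a simplified model missing it.
pts = rng.uniform(-1.0, 1.0, size=200_000)
true_model = lambda u: 1.0 + 0.5j * u
simple_model = lambda u: np.ones_like(u, dtype=complex)

w = overlap(true_model, simple_model, pts)   # < 1 for differing models
```

In the actual computation, the sampling runs over the full multi-body LIPS and the amplitudes are the contracted four-currents; the normalization plays the same role as $\mathcal{N}_m$ above.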
For this study, we also re-determined the coefficients $a_x$ and $b_x$ defined in Eq.~(\ref{eq:linear})
so that they correspond to our simplified analysis model.
Repeating our analysis with a wrong hadronic model results in the
following values for $\Re/\Im(F_{2/3})$:
\begin{equation}
\begin{array}{ll}
\Re(F_2) = 0.0529\pm 0.0008;&\Im(F_2) = 0.0118\pm0.0008;\\
\Re(F_3) = 0.0086\pm0.0012;&\Im(F_3) = 0.0079 \pm 0.0005,
\end{array}
\end{equation}
while the true value for these quantities is always 0.01. We find that
$\Re(F_2)$ is strongly over-estimated, while the effect on $\Im(F_2)$ is
small. $\Re(F_3)$ and $\Im(F_3)$ are under-estimated, although to a lesser
degree than the over-estimation of $\Re(F_2)$. If the true value is set to $0$,
the bias in $\Re(F_2)$ persists, while we observe no bias for $\Re/\Im(F_3)$
in this case.
We then repeated this procedure with a different de-tuned analysis
model for every individual partial wave given in Eq.~(\ref{eq:waveSet}).
For this, we scale up one individual partial-wave coefficient $c_w$ [see Eq.~(\ref{eq:pwa})]
from the true model such that the model overlap $\omega_{\text{true},\text{ana}}$
drops to $95\%$, while keeping the remaining coefficients at their nominal values.
Doing so, we find that the values obtained for $F_3$ and $\Im(F_2)$ are consistent
with the input values, regardless of the wave scaled.
Thus, the extraction of these three quantities appears to be
rather robust with respect to changes in the hadronic model.
In the case of $\Re(F_2)$, we observe a significant bias due to the
mismatch between generator and analysis hadronic model. This bias depends on the
individual partial wave that is scaled in the particular study and is given
in Tab.~\ref{tab:reFtwoBias}.
\begin{table}\begin{center}
\caption{$\Re(F_2)$ extracted from a simulated data set with an
input value of $\Re(F_2) = 0.01$, analyzed with a single de-tuned partial wave.
The statistical uncertainty of all values shown is $0.0007$.}
\label{tab:reFtwoBias}
\begin{tabular}{l|lllll}
De-tuned wave & $a_1[\rho\pi]_S$&
$a_1[\rho\pi]_D$&$a_1[f_2\pi]_P$ &$a_1[\sigma\pi]_P$&$a_1[f_0\pi]_P$\\
$\Re(F_2)$ &0.0178&0.0168&0.0144&0.0169&0.0143\\\hline
De-tuned wave & $\pi_1[\rho\pi]_P$ & $\pi[\sigma\pi]_S$&$\pi[f_0\pi]_S$&$\pi[\rho\pi]_P$ &\\
$\Re(F_2)$ &0.0147&0.0186&0.0162&0.0180&
\end{tabular}
\end{center}\end{table}
In a final study, we de-tuned the $a_1[\rho\pi]_S$-wave
such that the model-overlap $\omega_{\text{true},\text{ana}} = 99\%$.
In this case, we obtain:
\begin{equation}
\begin{array}{ll}
\Re(F_2) = 0.0112\pm 0.0007;&\Im(F_2) = 0.0102\pm0.0007;\\
\Re(F_3) = 0.0097\pm0.0009;&\Im(F_3) = 0.0103 \pm 0.0005.
\end{array}
\end{equation}
Thus, we find that a proper model for the hadronic current $J_{3\pi}^\mu$
alleviates possible bias in the determination of $\Im(F_2)$ and $F_{3}$, while
the bias in $\Re(F_2)$ remains significantly larger than the uncertainty,
even for a model overlap very close
to unity. Since $\Re(F_2)$ is the only quantity that alters the total cross-section
(see Fig.~\ref{fig:SDM}), it might be advisable to neglect the spin-information
of the decays and only use the total $\tau^\pm$-pair production cross-section.
Doing so, we find for the same simulated data introduced above:
\begin{equation}\label{eq:crossSectionResolution}
\Re(F_2) = 0.0108 \pm 0.0015
.\end{equation}
Even though the accuracy for $\Re(F_2)$ is worse by a factor of two, this result is independent
of the hadronic model and thus not affected by model bias.
Including only the spin-information from the $(\pi^++\bar\nu_\tau)$ decay does not
improve the accuracy given in Eq.~(\ref{eq:crossSectionResolution}). This is expected,
since $\Re(F_2)$ only affects the correlation of both $\tau^\pm$ spins.
However, the measurement of the total cross-section requires that all radiative corrections
are known and is typically very difficult, since it introduces new sources of systematic uncertainties.
Evaluating the distributions from Fig.~\ref{fig:SDM} for each partial wave, we could not single
out particular waves that are especially sensitive to the EDM or MDM.
The scheme of optimal observables would, however, automatically take such effects into account.
\section{Conclusion}
\label{sec:conclusions}
We studied the determination of the
tauon form factors $F_2$ and $F_3$ using simulated $\tau^\pm$-pair events with the decay
combination $(3\pi^\pm + \nu_\tau)\times(\pi^+ + \bar\nu_\tau)$.
We find that the $3\pi^\pm$ hadronic final state
gives an accuracy on the form factors comparable to
other hadronic channels, assuming the model
for the hadronic current $J_{3\pi}^\mu$ to be perfect. This
decay channel can thus significantly increase the usable
data set of purely hadronically decaying $\tau^\pm$-pair events.
For a simulated data set of $10^6$ events, we obtain an
accuracy for $\mu_\tau$ and $d_\tau$ of:
\begin{equation}
\begin{array}{ll}
\delta\Re(\mu_\tau) = 3.46\times10^{-18}\,e\,\text{cm};& \delta\Im(\mu_\tau) =3.58\times10^{-18}\,e\,\text{cm};\\
\delta\Re(d_\tau) = 4.66\times10^{-18}\,e\,\text{cm};& \delta\Im(d_\tau) = 2.61\times10^{-18}\,e\,\text{cm}.
\end{array}
\end{equation}
However, the model for $J_{3\pi}^\mu$ is not known a priori,
and all models currently in use, e.g.\ the isobar model or
R$\chi$T models \cite{cleo,amplPaper,chiral}, are based on assumptions;
a perfect hadronic model is currently not available.
Thus, we extended our study to hadronic models for $J^\mu_{3\pi}$ that
differ from the true model and found a small bias in the
extraction of $F_3$ and $\Im(F_2)$, while $\Re(F_2)$ is heavily over-estimated.
The observed bias results in an under-estimation of $\Re/\Im(F_3)$,
which vanishes as the analysis model approaches
the true model. The bias in $\Re(F_2)$, however, remains significant
even at a model overlap $\omega_{\text{true},\text{ana}}=99\%$ and
thus seems to prohibit the use of the $3\pi^\pm$ channel in a determination
of $\Re(F_2)$. However, since $\Re(F_2)$ alters the total $\tau^\pm$-pair
production cross-section, we may ignore spin effects for such final states and still determine
$\Re(F_2)$. Ignoring the spin correlations decreases the accuracy by
a factor of two, but removes the strong model dependence.
Finally, we stress that a good knowledge of the hadron
dynamics of multi-particle $\tau^\pm$ decays is a prerequisite for their inclusion
in precision measurements such as those of $F_{2/3}(q^2)$.
A simple approximation of the hadronic current by the dominant
$a_1[\rho\pi]_S$ contribution does not suffice, since according to current knowledge it describes only
around $70\%$ of the $\tau\to3\pi+\nu$ intensity \cite{cleo}.
\section{Introduction}
Fast Radio Bursts (FRBs) are very short duration ($\mu$s - ms) bursts of extragalactic origin observed at radio frequencies. The number of detected FRBs has been growing steadily since their discovery in 2007 \citep{Lorimer2007}, and much more rapidly over the last couple of years, largely due to the commissioning of multiple wide field-of-view (FoV) telescopes with dedicated FRB search surveys, such as the Australian SKA Pathfinder telescope (ASKAP; \citealt{Shannon2018, Macquart2020_IGM_Baryons_DM_z_relation}) and the Canadian Hydrogen Intensity Mapping Experiment (CHIME; \citealt{First_CHIME_catalog_2021}) telescope. FRBs show a characteristic dispersion sweep in their dynamic spectra, caused by the frequency-dependent delay induced by propagation through ionised plasma along their line of sight. The Dispersion Measure (DM) quantifies this dispersion sweep and is proportional to the path integral of the electron density along the propagation path:
\begin{equation}
{\rm DM} = \int_0^z \frac{n_e(z')}{(1 + z')}dl,
\end{equation}
where $n_e(z')$ is the physical electron density at redshift $z'$.
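For a homogeneous, fully ionised IGM, this integral reduces to a mean DM$-z$ relation that can be sketched numerically as follows; the cosmological parameters and the present-day mean electron density used here are illustrative assumptions, not fitted values:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat-LCDM parameters and mean electron density today.
H0 = 70.0 * 1.0e5 / 3.0857e24   # 70 km/s/Mpc expressed in s^-1
Om, Ol = 0.3, 0.7
c = 2.998e10                    # speed of light, cm/s
n_e0 = 2.2e-7                   # assumed mean electron density, cm^-3

def H(z):
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + Ol)

def dm_igm(z):
    """<DM_IGM>(z) = c * n_e0 * int_0^z (1+z') / H(z') dz', in pc cm^-3,
    using n_e proportional to (1+z)^3 and dl = c dz / [(1+z) H(z)]."""
    val, _ = quad(lambda zp: (1.0 + zp) / H(zp), 0.0, z)
    return c * n_e0 * val / 3.0857e18   # convert cm to pc

dm_igm(1.0)   # roughly 10^3 pc cm^-3 for these assumed parameters
```

Inhomogeneities along individual sight lines scatter individual FRBs around this mean, which is why the relation below is discussed in terms of DM budgets.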
Most FRBs have total observed DMs much larger than the predicted Milky Way contribution derived from electron-density models of the ionised Interstellar Medium (ISM) in our Galaxy \citep{NE2001, YMW16}, with the exception of the Galactic magnetar SGR 1935$+$2154, which exhibited FRB-like emission in April 2020 \citep{Bochenek2020_STARE2_FRB, CHIME2020_SGR_burst}.
The total observed DM of an FRB comprises contributions from the Milky Way's Interstellar Medium (hereafter ISM) (DM$_{\rm ISM}$), the Milky Way's halo (DM$_{\rm HALO}$), the diffuse Intergalactic Medium (hereafter IGM) (DM$_{\rm IGM}$), the host galaxy's halo and ISM (DM$_{\rm HG}$), and the circumburst environment (DM$_{\rm SOURCE}$). Broadly, these can be grouped into the DM contribution of the Milky Way (DM$_{\rm MW}$) and the contribution of all extragalactic components (DM$_{\rm EG}$), where:
\begin{equation}
\mathrm{DM}_{\rm MW} = \mathrm{DM}_{\rm ISM} + \mathrm{DM}_{\rm HALO},
\end{equation}
and
\begin{equation}
\mathrm{DM}_{\rm EG} = \mathrm{DM}_{\rm IGM} + \frac{\mathrm{DM}_{\rm HG} + \mathrm{DM}_{\rm SOURCE}}{(1 + z)}.
\end{equation}
In addition to the dispersion, a cold ionised plasma also causes scattering of the radio waves and results in multi-path propagation. This scattering manifests itself as (i) temporal broadening due to the multi-path propagation, (ii) scintillation bands due to the interference of scattered images, and (iii) angular broadening of the apparent source size.
For impulsive signals such as FRBs, the effects of temporal broadening and scintillation can often be readily measured, while for extragalactic continuum radio sources the angular broadening is the most prominent effect of scattering.
The majority of FRBs detected so far have shown exponential scattering tails and/or what appear to be scintillation bands in their spectra (\citealt{Cordes&Chatterjee2019_FRB_review, First_CHIME_catalog_2021, Macquart2019_spec_idx}). For almost all FRBs, the observed scattering is much larger than the scattering expected from these models of the Milky Way's ISM. The scattering properties of FRBs are thus a very useful probe of the turbulence in the plasma beyond the Milky Way lying along the line of sight to the FRBs.
Multiple theoretical studies have discussed the potential of using FRBs as probes of extragalactic turbulence \citep{Macquart&Koay2013, Cordes2016_FRB_scattering, Prochaska_Neeleman_2018_DLAs, Vedantham2019_scattering_CGM, Zhu_Feng_2018_scattering_hydrosims}. The dominant source of observed scattering in FRBs is expected to be external to the Milky Way, but the relative contributions from the host galaxies, IGM and galactic halos present along the line of sight remains unresolved \citep{Cordes&Chatterjee2019_FRB_review}. The largest reservoir of ionised plasma encountered by FRBs lies in the IGM, as is evidenced by the recent discovery of a DM$-$redshift($z$) relation in FRBs \citep{Macquart2020_IGM_Baryons_DM_z_relation}. If the ionised plasma in the IGM is also the dominant contributor to the observed scattering, the measured scattering properties of FRBs will predominantly probe the IGM, and potentially yield a relationship between DM and the scattering timescale, $\tau$.
\cite{Ravi2019_observed_prop} searched for such a relation in FRBs using the sample of bursts detected with the Parkes radio telescope and found evidence in its support with a low to moderate level of significance. On the contrary, similar efforts by \citet{Hao_qiu_2020_ASKAP_FRB_scattering_props} and \citet{Cordes2016_FRB_scattering} have ruled out such a relation. If it exists, a DM$-\tau$ relation would be critical in establishing whether the dominant source of scattering and dispersion in FRBs is the same \citep{Macquart&Koay2013}.
Alternatively, \cite{Vedantham2019_scattering_CGM} have suggested that the Circumgalactic Medium (CGM) of intervening galaxies could explain scattering timescales of the order of milliseconds (at 1 GHz), and therefore measurements of temporal broadening in localized fast radio bursts can be used to constrain the properties of the cool ionized gas clumps in the CGM of intervening galaxies. However, \cite{Prochaska2019_FRB181112} measured a scattering timescale (which was later refined by \cite{Cho2020_pfbinverted_181112}) of $\lesssim 20~\mu$s at 1.3 GHz in FRB20181112A (which had been found to intersect the halo of a foreground galaxy) allowing them to place constraints on the density and turbulence of the ionised plasma in the halo of the foreground galaxy. Similarly, scattering timescales reported for repeating FRB sources FRB20180916B and FRB20121102A were used by \cite{Ocker_and_Cordes_2021_Halo_scattering_limits} to limit the scattering contribution from the Milky Way Halo to an FRB's scattering budget to values less than 12 $\mu$s.
More recently, \cite{Chawla2021_scattering_CHIME_FRBs} performed a population synthesis analysis using the properties of FRBs reported in the first CHIME/FRB catalog \citep{First_CHIME_catalog_2021} and found that a model where scattering originates in the turbulent medium local to the FRB, combined with the circumgalactic medium of intervening galaxies is consistent with the observed properties of their FRB sample.
Therefore, the observed signatures of scattering imprinted upon FRB profiles are salient features which enable the use of FRBs as cosmological probes. A caveat, however, is that the incoherent dedispersion search technique implemented by most FRB search pipelines precludes the accurate measurement of the scattering widths of FRBs.
Access to the raw voltage data, which typically requires a real-time detection system, enables the use of coherent techniques of removing the dispersion and a near-perfect reconstruction of the intrinsic burst profile unaffected by instrumental smearing.
In this paper, we report the detection of FRB20191107B with the UTMOST radio telescope, and describe its remarkably narrow intrinsic width and scattering timescale revealed after applying coherent dedispersion to the raw voltage data captured for the FRB. Section \ref{sec: FRB191107B props} describes the methodology we use to model the burst properties. In Section \ref{sec:rates of narrow FRBs} we discuss the rates of intrinsically narrow FRBs and the efficiency of current leading surveys in probing the population of such FRBs. In Section \ref{sec: IGM scattering props} we use measured scattering properties of FRB20191107B to put constraints on the strength of turbulence in the IGM and search for a DM$-\tau$ relation in FRBs. In Section \ref{sec: Origin of scattering} we discuss the potential dominant regions for the origin of the observed scattering and identify the local environment of the source as the most likely candidate. We summarise and make our conclusions in Section \ref{sec: Conclusions}.
\section{Detection of FRB20191107B}
\label{sec: FRB191107B props}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/FRB20190711B_dynamic_spectrum.png}
\caption{The dynamic spectrum (bottom) and frequency-averaged intensity time series (top) of FRB20191107B, after correcting for the dispersive delay using coherent dedispersion. The capability of UTMOST to capture raw voltages at the native resolution of the instrument has revealed the intrinsically narrow width and the weaker components of the burst. The $x$-axis spans $\sim 3$ milliseconds of data, and each time sample is 10.24 $\mu$s wide. Three components have been identified, highlighted by the green, cyan and red intervals in the top panel. While they look very weak to the eye, their detection is statistically significant (see Section \ref{subsec:modelling_191107}) and becomes prominent when the data are averaged.}
\label{fig:FRB191107 }
\end{figure}
UTMOST is a 1.6 km long cross radio-interferometer located in New South Wales, Australia \citep{utmost}. Operating at a centre frequency of 835 MHz, it has been running multiple FRB search surveys over the past 5 years (\citealt{Caleb_3frbs, Farah2019}; Gupta et al. in prep.). UTMOST uses a machine learning based real-time detection and classification pipeline \citep{Farah2019} and has discovered 18 FRBs so far.
FRB20191107B was detected in real-time with our FRB pipeline at UTMOST, and raw voltage data sampled at the Nyquist rate of the receiver instrument were captured for $\sim$800 ms around the event.
The FRB was initially detected at a DM of 715.7 pc\,cm$^{-3}$\ and with a signal-to-noise ratio (S/N) of 9.9. The observed width was 1.3 ms, but owing to the event's high DM this is dominated by intra-channel dispersion smearing at the 0.097 MHz frequency resolution of the instrument.
The captured voltages not only provide access to the full-time-resolution data (10.24 $\mu$s), but also preserve the full phase information, allowing for coherent dedispersion of the burst. This revealed the FRB to be a bright and narrow pulse with even narrower components.
We used the \texttt{pdmp} tool from the \texttt{PSRCHIVE}\footnote{\url{http://psrchive.sourceforge.net}} software to optimise the burst's DM and S/N using the high time resolution data, and measure an S/N of 23 at a DM of 714.9 pc\,cm$^{-3}$\ and a box-car width of only 61 \textrm{$\mu$}s.
We reprocessed the voltage data and coherently dedispersed the burst at the optimised DM reported by \texttt{pdmp}, which revealed three individual narrow components with a hint of a scattering tail (Fig \ref{fig:FRB191107 }). We model and use the resulting profile for the analysis that follows. The discovery of the FRB was promptly reported as an Astronomer's Telegram in \cite{GuptaAtel2019c_FRB191107} to allow for rapid multi-wavelength follow-up.
Following the methodology used in \cite{Farah2019}, we use the radiometer equation to estimate the apparent fluence of the burst to be 6.7 Jy ms. Due to the co-linear arrangement of the individual elements along the East-West arm of the telescope, there is a large uncertainty in the localisation of the FRB in the North-South direction ($\sim 2 \deg$), and the location of the burst within the primary beam of UTMOST cannot be constrained, such that the fluence is a lower limit. The localisation arc of the burst is an elongated ellipse and can be described by the following equation:
\begin{multline}
RA = 8.032153 - 2.313314\times10^{-4} \times (DEC + 13.837823) \\
+ 1.009132\times 10^{-5} \times (DEC + 13.837823)^{2}
\end{multline}
where $RA$ is in hours and $DEC$ is in degrees; the relation is valid in the range $DEC$ = [-17.1, -10.6]. The best-fit coordinates of the FRB are: $RA$ = 08:01:57.08, $DEC$ = $-$13:44:15.5.
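For convenience, the arc can be evaluated directly; the function below simply encodes the quadratic fit above:

```python
def arc_ra(dec):
    """RA (in hours) of the UTMOST localisation arc as a function of
    DEC (in degrees), using the quadratic fit quoted in the text
    (valid for -17.1 <= DEC <= -10.6)."""
    d = dec + 13.837823
    return 8.032153 - 2.313314e-4 * d + 1.009132e-5 * d * d

# At DEC = -13.837823 deg the linear and quadratic terms vanish,
# so the arc returns the constant term:
ra_hours = arc_ra(-13.837823)   # = 8.032153 h
```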
\subsection{Modelling the burst properties}
\label{subsec:modelling_191107}
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{Figures/FRB191107_full_corner.png}
\caption{The joint posterior distribution of model parameters in Eqn \ref{fittingeqn}. The dashed lines represent 16 and 84 percentiles in the marginalised 1-D histograms, and the best fit values of the parameters are listed on the top of each histogram.}
\label{fig:posteriors}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/Model_fit_FRB191107.png}
\caption{The pulse profile of FRB20191107B showing its three individual components. The best fit model is plotted in orange along with 2-$\sigma$ contours shaded in black. The bottom panel shows the residuals after subtracting model from the data.}
\label{fig:model_fit_profile }
\end{figure}
The burst profile comprises three sub-bursts with a bright, narrow trailing component and a hint of an exponential tail, as is typical of a radio signal propagating through turbulent media and undergoing multi-path propagation. We therefore model the profile as three Gaussian pulses convolved with a one-sided exponential of the form:
\begin{equation}\label{fittingeqn}
S=\sum_i^{n=3} \left\{ \mathrm{A}_i \times \exp \left[\frac{-\left(t-t_i\right)^2}{2 \sigma_i^2}\right] \right\} *\left\{\exp \left[-\frac{t}{\tau}\right]\right\},
\end{equation}
\noindent where * denotes convolution. Here, $\tau$ is the scattering timescale, $\sigma_i$ denotes the width of the Gaussians, $\mathrm{A}_i$ are the amplitudes, and $t_{i}$ are the centres of the Gaussians.
Parameter estimation was performed using the \texttt{BILBY} package \citep{Bilby}, making use of the in-built Markov Chain Monte Carlo (MCMC) sampler \citep{emcee_v3}.
For parameter estimation, we start with uniform priors on all fitted parameters, and use a Gaussian log$-$likelihood function ($L$) defined as:
\begin{equation}
L = -0.5 \times \left[\sum_{j} \left( \frac{(D(t_{j}) - S(t_{j}))^{2}}{n^{2}} \right)+ \log(2 \pi n^{2}) \right]
\end{equation}
\noindent where $D$ is the frequency-averaged, time-series data, $S$ is the model, and $n$ is an estimate of the noise per time sample.
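A minimal numerical sketch of the model of Eqn \ref{fittingeqn} and of this likelihood is given below; the discrete convolution and grid spacing are implementation choices, and the parameter values are merely representative of the best-fit values reported in this section:

```python
import numpy as np

DT = 10.24e-6   # sample interval in seconds (UTMOST time resolution)

def profile_model(t, amps, centres, sigmas, tau):
    """Sum of Gaussians convolved with a unit-area one-sided exponential
    exp(-t/tau), evaluated on a uniform time grid."""
    gauss = sum(a * np.exp(-0.5 * ((t - t0) / s) ** 2)
                for a, t0, s in zip(amps, centres, sigmas))
    kernel_t = np.arange(0.0, 20.0 * tau, t[1] - t[0])
    kernel = np.exp(-kernel_t / tau)
    kernel /= kernel.sum()                       # unit-area exponential
    return np.convolve(gauss, kernel)[: t.size]  # keep the causal part

def log_likelihood(data, model, noise):
    """Gaussian log-likelihood with uncorrelated noise per sample."""
    return -0.5 * np.sum((data - model) ** 2 / noise ** 2
                         + np.log(2.0 * np.pi * noise ** 2))

t = np.arange(0.0, 3e-3, DT)
model = profile_model(t,
                      amps=[0.2, 0.4, 1.0],          # illustrative amplitudes
                      centres=[0.80e-3, 1.16e-3, 1.39e-3],  # 360 & 230 us gaps
                      sigmas=[95e-6, 17e-6, 11.3e-6],
                      tau=21.4e-6)
```

In the actual analysis this likelihood is sampled over all nine model parameters with \texttt{BILBY}'s MCMC sampler rather than evaluated at a single point.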
Following a burn-in stage, the joint posterior density of all parameters was estimated with 9,000 samples. The resulting distribution of the posteriors is shown in Fig \ref{fig:posteriors}.
We find that the data are well modelled by the three Gaussian components convolved with an exponential. To test the significance of the detection of scattering and of the first two weaker components, we compute the Bayes Factor ($\mathcal{B}$) for our model and compare it with models that exclude the scattering parameter ($\tau$) and/or the parameters corresponding to the first two Gaussian components ($\sigma_{0,1}, A_{0,1}, t_{0,1}$). We find that our model with the three components and the scattering term included is strongly favoured by the data, with $\log \mathcal{B} > 5$ \citep{jeffreys1998theory_bayes_factor_citation, Trotta_2008_Bayes_factor_Jeffreys_scale}. We summarise the Bayes Factor values of our models relative to a model with a single component and no scattering in Table \ref{tab: model evidences}.
\begin{table}
\centering
\begin{tabular}[c]{|ccc|}
\hline
\hline
Number of components & With scattering tail & Without scattering tail \\
\hline
Bursts 1, 2 and 3 & 32.47 & 27.38 \\
Bursts 2 and 3 & 14.35 & 17.74 \\
Burst 3 only & 7.34 & 0 \\
\hline
\end{tabular}
\caption{Log Bayes Factor ($\log \mathcal{B}$) values of different models fit to the FRB profile, relative to a model with a single burst component and no scattering. The model with three individual components and an exponential scattering tail provides the best fit to the data, and the existence of the two weaker components is statistically significant ($\log \mathcal{B} > 5$).}
\label{tab: model evidences}
\end{table}
Using the best-fit model we measure a scattering time ($\tau$) of $21.4^{+4}_{-3}$ \textrm{$\mu$}s\ and widths ($\sigma_i$) of the three Gaussians of $95.0 ^{+35}_{-25}$, $17.0^{+8}_{-7}$ and $11.3^{+3}_{-3}$ \textrm{$\mu$}s, where the reported uncertainties are the 68\% confidence intervals. These values make this one of the narrowest-component FRBs detected with UTMOST to date. It is worth mentioning, however, that repeat bursts from FRB20180916B and FRB20200120E have previously been observed to show components with narrower widths \citep{Nimmo2021_microstructure_180916, Nimmo2021_nanosecond_FRB200120E}. The three components are separated in time by 360 \textrm{$\mu$}s\ between the first and second components and by 230 \textrm{$\mu$}s\ between the second and third components (see Fig \ref{fig:model_fit_profile }), suggesting that the emission regions associated with each component are only a few kilometres in size.
In the next section, we discuss the significance of the narrow width of the brightest component of the FRB and the sensitivity of FRB surveys to such FRBs. The remarkably low scattering time despite a relatively large observed DM provides the opportunity to place limits on the strength of turbulence in the Intergalactic Medium (IGM) along the line-of-sight to this source, which we explain in Section \ref{sec: IGM scattering props}.
\section{Rates of narrow FRBs}
\label{sec:rates of narrow FRBs}
\begin{figure}
\centering
\includegraphics[width=0.47\textwidth]{Figures/UTMOST_Ob_fl_dm_w_threshold3D.png}
\caption{Detection fluence threshold of UTMOST as a function of the sky-width and DM of FRBs. The plane in orange-yellow shows the detection threshold with the current time and frequency resolution of the instrument, and the green-blue plane shows the detection threshold if UTMOST had infinitely high resolution, i.e. for an ideal instrument. }
\label{fig:Fluence_threshold_UTMOST}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/UTMOST_In_fl_dm_w_threshold3D.png}
\caption{Planes of intrinsic isotropic energy threshold as a function of the intrinsic width and DM of FRBs, for the current survey (orange-yellow) as well as an ideal survey (blue-green) with UTMOST. Our current survey with UTMOST would not be sensitive to FRBs originating in the region between the two planes, which would be detected by an ideal survey with infinite time and frequency resolution. Strictly speaking, our thresholds are upper limits, as the redshift estimates from the DM values, used in computing the intrinsic isotropic emission energy in Equation \ref{eqn: energy thresh}, are themselves upper limits. However, correcting for source redshift would affect both the `Ideal' and the `Current' survey equally, keeping the gap between the two planes intact.}
\label{fig:Energy_threshold_UTMOST}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.37\textwidth]{Figures/Det_fraction_UTMOST_pli_-1.8.png}
\includegraphics[width=0.37\textwidth]{Figures/Det_fraction_SUPERB_pli_-1.8.png}
\includegraphics[width=0.37\textwidth]{Figures/Det_fraction_CHIME_pli_-1.8.png}
\includegraphics[width=0.37\textwidth]{Figures/Det_fraction_ASKAP_ICS_pli_-1.8.png}
\caption{The distribution of the fraction of the FRB population that would be detected by a survey as a function of the intrinsic FRB sky-width and DM. We have assumed that the FRB population at a given redshift has a power-law intrinsic energy distribution (see Section \ref{sec:rates of narrow FRBs} for details). Different panels show this distribution for 4 prominent FRB surveys. The pink star marks the sky-width ($w_{sky}$) and DM of FRB20191107B scaled to the central frequency of the corresponding survey in each panel. It is evident that all surveys are probing only a small fraction of the population of FRBs like FRB20191107B.}
\label{fig:Det_fraction_surveys}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/Det_fraction_fx_alpha.png}
\caption{The detection fraction of FRBs at FRB20191107B's sky-width and DM, as a function of the assumed power-law index $(\gamma)$ of the intrinsic energy distribution, for the UTMOST, SUPERB, CHIME and ASKAP (for ``incoherent sum'' mode) FRB surveys. The vertical dashed line indicates $\gamma = -1.8$, the power-law index of the intrinsic energy distribution of repeat bursts from FRB20121102A measured by \cite{Gourdji2019_121102_arecibo}, which is the value we adopt to make the maps presented in Fig \ref{fig:Det_fraction_surveys}.}
\label{fig:Det_fraction_UTMOST}
\end{figure}
Despite being intrinsically bright at microsecond resolution, FRB20191107B was detected with an S/N of 9.9 during the real-time search, just above our detection threshold of 9.
The decrease in S/N of the intrinsically narrow burst when observed at coarser time resolution is expected, as, for a constant fluence, the S/N ratio of a burst is inversely proportional to the square-root of its observed width ($w_\mathrm{obs}$). Scattering from the ionised turbulent plasma along the propagation path of the burst can cause the intrinsic width of the burst ($w_i$) to be broadened with an exponential decay timescale ($\tau$). The total width of the burst at the time it reaches the telescope ($w_\mathrm{sky}$) can be calculated as
\begin{equation}
\label{eqn:tsky}
w_\mathrm{sky} = \sqrt{w_i^2 + \tau^2}.
\end{equation}
The observed width of the burst may additionally be smeared by the finite time and frequency resolution of the recording instrument.
The sampling time of the data searched for bursts in real time can be much larger than the sky-width of FRBs. The final width of a burst observed with an instrument with sampling time $w_s$ can be estimated using the following relation:
\begin{equation}
\label{eqn:tobs}
w_{\rm obs} = \sqrt{w_{\rm sky}^2 + w_s^2 + w_{\rm DM}^2},
\end{equation}
\noindent where $w_{\rm DM}$ is the width due to the intrachannel DM smearing based on the frequency resolution of the instrument, and is calculated as (see \citealt{Pulsar_Handbook_2004_Lorimer_and_Kramer}):
\begin{equation}
\label{eqn:tdm}
w_{\rm DM} = 8.3 \times 10^{-6} \times \Delta \nu \times {\rm DM} \times \nu_c^{-3} ~s,
\end{equation}
\noindent and $\Delta \nu$ is the width of each frequency channel in MHz and ${\nu}_c$ is the central frequency of the instrument in GHz.
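The width budget of Equations \ref{eqn:tsky}--\ref{eqn:tdm} can be combined into a single estimate of the observed width. The following Python sketch does this, together with a standard radiometer-equation fluence threshold; the function names and any parameter values passed to them are illustrative, not the survey's actual configuration:

```python
import math

def observed_width(w_i, tau, w_s, dm, dnu_mhz, nu_c_ghz):
    """Observed width (s) of a burst, combining Eqs. (tsky), (tobs) and (tdm).

    w_i      : intrinsic width (s)
    tau      : scattering timescale (s)
    w_s      : instrument sampling time (s)
    dm       : dispersion measure (pc cm^-3)
    dnu_mhz  : channel bandwidth (MHz)
    nu_c_ghz : centre frequency (GHz)
    """
    w_sky = math.hypot(w_i, tau)                   # Eq. (tsky)
    w_dm = 8.3e-6 * dnu_mhz * dm / nu_c_ghz**3     # Eq. (tdm), intrachannel smearing
    return math.sqrt(w_sky**2 + w_s**2 + w_dm**2)  # Eq. (tobs)

def fluence_threshold(snr_min, sefd_jy, bw_mhz, n_pol, w_obs):
    """Radiometer-equation fluence threshold (Jy s), assuming
    F_min = S/N_min * SEFD * sqrt(w_obs / (n_pol * BW))."""
    return snr_min * sefd_jy * math.sqrt(w_obs / (n_pol * bw_mhz * 1e6))
```

Because $w_{\rm obs}$ saturates at $\sqrt{w_s^2 + w_{\rm DM}^2}$ for narrow bursts, the fluence threshold computed this way stops improving below that width, which is the origin of the incompleteness discussed next.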
For a given fluence, the flux density of the source increases with decreasing width. This boosts the measured S/N of bursts and therefore lowers the fluence threshold of an instrument for events with intrinsically narrow sky-widths.
However, this boost in S/N only applies as long as the width of the burst is larger than the sampling time of the recording instrument or the DM smearing width due to its finite channel bandwidth, whichever is greater. Bursts with even narrower widths therefore do not benefit from their increased flux density.
As a result, the fluence detection threshold of a survey stops decreasing below a certain width, and weaker but intrinsically narrow events that could in principle have been detected by the instrument remain undetected.
Previous work by \cite{Connor2019_interpreting_the_distribution_of_frb_observables} on the interpretation of the width and DM distributions of FRBs detected with different telescopes has highlighted the possibility that a population of narrow-width FRBs may exist which remains relatively unexplored by current surveys. In order to investigate the fraction of the FRB population UTMOST would miss due to this limitation in the sensitivity of our survey, we model a 3-dimensional plane of our survey's detection threshold as a function of the sky-width, DM and the observed fluence of FRBs. Using Equations \ref{eqn:tsky}, \ref{eqn:tobs}, \ref{eqn:tdm}, and the reported value of the System Equivalent Flux Density (SEFD), we calculate the detection fluence threshold of UTMOST over a 2-dimensional grid of sky-width and DM of FRBs. This plane of detection fluence threshold is shown in Fig \ref{fig:Fluence_threshold_UTMOST} (labelled as `Actual'). We also show the plane of the detection fluence threshold of a hypothetical ideal survey with UTMOST which has infinitely fine time and frequency resolution, such that it does not suffer from any of the instrumental smearing effects on the detected width of the bursts (labelled as `Ideal'). The gap between the two detection threshold planes (actual and ideal) represents the region of incompleteness where our survey would not detect any FRBs even though they lie above the theoretical sensitivity limit of the telescope. It is worth mentioning here that the `Actual' threshold is valid only for an ideal detection pipeline, and the search pipeline could have other selection biases against narrow bursts which are not characterised by Equations \ref{eqn:tsky}--\ref{eqn:tdm}. We do not perform absolute calibration of selection effects through injections, so it is possible that additional narrow bursts are being missed by the survey.
As is now well demonstrated, the DM of FRBs correlates strongly with redshift \citep{Macquart2020_IGM_Baryons_DM_z_relation}, allowing us to convert the observed specific fluence of bursts into the intrinsic isotropic emitted energy at the source using the relation \citep{BingZhang2018_detectibility_of_high_z_FRBs}:
\begin{equation}
\label{eqn: energy thresh} E_{\rm int, iso} = \frac{F_{\rm obs}}{(1+z)} \times 4 \pi D_L^2 \times \nu_c,
\end{equation}
\noindent where $F_{\rm obs}$ is the observed specific fluence, $\nu_c$ is the central frequency of detection, and $D_L$ is the luminosity distance to the source. To avoid unnecessary complexity and edge cases we take the IGM's DM contribution as approximately equal to the total DM of an FRB, and use a simple linear relation between DM and source redshift ($z$) (${\rm DM} = 1000 \times z$ pc\,cm$^{-3}$) to approximate the redshift of the source (an upper limit), from which we estimate the luminosity distance assuming a standard $\Lambda$-CDM cosmology. This allows us to transform the fluence detection threshold planes (actual and ideal) into the detection threshold planes for the intrinsic, isotropically emitted energies of bursts, as a function of the DM and sky-widths. These planes are shown in Fig \ref{fig:Energy_threshold_UTMOST} for UTMOST's FRB survey.
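As an illustration of this transformation, the following Python sketch converts an observed fluence and DM into an intrinsic isotropic energy, using the ${\rm DM} = 1000\,z$ approximation and a trapezoidal integration for the luminosity distance in a flat $\Lambda$-CDM cosmology; the adopted $H_0$ and $\Omega_m$ are assumed Planck-like values, not parameters quoted in this work:

```python
import math

# Assumed flat Lambda-CDM parameters (Planck-like; not quoted from this work)
H0 = 67.4e3 / 3.086e22   # Hubble constant in s^-1 (67.4 km/s/Mpc)
OM, OL = 0.315, 0.685    # matter and dark-energy density parameters
C = 2.998e8              # speed of light (m/s)

def luminosity_distance(z, n=2000):
    """Luminosity distance (m) via trapezoidal integration of 1/E(z)."""
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        weight = 0.5 if i in (0, n) else 1.0
        integral += weight * dz / math.sqrt(OM * (1.0 + zi)**3 + OL)
    return (1.0 + z) * C / H0 * integral   # D_L = (1+z) * comoving distance

def intrinsic_energy(fluence_jy_s, dm, nu_c_hz):
    """Isotropic-equivalent energy (J) from Eq. (energy thresh), with DM = 1000 z."""
    z = dm / 1000.0
    d_l = luminosity_distance(z)
    fluence_si = fluence_jy_s * 1e-26      # Jy s -> J m^-2 Hz^-1
    return fluence_si / (1.0 + z) * 4.0 * math.pi * d_l**2 * nu_c_hz
```

Applying this to each point of the fluence-threshold grid yields the energy-threshold planes shown in Fig \ref{fig:Energy_threshold_UTMOST}.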
If we assume the cumulative intrinsic energy distribution of FRBs at a given redshift follows a power-law function ($N(>E_{\rm int}) \propto E_{\rm int}^\gamma$) with an index $\gamma$, we can then calculate the completeness fraction of FRBs detected with a given survey as the ratio of the number of FRBs above the plane of actual detection threshold to the number of FRBs above the plane of ideal detection threshold. This completeness fraction quantifies the efficiency with which a given survey probes the narrow-width FRB population which lies above its quoted fluence/energy detection threshold. Here we have assumed that the low-energy cutoff of the power-law distribution of FRB intrinsic energy lies below the detection thresholds of all surveys.
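For a cumulative power-law energy distribution this completeness fraction reduces to a simple ratio of the two thresholds; a minimal sketch, assuming $\gamma < 0$ and a low-energy cutoff below both thresholds as stated above:

```python
def completeness_fraction(e_actual, e_ideal, gamma=-1.8):
    """Detected fraction for a cumulative power law N(>E) proportional to E**gamma
    (gamma < 0): N(>E_actual) / N(>E_ideal) = (E_actual / E_ideal)**gamma.

    Assumes the low-energy cutoff of the distribution lies below both thresholds.
    """
    if e_actual < e_ideal:
        raise ValueError("actual threshold cannot be below the ideal one")
    return (e_actual / e_ideal) ** gamma
```

Evaluating this ratio over the grid of sky-width and DM produces the detection-fraction maps of Fig \ref{fig:Det_fraction_surveys}.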
We compute this fraction for a few prominent FRB surveys, namely SUPERB, CHIME/FRB, UTMOST and ASKAP (InCoherent Sum Survey), as functions of sky-width and DM. We assume an intrinsic energy power-law index of $-1.8$ based on the measured value for FRB20121102A by \cite{Gourdji2019_121102_arecibo}. These maps of the detected fraction of FRBs are shown in Fig \ref{fig:Det_fraction_surveys}. It is worth highlighting that the bias against narrow-width bursts shown in Fig \ref{fig:Det_fraction_surveys} is due solely to the frequency and time resolution of the detecting instrument; selection effects arising from inefficiencies in the search pipeline (such as 0-DM RFI excision or a poorly trained machine-learning classifier) would be in addition to those presented here. Therefore, our plots do not show a decrease in burst recovery fraction at higher widths as presented by \citealt{Gupta_et_al_2021_mock_frb_injection_utmost} and \citealt{First_CHIME_catalog_2021}.
It is evident from these maps that all surveys would detect only a small fraction of the population of FRBs with narrow sky-widths. For reference, we also plot the measured value of the sky-width of FRB20191107B, after scaling the scattering time to the center frequency of each telescope.
The measured detection fraction is a strong function of the intrinsic source energy distribution, and we show this dependence by plotting the detection fraction for a given survey at FRB20191107B's sky-width and DM as a function of the assumed power-law index $\gamma$ for the source energy distribution in Fig \ref{fig:Det_fraction_UTMOST}.
In summary, we find that most ongoing surveys could be missing $>$60\% of the population of FRBs at the observed DM and sky-width of FRB20191107B. The fact that UTMOST detected one FRB with such narrow sky-width and a relatively large DM, suggests that there might exist a significant population of FRBs with narrow intrinsic widths and small scattering times, which remains largely unexplored with the current surveys.
\section{FRB20191107B and properties of the IGM}
\label{sec: IGM scattering props}
FRB20191107B shows a scattering timescale of only 21 $\mu$s despite having a relatively large DM of $\sim 715$ pc\,cm$^{-3}$, offering an interesting source from which to constrain the scattering properties of the IGM. The NE2001 model \citep{NE2001} estimates a contribution from the Milky Way's Interstellar Medium (ISM) of 127 pc\,cm$^{-3}$. This model does not include the effects of the Milky Way halo, and, increasingly, this correction to the FRB DM has been adopted by recent works. For example, \cite{Prochaska2019_haloes} have suggested that the halo of the Milky Way contributes 50$-$80 pc\,cm$^{-3}$\ to the DM, while \cite{Bhardwaj2021_M81_frb_repeater_frb20200120E} have suggested an upper limit of $<$53\ pc\,cm$^{-3}$\ from the Milky Way halo (based on an FRB found in the very nearby M81 galaxy).
Here we adopt 50 pc\,cm$^{-3}$\ as the DM contribution from the MW halo in all directions on the sky.
After subtracting the Milky Way ISM and halo contributions, we are left with a DM excess (DM$_{\rm EG}$) of 537 pc\,cm$^{-3}$\ for FRB20191107B, which we attribute to the cumulative contributions from the IGM, intervening halos and the host galaxy of the FRB.
\cite{Macquart2020_IGM_Baryons_DM_z_relation} have shown that the IGM is the dominant source of DM for FRBs coming from high redshift galaxies, and that DM can be used as a proxy for distance to the host galaxy of FRBs.
The DM contribution from the IGM and intervening halos can be estimated approximately from the source redshift by \citep{Inoue, Ioka2003, Macquart2020_IGM_Baryons_DM_z_relation}:
\begin{equation}
\label{eqn:DM-z relation}
\mathrm{DM}_\mathrm{IGM} \approx 1000 \times z ~~~{\rm pc\,cm}^{-3}
\end{equation}
We adopt this relation to estimate an upper limit on the redshift of 0.53 for the host galaxy.
\subsection{Scattering in the IGM}
\label{subsec:Scattering_IGM}
Impulsive radio signals originating from cosmological distances provide a unique tool to probe the turbulent properties of the IGM in exquisite detail. FRBs carry on them the signature of the properties of the ionised plasma they have travelled through along their propagation path.
The scattering strength of an ionised medium is quantified by the Scattering Measure (SM), defined as (see \citealt{Cordes_and_Lazio_1991_Scintillation_theory}):
\begin{equation}
{\rm SM} = \int_{L}^{L+\Delta L}C_N^2 ~dl,
\end{equation}
where $C_N^2$ is the amplitude of turbulence per unit length of the plasma extended between $L$ and $L + \Delta L$.
It is usual to simplify the problem by modelling the inhomogeneities in the plasma as if they were located in a single plane, known as the scattering screen.
This thin-screen approximation is valid when a single turbulent region dominates the inhomogeneities along the propagation path.
Nevertheless, it is common to apply this assumption in the case of an extended turbulent medium, as the effects of an extended medium can be treated as those of an equivalent thin screen with modified parameters such as the effective screen distance and the strength of turbulence \citep{Lee_and_Jokipii_1975_scattering_theory}.
\cite{Macquart&Koay2013} provide a relation between the observed scattering timescale ($\tau$) and the SM of the intervening medium for applications related to FRBs. They incorporate the effect of the curvature of space-time due to expansion of the universe in the definition of scattering measure as:
\begin{equation}
{\rm SM_{eff}} = \int \frac{C_N^2(l)}{(1 + z)^2} dl.
\end{equation}
The observed scattering timescale of a radio pulse in the turbulent plasma and the ${\rm SM_{eff}}$ of the medium are related (when the diffractive scale is larger than the inner scale of turbulence in the scattering medium) as:
\begin{equation}
\label{eqn:tau-SM}
\tau = 1.9 \times 10^{-4} ~ (1 + z_L)^{-1} {\rm \bigg( \frac{\lambda_0}{1~m} \bigg) ^{22/5} \bigg( \frac{D_{\rm eff}}{1~Gpc} \bigg) \bigg( \frac{SM_{\rm eff}}{10^{12} ~m^{-17/3}}\bigg)^{6/5} ~s}
\end{equation}
\noindent where $\lambda_0$ is the wavelength in the observer frame, $z_L$ is the redshift of the scattering screen, and ${\rm D_{eff}} = D_L D_{LS} / D_S$ is the effective distance, where $D_L$, $D_{LS}$ and $D_S$ are the angular diameter distances to the scattering screen, from the screen to the source, and to the source, respectively.
Temporal broadening due to scattering is modulated by the lever-arm effect, which maximises the scattering for a screen located mid-way between the source and observer ($D_L = D_S/2$), for example in the IGM, as compared to a screen located near the observer or the source, as would be the case for turbulence in the Milky Way ISM or in the host galaxy, respectively.
If the scattering screen is indeed located in the IGM, Eqn. \ref{eqn:tau-SM} allows us to put an upper limit on the strength of the turbulence in the ionised plasma present in the IGM.
Using the scattering timescale measured for FRB20191107B, and for a screen located midway between the source and the observer, we derive an upper limit on the SM of the scattering-screen in the IGM as ${\rm SM_{IGM} < 8.4 \times 10^{-7}~ kpc~m^{-20/3}}$.
Since the dependence of scattering on geometry strongly favours plasma located roughly mid-way along the propagation path to the FRB, this limit is a strong function of the location of the scattering-screen and increases sharply as the screen gets closer to the source or the observer. This dependence of the derived upper limit on the SM is shown in Fig \ref{fig:SM_limit_z}.
We have assumed a host redshift of 0.53 in this calculation; lowering the host redshift relaxes the upper limit, albeit gradually. The SM upper limits for assumed host redshifts of 0.5, 0.4 and 0.3 are $8.5 \times 10^{-7}$, $9.1 \times 10^{-7}$, and $1.0 \times 10^{-6}~{\rm kpc~m^{-20/3}}$ respectively.
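The limit can be reproduced approximately by inverting Equation \ref{eqn:tau-SM} for ${\rm SM_{eff}}$. The sketch below does this assuming a flat, Planck-like $\Lambda$-CDM cosmology and an observing wavelength of $\lambda_0 \approx 0.356$ m (843 MHz); both are assumptions of this illustration, and the exact numerical value depends on these choices:

```python
import math

# Assumed flat Lambda-CDM parameters (Planck-like; not quoted from this work)
H0 = 67.4e3 / 3.086e22   # Hubble constant (s^-1)
OM, OL = 0.315, 0.685
C = 2.998e8              # speed of light (m/s)
GPC = 3.086e25           # metres per Gpc

def comoving_distance(z, n=2000):
    """Comoving distance (m) via trapezoidal integration of 1/E(z)."""
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        weight = 0.5 if i in (0, n) else 1.0
        integral += weight * dz / math.sqrt(OM * (1.0 + zi)**3 + OL)
    return C / H0 * integral

def sm_upper_limit(tau_s, lam0_m, z_l, z_s):
    """Upper limit on SM_eff (m^-17/3), Eq. (tau-SM) solved for SM_eff.

    For a flat universe, D_A(z1, z2) = (D_C(z2) - D_C(z1)) / (1 + z2).
    """
    dc_l, dc_s = comoving_distance(z_l), comoving_distance(z_s)
    d_l = dc_l / (1.0 + z_l)             # observer -> screen
    d_s = dc_s / (1.0 + z_s)             # observer -> source
    d_ls = (dc_s - dc_l) / (1.0 + z_s)   # screen -> source
    d_eff_gpc = d_l * d_ls / d_s / GPC
    ratio = tau_s * (1.0 + z_l) / (1.9e-4 * lam0_m**(22.0 / 5.0) * d_eff_gpc)
    return 1e12 * ratio**(5.0 / 6.0)
```

Scanning \texttt{z\_l} between the observer and the source traces out the geometric dependence plotted in Fig \ref{fig:SM_limit_z}: the limit is tightest for a mid-path screen and diverges as the screen approaches either end of the path.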
The strongest existing constraints on the strength of turbulence in the IGM come from measurements of the angular broadening of compact extragalactic radio sources like GRB afterglows and AGNs. \cite{Koay2012_AGN_scattering_limit} used multi-frequency observations of a sample of 128 compact radio sources and found no evidence for detectable scattering in the IGM for sources in the redshift range $0 < z < 4$.
Towards the most compact $\sim 10~\mu$as sources in their sample, they report an upper limit on angular broadening of $\lesssim 8~\mu$as, constraining the turbulence in the IGM to ${\rm SM \lesssim 3.3 \times 10^{-5} ~kpc~m^{-20/3}}$.
The limit we obtain from scattering timescale measurements of FRB20191107B improves on their limit by more than an order of magnitude, providing the strongest constraints on the strength of turbulence in the IGM so far.
\begin{figure}
\centering
\includegraphics[width=0.42\textwidth]{Figures/SM_limit_as_fx_of_zL.png}
\caption{Upper limit on the scattering measure of the IGM as a function of the redshift of the screen ($z_L$). The geometric lever-arm effect increases the scatter broadening of a pulse from plasma located equally far from the source and the observer, reducing the required strength of turbulence in the IGM. The strongest limit of SM $< 2.6\times 10^{13} {\rm ~m^{-17/3}}$ (or $< 8.4\times 10^{-7} {\rm ~kpc~m^{-20/3}}$) is obtained for a scattering screen at an effective $z_L$ of 0.19. The dashed vertical line represents the assumed redshift of the host galaxy of 0.53 (see Section \ref{subsec:Scattering_IGM}).}
\label{fig:SM_limit_z}
\end{figure}
\subsection{Dispersion-scattering relation}
\label{subsec:DM-tau relation}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/Best_fit_slopy_line_for_UTMOST.png}
\caption{Measurements of the scattering timescales scaled to 1 GHz ($\tau_{\rm 1 GHz}$) plotted against the extragalactic component of the DM (DM$_{\rm EG}$) for the FRBs detected with UTMOST. Downward facing arrows indicate the measurements of 2-$\sigma$ upper limits. The black dashed line shows the best-fit power-law model and the dotted lines indicate the region of $\pm$ 1-$\sigma$ scatter in the fit. For each data point the error in the measurement of the scattering timescale is estimated from the 1-$\sigma$ scatter in the posterior distribution of the fit model, while the error in the DM$_{\rm EG}$ is taken to be equal to half of the Milky Way contribution (DM$_{\rm MW}$) to the FRB's DM budget.}
\label{fig:DM-tau_UTMOST}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/Best_fit_slopy_line_for_voltage.png}
\caption{Measurements of the scattering timescales scaled to 1 GHz ($\tau_{\rm 1 GHz}$) plotted against the extragalactic component of the DM (DM$_{\rm EG}$) for the sample of FRBs for which scattering timescales have been measured after coherent dedispersion using the voltage data. Downward facing arrows indicate the measurements of 2-$\sigma$ upper limits. The black dashed line shows the best-fit power-law model and the dotted lines indicate the region of $\pm$ 1-$\sigma$ scatter in the fit. For each data point the error in the measurement of the scattering timescale is estimated from the 1-$\sigma$ scatter in the posterior distribution of the fit model, while the error in the DM$_{\rm EG}$ is taken to be equal to half of the Milky Way contribution (DM$_{\rm MW}$) to the FRB's DM budget.}
\label{fig:DM-tau_voltage}
\end{figure}
\begin{table*}
\centering
\begin{tabular}[c]{|cccccccc|}
\hline
\hline
FRB name & $Gl$($^{\circ}$) & $Gb$($^{\circ}$) & $\tau$ (ms) & DM (pc\,cm$^{-3}$) & DM$_{\rm MW}$ (pc\,cm$^{-3}$) & $\tau_{\rm MW}$ (\textrm{$\mu$}s) & Ref \\
\hline
FRB20170827A & 303.29 & $-51.58$ & 0.00199 & 176.4 & 37.0 & 0.13 & \cite{Farah2018}\\
FRB20170922A & 45.07 & $-38.70$ & 14.14617 & 1111.00 & 45.00 & 0.22 & \cite{Farah2019}\\
FRB20180528A & 258.87 & $-22.35$ & 0.46182 & 899.30 & 70.00 & 0.52 & \cite{Farah2019}\\
FRB20181016A & 345.51 & 22.66 & 2.77090 & 1982.80 & 89.00 & 1.09 & \cite{Farah2019}\\
FRB20181017C & 50.06 & $-46.88$ & 0.07778 & 240.00 & 39.00 & 0.15 & \cite{Farah2019}\\
FRB20181228D & 253.35 & $-26.15$ & <0.18959 & 354.20 & 58.00 & 0.35 & \cite{Farah2019}\\
FRB20190322D & 278.17 & $-36.92$ & <0.31306 & 724.20 & 47.10 & 0.22 & Gupta et al. in prep.\\
FRB20190806B & 89.92 & $-67.25$ & 4.01829 & 388.50 & 30.80 & 0.09 & Gupta et al. in prep.\\
FRB20191107B & 233.40 & 8.83 & 0.01069 & 714.30 & 127.20 & 1.07 & This work\\
FRB20191223B & 278.17 & $-36.92$ & 1.49726 & 724.20 & 47.10 & 0.22 & Gupta et al. in prep.\\
FRB20200508A & 282.02 & $-12.56$ & <0.19445 & 629.00 & 144.90 & 1.32 & Gupta et al. in prep.\\
FRB20200607A & 325.36 & 55.54 & 0.14584 & 466.90 & 30.00 & 0.11 & Gupta et al. in prep.\\
FRB20180924B & 0.74 & $-49.41$ & 1.94215 & 362.40 & 40.50 & 0.16 & \cite{Hao_qiu_2020_ASKAP_FRB_scattering_props}\\
FRB20181112A & 342.60 & $-47.70$ & 0.05998 & 588.80 & 41.70 & 0.17 & \cite{Hao_qiu_2020_ASKAP_FRB_scattering_props}\\
FRB20190102C & 312.65 & $-33.49$ & 0.11710 & 364.40 & 57.40 & 0.33 & \cite{Hao_qiu_2020_ASKAP_FRB_scattering_props}\\
FRB20190608B & 53.21 & $-48.53$ & 9.42513 & 339.50 & 37.30 & 0.14 & \cite{Hao_qiu_2020_ASKAP_FRB_scattering_props}\\
FRB20190611B & 312.94 & $-33.28$ & 0.51410 & 321.40 & 57.80 & 0.34 & \cite{Hao_qiu_2020_ASKAP_FRB_scattering_props}\\
FRB20190711A & 310.91 & $-33.90$ & <3.19883 & 590.50 & 56.50 & 0.32 & \cite{Hao_qiu_2020_ASKAP_FRB_scattering_props}\\
FRB20191001A & 341.23 & $-44.90$ & 1.52133 & 507.90 & 44.00 & 0.19 & \cite{Bhandari2020_frb191001_frb_afterglow}\\
FRB20180916B & 129.71 & $3.73$ & 0.02255 & 348.70 & 199.00 & 4.85 & \cite{Marcote_2020_180916_localisation_Nature}\\
FRB20200120E & 142.20 & $41.22$ & <0.00038 & 87.75 & 41.62 & 0.17 & \cite{Nimmo2021_nanosecond_FRB200120E}\\
\hline
\end{tabular}
\caption{
List of FRB properties ($Gl$, $Gb$, scattering timescales and DM) for the sample of FRBs which have their scattering properties measured after coherent dedispersion.
For comparison, we also list the predicted DM and scattering contribution from the Milky Way ISM along their lines of sight using the NE2001 model.
This sample has been used in Section \ref{subsec:DM-tau relation} to test the existence of a DM-$\tau$ relation. }
\label{table:voltage_frbs}
\end{table*}
If the IGM is the dominant source of the observed scattering in FRBs, then because SM is the integral of the amplitude of turbulence along the propagation path of an FRB, we expect the effective scattering measure along the line of sight to be correlated with the distance to the source. The plasma in the IGM is already known to be the dominant contributor to the DM budget of the FRBs observed from cosmological distances \citep{Macquart2020_IGM_Baryons_DM_z_relation}.
We therefore expect a DM$-$SM or DM$-\tau$ relation to emerge once a sufficiently large sample of FRBs is available.
UTMOST has detected 17 new FRBs so far, of which 12 have voltage data at the native resolution of the telescope and with full phase information available -- the largest sample detected at any telescope. This enables the use of coherent dedispersion to remove instrumental smearing effects and allows careful modelling and analysis of the scattering timescales exhibited by FRBs.
Using the modelling procedure outlined in Section \ref{subsec:modelling_191107}, we fit scattered Gaussian template profiles to all the FRBs for which we have voltage data available (\citealt{Farah2019}, Gupta et al. in prep).
The modelled values of the scattering timescale are plotted against the extragalactic component of the DM of all the voltage-capture FRBs in Fig. \ref{fig:DM-tau_UTMOST}. For those FRBs which do not show evidence of an exponential scattering tail we plot the 2-$\sigma$ upper limits on the derived values of $\tau$. The values are scaled to 1 GHz assuming a spectral power-law with index $-4$: $\tau(\nu) \propto \nu^{-4}$.
We fit the data with a simple power-law model of the form:
\begin{equation}
\label{eqn:slopy_line_model}
\tau_{\rm 1\,GHz} = 10^b \times {\rm DM}_{\rm EG}^m
\end{equation}
where $m$ is the power-law index and $b$ is the scaling parameter.
For parameter estimation, we use a joint-likelihood function $\mathcal{L}(\theta ~|~ \tau, \tau^{UL}) = \mathcal{L}_1(\theta ~|~ \tau) \times \mathcal{L}_2(\theta ~|~ \tau^{UL})$, where $\mathcal{L}_1$ is the likelihood of the model $M$ with parameters $\theta$ in the presence of scattering timescale measurements $\tau$, and $\mathcal{L}_2$ is the likelihood of the model in the presence of scattering upper-limits $\tau^{UL}$.
We define $\mathcal{L}_1$ as a Gaussian likelihood function of the form (see \citealt{Shannon_and_Cordes_2010_spin_noise_modelling}):
\begin{equation}
\mathcal{L}_1 (\theta ~|~ \tau) = \prod_{i}^{N} \frac{1}{\sqrt{2\pi\varepsilon_1^2}}{\rm exp} \Big[ - \frac{(\tau_i - M_i(\theta))^2}{2\varepsilon_1^2} \Big],
\end{equation}
\noindent and $\mathcal{L}_2$ as an upper-limit likelihood function of the form:
\begin{equation}
\mathcal{L}_2(\theta ~|~ \tau^{UL}) = \prod_i^N \bigg( 1 - \frac{1}{2}\textrm{erfc} \Big[\frac{\tau_i^{UL} - M_i(\theta)}{\sqrt{2}\varepsilon_2}\Big] \bigg),
\end{equation}
\noindent $\varepsilon_1$ and $\varepsilon_2$ quantify the uncertainty in the residual, and can be calculated as the quadrature sum of individual uncertainties using the following relations:
\begin{equation}
\varepsilon_1 = \sqrt{m^2 (\Delta\log {\rm DM}_{EG})^2 + (\Delta\log\tau)^2 + s^2}
\end{equation}
\begin{equation}
\varepsilon_2 = \sqrt{m^2 (\Delta\log {\rm DM}_{EG})^2 + s^2}
\end{equation}
\noindent where $\Delta\log {\rm DM}_{\rm EG}$ is the uncertainty in the estimated value of $\log {\rm DM}_{\rm EG}$, $\Delta \log\tau$ is the uncertainty in the measured value of $\log\tau$, and $s$ is an additional parameter introduced to quantify the scatter about the best-fit model.
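The joint likelihood described above can be sketched as follows; the data-tuple layout is our own illustrative choice, and the model is evaluated in $\log_{10}$ space, i.e. $\log\tau_{\rm 1\,GHz} = m\,\log{\rm DM}_{\rm EG} + b$:

```python
import math

def log_likelihood(theta, detections, upper_limits):
    """Joint log-likelihood L1 * L2 of the power-law model in log-log space.

    theta        : (m, b, s) -- slope, offset, excess scatter
    detections   : list of (log_dm, log_tau, dlog_dm, dlog_tau)
    upper_limits : list of (log_dm, log_tau_ul, dlog_dm)
    """
    m, b, s = theta
    logl = 0.0
    # L1: Gaussian likelihood for measured scattering timescales
    for log_dm, log_tau, ddm, dtau in detections:
        eps1 = math.sqrt(m**2 * ddm**2 + dtau**2 + s**2)
        resid = log_tau - (m * log_dm + b)
        logl += -0.5 * math.log(2.0 * math.pi * eps1**2) - 0.5 * (resid / eps1)**2
    # L2: complementary-error-function likelihood for upper limits
    for log_dm, log_tau_ul, ddm in upper_limits:
        eps2 = math.sqrt(m**2 * ddm**2 + s**2)
        p = 1.0 - 0.5 * math.erfc((log_tau_ul - (m * log_dm + b))
                                  / (math.sqrt(2.0) * eps2))
        logl += math.log(max(p, 1e-300))   # guard against log(0)
    return logl
```

This log-likelihood is what a nested sampler explores over the $(m, b, s)$ prior volume; a detection far from the model line is strongly penalised, while an upper limit well above the model contributes essentially nothing.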
We use the DYNESTY nested sampler \citep{dynesty_sampler2020} to obtain the Bayesian posteriors and evidences of our models.
$\Delta \tau$ is computed using the 1-$\sigma$ confidence interval in the posterior distributions of the best fit templates to the FRB profiles, whereas we take half of the predicted DM contribution of the Milky Way's ISM (NE2001 model; \citealt{NE2001}) as the error in the estimated value of DM$_{\rm EG}$.
We find that the data are well fit by the model with parameters $m=2^{+1}_{-1}, b=-6^{+3}_{-3}$ and a scatter $s=1.1^{+0.4}_{-0.3}$. The best fit model, along with the measured scatter, is plotted in Fig \ref{fig:DM-tau_UTMOST}.
To test the hypothesis that the data support this power-law relation, we compute the Bayes factor ($\mathcal{B}$) using the ratio of the marginal likelihood of our power-law fit to that of a model with a power-law index of $m=0$, i.e. a model with no correlation between the observed $\tau$ and the DM$_{\rm EG}$ of the FRBs. We adopt the Jeffreys scale \citep{jeffreys1998theory_bayes_factor_citation, Trotta_2008_Bayes_factor_Jeffreys_scale} for the interpretation of the evaluated Bayes factor, and find that our data provide negligible or only marginal evidence in favour of the power-law model ($m \neq 0$), with $\log \mathcal{B} < 1$.
We expand our sample of bursts by adding all those FRBs reported in the literature whose scattering timescales have been measured accurately using coherently dedispersed data. The names and scattering timescales of all FRBs used in this sample are listed in Table \ref{table:voltage_frbs}.
Using this expanded sample of FRBs, the values of the parameters of the power-law model that best fits the data are: $m=1.4^{+0.7}_{-0.6}, b=-4.2^{+1.7}_{-1.8}$, and a scatter $s=1.0^{+0.2}_{-0.2}$. Once again, we also fit the data with a power-law index fixed at 0 indicating no correlation between the observed $\tau$ and DM$_{EG}$, and compare the marginal likelihoods of the two fits in order to evaluate the Bayes factor $\mathcal{B}$. We find that our sample shows slightly increased support for the power-law model, however, the evidence is still weak, with $1 < \log \mathcal{B} < 2.5$.
Therefore, we conclude that our data hint at a potential DM-$\tau$ relation in FRBs; however, we do not find strong evidence to establish the existence of such a relation.
This result is consistent with the findings of \cite{Ravi2019_observed_prop}, \cite{Hao_qiu_2020_ASKAP_FRB_scattering_props} and \cite{First_CHIME_catalog_2021} who investigated the scattering properties of the sample of FRBs detected at Parkes, ASKAP, and CHIME respectively, and found no evidence for a potential DM-$\tau$ relation in their samples of FRBs.
However, their samples of FRBs were not coherently dedispersed and had coarser time resolution than the sample analysed in this work. Therefore, it is possible that the measured scattering timescales of FRBs in their samples were biased towards larger values due to the poorer frequency and time resolution of their data, diluting the evidence for the existence of the DM-$\tau$ relation.
Future surveys hold the promise of detecting large numbers of FRBs with coherently dedispersed data which can provide a large sample of accurately measured scattering timescales and DM, and will be instrumental in establishing or excluding the existence of a DM-$\tau$ relation in FRBs.
In addition, it is important to note that while past surveys like those at the CHIME, Parkes and ASKAP telescopes have measurement biases against FRBs with widths less than their sampling time due to the lack of access to raw voltage data, surveys at telescopes like UTMOST, which have access to coherently dedispersed data for the majority of their FRBs, still suffer from a detection bias against FRBs with large total widths \citep{Gupta_et_al_2021_mock_frb_injection_utmost, Connor2019_interpreting_the_distribution_of_frb_observables}.
Future modelling of the DM-$\tau$ relation will require careful investigation and correction for these detection and measurement biases in the detected sample of FRBs by a given telescope.
\section{On the origin of scattering in FRBs}
\label{sec: Origin of scattering}
Similarly to pulsars, FRBs as a population exhibit a wide range of scattering times, spanning several orders of magnitude, and as with pulsars, scattering is likely to arise in turbulent ionised media located along the line-of-sight. In this section we examine what can be inferred about the location of the scattering medium from the properties of our UTMOST FRBs. We consider all likely causes of scattering along the line-of-sight, from the ISM and halo of the Milky Way, the IGM, the FRB's host galaxy, and the ``circumburst'' environment in the immediate vicinity of the FRB.
\subsection{ISM of the Milky Way}
\label{subsec: origin_of_scattering:ISM of Milky Way}
Estimates of scattering timescale along a given line of sight due to the ISM of the Milky Way are available from the NE2001 and/or YMW16 models \citep{NE2001, YMW16}, which can be compared against the measured scattering timescale for FRBs. We list these predicted scattering timescales from the NE2001 model, along with the measured scattering timescales for FRBs in our sample in Table \ref{table:voltage_frbs} for side-by-side comparison.
Most FRBs in our sample have been detected at high Galactic latitudes, where the estimates of scattering due to the ISM from the NE2001 and YMW16 models are several orders of magnitude smaller than the observed FRB scattering timescales. \cite{Ocker_and_Cordes_2021_Halo_scattering_limits} have also modelled the contribution of the thick disc of the Milky Way ISM, and predict scattering timescales in the range 29~ns to 0.25~$\mu$s for lines of sight sampling the thick disc at Galactic latitudes $>20^{\circ}$, also a negligible contribution to the observed scattering timescales in FRBs.
\subsection{The Milky Way halo}
\label{subsec: origin_of_scattering:halo of Milky Way}
The density profile of the tenuous ionised material in the Milky Way halo remains poorly constrained by observations, due to small numbers of suitable tracers and its very low emissivity \citep{Gupta_et_al_2012_MW_Halo_EM}. FRBs have opened up an entirely new means of probing the density and turbulence of this material \citep{Platts2020_MW_halo}. \cite{Ocker_and_Cordes_2021_Halo_scattering_limits} have used two nearby repeating FRBs, namely, FRB20121102A and FRB20180916B, to constrain the amplitude of scattering due to the Milky Way halo. They set an upper limit on scattering timescale at 1 GHz of $\lesssim$ 12 $\mu$s, which, as they point out, is comparable to scattering effects of the Galactic disc ISM at high Galactic latitudes. This rules out the Milky Way halo as the origin of the scattering timescales observed in our FRBs.
\subsection{The Intergalactic Medium}
\label{subsec: origin_of_scattering:IGM}
In the previous section (Section \ref{subsec:DM-tau relation}) we have shown that there is little to no evidence for the existence of a strong DM-$\tau$ relation amongst FRBs. Additionally, the stringent upper limit we set on the strength of turbulence in the IGM using FRB20191107B (SM $< 8.4 \times 10^{-7} {\rm ~kpc~m^{-20/3}}$; see Fig \ref{fig:SM_limit_z}) suggests that the diffuse ionised plasma in the IGM is unlikely to be the dominant source of scattering observed in FRBs. The rest of our FRBs have scattering timescales much greater than this turbulence can provide.
Gaseous discs and the circumgalactic medium (CGM) of intervening galaxies can intersect the lines of sight of FRBs as the radiation traverses the IGM, as has been seen in FRBs reported by \citet{Ravi2019_localisation, Prochaska2019_FRB181112, Simha2020_190608_IGM_cosmic_web, Connor2020_FBR191108}.
The probability of intersecting a galaxy disc is expected to be low, computed to be only approximately 5\% \citep{Macquart&Koay2013} for sources up to $z < 1.5$. While it is true that, due to the geometric lever arm effect, the ISM of a foreground galaxy would dominate the observed scattering along that line of sight, the low probability of intersecting such foreground systems means that intervening galactic discs can account for the observed scattering in at most one or two FRBs in our sample.
\cite{Vedantham2019_scattering_CGM} have argued that the CGM of intervening galaxies can contribute between 0.1-10 ms of temporal broadening (at 30 cm wavelength) due to scattering.
However, \cite{Cho2020_pfbinverted_181112} have set an upper limit on the scattering timescale of 20 $\mu$s for FRB20181112A, despite its line of sight passing through the halo of a foreground galaxy, showing that even smaller scattering times than those estimated by \cite{Vedantham2019_scattering_CGM} are possible.
Extending the analysis of \cite{Cho2020_pfbinverted_181112}, \cite{Simha2020_190608_IGM_cosmic_web} studied the effect of the foreground halo present along the line of sight to FRB20190608B and found that the ionised plasma in the halo of the intervening galaxy cannot produce sufficient scattering to account for the observed scattering timescale of the FRB.
\cite{Ocker_and_Cordes_2021_Halo_scattering_limits} also use the scattering timescales observed in FRB20191108A and repeat bursts from FRB20180916B to constrain the density of ionised plasma in galaxy halos, and conclude that the halos may not be sufficiently turbulent, with very little scattering occurring in the intervening galaxy halos along these FRB lines of sight.
Therefore, for most of the FRBs in our sample, we are able to rule out all sources of scattering along the line of sight, up to the host galaxy itself, where the ISM and/or the circumburst medium could explain the observed scattering properties in our FRBs.
\subsection{Scattering in the FRB host galaxy}
\label{subsec: origin_of_scattering:host galaxy}
Circumstantial evidence for scattering in the host galaxies of FRBs has been reported in the literature. \cite{Farah2018} and \cite{Masui2015} found evidence for the presence of two scattering screens along the lines of sight to FRB20170827A and FRB20110523A respectively, and suggest that the spectral modulations associated with the temporal broadening of the bursts can be attributed to a scattering screen located within the host galaxy of the burst, while the broader scale spectral modulations are consistent with originating from within the Milky Way.
We next investigate properties of the host galaxy ISM, under the assumption that it is the dominant source of scattering observed in FRBs. To enable a direct comparison with the Milky Way's ISM, we scale the observed scattering timescales of FRBs to take out the effect of the expansion of the Universe with redshift.
The exponential decay timescale of a signal scattered by a turbulent medium with a power-law spectrum of density fluctuations (approximated as a single screen) scales with frequency ($\nu$) as a power law: $\tau(\nu) \propto \nu^{-\alpha}$, where $\alpha$ takes values between 4.0 (for a square-law structure function) and 4.4 (for a Kolmogorov spectrum) \citep{Lee_and_Jokiipi1976, Oswald21_Meerkat_pulsar_scattering_indices}.
At Earth, the frequency of emission has been redshifted and the scattering time dilated by a factor $1+z$. Consequently, the scattering produced in the host galaxy ($\tau_{\rm host}$) is modified by a combination of two effects:
\begin{equation}
\tau_{\rm obs} = \tau_{\rm host} \, \frac{1+z}{(1+z)^{\alpha}} = \tau_{\rm host} \, (1+z)^{1-\alpha}.
\end{equation}
For $\alpha = 4$, the scattering thus scales as $(1+z)^{-3}$.
In addition to the cosmological effects of frequency redshift and time dilation, the change in the relative distance to the screen with respect to the observer and the source needs to be considered when estimating $\tau_{\rm host}$ from $\tau_{\rm obs}$.
\cite{Cordes2016_FRB_scattering} have shown that the observed scattering within the host galaxy would be a factor of $\sim$3 lower than that observed from the Earth, as the waves would be planar when they arrive at the screen from a distant galaxy (after invoking reciprocity) as opposed to spherical when observed from within the host galaxy.
Therefore, under the assumption that the host's ISM is the dominant source of scattering, $\tau_{\rm host}$ can be related to $\tau_{\rm obs}$ as:
\begin{equation}
\tau_\mathrm{host} = \frac{1}{3}\,\tau_\mathrm{obs}\,(1+z)^{3}.
\label{eqn: tau_host_from_tau_obs}
\end{equation}
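As a sanity check, Eqn \ref{eqn: tau_host_from_tau_obs} and the $\tau(\nu) \propto \nu^{-\alpha}$ frequency scaling can be coded directly; the function names and example values are ours, for illustration only.

```python
def tau_at_freq(tau, nu_obs, nu_target, alpha=4.0):
    # Scale a scattering timescale between frequencies using
    # tau(nu) proportional to nu^{-alpha}.
    return tau * (nu_obs / nu_target) ** alpha

def tau_host(tau_obs, z, alpha=4.0):
    # tau_host = (1/3) * tau_obs * (1+z)^3 for alpha = 4: undo the
    # (1+z)^{-alpha} frequency redshift and the (1+z) time dilation,
    # then apply the factor-of-3 plane-wave (geometric) correction.
    return tau_obs * (1.0 + z) ** (alpha - 1.0) / 3.0
```

For example, an observed scattering time of 3 $\mu$s at $z = 1$ corresponds to 8 $\mu$s in the host frame under these assumptions.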
We scale the observed scattering timescale of the FRBs in our sample according to the above relation, and plot them over the observed scattering timescales of Milky Way pulsars in Fig \ref{fig:DM_tau_burst}.
The host galaxy contribution to the total DM is still not well constrained and a value of $\lesssim 100$ pc\,cm$^{-3}$\ has been commonly used in the literature. Assuming that DM$_{\rm host}$ typically lies in the range 10 to 100 pc\,cm$^{-3}$, we identify a region in the $\tau$-DM$_{\rm host}$ space where our scaled scattering timescales of FRBs would lie. This region is shown as the grey box in Fig \ref{fig:DM_tau_burst}.
It is evident that the values of $\tau_{\rm host}$ are orders of magnitude larger than the scattering timescales observed in Milky Way pulsars in the same DM range as adopted for the hosts (DM$_{\rm host}$). Similar levels of scattering are only observed for Milky Way pulsars with much larger values of DM ($>100$ pc\,cm$^{-3}$), which typically lie in the dense disc of our Galaxy.
Since scattering in pulsars is dominated by the ISM of the Milky Way, associating the scattering in FRBs to the ISM of their host galaxies requires that the ISM of the host galaxies is many orders of magnitude more turbulent than the ISM of the Milky Way (see \citealt{Xu_and_Zhang_2016_origin_of_scattering}). FRBs would have to occur predominantly in galaxies with highly turbulent conditions in the ISM.
Alternatively, the FRB lines of sight may be traversing larger distances through the host galaxy's ISM, increasing the likelihood of encountering multiple turbulent clumps along their path and resulting in large values of scattering similar to those seen in high-DM pulsars in the Milky Way. However, this would require that the host galaxies also produce large contributions ($>100$ pc\,cm$^{-3}$) to the observed DM of the FRB, and in case of spiral galaxies, that they appear edge-on at high inclinations when observed from the Earth.
In contrast, the small sample of FRB host galaxies that have been identified so far have shown a wide variety of properties, such as a spread of $>2$ orders of magnitude in star formation rate and total stellar mass \citep{Heintz_2020_host_galaxies_ASKAP, Mannings_2020_host_galaxies_env}. These values do not indicate that FRB progenitors reside exclusively in galaxies likely to have a highly turbulent ISM (expected in galaxies with increased energy feedback from star formation; \citealt{Xu_and_Zhang_2016_origin_of_scattering}). Furthermore, the majority of the localised FRBs have been found to originate in the outskirts of their host galaxies (physically offset in the range $0.4-5.3 R_\mathrm{eff}$, where $R_\mathrm{eff}$ is the host galaxy's effective radius; \citealt{Heintz_2020_host_galaxies_ASKAP, Mannings_2020_host_galaxies_env}) and the inclinations have been found to be generally low.
\cite{Niino_2020_host_galaxy_DM} analyzed distributions of DM of the FRB samples observed by ASKAP and the Parkes radio telescope and estimated the total DM contribution of the host galaxies (including the host's halo) to be $\sim120$ pc\,cm$^{-3}$.
More recently, \cite{Chittidi2020_190608_host_galaxy_disection} estimated the DM contribution from the host galaxy of FRB20190608B to be $\approx 85$ pc\,cm$^{-3}$, and the scattering contribution to be only 3~$\mu$s out of the 3.3 ms scattering timescale measured at 1.3 GHz by \cite{Day_et_al_MNRAS}.
We conclude that the properties of the localised sample of host galaxies do not support the scenario that the ISM of the host galaxy is the dominant source of scattering in most FRBs.
This leaves turbulence in the circumburst environment of the source as the most likely origin of the observed scattering in FRBs, as has been suggested previously \citep{Masui2015, Spitler2016}.
The large scatter in the scattering timescales of FRBs as a group would appear to be better explained by invoking a diversity in the circumburst medium of different sources, as opposed to invoking a wide range of turbulence in the host ISM or in the IGM.
We make an order-of-magnitude estimate for scattering due to the circumburst medium as follows: assuming a fiducial distance of 1 pc to the scattering screen located in the circumburst medium, Eqn \ref{eqn:tau-SM} yields SM $= 7\times10^{20} {\rm ~m^{-17/3}}$ (or $\sim 24~ {\rm kpc~m^{-20/3}}$) for the environment of FRB20191107B; the value would be up to 4 orders of magnitude larger for some of the most scattered FRBs in our sample. This is much larger than the typical values of SM observed for Milky Way pulsars, but is comparable to the SM values of some of the overdense regions identified by \cite{NE2001_II_2003} along the lines of sight to some pulsars in the Milky Way. For example, the Vela supernova remnant has an estimated SM of $\sim34$ kpc m$^{-20/3}$, while some lines of sight to a few extragalactic sources (e.g. NGC6334B) have been estimated to have even higher values of SM \citep{NE2001_II_2003}.
Consequently, the high values of SM required for the turbulent clumps of ionised plasma present along the line of sight to FRBs in the host galaxies do have some analogue in the Milky Way.
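The unit conversion between the two SM conventions quoted above is just the metre-to-kiloparsec factor; the sketch below checks it (any difference from the $\sim24~{\rm kpc~m^{-20/3}}$ quoted in the text reflects rounding of the SI value).

```python
# Metres per kiloparsec (from the IAU definition of the parsec).
KPC_IN_M = 3.0857e19

def sm_to_kpc_units(sm_si):
    # Convert a scattering measure from m^{-17/3} (C_n^2 in m^{-20/3}
    # integrated over a path length in metres) to kpc m^{-20/3}.
    return sm_si / KPC_IN_M

sm_circumburst = sm_to_kpc_units(7e20)  # ~23 kpc m^{-20/3}
```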
The presence of such dense and turbulent media in the vicinity of the source can also make a notable contribution to the Rotation Measure (RM) of an FRB, and can lead to an RM-$\tau$ relation in FRBs. Since UTMOST observes and detects FRBs using only a single circular polarisation \citep{utmost}, we do not have RM values for the majority of FRBs in our sample.
As more RM and $\tau$ values for FRBs are reported from large ongoing surveys, searching for a potential RM-$\tau$ relation in FRBs can shed more light on the origin of scattering in the vicinity of the burst progenitor.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figures/DM-tau_burst.png}
\caption{Expected scattering timescales of FRBs after scaling to the frequency of 1 GHz in the rest frame of the host galaxy under the assumption that the host galaxy is the dominant source of scattering. The red circles show the measured values of scattering timescale at Earth. Taking out the effect of redshifting of frequency and dilation of time, the scattering timescales scale along the dashed grey lines (following Eqn. \ref{eqn: tau_host_from_tau_obs}) for each FRB and would lie in the region highlighted in the grey box depending upon the DM contributed by the host galaxy.}
\label{fig:DM_tau_burst}
\end{figure}
\section{Conclusions}
\label{sec: Conclusions}
We report the detection of FRB20191107B with UTMOST. Using the raw voltage data captured for the FRB, we have analysed the temporal properties of FRB20191107B, the narrowest FRB so far detected with UTMOST, at the native instrument time resolution (10.24 $\mu$s). We model three components in the burst profile and measure a DM of 715.7 pc\,cm$^{-3}$, a scattering time $\tau = 21.4~\mu$s, and an intrinsic width of only 11.3 $\mu$s.
We model the loss of sensitivity to narrow-width FRBs caused by the limited time and frequency resolution of a survey. Assuming a power-law distribution of burst energies with a power-law index of $-1.8$, we find that UTMOST's FRB survey would only detect $\sim5$\% of FRBs at the measured width of FRB20191107B.
Using the reported sensitivities of other prominent radio telescope surveys, such as those at CHIME, Murriyang (also known as the Parkes radio telescope), and ASKAP, we find that most current FRB surveys are similarly insensitive to FRBs with intrinsically narrow widths.
The detection of a single such event, FRB20191107B, suggests that a significant population of such FRBs may exist which evades the searched parameter space of most active FRB surveys.
For FRBs originating at a given redshift, those with narrower intrinsic widths will have a higher flux and are more likely to be detected than broader bursts. Therefore, improving the sensitivity of a survey to narrow-width events will also make the survey more sensitive to high-redshift FRBs.
FRBs carry with them the signature of the properties of the ionised plasma that lies along their propagation path. FRBs coming from Gpc distances offer unique probes of the properties of their host galaxies, any intervening galaxies, and the IGM along their lines of sight \citep{Ravi_et_al2019_tomography_of_universe}. Detecting a large sample ($10^3$-$10^5$) of FRBs from high redshifts ($z > 3$) has been deemed necessary to enable their use as cosmological probes of the evolution history of the Universe \citep{Caleb2019_EoR, Fialkov_and_Loeb2016_EoR, Pagano_et_al2021_EoR, Hashimoto_et_al2021EoR}.
Our findings suggest that increasing the sensitivity for narrow width FRBs would be useful in probing the population of high redshift FRBs and future surveys should be designed while considering the benefits and costs of searching for FRBs at a higher time and frequency resolution.
We use the observed scattering timescale of FRB20191107B to place a stringent upper limit on the strength of turbulence in the IGM (SM $< 8.4 \times 10^{-7} {\rm ~kpc~m^{-20/3}}$; see Section \ref{subsec:DM-tau relation} and Fig \ref{fig:SM_limit_z}).
We build a sample of 21 FRBs for which scattering timescale measurements have been reported from analyses of high time resolution data with full phase information retained via voltage capture, and search for a DM$_{EG}$-$\tau$ relation in FRBs, but find only marginal evidence in favour of its existence.
The lack of evidence in favour of a DM$_{EG}$-$\tau$ relation and the strong upper limit on the strength of turbulence in the IGM along the line of sight to FRB20191107B argue against the IGM being the dominant source of scattering in FRBs.
Recent detections of microsecond-scale scattering in the local population of repeating FRBs, FRB20180916B and FRB20200120E \citep{Marcote_2020_180916_localisation_Nature, Nimmo2021_nanosecond_FRB200120E}, have been used by \cite{Ocker_and_Cordes_2021_Halo_scattering_limits} to strongly limit the amount of scattering produced by the Milky Way halo ($<12~\mu$s).
We identify the circumburst medium of the FRB progenitor as the most likely origin of scattering in most FRBs, and compare the required levels of turbulence in the circumburst medium with those of some of the dense ionised regions found in the Milky Way.
We find that the circumburst environment of FRBs must be much more turbulent than the environment of an average Milky Way pulsar, which is consistent with the recent findings of \cite{Chawla2021_scattering_CHIME_FRBs}.
\section*{Acknowledgements}
The authors would like to thank Ryan M.~Shannon and Stefan Os{\l}owski for important discussions during the preparation of this paper. The Molonglo Observatory is owned and operated by the University of Sydney, with support from the School of Physics and the University. The UTMOST project is also supported by the Swinburne University of Technology. We acknowledge the Australian Research Council grants CE110001020 (CAASTRO) and the Laureate Fellowship FL150100148. ATD is supported by an ARC Future Fellowship grant FT150100415. This research made use of the numpy \citep{numpy}, pandas \citep{pandas-official}, matplotlib \citep{matplotlib}, astropy \citep{astropy:2013, astropy:2018}, BILBY \citep{Bilby} and PSRCHIVE \citep{PSRCHIVE2004} packages.
\section*{Data Availability}
The coherently dedispersed dynamic spectra, the frequency averaged time series and the codes used in modelling of the burst profile are available at the github repository: \url{https://github.com/vivgastro/FRB191107_paper_data_release/}. The codes used to estimate the detection fraction of narrow FRBs in Section \ref{sec:rates of narrow FRBs} are also available in the same repository. Further raw data can be made available upon reasonable request to the authors.
\bibliographystyle{mnras}
\section{Experimental details}\label{sect:experimental_details}
\subsection{CIFAR-10}
The CIFAR-10 dataset \citep{cifar10} was fetched using TensorFlow datasets\footnote{\url{https://www.tensorflow.org/datasets/catalog/cifar10}}.
In all of the CIFAR-10 experiments, the data was preprocessed by subtracting the mean and dividing by the standard deviation for each pixel and data point separately (equivalent to using \texttt{LayerNorm} as the first layer).
We inflated all of the standard deviations by $10^{-15}$ to avoid division by zero.
All the classification tasks were converted into regression tasks by encoding the targets as $C$--dimensional vectors, where $C$ is the number of classes, with the entry corresponding to the correct label set to $\frac{C - 1}{C}$ and all other entries to $-\frac{1}{C}$.
This enabled us to perform closed form NNGP and NTK inference using the Gaussian likelihood/MSE loss.
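The target encoding just described can be sketched as follows (the function name is ours); each row is a zero-mean one-hot style vector, which is convenient for MSE regression.

```python
import numpy as np

def encode_targets(labels, num_classes):
    # Correct class gets (C-1)/C, every other class -1/C,
    # so each target row sums to zero.
    targets = np.full((len(labels), num_classes), -1.0 / num_classes)
    targets[np.arange(len(labels)), labels] = (num_classes - 1) / num_classes
    return targets
```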
\subsubsection{Hyperparameter search}\label{sect:hyperparam_appendix}
The hyperparameter search was on a fixed architecture with 8x Convolution + ReLU, Attention, Flatten, and a Dense readout layer.
We used $1.7562$ and $0.1841$ respectively for the weight and bias variances as in \citep[appendix G.1]{novak2019bayesian} except for the attention output variance $\outStd^2$ which was set to one.
The convolutional layers were used with the \texttt{SAME} padding, stride one, and filter size $3 \times 3$.
For attention kernels with positional encodings, the reported $\rho$ parameter (\Cref{eq:decay_pos_emb}) is actually $\rho / (\queryStd^2 \keyStd^2)$ so that the relative scale of the contribution of $\covPosEmb{}{}$ remains the same with changing $\queryStd^2 \keyStd^2$.
There were two stages of the hyperparameter search, first to identify the most promising candidates (\Cref{tab:hypers_stage_one}), and second to refine the parameters of these candidate kernels (\Cref{tab:hypers_stage_two}).
The second stage also included the \emph{residual attention kernel} (\Cref{eq:residual_kernel}); the $\alpha$ in the second table should thus be interpreted as the one stated in \Cref{eq:residual_kernel} (cf.\ \Cref{sect:residual_attention_appendix}).
The best hyperparameters used in \Cref{fig:cifar_pe} and \Cref{tab:full_data_results} can be found in a bold typeset in \Cref{tab:hypers_stage_two}.
All computation was done in 32-bit precision, and run on up to 8 NVIDIA V100 GPUs with 16Gb of RAM each.
\begin{table}[htbp]
\caption{Hyperparameter values for the first stage of search.
\textsc{value positional encoding} stands for whether the positional encodings should be added to all $Q$, $K$, and $V$ (\textsc{True}), or only to the inputs of $Q$ and $K$ (\textsc{False}; see \Cref{sect:struct_pe}).
\textsc{encodings covariance} represent whether positional encodings should be added ($0$ for no), and if so, what should their initialisation covariance be ($I$ for identity, and $\covPosEmb{}{}$ for the covariance defined in \Cref{eq:decay_pos_emb}).
$\zeta = \text{softmax}$ was only used when \textsc{query/key scaling} was $d^{-1}$ (see \Cref{sect:beyon_vanilla_attn}).
$\varphi, \rho, \alpha$ were skipped when \textsc{value positional encoding} was \textsc{False}, and $\varphi$ was only varied when \textsc{encodings covariance} was $\covPosEmb{}{}$.}
\label{tab:hypers_stage_one}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lc}
\toprule
hyperparameter & values \\
\midrule
query/key scaling & $\{ d^{-1/2} , d^{-1} \}$ \\
$\zeta$ (\Cref{sect:softmax_alternatives}) & \{softmax, identity\} \\
value positional encoding & \{True, False\} \\
encodings covariance & $\{ 0, I , \covPosEmb{}{} \}$ \\
$\varphi$ (\Cref{eq:decay_pos_emb}) & $\{ 1, 5 \}$ \\
$\rho$ (\Cref{eq:decay_pos_emb}) & $\{ 1\}$ \\
$\alpha$ (\Cref{eq:kernel_interp_op}) & $\{ 0.5, 0.8 \}$ \\
$\stdSymbol_{\querySymbol} \cdot \stdSymbol_{\keySymbol}$ & $\{ 0.1, 1.0 \}$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.18in
\end{table}
\begin{table}[htbp]
\caption{Hyperparameter values for the second stage of search.
See \Cref{tab:hypers_stage_one} for description of the individual hyperparameters.
The parameters that achieved the best \emph{NNGP} validation accuracy and were selected for the subsequent experiments are in a bold typeset.}
\label{tab:hypers_stage_two}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcc}
\toprule
hyperparameter & struct & residual \\
\midrule
query/key scaling & $\{ \boldsymbol{d^{-1}} \}$ & $\{ \boldsymbol{d^{-1}} \}$ \\
$\zeta$ (\Cref{sect:softmax_alternatives}) & \{\textbf{softmax}\} & \{\textbf{identity}\} \\
value positional encoding & \{True, \textbf{False}\} & -- \\
encodings covariance & $\{ \boldsymbol{\covPosEmb{}{}} \}$ & $\{ \boldsymbol{\covPosEmb{}{}} \}$ \\
$\varphi$ (\Cref{eq:decay_pos_emb}) & $\{ 1, \boldsymbol{5}, 10 \}$ & $\{ 1, \boldsymbol{5}, 10 \}$ \\
$\alpha$ (\Cref{eq:kernel_interp_op,eq:residual_kernel}) & $\{ \boldsymbol{0.4}, 0.5, 0.65, 0.8, 0.9 \}$ & $\{ 0.4, 0.5, \boldsymbol{0.65}, 0.8, 0.9 \}$ \\
$\rho$ (\Cref{eq:decay_pos_emb}) & $\{ 0.5, 1, \boldsymbol{1.5} \}$ & $\{ \boldsymbol{0.5} , 1 , 1.5 \}$ \\
$\stdSymbol_{\querySymbol} \cdot \stdSymbol_{\keySymbol}$ & $\{ 0.001, \boldsymbol{0.1}, 1.0 \}$ & -- \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.18in
\end{table}
\subsubsection{Details for \Cref{fig:convergence_plots}}\label{sect:convergence_appendix}
The downsampling was performed using \texttt{skimage.transform.resize} with parameters \texttt{mode="reflect"} and \texttt{anti\_aliasing=True}, using downsampled height and width of size 8 as mentioned.
Both the convergence and accuracy plots are for the $d^{-1/2}$ vanilla NNGP kernel with $\zeta = \text{softmax}$.
The intractable softmax integral of the limiting covariance function was estimated using MC integration with 2048 samples.
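As a generic illustration of this kind of MC estimate (not the exact computation performed by our kernel code), one can draw Gaussian samples with the current covariance and average the softmax outer products:

```python
import numpy as np

def mc_softmax_kernel(K, num_samples=2048, seed=0):
    # MC estimate of E_{f ~ N(0, K)}[softmax(f) softmax(f)^T],
    # a stand-in for the intractable softmax expectation.
    rng = np.random.default_rng(seed)
    # Small jitter keeps the Cholesky factorisation stable.
    L = np.linalg.cholesky(K + 1e-9 * np.eye(len(K)))
    f = rng.standard_normal((num_samples, len(K))) @ L.T
    p = np.exp(f - f.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return p.T @ p / num_samples
```

Since each softmax sample sums to one, the entries of the estimated matrix always sum to one, which gives a cheap sanity check.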
We used $1.7562$ and $0.1841$ respectively for the weight and bias variances as in \citep[appendix G.1]{novak2019bayesian} for all the convolutional and dense layers, $1.7562$ for the $\keyStd^2, \queryStd^2$ and $\valueStd^2$, and $\outStd^2 = 1$.
The convolutional layer used \texttt{VALID} padding, stride one, and filter size $3 \times 3$.
As in \citep{novak2019bayesian}, the reported distance between kernel matrices is the logarithm of
\begin{align}
\frac{
\|
\hat{\mathcal{K}} - \mathcal{K}
\|_F^2
}{
\|
\mathcal{K}
\|_F^2
}
\, ,
\end{align}
where $\hat{\mathcal{K}}$ and $\mathcal{K}$ are respectively the empirical and the predicted theoretical covariance matrices for the training set.
All computation was done in 32-bit precision, and run on up to 8 NVIDIA V100 GPUs with 16Gb of RAM each.
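The distance above amounts to the log of the squared relative Frobenius error, which is a one-liner (function name ours):

```python
import numpy as np

def log_kernel_distance(K_emp, K_theory):
    # Log of the squared Frobenius distance between the empirical and
    # theoretical kernel matrices, relative to the theoretical norm.
    num = np.linalg.norm(K_emp - K_theory, ord="fro") ** 2
    den = np.linalg.norm(K_theory, ord="fro") ** 2
    return np.log(num / den)
```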
\subsubsection{Details for \Cref{fig:softmax_replacements}}\label{sect:replacements_appendix}
We used a 45K/5K train/validation split of the usual 50K CIFAR-10 training set and reported the validation set accuracy after training for 1000 epochs with batch size 64 and the Adam optimiser.
The attention layers used the usual $d^{-1/2}$ scaling of the query/key inner products, and the convolutional layers used the \texttt{SAME} padding, stride one, and filter size $3 \times 3$.
We used $2.0$ and $10^{-2}$ respectively for the weight and bias variances except in the attention where $\queryStd^2 = \keyStd^2 = \valueStd^2 = 2$ but $\outStd^2 = 1$.
Further, we used the \texttt{append} type positional encodings (\Cref{sect:positional_encodings}) with the same embedding dimension as \textsc{n\_channels} (\Cref{tab:softmax_replacements_hypers}), thus doubling the embedding dimension of the attention layer inputs.
All computation was done in 32-bit precision, and run on a single NVIDIA V100 GPU with 16Gb of RAM each.
\begin{table}[htbp]
\caption{Hyperparameter values for which results are reported in \Cref{fig:softmax_replacements}.
\textsc{n\_channels} is the number of channels used in the convolutional layers.
The same number was used for $\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}$ and output dimension in the attention layer, but $\headSymbol_{\sequenceVariable}^{\depthSymbol} = \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} = \floor{\textsc{n\_channels}}$ to reduce the memory footprint.
The learning rate was fixed throughout the training, relying only on Adam to adapt step size.
Each configuration was run with three random seeds and each of the corresponding results was included in the appropriate column in \Cref{fig:softmax_replacements}.}
\label{tab:softmax_replacements_hypers}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lc}
\toprule
hyperparameter & values \\
\midrule
$\zeta$ (attention) & \{relu, abs, softmax\} \\
LayerNorm & \{none, per\_head , at\_output\} \\
\midrule
n\_channels & $\{ 32, 192 \}$ \\
learning rate & $\{ 10^{-3}, 10^{-2} \}$ \\
%
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.18in
\end{table}
\subsubsection{Details for \Cref{fig:cifar_pe}}\label{sect:depth_appendix}
We used $1.7562$ and $0.1841$ respectively for the weight and bias variances as in \citep[appendix G.1]{novak2019bayesian} except for the attention output variance $\outStd^2$ which was set to one.
The convolutional layers were used with the \texttt{SAME} padding, stride one, and filter size $3 \times 3$.
For the vanilla attention kernels, we report the best performance over $\stdSymbol_{\querySymbol} \stdSymbol_{\keySymbol} = \{ 10^{-3}, 10^{-1}, 1, 2, 10 \}$ at each depth.
The \texttt{Struct} and \texttt{Residual} were used with the best hyperparameters found during hyperparameter search as reported in \Cref{sect:hyperparam_appendix}.
All computation was done in 32-bit precision, and run on up to 8 NVIDIA V100 GPUs with 16Gb of RAM each.
\subsubsection{Details for \Cref{tab:full_data_results}}\label{sect:full_cifar_appendix}
The best set-up from \Cref{sect:hyperparam_appendix} was used (including the best hyperparameters as stated in \Cref{tab:hypers_stage_two}).
All computation was done in 64-bit precision, and run on up to 8 NVIDIA V100 GPUs with 16Gb of RAM each.
\subsection{IMDb}
\subsubsection{General settings for \Cref{tab:imdb_full_results} and \Cref{tab:imdb_small_results}.}
The IMDb reviews dataset \citep{maas2011learning} was fetched using TensorFlow datasets\footnote{\url{https://www.tensorflow.org/datasets/catalog/imdb\_reviews}}.
All sentences were truncated or padded to 1000 tokens using the default settings of \texttt{tf.keras.preprocessing.text.Tokenizer}\footnote{\url{https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/text/Tokenizer}}. No words were removed from the embedding model dictionary. Tokens were embedded using GloVe embeddings \citep{pennington2014glove} with no other pre-processing. Binary targets were mapped to $\left\{-0.5, 0.5\right\}$ values. Diagonal regularizers for inference were selected based on validation performance among the values of $10^{-7},10^{-6},\dots,1$ multiplied by the mean trace of the kernel.
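The regulariser grid described in the last sentence can be sketched as follows (function name ours): candidate diagonal regularisers are $10^{-7}, \dots, 1$ times the mean of the training kernel's diagonal.

```python
import numpy as np

def regulariser_grid(K_train, exponents=range(-7, 1)):
    # Scale the candidate regularisers by the mean trace of the kernel,
    # so the grid adapts to the kernel's overall magnitude.
    scale = np.trace(K_train) / K_train.shape[0]
    return [10.0 ** e * scale for e in exponents]
```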
When applicable, all models used ReLU nonlinearities, \texttt{Struct} (Structured positional encoding, $d^{-1}$ scaling, \Cref{tab:kernel_overview}) kernel with $\zeta$ being the row-wise softmax function (\Cref{eq:projection_def}), decaying positional embeddings used only for the attention keys and queries, with $\varphi=2.5$ (\Cref{eq:decay_pos_emb}), $\alpha = 0.75$, and $\rho = 1$ (\Cref{eq:kernel_interp_op}). These parameters were selected based on preliminary experiments with CIFAR-10, and fine-tuning on IMDb specifically is an interesting avenue for future research.
All preliminary and validation experiments were carried out in 32-bit precision, while test evaluation (reported in the \Cref{tab:imdb_full_results} and \Cref{tab:imdb_small_results}) were done in 64-bit precision. All experiments were run on machines with up to 8 NVIDIA V100 GPUs with 16Gb of RAM each.
\subsubsection{Details for \Cref{tab:imdb_full_results}}\label{sect:imdb_full_appendix}
Words were embedded using GloVe 840B.300d embeddings.
The embedding model was selected on a small-scale experiment (4000 train and 4000 validation sets) among GloVe 6B 50-, 100-, 200-, and 300-dimensional variants, as well as GloVe 840B.300d, and 1024-dimensional ELMO \citep{Peters:2018} embeddings (using TensorFlow Hub\footnote{\url{https://tfhub.dev/google/elmo/3}}). In this preliminary experiment, GloVe 840B.300d, GloVe6B.300d, and ELMO.1024d performed similarly, and GloVe 840B.300d was chosen for the full dataset experiment.
The validation experiment was run on the 25K training set partitioned into a 15K and 10K training and validation sets, with the best models then evaluated on the 25K training and 25K test sets.\footnote{Precisely, subsets of sizes 14880/9920 and 24960/24960 were used to make the dataset be divisible by 8 (the number of GPUs) times 20 (the batch size), which is a technical limitation of the Neural Tangents \citep{novak2020neural} library.}
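The subset sizes in the footnote follow from a simple divisibility constraint: the dataset is trimmed to the largest size divisible by (number of GPUs) times (batch size). A minimal sketch:

```python
# Largest subset size compatible with the device/batch divisibility constraint
# mentioned in the footnote (illustrative, not the paper's code).
def trimmed_size(n: int, n_devices: int = 8, per_device_batch: int = 20) -> int:
    """Largest m <= n divisible by n_devices * per_device_batch."""
    multiple = n_devices * per_device_batch
    return n - n % multiple
```

With 8 devices and batch size 20, this reproduces the 14880/9920 and 24960/24960 splits quoted above.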
All layers used weight and bias variances of $2$ and $0.01$ respectively, except for the attention output and value weights, whose variances were set to $1$, and the top linear readout layer, which had weight variance $1$ and no bias.
Three classes of models were considered:
\begin{enumerate}
\item \texttt{GAP-only}, doing only global average pooling over inputs followed by the linear readout.
\item \texttt{GAP-FCN}, in which GAP was followed by 0, 1, or 2 fully connected layers.
\item \texttt{Struct}, allowing the same models as \texttt{GAP-FCN}, but with an attention layer required before GAP.
\end{enumerate}
Each class could also have an optional \texttt{LayerNorm} layer following GAP. The best model from each class was then evaluated on the test set.
\subsubsection{Details for \Cref{tab:imdb_small_results}}\label{sect:imdb_small_appendix}
All convolutional layers used the total window (context) size of 9 tokens, stride 1, and \texttt{SAME} (zero) padding.
Experiments were run on 3200/1600/1600 train/validation/test splits. Four classes of models were considered:
\begin{enumerate}
\item \texttt{GAP-only}, identical to the one in \Cref{sect:imdb_full_appendix}.
\item \texttt{GAP-FCN}, also identical to the one in \Cref{sect:imdb_full_appendix}.
\item \texttt{CNN-GAP}, allowing the same models as in \texttt{GAP-FCN}, but having GAP preceded by 0, 1, 2, 4, or 8 CNN layers.
\item \texttt{Struct}, allowing the same models as in \texttt{CNN-GAP}, but having 1 or 2 attention layers (each optionally followed by \texttt{LayerNorm} over channels) before GAP. If the model also had CNN layers, attention and CNN layers were interleaved, attention layers being located closer to GAP (for example, a model with 8 CNN layers and 2 attention layers would have 7 CNN layers followed by attention, CNN, attention, GAP).
\end{enumerate}
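The layer ordering for the \texttt{Struct} class in item 4 can be made explicit with a small helper (an illustrative reconstruction of the rule described above, not the paper's code): attention layers sit closest to GAP and are interleaved with the last few CNN layers.

```python
# Illustrative sketch of the Struct layer ordering: attention layers are
# placed closest to GAP, interleaved with the trailing CNN layers.
def struct_layer_order(n_cnn: int, n_att: int) -> list:
    """Return the layer sequence for a Struct model, ending in GAP."""
    assert n_att >= 1 and n_cnn >= n_att - 1
    head = ["CNN"] * (n_cnn - (n_att - 1))   # leading block of CNN layers
    tail = []
    for _ in range(n_att - 1):               # interleave ATT with remaining CNNs
        tail += ["ATT", "CNN"]
    return head + tail + ["ATT", "GAP"]
```

For 8 CNN and 2 attention layers this yields 7 CNN layers followed by attention, CNN, attention, GAP, matching the example in the text.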
All models were allowed to have either ReLU or Erf nonlinearity, with weight and bias variances set to $2$ and $0.01$ for ReLU, and $1.7562$ and $0.1841$ for Erf; the same values were used by the attention key and query layers, while the value and output layers had variance $1$. The readout linear layer had weight variance $1$ and no bias.
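The Erf variances above admit a quick numerical sanity check (our observation, not a claim from the paper): using the standard Gaussian identity $\E_{z \sim \mathcal{N}(0,q)}[\operatorname{erf}(z)^2] = \tfrac{2}{\pi}\arcsin\bigl(\tfrac{2q}{1+2q}\bigr)$, the values $1.7562$ and $0.1841$ make $q = 1$ an approximate fixed point of the layer-to-layer pre-activation variance map; for ReLU, $\E_{z \sim \mathcal{N}(0,q)}[\mathrm{relu}(z)^2] = q/2$, so the corresponding map is $q \mapsto q + 0.01$.

```python
import math

def erf_variance_map(q: float, w_var: float = 1.7562, b_var: float = 0.1841) -> float:
    """One step of the pre-activation variance recursion for Erf, using
    E_{z~N(0,q)}[erf(z)^2] = (2/pi) * arcsin(2q / (1 + 2q))."""
    return w_var * (2.0 / math.pi) * math.asin(2.0 * q / (1.0 + 2.0 * q)) + b_var

def relu_variance_map(q: float, w_var: float = 2.0, b_var: float = 0.01) -> float:
    """For ReLU, E_{z~N(0,q)}[relu(z)^2] = q / 2, so the map is q + 0.01 here."""
    return w_var * (q / 2.0) + b_var
```

Evaluating `erf_variance_map(1.0)` gives approximately $0.9999$, i.e., unit variance is (almost exactly) preserved layer to layer.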
\begin{table}[htbp]
\caption{Best validation accuracy for various finite attention architectures. The reported numbers are an average over three random seeds.}
\label{tab:finite_attention_best}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccc}
\toprule
& softmax & relu & identity \\
\midrule
none & 64.10 & 69.68 & 70.46 \\
per\_head & 68.96 & 77.40 & 75.28 \\
at\_output & 71.70 & 79.00 & 79.56 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.18in
\end{table}
\section{Proofs}\label{sect:proofs}
\textbf{Assumptions:} We assume the~input set $\mathcal{X} \subset \R{\mathbb{N} \times \genericDimenstion^{0}}$ is \emph{countable}, and equip each of the involved countable products of real spaces (inputs, weights, outputs of intermediary layers) with the~usual \emph{Borel product $\sigma$-algebra}.
We also assume that the nonlinearities $\phi$ and $\zeta$ are \emph{continuous} and (entrywise) \emph{polynomially bounded}, i.e.,
$|\phi(z)| \leq \sum_{t = 0}^{m} c_t |z|^t$ for some $m \in \mathbb{N}$ and $c_0, \ldots, c_m \in \R{}_+$ independent of $z$,\footnote{This is a relaxation of the original `linear envelope' condition $|\phi(z)| \leq c + m |z|$ for some $c, m \in \R{}_+$, used in \citep{matthews2018gaussian,garriga2019deep} and stated in \Cref{thm:gp_convergence_sqrt}.
We decided to keep the reference to the linear envelope condition in the main text since it is general enough to guarantee convergence for all bounded (e.g., softmax, tanh) and ReLU like (e.g., ReLU, Leaky ReLU, SeLU) nonlinearities, and matches the existing literature with which the readers may already be familiar.
Nevertheless, all the presented proofs are valid for the polynomially bounded nonlinearities, similarly to \citep{yang2019v2}.} and $|\zeta(G)_{a i}| \leq \sum_{t = 0}^m c_t |G_{a i}|^t$ for some $m \in \mathbb{N}$ and $c_0, \ldots , c_m \in \R{}_+$ independent of $G$.
For the NTK proofs, we further assume that $\nabla \phi$ and $\nabla \zeta$ are continuous and bounded almost everywhere, where for ReLU, Leaky ReLU, or similar, we set $\nabla \phi(0) \coloneqq \lim_{z \to 0^-} \nabla \phi(z)$, which for ReLU is equal to zero.
Following \citet{matthews2018gaussian}, we will use the~`infinite width, finite fan-out' construction of the~sequence of NNs.
In particular, we will assume that for any attention layer $\ell \in [L + 1]$ and $n \in \mathbb{N}$, the~output is computed as defined in~\Cref{eq:attention_out}, but we will add a~countably infinite number of additional heads which do not affect the output of the $n$\textsuperscript{th} network, but are used by wider networks,
i.e., each head $h > \headSymbol_{\sequenceVariable}^{\depthSymbol}$ is only used to compute the~outputs by networks with index $m \in \mathbb{N}$ such that $d_{m}^{\ell,H} \geq h$.
A similar construction can be used for fully connected, convolutional, and other types of layers, as demonstrated in~\citep{matthews2018gaussian,garriga2019deep}.
Since the~outputs remain unchanged, a~proof of convergence of the~`infinite width, finite fan-out networks' implies convergence of the~standard finite width networks, and thus the~construction should be viewed only as an~analytical tool which will allow us to treat all the~random variables
\begin{equation*}
\{
\indexedActivation{\ell}{n, ij}{x}, \indexedActivation{\ell h}{n, ij}{x}
\colon
n, h, i, j \in \mathbb{N}, \ell \in [L + 1], x \in \mathcal{X}
\}
\, ,
\end{equation*}
as defined on the~same probability space, and thus allows us to make claims about convergence in probability and similar.
Finally, we will be using the \emph{NTK parametrisation} \citep{jacot2018ntk} within the NTK convergence proofs, i.e., we implicitly treat each weight $\weightMatSymbol_{ij} \sim \mathcal{N}(0, \sigma^2 / d)$, i.i.d., as $\weightMatSymbol = \frac{\sigma}{\sqrt{d}} \widetilde{\weightMatSymbol}$ where only $\widetilde{\weightMatSymbol}$ is trainable.
This parametrisation ensures that not only the forward but also the backward pass are properly normalised; under certain conditions, proofs for NTK parametrisation can be extended to standard parametrisation \citep{lee2019wide}.
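The factorisation $\weightMatSymbol = \frac{\sigma}{\sqrt{d}} \widetilde{\weightMatSymbol}$ leaves the forward-pass distribution unchanged, which can be checked with a minimal pure-Python sketch (illustrative only, not the paper's code): sampling $\widetilde{\weightMatSymbol}$ entrywise standard normal and scaling by $\sigma / \sqrt{d}$ reproduces the $\weightMatSymbol_{ij} \sim \mathcal{N}(0, \sigma^2 / d)$ initial law, so a single output unit has variance $\sigma^2$ for an input with $\lVert x \rVert^2 = d$.

```python
import math, random

def ntk_dense_output(x, w_var: float, rng: random.Random) -> float:
    """One output unit under the NTK parametrisation: (sigma / sqrt(d)) * <w_tilde, x>
    with trainable w_tilde ~ N(0, 1) entrywise (illustrative sketch)."""
    d = len(x)
    w_tilde = [rng.gauss(0.0, 1.0) for _ in range(d)]
    return math.sqrt(w_var / d) * sum(w * xi for w, xi in zip(w_tilde, x))

rng = random.Random(0)
x = [1.0] * 256                 # input with ||x||^2 = d
samples = [ntk_dense_output(x, 2.0, rng) for _ in range(4000)]
emp_var = sum(s * s for s in samples) / len(samples)
# emp_var is close to w_var = 2.0, matching W_ij ~ N(0, sigma^2 / d)
```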
\textbf{Notation:} For random variables $X, (X_n)_{n \geq 1}$, $X_n \rightsquigarrow X$ denotes convergence in distribution, and $X_n \overset{P}{\longrightarrow} X$ convergence in probability.
For vectors $x, y \in \R{m}$, $\langle x, y \rangle = \sum_{j=1}^m x_j y_j$ denotes the~usual inner product, and for matrices $A, B \in \R{m \times m}$, $\langle A , B \rangle = \langle \vectorise(A) , \vectorise(B) \rangle = \sum_{i, j = 1}^{m} A_{i j} B_{i j}$ denotes the~Frobenius inner product.
For any $A \in \R{m \times k}$, we will use $A_{i \cdot} \in \R{1 \times k}$ and $A_{\cdot j} \in \R{m \times 1}$ to respectively denote $i$\textsuperscript{th} row and $j$\textsuperscript{th} column of the~matrix.
Later on, we will be working with finite subsets $\mathcal{L} \subset \mathcal{X} \times \mathbb{N}$ for which we define the~coordinate projections
\begin{align*}
\mathcal{L}_{\mathcal{X}}
\coloneqq
\{ x \in \mathcal{X} \colon \exists i \in \mathbb{N} \text{ s.t. } (x, i) \in \mathcal{L} \}
\, , \qquad
\mathcal{L}_{\mathbb{N}}
\coloneqq
\{ i \in \mathbb{N} \colon \exists x \in \mathcal{X} \text{ s.t. } (x, i) \in \mathcal{L} \}
\, .
\end{align*}
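The notation above translates directly into code; as a trivial but occasionally useful sanity check (purely illustrative, not from the paper), the coordinate projections are set comprehensions and the Frobenius inner product is an entrywise sum of products:

```python
# Coordinate projections of a finite marginal set L of (input, index) pairs.
def proj_inputs(marginal_set):
    """L_X: all inputs x occurring in some pair (x, i) of L."""
    return {x for (x, i) in marginal_set}

def proj_indices(marginal_set):
    """L_N: all coordinate indices i occurring in some pair (x, i) of L."""
    return {i for (x, i) in marginal_set}

def frobenius(a, b):
    """<A, B>_F = sum_ij A_ij B_ij = <vec(A), vec(B)>, for same-shape nested lists."""
    return sum(x * y for row_a, row_b in zip(a, b) for x, y in zip(row_a, row_b))
```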
Since \citet{yang2019v2} provides convergence for attention architectures under the $d^{-1}$ scaling only in the NNGP regime, we use $\tau \in \{ 1, \frac{1}{2} \}$ to refer to the different $d^{-\tau}$ scalings within the NTK proofs.
As explained in \Cref{sect:linear_scaling_limit}, the $\tau = 1$ limit is not very interesting when $\weightMatSymbol^{\querySymbol}$ and $\weightMatSymbol^{\keySymbol}$ are initialised independently with zero mean, and thus we will be assuming $\weightMatSymbol^{\querySymbol} = \weightMatSymbol^{\keySymbol}$ a.s.\ whenever $\tau = 1$.
Finally, we use $\stdSymbol_{\outputSymbol \valueSymbol} \coloneqq \stdSymbol_{\outputSymbol} \stdSymbol_{\valueSymbol}$, $\stdSymbol_{\querySymbol \keySymbol} \coloneqq \stdSymbol_{\querySymbol} \stdSymbol_{\keySymbol}$, $\lesssim$ for `less than up to a universal constant', $\poly(x_1, \ldots , x_m)$ for a polynomial in $x_1, \ldots, x_m \in \R{}$, and the shorthand
\begin{align}
\tildeLogit{n, a i}{\ell h} (x)
\coloneqq
\zeta(G_{n, a i}^{\ell h}(x))
\, .
\end{align}
\textbf{Proof technique:} The now common way of establishing convergence of various deep NN architectures is to inductively prove that, whenever a preceding layer's outputs converge in distribution to a GP, the outputs of the subsequent layer converge to a GP as well, under the same assumptions on the nonlinearities and initialisation \citep[e.g.,][]{matthews2018gaussian,lee2018deep,novak2019bayesian,garriga2019deep,yang2019v1,yang2019v2}.
We prove this induction step for the NNGP under the $d^{-1/2}$ scaling in \Cref{thm:gp_convergence_sqrt} (recall that the equivalent result under the $d^{-1}$ scaling is already known due to \citet{yang2019v2}), and for the NTK in \Cref{thm:ntk_convergence}.
As in \citep{matthews2018gaussian}, our technique is based on exchangeability (\Cref{lem:exchangeability}), and we repeatedly make use of \Cref{thm:mean_convergence} which says that if a sequence of real valued random variables $(X_n)_{n \geq 1}$ converges in distribution to $X$, and the $(X_n)_{n \geq 1}$ are uniformly integrable (\Cref{def:ui} below), then $X$ is integrable and $\E [ X_{n} ] \to \E [ X ]$.
\begin{lemma}[Exchangeability]\label{lem:exchangeability}
For any $n \in \mathbb{N}$, the outputs of an attention layer $\indexedActivation{\ell}{n, a i}{x}$ are exchangeable along the $i$ index.
Furthermore, each of $\indexedActivation{\ellh}{n}{x}, G_{n}^{\ell h}(x), \query{\sequenceVariable}{\depthSymbol \headIndex}(x), \key{\sequenceVariable}{\depthSymbol \headIndex}(x), \val{\sequenceVariable}{\depthSymbol \headIndex}(x)$ is exchangeable over the $h$ index, and for a fixed $h$, each of $\indexedActivation{\ellh}{n, a i}{x}, \query{n, a i}{\ell h}(x), \key{n, a i}{\ellh}(x), \val{n, a i}{\ell h}(x)$ is exchangeable over the $i$ index.
\end{lemma}
\begin{definition}[Uniform integrability]\label{def:ui}
A collection of real valued random variables $\mathcal{C}$ is called uniformly integrable if for any $\varepsilon > 0$ there exists $c_{\varepsilon} \geq 0$ s.t.\ $\E |X| \indicator{|X| \geq c_{\varepsilon}} \leq \varepsilon$ for all $X \in \mathcal{C}$ simultaneously.
\end{definition}
\begin{proof}[Proof of \Cref{lem:exchangeability}]
Recall that by de~Finetti's theorem, it is sufficient to exhibit a set of random variables conditioned on which the variables in question become i.i.d.
This is trivial for the columns of $\indexedActivation{\ell}{n, a i}{x}$ as we can simply condition on $\{ \indexedActivation{\ellh}{n, a i}{x} \colon h \in [\headSymbol_{\sequenceVariable}^{\depthSymbol}] \}$.
The remainder of the claims can be obtained by observing that
\begin{equation*}
\indexedActivation{\ell h}{n}{x}
=
\zeta \biggl(
\frac{1}{\sqrt{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}}
\indexedActivity{\ell - 1}{n}{x}
\weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \querySymbol}
(\indexedActivity{\ell - 1}{n}{x} \weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \keySymbol})^\top
\biggr)
\indexedActivity{\ell - 1}{n}{x}
\weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \valueSymbol}
\, ,
\end{equation*}
and thus if we condition on $\indexedActivity{\ell - 1}{n}{x}$, the variables associated with individual heads are i.i.d.
\end{proof}
\subsection{$d^{-1/2}$ NNGP convergence proof}\label{sect:nngp_proofs}
\nngpConvergence*
\begin{proof}%
Since we have assumed that the~input set $\mathcal{X}$ is countable, we can use \Cref{lem:fin_dim_marg} to see that all that we need to do to prove \Cref{thm:gp_convergence_sqrt} is to show that every finite dimensional marginal of $\activations{\sequenceVariable}{\depthSymbol}$ converges to the~corresponding Gaussian limit.
Because the~finite coordinate projections are continuous by definition of the~product topology, the~continuous mapping theorem \citep[theorem~9.3.7]{dudley02} tells us it is sufficient to prove convergence of the~finite dimensional marginals of
\begin{equation}\label{eq:def_channels}
%
%
\{
\indexedActivation{\ell}{n, \cdot j}{x} \colon x \in \mathcal{X}, j \in \mathbb{N}
\}
\, ,
\end{equation}
as any finite dimensional marginal of $\activations{\sequenceVariable}{\depthSymbol}$ can be obtained by a~finite coordinate projection.
Focusing on an~arbitrary finite marginal $\mathcal{L} \subset \mathcal{X} \times \mathbb{N}$, we follow \citeauthor{matthews2018gaussian}\ and use the~Cram{\' e}r-Wold device \citep[p.~383]{billingsey86} to reduce the~problem to that of establishing convergence of
\begin{equation}\label{eq:projection_def}
\projection_{\sequenceVariable}
\coloneqq
\sum_{(x, i) \in \mathcal{L}}
\langle
\alpha^{x, i} ,
\indexedActivation{\ell}{n, \cdot i}{x}
\rangle
\, ,
\end{equation}
for any choice of $\{ \alpha^{x, i} \in \R{\genericDimenstion^s} \colon (x, i) \in \mathcal{L} \} \subset \R{\genericDimenstion^s \times \mathcal{L}}$.
We can rewrite $\projection_{\sequenceVariable}$ as
\begin{align*}
\projection_{\sequenceVariable}
&=
\sum_{(x, i) \in \mathcal{L}}
\langle
\alpha^{x, i} ,
\indexedActivation{\ell}{n, \cdot i}{x}
\rangle
=
%
%
\sum_{(x, i) \in \mathcal{L}}
\bigl\langle
\alpha^{x, i} ,
\bigl[
\indexedActivation{\ell 1}{n}{x},
\ldots,
\indexedActivation{\ell \headSymbol_{\sequenceVariable}^{\depthSymbol}}{n}{x}
\bigr]
\weightO{\cdot i}
\bigr\rangle
%
\\
&=
\frac{1}{\sqrt{\headSymbol_{\sequenceVariable}^{\depthSymbol}}}
\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
%
\sum_{(x, i)}
\bigl\langle
\alpha^{x, i} ,
\sqrt{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\indexedActivation{\ell h}{n}{x}
\weightOGen{n, \cdot i}{\ell h, O}
\bigr\rangle
%
\eqqcolon
\frac{1}{\sqrt{\headSymbol_{\sequenceVariable}^{\depthSymbol}}}
\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\summand_{\sequenceVariable, \headIndex}
\, ,
\end{align*}
where we have defined $\weightOGen{n, \cdot i}{\ell h, O} \coloneqq [\weightOGen{n, (h \layerDimension{\depthSymbol} + 1) i}{\ell, O}, \ldots, \weightOGen{n, (h \layerDimension{\depthSymbol} + \layerDimension{\depthSymbol}) i}{\ell, O}] \in \R{\layerDimension{\depthSymbol}}$.
We are now prepared to apply lemma~10 from \citep{matthews2018gaussian} which we restate (with minor modifications) here.
\begin{lemma}[Adaptation of theorem~2 from \citep{Blum1958}]\label{lemma:eCLT}
For each $n \in \mathbb{N}$, let $\{ X_{n, i} \colon i = 1,2, \ldots \}$ be an infinitely exchangeable sequence with $\E X_{n, 1} = 0$ and $\E X_{n, 1}^2 = \sigma^2_{\rowIndex}$, such that $\lim_{n \to \infty} \sigma^2_{\rowIndex} = \sigma^2_{*}$ for some $\sigma^2_{*} \geq 0$.
Let
\begin{equation}
S_{n}
\coloneqq
\frac{1}{\sqrt{d_n}}
\sum_{i=1}^{d_n}
X_{n, i}
\, ,
\end{equation}
for some sequence $(d_n)_{n \geq 1} \subset \mathbb{N}$ s.t.\ $\lim_{n \to \infty} d_n = \infty$.
Assume:
\begin{enumerate}[\hspace{1em}(a)]
\item $\E{X_{n, 1} X_{n, 2}} = 0 $
\item $ \lim_{n \to \infty }\E{X_{n, 1}^{2} X_{n, 2}^{2}} = \sigma_{*}^{4} $
\item $ \E{|X_{n, 1}|^{3}} = \mathrm{o}(\sqrt{d_n}) $
\end{enumerate}
Then $S_{n} \rightsquigarrow Z$, where $Z = 0$ (a.s.) if $\sigma^2_{*} = 0$, and $Z \sim \mathcal{N} (0, \sigma^2_{*})$ otherwise.
\end{lemma}
Substituting $S_{n} = \projection_{\sequenceVariable}$ and $X_{n, h} = \summand_{\sequenceVariable, \headIndex}$, convergence of $\projection_{\sequenceVariable}$ follows from \Cref{lemma:eCLT}:
\begin{itemize}
\item Exchangeability requirement is satisfied by \Cref{lem:head_exchangeability}.
\item Zero mean and covariance follow from \Cref{lem:head_mean_corr_zero}.
\item Convergence of variance is established in \Cref{lem:head_var_convergence}.
\item Convergence of $\E \lbrack \summand_{\sequenceVariable, 1}^2 \summand_{\sequenceVariable, 2}^2 \rbrack$ and $\E |\summand_{\sequenceVariable, \headIndex}|^3 = \mathrm{o}(\sqrt{\headSymbol_{\sequenceVariable}^{\depthSymbol}})$ are implied by \Cref{lem:head_all_sqmoments_converge}.
\end{itemize}
Combining the~above with \Cref{lem:inner_prod_converge,lem:logit_dist_convergence} concludes the~proof.
\end{proof}
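The role of \Cref{lemma:eCLT}, a CLT for exchangeable, uncorrelated, but generally dependent summands, can be illustrated with a minimal Monte Carlo sketch (illustrative only, not from the paper). Take $X_{n,i} = V W_i$ with a shared sign $V \in \{\pm 1\}$ and independent $W_i \sim \mathcal{N}(0,1)$: the summands are exchangeable and pairwise uncorrelated but not independent, and the normalised sum is still approximately standard normal.

```python
import math, random

rng = random.Random(1)

def exchangeable_sum(n: int) -> float:
    """S_n for X_i = V * W_i: exchangeable and uncorrelated, but not independent
    (the shared sign V couples all summands). Illustrative sketch only."""
    v = rng.choice([-1.0, 1.0])
    return v * sum(rng.gauss(0.0, 1.0) for _ in range(n)) / math.sqrt(n)

draws = [exchangeable_sum(200) for _ in range(4000)]
mean = sum(draws) / len(draws)
var = sum((s - mean) ** 2 for s in draws) / len(draws)
# mean ~ 0 and var ~ 1, consistent with S_n converging to N(0, 1)
```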
\vspace{0.5\baselineskip}
\begin{lemma}[Infinite exchangeability]\label{lem:head_exchangeability}
$\summand_{\sequenceVariable, \headIndex}$ are exchangeable over the~index $h$.
\end{lemma}
\begin{proof}
Recall that by de~Finetti's theorem, it is sufficient to exhibit a~set of random variables conditioned on which the~$\{ \summand_{\sequenceVariable, \headIndex} \colon h \in \mathbb{N} \}$ are i.i.d.
From \Cref{sect:background}, we have
\begin{equation*}
\indexedActivation{\ell h}{n}{x}
=
\zeta \biggl(
\frac{1}{\sqrt{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}}
\indexedActivity{\ell - 1}{n}{x}
\weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \querySymbol}
(\indexedActivity{\ell - 1}{n}{x} \weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \keySymbol})^\top
\biggr)
\indexedActivity{\ell - 1}{n}{x}
\weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \valueSymbol}
\, .
\end{equation*}
Hence if we condition on $\{ \indexedActivity{\ell - 1}{n, \cdot j}{x} \colon j \in [\layerDimension{\ell - 1}], x \in \mathcal{L}_{\mathcal{X}} \}$, with $\mathcal{L}_{\mathcal{X}}$ as defined above, the~$\{ \summand_{\sequenceVariable, \headIndex} \}_{h \geq 1}$ become i.i.d., and thus exchangeable.
\end{proof}
\begin{lemma}[Zero mean and covariance]\label{lem:head_mean_corr_zero}
$\E \gamma_{n , 1} = \E \gamma_{n , 1} \gamma_{n , 2} = 0$.
\end{lemma}
\begin{proof}
Since $\E \weightOGen{n, \cdot i}{\ell 1, O} = 0$, we have $\E \gamma_{n , 1} = 0$ as long as $|\! \E \indexedActivation{\ell 1}{n, ij}{x} | < \infty$ for all $(i, j) \in [\genericDimenstion^s] \times \mathbb{N}$.
Substituting, $\E \indexedActivation{\ell 1}{n, ij}{x} = \E \bigl[ \zeta(G_{n}^{\ell 1})_{i \cdot} \indexedActivity{\ell - 1}{n}{x} \weightV{\cdot j} \bigr] = 0$ holds as long as $|\! \E \zeta(G_{n}^{\ell 1})_{i a} \indexedActivity{\ell - 1}{n, b k}{x}| < \infty$ for any $a, b, k \in [\genericDimenstion^s]$.
This can be obtained by combining H{\" o}lder's inequality with \Cref{lem:mmnt_propagation}.
An~analogous argument applies for $\E \gamma_{n , 1} \gamma_{n , 2} = 0$ since $\E [ \weightOGen{n, \cdot i}{\ell 1, O} (\weightOGen{n, \cdot i}{\ell 2, O})^\top] = 0$ by assumption.
\end{proof}
\begin{lemma}[Convergence of variance]\label{lem:head_var_convergence}
$\lim_{n \to \infty} \E \gamma_{n , 1}^2 = \sigma^2_{*}$.
\end{lemma}
\begin{proof}
Observe that $\E \gamma_{n , 1}^2$ can be written as
\begin{align*}
\frac{\outStd^2}{\layerDimension{\depthSymbol}}
\! \!
\sum_{(x, i), (x', j)}
\! \! \! \!
(\alpha^{x, i})^\top
\E \biggl[
\indexedActivation{\ell 1}{n}{x}
\varepsilon_{\cdot i}
(\varepsilon_{\cdot j})^\top
\indexedActivation{\ell 1}{n}{x'}^\top
\biggr]
\alpha^{x', j}
=
\frac{\outStd^2}{\layerDimension{\depthSymbol}}
\sum_{(x, i)}
(\alpha^{x, i})^\top
\E \biggl[
\indexedActivation{\ell 1}{n}{x}
\indexedActivation{\ell 1}{n}{x}^\top
\biggr]
\alpha^{x, i}
\, ,
\end{align*}
and thus it will be sufficient to show that $\E [ \indexedActivation{\ell 1}{n}{x} \indexedActivation{\ell 1}{n}{x}^\top ] / \layerDimension{\depthSymbol}$
converges to the mean of the weak distributional limit.
\begin{align*}
\frac{1}{\layerDimension{\depthSymbol}}
\E \biggl[
\indexedActivation{\ell 1}{n}{x}
\indexedActivation{\ell 1}{n}{x}^\top
\biggr]
&=
\frac{1}{\layerDimension{\depthSymbol}}
\E \biggl[
\zeta(G_{n}^{\ell 1} (x))
\indexedActivity{\ell - 1}{n}{x}
W_{n}^{\ell 1 , V}
(W_{n}^{\ell 1 , V})^\top
\indexedActivity{\ell - 1}{n}{x}^\top
\zeta(G_{n}^{\ell 1} (x))^\top
\biggr]
\\
&=
\valueStd^2
\E \biggl[
\zeta(G_{n}^{\ell 1} (x))
\frac{
\indexedActivity{\ell - 1}{n}{x}
\indexedActivity{\ell - 1}{n}{x}^\top
}{\layerDimension{\ell - 1}}
\zeta(G_{n}^{\ell 1} (x))^\top
\biggr]
\, ,
\end{align*}
suggests that the~desired result can be obtained by an application of \Cref{thm:mean_convergence}, which requires that the~integrands converge in distribution to the~relevant limit, and that their collection is uniformly integrable.
Combining the continuous mapping theorem with \Cref{lem:inner_prod_converge,lem:logit_dist_convergence,lem:slutsky} yields convergence in distribution; applying H{\" o}lder's inequality, the polynomial bound on $\zeta$, and \Cref{lem:mmnt_propagation} yields uniform integrability by \Cref{lem:sup_ui}, concluding the proof.
\end{proof}
\begin{lemma}\label{lem:head_all_sqmoments_converge}
For any $h, h' \in \mathbb{N}$, $\E \lbrack \gamma_{n , h}^2 \gamma_{n , h'}^2 \rbrack$ converges
to the mean of the weak limit of the distributions of $\{ \gamma_{n , h}^2 \gamma_{n , h'}^2 \}_{n \geq 1}$.
\end{lemma}
\begin{proof}[Proof of \Cref{lem:head_all_sqmoments_converge}]
Defining $\tildeActivation{h}{n, \cdot i}{x} \coloneqq \sqrt{\headSymbol_{\sequenceVariable}^{\depthSymbol}} \indexedActivation{\ell h}{n}{x} \weightOGen{n, \cdot i}{\ell h, O}$, we observe $\E \lbrack \gamma_{n , h}^2 \gamma_{n , h'}^2 \rbrack$ equals
\begin{align*}
%
%
%
\sum_{\substack{(x_1, i_1) \\ (x_2, i_2)}}
\sum_{\substack{(x_1', j_1) \\ (x_2', j_2)}}
(\alpha^{x_1, i_1})^\top
\E \biggl[
\tildeActivation{h}{n, \cdot i_1}{x_1}
\tildeActivation{h}{n, \cdot i_2}{x_2}
\alpha^{x_2, i_2}
(\alpha^{x_1', j_1})^\top
\tildeActivation{h'}{n, \cdot j_1}{x_1'}
\tildeActivation{h'}{n, \cdot j_2}{x_2'}
\biggr]
\alpha^{x_2', j_2}
\, ,
\end{align*}
which means that the~expectation can be evaluated as a~weighted sum of terms of the~form
\begin{equation*}
\E \bigl[
\tildeActivation{h}{n, a i}{s}
\tildeActivation{h}{n, b j}{t}
\tildeActivation{h'}{n, c k}{u}
\tildeActivation{h'}{n, d l}{v}
\bigr]
\, ,
\end{equation*}
where $a, b, c, d \in [\genericDimenstion^s]$, $i, j, k, l \in \mathcal{L}_{\mathbb{N}}$, and $s, t, u, v \in \mathcal{L}_{\mathcal{X}}$.
We therefore only need to show convergence of these expectations.
Substituting:
\begin{align*}
\E \bigl[
\tildeActivation{h}{n, a i}{s}
&
\tildeActivation{h}{n, b j}{t}
\tildeActivation{h'}{n, c k}{u}
\tildeActivation{h'}{n, d l}{v}
\bigr]
\\
&=
\biggl(\frac{\outStd^2}{\layerDimension{\depthSymbol}}\biggr)^2
\E \biggl[
\indexedActivation{\ell h}{n, a \cdot}{s} \varepsilon_{\cdot i}^{h}
(\varepsilon_{\cdot j}^{h})^\top \indexedActivation{\ell h}{n, b \cdot}{t}^\top
\indexedActivation{\ell h'}{n, c \cdot}{u} \varepsilon_{\cdot k}^{h'}
(\varepsilon_{\cdot l}^{h'})^\top \indexedActivation{\ell h'}{n, d \cdot}{v}^\top
\biggr]
\\
&=
\biggl(\frac{\outStd^2}{\layerDimension{\depthSymbol}}\biggr)^2
\E \biggl[
\indexedActivation{\ell h}{n, a \cdot}{s}
\indexedActivation{\ell h}{n, b \cdot}{t}^\top
\indexedActivation{\ell h'}{n, c \cdot}{u}
\indexedActivation{\ell h'}{n, d \cdot}{v}^\top
\biggr]
\delta_{i = j} \delta_{k = l}
\, ,
\end{align*}
where $\varepsilon_{i}^h$ are i.i.d.\ standard normal random variables,
and re-purposing the~$i, j$ indices, we have
\begin{align*}
&\frac{1}{(\layerDimension{\depthSymbol})^2}
\E \biggl[
\indexedActivation{\ell h}{n, a \cdot}{s}
\indexedActivation{\ell h}{n, b \cdot}{t}^\top
\indexedActivation{\ell h'}{n, c \cdot}{u}
\indexedActivation{\ell h'}{n, d \cdot}{v}^\top
\biggr]
%
%
=
\frac{1}{(\layerDimension{\depthSymbol})^2}
\sum_{i, j = 1}^{\layerDimension{\depthSymbol}}
\E \biggl[
\indexedActivation{\ell h}{n, a i}{s}
\indexedActivation{\ell h}{n, b i}{t}
\indexedActivation{\ell h'}{n, c j}{u}
\indexedActivation{\ell h'}{n, d j}{v}
\biggr]
\\
&=
\frac{1}{\layerDimension{\depthSymbol}}
\E \biggl[
\indexedActivation{\ell h}{n, a 1}{s}
\indexedActivation{\ell h}{n, b 1}{t}
\indexedActivation{\ell h'}{n, c 1}{u}
\indexedActivation{\ell h'}{n, d 1}{v}
\biggr]
+
\frac{\layerDimension{\depthSymbol} - 1}{\layerDimension{\depthSymbol}}
\E \biggl[
\indexedActivation{\ell h}{n, a 1}{s}
\indexedActivation{\ell h}{n, b 1}{t}
\indexedActivation{\ell h'}{n, c 2}{u}
\indexedActivation{\ell h'}{n, d 2}{v}
\biggr]
\, .
\end{align*}
Note that we can bound the integrands by a universal constant (\Cref{lem:mmnt_propagation}), and thus we can focus only on the latter term on the r.h.s.
We therefore turn to
\begin{align*}%
&\E \biggl[
\indexedActivation{\ell h}{n, a 1}{s}
\indexedActivation{\ell h}{n, b 1}{t}
\indexedActivation{\ell h'}{n, c 2}{u}
\indexedActivation{\ell h'}{n, d 2}{v}
\biggr]
\\
&=
%
\stdSymbol_{\valueSymbol}^4
\E \biggl[
\zeta(G_{n}^{\ell h} (s))_{a \cdot}
\frac{
\indexedActivity{\ell - 1}{n}{s}
\indexedActivity{\ell - 1}{n}{t}^\top
}{
\layerDimension{\ell - 1}
}
\zeta(G_{n}^{\ell h} (s))_{b \cdot}^\top
\zeta(G_{n}^{\ell h'} (u))_{c \cdot}
\frac{
\indexedActivity{\ell - 1}{n}{u}
\indexedActivity{\ell - 1}{n}{v}^\top
}{
\layerDimension{\ell - 1}
}
\zeta(G_{n}^{\ell h'} (v))_{d \cdot}^\top
\biggr]
\nonumber
\, .
\end{align*}
Observe that by \Cref{lem:inner_prod_converge},
\begin{equation*}
\left(
\frac{
\indexedActivity{\ell - 1}{n}{s}
\indexedActivity{\ell - 1}{n}{t}^\top
}{
\layerDimension{\ell - 1}
}
\, , \,
\frac{
\indexedActivity{\ell - 1}{n}{u}
\indexedActivity{\ell - 1}{n}{v}^\top
}{
\layerDimension{\ell - 1}
}
\right)
\overset{P}{\longrightarrow}
(
\kerntildef{}{\ell}{s}{t}
\, , \,
\kerntildef{}{\ell}{u}{v}
)
\, ,
\end{equation*}
and by \Cref{lem:logit_dist_convergence} and the~continuous mapping theorem, the product
\begin{equation*}
\zeta(G_{n}^{\ell h} (s))_{a \cdot} \zeta(G_{n}^{\ell h} (s))_{b \cdot} \zeta(G_{n}^{\ell h'} (u))_{c \cdot} \zeta(G_{n}^{\ell h'} (v))_{d \cdot}
\end{equation*}
converges in distribution.
By \Cref{lem:slutsky}, this means that the~integrand converges in distribution.
Finally, to obtain the convergence of the expectation, we apply \Cref{thm:mean_convergence} where the required uniform integrability can be obtained by applying H{\" o}lder's inequality and \Cref{lem:mmnt_propagation}.
\end{proof}
\subsubsection{Convergence of $\logit{\sequenceVariable}{\depthSymbol \headIndex}$}
\begin{lemma}\label{lem:logit_dist_convergence}
Let the~assumptions of \Cref{thm:gp_convergence_sqrt} hold.
Then
$\logits{\sequenceVariable}{\depthSymbol} \coloneqq \{ \logit{\sequenceVariable}{\depthSymbol \headIndex}(x) \colon x \in \mathcal{X} , h \in \mathbb{N} \}$ %
converges in distribution to a~centred GP with covariance as described in \Cref{eq:logit_cov}.
\end{lemma}
\begin{proof}
Using \Cref{lem:fin_dim_marg} and the~Cram{\' e}r-Wold device \citep[p.~383]{billingsey86}, we can again restrict our attention to one-dimensional projections of finite dimensional marginals of $\logits{\sequenceVariable}{\depthSymbol}$
\begin{align*}
\projectionLogitSymbol_{\sequenceVariable}^{\logitSymbol}
&\coloneqq
\sum_{(x, h) \in \mathcal{L}}
\langle
\beta^{x, h} ,
\logit{\sequenceVariable}{\depthSymbol \headIndex}(x)
\rangle_F
=
\sum_{(x, h) \in \mathcal{L}}
\bigl\langle
\beta^{x, h} ,
\frac{1}{\sqrt{\layerDimension{\depthSymbol}}}
\sum_{j = 1}^{\layerDimension{\depthSymbol}}
\query{n, \cdot j}{\ell h}(x)
(\key{n, \cdot j}{\ell h}(x))^\top
\bigr\rangle_F
\\
&=
\frac{1}{\sqrt{\layerDimension{\depthSymbol}}}
\sum_{j = 1}^{\layerDimension{\depthSymbol}}
\underbrace{
\sum_{(x, h) \in \mathcal{L}}
\bigl\langle
\beta^{x, h} ,
\query{n, \cdot j}{\ell h}(x)
(\key{n, \cdot j}{\ell h}(x))^\top
\bigr\rangle_F
}_{\eqqcolon \summandLogit{\sequenceVariable, j}}
\, .
\end{align*}
%
The~above formula suggests the~desired result follows from \Cref{lemma:eCLT}:
\begin{itemize}
\item Exchangeability requirement is satisfied by \Cref{lem:logit_exchangeability}.
\item Zero mean and covariance follow from \Cref{lem:logit_mean_corr_zero}.
\item Convergence of variance is established in \Cref{cor:logit_var_convergence}.
\item Convergence of $\E \lbrack \summandLogit{n , 1}^2 \summandLogit{n , 2}^2 \rbrack$ is proved in \Cref{lem:logit_square_moments}.
\item $\mathrm{o}(\layerDimension{\depthSymbol})$ growth of the~third absolute moments is implied by \Cref{lem:logit_third_moments}.
\qedhere
\end{itemize}
\end{proof}
\begin{lemma}\label{lem:logit_exchangeability}
Under the~assumptions of \Cref{thm:gp_convergence_sqrt}, $\summandLogit{n, j}$ are exchangeable over the~$j$ index.
\end{lemma}
\begin{proof}
Observe
\begin{align*}
\query{n, \cdot j}{\ell h}(x)
(\key{n, \cdot j}{\ell h}(x))^\top
=
\indexedActivity{\ell - 1}{n}{x}
\weightQ{\cdot j}
(\weightK{\cdot j})^\top
\indexedActivity{\ell - 1}{n}{x}^\top
\, ,
\end{align*}
which means that the~individual terms $\summandLogit{n, j}$ are i.i.d.\ if we condition on
%
$\{ \indexedActivity{\ell - 1}{n}{x} \colon x \in \mathcal{L}_{\mathcal{X}} \}$.
%
%
Application of de Finetti's theorem concludes the~proof.
\end{proof}
\begin{lemma}\label{lem:logit_mean_corr_zero}
Under the~assumptions of \Cref{thm:gp_convergence_sqrt},
$\E [\summandLogit{n, 1}] = \E [\summandLogit{n, 1} \summandLogit{n, 2}] = 0$.
\end{lemma}
\begin{proof}
For $\E [\summandLogit{n, 1}] = 0$, note that for any $h \in \mathbb{N}$,
$\E [\summandLogit{n, 1}]$ can be expressed as a~sum over terms
\begin{equation*}
\bigl\langle
\beta^{x, h} ,
\E [
\indexedActivity{\ell - 1}{n}{x}
\weightQ{\cdot j}
(\weightK{\cdot j})^\top
\indexedActivity{\ell - 1}{n}{x}^\top
]
\bigr\rangle_F
=
\bigl\langle
\beta^{x, h} ,
0
\bigr\rangle_F
=
0
\, ,
\end{equation*}
as long as $\E [ \indexedActivity{\ell - 1}{n, \cdot 1}{x} \indexedActivity{\ell - 1}{n, \cdot 1}{x}^\top ]$
is entry-wise finite for any $(x, n) \in \mathcal{L}_{\mathcal{X}} \times \mathbb{N}$ which can be obtained by \Cref{lem:mmnt_propagation}.
For $\E [\summandLogit{n, 1} \summandLogit{n, 2}] = 0$, we have to evaluate a~weighted sum of terms of the~form
\begin{align*}
&(\beta^{x , h})^\top
\E \biggl[
\indexedActivity{\ell - 1}{n}{x}
\weightQ{\cdot 1}
(\weightK{\cdot 1})^\top
\indexedActivity{\ell - 1}{n}{x}^\top
\indexedActivity{\ell - 1}{n}{x'}
W_{n, \cdot 2}^{\ell h', Q}
(W_{n, \cdot 2}^{\ell h', K})^\top
\indexedActivity{\ell - 1}{n}{x'}^\top
\biggr]
\beta^{x' , h'}
%
%
\, ,
\end{align*}
which are all equal to zero as long as
\begin{align*}
\E \biggl[
\frac{
\indexedActivity{\ell - 1}{n}{x}
\indexedActivity{\ell - 1}{n}{x}^\top
}{\layerDimension{\ell - 1}}
\frac{
\indexedActivity{\ell - 1}{n}{x'}
\indexedActivity{\ell - 1}{n}{x'}^\top
}{\layerDimension{\ell - 1}}
\biggr]
\, ,
\end{align*}
is entry-wise finite.
Since the~integrand converges in probability to $\kerntildef{}{\ell}{x}{x} \kerntildef{}{\ell}{x'}{x'}$ by \Cref{lem:inner_prod_converge}, an~argument analogous to the~one made above for the~$\E [\summandLogit{n, 1}] = 0$ concludes the~proof.
\end{proof}
\begin{corollary}\label{cor:logit_var_convergence}
Under the~assumptions of \Cref{thm:gp_convergence_sqrt}, $\lim_{n \to \infty} \E \lbrack \summandLogit{n , 1}^2 \rbrack = \sigma^2_{*}$.
\end{corollary}
\begin{proof}
The~second half of the~proof of \Cref{lem:logit_mean_corr_zero} establishes that $\E [\summandLogit{n, i} \summandLogit{n, j}]$ converges for any $i, j$.
\end{proof}
\vspace{0.5\baselineskip}
\begin{lemma}\label{lem:logit_square_moments}
Under the~assumptions of \Cref{thm:gp_convergence_sqrt}, $\lim_{n \to \infty} \E \lbrack \summandLogit{n , 1}^2 \summandLogit{n , 2}^2 \rbrack = \sigma_{*}^{4}$.
\end{lemma}
\begin{proof}
Defining $R_{n, j}^{h}(x) \coloneqq \query{n, \cdot j}{\ell h}(x) (\key{n, \cdot j}{\ell h}(x))^\top$, we can rewrite
$\E \bigl[ \summandLogit{n , 1}^2 \summandLogit{n , 2}^2 \bigr]$ as
\begin{align*}
\sum_{\substack{(x, h) , (x', h')}}
\!\!\!\!
%
%
(\beta^{x, h})^\top
\E \biggl[
R_{n, 1}^{h}(x)
R_{n, 1}^{h}(x)^\top
\beta^{x, h}
(\beta^{x', h'})^\top
R_{n, 2}^{h'}(x')
R_{n, 2}^{h'}(x')^\top
\biggr]
\beta^{x', h'}
\, ,
\end{align*}
where we have w.l.o.g.\ assumed all matrices have been flattened, so that $\langle A , B \rangle_F = \vectorise(A)^\top \vectorise(B)$.
The~above can be further rewritten as a~weighted sum of terms of the~following form:
\begin{align*}
&\E \biggl[
\query{n, a_1 1}{\ell h}(x)
\key{n, b_1 1}{\ell h}(x)
\query{n, a_2 1}{\ell h}(x)
\key{n, b_2 1}{\ell h}(x)
\query{n, a_3 2}{\ell h'}(x')
\key{n, b_3 2}{\ell h'}(x')
\query{n, a_4 2}{\ell h'}(x')
\key{n, b_4 2}{\ell h'}(x')
\biggr]
\\
&\propto
%
\E \biggl[
\frac{
\indexedActivity{\ell - 1}{n, a_1 \cdot}{x}
\indexedActivity{\ell - 1}{n, a_2 \cdot}{x}^\top
}{\layerDimension{\ell - 1}}
\frac{
\indexedActivity{\ell - 1}{n, b_1 \cdot}{x}
\indexedActivity{\ell - 1}{n, b_2 \cdot}{x}^\top
}{\layerDimension{\ell - 1}}
\frac{
\indexedActivity{\ell - 1}{n, a_3 \cdot}{x'}
\indexedActivity{\ell - 1}{n, a_4 \cdot}{x'}^\top
}{\layerDimension{\ell - 1}}
\frac{
\indexedActivity{\ell - 1}{n, b_3 \cdot}{x'}
\indexedActivity{\ell - 1}{n, b_4 \cdot}{x'}^\top
}{\layerDimension{\ell - 1}}
\biggr]
\, .
\end{align*}
Thanks to \Cref{lem:inner_prod_converge} and the continuous mapping theorem, we know that the~integrand converges in probability to
\begin{equation*}
\stdSymbol_{\querySymbol}^4 \stdSymbol_{\keySymbol}^4
\kerntildef{a_1 a_2}{\ell}{x}{x}
\kerntildef{b_1 b_2}{\ell}{x}{x}
\kerntildef{a_3 a_4}{\ell}{x'}{x'}
\kerntildef{b_3 b_4}{\ell}{x'}{x'}
\, ,
\end{equation*}
and thus we can use \Cref{thm:mean_convergence} to obtain that the~above expectation converges as long as the sequence of integrands is uniformly integrable.
Noting that the~relevant moments can be upper bounded by $\max_{c \in [\genericDimenstion^s]} \max_{z \in \mathcal{L}_{\mathcal{X}}} \E | \indexedActivity{\ell - 1}{n, c 1}{z} |^8$ by H{\" o}lder's inequality and exchangeability, uniform integrability follows from \Cref{lem:sup_ui}.
\end{proof}
\begin{lemma}\label{lem:logit_third_moments}
Under the~assumptions of \Cref{thm:gp_convergence_sqrt}, $\E | \summandLogit{n, 1} |^3 = \mathrm{o}(\sqrt{\layerDimension{\depthSymbol}})$.
\end{lemma}
\begin{proof}
Using H{\" o}lder's inequality, it is sufficient to show $\limsup_{n} \E | \summandLogit{n, 1} |^4 < \infty$.
Setting $R_{n, j}^{h}(x) \coloneqq \query{n, \cdot j}{\ell h}(x) (\key{n, \cdot j}{\ell h}(x))^\top$, we have
\begin{align*}
\E | \summandLogit{n, 1} |^4
=
\sum_{\substack{(x, h) , (x', h')}}
\!\!\!\!
%
%
(\beta^{x, h})^\top
\E \biggl[
R_{n, 1}^{h}(x)
R_{n, 1}^{h}(x)^\top
\beta^{x, h}
(\beta^{x', h'})^\top
R_{n, 1}^{h'}(x')
R_{n, 1}^{h'}(x')^\top
\biggr]
\beta^{x', h'}
\, ,
\end{align*}
analogously to the~proof of \Cref{lem:logit_square_moments}.
Substituting for the~individual terms and using H{\" o}lder's inequality, we can see that each of the~terms in the~above sum can itself be decomposed into a~sum over $(\layerDimension{\ell - 1})^8$ terms that are, up to a~constant, upper bounded by
\begin{equation*}
%
\max_{a \in [\genericDimenstion^s]} \max_{z \in \{ x, x' \} }
\E | \indexedActivity{\ell - 1}{n, a 1}{z} |^8
\, ,
\end{equation*}
which means we can conclude the proof by bounding this quantity, via \Cref{lem:mmnt_propagation}, by a constant independent of $n$.
\end{proof}
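The H{\" o}lder step above reduces a third-moment bound to a fourth-moment bound via the elementary (Lyapunov) inequality $\E | X |^3 \leq (\E | X |^4)^{3/4}$, which holds for any distribution, including empirical ones. A minimal numerical illustration (the Student-$t$ sample below is an arbitrary stand-in of our choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
# the inequality E|X|^3 <= (E|X|^4)^(3/4) is distribution-free, so any
# sample works; a heavy-tailed Student-t draw is used purely for illustration
x = rng.standard_t(df=5, size=100_000)
m3 = float(np.mean(np.abs(x) ** 3))
m4 = float(np.mean(np.abs(x) ** 4))
print(m3, m4 ** 0.75)
```

The inequality holds exactly for the empirical measure of any sample, by Jensen's inequality applied to the convex map $t \mapsto t^{4/3}$.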
\subsection{NTK convergence proof}\label{sect:ntk_proofs}
We need to prove convergence of the attention NTK at initialisation, i.e., for any $a, b \in [\genericDimenstion^s]$, $i, j \in \mathbb{N}$, and $x, x' \in \mathcal{X}$
\begin{align}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \theta_{n}^{\leq \ell}
}
%
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \theta_{n}^{\leq \ell}
}^\top
%
\overset{P}{\longrightarrow}
\delta_{i=j}
\ntkf{a b}{\ell}{x}{x'}
\, ,
\end{align}
as $n \to \infty$, where $\theta_{n}^{\leq \ell}$ is the collection of trainable parameters in the first $\ell$ layers.
We will further use $\theta_{n}^{\ell}$ to refer to the trainable parameters of the $\ell$\textsuperscript{th} layer; e.g., for the attention layer $\theta_{n}^{\ell} = \{ \widetilde{\weightMatSymbol}_{n}^{\ell} \} \cup \bigcup_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}} \{ \widetilde{\weightMatSymbol}_{n}^{\ell h, Q} , \widetilde{\weightMatSymbol}_{n}^{\ell h, K}, \widetilde{\weightMatSymbol}_{n}^{\ell h, V} \}$.
Note that
\begin{align}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \theta_{n}^{\leq \ell}
}
%
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \theta_{n}^{\leq \ell}
}^\top
%
=
\underbrace{
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \theta_{n}^{\ell}
}
%
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \theta_{n}^{\ell}
}^\top
%
}_{\text{direct}}
+
\underbrace{
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \indexedActivity{\ell - 1}{n}{x}
}
\frac{
\partial \indexedActivity{\ell - 1}{n}{x}
}{
\partial \theta_{n}^{< \ell}
}
%
\frac{
\partial \indexedActivity{\ell - 1}{n}{x'}
}{
\partial \theta_{n}^{< \ell}
}^\top
%
%
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \indexedActivity{\ell - 1}{n}{x'}
}^\top
%
}_{\text{indirect}}
\, ,
\end{align}
where the \emph{direct} part corresponds to the contribution due to gradient w.r.t.\ the parameters of the $\ell$\textsuperscript{th} layer itself, and the \emph{indirect} part is due to effect of the $\ell$\textsuperscript{th} layer on the contribution due to the parameters of preceding layers.
The next two subsections show convergence of each of these terms to a constant in probability, implying the desired result:
\begin{theorem}[NTK convergence]\label{thm:ntk_convergence}
Under the assumptions of \Cref{thm:gp_convergence_sqrt} (including those stated at the beginning of \Cref{sect:proofs}), for any $a, b \in [\genericDimenstion^s]$, $i, j \in \mathbb{N}$, and $x, x' \in \mathcal{X}$
\begin{align*}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \theta_{n}^{\leq \ell}
}
%
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \theta_{n}^{\leq \ell}
}^\top
%
\overset{P}{\longrightarrow}
\delta_{i=j}
\ntkf{a b}{\ell}{x}{x'}
\, ,
\end{align*}
where
\begin{align}
\ntkf{a b}{\ell}{x}{x'}
=
&
2
\kernelf{a b}{\ell}{x}{x'}
+
\OVStd^2
\sum_{\substack{a', b'}}^{\genericDimenstion^s}
\ntktildef{a' b'}{\ell}{x}{x'}
\E [
\tildeLogit{a a'}{\ell 1} (x)
\tildeLogit{b b'}{\ell 1} (x')
]
+
\nonumber
\\
&
\delta_{\tau = \frac{1}{2}}
\OVStd^2
\QKStd^2
(
2\kerntildef{a b}{\ell}{x}{x'}
+
\ntktildef{a b}{\ell}{x}{x'}
)
\sum_{\substack{c_1, c_2 \\ d_1 , d_2}}^{\genericDimenstion^s}
\kerntildef{c_1 c_2}{\ell}{x}{x'}
\kerntildef{d_1 d_2}{\ell}{x}{x'}
\E \left[
\frac{
\partial
\tildeLogitN{a c_1}{\ell 1}{x}
}{
\partial
G_{a d_1}^{\ell 1}(x)
}
\frac{
\partial
\tildeLogitN{b c_2}{\ell 1}{x'}
}{
\partial
G_{b d_2}^{\ell 1}(x')
}
\right]
+
\nonumber
\\
&
\delta_{\tau = \frac{1}{2}}
\OVStd^2
\QKStd^2
\kerntildef{a b}{\ell}{x}{x'}
\sum_{\substack{c_1 , c_2 \\ d_1, d_2}}^{\genericDimenstion^s}
\kerntildef{c_1 c_2}{\ell}{x}{x'}
\ntktildef{d_1 d_2}{\ell}{x}{x'}
\E \left[
\frac{
\partial
\tildeLogit{a c_1}{\ell 1} (x)
}{
\partial
G_{a d_1}^{\ell 1} (x)
}
\frac{
\partial
\tildeLogit{b c_2}{\ell 1} (x')
}{
\partial
G_{b d_2}^{\ell 1} (x')
}
\right]
\, .
\end{align}
\end{theorem}
\Cref{thm:ntk_convergence} will be proven in the following two subsections.
\subsubsection{Direct contribution}
The direct contribution of an attention layer can be expanded as
\begin{align*}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \theta_{n}^{\ell}
}
%
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \theta_{n}^{\ell}
}^\top
%
=
&\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ell, O}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ell, O}
}^\top
+
\\
&\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, V}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, V}
}^\top
+
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, Q}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, Q}
}^\top
+
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, K}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, K}
}^\top
\, .
\end{align*}
We prove convergence of each of these terms next.
\begin{lemma}\label{lem:wo_ntk}
$
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ell, O}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ell, O}
}^\top
\overset{P}{\longrightarrow}
\delta_{i = j}
\kernelf{a b}{\ell}{x}{x'}
\, .
$
\end{lemma}
\begin{proof}[Proof of \Cref{lem:wo_ntk}]
Observe
\begin{align*}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ell, O}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ell, O}
}^\top
&=
\delta_{i = j}
\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\sum_{k = 1}^{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\indexedActivation{\ellh}{n, a k}{x}
\indexedActivation{\ellh}{n, b k}{x'}
\\
&=
\delta_{i = j}
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\sum_{c_1, c_2 = 1}^{\genericDimenstion^s}
\tildeLogitN{n, a c_1}{\ell h}{x}
\tildeLogitN{n, b c_2}{\ell h}{x'}
%
\frac{
\langle
\val{n, c_1 \cdot}{\ellh}(x)
,
\val{n, c_2 \cdot}{\ellh}(x')
\rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}
}
\, .
\end{align*}
Since $\genericDimenstion^s$ is fixed, we can focus on an arbitrary pair $c_1, c_2 \in [\genericDimenstion^s]$.
Notice that by the continuous mapping theorem and \Cref{lem:inner_prod_converge,lem:slutsky}, the individual summands converge in distribution
$$
\tildeLogitN{n, a c_1}{\ellh}{x}
\tildeLogitN{n, b c_2}{\ellh}{x'}
\frac{
\langle
\val{n, c_1 \cdot}{\ellh}(x)
,
\val{n, c_2 \cdot}{\ellh}(x')
\rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}
}
\rightsquigarrow
\valueStd^2
\tildeLogitN{a c_1}{\ellh}{x}
\tildeLogitN{b c_2}{\ellh}{x'}
\kerntildef{c_1 c_2}{\ell}{x}{x'}
\, ,
$$
where $\tildeLogit{}{\ell h}$ follows the $\zeta_\#$ pushforward of the GP distribution of $G^{\ell}$ described in \Cref{thm:gp_convergence_sqrt} if $\tau = \frac{1}{2}$, or $\tildeLogitN{}{\ell}{x} = \zeta (\stdSymbol_{\querySymbol} \stdSymbol_{\keySymbol} \kerntildef{}{\ell}{x}{x})$ a.s.\ if $\tau = 1$ \citep[appendix A]{yang2019v2}.
The desired result could thus be established by application of \Cref{lem:wlln_exch}, averaging over the $h$ index, if its assumptions hold.
Starting with the exchangeability assumption, note that if we condition on $\indexedActivity{\ell - 1}{n}{x}, \indexedActivity{\ell - 1}{n}{x'}$, the individual terms are i.i.d.\ because the parameters of individual heads are i.i.d.
Since $\{\tildeLogitN{a c_1}{\ellh}{x} \tildeLogitN{b c_2}{\ellh}{x'}\}_{h \geq 1}$ are also i.i.d.\ (see \Cref{thm:gp_convergence_sqrt} for $\tau = \frac{1}{2}$, and constancy under $\tau = 1$), it is also clear that the condition $\E [ X_{*, 1} X_{*, 2} ] = (\E [X_{*, 1}])^2$ is satisfied.
All that remains is to show $\limsup_{n \to \infty} \E | X_{n, 1} |^{2 + \varepsilon} < \infty$; we will use $\varepsilon = 2$ for convenience.
By H{\" o}lder's inequality
\begin{align*}
\E \left\{
\left[
\tildeLogitN{n, a c_1}{\ell 1}{x}
\tildeLogitN{n, b c_2}{\ell 1}{x'}
\vphantom{
\frac{
\langle
\val{n, c_1 \cdot}{\ell 1}(x)
,
\val{n, c_2 \cdot}{\ell 1}(x')
\rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}
}
}
\right.\right.
&
\left.\left.
\frac{
\langle
\val{n, c_1 \cdot}{\ell 1}(x)
,
\val{n, c_2 \cdot}{\ell 1}(x')
\rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}
}
\right]^4
\right\}
\\
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
&\lesssim
\poly \biggl(
\max_{c, c' \in [\genericDimenstion^s], z \in \{x, x'\}}
\E
|
\tildeLogitN{n, c c'}{\ell 1}{z}
|^{16}
,
\max_{c \in [\genericDimenstion^s], z \in \{x, x'\}}
\E |
\indexedActivity{\ell -1}{n, c 1}{z}
%
%
%
%
%
%
%
%
%
|^{16}
\biggr)
\, ,
\end{align*}
where we used the assumed exchangeability of $\indexedActivity{\ell-1}{n}{z}$ over its columns.
Application of \Cref{lem:mmnt_propagation} implies that the above can be bounded by a constant independent of $n$, implying all assumptions of \Cref{lem:wlln_exch} are satisfied.
\end{proof}
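The $\delta_{i = j}$ factor in \Cref{lem:wo_ntk} arises because distinct output coordinates touch disjoint rows of $\widetilde{\weightMatSymbol}_n^{\ell, O}$, so the cross gradient inner products vanish exactly even at finite width; only the $i = j$ term requires a limit. A minimal sketch for a plain linear readout $f_i(z) = \frac{\sigma_O}{\sqrt{m}} (W^{O} z)_i$ (dimensions, $\sigma_O$, and the inputs are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
m, sigma_o = 64, 1.3            # readout width and output scale (illustrative)
z = rng.normal(size=m)          # stand-in for the concatenated head outputs at x
zp = rng.normal(size=m)         # ... and at x'

def grad_wo(i, inp, n_out=4):
    # gradient of f_i = sigma_o / sqrt(m) * (W^O inp)_i with respect to W^O:
    # nonzero only in row i, where it equals sigma_o / sqrt(m) * inp
    g = np.zeros((n_out, m))
    g[i] = sigma_o / np.sqrt(m) * inp
    return g

same = float(np.sum(grad_wo(0, z) * grad_wo(0, zp)))   # i = j
cross = float(np.sum(grad_wo(0, z) * grad_wo(1, zp)))  # i != j: rows are disjoint
print(same, cross)
```

The cross term is exactly zero, while the diagonal term equals $\sigma_O^2 \langle z, z' \rangle / m$, matching the inner-product structure in the proof above.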
\begin{lemma}\label{lem:wv_ntk}
$
\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, V}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, V}
}^\top
\overset{P}{\longrightarrow}
\delta_{i = j}
\kernelf{a b}{\ell}{x}{x'}
\, .
$
\end{lemma}
\begin{proof}[Proof of \Cref{lem:wv_ntk}]
Note that
\begin{align*}
\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, V}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, V}
}^\top
&=
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\sum_{k = 1}^{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\widetilde{\weightMatSymbol}_{n, k i}^{\ellh, O}
\widetilde{\weightMatSymbol}_{n, k j}^{\ellh, O}
\frac{\valueStd^2}{\layerDimension{\ell - 1}}
\left\langle
\tildeLogitN{n, a \cdot}{\ellh}{x}
\indexedActivity{\ell - 1}{n}{x}
,
\tildeLogitN{n, b \cdot}{\ellh}{x'}
\indexedActivity{\ell - 1}{n}{x'}
\right\rangle
\\
&=
\frac{\outStd^2\valueStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{h, k}
\sum_{c_1, c_2 = 1}^{\genericDimenstion^s}
\widetilde{\weightMatSymbol}_{n, k i}^{\ellh, O}
\widetilde{\weightMatSymbol}_{n, k j}^{\ellh, O}
\tildeLogitN{n, a c_1}{\ellh}{x}
\tildeLogitN{n, b c_2}{\ellh}{x'}
\frac{
\langle
\indexedActivity{\ell - 1}{n, c_1 \cdot}{x}
,
\indexedActivity{\ell - 1}{n, c_2 \cdot}{x'}
\rangle
}{
\layerDimension{\ell - 1}
}
\, .
\end{align*}
Since $\genericDimenstion^s$ is fixed, we can focus on an arbitrary $c_1, c_2 \in [\genericDimenstion^s]$.
Notice that by the assumed independence of the entries of $\widetilde{\weightMatSymbol}_{n}^{\ellh, O}$, the continuous mapping theorem and \Cref{lem:inner_prod_converge,lem:slutsky}, the individual summands converge in distribution
$$
\widetilde{\weightMatSymbol}_{n, k i}^{\ellh, O}
\widetilde{\weightMatSymbol}_{n, k j}^{\ellh, O}
\tildeLogitN{n, a c_1}{\ellh}{x}
\tildeLogitN{n, b c_2}{\ellh}{x'}
\frac{
\langle
\indexedActivity{\ell - 1}{c_1 \cdot}{x}
,
\indexedActivity{\ell - 1}{c_2 \cdot}{x'}
\rangle
}{
\layerDimension{\ell - 1}
}
\rightsquigarrow
\widetilde{\weightMatSymbol}_{n, k i}^{\ellh, O}
\widetilde{\weightMatSymbol}_{n, k j}^{\ellh, O}
\tildeLogitN{a c_1}{\ellh}{x}
\tildeLogitN{b c_2}{\ellh}{x'}
\kerntildef{c_1 c_2}{\ell}{x}{x'}
\, ,
$$
with the distribution of
$\tildeLogitN{}{\ellh}{x}$
as in the proof of \Cref{lem:wo_ntk}.
The desired result can thus again be obtained by applying \Cref{lem:wlln_exch}, averaging over $h$ and $k$, if its assumptions hold.
As $\E [\widetilde{\weightMatSymbol}_{n , k i}^{\ellh, O} \widetilde{\weightMatSymbol}_{n, k j}^{\ellh, O}] = \delta_{i = j}$ and $\E |\widetilde{\weightMatSymbol}_{n , k i}^{\ellh, O}|^t < \infty$ for any $t \geq 1$, the same argument as in \Cref{lem:wo_ntk} applies.
\end{proof}
\begin{lemma}\label{lem:wqk_ntk}
$
\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, Q}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, Q}
}^\top
+
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, K}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, K}
}^\top
%
$
converges in probability to
$$
\delta_{i = j}
\delta_{\tau = \frac{1}{2}}
2
\OVStd^2
\QKStd^2
\kerntildef{a b}{\ell}{x}{x'}
\sum_{\substack{c_1, c_2 \\ d_1 , d_2}}^{\genericDimenstion^s}
\kerntildef{c_1 c_2}{\ell}{x}{x'}
\kerntildef{d_1 d_2}{\ell}{x}{x'}
\E \left[
\frac{
\partial
\tildeLogitN{a c_1}{\ell 1}{x}
}{
\partial
G_{a d_1}^{\ell 1}(x)
}
\frac{
\partial
\tildeLogitN{b c_2}{\ell 1}{x'}
}{
\partial
G_{b d_2}^{\ell 1}(x')
}
\right]
\, .
$$
\end{lemma}
\begin{proof}[Proof of \Cref{lem:wqk_ntk}]
By symmetry, it is sufficient to prove convergence for the gradients w.r.t.\ $\widetilde{\weightMatSymbol}_n^{\ellh, K}$.
Observe
\begin{align*}
&\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, K}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, K}
}^\top
\\
&=
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{h = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\sum_{k_1, k_2 = 1}^{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{\substack{c_1, c_2 \\ d_1, d_2}}^{\genericDimenstion^s}
%
%
\widetilde{\weightMatSymbol}_{n, k_1 i}^{\ellh, O}
\widetilde{\weightMatSymbol}_{n, k_2 j}^{\ellh, O}
\val{n, c_1 k_1}{\ellh} (x)
\val{n, c_2 k_2}{\ellh} (x')
\frac{
\partial
\tildeLogitN{n, a c_1}{\ellh}{x}
}{
\partial
G_{n, a d_1}^{\ellh}(x)
}
\frac{
\partial
\tildeLogitN{n, b c_2}{\ellh}{x'}
}{
\partial
G_{n, b d_2}^{\ellh}(x')
}
\frac{
\partial
G_{n, a d_1}^{\ellh}(x)
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, K}
}
\frac{
\partial
G_{n, b d_2}^{\ellh}(x')
}{
\partial \widetilde{\weightMatSymbol}_n^{\ellh, K}
}^\top
\, .
\end{align*}
Since $\genericDimenstion^s$ is fixed, we can focus on arbitrary $c_1, c_2, d_1, d_2 \in [\genericDimenstion^s]$.
Rewriting the r.h.s.\ above for one such choice, we obtain
\begin{align*}
\frac{\outStd^2 \keyStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\! \!
\sum_{h , k_1, k_2}
\! \!
\widetilde{\weightMatSymbol}_{n, k_1 i}^{\ellh, O}
\widetilde{\weightMatSymbol}_{n, k_2 j}^{\ellh, O}
\val{n, c_1 k_1}{\ellh} (x)
\val{n, c_2 k_2}{\ellh} (x')
\frac{
\partial
\tildeLogitN{n, a c_1}{\ellh}{x}
}{
\partial
G_{n, a d_1}^{\ellh}(x)
}
\frac{
\partial
\tildeLogitN{n, b c_2}{\ellh}{x'}
}{
\partial
G_{n, b d_2}^{\ellh}(x')
}
\frac{
\langle
\query{n, a \cdot}{\ellh} (x)
,
\query{n, b \cdot}{\ellh} (x')
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\frac{
\langle
\indexedActivity{\ell - 1}{d_1 \cdot}{x}
,
\indexedActivity{\ell - 1}{d_2 \cdot}{x'}
\rangle
}{
\layerDimension{\ell - 1}
}
\, .
\end{align*}
Noting that
$
%
\langle
\indexedActivity{\ell - 1}{d_1 \cdot}{x}
,
\indexedActivity{\ell - 1}{d_2 \cdot}{x'}
\rangle
%
/
\layerDimension{\ell - 1}
%
$
only depends on the spatial dimension indices $d_1$ and $d_2$, we can use \Cref{lem:inner_prod_converge} to establish that it converges in probability to $\kerntildef{d_1 d_2}{\ell}{x}{x'}$, implying that we only need to prove that the rest of the terms in the above sum also converge in probability.
Let
\begin{align*}
\bar{\generalSum}_{n}
=
\frac{1}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{h, k_1, k_2}
\widetilde{\weightMatSymbol}_{n, k_1 i}^{\ellh, O}
\widetilde{\weightMatSymbol}_{n, k_2 j}^{\ellh, O}
\val{n, c_1 k_1}{\ellh} (x)
\val{n, c_2 k_2}{\ellh} (x')
\frac{
\partial
\tildeLogitN{n, a c_1}{\ellh}{x}
}{
\partial
G_{n, a d_1}^{\ellh}(x)
}
\frac{
\partial
\tildeLogitN{n, b c_2}{\ellh}{x'}
}{
\partial
G_{n, b d_2}^{\ellh}(x')
}
\frac{
\langle
\query{n, a \cdot}{\ellh} (x)
,
\query{n, b \cdot}{\ellh} (x')
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\, ,
\end{align*}
and note that
$
\E [\bar{\generalSum}_{n}]
=
\delta_{i = j}
\E \left[
\frac{
\langle
\query{n, a \cdot}{\ell 1} (x)
,
\query{n, b \cdot}{\ell 1} (x')
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\frac{
\langle
\indexedActivity{\ell - 1}{n, c_1 \cdot}{x}
,
\indexedActivity{\ell - 1}{n, c_2 \cdot}{x'}
\rangle
}{
\layerDimension{\ell - 1}
}
\frac{
\partial
\tildeLogitN{n, a c_1}{\ell 1}{x}
}{
\partial
G_{n, a d_1}^{\ell 1}(x)
}
\frac{
\partial
\tildeLogitN{n, b c_2}{\ell 1}{x'}
}{
\partial
G_{n, b d_2}^{\ell 1}(x')
}
\right]
$
by exchangeability.
This suggests that the required result could be obtained using Chebyshev's inequality
\begin{align*}
\text{Pr} (
| \bar{\generalSum}_{n} - \E \bar{\generalSum}_{n} |
\geq
\delta
)
\leq
\frac{\E [ \bar{\generalSum}_{n}^2 ] - \{ \E [ \bar{\generalSum}_{n} ] \}^2 }{\delta^2}
\, ,
\end{align*}
if $\E [ \bar{\generalSum}_{n} ]$ converges to the desired limit.
To establish this convergence, observe
\begin{align}\label{eq:wqk_weak_limit_mean}
\delta_{i = j}
\frac{
\langle
\query{n, a \cdot}{\ell 1} (x)
,
\query{n, b \cdot}{\ell 1} (x')
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\frac{
\langle
\indexedActivity{\ell - 1}{n, c_1 \cdot}{x}
,
\indexedActivity{\ell - 1}{n, c_2 \cdot}{x'}
\rangle
}{
\layerDimension{\ell - 1}
}
&\frac{
\partial
\tildeLogitN{n, a c_1}{\ell 1}{x}
}{
\partial
G_{n, a d_1}^{\ell 1}(x)
}
\frac{
\partial
\tildeLogitN{n, b c_2}{\ell 1}{x'}
}{
\partial
G_{n, b d_2}^{\ell 1}(x')
}
\nonumber
\\
\rightsquigarrow
\delta_{i = j}
\delta_{\tau = \frac{1}{2}}
\queryStd^2
\kerntildef{a b}{\ell}{x}{x'}
\kerntildef{c_1 c_2}{\ell}{x}{x'}
&\frac{
\partial
\tildeLogitN{a c_1}{\ell 1}{x}
}{
\partial
G_{a d_1}^{\ell 1}(x)
}
\frac{
\partial
\tildeLogitN{b c_2}{\ell 1}{x'}
}{
\partial
G_{b d_2}^{\ell 1}(x')
}
\, ,
\end{align}
since the first two terms converge in probability (\Cref{lem:inner_prod_converge}), and the last converges in distribution by \Cref{thm:gp_convergence_sqrt} and the continuous mapping theorem, implying that the product of all three thus converges in distribution by \Cref{lem:slutsky}.
Convergence of $\E [\bar{\generalSum}_{n}]$ could thus be obtained by establishing uniform integrability of the $(\bar{\generalSum}_{n})_{n \geq 1}$ sequence (\Cref{thm:mean_convergence}).
By \Cref{lem:sup_ui}, uniform integrability of $\bar{\generalSum}_{n}$ can be established by showing $\E [ \bar{\generalSum}_{n}^2 ] \to \{ \E [ \bar{\generalSum}_{*} ] \}^2$, which would also imply $\bar{\generalSum}_{n} \overset{P}{\longrightarrow} \E [\bar{\generalSum}_{*}]$ by the Chebyshev bound above.
For the rest of this proof, we drop the $x, x'$ from our equations for brevity; this allows us to write
\begin{align*}
\E [\bar{\generalSum}_{n}^2]
=
\frac{1}{(\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol})^2}
\sum_{\substack{h_1, h_2 \\ k_1, k_2, k_3, k_4}}
\E \left[
\prod_{t = 0}^{1}
\widetilde{\weightMatSymbol}_{n, k_{2t + 1} i}^{\ellh_{t + 1}, O}
\widetilde{\weightMatSymbol}_{n, k_{2t + 2} j}^{\ellh_{t + 1}, O}
\val{n, c_1 k_{2t + 1}}{\ellh_{t + 1}}
\val{n, c_2 k_{2t + 2}}{\ellh_{t + 1}}
\frac{
\langle
\query{n, a \cdot}{\ellh_{t + 1}}
,
\query{n, b \cdot}{\ellh_{t+1}}
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ellh_{t + 1}}
}{
\partial
G_{n, a d_1}^{\ellh_{t + 1}}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ellh_{t + 1}}
}{
\partial
G_{n, b d_2}^{\ellh_{t + 1}}
}
\right]
\, .
\end{align*}
From the above, we can restrict our attention to groups of terms that comprise on the order of $(\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol})^2$ of the summands, as long as the expectation of the square of each term can be bounded by a constant independent of the $h, k$, and $n$ indices.
Indeed,
\begin{align}\label{eq:wqk_weak_limit_bound}
\E \left\{
\left[
\prod_{t = 0}^{1}
\right.\right.
&
\left.\left.
\widetilde{\weightMatSymbol}_{n, k_{2t + 1} i}^{\ellh_{t + 1}, O}
\widetilde{\weightMatSymbol}_{n, k_{2t + 2} j}^{\ellh_{t + 1}, O}
\val{n, c_1 k_{2t + 1}}{\ellh_{t + 1}}
\val{n, c_2 k_{2t + 2}}{\ellh_{t + 1}}
\frac{
\langle
\query{n, a \cdot}{\ellh_{t + 1}}
,
\query{n, b \cdot}{\ellh_{t+1}}
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ellh_{t + 1}}
}{
\partial
G_{n, a d_1}^{\ellh_{t + 1}}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ellh_{t + 1}}
}{
\partial
G_{n, b d_2}^{\ellh_{t + 1}}
}
\right]^2
\right\}
\nonumber
\\
&\lesssim
\E \left\{
\left[
\prod_{t = 0}^{1}
\val{n, c_1 k_{2t + 1}}{\ellh_{t + 1}}
\val{n, c_2 k_{2t + 2}}{\ellh_{t + 1}}
\frac{
\langle
\query{n, a \cdot}{\ellh_{t + 1}}
,
\query{n, b \cdot}{\ellh_{t+1}}
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\right]^4
\right\}
\lesssim
\poly \left(
\max_{c \in [\genericDimenstion^s] , z \in \{ x, x' \}}
\E | \indexedActivity{\ell - 1}{n, c 1}{z} |^{16}
\right)
\, ,
\end{align}
by H{\" o}lder's inequality and exchangeability.
Application of \Cref{lem:mmnt_propagation} allows us to bound the above r.h.s.\ by a constant independent of $h, k$ and $n$ as desired.
We can thus focus only on the terms for which $h_1 \neq h_2$.
Among these, the only ones with non-zero expectation are those where $i = j$, $k_1 = k_2$, and $k_3 = k_4$, contributing to $\E [ \bar{\generalSum}_{n}^2 ]$ the amount
\begin{align}\label{eq:wqk_square_simplif}
\delta_{i = j}
\frac{\outStd^2 \valueStd^2}{(\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol})^2}
\sum_{\substack{h_1 h_2 \\ k_1, k_2}}
\E \left[
\left(
\frac{
\langle
g_{n, c_1 \cdot}^{\ell - 1}
,
g_{n, c_2 \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
\right)^2
\frac{
\langle
\query{n, a \cdot}{\ell 1}
,
\query{n, b \cdot}{\ell 1}
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\frac{
\langle
\query{n, a \cdot}{\ell 2}
,
\query{n, b \cdot}{\ell 2}
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 1}
}{
\partial
G_{n, a d_1}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 2}
}{
\partial
G_{n, a d_1}^{\ell 2}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 2}
}{
\partial
G_{n, b d_2}^{\ell 2}
}
\right]
\end{align}
by exchangeability.
Noting that the number of such summands cancels with the $(\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol})^2$ normalisation in the limit, we see that the limit of $\E [ \bar{\generalSum}_{n}^2 ]$ will be identical to that of \Cref{eq:wqk_square_simplif}.
Applying \Cref{lem:inner_prod_converge,lem:slutsky}, \Cref{thm:gp_convergence_sqrt} (resp.\ the result by \citet[appendix A]{yang2019v2} if $\tau = 1$), and the continuous mapping theorem, we obtain
\begin{align}\label{eq:wqk_weak_limit_square}
\delta_{i = j}
&\left(
\frac{
\langle
g_{n, c_1 \cdot}^{\ell - 1}
,
g_{n, c_2 \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
\right)^2
\frac{
\langle
\query{n, a \cdot}{\ell 1}
,
\query{n, b \cdot}{\ell 1}
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\frac{
\langle
\query{n, a \cdot}{\ell 2}
,
\query{n, b \cdot}{\ell 2}
\rangle
}{
(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 1}
}{
\partial
G_{n, a d_1}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 2}
}{
\partial
G_{n, a d_1}^{\ell 2}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 2}
}{
\partial
G_{n, b d_2}^{\ell 2}
}
\nonumber
\\
&\rightsquigarrow
\delta_{i = j}
\delta_{\tau = \frac{1}{2}}
\stdSymbol_{\querySymbol}^4
[\kerntildef{a b}{\ell}{x}{x'}]^2
[\kerntildef{c_1 c_2}{\ell}{x}{x'}]^2
\frac{
\partial
\tildeLogit{a c_1}{\ell 1}
}{
\partial
G_{a d_1}^{\ell 1}
}
\frac{
\partial
\tildeLogit{b c_2}{\ell 1}
}{
\partial
G_{b d_2}^{\ell 1}
}
\frac{
\partial
\tildeLogit{a c_1}{\ell 2}
}{
\partial
G_{a d_1}^{\ell 2}
}
\frac{
\partial
\tildeLogit{b c_2}{\ell 2}
}{
\partial
G_{b d_2}^{\ell 2}
}
\, ,
\end{align}
where $\frac{\partial \tildeLogit{}{\ell h}}{\partial G^{\ell h}}$
follows the $(\nabla \zeta)_{\#}$ pushforward of the GP distribution of $G^{\ell}$ described in \Cref{thm:gp_convergence_sqrt} if $\tau = \frac{1}{2}$, and is a.s.\ constant if $\tau = 1$ as the limit $\tildeLogit{}{\ell h}$ is a.s.\ constant \citep[appendix A]{yang2019v2}, both by the assumed continuity of $\nabla \zeta$.
Finally, because \Cref{eq:wqk_weak_limit_bound} establishes uniform integrability, and $\frac{\partial \tildeLogit{}{\ell 1}}{\partial G^{\ell 1}}$ is independent of $\frac{\partial \tildeLogit{}{\ell 2}}{\partial G^{\ell 2}}$ by \Cref{thm:gp_convergence_sqrt}, we can combine \Cref{eq:wqk_weak_limit_mean,eq:wqk_weak_limit_square} with \Cref{thm:mean_convergence} to conclude that both $\E [ \bar{\generalSum}_{n}^2 ]$ and $\{ \E [ \bar{\generalSum}_{n} ] \}^2$ converge to the same limit.
\end{proof}
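At its core, the variance computation in \Cref{lem:wqk_ntk} is the standard second-moment argument: once $\E [ \bar{\generalSum}_{n}^2 ] - \{ \E [ \bar{\generalSum}_{n} ] \}^2 \to 0$, Chebyshev's inequality yields concentration of the head-average. The underlying scaling can be illustrated on a toy head-average of i.i.d.\ terms (the per-head distribution below is an arbitrary stand-in, not the actual statistic):

```python
import numpy as np

rng = np.random.default_rng(3)

def head_average(n_heads, n_trials=2000):
    # toy stand-in for bar S_n: an average over heads of i.i.d. per-head terms
    terms = rng.normal(loc=1.0, scale=2.0, size=(n_trials, n_heads))
    return terms.mean(axis=1)

# variance of the head-average decays like 1/H, driving the Chebyshev bound to zero
v_small = float(np.var(head_average(8)))
v_large = float(np.var(head_average(512)))
print(v_small, v_large)
```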
\subsubsection{Indirect contribution}
The indirect contribution of an attention layer can be expanded as
\begin{align*}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \indexedActivity{\ell - 1}{n}{x}
}
\frac{
\partial \indexedActivity{\ell - 1}{n}{x}
}{
\partial \theta_{n}^{< \ell}
}
%
\frac{
\partial \indexedActivity{\ell - 1}{n}{x'}
}{
\partial \theta_{n}^{< \ell}
}^\top
%
%
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \indexedActivity{\ell - 1}{n}{x'}
}^\top
%
=
\sum_{a', b' = 1}^{\genericDimenstion^s}
\sum_{i', j' = 1}^{\layerDimension{\ell - 1}}
\ntkhatf{
a' i' , b' j'
%
}{\ell}{x}{x'}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \indexedActivity{\ell - 1}{n, a' i'}{x}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \indexedActivity{\ell - 1}{n, b' j'}{x'}
}
\, ,
\end{align*}
where\footnote{$\widehat{\ntk}$ should technically also be subscripted with $n$, as are all the other variables dependent on $\theta_{n}^{\leq \ell}$; we make an exception here and omit it from our notation as the number of subscripts of $\widehat{\ntk}$ is already high.}
\begin{align}\label{eq:ntk_hat}
\ntkhatf{
a' i' , b' j'
%
}{\ell}{x}{x'}
\coloneqq
\left\langle
\frac{
\partial \indexedActivity{\ell - 1}{n, a' i'}{x}
}{
\partial \theta_{n}^{< \ell}
}
,
\frac{
\partial \indexedActivity{\ell - 1}{n, b' j'}{x'}
}{
\partial \theta_{n}^{< \ell}
}
\right\rangle
\, ,
\end{align}
which we know converges a.s., and thus also in probability, to $\delta_{i' = j'} \ntktildef{a' b'}{\ell}{x}{x'}$ for architectures without attention layers \citep{yang2019v2}.
Expanding the indirect contribution further
\begin{align*}
&\sum_{a', b'}
\sum_{i', j'}
\ntkhatf{
a' i' , b' j'
%
}{\ell}{x}{x'}
\frac{
\partial \indexedActivation{\ell}{n, a i}{x}
}{
\partial \indexedActivity{\ell - 1}{n, a' i'}{x}
}
\frac{
\partial \indexedActivation{\ell}{n, b j}{x'}
}{
\partial \indexedActivity{\ell - 1}{n, b' j'}{x'}
}
\\
&=
\sum_{a', b'}
\sum_{i', j'}
\ntkhatf{
a' i' , b' j'
%
}{\ell}{x}{x'}
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{h_1, h_2 = 1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\sum_{k_1, k_2 = 1}^{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\widetilde{\weightMatSymbol}_{n , k_1 i}^{\ell h_1 , O}
\widetilde{\weightMatSymbol}_{n , k_2 j}^{\ell h_2 , O}
\frac{
\partial
\tildeLogitN{n , a \cdot}{\ell h_1}{x}
\val{n, \cdot k_1}{\ell h_1} (x)
}{
\partial \indexedActivity{\ell - 1}{n, a' i'}{x}
}
\frac{
\partial
\tildeLogitN{n , b \cdot}{\ell h_2}{x'}
\val{n, \cdot k_2}{\ell h_2} (x')
}{
\partial \indexedActivity{\ell - 1}{n, b' j'}{x'}
}
\, .
\end{align*}
In the rest of this section, we drop the $x, x'$ from most of our equations so as to reduce the number of multi-line expressions.
Continuing with the inner sum from above we obtain
\begin{align*}
&\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{\substack{h_1, h_2\\ k_1, k_2}}
\widetilde{\weightMatSymbol}_{n , k_1 i}^{\ell h_1 , O}
\widetilde{\weightMatSymbol}_{n , k_2 j}^{\ell h_2 , O}
\frac{
\partial
\tildeLogit{n , a \cdot}{\ell h_1}
\val{n, \cdot k_1}{\ell h_1}
}{
\partial g_{n, a' i'}^{\ell - 1}
}
\frac{
\partial
\tildeLogit{n , b \cdot}{\ell h_2}
\val{n, \cdot k_2}{\ell h_2}
}{
\partial g_{n, b' j'}^{\ell - 1}
}
\\
&=
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{\substack{h_1, h_2\\ k_1, k_2}}
\widetilde{\weightMatSymbol}_{n , k_1 i}^{\ell h_1 , O}
\widetilde{\weightMatSymbol}_{n , k_2 j}^{\ell h_2 , O}
\begin{aligned}[t]
&\biggl(
\sum_{c_1=1}^{\genericDimenstion^s}
\tildeLogit{n, a c_1}{\ellh_1}
\frac{
\partial
\val{n, c_1 k_1}{\ell h_1}
}{
\partial g_{n, a' i'}^{\ell - 1}
}
+
\val{n, c_1 k_1}{\ell h_1}
\sum_{d_1 = 1}^{\genericDimenstion^s}
\frac{
\partial
\tildeLogit{n, a c_1}{\ellh_1}
}{
\partial
G_{n, a d_1}^{\ellh_1}
}
\frac{
\partial
G_{n, a d_1}^{\ellh_1}
}{
\partial g_{n, a' i'}^{\ell - 1}
}
\biggr)
\cdot
\\
&\biggl(
\sum_{c_2=1}^{\genericDimenstion^s}
\tildeLogit{n, b c_2}{\ellh_2}
\frac{
\partial
\val{n, c_2 k_2}{\ell h_2}
}{
\partial g_{n, b' j'}^{\ell - 1}
}
+
\val{n, c_2 k_2}{\ell h_2}
\sum_{d_2 = 1}^{\genericDimenstion^s}
\frac{
\partial
\tildeLogit{n, b c_2}{\ellh_2}
}{
\partial
G_{n, b d_2}^{\ellh_2}
}
\frac{
\partial
G_{n, b d_2}^{\ellh_2}
}{
\partial g_{n, b' j'}^{\ell - 1}
}
\biggr)
\, ,
\end{aligned}
\end{align*}
which, after multiplying out the terms inside the parentheses, gives us four sums, for each of which we prove convergence separately.
Since the spatial dimension $\genericDimenstion^s$ does not change with $n$, we will restrict our attention to an arbitrary fixed choice of $a', b', c_1, c_2, d_1, d_2 \in [\genericDimenstion^s]$ throughout.
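Schematically, writing $A_t$ for the summand in which the value is differentiated and $B_t$ for the summand in which the attention logits are differentiated ($t \in \{1, 2\}$ indexing the two parenthesized factors above), the product expands as
\begin{align*}
(A_1 + B_1)(A_2 + B_2)
=
A_1 A_2
+
B_1 B_2
+
A_1 B_2
+
B_1 A_2
\, ,
\end{align*}
where the $A_1 A_2$ term is treated in \Cref{lem:gg_ntk}, the $B_1 B_2$ term in \Cref{lem:vv_ntk}, and the $A_1 B_2$ cross term in \Cref{lem:gv_ntk}; the remaining $B_1 A_2$ cross term follows by a symmetric argument.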
\begin{lemma}\label{lem:gg_ntk}
$
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{i', j'}
\sum_{\substack{h_1, h_2\\ k_1, k_2}}
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
\widetilde{\weightMatSymbol}_{n , k_1 i}^{\ell h_1 , O}
\widetilde{\weightMatSymbol}_{n , k_2 j}^{\ell h_2 , O}
\tildeLogit{n, a c_1}{\ellh_1}
\tildeLogit{n, b c_2}{\ellh_2}
\frac{
\partial
\val{n, c_1 k_1}{\ell h_1}
}{
\partial g_{n, a' i'}^{\ell - 1}
}
\frac{
\partial
\val{n, c_2 k_2}{\ell h_2}
}{
\partial g_{n, b' j'}^{\ell - 1}
}
$
converges in probability to
\begin{align*}
\delta_{i = j}
\delta_{\substack{c_1 = a' \\ c_2 = b'}}
\OVStd^2
\ntktildef{a' b'}{\ell}{x}{x'}
\E [
\tildeLogit{a c_1}{\ell 1} (x)
\tildeLogit{b c_2}{\ell 1} (x')
]
\, ,
\end{align*}
\end{lemma}
\begin{proof}[Proof of \Cref{lem:gg_ntk}]
Note that
\begin{align*}
&\frac{1}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{i', j'}
\sum_{\substack{h_1, h_2\\ k_1, k_2}}
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
\widetilde{\weightMatSymbol}_{n , k_1 i}^{\ell h_1 , O}
\widetilde{\weightMatSymbol}_{n , k_2 j}^{\ell h_2 , O}
\tildeLogit{n, a c_1}{\ellh_1}
\tildeLogit{n, b c_2}{\ellh_2}
\frac{
\partial
\val{n, c_1 k_1}{\ell h_1}
}{
\partial g_{n, a' i'}^{\ell - 1}
}
\frac{
\partial
\val{n, c_2 k_2}{\ell h_2}
}{
\partial g_{n, b' j'}^{\ell - 1}
}
\\
&=
\delta_{\substack{c_1 = a' \\ c_2 = b'}}
\frac{\valueStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \layerDimension{\ell - 1}}
\sum_{i', j'}
\sum_{\substack{h_1, h_2\\ k_1, k_2}}
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
\widetilde{\weightMatSymbol}_{n , k_1 i}^{\ell h_1 , O}
\widetilde{\weightMatSymbol}_{n , k_2 j}^{\ell h_2 , O}
\widetilde{\weightMatSymbol}_{n, i' k_1}^{\ell h_1, V}
\widetilde{\weightMatSymbol}_{n, j' k_2}^{\ell h_2, V}
\tildeLogit{n, a c_1}{\ellh_1}
\tildeLogit{n, b c_2}{\ellh_2}
\coloneqq
\delta_{\substack{c_1 = a' \\ c_2 = b'}}
\valueStd^2 \,
\bar{\generalSum}_{n}
\, .
\end{align*}
Further,
$
\E [\bar{\generalSum}_{n}]
=
\E [
\widehat{\ntk}_{
a' 1 , b' 1
%
}^{\ell}
\tildeLogit{n, a c_1}{\ell 1}
\tildeLogit{n, b c_2}{\ell 1}
]
$
by exchangeability.
As in the proof of \Cref{lem:wqk_ntk}, the desired result could thus be obtained by an application of Chebyshev's inequality, $\text{Pr}(|\bar{\generalSum}_{n} - \E \bar{\generalSum}_{n}| \geq \delta) \leq \delta^{-2} [ \E [\bar{\generalSum}_{n}^2] - \{ \E [ \bar{\generalSum}_{n} ] \}^2 ] $, if $\E [ \bar{\generalSum}_{n} ]$ converges to the desired limit and $|\! \E [\bar{\generalSum}_{n}^2] - \{ \E [ \bar{\generalSum}_{n} ] \}^2| \to 0$ as $n \to \infty$.
To establish convergence of the mean, first note that
$
\widehat{\ntk}_{
a' 1 , b' 1
%
}^{\ell}
\tildeLogit{n, a c_1}{\ell 1}
\tildeLogit{n, b c_2}{\ell 1}
\rightsquigarrow
\widetilde{\ntk}_{
a' b'
}^{\ell}
\tildeLogit{a c_1}{\ell 1}
\tildeLogit{b c_2}{\ell 1}
$
by \Cref{thm:gp_convergence_sqrt}, the continuous mapping theorem,
$
\widehat{\ntk}_{
a' 1 , b' 1
%
}^{\ell}
\overset{P}{\longrightarrow}
\widetilde{\ntk}_{
a' b'
}^{\ell}
$ \citep{yang2019v2}, and \Cref{lem:slutsky}.
Inspecting \Cref{thm:mean_convergence} and \Cref{lem:sup_ui}, we see it is sufficient to show $\E [ \bar{\generalSum}_{n}^2 ] \to \{ \E [ \bar{\generalSum}_{*} ] \}^2$ to establish both convergence of the mean, and $\bar{\generalSum}_{n} \overset{P}{\longrightarrow} \E [ \bar{\generalSum}_{*} ]$.
We thus turn to $\E [\bar{\generalSum}_{n}^2]$
\begin{align*}
%
%
\frac{1}{(
\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \layerDimension{\ell - 1}
)^2}
\sum_{\substack{i'_1, j'_1\\ i'_2, j'_2}}
\sum_{\substack{h_1, h_2, h_3, h_4\\ k_1, k_2, k_3, k_4}}
\! \! \! \!
\E \left[
\prod_{t=0}^1
\widehat{\ntk}_{
a' i'_{t+1} ,
b' j'_{t+1}
%
}^{\ell}
\widetilde{\weightMatSymbol}_{n , k_{2t + 1} i}^{\ell h_{2t + 1} , O}
\widetilde{\weightMatSymbol}_{n , k_{2t + 2} j}^{\ell h_{2t + 2} , O}
\widetilde{\weightMatSymbol}_{n, i'_{t + 1} k_{2t + 1}}^{\ell h_{2t + 1}, V}
\widetilde{\weightMatSymbol}_{n, j'_{t + 1} k_{2t + 2}}^{\ell h_{2t + 2}, V}
\tildeLogit{n, a c_1}{\ellh_{2t + 1}}
\tildeLogit{n, b c_2}{\ellh_{2t + 2}}
\right]
.
\end{align*}
We can thus restrict our attention to groups of terms that include at least $\mathcal{O}((\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \layerDimension{\ell - 1})^2)$ of the summands, as long as the expectation of the square of each term can be bounded by a constant independent of the $h, k, i', j'$ and $n$ indices.
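To spell out the counting argument: if $T_{\alpha}$ denotes the summand indexed by the multi-index $\alpha$, and $\E [ T_{\alpha}^2 ] \leq C$ uniformly in $\alpha$ and $n$, then $| \E [ T_{\alpha} ] | \leq \sqrt{C}$ by Jensen's inequality, and hence any group of indices $I_n$ with $|I_n| = \mathrm{o}((\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \layerDimension{\ell - 1})^2)$ satisfies
\begin{align*}
\frac{1}{(
\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \layerDimension{\ell - 1}
)^2}
\Bigl|
\sum_{\alpha \in I_n}
\E [ T_{\alpha} ]
\Bigr|
\leq
\frac{
|I_n| \, \sqrt{C}
}{
(
\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \layerDimension{\ell - 1}
)^2
}
\longrightarrow 0
\, ,
\end{align*}
so such groups do not contribute to the limit of $\E [ \bar{\generalSum}_{n}^2 ]$.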
Observe that
\begin{align}\label{eq:gg_bound}
\E \left\{
\left[
\prod_{t=0}^1
\widehat{\ntk}_{
a' i'_{t+1} ,
b' j'_{t+1}
%
}^{\ell}
\widetilde{\weightMatSymbol}_{n , k_{2t + 1} i}^{\ell h_{2t + 1} , O}
\right.\right.
&\left.\left.
\vphantom{\prod_{t=0}^1}
\widetilde{\weightMatSymbol}_{n , k_{2t + 2} j}^{\ell h_{2t + 2} , O}
\widetilde{\weightMatSymbol}_{n, i'_{t + 1} k_{2t + 1}}^{\ell h_{2t + 1}, V}
\widetilde{\weightMatSymbol}_{n, j'_{t + 1} k_{2t + 2}}^{\ell h_{2t + 2}, V}
\tildeLogit{n, a c_1}{\ellh_{2t + 1}}
\tildeLogit{n, b c_2}{\ellh_{2t + 2}}
\right]^2
\right\}
\nonumber
\\
&\lesssim
\poly \left(
\max_{\substack{
a', b' \in [\genericDimenstion^s],
i', j' \in \{ 1, 2 \} \\
z, z' \in \{ x, x' \}
}}
\E [
\widehat{\ntk}_{
a' i' , \,
b' j'
%
}^{\ell} (z, z')^4
]
\, ,
\max_{\substack{
c, c' \in [\genericDimenstion^s] \\
z \in \{ x , x' \}
}}
\E [
\tildeLogit{n, c , c'}{\ell 1} (z)^8
]
\right)
\, ,
\end{align}
and thus we can obtain the desired bound by applying \Cref{lem:mmnt_propagation}.
We thus shift our attention to the terms that are not $\mathrm{o}( (\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \layerDimension{\ell - 1})^2 )$, of which there are three types: (i)~$i = j$, $(h_1, k_1, i'_1) = (h_2, k_2, j'_1)$, and $(h_3, k_3, i'_2) = (h_4, k_4, j'_2)$; (ii)~$i = j$, $(h_1, k_1, i'_1) = (h_4, k_4, j'_2)$, and $(h_2, k_2, j'_1) = (h_3, k_3, i'_2)$; (iii)~$(h_1, k_1, i'_1) = (h_3, k_3, i'_2)$, and $(h_2, k_2, j'_1) = (h_4, k_4, j'_2)$.
Hence the limit of $\E [\bar{\generalSum}_{n}^2 ]$ will, up to a constant, coincide with that of
\begin{align*}
\E \left[
\left(
\widehat{\ntk}_{
a' 1 ,
b' 2
%
}^{\ell}
\tildeLogit{n, a a'}{\ell 1}
\tildeLogit{n, b b'}{\ell 2}
\right)^2
\right]
+
\delta_{i = j}
\E \left[
\left(
\widehat{\ntk}_{
a' 1 ,
b' 1
%
}^{\ell}
\widehat{\ntk}_{
a' 2 ,
b' 2
%
}^{\ell}
+
\widehat{\ntk}_{
a' 1 ,
b' 2
%
}^{\ell}
\widehat{\ntk}_{
a' 2 ,
b' 1
%
}^{\ell}
\right)
\tildeLogit{n, a a'}{\ell 1}
\tildeLogit{n, b b'}{\ell 1}
\tildeLogit{n, a a'}{\ell 2}
\tildeLogit{n, b b'}{\ell 2}
\right]
\, ,
\end{align*}
by exchangeability.
Noticing that $\tildeLogit{n}{\ell}$ converges in distribution by \Cref{thm:gp_convergence_sqrt} and the continuous mapping theorem, and that $\widehat{\ntk}^{\ell}$ converges in probability \citep{yang2019v2}, both integrands converge in distribution by the continuous mapping theorem and \Cref{lem:slutsky}.
Since the $\tildeLogit{n}{\ell h}$ corresponding to different heads are independent in the limit (\Cref{thm:gp_convergence_sqrt}), and the limit of $\widehat{\ntk}_{a' i' , b' j'}^{\ell}$ is non-zero only if $i' = j'$
%
\citep{yang2019v2}, application of \Cref{thm:mean_convergence} combined with the bound from \Cref{eq:gg_bound} concludes the proof.
\end{proof}
\begin{lemma}\label{lem:vv_ntk}
$
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{i', j'}
\sum_{\substack{h_1, h_2\\ k_1, k_2}}
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
\widetilde{\weightMatSymbol}_{n , k_1 i}^{\ell h_1 , O}
\widetilde{\weightMatSymbol}_{n , k_2 j}^{\ell h_2 , O}
\val{n, c_1 k_1}{\ell h_1}
\val{n, c_2 k_2}{\ell h_2}
\frac{
\partial
\tildeLogit{n, a c_1}{\ellh_1}
}{
\partial
G_{n, a d_1}^{\ellh_1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ellh_2}
}{
\partial
G_{n, b d_2}^{\ellh_2}
}
\frac{
\partial
G_{n, a d_1}^{\ellh_1}
}{
\partial g_{n, a' i'}^{\ell - 1}
}
\frac{
\partial
G_{n, b d_2}^{\ellh_2}
}{
\partial g_{n, b' j'}^{\ell - 1}
}
$
converges in probability to
\begin{align*}
\delta_{\substack{i = j \\ \tau = \frac{1}{2}}}
\OVStd^2
\QKStd^2
\widetilde{\ntk}_{a' b'}^{\ell}(x, x')
\kerntildef{c_1 c_2}{\ell}{x}{x'}
\left(
\delta_{\substack{d_1 = a' \\ d_2 = b'}}
\kerntildef{a b}{\ell}{x}{x'}
+
\delta_{\substack{a' = a \\ b' = b}}
\kerntildef{d_1 d_2}{\ell}{x}{x'}
\right)
\E \left[
\frac{
\partial
\tildeLogit{a c_1}{\ell 1} (x)
}{
\partial
G_{a d_1}^{\ell 1} (x)
}
\frac{
\partial
\tildeLogit{b c_2}{\ell 1} (x')
}{
\partial
G_{b d_2}^{\ell 1} (x')
}
\right]
\, .
\end{align*}
\end{lemma}
\begin{proof}[Proof of \Cref{lem:vv_ntk}]
To make the notation more succinct, we define
\begin{align}\label{eq:vv_grad_logit_summand}
\frac{
\partial
G_{n, a d_1}^{\ellh_1}
}{
\partial g_{n, a' i'}^{\ell - 1}
}
=
\frac{1}{(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{\tau} \sqrt{\layerDimension{\ell - 1}}}
%
\sum_{u_1 = 1}^{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}
\underbrace{
\delta_{d_1 = a'}
\stdSymbol_{\keySymbol}
\query{n, a u_1}{\ell h_1}
%
\widetilde{\weightMatSymbol}_{n, i' u_1}^{\ellh_1 , K}
%
%
%
+
\delta_{a' = a}
\stdSymbol_{\querySymbol}
\key{n, d_1 u_1}{\ell h_1}
%
\widetilde{\weightMatSymbol}_{n, i' u_1}^{\ellh_1 , Q}
%
%
%
}_{\coloneqq \Gamma_{n, i' u_1}^{h_1}}
%
\, .
\end{align}
which leads us to
\begin{align*}
\bar{\generalSum}_{n}
=
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \layerDimension{\ell - 1} (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}}
\sum_{\substack{i', j' \\ h_1, h_2}}
\sum_{\substack{k_1, k_2 \\ u_1 , u_2}}
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
\widetilde{\weightMatSymbol}_{n , k_1 i}^{\ell h_1 , O}
\widetilde{\weightMatSymbol}_{n , k_2 j}^{\ell h_2 , O}
\val{n, c_1 k_1}{\ell h_1}
\val{n, c_2 k_2}{\ell h_2}
\frac{
\partial
\tildeLogit{n, a c_1}{\ellh_1}
}{
\partial
G_{n, a d_1}^{\ellh_1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ellh_2}
}{
\partial
G_{n, b d_2}^{\ellh_2}
}
\Gamma_{n, i' u_1}^{h_1}
\Gamma_{n, j' u_2}^{h_2}
\, .
\end{align*}
Unlike in the proof of \Cref{lem:gg_ntk}, the mean
\begin{align}\label{eq:vv_mean}
\E [
\bar{\generalSum}_{n}
]
=
\delta_{i = j}
\frac{\OVStd^2}{\layerDimension{\ell - 1} (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}}
\sum_{i', j'}
\sum_{\substack{u_1 , u_2}}
\E \left[
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
\frac{
\langle
g_{n, c_1 \cdot}^{\ell - 1}
,
g_{n, c_2 \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 1}
}{
\partial
G_{n, a d_1}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\Gamma_{n, i' u_1}^{1}
\Gamma_{n, j' u_2}^{1}
\right]
\, ,
\end{align}
only eliminates some of the sums.
%
This issue can be resolved with the help of \Cref{lem:vv_subtask_convg}.
\begin{lemma}\label{lem:vv_subtask_convg}
The random variable
\begin{align*}
\bar{\generalSum}_{n}^{h_1 h_2}
\coloneqq
\frac{1}{\layerDimension{\ell - 1} (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}}
\sum_{i', j'}
\sum_{\substack{u_1 , u_2}}
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
\Gamma_{n, i' u_1}^{h_1}
\Gamma_{n, j' u_2}^{h_2}
\, ,
\end{align*}
converges in probability to
$$
\delta_{\tau = \frac{1}{2}}
\delta_{h_1 = h_2}
\QKStd^2
\widetilde{\ntk}_{a' b'}^{\ell}(x, x')
\left(
\delta_{\substack{d_1 = a' \\ d_2 = b'}}
\kerntildef{a b}{\ell}{x}{x'}
+
\delta_{\substack{a' = a \\ b' = b}}
\kerntildef{d_1 d_2}{\ell}{x}{x'}
\right)
\, .
$$
\end{lemma}
\begin{proof}
Notice
\begin{align}\label{eq:vv_subtask_mean}
\E [\bar{\generalSum}_n^{h_1 h_2}]
=
&\delta_{h_1 = h_2}
\frac{\QKStd^2}{(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau - 1}}
\E
\left[
\widehat{\ntk}_{a' 1, b' 1}^{\ell}
\left(
\delta_{\substack{d_1 = a' \\ d_2 = b'}}
\frac{
\langle
g_{n, a \cdot}^{\ell - 1}
,
g_{n, b \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
+
\delta_{\substack{a' = a \\ b' = b}}
\frac{
\langle
g_{n, d_1 \cdot}^{\ell - 1}
,
g_{n, d_2 \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
\right)
\right]
+
\nonumber
\\
&\delta_{h_1 = h_2}
\frac{\QKStd^2}{(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau - 1}}
\E \left[
\widehat{\ntk}_{a' 1, b' 1}^{\ell}
\left(
\delta_{\substack{d_1 = a' \\ b' = b}}
\frac{
g_{n, a 1}^{\ell - 1}
g_{n, d_2 1}^{\ell - 1}
}{
\layerDimension{\ell - 1}
}
+
\delta_{\substack{a' = a \\ d_2 = b'}}
\frac{
g_{n, d_1 1}^{\ell - 1}
g_{n, b 1}^{\ell - 1}
}{
\layerDimension{\ell - 1}
}
\right)
\right]
\, ,
\end{align}
and thus we can combine the fact that $\widehat{\ntk}_{a'
i', b' j'}^{\ell} \overset{P}{\longrightarrow} \delta_{i' = j'} \widetilde{\ntk}_{a' b'}^{\ell}$ \citep{yang2019v2} with \Cref{lem:inner_prod_converge,lem:mmnt_propagation}, the continuous mapping theorem, and \Cref{thm:mean_convergence} to obtain
$$
\E [ \bar{\generalSum}_{n}^{h_1 h_2} ]
\to
\delta_{\tau = \frac{1}{2}}
\delta_{h_1 = h_2}
\QKStd^2
\widetilde{\ntk}_{a' b'}^{\ell}(x, x')
\left(
\delta_{\substack{d_1 = a' \\ d_2 = b'}}
\kerntildef{a b}{\ell}{x}{x'}
+
\delta_{\substack{a' = a \\ b' = b}}
\kerntildef{d_1 d_2}{\ell}{x}{x'}
\right)
\, ,
$$
as $n \to \infty$.
To obtain the convergence of $\bar{\generalSum}_{n}^{h_1 h_2}$ to this limit in probability, it is thus sufficient to show that $| \! \E [ (\bar{\generalSum}_{n}^{h_1 h_2})^2 ] - \{ \E [ \bar{\generalSum}_{n}^{h_1 h_2} ] \}^2|$ converges to zero as $n \to \infty$.
Substituting
\begin{align*}
\E [ (\bar{\generalSum}_{n}^{h_1 h_2})^2 ]
=
\frac{1}{(\layerDimension{\ell - 1} (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau})^2}
\sum_{\substack{i'_1, j'_1 \\ i'_2, j'_2}}
\sum_{\substack{u_1 , u_2 \\ u_3 , u_4}}
\E \left[
\widehat{\ntk}_{
a' i'_1 , b' j'_1
%
}^{\ell}
\widehat{\ntk}_{
a' i'_2 , b' j'_2
%
}^{\ell}
\Gamma_{n, i'_1 u_1}^{h_1}
\Gamma_{n, j'_1 u_2}^{h_2}
\Gamma_{n, i'_2 u_3}^{h_1}
\Gamma_{n, j'_2 u_4}^{h_2}
\right]
\, ,
\end{align*}
we can once again restrict our attention to groups of terms that include at least $\mathcal{O}((\layerDimension{\ell - 1} (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau})^2)$ of the summands as long as each term can be bounded by a constant independent of the $i', j'$ and $n$ indices.
This bound can again be obtained by repeated application of H{\" o}lder's inequality, followed by \Cref{lem:mmnt_propagation}.
We can thus shift our attention to the terms for which either of the following holds: (i)~$(i'_1, u_1) = (j'_1, u_2)$ and $(i'_2, u_3) = (j'_2, u_4)$; or (ii)~$(i'_1, u_1) = (i'_2, u_3)$ and $(j'_1, u_2) = (j'_2, u_4)$; or (iii)~$(i'_1, u_1) = (j'_2, u_4)$ and $(j'_1, u_2) = (i'_2, u_3)$.
As in \Cref{eq:vv_subtask_mean}, we can use the above established boundedness to see that the contributions from any terms that involve cross terms like $\query{n, a 1}{\ell 1} \key{n, d_1 1}{\ell 1} \widetilde{\weightMatSymbol}_{n, 1 1}^{\ell 1 , K}
\widetilde{\weightMatSymbol}_{n, 1 1}^{\ell 1, Q}$, and from terms with either $i'_1 \neq j'_1$ or $i'_2 \neq j'_2$ (the limit of $\widehat{\ntk}_{a' i', b' j'}$ is zero if $i' \neq j'$), vanish.
With some algebraic manipulation analogous to that in \Cref{eq:vv_subtask_mean}, we thus obtain
$$
\E [ (\bar{\generalSum}_{n}^{h_1 h_2})^2 ]
\to
\delta_{\tau = \frac{1}{2}}
\delta_{h_1 = h_2}
\left[
\QKStd^2
\widetilde{\ntk}_{a' b'}^{\ell}(x, x')
\left(
\delta_{\substack{d_1 = a' \\ d_2 = b'}}
\kerntildef{a b}{\ell}{x}{x'}
+
\delta_{\substack{a' = a \\ b' = b}}
\kerntildef{d_1 d_2}{\ell}{x}{x'}
\right)
\right]^2
\, ,
$$
as desired.
Application of Chebyshev's inequality concludes the proof.
\end{proof}
With $\bar{\generalSum}_{n}^{h_1 h_2}$ defined as in \Cref{lem:vv_subtask_convg}, we can revisit \Cref{eq:vv_mean}
\begin{align*}
\E [\bar{\generalSum}_{n}]
=
\delta_{i = j}
\OVStd^2
\E \left[
\bar{\generalSum}_{n}^{1 1}
\frac{
\langle
g_{n, c_1 \cdot}^{\ell - 1}
,
g_{n, c_2 \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 1}
}{
\partial
G_{n, a d_1}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\right]
\, .
\end{align*}
Note that the first two factors inside the expectation converge in probability to constants by \Cref{lem:inner_prod_converge,lem:vv_subtask_convg} and the continuous mapping theorem, and
$$
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 1}
}{
\partial
G_{n, a d_1}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\rightsquigarrow
\frac{
\partial
\tildeLogit{a c_1}{\ell 1}
}{
\partial
G_{a d_1}^{\ell 1}
}
\frac{
\partial
\tildeLogit{b c_2}{\ell 1}
}{
\partial
G_{b d_2}^{\ell 1}
}
\, ,
$$
if $\tau = \frac{1}{2}$ by \Cref{thm:gp_convergence_sqrt}, and in probability to a constant if $\tau = 1$ \citep[appendix A]{yang2019v2}, both using the assumed continuity of $\nabla \zeta$.
Since $\nabla \zeta$ is also assumed to be bounded, we can combine H{\" o}lder's inequality with \Cref{lem:mmnt_propagation} to establish uniform integrability (see the proof of \Cref{lem:vv_subtask_convg} for the bound on $\bar{\generalSum}_{n}^{ 1 1 }$) via \Cref{lem:sup_ui}, and with that convergence of $\E [ \bar{\generalSum}_n ]$ by \Cref{lem:slutsky} and \Cref{thm:mean_convergence}, yielding
\begin{align*}
\E [\bar{\generalSum}_{n}]
\to
\delta_{\substack{i = j \\ \tau = \frac{1}{2}}}
%
\OVStd^2
\QKStd^2
%
%
\widetilde{\ntk}_{a' b'}^{\ell}(x, x')
\kerntildef{c_1 c_2}{\ell}{x}{x'}
\left(
\delta_{\substack{d_1 = a' \\ d_2 = b'}}
\kerntildef{a b}{\ell}{x}{x'}
+
\delta_{\substack{a' = a \\ b' = b}}
\kerntildef{d_1 d_2}{\ell}{x}{x'}
\right)
\E \left[
\frac{
\partial
\tildeLogit{a c_1}{\ell 1} (x)
}{
\partial
G_{a d_1}^{\ell 1} (x)
}
\frac{
\partial
\tildeLogit{b c_2}{\ell 1} (x')
}{
\partial
G_{b d_2}^{\ell 1} (x')
}
\right]
\, .
\end{align*}
Convergence of $\bar{\generalSum}_{n}$ to the same constant can be obtained via Chebyshev's inequality by proving $| \! \E [ \bar{\generalSum}_{n}^2] - \{ \E [ \bar{\generalSum}_{n}] \}^2 | \to 0$.
Using the notation from \Cref{lem:vv_subtask_convg},
the second moment of $\bar{\generalSum}_{n}$ can be written as
\begin{align*}
\E [\bar{\generalSum}_{n}^2]
=
\frac{\stdSymbol_{\outputSymbol}^{4}}{(\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol})^2}
\sum_{\substack{h_1, h_2, h_3, h_4 \\ k_1, k_2, k_3, k_4}}
\E \left[
\prod_{t=0}^1
\bar{\generalSum}_{n}^{h_{2t + 1} h_{2t + 2}}
\widetilde{\weightMatSymbol}_{n , k_{2t + 1} i}^{\ell h_{2t + 1} , O}
\widetilde{\weightMatSymbol}_{n , k_{2t + 2} j}^{\ell h_{2t + 2} , O}
\val{n, c_1 k_{2t + 1}}{\ell h_{2t + 1}}
\val{n, c_2 k_{2t + 2}}{\ell h_{2t + 2}}
\frac{
\partial
\tildeLogit{n, a c_1}{\ellh_{2t + 1}}
}{
\partial
G_{n, a d_1}^{\ellh_{2t + 1}}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ellh_{2t + 2}}
}{
\partial
G_{n, b d_2}^{\ellh_{2t + 2}}
}
\right]
\, .
\end{align*}
Because $\nabla \zeta$ is bounded by assumption, we can again use H{\" o}lder's inequality together with \Cref{lem:mmnt_propagation} to bound each of the summands by a constant independent of the $h, k$ and $n$ indices.
This means we can restrict our attention only to groups of terms that include at least $\mathcal{O}((\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol})^2)$ of the summands.
These fall into one of the following three categories: (i)~$i = j$, $(h_1, k_1) = (h_2, k_2)$, and $(h_3, k_3) = (h_4, k_4)$; (ii)~$(h_1, k_1) = (h_3, k_3)$, and $(h_2, k_2) = (h_4, k_4)$; and (iii)~$i = j$, $(h_1, k_1) = (h_4, k_4)$, and $(h_2, k_2) = (h_3, k_3)$.
Using exchangeability, we thus obtain
\begin{align*}
\E [\bar{\generalSum}_{n}^2]
=
&\stdSymbol_{\outputSymbol \valueSymbol}^4
\E \left[
\bar{\generalSum}_{n}^{1 2}
\bar{\generalSum}_{n}^{1 2}
\frac{
\langle
g_{n, c_1 \cdot}^{\ell - 1}
,
g_{n, c_1 \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
\frac{
\langle
g_{n, c_2 \cdot}^{\ell - 1}
,
g_{n, c_2 \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 1}
}{
\partial
G_{n, a d_1}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 2}
}{
\partial
G_{n, a d_1}^{\ell 2}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 2}
}{
\partial
G_{n, b d_2}^{\ell 2}
}
\right]
+
\\
&\delta_{i = j}
\stdSymbol_{\outputSymbol \valueSymbol}^4
\E \left[
(
\bar{\generalSum}_{n}^{1 1}
\bar{\generalSum}_{n}^{2 2}
+
\bar{\generalSum}_{n}^{1 2}
\bar{\generalSum}_{n}^{2 1}
)
\left(
\frac{
\langle
g_{n, c_1 \cdot}^{\ell - 1}
,
g_{n, c_2 \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
\right)^2
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 1}
}{
\partial
G_{n, a d_1}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, a c_1}{\ell 2}
}{
\partial
G_{n, a d_1}^{\ell 2}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 2}
}{
\partial
G_{n, b d_2}^{\ell 2}
}
\right]
+
\\
\mathrm{o}(1)
\, .
\end{align*}
Note that by the assumed continuity of $\nabla \zeta$, \Cref{thm:gp_convergence_sqrt}, \Cref{lem:inner_prod_converge,lem:vv_subtask_convg}, the continuous mapping theorem, and \Cref{lem:slutsky}, both integrands converge in distribution, which, combined with the bound derived above and \Cref{thm:mean_convergence}, implies
\begin{align*}
\E [ \bar{\generalSum}_{n}^2 ]
\to
\delta_{\substack{i = j \\ \tau = \frac{1}{2}}}
\left[
\OVStd^2
\QKStd^2
%
%
\widetilde{\ntk}_{a' b'}^{\ell}(x, x')
\kerntildef{c_1 c_2}{\ell}{x}{x'}
\left(
\delta_{\substack{d_1 = a' \\ d_2 = b'}}
\kerntildef{a b}{\ell}{x}{x'}
+
\delta_{\substack{a' = a \\ b' = b}}
\kerntildef{d_1 d_2}{\ell}{x}{x'}
\right)
\E \left[
\frac{
\partial
\tildeLogit{a c_1}{\ell 1} (x)
}{
\partial
G_{a d_1}^{\ell 1} (x)
}
\frac{
\partial
\tildeLogit{b c_2}{\ell 1} (x')
}{
\partial
G_{b d_2}^{\ell 1} (x')
}
\right]
\right]^2
\end{align*}
where we have used the fact that $\bar{\generalSum}_{n}^{h_1 h_2}$ converges in probability to zero whenever $h_1 \neq h_2$ (\Cref{lem:vv_subtask_convg}), and the asymptotic independence of $\tildeLogit{n}{\ell 1}$ and $\tildeLogit{n}{\ell 2}$ (\Cref{thm:gp_convergence_sqrt} if $\tau = \frac{1}{2}$, resp.\ \citep[appendix A]{yang2019v2} if $\tau = 1$).
\end{proof}
\begin{lemma}\label{lem:gv_ntk}
$
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{i', j'}
\sum_{\substack{h_1, h_2\\ k_1, k_2}}
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
\widetilde{\weightMatSymbol}_{n , k_1 i}^{\ell h_1 , O}
\widetilde{\weightMatSymbol}_{n , k_2 j}^{\ell h_2 , O}
\tildeLogit{n, a c_1}{\ellh_1}
\val{n, c_2 k_2}{\ell h_2}
\frac{
\partial
\val{n, c_1 k_1}{\ell h_1}
}{
\partial g_{n, a' i'}^{\ell - 1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ellh_2}
}{
\partial
G_{n, b d_2}^{\ellh_2}
}
\frac{
\partial
G_{n, b d_2}^{\ellh_2}
}{
\partial g_{n, b' j'}^{\ell - 1}
}
\overset{P}{\longrightarrow}
0
\, .
$
\end{lemma}
\begin{proof}[Proof of \Cref{lem:gv_ntk}]
%
Observing that
$
\frac{
\partial
\val{n, c_1 k_1}{\ell h_1}
}{
\partial g_{n, a' i'}^{\ell - 1}
}
=
\delta_{c_1 = a'}
\stdSymbol_{\valueSymbol}
\frac{
\widetilde{\weightMatSymbol}_{n, i' k_1}^{\ell h_1, V}
}{
\sqrt{\layerDimension{\ell - 1}}
}
$
and setting
\begin{align*}
\bar{\generalSum}_{n}
=
%
%
\frac{\stdSymbol_{\valueSymbol}}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \sqrt{\layerDimension{\ell - 1}}}
\sum_{i', j'}
\sum_{\substack{h_1, h_2\\ k_1, k_2}}
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
\widetilde{\weightMatSymbol}_{n , k_1 i}^{\ell h_1 , O}
\widetilde{\weightMatSymbol}_{n , k_2 j}^{\ell h_2 , O}
\widetilde{\weightMatSymbol}_{n, i' k_1}^{\ell h_1, V}
\val{n, c_2 k_2}{\ell h_2}
\tildeLogit{n, a c_1}{\ellh_1}
\frac{
\partial
\tildeLogit{n, b c_2}{\ellh_2}
}{
\partial
G_{n, b d_2}^{\ellh_2}
}
\frac{
\partial
G_{n, b d_2}^{\ellh_2}
}{
\partial g_{n, b' j'}^{\ell - 1}
}
\, ,
\end{align*}
we immediately see that
\begin{align*}
\E [\bar{\generalSum}_n]
=
\delta_{i = j}
%
%
\frac{\valueStd^2}{\layerDimension{\ell - 1}}
\sum_{i', j'}
\E \left[
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
g_{n, c_2 i'}^{\ell - 1}
\tildeLogit{n, a c_1}{\ell 1}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\frac{
\partial
G_{n, b d_2}^{\ell 1}
}{
\partial g_{n, b' j'}^{\ell - 1}
}
\right]
\, .
\end{align*}
Analogously to the proof of \Cref{lem:vv_ntk}, we define
\begin{align}\label{eq:gv_subtask_variable}
\bar{\generalSum}_{n}^{h}
&=
\frac{1}{\layerDimension{\ell - 1}}
\sum_{i', j'}
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
g_{n, c_2 i'}^{\ell - 1}
\frac{
\partial
G_{n, b d_2}^{\ell h}
}{
\partial g_{n, b' j'}^{\ell - 1}
}
\\
&=
\frac{1}{(\layerDimension{\ell - 1})^{3/2} (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{\tau}}
\sum_{i', j'}
\sum_{u_1 = 1}^{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
g_{n, c_2 i'}^{\ell - 1}
\biggl(
\underbrace{
\delta_{d_2 = b'}
\stdSymbol_{\keySymbol}
\query{n, b u_1}{\ellh}
\widetilde{\weightMatSymbol}_{n, j' u_1}^{\ellh , K}
+
\delta_{b' = b}
\stdSymbol_{\querySymbol}
\key{n, d_2 u_1}{\ell h}
\widetilde{\weightMatSymbol}_{n, j' u_1}^{\ellh , Q}
}_{\eqqcolon \Gamma_{n, j' u_1}^{h}}
\biggr)
\, ,
\end{align}
and make use of an auxiliary lemma.
\begin{lemma}\label{lem:gv_subtask_convg}
$\bar{\generalSum}_{n}^{h} \overset{P}{\longrightarrow} 0$.
\end{lemma}
\begin{proof}
Observe $\E [ \bar{\generalSum}_{n}^{h} ] = 0$ if $\tau = \frac{1}{2}$ (independence of key and query weights), and
\begin{align*}
\E [ \bar{\generalSum}_{n}^{h} ]
=
\frac{\stdSymbol_{\querySymbol \keySymbol}}{(\layerDimension{\ell - 1})^2}
\sum_{i', j'}
\E \left[
\widehat{\ntk}_{
a' i' , b' j'
%
}^{\ell}
g_{n, c_2 i'}^{\ell - 1}
\biggl(
\delta_{d_2 = b'}
g_{n, b j'}^{\ell - 1}
+
\delta_{b' = b}
g_{n, d_2 j'}^{\ell - 1}
\biggr)
\right]
\, ,
\end{align*}
if $\tau = 1$ (key and query weights are equal a.s.).
Since each of the summands can be bounded by a constant independent of the $i', j'$ and $n$ indices by \Cref{lem:mmnt_propagation}, we can restrict our focus to the terms for which $i' \neq j'$, yielding
\begin{align*}
\E [ \bar{\generalSum}_{n}^{h} ]
=
\stdSymbol_{\querySymbol \keySymbol}
\frac{\layerDimension{\ell - 1}(\layerDimension{\ell - 1} - 1)}{(\layerDimension{\ell - 1})^2}
\E \left[
\widehat{\ntk}_{
a' 1 , b' 2
%
}^{\ell}
g_{n, c_2 1}^{\ell - 1}
\biggl(
\delta_{d_2 = b'}
g_{n, b 2}^{\ell - 1}
+
\delta_{b' = b}
g_{n, d_2 2}^{\ell - 1}
\biggr)
\right]
+
\mathrm{o}(1)
\, .
\end{align*}
Since $\widehat{\ntk}_{a' 1 , b' 2}^{\ell} \overset{P}{\longrightarrow} 0$ \citep{yang2019v2}, and the $g_{n, c_2 i'}^{\ell - 1} g_{n, b j'}^{\ell - 1}$ products converge in distribution by the assumed continuity of $\phi$ and the continuous mapping theorem, the integrand converges to zero in distribution by \Cref{lem:slutsky}.
Using \Cref{lem:mmnt_propagation,lem:sup_ui} and \Cref{thm:mean_convergence}, we again establish $\E [ \bar{\generalSum}_{n}^{h} ] \to 0$.
To obtain convergence in probability, observe
\begin{align*}
\E [ (\bar{\generalSum}_{n}^h)^2 ]
=
\frac{1}{(\layerDimension{\ell - 1})^{3} (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}}
\sum_{\substack{i'_1, j'_1 \\ i'_2, j'_2}}
\sum_{u_1, u_2}
\E \left[
\widehat{\ntk}_{
a' i'_1 , b' j'_1
%
}^{\ell}
\widehat{\ntk}_{
a' i'_2 , b' j'_2
%
}^{\ell}
g_{n, c_2 i'_1}^{\ell - 1}
g_{n, c_2 i'_2}^{\ell - 1}
\Gamma_{n, j'_1 u_1}^{h}
\Gamma_{n, j'_2 u_2}^{h}
\right]
\, ,
\end{align*}
and note that we can again bound each of the summands using H{\" o}lder's inequality and \Cref{lem:mmnt_propagation}, as in \Cref{lem:vv_subtask_convg}.
We can thus restrict our attention to groups of terms that comprise on the order of $(\layerDimension{\ell - 1})^3 (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{2\tau}$ summands.
If $\tau = 1$, we can thus focus on $u_1 \neq u_2$, in which case integrating $\Gamma_{n, j'_1 u_1}^{h} \Gamma_{n, j'_2 u_2}^{h}$
over the key and query weights yields an additional factor of $(\layerDimension{\ell - 1})^{-1}$,
for example
$$
\E [
\query{b u_1}{\ellh}
\widetilde{\weightMatSymbol}_{n, j'_1 u_1}^{\ellh , K}
\query{b u_2}{\ellh}
\widetilde{\weightMatSymbol}_{n, j'_2 u_2}^{\ellh , K}
]
=
\frac{\queryStd^2}{\layerDimension{\ell - 1}}
g_{n, b' j'_1}^{\ell - 1}
g_{n, b' j'_2}^{\ell - 1}
\, ,
$$
using the equality of key and query weights.
Since $\widehat{\ntk}_{a' i', b' j'}^{\ell}$ converges in probability to zero whenever $i' \neq j'$ \citep{yang2019v2}, and there are only $(\layerDimension{\ell - 1})^2$ terms for which $i'_1 = j'_1$ and $i'_2 = j'_2$, we can use the continuous mapping theorem, \Cref{lem:slutsky}, and \Cref{thm:mean_convergence} to establish that $\E [ (\bar{\generalSum}_{n}^{h})^2 ] \to 0$.
If $\tau = \frac{1}{2}$, all terms for which $u_1 \neq u_2$ will have zero expectation (independence of key and query weights), and thus an argument analogous to the one for $\tau = 1$ applies.
\end{proof}
With \Cref{lem:gv_subtask_convg} at hand, we can simplify
\begin{align*}
\E [\bar{\generalSum}_n]
=
\delta_{i = j}
%
\valueStd^2
\E \left[
\bar{\generalSum}_{n}^1
\tildeLogit{n, a c_1}{\ell 1}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\right]
\, ,
\end{align*}
and use the assumed continuity of $\nabla \zeta$ together with the continuous mapping theorem and \Cref{lem:slutsky} to establish that the integrand converges in distribution to zero.
Since $\nabla \zeta$ is bounded by assumption, we can use H{\" o}lder's inequality and \Cref{lem:mmnt_propagation} to establish uniform integrability via \Cref{lem:sup_ui} (see the proof of \Cref{lem:gv_subtask_convg} for the bound on $\bar{\generalSum}_n^1$).
We thus have $\E [\bar{\generalSum}_n] \to 0$
by \Cref{thm:mean_convergence}.
To establish $\bar{\generalSum}_{n} \overset{P}{\longrightarrow} 0$, it is sufficient to show $\E [ (\bar{\generalSum}_{n})^2 ] \to 0$ and apply Chebyshev's inequality.\footnote{We will be using explicit parentheses here to distinguish between $\bar{\generalSum}_{n}^h$ with $h = 2$, and $(\bar{\generalSum}_n)^2$.}
We have
\begin{align*}
\E [(\bar{\generalSum}_{n})^2]
=
%
\frac{\valueStd^2}{(\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}\layerDimension{\ell - 1})^2}
\sum_{\substack{i'_1, j'_1 \\ i'_2, j'_2}}
\sum_{\substack{h_1, h_2, h_3, h_4 \\ k_1, k_2, k_3, k_4}}
\begin{aligned}[t]
\E \left[
\prod_{t=0}^1
\right.&\left.
\widehat{\ntk}_{
a' i'_{t+1} , b' j'_{t+1}
}^{\ell}
\widetilde{\weightMatSymbol}_{n , k_{2t + 1} i}^{\ell h_{2t + 1} , O}
\widetilde{\weightMatSymbol}_{n , k_{2t + 2} j}^{\ell h_{2t + 2} , O}
\right.
\\
&\left.
\widetilde{\weightMatSymbol}_{n, i'_{t+1} k_{2t + 1}}^{\ell h_{2t + 1}, V}
\val{n, c_2 k_{2t + 2}}{\ell h_{2t + 2}}
\tildeLogit{n, a c_1}{\ellh_{2t + 1}}
\frac{
\partial
\tildeLogit{n, b c_2}{\ellh_{2t + 2}}
}{
\partial
G_{n, b d_2}^{\ellh_{2t + 2}}
}
\sqrt{\layerDimension{\ell - 1}}
\frac{
\partial
G_{n, b d_2}^{\ellh_{2t + 2}}
}{
\partial g_{n, b' j'_{t+1}}^{\ell - 1}
}
\right]
\, ,
\end{aligned}
\end{align*}
where we note that we multiply
$
\frac{
\partial
G_{n, b d_2}^{\ellh_{2t + 2}}
}{
\partial g_{n, b' j'_{t+1}}^{\ell - 1}
}
$
by $\sqrt{\layerDimension{\ell - 1}}$ as this term scales as $(\layerDimension{\ell - 1})^{-1/2}$ (see \Cref{eq:gv_subtask_variable}).
Since $\nabla \zeta$ is bounded by assumption, we can use H{\" o}lder's inequality to bound each of the summands by
\begin{align*}
\poly \biggl(
\max_{\substack{
a', b' \in [\genericDimenstion^s],
i', j' \in \{ 1, 2 \} \\
z, z' \in \{ x, x' \}
}}
\E [
\widehat{\ntk}_{
a' i' , \,
b' j'
%
}^{\ell} (z, z')^4
]
\, ,
\max_{\substack{
c, c' \in [\genericDimenstion^s] \\
z \in \{ x , x' \}
}}
\E [
\tildeLogit{n, c c'}{\ell 1} (z)^8
]
\, ,
\max_{\substack{c \in [\genericDimenstion^s] \\ z \in \{ x, x' \}}}
\E | \indexedActivity{n, \ell - 1}{c 1}{z} |^{16}
\biggr)
\, ,
\end{align*}
which will be bounded by a constant independent of the $i', j', h, k$, and $n$ indices by \Cref{lem:mmnt_propagation}.
By \Cref{lem:sup_ui}, we can thus restrict our attention to the terms that are not $\mathrm{o}((\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \layerDimension{\ell - 1})^2)$, which fall into one of the following three categories:
(i)~$i = j$, $(h_1, k_1) = (h_2 , k_2)$, and $(h_3, k_3) = (h_4, k_4)$;
(ii)~$(h_1, k_1, i'_1) = (h_3 , k_3, i'_2)$, and $(h_2, k_2, j'_1) = (h_4, k_4, j'_2)$;
(iii)~$i = j$, $(h_1, k_1) = (h_3, k_3)$, and $(h_2 , k_2) = (h_4, k_4)$.
Using exchangeability, we thus obtain
\begin{align}\label{eq:gv_square}
\E [(\bar{\generalSum}_{n})^2]
=
&\stdSymbol_{\valueSymbol}^4
\E \left[
\frac{
\langle
g_{n, c_2 \cdot}^{\ell - 1}
,
g_{n, c_2 \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
\left(
\widehat{\ntk}_{
a' 1, b' 2
}^{\ell}
\tildeLogit{n, a c_1}{\ell 1}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\sqrt{\layerDimension{\ell - 1}}
\frac{
\partial
G_{n, b d_2}^{\ell 2}
}{
\partial g_{n, b' 2}^{\ell - 1}
}
\right)^2
\right]
+
\\
&
\delta_{i = j}
2
\valueStd^2
\E \left[
\bar{\generalSum}_{n}^{1}
\bar{\generalSum}_{n}^{2}
\tildeLogit{n, a c_1}{\ell 1}
\tildeLogit{n, a c_1}{\ell 2}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 2}
}{
\partial
G_{n, b d_2}^{\ell 2}
}
\right]
+
%
%
\mathrm{o}((\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} \layerDimension{\ell - 1})^2)
\, ,
\end{align}
where we have used
$
\E [
\widetilde{\weightMatSymbol}_{n, i'_{1} 1}^{\ell 1, V}
\val{n, c_2 1}{\ell 1}
\widetilde{\weightMatSymbol}_{n, i'_2 2}^{\ell 2, V}
\val{n, c_2 2}{\ell 2}
]
=
\frac{\valueStd^2}{\layerDimension{\ell - 1}}
g_{n, c_2 i'_1}^{\ell - 1}
g_{n, c_2 i'_2}^{\ell - 1}
$
and the definition of $\bar{\generalSum}_{n}^h$ from \Cref{eq:gv_subtask_variable}.
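For completeness, the weight identity above follows from independence of the value weights across heads (assuming the parameterization $\weightMatSymbol_{n}^{\ell h , V} = \valueStd (\layerDimension{\ell - 1})^{-1/2} \widetilde{\weightMatSymbol}_{n}^{\ell h , V}$ with i.i.d.\ standardized entries, consistent with the $\valueStd^2 / \layerDimension{\ell - 1}$ prefactor above): conditionally on $g_{n}^{\ell - 1}$, each head contributes a factor
\begin{align*}
\E \left[
\widetilde{\weightMatSymbol}_{n, i'_1 1}^{\ell 1, V}
\val{n, c_2 1}{\ell 1}
\right]
=
\sum_{i}
g_{n, c_2 i}^{\ell - 1}
\E \left[
\widetilde{\weightMatSymbol}_{n, i'_1 1}^{\ell 1, V}
\weightMatSymbol_{n, i 1}^{\ell 1, V}
\right]
=
\frac{\valueStd}{\sqrt{\layerDimension{\ell - 1}}}
g_{n, c_2 i'_1}^{\ell - 1}
\, ,
\end{align*}
and analogously for the $h = 2$ factor, with the expectation taken over the value weights only.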
We prove convergence of both of these expectations to zero separately.
Starting with the second expectation in \Cref{eq:gv_square}, we can use the assumed continuity of $\nabla \zeta$, \Cref{thm:gp_convergence_sqrt}, \Cref{eq:gv_subtask_variable}, the continuous mapping theorem, and \Cref{lem:slutsky} to establish that the integrand converges in distribution to zero.
Because $\nabla \zeta$ is bounded by assumption, we can combine H{\" o}lder's inequality and \Cref{lem:mmnt_propagation} to establish uniform integrability via \Cref{lem:sup_ui} (see the proof of \Cref{lem:gv_subtask_convg} for the bound on $\bar{\generalSum}_{n}^h$), and thus convergence of the expectation to zero by \Cref{thm:mean_convergence}.
For the first expectation in \Cref{eq:gv_square}, note that the absolute value of the expectation can be upper bounded by
\begin{align*}
\E \left[
\left|
\frac{
\langle
g_{n, c_2 \cdot}^{\ell - 1}
,
g_{n, c_2 \cdot}^{\ell - 1}
\rangle
}{
\layerDimension{\ell - 1}
}
\right|
(
\widehat{\ntk}_{
a' 1, b' 2
}^{\ell}
)^2
\left(
\left(
\tildeLogit{n, a c_1}{\ell 1}
\frac{
\partial
\tildeLogit{n, b c_2}{\ell 1}
}{
\partial
G_{n, b d_2}^{\ell 1}
}
\right)^2
+
\layerDimension{\ell - 1}
\left(
\frac{
\partial
G_{n, b d_2}^{\ell 2}
}{
\partial g_{n, b' 2}^{\ell - 1}
}
\right)^2
\right)
\right]
\, ,
\end{align*}
where, when multiplied out, we can use that $\widehat{\ntk}_{a' 1, b' 2}^{\ell} \overset{P}{\longrightarrow} 0$ \citep{yang2019v2}, and an argument analogous to the one above---using \Cref{lem:inner_prod_converge} and the continuous mapping theorem to obtain convergence in probability for the inner product---to establish convergence to zero.
Finally, for the second term, observe
\begin{align*}
\sqrt{\layerDimension{\ell - 1}}
\frac{
\partial
G_{n, b d_2}^{\ell 2}
}{
\partial g_{n, b' 2}^{\ell - 1}
}
=
\frac{1}{(\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{\tau}}
\sum_{u_1 = 1}^{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}
\biggl(
\delta_{d_2 = b'}
\stdSymbol_{\keySymbol}
\query{n, b u_1}{\ellh}
\widetilde{\weightMatSymbol}_{n, j' u_1}^{\ellh , K}
+
\delta_{b' = b}
\stdSymbol_{\querySymbol}
\key{n, d_2 u_1}{\ell h}
\widetilde{\weightMatSymbol}_{n, j' u_1}^{\ellh , Q}
\biggr)
\, ,
\end{align*}
which converges in probability to a constant if $\tau = 1$ by \Cref{lem:wlln_exch} (using \citet{yang2019v2} to establish convergence in distribution of the keys and queries, and \Cref{lem:mmnt_propagation} for the moment bound).
If $\tau = \frac{1}{2}$, the sum will converge in distribution \citep{yang2019v2}, and since the rest of the terms in the expectation converge in probability, their product converges in distribution by \Cref{lem:slutsky}.
One can then again combine H{\" o}lder's inequality, \Cref{lem:mmnt_propagation,lem:sup_ui} and \Cref{thm:mean_convergence} to obtain convergence of the expectation to zero.
Hence $\E [ (\bar{\generalSum}_{n})^2 ] \to 0$, implying $\bar{\generalSum}_{n} \overset{P}{\longrightarrow} 0$ as desired.
\end{proof}
\subsection{Expressivity of $d^{-1}$ and $d^{-1/2}$ induced attention kernels}
\linearNoConv*
\begin{proof}[Proof of \Cref{prop:linear_scale_no_conv}]
Consider $\kernelf{aa}{\text{CNN}}{x}{x} = \sum_{i = 1}^{d_f} \kerntildef{N_a(i) N_a(i)}{}{x}{x} \frac{1}{d_f}$, and the corresponding attention kernel $\kernelf{aa}{\text{ATTN}}{x}{x} = \sum_{i, j=1}^{\genericDimenstion^s} \kerntildef{i j}{}{x}{x} \bar{\softmax}_{ai}^{x} \bar{\softmax}_{aj}^{x}$.
Note that $\kernelf{aa}{\text{CNN}}{x}{x}$ is just a sum of terms on a subset of the diagonal of $\kerntildef{}{}{x}{x}$.
Hence it must be that $\bar{\softmax}_{ai}^{x} = \pm (d_f)^{-1/2}$, since we require that the same set of coefficients $\{ \bar{\softmax}_{ai}^x \colon i \in [\genericDimenstion^s] \}$ works for all kernels $\tilde{\kernel}$ simultaneously, and thus for any $\kerntildef{aa}{}{x}{x}$, including all diagonal matrices with non-negative entries.
Therefore $\bar{\softmax}_{ai}^{x} \bar{\softmax}_{aj}^{x} = \pm (d_f)^{-1}$ for all $i, j$, making signs the only degree of freedom.\footnote{As a side note, this degree of freedom disappears when $\bar{\softmax}$ is a limit of the softmax variables (non-negativity).}
We conclude by noting that we can make $\kernelf{aa}{\text{ATTN}}{x}{x} \neq \kernelf{aa}{\text{CNN}}{x}{x}$ by choosing $\kerntildef{aa}{}{x}{x}$ diagonal except for one pair of off-diagonal entries.
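For concreteness (a worked special case, not needed for the argument), take $d_f = 2$ and $N_a = \{ 1, 2 \}$, so that $\kernelf{aa}{\text{CNN}}{x}{x} = \frac{1}{2} ( \kerntildef{1 1}{}{x}{x} + \kerntildef{2 2}{}{x}{x} )$, whereas for a symmetric $\kerntildef{}{}{x}{x}$, any admissible choice of signs gives
\begin{align*}
\kernelf{aa}{\text{ATTN}}{x}{x}
=
\frac{1}{2}
\left(
\kerntildef{1 1}{}{x}{x}
+
\kerntildef{2 2}{}{x}{x}
\right)
\pm
\kerntildef{1 2}{}{x}{x}
\, ,
\end{align*}
so the two kernels differ whenever $\kerntildef{1 2}{}{x}{x} \neq 0$.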
\end{proof}
\sqrtConvRecover*
\begin{proof}[Proof of \Cref{prop:sqrt_scale_conv_recover}]
We provide a simple construction here, and expand on more realistic ones after the proof.
Consider $\Omega = [0, 1)$ with the usual Borel $\sigma$-algebra $\mathcal{B}$ and the Lebesgue measure $\lambda$.
Let $\xbar{\mathbb{R}} = \R{} \cup \{-\infty, \infty \}$ be the extended real axis and $\xbar{\mathcal{B}}$ be the $\sigma$-algebra generated by the interval topology on $\xbar{\mathbb{R}}$.
Now construct the random variables $G_{ai} \colon \Omega \to \xbar{\mathbb{R}}$ such that $G_{ai} = - \infty$ a.s.\ if $i \notin N_a$, and $G_{ai} = \infty \cdot \indicator{A_{ai}}$ a.s.\ otherwise, where $A_{a i} \coloneqq \left[\frac{i_{(a)} - 1}{d_f}, \frac{i_{(a)}}{d_f}\right)$, with $i_{(a)}$ being the position of $i$ in the ordered set $N_a$, and $\infty \cdot 0$ is to be interpreted as $0$.
\end{proof}
For a more realistic construction, consider the usual $G(x) = d^{-1/2} Q(x) K(x)^\top$, but now additionally multiply each row of $Q(x)$ by a corresponding scalar random variable $c_a^Q \colon \Omega \to \xbar{\mathbb{R}}$, and similarly each row of $K(x)$ by $c_a^K \colon \Omega \to \xbar{\mathbb{R}}$.
Then $G_{ab}(x) = d^{-1/2} c_a^Q c_b^K \langle Q_{a\cdot}(x) , K_{b\cdot}(x) \rangle$ and thus one can achieve the desired result by setting up the joint distribution of $\{ c_1^{Q} , \ldots , c_{\genericDimenstion^s}^{Q}, c_{1}^{K}, \ldots , c_{\genericDimenstion^s}^{K} \}$ in analogy to that in the above proof.
\subsection{Auxiliary results}
\begin{lemma}[Billingsley, 1986, p. 19]\label{lem:fin_dim_marg}
Let $X, (X_n)_{n \geq 1}$ be random variables taking values in $(\R{\mathbb{N}}, \mathcal{B}^{\mathbb{N}})$, with $\mathcal{B}^{\mathbb{N}}$ the~usual Borel $\sigma$-algebra.
Then $X_n \rightsquigarrow X$ if and only if for each finite $J \subset \mathbb{N}$ and the~corresponding projection $\Gamma^J \colon \R{\mathbb{N}} \to \R{J}$, $\Gamma^J (X_n) \rightsquigarrow \Gamma^J (X)$ as $n \to \infty$.
\end{lemma}
\begin{lemma}[Billingsley, 1986, p. 31]\label{lem:sup_ui}
A sequence of real valued random variables $(X_n)_{n \geq 1}$ is uniformly integrable if for some $\varepsilon > 0$
$$\sup_{n} \E | X_n |^{1 + \varepsilon} < \infty \, .$$
\end{lemma}
\begin{theorem}[Billingsley, 1986, theorem 3.5]\label{thm:mean_convergence}
If $(X_n)_{n \geq 1}$ are uniformly integrable and $X_{n} \rightsquigarrow X$, then $X$ is integrable and
$$\E [X_{n}] \to \E [X] \, .$$
\end{theorem}
\begin{lemma}[Slutsky's lemmas]\label{lem:slutsky}
Let $X, (X_n)_{n \geq 1}$ and $(Y_n)_{n \geq 1}$ be real valued random variables defined on the same probability space, and assume $X_n \rightsquigarrow X$ and $Y_n \overset{P}{\longrightarrow} c$ for some $c \in \R{}$.
Then
%
\begin{align}
&X_n Y_n \rightsquigarrow c X
\, ,
%
&X_n + Y_n \rightsquigarrow X + c
\, .
\end{align}
\end{lemma}
\begin{lemma}[Weak LLN for exchangeable triangular arrays]\label{lem:wlln_exch}
Let
$X_n \coloneqq \{ X_{n, i} \colon i = 1, 2, \ldots \}$
be an infinitely exchangeable sequence of random variables on $\R{\mathbb{N}}$
s.t.\ $\limsup_{n \to \infty} \E |X_{n, 1}|^{2 + \varepsilon} < \infty$ for some $\varepsilon > 0$, and
define
$
%
\bar{\generalSum}_{n}
\coloneqq
\frac{1}{d_n}
\sum_{i=1}^{d_n}
X_{n, i}
\, ,
%
$
for some sequence $(d_n)_{n \geq 1}$ s.t.\ $\lim_{n \to \infty} d_n = \infty$.
Assuming all $X_{n}$ are defined on the same space, if $X_{n}$ converges in distribution to some infinitely exchangeable $X_{*} = \{ X_{*, i} \colon i = 1, 2, \ldots \}$ s.t.\ $\E [X_{*, 1} X_{* , 2}] = (\E [ X_{*, 1} ] )^2$,
then as $n \to \infty$,
$\E [\bar{\generalSum}_{n} ] \to \E [X_{*, 1}]$,
$\E [\bar{\generalSum}_{n}^2 ] \to (\E [X_{*, 1}] )^2$, and
$$\bar{\generalSum}_{n} \overset{P}{\longrightarrow} \E [X_{*, 1}] \, .$$
\end{lemma}
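To see why the de-correlation condition $\E [X_{*, 1} X_{*, 2}] = (\E [ X_{*, 1} ] )^2$ cannot be dropped, consider the following simple example: let $X_{n, i} = Z + \varepsilon_i$ for all $n$, with $Z, \varepsilon_1, \varepsilon_2, \ldots$ i.i.d.\ $\mathcal{N}(0, 1)$. The sequence is infinitely exchangeable with uniformly bounded moments, yet
$$
\E [X_{*, 1} X_{*, 2}] = \E [Z^2] = 1 \neq 0 = (\E [X_{*, 1}])^2
\, ,
$$
and indeed $\bar{\generalSum}_{n} \overset{P}{\longrightarrow} Z$, a non-degenerate limit.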
\begin{proof}[Proof of \Cref{lem:wlln_exch}]
By exchangeability $\E [\bar{\generalSum}_{n}] = \E [X_{n, 1}]$, and thus by \Cref{lem:sup_ui} and \Cref{thm:mean_convergence}, $\E [\bar{\generalSum}_{n}] \to \E [ X_{*, 1}]$.
Similarly,
\begin{align*}
\E[ \bar{\generalSum}_n^2 ]
=
\frac{1}{d_{n}}
\E [X_{n, 1}^2]
+
\frac{d_{n} (d_{n} - 1)}{d_{n}^2}
\E [ X_{n, 1} X_{n, 2}]
\, ,
\end{align*}
and thus by the continuous mapping theorem and again by \Cref{lem:sup_ui} and \Cref{thm:mean_convergence}, $\E[ \bar{\generalSum}_n^2 ] \to (\E[ X_{*,1} ] )^2$ as $n \to \infty$.
Finally, the convergence in probability follows by Chebyshev's inequality
\begin{equation*}
\text{Pr} \left\{
\vertbars{\bar{\generalSum}_{n} - \E \bar{\generalSum}_{n} }
\geq
\delta
\right\}
\leq
\frac{
\E[ \bar{\generalSum}_n^2 ]
-
(\E [\bar{\generalSum}_n])^2
}{
\delta^2
}
\, .
\qedhere
\end{equation*}
\end{proof}
\begin{lemma}[Moment propagation]\label{lem:mmnt_propagation}
Under the assumptions of \Cref{thm:gp_convergence_sqrt}, for any $x, x' \in \mathcal{X}$, $\ell \in [L + 1]$, and $t \geq 1$
\begin{align*}
\sup_{\substack{c \in [\genericDimenstion^s] \\ i \in \mathbb{N}}}
&\sup_{n}
\E |
\indexedActivity{\ell - 1}{n, c i}{x}
|^t
<
\infty
\, ,
\\
\sup_{\substack{c \in [\genericDimenstion^s] \\ i \in \mathbb{N}}}
&\sup_{n}
\E |
\indexedActivation{\ell}{n, c i}{x}
|^t
<
\infty
\, ,
\\
\sup_{\substack{c \in [\genericDimenstion^s] \\ h, i \in \mathbb{N}}}
&\sup_{n}
\E |
\indexedActivation{\ellh}{n, c i}{x}
|^{t}
<
\infty
\, ,
\\
\sup_{\substack{
c, c' \in [\genericDimenstion^s] \\
h \in \mathbb{N}
}}
&\sup_{n}
\E |
\tildeLogit{n, c c'}{\ell h} (x)
|^t
<
\infty
\, ,
\\
\sup_{\substack{
a, b \in [\genericDimenstion^s] \\
i, j \in \mathbb{N}
}}
&\sup_{n}
\E |
\widehat{\ntk}_{a i , \, b j}^{\ell} ( x, x' )
|^t
<
\infty
\, .
\end{align*}
\end{lemma}
\begin{proof}[Proof of \Cref{lem:mmnt_propagation}]
Beginning with
$
\E |
\indexedActivity{\ell - 1}{n, c i}{x}
|^t
$
and
$
\E |
\indexedActivation{\ell}{n, c i}{x}
|^t
$,
we see that this condition holds if none of the layers $1, \ldots, \ell - 1$ uses attention by the assumed polynomial boundedness of $\phi$, as a corollary of \citet[lemma 20]{matthews2018gaussian} for dense, and \citet[pages 14 and 15]{garriga2019deep} for convolutional networks.\footnote{Note that the bound on $\E | \indexedActivity{0}{n, c i}{x} |^t = |x_{c i}|^t$ is trivial, which then leads to a bound on $\E |\indexedActivation{\ell}{n, c i}{x} |^t$ as the individual columns will be i.i.d.\ Gaussian for any $n$.}
To extend to the case where one or more of the preceding layers include attention, we see that it is sufficient to focus on the bound for $f^{\ell}$ in the first attention layer (i.e., the one with the lowest $\ell$ among the attention layers), as the bound for the following $g^{\ell}$ can be obtained from the assumed polynomial boundedness of $\phi$ and exchangeability, and the bound for the following layers by a simple inductive argument.
Thus, focusing on
$
\E |
\indexedActivation{\ell}{n, c i}{x}
|^t
=
\E |
\indexedActivation{\ell}{n, c 1}{x}
|^t
$ (exchangeability),
we see that proving the bound for an arbitrary fixed $c \in [\genericDimenstion^s]$ will be sufficient as $\genericDimenstion^s$ is finite.
Substituting
\begin{align*}
\E |
\indexedActivation{\ell}{n, c 1}{x}
|^t
=
\E \left\{
\E \left[
\Biggl|
%
\sum_{h=1}^{\headSymbol_{\sequenceVariable}^{\depthSymbol}}
\sum_{k=1}^{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\indexedActivation{\ellh}{n, c k}{x}
\weightMatSymbol_{n, k 1}^{\ell h , O}
\Biggr|^t
\, \,
\Bigg\vert
\, \,
\indexedActivation{\ell 1}{n, c \cdot }{x}
\, , \ldots \, ,
\indexedActivation{\ell\headSymbol_{\sequenceVariable}^{\depthSymbol}}{n, c \cdot }{x}
\right]
\right\}
\lesssim
\E \, \Biggl|
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{h , k}
\indexedActivation{\ellh}{n, c k }{x}^2
\Biggr|^{\frac{t}{2}}
\, ,
\end{align*}
where we have used that if $\varepsilon \sim \mathcal{N}( 0, I)$ and $v \in \R{d}$ is a fixed vector, then $\langle v , \varepsilon \rangle$ is in distribution equal to $\| v \|_2 \varepsilon'$ where $\varepsilon' \sim \mathcal{N}(0, 1)$ by standard Gaussian identities, and the fact that $\E | \varepsilon' |^t < \infty$.
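Spelling this step out: conditionally on the heads, the inner sum is a centered Gaussian (assuming the entries of $\weightMatSymbol_{n}^{\ell h , O}$ are i.i.d.\ $\mathcal{N} ( 0 , \outStd^2 / (\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}) )$, which matches the normalization above), whence
\begin{align*}
\E \left[
\Biggl|
\sum_{h, k}
\indexedActivation{\ellh}{n, c k}{x}
\weightMatSymbol_{n, k 1}^{\ell h , O}
\Biggr|^t
\, \Bigg\vert \,
\indexedActivation{\ell 1}{n, c \cdot}{x}
\, , \ldots \, ,
\indexedActivation{\ell \headSymbol_{\sequenceVariable}^{\depthSymbol}}{n, c \cdot}{x}
\right]
=
\Biggl(
\frac{\outStd^2}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{h , k}
\indexedActivation{\ellh}{n, c k}{x}^2
\Biggr)^{\frac{t}{2}}
\E | \varepsilon' |^{t}
\, .
\end{align*}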
Using H{\" o}lder's inequality if necessary, we can assume $t$ is even, and with that multiply out the r.h.s.\ above, leading to
\begin{align*}
\E \, \Biggl|
\frac{1}{\headSymbol_{\sequenceVariable}^{\depthSymbol} \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}}
\sum_{h , k}
\indexedActivation{\ellh}{n, c k }{x}^2
\Biggr|^{\frac{t}{2}}
\lesssim
\E |
\indexedActivation{\ell 1}{n, c 1 }{x}
|^{t}
\, ,
\end{align*}
by exchangeability.
It will thus be sufficient to establish
$
\sup_{c \in [\genericDimenstion^s], h, i \in \mathbb{N}}
\sup_{n}
\E |
\indexedActivation{\ellh}{n, c i}{x}
|^{t}
<
\infty
$
for any $t \geq 1$.
Observe
\begin{align*}
\E |
\indexedActivation{\ellh}{n, c i}{x}
|^{t}
=
\E \left\{
\E \left[
\Biggl|
\sum_{j = 1}^{\layerDimension{\ell - 1}}
\tildeLogitN{n, c \cdot}{\ellh}{x}
\indexedActivity{\ell - 1}{n, \cdot j}{x}
\weightV{\cdot i}
\Biggr|^{t}
\, \,
\Bigg\vert
\, \,
\tildeLogit{n}{\ellh} (x) \, ,
\indexedActivity{\ell - 1}{n}{x}
\right]
\right\}
\lesssim
\E \Biggl|
\frac{\valueStd^2}{\layerDimension{\ell - 1}}
\sum_{j}
\left(
\tildeLogitN{n, c \cdot}{\ellh}{x}
\indexedActivity{\ell - 1}{n, \cdot j}{x}
\right)^2
\Biggr|^{\frac{t}{2}}
\, ,
\end{align*}
meaning we can combine an argument analogous to the one above with H{\" o}lder's inequality and exchangeability to obtain
\begin{align*}
\E \Biggl|
\frac{1}{\layerDimension{\ell - 1}}
\sum_{j}
\left(
\tildeLogitN{n, c \cdot}{\ellh}{x}
\indexedActivity{\ell - 1}{n, \cdot j}{x}
\right)^2
\Biggr|^{\frac{t}{2}}
\lesssim
\poly \biggl(
\max_{c' \in [\genericDimenstion^s]}
\E |
\tildeLogitN{n, c c'}{\ellh}{x}
|^{2t}
\, ,
\max_{c' \in [\genericDimenstion^s]}
\sup_{j \in \mathbb{N}}
\E |
\indexedActivity{\ell - 1}{n, c' j}{x}
|^{2t}
\biggr)
\, .
\end{align*}
As shown at the beginning of this proof, we can bound
$
\E |
\indexedActivity{\ell - 1}{n, c' j}{x}
|^{4t}
$
by a constant independent of $c', j$ and $n$, and thus it only remains to show that
$
\max_{c, c' \in [\genericDimenstion^s]}
\sup_{\substack{
h \in \mathbb{N}
}}
\sup_{n}
\E |
\tildeLogit{n, c c'}{\ell h} (x)
|^t
<
\infty
$
in order to bound $\E | \indexedActivation{\ell}{n, c 1}{x} |^t$.
Using the assumed entrywise polynomial boundedness of $\zeta$, we can see it will be sufficient to establish
$
\max_{c, c' \in [\genericDimenstion^s]}
\sup_{\substack{
h \in \mathbb{N}
}}
\sup_{n}
\E |
G_{n, c c'}^{\ell h} (x)
|^t
<
\infty
$.
We do this separately for $\tau = 1$ and $\tau = \frac{1}{2}$.
Starting with the former, we can again replicate the argument from above, yielding
\begin{align*}
\E |
G_{n, c c'}^{\ell h} (x)
|^t
=
\E \Biggl|
\frac{1}{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}
\sum_{k = 1}^{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}
\query{n , c k}{\ellh} (x)
\key{n, c' k}{\ellh} (x)
\Biggr|^{t}
\lesssim
\E |
\query{n , c 1}{\ell 1} (x)
|^{2t}
\lesssim
\E |
\indexedActivity{\ell - 1}{n, c 1}{x}
|^{4t}
\, ,
\end{align*}
by exchangeability, H{\" o}lder's inequality, and the assumed $\weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \querySymbol} = \weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \keySymbol}$ a.s.\ under $\tau = 1$ (\Cref{sect:linear_scaling_limit}).
For the $\tau = \frac{1}{2}$ case, we start by assuming w.l.o.g.\ that $t \in \mathbb{N}$ is even:
\begin{align*}
\E |
G_{n, c c'}^{\ell h} (x)
|^{t}
=
\E \Biggl|
\frac{1}{\sqrt{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}}
\sum_{k = 1}^{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}
\query{n , c k}{\ellh} (x)
\key{n, c' k}{\ellh} (x)
\Biggr|^{t}
=
\left(\frac{1}{\sqrt{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}}\right)^t
\sum_{k_1, \ldots, k_t}
\E \left[
\prod_{s=1}^{t}
\query{n , c k_s}{\ellh} (x)
\key{n, c' k_s}{\ellh} (x)
\right]
\, ,
\end{align*}
and noting that because $\weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \querySymbol}$ and $\weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \keySymbol}$ are assumed independent under $\tau = \frac{1}{2}$, there will be at most $\mathcal{O} ( (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol})^{t / 2} )$ terms with non-zero expectation, meaning that we can again apply exchangeability and the distributional equivalence between $\query{n}{\ellh}(x)$ and $\key{n}{\ellh}(x)$ to obtain
\begin{equation*}
\E |
G_{n, c c'}^{\ell h} (x)
|^{t}
\lesssim
\E |
\query{n , c 1}{\ell 1} (x)
|^{2t}
\lesssim
\E |
\indexedActivity{\ell - 1}{n, c 1}{x}
|^{4t}
\, .
\end{equation*}
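For instance, for $t = 2$ (a worked special case), conditioning on $g_{n}^{\ell - 1}$ and integrating over the independent key and query weights gives
\begin{align*}
\E |
G_{n, c c'}^{\ell h} (x)
|^{2}
=
\frac{1}{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}
\sum_{k_1 , k_2}
\E \left[
\query{n , c k_1}{\ellh} (x)
\key{n, c' k_1}{\ellh} (x)
\query{n , c k_2}{\ellh} (x)
\key{n, c' k_2}{\ellh} (x)
\right]
\, ,
\end{align*}
where only the $\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}$ diagonal terms with $k_1 = k_2$ survive, since $\E [ \query{n, c k_1}{\ellh}(x) \, \query{n, c k_2}{\ellh}(x) \mid g_{n}^{\ell - 1} ] \propto \delta_{k_1 k_2}$ for i.i.d.\ weight columns.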
The uniform bound
$
\max_{\substack{
a, b \in [\genericDimenstion^s]
}}
\sup_{i, j \in \mathbb{N}}
\sup_{n}
\E |
\widehat{\ntk}_{a i , \, b j}^{\ell} ( x, x' )
|^t
<
\infty
$
for non-attention layers can be obtained by combining exchangeability between the two groups of $\widehat{\ntk}_{a i , \, b j}^{\ell}$ variables with $i \neq j$ (resp.\ $i = j$) indices, and the results of \citet{yang2019v1}, which show that expectations of polynomially bounded functions converge (this is essentially due to the assumed polynomial boundedness of $\phi$ and $\zeta$ and their (weak) derivatives, the fact that the pre-nonlinearities in the first layer are Gaussian for all of the considered architectures by linearity of Gaussian variables, and the standard combination of \Cref{lem:sup_ui} and \Cref{thm:mean_convergence}---see the proofs of theorems 4.3 and 5.1 in \citet{yang2019v1}).
%
This can then be used to prove the bound for the first attention layer by inspecting the proofs in \Cref{sect:ntk_proofs} and noting that
$
\sup_{n}
\E |
\widehat{\ntk}_{a i , \, b j}^{\ell} ( x, x' )
|^t
$
can be always bounded by a polynomial in suprema over quantities from previous layers that we already know are uniformly bounded.
The proof for subsequent attention layers can thus proceed by induction.
\end{proof}
\begin{lemma}[Convergence of inner products]\label{lem:inner_prod_converge}
Under the assumptions of \Cref{thm:gp_convergence_sqrt}, the following holds for any $a, b \in [\genericDimenstion^s]$, $x, x' \in \mathcal{X}$, $\ell \in [L + 1]$, and $h \in [\headSymbol_{\sequenceVariable}^{\depthSymbol}]$
\begin{align}
&
\E \left[
\frac{
\langle \indexedActivity{\ell - 1}{n, a \cdot}{x} , \indexedActivity{\ell - 1}{n, b \cdot}{x'} \rangle
}{
\layerDimension{\ell - 1}
}
\right]
\overset{n \to \infty}{\to}
\kerntildef{a b}{\ell}{x}{x'}
\, ,
&&
\frac{
\langle \indexedActivity{\ell - 1}{n, a \cdot}{x} , \indexedActivity{\ell - 1}{n, b \cdot}{x'} \rangle
}{
\layerDimension{\ell - 1}
}
\overset{P}{\longrightarrow}
\kerntildef{a b}{\ell}{x}{x'}
\, ,
\\
&
\E \left[
\frac{
\langle \query{n, a \cdot}{\ell h}(x) , \query{n, b \cdot}{\ell h}(x') \rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}
}
\right]
\overset{n \to \infty}{\to}
\queryStd^2
\kerntildef{a b}{\ell}{x}{x'}
\, ,
&&
\frac{
\langle \query{n, a \cdot}{\ell h}(x) , \query{n, b \cdot}{\ell h}(x') \rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}
}
\overset{P}{\longrightarrow}
\queryStd^2
\kerntildef{a b}{\ell}{x}{x'}
\, ,
\\
&
\E \left[
\frac{
\langle \key{n, a \cdot}{\ell h}(x) , \key{n, b \cdot}{\ell h}(x') \rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}
}
\right]
\overset{n \to \infty}{\to}
\keyStd^2
\kerntildef{a b}{\ell}{x}{x'}
\, ,
&&
\frac{
\langle \key{n, a \cdot}{\ell h}(x) , \key{n, b \cdot}{\ell h}(x') \rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}
}
\overset{P}{\longrightarrow}
\keyStd^2
\kerntildef{a b}{\ell}{x}{x'}
\, ,
\\
&
\E \left[
\frac{
\langle \query{n, a \cdot}{\ell h}(x) , \key{n, b \cdot}{\ell h}(x') \rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}
}
\right]
\overset{n \to \infty}{\to}
\delta_{\tau = 1}
\stdSymbol_{\querySymbol \keySymbol}
\kerntildef{a b}{\ell}{x}{x'}
\, ,
&&
\frac{
\langle \query{n, a \cdot}{\ell h}(x) , \key{n, b \cdot}{\ell h}(x') \rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}
}
\overset{P}{\longrightarrow}
\delta_{\tau = 1}
\stdSymbol_{\querySymbol \keySymbol}
\kerntildef{a b}{\ell}{x}{x'}
\, .
\end{align}
\end{lemma}
\begin{proof}[Proof of \Cref{lem:inner_prod_converge}]
Notice that all the statements involving $Q^{\ellh}$ or $K^{\ellh}$ are of the form
\begin{align*}
\frac{
\indexedActivity{\ell - 1}{n, a \cdot}{x}
W_{n}^{\ellh}
(W_{n}^{\ellh})^\top
\indexedActivity{\ell - 1}{n, b \cdot}{x'}^\top
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}
}
\, ,
\end{align*}
i.e., with the same weight matrix multiplying the layer inputs $g_{n}^{\ell - 1}(x)$ (recall that under $\tau = 1$, we assumed $\weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \querySymbol} = \weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \keySymbol}$ a.s.).
Taking the expectation of the above term, we obtain (up to a $\stdSymbol_{\querySymbol}$ or $\stdSymbol_{\keySymbol}$ factor) a term proportional to
\begin{align*}
\E \left[
\frac{
\langle \indexedActivity{\ell - 1}{n, a \cdot}{x} , \indexedActivity{\ell - 1}{n, b \cdot}{x'} \rangle
}{
\layerDimension{\ell - 1}
}
\right]
=
\E \left[
\indexedActivity{\ell - 1}{n, a 1}{x} \, \indexedActivity{\ell - 1}{n, b 1}{x'}
\right]
\, ,
\end{align*}
by the assumed exchangeability of $g_{n}^{\ell - 1}$.
Since the integrand converges in distribution by the assumed continuity of $\phi$ and the continuous mapping theorem \citep[theorem~9.3.7]{dudley02}, we can combine \Cref{lem:mmnt_propagation} with \Cref{lem:sup_ui} to obtain uniform integrability and thus
$
\E [
\indexedActivity{\ell - 1}{n, a 1}{x} \, \indexedActivity{\ell - 1}{n, b 1}{x'}
]
\to
\kerntildef{a b}{\ell}{x}{x'}
$
by \Cref{thm:mean_convergence}, proving the convergence of expectations.
To obtain the convergence in probability, it is sufficient to show that
\begin{align*}
\E \Biggl|
%
%
\frac{
\indexedActivity{\ell - 1}{n, a \cdot}{x}
W_{n}^{\ellh}
(W_{n}^{\ellh})^\top
\indexedActivity{\ell - 1}{n, b \cdot}{x'}^\top
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}
}
%
%
\Biggr|^2
=
\sum_{\substack{i_1, j_1 \\ i_2, j_2}}^{\layerDimension{\ell - 1}}
\E \left[
\indexedActivity{\ell - 1}{n, a i_1}{x}
\indexedActivity{\ell - 1}{n, b j_1}{x}
\indexedActivity{\ell - 1}{n, a i_2}{x}
\indexedActivity{\ell - 1}{n, b j_2}{x}
W_{n, i_1 \cdot}^{\ellh}
(W_{n, j_1 \cdot}^{\ellh})^\top
W_{n, i_2 \cdot}^{\ellh}
(W_{n, j_2 \cdot}^{\ellh})^\top
\right]
\, ,
\end{align*}
converges to the square of the mean, since by Chebyshev's inequality $\text{Pr}(|X_n - \E X_n| \geq \delta) \leq \delta^{-2}(\E [X_n^2] - \{ \E [X_n] \}^2)$.
Since we can bound each of the summands using H{\" o}lder's inequality combined with \Cref{lem:mmnt_propagation}, the limit of the above expectation will, up to a constant, coincide with that of
\begin{align*}
\frac{1}{(\layerDimension{\ell - 1})^2}
\sum_{i , j}^{\layerDimension{\ell - 1}}
\E \left[
\indexedActivity{\ell - 1}{n, a i}{x}
\indexedActivity{\ell - 1}{n, b i}{x}
\indexedActivity{\ell - 1}{n, a j}{x}
\indexedActivity{\ell - 1}{n, b j}{x}
\right]
=
%
\E \left[
\indexedActivity{\ell - 1}{n, a 1}{x}
\indexedActivity{\ell - 1}{n, b 1}{x}
\indexedActivity{\ell - 1}{n, a 2}{x}
\indexedActivity{\ell - 1}{n, b 2}{x}
\right]
+
\mathrm{o}((\layerDimension{\ell-1})^2)
\, ,
\end{align*}
where the equality is by the assumed exchangeability.
Since the individual columns of $g_{n}^{\ell - 1}$ are asymptotically independent by assumption, we can use an argument analogous to the one we made for the
$
\E [
\indexedActivity{\ell - 1}{n, a 1}{x}
\indexedActivity{\ell - 1}{n, b 1}{x}
]
$
above to obtain the $(\kerntildef{a b}{\ell}{x}{x'})^2$ limit.
Noting that the l.h.s.\ above is equal to
$
\E \bigl[
\bigl(
%
\langle \indexedActivity{\ell - 1}{n, a \cdot}{x} , \indexedActivity{\ell - 1}{n, b \cdot}{x'} \rangle
%
/
\layerDimension{\ell - 1}
%
\bigr)^2
\bigr]
$
concludes the proof.
\end{proof}
\section{Positional encodings}\label{sect:positional_encodings_appendix}
As in the proofs for attention without positional encodings, we assume the `infinite width, finite fan-out' construction of the~sequence of NNs.
In particular, we will assume that for any $n \in \mathbb{N}$, there is a countably infinite set of random variables $\{ \posEmb{n, \cdot i}{\ell} \colon i \in \mathbb{N} \} = \{ \posEmb{\cdot i}{\ell} \colon i \in \mathbb{N} \}$, where $\posEmb{\cdot i}{\ell} \sim \mathcal{N} ( 0 , \covPosEmb{}{} )$ i.i.d.\ over the $i$ index, but only a finite number $\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}} \in \mathbb{N}$ is \texttt{add}-ed,
$$
\tildeActivity{\ell - 1}{n}{x} = \sqrt{\alpha}
g_{n}^{\ell - 1}(x) + \sqrt{1 - \alpha}
\posEmb{n}{\ell}
\, ,
$$
or \texttt{append}-ed
$$
\tildeActivity{\ell - 1}{n}{x} = [ g_{n}^{\ell - 1}(x) , \posEmb{n}{\ell}]
\, ,
$$
to each of the layer inputs $\indexedActivity{\ell - 1}{n}{x}$.
In the \texttt{append} case, we further assume $\alpha = \lim_{n \to \infty} \layerDimension{\ell - 1} / (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}} + \layerDimension{\ell - 1})$.
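For concreteness, the two constructions can be sketched in NumPy as follows (all dimensions are made up purely for illustration, and the function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_encoding(g, pos, alpha):
    # `add`: interpolate activations and encodings (widths must match)
    return np.sqrt(alpha) * g + np.sqrt(1.0 - alpha) * pos

def append_encoding(g, pos):
    # `append`: concatenate encodings along the embedding axis
    return np.concatenate([g, pos], axis=-1)

d_s, n_prev, m = 4, 16, 8                 # spatial / layer / encoding widths (illustrative)
g = rng.standard_normal((d_s, n_prev))
pos_add = rng.standard_normal((d_s, n_prev))
pos_app = rng.standard_normal((d_s, m))

alpha = n_prev / (n_prev + m)             # the ratio assumed in the `append` case
h_add = add_encoding(g, pos_add, alpha)
h_app = append_encoding(g, pos_app)
assert h_add.shape == (d_s, n_prev)
assert h_app.shape == (d_s, n_prev + m)
```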
\subsection{NNGP limit}
Note that all \Cref{thm:gp_convergence_sqrt} relies on is that the layer's inputs $\{ \indexedActivity{\ell - 1}{n}{x} \colon x \in \mathcal{X} \}$ converge in distribution to some $\indexedActivity{\ell - 1}{}{x}$ with mutually independent columns, and on the fact that the elementwise absolute moments of $\indexedActivity{\ell - 1}{n, a i}{x}$ are bounded uniformly in $a, i$ and $n$.
Let us thus replace $g_{n}^{\ell - 1}(x)$ by $\tildeActivity{\ell - 1}{n}{x}$, and see whether the proofs still apply.
\textbf{Exchangeability:}
The proofs of exchangeability in \Cref{lem:exchangeability,lem:head_exchangeability,lem:logit_exchangeability} are all based on conditioning on $\indexedActivity{\ell - 1}{n}{x}$ for some fixed finite subset of the inputs $x$, and then showing that the random variables are conditionally i.i.d.\ for any given $n \in \mathbb{N}$.
If positional encodings are used, the variables will be again i.i.d.\ if we add $\posEmb{n}{\ell}$ into the conditioning set.
\textbf{Convergence in distribution:}
To establish that $\{ \tildeActivity{\ell - 1}{n}{x} \colon x \in \mathcal{X} \}$ converges in distribution in the \texttt{add} case, we can use a simple argument based on the Cram{\' e}r-Wold device and pointwise convergence of the characteristic function, which implies convergence in distribution by L{\' e}vy's continuity theorem (using the fact that $\posEmb{n, \cdot i}{\ell}$ are assumed to be i.i.d.\ $\mathcal{N}(0, \covPosEmb{}{})$ and the distribution of a particular $\posEmb{n, \cdot i}{\ell}$ does not change with $n$).
An alternative approach has to be taken for the \texttt{append} case where the weak limit of the layer input's distribution may not be well defined; closer inspection of \Cref{sect:nngp_proofs} reveals that all the proofs depend on the convergence of the layer inputs only through \Cref{lem:inner_prod_converge,lem:mmnt_propagation}, which we discuss next.
\textbf{Convergence of inner products and boundedness of moments:}
The proof of each statement of \Cref{lem:mmnt_propagation} relies on $\{ \indexedActivity{\ell - 1}{n}{x} \colon x \in \mathcal{X} \}$ only through the bound
$
\max_{c \in [\genericDimenstion^s]}
\sup_{\substack{i \in \mathbb{N}}}
\sup_{n}
\E |
\indexedActivity{\ell - 1}{n, c i}{x}
|^t
<
\infty
$
which is essentially established using the assumed polynomial bound on $| \phi |$ and the Gaussianity of the weights at initialisation.
All we need to extend \Cref{lem:mmnt_propagation} to the case where positional encodings are used is to establish
$
\max_{c \in [\genericDimenstion^s]}
\sup_{\substack{i \in \mathbb{N}}}
\sup_{n}
\E |
\tildeActivity{\ell - 1}{n, c i}{x}
|^t
<
\infty
$.
This can be done by observing
$
\E |
\tildeActivity{\ell - 1}{n, c i}{x}
|^t
\leq
2^{t + 1}
\max \{
\E |
\indexedActivity{\ell - 1}{n, c i}{x}
|^t
,
\E |
\posEmb{n, c 1}{\ell}
|^t
\}
<
\infty
$,
where the bound is immediate in the \texttt{append} case (each coordinate of $\tildeActivity{\ell - 1}{n}{x}$ is either an activation or a positional encoding), and follows from the triangle inequality in the \texttt{add} case; the right-hand side is finite by the already established bound on the activation moments and the assumption $\posEmb{n, \cdot i}{\ell} \sim \mathcal{N}(0, \covPosEmb{}{})$ i.i.d.\ over the $i$ index for any $n \in \mathbb{N}$.
Similarly, the proof of \Cref{lem:inner_prod_converge} can be modified by observing that
\begin{align*}
\E \left[
\frac{
\langle \tildeActivity{\ell - 1}{n, a \cdot}{x} , \tildeActivity{\ell - 1}{n, b \cdot}{x'} \rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}} + \layerDimension{\ell - 1}
}
\right]
=
\frac{\layerDimension{\ell - 1}}{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}} + \layerDimension{\ell - 1}}
\E \left[
\indexedActivity{\ell - 1}{n, a 1}{x}
\indexedActivity{\ell - 1}{n, b 1}{x'}
\right]
+
\frac{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}}}{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}} + \layerDimension{\ell - 1}}
\underbrace{
\E \left[
\posEmb{n, a 1}{\ell}
\posEmb{n, b 1}{\ell}
\right]
}_{= \covPosEmb{a b}{}}
\, ,
\end{align*}
in the \texttt{append} case by the independence of $\posEmb{n}{\ell}$ and exchangeability.
Using the Gaussianity of positional encodings and $\alpha = \lim_{n \to \infty} \layerDimension{\ell - 1} / (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}} + \layerDimension{\ell - 1})$, an analogous argument to that made in \Cref{lem:inner_prod_converge} can be used to establish convergence of the r.h.s.\ to $\mathcal{I}\circ\kerntildef{a b}{\ell}{x}{x'} = \alpha \kerntildef{a b}{\ell}{x}{x'} + (1 - \alpha) \covPosEmb{a b}{}$ in both probability and expectation.
For the \texttt{add} case,
\begin{align*}
\E \left[
\frac{
\langle \tildeActivity{\ell - 1}{n, a \cdot}{x} , \tildeActivity{\ell - 1}{n, b \cdot}{x'} \rangle
}{
\layerDimension{\ell - 1}
}
\right]
=
\alpha
\E \left[
\indexedActivity{\ell - 1}{n, a 1}{x}
\indexedActivity{\ell - 1}{n, b 1}{x'}
\right]
+
(1 - \alpha)
\underbrace{
\E \left[
\posEmb{n, a 1}{\ell}
\posEmb{n, b 1}{\ell}
\right]
}_{= \covPosEmb{a b}{}}
\, ,
\end{align*}
again by the independence of $\posEmb{n}{\ell}$ and exchangeability, and thus a similar argument to the one above applies.
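The displayed identity is easy to verify numerically. The following sketch (a toy setup of our own, with scalar per-coordinate covariances standing in for $\kerntildef{a b}{\ell}{x}{x'}$ and $\covPosEmb{a b}{}$) checks that the normalised inner product of \texttt{add}-constructed inputs approaches $\alpha \kerntildef{a b}{\ell}{x}{x'} + (1 - \alpha) \covPosEmb{a b}{}$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, rho_g, rho_p = 0.3, 0.5, 1.0   # interpolation weight and toy covariances
n = 200_000                           # proxy for the layer width

# activations with E[g_a g_b] = rho_g; fully correlated encodings, E[p_a p_b] = rho_p
g_a = rng.standard_normal(n)
g_b = rho_g * g_a + np.sqrt(1.0 - rho_g**2) * rng.standard_normal(n)
p_a = rng.standard_normal(n)
p_b = p_a.copy()

# `add` construction of the layer inputs
h_a = np.sqrt(alpha) * g_a + np.sqrt(1.0 - alpha) * p_a
h_b = np.sqrt(alpha) * g_b + np.sqrt(1.0 - alpha) * p_b

empirical = h_a @ h_b / n
predicted = alpha * rho_g + (1.0 - alpha) * rho_p
assert abs(empirical - predicted) < 0.02
```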
Putting all of the above together, addition of positional encodings does not prevent GP behaviour in the infinite width limit;
the only modification of the results in \Cref{sect:nngp_proofs} is thus replacement of any $\kerntildef{a b}{\ell}{x}{x'}$ in the expression for the limiting covariance of $f^{\ell}$ and $G^{\ell}$ by $\mathcal{I}\circ\kerntildef{a b}{\ell}{x}{x'}$.
\subsection{NTK limit}
There are two sets of changes to the NTK limit.
First, the gradients w.r.t.\ $\indexedActivity{\ell - 1}{n}{x}$ in the \emph{indirect} part will now be multiplied by $\sqrt{\alpha}$ in the \texttt{add} case, and by $[\layerDimension{\ell - 1} / (\layerDimension{\ell - 1} + \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}})]^{1/2}$---to ensure convergence of the corresponding inner products---in the \texttt{append} case, and all the terms of the form
$$
\E \Biggl|
\frac{
\langle \tildeActivity{\ell - 1}{n, a \cdot}{x} , \tildeActivity{\ell - 1}{n, b \cdot}{x'} \rangle
}{
\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}} + \layerDimension{\ell - 1}
}
\Biggr|^k
\, ,
$$
for some $k > 0$ in the \emph{direct} part will converge to the $k$\textsuperscript{th} power of the $\mathcal{I}\circ\kerntildef{a b}{\ell}{x}{x'} = \alpha \kerntildef{a b }{\ell}{x}{x'} + (1 - \alpha) \covPosEmb{a b}{}$ kernel as discussed.
Since we have shown that \Cref{lem:mmnt_propagation,lem:inner_prod_converge} hold mutatis mutandis in the previous section, the rest of the proofs in the direct part can be modified in the obvious way, replacing $\kerntildef{a b}{\ell}{x}{x'}$ by $\mathcal{I}\circ\kerntildef{a b}{\ell}{x}{x'}$ as necessary.
Second, there will be a new contribution to the \emph{direct} part due to the gradient w.r.t.\ the trainable $\posEmb{n}{\ell}$.
Since $\posEmb{n}{\ell}$ is \texttt{add}-ed (resp.\ \texttt{append}-ed) to the layer input $\indexedActivity{\ell - 1}{n}{x}$, this contribution will be quite similar to the \emph{indirect} contribution, however with $\ntkhatf{a' i', b' j'}{\ell}{x}{x'}$ (\Cref{eq:ntk_hat}) replaced by $(1 - \alpha) \delta_{a' = b'} \delta_{i' = j'}$.
Inspecting \Cref{lem:gg_ntk,lem:vv_ntk,lem:gv_ntk}, this will lead to two changes.
Firstly, since $\E |(1 - \alpha) \delta_{a' = b'} \delta_{i' = j'}|^{t} < \infty$, all bounds involving $\E | \widehat{\ntk}_{a' i', b' j'}|^t$ can be trivially reduced.
Secondly, as shown in the previous section,
all appearances of the $\kerntildef{a b}{\ell}{x}{x'}$ are to be replaced by $\mathcal{I}\circ\kerntildef{a b}{\ell}{x}{x'}$, including those involved indirectly through the modified asymptotic distribution of $G_{n}^{\ell}$.
The rest of the proofs are affected by the introduction of positional encodings only through \Cref{lem:mmnt_propagation,lem:inner_prod_converge} which, as mentioned, hold in a modified form.
Substituting $(1 - \alpha) \delta_{a' = b'} \delta_{i' = j'}$ for $\ntkhatf{a' i', b' j'}{\ell}{x}{x'}$ in \Cref{lem:gg_ntk,lem:vv_ntk}, we thus conclude that the new contribution to the NTK due to the gradient w.r.t.\ $\posEmb{n}{\ell}$ is
\begin{align*}
&
%
(1 - \alpha)
\OVStd^2
\sum_{c = 1}^{\genericDimenstion^s}
\E [
\tildeLogit{a c}{\ell 1} (x)
\tildeLogit{b c}{\ell 1} (x')
]
+
\\
%
\delta_{\tau = \frac{1}{2}}
&(1 - \alpha)
\OVStd^2
\QKStd^2
\sum_{\substack{c_1, c_2 \\ d_1, d_2}}^{\genericDimenstion^s}
\mathcal{I} \circ \kerntildef{c_1 c_2}{\ell}{x}{x'}
\left(
\delta_{\substack{d_1 = d_2}}
\mathcal{I} \circ \kerntildef{a b}{\ell}{x}{x'}
+
\delta_{\substack{a = b}}
\mathcal{I} \circ \kerntildef{d_1 d_2}{\ell}{x}{x'}
\right)
\E \left[
\frac{
\partial
\tildeLogit{a c_1}{\ell 1} (x)
}{
\partial
G_{a d_1}^{\ell 1} (x)
}
\frac{
\partial
\tildeLogit{b c_2}{\ell 1} (x')
}{
\partial
G_{b d_2}^{\ell 1} (x')
}
\right]
\, .
%
\end{align*}
\section{Residual attention}\label{sect:residual_attention_appendix}
Observe that by \citep{garriga2019deep,yang2019v2}, the covariance induced by the skip connection, $\indexedActivation{\ell}{n}{x} = \sqrt{\alpha} \indexedActivity{\ell - 1}{n}{x} + \sqrt{1 - \alpha} \tildeActivation{\ell}{n}{x}$, in the infinite width limit is equal to
\begin{align*}
\E [
\indexedActivation{\ell}{a 1}{x}
\indexedActivation{\ell}{b 1}{x'}
]
&=
\alpha
\E [
\indexedActivity{\ell - 1}{a 1}{x}
\indexedActivity{\ell - 1}{b 1}{x'}
]
+
(1 - \alpha)
\E [
\tildeActivation{\ell}{a 1}{x}
\tildeActivation{\ell}{b 1}{x'}
]
\\
&=
\alpha
\kerntildef{a b}{\ell}{x}{x'}
+
(1 - \alpha)
\E [
\tildeActivation{\ell}{a 1}{x}
\tildeActivation{\ell}{b 1}{x'}
]
\, .
\end{align*}
To obtain the
$
\alpha \kerntildef{a b}{\ell}{x}{x'}
+
(1 - \alpha)
\covPosEmb{a \cdot}{}
\kerntildef{}{\ell}{x}{x'}
\covPosEmb{b \cdot}{\top}
$
from \Cref{eq:residual_kernel}, it is thus sufficient to choose $\tildeActivation{\ell}{n}{x}$ to be the output of an attention layer under the $d^{-1}$ scaling with structured positional encodings (covariance $\covPosEmb{}{}$), the identity function for $\zeta$, and the interpolation parameter \emph{for the attention layer} set to zero, resulting in the
$
\covPosEmb{a \cdot}{}
\kerntildef{}{\ell}{x}{x'}
\covPosEmb{b \cdot}{\top}
$ covariance (see \Cref{tab:kernel_overview}).
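The resulting residual update can be sketched as a simple kernel transformation (toy NumPy code; the function name and sizes are ours, not from the released implementation):

```python
import numpy as np

def residual_kernel(k, sigma, alpha):
    # residual NNGP update: alpha * K + (1 - alpha) * Sigma K Sigma^T,
    # where Sigma is the positional-encoding covariance (cf. the kernel table)
    return alpha * k + (1.0 - alpha) * sigma @ k @ sigma.T

rng = np.random.default_rng(0)
d_s = 3
a = rng.standard_normal((d_s, d_s))
k = a @ a.T                                  # toy input kernel block

sigma = np.eye(d_s)                          # identity encodings leave K unchanged
assert np.allclose(residual_kernel(k, sigma, 0.7), k)

sigma = rng.standard_normal((d_s, d_s))      # a generic Sigma
out = residual_kernel(k, sigma, 0.7)
assert np.allclose(out, out.T)               # update stays symmetric ...
assert np.all(np.linalg.eigvalsh(out) > -1e-9)   # ... and positive semidefinite
```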
\section{Introduction}\label{sect:intro}
One of the currently most active research directions in theoretical deep learning is the study of NN behaviour as the number of parameters in each layer goes to infinity \citep[e.g.,][]{matthews2018gaussian,lee2018deep,garriga2019deep,novak2019bayesian,li2018learning,allenzhu19convergence,du2019gd,arora2019,yang2019v2}.
Building upon these efforts, we study the asymptotic behaviour of NNs with attention layers \citep{bahdanu2015nmt,vaswani2017attention} and derive the corresponding neural network Gaussian process (NNGP) and neural tangent kernels \citep[NTK,][]{jacot2018ntk,lee2019wide}.
Beyond their recent empirical successes \citep[e.g.,][]{radford2019language,devlin2019bert}, attention layers are also interesting from the theoretical perspective as the standard proof techniques used to establish asymptotic Gaussianity of the input-to-output mappings represented by wide NNs \citep{matthews2018gaussian,yang2019v2} cannot be applied.
To understand why, consider the following simplified attention layer model: let $x \in \R{\genericDimenstion^s \times d'}$ be the input with $\genericDimenstion^s$ \emph{spatial} and $d'$ \emph{embedding} dimensions (by spatial, we mean, e.g., the number of tokens in a string or pixels in an image),
$\weightMatSymbol^{\querySymbol}, \weightMatSymbol^{\keySymbol}, \weightMatSymbol^{\valueSymbol} \in \R{d' \times d}$ be weight matrices, and define queries
$Q(x) \coloneqq x \weightMatSymbol^{\querySymbol}$, keys $K(x) \coloneqq x \weightMatSymbol^{\keySymbol}$, and values $V(x) \coloneqq x \weightMatSymbol^{\valueSymbol}$ as usual.
The attention layer output is then
\begin{align}\label{eq:single_head_informal}
f(x)
\coloneqq
\zeta \biggl(
\frac{
Q(x) K(x)^\top
}{\sqrt{d}}
\biggr)
V(x)
=
\zeta (
G(x)
)
V(x)
\, ,
\end{align}
where $\zeta$ is the row-wise softmax function.
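Before proceeding, a minimal NumPy rendering of \Cref{eq:single_head_informal} may help fix ideas (sizes and names are ours, purely illustrative; this is not the released implementation):

```python
import numpy as np

def softmax(z):
    # row-wise softmax (numerically stabilised)
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def single_head_attention(x, w_q, w_k, w_v):
    # vanilla single-head attention with the d^{-1/2} scaling
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d = w_q.shape[1]
    g = q @ k.T / np.sqrt(d)        # logits G(x), of spatial x spatial shape
    return softmax(g) @ v

rng = np.random.default_rng(0)
d_s, d_in, d = 5, 32, 64            # spatial, input embedding, head width
x = rng.standard_normal((d_s, d_in))
w_q, w_k, w_v = (rng.standard_normal((d_in, d)) / np.sqrt(d_in) for _ in range(3))
f = single_head_attention(x, w_q, w_k, w_v)
assert f.shape == (d_s, d)
```

Note that `g` stays of shape `d_s x d_s` however large `d` grows, which is precisely the source of the asymptotic dependence discussed below.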
Now observe that
$\dim G(x) = \genericDimenstion^s \times \genericDimenstion^s$
where the spatial dimension $\genericDimenstion^s$ stays finite even as the number of parameters---here proportional to $d$---goes to infinity.
As we will show rigorously in \Cref{sect:theory}, this fact combined with the $d^{-1/2}$ scaling
causes each column of $f(x)$ to be a linear combination of the \emph{same stochastic matrix} $\zeta(G(x))$, and thus statistically dependent even in the infinite width limit.
Since the exchangeability based arguments
\citep{matthews2018gaussian,garriga2019deep} require that certain moment statistics of $f(x)$ asymptotically behave as if its columns were independent \citep[see condition {\em b} in lemma 10,][]{matthews2018gaussian}, they do not extend to attention layers in a straightforward manner.
Similarly, the proofs based on Gaussian conditioning \citep{novak2019bayesian,yang2019v2} require that given the input $x$, the conditional covariance of each column of $f(x)$ converges (in probability) to the same \emph{deterministic} positive semidefinite matrix \citep[see propositions 5.5 and G.4 in][]{yang2019v2} which will not be the case due to the aforementioned stochasticity of $\zeta(G(x))$.
Among the many interesting contributions in \citep{yang2019v2}, the author proposes to resolve the above issue by replacing the $d^{-1/2}$ scaling in \Cref{eq:single_head_informal} by $d^{-1}$ which does enable application of the Gaussian conditioning type arguments.
However, it also forces the attention layer to only perform computation similar to average pooling in the infinite width limit, and reduces the overall expressivity of attention even if suitable modifications preventing the pooling behaviour are considered (see \Cref{sect:linear_scaling_limit}).
We address the above issues by modifying the exchangeability based technique and provide a rigorous characterisation of the infinite width behaviour under both the $d^{-1/2}$ and $d^{-1}$ scalings.
We also show that positional encodings \citep{gehring2017convolutional,vaswani2017attention} can improve empirical performance even in the infinite width limit, and propose modifications to the attention mechanism which result in further gains for both finite and infinite NNs.
In experiments, we moderately improve upon the previous state-of-the-art result on CIFAR-10 for GP models without data augmentation and advanced preprocessing \citep[cf.][]{li2019enhanced}.
Finally, since attention is often applied to text datasets, we release code allowing applications of NNGP/NTK models to variable-length sequences, including an example on the IMDb reviews dataset.
\begin{figure}[tbp]
\begin{center}
\centerline{
\includegraphics[width=0.5\columnwidth,keepaspectratio]{fig/single_head_sample_logy}
\hfill
\includegraphics[width=0.5\columnwidth,keepaspectratio]{fig/multihead_head_sample_logy}
}
\vskip -0.2in
\caption{
\textbf{Distribution of an attention layer output for single-head} (left) \textbf{and 100-head} (right) architecture at initialisation under the $d^{-1/2}$ scaling when $d$ is large ($1000$).
The red line is the Gaussian density with sample mean and variance substituted for its parameters.
Unlike in the multi-head case, the empirical distribution of the single-head attention output deviates significantly from Gaussian despite $d \gg 1$.}
\label{fig:scale_mixture}
\end{center}
\vskip -0.25in
\end{figure}
\newlength{\tableFS}
\setlength{\tableFS}{8.4pt}
\begin{table*}[t]
\caption{
\textbf{Overview of the discussed kernels.}
The $d$ column refers to the $d^{-1}$ and $d^{-1/2}$ scaling of the $Q (x) K(x)^\top$.
$(\tilde{\kernel}, \widetilde{\ntk})$ denote the input and $(\kappa, \Theta)$ the output NNGP and NTK kernels.
\textsc{NNGP} and \textsc{NTK} columns are stated as updates for full $\genericDimenstion^s \times \genericDimenstion^s$ covariance blocks unless the generic spatial dimension subscripts $a b$ are used.
To fit within the page width, we use superscripts to denote dependence on inputs, e.g., replacing $\kerntildef{}{}{x}{x'}$ by $\tilde{\kernel}^{x\inputSymbol'}$.
$\langle A, B \rangle_F = \sum_{ij} A_{ij} B_{ij}$ is the Frobenius product of matrices $A, B$, with $\| A \|_F^2 = \langle A, A \rangle_F$.
$\mathcal{I}$ denotes interpolation,
e.g.,
$\mathcal{I} \circ \kerntildef{}{}{x}{x'} = \alpha \kerntildef{}{}{x}{x'} + (1 - \alpha) \covPosEmb{}{}$ with fixed hyperparameters $\alpha \in [0, 1]$ and $\covPosEmb{}{}$ (a generic covariance related to initialisation of positional encodings);
the special case $\covPosEmb{}{} = I$ is denoted by $\mathcal{I}_{I}$.
$\ddag$ is for optional operators (e.g., $\mathcal{I}^\ddag \circ \tilde{\kernel}^{x\inputSymbol'}$ can be replaced with $\tilde{\kernel}^{x\inputSymbol'}$).
$\weightMatSymbol^{\querySymbol} = \weightMatSymbol^{\keySymbol}$ initialisation assumed for all $d^{-1}$, and $\zeta = \text{identity}$ for all $d^{-1/2}$ kernels (see \Cref{sect:linear_scaling_limit,sect:softmax_alternatives} respectively).
See \Cref{sect:theory,sect:beyon_vanilla_attn} for derivations, and \citep{yang2019v2} for the \textsc{LayerNorm} kernel (stated here for ease of reference).
}
\label{tab:kernel_overview}
\begin{center}
\fontsize{\tableFS}{1.2\tableFS}
\begin{sc}
\renewcommand{\arraystretch}{1.75}
\begin{tabular}{lcccc}
\toprule
Kernel & $d$ & NNGP & NTK \\
\midrule
\multirow{2}{*}{Vanilla} &
%
$1$
&
$
\zeta(
\tilde{\kernel}^{x\inputSymbol}
%
)
%
\tilde{\kernel}^{x\inputSymbol'}
%
%
\zeta(
\tilde{\kernel}^{x'x'}
%
)^\top
$
&
$
2
\kappa^{x\inputSymbol'}
%
+
\zeta(
\tilde{\kernel}^{x\inputSymbol}
%
%
)
%
\widetilde{\ntk}^{x\inputSymbol'}
%
%
\zeta(
\tilde{\kernel}^{x'x'}
%
)^\top
$
\\
&
$\frac{1}{2}$
&
$
\tilde{\kernel}^{x\inputSymbol'}
%
%
\norm{
\tilde{\kernel}^{x\inputSymbol'}
%
}_F^2
$
&
$
4
\kappa_{ab}^{x\inputSymbol'}
%
+
\langle
\tilde{\kernel}^{x\inputSymbol'}
%
,
2
\tilde{\kernel}_{ab}^{x\inputSymbol'}
%
\widetilde{\ntk}^{x\inputSymbol'}
%
+
\widetilde{\ntk}_{ab}^{x\inputSymbol'}
%
\tilde{\kernel}^{x\inputSymbol'}
%
\rangle_F
$
\\
\midrule
\multirow{2}{*}{
\begin{minipage}[t]{0.2\columnwidth}%
Random Positional Encoding
\end{minipage}
}
&
$1$
&
$
\zeta (
\mathcal{I}_I \circ
\tilde{\kernel}^{x\inputSymbol}
%
)
[
\mathcal{I}_I \circ
\tilde{\kernel}^{x\inputSymbol'}
%
]
\zeta (
\mathcal{I}_I \circ
\tilde{\kernel}^{x'x'}
%
)^\top
$
&
$
2
\kappa^{x\inputSymbol'}
%
+
\zeta (
\mathcal{I}_I \circ
\tilde{\kernel}^{x\inputSymbol}
%
)
[
\mathcal{I}_I \circ
\widetilde{\ntk}^{x\inputSymbol'}
%
]
%
%
%
%
%
%
%
\zeta (
\mathcal{I}_I \circ
\tilde{\kernel}^{x'x'}
%
)^\top
$
%
\\
&
$\frac{1}{2}$
&
$
\mathcal{I}_I \circ
\tilde{\kernel}^{x\inputSymbol'}
%
%
\norm{
\mathcal{I}_I \circ \tilde{\kernel}^{x\inputSymbol'}
%
}_F^2
$
&
$
4
\kappa_{ab}^{x\inputSymbol'}
%
+
\langle
\mathcal{I}_I \circ
\tilde{\kernel}^{x\inputSymbol'}
%
,
2
[
\mathcal{I}_I \circ
\tilde{\kernel}_{ab}^{x\inputSymbol'}
%
]
%
\mathcal{I}_I \circ
\widetilde{\ntk}^{x\inputSymbol'}
%
%
+
[
\mathcal{I}_I \circ
\widetilde{\ntk}_{ab}^{x\inputSymbol'}
%
]
\mathcal{I}_I \circ
\tilde{\kernel}^{x\inputSymbol'}
%
\rangle_F
$
\\
\midrule
\multirow{2}{*}{
\begin{minipage}[t]{0.2\columnwidth}%
Structured Positional Encoding
\end{minipage}
}
&
$1$
&
$
\zeta (
\mathcal{I} \circ
\tilde{\kernel}^{x\inputSymbol}
%
)
[
\mathcal{I}^\ddag \circ
\tilde{\kernel}^{x\inputSymbol'}
%
]
\zeta (
\mathcal{I} \circ
\tilde{\kernel}^{x'x'}
%
)^\top
$
&
$
2
\kappa^{x\inputSymbol'}
%
+
\zeta (
\mathcal{I} \circ
\tilde{\kernel}^{x\inputSymbol}
%
)
[
\mathcal{I}^\ddag \circ
\widetilde{\ntk}^{x\inputSymbol'}
%
]
\zeta (
\mathcal{I} \circ
\tilde{\kernel}^{x'x'}
%
)^\top
$
%
\\
&
%
$\frac{1}{2}$
%
&
%
$
\mathcal{I} \circ
\tilde{\kernel}^{x\inputSymbol'}
%
%
\langle
\mathcal{I}^\ddag \circ
\tilde{\kernel}^{x\inputSymbol'}
%
,
\mathcal{I} \circ \tilde{\kernel}^{x\inputSymbol'}
%
\rangle_F
$
%
&
%
$
\begin{aligned}
4 \kappa_{ab}^{x\inputSymbol'}
&+
\langle
\mathcal{I}^\ddag \circ
\tilde{\kernel}^{x\inputSymbol'}
,
[
\mathcal{I} \circ
\tilde{\kernel}_{ab}^{x\inputSymbol'}
]
\mathcal{I} \circ
\widetilde{\ntk}^{x\inputSymbol'}
+
[
\mathcal{I} \circ
\widetilde{\ntk}_{ab}^{x\inputSymbol'}
]
\mathcal{I} \circ
\tilde{\kernel}^{x\inputSymbol'}
\rangle_F
\\
&+
\langle
\mathcal{I} \circ
\tilde{\kernel}^{x\inputSymbol'}
\phantom{\ddag}
,
[
\mathcal{I} \circ
\tilde{\kernel}_{ab}^{x\inputSymbol'}
]
\mathcal{I}^\ddag \circ
\widetilde{\ntk}^{x\inputSymbol'}
\rangle_F
\end{aligned}
$
%
\\
%
\midrule
Residual
&
--
&
$
\alpha
\tilde{\kernel}^{x\inputSymbol'}
%
+
(1 - \alpha)
\covPosEmb{}{}
\tilde{\kernel}^{x\inputSymbol'}
%
\covPosEmb{}{\top}
$
&
$
2 (1 - \alpha)
\kappa^{x\inputSymbol'}
%
+
\alpha
\widetilde{\ntk}^{x x'}
+
(1 - \alpha)
\covPosEmb{}{}
\widetilde{\ntk}^{x\inputSymbol'}
%
\covPosEmb{}{\top}
$
\\
\midrule
LayerNorm &
--
&
$
%
\tilde{\kernel}_{ab}^{x\inputSymbol'}
%
%
%
[
\tilde{\kernel}_{aa}^{x\inputSymbol}
%
\tilde{\kernel}_{bb}^{x\inputSymbol'}
%
]^{-1/2}
%
%
$
&
$
%
\widetilde{\ntk}_{ab}^{x\inputSymbol'}
%
%
%
[
\widetilde{\ntk}_{aa}^{x\inputSymbol}
%
\widetilde{\ntk}_{bb}^{x'x'}
%
]^{-1/2}
%
%
$
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\\
%
\bottomrule
\end{tabular}
\end{sc}
\end{center}
\vskip -0.18in
\end{table*}
\section{Definitions and notation}\label{sect:background}
\textbf{Neural networks:}
$\indexedActivation{\ell}{}{x}$ denotes the~output of $\ell$\textsuperscript{th} layer for an~input $x \in \mathcal{X} \subset \R{\genericDimenstion^s \times \genericDimenstion^{0}}$, and $\indexedActivity{\ell}{}{x} \coloneqq \phi (\indexedActivation{\ell}{}{x})$ the~corresponding post-nonlinearity where $\phi \colon \R{} \to \R{}$ is the~activation function applied elementwise (for convenience, we set $\indexedActivity{0}{}{x} = x$).
We assume the~network has $L \in \mathbb{N}$ hidden layers, making $\indexedActivation{L + 1}{}{x}$
the output,
and that the input set $\mathcal{X}$ is \emph{countable}.
As we will be examining behaviour of sequences of increasingly wide NNs, the~variables corresponding to the~$n$\textsuperscript{th} network are going to be denoted by a~subscript $n$ (e.g., $\indexedActivation{\ell}{n}{x}$ is the~output of $\ell$\textsuperscript{th} layer of the~$n$\textsuperscript{th} network in the~sequence evaluated at $x$).
We also use
\begin{align*}
\activations{n, \cdot j}{\ell}
&\coloneqq
\{
\indexedActivation{\ell}{n, ij}{x}
\colon
x \in \mathcal{X},
i \in [\genericDimenstion^s]
\}
\\
\activations{n}{\ell}
&\coloneqq
\{
\activations{n, \cdot j}{\ell}
\colon
j \in \mathbb{N}
\}
\, ,
\end{align*}
with $[\genericDimenstion^s] = \{1, 2, \ldots, \genericDimenstion^s \}$.
To reduce clutter, we omit the $\ell$ index where it is clear from the context or unimportant.
\textbf{Shapes:} $\indexedActivation{\ell}{n}{x}, \indexedActivity{\ell}{n}{x} \in \R{\genericDimenstion^s \times \layerDimension{\depthSymbol}}$ with $\genericDimenstion^s, \layerDimension{\depthSymbol} \in \mathbb{N}$ respectively the~\emph{spatial} and \emph{embedding} dimensions.
If there are multiple spatial dimensions, such as height and width for images, we assume these have been flattened into a single dimension.
Finally, we will allow the row space dimension of $\weightMatSymbol_{n}^{\ell,Q}, \weightMatSymbol_{n}^{\ell,K} \in \R{\layerDimension{\ell - 1} \times \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}$ to differ from that of $\weightMatSymbol_{n}^{\ell,V} \in \R{\layerDimension{\ell - 1} \times \layerDimension{\depthSymbol}}$, leading to the modified definition
\begin{align}\label{eq:logit_def}
G_{n}^{\ell} (x)
=
\frac{Q_{n}^{\ell}(x) K_{n}^{\ell} (x)^\top}{\sqrt{\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol}}}
\, .
\end{align}
\textbf{Multi-head attention:} \Cref{eq:single_head_informal} describes a vanilla version of a \emph{single-head} attention layer.
Later in this paper, we examine the \emph{multi-head} attention alternative in which the~output $\indexedActivation{\ell}{n}{x}$ is computed as
\begin{equation}\label{eq:attention_out}
\indexedActivation{
\ell
}{n}{x}
=
\bigl[
\indexedActivation{
\ell
1}{n}{x},
\ldots,
\indexedActivation{
\ell
\headSymbol_{\sequenceVariable}^{\depthSymbol}}{n}{x}
\bigr]
\weightMatSymbol_{\sequenceVariable}^{\depthSymbol, \outputSymbol}
\, ,
\end{equation}
i.e., by stacking the outputs of $\headSymbol_{\sequenceVariable}^{\depthSymbol} \in \mathbb{N}$ independently parametrised heads into a $\genericDimenstion^s \times \headSymbol_{\sequenceVariable}^{\depthSymbol} \layerDimension{\depthSymbol}$ matrix and projecting back into $\genericDimenstion^s \times \layerDimension{\depthSymbol}$ by $\weightMatSymbol_{\sequenceVariable}^{\depthSymbol, \outputSymbol} \in \R{\headSymbol_{\sequenceVariable}^{\depthSymbol} \layerDimension{\depthSymbol} \times \layerDimension{\depthSymbol}}$.
The embedding dimension of each head $\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol}$ can optionally differ from $\layerDimension{\depthSymbol}$.
To distinguish the weight matrices corresponding to the individual heads, we will be using a superscript $h$, e.g.,
$\query{\sequenceVariable}{\depthSymbol \headIndex} (x) = \indexedActivity{\ell - 1}{n}{x} \weightMatSymbol_{\sequenceVariable}^{\depthSymbol \headIndex, \querySymbol}$.
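A toy NumPy sketch of this multi-head computation may be useful (hypothetical sizes and names; `scale=0.5` corresponds to the $d^{-1/2}$ scaling discussed so far, `scale=1.0` to the $d^{-1}$ alternative below):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(x, heads, w_o, scale=0.5):
    # stack independently parametrised head outputs, then project by w_o
    outs = []
    for w_q, w_k, w_v in heads:
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        g = q @ k.T / w_q.shape[1] ** scale
        outs.append(softmax(g) @ v)
    return np.concatenate(outs, axis=-1) @ w_o

rng = np.random.default_rng(0)
d_s, n_prev, d_v, n_heads, n_out = 4, 16, 8, 3, 16   # illustrative sizes
heads = [tuple(rng.standard_normal((n_prev, d_v)) / np.sqrt(n_prev) for _ in range(3))
         for _ in range(n_heads)]
w_o = rng.standard_normal((n_heads * d_v, n_out)) / np.sqrt(n_heads * d_v)
y = multi_head_attention(rng.standard_normal((d_s, n_prev)), heads, w_o)
assert y.shape == (d_s, n_out)
```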
\textbf{Weight distribution:}
As usual, we will assume Gaussian initialisation of the weights, i.e.,
$\weightQ{ij} \sim \mathcal{N} (0, \queryStd^2 / \layerDimension{\ell - 1} )$,
$\weightK{ij} \sim \mathcal{N} (0, \keyStd^2 / \layerDimension{\ell - 1} )$,
$\weightV{ij} \sim \mathcal{N} (0, \valueStd^2 / \layerDimension{\ell - 1} )$,
and
$\weightO{ij} \sim \mathcal{N} (0, \outStd^2 / (\headSymbol_{\sequenceVariable}^{\depthSymbol} \layerDimension{\ell}) )$, all i.i.d.\ over the $i, j$ and $\ell, h$ indices for all $n$.
The scaling of the variance by the inverse of the input dimension is standard and ensures that the asymptotic variances do not diverge \citep{neal1996,lecun1998efficient,he2015delving}.
Throughout \Cref{sect:beyon_vanilla_attn,sect:theory}, we assume all the $\sigma^2$ parameters are equal to one, and only state the results in full generality in the appendix.
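A quick numerical sanity check of this $1/\text{fan-in}$ scaling (our own toy snippet, with made-up sizes):

```python
import numpy as np

def init_weight(rng, fan_in, fan_out, sigma=1.0):
    # Gaussian initialisation with per-entry variance sigma^2 / fan_in
    return (sigma / np.sqrt(fan_in)) * rng.standard_normal((fan_in, fan_out))

rng = np.random.default_rng(0)
fan_in = 10_000
w_q = init_weight(rng, fan_in, 1000)
assert abs(w_q.var() * fan_in - 1.0) < 0.05   # entries have variance ~ 1/fan_in

# pre-activations keep O(1) variance regardless of fan_in
q = rng.standard_normal(fan_in) @ w_q
assert abs(q.var() - 1.0) < 0.3
```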
\textbf{NNGP/NTK:}
As discussed in the introduction, randomly initialised NNs induce a distribution over the $f_{n}^{L + 1}$ mappings.
For a variety of architectures, this distribution converges (weakly) to that of a GP as $\min_{\ell \in [L]} \layerDimension{\depthSymbol} \to \infty$, both at initialisation (NNGP), and after continuous gradient descent optimisation of the randomly initialised NN
with respect to a mean squared error loss (NTK).
Both the NNGP and NTK distributions are typically zero mean, and we use $\kappa^{L + 1}$ and $\Theta^{L + 1}$ to denote their respective kernel functions.
These kernel functions tend to have a recursive structure where each layer in the underlying NN architecture is associated with a mapping $(\kappa^{\ell - 1}, \Theta^{\ell - 1}) \mapsto (\kappa^{\ell}, \Theta^{\ell})$ transforming the NNGP and NTK kernels according to the layer's effect on the outputs in the infinite width limit.
Since nonlinearities are typically not treated as separate layers, we use $\tilde{\kernel}^{\ell}$ and $\widetilde{\ntk}^{\ell}$ to denote the intermediate transformation $(\kappa^{\ell - 1}, \Theta^{\ell - 1}) \mapsto (\tilde{\kernel}^{\ell}, \widetilde{\ntk}^{\ell})$ they induce.
We generally assume every layer is followed by a nonlinearity, setting $(\tilde{\kernel}^{\ell}, \widetilde{\ntk}^{\ell}) = (\kappa^{\ell-1}, \Theta^{\ell-1})$ if none is used.
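As a concrete (non-attention) instance of such a layer map, the snippet below iterates the well-known closed-form NNGP update for a unit-variance dense layer followed by a ReLU nonlinearity (the arc-cosine kernel of Cho \& Saul); it is included purely for illustration and is not specific to attention:

```python
import numpy as np

def dense_relu_update(k):
    # kappa^{l-1} -> kappa^l for a unit-variance dense layer + ReLU
    # (degree-1 arc-cosine kernel)
    diag = np.sqrt(np.diag(k))
    cos = np.clip(k / np.outer(diag, diag), -1.0, 1.0)
    theta = np.arccos(cos)
    return np.outer(diag, diag) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2.0 * np.pi)

x = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
k = x @ x.T                          # input kernel kappa^0
for _ in range(3):                   # three hidden layers
    k = dense_relu_update(k)
assert k.shape == (3, 3)
assert np.all(np.linalg.eigvalsh(k) > -1e-9)   # remains positive semidefinite
```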
In the next two sections,
we uncover the mappings $(\tilde{\kernel}^{\ell}, \widetilde{\ntk}^{\ell}) \mapsto (\kappa^{\ell}, \Theta^{\ell})$ induced by various attention architectures.
\section{Attention and Gaussian process behaviour}\label{sect:theory}
Throughout the rest of this paper, we restrict our focus to increasingly wide NNs including at least one attention layer.
In particular, we consider sequences of NNs such that
\begin{align}\label{eq:sim_limit}
\lim_{n \to \infty} \min_{\ell \in [L]} \layerDimension{\depthSymbol} = \infty
\, ,
\end{align}
and the reader should thus interpret any statements involving $n \to \infty$ as implicitly assuming
\Cref{eq:sim_limit} holds.
Due to space constraints, most of the technical discussion including derivation of the NTK is relegated to \Cref{sect:proofs}.
In this section, we focus only on the key step in our proof which relies on an inductive argument adapted from \citep{matthews2018gaussian}.
On a high level, the induction is applied from $\ell = 1$ to $\ell = L + 1$, and establishes that whenever
$f_{n}^{\ell - 1}$ converges in distribution to
$\mathcal{GP}(0 ,\kappa^{\ell - 1})$ at initialisation, $f_{n}^{\ell}$ also converges in distribution to $\mathcal{GP}(0 ,\kappa^{\ell})$ as $n \to \infty$.
Since this fact is known for dense, convolutional, and average pooling layers, and almost all nonlinearities \citep{matthews2018gaussian,lee2018deep,garriga2019deep,novak2019bayesian,yang2019v2}, it will be sufficient to show the same for attention layers.
\subsection{Infinite width limit under the $d^{-1}$ scaling}\label{sect:linear_scaling}
As illustrated in \Cref{fig:scale_mixture}, use of the $d^{-1/2}$ scaling within a \emph{single-head} architecture leads to a scale mixture behaviour of the attention layer outputs as the number of parameters goes to infinity.
To obtain a Gaussian limit, \citet[appendix A]{yang2019v2} proposes to replace the definition in \Cref{eq:logit_def} by
$
G_{n} %
(x)
=
(d_{n}^{G})^{-1}
Q_{n} %
(x)
K_{n} %
(x)^\top
$,
i.e., the use of $d^{-1}$ scaling.
The desired result then follows:
\begin{theorem}[$d^{-1}$ limit \citep{yang2019v2}]\label{thm:linear_scaling_limit}
Under the $d^{-1}$ scaling and the assumptions stated in \citep{yang2019v2}:
\vspace{-0.5\baselineskip}
\begin{enumerate}[\hspace{-0.5em}(I)]
\renewcommand\labelenumi{\emph{\bfseries(\theenumi)}}
\item
For any $(x, x') \in \mathcal{X} \times \mathcal{X}$ and $a, b, i, j \in [\genericDimenstion^s]$, there exist constants $(\bar{\softmax}_{a i}^x, \bar{\softmax}_{b j}^{x'}) \in \R{} \times \R{}$ such that
\begin{align}\label{eq:logit_determ_limit}
(
\zeta(
G_{n}%
(x)
)_{ai},
\zeta(
G_{n}%
(x')
)_{bj}
)
\to
(
\bar{\softmax}_{a i}^x,
\bar{\softmax}_{b j}^{x'}
)
\end{align}
in probability as $n \to \infty$.
\item
%
%
$\activations{n}{}$ %
converges in distribution to $f%
\sim \mathcal{GP}(0, \kappa)$ with
%
\begin{align}\label{eq:determ_kernel}
\kernelf{a b}{}{x}{x'}
&=
\E [
\indexedActivation{}{a1}{x}
\indexedActivation{}{b1}{x'}
]
\\
&=
\sum_{i , j = 1}^{\genericDimenstion^s}
\kerntildef{i j}{}{x}{x'}
\bar{\softmax}_{a i}^x
\bar{\softmax}_{b j}^{x'}
\nonumber
\, ,
\end{align}
and $\activations{\cdot k}{}$ and $\activations{\cdot l}{}$ are independent for any $k \neq l$.
%
\end{enumerate}
\end{theorem}
An analogous result also holds for multi-head attention architectures, and follows by the usual argument for fully connected layers as long as either the number of embedding dimensions per head or the number of heads goes to infinity.
\subsection{Limitations of the $d^{-1}$ scaling}\label{sect:linear_scaling_limit}
While \Cref{thm:linear_scaling_limit} is a good starting point, several issues have to be resolved before using the kernel function described in \Cref{eq:determ_kernel} in practice.
Firstly, since $\weightMatSymbol_{n}^{Q}$ and $\weightMatSymbol_{n}^{K}$ are initialised independently, the $d^{-1}$ scaled inner products of keys and queries will converge to zero (the mean), and thus for any $a, i$ and $x$, $\bar{\softmax}_{a i}^{x} \to (\genericDimenstion^s)^{-1}$ in probability by the continuous mapping theorem.
This issue was already noted by \citeauthor{yang2019v2} in appendix~A but not discussed further as the main focus of the paper lies elsewhere.
In any case, substituting $(\genericDimenstion^s)^{-1}$ for all the $\bar{\softmax}$ coefficients will make $\kernelf{a b}{}{x}{x'} = \kernelf{i j}{}{x}{x'}$ for any $a, b, i, j \in [\genericDimenstion^s]$, and in fact all of these entries will be equivalent to the output of a simple global average pooling kernel \citep[equation 17]{novak2019bayesian}.\footnote{In fact, the asymptotic distribution induced by such an attention layer followed by flatten and dense layers is the same as that induced by global average pooling followed by a dense layer.}
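To illustrate the collapse numerically, the following NumPy sketch (a toy finite-width simulation, not the paper's code; the positions-by-width input shape and the weight scale are our assumptions) draws independent $\weightMatSymbol^{Q}, \weightMatSymbol^{K}$ and checks that under the $d^{-1}$ scaling the softmax attention weights approach the uniform $(\genericDimenstion^s)^{-1}$ as the width grows:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 4  # number of spatial positions (the paper's d^s)

def softmax_rows(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_weights(width):
    """Softmax weights for one random input, independent W_Q / W_K, d^{-1} scaling."""
    f = rng.normal(size=(s, width))                        # previous-layer activations
    w_q = rng.normal(size=(width, width)) / np.sqrt(width) # independent query weights
    w_k = rng.normal(size=(width, width)) / np.sqrt(width) # independent key weights
    logits = (f @ w_q) @ (f @ w_k).T / width               # d^{-1}-scaled logits
    return softmax_rows(logits)

# Deviation from the uniform weights 1/s at two different widths.
dev_narrow = np.abs(attention_weights(20) - 1.0 / s).max()
dev_wide = np.abs(attention_weights(2000) - 1.0 / s).max()
```

The maximal deviation from uniform shrinks with the width, consistent with the scaled logits converging to their zero mean.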
Perhaps the simplest way to address the above issue is by drawing the initial weights such that $\weightMatSymbol_{n}^{Q} = \weightMatSymbol_{n}^{K}$.
This will ensure that the key and query for a particular spatial dimension point in the same direction, and thus the attention weight each position assigns to itself will be large with high probability.
The resulting formula for $\kernelf{a b}{}{x}{x'}$ is
\begin{align}\label{eq:fixed_determ_kernel}
%
%
%
\sum_{i , j = 1}^{\genericDimenstion^s}
\kerntildef{i j}{}{x}{x'}
\zeta(
%
\kerntildef{a i}{}{x}{x}
)
\zeta(
%
\kerntildef{b j}{}{x'}{x'}
)
\, .
\end{align}
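A small simulation (again a toy NumPy sketch with shapes of our choosing, not the released implementation) can confirm the mechanism: with shared weights, the diagonal logits concentrate around the strictly positive squared query norms, so the finite-width self-attention weight approaches the softmax of the rows of $\kerntildef{}{}{x}{x}$ appearing in \Cref{eq:fixed_determ_kernel}:

```python
import numpy as np

rng = np.random.default_rng(1)
s, n = 4, 2000

def softmax_rows(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

f = rng.normal(size=(s, n))               # previous-layer activations for one input
w = rng.normal(size=(n, n)) / np.sqrt(n)  # shared weights: W_Q = W_K = w
q = f @ w
logits = q @ q.T / n                      # d^{-1} scaling; diagonal is ||q_a||^2 / n >= 0

weights = softmax_rows(logits)
self_weight = np.diag(weights).mean()     # average attention a position pays to itself

# Predicted wide limit: softmax of the rows of the empirical kernel f f^T / n.
predicted = np.diag(softmax_rows(f @ f.T / n)).mean()
```

At this width the empirical self-attention weight is well above the uniform $1/s$ and close to its predicted limit.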
Since \Cref{eq:fixed_determ_kernel} resolves the issue of reduction to average pooling, a natural question is whether swapping $d^{-1/2}$ for $d^{-1}$ has any undesirable consequences in the infinite width limit.
As we will see, this question can be answered in the affirmative.
In particular, we start with a proposition inspired by \citet{cordonnier2020On}, in which the authors show that an attention layer with a sufficient number of heads is at least as expressive as a standard convolutional layer,
and that attention layers often empirically learn to perform computation akin to convolution.
In contrast, \Cref{prop:linear_scale_no_conv} proves that there is no initial distribution of $\weightMatSymbol_{n}^{Q}$ and $\weightMatSymbol_{n}^{K}$ which would recover the convolutional kernel \citep{novak2019bayesian,garriga2019deep} in the infinite width limit.
\begin{restatable}{proposition}{linearNoConv}
\label{prop:linear_scale_no_conv}
There is no set of attention coefficients $\{ \bar{\softmax}_{a i}^{x} \! \in \R{} \! \colon a, i \in [\genericDimenstion^s], x \in \mathcal{X} \}$ such that for \emph{all} positive semidefinite kernels $\tilde{\kernel}$ simultaneously
\begin{align*}
\sum_{i , j = 1}^{\genericDimenstion^s}
\kerntildef{i j}{}{x}{x'}
\bar{\softmax}_{a i}^x
\bar{\softmax}_{b j}^{x'}
%
=
\sum_{i = 1}^{d_f}
\kerntildef{N_a(i) N_b(i)}{}{x}{x'}
\frac{1}{d_f}
\, ,
\end{align*}
where $d_f$ is the dimension of the (flattened) convolutional filter, $N_a, N_b \subset [\genericDimenstion^s]$ are the ordered subsets of pixels which are used to compute the new values of pixels $a$ and $b$, respectively, and $N_a(i), N_b(i)$ are the $i$\textsuperscript{th} pixels in $N_a, N_b$.
\end{restatable}
In the next section, we will see that the convolutional kernel can be recovered under the $d^{-1/2}$ scaling (\Cref{prop:sqrt_scale_conv_recover}).
However, we first need to establish convergence under the $d^{-1/2}$ scaling.
\begin{figure}[tbp]
\begin{center}
\centerline{
\includegraphics[width=0.5\columnwidth,keepaspectratio]{fig/channel_sample_dist}
\hfill
\includegraphics[width=0.5\columnwidth,keepaspectratio]{fig/channel_sample_acc}
}
\vskip -0.15in
\caption{
\textbf{Convergence} (left) \textbf{and validation accuracy} (right) \textbf{plots for an empirical NNGP kernel} estimated by Monte Carlo on a 2K/4K train/validation subset of 8x8-downsampled CIFAR-10, as the number of weight samples averaged over (x-axis) and the number of parameters (y-axis) grow. Architecture: Convolution + ReLU, 2x Attention + ReLU, Flatten, Dense. For attention layers, $\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol} = \text{\#channels}$ but $\headSymbol_{\sequenceVariable}^{\depthSymbol} = \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \valueSymbol} = \floor{\sqrt{\text{\#channels}}}$ to reduce the memory footprint. Details in \Cref{sect:convergence_appendix}.}
\label{fig:convergence_plots}
\end{center}
\vskip -0.25in
\end{figure}
\subsection{Infinite width limit under the $d^{-1/2}$ scaling}\label{sect:multi_head}
As discussed in \Cref{sect:intro}, single-head attention architectures can exhibit non-Gaussian asymptotic behaviour under the $d^{-1/2}$ scaling.
This is inconvenient for our purposes as many modern NN architectures combine attention with fully connected, convolutional, and other layer types, all of which have Gaussian NNGP and NTK limits \citep[e.g.,][]{novak2019bayesian,garriga2019deep,yang2019v2}.
This Gaussianity simplifies derivation of the infinite width behaviour of many architectures and allows for easy integration with existing software libraries \citep{novak2020neural}.
Fortunately, the output of an attention layer becomes asymptotically Gaussian when the number of heads becomes large.
\begin{restatable}[$d^{-1/2}$ limit]{theorem}{nngpConvergence}
\label{thm:gp_convergence_sqrt}
Let
$\ell \in \{ 2, \ldots , L + 1 \}$, and
$\phi$ be such that $|\phi(x)| \leq c + m |x|$ for some $c, m \in \R{}_+$.
Assume $\activations{n}{\ell - 1}$
converges in distribution to $f^{\ell - 1} \sim \mathcal{GP}(0, \kappa^{\ell-1})$, such that $\activations{\cdot j}{\ell - 1}$ and $\activations{\cdot k}{\ell - 1}$ are independent for any $j \neq k$,
and the~variables $\{ \activations{n, \cdot j}{\ell - 1} \colon j \in \mathbb{N} \}$ are exchangeable over $j$.
Then as $ \min \, \{ n, \headSymbol_{\sequenceVariable}^{\depthSymbol} , \genericDimenstion_{\sequenceVariable}^{\depthSymbol, \logitSymbol} \} \to \infty \, $:
\vspace{-0.5\baselineskip}
\begin{enumerate}[\hspace{-0.5em}(I)]
\renewcommand\labelenumi{\emph{\bfseries(\theenumi)}}
\item
$G_{n}^{\ell} = \{ \logit{\sequenceVariable}{\depthSymbol \headIndex}(x) \colon x \in \mathcal{X} , h \in \mathbb{N} \}$ converges in distribution to $G^{\ell} \sim \mathcal{GP} (0, \kappa^{\ell, G})$ with
\begin{align}\label{eq:logit_cov}
\E [G_{a i}^{\ell h}(x) G_{b j}^{\ell h'}(x')]
&=
\delta_{h = h'}
%
\kerntildef{a b}{\ell}{x}{x'} \,
\kerntildef{i j}{\ell}{x}{x'}
%
\, .
\end{align}
\item
$\activations{\sequenceVariable}{\depthSymbol}$ converges in distribution to $f^{\ell} \sim \mathcal{GP}(0, \kappa^{\ell})$ with
\begin{align}\label{eq:sqrt_scaling_kernel}
&\kernelf{a b}{\ell}{x}{x'}
=
\E [
f_{a1}^{\ell}(x)
f_{b1}^{\ell}(x')
]
\\
&=
%
%
\sum_{i , j = 1}^{\genericDimenstion^s}
\kerntildef{i j}{\ell}{x}{x'}
\E [\zeta(G^{\ell 1}(x))_{a i} \zeta(G^{\ell 1}(x'))_{b j}]
\nonumber
\, ,
\end{align}
and $\activations{\cdot k}{\ell}$ and $\activations{\cdot l}{\ell}$ are independent for any $k \neq l$.
\end{enumerate}
\end{restatable}
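For Gaussian $\weightMatSymbol^{Q}, \weightMatSymbol^{K}$ acting on fixed features, the covariance in \Cref{eq:logit_cov} holds as an exact identity at any finite $d$ (only the Gaussianity of $G$ requires the limit). The Monte Carlo sketch below (toy shapes and sample sizes are arbitrary choices of ours) checks this against the empirical kernel $\tilde{\kernel} = f f^\top / n$ of one fixed input:

```python
import numpy as np

rng = np.random.default_rng(2)
s, n, d, M = 4, 64, 32, 3000

f = rng.normal(size=(s, n))   # fixed previous-layer activations for one input x
kt = f @ f.T / n              # empirical kernel kappa~(x, x)

acc = np.zeros((s, s, s, s))
for _ in range(M):
    w_q = rng.normal(size=(n, d)) / np.sqrt(n)
    w_k = rng.normal(size=(n, d)) / np.sqrt(n)
    g = (f @ w_q) @ (f @ w_k).T / np.sqrt(d)   # d^{-1/2}-scaled logits G(x)
    acc += np.einsum('ai,bj->aibj', g, g)

est = acc / M                              # Monte Carlo estimate of E[G_ai G_bj]
target = np.einsum('ab,ij->aibj', kt, kt)  # predicted kappa~_ab * kappa~_ij
err = np.abs(est - target)
```

The estimation error is dominated by Monte Carlo noise and shrinks as the number of weight samples grows.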
We can now revisit our argument from the previous section, and prove that unlike in \Cref{prop:linear_scale_no_conv}, $d^{-1/2}$ scaling ensures a convolutional kernel can in principle be recovered.
\begin{restatable}{proposition}{sqrtConvRecover}
\label{prop:sqrt_scale_conv_recover}
Under the $d^{-1/2}$ scaling, there exists a distribution over $G$ such that
for any $x, x'$ and $a, b, i, j$
\begin{align}
&\E [
\zeta(G(x))_{a i}
\zeta(G(x'))_{b j}
]
\nonumber
\\
&=
\begin{cases}
\frac{1}{d_f}\, , & \exists k \in [d] \text{ s.t.\ } i = N_a(k) \, , j = N_b(k) \, , \\
0 \, ,& \text{ otherwise.}
\end{cases}
\end{align}
\end{restatable}
\section{Beyond the vanilla attention definition}\label{sect:beyon_vanilla_attn}
Before progressing to empirical evaluation of infinitely wide attention architectures,
two practical considerations have to be addressed: (i)~the $d^{-1/2}$ scaling induced kernel in \Cref{eq:sqrt_scaling_kernel} involves an analytically intractable integral $\E [\zeta(G^{\ell 1}(x)) \zeta(G^{\ell 1}(x'))]$; (ii)~incorporation of positional encodings \citep{gehring2017convolutional,vaswani2017attention}.
\subsection{Alternatives to softmax in attention networks}\label{sect:softmax_alternatives}
We propose to resolve the analytical intractability of the $\E [\zeta(G^{\ell 1}(x)) \zeta(G^{\ell 1}(x'))]$ in \Cref{eq:sqrt_scaling_kernel} by substituting functions other than softmax for $\zeta$.
In particular, we consider two alternatives:
(i)~$\zeta(x) = \text{ReLU}(x)$, and (ii)~$\zeta(x) = x$, both applied elementwise. Besides analytical tractability of the expectation, our motivation for choosing (i) and (ii) is that ReLU removes the normalisation while still enforcing positivity of the attention weights, while the identity function allows the attention layer to learn an arbitrary linear combination of the values without constraints.
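For $\zeta(x) = x$, the expectation in \Cref{eq:sqrt_scaling_kernel} reduces to the logit covariance, giving (in the absence of positional encodings) the closed form $\kernelf{a b}{}{x}{x'} = \kerntildef{a b}{}{x}{x'} \sum_{i,j} \big( \kerntildef{i j}{}{x}{x'} \big)^2$. The sketch below compares this closed form with a Monte Carlo estimate on toy correlated features; all shapes, constants, and the correlation structure are illustrative assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
s, n, d, M = 3, 64, 32, 4000

fx = rng.normal(size=(s, n))                    # activations for input x
fxp = 0.8 * fx + 0.6 * rng.normal(size=(s, n))  # correlated activations for x'
kt = fx @ fxp.T / n                             # cross-kernel kappa~(x, x')

# Closed form with zeta = identity:
#   kappa_ab(x, x') = kappa~_ab(x, x') * sum_{ij} kappa~_ij(x, x')^2
closed = kt * (kt ** 2).sum()

# Monte Carlo version of the kernel, sharing W_Q, W_K across x and x'.
acc = np.zeros((s, s, s, s))
for _ in range(M):
    w_q = rng.normal(size=(n, d)) / np.sqrt(n)
    w_k = rng.normal(size=(n, d)) / np.sqrt(n)
    gx = (fx @ w_q) @ (fx @ w_k).T / np.sqrt(d)
    gxp = (fxp @ w_q) @ (fxp @ w_k).T / np.sqrt(d)
    acc += np.einsum('ai,bj->aibj', gx, gxp)

# sum_ij kappa~_ij * E[G_ai(x) G_bj(x')]
mc = np.einsum('ij,aibj->ab', kt, acc / M)
```

The Monte Carlo estimate agrees with the closed form up to sampling noise, which is precisely the tractability that motivates the identity choice.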
To see if either is a sensible modification, we evaluated performance of \emph{finite} attention networks on CIFAR-10 for different choices of $\zeta$.
Since softmax typically dampens the marginal variance of attention layer outputs (the variance of a convex combination of random variables is upper bounded by the maximum of the individual variances), and both ReLU and identity can also significantly affect the scale of the outputs, we optionally add layer normalisation as is common in attention architectures.
We consider no normalisation (\texttt{none}),\footnote{Despite the similarity between attention with ReLU or identity for $\zeta$ and dense layers with cubic nonlinearities, which are known to be hard to train, we found that layer normalisation was not strictly necessary.
We believe this is partly because we only used a single attention layer, and partly because the weights for keys, queries, and values are initialised independently which leads to relatively better behaved distribution of gradients at initialisation.}
normalisation applied after each head prior to multiplication by $\weightMatSymbol_{\sequenceVariable}^{\depthSymbol, \outputSymbol}$ (\texttt{per\_head}), and normalisation applied to the output after $\weightMatSymbol_{\sequenceVariable}^{\depthSymbol, \outputSymbol}$ (\texttt{at\_output}).
\Cref{fig:softmax_replacements} shows the results across varying hyperparameters and random seeds, and \Cref{tab:finite_attention_best} (\Cref{sect:replacements_appendix}) reports accuracies attained under optimal hyperparameter settings.
As the figure shows, both the replacement of softmax and the addition of layer normalisation significantly increase the performance of the NN, with $\zeta(x) = x$ and \texttt{at\_output} normalisation being the best across a variety of hyperparameter choices.
In light of the above, we will restrict our attention to the identity function alternative for $\zeta$ in the rest of the paper, and contrast its performance with the standard softmax choice where possible (finite NNs, and infinite attention NNs under the $d^{-1}$ scaling---see \Cref{thm:linear_scaling_limit}).
Similarly, we will also leverage the \texttt{at\_output} layer normalisation over the embedding dimension in our experiments.
As shown by \citet[appendix A]{yang2019v2}, layer normalisation does not prevent Gaussianity of the infinite width limit (see \Cref{tab:kernel_overview} for the associated NNGP and NTK kernel transformations).
\begin{figure}[tbp]
\begin{center}
\centerline{
\includegraphics[width=0.9\columnwidth,keepaspectratio]{fig/zeta_layernorm}
}
\vskip -0.15in
\caption{\textbf{Comparison of $\zeta$ alternatives.}
Architecture: 4x Convolution + ReLU, Attention, Flatten, Dense.
The captured variability is due to multiple random seeds, varying learning rate and network width, illustrating robustness of the reported results.
Softmax significantly underperforms other $\zeta$ alternatives whenever attention is followed by layer normalisation. Details in \Cref{sect:replacements_appendix}.}
\label{fig:softmax_replacements}
\end{center}
\vskip -0.25in
\end{figure}
\subsection{Positional encodings}\label{sect:positional_encodings}
While substituting the identity function for $\zeta$ as suggested in \Cref{sect:softmax_alternatives} would technically allow us to move on to the experimental evaluation already, we found that positional encodings are as important in the infinite width limit as they are for the finite attention layers \citep{vaswani2017attention}.
Since there are many possible variants of the positional encoding implementation, we focus only on the major points here and provide more detail in \Cref{sect:positional_encodings_appendix}.
In finite networks, some of the most common ways to implement positional encodings are to modify the attention layer input by either \texttt{add}-ing, $\indexedActivity{\ell - 1}{n}{x} + \posEmb{n}{\ell}$, or \texttt{append}-ing, $[ \indexedActivity{\ell - 1}{n}{x} \, , \posEmb{n}{\ell} ]$, a matrix
$\posEmb{n}{\ell}$
which may be either fixed or a trainable parameter.
The purpose of $\posEmb{n}{\ell}$ is to provide the attention layer with information about the relationships between individual spatial dimensions (e.g., position of a particular pixel in an image, or of a token in a string).
\subsubsection{Effect on the infinite width limit}
If we assume $\posEmb{n}{\ell}$ is trainable and each of its columns is initialised independently from $\mathcal{N} (0 , \covPosEmb{}{})$, with
$\covPosEmb{}{}$ positive semidefinite,
it can be shown that both in the \texttt{add} and \texttt{append} case, the attention layer output converges (in distribution) to a Gaussian infinite width limit (see \Cref{sect:positional_encodings_appendix}).
The corresponding kernels can be stated in terms of an operator $\mathcal{I}$ which interpolates any given kernel $\kappa$ with $\covPosEmb{}{}$
\begin{align}\label{eq:kernel_interp_op}
\mathcal{I}
\colon
\kernelf{}{}{x}{x'}
\mapsto
\alpha \kernelf{}{}{x}{x'}
+
(1 - \alpha)
\covPosEmb{}{}
\, ,
\end{align}
where $\alpha \in [0, 1]$ is a hyperparameter,\footnote{If $\posEmb{n}{\ell}$ is \texttt{append}-ed, $\alpha = \lim_{n \to \infty} \layerDimension{\ell - 1} / (\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}} + \layerDimension{\ell - 1})$ with $\genericDimenstion_{\sequenceVariable}^{\depthSymbol, \posEmb{}{}}$ the row space dimension of $\posEmb{n}{\ell}$.
When $\posEmb{n}{\ell}$ is \texttt{add}-ed, we replace $\indexedActivity{\ell - 1}{n}{x}$ by
$\sqrt{\alpha} \indexedActivity{\ell - 1}{n}{x} + \sqrt{1 - \alpha} \posEmb{n}{\ell}$
so as to prevent increase of the layer's input variance (see \Cref{sect:positional_encodings_appendix}).} yielding the following modification of the kernel induced by the $d^{-1}$ scaling and $\weightMatSymbol^{\querySymbol} = \weightMatSymbol^{\keySymbol}$ initialisation (\Cref{eq:fixed_determ_kernel}):
\begin{align}\label{eq:sum_pe_lin_scale_kernel}
\kernelf{a b}{}{x}{x'}
=
\bar{\softmax}_{a}^{x}
[
\mathcal{I} \circ
\kerntildef{}{}{x}{x'}
]
(\bar{\softmax}_{b}^{x'})^\top
\, ,
\end{align}
where
$
\bar{\softmax}_{a}^{x}
\coloneqq
\zeta (
\mathcal{I} \circ
\kerntildef{}{}{x}{x}
)_{a \cdot}
$
and similarly for $\bar{\softmax}_{b}^{x'}$.
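Concretely, \Cref{eq:sum_pe_lin_scale_kernel} amounts to a few matrix operations once the $\tilde{\kernel}$ blocks are available. The NumPy sketch below (toy kernel blocks, $\covPosEmb{}{} = I$, and $\alpha$ chosen arbitrarily; not the released implementation) evaluates the kernel and checks that the resulting same-input block is a valid covariance matrix:

```python
import numpy as np

def softmax_rows(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def interp(k, r, alpha):
    """The operator I: kappa |-> alpha * kappa + (1 - alpha) * R."""
    return alpha * k + (1 - alpha) * r

rng = np.random.default_rng(4)
s, alpha = 3, 0.7
a = rng.normal(size=(s, 2 * s))
b = rng.normal(size=(s, 2 * s))
k_xx = a @ a.T / (2 * s)       # kappa~(x, x)
k_xxp = a @ b.T / (2 * s)      # kappa~(x, x')
k_xpxp = b @ b.T / (2 * s)     # kappa~(x', x')
r = np.eye(s)                  # PE covariance; rho * I is the common default

pi_x = softmax_rows(interp(k_xx, r, alpha))      # rows are pi-bar_a^x
pi_xp = softmax_rows(interp(k_xpxp, r, alpha))   # rows are pi-bar_b^{x'}
kernel = pi_x @ interp(k_xxp, r, alpha) @ pi_xp.T
kernel_xx = pi_x @ interp(k_xx, r, alpha) @ pi_x.T  # same-input block
```

The same-input block is symmetric positive semidefinite by construction, since it is a congruence of the interpolated positive semidefinite block.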
The modification of the kernel induced by the $d^{-1/2}$ scaling, $\weightMatSymbol^{\querySymbol}, \weightMatSymbol^{\keySymbol}$ initialised independently, and $\zeta$ replaced by the identity function (\Cref{eq:sqrt_scaling_kernel}), then leads to:
\begin{align}\label{eq:sum_pe_sqrt_scale_kernel}
\kernelf{a b}{}{x}{x'}
=
\mathcal{I} \circ
\kerntildef{a b}{}{x}{x'}
\!
\sum_{i, j=1}^{\genericDimenstion^s}
[
\mathcal{I} \circ
\kerntildef{i j}{}{x}{x'}
]^2
.
\end{align}
Several comments are in order.
Firstly, the typical choice of the initialisation covariance for $\posEmb{n}{\ell}$ is $\covPosEmb{}{} = \rho I$, $\rho > 0$. This may be reasonable for the
$
\bar{\softmax}_{a}^{x}
=
\zeta (
%
\mathcal{I} \circ
\kerntildef{}{}{x}{x}
)_{a \cdot}
$
in \Cref{eq:sum_pe_lin_scale_kernel} when $\zeta$ is the softmax function as it increases attention to the matching input spatial dimension, but does not seem to have any ``attention-like'' interpretation in \Cref{eq:sum_pe_sqrt_scale_kernel} where the effect of applying $\mathcal{I}$
to $\tilde{\kernel}^{}$ with $\covPosEmb{ }{} = \rho I$
is essentially analogous to that of just adding i.i.d.\ Gaussian noise to each of the attention layer inputs.
Secondly, the right hand side of \Cref{eq:sum_pe_sqrt_scale_kernel} is just a scaled version of the discussed $\mathcal{I} \circ \tilde{\kernel}$ kernel,
with the scaling constant disappearing when the attention layer is followed by layer normalization (\Cref{tab:kernel_overview}).
Both of these observations call into question whether the performance of the corresponding finite NN architectures will translate to their infinite width equivalents.
We address some of these issues next.
\subsubsection{Structured positional encodings}\label{sect:struct_pe}
As mentioned, the main purpose of positional encodings is to inject structural information present in the inputs which would be otherwise ignored by the attention layer.
A natural way to resolve the issues discussed in the previous section is thus to try to incorporate similar information directly into the $\covPosEmb{}{}$ covariance matrix.
In particular, we propose
\begin{align}\label{eq:decay_pos_emb}
\covPosEmb{a b}{}
=
\rho
\begin{cases}
\exp \{ -\varphi [r_{h}(a, b)^2 + r_{v}(a, b)^2 ] \}
& \text{(image)} \\
\exp \{ -\varphi \, r_{s}(a, b)^2 \}
& \text{(string)}
\end{cases}
%
%
%
%
\end{align}
where $\rho, \varphi > 0$ are hyperparameters, $r_h (a, b)$ and $r_v(a, b)$ are the absolute horizontal and vertical distances between the pixels $a$ and $b$ divided by the image width and height respectively, and $r_s(a, b)$ is the absolute distance between the relative positions of tokens $a$ and $b$, e.g., if $a$ is the 4\textsuperscript{th} token out of 7 in the first, and $b$ is the 2\textsuperscript{nd} token out of 9 in the second string, then $r_s(a, b) = |\frac{4}{7} - \frac{2}{9}|$.
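As a concrete illustration, the image-case covariance in \Cref{eq:decay_pos_emb} for a toy $3 \times 3$ image can be assembled directly (the hyperparameter values below are arbitrary); the resulting matrix is symmetric, has $\rho$ on the diagonal, and is positive semidefinite since it is a Gaussian kernel evaluated on the pixel grid:

```python
import numpy as np

rho, phi = 1.0, 4.0
width = height = 3                          # toy 3x3 image, pixels indexed row-major
coords = [(i // width, i % width) for i in range(width * height)]

size = width * height
r_cov = np.empty((size, size))
for a, (row_a, col_a) in enumerate(coords):
    for b, (row_b, col_b) in enumerate(coords):
        r_h = abs(col_a - col_b) / width    # horizontal distance / image width
        r_v = abs(row_a - row_b) / height   # vertical distance / image height
        r_cov[a, b] = rho * np.exp(-phi * (r_h ** 2 + r_v ** 2))
```

Nearby pixels thus receive large covariance entries, which is exactly the structural information the attention weights inherit through $\mathcal{I}$.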
To motivate the above definition, let us briefly revisit \Cref{eq:sum_pe_lin_scale_kernel}.
Intuitively, the $d^{-1}$ kernel
$
\bar{\softmax}_{a}^{x}
[
\mathcal{I} \circ
\kerntildef{}{}{x}{x'}
]
(\bar{\softmax}_{b}^{x'})^\top
$
is a result of multiplying the asymptotically Gaussian values $\val{}{} \sim \mathcal{GP} (0, \mathcal{I} \circ \tilde{\kernel})$ by matrices of row-wise stacked $\bar{\softmax}^x = [ \bar{\softmax}_1^x ; \ldots ; \bar{\softmax}_{\genericDimenstion^s}^x]$ vectors, e.g.,
$
\indexedActivation{}{}{x} =
\bar{\softmax}^x
\val{}{}(x)
$,\footnote{By standard Gaussian identities, if $Z \sim \mathcal{N} (0, \Sigma)$, and $A$ is a deterministic matrix, then $A Z \sim \mathcal{N} (0, A \Sigma A^\top).$}
meaning that the $\bar{\softmax}$
vectors serve the role of attention weights in the infinite width limit.
This in turn implies that the greater the similarity under $\kerntildef{a b}{}{x}{x}$, the higher the attention paid by $a$ to $b$.
Thus, if we want to inject information about the relevance of neighbouring pixels in an image or tokens in a string, we need to increase the corresponding entries of
$
\mathcal{I} \circ
\kerntildef{}{}{x}{x}
=
\alpha \kerntildef{}{}{x}{x'}
+
(1 - \alpha)
\covPosEmb{}{}
$
which can be achieved exactly by substituting the $\covPosEmb{}{}$ from \Cref{eq:decay_pos_emb}.
The above reasoning only provides the motivation for modifying the attention weights using positional encodings but not necessarily for modifying the asymptotic distribution of the values $\val{}{}$.
Adding positional encodings only inside the $\zeta$ is not uncommon \citep[e.g.,][]{shaw2018self}, and thus we will also experiment with kernels induced by adding positional encodings
only to the inputs of $\query{n}{}$ and $\key{n}{}$, leading to
\begin{align}\label{eq:decay_lin_scale_kernel}
\kernelf{a b }{}{x}{x'}
=
\bar{\softmax}_{a}^{x}
%
\kerntildef{}{}{x}{x'}
%
(\bar{\softmax}_{b}^{x'})^\top
\, ,
\end{align}
under the $d^{-1}$ scaling (cf.\ \Cref{eq:sum_pe_lin_scale_kernel}), and
\begin{align*} %
\kernelf{a b}{}{x}{x'}
=
\mathcal{I} \circ
\kerntildef{a b}{}{x}{x'}
\!
\sum_{i, j=1}^{\genericDimenstion^s}
\kerntildef{i j}{}{x}{x'}
%
\mathcal{I} \circ
\kerntildef{i j}{}{x}{x'}
%
\, ,
\end{align*}
under the $d^{-1/2}$ scaling (cf.\ \Cref{eq:sum_pe_sqrt_scale_kernel}).
Finally, note that the last kernel remains a scaled version of the aforementioned $\mathcal{I} \circ \tilde{\kernel}$ kernel, albeit now with $\covPosEmb{}{}$ as in \Cref{eq:decay_pos_emb}.
In our experience, using just $\mathcal{I} \circ \tilde{\kernel}$ without the scaling leads to improved empirical performance, and further gains can be obtained with the related kernel
\begin{align}\label{eq:residual_kernel}
\kernelf{a b }{}{x}{x'}
=
\alpha \kerntildef{a b}{}{x}{x'}
+
(1 - \alpha)
\covPosEmb{a \cdot}{}
\kerntildef{}{}{x}{x'}
\covPosEmb{b \cdot}{\top}
\, .
\end{align}
We call \Cref{eq:residual_kernel} the \emph{residual} attention kernel, as it can be obtained as a limit of architecture with a skip connection, $\indexedActivation{\ell}{n}{x} = \sqrt{\alpha} \indexedActivity{\ell - 1}{n}{x} + \sqrt{1 - \alpha} \tildeActivation{\ell}{n}{x}$, where $\tildeActivation{\ell}{n}{x}$ is output of an attention layer (details in \Cref{sect:residual_attention_appendix}).
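The residual kernel is again only a few matrix products. The sketch below builds the string-case $\covPosEmb{}{}$ from \Cref{eq:decay_pos_emb} and checks that \Cref{eq:residual_kernel} preserves symmetry and positive semidefiniteness of the same-input block (the toy sizes and hyperparameters are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
s, alpha, rho, phi = 5, 0.6, 1.0, 2.0

# Structured PE covariance for a length-s sequence (string case of the decay formula).
pos = np.arange(s) / s
r_cov = rho * np.exp(-phi * (pos[:, None] - pos[None, :]) ** 2)

f = rng.normal(size=(s, 3 * s))
kt = f @ f.T / (3 * s)     # toy positive semidefinite block kappa~(x, x)

# Residual attention kernel: alpha * kappa~ + (1 - alpha) * R kappa~ R^T.
kernel = alpha * kt + (1 - alpha) * r_cov @ kt @ r_cov.T
```

Both summands are positive semidefinite (the second is a congruence of $\tilde{\kernel}$), so the residual kernel is a valid covariance for any $\alpha \in [0, 1]$.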
\section{Experiments}
We evaluate the attention NNGP/NTK kernels on the CIFAR-10 \citep{cifar10} and IMDb reviews \citep{maas2011learning} datasets.
While IMDb is a more typical setting for attention models (\Cref{sec:imdb}), we included CIFAR-10 experiments (\Cref{sec:cifar}) due to the desire to compare with other NNGPs/NTKs on an established benchmark \citep[e.g.,][]{novak2019bayesian,du2019gd,li2019enhanced},
and the recent successes of attention on vision tasks \citep[e.g.,][]{wang2017residual,wang2018non,hu2018squeeze,woo2018cbam,chen2018,ramachandran2018stand,bello2019attention}.
Our experimental code utilises the JAX \citep{jax2018github} and Neural Tangents \citep{novak2020neural} libraries.
\begin{figure}[tp]
\begin{center}
\centerline{
\includegraphics[width=\columnwidth,keepaspectratio]{fig/depth_all_logx.pdf}
}
\vskip -0.15in
\caption{
\textbf{Validation accuracy as a function of depth for various NNGP kernels} on a 2K/4K train/validation split of CIFAR-10 (no pixel downsampling).
Architecture: \texttt{[depth]}x Convolution + ReLU, followed by a single instance of the kernel specified in the legend (attention kernels combined with additional Flatten), and Dense.
See \Cref{tab:kernel_overview} for attention, and \citep{novak2019bayesian,garriga2019deep} for Convolutional, Flatten, and Global Average Pooling (GAP) kernel descriptions.
Results reported for best hyperparameters ($d^{-1}$ scaling generally resulted in better performance for the \texttt{Struct} kernel).
More experimental details in \Cref{sect:depth_appendix}.
Notice the improved performance of attention kernels with positional embeddings and layer normalisation (i.e., \texttt{Struct}, \texttt{Residual}) over their \texttt{Vanilla} counterpart.
}
\label{fig:cifar_pe}
\end{center}
\vskip -0.25in
\end{figure}
\subsection{CIFAR-10}\label{sec:cifar}
We have run two types of experiments on CIFAR-10:
(i)~smaller scale experiments focused on understanding how different hyperparameters of the attention kernel affect empirical performance;
(ii)~a larger scale experiment comparing attention kernels to existing NNGP/NTK benchmarks.
The smaller scale experiments were run on a randomly selected subset of six thousand observations from the \emph{training} set, with the 2K/4K train/validation split.
This subset was used in \Cref{fig:convergence_plots,fig:cifar_pe}, and for hyperparameter tuning.
Selected hyperparameters were then employed in the larger scale experiment with the usual 50K/10K train/test split.
All kernels evaluated in this section correspond to NN architectures composed of multiple stacked convolutional layers with ReLU activations, followed by either simple flattening, global average pooling (GAP), or one of our attention kernels itself followed by flattening and, except for the \texttt{Vanilla} attention case (see \Cref{tab:kernel_overview}), also by layer normalisation;
the output is then computed by a single dense layer placed on top.
The choice to use only one attention layer was made to facilitate comparison with \citep{novak2019bayesian,du2019gd,li2019enhanced} where the same set-up with a stack of convolutional layers was considered.
Adding more attention layers did not, however, result in significant gains during hyperparameter search.
Exact details regarding data normalisation, hyperparameter tuning, and other experimental settings can be found in \Cref{sect:experimental_details}.
\begin{table}[tbp]
\caption{\textbf{CIFAR-10 test accuracies} of attention kernels and existing NNGP/NTK alternatives.
The standard 50K/10K train/test split is used (no pixel downsampling).
Best hyperparameters from the 2K/4K subset experiments used for each kernel, $d^{-1}$ scaling for the \texttt{Struct} kernel (see \Cref{tab:kernel_overview}). Details in \Cref{sect:full_cifar_appendix}.}
\label{tab:full_data_results}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\noindent
\begin{tabularx}{\columnwidth}{X l c}
\toprule
Kernel & NNGP & NTK \\
\midrule
Flatten & 65.54 & 66.27 \\
GAP \cite{li2019enhanced} & 77.85 & 77.39 \\
LAP \cite{li2019enhanced} & 80.36 & 79.71 \\
%
Struct & 80.55 & 79.93 \\
Residual & \textbf{80.72} & \textbf{80.10} \\
\bottomrule
\end{tabularx}
\end{sc}
\end{small}
\end{center}
\vskip -0.18in
\end{table}
The most important observations from the smaller scale experiments are captured in \Cref{fig:cifar_pe} which shows the validation accuracy of various NNGP models as a function of kernel choice and number of convolutional layers (\texttt{depth}) preceding the final flatten/GAP/attention plus dense block.
Firstly, notice that except for the \texttt{Flatten} model, all other kernel choices achieve their best performance at smaller depths which is consistent with existing literature \citep{arora2019,li2019enhanced}.
Secondly, observe that both the \texttt{Struct} and \texttt{Residual} attention kernels significantly outperform the \texttt{Vanilla} one, demonstrating that the use of positional embeddings and layer normalisation is helpful even in the infinite width limit as claimed in \Cref{sect:positional_encodings}.
In contrast, we did not find significant evidence for $\zeta(x) = x$ outperforming the standard softmax choice as was the case for finite networks (see \Cref{fig:softmax_replacements}), with the best set of hyperparameters for the \texttt{Struct} $d^{-1}$ kernel with softmax being only marginally better than the best results with the identity function (recall that no $d^{-1/2}$ kernels use $\zeta = \text{softmax}$ due to the intractability discussed in \Cref{sect:beyon_vanilla_attn}).
This finding provides hope that the $d^{-1/2}$ kernels also do not sacrifice much in terms of performance by using identity for $\zeta$, but also points to salient differences between the qualitative effects of individual hyperparameter choices in finite and infinite attention layers.
Using the insights from the smaller scale experiments, we ran the larger scale experiment on the full dataset using eight layer models and the \texttt{Struct} and \texttt{Residual} attention kernels.
We used the positional embedding covariance matrix defined in \Cref{eq:decay_pos_emb} in both cases, and $d^{-1}$ with softmax for the \texttt{Struct} kernel (further details in \Cref{sect:full_cifar_appendix}).
The results can be found in \Cref{tab:full_data_results}.
As the table shows, attention performs significantly better than the \texttt{GAP} kernel \citep{arora2019}, and also provides a moderate improvement over the recent local average pooling (\texttt{LAP}) results \citep{li2019enhanced}.
Since we used the validation accuracy from smaller scale experiments to determine our hyperparameters, we are comparing against the best cross-validation results from \citep{li2019enhanced} for fairness.
\begin{table}[t]
\caption{\textbf{IMDb sentiment classification, test accuracies} of simple NNGP/NTK models on the 25K/25K train/test split using GloVe word embeddings (\citet{pennington2014glove}; 840B.300d). \texttt{GAP-only} corresponds to a single global average pooling layer followed by a linear fully connected readout. \texttt{GAP-FCN} has 2 ReLU fully connected layers after GAP. \texttt{Struct} has an attention layer preceding GAP, followed by one (NNGP) or two (NTK) fully connected layers. Models selected on a validation set of 10K reviews. Details in \Cref{sect:imdb_full_appendix}.}
\label{tab:imdb_full_results}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabularx}{\columnwidth}{X l c}
\toprule
Kernel & NNGP & NTK \\
\midrule
GAP-only & \multicolumn{2}{c}{--\,\,\,\,\,84.98\,\,\,\,\,--} \\
GAP-FCN & 85.82 & 85.80 \\
Struct & \textbf{86.09} & \textbf{86.09} \\
\bottomrule
\end{tabularx}
\end{sc}
\end{small}
\end{center}
\vskip -0.18in
\end{table}
\subsection{IMDb reviews}\label{sec:imdb}
Although there has been interest in applying attention in vision, to date it has been predominantly recognized for performance on language tasks. However, most of the available NNGP/NTK kernel implementations \citep{matthews2018gaussian, lee2018deep, garriga2019deep, arora2019, yang2019v2, li2019enhanced} are hard-coded for the specific experiments performed in the respective papers. Neural Tangents \citep{novak2020neural} allows for some flexibility, yet still accepts only inputs of fixed length with exactly zero (i.e., inputs to fully connected networks) or two (images for CNNs) spatial dimensions.
We release code allowing use of NNGP/NTK models (with or without attention) on inputs of variable spatial extent and arbitrary dimensionality (e.g., one spatial dimension for texts and time series, three spatial dimensions for videos).
Our implementation seamlessly extends the Neural Tangents library, enabling research and application of NNGP and NTK models to new domains with almost no extra effort.
As an example, we present the first benchmarks of simple NNGP and NTK models on the IMDb sentiment classification dataset in \Cref{tab:imdb_full_results}. We observe that \texttt{Struct} kernels outperform the \texttt{GAP-only} kernel (corresponding to linear regression on the mean of the word embeddings), but provide only marginal benefit compared to a fully connected model on top of the pooling layer (\texttt{GAP-FCN}). We conjecture this is due to high-quality word embeddings partially incorporating the inductive bias of the considered models. Indeed, we further demonstrate this effect by contrasting the gaps in performance between different kernel families on high- and low-quality word embeddings in \Cref{tab:imdb_small_results}.
\begin{table}[t]
\caption{\textbf{IMDb sentiment classification, test accuracies} on a 3.2K/1.6K train/test split. When high-quality word embeddings are used (300-dimensional GloVe trained on 840B tokens), complex models yield diminishing returns. Contrarily, simple embeddings (50-dimensional GloVe trained on 6B tokens) lead to significant gaps in model performance due to respective inductive biases ($\texttt{GAP-only} < \texttt{GAP-FCN} << \texttt{CNN-GAP} \approx \texttt{Struct}$). Models selected on a validation set of 1.6K reviews. Details in \Cref{sect:imdb_small_appendix}.}\label{tab:imdb_small_results}
\vskip 0.15in
\begin{sc}
\begin{small}
\begin{tabularx}{\columnwidth}{X X c c c}
\toprule
\multicolumn{2}{l}{Embeddings:} & GloVe 840B & GloVe 6B \\
\multicolumn{2}{l}{(dimension)} & (300) & (50) \\
\midrule
\multicolumn{2}{l}{GAP only} & 83.81 & 73.00 \\
\midrule
\multirow{3}{*}{NNGP} & GAP-FCN & 83.75 & 74.44 \\
& CNN-GAP & 84.69 & 81.00 \\
& Struct & 83.56 & 80.88 \\
\midrule
\multirow{3}{*}{NTK} & GAP-FCN & 83.81 & 74.88 \\
& CNN-GAP & \textbf{84.88} & 80.31 \\
& Struct & 84.00 & \textbf{81.06} \\
\bottomrule
\end{tabularx}
\end{small}
\end{sc}
\vskip -0.18in
\end{table}
Naturally, our sample IMDb results are not competitive with the state-of-the-art, which achieves up to 97.4\% \citep[Table 4]{thongtan-phienthrakul-2019-sentiment}. However, we hope they will be a useful baseline for future research in infinite width sequence models, and that our codebase will substantially facilitate the process by enabling variable-length, arbitrary-dimensional input processing.
\section{Conclusion}
Unlike under the $d^{-1}$ scaling of $Q(x) K (x)^\top$ proposed by \citet{yang2019v2}, the standard $d^{-1/2}$ scaling may lead to non-Gaussian asymptotic behaviour of attention layer outputs.
Gaussianity of the limit can however be obtained by taking the number of heads to infinity.
We explored the effect of positional encodings and replacements for the softmax function in attention layers, leading to improved performance for both finite and infinite attention architectures.
On CIFAR-10, attention improves moderately upon the previous state-of-the-art for GPs without trainable kernels and advanced data preprocessing \citep{li2019enhanced}.
We further released code allowing application of NNGP/NTK kernels to variable-length sequences and demonstrated its use on the IMDb reviews dataset.
While caution is needed in extrapolation of any results, we hope that particularly \Cref{fig:softmax_replacements} and \Cref{tab:full_data_results} inspire novel NN architectures and kernel designs.
\section*{Acknowledgements}
We thank Jaehoon Lee for frequent discussion, help with scaling up the experiments, and feedback on the manuscript. We thank Prajit Ramachandran for frequent discussion about attention architectures. We thank Greg Yang, Niki Parmar, and Ashish Vaswani, for useful discussion and feedback on the project. Finally, we thank Sam Schoenholz for insightful code reviews.
\section{Introduction}
\label{sec1}
Image super resolution (SR) is a problem of enhancing
the resolution of observed low resolution (LR) images.
The importance of super resolution is increasing because of the growing
need to remaster old films or to investigate low resolution
surveillance videos, for example. A large number of methods have been proposed for SR, and
among the most actively studied are those based on sparse
coding~\cite{OLS96,OLS05,Nat1995,elad2010sparse}, which is the focus of
this paper.
Sparse coding is a methodology in signal processing, where an observed
signal is approximated by a linear combination of simple components
called {\it{atoms}}.
A distinctive feature of sparse coding is that the number of atoms
prepared for the signal reconstruction is large, while the number of
atoms actually used for representing a signal is small. This nature of
sparse coding enables us to represent signals compactly and to remove
noise effectively.
Sparse coding has been used for super resolution from a single LR
image~\cite{yang1,Yang:2010:ISV:1892456.1892463,elad1} and, recently,
from multiple LR images~\cite{Kato2015}.
Multi-frame image SR is
expected to offer a clearer high resolution (HR) image than single-frame
SR, provided that the relative positions of the observed LR images are accurately
estimated. Indeed, estimation of the relative displacement or shift of
multiple images, which is referred to as {\it{image
registration}}~\cite{Zitov2003977}, plays a critical role in SR as well
as sparse coding.
In our previous work on image SR~\cite{Kato2015}, we treated the problems
of image registration and sparse coding separately. That is,
we first estimated the relative displacements of the observed LR images,
and then applied the sparse coding algorithm to the aligned LR
images. Since the objective of registering LR images is to realize
high-quality HR image restoration by sparse representation, it is natural
to perform image registration so that the error of the sparse image
representation is minimized. The contribution of this paper is to treat
the sub-pixel level image registration and sparse coding problems in a
unified framework. More concretely, we simultaneously estimate both
the displacements of the LR images and the coefficients of sparse coding under a single objective.
Theoretically, we cast the multi-frame SR problem into a particular
framework called {\it{double sparsity}}, which is an interesting
approach for sparse modeling~\cite{rubinstein,DBLP:conf/icip/ZhanZYHH13}.
The rest of this paper is organized as follows. Section~2 describes the
problem setting and the underlying model of multi-frame super
resolution, together with the sparse coding approach for super resolution.
Section~3 briefly explains how fine relative displacements (shifts) between
observations are expressed by combinations of pixel-level displacements.
Section~4 describes our
proposed approach for estimating displacements and sparse
coding coefficients in a unified framework. Section~5 shows the experimental
results, and the last section is devoted to concluding remarks.
\section{Notation and Formulation}
We first explain the notion of single-frame super resolution by
sparse coding, then extend it to multi-frame super resolution.
\subsection{Single Frame Super Resolution}
\label{sec:SFSR}
Let $\mathbf{Y} \in \mathbb{R}^{Q}$ be the observed LR image. The
aim of super resolution is to construct an HR image $\mathbf{X} \in
\mathbb{R}^{P}$ from $\mathbf{Y}$.
To reduce computational costs, image super
resolution is often performed on small image regions called {\it{patches}},
which are then combined to construct the whole image. Following this approach, we
consider reconstructing an HR image patch represented by a vector
${\bf{x}} \in \mathbb{R}^{p}$ by using a single LR image patch ${\bf{y}} \in \mathbb{R}^{q}$.
After obtaining all the HR patches, a certain post-processing step
for constructing the full-size HR image is performed, which is explained
in detail in Sections 5.2 and 5.3 of our previous paper~\cite{Kato2015}.
For each patch, we assume the following degradation process:
each patch pair
$(\mathbf{x}, \mathbf{y})$ is connected by the observation model
\begin{align}
\label{eq:Smodel_patch}
\mathbf{y} = G\mathbf{x} + \bm{\varepsilon},
\end{align}
where $G$ is a degradation operator composed of blur and down-sampling
operations, and $\bm{\varepsilon} \in \mathbb{R}^{q}$ is the additive
observation noise.
Sparse coding~\cite{OLS96,elad2010sparse} is a methodology to represent observed signals with combinations
of only a small number of basis vectors chosen from a large number of candidates.
These basis vectors will be called {\it atoms} henceforth.
Let $\mathbf{D} = [\mathbf{d}_1,\mathbf{d}_2,\dots,\mathbf{d}_K] \in
\mathbb{R}^{q \times K}$ be a {\it{dictionary}} which consists of $K$
atoms, and let ${\boldsymbol \alpha \in \mathbb{R}^{K}}$ be the coefficient
vector for sparse representation of the patch $\mathbf{y} \in
\mathbb{R}^{q}$. Typically, $K > q$. The problem of sparse coding is
formulated as follows:
\begin{align}
\underset{{{\boldsymbol{\alpha}}}}{\text{minimize}}
\| {\mathbf{y}}- \mathbf{D}{\boldsymbol
\alpha}\|_2^2 + \eta
\|{\boldsymbol \alpha}\|_1, \quad \eta >0,
\label{eq:bp}
\end{align}
where $\|{\boldsymbol \alpha}\|_{1} = \sum_{j=1}^{K} |\alpha_{j}|$ is
the $\ell^{1}$-norm of a vector.
This problem~\eqref{eq:bp} adopts the $\ell^1$-norm of coefficients as a
measure of sparsity, and is referred to as the $\ell^1$-norm sparse
coding. By minimizing both the approximation error and the $\ell^{1}$-norm of the coefficient of the
atoms for patch representation, the resultant coefficient $\boldsymbol
\alpha$ has only a few nonzero elements, and the observed patch
$\mathbf{y}$ is well approximated by using a small number of atoms.
More specifically, we call the problem of obtaining the coefficients
${\boldsymbol \alpha}$ with a fixed dictionary $\mathbf{D}$ {\it{sparse
coding}}.
On the other hand, the problem of optimizing the dictionary
${\mathbf{D}}$ with a set of observations
$\{{\mathbf{y}}_{i}\}_{i=1}^{n}$ is called {\it{dictionary
learning}}:
\begin{align}
\underset{{{\mathbf{D}}} }{\text{minimize}}
\sum_{i=1}^{n}
\| {\mathbf{y}_{i}}- \mathbf{D}{\boldsymbol
\alpha}_{i}\|_2^2,
\end{align}
where ${\boldsymbol{\alpha}}_{i}$ is given by solving the
problem~\eqref{eq:bp} for each ${\mathbf{y}}_{i}$.
There are a number of methods for dictionary
learning~\cite{Engan:1999:MOD:1257298.1257971,ksvd} and
sparse
coding~\cite{Rezaiifar93orthogonalmatching,Tibshirani94regressionshrinkage}. In
this work, we use algorithms for learning dictionary and coefficients
proposed in~\cite{DBLP:conf/nips/LeeBRN06} because of
their computational efficiency.
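As a concrete illustration of the $\ell^1$-norm sparse coding problem~\eqref{eq:bp}, the following minimal numpy sketch solves it with the iterative shrinkage-thresholding algorithm (ISTA) on a toy overcomplete dictionary. The dictionary, signal, and parameter values are hypothetical, and ISTA is only one of many applicable solvers (not necessarily the algorithm of~\cite{DBLP:conf/nips/LeeBRN06}).

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (element-wise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_sparse_code(y, D, eta, n_iter=1000):
    """Minimize ||y - D a||_2^2 + eta * ||a||_1 by ISTA."""
    L = 2.0 * np.linalg.norm(D, 2) ** 2     # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ a - y)      # gradient of the quadratic term
        a = soft_threshold(a - grad / L, eta / L)
    return a

# Toy overcomplete dictionary: q = 8 "pixels", K = 32 atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 32))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
a_true = np.zeros(32)
a_true[[3, 17]] = [1.0, -0.5]               # a 2-sparse ground truth
y = D @ a_true
a_hat = ista_sparse_code(y, D, eta=0.05)
```

Only a few entries of \texttt{a\_hat} are numerically nonzero, while the residual $\|{\bf y}-\mathbf{D}\hat{\boldsymbol\alpha}\|_2$ remains small, as intended by problem~\eqref{eq:bp}.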
We assume the HR patch $\bf{x}$ is represented by a sparse combination
of HR atoms as ${\bf{x}} = {\mathbf{D}}^{h} \boldsymbol \alpha$. Because
of relation \eqref{eq:Smodel_patch}, the LR patch ${\bf{y}}$ is
connected with the HR patch ${\bf{x}}$ as
\begin{align}
\mathbf{y} \simeq G {\mathbf{x}} = G \mathbf{D}^{h} {\boldsymbol \alpha} =
{\mathbf{D}^{l}} {\boldsymbol \alpha},
\label{eq_scsr}
\end{align}
where $\mathbf{D}^{l}$ is the LR dictionary, whose atoms are
generated from $\mathbf{D}^{h}$ by the image degradation process
explained above and have a one-to-one correspondence to the HR atoms.
The correspondence \eqref{eq_scsr} naturally leads us to the
following two-step procedure for single-frame super resolution: first, the LR patch is represented by a
sparse combination of LR atoms; second, the
corresponding HR patch is reconstructed by applying the obtained
combination coefficients to the HR atoms, as shown in
Fig.~\ref{fig:SCSR}.
\begin{figure}[!t]
\centering
\includegraphics[scale=.37]{./figs/ScSR2}
\caption{An illustrative diagram of super resolution by sparse coding.
\label{fig:SCSR}}
\end{figure}
Before performing this single-frame SR, an HR dictionary has to be
prepared by an appropriate dictionary learning algorithm with HR
training images.
\subsection{Multi-frame Super Resolution by Sparse Coding}
\label{sec:MFSR}
Suppose $N$ low resolution images $\mathbf{Y}_{i} \in \mathbb{R}^{Q},
i=1,\dots, N$ are observed, all of which are differently degraded
from a high resolution image $\mathbf{X} \in \mathbb{R}^{P}$.
Without loss of generality, we assume that the LR image
$\mathbf{Y}_{1}$ is the {\it{target}} for super resolution, and other
$N-1$ LR images are called {\it{auxiliary}} images.
We assume that each patch is generated by the following
image degradation process: for the target patch ${\mathbf{y}}_{1}$, the
degradation is modeled by Eq.~\eqref{eq:Smodel_patch}, while for the
auxiliary patch pairs $(\mathbf{x}, \mathbf{y}_{i}), i=2,\dots, N$, the
observation model
\begin{align}
\label{eq:Mmodel_patch}
\mathbf{y}_{i} = G W_{i}\mathbf{x} +
\bm{\varepsilon}
\end{align}
is assumed, where $W_{i}$ is the parallel shift and clipping operator
corresponding to the $i$-th LR patch, which is explained later.
We note that $G$ need not be identical for different images
${\mathbf{Y}}_{i}$, but we assume it is, for the sake of simplicity,
and concentrate on estimating $W_{i}, i=2,\dots,N$ for the $N-1$ different observations.
In our multi-frame SR, the target patch ${\bf{y}}_{1}$ is first extracted from the target
image $\mathbf{Y}_{1}$.
Then we consider estimating the shift of the
$i$-th image $\mathbf{Y}_{i}$.
We can roughly estimate the displacement of the target patch
${\bf{y}}_{1}$ in the auxiliary image $\mathbf{Y}_{i}$
by sub-pixel-accuracy matching in the LR space.
To avoid the negative effect of the non-overlapping
region and to use only the informative area, the pixels completely
included in the placed target patch are extracted as the $i$-th LR patch
${\bf{y}}_{i} \in \mathbb{R}^{q^{\prime}}$.
Since the target patch ${\mathbf{y}}_{1}$ is represented by
a vector of length $q$, its size in
the $2$-dimensional expression is $\sqrt{q} \times \sqrt{q}$ pixels. On the other
hand, the boundary-clipped patch is of size $q^{\prime} = (\sqrt{q}-1)
\times (\sqrt{q}-1)$.
Once $W_{i}, i=2,\dots,N$ are estimated, image SR based on sparse
coding is straightforward.
With the shift and clipping operators $\{W_{2},\dots,W_{N}\}$ of the
auxiliary patches ${\bf{y}}_{2},\dots,{\bf{y}}_{N} \in
\mathbb{R}^{q^{\prime}}$, the LR dictionaries corresponding to the
observations $\{{\bf{y}}_{i}\}_{i=1}^{N}$
are stacked to construct a stacked LR dictionary:
\begin{align}
\label{eq:joint}
\tilde{\mathbf{D}}^l &= \begin{bmatrix}
\mathbf{D}^l \\
\mathbf{D}^l_2 \\[3pt]
\mathbf{D}^l_3 \\
\vdots \\
\mathbf{D}^l_N
\end{bmatrix}
=
\begin{bmatrix}
G\mathbf{D}^h \\
G W_2\mathbf{D}^h \\
G W_3\mathbf{D}^h \\
\vdots \\
G W_{N}\mathbf{D}^h
\end{bmatrix}.
\end{align}
This dictionary is used to approximate the stacked LR patch
\begin{equation}
\label{eq:jointy}
\tilde{\bf{y}} =
\begin{bmatrix}
{\bf{y}}_{1}\\
{\bf{y}}_{2}\\
\vdots\\
{\bf{y}}_{N}
\end{bmatrix}
\end{equation}
by sparse coding, namely, the HR estimate $\hat{\bf{x}}$ is given by
\begin{align}
\hat{\boldsymbol \alpha} &= \underset{{\boldsymbol \alpha}}{\mathrm{argmin}}\;
\|\tilde{\bf{y}} - \tilde{\mathbf{D}}^{l} {\boldsymbol \alpha}\|_{2}^{2} +
\eta \|{\boldsymbol \alpha}\|_{1},\\
\hat{\bf{x}} &=
{\bf{D}}^{h} \hat{\boldsymbol \alpha}.
\end{align}
Each atom ${\boldsymbol d}^{h}$ in the HR dictionary $\mathbf{D}^{h}$ is
shifted and clipped to an atom of size $q^{\prime}$ by the action of
$W_{i}, i=2,\dots,N$, and then blurred and
down-sampled by the action of $G$ to form the corresponding LR atoms
$G W_{i} {\boldsymbol d}^{h},i=2,\dots,N$.
Each block of the stacked
dictionary in Eq.~\eqref{eq:joint} is composed of LR atoms obtained in
this manner.
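To make the construction of the stacked dictionary in Eq.~\eqref{eq:joint} concrete, the sketch below builds it for a 1-D toy setting, representing $G$ as a block-averaging (blur plus decimation) matrix and $W_i$ as explicit shift-and-clip matrices. All sizes, the averaging blur, and the chosen shifts are illustrative simplifications of the 2-D patch setting.

```python
import numpy as np

def downsample_matrix(p, k):
    # A toy G: average every k consecutive HR samples (blur + decimation), 1-D.
    q = p // k
    G = np.zeros((q, p))
    for i in range(q):
        G[i, i * k:(i + 1) * k] = 1.0 / k
    return G

def shift_clip_matrix(p, s, clip):
    # A toy W_i: shift an HR signal by s samples, then keep `clip` samples.
    W = np.zeros((clip, p))
    for i in range(clip):
        W[i, i + s] = 1.0
    return W

p, k, K = 12, 3, 20                      # HR patch length, factor, dictionary size
rng = np.random.default_rng(1)
Dh = rng.standard_normal((p, K))         # HR dictionary D^h
G = downsample_matrix(p, k)
shifts = [1, 2]                          # pixel-level HR shifts of two auxiliary frames
clip = p - k                             # clipped length after shifting (analogue of q')
blocks = [G @ Dh]                        # target block: G D^h
for s in shifts:                         # auxiliary blocks: G W_i D^h
    blocks.append(downsample_matrix(clip, k) @ shift_clip_matrix(p, s, clip) @ Dh)
D_stack = np.vstack(blocks)              # the stacked LR dictionary of Eq. (5)
```

In the paper, $G$ and $W_i$ act on 2-D patches and the same $G$ is shared across frames; here a smaller averaging matrix is applied to the clipped signal for simplicity.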
\section{Approximation of Displacements by Pixel-Level Shifts}
As discussed in the previous section, the main problem of multi-frame SR
is reduced to the problem of estimating shift and clipping operators
$W_{i}, i=2,\dots,N$.
In the following, we consider enhancing the resolution by a
magnification factor $k$. Since the LR target
patch ${\mathbf{y}}_{1} \in \mathbb{R}^{q}$ is represented by a square
with $\sqrt{q}$ pixels on a side, the corresponding
HR patch is a square with
$\sqrt{p} =k \sqrt{q}$ pixels on a side, namely, a vector of size $p=k^2 q$.
For estimating $W_{i}, i=2,\dots,N$,
we consider the upper left most point of the $i$-th auxiliary patch
${\bf{y}}_{i}$ as the origin $(0,0)$ of the shift, which is on the grid
of the $i$-th LR image $\mathbf{Y}_{i}$.
To achieve a satisfactory result, we consider the displacement of the
target LR patch with HR accuracy; that is, we place
the target LR patch on the LR image $\mathbf{Y}_{i}$ by using sub-pixel level matching.
Let the nearest grid point to the upper left of the origin in the $i$-th LR image
$\mathbf{Y}_{i}$ be $(k,k)$, and the upper-left-most
point of the placed target patch be $(a,b) \in [0,k]^{2}$.
There are two cases of displacement: pixel-level parallel shifts in the
HR image space, and others.
In the former case where $(a,b)$ are both integers, the LR dictionary
for the auxiliary patch can be simply constructed by clipping the
corresponding areas from the HR dictionary and degradation with $G$.
Those LR dictionaries of pixel-level parallel shifts $a,b = 0,1,\dots,k$
are denoted by ${\mathbf{D}}^{l(a,b)}$, and referred to as the {\it{LR
base dictionaries}} henceforth. In the latter case, we assume that
relative displacement of the auxiliary patch is well-approximated by a linear combination of
pixel-level parallel translation in the HR space. Namely, estimating
shift and clipping operators is reduced to estimating the combination
coefficients for possible $(k+1)^{2}$ parallel translations. The $i$-th
block of the LR dictionary ${\mathbf{D}}^{l}_{i}$ in
Eq.~\eqref{eq:joint} is then given by
\begin{align}
\notag
{\mathbf{D}}^{l}_{i}
&=
G W_{i} {\mathbf{D}}^{h} \\
\label{eq:Dlelem}
& =
\theta_{i,(0,0)}\mathbf{D}^{l(0,0)} +
\theta_{i,(0,1)}\mathbf{D}^{l(0,1)} + \cdots +
\theta_{i,(k,k)}\mathbf{D}^{l(k,k)},
\end{align}
where the LR base dictionaries are common to all LR observations, and
each observation is represented by its own coefficients $\theta_{i,(a,b)}$.
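The idea that a sub-pixel displacement is well approximated by a convex combination of pixel-level shifts can be checked on a 1-D toy signal: a shift by a fraction of a pixel is approximated by weighting the two neighbouring integer shifts. The signal and weights below are illustrative; the accuracy rests on the signal being smooth.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 200)   # HR sampling grid
h = t[1] - t[0]                          # HR pixel spacing
x = np.sin(t)                            # a smooth 1-D "image"

def int_shift(v, s, n=180):
    # Pixel-level shift by s samples with boundary clipping (a 1-D analogue of W).
    return v[s:s + n]

frac = 0.4                               # sub-pixel displacement in HR pixels
theta = np.array([1.0 - frac, frac])     # interpolation weights (sum to one)
approx = theta[0] * int_shift(x, 0) + theta[1] * int_shift(x, 1)
exact = np.sin(t + frac * h)[:180]       # ground-truth sub-pixel-shifted samples
err = np.max(np.abs(exact - approx))     # small for a smooth signal
```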
In our
previous work~\cite{Kato2015}, we took a 2-step procedure for
multi-frame SR. That is, we first estimated the parameters
$\theta_{i,(a,b)}$ by using the 2D simultaneous block matching method
proposed in~\cite{2dest2} because of its
computational efficiency. Then, we constructed the stacked LR dictionary in
Eq.~\eqref{eq:joint} and the stacked LR patches in Eq.~\eqref{eq:jointy},
and the HR patch was obtained by sparse coding.
In this work, instead of the 2-step procedure, we propose a novel
approach for estimating the sub-pixel accuracy shifts through an
optimization objective shared with sparse coding.
\section{Double Sparsity for Image Super Resolution}
In this section, we formalize the proposed method for estimating both
displacements of LR images and coefficients for sparse coding.
\subsection{Problem Formulation}
We start by showing that the stacked LR patch $\tilde{\bf{y}}$ in
Eq.~\eqref{eq:jointy} is approximated by a bilinear form. This is done
by expressing the stacked dictionary in
Eq.~\eqref{eq:joint} as
\begin{equation}
\label{eq:DtildetI}
\tilde{\mathbf{D}}^{l}
= {\mathbf{B}} \left( {\boldsymbol \theta} \otimes \mathbf{I}
\right),
\end{equation}
where $\otimes$ is the Kronecker product, $\mathbf{B}$ is defined by
\begin{displaymath}
{\scriptscriptstyle
\begin{bmatrix}
\mathbf{D}^l_{1} & & & & && \\
& \mathbf{D}^{l(0,0)} & \cdots & \mathbf{D}^{l(k,k)} & & & & & \\
& & & & \ddots & & \\
& & & & & \mathbf{D}^{l(0,0)} & & \cdots & \mathbf{D}^{l(k,k)} \\
\end{bmatrix}
},
\end{displaymath}
$\boldsymbol \theta$ is defined by
\begin{displaymath}
\begin{bmatrix} 1 , \theta_{2,(0,0)} , \theta_{2,(0,1)} , \hdots , \theta_{2,(k,k)} , \hdots, \theta_{N,(0,0)} , \hdots , \theta_{N,(k,k)} \end{bmatrix}^{\top},
\end{displaymath}
and ${\mathbf{I}}$ is the identity matrix of appropriate size.
The stacked LR patch is then approximated as
\begin{align}
\tilde{\mathbf{y}} \simeq \tilde{\mathbf{D}}^{l} {\boldsymbol \alpha}
&= {\mathbf{B}} \left( {\boldsymbol \theta} \otimes \mathbf{I} \right) {\boldsymbol \alpha} \\
&= {\mathbf{B}} \left( {\mathbf{I}} \otimes {\boldsymbol \alpha}\right) {\boldsymbol \theta} \\
& = \mathbf{B}\; \mathrm{vec}\left( {\boldsymbol
\alpha}{\boldsymbol \theta}^\top
\right),
\end{align}
where $\mathrm{vec}$ is the column-wise (column-stacking) vectorization operator.
In the above expression, ${\boldsymbol \alpha}$ is the coefficient
vector of sparse coding, and ${\boldsymbol \theta}$ is the shift
vector of LR observations with sub-pixel level shifts in the HR space.
We will
estimate ${\boldsymbol \alpha}$ and ${\boldsymbol \theta}$ from
the observed LR patches. Then, the estimated coefficient ${\boldsymbol
\alpha}$ is used for reconstructing the HR image as $\mathbf{x} =
\mathbf{D}^h {\boldsymbol \alpha}$.
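The two rewritings above rely on the standard Kronecker-product identities $({\boldsymbol \theta} \otimes \mathbf{I}){\boldsymbol \alpha} = (\mathbf{I} \otimes {\boldsymbol \alpha}){\boldsymbol \theta} = \mathrm{vec}({\boldsymbol \alpha}{\boldsymbol \theta}^{\top})$, with column-wise $\mathrm{vec}$. They can be verified numerically on random data of arbitrary (small) sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
M, K, J = 10, 4, 3                        # rows of B, len(alpha), len(theta)
B = rng.standard_normal((M, J * K))
alpha = rng.standard_normal(K)
theta = rng.standard_normal(J)
I_K, I_J = np.eye(K), np.eye(J)

u = B @ (np.kron(theta[:, None], I_K) @ alpha)     # B (theta ⊗ I) alpha
v = B @ (np.kron(I_J, alpha[:, None]) @ theta)     # B (I ⊗ alpha) theta
w = B @ np.outer(alpha, theta).flatten(order="F")  # B vec(alpha theta^T), column-major
```

All three vectors coincide, since each equals $\mathbf{B}({\boldsymbol\theta}\otimes{\boldsymbol\alpha})$.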
The optimization problem to be solved is
\begin{align}
\label{eq:objfunc}
\min_{\left\{ \boldsymbol \alpha , {\boldsymbol \theta} \right\}}\;
\| \tilde{\mathbf{y}} - {\mathbf{B}} \; \mathrm{vec} \left( {\boldsymbol
\alpha} {\boldsymbol \theta}^\top \right) \|_2^2 + \eta \|
{\boldsymbol \alpha} \|_1 \\
\text{subject to} \hspace{10pt} \mathbf{E}{\boldsymbol \theta} \leq \mathbf{1}, \hspace{10pt} {\boldsymbol \theta} \geq \mathbf{0},
\end{align}
where $\mathbf{1}$ denotes the vector of all ones.
The regularization term $\eta \| {\boldsymbol \alpha}\|_1$ imposes
sparsity for representing observed signals by linear combinations of atoms.
The inequality $\mathbf{E}{\boldsymbol \theta} \leq \mathbf{1}$ encodes
the constraint that the sum of the interpolation coefficients is less than
or equal to one for each image. Here the matrix
$\mathbf{E} \in \mathbb{R}^{N \times (1+(k+1)^{2}(N-1))}$ is defined by
\begin{align}
\mathbf{E} = \begin{bmatrix}
1 & & & & & & \multicolumn{3}{c}{\multirow{3}{*}{{\Huge 0}}} & \\
& 1 & 1 & \cdots & 1 & & & & & \\
& \multicolumn{3}{c}{\multirow{2}{*}{{\Huge 0}}} & & \ddots & & & & \\
& & & & & & 1 & 1 & \cdots & 1 \\
\end{bmatrix}.
\end{align}
The constraints $\mathbf{E}{\boldsymbol \theta} \leq
\mathbf{1}$ and ${\boldsymbol \theta} \geq \mathbf{0}$ together constitute
$\ell^{1}$-norm-like constraints with non-negativity, which also produce
sparse solutions not only for ${\boldsymbol \alpha}$ but also for ${\boldsymbol \theta}$.
\subsection{Optimization method}
\label{subsec:optim}
We now describe how to solve the optimization problem~\eqref{eq:objfunc}. Since it is intractable to find a closed-form solution
of the problem~\eqref{eq:objfunc} on both ${\boldsymbol \alpha}$ and
${\boldsymbol \theta}$, we alternately solve the problem with respect to
${\boldsymbol \alpha}$ with ${\boldsymbol \theta}$ fixed and with
respect to ${\boldsymbol \theta}$ with ${\boldsymbol \alpha}$ fixed.
First of all, using only the target LR patch $\mathbf{y}_1$, we
initialize the coefficient ${\boldsymbol \alpha}^{(1)}$ by solving the
following optimization problem,
\begin{align}
\label{eq:a0est}
{\boldsymbol \alpha}^{(1)} &= \arg \min_{\boldsymbol \alpha} \| \mathbf{y}_1 - \mathbf{D}^{l} {\boldsymbol \alpha}\|_2^2 + \eta \| {\boldsymbol \alpha }\|_1,
\end{align}
which is efficiently solved by using sparse coding algorithms.
By fixing the coefficient ${\boldsymbol \alpha}^{(t)}$, we obtain the
combination coefficient for shift operators by
\begin{equation}
\begin{split}
\label{eq:QP}
{\boldsymbol \theta}^{(t)} & = \arg \min_{\boldsymbol \theta} \|
\tilde{\mathbf{y}} - {\mathbf{B}} (\mathbf{I} \otimes
{\boldsymbol \alpha}^{(t)}) {\boldsymbol \theta} \|_2^2\\
&\hspace{15pt} \text{subject to} \hspace{10pt} \mathbf{E}{\boldsymbol
\theta} \leq \mathbf{1}, \hspace{5pt} {\boldsymbol \theta} \geq
\mathbf{0}, \hspace{5pt} \theta_1 = 1.
\end{split}
\end{equation}
This is a quadratic programming problem, which is efficiently
solved by any off-the-shelf solver.
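As a stand-in for an off-the-shelf QP solver, the constrained least-squares subproblem of Eq.~\eqref{eq:QP} can also be approached by projected gradient descent, using the exact Euclidean projection onto $\{{\boldsymbol \theta} \geq \mathbf{0},\; \mathbf{1}^{\top}{\boldsymbol \theta} \leq 1\}$. The sketch below is illustrative: it handles a single image's coefficient block on synthetic data and omits the fixed constraint $\theta_1 = 1$.

```python
import numpy as np

def project_capped_simplex(v):
    # Euclidean projection onto {t : t >= 0, sum(t) <= 1}.
    w = np.maximum(v, 0.0)
    if w.sum() <= 1.0:
        return w
    # Otherwise project onto the probability simplex {t >= 0, sum(t) = 1}
    # by the standard sort-based algorithm.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - 1.0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

def solve_disp(y, A, n_iter=1000):
    # Projected gradient for min ||y - A t||^2 over the capped simplex.
    L = 2.0 * np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    t = np.full(A.shape[1], 1.0 / A.shape[1]) # feasible starting point
    for _ in range(n_iter):
        t = project_capped_simplex(t - 2.0 * A.T @ (A @ t - y) / L)
    return t

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 4))              # stand-in for B (I ⊗ alpha)
t_true = np.array([0.3, 0.0, 0.6, 0.1])       # feasible interpolation weights
y = A @ t_true
t_hat = solve_disp(y, A)                      # recovers t_true up to solver accuracy
```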
For optimizing ${\boldsymbol \alpha}$ with a fixed
${\boldsymbol \theta}^{(t)}$, we solve the following problem:
\begin{align}
\label{eq:SC}
{\boldsymbol \alpha}^{(t+1)} &= \arg \min_{\boldsymbol \alpha} \|
\tilde{\mathbf{y}} - {\mathbf{B}} ({\boldsymbol \theta}^{(t)}
\otimes \mathbf{I}) {\boldsymbol \alpha}\|_2^2 + \eta \|
{\boldsymbol \alpha }\|_1.
\end{align}
This problem is also an instance of the $\ell^{1}$-norm regularized least-squares
optimization problem, which is efficiently solved by sparse coding algorithms.
We iteratively solve these optimization problems \eqref{eq:QP} and
\eqref{eq:SC} until convergence.
In Algorithm~\ref{alg:proposed}, we summarize the proposed algorithm
in pseudo-code. By the operation of {\tt ClipByMatching}, we
estimate the position of the target patch in the auxiliary image
$\mathbf{Y}_{i}$ and extract the auxiliary patch $\mathbf{y}_{i}$ of size
$q^{\prime}$. Also, operations of solving Eqs.~\eqref{eq:QP} and
\eqref{eq:SC} are denoted by {\tt SolveDisp} and {\tt SolveCoeff},
respectively.
\begin{algorithm}
\caption{Proposed Algorithm}
\label{alg:proposed}
\begin{algorithmic}
\State {$\mathbf{Input}$: LR Images
$\mathbf{Y}_1,\mathbf{Y}_2,\cdots,\mathbf{Y}_N$, HR Dictionary
$\mathbf{D}^h$, and LR base Dictionaries
$\mathbf{D}^l,\mathbf{B}$}
\While{there remain patches to be extracted}
\State{extract a patch $\mathbf{y}_{1}$ from the target image $\mathbf{Y}_{1}$}
\For{$i=2$ to $N$}
\State{${\mathbf{y}_{i}} \leftarrow {\tt
ClipByMatching}(\mathbf{y}_{1}, \mathbf{Y}_{i})$}
\EndFor
\State{$\tilde{\mathbf{y}} \leftarrow \begin{bmatrix}
\mathbf{y}_1^{\top},
\mathbf{y}_2^{\top},
\dots,
\mathbf{y}_N^{\top}
\end{bmatrix}^{\top}$}
\State{${\boldsymbol \alpha}^{(1)} \leftarrow \arg \min_{\boldsymbol \alpha} \| \mathbf{y}_1 - \mathbf{D}^l {\boldsymbol \alpha}\|_2^2 + \eta \| {\boldsymbol \alpha }\|_1$}
\For{$t=1$ to $T$}
\State{${\boldsymbol \theta}^{(t)} \leftarrow {\tt
SolveDisp}(\tilde{\mathbf{y}}, {\mathbf{B}}, {\boldsymbol
\alpha}^{(t)}) $}
\Comment{Eq.~\eqref{eq:QP}}
\State{${\boldsymbol \alpha}^{(t+1)} \leftarrow {\tt
SolveCoeff}(\tilde{\mathbf{y}},{\mathbf{B}}, {\boldsymbol \theta}^{(t)})$}
\Comment{Eq.~\eqref{eq:SC}}
\EndFor
\State{$\mathbf{x} \leftarrow \mathbf{D}^h {\boldsymbol \alpha}^{(T+1)}$}
\EndWhile
\State{post processing for improving the consistency
of the whole image (see, e.g. \cite{Kato2015})}\\
\Return HR image $\mathbf{X}$
\end{algorithmic}
\end{algorithm}
\subsection{Double sparsity structure}
It is worth noting that the optimization problem~\eqref{eq:objfunc}
shares the same form with the formulation of {\it{double sparsity}}
dictionary learning
proposed by \cite{rubinstein}. Double sparsity dictionary learning
was proposed to bridge the gap between {\it{analytic}} dictionaries such
as wavelets~\cite{Daubechies:1992:TLW:130655} and {\it{learning-based}}
dictionaries such as MOD~\cite{Engan:1999:MOD:1257298.1257971} and
K-SVD~\cite{ksvd}. Double sparsity
dictionary learning assumes that the dictionary atoms themselves have
an underlying sparse structure over a set of more fundamental base dictionaries. In
our formulation of multi-frame super resolution, the atoms are generated
as sparse combinations of fundamental atoms derived by pixel-level shifting and
degradation of the HR atoms. A schematic diagram of the double sparsity
structure in our formulation is shown in Fig.~\ref{fig:ds}.
\begin{figure*}[!th]
\centering
\includegraphics[scale=.5]{./figs/ds.eps}
\caption{An illustrative diagram for double sparsity for our
formulation of the multi-frame super resolution problem. The LR
dictionary used for sparse coding of the stacked LR patches is itself
a sparse combination of base dictionaries generated by pixel-level
parallel shift of HR dictionary followed by degradation.
\label{fig:ds}}
\end{figure*}
By modeling the shift operation by a set of base shift operators, we
obtained a natural and simple formulation of the multi-frame super
resolution based on sparse coding.
\section{Experimental Results}
In this section, we demonstrate the super-resolution results on some sets of still images and some sets of images from movies.
\subsection{Application to still images}
We suppose that the observed LR images are generated
from an HR image through parallel translations, blurring, down-sampling, and
addition of noise.
The parallel translations are imitated by shifting
the HR image. The degrees of vertical and horizontal shift are randomly
sampled from a uniform distribution on $[-5,5]$.
The blurring operation is realized by convolution with a $9 \times
9$-pixel Gaussian filter with standard deviation $\sigma_h = 1$.
The blurred and shifted images are then down-sampled by a factor of
three. Finally, noise sampled from $\mathcal{N}(0,\sigma_{n}^{2})$ with $\sigma_{n}=1$
is added to generate the LR observation images.
In our experiments, both the intensity of the blur and
that of the noise are assumed to be known.
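The degradation pipeline described above can be sketched as follows. The $9 \times 9$ Gaussian blur with $\sigma_h = 1$, the down-sampling factor of three, and the noise level $\sigma_n = 1$ match the settings stated in the text; the synthetic image and the `same'-mode boundary handling are illustrative choices.

```python
import numpy as np

def gaussian_kernel(size=9, sigma=1.0):
    # 1-D Gaussian filter taps (the 2-D 9x9 filter is separable).
    ax = np.arange(size) - size // 2
    k = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_and_downsample(img, factor=3, size=9, sigma=1.0, noise_sigma=1.0, rng=None):
    """Toy degradation: separable Gaussian blur, decimation, additive noise."""
    k = gaussian_kernel(size, sigma)
    # Separable 2-D convolution: filter rows, then columns ('same' keeps the size).
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    lr = blurred[::factor, ::factor]         # down-sample by `factor`
    if rng is not None:
        lr = lr + rng.normal(0.0, noise_sigma, lr.shape)
    return lr

hr = np.tile(np.arange(30.0), (30, 1))       # a synthetic 30x30 HR "image"
lr = blur_and_downsample(hr, rng=np.random.default_rng(4))
```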
In our proposed algorithm, we iteratively solve the
quadratic programming~\eqref{eq:QP} and the sparse coding
problem~\eqref{eq:SC}. We observed that the algorithm converged in less
than three iterations, hence we fix the iteration number $T$ to three in all
of the experiments.
In our experiments, we magnify the input LR images by a factor of three
in all cases. We used five LR images to estimate each HR image, i.e., $N=5$.
We compare the proposed method to seven conventional methods.
The first method is bi-cubic interpolation. This method is simple
and regarded as a baseline for SR.
The second and third methods are Single-Frame Joint
Dictionary Learning (SF-JDL;~\cite{Yang:2010:ISV:1892456.1892463}) and Adaptive Sparse Domain
Selection (ASDS;~\cite{asdssr}). These methods are considered as state-of-the-art
single-frame SR methods with publicly available software implementations.
The fourth method is the one proposed
by~\cite{wang1}, which is a
multi-frame SR method based on joint dictionary learning. We refer to
this method as MF-JDL (Multi-Frame super resolution based on Joint
Dictionary Learning). The other two methods are representative methods
in reconstruction-based SR in the literature. In~\cite{frsr}, a multi-frame SR method
based on regularization in the form of Bilateral Total Variation (BTV) is
proposed. Because of its performance and simplicity, the method has
become one of the most commonly cited works in the field of multi-frame
SR. BTV is further improved in~\cite{Li:2010:MIS:1621135.1621163}, in which a method based on
regularization by Locally Adaptive Bilateral Total Variation (LABTV) is
proposed. In this paper, these two reconstruction-based methods are referred to as BTV
and LABTV, respectively. Finally,
we also use the multi-frame SR method based on sparse coding proposed in our
previous paper~\cite{Kato2015}, which is referred to as MF-SC.
There are several tuning parameters for each SR method. To make a fair
comparison, we first optimize the parameters of each method to maximize
the PSNR of the image Lena, one of the most commonly used
benchmark images in the field of image processing. Then, for all other images, we keep
the same parameters optimized for Lena.
We use two different gray-scale images (Lena and Cameraman), and three
color images (Flower, Girl, and Parthenon) for evaluating the
performance of SR methods.
When we deal with color images, we first
convert the image into YCbCr format, then apply SR methods only to
luminance channel (Y). Values of other channels Cb and Cr are simply
expanded by bi-cubic interpolation.
We show the experimental results in
Figs.~\ref{r_lena}--\ref{r_parthenon}. To focus on the difference between our previous method and the newly
proposed method, we show only the original images, the degraded images,
the images obtained by MF-SC, and those obtained by the proposed method.
These figures indicate that the proposed method generates images
comparable to or better than those of MF-SC.
{\setlength{\tabcolsep}{1mm}
\begin{figure}[!ht]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/lena_LR.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/lena_HR.eps}
\end{center}
\end{minipage}
\\
{\small (a) Observed LR image}
&
{\small (b) Original HR image}
\\
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/lena_proposed.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/ds_lena.eps}
\end{center}
\end{minipage}
\\
{\small (c) MF-SC}
&
{\small (d) Proposed}
\end{tabular}
\caption{Reconstructed images estimated from LR observations for
Lena.
(a) Observed LR image, (b) Original HR image, results by (c) MF-SC (our
previous work), and (d) the proposed method.
\label{r_lena}}
\end{center}
\end{figure}
}
{\setlength{\tabcolsep}{1mm}
\begin{figure}[!ht]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/cameraman_LR.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/cameraman_HR.eps}
\end{center}
\end{minipage}
\\
{\small (a) Observed LR image}
&
{\small (b) Original HR image}
\\
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/cameraman_proposed.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/ds_cameraman.eps}
\end{center}
\end{minipage}
\\
{\small (c) MF-SC}
&
{\small (d) Proposed}
\end{tabular}
\caption{Reconstructed images estimated from LR observations for
Cameraman.
(a) Observed LR image, (b) Original HR image, results by (c) MF-SC, and (d) the proposed method. \label{r_cameraman}}
\end{center}
\end{figure}
}
{\setlength{\intextsep}{0.5pt}
{\setlength{\tabcolsep}{1mm}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/flower_LR.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/flower_HR.eps}
\end{center}
\end{minipage}
\\
{\small (a) Observed LR image}
&
{\small (b) Original HR image}
\\
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/flower_proposed.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/ds_flower.eps}
\end{center}
\end{minipage}
\\
{\small (c) MF-SC}
&
{\small (d) Proposed}
\end{tabular}
\caption{Reconstructed images estimated from LR observations for
Flower.
(a) Observed LR image, (b) Original HR image, results by (c) MF-SC, and (d) the proposed method.
\label{r_flower}}
\end{center}
\end{figure}
}
}
{\setlength{\intextsep}{0.5pt}
{\setlength{\tabcolsep}{1mm}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/girl_LR.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/girl_HR.eps}
\end{center}
\end{minipage}
\\
{\small (a) Observed LR image}
&
{\small (b) Original HR image}
\\
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/girl_proposed.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/ds_girl.eps}
\end{center}
\end{minipage}
\\
{\small (c) MF-SC}
&
{\small (d) Proposed}
\end{tabular}
\caption{Reconstructed images estimated from LR observations for
Girl.
(a) Observed LR image, (b) Original HR image, results by (c) MF-SC, and (d) the proposed method.
\label{r_girl}}
\end{center}
\end{figure}
}
}
{\setlength{\intextsep}{0.5pt}
{\setlength{\tabcolsep}{1mm}
\begin{figure}[!h]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/parthenon_LR.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/parthenon_HR.eps}
\end{center}
\end{minipage}
\\
{\small (a) Observed LR image}
&
{\small (b) Original HR image}
\\
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/parthenon_proposed.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4cm}
\begin{center}
\includegraphics[clip, width = 4cm]{./figs/ds_parthenon.eps}
\end{center}
\end{minipage}
\\
{\small (c) MF-SC}
&
{\small (d) Proposed}
\end{tabular}
\caption{Reconstructed images estimated from LR observations for
Parthenon.
(a) Observed LR image, (b) Original HR image, results by (c) MF-SC, and (d) the proposed method.
\label{r_parthenon}}
\end{center}
\end{figure}
}
}
For a quantitative comparison of SR methods, we use the Peak
Signal-to-Noise Ratio (PSNR) defined as
\begin{align}
{\rm{PSNR}}\text{[dB]} = 10 \log_{10} \frac{255^2}{{\rm{MSE}}},
\end{align}
where MSE is the mean squared error between the original HR image and
the estimated HR image; a higher PSNR indicates better SR
performance.
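As a sanity check, the PSNR above can be computed in a few lines (a sketch; \texttt{original} and \texttt{estimate} are assumed to be 8-bit grayscale arrays with intensities in $[0, 255]$):

```python
import numpy as np

def psnr(original, estimate):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images in [0, 255]."""
    mse = np.mean((original.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)
```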
We report the PSNR values obtained by the various methods in Table~\ref{tab:result_psnr}.
To evaluate the PSNR values, we randomly generated $100$ sets of $5$
shift operators and produced
degraded images by adding random observation noise. From each set of observed images, one target image is chosen at random.
Only the target image is used for single-frame SR, while in multi-frame SR the remaining $4$ images serve as auxiliary LR images.
The means and standard deviations of the PSNR values are computed over the
$100$ SR results of each method. The best and second best results
are shown in bold and underlined styles, respectively.
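The degradation protocol described above can be sketched as follows; the shift operator is simplified to an integer translation, and the scale factor and noise level are illustrative placeholders, not necessarily the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(hr, shift, scale=2, noise_std=2.0):
    """Shift an HR image, downsample by `scale`, and add Gaussian noise.

    A simplified stand-in for the paper's shift and downsampling operators;
    `scale` and `noise_std` are illustrative values only.
    """
    shifted = np.roll(hr, shift, axis=(0, 1))         # integer translation
    lr = shifted[::scale, ::scale]                    # plain decimation
    return lr + rng.normal(0.0, noise_std, lr.shape)  # observation noise

hr = rng.uniform(0.0, 255.0, (32, 32))
# one target frame plus four auxiliary LR frames, as in the multi-frame setting
frames = [degrade(hr, tuple(rng.integers(-2, 3, size=2))) for _ in range(5)]
```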
\begin{table*}[th!]
\begin{center}
\caption{PSNRs of the SR methods.
The best and second best results are shown by bold and underlined
styles, respectively.\label{tab:result_psnr}}
\scalebox{.7}{
\begin{tabular}{c|cccccccc} \hline
Image & Bicubic & SF-JDL & ASDS & MF-JDL & BTV & LABTV & MF-SC& Proposed \\ \hline
Lena & $27.91\pm 0.00$ & $28.73 \pm 0.01$ & $\bold{30.08} \pm 0.02$
& $29.25 \pm 0.05$ & $29.01 \pm 0.22$ & $29.33 \pm 0.20$ &
$ 29.69 \pm 0.15$ & $\underline{29.82} \pm 0.14$\\
Cameraman & $27.03 \pm 0.00$ & $28.25 \pm 0.01$ & $29.88
\pm 0.02$ & $28.29 \pm 0.03$ & $29.43 \pm 0.37$ & $29.83
\pm 0.37$ & $\underline{30.19} \pm 0.38$ & ${\bold{30.44}} \pm 0.35$\\
Flower & $35.50 \pm 0.01$ & $35.81 \pm 0.01$ & $36.22 \pm 0.02$ &
$36.32 \pm 0.04$ & $36.26 \pm 0.24$ &
$\underline{36.46} \pm 0.21$ & $\bold{36.61}
\pm 0.10$ &
$\bold{36.61} \pm 0.12$\\
Girl & $31.12 \pm 0.00$ & $31.49 \pm 0.01$ & $31.72 \pm 0.01$ &
$31.73 \pm 0.02$ & $31.84 \pm 0.17$ & $\bold{32.09} \pm
0.16$ & $\underline{31.98} \pm 0.06$
& $31.92 \pm 0.07$\\
Parthenon & $24.40 \pm 0.00$ & $24.59 \pm 0.00$ & $25.07 \pm 0.01$&
$24.70 \pm 0.02$ & $\bold{25.45} \pm 0.19$ & $25.33 \pm
0.13$ & $\underline{25.41} \pm 0.09$
& $25.37 \pm 0.08$\\ \hline
\end{tabular}
}
\end{center}
\end{table*}
As shown in Table~\ref{tab:result_psnr}, the proposed method
outperforms the other conventional methods on two of the five images
(Cameraman and Flower) and
is the second best for Lena. Compared with our previous method (MF-SC), it improves on two
images, ties on one image, and is slightly worse on two images.
\subsection{Application to motion pictures}
We show experimental results on LR images sequentially captured from movies.
From five consecutive LR images, the middle (third in the temporal
sequence) image is selected as the target image, and the other four are
used as auxiliary images.
The obtained HR images using MF-SC and the proposed method are shown in
Fig.~\ref{r_mac} and Fig.~\ref{r_samurai}.
We also report the PSNRs obtained by all methods in Table~\ref{tab:psnr_movie}.
\begin{figure}[!htb]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{4.0cm}
\begin{center}
\includegraphics[clip, width=4.0cm]{./figs/macarthur_LR.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4.0cm}
\begin{center}
\includegraphics[clip, width=4.0cm]{./figs/macarthur_HR.eps}
\end{center}
\end{minipage}
\\
{\small (a) Observed LR image}
&
{\small (b) Original HR image}
\\
\begin{minipage}{4.0cm}
\begin{center}
\includegraphics[clip, width=4.0cm]{./figs/macarthur_proposed0.003.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4.0cm}
\begin{center}
\includegraphics[clip, width=4.0cm]{./figs/ds_macarthur.eps}
\end{center}
\end{minipage}
\\
{\small (c) MF-SC}
&
{\small (d) Proposed}
\end{tabular}
\caption{Images estimated from LR observations for MacArthur.
(a) Observed LR image, (b) Original HR image, results by (c) MF-SC, and (d) the proposed method.
\label{r_mac}}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\begin{tabular}{cc}
\begin{minipage}{4.0cm}
\begin{center}
\includegraphics[clip, width=4.0cm]{./figs/samurai_LR.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4.0cm}
\begin{center}
\includegraphics[clip, width=4.0cm]{./figs/samurai_HR.eps}
\end{center}
\end{minipage}
\\
{\small (a) Observed LR image}
&
{\small (b) Original HR image}
\\
\begin{minipage}{4.0cm}
\begin{center}
\includegraphics[clip, width=4.0cm]{./figs/samurai_proposed0.003.eps}
\end{center}
\end{minipage}
&
\begin{minipage}{4.0cm}
\begin{center}
\includegraphics[clip, width=4.0cm]{./figs/ds_samurai.eps}
\end{center}
\end{minipage}
\\
{\small (c) MF-SC}
&
{\small (d) Proposed}
\end{tabular}
\caption{Images estimated from LR observations for Samurai.
(a) Observed LR image, (b) Original HR image, results by (c) MF-SC, and (d) the proposed method.
\label{r_samurai}}
\end{center}
\end{figure}
\begin{table*}[!th]
\begin{center}
\caption{PSNRs and computational times (in parentheses) of SR methods applied to movie data.}
\label{tab:psnr_movie}
\begin{tabular}{c|cccccccc} \hline
& Bicubic & SF-JDL & ASDS & MF-JDL & BTV & LABTV & MF-SC &Proposed \\ \hline
MacArthur & 34.11 & 34.33 & 35.63 & 35.18 &
34.39 & 34.40 & 34.79 & 35.63\\
& & (2.69) & (178.08) & (133.78) & (61.72) & (96.17) & (27.70) & (61.74)\\
Samurai & 25.36 & 25.97 & 26.66 & 26.12 & 26.16
& 26.07 & 25.90 & 26.49\\
& & (2.50) & (211.65) & (138.38) & (62.13) & (96.24) & (30.75) &(59.86)\\ \hline
\end{tabular}
\end{center}
\end{table*}
From Fig.~\ref{r_mac} and Fig.~\ref{r_samurai}, the HR images obtained by the proposed method
are clearer and have more distinct edges than the images obtained by
MF-SC.
From Table~\ref{tab:psnr_movie}, the PSNRs of the proposed method are
equal to or lower
than those of ASDS. However, the computational cost of the
proposed method is lower than that of ASDS and the other multi-frame
SR methods.
Although the proposed method requires about twice the computational cost of our
previous method (MF-SC), it significantly improves the PSNR of the
reconstructed images.
\section{Conclusion}
In this paper, we formulated multi-frame image super resolution as
a combination of the distinct problems of image registration and
sparse coding. The main contribution of this work is casting these two
problems within a framework of double-sparsity dictionary learning.
Image registration and sparse coding are unified in a single
objective function, and the registration coefficients and sparse
coding coefficients are optimized alternately by
quadratic programming and $\ell_1$-norm constrained least squares,
respectively, both of which
lead to sparse estimates of the coefficients.
The proposed method improved on our previous formulation of
multi-frame super resolution for several images.
In particular, images from movies are significantly improved.
We mainly presented the proposed super resolution method in an
application to image resolution enhancement; however,
we believe that our double-sparsity formulation is applicable to the enhancement or
refinement of multiple observations from a number of inaccurate sensors
with appropriate base dictionaries, such as information integration from
observations by autonomous mobile robots.
Our future work includes applying the proposed method to other signal resolution
enhancement tasks, and further improving computational efficiency by
adopting or developing optimization methods for sparse coding and shift estimation.
\section*{Acknowledgment}
Part of this work was supported by JSPS KAKENHI No. 25120009 and 26120504.
\section{Appendix}
\subsection{Proof of Theorem 2 (pseudo-robustness)}
\label{proof:th2}
\begin{proof}
From the proof of Theorem~1, we can easily deduce that:
\begin{eqnarray*}
\lefteqn{|\mathcal{L}(\mathcal{A}_{p_{\mathbf{s}}})-l_{emp}(\mathcal{A}_{p_{\mathbf{s}}})|\leq 2B\sum_{i=1}^K|\frac{|N_i|}{n}-\mu(C_i)|+}\\
&&\left|\sum_{i=1}^K\sum_{j=1}^K\mathbb{E}_{z_1,z_2\sim
\mu}(l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|z_1\in C_i,z_2\in
C_j)\frac{|N_i|}{n}\frac{|N_j|}{n}
-\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^nl(\mathcal{A}_{p_{\mathbf{s}}},s_i,s_j)\right|.
\end{eqnarray*}
Splitting the pairs according to whether they belong to $\hat{p}(\mathbf{s})$, we then have
\begin{eqnarray*}
&\leq&2B\sum_{i=1}^K|\frac{|N_i|}{n}-\mu(C_i)| +\\
&&\left|\frac{1}{n^2}\sum_{i=1}^K\sum_{j=1}^K\sum_{(s_o,s_l)\in\hat{p}(\mathbf{s})}
\sum_{s_o\in N_i}\sum_{s_l\in N_j}\max_{z\in C_i}\max_{z'\in C_j}|l(\mathcal{A}_{p_{\mathbf{s}}},z,z')-l(\mathcal{A}_{p_{\mathbf{s}}},s_o,s_l)|\right|+\\
&&\left|\frac{1}{n^2}\sum_{i=1}^K\sum_{j=1}^K\sum_{(s_o,s_l)\not\in\hat{p}(\mathbf{s})}\sum_{s_o\in N_i}\sum_{s_l\in N_j}\max_{z\in C_i}\max_{z'\in C_j}|l(\mathcal{A}_{p_{\mathbf{s}}},z,z')- l(\mathcal{A}_{p_{\mathbf{s}}},s_o,s_l)|\right|\\
&\leq&\frac{\hat{p}_n(p_{\mathbf{s}})}{n^2}\epsilon(p_{\mathbf{s}})+B(\frac{n^2-\hat{p}_n(p_{\mathbf{s}})}{n^2}+2\sqrt{\frac{2K \ln 2 + 2\ln 1/\delta}{n}}).
\end{eqnarray*}
The second inequality is obtained by the triangle inequality; the last one follows from Proposition~1, the pseudo-robustness hypothesis, and the fact that $l$ is nonnegative and bounded by $B$, so that $|l(\mathcal{A}_{p_{\mathbf{s}}},z,z')-l(\mathcal{A}_{p_{\mathbf{s}}},s_o,s_l)|\leq B$.
\end{proof}
\subsection{Proof of sufficiency of Theorem 3}
\label{proof:suff}
\begin{proof} The proof of sufficiency corresponds to the first part of the proof of Theorem 8 in \cite{XUrobustness-ML}.
When $\mathcal{A}$ is weakly robust, there exists a sequence $\{{D}_n\}$
such that for any $\delta,\epsilon>0$ there exists
$N(\delta,\epsilon)$ such that for all $n>N(\delta,\epsilon)$,
$Pr(\mathbf{t}(n)\in D_n)>1-\delta$ and
\begin{equation}\label{eq:d_n}
\max_{\mathbf{\hat{s}}(n)\in {D}_n}\left|L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{\hat{s}}(n)})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{{s}}^*(n)})\right|<\epsilon.
\end{equation}
Therefore for any $n>N(\delta,\epsilon)$,
\begin{eqnarray*}
\lefteqn{|\mathcal{L}(\mathcal{A}_{p_{\mathbf{s}^*(n)}})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|}\\
&=&|\mathbb{E}_{\mathbf{t}(n)}(L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)}))-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|\\
&=&|Pr(\mathbf{t}(n)\not\in D_n)\mathbb{E}(L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)})|\mathbf{t}(n)\not\in
D_n)\\
&&+Pr(\mathbf{t}(n)\in D_n)\mathbb{E}(L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)})|\mathbf{t}(n)\in
D_n)-
L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|\\
&\leq&Pr(\mathbf{t}(n)\not\in D_n)|\mathbb{E}(L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)})|\mathbf{t}(n)\not\in
D_n)-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|+\\
&&Pr(\mathbf{t}(n)\in D_n)|\mathbb{E}(L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)})|\mathbf{t}(n)\in
D_n)-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|\\
&\leq&\delta
B+\max_{\mathbf{\hat{s}}(n)\in\mathcal{D}_n}|L(\mathcal{A}_{p_{\mathbf{{s}^*}(n)}},p_{\mathbf{\hat{s}}(n)})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|\\ &\leq&
\delta B+\epsilon.
\end{eqnarray*}
The first equality holds because the testing sample $\mathbf{t}(n)$
consists of $n$ instances drawn IID from $\mu$, and the second equality
is obtained by conditioning on the event $\mathbf{t}(n)\in D_n$. The first inequality is the triangle inequality; the second uses $Pr(\mathbf{t}(n)\not\in D_n)\leq\delta$ together with the nonnegativity and the upper bound $B$ of the loss function. Finally, we apply
Equation~\ref{eq:d_n}.
We thus conclude that $\mathcal{A}$ generalizes for $p_{\mathbf{s}^*}$
because $\epsilon$ and $\delta$ can be chosen arbitrary.
\end{proof}
\subsection{Proof of Lemma 1}
\label{proof:lem1}
\begin{proof}
This proof follows exactly the same principle as the proof of Lemma 2 from \cite{XUrobustness-ML}.
By contradiction, assume $\epsilon^*$ and $\delta^*$ do not exist. Let
$\epsilon_v=\delta_v=1/v$ for $v=1,2,\ldots$; then there exists a
non-decreasing sequence $\{N(v)\}_{v=1}^\infty$ such that for all $v$, if
$n\geq N(v)$ then
$Pr(|L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|\geq
\epsilon_v)<\delta_v$.
For each $n$ we define
$$
D_n^v\triangleq\{\mathbf{\hat{s}}(n) : |L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{\hat{s}}(n)})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|<\epsilon_v\}.
$$
For each $n\geq N(v)$ we have
$$Pr(\mathbf{t}(n)\in
D_n^v)=1-Pr(|L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|\geq
\epsilon_v)>1-\delta_v.$$
For $n\geq N(1)$, define $D_n\triangleq D_n^{v(n)}$, where $v(n)=\max(v\,|\,N(v)\leq
n;\, v\leq n)$. Thus, for all $n\geq N(1)$, we have $Pr(\mathbf{t}(n)\in
D_n)>1-\delta_{v(n)}$ and
$$\sup_{\mathbf{\hat{s}}(n)\in D_n}|L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{\hat{s}}(n)})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|<\epsilon_{v(n)}.$$
Since $v(n)$ tends to infinity, it follows that
$\delta_{v(n)}\rightarrow 0$ and $\epsilon_{v(n)}\rightarrow 0$.
Therefore, $Pr(\mathbf{t}(n)\in D_n)\rightarrow 1$ and
$$
\lim_{n\rightarrow \infty}\{\sup_{\mathbf{\hat{s}}(n)\in D_n}|L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{\hat{s}}(n)})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|\}=0.
$$
That is, $\mathcal{A}$ is weakly robust w.r.t. $p_{\mathbf{s}}$, which is the
desired contradiction.
\end{proof}
\subsection{McDiarmid's inequality}
\label{mcdiarmid}
Let $X_1, \ldots, X_n$ be $n$ independent random variables taking values in
${X}$ and let $Z=f(X_1, \ldots, X_n)$. If for each $1\leq i \leq n$, there exists a constant $c_i$ such that
\begin{eqnarray*}
\lefteqn{\sup_{x_1, \ldots, x_n, x'_i\in \mathcal{X}}|f(x_1, \ldots, x_i, \ldots, x_n) - f(x_1, \ldots,
x'_i, \ldots, x_n)|\leq c_i, \forall 1\leq i\leq n,}\\
&&\textrm{then for any } \epsilon>0,\quad\quad\quad \mathrm{Pr}[|Z-{\mathbb E}[Z]|\geq \epsilon]\leq
2\exp\left(\frac{-2\epsilon^2}{\sum_{i=1}^n c_i^2}\right).
\end{eqnarray*}
\subsection{Proof of Example 2 ($\ell_1$ norm)}
\label{proof:ex2}
\begin{proof}
Let $\mathbf{M^*}$ be the solution given training data $p_{\mathbf{s}}$. Due to optimality of $\mathbf{M^*}$, we have $\|\mathbf{M^*}\|_1 \leq g_0/c$.
We can partition $\mathcal{Z}$ into $|Y|\mathcal{N}(\gamma/2,{X},\|\cdot\|_1)$ sets, such that if $z$ and $z'$ belong to the same set, then $y=y'$ and $\|x-x'\|_1 \leq \gamma$. Now, for $z_1,z_2,z'_1,z'_2\in\mathcal{Z}$, if $y_1=y'_1$, $\|x_1-x'_1\|_1 \leq \gamma$, $y_2=y'_2$ and $\|x_2-x'_2\|_1 \leq \gamma$, then:
\begin{eqnarray*}
\lefteqn{|g(y_{12}[1-f(\mathbf{M^*},x_1,x_2)]) - g(y'_{12}[1-f(\mathbf{M^*},x'_1,x'_2)])|}\\
& \leq & U (|(x_1-x_2)^T\mathbf{M^*}(x_1-x'_1)| + |(x_1-x_2)^T\mathbf{M^*}(x'_2-x_2)|\\
& & + ~|(x_1-x'_1)^T\mathbf{M^*}(x'_1-x'_2)| + |(x'_2-x_2)^T\mathbf{M^*}(x'_1-x'_2)|)\\
& \leq & U(\|x_1-x_2\|_\infty\|\mathbf{M^*}\|_1\|x_1-x'_1\|_1 + \|x_1-x_2\|_\infty\|\mathbf{M^*}\|_1\|x'_2-x_2\|_1\\
& & + ~\|x_1-x'_1\|_1\|\mathbf{M^*}\|_1\|x'_1-x'_2\|_\infty + \|x'_2-x_2\|_1\|\mathbf{M^*}\|_1\|x'_1-x'_2\|_\infty)\\
& \leq & \frac{8UR\gamma g_0}{c}.
\end{eqnarray*}
\end{proof}
\subsection{Proof of Example 3 ($\ell_{2,1}$ norm)}
\label{proof:ex3}
\begin{proof}
Let $\mathbf{M^*}$ be the solution given training data $p_{\mathbf{s}}$. Due to optimality of $\mathbf{M^*}$, we have $\|\mathbf{M^*}\|_{2,1} \leq g_0/c$. We can partition $\mathcal{Z}$ in the same way as in the proof of Example~1 and use the inequality $\|\mathbf{M^*}\|_{\mathcal{F}} \leq \|\mathbf{M^*}\|_{2,1}$ (from Theorem~3 of Feng \cite{Feng-NormRel03}) to derive the same bound:
\begin{eqnarray*}
\lefteqn{|g(y_{12}[1-f(\mathbf{M^*},x_1,x_2)]) - g(y'_{12}[1-f(\mathbf{M^*},x'_1,x'_2)])|}\\
& \leq & U(\|x_1-x_2\|_2\|\mathbf{M^*}\|_{\mathcal{F}}\|x_1-x'_1\|_2 + \|x_1-x_2\|_2\|\mathbf{M^*}\|_{\mathcal{F}}\|x'_2-x_2\|_2\\
& & + ~\|x_1-x'_1\|_2\|\mathbf{M^*}\|_{\mathcal{F}}\|x'_1-x'_2\|_2 + \|x'_2-x_2\|_2\|\mathbf{M^*}\|_{\mathcal{F}}\|x'_1-x'_2\|_2)\\
& \leq & U(\|x_1-x_2\|_2\|\mathbf{M^*}\|_{2,1}\|x_1-x'_1\|_2 + \|x_1-x_2\|_2\|\mathbf{M^*}\|_{2,1}\|x'_2-x_2\|_2\\
& & + ~\|x_1-x'_1\|_2\|\mathbf{M^*}\|_{2,1}\|x'_1-x'_2\|_2 + \|x'_2-x_2\|_2\|\mathbf{M^*}\|_{2,1}\|x'_1-x'_2\|_2)\\
& \leq & \frac{8UR\gamma g_0}{c}.
\end{eqnarray*}
\end{proof}
\subsection{Proof of Example 4 (Kernelization)}
\label{proof:ex4}
We assume $\mathbb{H}$ to be a Hilbert space with an inner product operator $\langle\cdot,\cdot\rangle$. The mapping $\phi(\cdot)$ is continuous from $X$ to $\mathbb{H}$. The norm $\|\cdot\|_{\mathbb{H}}:\mathbb{H}\rightarrow \mathbb{R}$ is defined as $\|w\|_{\mathbb{H}}=\sqrt{\langle w,w \rangle}$ for all $w\in \mathbb{H}$; for matrices, $\|\mathbf{M}\|_{\mathbb{H}}$ denotes the entry-wise norm obtained by considering the matrix as a vector, which corresponds to the Frobenius norm. The kernel function is defined as $k(x_1,x_2)=\langle\phi(x_1),\phi(x_2)\rangle$.
$B_\gamma$ and $f_{\mathbb{H}}(\gamma)$ are finite by the compactness of $X$ and the continuity of $k(\cdot,\cdot)$. Let $\mathbf{M^*}$ be the solution given training data $p_{\mathbf{s}}$; by the optimality of $\mathbf{M^*}$, and using the same argument as in the other examples, we have $\|\mathbf{M^*}\|_{\mathbb{H}} \leq g_0/c$.
Then, consider a partition of $\mathcal{Z}$ into $|Y|\mathcal{N}(\gamma/2,X,\|\cdot\|_2)$ disjoint subsets such that if $(x_1,y_1)$ and $(x_2,y_2)$ belong to the same set, then $y_1=y_2$ and $\|x_1-x_2\|_2\leq \gamma$.
We have
\begin{eqnarray}
\lefteqn{|g(y_{ij}[1-f(\mathbf{M^*},\phi(x_1),\phi(x_2))]) - g(y_{ij}[1-f(\mathbf{M^*},\phi(x'_1),\phi(x'_2))])|}\nonumber\\
& \leq & U( |(\phi(x_1)-\phi(x_2))^T\mathbf{M^*}(\phi(x_1)-\phi(x'_1))|
+ |(\phi(x_1)-\phi(x_2))^T\mathbf{M^*}(\phi(x'_2)-\phi(x_2))|\nonumber\\
& & + ~|(\phi(x_1)-\phi(x'_1))^T\mathbf{M^*}(\phi(x'_1)+\phi(x'_2))| +
|(\phi(x'_2)-\phi(x_2))^T\mathbf{M^*}(\phi(x'_1)+\phi(x'_2))|)\nonumber\\
&\leq&U(|\phi(x_1)^T\mathbf{M^*}(\phi(x_1)-\phi(x'_1))| +|\phi(x_2)^T\mathbf{M^*}(\phi(x_1)-\phi(x'_1))|+\label{eq:l}\\
&&|\phi(x_1)^T\mathbf{M^*}(\phi(x'_2)-\phi(x_2))| +|\phi(x_2)^T\mathbf{M^*}(\phi(x'_2)-\phi(x_2))|+\nonumber\\
&&|(\phi(x_1)-\phi(x'_1))^T\mathbf{M^*}\phi(x'_1)| +|(\phi(x_1)-\phi(x'_1))^T\mathbf{M^*}\phi(x'_2)|+\nonumber\\
&&|(\phi(x'_2)-\phi(x_2))^T\mathbf{M^*}\phi(x'_1)| +|(\phi(x'_2)-\phi(x_2))^T\mathbf{M^*}\phi(x'_2)|).\nonumber
\end{eqnarray}
Then, note that
\begin{eqnarray*}
|\phi(x_1)^T\mathbf{M^*}(\phi(x_1)-\phi(x'_1))|&\leq &
\sqrt{\langle\phi(x_1),\phi(x_1)\rangle} \|\mathbf{M}^*\|_{\mathbb{H}}\sqrt{\langle\phi(x_1)-\phi(x'_1),\phi(x_1)-\phi(x'_1)\rangle}\\
&\leq& B_\gamma\frac{g_0}{c}\sqrt{f_{\mathbb{H}}(\gamma)}.
\end{eqnarray*}
Thus, by applying the same principle to all the terms in the right part of inequality \eqref{eq:l}, we obtain:
\begin{eqnarray*}
|g(y_{ij}[1-f(\mathbf{M^*},\phi(x_1),\phi(x_2))]) -
g(y_{ij}[1-f(\mathbf{M^*},\phi(x'_1),\phi(x'_2))])|& \leq & \frac{8U B_\gamma \sqrt{f_{\mathbb{H}}(\gamma)} g_0}{c}.
\end{eqnarray*}
\section{Conclusion}
\label{conclu}
We proposed a new theoretical framework for evaluating the
generalization ability of metric learning based on the notion of
algorithm robustness originally introduced in \cite{XUrobustness-ML}.
We showed that a weak notion of robustness characterizes the
generalizability of metric learning algorithms, justifying that
robustness is fundamental for such algorithms.
This framework allows us to derive generalization bounds for a large
class of algorithms with different regularizations, such as sparsity
inducing norms,
making the approach more powerful than the (few) existing frameworks.
Moreover, almost no algorithm-specific argument is needed to derive
these bounds, which explains why they are often similar.
Natural perspectives arise when considering different settings.
For example, some algorithms use both pair and triplet based
information as input such as \cite{Weinberger2009}.
Other future work could include studying even more general loss functions and
regularizers (such as the LogDet divergence used in
\cite{Davis2007,Jain2008}),
unsupervised/semi-supervised methods or domain adaptation.
Being able to characterize the generalization ability of metric
learning directly with the kind of
classifier using the metric - like $k$-NN - is also an interesting
and challenging direction.
\section{Examples of Robust Metric Learning Algorithms}
\label{exsec}
We first restrict our attention to Mahalanobis distance learning algorithms of the following form:
\begin{eqnarray}
\displaystyle\min_{\mathbf{M} \succeq 0} & c\|\mathbf{M}\| + \frac{1}{n^2}\displaystyle\sum_{(s_i,s_j)\in p_s}g(y_{ij}[1-f(\mathbf{M},x_i,x_j)]),
\label{generalform}
\end{eqnarray}
where $s_i=(x_i,y_i)$, $s_j=(x_j,y_j)$, $y_{ij} = 1$ if $y_i=y_j$ and $-1$ otherwise, $f(\mathbf{M},x_i,x_j) = (x_i-x_j)^T\mathbf{M}(x_i-x_j)$ is the Mahalanobis distance parameterized by the $d\times d$ PSD matrix $\mathbf{M}$, $\|\cdot\|$ is some matrix norm and $c$ is a regularization parameter. The loss function $l(f,s_i,s_j) = g(y_{ij}[1-f(\mathbf{M},x_i,x_j)])$ outputs a small value when its argument is large and positive, and a large value when it is large and negative. We assume $g$ to be nonnegative and Lipschitz continuous with Lipschitz constant $U$. Lastly, $g_0 = \sup_{s_i,s_j}g(y_{ij}[1-f(\mathbf{0},x_i,x_j)])$ is the largest loss when $\mathbf{M}$ is $\mathbf{0}$.
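For concreteness, the distance $f$ and the pairwise loss above can be sketched as follows, taking the hinge $g(t)=\max(0,1-t)$ as one admissible nonnegative Lipschitz choice of $g$ (with $U=1$); this is an illustration, not the specific loss of any cited algorithm:

```python
import numpy as np

def mahalanobis(M, xi, xj):
    """f(M, x_i, x_j) = (x_i - x_j)^T M (x_i - x_j)."""
    d = xi - xj
    return float(d @ M @ d)

def pair_loss(M, xi, yi, xj, yj):
    """g(y_ij [1 - f(M, x_i, x_j)]) with the hinge g(t) = max(0, 1 - t),
    one admissible nonnegative Lipschitz choice of g (U = 1)."""
    y_ij = 1.0 if yi == yj else -1.0
    return max(0.0, 1.0 - y_ij * (1.0 - mahalanobis(M, xi, xj)))
```

With the identity matrix, a same-label pair at zero distance incurs no loss, while a same-label pair at squared distance $4$ incurs loss $4$.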
To prove the robustness of \eqref{generalform}, we will need the following theorem, which essentially says that if a metric learning algorithm achieves approximately the same testing loss for testing pairs that are close to each other, then it is robust.
\begin{theorem}
Fix $\gamma>0$ and a metric $\rho$ of $\mathcal{Z}$. Suppose $\mathcal{A}$ satisfies
$$|l(\mathcal{A}_{p_\mathbf{s}},z_1,z_2)-l(\mathcal{A}_{p_\mathbf{s}},z'_1,z'_2)|\leq \epsilon(p_\mathbf{s}),\quad \forall z_1,z_2,z'_1,z'_2 : z_1,z_2\in \mathbf{s}, \rho(z_1,z'_1)\leq \gamma, \rho(z_2,z'_2)\leq \gamma$$
and $\mathcal{N}(\gamma/2,\mathcal{Z},\rho) < \infty$. Then $\mathcal{A}$ is $(\mathcal{N}(\gamma/2,\mathcal{Z},\rho),\epsilon(p_\mathbf{s}))$-robust.
\label{testtheorem}
\end{theorem}
\begin{proof}
By the definition of the covering number, we can partition ${X}$ into $\mathcal{N}(\gamma/2,{X},\rho)$ subsets such that each subset has a diameter less than or equal to $\gamma$. Furthermore, since ${Y}$ is a finite set, we can partition $\mathcal{Z}$ into $|Y|\mathcal{N}(\gamma/2,{X},\rho)$ subsets $\{C_i\}$ such that $z_1,z'_1\in C_i \Rightarrow \rho(z_1,z'_1)\leq \gamma$.
Therefore,
$$|l(\mathcal{A}_{p_\mathbf{s}},z_1,z_2)-l(\mathcal{A}_{p_\mathbf{s}},z'_1,z'_2)|\leq \epsilon(p_\mathbf{s}),\quad \forall z_1,z_2,z'_1,z'_2 : z_1,z_2\in \mathbf{s}, \rho(z_1,z'_1)\leq \gamma, \rho(z_2,z'_2)\leq \gamma$$
implies
$z_1,z_2\in \mathbf{s}, z_1,z'_1\in C_i,z_2,z'_2\in C_j \Rightarrow |l(\mathcal{A}_{p_\mathbf{s}},z_1,z_2)-l(\mathcal{A}_{p_\mathbf{s}},z'_1,z'_2)|\leq \epsilon(p_\mathbf{s}),$
which establishes the theorem.
\end{proof}
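As a simple illustration of the covering numbers used above, the interval $[0,\ell]$ can be partitioned into subsets of diameter at most $\gamma$ (a toy one-dimensional sketch):

```python
import math

def covering_number_interval(gamma, length=1.0):
    """Covering number N(gamma/2, [0, length], |.|): the interval can be
    partitioned into ceil(length / gamma) pieces of diameter <= gamma."""
    return math.ceil(length / gamma)
```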
We now prove the robustness of \eqref{generalform} when $\|\mathbf{M}\|$ is the Frobenius norm.
\begin{example}[Frobenius norm]
Algorithm \eqref{generalform} with $\|\mathbf{M}\| = \|\mathbf{M}\|_{\mathcal{F}} = \sqrt{\sum_{i=1}^d\sum_{j=1}^d m_{ij}^2}$ is $(|Y|\mathcal{N}(\gamma/2,{X},\|\cdot\|_2),\frac{8UR\gamma g_0}{c})$-robust.
\label{ex1}
\end{example}
\begin{proof}
Let $\mathbf{M^*}$ be the solution given training data $p_s$. Due to the optimality of $\mathbf{M^*}$, we have
$$c\|\mathbf{M^*}\|_{\mathcal{F}} + \frac{1}{n^2}\displaystyle\sum_{(s_i,s_j)\in p_s}g(y_{ij}[1-f(\mathbf{M^*},x_i,x_j)]) \leq c\|\mathbf{0}\|_{\mathcal{F}} + \frac{1}{n^2}\displaystyle\sum_{(s_i,s_j)\in p_s}g(y_{ij}[1-f(\mathbf{0},x_i,x_j)]) \leq g_0$$
and thus $\|\mathbf{M^*}\|_{\mathcal{F}} \leq g_0/c$.
We can partition $\mathcal{Z}$ into $|Y|\mathcal{N}(\gamma/2,{X},\|\cdot\|_2)$ sets, such that if $z$ and $z'$ belong to the same set, then $y=y'$ and $\|x-x'\|_2 \leq \gamma$. Now, for $z_1,z_2,z'_1,z'_2\in\mathcal{Z}$, if $y_1=y'_1$, $\|x_1-x'_1\|_2 \leq \gamma$, $y_2=y'_2$ and $\|x_2-x'_2\|_2 \leq \gamma$, then:
\begin{eqnarray*}
\lefteqn{|g(y_{12}[1-f(\mathbf{M^*},x_1,x_2)]) - g(y'_{12}[1-f(\mathbf{M^*},x'_1,x'_2)])|}\\
& \leq & U|(x_1-x_2)^T\mathbf{M^*}(x_1-x_2)-(x'_1-x'_2)^T\mathbf{M^*}(x'_1-x'_2)|\\
& = & U|(x_1-x_2)^T\mathbf{M^*}(x_1-x_2)-(x_1-x_2)^T\mathbf{M^*}(x'_1-x'_2)\\
& & + ~(x_1-x_2)^T\mathbf{M^*}(x'_1-x'_2)-(x'_1-x'_2)^T\mathbf{M^*}(x'_1-x'_2)|\\
& = & U|(x_1-x_2)^T\mathbf{M^*}((x_1-x_2)-(x'_1-x'_2)) + ((x_1-x_2)-(x'_1-x'_2))^T\mathbf{M^*}(x'_1-x'_2)|\\
& \leq & U (|(x_1-x_2)^T\mathbf{M^*}(x_1-x'_1)| + |(x_1-x_2)^T\mathbf{M^*}(x'_2-x_2)|\\
& & + ~|(x_1-x'_1)^T\mathbf{M^*}(x'_1-x'_2)| + |(x'_2-x_2)^T\mathbf{M^*}(x'_1-x'_2)|)\\
& \leq & U(\|x_1-x_2\|_2\|\mathbf{M^*}\|_{\mathcal{F}}\|x_1-x'_1\|_2 + \|x_1-x_2\|_2\|\mathbf{M^*}\|_{\mathcal{F}}\|x'_2-x_2\|_2\\
& & + ~\|x_1-x'_1\|_2\|\mathbf{M^*}\|_{\mathcal{F}}\|x'_1-x'_2\|_2 + \|x'_2-x_2\|_2\|\mathbf{M^*}\|_{\mathcal{F}}\|x'_1-x'_2\|_2) \leq \frac{8UR\gamma g_0}{c}
\end{eqnarray*}
Hence, the example holds by Theorem~\ref{testtheorem}.
\end{proof}
Note that for the special case of Example~\ref{ex1}, a generalization bound (with the same order of convergence rate) based on uniform stability was derived in \cite{Jin09}.
However, it is known that sparse algorithms are not stable \cite{Xu2012}, and thus stability-based analysis fails to assess the generalization ability of recent sparse metric learning approaches \cite{Rosales2006,Qi2009,Ying2009}.
The key advantage of robustness over stability is that it can accommodate arbitrary $p$-norms (or even any regularizer which is bounded below by some $p$-norm), thanks to the equivalence of norms. To illustrate this, we show the robustness when $\|\mathbf{M}\|$ is either the $\ell_1$ norm (used in \cite{Rosales2006,Qi2009}) which promotes sparsity at the component level, or the $\ell_{2,1}$ norm (used in \cite{Ying2009}), which is particularly interesting in the context of Mahalanobis distance learning since it induces group sparsity at the column/row level.\footnote{In this case, the linear projection space of the data induced by the learned Mahalanobis distance is of lower dimension than the original space, allowing more efficient computations and smaller storage size.} The proofs are reminiscent of that of Example~\ref{ex1} and can be found in Appendices~\ref{proof:ex2} and \ref{proof:ex3}.
\begin{example}[$\ell_1$ norm]
Algorithm \eqref{generalform} with $\|\mathbf{M}\| = \|\mathbf{M}\|_1$ is $(|Y|\mathcal{N}(\gamma,\mathcal{X},\|\cdot\|_1),\frac{8UR\gamma g_0}{c})$-robust.
\label{ex2}
\end{example}
\begin{example}[$\ell_{2,1}$ norm]
Consider Algorithm \eqref{generalform} with $\|\mathbf{M}\| = \|\mathbf{M}\|_{2,1} = \sum_{i=1}^d \|m^i\|_2$, where $m^i$ is the $i$-th column of $\mathbf{M}$. This algorithm is $(|Y|\mathcal{N}(\gamma,\mathcal{X},\|\cdot\|_2),\frac{8UR\gamma g_0}{c})$-robust.
\label{ex3}
\end{example}
Some metric learning algorithms have kernelized versions, for instance~\cite{Schultz2003,Davis2007}. In the following example we show robustness for a kernelized formulation.
\begin{example}[Kernelization]\label{ex:kernel} Consider the kernelized version of Algorithm~\eqref{generalform}:
\begin{eqnarray}
\displaystyle\min_{\mathbf{M} \succeq 0} & c\|\mathbf{M}\|_{\mathbb{H}} + \frac{1}{n^2}\displaystyle\sum_{(s_i,s_j)\in p_s}g(y_{ij}[1-f(\mathbf{M},\phi(x_i),\phi(x_j))]),
\label{generalform_kernel}
\end{eqnarray}
where $\phi(\cdot)$ is a feature mapping to a kernel space $\mathbb{H}$,
$\|\cdot\|_{\mathbb{H}}$ the norm function of $\mathbb{H}$ and
$k(\cdot,\cdot)$ the kernel function.
Consider a cover of $X$ by $\|\cdot\|_2$ ($X$ being compact) and let
$f_{\mathbb{H}}(\gamma)\stackrel{\bigtriangleup}{=}\max_{a,b\in X,
\|a-b\|_2\leq \gamma}(k(a,a)+k(b,b)-2k(a,b))$ and
$B_\gamma=\max_{x\in X}\sqrt{k(x,x)}$. If the kernel function is
continuous, $B_\gamma$ and $f_{\mathbb{H}}(\gamma)$ are finite for any
$\gamma>0$, and thus Algorithm~\eqref{generalform_kernel} is
$(|Y|\mathcal{N}(\gamma,X,\|\cdot\|_2),\frac{8 U B_\gamma
\sqrt{f_{\mathbb{H}}(\gamma)} g_0}{c})$-robust.
\end{example}
The proof is given in Appendix~\ref{proof:ex4}.
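For a concrete instance of the quantities in Example~\ref{ex:kernel}, consider the Gaussian kernel, for which $B_\gamma = 1$ and $f_{\mathbb{H}}(\gamma)$ has a closed form (a sketch; the bandwidth $\sigma$ is an illustrative parameter):

```python
import math

def f_H(gamma, sigma=1.0):
    """f_H(gamma) for the Gaussian kernel k(a, b) = exp(-||a-b||^2 / (2 sigma^2)).
    Here k(a,a) + k(b,b) - 2 k(a,b) = 2 - 2 exp(-||a-b||^2 / (2 sigma^2)) is
    increasing in ||a-b||, so the maximum over ||a-b|| <= gamma is attained
    at ||a-b|| = gamma."""
    return 2.0 - 2.0 * math.exp(-gamma**2 / (2.0 * sigma**2))

# For the Gaussian kernel, B_gamma = max_x sqrt(k(x, x)) = 1, and
# f_H(gamma) -> 0 as gamma -> 0, so the robustness constant vanishes with gamma.
```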
\begin{remark}We can easily prove similar results for other forms of metrics using the same technique. For instance, when the function is a bilinear similarity $f(\mathbf{M},x_i,x_j) = x_i^T\mathbf{M}x_j$ where $\mathbf{M}$ is usually not constrained to be PSD \cite{Chechik2009,Qamar2010,ShalitWC10}, we can improve the robustness to $2UR\gamma g_0/c$.
\end{remark}
\begin{remark}
Using triplet-based robustness (Equation~\ref{eq:robu_trip}), we can for instance show the robustness of two popular triplet-based metric learning approaches \cite{Schultz2003,Ying2009} for which no generalization guarantees were known (to the best of our knowledge). These algorithms have the following form:
$$\displaystyle\min_{\mathbf{M} \succeq 0} c\|\mathbf{M}\| + \frac{1}{|trip_\mathbf{s}|}\displaystyle\sum_{(s_i,s_j,s_k)\in trip_\mathbf{s}}[1-(x_i-x_k)^T\mathbf{M}(x_i-x_k)+(x_i-x_j)^T\mathbf{M}(x_i-x_j)]_+,$$
where $\|\mathbf{M}\| = \|\mathbf{M}\|_{\mathcal{F}}$ in \cite{Schultz2003} and $\|\mathbf{M}\| = \|\mathbf{M}\|_{2,1}$ in \cite{Ying2009}. These methods are $(\mathcal{N}(\gamma,\mathcal{Z},\|\cdot\|_2),\frac{16UR\gamma g_0}{c})$-robust (by using the same proof technique as in Examples~\ref{ex1} and \ref{ex3}). The additional factor 2 comes from the use of triplets instead of pairs.
\end{remark}
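The triplet hinge loss appearing in the formulation above can be sketched as follows (an illustration, not the cited implementations):

```python
import numpy as np

def triplet_loss(M, xi, xj, xk):
    """Hinge loss [1 - d_M(x_i, x_k) + d_M(x_i, x_j)]_+ for a triplet where
    x_j shares x_i's label and x_k does not, with d_M the Mahalanobis distance."""
    d = lambda a, b: float((a - b) @ M @ (a - b))
    return max(0.0, 1.0 - d(xi, xk) + d(xi, xj))
```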
\section{Introduction}
The past ten years have seen a growing interest in supervised metric learning. Indeed, the relevance of a distance or a similarity, for a given task, is of crucial importance to the effectiveness of many classification or clustering methods.
For this reason, a lot of research has been devoted to automatically learning distances or similarities from supervised data.
Existing approaches rely on the fairly reasonable principle that, according to a good metric, pairs of examples with the same (resp. different) labels must be close to each other (resp. far away). Learning thus generally consists in finding
the best parameters of the metric function given a set of labeled pairs.\footnote{These pairs are sometimes replaced by triplets $(x,y,z)$ such that example $x$ must be closer to example $y$ than to example $z$, where $x$ and $y$ share the same label and $z$ does not.}
The most classic and commonly used approach in the literature focuses on Mahalanobis distance
learning where the objective is to learn a positive semi-definite (PSD)
matrix \cite{Schultz2003,Shalev-Shwartz2004,Rosales2006,Davis2007,Jain2008,Weinberger2009,Qi2009,Ying2009} inducing a linear projection of the data where the Euclidean distance performs well.
Other approaches have also considered arbitrary similarity functions
with no PSD constraint \cite{Chechik2009,Qamar2010,ShalitWC10}. The learned distance or similarity is then typically used to improve the performance of nearest-neighbor methods.
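For concreteness: since a PSD matrix factors as $\mathbf{M}=LL^T$, a Mahalanobis distance is simply the Euclidean distance computed after the linear projection $x\mapsto L^Tx$. A minimal sketch of this equivalence (names hypothetical):

```python
def mahalanobis_sq(L, x, y):
    """Squared Mahalanobis distance (x-y)^T M (x-y) with M = L L^T,
    computed as the squared Euclidean distance between L^T x and L^T y."""
    d = len(x)
    def proj(v):  # v -> L^T v
        return [sum(L[i][k] * v[i] for i in range(d)) for k in range(len(L[0]))]
    px, py = proj(x), proj(y)
    return sum((a - b) ** 2 for a, b in zip(px, py))

# With L = I we recover the plain squared Euclidean distance
L_id = [[1.0, 0.0], [0.0, 1.0]]
print(mahalanobis_sq(L_id, [0.0, 0.0], [3.0, 4.0]))  # prints 25.0
```

Learning $\mathbf{M}$ thus amounts to learning the projection under which nearest-neighbor methods behave well.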
From a theoretical standpoint, many papers have studied the
convergence rate of the optimization problem used to learn
the parameters of the metric.
However, and somewhat surprisingly, few
studies have addressed the generalization ability of learned
metrics on unseen data. This situation can be explained by the fact that one cannot assume that the learning pairs provided to a metric learning algorithm are independent and identically distributed (IID).
Indeed, these pairs are generally given by an expert and/or extracted from a
sample of individual instances.
For example, common procedures for building such learning pairs are based on the $k$ nearest
or farthest neighbors of each example, on some criterion of diversity \cite{Kar2011}, on taking all possible pairs, or on drawing pairs randomly from a learning sample.
Online methods \cite{Shalev-Shwartz2004,Jain2008,Chechik2009} nevertheless offer guarantees, but only in the form of regret bounds assessing the deviation between the cumulative loss suffered by the online algorithm and the loss induced by the best hypothesis that can be chosen in hindsight.
Apart from these results, as far as we know, very few papers have proposed a theoretical study on the generalization ability of supervised metric learning methods. The approach of Bian and Tao \cite{Bian2011} uses a statistical analysis to give
generalization guarantees for loss minimization methods, but their results assume some hypotheses on the distribution of the examples and do not take into account any regularization on the metric. The most general contribution has been proposed by
Jin et al. \cite{Jin09} who adapted the framework of
uniform stability \cite{Bousquet02} to regularized metric
learning. However, their approach is based on a Frobenius norm regularizer and cannot be applied to any type of regularization, in particular sparsity-inducing norms \cite{Xu2012}.
In this paper, we propose to address this lack of theoretical framework
by studying the generalization ability of metric
learning algorithms according to a notion of {\it algorithmic robustness}.
Algorithmic robustness, introduced by Xu et al. \cite{XUrobustness,XUrobustness-ML}, allows one to derive
generalization bounds when, for any ``close'' pair of training and testing examples, the variation between their associated losses is bounded.
This notion of closeness relies on a partition of the input space into
different regions such that two examples in the same region are said to be close.
This framework has been successfully used in the classic supervised learning setting for deriving generalization bounds for SVM, Lasso and more.
We propose here to adapt this notion of algorithmic robustness to metric learning that works both for similarity and distance learning.
We show that, in this context, the problem of non-IIDness of the learning pairs can be worked around by simply assuming that the pairs are built from an IID sample of labeled examples.
Moreover, following the work of Xu et al. \cite{XUrobustness-ML}, we provide a notion of weak robustness that is necessary and sufficient for metric learning algorithms to generalize well, highlighting that robustness is a fundamental property.
We illustrate the applicability of our framework by deriving generalization bounds, using very few approach-specific arguments, for a larger class of problems than Jin et al. that can accommodate a vast choice of regularizers, without any assumption on the distribution of the examples.
The rest of the paper is organized as follows. We introduce some preliminaries and notations in Section~\ref{prelim}. Our notion of algorithmic robustness for metric learning is presented in Section~\ref{robustsec}. The necessity and sufficiency of weak robustness is shown in Section~\ref{nessec}. Section~\ref{exsec} is devoted to the illustration of our framework to actual metric learning algorithms. Finally, we conclude in Section~\ref{conclu}.
\section{Necessity of Robustness}
\label{nessec}
We prove here that a notion of weak robustness is actually necessary and sufficient to generalize in a metric learning setup.
This result is based on an asymptotic analysis following the work of Xu and Mannor \cite{XUrobustness-ML}.
We consider pairs of instances coming from an increasing sample of training instances
$\mathbf{s}=(s_1,s_2,\ldots)$ and from a sample of test instances
$\mathbf{t}=(t_1,t_2,\ldots)$ such that both samples are assumed to be
drawn IID from a distribution $\mu$. We use $\mathbf{s}(n)$ and
$\mathbf{t}(n)$ to denote the first $n$ examples of the two samples respectively, while
$\mathbf{s}^*$ denotes a fixed sequence of examples.
We use $L(f,p_{\mathbf{t}(n)})=\frac{1}{n^2} \sum_{(t_i,t_j)\in p_{\mathbf{t}(n)}} l(f,t_i,t_j)$ to refer to the average loss over a set of pairs for any learned metric $f$, and $\mathcal{L}(f)=\mathbb{E}_{z,z'\sim
\mu}l(f,z,z')$ for the expected loss.
We first define a notion of generalizability for metric learning.
\begin{definition}
\begin{enumerate}
\item Given a training pair set $p_{\mathbf{s}^*}$ coming from a sequence of examples $\mathbf{s}^*$, a metric learning method
$\mathcal{A}$ generalizes w.r.t. $p_{\mathbf{s}^*}$ if
$\lim_n\left|\mathcal{L}(\mathcal{A}_{p_{\mathbf{s}^*(n)}})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})\right|=0$.
\item A learning method
$\mathcal{A}$ generalizes with probability 1 if it generalizes
with respect to the pairs $p_{\mathbf{s}}$ of almost all samples $\mathbf{s}$ IID from $\mu$.
\end{enumerate}
\end{definition}
Note this notion of generalizability implies convergence in mean. We then introduce the notion of weak robustness for metric learning.
\begin{definition}
\begin{enumerate}
\item Given a set of training pairs $p_{\mathbf{s}^*}$ coming from a sequence of examples ${\mathbf{s}^*}$, a metric learning
method $\mathcal{A}$ is weakly robust with respect to $p_{\mathbf{s}^*}$ if there
exists a sequence of $\{\mathcal{D}_n\subseteq \mathcal{Z}^n\}$ such
that $Pr(\mathbf{t}(n)\in \mathcal{D}_n)\rightarrow 1$ and
$$
\lim_n\left\{\max_{\mathbf{\hat{s}}(n)\in\mathcal{D}_n}\left|L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{\hat{s}}(n)})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})\right|\right\}=0.
$$
\item A learning method $\mathcal{A}$ is almost surely weakly robust if it is robust
w.r.t. almost all $\mathbf{s}$.
\end{enumerate}
\end{definition}
The definition of robustness requires the labeled sample space to be
partitioned into disjoint subsets such that if the instances of a
training pair and a testing pair belong to the same regions of the partition, then the two pairs have
similar losses. Weak robustness is a generalization of this notion where
we consider the average loss of testing and training pairs: if for a
large (in the probabilistic sense) subset of data, the
testing loss is close to the training loss, then the algorithm is
weakly robust. From Proposition~\ref{prop:BHC}, we can see that if for any fixed
$\epsilon>0$ there exists $K$ such that an algorithm $\mathcal{A}$ is $(K,\epsilon)$ robust, then $\mathcal{A}$ is weakly robust. We now give the main result of this section about the necessity of robustness.
\begin{theorem}\label{th:weak}
Given a fixed sequence of training examples $\mathbf{s}^*$, a metric learning
method $\mathcal{A}$ generalizes w.r.t. $p_{\mathbf{s}^*}$ if and only if
it is weakly robust w.r.t. $p_{\mathbf{s}^*}$.
\end{theorem}
\begin{proof2}
Following \cite{XUrobustness-ML}, sufficiency follows from the fact that the testing pairs are built from a sample $\mathbf{t}(n)$ of $n$ IID instances. We give the proof in Appendix~\ref{proof:suff}.
For the necessity, we need the following lemma which is a direct adaptation of a result introduced in \cite{XUrobustness-ML} (Lemma 2). We provide the proof in Appendix~\ref{proof:lem1} for the sake of completeness.
\end{proof2}
\begin{lemma}\label{lem:div}
Given $\mathbf{s}^*$, if a learning method is not weakly robust
w.r.t. $p_{\mathbf{s}^*}$, there exists $\epsilon^*,\delta^*>0$ such that
the following holds for infinitely many $n$:
\begin{equation}\label{eq:div}
Pr(|L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|\geq\epsilon^*)\geq \delta^*.
\end{equation}
\end{lemma}
\begin{proof3}
Now, recall that $l$ is positive and uniformly bounded by $B$, thus by
McDiarmid's inequality (recalled in Appendix~\ref{mcdiarmid}) we have that for any $\epsilon,\delta>0$ there
exists an index $n^*$ such that for any $n>n^*$, with probability at least
$1-\delta$, we have
$|\frac{1}{n^2}\sum_{(t_i,t_j)\in p_{\mathbf{t}(n)}}l(\mathcal{A}_{p_{\mathbf{s}^*(n)}},t_i,t_j)-\mathcal{L}(\mathcal{A}_{p_{\mathbf{s}^*(n)}})|\leq
\epsilon$. This implies the convergence $L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)})-\mathcal{L}(\mathcal{A}_{p_{\mathbf{s}^*(n)}})\stackrel{Pr}{\rightarrow}0$, and thus from a given index:
\begin{equation}\label{eq:lim}
|L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)})-\mathcal{L}(\mathcal{A}_{p_{\mathbf{s}^*(n)}})|\leq \frac{\epsilon^*}{2}.
\end{equation}
Now, suppose by contradiction that algorithm $\mathcal{A}$ is not weakly robust. Lemma~\ref{lem:div}
implies that Equation~\ref{eq:div} holds for infinitely many $n$. Combined with
Equation~\ref{eq:lim}, this implies that for infinitely many $n$:
$$
|L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{t}(n)})-L(\mathcal{A}_{p_{\mathbf{s}^*(n)}},p_{\mathbf{s}^*(n)})|\geq \frac{\epsilon^*}{2}
$$
which means $\mathcal{A}$ does not generalize, thus the necessity of
weak robustness is established.
\end{proof3}
The following corollary follows immediately from Theorem~\ref{th:weak}.
\begin{corollary}
A metric learning method $\mathcal{A}$ generalizes with probability 1 if and
only if it is almost surely weakly robust.
\end{corollary}
\section{Preliminaries}
\label{prelim}
\subsection{Notations}
Let $X$ be the instance space, $Y$
be a finite label set and let $\mathcal{Z}=X\times Y$. In the following, $z=(x,y)\in\mathcal{Z}$ means $x\in X$ and $y\in Y$. Let $\mu$ be an unknown
probability distribution over $\mathcal{Z}$.
We assume
that $X$ is a compact convex metric space w.r.t. a norm $\|\cdot\|$ such that $X\subset\mathbb{R}^d$, thus there exists a constant $R$ such that $\forall x\in X$, $\|x\|\leq R$.
A similarity or distance function is a pairwise function $f:X\times
X\rightarrow \mathbb{R}$.
In the following, we use the generic term {\it metric} to refer to either a similarity or a distance function.
We denote by $\mathbf{s}$ a labeled training
sample consisting of $n$ training instances $(s_1,\ldots,s_n)$ drawn
IID from $\mu$.
The sample of all possible pairs built from $\mathbf{s}$ is denoted by $p_{\mathbf{s}}$ such that $p_{\mathbf{s}}=\{(s_1,s_1),\ldots,(s_1,s_n),\ldots,(s_n,s_n)\}$.
A metric learning algorithm $\mathcal{A}$ takes as input a finite set of pairs from
$(\mathcal{Z}\times\mathcal{Z})^n$ and outputs a metric.
We denote by $\mathcal{A}_{p_{\mathbf{s}}}$ the metric learned by an algorithm $\mathcal{A}$ from a sample $p_{\mathbf{s}}$ of pairs.
For any pair of labeled examples $(z,z')$ and any metric $f$, we associate a loss function $l(f,z,z')$ which depends on the examples and their labels. This loss is assumed to be nonnegative and uniformly bounded by a constant $B$.
We define the true generalization loss over $\mu$ by
$\mathcal{L}(f)=\mathbb{E}_{z,z'\sim \mu}l(f,z,z')$.
We denote the empirical loss over the sample $p_{\mathbf{s}}$ by
$l_{emp}(f)=\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^nl(f,s_i,s_j)=\frac{1}{n^2}\sum_{(s_i,s_j)\in p_{\mathbf{s}}}l(f,s_i,s_j)$.
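As a sanity check of the notation, $l_{emp}$ averages the pairwise loss over all $n^2$ ordered pairs of the sample, self-pairs included. A minimal sketch with a hypothetical bounded loss:

```python
def l_emp(loss, sample):
    """Empirical loss (1/n^2) * sum of loss(s_i, s_j) over all ordered
    pairs of the sample, self-pairs included, matching p_s above."""
    n = len(sample)
    return sum(loss(si, sj) for si in sample for sj in sample) / (n * n)

# Hypothetical loss: 1 when two examples share a label but are far apart
def toy_loss(s, t):
    (x1, y1), (x2, y2) = s, t
    return 1.0 if y1 == y2 and abs(x1 - x2) > 1.0 else 0.0

sample = [(0.0, 'a'), (2.0, 'a'), (0.5, 'b')]
print(l_emp(toy_loss, sample))  # 2 offending ordered pairs out of 9
```

The loss here is bounded by $B=1$, matching the uniform boundedness assumption above.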
\subsection{Robustness for classical supervised learning}
The notion of algorithmic robustness, introduced by Xu and Mannor \cite{XUrobustness,XUrobustness-ML} in the context of classic supervised learning, is based on the deviation between the losses associated with a training instance and a testing instance that are close. An algorithm is said to be $(K,\epsilon(\mathbf{s}))$-robust if there exists a partition of the space $\mathcal{Z}=X\times Y$ into $K$ disjoint subsets such that, for every training and testing instance belonging to the same region of the partition, the deviation between their associated losses is bounded by a term $\epsilon(\mathbf{s})$. From this definition, the authors proved a bound on the difference between the empirical and true losses of the form $\epsilon(\mathbf{s})+B\sqrt{\frac{2K \ln 2 + 2\ln 1/\delta}{n}}$ (with probability $1-\delta$).
This bound depends on $K$ and $\epsilon(\mathbf{s})$: $\epsilon(\mathbf{s})$ can be made as small as desired by refining the partition, at the price of a larger $K$.
When considering metric spaces, the partition of $\mathcal{Z}$ can be obtained by the notion of covering number \cite{Kolmogorov61}.
\begin{definition}
For a metric space $(X,\rho)$, and $T\subset X$, we say that
$\hat{T}\subset T$ is a $\gamma$-cover of $T$, if $\forall t\in T$,
$\exists \hat{t}\in \hat{T}$ such that $\rho(t,\hat{t})\leq \gamma$. The
$\gamma$-covering number of $T$ is
$$
{\mathcal N}(\gamma,T,\rho)=\min\{|\hat{T}|: \hat{T} \mbox{\ is a\ }
\gamma\mbox{-cover of\ }T\}.
$$
\end{definition}
For example, when $X$ is a compact convex space, for any $\gamma>0$, the quantity ${\mathcal N}(\gamma,X,\rho)$ is finite leading to a finite cover.
If we consider the space $\mathcal{Z}$, we can note that the label set can be partitioned into $|Y|$ sets. Thus, $\mathcal{Z}$ can be partitioned into $|Y|\mathcal{N}(\gamma,X,\rho)$ subsets such that if two instances $z_1=(x_1,y_1)$, $z_2=(x_2,y_2)$ belong to the same subset, then $y_1=y_2$ and $\rho(x_1,x_2)\leq \gamma$.
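A $\gamma$-cover as in the definition above can be built greedily: repeatedly promote any uncovered point to a center. A minimal sketch on integer toy data, assuming $\rho$ is the absolute difference:

```python
def greedy_cover(points, gamma, rho=lambda a, b: abs(a - b)):
    """Greedy gamma-cover: keep a point as a new center whenever it is
    farther than gamma from every center chosen so far."""
    centers = []
    for t in points:
        if all(rho(t, c) > gamma for c in centers):
            centers.append(t)
    return centers

grid = list(range(21))          # the set T = {0, 1, ..., 20}
cover = greedy_cover(grid, 2)   # gamma = 2
assert all(any(abs(t - c) <= 2 for c in cover) for t in grid)
print(cover)  # prints [0, 3, 6, 9, 12, 15, 18]
```

The greedy cover is not minimal in general, so it only upper-bounds the covering number $\mathcal{N}(\gamma,T,\rho)$.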
\section{Robustness and Generalization for Metric Learning }
\label{robustsec}
\begin{figure}[t]
\begin{center}
\psfrag{Classic robustness}[][][0.8]{Classic robustness}
\psfrag{Robustness for metric learning}[][][0.8]{Robustness for metric learning}
\psfrag{z}[][][0.8]{$z$}
\psfrag{z'}[][][0.8]{$z'$}
\psfrag{z1}[][][0.8]{$z_1$}
\psfrag{z2}[][][0.8]{$z_2$}
\psfrag{Ci}[][][0.6]{$C_i$}
\psfrag{Cj}[][][0.6]{$C_j$}
\includegraphics[width=0.7\columnwidth]{robustness.eps}
\caption[Illustration of robustness in the classic and metric learning settings]{Illustration of the property of robustness in the classic and metric learning settings. In this example, we use a cover based on the $L_1$ norm. In the classic definition, if any example $z'$ falls in the same region $C_i$ as a training example $z$, then the deviation between their loss must be bounded. In the metric learning definition proposed in this work, for any pair $(z,z')$ and a training pair $(z_1,z_2)$, if $z,z_1$ belong to some region $C_i$ and $z',z_2$ to some region $C_j$, then the deviation between the loss of these two pairs must be bounded.}
\label{fig:robustness}
\end{center}
\end{figure}
We present here our adaptation of robustness to metric learning.
The idea is to use the partition of $\mathcal{Z}$ at the pair level: if a new test pair of examples is close to a learning pair, then the losses of the two pairs must be close. Two pairs are close when each instance of the first pair falls into the same subset of the partition of $\mathcal{Z}$ as the corresponding instance of the other pair, as shown in Figure~\ref{fig:robustness}.
A metric learning algorithm with this property is said robust. This notion is formalized as follows.
\begin{definition}\label{def:robu}
An algorithm $\mathcal{A}$ is $(K,\epsilon(\cdot))$-robust for
$K\in\mathbb{N}$ and $\epsilon(\cdot): (\mathcal{Z}\times\mathcal{Z})^n \rightarrow
\mathbb{R}$ if $\mathcal{Z}$ can be partitioned into $K$ disjoint sets, denoted
by $\{C_i\}_{i=1}^K$, such that for every
sample $\mathbf{s}\in\mathcal{Z}^n$ and the pair set $p_{\mathbf{s}}$ associated to this sample, the following holds: \\
$\forall (s_1,s_2) \in p_{\mathbf{s}}, \forall z_1,z_2 \in
\mathcal{Z}, \forall i,j=1,\ldots,K:$ if
$s_1,z_1\in C_i$ and $s_2,z_2\in C_j$ then
\begin{equation}\label{eq:robustness}
|l(\mathcal{A}_{p_{\mathbf{s}}},s_1,s_2)-l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|\leq \epsilon(p_{\mathbf{s}}).
\end{equation}
\end{definition}
$K$ and $\epsilon(\cdot)$ quantify the robustness of the algorithm which depends on the learning sample. The property of robustness is required for every training pair of the sample; we will see later that this property can be relaxed.
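For a fixed partition, Definition~\ref{def:robu} suggests a direct empirical check: estimate $\epsilon$ as the largest loss deviation over pairs whose instances share cells. A minimal sketch (the cell map and the loss are both hypothetical):

```python
def robustness_epsilon(loss, cell, train, probes):
    """Empirical epsilon: max over training pairs (s1, s2) and probe
    pairs (z1, z2) with cell(z1) == cell(s1) and cell(z2) == cell(s2)
    of |loss(s1, s2) - loss(z1, z2)|."""
    eps = 0.0
    for s1 in train:
        for s2 in train:
            for z1 in probes:
                for z2 in probes:
                    if cell(z1) == cell(s1) and cell(z2) == cell(s2):
                        eps = max(eps, abs(loss(s1, s2) - loss(z1, z2)))
    return eps

# Cells: (label, sign of the feature); loss: a bounded toy pair loss
cell = lambda z: (z[1], z[0] >= 0)
loss = lambda s, t: min(1.0, abs(abs(s[0] - t[0]) - (0.0 if s[1] == t[1] else 2.0)))
train = [(-1.0, 'a'), (1.0, 'b')]
probes = [(-1.2, 'a'), (0.8, 'b')]
print(robustness_epsilon(loss, cell, train, probes))  # prints 0.0
```

Probes far from the training points of their cell would drive the estimate up, mirroring how a coarser partition inflates $\epsilon(\cdot)$.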
Note that this definition of robustness can be easily extended to triplet-based metric learning algorithms. Instead of considering all the pairs $p_{\mathbf{s}}$ from an IID sample $\mathbf{s}$, we take the admissible triplet set $trip_{\mathbf{s}}$ of $\mathbf{s}$ such that $(s_1,s_2,s_3)\in trip_{\mathbf{s}}$ means $s_1$ and $s_2$ share the same label while $s_1$ and $s_3$ have different ones, with the interpretation that $s_1$ must be more similar to $s_2$ than to $s_3$. The robustness property can then be expressed by:
$\forall (s_1,s_2,s_3) \in trip_{\mathbf{s}}, \forall z_1,z_2,z_3 \in
\mathcal{Z}, \forall i,j,l=1,\ldots,K:$ if
$s_1,z_1\in C_i$, $s_2,z_2\in C_j$ and $s_3,z_3\in C_l$ then
\begin{equation}\label{eq:robu_trip} |l(\mathcal{A}_{trip_{\mathbf{s}}},s_1,s_2,s_3)-l(\mathcal{A}_{trip_{\mathbf{s}}},z_1,z_2,z_3)|\leq \epsilon(trip_{\mathbf{s}}).
\end{equation}
\subsection{Generalization of robust algorithms}
We now give a PAC generalization bound for metric learning algorithms fulfilling the property of robustness (Definition~\ref{def:robu}).
We first begin by presenting a concentration inequality that will help us to derive the bound.
\begin{proposition}[\cite{VDV-empricalprocess}]\label{prop:BHC}
Let $(|N_1|,\ldots,|N_K|)$ be an IID multinomial random variable with
parameters $n$ and $(\mu(C_1),\ldots,\mu(C_K))$.
By the Bretagnolle-Huber-Carol inequality we have:
$
Pr\left\{\sum_{i=1}^K \left|\frac{|N_i|}{n}-\mu(C_i)\right| \geq
\lambda \right\}\leq 2^K \exp\left(\frac{-n\lambda^2}{2}\right)
$,
hence with probability at least $1-\delta$,
\begin{equation}
\sum_{i=1}^K \left|\frac{|N_i|}{n}-\mu(C_i)\right|\leq \sqrt{\frac{2K\ln
2 + 2 \ln(1/\delta)}{n}}.
\end{equation}
\end{proposition}
We now give our first result on the generalization of metric learning algorithms.
\begin{theorem}\label{th:robu}
If a learning algorithm $\mathcal{A}$ is $(K,\epsilon(\cdot))$-robust
and the training sample is made of the pairs $p_{\mathbf{s}}$ obtained from a sample $\mathbf{s}$ generated by $n$ IID draws from $\mu$, then
for any $\delta>0$, with probability at least $1-\delta$ we have:
$$
|\mathcal{L}(\mathcal{A}_{p_{\mathbf{s}}})-l_{emp}(\mathcal{A}_{p_{\mathbf{s}}})|\leq \epsilon(p_{\mathbf{s}})+2B\sqrt{\frac{2K \ln 2 + 2\ln 1/\delta}{n}}.
$$
\end{theorem}
\begin{proof}
Let $N_i$ be the set of indices of the points of $\mathbf{s}$ that fall into $C_i$. $(|N_1|,\ldots,|N_K|)$ is an IID multinomial random variable with parameters $n$ and $(\mu(C_1),\ldots,\mu(C_K))$.
We have:
\begin{eqnarray*}
\lefteqn{|\mathcal{L}(\mathcal{A}_{p_{\mathbf{s}}})-l_{emp}(\mathcal{A}_{p_{\mathbf{s}}})|}\\
&=&\left|\sum_{i=1}^K\sum_{j=1}^K\mathbb{E}_{z_1,z_2\sim
\mu}(l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|z_1\in C_i,z_2\in C_j)\mu(C_i)\mu(C_j)-\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^nl(\mathcal{A}_{p_{\mathbf{s}}},s_i,s_j)\right|\\
&\stackrel{(a)}{\leq}&\left|\sum_{i=1}^K\sum_{j=1}^K\mathbb{E}_{z_1,z_2\sim
\mu}(l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|z_1\in C_i,z_2\in
C_j)\mu(C_i)\mu(C_j)-\right.\\
&&\hspace{2cm}\left.\sum_{i=1}^K\sum_{j=1}^K\mathbb{E}_{z_1,z_2\sim
\mu}(l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|z_1\in C_i,z_2\in
C_j)\mu(C_i)\frac{|N_j|}{n}\right|+\\
&&\left|\sum_{i=1}^K\sum_{j=1}^K\mathbb{E}_{z_1,z_2\sim
\mu}(l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|z_1\in C_i,z_2\in C_j)\mu(C_i)\frac{|N_j|}{n}-\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^nl(\mathcal{A}_{p_{\mathbf{s}}},s_i,s_j)\right|\\
&\stackrel{(b)}{\leq}&\left|\sum_{i=1}^K\sum_{j=1}^K\mathbb{E}_{z_1,z_2\sim
\mu}(l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|z_1\in C_i,z_2\in
C_j)\mu(C_i)(\mu(C_j)-\frac{|N_j|}{n})\right|+\\
&&\left|\sum_{i=1}^K\sum_{j=1}^K\mathbb{E}_{z_1,z_2\sim
\mu}(l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|z_1\in C_i,z_2\in
C_j)\mu(C_i)\frac{|N_j|}{n}-\right.\\
&&\hspace{2cm}\left.\sum_{i=1}^K\sum_{j=1}^K\mathbb{E}_{z_1,z_2\sim
\mu}(l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|z_1\in C_i,z_2\in
C_j)\frac{|N_i|}{n}\frac{|N_j|}{n}\right|+\\
&&\left|\sum_{i=1}^K\sum_{j=1}^K\mathbb{E}_{z_1,z_2\sim
\mu}(l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|z_1\in C_i,z_2\in
C_j)\frac{|N_i|}{n}\frac{|N_j|}{n}
-\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^nl(\mathcal{A}_{p_{\mathbf{s}}},s_i,s_j)\right|\\
\end{eqnarray*}
\begin{eqnarray*}
&\stackrel{(c)}{\leq}&B\left(\sum_{j=1}^K\left|\mu(C_j)-\frac{|N_j|}{n}\right|
+\sum_{i=1}^K\left|\mu(C_i)-\frac{|N_i|}{n}\right|\right)+\\
&&\left|\frac{1}{n^2}\sum_{i=1}^K\sum_{j=1}^K\sum_{s_o\in N_i}\sum_{s_l\in
N_j}\max_{z\in C_i}\max_{z'\in
C_j}|l(\mathcal{A}_{p_{\mathbf{s}}},z,z')-l(\mathcal{A}_{p_{\mathbf{s}}},s_o,s_l)|\right|\\
&\stackrel{(d)}{\leq}&\epsilon(p_{\mathbf{s}})+2B\sum_{i=1}^K\left|\frac{|N_i|}{n}-\mu(C_i)\right|
\stackrel{(e)}{\leq}\epsilon(p_{\mathbf{s}})+2B\sqrt{\frac{2K \ln 2 + 2\ln 1/\delta}{n}}.
\end{eqnarray*}
Inequalities $(a)$ and $(b)$ are due to the triangle inequality, $(c)$ uses the fact that $l$ is bounded by $B$, that $\sum_{i=1}^K\mu(C_i)=1$ by definition of a multinomial random variable and that $\sum_{j=1}^K \frac{|N_j|}{n}=1$ by definition of the $N_j$. Lastly, $(d)$ is due to the hypothesis of robustness (Equation~\ref{eq:robustness}) and $(e)$ to the application of Proposition~\ref{prop:BHC}.
\end{proof}
The previous bound depends on $K$ which is given by the cover chosen for $\mathcal{Z}$. If for any $K$, the associated $\epsilon(\cdot)$ is a constant (i.e. $\epsilon_K(\mathbf{s})=\epsilon_K$) for any $\mathbf{s}$, we can prove a bound holding uniformly for all $K$: $|\mathcal{L}(\mathcal{A}_{p_{\mathbf{s}}})-l_{emp}(\mathcal{A}_{p_{\mathbf{s}}})|\leq \inf_{K\geq 1}\left[\epsilon_K+2B\sqrt{\frac{2K \ln 2 + 2\ln1/\delta}{n}}\right]$. This also gives an insight into the objective of any robust algorithm: according to a partition of the labeled input space, given two regions, minimize the maximum loss over pairs of examples belonging to each region.
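The trade-off discussed above (refining the partition shrinks $\epsilon_K$ but inflates the $K$-dependent term) can be made concrete by evaluating the right-hand side of the bound for a few hypothetical $(K,\epsilon_K)$ pairs:

```python
import math

def robustness_bound(eps_K, K, n, B=1.0, delta=0.05):
    """Right-hand side of the theorem:
    eps_K + 2*B*sqrt((2*K*ln 2 + 2*ln(1/delta)) / n)."""
    return eps_K + 2 * B * math.sqrt((2 * K * math.log(2) + 2 * math.log(1 / delta)) / n)

# Hypothetical eps_K values decaying as the partition is refined
n = 10_000
for K, eps_K in [(10, 0.5), (100, 0.2), (1000, 0.05)]:
    print(K, round(robustness_bound(eps_K, K, n), 3))
```

With these illustrative numbers the middle choice wins: the decrease in $\epsilon_K$ initially outweighs the growth of the $\sqrt{K}$ term, then stops doing so.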
For triplet based metric learning algorithms, by following the definition of robustness given by Equation~\ref{eq:robu_trip} and adapting straight forwardly the losses to triplets such that they output zero for non admissible ones, Theorem~\ref{th:robu} can be easily extended to obtain the following generalization bound:
\begin{equation}\label{eq:bound_trip}
|\mathcal{L}(\mathcal{A}_{trip_{\mathbf{s}}})-l_{emp}(\mathcal{A}_{trip_{\mathbf{s}}})|\leq \epsilon(trip_{\mathbf{s}})+3B\sqrt{\frac{2K \ln 2 + 2\ln 1/\delta}{n}}.
\end{equation}
\subsection{Pseudo-robustness}
The previous study requires the robustness property to hold for every learning pair. We show, with the following definition, that it is possible to relax the robustness to hold for only a subset of the pairs and still derive generalization guarantees.
\begin{definition}
An algorithm $\mathcal{A}$ is $(K,\epsilon(\cdot),\hat{p}_n(\cdot))$
pseudo-robust for
$K\in\mathbb{N}$, $\epsilon(\cdot): (\mathcal{Z}\times\mathcal{Z})^n \rightarrow
\mathbb{R}$ and $\hat{p}_n(\cdot): (\mathcal{Z}\times\mathcal{Z})^n \rightarrow
\{1,\ldots,n^2\}$, if $\mathcal{Z}$ can be partitioned into $K$ disjoint sets,
denoted
by $\{C_i\}_{i=1}^K$, such that for all
$\mathbf{s}\in\mathcal{Z}^n$ IID from $\mu$, there exists a subset of training pairs $\hat{p}_{\mathbf{s}} \subseteq p_{\mathbf{s}}$, with $|\hat{p}_{\mathbf{s}}|=\hat{p}_n(p_{\mathbf{s}})$,
such that the following holds:\\
$\forall (s_1,s_2)\in \hat{p}_{\mathbf{s}}, \forall z_1,z_2 \in
\mathcal{Z}, \forall i,j=1,\ldots,K$: if
$s_1,z_1\in C_i$ and $s_2,z_2\in C_j$ then
\begin{equation}
|l(\mathcal{A}_{p_{\mathbf{s}}},s_1,s_2)-l(\mathcal{A}_{p_{\mathbf{s}}},z_1,z_2)|\leq \epsilon(p_{\mathbf{s}}).
\end{equation}
\end{definition}
We can easily observe that $(K,\epsilon(\cdot))$-robustness is equivalent to $(K,\epsilon(\cdot),n^2)$ pseudo-robustness. The following theorem illustrates the generalization guarantees associated with the pseudo-robustness property.
\begin{theorem}\label{th:pseudo-robustesse}
If a learning algorithm $\mathcal{A}$ is
$(K,\epsilon(\cdot),\hat{p}_n(\cdot))$ pseudo-robust and the training pairs $p_{\mathbf{s}}$ come from a sample generated by $n$ IID draws from $\mu$, then
for any $\delta>0$, with probability at least $1-\delta$ we have:
$$
|\mathcal{L}(\mathcal{A}_{p_{\mathbf{s}}})-l_{emp}(\mathcal{A}_{p_{\mathbf{s}}})|\leq \frac{\hat{p}_n(p_{\mathbf{s}})}{n^2}\epsilon(p_{\mathbf{s}})+B(\frac{n^2-\hat{p}_n(p_{\mathbf{s}})}{n^2}+2\sqrt{\frac{2K \ln 2 + 2\ln 1/\delta}{n}}).
$$
\end{theorem}
The proof is similar to that of Theorem~\ref{th:robu} and is given in Appendix~\ref{proof:th2}.
The notion of pseudo-robustness characterizes a situation that often occurs
in metric learning: it is sometimes difficult to optimize the metric over all the possible pairs. This theorem shows that it suffices to have a property of robustness over only a subset of the possible pairs to have generalization guarantees.
Moreover, it also gives an insight into the behavior of metric learning approaches aiming at learning a distance to be plugged into a $k$-nearest neighbor classifier, such as LMNN \cite{Weinberger2009}. These methods do not optimize the distance according to all possible pairs, but only according to the nearest neighbors of the same class and some pairs from different classes.
According to the previous theorem, this principle is well founded provided that the robustness property is fulfilled for some of the pairs used to optimize the metric.
Finally, note that this notion of pseudo-robustness can also be easily adapted to triplet-based metric learning.
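For intuition, the right-hand side of the pseudo-robustness bound can be evaluated numerically. The sketch below (all numbers hypothetical) compares full robustness ($\hat{p}_n = n^2$) with robustness on 90 percent of the pairs:

```python
import math

def pseudo_bound(eps, K, n, p_hat, B=1.0, delta=0.05):
    """RHS of the pseudo-robustness theorem: (p_hat/n^2)*eps
    + B*((n^2 - p_hat)/n^2 + 2*sqrt((2*K*ln 2 + 2*ln(1/delta))/n))."""
    tail = 2 * math.sqrt((2 * K * math.log(2) + 2 * math.log(1 / delta)) / n)
    return (p_hat / n**2) * eps + B * ((n**2 - p_hat) / n**2 + tail)

n, K, eps = 10_000, 100, 0.1
full = pseudo_bound(eps, K, n, p_hat=n**2)               # robust on all pairs
partial = pseudo_bound(eps, K, n, p_hat=int(0.9 * n**2)) # robust on 90 percent
print(round(full, 3), round(partial, 3))
```

The unprotected fraction of pairs enters only through the additive term $B(n^2-\hat{p}_n)/n^2$, which quantifies the price of optimizing the metric over a subset of the pairs.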
\section{Introduction}
The 5G communication system has been commercialized worldwide, and the beyond 5G (B5G) system is attracting more and more researchers' attention due to its low energy consumption, high spectrum efficiency and massive multi-device interconnections \cite{david20186g,saad2019vision,tariq2019speculative}. In order to satisfy the increasing demand caused by the fast-growing number of users, various techniques, including millimetre wave \cite{jamali2019intelligent}, massive multiple-input multiple-output (MIMO) systems \cite{larsson2014massive}, and small cells \cite{guo2016method}, have been investigated and extensively used in practice. As a potential technique of B5G, non-orthogonal multiple access (NOMA) has received widespread attention due to its high spectral efficiency \cite{ding2015application, dai2018survey}. Different from conventional orthogonal multiple access (OMA) schemes, such as frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), and orthogonal frequency-division multiple access (OFDMA), NOMA allows multiple users to share the same time slot, frequency block and channel code, which dramatically increases the spectral efficiency. In particular, the users in a NOMA network usually adopt successive interference cancellation (SIC) to remove the interference from other NOMA users, which can efficiently improve the signal to interference and noise ratio (SINR) and reception reliability \cite{saito2015performance}. Recently, the intelligent reflecting surface (IRS) has also been proposed as a potential solution to further improve the performance of wireless networks, including enlarging the communication coverage and improving transmission robustness. Specifically, the IRS can reflect the electromagnetic wave to extend the coverage of the base station (BS). 
It also has the ability to tune the channel by adjusting the phase shift of each element, which will greatly improve the quality of users' received signal\cite{wu2018intelligent}. \par
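To illustrate the SIC step mentioned above, consider a two-user downlink NOMA toy model (all gains, powers and the power split below are hypothetical): the weak user decodes its own signal treating the strong user's as noise, while the strong user first cancels the weak user's signal and then decodes its own interference-free:

```python
import math

def noma_rates(p, g_strong, g_weak, a_weak, noise=1.0):
    """Achievable rates (bits/s/Hz) for two-user downlink NOMA with SIC.
    A fraction a_weak (> 0.5) of the power p is allocated to the weak user."""
    a_strong = 1.0 - a_weak
    # Weak user: decodes directly, strong user's signal acts as interference
    sinr_weak = (a_weak * p * g_weak) / (a_strong * p * g_weak + noise)
    # Strong user: SIC removes the weak user's signal, leaving no interference
    snr_strong = (a_strong * p * g_strong) / noise
    return math.log2(1.0 + snr_strong), math.log2(1.0 + sinr_weak)

r_strong, r_weak = noma_rates(p=10.0, g_strong=1.0, g_weak=0.1, a_weak=0.75)
print(round(r_strong, 2), round(r_weak, 2))  # the strong user gets the higher rate
```

The power split favouring the weak user, together with SIC at the strong user, is what lets both users share the same resource block.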
\subsection{Related Works}
In the literature, extensive research has been carried out on the NOMA technique, which has been combined with various state-of-the-art techniques such as MIMO and orthogonal time-frequency space modulation (OTFS) \cite{sun2018joint, fang2017joint, surabhi2019diversity, tang2019wireless, fang2016energy}. Recently, the IRS has emerged as a kind of powerful equipment for wireless communication networks \cite{wu2019towards, di2019smart, wu2019intelligent}. Among these works, the IRS has been shown to be an effective solution for wireless communication networks, where the channel can be intelligently reconfigured by the IRS \cite{tang2019wireless, tang2019wireless1, ASIRSNOMAMag, FangIRS2020}. \par
Motivated by the benefits of NOMA and IRS, their combination has recently been proposed as a promising solution to improve communication systems, and several works have studied it. Some recent research works such as \cite{zhu2019power, ding2020impact} considered a simple scenario where a single IRS serves two users in a downlink NOMA network. In \cite{zhu2019power}, the authors minimized the transmit power at the BS by optimizing the beamforming and the IRS phase shift, and also considered an improved quasi-degradation condition to guarantee that NOMA can achieve the capacity region with high probability. In \cite{ding2020impact}, the authors analysed two kinds of phase shift designs, namely random phase shifting and coherent phase shifting.
Moreover, there are many works considering an IRS-assisted NOMA network where a single IRS serves multiple users \cite{fu2019reconfigurable, liu2020ris, mu2019exploiting, zuo2020resource, zeng2020sum}. The problems which have been researched can be divided into two categories: one is transmit power minimization \cite{fu2019reconfigurable, liu2020ris} and the other is sum-rate maximization \cite{mu2019exploiting, zuo2020resource, zeng2020sum}. For the transmit power minimization problem, the authors in \cite{fu2019reconfigurable} minimized the total transmit power by optimizing the beamforming vectors of each user and the phase shift design of the IRS in an IRS-empowered downlink NOMA network. \cite{liu2020ris} considered a single-IRS-assisted downlink NOMA network and adopted reinforcement learning to design the beamforming vectors which minimize the transmit power at the BS. Regarding the sum rate maximization problem, the authors of \cite{mu2019exploiting} optimized the beamforming design to maximize the sum rate in a downlink MISO IRS-aided NOMA system. \cite{zuo2020resource} discussed a multi-channel downlink IRS-NOMA framework, where the sum rate of multiple NOMA users served by one IRS was maximized by optimizing the resource allocation to each user and jointly considering channel assignment and decoding order. \cite{zeng2020sum} considered an IRS-assisted uplink NOMA system where multiple NOMA users can only transmit data through an IRS to the BS. \par
There are also some works considering a multi-cluster system model, i.e., users are divided into different clusters \cite{li2019joint, ni2020resource}. In \cite{li2019joint}, the authors discussed a downlink IRS-assisted NOMA network where two types of users, namely the central user and the cell edge user, were assigned to different clusters. Each cluster had one central user, one cell edge user and one IRS serving all users. The authors minimized the transmit power at the BS by jointly optimizing the beamforming vectors of each user and the phase shift design of the IRS. In \cite{ni2020resource}, the authors considered a multi-cluster and multi-BS IRS-aided NOMA network, where each cluster is served by its associated BS and one IRS serves all clusters. The sum rate was maximized by jointly optimizing power allocation and phase shift.\par
\subsection{Motivation and Challenges}
All the above works consider only one IRS. However, the channel state of each user depends on its particular surrounding environment, so a single IRS might not be able to reconfigure all users' channels simultaneously. We therefore propose the use of multiple IRSs to assist the users with poor channel conditions: each IRS can adjust its phase shifts dedicatedly for its associated user to create a better channel condition. In this paper, we consider a multi-cluster NOMA network, where each cluster has one IRS and the BS generates a unique beam for each cluster to serve all users located in that cluster. \par
The considered scenario raises several challenges. The multi-user and multi-IRS setting increases the number of optimization variables and hence makes the optimization more complicated than the case with a single IRS in the network. The joint optimization problem contains three coupled variables, which makes it non-convex and highly intractable. Although we divide the primal problem into subproblems and transform them into convex forms through approximations, the feasibility of these subproblems cannot be guaranteed during the transformation. Moreover, due to the large number of variables, the computing time of the algorithms can be extensive.
\subsection{Contributions}
Different from the above mentioned works \cite{li2019joint, ni2020resource}, in this paper we adopt a new system model and employ multiple IRSs to assist the users. We formulate a non-convex optimization problem which is highly intractable, and propose a novel alternating algorithm to solve it efficiently. Finally, we simplify the system model and propose a low-complexity algorithm which achieves a reasonable performance. We summarize the contributions as follows:
\begin{itemize}
\item We consider a multi-cluster IRS-NOMA system, where each cluster contains two users served by one IRS. We formulate the transmit power minimization problem with respect to the beamforming vector, the phase shift matrix of the IRS and the power allocation coefficient of each cluster. Each IRS can accomplish channel reconfiguration according to the channel condition between the BS and the cell edge user it serves, which intuitively yields a better performance than the scenario in which a single IRS serves the whole system.
\item The formulated problem is non-convex because the three variables are highly coupled. To solve it, we propose an alternating algorithm that decouples the variables and divides the primal problem into two subproblems. However, the beamforming optimization subproblem still has two coupled variables, which causes intractability. To address this challenge, we first adopt the inequality of arithmetic and geometric means to replace the non-convex term with a convex upper bound. Then, we use the equivalence between the positive semidefiniteness of a matrix and the nonnegativity of its Schur complement, together with successive convex approximation (SCA), to transform another non-convex constraint into a convex form. Finally, we use an alternating algorithm to iteratively solve the two subproblems.
\item We introduce some fixed points during the approximation. It is essential to obtain a proper initial choice of these fixed points to guarantee the feasibility of the beamforming optimization problem. Therefore, we propose a feasible initial point search algorithm, in which an auxiliary variable is introduced to force all constraints to be feasible. We minimize this auxiliary variable until it equals zero; the values of the fixed points at that moment serve as the initial fixed points for the proposed alternating algorithm.
\item The complexity of the proposed alternating algorithm is high. To reduce it, we simplify the system model by letting all clusters share the same power allocation coefficient. With this assumption, the previous problem degrades into a simpler one with two coupled variables. We design a partial exhaustive search algorithm to solve this new problem with low complexity. Compared with the alternating algorithm, the complexity is reduced while the performance remains reasonable.
\end{itemize}
\subsection{Organization}
The rest of the paper is organized as follows. In Section \uppercase\expandafter{\romannumeral2}, we describe a multi-cluster IRS-assisted NOMA downlink network and formulate a transmit power minimization problem. In Section \uppercase\expandafter{\romannumeral3}, the solution to the formulated problem is introduced. In Section \uppercase\expandafter{\romannumeral4}, we briefly illustrate the simplified optimization problem and the partial exhaustive search based algorithm. In Section \uppercase\expandafter{\romannumeral5}, we provide the convergence analysis of the algorithms and present simulation results to analyze the performance of the proposed algorithms. Finally, a conclusion is summarized in Section \uppercase\expandafter{\romannumeral6}.
\section{System Model and Problem Formulation}
\subsection{System Model}
\begin{figure}
\centering
\graphicspath{{./figures/}}\includegraphics[width=0.8\linewidth]{system_model.png}\\
\caption{An IRS-NOMA system model.} \label{Fig0}
\end{figure}
As shown in Fig.~\ref{Fig0}, we consider a multi-user downlink network where two types of users, namely the central users and the cell edge users, are served by the BS simultaneously. Generally, the central users are much closer to the base station than the cell edge users. We assume that there are $K$ clusters, and each cluster contains a central user, a cell edge user and an IRS. We use $CU_k$, $EU_k$ and ${\rm IRS}_k$ to represent the central user, the cell edge user and the IRS in the $k$-th cluster, respectively. Each IRS is equipped with $N$ passive reflecting elements and assists its cell edge user in receiving the signal from the BS. The BS is equipped with $M$ ($M \geq K$) antennas and generates $K$ beams to serve the $K$ clusters. We assume that the direct links between the BS and all cell edge users are unavailable due to blockage, and the IRSs are deployed to reflect the signals sent by the BS to the cell edge users. The clusters are far away from each other, so the interference caused by the IRSs serving the other clusters can be reasonably ignored. In each cluster, the IRS is deployed close to the cell edge user so that it does not affect the central user either. \par
To improve the spectral efficiency, we adopt NOMA to serve all users simultaneously and assign different power levels to the two users in each cluster. The base station broadcasts the superposition signal $\sum_{k=1}^K \mathbf{w}_k(\sqrt{\alpha_k} s_{k,c} + \sqrt{1-\alpha_k}s_{k,e})$, where $\mathbf{w}_k \in \mathbb{C}^M$ denotes the beamforming vector of the $k$-th cluster, $k\in \{1,2,...,K\}$, $s_{k,c}$ and $s_{k,e}$ denote the signals intended for the central user and the cell edge user, respectively, and $\alpha_k$ is the power allocation coefficient of $CU_k$; thus $1-\alpha_k$ is the power allocation coefficient of $EU_k$. We assume that the BS perfectly knows the channel state information (CSI). Therefore, the signal received at $CU_k$ is given by
\begin{align}
y_{k,c} = \mathbf{h}_{k,c}^H\sum_{i=1}^K \mathbf{w}_i(\sqrt{\alpha_i} s_{i,c} + \sqrt{1-\alpha_i}s_{i,e}) + w_{c,k},
\end{align}
where $\mathbf{h}_{k,c} \in \mathbb{C}^{M\times 1}$ denotes the channel vector between the base station and $CU_k$, and $w_{c,k} \sim \mathcal{CN}(0,\sigma^2)$ is the additive white Gaussian noise (AWGN). Meanwhile, the signal received at $EU_k$ is given by
\begin{align}
y_{k,e} = (\mathbf{h}_{k,e}^H\mathbf{\Theta}_k\mathbf{G}_k)\sum_{i=1}^K \mathbf{w}_i(\sqrt{\alpha_i} s_{i,c} + \sqrt{1-\alpha_i}s_{i,e}) + w_{e,k},
\end{align}
where $\mathbf{G}_k \in \mathbb{C}^{N \times M}$ denotes the channel matrix between the BS and ${\rm IRS}_k$, $w_{e,k} \sim \mathcal{CN}(0,\sigma^2)$ denotes the AWGN, $\mathbf{h}_{k,e} \in \mathbb{C}^{N\times 1}$ denotes the channel vector between ${\rm IRS}_k$ and $EU_k$, and $\mathbf{\Theta}_k = {\rm diag}(\beta e^{j\theta_1^k},...,\beta e^{j\theta_N^k})$ is the phase shift matrix of ${\rm IRS}_k$, where $\theta_n^k \in [0,2\pi)$, $n \in \{1,...,N\}$, and $\beta \in [0,1]$ denote the phase shift of reflecting element $n$ and the amplitude coefficient applied to the signal, respectively. Without loss of generality, we assume $\beta=1$, given the fact that each reflecting element can only change the phase but not the amplitude of the reflected signal \cite{liu2020ris}. More detailed discussions on the choices of the reflecting amplitude and the phase shift can be found in \cite{abeywickrama2020intelligent}. Due to the path loss, we consider that the signal can only be efficiently reflected by the IRS once. Moreover, the long distance that geographically separates the clusters justifies the assumption that the IRS in one cluster does not affect the other clusters. With this assumption, in each cluster, successive interference cancellation (SIC) is performed only at the central user to eliminate the interference from its intra-cluster edge user, while the cell edge user decodes its data directly. Hence, the SINR of $EU_k$ is given by
\begin{align}
\mathrm{SINR}_{k,e}=\frac{|\mathbf{h}_{k,e}^H\mathbf{\Theta}_k\mathbf{G}_k\mathbf{w}_k|^2(1-\alpha_k)}{|\mathbf{h}_{k,e}^H\mathbf{\Theta}_k\mathbf{G}_k\mathbf{w}_k|^2\alpha_k+\sum\limits_{\substack{i=1 \\ i\neq k}}^K|\mathbf{h}_{k,e}^H\mathbf{\Theta}_k\mathbf{G}_k\mathbf{w}_i|^2+\sigma^2}, \label{eq3}
\end{align}
where $|\mathbf{h}_{k,e}^H\mathbf{\Theta}_k\mathbf{G}_k\mathbf{w}_k|^2\alpha_k$ is the intra-cluster interference and $\sum_{\substack{i=1 \\ i\neq k}}^K|\mathbf{h}_{k,e}^H\mathbf{\Theta}_k\mathbf{G}_k\mathbf{w}_i|^2$ is the inter-cluster interference. The central users need to apply SIC, i.e., to decode $s_{k,e}$ first and then remove it. Thus, the SINR of signal $s_{k,e}$ observed at $CU_k$ can be expressed as follows
\begin{align}
\mathrm{SINR}_{k,c\to e}=\frac{|\mathbf{h}_{k,c}^H\mathbf{w}_k|^2(1-\alpha_k)}{|\mathbf{h}_{k,c}^H\mathbf{w}_k|^2\alpha_k+\sum\limits_{\substack{i=1 \\ i\neq k}}^K|\mathbf{h}_{k,c}^H\mathbf{w}_i|^2+\sigma^2}.
\end{align}
After SIC, the SINR of $CU_k$ when decoding its own signal is given by
\begin{align}
\mathrm{SINR}_{k,c}=\frac{|\mathbf{h}_{k,c}^H\mathbf{w}_k|^2\alpha_k}{\sum\limits_{\substack{i=1 \\ i\neq k}}^K|\mathbf{h}_{k,c}^H\mathbf{w}_i|^2+\sigma^2}.
\end{align}
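As a numerical sanity check of the model above, the three SINR expressions can be evaluated directly. The sketch below uses randomly drawn placeholder channels; the sizes $K$, $M$, $N$, the noise power, the beamformers and the power allocation coefficients are all illustrative assumptions, not outputs of any optimization:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, N = 2, 4, 8        # clusters, BS antennas, IRS elements (illustrative sizes)
sigma2 = 1e-3

def cn(*shape):
    """Draw i.i.d. CN(0, 1) entries as a placeholder channel."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h_c = [cn(M) for _ in range(K)]            # h_{k,c}: BS -> central user
G = [cn(N, M) for _ in range(K)]           # G_k: BS -> IRS_k
h_e = [cn(N) for _ in range(K)]            # h_{k,e}: IRS_k -> edge user
Theta = [np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, N))) for _ in range(K)]  # beta = 1
w = [cn(M) for _ in range(K)]              # arbitrary (unoptimized) beamformers
alpha = rng.uniform(0.05, 0.4, K)          # power allocation coefficients

def sinr_edge(k):
    """SINR of EU_k, eq. (3): intra- plus inter-cluster interference."""
    z = h_e[k].conj() @ Theta[k] @ G[k]    # effective channel z_{k,e}^H
    own = abs(z @ w[k]) ** 2
    inter = sum(abs(z @ w[i]) ** 2 for i in range(K) if i != k)
    return own * (1 - alpha[k]) / (own * alpha[k] + inter + sigma2)

def sinr_central_to_edge(k):
    """SINR at CU_k when decoding s_{k,e} before SIC, eq. (4)."""
    own = abs(h_c[k].conj() @ w[k]) ** 2
    inter = sum(abs(h_c[k].conj() @ w[i]) ** 2 for i in range(K) if i != k)
    return own * (1 - alpha[k]) / (own * alpha[k] + inter + sigma2)

def sinr_central(k):
    """SINR of CU_k for its own signal after SIC, eq. (5)."""
    inter = sum(abs(h_c[k].conj() @ w[i]) ** 2 for i in range(K) if i != k)
    return abs(h_c[k].conj() @ w[k]) ** 2 * alpha[k] / (inter + sigma2)
```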
\subsection{Problem Formulation}
In this section, we formulate a transmit power minimization problem by jointly optimizing the beamforming vectors $\left(\mathbf{w}_k, k \in \{1,...,K\}\right)$, the power allocation coefficients $\left(\alpha_k, k \in \{1,...,K\}\right)$ and the phase shift matrices $\left(\mathbf{\Theta}_k, k \in \{1,...,K\}\right)$, subject to the quality of service (QoS) requirements and the constraints on the reflecting elements. The considered transmit power minimization problem can be formulated as
\begin{subequations}\label{Prob:0}
\begin{align}
&{\rm P}0:\min_{\boldsymbol{\alpha},\mathbf{w},\mathbf{\Theta}}\quad\sum\limits_{k=1}^K||\mathbf{w}_k||^2 \\
&~\mathrm{s.t.}~\log_2(1+\mathrm{SINR}_{k,c})\geq R_{k,c},\quad \forall k \label{P1b}\\
&\qquad \log_2(1+ \min (\mathrm{SINR}_{k,e},\mathrm{SINR}_{k,c\to e}))\geq R_{k,e},\forall k \label{P1c}\\
&\qquad 0\leq\theta_{i,n}\leq 2\pi,\quad \forall ~ i,n \label{P1d} \\
&\qquad |\mathbf{\Theta}_{i,n,n}|\leq 1,\quad \forall ~ i,n \label{P1e}
\end{align}
\end{subequations}
where $||\mathbf{w}_k||^2$ is the transmit power allocated to cluster $k$, and $R_{k,c}$ and $R_{k,e}$ denote the required minimum data rates of $CU_k$ and $EU_k$, respectively. Constraints (\ref{P1b}) and (\ref{P1c}) indicate the QoS requirements of the central users and the cell edge users, (\ref{P1d}) defines the phase shift range of the reflecting elements, and \eqref{P1e} ensures that the IRS is a passive component. \par
However, problem ${\rm P}0$ is highly intractable due to the non-convex constraints (\ref{P1b}) and (\ref{P1c}). The non-convexity is caused by the three highly coupled variables (i.e., $\mathbf{w}$, $\boldsymbol{\alpha}$ and $\mathbf{\Theta}$). To solve this problem efficiently, we combine SCA, semidefinite relaxation (SDR) and an inequality-based approximation to develop an alternating algorithm.
\section{ Optimization Solution}
As discussed in the previous section, it is difficult to find the optimal solution of P0 due to its non-convexity. In this section, an alternating optimization algorithm is proposed to solve P0 efficiently. The main idea of this algorithm is to divide the primal problem into two subproblems and solve them alternately. In particular, P0 is divided into a beamforming optimization subproblem and a feasible phase shift search subproblem. As shown later, each of the two subproblems is non-convex, and we transform them into convex forms which can be solved efficiently by a convex solver, e.g., CVX in Matlab.
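The alternating structure can be illustrated on a toy problem. The sketch below minimizes a hypothetical strictly convex objective with a bilinear coupling term by alternating exact per-variable updates; the objective and its closed-form minimizers are illustrative assumptions standing in for the two subproblems solved by CVX:

```python
# Toy strictly convex objective with a coupling term (illustrative, not the
# paper's problem): f(x, y) = (x - 1)^2 + (y + 2)^2 + x*y.
def f(x, y):
    return (x - 1) ** 2 + (y + 2) ** 2 + x * y

x, y, vals = 0.0, 0.0, []
for _ in range(50):
    x = 1 - y / 2        # exact minimizer of f over x with y fixed
    y = -2 - x / 2       # exact minimizer of f over y with x fixed
    vals.append(f(x, y))
# The recorded objective is nonincreasing and converges to the joint
# minimizer (x, y) = (8/3, -10/3).
```

Since each block update never increases the objective and the toy objective is bounded below, the value sequence converges, which mirrors the convergence argument for the proposed algorithm.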
\subsection{Beamforming Optimization}
For a given phase shift matrix $\mathbf{\Theta}$, the concatenated channel response $\mathbf{h}_{k,e}^H\mathbf{\Theta}_k\mathbf{G}_k \in \mathbb{C}^{1 \times M}$ is fixed. Thus, the beamforming optimization problem can be formulated as
\begin{subequations}\label{Prob:1}
\begin{align}
{\rm P}1:&\min\limits_{\mathbf{\alpha,\mathbf{w}}} \quad \sum\limits_{k=1}^K ||\mathbf{w}_k||^2 \\
&{\rm s.t.} \quad\quad \log_2(1+{\rm SINR}_{k,c}) \geq R_{k,c}, \quad \forall k \label{P2a}\\
& \qquad \;\quad \log_2(1+{\rm SINR}_{k,e}) \geq R_{k,e},\quad \forall k \label{P2b}\\
& \qquad \;\quad \log_2(1+{\rm SINR}_{k,c \to e}) \geq R_{k,e},\quad \forall k \label{P2c} \\
& \qquad \;\quad 0 \leq \alpha_k \leq 1, \quad \forall k. \label{P2d}
\end{align}
\end{subequations}
P1 is non-convex because the beamforming vectors and the power allocation coefficients are still coupled in all constraints except \eqref{P2d}, which makes it challenging to solve. We notice that a rank-constrained semidefinite programming (SDP) problem can be approximated by a convex one. Therefore, we convert ${\rm P}1$ into an SDP form, to which SDR can then be applied. \par
First, we transform the constraint (\ref{P2b}) into a convex form. According to \eqref{eq3}, the constraint (\ref{P2b}) can be rewritten as follows:
\begin{align}
\frac{|e_k^H \mathbf{D}_{k,e} \mathbf{G}_k \mathbf{w}_k|^2(1-\alpha_k)}{|e_k^H \mathbf{D}_{k,e} \mathbf{G}_k \mathbf{w}_k|^2\alpha_k+\sum\limits_{\substack{i=1 \\ i\neq k}}^K|e_k^H \mathbf{D}_{k,e} \mathbf{G}_k \mathbf{w}_i|^2 + \sigma^2} \geq r_{k,e}, \label{eq8}
\end{align}
where $r_{k,e} = 2^{R_{k,e}} - 1$, $e_k$ is an $N \times 1$ vector containing all the diagonal elements of $\mathbf{\Theta}_k^H$, and $\mathbf{D}_{k,e}$ is a diagonal matrix, whose main diagonal elements are from the channel vector $\mathbf{h}_{k,e}^H$. After some algebraic transformations, (\ref{eq8}) can be equivalently expressed as follows:
\begin{equation} \label{eq9}
\begin{split}
&|e_k^H \mathbf{D}_{k,e} \mathbf{G}_k \mathbf{w}_k|^2(1+r_{k,e}) \alpha_k \leq \\
&|e_k^H \mathbf{D}_{k,e} \mathbf{G}_k \mathbf{w}_k|^2 - \sum\limits_{\substack{i=1 \\ i\neq k}}^K|e_k^H \mathbf{D}_{k,e} \mathbf{G}_k \mathbf{w}_i|^2r_{k,e} - \sigma^2r_{k,e}.
\end{split}
\end{equation}
Since the CSI is perfectly known at the BS, the channel $e_k^H \mathbf{D}_{k,e} \mathbf{G}_k$ is fixed for a given phase shift matrix. For notational simplicity, we replace $e_k^H \mathbf{D}_{k,e} \mathbf{G}_k$ with $\mathbf{z}_{k,e}^H$ and rewrite (\ref{eq9}) as follows:
\begin{equation}
\alpha_k|\mathbf{z}_{k,e}^H \mathbf{w}_k|^2 \leq \frac{|\mathbf{z}_{k,e}^H \mathbf{w}_k|^2}{1+r_{k,e}} - (\sum\limits_{\substack{i=1 \\ i\neq k}}^K |\mathbf{z}_{k,e}^H \mathbf{w}_i|^2 + \sigma^2) \frac{r_{k,e}}{1+r_{k,e}}, \label{eq10}
\end{equation}
where $\mathbf{z}_{k,e}^H = e_k^H \mathbf{D}_{k,e} \mathbf{G}_k$. Note that the beamforming vector in (\ref{eq10}) appears only through quadratic terms of the form $\mathbf{w}_k \mathbf{w}_k^H$. Inspired by SDR, we introduce a slack matrix $\mathbf{W}_k = \mathbf{w}_k \mathbf{w}_k^H$, which is a rank-one positive semidefinite (PSD) matrix. Then the constraint (\ref{eq10}) can be equivalently rewritten as follows:
\begin{align}
&\alpha_k {\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k) \leq \notag \\
&\frac{{\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_k)}{1+r_{k,e}} - (\sum\limits_{\substack{i=1 \\ i\neq k}}^K {\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_i) + \sigma^2) \frac{r_{k,e}}{1+r_{k,e}} \label{eq11} \\
&\mathbf{W}_k \succcurlyeq 0 \label{eq12} \\
&\rm{Rank}(\mathbf{W}_k) = 1, \label{eq13}
\end{align}
where $\mathbf{Z}_{k,e} = \mathbf{z}_{k,e} \mathbf{z}_{k,e}^H$. From (\ref{eq11}), we notice that the right hand side of (\ref{eq11}) is a linear combination of terms that are affine in $\mathbf{W}_k$, and hence convex. The only obstacle is the left hand side, which is a bilinear term formed by $\alpha_k$ and ${\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k)$. To make this constraint a convex set, we need to transform the non-convex function $\alpha_k {\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k)$ into a convex form. Inspired by the inequality of arithmetic and geometric means, the bilinear term can be upper bounded by the convex function $\frac{1}{2} (\alpha_k^2 + {\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_k)^2)$. To tighten this upper bound in each iteration of the proposed iterative algorithm, we introduce a fixed point $c_k$; then we have
\begin{align}
2\alpha_k {\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k) \leq (\alpha_k c_k)^2 + \left(\frac{{\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k)}{c_k}\right)^2. \label{eq14}
\end{align}
This fixed point is updated iteratively, as stated in the following lemma.
\begin{lem} \label{lemma1}
The fixed point $c_k$ at the $m$-th iteration can be updated by:
\begin{align}
c_k^{(m)} = \sqrt{\frac{{\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_k^{(m-1)})}{\alpha_k^{(m-1)}}}. \label{eq15}
\end{align}
\end{lem}
\begin{proof}
We define the difference function of the original function $2\alpha_k {\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k)$ and its approximated upper bound as
\begin{align}
\mathcal{F}(c_k) = 2\alpha_k {\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k) - (\alpha_k c_k)^2 - \left(\frac{{\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k)}{c_k}\right)^2. \label{eq16}
\end{align}
When the function \eqref{eq16} equals 0, both sides of \eqref{eq14} are equal, i.e., the upper bound is tight. From \eqref{eq14}, we notice that the maximum value of the function $\mathcal{F}(c_k)$ is 0. Since
\begin{align}
\frac{\partial^2\mathcal{F}(c_k)}{\partial c_k^2} = -2 \alpha_k - \frac{6 {\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k)}{c_k^4} \leq 0,
\end{align}
when $\alpha_k \geq 0$ and ${\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k) \geq 0$, the function $\mathcal{F}(c_k)$ is concave with respect to $c_k$. The optimal value of $c_k$, denoted by $c_k^*$, can be obtained by setting $\frac{\partial\mathcal{F}(c_k)}{\partial c_k} = 0$, which yields
\begin{align}
c_k^{*} = \sqrt{\frac{{\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_k)}{\alpha_k}}. \label{up_c}
\end{align}
\end{proof}
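Both the bound \eqref{eq14} and the update rule \eqref{up_c} can be verified numerically. In the sketch below, $a$ and $b$ play the roles of $\alpha_k$ and ${\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_k)$, and the sampled values are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.uniform(0.1, 1.0, 1000)   # stands in for alpha_k
b = rng.uniform(0.1, 5.0, 1000)   # stands in for Tr(Z_{k,e} W_k)
c = rng.uniform(0.1, 3.0, 1000)   # an arbitrary fixed point c_k

def gap(a, b, c):
    # (a c)^2 + (b / c)^2 - 2 a b: the slack of the bound (14)
    return (a * c) ** 2 + (b / c) ** 2 - 2 * a * b

gaps = gap(a, b, c)               # nonnegative for any c > 0 (AM-GM inequality)
c_star = np.sqrt(b / a)           # the update rule of Lemma 1
tight = gap(a, b, c_star)         # zero: the bound holds with equality
```

At $c^* = \sqrt{b/a}$ both squared terms equal $ab$, so the slack vanishes, which is exactly the tightness claimed in the lemma.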
Hence, the constraint \eqref{P2b} can be approximated as follows:
\begin{equation} \label{eq19}
\begin{split}
&(\alpha_k c_k)^2 + \left(\frac{{\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k)}{c_k}\right)^2 \leq \\
&2\frac{{\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_k)}{1+r_{k,e}} - 2(\sum\limits_{\substack{i=1 \\ i\neq k}}^K {\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_i) + \sigma^2) \frac{r_{k,e}}{1+r_{k,e}}. \\
\end{split}
\end{equation}
It is noted that the left hand side of \eqref{eq19} is convex and the right hand side of \eqref{eq19} is an affine function, which means that \eqref{eq19} is a convex constraint. \par
To handle the next non-convex constraint \eqref{P2c}, after some algebraic manipulations we can rewrite \eqref{P2c} as follows:
\begin{align}
\alpha_k|\mathbf{h}_{k,c}^H \mathbf{w}_k|^2 \leq \frac{|\mathbf{h}_{k,c}^H \mathbf{w}_k|^2}{1+r_{k,e}} - (\sum\limits_{\substack{i=1 \\ i\neq k}}^K |\mathbf{h}_{k,c}^H \mathbf{w}_i|^2 + \sigma^2) \frac{r_{k,e}}{1+r_{k,e}}. \label{eq20}
\end{align}
It is worth pointing out that \eqref{eq20} has the same form as \eqref{eq10}. Hence, the method applied to \eqref{eq10} can also be applied to \eqref{eq20} to yield a convex form. Therefore, \eqref{eq20} can eventually be transformed into
\begin{equation} \label{eq21}
\begin{split}
&(\alpha_k d_k)^2 + \left(\frac{{\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k)}{d_k}\right)^2 \leq \\
&2\frac{{\rm Tr}(\mathbf{H}_{k,c}\mathbf{W}_k)}{1+r_{k,e}} - 2(\sum\limits_{\substack{i=1 \\ i\neq k}}^K {\rm Tr}(\mathbf{H}_{k,c}\mathbf{W}_i) + \sigma^2) \frac{r_{k,e}}{1+r_{k,e}}, \\
\end{split}
\end{equation}
where $\mathbf{H}_{k,c} = \mathbf{h}_{k,c} \mathbf{h}_{k,c}^H$, and $d_k$ is a fixed point. At the $m$-th iteration, $d_k$ can be updated as follows:
\begin{align}
d_k^{(m)} = \sqrt{\frac{{\rm Tr}(\mathbf{H}_{k,c}\mathbf{W}_k^{(m-1)})}{\alpha_k^{(m-1)}}}. \label{up_d}
\end{align} \par
Now, we focus on the last non-convex constraint \eqref{P2a}. First, we also rewrite it as follows:
\begin{align}
\alpha_k {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k) \geq \sum\limits_{\substack{i=1 \\ i \neq k}}^K {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_i) r_{k,c} + \sigma^2 r_{k,c} \label{eq23}
\end{align}
where $r_{k,c} = 2^{R_{k,c}} - 1$. Although \eqref{eq23} also contains the bilinear term $\alpha_k {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k)$, we cannot straightforwardly apply the method used for constraints \eqref{P2b} and \eqref{P2c}. Even if we replaced $\alpha_k {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k)$ with the sum of two square terms through the inequality of arithmetic and geometric means, these terms would be located on the left hand side of the $\geq$ sign, so the transformed constraint would still be non-convex. Hence, we propose another method to deal with this constraint. First, we introduce a slack variable $t_k$, and \eqref{P2a} can be equivalently transformed into
\begin{align}
\alpha_k {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k) \geq t_k^2 \label{eq24}
\end{align}
\begin{align}
t_k^2 \geq \sum\limits_{\substack{i=1 \\ i \neq k}}^K {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_i)r_{k,c} + \sigma^2r_{k,c}. \label{eq25}
\end{align}
It can be straightforwardly shown that neither \eqref{eq24} nor \eqref{eq25} is convex. According to convex optimization theory \cite{boyd2004convex}, a symmetric block matrix is PSD if and only if its Schur complement (with respect to a positive diagonal block) is nonnegative, and a PSD constraint is convex. After a simple transformation, \eqref{eq24} can be rewritten as follows:
\begin{align}
\alpha_k - \frac{t_k^2}{{\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k)} \geq 0, \label{eq26}
\end{align}
which is equivalent to
\begin{align}
\begin{bmatrix}
\alpha_k & t_k \\
t_k & {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k)
\end{bmatrix} \succcurlyeq 0. \label{eq27}
\end{align}
Constraints \eqref{eq26} and \eqref{eq27} are equivalent, and constraint \eqref{eq27} is convex. We now deal with constraint \eqref{eq25}. Since $t_k^2$ is on the left hand side of the $\geq$ sign, the constraint is a non-convex set. To address this, we adopt SCA, where the first-order Taylor series expansion is used to lower bound the quadratic term, so that \eqref{eq25} is approximated by
\begin{align}
t_{k,0}^2 + 2t_{k,0}(t_k - t_{k,0}) \geq \sum\limits_{\substack{i=1 \\ i \neq k}}^K {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_i)r_{k,c} + \sigma^2r_{k,c}
\end{align}
where $t_{k,0}$ is a fixed point. By applying SCA, we update $t_{k,0}$ at the $m$-th iteration by
\begin{align}
t_{k,0}^{(m)} = t_k^{(m-1)}. \label{up_t}
\end{align}
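The two transformations above, the Schur complement test \eqref{eq27} for \eqref{eq24} and the first-order Taylor lower bound for \eqref{eq25}, can be checked numerically on random scalars; the sampled values below are arbitrary placeholders for $\alpha_k$, ${\rm Tr}(\mathbf{H}_{k,c}\mathbf{W}_k)$ and $t_k$:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.uniform(0.01, 1.0, 500)   # stands in for alpha_k
b = rng.uniform(0.01, 5.0, 500)   # stands in for Tr(H_{k,c} W_k)
t = rng.uniform(0.0, 3.0, 500)    # stands in for t_k

# Constraint (24) as a scalar inequality ...
scalar_ok = a * b >= t ** 2
# ... and via the minimum eigenvalue of the 2x2 matrix in (27)
min_eigs = np.array([np.linalg.eigvalsh([[ai, ti], [ti, bi]]).min()
                     for ai, bi, ti in zip(a, b, t)])
psd_ok = min_eigs >= -1e-9        # PSD up to numerical tolerance

# First-order Taylor lower bound of t^2 around a fixed point t0 (SCA step)
t0 = 1.3
taylor_gap = t ** 2 - (t0 ** 2 + 2 * t0 * (t - t0))   # equals (t - t0)^2 >= 0
```

The agreement of the two tests illustrates the claimed equivalence, and the nonnegative Taylor gap confirms that the SCA surrogate is a conservative (inner) approximation of \eqref{eq25}.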
The final obstacle arises from the rank-one constraint \eqref{eq13}. By applying SDR, the rank-one constraint is dropped to make the whole problem tractable. Thus, we eventually transform P1 into
\begin{subequations}\label{Prob:2}
\begin{align}
&{\rm P2}:\min\limits_{\boldsymbol{\alpha},\mathbf{W}, \mathbf{t}} \quad \sum\limits_{k=1}^K {\rm Tr}(\mathbf{W}_k) \\
&{\rm s.t.} \quad (\alpha_k c_k)^2 + \left(\frac{{\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k)}{c_k}\right)^2 \leq \notag\\
&\quad 2\frac{{\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_k)}{1+r_{k,e}} - 2(\sum\limits_{\substack{i=1 \\ i\neq k}}^K {\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_i) + \sigma^2) \frac{r_{k,e}}{1+r_{k,e}}, \forall k\label{P3a}\\
& \qquad (\alpha_k d_k)^2 + \left(\frac{{\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k)}{d_k}\right)^2 \leq \notag \\
&\quad 2\frac{{\rm Tr}(\mathbf{H}_{k,c}\mathbf{W}_k)}{1+r_{k,e}} - 2(\sum\limits_{\substack{i=1 \\ i\neq k}}^K {\rm Tr}(\mathbf{H}_{k,c}\mathbf{W}_i) + \sigma^2) \frac{r_{k,e}}{1+r_{k,e}}, \forall k\label{P3b}\\
& \quad \begin{bmatrix}
\alpha_k & t_k \\
t_k & {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k)
\end{bmatrix} \succcurlyeq 0, \forall k\label{P3c}\\
& \quad t_{k,0}^2 + 2t_{k,0}(t_k - t_{k,0}) \geq \sum\limits_{\substack{i=1 \\ i \neq k}}^K {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_i)r_{k,c} + \sigma^2r_{k,c},\forall k \label{P3d} \\
& \quad 0 \leq \alpha_k \leq 1, \forall k. \label{P3e}
\end{align}
\end{subequations}
\begin{algorithm}[tp]
\caption{Initial Point Search Algorithm }\label{Alg1}
\begin{algorithmic}[1]
\STATE {{\bf Initialize:} $c_k^{(0)}$, $d_k^{(0)}$, $t_{k,0}^{(0)} \quad \forall k$, $\epsilon = 0.00001$, $i=0$, $q^{(0)} = 100$.}
\WHILE {$q^{(i)} > \epsilon$}
\STATE {$i = i+1.$}
\STATE {Update $\mathbf{W}_k^{(i)}$, $\alpha_k^{(i)}$ and $q^{(i)}$ with fixed $c_k^{(i-1)}$, $d_k^{(i-1)}$, $t_{k,0}^{(i-1)}$ by solving P3.}
\STATE {Update $c_k^{(i)}$, $d_k^{(i)}$, and $t_{k,0}^{(i)}$ based on \eqref{up_c}, \eqref{up_d} and \eqref{up_t} respectively.}
\ENDWHILE
\STATE {{\bf Output} $c_k^{(i)}$, $d_k^{(i)}$, and $t_{k,0}^{(i)}.$}
\end{algorithmic}\label{Al1}
\end{algorithm}
Since the rank-one restriction is removed, P2 is a convex problem and can be solved efficiently by convex optimization toolboxes, for instance, CVX. However, the optimal solution of P2 may not be the optimal solution of P1 unless the rank of $\mathbf{W}_k^*, \forall k$, is one. We define the optimal solution of P2 as $\mathbf{W}_k^*, \forall k$, where each $\mathbf{W}_k^*$ is a PSD matrix. If $\mathbf{W}_k^*$ is rank-one, recalling that $\mathbf{W}_k^* = \mathbf{w}_k^* \mathbf{w}_k^{*H}$, the optimal beamforming vector can be obtained by eigenvalue decomposition; otherwise, we use Gaussian randomization \cite{luo2010semidefinite} to obtain a suboptimal solution. \par
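A minimal sketch of the Gaussian randomization step is given below. The relaxed solution is replaced by a synthetic PSD matrix of rank larger than one, and the selection metric is a toy stand-in for evaluating the constraints of P1 and the transmit power:

```python
import numpy as np

rng = np.random.default_rng(3)
M = 4

# Synthetic PSD matrix of rank > 1, standing in for a relaxed solution W_k^*.
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
W = A @ A.conj().T
W /= np.trace(W).real

# Matrix square root via eigendecomposition, so that sqrtW @ xi ~ CN(0, W).
eigval, eigvec = np.linalg.eigh(W)
sqrtW = eigvec @ np.diag(np.sqrt(np.clip(eigval, 0, None))) @ eigvec.conj().T

z = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # a toy channel direction
best, best_val = None, -np.inf
for _ in range(200):
    xi = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    cand = sqrtW @ xi
    cand /= np.linalg.norm(cand)          # normalize so candidates are comparable
    val = abs(z.conj() @ cand) ** 2       # toy selection metric |z^H w|^2
    if val > best_val:
        best, best_val = cand, val
```

In the actual algorithm, each candidate would instead be scaled to satisfy the QoS constraints and the one with the smallest transmit power would be kept.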
Before solving P2, we need to initialize the three fixed points $c_k$, $d_k$ and $t_{k,0}, \forall k$. Note that initializing them randomly may make the formulated problem infeasible. Hence, we propose a feasible initial point search algorithm to find fixed points that make P2 solvable. From P2, we notice that the fixed points $c_k$, $d_k$ and $t_{k,0}$ must satisfy the constraints \eqref{P3a}, \eqref{P3b} and \eqref{P3d}. To address this problem, we introduce an auxiliary variable $q$, which intentionally relaxes the constraints to enlarge the feasible set. Then, we can formulate the initial point search problem as
\begin{subequations}\label{Prob:3}
\begin{align}
&{\rm P}3:\min\limits_{\boldsymbol{\alpha},\mathbf{W},\mathbf{t},q} \quad q \\
&{\rm s.t.} \quad \eqref{P3a} \; \eqref{P3b} \; \eqref{P3c} \; \eqref{P3d} \; \eqref{P3e} \\
& \quad \; \quad q \geq 0. \label{P4f}
\end{align}
\end{subequations}
Specifically, when $q$ equals 0, the relaxed constraints in P3 coincide with the constraints in P2, and the obtained values of $c_k$, $d_k$ and $t_{k,0}$ can serve as the initial points of P2, which guarantees feasibility. The objective function is affine and all constraints are convex, so the problem can be solved easily by CVX. To solve P3 efficiently, we design the iterative procedure shown in Algorithm \ref{Alg1}. It is worth pointing out that, unlike for P2, the initial points $c_k^{(0)}$, $d_k^{(0)}$ and $t_{k,0}^{(0)}$ in P3 can be generated randomly because the feasibility of P3 can always be guaranteed. \par
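The role of the auxiliary variable $q$ can be illustrated on a toy feasibility problem: finding $x$ with $a_i x \leq b_i$ by minimizing the largest constraint violation. A nonpositive optimal $q$ certifies feasibility, mirroring the stopping rule of Algorithm \ref{Alg1}; the data below are arbitrary:

```python
import numpy as np

# Toy stand-in for P3: find x with a_i * x <= b_i by minimizing the largest
# violation q(x) = max_i (a_i * x - b_i); an optimum q* <= 0 proves feasibility.
a = np.array([1.0, -1.0, 2.0])
b = np.array([3.0, 1.0, 5.0])     # feasible region here: -1 <= x <= 2.5

xs = np.linspace(-10, 10, 20001)
viol = np.max(a[:, None] * xs[None, :] - b[:, None], axis=0)
i = np.argmin(viol)
x_feas, q_star = xs[i], viol[i]   # q_star < 0, so x_feas satisfies all constraints
```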
\begin{algorithm}[tp]
\caption{The Beamforming Optimization Algorithm }\label{Alg2}
\begin{algorithmic}[1]
\STATE {{\bf Initialize:} fixed feasible points \{$c_k^{*(0)}$, $d_k^{*(0)}$, $t_{k,0}^{*(0)}$\}, $\forall k $, $\epsilon = 0.001$, $m=0$.}
\WHILE {$\sum\limits_{k=1}^K {\rm Tr}(\mathbf{W}_k^{(m)})-\sum\limits_{k=1}^K {\rm Tr}(\mathbf{W}_k^{(m+1)}) \geq \epsilon$}
\STATE {Update beamforming matrix \{$\mathbf{W}_k^{(m)}, \alpha_k^{(m)}$\}, $\forall k $ by solving P2 with the fixed feasible point\{$c_k^{*(m)}$, $d_k^{*(m)}$, $t_{k,0}^{*(m)} $\}, $\forall k$}.
\STATE {Update \{$c_k^{*(m)}$, $d_k^{*(m)}$, $t_{k,0}^{*(m)}$\}, $\forall k $ based on \eqref{eq15}, \eqref{up_d} and \eqref{up_t} respectively.}
\STATE {$m=m+1$.}
\ENDWHILE
\STATE {Update $\alpha_k^* = \alpha_k^{(m)}, \forall k $}
\STATE {Update beamforming vector $\mathbf{w}_k^{*}, \forall k$ by decomposing $\mathbf{W}_k^{(m)}, \forall k$} based on Gaussian Randomization method.
\STATE {{\bf Output} \{$\mathbf{w}_k^{*}, \alpha_k^*$\}, $\forall k$}
\end{algorithmic}\label{Al1}
\end{algorithm}
\begin{algorithm}[tp]
\caption{The Proposed Alternating Algorithm}\label{Alg3}
\begin{algorithmic}[1]
\STATE {{\bf Initialize:} $\epsilon = 0.001, j=0$.}
\WHILE { $\sum\limits_{k=1}^K {\rm Tr}(\mathbf{W}_k^{*(j-1)}) - \sum\limits_{k=1}^K {\rm Tr}(\mathbf{W}_k^{*(j)}) \geq \epsilon$ }
\STATE {Searching initial fixed feasible point \{$c_k^{*(j)}$, $d_k^{*(j)}$, $t_{k,0}^{*(j)}$\}, $\forall k $} based on Algorithm 1.
\STATE {Update \{$\mathbf{W}_k^{*(j)}$,$\mathbf{w}_k^{*(j)}$, $\alpha_k^{*(j)}$\}, $\forall k$ based on Algorithm 2.}
\STATE {Update $\mathbf{V}_k^{*(j)}, \forall k$ by solving P5 with \{$\mathbf{w}_k^{*(j)}$, $\alpha_k^{*(j)}$\}, $\forall k$}
\STATE {Update phase shift vector $\mathbf{e}_k^{*(j)}, \forall k$ by decomposing $\mathbf{V}_k^{*(j)}, \forall k$ based on Gaussian Randomization method.}
\STATE {$j = j + 1$}
\ENDWHILE
\STATE {{\bf Output} \{$\mathbf{w}_k^{*(j)}$, $\alpha_k^{*(j)}$, $\mathbf{e}_k^{*(j)}$\}, $\forall k$.}
\end{algorithmic}\label{Al1}
\end{algorithm}
After determining the fixed points, the last obstacle to solving the beamforming optimization problem is removed. To solve the problem efficiently, we design an iterative procedure to solve P2, whose details are shown in Algorithm 2. Specifically, the initial fixed points \{$c_k^{*(0)}$, $d_k^{*(0)}$, $t_{k,0}^{*(0)}$\}, $\forall k$, are obtained from Algorithm 1.
\subsection{Phase Shifting Optimization}
In this section, we focus on the phase shift optimization. Since the objective function of the primal problem does not contain the phase shift parameters $\mathbf{\Theta}_k, \forall k$, the phase shift optimization reduces to a feasibility problem. Only the constraints \eqref{P1c}, \eqref{P1d} and \eqref{P1e} of the primal problem involve the phase shifts, and \eqref{P1c} can be equivalently divided into \eqref{P2b} and \eqref{P2c}, of which only \eqref{P2b} contains the phase shifts. Therefore, given the beamforming vectors, the phase shift feasibility problem can be written as follows:
\begin{subequations}\label{Prob:4}
\begin{align}
&{\rm P}4:{\rm find} \quad \mathbf{\Theta} \\
&{\rm s.t.} \quad \log_2(1+{\rm SINR}_{k,e}) \geq R_{k,e}, \forall k \label{P5a}\\
& \qquad \;\; 0 \leq \theta_{i,n} \leq 2\pi, \forall i,n \label{P5b}\\
& \qquad \; |\mathbf{\Theta}_{i,n,n}| = 1, \forall i,n . \label{P5c}
\end{align}
\end{subequations}
Note that P4 is non-convex, and the non-convexity arises from constraint \eqref{P5a}, which we need to transform into a convex one. We can rewrite \eqref{P5a} as follows:
\begin{align}
&|\mathbf{h}_{k,e}^H \Gamma_{\mathbf{p}_k}\mathbf{e}_k|^2(1+r_{k,e})\alpha_k \leq \notag \\
&|\mathbf{h}_{k,e}^H \Gamma_{\mathbf{p}_k}\mathbf{e}_k|^2 - \sum\limits_{\substack{i=1 \\ i \neq k}}^K |\mathbf{h}_{k,e}^H \Gamma_{\mathbf{p}_i}\mathbf{e}_k|^2 r_{k,e} - \sigma^2 r_{k,e}, \label{theta1}
\end{align}
where $\Gamma_{\mathbf{p}_i}$ is a diagonal matrix whose main diagonal contains the elements of $\mathbf{p}_i = \mathbf{G}_k \mathbf{w}_i$ and $\mathbf{e}_k$ is the phase shifting vector. However, with ${\mathbf{W}_k}, \alpha_k, \forall k$ already obtained from the beamforming optimization problem, constraint \eqref{theta1} is a quadratic form with respect to $\mathbf{e}_k$. For notational simplicity, we substitute $\mathbf{h}_{k,e}^H \Gamma_{\mathbf{p}_i}$ with $\mathbf{r}_{k,e}^{iH}$. From \cite{luo2010semidefinite}, we know that a quadratic form can be equivalently transformed into a linear form with a rank-one constraint. Thus, \eqref{theta1} can be expressed as follows:
\begin{align}
&{\rm Tr}(\mathbf{R}_{k,e}^k \mathbf{V}_k)(1+r_{k,e})\alpha_k \leq \notag \\
&{\rm Tr}(\mathbf{R}_{k,e}^k \mathbf{V}_k)- \sum\limits_{\substack{i=1 \\ i \neq k}}^K {\rm Tr}(\mathbf{R}_{k,e}^i \mathbf{V}_i)- \sigma^2 r_{k,e} \label{theta2}\\
&\mathbf{V}_k \succcurlyeq 0\label{theta3}\\
&{\rm Rank}(\mathbf{V}_k) = 1, \label{thetha4}
\end{align}
where $\mathbf{R}_{k,e}^i = \mathbf{r}_{k,e}^i \mathbf{r}_{k,e}^{iH}$ and $\mathbf{V}_i = \mathbf{e}_i \mathbf{e}_i^H$. Given $\mathbf{w}_k, \alpha_k, \forall k$, \eqref{theta2} is an affine constraint. The rank-one constraint \eqref{thetha4} makes the problem intractable, so we first adopt SDR to remove it. Then P4 can be transformed as follows:
\begin{subequations}\label{Prob:5}
\begin{align}
&{\rm P}5:{\rm find} \quad \mathbf{V}_k, \quad \forall k \\
&{\rm s.t.} \qquad \; \eqref{theta2}, \quad \forall k \label{P6a} \\
& \qquad \; \mathbf{V}_k \succcurlyeq 0, \quad \forall k \label{P6b}\\
& \qquad \; \mathbf{V}_{k,n,n} = 1, \quad \forall k,n. \label{P6c}
\end{align}
\end{subequations}
P5 is a convex problem, which can be solved efficiently by CVX. Since the rank-one constraint has been removed, the optimal solution of P5 may not be the optimal solution of P4. Therefore, Gaussian randomization is applied to obtain a sub-optimal solution for P4.
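As an illustrative sketch (not the authors' implementation), the lifting $\mathbf{V}_k = \mathbf{e}_k\mathbf{e}_k^H$ and the Gaussian randomization step can be reproduced in a few lines of numpy; the dimension and the number of draws below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Lifting check: |r^H e|^2 == Tr(R V) with R = r r^H and V = e e^H.
r = rng.standard_normal(n) + 1j * rng.standard_normal(n)
e = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))   # unit-modulus phase entries
R = np.outer(r, r.conj())
V = np.outer(e, e.conj())
assert np.isclose(np.abs(np.vdot(r, e)) ** 2, np.trace(R @ V).real)

def gaussian_randomization(V, num_draws=100, rng=rng):
    """Recover unit-modulus candidates from a (possibly rank > 1) SDR solution.

    Draws xi ~ CN(0, V) and projects each entry onto the unit circle so that
    the unit-modulus constraint |e_n| = 1 holds; a full implementation would
    keep the candidate that also satisfies the remaining SINR constraints.
    """
    w, U = np.linalg.eigh(V)                         # V is Hermitian PSD
    L = U * np.sqrt(np.clip(w, 0.0, None))
    out = []
    for _ in range(num_draws):
        z = (rng.standard_normal(V.shape[0])
             + 1j * rng.standard_normal(V.shape[0])) / np.sqrt(2.0)
        out.append(np.exp(1j * np.angle(L @ z)))     # unit-modulus projection
    return out

candidates = gaussian_randomization(V)
```

Every returned candidate satisfies the unit-modulus constraint by construction, so only the SINR feasibility needs to be re-checked after randomization.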
\subsection{Algorithm Design}
The details of the proposed alternating algorithm are illustrated in Algorithm 3, where P2 and P5 are solved alternately until the convergence metric is satisfied. At the $i$-th iteration of Algorithm 3, the initial points are first obtained by Algorithm 1. The algorithm then solves the beamforming optimization problem by solving P2 through Algorithm 2. Next, it solves the phase shifting feasibility problem by solving P5 (steps 5 and 6) to obtain a feasible phase shifting vector $\mathbf{e}_k^{*(i)}, \forall k$, which is used as the given phase shift for the beamforming optimization in the next iteration. It is worth pointing out that after each iteration, the channel state changes with the newly obtained phase shifting vector $\mathbf{e}_k, \forall k$, so before solving P2 in each iteration, we need to search for new feasible fixed points (step 3), which is necessary to guarantee that P2 is feasible at each iteration.
\subsection{Complexity Analysis}
From \cite{luo2010semidefinite}, we learn that the worst-case complexity of solving an SDR problem through CVX is
\begin{equation} \label{cvxcom}
\mathcal{O}({\rm max} \{m,n\}^4 n^{1/2} \log(1/\epsilon_{c})),
\end{equation}
where $n$ is the problem size, $m$ is the number of constraints, and $\epsilon_{c}$ is the solution accuracy adopted by CVX. Assuming that the problem size is greater than the number of constraints, the complexity of solving an SDR problem with CVX becomes
\begin{equation} \label{cvxcom2}
\mathcal{O}(n^{4.5} \log(1/\epsilon_{c})).
\end{equation}
Algorithm 1 essentially solves an SDR problem repeatedly until the required accuracy is reached. Thus, the complexity of Algorithm 1 is
\begin{equation}
\mathcal{O}\left(n_1^{4.5} \log\left(\frac{1}{\epsilon_{c}}\right) \log\left( \frac{1}{\epsilon_{1}}\right)\right),
\end{equation}
where $n_1$ is the problem size of P3 and $\epsilon_1$ is the accuracy of Algorithm 1. Algorithm 2 similarly solves an SDR problem multiple times, and P2 has the same size as P3. Thus, the complexity of Algorithm 2 can be expressed as follows:
\begin{equation}
\mathcal{O}\left(n_1^{4.5} \log\left(\frac{1}{\epsilon_{c}}\right) \log\left(\frac{1}{\epsilon_{2}}\right)\right),
\end{equation}
where $\epsilon_2$ is the accuracy of Algorithm 2. This yields the complexities of steps 3 and 4 in Algorithm 3. It remains to consider the complexity of step 5, in which a single SDR problem is solved, so its complexity is
\begin{equation}
\mathcal{O}(n_2^{4.5} \log(1/\epsilon_{c})),
\end{equation}
where $n_2$ is the problem size of P5. Finally, we have the complexity of the proposed algorithm as follows:
\begin{equation} \label{complprop}
\mathcal{O} (\mathcal{O}_1\log(1/\epsilon_3)),
\end{equation}
where
\begin{align}
\mathcal{O}_1 = \; &n_1^{4.5} \left(\log\left(\frac{1}{\epsilon_{c}}\right) \log\left(\frac{1}{\epsilon_{1}}\right) + \log\left(\frac{1}{\epsilon_{c}}\right) \log\left(\frac{1}{\epsilon_{2}}\right) \right) + \notag \\
&n_2^{4.5} \log(1/\epsilon_{c})\notag.
\end{align}
\section{Partial Exhaustive Search Algorithm}
In this section, we propose a simple algorithm based on partial exhaustive search, which significantly reduces the computational complexity. The main idea is to assume that all clusters use the same power allocation coefficient, whose optimal value is obtained by an exhaustive search over the range [0, 1]. The primal problem is again divided into the beamforming optimization problem and the phase shifting feasibility problem.
Since all clusters share the same power coefficient, this coefficient is fixed during each search step, so only the beamforming vectors and the phase shifting vectors need to be optimized in the two subproblems. We note that both subproblems reduce to QCQP problems, a classic form in convex optimization theory, for which SDR is one of the most widely used solution methods. The two subproblems are formulated as P6 and P7, obtained through standard SDR theory and some simple algebraic transformations; the derivation is omitted due to space limitations.
\begin{subequations}\label{Prob:6}
\begin{align}
&{\rm P6}:\quad \min\limits_{\mathbf{\mathbf{w}}} \quad \sum\limits_{k=1}^K {\rm Tr}(\mathbf{W}_k) \\
&{\rm s.t.} \; \alpha {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k) \geq \sum\limits_{\substack{i=1 \\ i \neq k}}^K {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_i) r_{k,c} + \sigma^2 r_{k,c}, \forall k \label{P7a} \\
& \alpha {\rm Tr}(\mathbf{Z}_{k,e} \mathbf{W}_k) \leq \notag \\
& \frac{{\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_k)}{1+r_{k,e}} - (\sum\limits_{\substack{i=1 \\ i\neq k}}^K {\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_i) + \sigma^2) \frac{r_{k,e}}{1+r_{k.e}} , \forall k \label{P7b} \\
& \alpha {\rm Tr}(\mathbf{H}_{k,c} \mathbf{W}_k) \leq \notag \\
& \frac{{\rm Tr}(\mathbf{Z}_{k,e}\mathbf{W}_k)}{1+r_{k,e}} - (\sum\limits_{\substack{i=1 \\ i\neq k}}^K {\rm Tr}(\mathbf{H}_{k,c}\mathbf{W}_i) + \sigma^2) \frac{r_{k,e}}{1+r_{k.e}}, \forall k \label{P7c} \\
& \qquad \qquad \qquad \mathbf{W}_k \succcurlyeq 0, \forall k \label{P7d}
\end{align}
\end{subequations}
where $\mathbf{Z}_{k,e}, \mathbf{W}_k, \forall k$ in P6 are the same as those in P2.
\begin{subequations}\label{Prob:7}
\begin{align}
&{\rm P7}:{\rm find} \quad \mathbf{V}_k, \quad \forall k \\
&{\rm s.t.} \quad {\rm Tr}(\mathbf{R}_{k,e}^k \mathbf{V}_k)(1+r_{k,e})\alpha \leq \notag \\
&\qquad \; {\rm Tr}(\mathbf{R}_{k,e}^k \mathbf{V}_k)- \sum\limits_{\substack{i=1 \\ i \neq k}}^K {\rm Tr}(\mathbf{R}_{k,e}^i \mathbf{V}_i)- \sigma^2 r_{k,e}, \forall k \label{P8a}\\
&\qquad \; \mathbf{V}_k \succcurlyeq 0, \quad \forall k \label{P8b}\\
& \qquad \; \mathbf{V}_{k,n,n} = 1, \quad \forall k,n \label{P8c}
\end{align}
\end{subequations}
where $\mathbf{R}_{k,e}^i, \forall i,k$ and $\mathbf{V}_k, \forall k$ are the same as those in P5. The details of the partial exhaustive search algorithm are illustrated in Algorithm \ref{Alg4}. \par
In each search iteration, the algorithm solves two SDR problems with sizes $n_1$ and $n_2$, i.e., the problem sizes of P6 and P7. Therefore, the complexity of Algorithm \ref{Alg4} can be expressed as follows:
\begin{equation}\label{complxqiong}
\mathcal{O}\left(I \left( n_1^{4.5} \log(1/\epsilon_{c}) + n_2^{4.5} \log(1/\epsilon_{c}) \right)\right).
\end{equation}
$I$ is the number of searches, which depends on the step size used for $\alpha$. Comparing \eqref{complprop} and \eqref{complxqiong}, we find that the partial exhaustive search algorithm has a lower complexity than the algorithm proposed in the previous section.
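To make the comparison concrete, the two complexity expressions can be evaluated for representative sizes. The following Python sketch implements \eqref{complprop} and \eqref{complxqiong} directly; the problem sizes and accuracies used below are hypothetical, for illustration only:

```python
import math

def alternating_cost(n1, n2, eps_c=1e-3, eps1=1e-3, eps2=1e-3, eps3=1e-3):
    """Order-of-magnitude cost of the proposed alternating algorithm,
    following O(O_1 * log(1/eps_3)) with O_1 as defined in the text."""
    o1 = (n1 ** 4.5 * (math.log(1 / eps_c) * math.log(1 / eps1)
                       + math.log(1 / eps_c) * math.log(1 / eps2))
          + n2 ** 4.5 * math.log(1 / eps_c))
    return o1 * math.log(1 / eps3)

def exhaustive_cost(n1, n2, num_searches=9, eps_c=1e-3):
    """Cost of the partial exhaustive search: I SDR solves of each size,
    with I = 9 matching the 0.1:0.1:0.9 grid of Algorithm 4."""
    return num_searches * (n1 ** 4.5 + n2 ** 4.5) * math.log(1 / eps_c)

# With equal accuracies, the alternating algorithm is the more expensive one,
# consistent with the comparison drawn in the text.
ratio = alternating_cost(32, 32) / exhaustive_cost(32, 32)
```

For these (hypothetical) settings the ratio is well above one, matching the claim that the partial exhaustive search has lower complexity.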
\begin{algorithm}[tp]
\caption{The Partial Exhaustive Search Algorithm}\label{Alg4}
\begin{algorithmic}[1]
\STATE{{\bf Initialization} $P_{opt} = 10000, \alpha_{opt} = 0, \mathbf{w}_k^*, \mathbf{e}_k^*, \forall k$}
\FOR{ $\alpha = 0.1:0.1:0.9$ }
\STATE {{\bf Initialization} $\epsilon = 0.001, i = 0, \mathbf{e}_k^{(0)}$}
\WHILE {$\sum\limits_{k=1}^K {\rm Tr}(\mathbf{W}_k^{(i)}) - \sum\limits_{k=1}^K {\rm Tr}(\mathbf{W}_k^{(i+1)}) > \epsilon$}
\STATE{Update $\mathbf{W}_k^{(i)}, \forall k$ by solving P6}.
\STATE{Update $\mathbf{w}_k^{(i)}, \forall k$ by decomposing $\mathbf{W}_k^{(i)}, \forall k$ based on Gaussian Randomization method.}
\STATE {Update $\mathbf{V}_k^{(i)}, \forall k$ by solving P7 based on given $\mathbf{w}_k^{(i)}, \forall k$.}
\STATE {Update $\mathbf{e}_k^{(i)}, \forall k$ by decomposing $\mathbf{V}_k^{(i)}, \forall k$ based on Gaussian Randomization method.}
\STATE{$i = i+1.$}
\ENDWHILE
\IF{$P_{opt} > \sum\limits_{k=1}^K {\rm Tr}(\mathbf{W}_k^{(i)})$}
\STATE{$P_{opt} = \sum\limits_{k=1}^K {\rm Tr}(\mathbf{W}_k^{(i)})$.}
\STATE{$\alpha_{opt} = \alpha$, $\mathbf{w}_k^* = \mathbf{w}_k^{(i)}$, $\mathbf{e}_k^* = \mathbf{e}_k^{(i)}$, $\forall k$.}
\ENDIF
\ENDFOR
\STATE{{\bf Output} $\alpha_{opt}, \mathbf{w}_k^*, \mathbf{e}_k^*, \forall k$.}
\end{algorithmic}\label{Al1}
\end{algorithm}
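The control flow of Algorithm \ref{Alg4} can be sketched with a mocked inner solver. The quadratic power curve below is purely illustrative and not from the paper; a real implementation would replace \texttt{solve\_p6\_p7} with the P6/P7 solves via a convex solver:

```python
import numpy as np

def solve_p6_p7(alpha):
    """Stand-in for one inner pass (solve P6, decompose, solve P7, decompose).

    Returns a mocked total transmit power; the quadratic shape in alpha
    (minimum at 0.6) is an arbitrary assumption for demonstration only.
    """
    return 10.0 + 5.0 * (alpha - 0.6) ** 2

p_opt, alpha_opt = float("inf"), 0.0
for alpha in np.arange(0.1, 1.0, 0.1):   # grid search over the shared coefficient
    power = solve_p6_p7(alpha)
    if power < p_opt:                    # keep the best power and coefficient
        p_opt, alpha_opt = power, alpha
```

The outer loop mirrors steps 2, 9, and 10 of Algorithm \ref{Alg4}: each grid point is evaluated once and the coefficient achieving the lowest transmit power is retained.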
\section{Numerical Results}
In this section, we present simulation results for the proposed algorithms. In the simulations, channel gains are generated by
\begin{align}
\mathbf{h}_{k,e} = \frac{\mathbf{h}_{k,e}^*}{\sqrt{ d_0^{\alpha_0}}} \quad
\mathbf{G}_k = \frac{\mathbf{G}_k^*}{\sqrt{d_1^{\alpha_1}}} \quad
\mathbf{h}_{k,c} = \frac{\mathbf{h}_{k,c}^*}{\sqrt{ d_2^{\alpha_2}}}
\end{align}
where $k = 1,2,...,K$, and $\mathbf{h}_{k,e}^*$, $\mathbf{G}_k^*$, $\mathbf{h}_{k,c}^*$ are complex Rayleigh fading coefficients. $d_0 = 10$ m, $d_1 = 50$ m, and $d_2 = 10$ m denote the distance between the IRS and the cell edge user, between the BS and the IRS, and between the BS and the cell center user, respectively. $\alpha_0 ,\alpha_1 ,\alpha_2$ are the path loss exponents of the corresponding links. We assume that all cell center users are at the same distance from the BS, all cell edge users are at the same distance from their associated IRS, and all IRSs are at the same distance from the BS. We set $\alpha_0 = \alpha_2 = 2$ and $\alpha_1 = 2.2$. The noise power is $\sigma^2 = BN_0$, where the bandwidth is $B = 100$ MHz and the noise power spectral density is $N_0 = -80$ dBm. \par
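The channel generation above can be sketched in a few lines of numpy. The number of clusters $K$ and the dBm-to-watt conversion (assuming $N_0$ is given per Hz) are our own assumptions; $M = 8$ and $N = 32$ match the values used in the figures:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, N = 3, 8, 32                       # clusters (assumed), BS antennas, IRS elements
d0, d1, d2 = 10.0, 50.0, 10.0            # link distances in meters (from the text)
a0, a1, a2 = 2.0, 2.2, 2.0               # path loss exponents (from the text)

def rayleigh(*shape):
    """i.i.d. CN(0, 1) entries, i.e. complex Rayleigh-fading coefficients."""
    return (rng.standard_normal(shape)
            + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)

h_ke = [rayleigh(N) / np.sqrt(d0 ** a0) for _ in range(K)]    # IRS -> edge user
G_k = [rayleigh(N, M) / np.sqrt(d1 ** a1) for _ in range(K)]  # BS -> IRS
h_kc = [rayleigh(M) / np.sqrt(d2 ** a2) for _ in range(K)]    # BS -> center user

# Noise power sigma^2 = B * N0 with B = 100 MHz and N0 = -80 dBm,
# converted to watts here under the per-Hz assumption.
B, N0_dBm = 100e6, -80.0
sigma2 = B * 10.0 ** (N0_dBm / 10.0) * 1e-3
```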
Fig. \ref{Fig1} shows the transmit power at the BS versus the number of reflecting elements at each IRS. We compare the performance of the proposed schemes with the random phase scheme in NOMA and OMA. In Fig. \ref{Fig1}, the number of antennas at the BS is $M = 8$, and the data rate requirement of all central users and cell edge users is 1 bps/Hz. The transmit power at the BS of all schemes decreases rapidly as the number of reflecting elements increases. From Fig. \ref{Fig1}, we can see that both proposed algorithms require less transmit power than the benchmarks. Comparing the two proposed algorithms, the performance gap is very small at first but gradually widens as the number of reflecting elements at the IRSs grows. The result in Fig. \ref{Fig1} demonstrates that the alternating algorithm yields the best performance among the schemes shown in the figure. \par
Fig. \ref{Fig2} shows the transmit power at the BS versus the minimum data rate of the central users. In this figure, we assume that all central users share the same data rate requirement, and the data rates of all cell edge users are set to 1 bps/Hz. We set the number of antennas at the BS to $M = 8$ and the number of reflecting elements at each IRS to $N = 32$. According to Shannon's capacity formula, a higher data rate requires a higher transmit power at the BS. All schemes in Fig. \ref{Fig2} follow the same trend: the transmit power at the BS increases with the central users' minimum data rate. From Fig. \ref{Fig2}, we find that the proposed alternating algorithm consumes the least power under the same data rate requirement. Although the partial exhaustive search algorithm cannot match the performance of the alternating algorithm, it has low complexity and still outperforms the NOMA scheme with random IRS phases and the OMA scheme. \par
\begin{figure}
\centering
\graphicspath{{./figures/}}\includegraphics[width=1.1\linewidth]{N_1.png}\\
\caption{The transmit power versus the number of elements.} \label{Fig1}
\end{figure}
\begin{figure}
\centering
\graphicspath{{./figures/}}\includegraphics[width=1.1\linewidth]{r_1.png}\\
\caption{The transmit power versus the minimum data rate of the central users.} \label{Fig2}
\end{figure}
\begin{figure}
\centering
\graphicspath{{./figures/}}\includegraphics[width=1.1\linewidth]{M_1.png}\\
\caption{The transmit power versus the number of antennas at the BS.} \label{Fig3}
\end{figure}
\begin{figure}
\centering
\graphicspath{{./figures/}}\includegraphics[width=1.1\linewidth]{d_1.png}\\
\caption{The transmit power versus the distance between the IRS and the cell edge user.} \label{Fig4}
\end{figure}
\begin{figure}
\centering
\graphicspath{{./figures/}}\includegraphics[width=1.1\linewidth]{q_coverge.png}\\
\caption{The value of $q$ versus the iteration number.} \label{Fig5}
\end{figure}
\begin{figure}
\centering
\graphicspath{{./figures/}}\includegraphics[width=1.1\linewidth]{power_converge.png}\\
\caption{The transmit power at the BS versus the iteration number.} \label{Fig6}
\end{figure}
Fig. \ref{Fig3} shows the transmit power versus the number of antennas at the BS. Similar to Fig. \ref{Fig1}, we compare the two proposed algorithms with NOMA and OMA using random IRS phases. In Fig. \ref{Fig3}, we set the number of reflecting elements at each IRS to $N = 32$, and the data rate requirement of all users is 1 bps/Hz. The transmit power at the BS of all schemes decreases with an increasing number of antennas, and the two proposed algorithms still outperform the benchmarks. When the number of antennas at the BS is 4, the partial exhaustive search algorithm performs slightly better than the alternating algorithm. However, the power consumption of the alternating algorithm drops faster as the number of BS antennas increases, indicating that the alternating algorithm performs better than the partial exhaustive search method in most cases with a large number of antennas. \par
Fig. \ref{Fig4} shows the transmit power versus the distance between the IRS and the cell edge user in each cluster. In Fig. \ref{Fig4}, we set the number of antennas at the BS to $M = 8$ and the number of reflecting elements at each IRS to $N = 32$. Each user's minimum data rate is 1 bps/Hz. As expected, the transmit power of all schemes increases as the distance between the IRS and the cell edge user in each cluster grows. Similar to Fig. \ref{Fig3}, the proposed alternating algorithm consumes the least energy among all schemes. \par
Fig. \ref{Fig5} shows the value of $q$ in the initial point search algorithm versus the iteration number. As discussed before, $q$ represents the distance between the current problem and a feasible problem, and driving $q$ to zero enforces feasibility of the current problem. In Fig. \ref{Fig5}, we can see that the value of $q$ for the $R_c = 1.2$ bps/Hz scheme is larger than that for the $R_c = 1$ bps/Hz scheme at each iteration. Moreover, the scheme with $R_c = 1.2$ bps/Hz needs more iterations to converge, which indicates that a higher data rate makes the constraints more difficult to fulfill. \par
Fig. \ref{Fig6} shows the transmit power at the BS versus the iteration number in Algorithm 2. We evaluate the transmit power for different numbers of antennas at the BS. We set all users' data rates to 1 bps/Hz and the number of reflecting elements at each IRS to $N = 32$. From Fig. \ref{Fig6}, we find that the transmit power at the BS decreases as the number of iterations increases, which shows that the algorithm converges as it proceeds.
\section{Conclusion}
In this paper, we investigated the joint optimization of beamforming, power allocation and IRS phase shifts in a NOMA-IRS assisted multi-cluster network. By introducing an inequality approximation, SCA and SDR, we proposed an alternating algorithm that minimizes the transmit power by iteratively solving the beamforming optimization and the phase shifting feasibility problem until convergence. Furthermore, we proposed an initial point search algorithm to guarantee the feasibility of the beamforming optimization subproblem. Moreover, to reduce the complexity of the proposed algorithm, we also provided a low-complexity solution based on partial exhaustive search. The simulation results demonstrated that the alternating algorithm outperforms the simplified partial exhaustive search algorithm at the cost of higher complexity. In future research, IRS reconfiguration under imperfect channel state information will be studied, and the inter-cluster interference caused by the IRS will also be considered.
\bibliographystyle{IEEEtran}
\section{Introduction}
Voice activity detection (VAD), whose main objective is to detect voiced speech segments and distinguish those from unvoiced ones, is a crucial component for tasks such as speech recognition, speaker recognition, and speaker verification.
Deep learning approaches have been successfully applied to VAD~\cite{Hughes2013,ryant2013speech,thomas2014analyzing,Lavechin2020}.
For VAD in complex environments, neural networks (NN) have been successful.
Deep neural networks (DNN) and specifically convolutional neural networks (CNN) offer improved modeling capabilities compared to traditional methods~\cite{ryant2013speech}, while recurrent- (RNN) and long short-term memory (LSTM) networks can better model long-term dependencies between sequential inputs~\cite{Hughes2013, eyben2013real,Tong2016}.
However, despite the application of deep learning methods, NN-based VAD training still requires frame labels.
Thus, the training data utilized is usually recorded in controlled environments, with or without additional synthetic noise~\cite{hirsch2000aurora}. This inevitably prevents VAD from being applied in real-world scenarios, where speech in the wild is often accompanied by countless unseen noises with different characteristics.
Therefore, this paper proposes a method to detect speech beyond clean and controlled noisy environments. It should be noted that frame-level labels are quite unlikely to come with real-world recordings, since manual labeling is costly and label prediction from a hidden Markov model requires prior knowledge about the language being spoken~\cite{rabiner_hmm_tutorial}.
A task to detect speech components while enabling noisy data training, is related to weakly-supervised sound event detection (WSSED), which detects and localizes different sounds, including speech via clip-level supervision.
Since WSSED systems are reported~\cite{dinkel2019duration} to be robust to noise and only require clip-level labels, this work integrates WSSED methods to scale VAD to in-the-wild scenarios and relax its dependence on frame labeling. Specifically, we investigate two questions: 1) Are current multi-class WSSED models comparable in performance to DNN-based VAD? 2) Is utterance-level training a viable alternative to frame-level training?
We thus introduce our framework, a general-purpose training framework for VAD (GPVAD, see \Cref{fig:model_approach}).
By general purpose, we refer to two distinct aspects: First, the framework is noise-robust and capable of being deployed in wild, real-world scenarios; Secondly, the framework can be trained on unconstrained data, thus enabling learning from big webly data like noisy online-videos.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{GPVAD.pdf}
\caption{The proposed framework. A CRNN architecture is utilized, where GPVAD is trained on clip-level labels and VAD-C on frame-level labels. Each Conv2d block represents batch normalization, followed by a zero-padded 2-dimensional convolution with kernel size $3\times3$ and a leaky ReLU activation with a negative slope of $0.1$.
The CNN output is fed into a bidirectional gated recurrent unit (GRU) with 128 hidden units.
The architecture sub-samples the temporal dimension $T$ by a factor of 4; the output is later upsampled to match the original input temporal dimension.
The number of events $E$ is set to $527$ for GPV-F and to $2$ for GPV-B and VAD-C.
After post-processing the output, only the \textit{Speech} event is kept for final evaluation.}
\label{fig:model_approach}
\end{figure*}
The paper is structured as follows: In \Cref{sec:relatedwork}, we briefly review the related work on WSSED and how it can be transferred for VAD in the wild.
In \Cref{sec:approch}, the GPVAD approach is introduced.
Moreover, in \Cref{sec:experiments} we introduce our experimental setup and provide implementation details.
In \Cref{sec:results} the results are presented and finally in \Cref{sec:conclusion} a conclusion is provided.
\section{Weakly supervised sound event detection}
\label{sec:relatedwork}
Since WSSED works well for detecting speech in noisy environments without frame-level labeling, we borrow this idea to realize VAD in the wild. Here we present related work on sound event detection (SED), which aims to classify (audio tagging) and possibly localize multiple co-occurring sound events in a given audio clip.
In this work, we mainly focus on weakly-supervised SED (WSSED), a semi-supervised task that only has access to clip-level labels during training, yet needs to classify and localize specific events during evaluation. This weakly-supervised fashion enables training on noisy data with much lower labeling requirements.
Recent advances in weakly supervised sound event detection, in particular the detection and classification of acoustic scenes and events (DCASE) challenges~\cite{Serizel2018}, led to large improvements in predicting accurate sound event boundaries as well as event labels~\cite{lin2019specialized,yong_xu_att,Cakr2017,Kong2018,Pellegrini2019,Kong2019}.
In particular, recent work~\cite{dinkel2019duration} has shown promising performance regarding short, sporadic events such as speech.
\section{VAD in the wild via WSSED}
\label{sec:approch}
Traditionally, VAD for noisy scenarios is modeled as in \Cref{eq:noisy_eq}.
The assumption is that additive noise $\mathbf{u}$ can be filtered out from an observed speech signal $\mathbf{x}$ to obtain clean speech $\mathbf{s}$.
\begin{equation}
\label{eq:noisy_eq}
\mathbf{x} = \mathbf{s} + \mathbf{u}
\end{equation}
However, directly modeling $\mathbf{u}$ is rather difficult, since each type of noise has its own individual traits.
Therefore, we aim at learning the properties of $\mathbf{s}$ by observing it with potentially $L$ different non-speech events $\left( \mathbf{u}_1 \ldots, \mathbf{u}_L \right)$.
Those events are not restricted to being background/foreground noises and can have distinct real-world sounds (e.g., Cat, Music).
\begin{align}
\label{eq:model}
\begin{split}
\mathcal{X} &= \{ \mathbf{x}_1,\ldots, \mathbf{x}_{l}, \ldots , \mathbf{x}_{L} \}\\
\mathbf{x}_l &= \left( \mathbf{s}, \mathbf{u}_l \right)
\end{split}
\end{align}
Our approach stems from multiple instance learning (MIL), meaning that training set knowledge about specific labels is incomplete (e.g., Speech never directly observed).
Here, we model our observed speech data $\mathcal{X}$ as a ``bag'', containing all co-occurrences of Speech in conjunction with another, possibly noisy background/foreground event label $l \in \{1, \ldots, L\}$ from a set of all possible event labels $L < E$ (\Cref{eq:model}).
So to speak, our approach aims to refine a model's belief about the speech signal $\mathbf{s}$, within complex environmental scenarios.
The advantage of this modeling method is that it can be applied for both frame- and clip-level training.
Our GPVAD, therefore, relaxes these constraints by allowing training on clip/utterance level, where each training clip contains at least one event of interest.
We propose two different models: GPV-F, which outputs $E=527$ labels ($L=405$) and the naive GPV-B, $E=2, L=1$.
GPV-F can be seen as a full-fledged WSSED approach using maximal label supervision and is, therefore, more costly than GPV-B, which only requires knowledge about a clip containing Speech.
However, GPV-F should be capable of modeling each individual noise-event instead of clustering all noise into a single class (GPV-B), thus possibly enhancing performance in heavy noise scenarios.
The two models are compared against a model trained on frame-level, further referred to as VAD-C.
All models share a common backbone convolutional recurrent neural network (CRNN)~\cite{dinkel2019duration} approach used in WSSED, which is shown to be robust towards short, sporadic events such as Speech.
The following modifications to~\cite{dinkel2019duration} have been made: \begin{enumerate*}
\item Add an upsampling operation, such that the models' time-resolution remains constant.
\item Use $L^p$ pooling as our default with $p=4$, as it has been shown to be beneficial for duration-invariant estimates.
\end{enumerate*}
Different from VAD-C training, where frame-level labels are available, our GPVAD framework is split into two distinct stages.
During training, only clip/utterance-level labels are accessible.
Therefore a temporal pooling function is required (\Cref{eq:linear_softmax}).
During inference, post-processing needs to be applied (\Cref{para:post_processing}) to convert probability sequences into binary labels (absence/presence of an event), and any predicted non-speech label is discarded.
The framework is depicted in \Cref{fig:model_approach}.
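As a minimal illustration of such post-processing (the exact scheme used in the paper, e.g. double thresholding, may differ; the threshold and window length below are assumptions), a per-frame speech probability sequence can be binarized and median-smoothed as follows:

```python
import numpy as np

def binarize(probs, threshold=0.5, win=5):
    """Convert a per-frame speech probability sequence into binary labels.

    Thresholds the probabilities, then median-smooths the hard decisions
    to suppress spurious single-frame flips.
    """
    hard = (probs > threshold).astype(int)
    pad = win // 2
    padded = np.pad(hard, pad, mode="edge")
    smoothed = np.array([np.median(padded[i:i + win])
                         for i in range(len(hard))])
    return smoothed.astype(int)

probs = np.array([0.1, 0.2, 0.9, 0.95, 0.4, 0.97, 0.9, 0.1])
labels = binarize(probs)  # -> [0, 0, 0, 1, 1, 1, 0, 0]
```

Note that the dip at index 4 ($p=0.4$) is bridged by its confident neighbors, yielding one contiguous speech segment instead of two fragments.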
\section{Experiments}
\label{sec:experiments}
In our work, deep neural networks were implemented in PyTorch~\cite{paszke2017automatic}, and front-end feature extraction utilized librosa~\cite{Librosa_mcfee}.
Code is available online\footnote{Available at github.com/richermans/gpv}.
\subsection{Datasets}
\label{ssec:data}
\begin{table}[htbp]
\centering
\begin{tabular}{l|r|r|r|r}
Datatype & Name & Condition & Label & Duration \\
\hline\hline
\multirow{2}{*}{Training} & Audioset & Real & Clip & 15 h \\
& Aurora 4+ & Syn & Frame & 30 h \\
\hline
\multirow{3}{*}{Evaluation}& Aurora 4 (A) & Clean & Frame & 40 min \\
& Aurora 4 (B) & Syn & Frame & 8.7 h \\
& DCASE18 & Real & Frame & 100 min \\
\end{tabular}
\caption{Training datasets for GPVAD (Audioset) and VAD-C (Aurora 4+), as well as the three proposed evaluation protocols for clean, synthetic noise and real-world scenarios. Duration represents the approximate length of speech.}
\label{tab:datasets}
\end{table}
Utilized datasets in this work can be split into a train data portion, which differs between the GPVAD and VAD approaches, and evaluation data, which is shared by both approaches.
Our main GPVAD training dataset is the ``balanced'' set provided by the AudioSet corpus~\cite{Gemmeke2017}, containing 21100 of 22160 (due to unavailability) 10-second YouTube audio clips, categorized into 527 noisy event labels.
From the available 21100 clips (58h), 5452 clips ($\approx$ 15h) are labeled as containing speech, but always alongside $L=405$ other events (e.g., Bark).
Regarding GPV-B, we relabel all 526 non-speech events in the balanced dataset as ``noise'', thus $\mathcal{X}_{\text{GPV-B}} = \{ \left(\mathbf{s}, \mathbf{u}_{noise} \right), \mathbf{u}_{noise} \}$.
It is important to note that for GPV-B/F training, speech is never individually observed.
Our VAD-C model is trained on the Aurora 4 training set extended by 15 hours of Switchboard~\cite{godfrey_switchboard}, obtaining our Aurora 4+ training subset, containing clean as well as synthetic noise data.
The additive synthetic noise (Syn) is obtained from six different noise types (car, babble, restaurant, street, airport, and train) that were added at randomly selected SNRs between 10 and 20 dB.
All utilized datasets are described in \Cref{tab:datasets}.
Three different evaluation scenarios are proposed.
First, we validate on the 40 minutes long, clean Aurora 4 test set~\cite{hirsch2000aurora}.
Second, we synthesize a noisy test set based on the clean Aurora 4 test set by randomly adding noise from 100 noise types at SNRs ranging from 5 dB to 15 dB in steps of 1 dB.
Lastly, we merge the development and evaluation tracks of the DCASE18 challenge~\cite{Serizel2018}, itself a subset of Audioset, to create our real-world evaluation data.
The DCASE18 data provides ten domestic environment event labels, of which we neglect all labels other than Speech, but report the number of instances where non-speech labels were present.
Our DCASE18 evaluation set encompasses 596 utterances labeled as ``Speech''; 414 utterances (69\%) contain another non-speech label, 114 utterances (20\%) contain only speech, and 68 utterances (11\%) contain two or more non-speech labels.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{data_distributions.pdf}
\caption{Evaluation data distribution with regards to duration (left) and number of segments per utterance (right), between the Aurora 4 (orange) and DCASE18 (blue) sets. Best viewed in color. }
\label{fig:dataset_distribution}
\end{figure}
\begin{table*}
\centering
\begin{tabular}{l|l|l||r|r|r|r|r|r}
\multirow{2}{*}{Testset} & \multirow{2}{*}{Condition} & \multirow{2}{*}{Model} & \multicolumn{5}{c}{Metric} \\
& & & F1-macro(\%) & F1-micro(\%) & AUC(\%) & FER(\%) & Event-F1(\%) \\
\hline\hline
\multirow{3}{*}{Aurora 4 (A)} & \multirow{3}{*}{Clean} & VAD-C & \textbf{96.55} & \textbf{97.43} & \textbf{99.78} & \textbf{2.57} & \textbf{78.9} \\
& & GPV-B & 86.24 & 88.41 & 96.55 & 11.59 & 21.00 \\
& & GPV-F & \underline{95.58} & \underline{95.96} & \underline{99.07} & \underline{4.01} & \underline{73.70} \\
\hline
\multirow{3}{*}{Aurora 4 (B)} & \multirow{3}{*}{Syn} & VAD-C & \textbf{85.96} & \textbf{90.28} & \textbf{97.07} & \textbf{9.71} & \textbf{47.5} \\
& & GPV-B & 73.90 & 75.75 & 89.99 & 24.25 & 8.0 \\
& & GPV-F & \underline{81.99} & \underline{84.26} & \underline{94.63} & \underline{15.74} & \underline{35.4} \\
\hline
\multirow{3}{*}{DCASE18} & \multirow{3}{*}{Real} & VAD-C & 77.93 & \underline{78.08} & 87.87 & 21.92 & \underline{34.4} \\
& & GPV-B & \underline{77.95} & 75.75 & \underline{89.12} & \underline{19.65} & 24.3 \\
& & GPV-F & \textbf{83.50} & \textbf{84.53} & \textbf{91.80} & \textbf{15.47} & \textbf{44.8} \\
\end{tabular}
\caption{Best achieved results on each respective evaluation condition. Bold marks best result for the respective dataset, while underlined marks second best.}
\label{tab:results}
\end{table*}
As can be seen in \Cref{fig:dataset_distribution}, the DCASE18 evaluation data differs from the Aurora 4 data in terms of average spoken duration (1.49 s vs. 3.31 s) as well as the number of spoken segments per utterance (3.87 vs. 2.08).
\subsection{Evaluation Metrics}
\label{ssec:eval}
\paragraph*{Frame-level} For frame-level evaluation, we utilize frame macro/micro averaged F1 scores (F1-macro, F1-micro), Area Under the Curve (AUC)~\cite{ROC_AUC}, and frame error rate (FER).
\vspace{-10pt}
\paragraph*{Segment-level} For segment-level evaluation we utilize event-based F1-Score (Event-F1)~\cite{Mesaros2016,Bilen2019}.
Event-F1 checks whether the onset, the offset, and the predicted label overlap with the ground truth, making it a measure of temporal consistency.
Following WSSED research~\cite{Serizel2018}, we set the t-collar value to 200 ms to allow a tolerance on onset predictions, and further permit a duration discrepancy of 20\% between reference and prediction.
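To make the matching criterion concrete, a minimal sketch (our own hypothetical helper, not the evaluation toolkit's implementation) of the per-event decision could look as follows:

```python
def event_matches(ref, pred, t_collar=0.2, dur_tolerance=0.2):
    """Decide whether a predicted event matches a reference event.

    ref, pred: (onset, offset) tuples in seconds.
    A match requires the onset error to stay within t_collar (200 ms)
    and the predicted duration to deviate from the reference duration
    by at most dur_tolerance (20%) of the reference duration.
    """
    ref_on, ref_off = ref
    pred_on, pred_off = pred
    onset_ok = abs(pred_on - ref_on) <= t_collar
    ref_dur = ref_off - ref_on
    dur_ok = abs((pred_off - pred_on) - ref_dur) <= dur_tolerance * ref_dur
    return onset_ok and dur_ok
```

Event-F1 then counts matched events as true positives; unmatched predictions and references become false positives and false negatives, respectively.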
\subsection{Setup}
\label{ssec:setup}
All experiments in this work use $64$-dimensional log-Mel power spectrograms (LMS) as features.
Each LMS sample was extracted via a $2048$-point Fourier transform every $20$ ms with a window size of $40$ ms, using a Hann window.
During training, zero padding to the longest sample length within a batch is applied, whereas during inference a batch size of $1$ is used, meaning no padding.
The training criterion for all experiments is the binary cross-entropy (\Cref{eq:bce}) between the ground truth $\hat{y}$ and the prediction $y$, averaged over all $N$ samples.
\begin{equation}
\label{eq:bce}
\mathcal{L}(\hat{y}, y) = -\frac{1}{N} \sum_{n=1}^{N} \left[ \hat{y}_{n} \log(y_n) + (1-\hat{y}_n)\log(1-y_{n}) \right]
\end{equation}
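For reference, \Cref{eq:bce} can be sketched directly (a minimal, framework-free version of ours; in practice a deep learning library's built-in loss would be used):

```python
import math

def bce_loss(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy of the training criterion above,
    averaged over N samples; probabilities are clamped to (0, 1)
    for numerical stability."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # avoid log(0)
        total += t * math.log(p) + (1.0 - t) * math.log(1.0 - p)
    return -total / len(y_true)
```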
Linear softmax~\cite{Wang2018,dinkel2019duration} (\Cref{eq:linear_softmax}) is utilized as the temporal pooling layer, merging frame-level probabilities $y_t(e) \in \left[ 0,1 \right]$ into a single clip-level representation $y(e) \in \left[ 0,1 \right]^E$.
\begin{equation}\label{eq:linear_softmax}
y(e) = \frac{\sum_{t}^T y_t(e)^2}{\sum_{t}^T y_t(e)}
\end{equation}
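A minimal sketch of this pooling function (ours, operating on a single event class for clarity):

```python
def linear_softmax_pool(frame_probs):
    """Linear softmax pooling: aggregate per-frame probabilities
    y_t(e) into a clip-level probability y(e) = sum(y_t^2) / sum(y_t),
    which weights confident frames more strongly than mean pooling."""
    den = sum(frame_probs)
    if den == 0.0:
        return 0.0  # no active frames at all
    num = sum(p * p for p in frame_probs)
    return num / den
```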
\vspace{-10pt}
\paragraph*{GPVAD}
The available training data was split into a label-balanced 90\% training set and a 10\% held-out set using stratification~\cite{sechidis2011stratification}.
Due to the inherent label-imbalance within Audioset, sampling is done such that each batch contains evenly distributed clips from each label.
Training uses Adam optimization with a starting learning rate of $1\mathrm{e}{-4}$ and a batch size of 64, and terminates early if the criterion has not decreased on the held-out set for seven epochs.
\vspace{-10pt}
\paragraph*{VAD-C}
VAD-C training utilizes a batch size of 20, where the loss (\Cref{eq:bce}) is masked out for padded frames.
The learning rate is set to $1\mathrm{e}{-5}$, and SGD is used for model optimization.
Training target labels are obtained by forced alignment with an HMM-based ASR model trained in Kaldi~\cite{povey2011kaldi}.
\vspace{-10pt}
\paragraph*{Post-processing}
\label{para:post_processing}
During inference, post-processing is required to obtain hard labels from the class-wise probability sequences $y_t(e)$.
We use double-threshold post-processing~\cite{dinkel2019duration,Kong2018}, which employs two thresholds $\phi_{\text{low}}=0.1$ and $\phi_{\text{hi}}=0.5$.
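A sketch of one common double-threshold variant (our assumption; the cited implementations may differ in details): frames above the high threshold seed an active region, which is then grown over all contiguous frames above the low threshold.

```python
def double_threshold(probs, phi_low=0.1, phi_hi=0.5):
    """Convert a per-frame probability sequence into hard 0/1 labels.

    A frame above phi_hi seeds an active region; the region is then
    extended in both directions over contiguous frames above phi_low.
    """
    n = len(probs)
    hard = [0] * n
    for i, p in enumerate(probs):
        if p > phi_hi and hard[i] == 0:
            # grow the region left and right while above the low threshold
            left = i
            while left > 0 and probs[left - 1] > phi_low:
                left -= 1
            right = i
            while right < n - 1 and probs[right + 1] > phi_low:
                right += 1
            for j in range(left, right + 1):
                hard[j] = 1
    return hard
```

Compared with a single 0.5 threshold, this keeps low-confidence frames that are attached to a confident speech region while still suppressing isolated low-confidence noise.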
\section{Results}
\label{sec:results}
Our results can be seen in \Cref{tab:results}.
Firstly, we show that our VAD-C model performs on par with other deep neural network approaches~\cite{Tong2016}.
Comparing VAD-C with GPV-B/F, it can be seen that VAD-C is indeed the best-performing model across our metrics on the clean and synthetic-noise datasets.
However, evaluation on the real-world dataset reveals a different picture.
Here, VAD-C seems to be struggling against the naive GPV-B approach (AUC 87.87 vs. 89.12, FER 21.92 vs. 19.65), indicating that VAD-C is more likely to misclassify speech in the presence of real-world noise.
Moreover, in real-world scenarios, GPV-F is outperforming VAD-C for each proposed metric.
Our proposed GPV-F approach can also be seen to be consistently noise-robust since its performance difference between synthetic noise and real-world scenarios is minor.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{predictions_all.pdf}
\caption{Per-frame probability output for three sample clips, with visualized speech occurrence (boxed, gray). (Top) Contains a clip from Aurora 4 (B); (Center) contains a musician playing a guitar (DCASE18); (Bottom) contains somebody talking with background noises (DCASE18). Post-processing thresholds $\phi_{\text{hi}},\phi_{\text{low}}$ are indicated. Best viewed in color. }
\label{fig:prob_output}
\end{figure}
Even though GPV-B underperforms the other two approaches on average, one should note that it is the least costly system: labeling data for GPV-B essentially amounts to the binary question of whether any speech was heard within a clip, making this approach capable of cheaply scaling to large data.
We conclude that GPVAD models trained with only clip-level labels are capable of competing with models trained on frame-level labels.
\paragraph*{Qualitative Results}
In order to visualize model-specific behavior, three clips (one Aurora 4 Noisy, two DCASE18) were sampled from the testing set; the per-frame output probabilities for each model are shown in \Cref{fig:prob_output}.
In the case of the synthetic Aurora 4 test at the top, we can see that our GPVAD models are capable of modeling short pauses between two speech segments, where VAD-C fails; however, both GPVAD models fail to correctly estimate the end of the second speech segment.
The center sample further demonstrates a typical VAD-C problem in real-world scenarios: it is unable to distinguish between foreground events (here, Guitar) and active speech for a majority of the utterance.
The bottom sample especially exemplifies this problem: VAD-C starts to predict speech where there is none, while both GPVAD models are capable of distinguishing any background noise from speech.
Please note that the bottom clip contains laughter at the end, which VAD-C classifies as speech.
In our future work, we would like to further extend the scope of GPVAD training by utilizing larger training data (e.g., unbalanced AudioSet).
\section{Conclusion}
\label{sec:conclusion}
This paper introduces a noise-robust VAD approach by utilizing weakly labeled sound event detection.
Two GPVAD systems are investigated: GPV-B, trained on binary speech and non-speech pairs only, as well as GPV-F, which utilizes all 527 AudioSet labels.
Our evaluation protocol thoroughly compares our proposed GPVAD approach to traditional VAD utilizing five distinct metrics.
Results indicate that GPV-B, even though trained on clip-wise, unconstrained speech, can be used to detect spoken language without requiring clean, frame-labeled training data.
Further, while GPV-B/F both fall short in clean and synthetic noise scenarios against VAD-C, they excel at stable predictions for real-world data.
Specifically, it can be seen that our proposed approach is robust in its performance across the synthetic- and real-world-noise datasets.
Our best-performing model, GPV-F, outperforms traditional supervised VAD approaches by a significant margin on real-world data, culminating in absolute gains of 5.57\% F1-macro, 6.45\% F1-micro, 3.93\% AUC, and 10.4\% Event-F1, as well as an absolute reduction of 6.45\% in FER.
\section{Acknowledgements}
This work has been supported by National Natural Science Foundation of China (No.61901265) and Shanghai Pujiang Program (No.19PJ1406300). Experiments have been carried out on the PI supercomputer at Shanghai Jiao Tong University.
\bibliographystyle{IEEEtran}
\section{Introduction}
A characteristic feature of quantum theory is that composite quantum
systems, whose parts do not interact, may possess nonlocal properties.
For example, entanglement \cite{Horodeckis-2009} and quantum information
both exhibit nonlocality. Entangled states are nonlocal because they
give rise to correlations that cannot be explained by local hidden
variable theories \cite{Bell-1964}, whereas, nonlocality of quantum
information is in the sense that a measurement on the whole system
sometimes reveals more information about the state than coordinated
local measurements on its parts \cite{Bennett-I-99,Bennett-II-99+Divin-2003,Peres-Wootters-1991,Ghosh-2001,Horodecki-2003,Bandyo-2011}.
This nonlocal nature of quantum information is generally manifested
in the setting of local discrimination of quantum states \cite{Bandyo+Walgate-2009,Bandyo-2011,Bennett-I-99,Chefles2004,Ghosh-2001,Ghosh-2002,Hayashi-etal-2006,Horodecki-2003,Nathanson-2005,Peres-Wootters-1991,Virmanietal2001,Walgate-2000-2002,Watrous-2005,Duan2007-2009,Hayashi-2}.
One of the principal goals in quantum information theory is to understand
and quantify the relationship between entanglement and nonlocality
of quantum information.
The difficulty in quantifying the role of entanglement in local state
discrimination is evident from some of the early results, which show
that the presence of entanglement is neither necessary nor sufficient
to ensure whether a given set of orthogonal states is locally indistinguishable.
That entanglement is not necessary is evident from the examples of
locally indistinguishable sets of orthogonal product states exhibiting
{}``non-locality without entanglement'' or forming an unextendible
product basis (UPB) \cite{Bennett-I-99,Bennett-II-99+Divin-2003}.
On the other hand, any two orthogonal states can be perfectly distinguished
no matter how entangled they are \cite{Walgate-2000-2002}, showing
that entanglement is not sufficient for local indistinguishability.
Nevertheless, entanglement is often the key factor in a typical locally
indistinguishable set as in the case of a complete bipartite orthogonal
basis containing one or more entangled states \cite{Horodecki-2003,Ghosh-2001};
if such a set can be perfectly distinguished locally, then one can
create entanglement from product states using LOCC \cite{Horodecki-2003,Ghosh-2001},
a task known to be impossible.
Significant progress, which also motivated the present work, was reported
in \cite{Hayashi-etal-2006}, where it was shown that entanglement
does guarantee difficulty in local state discrimination. In particular,
it was shown that if the states (pure or mixed) $\sigma_{1},\sigma_{2},...,\sigma_{N}$
can be perfectly discriminated by LOCC, then the number of states
is bounded by\begin{equation}
N\leq D/\overline{d\left(\sigma_{i}\right)}\leq D/\overline{r(\sigma_{i})}\leq D/\overline{2^{E\left(\sigma_{i}\right)}}\leq D/\overline{2^{G(\sigma_{i})}},\label{eq:Hayashi}\end{equation}
where, $D$ is the dimension of the Hilbert space of the composite
quantum system; $d\left(\sigma\right)$ is a quantity (to be defined
later) resembling distance to the nearest separable state; $r\left(\sigma\right)=\mbox{rank}\left(\rho\right)\left(1+R_{g}\left(\rho\right)\right)$,
where $\rho$ is the normalized projector onto the support \cite{support}
of $\sigma$ and $R_{g}\left(\rho\right)$ is the robustness of entanglement
\cite{Harrow-3-Steiner-Vidal}; $E\left(\sigma_{i}\right)=E_{R}(\sigma_{i})+S\left(\sigma_{i}\right)$,
where $E_{R}\left(\sigma\right)$ is the relative entropy \cite{Vedral-Plenio-1998}
and $S\left(\sigma\right)$ is the von Neumann entropy; $G\left(\sigma\right)$
is the geometric measure \cite{Hayashi-etal-2006,shimoy-geometric-measure},
and $\overline{x_{i}}=\frac{1}{N}\sum_{i=1}^{N}x_{i}$ denotes the
average. If the inequality is violated for any of the bounding quantities,
then we can certainly conclude that the given set of states cannot
be perfectly locally distinguished. However, if the inequality is
satisfied then no such definite conclusion can be drawn. Applications
of inequality (\ref{eq:Hayashi}) for LOCC discrimination of interesting
multipartite ensembles having certain group symmetries can be found
in \cite{Hayashi-2}.
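As a quick illustration of inequality (\ref{eq:Hayashi}) (our own
example, not drawn from the cited works), consider the Bell basis in
$2\otimes2$: here $D=4$ and each Bell state carries one ebit, so that
$E=G=1$ and the bound yields\[
N\leq\frac{D}{\overline{2^{G}}}=\frac{4}{2}=2,\]
consistent with the known result that no three Bell states can be
perfectly distinguished by LOCC \cite{Ghosh-2001}.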
For pure states the bounding quantities (from right to left) correspond
to well-defined distance-like entanglement measures, namely, geometric
measure, relative entropy, and robustness of entanglement, thereby
allowing a clear interpretation: the number of pure states that can
be perfectly distinguished by LOCC is bounded by the total dimension
over average entanglement. This therefore clarifies the matter to
a great extent for pure states. For mixed states, however, no such
clear conclusion can be drawn, and the role of entanglement still
remains unclear.
The purpose of the present work is to investigate how local distinguishability
of a given set of orthogonal mixed states depend on entanglement and
mixedness of the states. We first show that local distinguishability
of mixed states can be completely characterized by local distinguishability
of their support%
\footnote{The support of a density matrix is the subspace spanned by its eigenvectors
corresponding to the nonzero eigenvalues.%
}. In particular, we establish a simple equivalence between local discrimination
of orthogonal states and subspaces in the sense that a given set of
density matrices can be perfectly distinguished by LOCC if and only
if their supports are also perfectly locally distinguishable, and
moreover, if the states can be perfectly distinguished, then the separable
measurement that distinguishes the states also distinguishes the supports
and vice versa. We use this fact to obtain the following results:
(a) state-specific properties such as inseparability and mixedness
of the density matrices (whose local distinguishability is under consideration)
do not have any special role in determining their local distinguishability,
(b) local distinguishability of mixed states may be completely determined
by maximal pure state entanglement within their supports, and (c)
the number of LOCC distinguishable orthogonal mixed states can be
bounded by the quantities that are optimized over all orthogonal mixed
state ensembles having identical supports. We now briefly discuss
results (a)-(c).
For result (a), we show that the state-specific properties such as
inseparability and mixedness of the given mixed states do not have
any fundamental role in determining their local distinguishability.
To see this, suppose $\mathbb{S}=\left\{ \sigma_{1},\sigma_{2},...,\sigma_{N}\right\} $
is a set of orthogonal density matrices whose local distinguishability
is under question. Let $\left\{ s_{1},s_{2},...,s_{N}\right\} $ be
the set of orthogonal subspaces where, $s_{i}$ is the support of
$\sigma_{i}$. It is clear that infinitely many sets like $\mathbb{S}$
exist, where each set contains $N$ orthogonal density matrices with
the property that $s_{i}$ is the support of its $i^{\mbox{th}}$
element. Let $\mathbb{Q}$ be the collection of all such sets; that
is, $\mathbb{Q}=\left\{ \mathbb{S},\mathbb{S}^{\prime},...\right\} $.
We then show that either every set in $\mathbb{Q}$ is perfectly distinguishable
by LOCC or none of them are, regardless of how entangled or mixed
the states in a given set are. That is, if $\mathbb{S}$ is perfectly
distinguishable by LOCC, then so is any set, say $\mathbb{S}^{\prime}\in\mathbb{Q}$
and vice versa, even though average entanglement or mixedness could
be very different for the states in $\mathbb{S}$ and $\mathbb{S}^{\prime}$
(for instance, the density matrices in $\mathbb{S}$ may be highly
entangled, whereas the density matrices in $\mathbb{S}^{\prime}$
may be very weakly entangled). We call this property subspace degeneracy.
Thus the state-specific properties of the density matrices $\sigma_{i}$
do not have any special role as far as their local distinguishability
is concerned.
For results (b) and (c) we use the above observations to present upper
bounds on the number of perfectly locally distinguishable orthogonal
mixed states. In particular, we obtain two kinds of upper bounds.
The first one shows that the number of orthogonal density matrices
that can be perfectly distinguished locally is bounded above by the
total dimension over the average of maximal pure state entanglement
in the supports of the density matrices. This bound is not necessarily
optimal but depends only on pure-state entanglement within the supports
of the states and therefore may be easy to compute in many instances.
This shows that local distinguishability of mixed states may be determined
by pure-state entanglement alone. The second bound is optimal in the
sense that it optimizes the bounding quantities over all orthogonal
ensembles (satisfying certain conditions) admissible within the supports
of the density matrices.
\section{Necessary conditions for perfect LOCC state discrimination}
Let $\mathcal{H}$ be the Hilbert space of a composite quantum system
and $D=\dim\mathcal{H}$. Throughout this paper we consider only finite-dimensional
systems. We note that any measurement realized by LOCC is separable
(the converse is not true \cite{Bennett-I-99}). A separable measurement
$\Pi=\left\{ \Pi_{1},\Pi_{2},\cdots,\Pi_{n}\right\} $ on $\mathcal{H}$
is a POVM satisfying $\sum_{i=1}^{n}\Pi_{i}=\mathcal{I_{H}}$, where
$\Pi_{i}$ is a separable, positive semi-definite operator for every
$i$. Therefore, if a set of quantum states is perfectly distinguishable
by LOCC, then there exists a separable measurement distinguishing
the states. For a necessary and sufficient condition for perfect discrimination
by separable measurements, see \cite{Duan2007-2009}.
We now state two necessary conditions for perfect LOCC state discrimination.
The first condition and its variants can be found in Refs. \cite{Hayashi-etal-2006,Watrous-2005,Chefles2004,Nathanson-2005,Duan2007-2009}
and the second condition is due to Ref. \cite{Hayashi-etal-2006}.
\begin{prop} If the orthogonal quantum states $\sigma_{1},\sigma_{2},...,\sigma_{N}$
are perfectly distinguishable by LOCC then it is necessary that there
exists a separable POVM $\Pi=\left\{ \Pi_{1},\Pi_{2},\cdots,\Pi_{N}\right\} $
such that \begin{equation}
\mbox{Tr}\left(\Pi_{i}\sigma_{j}\right)=\delta_{ij}.\label{prop1-eq-1}\end{equation}
\end{prop}
\begin{prop}
A necessary condition for perfect LOCC discrimination of the states
$\sigma_{1},\sigma_{2},...,\sigma_{N}$ by a separable POVM $\Pi=\left\{ \Pi_{1},\Pi_{2},\cdots,\Pi_{N}\right\} $
is that the following inequality is satisfied:\begin{equation}
\sum_{i=1}^{N}d\left(\sigma_{i}\right)\leq D\label{prop-2-eq-1}\end{equation}
where, $d\left(\sigma_{i}\right):=\min\frac{\mbox{Tr}\left(\Pi_{i}\right)}{\mbox{Tr}\left(\sigma_{i}\Pi_{i}\right)}$
such that $0\leq\frac{\Pi_{i}}{\mbox{Tr}\left(\sigma_{i}\Pi_{i}\right)}\leq\mathcal{I}$.\end{prop}
Let us remark that the above necessary condition is particularly useful
for bounding the number of states that can be perfectly discriminated
by LOCC \cite{Hayashi-etal-2006}.
\section{Results}
\subsection{Local discrimination of orthogonal subspaces}
We first explain what we mean by LOCC discrimination of orthogonal
subspaces (for discrimination of non-orthogonal subspaces using global
measurements see \cite{Hillery-2006}). In local discrimination of
orthogonal subspaces, a pure quantum state shared between several
observers is guaranteed to belong to a subspace chosen from a known
collection of orthogonal subspaces. The goal is to determine by LOCC
to which subspace the state belongs without making any error. We assume
that within each subspace each state is equally likely, and so are
the subspaces. We will say that the subspaces $\left\{ \mathcal{S}_{1},\mathcal{S}_{2},...,S_{k}\right\} $
are perfectly locally distinguishable if we can perfectly distinguish
the set of density matrices $\left\{ \rho_{1},\rho_{2},...,\rho_{k}\right\} $
by LOCC, where $\rho_{i}$ is the normalized projector onto the subspace
$\mathcal{S}_{i}$. Clearly, the problem of local discrimination of
orthogonal subspaces is a special case of the general problem. We
begin with a simple but useful lemma.
\begin{lem} If the orthogonal subspaces $\left\{ \mathcal{S}_{1},\mathcal{S}_{2},...,\mathcal{S}_{k}\right\} $
are perfectly LOCC distinguishable, then so are the density matrices
$\left\{ \omega_{1},\omega_{2},...,\omega_{k}\right\} $ where, $\omega_{i}\in\mathcal{S}_{i}$.
\end{lem}
\begin{proof}
That the subspaces $\left\{ \mathcal{S}_{1},\mathcal{S}_{2},...,\mathcal{S}_{k}\right\} $
are perfectly LOCC distinguishable means that the set of orthogonal
density matrices $\left\{ \rho_{1},\rho_{2},...,\rho_{k}\right\} $,
the normalized projectors onto the subspaces, can be perfectly distinguished.
Thus there exists a locally implementable separable POVM $\Pi=\left\{ \Pi_{1},\Pi_{2},...,\Pi_{k}\right\} $
such that $\mbox{Tr}\left(\Pi_{i}\rho_{j}\right)=\delta_{ij}$. Denoting
$\rho_{j}=\frac{1}{\dim\mathcal{S}_{j}}\Lambda_{j}$, where, $\Lambda_{j}$
is the projection operator onto $\mathcal{S}_{j}$, we get,\begin{equation}
\frac{1}{\dim\mathcal{S}_{j}}\mbox{Tr}\left(\Pi_{i}\Lambda_{j}\right)=\delta_{ij};\label{lem-1-eq-1}\end{equation}
thus for $i\neq j$ the POVM elements $\left\{ \Pi_{i}\right\} $
are all orthogonal to the subspace $\mathcal{S}_{j}$. Now for any
density matrix $\Delta$ and a POVM $\mathcal{M}=\left\{ \mathcal{M}_{i}:i=1,...,k\right\} $
the relation \begin{equation}
\sum_{i}\mbox{Tr}\left(\mathcal{M}_{i}\Delta\right)=1,\label{lem-1-eq-1.1}\end{equation}
is valid. The summation indicates that the sum of the probabilities
must add up to 1 when the measurement $\mathcal{M}$ is performed
on the state $\Delta$, where, $\mbox{Tr}\left(\mathcal{M}_{i}\Delta\right)$
is the probability of obtaining outcome $i$. Suppose the POVM $\Pi$
is implemented on the given state chosen from $\left\{ \omega_{1},\omega_{2},...,\omega_{k}\right\} $.
We therefore have \begin{equation}
\sum_{i}\mbox{Tr}\left(\Pi_{i}\omega_{j}\right)=1,\label{lem1-eq-2}\end{equation}
where $\mbox{Tr}\left(\Pi_{i}\omega_{j}\right)$ is the probability
of obtaining outcome $i$ when the input state is $\omega_{j}$. Because
$\omega_{j}\in\mathcal{S}_{j}$, $\omega_{j}$ must be orthogonal
to all POVM elements $\Pi_{i};i\neq j$. Therefore, \begin{equation}
\mbox{Tr}\left(\Pi_{i}\omega_{j}\right)=\delta_{ij}.\label{lem-1-eq-3}\end{equation}
Thus the POVM $\Pi$ also perfectly distinguishes the states $\left\{ \omega_{1},\omega_{2},...,\omega_{k}\right\} $.
\end{proof}
\subsection{Equivalence of LOCC discrimination of states and subspaces}
For a given set of orthogonal density matrices $\left\{ \rho_{i}:i=1,...,N\right\} $,
consider the set of subspaces $\left\{ \mathcal{S}_{1},...,\mathcal{S}_{N}\right\} $,
where $\mathcal{S}_{i}$ is the support of $\rho_{i}$. The orthogonality
of the density matrices implies that the supports are orthogonal.
We now show that the problems of local discrimination of orthogonal
states and subspaces are equivalent in the following sense.
\begin{thm}The density matrices $\rho_{1},...,\rho_{N}$ are perfectly
distinguishable by LOCC if and only if their supports are. Moreover,
if the states are perfectly distinguishable, then the measurement
that distinguishes the states also distinguishes their supports and
vice versa. \end{thm}
\begin{proof}
Let $\Pi=\left\{ \Pi_{i}:i=1,...,N\right\} $ be the POVM that perfectly
distinguishes the set of density matrices $\left\{ \rho_{i}:i=1,...,N\right\} $
by LOCC. Therefore, $\mbox{Tr}\left(\Pi_{i}\rho_{j}\right)=\delta_{ij}$.
Let $s_{j}$ be the support of $\rho_{j}$ and $\varrho_{j}=\frac{1}{|\mathcal{P}_{j}|}\mathcal{P}_{j}$,
where $\mathcal{P}_{j}$ is the projector onto the subspace $s_{j}$
and $|\mathcal{P}_{j}|=\dim s_{j}$. To prove that the POVM $\Pi$
also perfectly distinguishes the subspaces $s_{1},s_{2},...,s_{N}$,
we only need to show that $\mbox{Tr}\left(\Pi_{i}\varrho_{j}\right)=\delta_{ij}$.
That the POVM is locally implementable holds by the assumption that
it perfectly distinguishes $\left\{ \rho_{i}\right\} $. Consider
first the diagonal decomposition of $\rho_{j}:$\begin{equation}
\rho_{j}=\sum_{l=1}^{|\mathcal{P}_{j}|}p_{l}^{j}|\phi_{l}^{j}\rangle\langle\phi_{l}^{j}|.\label{thm-1-eq-1}\end{equation}
From Eq.$\,$(\ref{lem-1-eq-1.1}) for every $l$ we have \begin{equation}
\sum_{i}\mbox{Tr}\left(\Pi_{i}|\phi_{l}^{j}\rangle\langle\phi_{l}^{j}|\right)=1.\label{thm-1-eq-2}\end{equation}
Also $\mbox{Tr}\left(\Pi_{i}\rho_{j}\right)=\delta_{ij}$ implies
that all POVM elements $\Pi_{i}\;\left(i\neq j\right)$ are orthogonal
to the states $\left\{ |\phi_{l}^{j}\rangle\;:l=1,...,|\mathcal{P}_{j}|\right\} $.
Using this fact, the above equation reduces to, \begin{equation}
\mbox{Tr}\left(\Pi_{i}|\phi_{l}^{j}\rangle\langle\phi_{l}^{j}|\right)=\delta_{ij}\quad:\forall l.\label{thm-1-eq-3}\end{equation}
Noting that $\varrho_{j},$ the normalized projector onto the subspace
$s_{j}$ can be written as\begin{equation}
\varrho_{j}=\frac{1}{|\mathcal{P}_{j}|}\sum_{l=1}^{|\mathcal{P}_{j}|}|\phi_{l}^{j}\rangle\langle\phi_{l}^{j}|,\label{thm-1-eq-4}\end{equation}
we immediately obtain $\mbox{Tr}\left(\Pi_{i}\varrho_{j}\right)=\delta_{ij}$
using Eqs.$\,$(\ref{thm-1-eq-3}) and (\ref{thm-1-eq-4}). Thus the
subspaces can be perfectly distinguished and the POVM $\Pi$ distinguishes
them. The rest of the proof, namely, that the POVM which perfectly
distinguishes the orthogonal subspaces $\left\{ s_{1},s_{2},...,s_{N}\right\} $
also distinguishes the orthogonal density matrices $\left\{ \rho_{1},...,\rho_{N}\right\} $,
follows from Lemma 1.
\end{proof}
The condition in Theorem 1, though remarkably simple and intuitive,
is able to capture the essence of local state discrimination and,
in particular, the role of entanglement therein. In particular, Theorem
1 leads to what we call {}``subspace degeneracy,'' which is discussed
in the next section.
\subsection{Subspace degeneracy}
As noted in the Introduction, intuitively, subspace degeneracy means
that any given set $\mathbb{S}$ of orthogonal density matrices belongs
to a collection of infinitely many sets having identical distinguishability
properties no matter how different the average entanglement of the
individual sets are. For a given set $\mathbb{S}=\left\{ \sigma_{i}:i=1,...,N\right\} $
whose local distinguishability is under consideration, consider another
orthogonal set $\mathbb{S}^{\prime}=\left\{ \sigma_{i}^{\prime}:i=1,...,N\right\} $
with the property that for every $i$, $\sigma_{i}$ and $\sigma_{i}^{\prime}$
have identical support. Let $\mathbb{Q}=\left\{ \mathbb{S}^{\prime}\right\} $
be the collection of all such orthogonal sets $\mathbb{S}^{\prime}$.
Clearly, $\mathbb{S}$ is also a member of $\mathbb{Q}$. In other
words, given a set of orthogonal subspaces $S=\left\{ s_{i}:i=1,...,N\right\} $,
$\mathbb{Q}$ is simply the collection of only those sets $\mathbb{S}^{\prime}=\left\{ \sigma_{i}^{\prime}:i=1,...,N\right\} $
with the properties that for every $i$, $\sigma_{i}^{\prime}\in s_{i}$
and $\mbox{rank}\left(\sigma_{i}^{\prime}\right)=\dim s_{i}$. By
simple application of Theorem 1 we obtain the next result.
\begin{prop} Either all orthogonal sets in $\mathbb{Q}$ are perfectly
LOCC distinguishable or none of them is, regardless of the average
entanglement of the individual sets. Furthermore, if the sets can
be perfectly distinguished by LOCC, then there is a separable measurement
$\Pi_{\mathbb{Q}}$ that distinguishes every set in $\mathbb{Q}$.
\end{prop}
Simply put, no matter how different the average entanglement of the
sets might be, as far as perfect local distinguishability is concerned
they are either equally hard or equally easy to distinguish. This
is in sharp contrast to pure states, where the result in \cite{Hayashi-etal-2006}
implies different upper bounds for pure ensembles of the same cardinality
but having different average entanglement. Thus, unlike pure states,
there cannot be any direct correlation between entanglement (under
any reasonable measure) of the states and their local distinguishability.
Furthermore, entanglement or mixedness of the states in $\mathbb{S}$
is not crucial in determining whether $\mathbb{S}$ can be perfectly
distinguished or not by LOCC. We use this fact (Proposition 3) to
obtain two kinds of upper bounds on the number of perfectly LOCC distinguishable
orthogonal states.
\subsection{Upper bounds on the number of perfectly LOCC distinguishable orthogonal
states}
In this section we give two kinds of upper bounds on the number of
perfectly LOCC distinguishable density matrices. In the first one,
the bounding quantities depend only on the maximal pure-state entanglement
in the supports of the density matrices, whereas in the second we
use Proposition 3 to optimize the bounding quantities over all sets
in $\mathbb{Q}$.
We first show how local distinguishability of a set of orthogonal
density matrices can be related to the local distinguishability of
a set of orthogonal pure states satisfying certain conditions. For
the orthogonal density matrices $\sigma_{1},...,\sigma_{N}$, let
$S=\left\{ |\psi_{1}\rangle,|\psi_{2}\rangle,...,|\psi_{N}\rangle\right\} $
be a collection of orthogonal pure states such that for every $i$,
$|\psi_{i}\rangle\in s_{i}$, where, $s_{i}$ is the support of $\sigma_{i}$.
\begin{prop} If the states $|\psi_{1}\rangle,|\psi_{2}\rangle,...,|\psi_{N}\rangle$
are not perfectly distinguishable by LOCC then the density matrices
$\sigma_{1},...,\sigma_{N}$ cannot be perfectly distinguished by
LOCC. \end{prop}
\begin{proof}
The proof is by contradiction. Suppose the states
$|\psi_{1}\rangle,|\psi_{2}\rangle,...,|\psi_{N}\rangle$ cannot be
perfectly distinguished by LOCC but there is a LOCC protocol that
perfectly distinguishes the density matrices $\sigma_{1},...,\sigma_{N}$.
From Theorem 1 we know that if the density matrices $\sigma_{1},...,\sigma_{N}$
are perfectly LOCC distinguishable then one can also perfectly distinguish
the orthogonal subspaces $s_{1},s_{2},...,s_{N}$, where $s_{i}$
is the support of $\sigma_{i}$. This implies, by Lemma 1, that the
states $|\psi_{1}\rangle,|\psi_{2}\rangle,...,|\psi_{N}\rangle$ can
also be perfectly distinguished locally because for every $i$, $|\psi_{i}\rangle\in s_{i}$,
which contradicts our assumption.
\end{proof}
It is important to note that if the states $|\psi_{1}\rangle,|\psi_{2}\rangle,...,|\psi_{N}\rangle$
can be perfectly distinguished locally then it does \emph{not} mean
that the density matrices $\sigma_{1},...,\sigma_{N}$ can also be
reliably distinguished. For example, consider the following density
matrices in $2\otimes2$: $\sigma_{1}=\alpha|\Phi^{+}\rangle\langle\Phi^{+}|+\left(1-\alpha\right)|01\rangle\langle01|$
and $\sigma_{2}=\beta|\Phi^{-}\rangle\langle\Phi^{-}|+\left(1-\beta\right)|10\rangle\langle10|$
, where $\alpha\neq0$ and $\beta\neq0$. Clearly, $s_{1}=\mbox{span}\left\{ |\Phi^{+}\rangle,|01\rangle\right\} $
and $s_{2}=\mbox{span}\left\{ |\Phi^{-}\rangle,|10\rangle\right\} $.
While any two orthogonal vectors $|\psi_{1}\rangle,|\psi_{2}\rangle$,
where $|\psi_{i}\rangle\in s_{i}$ for $i=1,2$, are perfectly LOCC
distinguishable, the density matrices $\sigma_{1}$ and $\sigma_{2}$
are not. The reason is that neither of the subspaces can be spanned
only by product states, thereby violating a necessary condition for
perfect local discrimination by separable measurements \cite{Duan2007-2009}.
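To verify this explicitly for $s_{1}$ (our own check), write a generic
vector $a|\Phi^{+}\rangle+b|01\rangle$ as a coefficient matrix with
respect to the product basis; the vector is a product state if and
only if the determinant of this matrix vanishes:\[
a|\Phi^{+}\rangle+b|01\rangle\;\longleftrightarrow\;\left(\begin{array}{cc}
a/\sqrt{2} & b\\
0 & a/\sqrt{2}\end{array}\right),\qquad\det=\frac{a^{2}}{2},\]
which vanishes only for $a=0$. Hence $|01\rangle$ is, up to a phase,
the only product state in $s_{1}$, so $s_{1}$ admits no product basis;
the same argument applies to $s_{2}$.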
To arrive at our upper bound we will use the previous proposition
and inequality (\ref{eq:Hayashi}). For a set of orthogonal pure states
$\left\{ |\phi_{1}\rangle,|\phi_{2}\rangle,...,|\phi_{N}\rangle\right\} $
inequality (\ref{eq:Hayashi}) becomes, \begin{equation}
N\leq\frac{D}{\overline{1+R(|\phi_{i}\rangle)}}\leq\frac{D}{\overline{2^{E_{R}(|\phi_{i}\rangle)}}}\leq\frac{D}{\overline{2^{E_{g}(|\phi_{i}\rangle)}}},\label{hayashi-pure}\end{equation}
where $R$, $E_{R}$, and $E_{g}$ denote the robustness of entanglement, the relative entropy of entanglement, and the geometric measure defined earlier.
Now, if the states $|\psi_{1}\rangle,|\psi_{2}\rangle,...,|\psi_{N}\rangle$
as defined in Proposition 4 violate the above inequality then we can
certainly conclude that the density matrices $\sigma_{1},...,\sigma_{N}$
are not perfectly distinguishable by LOCC. Therefore, if the density
matrices $\sigma_{1},\sigma_{2},...,\sigma_{N}$ are perfectly LOCC
distinguishable, then the following inequality holds: \begin{equation}
N\leq\frac{D}{\overline{1+R(|\psi_{i}\rangle)}}\leq\frac{D}{\overline{2^{E_{R}(|\psi_{i}\rangle)}}}\leq\frac{D}{\overline{2^{E_{g}(|\psi_{i}\rangle)}}}.\label{hayashi-1}\end{equation}
Note that the inequality may still be satisfied even if the density
matrices are locally indistinguishable. The example given after Proposition
4 illustrates this fact. The crucial point is that \emph{if the density
matrices are locally distinguishable, then the inequality cannot
be violated}.
Naturally we would like to maximize the bounding quantities over all
orthogonal pure state ensembles like $S$. Let $S_{\max}=\left\{ |\Psi_{1}\rangle,|\Psi_{2}\rangle,...,|\Psi_{N}\rangle\right\} $
be the set of orthogonal pure states with the properties that for
every $i$, (a) $|\Psi_{i}\rangle\in s_{i}$, and (b) $R\left(|\Psi_{i}\rangle\right)=\max_{\psi\in s_{i}}R\left(|\psi\rangle\right)$.
The first condition ensures that the states belong to the supports
of the density matrices, so that Proposition 4 is applicable. The
second condition reflects the fact that for every $i$, $|\Psi_{i}\rangle$
is the state with maximum pure state entanglement in the support of
$\sigma_{i}$. Thus by replacing the pure state ensemble $S$ by $S_{\max}$
in (\ref{hayashi-1}) we have the following result.
\begin{thm} If the density matrices $\sigma_{1},\sigma_{2},...,\sigma_{N}$
are perfectly LOCC distinguishable, then, \begin{equation}
N\leq\frac{D}{\overline{1+R(|\Psi_{i}\rangle)}}\leq\frac{D}{\overline{2^{E_{R}(|\Psi_{i}\rangle)}}}\leq\frac{D}{\overline{2^{E_{g}(|\Psi_{i}\rangle)}}}\label{hayashi-2}\end{equation}
where, for every $i$, $|\Psi_{i}\rangle\in s_{i}$ and $R\left(|\Psi_{i}\rangle\right)=\max_{\psi\in s_{i}}R\left(|\psi\rangle\right)$.
\end{thm}
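As a numerical illustration (our sketch; the closed form $R(|\psi\rangle)=\left(\sum_{i}\lambda_{i}\right)^{2}-1$ for a pure bipartite state with Schmidt coefficients $\lambda_{i}$ is our reading of the exact formula referenced below from \cite{Harrow-3-Steiner-Vidal}), one can evaluate the bound for an ensemble of Bell states in $2\otimes2$:

```python
import numpy as np

def robustness_pure(psi, dA, dB):
    # Closed form for pure bipartite states: R = (sum of Schmidt coeffs)^2 - 1
    # (formula quoted from the robustness literature cited in the text).
    schmidt = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False)
    return np.sum(schmidt)**2 - 1

# A Bell state has Schmidt coefficients (1/sqrt2, 1/sqrt2), so R = 1.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
R = robustness_pure(phi_plus, 2, 2)
assert abs(R - 1.0) < 1e-12

# The bound for an ensemble of Bell states in total dimension D = 2*2:
D = 4
bound = D / (1 + R)
```

The bound evaluates to $2$, consistent with the known result that no more than two Bell states can be perfectly distinguished by LOCC.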
Let us note that an exact analytical formula for robustness $R$ is
known for pure bipartite states \cite{Harrow-3-Steiner-Vidal}. Therefore,
for any given set of bipartite orthogonal density matrices, the upper
bound can be explicitly calculated (one needs to optimize to get the
best possible bound). Inequality (\ref{hayashi-2}) shows that mixed
states also admit pure-state-like correlation between entanglement
and the number of locally distinguishable states. This allows us to
make a general statement on the connection between entanglement and
local distinguishability: \emph{The number of perfectly LOCC distinguishable
quantum states, pure or mixed, is bounded above by the total dimension
over the average of maximal pure state entanglement in the supports
of the states.} It is, however, important to note that the quantity,
second from left, in inequality (\ref{eq:Hayashi}) is always stronger
than the leftmost quantity in (\ref{hayashi-2}) \cite{Virmani-Pvt}.
Our second bound can be considered to be the optimized version of
the general mixed-state bound given by (\ref{eq:Hayashi}). From Proposition
3 we know that if the set $\mathbb{S}$ is perfectly LOCC distinguishable
then so is any set $\mathbb{S}^{\prime}\in\mathbb{Q}$, and the measurement
that distinguishes $\mathbb{S}$ also distinguishes any $\mathbb{S}^{\prime}$
and vice versa. Noting that the sets are of the same cardinality, it simply
follows that an upper bound on the number of perfectly LOCC distinguishable
states for any $\mathbb{S}^{\prime}$ is also an upper bound for $\mathbb{S}$.
The optimal bound is thus obtained by maximizing the bounding quantities
over all $\mathbb{S}^{\prime}$.
For the given set $\mathbb{S}=\left\{ \sigma_{i};i=1,...,N\right\} $,
let $Q_{i}$ be the set of all density matrices in $s_{i}$ having
rank equal to $\dim s_{i}$, where $s_{i}$ is the support of $\sigma_{i}$.
Define the following quantities: \begin{eqnarray*}
\mathcal{R}_{i} & = & \max_{\sigma^{\prime\prime}\in Q_{i}}\mathcal{R}\left(\sigma^{\prime\prime}\right),\\
\mathcal{E}_{i} & = & \max_{\sigma^{\prime\prime}\in Q_{i}}\left(E_{R}\left(\sigma^{\prime\prime}\right)+S(\sigma^{\prime\prime})\right),\\
\mathcal{G}_{i} & = & \max_{\sigma^{\prime\prime}\in Q_{i}}G\left(\sigma^{\prime\prime}\right),\end{eqnarray*}
where $\mathcal{R}\left(\sigma^{\prime\prime}\right):=\alpha^{-1}\left(1+R_{g}\left(\sigma^{\prime\prime}\right)\right)$,
with $\alpha$ being the maximum eigenvalue of $\sigma^{\prime\prime}$,
$R_{g}\left(\sigma^{\prime\prime}\right)$ is the \emph{global} robustness
of entanglement \cite{Harrow-3-Steiner-Vidal}, $E_{R}\left(\sigma^{\prime\prime}\right)$
is the relative entropy \cite{Vedral-Plenio-1998}, $S\left(\sigma^{\prime\prime}\right)$
is the von Neumann entropy, and $G\left(\sigma^{\prime\prime}\right)$
is the geometric measure \cite{Hayashi-etal-2006}.
\begin{thm} If the set of states $\mathbb{S}=\left\{ \sigma_{i};\, i=1,...,N\right\} $
is perfectly distinguishable by LOCC, then the number of states is
bounded by\begin{equation}
N\leq D/\overline{d\left(\sigma_{i}\right)}\leq D/\overline{\mathcal{R}_{i}}\leq D/\overline{2^{\mathcal{E}_{i}}}\leq D/\overline{2^{\mathcal{G}_{i}}},\label{Hayashi-revised}\end{equation}
where, $\overline{x_{i}}=\frac{1}{N}\sum_{i=1}^{N}x_{i}$ denotes
the average. \end{thm}
\begin{proof}
It is sufficient to show that \begin{equation}
d\left(\sigma_{i}\right)\geq\mathcal{R}_{i}\geq2^{\mathcal{E}_{i}}\geq2^{\mathcal{G}_{i}}.\label{thm-3-eq-1}\end{equation}
Inequality (\ref{Hayashi-revised}) is then obtained by combining
Proposition 2 and the above inequality and dividing by $N$. The proof
follows along lines very similar to that in \cite{Hayashi-etal-2006}.
It was shown in \cite{Hayashi-etal-2006} that we can write $d\left(\sigma_{i}\right)=\min\left(\frac{1}{\lambda_{i}}\right)$
such that $\exists$ $\varrho^{\prime}$, satisfying\begin{equation}
\tilde{\Pi_{i}}=\lambda_{i}\mathcal{P}_{i}+\left(1-\lambda_{i}|\mathcal{P}_{i}|\right)\varrho^{\prime}\in\mathfrak{S}\label{thm-3-eq-4}\end{equation}
along with the conditions\begin{eqnarray}
\mbox{Tr}\left(\mathcal{P}_{i}\varrho^{\prime}\right) & = & 0,\\
\langle\phi|\tilde{\Pi_{i}}|\phi\rangle & \geq & \lambda_{i}\;\forall|\phi\rangle,\end{eqnarray}
where $\tilde{\Pi_{i}}=\Pi_{i}/\mbox{Tr}\left(\Pi_{i}\right)$, $\mathcal{P}_{i}$
is the projector onto $s_{i}$ (the support of $\sigma_{i}$) , $|\mathcal{P}_{i}|=\dim s_{i}$,
and $\mathfrak{S}$ is the set of separable density matrices. Recall
that $Q_{i}$ is the set of all density matrices in $s_{i}$ having
rank equal to $|\mathcal{P}_{i}|$. Now observe that for any density
matrix $\sigma^{\prime\prime}\in Q_{i}$, we have $d\left(\sigma^{\prime\prime}\right)=d\left(\sigma_{i}\right)$
\cite{th-3-proof}. Now $\sigma^{\prime\prime}$ can be expressed
as \begin{equation}
\sigma^{\prime\prime}=\alpha\mathcal{P}_{i}-\beta\sigma^{\prime\prime\prime},\label{thm-3-eq-5}\end{equation}
where $\alpha$ is the maximum eigenvalue of $\sigma^{\prime\prime}$,
$\beta=|\mathcal{P}_{i}|\alpha-1$, and $\sigma^{\prime\prime\prime}\in s_{i}$.
Equation (\ref{thm-3-eq-4}) can therefore be rewritten in the form,
\begin{equation}
\tilde{\Pi_{i}}=\frac{\lambda_{i}}{\alpha}\left(\sigma^{\prime\prime}+\gamma\varrho^{\prime\prime}\right)\in\mathfrak{S}\label{thm-3-eq-6}\end{equation}
where $\gamma=\left(\alpha-\lambda_{i}\right)/\lambda_{i}$. Noting
that the generalized (or global) robustness of entanglement \cite{Harrow-3-Steiner-Vidal}
$R_{g}\left(\sigma\right)$ of any state $\sigma$ is defined by
$R_{g}\left(\sigma\right)=\min t$ such that there exists a state
$\varrho$ satisfying \begin{equation}
\frac{1}{1+t}\left(\sigma+t\varrho\right)\in\varsigma,\label{them-3--global-robust}\end{equation}
where $\varsigma$ denotes the set of separable states, we immediately obtain $R_{g}\left(\sigma^{\prime\prime}\right)\leq\gamma$.
Thus for any $\sigma^{\prime\prime}\in Q_{i},$ we have \begin{equation}
d\left(\sigma^{\prime\prime}\right)\geq\alpha^{-1}\left[1+R_{g}\left(\sigma^{\prime\prime}\right)\right].\label{thm-3-eq-7}\end{equation}
Thus, \begin{equation}
d\left(\sigma_{i}\right)\geq\mathcal{R}_{i}=\max_{\sigma^{\prime\prime}\in\mathcal{Q}_{i}}\mathcal{R}\left(\sigma^{\prime\prime}\right),\label{thm-3-eq-8}\end{equation}
where $\mathcal{R}\left(\sigma^{\prime\prime}\right):=\alpha^{-1}\left[1+R_{g}\left(\sigma^{\prime\prime}\right)\right]$.
The rest of the proof is straightforward. It is easy to show that
for any density matrix $\sigma^{\prime\prime}\in Q_{i}$, the following
inequality holds: \begin{equation}
\mathcal{R}\left(\sigma^{\prime\prime}\right)\geq2^{\mathcal{E}\left(\sigma^{\prime\prime}\right)}\geq2^{\mathcal{G}\left(\sigma^{\prime\prime}\right)}.\label{thm-3-eq-9}\end{equation}
Thus we obtain \begin{equation}
\mathcal{R}_{i}\geq2^{\mathcal{E}_{i}}=\max_{\sigma^{\prime\prime}\in Q_{i}}2^{\mathcal{E}\left(\sigma^{\prime\prime}\right)}\geq2^{\mathcal{G}_{i}}=\max_{\sigma^{\prime\prime}\in Q_{i}}2^{\mathcal{G}\left(\sigma^{\prime\prime}\right)}.\label{thm3-eq-10}\end{equation}
Combining Eqs.$\,$ (\ref{thm3-eq-10}) and (\ref{thm-3-eq-8}),
we get Eq.$\,$(\ref{thm-3-eq-1}). This concludes the proof.
\end{proof}
A few remarks are in order.
(i) By construction, for every bounding quantity, $\mathcal{R},$
$\mathcal{E}$, and $\mathcal{G}$, there always exists a set of orthogonal
quantum states $\mathbb{S}^{\prime}=\left\{ \sigma_{i}^{\prime};\, i=1,...,N\right\} \in\mathbb{Q}$
maximizing it, which is the essence of the entire optimality
argument. For example, one can construct an orthogonal set $\mathbb{S}_{\mathcal{R}}^{\prime}\left(\sigma^{\prime}\right)=\left\{ \sigma_{i}^{\prime};\, i=1,...,N\right\} \in\mathbb{Q}$,
such that for every $i$, $\mathcal{R}_{i}=\mathcal{R}\left(\sigma_{i}^{\prime}\right)$,
and similarly for the quantities $\mathcal{E}$, and $\mathcal{G}$.
(ii) A nice feature of the above inequality is that the hierarchical
form holds even when the bounding quantities are independently maximized
(this is clear from the proof), and different sets may maximize different
quantities.
(iii) For any set $\mathbb{S}^{\prime\prime}=\left\{ \sigma_{i}^{\prime\prime};\, i=1,...,N\right\} \in\mathbb{Q}$,
the following inequality holds {[}inequality (\ref{Hayashi-revised})
is simply the optimized version of the following one{]}:
\begin{equation}
N\leq D/\overline{d\left(\sigma_{i}\right)}\leq D/\overline{\mathcal{R}\left(\sigma_{i}^{\prime\prime}\right)}\leq D/\overline{2^{\mathcal{E}\left(\sigma_{i}^{\prime\prime}\right)}}\leq D/\overline{2^{\mathcal{G}\left(\sigma_{i}^{\prime\prime}\right)}}.\label{Hayashi-3}\end{equation}
\section{Conclusions}
We have considered the problem of local distinguishability of orthogonal
mixed states. In particular, we have investigated how entanglement
and mixedness of the states influence their local distinguishability.
We have shown a general equivalence between local discrimination of
orthogonal states and subspaces which in turn implies that local distinguishability
of mixed states is completely determined by whether or not their supports
are also locally distinguishable. This led to the following results:
(a) state-specific properties such as inseparability and mixedness of the
density matrices do not play any special role in determining their
local distinguishability, (b) local distinguishability of mixed states
may be completely determined by the maximal pure-state entanglement within
their supports, and (c) an upper bound on the number of perfectly locally
distinguishable orthogonal mixed states is given where the bounding
quantities are optimized over all orthogonal mixed state ensembles
having identical supports.
Although the results obtained in this paper and in \cite{Hayashi-etal-2006}
show that entanglement is a significant factor in local distinguishability,
many questions still remain open. For example, there are orthogonal
product states known to be locally indistinguishable \cite{Bennett-I-99,Bennett-II-99+Divin-2003},
despite being completely unentangled. Whether there is a deeper reason
behind this phenomenon, or whether it is just a consequence of the fact that
not all separable measurements are locally implementable, is not yet
known. Obviously the entanglement of the states is a non-issue here, but
entanglement could still be important because to implement such separable
measurements by LOCC, one is expected to consume auxiliary entanglement.
Thus, it is necessary to quantify the entanglement cost of such separable
measurements. Another interesting class of states requiring further
investigation are those which despite being entangled and locally
indistinguishable, do not violate the inequalities presented in this
paper or in \cite{Hayashi-etal-2006}. All these examples show that
there is more to local distinguishability of quantum states than what
can be captured through entanglement only. \\
\textbf{Acknowledgments:} The author is grateful to Guruprasad Kar
(ISI, Kolkata) for many helpful discussions. Thanks to S. Virmani
and D. Markham for their comments on an earlier version of this manuscript.
Although the Kerr geometry is usually considered to be a vacuum solution
of the Einstein equations, there has been quite some interest in the
investigation of its (singular) source structure
\cite{Isr1,Isr2,Lop,Bur,BaNa2}.
Aside from shedding light on the singularity structure, the knowledge of the
source, specifically the energy-momentum tensor, provides a useful tool
in the calculation of the ultrarelativistic limit geometries of the
Kerr spacetime \cite{BaNa3,BaNa4}. This is mainly due to the fact that the
ambiguities encountered in the metric-limit \cite{LouSa} are not present
in the approach based on the energy-momentum tensor \cite{BaNa3}. \par
In a previous paper \cite{BaNa2} we made use of the Kerr-Schild decomposition
\cite{KeSch}
to assign an energy-momentum tensor to the Kerr-Newman
spacetime-family. The important observation in this context was the fact
that the mixed form of the Ricci tensor (and thereby the
energy-momentum tensor) is linear in the profile $\hat{f}$ of the
decomposition. This allows an unambiguous distributional treatment
despite the nonlinear character of Einstein's theory of gravity.
As expected the energy-momentum tensor-distribution has its support
on the singular region of the geometry, which in the case of Kerr is a ring.
However, since we restricted the manifold to the region of positive mass
(or positive radial coordinate $r$) we picked up an additional
contribution which is concentrated on the disk spanned by the
ring. This contribution is due to the fact that we identified the
two different boundaries of the branch cut. The aim of the
present paper is to extend our treatment to the maximal analytic
extension, which includes both sheets and thereby avoids the
previous identification. The flat metric $\eta_{ab}$ in the
Kerr-Schild decomposition develops a singularity along the branch
surface, which does not allow the immediate application of the
Kerr-Schild method we used in \cite{BaNa2}. Within the
generalized Kerr-Schild class \cite{Taub} it is however possible to
find a smooth metric that is Colombeau-associated \cite{Col1}
to the Kerr-solution.
Calculating its energy-momentum tensor allows
to perform the distributional limit in the end.\par
Our work is organized in the following way:
In section one we review the generalized Kerr-Schild class
and present an instructive example displaying the curvature of the
quasi-flat part of the decomposition when extended to the double cover.
In section two we briefly recapitulate the basic concepts of Colombeau's
algebra of generalized functions. Finally section three contains
the regularisation of the Kerr geometry within the
generalized Kerr-Schild class and the calculation of its
energy-momentum tensor.
\section*{\large\bit 1) Generalized Kerr-Schild class}
Usually geometries in the Kerr-Schild class are defined by
decomposing the metric
\begin{equation}\label{ksmetric}
g_{ab} = \eta_{ab} + \hat{f}\> k_a k_b\qquad k_a := \eta_{ab}k^b\quad
k^a k^b\eta_{ab}=0,
\end{equation}
where $\eta_{ab}$ denotes the flat part of the decomposition and
$k^a$ is a null vector-field with respect to both geometries.
An immediate consequence of (\ref{ksmetric}) is
$$
(k\nabla)k^a = (k\partial)k^a,
$$
where $\nabla_a$ and $\partial_a$ are the covariant derivatives of
$g_{ab}$ and $\eta_{ab}$ respectively. In particular requiring $k^a$
to be geodetic with respect to $\eta_{ab}$ entails its geodeticity with
respect to $g_{ab}$. Geometries with the above properties define
the Kerr-Schild class. Its most important representative is the
three-parameter Kerr-Newman family of (electro-)vacuum geometries,
whose most general element describes a stationary, rotating, charged
black hole. A simple subclass consists of geometries where the covariant
constancy of $k^a$ is required. These are the so-called plane-fronted waves
with parallel rays (pp-waves) \cite{JEK}. \par
An important property of the Kerr-Schild class is the fact
that the mixed Ricci-tensor is linear in the function $\hat{f}$ (which will
be referred to as the profile by borrowing terminology from the pp-wave case):
\begin{equation}\label{ksricci}
R^a{}_b = \frac{1}{2}\left[ \partial^a\partial_c(\hat{f}k^c k_b) +
\partial_b\partial_c(\hat{f}k^c k^a) - \partial^2(\hat{f}k^a k_b)\right]
\end{equation}
It is remarkable that all of the above properties remain intact if one
replaces the flat background by an arbitrary one, i.~e.~one considers
geometries of the form
\begin{equation}\label{gksmetric}
\tilde{g}_{ab} = g_{ab} + \hat{f}\> k_a k_b,\qquad k_a := g_{ab}k^b\quad
k^a k^b g_{ab}=0.
\end{equation}
Once again geodeticity of $k^a$ with respect to $g_{ab}$ implies
the according behavior with respect to $\tilde{g}_{ab}$ since
$$
(k\tilde{\nabla})k^a = (k\nabla)k^a
$$
holds. The expression for the Ricci tensor (\ref{ksricci}) becomes modified
\begin{align}\label{gksricci}
\tilde{R}^a{}_b &= R^a{}_b -
\frac{1}{2}\left[R^a{}_c\>\hat{f}k^c k_b - R_{bc}\>\hat{f}k^c k^a
- 2R^a{}_{cdb}\>\hat{f}k^ck^d\right.\nonumber\\
&\left.+ (\nabla^a\nabla_c(\hat{f}k^c k_b) + \nabla_b\nabla_c(\hat{f}k^c k^a) -
\nabla^2(\hat{f}k^a k_b))\right],
\end{align}
however the linearity in $\hat{f}$ remains.\par
Let us now turn to the Kerr geometry in coordinates adapted to (\ref{ksmetric})
\begin{align}\label{kmetric}
ds^2 &= -dt^2 + \frac{\Sigma}{r^2 + a^2} dr^2 + \Sigma d\theta^2
+ (r^2 + a^2)\sin^2\theta d\phi^2 + \nonumber\\&+ \frac{2mr}{\Sigma}(dt -
\frac{\Sigma}{r^2+a^2}dr + a \sin^2\theta d\phi)^2
\qquad\Sigma=r^2 + a^2\cos^2\theta.
\end{align}
The first line of (\ref{kmetric}) is usually considered to represent
Minkowski space in spheroidal coordinates; however, it is possible to exhibit
the double cover structure of the maximal analytic continuation by allowing
$r$ to take negative values as well.
This fact may be displayed more transparently if we consider
a two-dimensional analogue of the spatial part of the flat
background. To this end let us consider ${\mathbb R}^2$ in elliptical
coordinates.
\begin{align}\label{twoex}
ds^2&=dx^2+dy^2=\frac{\Sigma}{r^2+a^2}dr^2 +\Sigma d\phi^2
\qquad &&x=\sqrt{r^2+a^2}\cos\phi\nonumber\\
&\Sigma=r^2+a^2\sin^2\phi &&y=r\sin\phi
\end{align}
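The coordinate change in (\ref{twoex}) is easily checked symbolically; the following sketch (ours, using sympy) verifies that the pullback of $dx^2+dy^2$ reproduces the stated metric components:

```python
import sympy as sp

r, phi, a = sp.symbols('r phi a', real=True)
x = sp.sqrt(r**2 + a**2) * sp.cos(phi)
y = r * sp.sin(phi)

# Jacobian of (r, phi) -> (x, y); the induced metric is J^T J.
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, phi)],
               [sp.diff(y, r), sp.diff(y, phi)]])
g = sp.simplify(J.T * J)

Sigma = r**2 + a**2 * sp.sin(phi)**2
assert sp.simplify(g[0, 0] - Sigma / (r**2 + a**2)) == 0  # g_rr
assert sp.simplify(g[0, 1]) == 0                          # no cross term
assert sp.simplify(g[1, 1] - Sigma) == 0                  # g_phiphi
```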
By extending the range of $r$ to negative values, it is
possible to extend (\ref{twoex}) to a cylinder as shown in fig.~\ref{fig:1}
\begin{figure}[htbp]
\begin{center}
\leavevmode
\epsfysize=4.5cm
\epsfbox{double.eps}
\caption{double cover of ${\mathbb R}^2$}
\label{fig:1}
\end{center}
\end{figure}
\noindent
The singularities are located at the branch-points $r=0,\phi=0$ and $\pi$,
which is also the location where the curvature is concentrated.
In order to show this let us ``regularize'' (\ref{twoex}) by replacing
$r^2\rightarrow r^2 +\alpha^2$.
\begin{align}\label{twoexreg}
ds^2 &= a^2(\frac{f^2}{g^2}\cosh^2u du^2 + f^2 d\phi^2)\qquad
&&f^2 = \sinh^2 u + \sin^2 \phi + \beta^2\nonumber\\
&r=a\sinh u,\> \alpha= a \beta &&g^2 = \cosh^2 u + \beta^2
\end{align}
Using an adapted frame one obtains the connection and the Riemann-tensor
\begin{align}
&e^u = a\frac{f\cosh u}{g}du\qquad
&&\omega^u{}_\phi =\frac{\cos\phi\sin\phi\cosh u}{f^2g}du
- \frac{g\sinh u}{f^2}d\phi\nonumber\\
&e^\phi = af d\phi
&&R^u{}_\phi = -\frac{\beta^2(f^2 + 2 \cos^2\phi)\cosh u}{f^4 g} du d\phi.
\end{align}
Evaluating the densitized Ricci tensor
\begin{align}
&\sqrt{g}R^a{}_b = -\frac{\beta^2(f^2+2\cos^2\phi)\cosh u}{f^4g}
(\partial_u^a du_b + \partial_\phi^a d\phi_b) =: F(x)
(\partial_u^a du_b + \partial_\phi^a d\phi_b)\nonumber\\
\intertext{on an arbitrary test-function $\varphi(x)$ gives}
&\lim\limits_{\beta\to 0}\int F(x)\varphi(x)d^2x =
-\lim\limits_{\beta\to 0}\int \frac{\beta^2(f^2+2\cos^2\phi)\cosh u}{f^4g}\>\varphi(u,\phi)du d\phi
\nonumber\\
&=-\lim\limits_{\beta\to 0}\int\limits_{-\infty}^{\infty}\hspace{-0.2cm}du\hspace{-0.1cm}
\int\limits_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\hspace{-0.2cm}d\phi
\>\frac{\beta^2(f^2+2\cos^2\phi)\cosh u}{f^4g}
\left[\varphi(u,\phi)+\varphi(u,\phi+\pi)\right]\nonumber\\
&=-2\pi(\varphi(0,0)+\varphi(0,\pi))=-2\pi(\delta(u)\delta(\sin\phi),\varphi)
\end{align}
which explicitly shows that (\ref{twoex}) when extended to the cylinder
is no longer flat but develops curvature concentrated on the branch points.
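This concentration of curvature can also be probed numerically. The sketch below (ours; the cutoff $|u|\leq 8$ and the value $\beta=0.02$ are ad hoc choices) integrates the regularized curvature density over a neighbourhood of the branch point at $(u,\phi)=(0,0)$ and recovers a mass close to $2\pi$:

```python
import numpy as np
from scipy.integrate import quad

beta = 0.02

def density(u, phi):
    # magnitude of the regularized curvature density near a branch point
    f2 = np.sinh(u)**2 + np.sin(phi)**2 + beta**2
    g = np.sqrt(np.cosh(u)**2 + beta**2)
    return beta**2 * (f2 + 2*np.cos(phi)**2) * np.cosh(u) / (f2**2 * g)

def inner(u):
    # 'points=[0.0]' helps the adaptive quadrature resolve the sharp peak
    val, _ = quad(lambda p: density(u, p), -np.pi/2, np.pi/2,
                  points=[0.0], limit=200)
    return val

mass, _ = quad(inner, -8.0, 8.0, points=[0.0], limit=200)
print(mass)  # close to 2*pi ~ 6.283 for small beta
```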
Since the main difference between (\ref{kmetric}) and (\ref{twoex})
is the breaking of the $U(1)$ symmetry down to ${\mathbb Z}_2$,
it is obvious that we can no longer use the former as part of the smooth
structure.
\section*{\large\bit 2) A short review of Colombeau theory}
The expression for the Ricci tensor of the generalized Kerr-Schild
class involves terms which are products of singular quantities like
the curvature of the background and the profile $\hat{f}$. Within distribution
theory such products are in general meaningless. There exists however
a recent generalization due to Colombeau \cite{Col1,Col2} which embeds
the distribution space $\mathcal D'$ into the larger algebra $\mathcal G$ of
generalized functions. We will sketch the ideas of its construction only
briefly and refer the reader to \cite{Col1,Col2,ArBi,Ba2} for a
more detailed treatment. The main idea is to consider one-parameter
families of $C^\infty$ functions $(f_\epsilon)_{0<\epsilon<1}$ as basic
objects\footnote{Intuitively distributions correspond to the limit
$\epsilon\to 0$ and $f_\epsilon(x)$ represents the additional information lost
in the process of idealization.}.
They form an algebra under the naturally defined pointwise
operations. The usual $C^\infty$ functions are embedded as constant sequences,
which does not require any additional structure. On the other hand
the embedding of $C^p$ functions and distributions is achieved by
convolution with an appropriate smoothing kernel $\rho$ via
\begin{equation*}
f_\epsilon(x) = \int d^ny\frac{1}{\epsilon^n}\rho(\frac{y-x}{\epsilon})f(y).
\end{equation*}
Since $C^\infty$ functions are obviously of class $C^p$, consistency requires
the identification of the different embeddings. The difference of the
two embeddings belongs to the set of so-called negligible families,
which vanish faster than any given positive power of $\epsilon$ on any
given compact set. Since we want this set to be an ideal, in order to preserve
the algebra structure under the identification, one has to require
a growth condition in $\epsilon$ on the general families. These so-called
moderate families do not grow faster than inverse powers of $\epsilon$
in the limit $\epsilon\to 0$. It can be shown that the embedding of
distributions generates moderate families \cite{Col1,Col2}.\par
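To make the embedding concrete, a small numerical sketch (ours; the Gaussian mollifier and the Heaviside function are arbitrary choices for illustration) shows the behaviour of the smoothed family $f_\epsilon$:

```python
import numpy as np
from scipy.integrate import quad

def f_eps(x, f, eps):
    # f_eps(x) = (1/eps) * int rho((y - x)/eps) f(y) dy with a Gaussian
    # mollifier rho of unit mass (our choice of smoothing kernel).
    rho = lambda s: np.exp(-s**2) / np.sqrt(np.pi)
    val, _ = quad(lambda y: rho((y - x) / eps) / eps * f(y),
                  x - 10*eps, x + 10*eps,
                  points=[0.0] if abs(x) < 10*eps else None)
    return val

heaviside = lambda y: 1.0 if y > 0 else 0.0

# Away from the jump the family converges to the classical values;
# at the jump it takes the mollifier-dependent value 1/2.
left  = f_eps(-1.0, heaviside, 0.01)
right = f_eps( 1.0, heaviside, 0.01)
mid   = f_eps( 0.0, heaviside, 0.01)
print(left, right, mid)  # ~0, ~1, ~0.5
```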
Generalized functions (elements of $\mathcal{G}$) are thus equivalence classes
of moderate families modulo negligible ones. (This situation is actually very
similar to the one encountered in $L^p$ spaces where in our case the
negligible functions play the role of measurable functions that vanish
outside a set of measure zero.)\par
The contact with usual distribution theory is achieved by coarse-graining
$\mathcal G$. The idea is to pack together different Colombeau-objects
that give the same distribution in the limit $\epsilon\to 0$:
\begin{align*}
&(f_\epsilon)\in{\mathcal G} \approx T\in{\mathcal
D}'\quad\text{if}\qquad \forall\varphi\in
{\mathcal D}\qquad\lim_{\epsilon\to 0}\int dx
f_\epsilon(x)\varphi(x) - (T,\varphi) =0,\nonumber\\
\intertext{or more generally}
&(f_\epsilon)\in{\mathcal G} \approx (g_\epsilon)\in{\mathcal G}
\quad\text{if}\qquad\forall\varphi\in
{\mathcal D}\qquad\lim_{\epsilon\to 0}
\int dx(f_\epsilon(x)-g_\epsilon(x))\varphi(x)=0.
\end{align*}
The equivalence relation, which is usually called association,
respects addition, multiplication by $C^\infty$
functions and differentiation. However it does not respect
multiplication, which is to be expected since it models distributional
equality within $\mathcal G$. Let us now come back to the question raised at
the beginning of this section, which essentially boils down to the
existence of a distribution that is associated with the product of
$P(1/x)$ with $\delta(x)$. Within $\cal G$ we have the following
representations
$$
P_\epsilon(\frac{1}{x})= \frac{x}{x^2+\epsilon^2}
\quad\text{and}\quad
\delta_\epsilon(x)= \frac{\epsilon}{\pi}\frac{1}{x^2+\epsilon^2}.
$$
In order to find a distribution associated with their product we have to
evaluate
$$
\lim_{\epsilon\to 0}\int dx \frac{\epsilon x}{\pi(x^2+\epsilon^2)^2}
\varphi(x)=\lim_{\epsilon\to 0}\int dx \frac{1}{\epsilon\pi}\frac{x}{(1+x^2)^2}\varphi(\epsilon x)
=\frac{1}{2} \varphi'(0),
$$
where the second equality is achieved by rescaling $x$.
This shows that
$$
P_\epsilon(\frac{1}{x})\delta_\epsilon(x)\approx -\frac{1}{2}\delta'(x)
$$
It can be shown that the result holds in general for an arbitrary embedding.
As we will see in the next section, this is precisely what happens in the
case of the extended Kerr geometry: the delta-function contribution from
the background Ricci tensor combines with the principal value of the
profile to produce derivatives of the delta function.
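The association just derived can be checked numerically; in the sketch below (ours) the representative product is paired with the test function $\varphi(x)=x\,e^{-x^2}$, for which $-\frac{1}{2}(\delta',\varphi)=\frac{1}{2}\varphi'(0)=\frac{1}{2}$:

```python
import numpy as np
from scipy.integrate import quad

eps = 0.005
# representative of P_eps(1/x) * delta_eps(x)
product = lambda x: eps * x / (np.pi * (x**2 + eps**2)**2)
phi = lambda x: x * np.exp(-x**2)  # test function with phi'(0) = 1

val, _ = quad(lambda x: product(x) * phi(x), -10, 10,
              points=[0.0], limit=800)
print(val)  # ~0.5, i.e. (1/2) phi'(0) as eps -> 0
```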
\section*{\large\bit 3) Energy-momentum tensor of the maximally analytic
Kerr geometry}
As already pointed out in the introduction, the flat background in the
Kerr-Schild decomposition of the Kerr-geometry becomes singular under
the extension to the maximally analytic manifold. The strategy we are going to
propose in this section is to regularize (embed into $\mathcal G$) the
Kerr-geometry such that it stays in the generalized Kerr-Schild class. \par
Using the same method as in the two-dimensional example presented in
section one, namely replacing $r^2$ by $r^2+\alpha^2$, the background-metric
and the vector field become
\begin{align}\label{regback}
&ds^2 = - dt^2 +\frac{\Sigma+\alpha^2}{r^2+a^2+\alpha^2}dr^2
+(\Sigma+\alpha^2)d\theta^2 + (r^2+a^2+\alpha^2)\sin^2\theta d\phi^2,
\nonumber\\
&k^a = \partial_t^a + \partial_r^a -\frac{a}{r^2+a^2+\alpha^2}\partial_\phi^a.
\end{align}
Calculating the norm of the latter gives
$$
g_{ab}k^a k^b =-1 + \frac{\Sigma+\alpha^2}{r^2+a^2+\alpha^2}
+ \frac{a^2\sin^2\theta}{r^2+a^2+\alpha^2} = 0,
$$
which is the first condition for belonging to the generalized
Kerr-Schild class. In addition we have to check the geodeticity of $k^a$.
The simplest way to do this is to observe that $(k\nabla)k^a=0$ is
equivalent to $k\rfloor dk=0$ due to the null character of $k^a$.
\begin{align*}
&dk=-\frac{2a^2\cos\theta\sin\theta}{r^2+a^2+\alpha^2}d\theta dr
- 2 a\cos\theta\sin\theta d\theta d\phi\\
&k\rfloor dk = \frac{2a^2\cos\theta\sin\theta}{r^2+a^2+\alpha^2}d\theta
- \frac{2a^2\cos\theta\sin\theta}{r^2+a^2+\alpha^2}d\theta =0,
\end{align*}
which is the desired result, showing that the deformation still
belongs to the generalized Kerr-Schild class. \par
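Both conditions can also be verified symbolically. The following sketch (ours, using sympy) checks the null norm and the full contraction $k\rfloor dk=0$, with the covariant components $k_t=-1$, $k_r=(\Sigma+\alpha^2)/(r^2+a^2+\alpha^2)$, $k_\phi=-a\sin^2\theta$ obtained (our computation) by lowering $k^a$ with the metric (\ref{regback}):

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi', real=True)
a, al = sp.symbols('a alpha', positive=True)

Sigma = r**2 + a**2*sp.cos(th)**2
den = r**2 + a**2 + al**2

# contravariant components of k from the text; covariant ones via the metric
k_up  = {t: 1,  r: 1,                     th: 0, ph: -a/den}
k_low = {t: -1, r: (Sigma + al**2)/den,   th: 0, ph: -a*sp.sin(th)**2}

coords = [t, r, th, ph]

# null condition: k_a k^a = 0
norm = sum(k_low[c]*k_up[c] for c in coords)
assert sp.simplify(norm) == 0

# geodeticity via the contraction of dk: k^a (d_a k_b - d_b k_a) = 0
for b in coords:
    comp = sum(k_up[c]*(sp.diff(k_low[b], c) - sp.diff(k_low[c], b))
               for c in coords)
    assert sp.simplify(comp) == 0
```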
The first step in the evaluation of the Ricci-tensor $\tilde{R}^a{}_b$
(\ref{gksricci}) is the calculation of the background contribution
$R^a{}_b$. Changing the radial coordinate $r$ to $r=a\sinh u$
\begin{align}\label{backricci}
&g_{ab}= - dt_a dt_b + \frac{f^2}{g^2} a^2\cosh^2 u du_a du_b +
a^2 f^2 d\theta_a d\theta_b + a^2g^2\sin^2\theta d\phi_a d\phi_b\nonumber\\
&f^2=\sinh^2 u + \cos^2 \theta + \beta^2\quad g^2=\cosh^2 u + \beta^2
\qquad \alpha=a\beta
\end{align}
and using an adapted frame
\begin{align}
&e^t_a= dt_a && E^a_t = \partial^a_t\nonumber\\
&e^u_a=\frac{a f\cosh u}{g}du_a &&E^a_u=\frac{g}{a f\cosh u}
\partial_u^a\nonumber\\
&e^\theta_a = af d\theta_a &&E^a_\theta=\frac{1}{af}
\partial^a_\theta\nonumber\\
&e^\phi_a = ag\sin\theta d\phi_a &&E^a_\phi = \frac{1}{ag\sin\theta}
\partial^a_\phi
\end{align}
we obtain for the connection and the Riemann tensor
\begin{align}
&\omega^u{}_\theta = -\frac{\cos\theta\sin\theta\cosh u}{f^2g}du -
\frac{g \sinh u}{f^2} d\theta && R^u{}_\theta =
-\frac{\beta^2\cosh u(\sin^2\theta + g^2)}{f^4 g}du d\theta
\nonumber\\
&\omega^u{}_\phi = -\frac{\sinh u\sin\theta}{f}d\phi
&&R^u{}_\phi = -\frac{\beta^2\cosh u\sin\theta}{f^3} du d\phi\nonumber\\
&\omega^\theta{}_\phi = -\frac{g\cos\theta}{f}d\phi
&&R^\theta{}_\phi = \frac{\beta^2g\sin\theta}{f^3} d\theta d\phi,
\end{align}
which in turn gives rise to the mixed Ricci density
$$
\sqrt{g}R^a{}_b = \frac{2a\beta^2\cosh u\sin\theta}{f^4}
(g^2\partial^a_u du_b + \sin^2\theta \partial^a_\theta d\theta_b).
$$
Proceeding along the lines of the example presented in the first section
we obtain the associated distribution
$$
\sqrt{g}R^a{}_b \approx 2\pi a\delta(u)\delta(\cos\theta)
(\partial^a_u du_b + \partial^a_\theta d\theta_b).
$$
After factoring out the $k^a$-dependence the remaining terms
become
\begin{align}\label{mixed}
\tilde{R}^a{}_b -R^a{}_b &=
-\frac{1}{2}k^a(2 \hat{f} k^c R_{cb} -\nabla_b h - \hat{f} \Delta k_b
+ 2\nabla_c \hat{f} \nabla^ck_b) \nonumber \\
&+ \frac{1}{2}k_b (\nabla^a h -\nabla^2 \hat{f} k^a + \hat{f} \Delta k^a
-2\nabla_c \hat{f} \nabla^ck^a)\nonumber\\
&+ (h\frac{1}{2}(\nabla_b k^a + \nabla^a k_b) - \hat{f}
\nabla_c k^a\nabla^c k_b) + \hat{f} R^a{}_{cdb}k^c k^d,
\end{align}
where
$$
h:=\hat{f} \nabla k + (k\nabla)\hat{f}\qquad \Delta k_a :=(d*d* + *d*d)k_a
= -\nabla^2 k_a + R^b{}_a k_b.
$$
Using the Laplace-Beltrami operator $\Delta$ instead of the covariant Laplacian $\nabla^2$ considerably facilitates the calculation of $\nabla^2 k^a$.
Let us briefly state some useful identities and then present the final result.
\begin{align}
&\nabla_b k^a = \frac{\sinh u(g^2-f^2)}{ag^2 f^2} E_u^a e^u_b +
\frac{\sinh u}{af^2} E_\theta^a e^\theta_b + \frac{\sinh u}{ag^2}
E_\phi^a e^\phi_b\nonumber\\
&+ \frac{\sin\theta \cos\theta}{agf^2}(E_\theta^a e^u_b - E_u^a e^\theta_b)
+ \frac{\sinh u\sin\theta}{ag^2f}(E_\phi^a e^u_b + E_u^a e^\phi_b)
+\frac{\cos\theta}{agf}(E_\theta^a e^\phi_b - E_\phi^a e^\theta_b)\nonumber\\
&\nabla^2 \hat{f} = -\frac{8m\beta^2\sinh u}{a^3f^8}(g^2+\sin^2\theta)
\qquad\Delta k_a = \frac{2}{a^2 gf^5}(f^4-2g^2 \beta^2) e^u_a -
\frac{2\sin\theta }{a^2 gf^2} e^\phi_a \nonumber\\
&R^a{}_{cdb}k^ck^d = \frac{\beta^2}{a^2 f^4}\left( \frac{\sin^2\theta}{g^2}
E_u^a e^u_b + \frac{2\sin^2\theta +g^2}{g^2}E_\theta^a e^\theta_b
+ \frac{f^2}{g^2}E_\phi^a e^\phi_b + \right.\nonumber\\
&\left.\hspace*{2cm}\frac{f\sin\theta}{g^2}
(E_u^a e^\phi_b + E_\phi^a e^u_b)\right)
\end{align}
Putting everything together gives
\begin{align}\label{regdiff}
\tilde{R}^a{}_b -&R^a{}_b = \frac{2\beta^2}{a^2 f^6} \hat{f}\left[
-(f^2+2\sin^2\theta)\partial_t^a dt_b +
\frac{3f^2\sin^2\theta + 2\sin^4\theta + f^4}{\sin^2\theta}\right.\nonumber\\
&(\frac{\sin\theta}{g}E_\phi^a)(\frac{\sin\theta}{g}e^\phi_b)
+ (f^2+\sin^2\theta)\partial_t^a(\frac{f}{g}e^u_b)
- 2(f^2 + \sin^2\theta)\partial_t^a(\frac{\sin\theta}{g}e^\phi_b)\nonumber\\
&\left.+2(f^2+\sin^2\theta)(\frac{\sin\theta}{g}E_\phi^a)dt_b
- (f^2 +\sin^2\theta)(\frac{\sin\theta}{g}E_\phi^a)
(\frac{\sin\theta}{g}e^\phi_b)\right ].
\end{align}
Although the last expression looks rather complicated, only
a small number of terms survive the limiting process
$\beta\to 0$. The final form of the Ricci-density is
\begin{align}\label{ricfinal}
\sqrt{\tilde{g}}\tilde{R}^a{}_b &=\nonumber\\
&2\pi\delta(\cos\theta)\left(
a\delta(u)(\partial^a_u du_b + \partial^a_\theta d\theta_b) + m\delta'(u)
(\partial^a_t -\frac{1}{a}\partial^a_\phi)
(dt_b + ad\phi_b)\right).
\end{align}
The typical integrals one has to deal with in the evaluation of (\ref{regdiff})
are
\begin{align*}
&\lim\limits_{\beta\to 0}\int \frac{\beta^2\sinh u}{f^6}
\>\bar{\varphi}(u,\theta)dud\theta =
\int\limits_{-\infty}^\infty du\int\limits_{-\frac{\pi}{2}}^
{\frac{\pi}{2}}d\lambda \frac{\beta^2\sinh u}{f^6}\>\bar{\varphi}(u,
\frac{\pi}{2}-\lambda) \\
&=\int\limits_{-\infty}^\infty du\int\limits_{-\infty}^\infty d\lambda
\frac{u}{\beta(u^2+\lambda^2+1)^3}
\bar{\varphi}(\beta u,\frac{\pi}{2}-\beta \lambda)
=\frac{\pi}{4}\partial_u\bar{\varphi}(0,\frac{\pi}{2})\\
\intertext{which implies}
&\lim\limits_{\beta\to 0}\frac{\beta^2\sinh u}{f^6} =
-\frac{\pi}{4}\delta'(u)\delta(\cos\theta).
\end{align*}
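The coefficient in this limit can be checked numerically. After the rescaling above, the surviving even part of the integrand is $U^2/(U^2+\Lambda^2+1)^3$, whose integral over $\mathbb{R}^2$ reduces in polar coordinates to $\pi\int_0^\infty r^3/(1+r^2)^3\,dr = \pi/4$. The following sketch (not part of the derivation, just a sanity check of the constant) evaluates the radial integral with a composite Simpson rule:

```python
import math

# Check: I = ∫∫_{R^2} U^2 / (U^2 + L^2 + 1)^3 dU dL = pi/4.
# In polar coordinates, averaging cos^2 over the angle gives a factor pi,
# leaving I = pi * ∫_0^inf r^3/(1+r^2)^3 dr.

def radial(r):
    return r ** 3 / (1.0 + r ** 2) ** 3

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# The tail beyond r = 500 decays like r^-3 and contributes < 1e-5.
I = math.pi * simpson(radial, 0.0, 500.0, 100000)
print(I, math.pi / 4)
```

The numerical value agrees with $\pi/4$ to within the truncation error of the tail.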
Several comments are in order. First of all, one might wonder
why we derived the Ricci-density instead of the Ricci-tensor
as in \cite{BaNa2}. A simple answer is that the latter has no
associated distribution. However, since only the
determinant of the background metric enters $\sqrt{\tilde{g}}$,
our result does not actually differ in this regard from \cite{BaNa2}.
It is merely a question of whether one considers the delta-``function'' to
be a scalar or a density. Our result is also very similar
to that obtained in \cite{Isr2}, in that the delta-prime term
of (\ref{ricfinal}) becomes negative in the negative $r$-region.
Moreover, the tensor structure of the $m$-dependent terms coincides
with that in \cite{Isr2}. The main difference is the presence of
the background contribution, which reflects the fact that
the background itself is non-flat; this emphasizes once again
that the extended Kerr-geometry belongs to the generalized Kerr-Schild class
rather than the Kerr-Schild class. Due to its tensor structure
the background does not contribute to the mass or angular momentum
obtained by hooking the corresponding Killing vector into (\ref{ricfinal}).
\section*{Conclusion}
In this work we made use of the generalized Kerr-Schild class to calculate
the energy-momentum tensor for the extension of the Kerr-geometry that
contains the negative mass region. It might seem somewhat surprising to
use the generalized Kerr-Schild class, since Kerr is usually considered to
be a member of the (normal) Kerr-Schild class. However, a closer
(distributional) look at the ``flat'' part of the decomposition
reveals that it develops a branch-singularity upon extending to the
negative $r$-sheet, and that it is therefore no longer flat. For the
same reason we may no longer consider it as part of the smooth structure.
Using techniques developed for the multiplication of singular
expressions (Colombeau's algebra of new generalized functions) we
showed that it is possible to obtain a distributional energy-momentum
tensor for the extended geometry, which is concentrated on the
ring-singularity. Our result is consistent with \cite{Isr2}. The main
difference arises from the presence of the background curvature
terms. Due to their tensor structure the latter do not contribute to
the momentum and angular momentum densities. A natural further
line of investigation would be the construction of the ultrarelativistic
limits. However, since the background is no longer flat, the concept
of boosts as its isometries has to be reconsidered.
\vfill
\noindent
{\bf Acknowledgement:} The author wants to thank Werner Israel for his
encouragement and numerous stimulating discussions and the
Department of Physics and Astronomy at the University of Victoria for the
hospitality during the final stages of this work.
\newpage
\section{Related Work}\label{sec:related}
We discuss some prior works which have indicated security and privacy vulnerabilities for model explanations.
\noindent\textbf{Security Attacks on Model Explanations.} Model explanations are sensitive to distribution shifts and adversarial examples. They may fail to accurately reflect the biases in an ML model, leading to misleading explanations that influence user trust in black-box models~\cite{10.1145/3375627.3375833}.
Adversarial examples can be generated for model misclassification as well as for fooling interpretations~\cite{undefire,heo2019fooling}. These attacks exploit the fact that model predictions and their interpretations are misaligned.
SHAP and LIME explanations algorithms have also been shown to be vulnerable to adversarial examples~\cite{slack2020fooling,slack2021feature}.
Counterfactual examples, an alternative approach to explanations, are also not robust: they converge to different counterfactuals under small perturbations~\cite{slack2021counterfactual}.
To address these, Lakkaraju et al.~\cite{lakkaraju2020robust} propose adversarial training with minimax objective to construct high fidelity explanations with respect to the worst-case adversarial perturbations.
Additionally, Yeh et al.~\cite{yeh2019fidelity} propose two measures for evaluating robustness of explanations: sensitivity and infidelity, and propose algorithms to improve both.
\noindent\textbf{Privacy Attacks on Model Explanations.} Prior works have indicated a trade-off between transparency and privacy.
Model explanations have been shown to be vulnerable to membership inference attacks, where \protect{$\mathcal{A}dv$}\xspace aims to infer whether a given data record belonged to the model's training data using its explanation~\cite{shokri2021aies}. This threat was extended to data reconstruction attacks, in which explanations reveal training data instances. To reconcile membership privacy with transparency, model explanations with differential privacy have been proposed in the literature~\cite{patel2020model,harder2020interpretable}. However, this comes at the cost of explanation quality.
Furthermore, since model explanations characterize the model's decision boundary, it can be used to steal the functionality of a model using model extraction attacks~\cite{10.1145/3287560.3287562,aivodji2020model}.
None of the prior works evaluate the vulnerability to attribute inference attacks.
\section{Discussions and Conclusions}\label{sec:conclusions}
\noindent\textbf{Summary.} Model explanations assign scores to the attributes of an input by estimating their influence on the model's prediction. These model explanations can potentially leak sensitive attributes. We propose the first attribute inference attack on model explanations and show its effectiveness in two threat models.
We show yet another trade-off between privacy and transparency in ML models.
\noindent\textbf{Attribute Privacy Risk Metric.} There is a need to design data privacy risk assessment tools, as required by several privacy laws such as GDPR (Article 35). However, there is limited prior work on estimating the privacy risk of different sensitive attributes to inference attacks: Hannun et al.~\cite{hannun2021fil} propose a generic metric based on Fisher Information Loss which is shown to estimate the privacy risk of attribute inference attacks. However, it is applicable only to linear and convex models and hence does not scale to deep neural networks with non-linear and non-convex objectives. Furthermore, they limit \protect{$\mathcal{A}dv$}\xspace to unbiased estimators, which they indicate will be violated in the presence of $\mathcal{D}_{aux}$.
We discuss the viability of model explanations as a tool for attribute privacy risk assessment. We describe the requirements that an attribute privacy risk metric should satisfy and indicate how model explanations meet them.
\begin{enumerate}[leftmargin=*]
\item \textit{Independent of Attacks.} The metric should estimate the attribute privacy risk scores \textit{without} using any specific attacks. The assigned scores should capture the root cause of attribute privacy risk, i.e., different values of $s$ have different influence on the model prediction, which can be exploited by \protect{$\mathcal{A}dv$}\xspace to infer the value of $s$. This allows the privacy risk scores to quantify the risk from all possible future attacks.
\begin{itemize}[leftmargin=*]
\item Model explanations are independent of any specific attribute inference attacks and capture the influence of attributes to the model predictions.
\end{itemize}
\item \textit{Correlation with Attacks.} The attribute privacy risk scores assigned to each record's sensitive attribute should correlate with the attack success to infer $s$. This ensures that the privacy risk scores capture the susceptibility to attack success.
\begin{itemize}[leftmargin=*]
\item Model explanations can be mapped to $s$ as shown in this work (Section~\ref{subsec:correlation}) which can allow for model explanations as a relative privacy risk measure.
\end{itemize}
\item \textit{Efficient and Scalable.} The computation of scores should be efficient and scale to large deep neural network architectures.
\begin{itemize}[leftmargin=*]
\item Model explanations can be efficiently computed on deep neural networks and scalable to large models.
\end{itemize}
\end{enumerate}
We leave the careful design and evaluation of attribute privacy risk metric based on model explanations for future work.
\noindent\textbf{Defences Against Attribute Inference Attacks.} Current literature lacks specific defences against the described attribute inference attacks, as well as against prior attacks leveraging model predictions. AttriGuard lowers the success of \protect{$\mathcal{A}dv$}\xspace's ML attack model by adding adversarial noise to \protect{$\mathcal{A}dv$}\xspace's auxiliary data obtained from public sources~\cite{attriguard}. This defence is generic and can be adapted to our setting: \protect{$\mathcal{M}$}\xspace can use the vulnerability of model explanations to adversarial examples, shown in prior literature~\cite{slack2020fooling,slack2021feature,undefire,heo2019fooling,10.1145/3375627.3375833,yeh2019fidelity,slack2021counterfactual,lakkaraju2020robust}, as a defence mechanism to lower the success of \protect{$\mathcal{A}dv$}\xspace's attack model.
Data sanitization to remove the privacy risk while maintaining the utility of the ML model has also been explored~\cite{dysan}. Finally, model explanations with differential privacy~\cite{patel2020model,harder2020interpretable} can possibly lower the privacy risk of attribute inference attacks, as they minimize the influence of individual data records as a whole. Mechanisms based on pufferfish privacy~\cite{pufferfish,pufferfishmechanisms,zhang2022attribute} are more likely to address attribute inference risks, but these have not been explored in the context of model explanations.
We leave the evaluation of defences for future work.
\noindent\textbf{Algorithmic Fairness and Attack Success.} Several algorithms guarantee fairness across sensitive attributes in ML models~\cite{fair1,fair2,fair3}. It is unclear whether there is a correlation between model bias and the success of attacks inferring $s$ from model explanations. We speculate that, since many evaluated datasets have proxy attributes for $s$, attribute inference attacks might still be effective (see Section~\ref{subsec:explonly}). A detailed study of the impact of algorithmic fairness on attribute inference of $s$ is left for future work.
\section*{Acknowledgement}
The first author was supported in part by Intel (in the context of the Private-AI Institute).
\section{Introduction}\label{sec:introduction}
Machine Learning (ML) models are used for high-stakes decision making in several real-world applications. For instance, these models assist decision makers such as doctors and judges in healthcare and criminal justice~\cite{rudin2019stop}. However, the high complexity of these models makes their decision-making process difficult for humans to interpret. This creates the need for \textit{transparency} into model behaviour.
Model explanations release additional information to explain the behaviour of complex ML models. Specifically, attribute based model explanations explain the model's prediction on an input by releasing the influence of different input attributes responsible for the prediction~\cite{deeplift,deeplift2,gradshap,intgrad,smoothgrad}.
Some input attributes can be sensitive (e.g., \textsc{Race}\xspace and \textsc{Sex}\xspace). This raises data privacy concerns, since an adversary (\protect{$\mathcal{A}dv$}\xspace) can leverage model explanations as an attack surface. For instance, Shokri et al.~\cite{shokri2021aies} show that explanations can be exploited for membership inference (i.e., inferring whether an input record was part of the training data) and data reconstruction.
Additionally, releasing model explanations could leak the values of sensitive attributes, a privacy risk not previously considered in the literature. For instance, consider a setting where an ML model is trained to predict the likelihood that a criminal will re-offend, as an aid to judges in a court. In addition to output predictions, the model reveals explanations of why it made the prediction on that input. Attribute inference attacks could reveal \textsc{Race}\xspace and \textsc{Sex}\xspace from model explanations, attributes which individuals may prefer to keep private to avoid biased decisions.
However, this quantification of the privacy risk of model explanations against \textit{attribute inference attacks} is lacking in the current literature. An analysis of this trade-off between privacy and transparency is necessary so that a model builder (\protect{$\mathcal{M}$}\xspace) can make appropriate choices when training ML models for high-stakes applications. In this work, we ask the following research question: \textit{can \protect{$\mathcal{A}dv$}\xspace exploit model explanations to infer sensitive attributes of individual data records?}
We design the \textit{first attribute inference attack} to infer sensitive attributes from model explanations in two threat models:
\begin{enumerate}[label=\textbf{TM\arabic*},leftmargin=*]
\item \label{threatmodel1} Sensitive attributes are included in the training dataset and the input (following prior work~\cite{fredrikson1,fredrikson2}) and \protect{$\mathcal{A}dv$}\xspace only sees the output predictions but not their inputs. \protect{$\mathcal{A}dv$}\xspace has no control over passing the inputs but has to infer sensitive attributes from only the observed predictions.
\item \label{threatmodel2} Sensitive attributes are not included in training data or input (censored by \protect{$\mathcal{M}$}\xspace for privacy). This corresponds to real-world application such as ML as a Service (MLaaS).
\end{enumerate}
\noindent In this work, we claim the following main contributions.
\begin{enumerate}[leftmargin=*]
\item We design the \textbf{first} attribute inference attack to infer sensitive attributes, e.g., \textsc{Race}\xspace and \textsc{Sex}\xspace, of data records from their model explanations. \protect{$\mathcal{A}dv$}\xspace trains an ML attack model to map model explanations to sensitive attributes. We additionally calibrate the threshold over the attack model's predictions to increase \protect{$\mathcal{A}dv$}\xspace's power (Section~\ref{sec:approach}).
\item In~\ref{threatmodel1}, we show that our attack successfully infers the sensitive attributes from model explanations (Section~\ref{sec:tm1results}). On evaluating across four benchmark datasets and four model explanations, we note:
\begin{itemize}
\item a high F1-score of 0.92 $\pm$ 0.07 (\textsc{Race}\xspace) and 0.88 $\pm$ 0.11 (\textsc{Sex}\xspace) using entire model explanations corresponding to both sensitive and non-sensitive attributes (Section~\ref{subsec:worstcase}).
\item a high F1-score of 0.90 $\pm$ 0.10 (\textsc{Race}\xspace) and 0.83 $\pm$ 0.09 (\textsc{Sex}\xspace) using model explanations corresponding to only sensitive attribute (Section~\ref{subsec:correlation}).
\end{itemize}
\item In~\ref{threatmodel2}, despite censoring the sensitive attributes, we show that our attack can successfully infer them using model explanations of other non-sensitive attributes (Section~\ref{sec:tm2results}). On evaluating across four benchmark datasets and four model explanations, we note:
\begin{itemize}
\item a high F1-score of 0.83 $\pm$ 0.12 (\textsc{Race}\xspace) and 0.77 $\pm$ 0.09 (\textsc{Sex}\xspace) (Section~\ref{subsec:explonly}).
\item that on combining model explanations with model predictions, attack success does not improve. Hence, model explanations are a strong attack surface for \protect{$\mathcal{A}dv$}\xspace to exploit (Section~\ref{subsec:combined}).
\end{itemize}
\item In both \ref{threatmodel1} and \ref{threatmodel2}, exploiting model explanations has a higher success than prior state-of-the-art attribute inference attacks which exploit model predictions. This indicates that releasing model explanations increases the attack surface enabling \protect{$\mathcal{A}dv$}\xspace to mount strong attribute inference attacks (Section~\ref{sec:compare}).
\end{enumerate}
\section{Background}\label{sec:background}
Consider a training dataset $\mathcal{D} = \{\mathcal{X},\mathcal{S},\mathcal{Y}\}$ where $\mathcal{X}$ is the space of non-sensitive input attributes, $\mathcal{S}$ is the space of sensitive input attributes, $\mathcal{Y}$ is the space of classification labels. We denote a data record as $(x,y,s)$ with non-sensitive attributes $x$ and sensitive attribute $s$ where $(x, s) \in \mathcal{X} \times \mathcal{S}$ and classification label $y \in \mathcal{Y}$.
ML models learn a function $f_{\theta}: (x \cup s) \rightarrow y$ which maps the input with sensitive and non-sensitive attributes to $y$. Alternatively, the models can be trained without $s$ in the training dataset, given by $f_{\theta}: x \rightarrow y$. The models are parameterized by $\theta$, which is iteratively updated to minimize the loss on correctly predicting $x$ or $x \cup s$ as $y$. Model training, hyperparameter selection and deployment to the application are done by \protect{$\mathcal{M}$}\xspace.
Given these formal notations, we describe the state-of-the-art algorithms for model explanations considered in this work (Section~\ref{back:explanations}) and prior work on attribute inference attacks (Section~\ref{back:aia}).
\subsection{Model Explanations}\label{back:explanations}
Model explanations describe a model's behaviour to \protect{$\mathcal{M}$}\xspace on specific inputs. Specifically, attribute based model explanations estimate the influence of input attributes on the model's output prediction. In other words, these explanations assign a score to each attribute in the input point of interest (PoI) which resulted in a particular model prediction.
Formally, for a given PoI $\vec{x} = (x_1, \cdots x_n)$, the model explanations $\phi(\vec{x})$ outputs a vector indicating the importance of different attributes influential in the model's prediction of $\vec{x}$.
Here, $\phi(\vec{x})$'s attribution of the prediction at input PoI $\vec{x}$ relative to a baseline input $\vec{x'}$ is a vector
$\phi_{\vec{x'}}(\vec{x}) = (\phi_1,\cdots,\phi_n)$.
We consider two types of attribute based explanation algorithms: (a) gradient-based explanations (\textsc{IntegratedGradients}\xspace and \textsc{DeepLift}\xspace) and (b) perturbation-based explanations (\textsc{GradientSHAP}\xspace and \textsc{SmoothGrad}\xspace).
\noindent\textbf{\underline{Gradient-based Explanations}} compute gradients using backpropagation to estimate the influence of attributes to predictions.
\begin{itemize}[leftmargin=*]
\item \noindent\textbf{\textsc{IntegratedGradients}\xspace}~\cite{intgrad} computes the integration of gradients with respect to inputs by considering a straight line path from the baseline $\vec{x'}$ to the PoI $\vec{x}$. This integration across the $i^{th}$ dimension can be computed as: $\phi_{\textsc{IntegratedGradients}\xspace_i}(\vec{x}) = (\vec{x} - \vec{x'}) \times \int_{\alpha=0}^{1} \frac{\partial f_{\theta}(\vec{x'}+ \alpha(\vec{x} - \vec{x'}))}{\partial x_i} \,d\alpha$.
Here, $\frac{\partial f_{\theta}(x)}{\partial x_i}$ indicates the gradient computed using the model $f_{\theta}$ over the input $x$ across the $i^{th}$ dimension.
\item \noindent\textbf{\textsc{DeepLift}\xspace}~\cite{deeplift,deeplift2} estimates the contribution of specific neurons using the difference in output with respect to a baseline output. It assigns scores via the multiplier $m_{\Delta \vec{x} \Delta t} = \frac{C_{\Delta \vec{x}\Delta t}}{\Delta \vec{x}}$, where $x$ is a given input neuron, $\Delta x$ is its difference from the baseline, $t$ is the target neuron, and $\Delta t$ is its output difference from the baseline. The multiplier captures the contribution of $\Delta x$ to $\Delta t$ and is similar to a partial derivative, but over finite differences instead of infinitesimal ones.
\end{itemize}
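To make the \textsc{IntegratedGradients}\xspace formula above concrete, the path integral can be approximated by a Riemann sum. The sketch below uses a hypothetical toy model with an analytic gradient (standing in for $f_{\theta}$, not the paper's actual models) and verifies the completeness property $\sum_i \phi_i = f_{\theta}(\vec{x}) - f_{\theta}(\vec{x'})$:

```python
# Toy differentiable model standing in for f_theta (hypothetical example).
def f(x):
    return x[0] ** 2 + 2.0 * x[1]

def grad_f(x):
    # analytic gradient of f
    return [2.0 * x[0], 2.0]

def integrated_gradients(x, baseline, steps=1000):
    """Midpoint Riemann-sum approximation of the IG path integral."""
    avg_grad = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint of each sub-interval
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(len(x)):
            avg_grad[i] += g[i] / steps
    # scale averaged gradients by (x - x') per dimension
    return [(xi - b) * a for xi, b, a in zip(x, baseline, avg_grad)]

x = [3.0, -1.0]
baseline = [0.0, 0.0]
phi = integrated_gradients(x, baseline)
# Completeness axiom: sum(phi) equals f(x) - f(baseline)
print(phi, sum(phi), f(x) - f(baseline))
```

For this toy model the attributions are exact: $\phi_1 = 9$, $\phi_2 = -2$, summing to $f(\vec{x}) - f(\vec{x'}) = 7$.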
\noindent\textbf{\underline{Perturbation-based Explanations}} add noise to data records or remove some attributes to see the impact on the model utility.
\begin{itemize}[leftmargin=*]
\item \noindent\textbf{\textsc{GradientSHAP}\xspace}~\cite{gradshap} approximates Shapley values: it repeatedly adds Gaussian noise to the input PoI, selects a random point $x$ along the path between the baseline and the noisy input, and computes the gradient of the output with respect to that point.
The final attributions are the expected value of the product of the gradient and the difference between the input PoI and the baseline: $\frac{\partial f_{\theta}(x)}{\partial x} \times (\vec{x} - \vec{x'})$.
\item \noindent\textbf{\textsc{SmoothGrad}\xspace}~\cite{smoothgrad} samples random inputs in a neighborhood of the PoI $\vec{x}$ by adding Gaussian noise to it. It then averages the resulting sensitivity maps (i.e., derivatives of the model prediction with respect to the input) over the $n$ noisy neighbour records: $\phi_{\textsc{SmoothGrad}\xspace}(\vec{x}) = \frac{1}{n}\sum_{1}^{n}\frac{\partial f_{\theta}(\vec{x} + \mathcal{N}(0,\sigma^2))}{\partial \vec{x}}$.
\end{itemize}
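The \textsc{SmoothGrad}\xspace averaging can be sketched in a few lines on the same kind of hypothetical toy model (the gradient function and values below are illustrative assumptions, not the paper's $f_{\theta}$); as $n$ grows, the averaged gradient approaches the clean gradient at the PoI:

```python
import random

# Toy analytic gradient standing in for the model's sensitivity map
# (hypothetical example, not the paper's f_theta).
def grad_f(x):
    return [2.0 * x[0], 2.0]

def smoothgrad(x, n=2000, sigma=0.1, seed=0):
    """Average sensitivity maps over Gaussian-perturbed copies of x."""
    rng = random.Random(seed)
    acc = [0.0] * len(x)
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        g = grad_f(noisy)
        for i in range(len(x)):
            acc[i] += g[i] / n
    return acc

sg = smoothgrad([3.0, -1.0])
print(sg)  # close to the clean gradient [6.0, 2.0]
```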
A natural choice for the baseline $\vec{x'}$ is one where the prediction is unbiased~\cite{intgrad}. In all cases, we use the mean vector over the inputs as our baseline. Additionally, each model explanation algorithm also outputs a convergence delta $\delta$, where a lower absolute value indicates a better approximation (i.e., lower error). We append $\delta$ to $\phi()$ to obtain the final attack vector and abuse notation by referring to the appended vector as $\phi()$.
\subsection{Attribute Inference Attacks}\label{back:aia}
Attribute inference attacks aim to infer $s$ (e.g., $s=1$ for males and $s=0$ for females) for an individual data record. \protect{$\mathcal{A}dv$}\xspace exploits observable information (i.e., model predictions or explanations in our case) to infer unobservable information (i.e., $s$). This attack is different from property inference attacks proposed in literature which aim to infer global properties of dataset (e.g., inferring the ratio of males to female attributes on which the model was trained on)~\cite{Melis2019ExploitingUF,propinf1,tople21usenix}.
Several prior works have proposed attribute inference attacks against ML models using the model's output predictions~\cite{fredrikson1,fredrikson2,Song2020Overlearning,yeom,Mahajan2020DoesLS,malekzadeh2021honest}. Fredrikson et al.~\cite{fredrikson1,fredrikson2} propose an attribute inference attack where \protect{$\mathcal{A}dv$}\xspace infers $s$ using knowledge of both $x$ and $f_{\theta}(x \cup s)$. However, this assumption about \protect{$\mathcal{A}dv$}\xspace's knowledge is strong. Mahajan et al.~\cite{Mahajan2020DoesLS} and Song et al.~\cite{Song2020Overlearning} propose attacks where an ML attack model is trained to infer $s$ using only the model prediction. These attacks exploit the distinguishability in predictions conditioned on different values of $s$. However, the attack model performs poorly for imbalanced datasets, since the default threshold of 0.5 for estimating the value of $s$ is incorrect for skewed prediction distributions. To address this, Aalmoes et al.~\cite{aalmoes2022dikaios} propose an attack which accounts for this skewness of the attack model's predictions. They select a threshold over the attack model's predictions which maximizes the attack F1-score on an auxiliary dataset known to \protect{$\mathcal{A}dv$}\xspace. That threshold is then used over the attack model's predictions to infer $s$ for target data records.
\section{Problem Statement}\label{sec:problem}
Our goal is to evaluate the privacy risks of model explanations to attribute inference attacks and hence study the trade-offs between privacy and transparency.
We consider the following setting: a target ML model $f_{target}$ is trained and deployed on the Cloud by \protect{$\mathcal{M}$}\xspace within the MLaaS paradigm. Given a PoI $\vec{x}$, we assume that $f_{target}$ outputs both the model prediction ($f_{target}(\vec{x})$) and the corresponding explanation on that input ($\phi(\vec{x})$). Releasing $\phi()$ is required by AI regulations to ensure trustworthy computation~\cite{ec2019ethics,dpia,nist,ico,whitehouse}.
Given that model explanations measure the influence of individual input attributes on the model's prediction, it is natural to ask: \textit{given access to $\phi()$, can \protect{$\mathcal{A}dv$}\xspace infer $s$?} This study is currently lacking in the literature. We describe four main requirements for the design of an effective attack:
\begin{enumerate}[label=\textbf{AR\arabic*},leftmargin=*]
\item \label{attreq1} Attack should operate in a \textbf{blackbox threat model}, where \protect{$\mathcal{A}dv$}\xspace sends an input and obtains an output via an API from a MLaaS service provider. \protect{$\mathcal{A}dv$}\xspace does not have access to $f_{target}$'s internal parameters or architecture.
\item \label{attreq2} Attack should be \textbf{practical}, i.e., uses model observables ($\phi()$ or $f_{target}()$) to infer unobservables ($s$).
\item \label{attreq3} Attack should \textbf{account for class imbalance in $s$}. In most practical applications, $s$ is imbalanced, which skews the predictions of $f_{adv}$ and lowers \protect{$\mathcal{A}dv$}\xspace's success in correctly inferring $s$.
\item \label{attreq4} Attack should be \textbf{applicable to model explanations}, i.e., exploits $\phi()$ to infer the values of $s$.
\end{enumerate}
Prior attacks exploit the distinguishability in $f_{target}()$ given different values of $s$~\cite{fredrikson1,fredrikson2,yeom,Song2020Overlearning,Mahajan2020DoesLS,aalmoes2022dikaios}. The attacks of Fredrikson et al.~\cite{fredrikson1,fredrikson2} and Yeom et al.~\cite{yeom} make strong assumptions about \protect{$\mathcal{A}dv$}\xspace's knowledge, such as knowledge of $x$ in addition to $f_{target}()$ (violating requirement~\ref{attreq2}). Alternatively, Song et al.~\cite{Song2020Overlearning} and Mahajan et al.~\cite{Mahajan2020DoesLS} do not account for the class imbalance in $s$ (violating requirement~\ref{attreq3}). Aalmoes et al.~\cite{aalmoes2022dikaios} use threshold calibration to improve attack success, but they exploit $f_{target}()$ and not $\phi()$ (violating \ref{attreq4}).
\subsection{Threat Model and Attack Methodology}\label{sec:threatmodel}
We discuss two threat models \ref{threatmodel1} and \ref{threatmodel2} along with the assumptions about \protect{$\mathcal{A}dv$}\xspace's knowledge and attack methodology.
\begin{itemize}[leftmargin=*]
\item \noindent\textbf{\ref{threatmodel1} (w/ $s$ in $\mathcal{D}$):} We assume $s$ is included in both $\mathcal{D}$ and the input (i.e., $x \cup s$). Hence, $\phi(x \cup s)$ is released as part of the API and \protect{$\mathcal{A}dv$}\xspace can obtain $\phi(s)$ along with $\phi(x)$. This corresponds to \protect{$\mathcal{A}dv$}\xspace passively monitoring the model's outputs. Here, \protect{$\mathcal{A}dv$}\xspace cannot choose the inputs sent to $f_{target}$ (as they already include $s$)\footnote{This setting nevertheless appears in several prior attribute inference attacks~\cite{fredrikson1,fredrikson2,Song2020Overlearning,yeom,Mahajan2020DoesLS}.} but can only observe $f_{target}(x \cup s)$ and $\phi(x \cup s)$ for some arbitrary inputs. Given only $\phi(x \cup s)$, \protect{$\mathcal{A}dv$}\xspace aims to infer $s$ using an attack ML model $f_{adv}: \phi(x \cup s) \rightarrow s$. \protect{$\mathcal{A}dv$}\xspace can also attack different subsets of the model explanations: $f_{adv}: \phi(x) \rightarrow s$ and $f_{adv}: \phi(s) \rightarrow s$\footnote{$\phi(s)$ and $\phi(x)$ denote the model explanations corresponding to $s$ and $x$ respectively.}.
\input{fig_tm1}
\item \noindent\textbf{\ref{threatmodel2} (w/o $s$ in $\mathcal{D}$):} We assume $s$ is \textit{not} included in $\mathcal{D}$ and input (i.e., $x$). \protect{$\mathcal{A}dv$}\xspace has blackbox access to $f_{target}$ and pass an input $x$ and obtain access to both $\phi(x)$ and $f_{target}(x)$. Unlike \ref{threatmodel1}, \protect{$\mathcal{A}dv$}\xspace can choose the input to pass to the model. This is the worst case for \protect{$\mathcal{A}dv$}\xspace where $s$ is censored by \protect{$\mathcal{M}$}\xspace for privacy making this threat model more practical. Given $\phi(x)$, \protect{$\mathcal{A}dv$}\xspace aims to infer $s$ using an attack ML model $f_{adv}: \phi(x) \rightarrow s$.
\input{fig_tm2}
\end{itemize}
Figures~\ref{fig:tm1} and~\ref{fig:tm2} illustrate the two threat models, where \colorbox{red!25}{red} indicates components accessible to \protect{$\mathcal{A}dv$}\xspace.
In both \ref{threatmodel1} and \ref{threatmodel2}, we assume that \protect{$\mathcal{A}dv$}\xspace has an auxiliary dataset $\mathcal{D}_{aux}$ which is drawn from the same distribution as $\mathcal{D}$ and includes data records $(x,s,y)$ containing non-sensitive and sensitive attributes with the corresponding label. This assumption is in line with all prior attribute inference attacks proposed in the literature~\cite{fredrikson1,fredrikson2,Song2020Overlearning,yeom,Mahajan2020DoesLS}.
$\mathcal{D}_{aux}$ is used to train $f_{adv}$: \protect{$\mathcal{A}dv$}\xspace passes data records $(x,s,y)$ (\ref{threatmodel1}) or $(x,y)$ (\ref{threatmodel2}) to $f_{target}$ and uses the generated model explanations to train $f_{adv}$ by mapping them to $s$ (known to \protect{$\mathcal{A}dv$}\xspace for $\mathcal{D}_{aux}$). This access to $f_{target}$ can be alleviated by training a ``shadow model'' on $\mathcal{D}_{aux}$ to mimic $f_{target}$, and using the model explanations from the ``shadow model'' to train $f_{adv}$. In this work, we use the outputs of $f_{target}$ directly to train $f_{adv}$.
Once $f_{adv}$ is trained, the attack is evaluated on target dataset (distinct from $\mathcal{D}_{aux}$).
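The pipeline above reduces to training a binary classifier on explanation vectors. The following is a minimal sketch with a hypothetical synthetic setup (the data distribution, toy logistic regression, and all parameters are illustrative assumptions, not the paper's actual attack models): explanation vectors whose first coordinate correlates with $s$ are mapped to $s$ by a tiny logistic regression trained with SGD.

```python
import math
import random

rng = random.Random(0)

# Hypothetical synthetic "explanation" vectors: the first component is
# shifted depending on the sensitive attribute s (illustrative assumption).
def sample(n):
    data = []
    for _ in range(n):
        s = rng.random() < 0.5
        phi = [rng.gauss(1.5 if s else -1.5, 1.0), rng.gauss(0.0, 1.0)]
        data.append((phi, 1 if s else 0))
    return data

def train_logreg(data, lr=0.05, epochs=200):
    """SGD training of a 2-feature logistic regression (toy f_adv)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for phi, y in data:
            z = sum(wi * xi for wi, xi in zip(w, phi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log-loss w.r.t. z
            for i in range(len(w)):
                w[i] -= lr * g * phi[i]
            b -= lr * g
    return w, b

def predict(w, b, phi):
    z = sum(wi * xi for wi, xi in zip(w, phi)) + b
    return 1 if z >= 0 else 0

train, test = sample(500), sample(200)
w, b = train_logreg(train)
acc = sum(predict(w, b, phi) == y for phi, y in test) / len(test)
print(acc)  # well above the 0.5 random-guess baseline
```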
Moreover, any attack designed within \ref{threatmodel1} and \ref{threatmodel2} is blackbox (satisfying \ref{attreq1}) and practical (satisfying \ref{attreq2}). In both threat models, $\phi()$ is accessible to \protect{$\mathcal{A}dv$}\xspace to exploit, satisfying \ref{attreq4}.
Given this, we now have to design an attack which satisfies requirement~\ref{attreq3} to account for class imbalance in $s$ and improve attack success.
\section{Our Proposed Attack}\label{sec:approach}
Prior attribute inference attacks are not directly applicable as they do not satisfy requirements \ref{attreq1}--\ref{attreq4}. We design attribute inference attacks that exploit $\phi()$ to infer $s$ while calibrating the threshold over $f_{adv}$'s predictions to improve attack success.
Instead of using the default threshold of 0.5, as in prior attacks over model predictions~\cite{Mahajan2020DoesLS,Song2020Overlearning}, we calibrate the threshold over $f_{adv}(\phi())$ to maximize F1-Score.
We compute an optimal threshold $\tau^*$ over the probability $P(s|\phi(x))$, which is the output of $f_{adv}(\phi(x))$, to infer $s$.
In practice, we use the precision-recall curve, which computes precision and recall values for multiple thresholds over $f_{adv}$'s predictions. Then, $\tau^*$ is chosen to maximize the F1-Score, which in turn improves the precision and recall values. This is effective when there is a moderate to large class imbalance (satisfies~\ref{attreq3}).
\noindent\textbf{Calibrating the Threshold.} First, as a sanity check, we ensure the precision-recall curves are above the random-guess baseline. The random-guess baseline for a precision-recall curve is the horizontal line at the precision equal to the fraction of positive-class examples in the dataset. Figure~\ref{fig:prplot} shows the precision-recall curves for $f_{adv}$ on $\mathcal{D}_{aux}$, which are above random guess in all cases.
This indicates the possibility of finding $\tau^*$ to improve \protect{$\mathcal{A}dv$}\xspace's F1-Score.
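The calibration step can be sketched as follows, assuming access to $f_{adv}$'s predicted probabilities $P(s|\phi(x))$ and the ground-truth values of $s$ on $\mathcal{D}_{aux}$ (a minimal pure-Python version; the actual implementation uses the precision-recall curve from standard libraries):

```python
def calibrate_threshold(scores, labels):
    """Sweep candidate thresholds over f_adv's predicted probabilities
    P(s | phi(x)) on D_aux and return the tau* maximizing F1-Score."""
    best_tau, best_f1 = 0.5, -1.0
    for tau in sorted(set(scores)):
        preds = [1 if p >= tau else 0 for p in scores]
        tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
        fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
        fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
        if tp == 0:
            continue  # F1 is zero without any true positives
        precision, recall = tp / (tp + fp), tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        if f1 > best_f1:
            best_tau, best_f1 = tau, f1
    return best_tau, best_f1
```

With a class-imbalanced $\mathcal{D}_{aux}$, the returned $\tau^*$ typically deviates from the default threshold of 0.5.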
\setlength\tabcolsep{1.75pt}
\begin{table}[h]
\begin{center}
\caption{$\tau^*$ is different from default threshold of 0.5. ``IG'' is \textsc{IntegratedGradients}\xspace, ``DL'' is \textsc{DeepLift}\xspace, ``GS'' is \textsc{GradientSHAP}\xspace and ``SG'' is \textsc{SmoothGrad}\xspace.\label{tbl:thresholds}}
\begin{tabular}{ |c|c|c|c|c|c|c|c|c|}
\hline
\rowcolor{LightCyan} & \multicolumn{4}{|c|}{\textbf{IG}} & \multicolumn{4}{|c|}{\textbf{DL}}\\
\hline
& \multicolumn{2}{c|}{w/ S} & \multicolumn{2}{c|}{w/o S} & \multicolumn{2}{c|}{w/ S} & \multicolumn{2}{c|}{w/o S}\\
\textbf{Dataset} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace}\\
\hline
\textbf{CENSUS} & 0.64 & 0.47 & 0.42 & 0.54 & 0.96 & 0.51 & 0.82 & 0.37 \\
\textbf{COMPAS} & 0.94 & 0.89 & 0.38 & 0.59 & 0.97 & 0.84 & 0.38 & 0.52\\
\textbf{LAW} & 0.93 & 0.56 & 0.93 & 0.56 & 0.93 & 0.74 & 0.79 & 0.56\\
\textbf{CREDIT} & 0.55 & 0.42 & 0.54 & 0.48 & 0.61 & 0.55 & 0.46 & 0.40\\
\hline
\rowcolor{LightCyan} & \multicolumn{4}{|c|}{\textbf{GS}} & \multicolumn{4}{|c|}{\textbf{SG}} \\
\hline
& \multicolumn{2}{c|}{w/ S} & \multicolumn{2}{c|}{w/o S}& \multicolumn{2}{c|}{w/ S} & \multicolumn{2}{c|}{w/o S}\\
\textbf{Dataset} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} \\
\hline
\textbf{CENSUS} & 0.77 & 0.26 & 0.55 & 0.47 & 0.68 & 0.49 & 0.51 & 0.50\\
\textbf{COMPAS} & 0.61 & 0.58 & 0.46 & 0.54 & 0.81 & 0.72 & 0.33 & 0.55\\
\textbf{LAW} & 0.68 & 0.61 & 0.82 & 0.56 & 0.97 & 0.96 & 0.93 & 0.57\\
\textbf{CREDIT} & 0.48 & 0.48 & 0.56 & 0.52 & 0.60 & 0.43 & 0.51 & 0.44 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[h]
\centering
\subfigure[CENSUS (\textsc{IntegratedGradients}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_CENSUS_False_expl_intgrad.png}}
\subfigure[COMPAS (\textsc{IntegratedGradients}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_COMPAS_False_expl_intgrad.png}}
\subfigure[CREDIT (\textsc{IntegratedGradients}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_CREDIT_False_expl_intgrad.png}}
\subfigure[LAW (\textsc{IntegratedGradients}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_LAW_False_expl_intgrad.png}}
\subfigure[CENSUS (\textsc{DeepLift}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_CENSUS_False_expl_DeepLift.png}}
\subfigure[COMPAS (\textsc{DeepLift}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_COMPAS_False_expl_DeepLift.png}}
\subfigure[CREDIT (\textsc{DeepLift}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_CREDIT_False_expl_DeepLift.png}}
\subfigure[LAW (\textsc{DeepLift}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_LAW_False_expl_DeepLift.png}}\\
\subfigure[CENSUS (\textsc{GradientSHAP}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_CENSUS_False_expl_GradientShap.png}}
\subfigure[COMPAS (\textsc{GradientSHAP}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_COMPAS_False_expl_GradientShap.png}}
\subfigure[CREDIT (\textsc{GradientSHAP}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_CREDIT_False_expl_GradientShap.png}}
\subfigure[LAW (\textsc{GradientSHAP}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_LAW_False_expl_GradientShap.png}}\\
\subfigure[CENSUS (\textsc{SmoothGrad}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_CENSUS_False_expl_smoothgrad.png}}
\subfigure[COMPAS (\textsc{SmoothGrad}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_COMPAS_False_expl_smoothgrad.png}}
\subfigure[CREDIT (\textsc{SmoothGrad}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_CREDIT_False_expl_smoothgrad.png}}
\subfigure[LAW (\textsc{SmoothGrad}\xspace)]{\includegraphics[width=0.24\textwidth]{figures/fairadv_pr_LAW_False_expl_smoothgrad.png}}
\caption{Precision-recall curves for finding the optimal threshold to improve \protect{$\mathcal{A}dv$}\xspace's success. The precision-recall curves are above random guess, which allows \protect{$\mathcal{A}dv$}\xspace to compute an optimal threshold to improve attack success.}
\label{fig:prplot}
\end{figure*}
Table~\ref{tbl:thresholds} further shows that the resultant $\tau^*$ is indeed different from the default threshold of 0.5, which is indicative of improved attack success. It is important to note that $\tau^*$ is computed on $\mathcal{D}_{aux}$ and might differ from the optimal threshold on the target dataset being attacked. However, the latter cannot be known beforehand by \protect{$\mathcal{A}dv$}\xspace. This is the best \protect{$\mathcal{A}dv$}\xspace can do before performing the attack in the real world with imbalanced datasets, hoping that $\tau^*$ improves attack success.
\noindent\textbf{Why is this a Privacy Risk?} Our threat models are similar to prior attacks~\cite{fredrikson1,fredrikson2,Song2020Overlearning,yeom,Mahajan2020DoesLS}. One could argue that the attack is not actually exploiting the model explanations but merely using the existing correlations between sensitive and non-sensitive attributes (which \protect{$\mathcal{A}dv$}\xspace could deduce from $\mathcal{D}_{aux}$). In that case, there would be no privacy violation.
However, as seen in Table~\ref{tbl:correlation}, Pearson's correlation between $s$ and other attributes is low. Hence, \protect{$\mathcal{A}dv$}\xspace exploits non-trivial information, i.e., information memorized by $f_{target}$ about $s$ which is present in $\phi()$ (similar to the case of inferring $s$ from $f_{target}()$~\cite{Song2020Overlearning}).
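This sanity check amounts to computing a plain Pearson correlation between the sensitive attribute and each other column (a minimal sketch; the reported numbers aggregate this over all attributes or explanation dimensions):

```python
import math

def pearson(u, v):
    # Pearson correlation coefficient between two equal-length columns,
    # e.g. the sensitive attribute s and one dimension of phi(x).
    n = len(u)
    mu_u, mu_v = sum(u) / n, sum(v) / n
    cov = sum((a - mu_u) * (b - mu_v) for a, b in zip(u, v))
    var_u = sum((a - mu_u) ** 2 for a in u)
    var_v = sum((b - mu_v) ** 2 for b in v)
    return cov / math.sqrt(var_u * var_v)
```

A value near zero rules out the trivial explanation that \protect{$\mathcal{A}dv$}\xspace merely reads $s$ off a strongly correlated attribute.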
\setlength\tabcolsep{1pt}
\begin{table}[h]
\begin{center}
\caption{Low Pearson correlation of $s$ with $y$, $x$, $\phi(s)$ and $\phi(x)$ indicates that the model is memorizing unintended private data. \label{tbl:correlation}}
\begin{tabular}{ |c|c|c|c|c|}
\hline
\textbf{Dataset} & \multicolumn{2}{c|}{\textbf{y}} & \multicolumn{2}{c|}{\textbf{x}}\\
& \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} \\
\hline
\textbf{CENSUS} & 0.02 & 0.01 & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 \\
\textbf{COMPAS} & -0.06 & 0.02 & -0.01 $\pm$ 0.03 & -0.02 $\pm$ 0.05 \\
\textbf{LAW} & 0.02 & 0.02 & -0.01 $\pm$ 0.02 & 0.00 $\pm$ 0.01 \\
\textbf{CREDIT} & 0.01 & -0.01 & 0.01 $\pm$ 0.01 & 0.00 $\pm$ 0.02 \\
\hline
\rowcolor{LightCyan} \multicolumn{5}{|c|}{\textbf{\textsc{IntegratedGradients}\xspace}} \\
\hline
\textbf{Dataset} & \multicolumn{2}{c|}{\textbf{$\phi(s)$}} & \multicolumn{2}{c|}{\textbf{$\phi(x)$}} \\
& \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} \\
\hline
\textbf{CENSUS} & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02\\
\textbf{COMPAS} & -0.01 $\pm$ 0.02 & -0.01 $\pm$ 0.06 & -0.01 $\pm$ 0.02 & 0.00 $\pm$ 0.07\\
\textbf{LAW} & 0.02 $\pm$ 0.00 & 0.02 $\pm$ 0.02 & 0.02 $\pm$ 0.00 & 0.01 $\pm$ 0.02 \\
\textbf{CREDIT} & 0.01 $\pm$ 0.02 & 0.00 $\pm$ 0.03 & 0.01 $\pm$ 0.02 & 0.00 $\pm$ 0.02\\
\hline
\rowcolor{LightCyan} \multicolumn{5}{|c|}{\textbf{\textsc{DeepLift}\xspace}} \\
\hline
\textbf{Dataset} & \multicolumn{2}{c|}{\textbf{$\phi(s)$}} & \multicolumn{2}{c|}{\textbf{$\phi(x)$}} \\
& \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} \\
\hline
\textbf{CENSUS} & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02\\
\textbf{COMPAS} & 0.00 $\pm$ 0.03 & -0.02 $\pm$ 0.06 & -0.01 $\pm$ 0.02 & 0.01 $\pm$ 0.06 \\
\textbf{LAW} & -0.01 $\pm$ 0.03 & -0.00 $\pm$ 0.01 & -0.02 $\pm$ 0.03 & 0.00 $\pm$ 0.01\\
\textbf{CREDIT} & 0.00 $\pm$ 0.01 & 0.00 $\pm$ 0.01 & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 \\
\hline
\rowcolor{LightCyan} \multicolumn{5}{|c|}{\textbf{\textsc{GradientSHAP}\xspace}} \\
\hline
\textbf{Dataset} & \multicolumn{2}{c|}{\textbf{$\phi(s)$}} & \multicolumn{2}{c|}{\textbf{$\phi(x)$}} \\
& \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} \\
\hline
\textbf{CENSUS} & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 \\
\textbf{COMPAS} & -0.01 $\pm$ 0.04 & -0.01 $\pm$ 0.03 & -0.03 $\pm$ 0.02 & -0.01 $\pm$ 0.03\\
\textbf{LAW} & -0.01 $\pm$ 0.02 & 0.00 $\pm$ 0.00 & -0.02 $\pm$ 0.00 & 0.00 $\pm$ 0.00\\
\textbf{CREDIT} & 0.00 $\pm$ 0.01 & 0.01 $\pm$ 0.02 & 0.00 $\pm$ 0.01 & 0.01 $\pm$ 0.02\\
\hline
\rowcolor{LightCyan} \multicolumn{5}{|c|}{\textbf{\textsc{SmoothGrad}\xspace}} \\
\hline
\textbf{Dataset} & \multicolumn{2}{c|}{\textbf{$\phi(s)$}} & \multicolumn{2}{c|}{\textbf{$\phi(x)$}} \\
& \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} & \textbf{\textsc{Race}\xspace} & \textbf{\textsc{Sex}\xspace} \\
\hline
\textbf{CENSUS} & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02 & 0.00 $\pm$ 0.02\\
\textbf{COMPAS} & 0.00 $\pm$ 0.02 & -0.01 $\pm$ 0.07 & -0.01 $\pm$ 0.02 & 0.00 $\pm$ 0.08 \\
\textbf{LAW} & -0.04 $\pm$ 0.04 & -0.01 $\pm$ 0.03 & -0.03 $\pm$ 0.06 & 0.01 $\pm$ 0.03 \\
\textbf{CREDIT} & 0.01 $\pm$ 0.02 & 0.00 $\pm$ 0.02 & 0.01 $\pm$ 0.02 & 0.00 $\pm$ 0.02 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Experimental Setup}\label{sec:setup}
We describe the tabular benchmark datasets used in our evaluation (Section~\ref{sec:datasets}), $f_{target}$ and $f_{adv}$ architectures (Section~\ref{sec:arch}), and metrics for evaluating attack success (Section~\ref{sec:metrics}).
\subsection{Datasets}\label{sec:datasets}
We consider four tabular datasets representing different high-stakes decision-making applications. All the datasets have a binary classification task and are publicly available.
\begin{itemize}[leftmargin=*]
\item \noindent\textbf{Adult Income (CENSUS)} comprises 48,842 data records with 14 attributes about individuals from the 1994 US Census data.
The attributes include marital status, education, occupation, and job hours per week, among others.
The binary classification task is whether an individual makes an income of over \$50k per annum.
\item \noindent\textbf{Recidivism (COMPAS)} is used by commercial algorithms employed by judges and parole officers to estimate the likelihood of a criminal reoffending. It contains records of 10,000 criminal defendants in Florida.
The binary classification task is whether a defendant will reoffend.
\item \noindent\textbf{Law School Dataset (LAW)} is based on a survey conducted by the Law School Admission Council across 163 law schools in the United States. It contains information on 21,790 law students, such as their entrance exam scores (LSAT), their grade-point average (GPA) collected prior to law school, and their first-year average grade.
The binary classification task is whether a student will have a high first-year average grade.
\item \noindent\textbf{UCI Credit Card (CREDIT)} is an anonymized dataset from the UCI Machine Learning repository and contains information about different credit card applicants. The dataset contains 30,000 records with 24 attributes each. The binary classification task is whether the application was approved.
\end{itemize}
In all four datasets, the sensitive attributes are \textsc{Race}\xspace and \textsc{Sex}\xspace. We use these sensitive attributes for demonstration; the attack, however, can extend to other sensitive attributes with discrete values.
We use 70\% of $\mathcal{D}$ to train $f_{target}$ and the remaining 30\% as the testing dataset. $\mathcal{D}_{aux}$ is 50\% of the testing dataset, while the other half is used as an unseen dataset for evaluating attack success.
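The split above can be sketched as follows (the function name and seed are illustrative, not taken from the released code):

```python
import random

def split_dataset(records, seed=0):
    # 70% trains f_target; the remaining 30% is halved into D_aux
    # (used to train f_adv) and an unseen set for evaluating the attack.
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    n_train = int(0.7 * len(shuffled))
    rest = shuffled[n_train:]
    half = len(rest) // 2
    return shuffled[:n_train], rest[:half], rest[half:]
```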
\subsection{Architecture}\label{sec:arch}
We now describe the model architectures and training hyperparameters for $f_{target}$, trained on the main classification task, and $f_{adv}$, used by \protect{$\mathcal{A}dv$}\xspace to map the explanations to $s$. We use the PyTorch and Captum libraries for model explanations, and our code is publicly available: \url{https://github.com/vasishtduddu/AttInfExplanations.git}.
\begin{itemize}[leftmargin=*]
\item \noindent\textbf{Target Models.} We consider a fully connected neural network with four hidden layers of sizes [1024, 512, 256, 128] for all the datasets. Note that all the datasets have binary classification tasks, so the target models are binary classifiers. The target models are trained for 30 epochs with the Adam optimizer, a learning rate of 1e-3, and no regularization.
\item \noindent\textbf{Attack Models.} We use a neural network model for all datasets other than LAW, where we use a random forest classifier with a maximum depth of 150. The neural network is fully connected with three hidden layers of sizes [64, 128, 32], trained for 500 epochs using the Adam optimizer with a learning rate of 1e-3.
The attack methodology is independent of the ML models used and can easily be evaluated against other architectures.
\item \noindent\textbf{Model Accuracy.} The model utility is computed over the unseen test dataset across all the datasets. The test accuracy for the CENSUS dataset is 82.20\%, CREDIT dataset is 77.92\%, COMPAS dataset is 74.67\%, and LAW dataset is 95.63\%.
\end{itemize}
\subsection{Metrics}\label{sec:metrics}
We consider three main metrics for evaluating the success of the attribute inference attack.
\begin{itemize}[leftmargin=*]
\item \noindent\underline{\textbf{Precision}}. The ratio of true positives to the sum of true positives and false positives. This indicates the fraction of records inferred by \protect{$\mathcal{A}dv$}\xspace as having a positive value of $s$ that indeed have a positive value as ground truth.
\item \noindent\underline{\textbf{Recall}}. The ratio of true positives to the sum of true positives and false negatives. This indicates the fraction of records with a positive value of $s$ which are correctly inferred by \protect{$\mathcal{A}dv$}\xspace.
\item \noindent\underline{\textbf{F1-Score.}} The harmonic mean of precision and recall, computed as $2\times\frac{precision \cdot recall}{precision + recall}$. The highest value of one indicates perfect precision and recall, while the minimum value of zero occurs when either precision or recall is zero.
\end{itemize}
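The three metrics above can be computed directly from the confusion counts (a minimal sketch of the definitions; 1 denotes the positive class of $s$):

```python
def attack_metrics(preds, labels):
    # preds/labels are binary values of the sensitive attribute s.
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```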
\section{\ref{threatmodel1}: Evaluation of Attack Success}\label{sec:tm1results}
We first consider \ref{threatmodel1}, where \protect{$\mathcal{A}dv$}\xspace has access to $\phi(x \cup s)$, and hence to $\phi(x)$ and $\phi(s)$. We evaluate the attack success in inferring $s$ from $\phi(x \cup s)$ (Section~\ref{subsec:worstcase}), followed by inferring $s$ from only $\phi(s)$ (Section~\ref{subsec:correlation}).
\subsection{Inferring $s$ from $\phi(x \cup s)$}\label{subsec:worstcase}
We first evaluate the simplest attack surface: \protect{$\mathcal{A}dv$}\xspace has access to the entire model explanation vector $\phi(x \cup s)$ to infer $s$.
\protect{$\mathcal{A}dv$}\xspace's $f_{adv}$ maps the entire explanation to $s$, i.e., $f_{adv}: \phi(x \cup s) \rightarrow s$. Our hypothesis is that $\phi(x \cup s)$ is distinguishable for different values of $s$ which is captured by $f_{adv}$.
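This mapping can be sketched with a logistic model standing in for $f_{adv}$ (the paper uses a neural network; this minimal stdlib substitute only illustrates training $\phi(x \cup s) \rightarrow s$ on $\mathcal{D}_{aux}$):

```python
import math

def train_f_adv(explanations, s_labels, lr=0.5, epochs=200):
    # Fit a logistic model mapping an explanation vector phi(x u s)
    # to the probability that the sensitive attribute s is positive.
    dim = len(explanations[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for phi, s in zip(explanations, s_labels):
            z = sum(wi * xi for wi, xi in zip(w, phi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - s  # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, phi)]
            b -= lr * g
    return lambda phi: 1.0 / (1.0 + math.exp(
        -(sum(wi * xi for wi, xi in zip(w, phi)) + b)))
```

If explanations are distinguishable across values of $s$, even this simple model separates them; the neural-network $f_{adv}$ captures richer structure.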
\setlength\tabcolsep{5pt}
\begin{table}[!htb]
\caption{\underline{\ref{threatmodel1}: Inferring $s$ from $\phi(x \cup s)$.}}
\label{tab:worstcase}
\begin{center}
\begin{tabular}{ | c | c | c | c | }
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{IntegratedGradients}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.98 | 0.95 & 0.95 | 0.87 & 0.97 | 0.91 \\
\textbf{\small COMPAS} & 1.00 | 1.00 & 1.00 | 0.98 & 1.00 | 0.99 \\
\textbf{\small CREDIT} & 0.95 | 0.94 & 0.69 | 0.61 & 0.80 | 0.74 \\
\textbf{\small LAW} & 0.93 | 0.57 & 0.92 | 0.55 & 0.93 | 0.56 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{DeepLift}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.99 | 0.98 & 0.95 | 0.93 & 0.97 | 0.96 \\
\textbf{\small COMPAS} & 1.00 | 0.97 & 1.00 | 0.98 & 1.00 | 0.97 \\
\textbf{\small CREDIT} & 0.95 | 0.91 & 0.70 | 0.61 & 0.81 | 0.73 \\
\textbf{\small LAW} & 0.94 | 0.88 & 0.92 | 0.54 & 0.93 | 0.67 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{GradientSHAP}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS}& 0.94 | 0.97 & 0.98 | 0.94 & 0.96 | 0.95 \\
\textbf{\small COMPAS} & 0.94 | 0.96 & 0.95 | 0.94 & 0.95 | 0.95 \\
\textbf{\small CREDIT} & 0.97 | 0.95 & 0.70 | 0.61 & 0.81 | 0.74 \\
\textbf{\small LAW} & 0.99 | 0.93 & 0.99 | 0.95 & 0.99 | 0.94 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{SmoothGrad}\xspace}}\\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS}& 0.97 | 0.95 & 0.94 | 0.90 & 0.95 | 0.93 \\
\textbf{\small COMPAS} & 0.99 | 0.99 & 1.00 | 0.99 & 0.99 | 0.99 \\
\textbf{\small CREDIT} & 0.93 | 0.92 & 0.70 | 0.61 & 0.80 | 0.73 \\
\textbf{\small LAW} & 0.99 | 0.99 & 1.00 | 1.00 & 0.99 | 0.99 \\
\hline
\end{tabular}
\end{center}
\end{table}
As seen in Table~\ref{tab:worstcase}, we indeed validate our hypothesis. The attack success as measured using F1-Score is high: \textsc{IntegratedGradients}\xspace (\textsc{Sex}\xspace: 0.80 $\pm$ 0.16; \textsc{Race}\xspace: 0.92 $\pm$ 0.07), \textsc{DeepLift}\xspace (\textsc{Sex}\xspace: 0.83 $\pm$ 0.13; \textsc{Race}\xspace: 0.92 $\pm$ 0.07), \textsc{GradientSHAP}\xspace (\textsc{Sex}\xspace: 0.89 $\pm$ 0.08; \textsc{Race}\xspace: 0.92 $\pm$ 0.06) and \textsc{SmoothGrad}\xspace (\textsc{Sex}\xspace: 0.91 $\pm$ 0.10; \textsc{Race}\xspace: 0.91 $\pm$ 0.10). In addition to high F1-Scores, the high precision and recall values indicate that our proposed attribute inference attack is effective in inferring $s$ from $\phi(x \cup s)$.
\subsection{Inferring $s$ from $\phi(s)$}\label{subsec:correlation}
We now consider a different setting for fine-grained analysis: can \protect{$\mathcal{A}dv$}\xspace exploit only $\phi(s)$ to infer $s$? Here, $\phi(s)$ is directly influenced by $s$, while $\phi(x)$ is only indirectly influenced by $s$.
Our hypothesis is that $\phi(s)$ alone is sufficient for reasonable attack success, and that the entire model explanation $\phi(x \cup s)$ is not required to successfully infer $s$. \protect{$\mathcal{A}dv$}\xspace's $f_{adv}$ infers $s$ using only its corresponding explanation, i.e., $f_{adv}: \phi(s) \rightarrow s$.
\setlength\tabcolsep{5pt}
\begin{table}[!htb]
\caption{\underline{\ref{threatmodel1}: Inferring $s$ from $\phi(s)$.}}
\label{tab:attribute_correlation}
\begin{center}
\begin{tabular}{ | c | c | c | c | }
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{IntegratedGradients}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 1.00 | 0.96 & 0.90 | 0.75 & 0.94 | 0.84 \\
\textbf{\small COMPAS} & 0.99 | 0.98 & 1.00 | 0.94 & 0.99 | 0.96 \\
\textbf{\small CREDIT} & 1.00 | 1.00 & 0.70 | 0.82 & 0.61 | 0.75 \\
\textbf{\small LAW} & 0.98 | 1.00 & 1.00 | 1.00 & 0.98 | 1.00 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{DeepLift}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS}& 1.00 | 1.00 & 0.99 | 0.70 & 0.99 | 0.82 \\
\textbf{\small COMPAS} & 0.94 | 1.00 & 0.97 | 0.81 & 0.95 | 0.89 \\
\textbf{\small CREDIT} & 1.00 | 0.99 & 0.70 | 0.61 & 0.82 | 0.75 \\
\textbf{\small LAW} & 0.60 | 1.00 & 0.99 |1.00 & 0.75 | 1.00 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{GradientSHAP}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.99 | 1.00 & 0.90 | 0.66 & 0.94 | 0.79 \\
\textbf{\small COMPAS} & 0.94 | 1.00 & 0.96 | 0.81 & 0.95 | 0.89 \\
\textbf{\small CREDIT} & 1.00 | 1.00 & 0.70 | 0.60 & 0.82 | 0.75 \\
\textbf{\small LAW} & 0.99 | 0.99 & 0.93 | 0.55 & 0.96 | 0.71 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{SmoothGrad}\xspace}}\\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS}& 1.00 | 0.79 & 0.90 | 0.73 & 0.94 | 0.76 \\
\textbf{\small COMPAS}& 0.98 | 0.99 & 1.00 | 0.93 & 0.99 | 0.96 \\
\textbf{\small CREDIT} & 0.99 | 0.99 & 0.70 | 0.61 & 0.82 | 0.75 \\
\textbf{\small LAW} & 1.00 | 1.00 & 1.00 | 0.56 & 1.00 | 0.72 \\
\hline
\end{tabular}
\end{center}
\end{table}
We validate our hypothesis as indicated by the high F1-Score: \textsc{IntegratedGradients}\xspace (\textsc{Sex}\xspace: 0.88 $\pm$ 0.09; \textsc{Race}\xspace: 0.88 $\pm$ 0.15),
\textsc{DeepLift}\xspace (\textsc{Sex}\xspace:0.86 $\pm$ 0.09; \textsc{Race}\xspace: 0.87 $\pm$ 0.09),
\textsc{GradientSHAP}\xspace(\textsc{Sex}\xspace: 0.78 $\pm$ 0.06; \textsc{Race}\xspace: 0.91 $\pm$ 0.05), and
\textsc{SmoothGrad}\xspace (\textsc{Sex}\xspace: 0.79 $\pm$ 0.09; \textsc{Race}\xspace: 0.93 $\pm$ 0.07). In addition to high F1-Scores, the high precision and recall values indicate that our proposed attribute inference attack is effective in inferring $s$ from only $\phi(s)$.
\begin{remark}
In \ref{threatmodel1}, the high attack success is attributed to the distinguishability of model explanations for different values of $s$. In other words, different values of $s$ \textbf{explicitly} influence the model predictions as they are included in the training dataset. This in turn results in distinguishable explanations for different values of $s$, a distinguishability that is captured by training $f_{adv}$ to infer $s$.
\end{remark}
\section{\ref{threatmodel2}: Evaluation of Attack Success}\label{sec:tm2results}
Having shown that our proposed attack is successful in \ref{threatmodel1}, we now evaluate the attack success in \ref{threatmodel2}. We show the attack success when exploiting $\phi(x)$ (Section~\ref{subsec:explonly}), followed by exploiting the combination of $f_{target}(x)$ and $\phi(x)$ (Section~\ref{subsec:combined}).
\subsection{Inferring $s$ from $\phi(x)$}\label{subsec:explonly}
We evaluate the effectiveness of our attack in exploiting $\phi(x)$, the only explanations available to \protect{$\mathcal{A}dv$}\xspace.
\protect{$\mathcal{A}dv$}\xspace maps $\phi(x)$ to the value of $s$ using the trained attack model, i.e., $f_{adv}: \phi(x) \rightarrow s$. Our hypothesis is that, despite $s$ not being directly included in the training dataset and input, some attributes in $x$ might act as proxies for $s$. Hence, $s$ influences model predictions \textit{indirectly}, resulting in distinguishable model explanations for different values of $s$.
\setlength\tabcolsep{5pt}
\begin{table}[!htb]
\caption{\underline{\ref{threatmodel2}: Inferring $s$ from $\phi(x)$.}}
\label{tab:explanations}
\begin{center}
\begin{tabular}{ | c | c | c | c | }
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{IntegratedGradients}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.97 | 0.85 & 0.90 | 0.79 & 0.94 | 0.82 \\
\textbf{\small COMPAS} & 0.76 | 0.99 & 0.57 | 0.80 & 0.65 | 0.89 \\
\textbf{\small CREDIT} & 0.91 | 0.91 & 0.69 | 0.60 & 0.79 | 0.72 \\
\textbf{\small LAW} & 0.98 | 0.90 & 0.94 | 0.56 & 0.96 | 0.69 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{DeepLift}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.98 | 0.90 & 0.91 | 0.80 & 0.94 | 0.85 \\
\textbf{\small COMPAS} & 0.81 | 1.00 & 0.54 | 0.81 & 0.65 | 0.89 \\
\textbf{\small CREDIT} & 0.98 | 0.91 & 0.70 | 0.60 & 0.81 | 0.72 \\
\textbf{\small LAW} & 0.99 | 0.99 & 0.92 | 0.55 & 0.96 | 0.70 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{GradientSHAP}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.94 | 0.85 & 0.90 | 0.80 & 0.92 | 0.83 \\
\textbf{\small COMPAS} & 0.75 | 0.90 & 0.55 | 0.82 & 0.63 | 0.86 \\
\textbf{\small CREDIT} & 0.95 | 0.95 & 0.70 | 0.61 & 0.80 | 0.74 \\
\textbf{\small LAW} & 0.93 | 0.53 & 0.92 | 0.55 & 0.93 | 0.54 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{SmoothGrad}\xspace}}\\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.98 | 0.87 & 0.90 | 0.78 & 0.94 | 0.82 \\
\textbf{\small COMPAS} & 0.77 | 0.98 & 0.56 | 0.80 & 0.65 | 0.89\\
\textbf{\small CREDIT} & 0.92 | 0.88 & 0.70 | 0.60 & 0.79 | 0.72 \\
\textbf{\small LAW} & 0.97 | 0.96 & 0.94 | 0.55 & 0.96 | 0.70\\
\hline
\end{tabular}
\end{center}
\end{table}
We confirm this hypothesis in Table~\ref{tab:explanations}, which indicates high attack success. For instance, the F1-Scores across the four datasets for each explanation algorithm are as follows: \textsc{IntegratedGradients}\xspace (\textsc{Sex}\xspace: 0.78 $\pm$ 0.07; \textsc{Race}\xspace: 0.83 $\pm$ 0.12), \textsc{DeepLift}\xspace (\textsc{Sex}\xspace: 0.79 $\pm$ 0.08; \textsc{Race}\xspace: 0.84 $\pm$ 0.12), \textsc{GradientSHAP}\xspace (\textsc{Sex}\xspace: 0.74 $\pm$ 0.12; \textsc{Race}\xspace: 0.82 $\pm$ 0.12), and \textsc{SmoothGrad}\xspace (\textsc{Sex}\xspace: 0.78 $\pm$ 0.07; \textsc{Race}\xspace: 0.83 $\pm$ 0.12). Hence, censoring $s$ is ineffective in mitigating the privacy risk from attribute inference attacks.
\subsection{Inferring $s$ from $f_{target}(x) \cup \phi(x)$}\label{subsec:combined}
Having shown the attack success in exploiting $\phi(x)$, we answer: \textit{how good is the combination of explanations and predictions as an attack surface for \protect{$\mathcal{A}dv$}\xspace to exploit?}
We evaluate the impact on attack success of combining $f_{target}(x)$ with $\phi(x)$.
\setlength\tabcolsep{5pt}
\begin{table}[!htb]
\caption{\underline{\ref{threatmodel2}: Inferring $s$ from $f_{target}(x) \cup \phi(x)$.}}
\label{tab:combined}
\begin{center}
\begin{tabular}{ | c | c | c | c | }
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{IntegratedGradients}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.99 | 0.72 & 0.90 | 0.66 & 0.94 | 0.69 \\
\textbf{\small COMPAS} & 0.76 | 0.99 & 0.47 | 0.82 & 0.58 | 0.90 \\
\textbf{\small CREDIT} & 0.89 | 0.90 & 0.70 | 0.60 & 0.78 | 0.72 \\
\textbf{\small LAW} & 0.98 | 0.98 & 0.93 | 0.55 & 0.95 | 0.70 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{DeepLift}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.99 | 0.75 & 0.90 | 0.66 & 0.94 | 0.70\\
\textbf{\small COMPAS} & 0.75 | 0.99 & 0.49 | 0.81 & 0.59 | 0.89 \\
\textbf{\small CREDIT} & 0.97 | 0.92 & 0.80 | 0.60 & 0.81 | 0.73 \\
\textbf{\small LAW} & 0.99 | 0.99 & 0.92 | 0.54 & 0.95 | 0.70 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{GradientSHAP}\xspace}} \\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.95 | 0.60 & 0.90 | 0.66 & 0.93 | 0.63 \\
\textbf{\small COMPAS} & 0.55 | 0.93 & 0.50 | 0.81 & 0.52 | 0.86 \\
\textbf{\small CREDIT} & 0.93 | 0.92 & 0.69 | 0.61 & 0.79 | 0.73 \\
\textbf{\small LAW} & 0.83 | 0.58 & 0.92 | 0.55 & 0.87 | 0.56 \\
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{SmoothGrad}\xspace}}\\
\hline
\textbf{Dataset} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score}\\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{\small CENSUS} & 0.90 | 0.72 & 0.90 | 0.66 & 0.90 | 0.69 \\
\textbf{\small COMPAS} & 0.84 | 0.88 & 0.69 | 0.61 & 0.76 | 0.72 \\
\textbf{\small CREDIT} & 0.69 | 0.99 & 0.47 | 0.81 & 0.56 | 0.89 \\
\textbf{\small LAW} & 0.97 | 0.95 & 0.92 | 0.54 & 0.95 | 0.69\\
\hline
\end{tabular}
\end{center}
\end{table}
Given the combination $f_{target}(x) \cup \phi(x)$ as input, \protect{$\mathcal{A}dv$}\xspace trains $f_{adv}$ to map it to $s$, i.e., $f_{adv}: (f_{target}(x) \cup \phi(x)) \rightarrow s$.
In Table~\ref{tab:combined}, we note that the attack success does not show a significant difference compared to the results in Table~\ref{tab:explanations} for exploiting only model explanations.
Furthermore, for \textsc{Race}\xspace, the attack success degrades compared to using only model explanations (Table~\ref{tab:explanations}). Here, we conjecture that including the model predictions lowers the distinguishability available to $f_{adv}$ for inferring $s$.
These observations indicate that model explanations are a strong attack surface for \protect{$\mathcal{A}dv$}\xspace to exploit independent of model predictions.
\begin{remark}
In \ref{threatmodel2}, similar to \ref{threatmodel1}, the high attack success is attributed to the distinguishability of model explanations for different values of $s$. Unlike \ref{threatmodel1}, different values of $s$ \textbf{implicitly} influence the model predictions via other attributes acting as proxy variables for $s$. This, in turn, results in distinguishable explanations for different values of $s$, which is exploited by $f_{adv}$.
\end{remark}
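As a concrete illustration of how such an $f_{adv}$ can be trained, the sketch below fits a simple attack model on the concatenation of predictions and explanation vectors. The synthetic data, the proxy-leakage mechanism, and the nearest-centroid classifier standing in for the attack network are our own assumptions for illustration, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 8                       # records, number of input attributes

# Hypothetical data: the sensitive attribute s leaks into the explanations.
s = rng.integers(0, 2, size=n)       # sensitive attribute (binary group)
preds = rng.random((n, 2))           # f_target(x): class-probability outputs
expl = rng.normal(size=(n, d))       # phi(x): one attribution per attribute
expl[:, 0] += 1.5 * s                # a proxy attribute shifts attributions by s

# f_adv maps (f_target(x) U phi(x)) -> s; a nearest-centroid classifier
# stands in here for the attack model trained by the adversary.
X = np.hstack([preds, expl])
X_tr, s_tr, X_te, s_te = X[:1500], s[:1500], X[1500:], s[1500:]
c0 = X_tr[s_tr == 0].mean(axis=0)
c1 = X_tr[s_tr == 1].mean(axis=0)
guess = (np.linalg.norm(X_te - c1, axis=1)
         < np.linalg.norm(X_te - c0, axis=1)).astype(int)
acc = float((guess == s_te).mean())
print(f"attribute-inference accuracy: {acc:.2f}")  # well above the 0.5 baseline
```

Because only the explanation coordinates differ by $s$, the accuracy above chance here is driven entirely by $\phi(x)$, mirroring the observation that explanations alone form a strong attack surface.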
\section{Comparing Privacy Risk of Explanations vs. Predictions}\label{sec:compare}
Having shown the success of the attack on model explanations, we answer the question: \textit{how risky are explanations compared to model predictions with respect to attribute inference attacks?}
The experimental setup in our work is the same as that of Aalmoes et al.~\cite{aalmoes2022dikaios}.
Hence, we report the results from Aalmoes et al.~\cite{aalmoes2022dikaios} as the state-of-the-art for attribute inference attacks. Specifically, we consider their \textsc{PrecRec}\xspace attack for both \ref{threatmodel1} and \ref{threatmodel2}.
\setlength\tabcolsep{1.75pt}
\begin{table}[!htb]
\caption{Reported state-of-the-art attribute inference attack success exploiting model predictions from Aalmoes et al.~\cite{aalmoes2022dikaios}.}
\label{tab:prediction}
\begin{center}
\begin{tabular}{ | c | c | c | c | }
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{PrecRec}\xspace Attack (w/o $S$)}} \\
\hline
\multirow{2}{*}{\textbf{Dataset}} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score} \\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{CENSUS} & 0.91 | 0.94 & 0.90 | 0.69 & 0.90 | 0.80 \\
\textbf{COMPAS} & 0.97 | 0.96 & 0.48 | 0.82 & 0.64 | 0.88 \\
\textbf{LAW} & 0.98 | 1.00 & 0.95 | 0.56 & 0.96 | 0.72 \\
\textbf{CREDIT} & 0.99 | 0.97 & 0.69 | 0.61 & 0.81 | 0.75 \\
\hline
\hline
\rowcolor{LightCyan} \multicolumn{4}{|c|}{\textbf{\textsc{PrecRec}\xspace Attack (w/ $S$)}} \\
\hline
\multirow{2}{*}{\textbf{Dataset}} & \textbf{Recall} & \textbf{Precision} & \textbf{F1-Score} \\
& \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace & \textsc{Race}\xspace | \textsc{Sex}\xspace \\
\hline
\textbf{CENSUS} & 0.90 | 0.91 & 0.92 | 0.70 & 0.91 | 0.79 \\
\textbf{COMPAS} & 0.72 | 0.97 & 0.67 | 0.82 & 0.69 | 0.89 \\
\textbf{LAW} & 0.98 | 0.96 & 0.97 | 0.57 & 0.97 | 0.72 \\
\textbf{CREDIT} & 0.99 | 0.84 & 0.69 | 0.67 & 0.81 | 0.75 \\
\hline
\end{tabular}
\end{center}
\end{table}
We compare the capability of inferring $s$ from $\phi(x \cup s)$ (reported in Table~\ref{tab:worstcase}) against that of using model predictions (reported in Table~\ref{tab:prediction}, w/ $s$).
We note that the attack success for model explanations in Table~\ref{tab:worstcase} is higher than that for model predictions in Table~\ref{tab:prediction} in most of the cases.
Similarly, when $s$ is not included in the training data, we find that the performance reported in Table~\ref{tab:explanations} is better than the results in Table~\ref{tab:prediction} (w/o $s$).
In summary, model explanations alone are a stronger attack surface for attribute inference attacks than model predictions.
\section{Introduction}
Until recently, studies of faint field galaxies have been limited to galaxy
counts, colors, and redshift distributions, which can be used to construct
luminosity functions at earlier epochs. Such luminosity functions are then
compared to models incorporating a certain amount of number evolution
(controlled by galaxy formation and merging) coupled with luminosity and color
evolution (controlled by star formation histories). Recent models range from
those predicting only a small degree of luminosity evolution ({\it e.g.,\ } Gronwall \&
Koo 1995) to those invoking entirely new classes of galaxies ({\it e.g.,\ } Babul \& Rees
1992; Babul \& Ferguson 1996), to those requiring high merger rates ({\it e.g.,\ }
Broadhurst, Ellis, \& Glazebrook 1992). A direct measure of luminosity
evolution in field galaxies will help to distinguish between various
hypotheses. Such measures have recently been attempted, but the results are
somewhat contradictory. Schade {\it et~al.}\ (1996a) found evidence for disk
brightening by 1.2 $B$ mag in galaxies at redshifts $0.5 \leq z \leq 1.2$.
Simard \& Pritchet (1996) found even greater levels of evolution ($2.5 \pm 0.5$
mag) in a sample of very blue galaxies at $z \sim 0.35$ for which strong {{\sc [O\thinspace ii]}}\
lines could be spatially resolved. Rix {\it et~al.}\ (1997), also using kinematic
information, derived a brightening of 1.5 mag at $z \sim 0.25$ for
sub-$L^\star$ galaxies. In contrast, Vogt {\it et~al.}\ (1996) and Bershady (1997),
using optical rotation curves, and Forbes {\it et~al.}\ (1996), using line widths,
found only small deviations from the local Tully-Fisher (TF) relation for
spiral galaxies, implying only modest brightening ($\sim$0.4 mag) out to $z\sim
1$. Simard \& Pritchet postulate that these various results could be
reconciled if strong luminosity evolution were present {\it only} in lower-mass
systems. If confirmed, this would be an important factor in understanding the
evolution of field galaxies.
This Letter introduces well-resolved rotation curves of eight predominantly
lower-luminosity galaxies ($L_B \lesssim L_B^\star$, where $M_B^\star = -20.3$; see Efstathiou,
Ellis, \& Peterson 1988), selected by morphology as suitable TF candidates.
These new observations provide a valuable test of the mass-dependent luminosity
evolution hypothesis, particularly in comparison to the higher-mass sample
presented by Vogt {\it et~al.}\ (1996; hereafter ``Paper~I''). Combined with the work of
Paper~I, these data form a sample of rotation curves for 16 galaxies at redshifts
{0.15~$\lesssim$~{$z$}~$\lesssim$~1}, ranging over half the age of the universe
(for $q_0 = 0$, one-third for $q_0 = 0.5$). We also use this combined data
set to explore trends in surface brightness for comparison with Colless {\it et~al.}\
(1994), Forbes {\it et~al.}\ (1996) and Schade {\it et~al.}\ (1996a). Detailed analysis of
a significantly larger data set is currently underway, and these results will
be used to explore a variety of relevant selection effects. A full description
of our analysis techniques is deferred to that paper (Vogt {\it et~al.}\ 1997).
\section{Observations}
The TF candidate objects were selected from WFPC2 F814W (``$I_{814}$'') images
of the flanking fields of the Hubble Deep Field (HDF; Williams {\it et~al.}\ 1996).
Selection was based on the following criteria:
({\it i}) undistorted disk morphology;
({\it ii}) inclination greater than 30$^{\circ}$;
({\it iii}) no interacting companions or obscuring foreground stars; and
({\it iv}) $I_{814} \leq 22.5$.
The selection was made with no a priori knowledge of redshifts or luminosities.
The galaxies were observed in 1996 April under the auspices of the DEEP\
project (Koo 1995), using the Low Resolution Imaging Spectrograph (LRIS; Oke
{\it et~al.}~1995) on the Keck 10m\ telescope. LRIS\ employs slitmasks to provide
``long-slit'' spectral observations for multiple objects simultaneously. For
the TF candidates, the slitlet for each object was tilted to align with the
major axis of each galaxy. Slitlets were {1\farcs 1} wide and $\geq${12\farcs
0} long for these objects. Integration times were 50 minutes for each of two
600 line mm$^{-1}$ gratings, blazed at 5000 and 7500~\AA; the combined spectral
range was roughly 3800--8600~\AA. Spectral and spatial scales were
1.28~\AA~pix$^{-1}$ and {0\farcs 215}~pix$^{-1}$, respectively. The seeing was
approximately {1\farcs 0} FWHM. In addition to spectroscopy, two 300s
$V$-band images of the field were acquired. (See Phillips {\it et~al.}~1997 for more
details of the observations and for galaxy coordinates.)
The new spectral data have several significant improvements over those of Paper~I.
The targets were preselected as suitable TF galaxies, whereas the rotation
curves in Paper~I\ were obtained serendipitously. Slitlets were aligned with the
galaxy major axes, removing a source of potentially significant error.
Finally, the expanded spectral range means that multiple emission lines were
observed for each object, {\it e.g.,\ } {{\sc [O\thinspace ii]}}\wave{3727} through {{\sc [O\thinspace iii]}}\wave{5007} for $z
< 0.7$ (the majority of our sources), and through {H$\alpha$}\ for $z < 0.3$.
$z > 0.7$, only the {{\sc [O\thinspace ii]}}\ lines were available.
\section{Data Reduction and Analysis}
\subsection{Spectral Measurements}
The LRIS\ spectra were debiased and flat--fielded, and then rectified.
Wavelength calibration was done using the procedure described in Kelson {\it et~al.}\
(1997). No relative or absolute flux calibration was applied. All spectra (not
just those preselected for TF work) were examined for spatially extended
emission lines, but only one additional object (IE4-1304-1007) displayed such
lines (the apparent inclination of this object was too low for our criteria,
but we include it in our final sample). Among the 18 preselected candidates,
one has yielded no redshift identification; one has a pure early-type spectrum
with no detected emission lines; one at $z = 0.109$ displayed unusually weak
lines, both in emission and absorption; eight show normal disk galaxy spectra
but the emission lines are too weak to use to derive velocity curves; and seven
display emission lines of sufficient strength to determine velocity structure.
The group of eight has virtually the same median redshift (0.50) and redshift
range (0.41--0.77) as do the seven successful targets.
Both the {{\sc [O\thinspace ii]}}\wave{3727} doublet and {H$\beta$}\ and {{\sc [O\thinspace iii]}}\wave{5007} were
observed for most sources in the final sample. The spatially-resolved emission
lines were analyzed using the same Gaussian profile fitting technique described
in Paper~I\ (see also Vogt 1995). Briefly, a single (or double) Gaussian profile
was fit to each emission line (or doublet) at each point in the spatial
direction. Profiles were considered acceptable whenever the Gaussian fit met
minimum requirements in height and width, generally a signal-to-noise ratio
(S/N) of 5$\sigma$ and 3$\sigma$, respectively; the typical value was
10$\sigma$ for both width and amplitude. Central wavelengths of the profiles
were used to construct observed rotation curves.
As discussed in Paper~I, the sizes of the disks in these galaxies are on the order
of the seeing and the slit--width. Thus, wavelength shifts in the observed
spectral lines are not only a function of the velocity profile of the disk but
also the surface brightness distribution and the mapping of the
seeing--smoothed flux through the full slit. (A simplifying factor for the new
data is that the misalignment of the slit relative to the galaxy major axis was
less than 10$^{\circ}$\ for all but one source.) To derive terminal velocities we
must correct for these effects. To this end, we employ a grid of simple
exponential disk models with different terminal velocities to simulate emission
lines, and fit these model emission lines identically to the spectral data.
The circular velocity of the galaxy model was iteratively adjusted to match the
observed data, and adopted as the intrinsic terminal velocity, {$V_{term}$}. (For
some galaxies, a clear turnover in the velocity curve was not observed. Rather
than match the inner, rising rotation curve and extrapolate to a turnover
velocity, we have chosen the model whose measured turnover velocity matches the
maximum velocity measured in our spectra; these models provide lower limits to
the true {$V_{term}$}.) Errors in {$V_{term}$}\ were estimated by varying the inclination and
position angle of each galaxy by $\pm$~10$^{\circ}$.
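The full grid of disk models is described in Paper~I; purely as a schematic illustration of why the observed velocities must be corrected, the sketch below convolves an assumed exponential flux profile and an assumed arctan-style rotation curve (not the paper's exact model) with Gaussian seeing, and shows that the resulting flux-weighted line-of-sight velocity underestimates the intrinsic terminal velocity:

```python
import numpy as np

V_TERM = 200.0               # intrinsic terminal velocity [km/s] (assumed)
R_D = 0.5                    # disk scale length [arcsec] (assumed)
R_T = 0.3                    # rotation-curve turnover radius [arcsec] (assumed)
SEEING_SIGMA = 1.0 / 2.355   # 1.0" FWHM seeing as a Gaussian sigma

x = np.linspace(-5, 5, 2001)                         # position along the slit [arcsec]
flux = np.exp(-np.abs(x) / R_D)                      # exponential disk profile
v_true = V_TERM * (2 / np.pi) * np.arctan(x / R_T)   # intrinsic rotation curve

# Seeing mixes flux from smaller radii (lower velocity) into each
# resolution element, biasing the measured velocity low:
def observed_velocity(x0):
    w = flux * np.exp(-0.5 * ((x - x0) / SEEING_SIGMA) ** 2)
    return np.sum(w * v_true) / np.sum(w)

v_obs = observed_velocity(2.0)   # measured well outside the turnover radius
print(v_obs, V_TERM)             # v_obs falls short of V_TERM
```

Iteratively adjusting the model's input velocity until its smoothed output matches the data, as done in the text, recovers the intrinsic {$V_{term}$}\ despite this bias.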
\vfill\eject
\subsection{Photometric Parameters}
The HST\ {$I_{814}$}\ images were analyzed using IRAF-based tools (see {\it e.g.,\ } Forbes
{\it et~al.}\ 1994; Phillips {\it et~al.}\ 1997). Total magnitudes were measured from
aperture growth curves; inclinations and position angles were estimated from
outer elliptical isophotes; and disk scale lengths were measured from
simultaneous disk-plus-bulge fits to the major axis intensity profiles. HST\
took images of the flanking fields in {$I_{814}$}\ only, so a LRIS\ $V$ image was used
to determine a $V$--$I$ color. The {$I_{814}$}\ image was seeing-degraded to match the
ground--based image, and the color was determined within a {2\farcs 2} diameter
aperture (see Phillips {\it et~al.}\ 1997 for more detail).
Intrinsic galaxy parameters were calculated using the measured redshifts,
photometry, and angular scales, assuming \Hconst{75} and $q_0 = 0.05$.$^5$ To
determine restframe $L_B$, $k$-corrections were interpolated from the model
SEDs of Gronwall \& Koo (1995), which are based on Bruzual \& Charlot (1993)
models and realistic star-formation scenarios. Current epoch ({\it i.e.,\ }
non-evolving) SEDs were used. Since restframe $B$ corresponds to observed {$V_{606}$}\
at $z \sim 0.4$ and to {$I_{814}$}\ at $z \sim 0.8$, errors in the $k$-corrections
should be small. Galactic extinction was taken to be negligible for the HDF\
(Williams {\it et~al.}~1996), and sources were corrected for internal extinction
following the method of Tully \& Fouqu\'e (1985) in order to be consistent with
Pierce \& Tully (1988; 1992).
\altaffiltext{5}{Data from Paper~I\ have been adjusted for these new values.}
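For the adopted cosmology (\Hconst{75}, $q_0 = 0.05$), the conversion from apparent to absolute magnitude can be sketched with the classical Mattig luminosity distance for a matter-only universe; the formula choice and the example numbers below are our own illustration (the paper's $k$-corrections come from the Gronwall \& Koo SEDs):

```python
import math

H0 = 75.0        # km/s/Mpc (adopted value)
Q0 = 0.05
C = 299792.458   # speed of light [km/s]

def lum_distance_mpc(z, h0=H0, q0=Q0):
    """Mattig luminosity distance for a matter-only cosmology [Mpc]."""
    return (C / (h0 * q0**2)) * (q0 * z + (q0 - 1) * (math.sqrt(1 + 2 * q0 * z) - 1))

def absolute_mag(m_app, z, k_corr=0.0):
    """M = m - 5 log10(D_L / 10 pc) - K(z); K(z) left as a placeholder."""
    mu = 5 * math.log10(lum_distance_mpc(z) * 1e6 / 10.0)  # distance modulus
    return m_app - mu - k_corr

# Example with assumed numbers: an m = 21.5 disk at z = 0.5
print(lum_distance_mpc(0.5))     # ~2460 Mpc
print(absolute_mag(21.5, 0.5))   # ~ -20.5 before k-correction
```

The example lands near $M_B^\star$, consistent with the sample's stated luminosity range.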
\section{Results and Discussion}
Images of the eight new galaxies and their spatially-resolved {{\sc [O\thinspace ii]}}\ lines are
shown in Figure~\ref{PLATE} ({\sc Plates NN1--NN2}), along with the observed
and modeled velocity curves. Like the eight galaxies discussed in Paper~I, these
new distant TF galaxies appear to be quite similar to local normal spiral
galaxies, both morphologically and kinematically. The HST images show
apparently normal, disk-dominated spirals. Allowing for seeing and resolution
effects, the velocity curves are qualitatively similar to those of local
spirals. The rotation curves are traceable to $\sim$3 exponential scale
lengths ($R_d$) in the disks, a length comparable to the extent of rotation
curves for local galaxies ({\it cf.,\ } Vogt 1995). A simple estimate of their masses,
$M = V^2 R / G$, yields values of 0.5--$5 \times 10^{11} M_{\odot}$ within 15
kpc, similar to the range of masses found for nearby spirals.
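The quoted mass scale follows directly from $M = V^2 R / G$; the sketch below evaluates it at $R = 15$ kpc for a few representative terminal velocities (the specific velocity values are our own assumptions for illustration):

```python
import math

G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
KPC = 3.0857e19  # metres per kiloparsec
M_SUN = 1.989e30 # solar mass [kg]

def enclosed_mass_msun(v_kms, r_kpc=15.0):
    """Simple enclosed-mass estimate M = V^2 R / G, in solar masses."""
    v = v_kms * 1e3                                 # km/s -> m/s
    return v**2 * (r_kpc * KPC) / (G * M_SUN)

# Representative terminal velocities give masses of order 10^11 M_sun:
for v in (120.0, 200.0, 300.0):
    print(f"V = {v:.0f} km/s -> M ~ {enclosed_mass_msun(v):.1e} M_sun")
```

For instance, $V = 200$ km/s within 15 kpc gives roughly $1.4 \times 10^{11}\,M_{\odot}$, squarely in the quoted range.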
For purposes of discussion, we separate the velocity data into two classes.
High quality is defined as having sufficient S/N\ and resolution elements to
clearly determine a terminal velocity; at least one emission line free of
strong night-sky contamination; apparent inclination greater than 30$^{\circ}$; and
slit misaligned with the galaxy major axis by {$<$20$^{\circ}$}. Six of the eight
new galaxies and five from Paper~I\ meet these criteria. Note that the
serendipitous object, IE4-1304-1007, is a low-quality source.
\subsection{The High-Redshift Tully-Fisher Relation}
In Figure~\ref{TF} we compare the 16 galaxies to a local TF relation in the
restframe $B$-band. The local relation shown is an {\it inverse} fit ({\it i.e.,\ }
{$V_{term}$}\ as a function of $M_B$) to the local sample from Pierce \& Tully (1992);
this is the proper fit for comparison with a magnitude--limited sample (see
Paper~I). The 11 high-quality sources have a weighted offset of $0.36 \pm 0.13$
mag relative to the local relation, and an {\it rms} dispersion of 0.65 mag.
This observed dispersion matches the combined estimated errors of the
logarithmic velocity widths (0.47), the restframe $B$ magnitudes (0.2), and an
assumed intrinsic scatter in the TF relation (0.4; {\it cf.,\ } Willick {\it et~al.}~1996 and
references therein), thus helping to validate our error estimates. The
lower-quality points show a much larger scatter, as expected.
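The quoted 0.65 mag dispersion is simply the quadrature sum of the three error terms; the one-line check below is our own arithmetic, taking each quoted term as a magnitude error:

```python
import math

# Quadrature sum of the velocity-width term (0.47 mag), the restframe B
# magnitude error (0.2 mag) and the assumed intrinsic TF scatter (0.4 mag):
combined = math.sqrt(0.47**2 + 0.2**2 + 0.4**2)
print(round(combined, 2))  # 0.65, matching the observed rms dispersion
```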
We emphasize that the derived offset, $\sim$0.4 mag, represents an {\it upper
limit} to luminosity evolution of field galaxies, for these reasons: any
magnitude-limited sample is biased towards more luminous objects; our analysis
is restricted to objects with detectable emission lines --- that is, actively
star-forming galaxies which are likely to have elevated $B$ luminosities; some
terminal velocities are lower limits; and we may be overcorrecting for
extinction if galaxies were less dusty at earlier epochs. Our choice of $q_0 =
0.05$ is also conservative --- derived luminosities are reduced by 0.1--0.4 mag
for $q_0 = 0.5$. While mass evolution is a possible factor, we have implicitly
assumed that evolution in luminosity dominates the observed offset. We do not
expect the masses to evolve strongly, though, given the presence of clearly
formed disks.
The new HDF\ data extend the luminosity range of our total sample to $-21.8
\lesssim M_B \lesssim -19$. It is notable that {\it there is no deviation from
a linear relation over this range}, {\it i.e.,\ } there is no evidence in these data for
different amounts of luminosity evolution in different luminosity or mass
regimes.
One explanation for the wide range in luminosity evolution found by various
groups is that sample selection strongly affects the degree of evolution
detected in a given sample. In our sample, the galaxies were chosen primarily
by morphology. On the other hand, Bershady (1997), Simard \& Pritchet (1996),
and Rix {\it et~al.}~(1997) all selected {\it blue} galaxies. These studies all had
different sample selection, redshift ranges, and observational and analysis
techniques; a direct comparison is not practical, nor is a full discussion of
these parameters within the scope of this Letter (see Vogt {\it et~al.}~1997).
However, it is useful to consider the potentially critical issue of color
selection criteria. The Bershady (1997) low redshift ($0.05 \leq z \leq 0.35$)
sample shows an offset of less than 0.5 mag, while the Rix {\it et~al.}\ (1997)
spatially unresolved data at redshift $z \sim 0.25$ show evidence for a
magnitude offset of $\sim$1.5 relative to a local {\it blue} sample ({\it i.e.,\ } the
effect would be even greater if the data were compared to a general local
sample). Simard \& Pritchet (1996) chose the strongest {{\sc [O\thinspace ii]}}\ emitters from
among a sample of emission-line galaxies (Simard 1996), and they find the
highest offset of all (2.5 $\pm$ 0.5 mag for a redshift of $z \sim 0.35$; note
the large scatter). This suggests the bluest, most actively star forming
galaxies may show the largest offsets. Forbes {\it et~al.}\ (1996), whose sample was
not color selected, noted some correlation between their offsets and galaxy
colors in the sense that the galaxies with the largest offsets tended to be
blue. Rix {\it et~al.}\ (1997) find the same trend within their blue sample. As
further example, Figure~\ref{TF} includes the two galaxies from Vogt
{\it et~al.}~(1993) with observed optical rotation curves; one (SA68$-$2545.3) was
chosen specifically for its unusually strong {{\sc [O\thinspace ii]}}\ flux, and this galaxy shows
a very large offset from the TF relation. Taken together, this suggests that
for redshifts $z \gtrsim 0.2$, color may prove to be a good indicator of
luminosity evolution in field galaxies, distinguishing between an average,
stable population and a bluer, star-forming population with enhanced luminosity.
\subsection{Surface Brightness Evolution}
Changes in surface brightness levels in disk galaxies can provide an
independent determination of luminosity evolution --- provided scale lengths do
not evolve strongly --- and are particularly useful since they are independent of
$q_0$. Freeman (1970) showed that disks in local spiral galaxies have a
near-uniform central surface brightness ($\mu_B$ = 21.65 mag arcsec$^{-2}$).
Recently de Jong (1995) found a morphological-type dependence to the surface
brightness, and studied the effect of internal extinction, determining a value
of $\mu_B$ = 21.45 $\pm$ 0.76 mag arcsec$^{-2}$ for the case of spirals of
T-types 1--6 (RC3) with semitransparent disks. Among high redshift ($z \sim
0.5$) galaxies, Forbes {\it et~al.}\ (1996) concluded that the surface brightness
increases by 0.6 $\pm$ 0.1 mag with respect to local galaxies of similar mass.
This is also in agreement with Colless {\it et~al.}\ (1994). Schade {\it et~al.}\ (1996b,a)
find increases of $\sim$0.9 mag out to a redshift $z \sim 0.5$ and $1.6 \pm
0.1$ mag for disk galaxies at redshifts $0.5 < z < 1.1$, respectively (with no
correction for internal extinction). For our sample, we compare disk sizes and
luminosities for the 15 disk galaxies, as plotted in Figure~\ref{sb} (we
exclude the ``double nucleus'' galaxy from Paper~I, as its structure is complex).
The single early-type spiral (NE4-1269-1248) appears to be a ``ring galaxy''
whose profile is difficult to fit. Scale length measurements for it range from
{0\farcs 4} to {1\farcs 0}; we adopt {0\farcs 6} (with large uncertainties) as
it appears most credible. This galaxy also has a significantly larger B/D
ratio ($\sim$0.9) than that of the others ($\sim$0.1). As is normal practice,
the disk scale lengths have not been corrected to face-on values. Sources with
inclination $i \gtrsim 80$$^{\circ}$\ may be systematically in error, due to
distortion of the surface brightness profiles from non-uniform extinction at
different radii ({\it cf.,\ } Giovanelli {\it et~al.}\ 1994). Comparison is made with de
Jong's local galaxy sample, and distant galaxy measurements from Schade {\it et~al.}\
(1996a). We find our sample to have an overall offset of $0.59 \pm 0.13$ mag
with respect to local galaxies, in fairly good agreement with the offset in the
TF relation. Though the highest redshift data ($z > 0.75$) generally lie
within the locus of the data points from Schade {\it et~al.}, the median offset in
our data is significantly less. The apparent brightening at the high mass end
(seen also in the data of Forbes {\it et~al.}~1996), could be caused by a bias toward
higher-than-average luminosities among the most distant objects, as well as by
luminosity evolution.
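For a pure exponential disk, the face-on central surface brightness follows from the total magnitude and the angular scale length via $F_{tot} = 2\pi \Sigma_0 R_d^2$; the sketch below (with assumed input values, a placeholder $k$-correction, and no inclination correction) also removes the $(1+z)^4$ cosmological dimming for a schematic comparison with the Freeman value:

```python
import math

def central_sb(m_total, r_d_arcsec):
    """mu_0 [mag/arcsec^2] for a pure exponential disk: F = 2*pi*Sigma_0*R_d^2."""
    return m_total + 2.5 * math.log10(2 * math.pi * r_d_arcsec**2)

def restframe_sb(mu_obs, z, k_corr=0.0):
    """Remove (1+z)^4 cosmological dimming (= 10 log10(1+z) mag) and K(z)."""
    return mu_obs - 10 * math.log10(1 + z) - k_corr

mu0 = central_sb(22.0, 0.5)               # assumed m = 22.0, R_d = 0.5"
print(round(mu0, 2))                      # 22.49: observed central SB
print(round(restframe_sb(mu0, 0.5), 2))   # 20.73: brighter than Freeman's 21.65
```

With these assumed inputs, the rest-frame value comes out brighter than the local Freeman value, in the same sense as the $\sim$0.6 mag offset found above.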
\section{Conclusions}
In summary, we have compared a set of 16 high redshift galaxies with a local TF
relation and find a modest amount of luminosity evolution ($\Delta$M$_B
\lesssim$ 0.4). This conclusion is supported by an examination of the surface
brightness characteristics of the sample, which show evidence for evolution at
the level of $\sim$ 0.6 magnitude. We find no evidence for deviation from a
linear TF relation at lower luminosities, down to a magnitude M$_B \sim -$19.
The bluest galaxies within the sample have a slightly larger offset from the
local TF relation, which suggests (when taken in conjunction with results of
other studies) that the derived degree of luminosity evolution may depend
strongly on sample selection.
\acknowledgments
DEEP was established through the Center for Particle Astrophysics. Funding was
provided by NSF grants AST-9529098, AST-922540, and AST-9120005; NASA grants
AR-06337.08-94A, AR-06337.21-94A, GO-05994.01-94A, AR-5801.01-94A, and
AR-6402.01-95A. JG acknowledges partial support from Spanish MEC grants
PB89--124 and PB93--456 and a UCM del Amo foundation fellowship; CG funding
from an NSF Graduate Fellowship; and JDL support from NASA grant HF-1048.01-93A.
\vfill\eject
\section{Introduction}\label{Sec:Intro}
Given the agility of unmanned aerial vehicles (UAVs), they are capable of supporting compelling applications and are beginning to be deployed more broadly. Recently, the UK and Chilean authorities proposed using UAVs to deliver medical support and other essential supplies to vulnerable people in response to Covid-19 \cite{Drone:UK, Drone:Chile}. In \cite{MG:17:SC}, the authors used UAVs for image collection and high-resolution topography exploration. However, owing to the limited on-board power and a limited ability to adapt to changes in the environment, UAVs may not be fully autonomous and can only operate for short flight durations, unless remote laser-charging is used \cite{QL:16:VTM}. Moreover, when faced with challenging tasks such as topographic surveying, data collection or obstacle avoidance, existing UAV technologies cannot operate in an optimal manner.\par
Wireless networks supported by UAVs constitute a promising technology for enhancing the network performance \cite{HC:06:ICC}. The applications of UAVs in wireless networks span diverse research fields, such as wireless sensor networks (WSNs) \cite{JG:18:SAC}, caching \cite{CZ:20:CCN}, heterogeneous cellular networks \cite{HW:20:WC}, massive multiple-input multiple-output (MIMO) \cite{HH:20:VT}, disaster communications \cite{TD:19:GLOBECOM, Trung:19:IWCMC} and device-to-device (D2D) communications \cite{MM:16:WC}. For example, in \cite{Long:EAI}, UAVs were deployed to provide network coverage for people in remote areas and disaster zones. UAVs were also used for collecting data in a WSN \cite{JG:18:SAC}. Nevertheless, the benefits of UAV-aided wireless communication are critically dependent on the limited on-board power level. Thus, the resource allocation of UAV-aided wireless networks plays a pivotal role in approaching the optimal performance. Yet, the existing contributions typically assume a static environment \cite{TD:19:GLOBECOM, Trung:19:IWCMC, Minh:19:WCL} and often ignore the stringent flight time constraints of real-life applications \cite{JG:18:SAC, HW:20:WC, Xiaowei:19:VT}.
Machine learning has recently been proposed for the intelligent support of UAVs and other devices in the network \cite{HZ:20:VT, Xiao:19:VT,KK:19:Access, Khoi:19:Access, Khoi:20:Access, KL:19:VT, UC:19:WC, HH:20:VT, XL:19:VT, CW:19:VT}. Reinforcement learning (RL) is capable of searching for an optimal policy by trial-and-error learning. However, it is challenging for model-free RL algorithms, such as Q-learning to obtain an optimal strategy, while considering a large state and action space. Fortunately, with the emerging neural networks, the sophisticated combination of RL and deep learning, namely deep reinforcement learning (DRL) is eminently suitable for solving high-dimensional problems. Hence, DRL algorithms have been widely applied in fields such as robotics \cite{SG:17:ICRA}, business management \cite{QC:18:AAAI} and gaming \cite{Mnih:13}. Recently, DRL has also become popular in solving diverse problems in wireless networks thanks to their decision-making ability and flexible interaction with the environment \cite{YY:19:SAC, KK:19:Access, Khoi:19:Access, Khoi:20:Access, KL:19:VT, UC:19:WC, HH:20:VT, XL:19:VT, CW:19:VT, CZ:20:CCN, NZ:19:WC, SY:19:VT}. For example, DRL was used for solving problems in the areas of resource allocation \cite{NZ:19:WC, KK:19:Access, Khoi:19:Access}, navigation \cite{DY:18:VT, HH:20:VT} and interference management \cite{UC:19:WC}.
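To make the RL terminology above concrete, the sketch below runs tabular Q-learning (the model-free baseline mentioned) on a toy five-state chain of our own devising, using a uniform random behaviour policy, which is valid because Q-learning is off-policy; DQL replaces the table with a neural network once the state-action space grows too large for this approach:

```python
import numpy as np

# Toy chain MDP: states 0..4, actions 0 (left) / 1 (right),
# reward 1 only on reaching the terminal state 4.
N_STATES, GAMMA, ALPHA = 5, 0.9, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, 2))          # tabular action-value estimates

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, float(s2 == N_STATES - 1)

for _ in range(500):                 # episodes under a uniform random policy
    s = 0
    while s != N_STATES - 1:
        a = int(rng.integers(2))
        s2, r = step(s, a)
        # Q-learning update: bootstrap on the greedy value of the next state.
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s2]) - Q[s, a])
        s = s2

policy = np.argmax(Q, axis=1)
print(policy[:4])                    # greedy policy moves right: [1 1 1 1]
```

The learned greedy policy heads straight for the reward, illustrating the trial-and-error search for an optimal policy described above.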
\subsection{Related Contributions}\label{Sec:Works}
UAV-aided wireless networks have also been used for machine-to-machine communications \cite{HW:19:WC} and D2D scenarios in 5G \cite{Minh:19:WCL, Huy:20:CCN}, but the associated resource allocation problems remain challenging in real-life applications. Several techniques have been developed for solving resource allocation problems \cite{LL:19:WC, LX:19:IOT, KK:19:Access, Khoi:19:Access, DY:18:VT, LN:19:SPAWC}. In \cite{LL:19:WC}, the authors have conceived a multi-beam UAV communication scheme and a cooperative interference cancellation scheme for maximising the uplink sum-rate received from multiple UAVs by the base stations (BSs) on the ground. The UAVs were deployed as access points to serve several ground users in \cite{LX:19:IOT}. Then, the authors proposed successive convex programming for maximising the minimum uplink rate gleaned from all the ground users. In \cite{DY:18:VT}, the authors characterised the tradeoffs between the ground terminal transmission power and the UAV trajectory, for both straight and circular trajectories.
The issues of data collection, energy minimisation, and path planning have been considered in \cite{QW:18:WC,CZ:18:WCL, HW:18:WCL, HW:19:WC, XL:19:VT, ZW:20:ITJ,JL:20:ITJ,MS:20:WC, MH:20:C,CZ:20:C}. In \cite{CZ:18:WCL}, the authors minimised the energy consumption of the considered data collection task by jointly optimising the sensor nodes' wakeup schedule and the UAV trajectory. The authors of \cite{HW:18:WCL} proposed an efficient algorithm for the joint optimisation of the trajectory and power allocation in UAV-assisted networks to maximise the sum-rate over a specific length of time. A pair of near-optimal approaches was proposed: trajectory optimisation for a given UAV power allocation, and power allocation optimisation for a given trajectory. In \cite{HW:19:WC}, the authors introduced a communication framework for UAV-to-UAV communication under the constraints of the UAV's flight speed, location uncertainty and communication throughput. Then, a path planning algorithm was proposed for minimising the task completion time while balancing the performance versus computational complexity trade-off. However, these techniques mostly operate in offline modes and may impose excessive delay on the system. It is crucial to improve the decision-making time for meeting the stringent requirements of UAV-assisted wireless networks.
Again, machine learning has been recognised as a powerful tool of solving the high-dynamic trajectory and resource allocation problems in wireless networks. In \cite{LN:19:SPAWC}, the authors proposed a model based on the classic k-means algorithm for grouping the users into clusters and assigned a dedicated UAV to serve each cluster. By relying on their decision-making ability, DRL algorithms have been used for lending each node some degree of autonomy \cite{YY:19:SAC, KK:19:Access, Khoi:19:Access, Khoi:20:Access, KL:19:VT, CZ:20:CCN, NZ:19:WC}. In \cite{YY:19:SAC}, an optimal DRL-based channel access strategy to maximise the sum rate and $\alpha$-fairness was considered. In \cite{KK:19:Access, Khoi:19:Access}, we deployed DRL techniques for enhancing the energy-efficiency of D2D communications. In \cite{KL:19:VT}, the authors characterised the DQL algorithm for minimising the data packet loss of UAV-assisted power transfer and data collection systems. As a further advance, caching problems were considered in \cite{CZ:20:CCN} to maximise the cache success hit rate and to minimise the transmission delay. The authors designed both a centralised and a decentralised system model and used an actor-critic algorithm to find the optimal policy.
\begin{table}[h!]
\centering
\caption{A comparison with the existing literature}
\label{Tab:compare}
\newcommand{\cmark}{\tikz\fill[scale=0.4](0,.35) -- (.25,0) -- (1,.7) -- (.25,.15) -- cycle;}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
& \cite{QW:18:WC} &\cite{CZ:18:WCL}& \cite{JG:18:SAC}& \cite{KL:19:VT}&\cite{XL:19:VT}& \cite{ZW:20:ITJ} & \cite{HH:20:VT} &\cite{JL:20:ITJ} &\cite{MS:20:VT}&\cite{MS:20:WC} &\cite{MH:20:C} &\cite{CZ:20:C} & Our work \\
\hline
Trajectory design & \cmark & \cmark & & & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark \\ \hline
3D trajectory & & & & & \cmark & & & \cmark & & & \cmark & & \cmark \\ \hline
Uplink & & \cmark & \cmark & \cmark & & \cmark & & & \cmark & \cmark & \cmark & \cmark & \cmark \\ \hline
Downlink & \cmark & & & & \cmark & & & & & & \cmark & & \\ \hline
Sum-rate maximisation & \cmark & & & \cmark & \cmark & & & & \cmark & \cmark & \cmark & & \cmark \\ \hline
Energy optimisation & & \cmark & & & & \cmark & & & & & \cmark & \cmark & \\ \hline
Time minimisation & & & \cmark & \cmark & & & & \cmark & & \cmark & & & \cmark \\ \hline
Dynamic environment & & & & & \cmark & \cmark & & & \cmark & & & & \cmark \\ \hline
Simple environment & \cmark & \cmark & \cmark & & & \cmark & & & & \cmark & \cmark & \cmark & \cmark \\ \hline
Complex environment & \cmark & & \cmark & \cmark & \cmark & & \cmark & \cmark & \cmark & \cmark & \cmark & & \cmark \\ \hline
Mathematical solution & \cmark & \cmark & \cmark & & & \cmark & & \cmark & & \cmark & \cmark & \cmark & \\ \hline
Reinforcement learning & & & & \cmark & \cmark & & \cmark & & \cmark & & & & \cmark \\ \hline
Deep neural networks & & & & \cmark & & & \cmark & & \cmark & & & & \cmark \\ \hline
\end{tabular}
\end{table}
DRL algorithms have also been applied for path planning in UAV-assisted wireless communications \cite{UC:19:WC, SY:19:VT, HH:20:VT, XL:19:VT, CW:19:VT, MS:20:VT}. In \cite{UC:19:WC}, the authors proposed a DRL algorithm based on the echo state network of \cite{HJ:01:GMD} for finding the flight path, the transmission power and the associated cell in UAV-powered wireless networks. The so-called deterministic policy gradient algorithm of \cite{Lillicrap:15} was invoked for UAV-assisted cellular networks in \cite{SY:19:VT}, where the UAV's trajectory was designed for maximising the uplink sum-rate without any knowledge of the user locations and transmit powers. Moreover, in \cite{HH:20:VT}, the authors used the DQL algorithm for the UAV's navigation based on the received signal strengths estimated by a massive MIMO scheme. In \cite{XL:19:VT}, Q-learning was used for controlling the movement of multiple UAVs in a pair of scenarios, namely for static user locations and for dynamic user locations under a random walk model. However, the aforementioned contributions have not addressed the joint trajectory and data collection optimisation of UAV-assisted networks, which is a difficult research challenge. Furthermore, these existing contributions mostly neglected interference, 3D trajectories and dynamic environments.
\subsection{Contributions and Organisation}
In this paper, we consider a system model relying on a single UAV to serve several user nodes. The UAV is considered to be an information-collecting robot aiming to collect the maximum amount of data from the users over the shortest distance travelled. We conceive a DRL-based solution for finding the optimal path of a UAV, maximising a joint reward function based on the shortest flight distance and the uplink transmission rate. We contrast our proposed approach to the existing literature in Table \ref{Tab:compare}. Our main contributions are summarised as follows:
\begin{itemize}
\item The UAV system considered has stringent constraints owing to the position of the destination, the UAV's limited flight time and the communication link's constraint. The UAV's objective is to find an optimal trajectory for maximising the total network throughput, while minimising its distance travelled.
\item We propose DRL techniques for solving the above problem. The area is divided into a grid to enable fast convergence. Following its training, the UAV has the autonomy to decide on its next action at each position in the area, hence eliminating the need for human navigation. This makes UAV-aided wireless communications more reliable and practical, and optimises the resource consumption.
\item Two scenarios are considered, relying either on three or on five clusters, for quantifying the efficiency of our approach in terms of the sum-rate, the trajectory and the associated time.
\end{itemize}
The rest of our paper is organised as follows. In Section \ref{Sec:Model}, we describe our data collection system model and the problem formulation of IoT networks relying on UAVs. Then, the mathematical background of the DRL algorithms is presented in Section \ref{Sec:BG}. Deep Q-learning (DQL) is employed for finding the best trajectory and for solving our data collection problem in Section \ref{Sec:Alg1}. Furthermore, we use the dueling DQL algorithm of \cite{Wang:15} for improving the system performance and convergence speed in Section \ref{Sec:Alg2}. Next, we characterise the efficiency of the DRL techniques in Section \ref{Sec:Results}. Finally, in Section \ref{Sec:Con}, we summarise our findings and discuss our future research.
\section{System Model and Problem Formulation}\label{Sec:Model}
Consider a system consisting of a single UAV and $M$ groups of users, as shown in Fig.~\ref{fig:System}, where the UAV relying on a single antenna visits all clusters to cover all the users. The 3D coordinate of the UAV at time step $t$ is defined as $X^t = (x_0^t, y_0^t, H_0^t)$. Each cluster consists of $K$ users, whose locations are unknown and distributed randomly within the coverage radius $C$. The users move according to a random walk model with the maximum velocity $v$. The position of the $k$th user in the $m$th cluster at time step $t$ is defined as $X_{m,k}^t = (x_{m,k}^t, y_{m,k}^t)$. The UAV's objective is to find the best trajectory, while covering all the users, and to reach the dock upon completing its mission.
\begin{figure}[h!]
\centering
\subfigure{\includegraphics[width=0.5\textwidth]{Figs/System_model}}
\caption{System model of UAV-aided IoT communications.}
\label{fig:System}
\end{figure}
\subsection{Observation model}
The distance from the UAV to user $k$ in cluster $m$ at time step $t$ is given by:
\begin{equation}
d_{m, k}^t = \sqrt{(x_0^t - x_{m, k}^t) ^ 2 + (y_0^t - y_{m, k}^t) ^2 + {H_0^t} ^2}.
\end{equation}
We assume that the communication channels between the UAV and users are dominated by line-of-sight (LoS) links; thus the channel between the UAV and the $k$th user in the $m$th cluster at time step $t$ follows the free-space path loss model, which is represented as
\begin{equation}
\begin{split}
h_{m, k}^t &= \beta_0 {d_{m, k}^t} ^ {-2} \\
&= \frac{\beta_0}{(x_0^t - x_{m, k}^t) ^ 2 + (y_0^t - y_{m, k}^t) ^2 + {H_0^t}^2},
\end{split}
\end{equation}
where $\beta_0$ denotes the channel's power gain at the reference distance of $d = 1$~m.
The achievable throughput from the $k$th user in the $m$th cluster to the UAV at time $t$ if the user satisfies the distance constraint is defined as follows:
\begin{equation}
R_{m,k}^t = B \log_2 \Bigg(1+ \frac{p_{m, k}^t h_{m, k}^t}{\sum_{i \neq m}^M \sum_j^K p_{i, j}^t h_{i, j}^t + \sum_{u \neq k}^K p_{m, u}^t h_{m, u}^t + \alpha^2}\Bigg), \forall m, k,
\end{equation}
where $B$ and $\alpha^2$ are the bandwidth and the noise power, respectively. Then the total sum-rate over $T$ time steps from the $k$th user in cluster $m$ to the UAV is given by:
\begin{equation}
R_{m,k} = \int_0^T R_{m,k}^t dt, \forall m,k.
\end{equation}
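The observation model above can be sketched numerically. The following is a minimal Python sketch, with placeholder values for $\beta_0$, the bandwidth and the noise power, of the free-space channel gain and of the uplink rate of a user when all other users are treated as interference:

```python
import math

def channel_gain(uav, user, beta0=1e-5):
    """Free-space path-loss gain h = beta0 / d^2 for UAV at (x0, y0, H0)
    and a ground user at (x, y); beta0 is a placeholder reference gain."""
    d_sq = (uav[0] - user[0]) ** 2 + (uav[1] - user[1]) ** 2 + uav[2] ** 2
    return beta0 / d_sq

def uplink_rate(uav, users, powers, k, bandwidth=1e6, noise=1e-14):
    """Achievable rate of user k, treating all other users as interference."""
    signal = powers[k] * channel_gain(uav, users[k])
    interference = sum(p * channel_gain(uav, u)
                       for i, (u, p) in enumerate(zip(users, powers)) if i != k)
    sinr = signal / (interference + noise)
    return bandwidth * math.log2(1.0 + sinr)
```

As expected from the model, the gain decays with distance and the rate of a given user decreases once interfering users are added.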
\subsection{Game formulation}
Both the current location and the action taken jointly influence the rewards obtained by the UAV; thus the trial-and-error based learning task of the UAV satisfies the Markov property. We formulate the associated Markov decision process (MDP) \cite{Puterman:14} as a 4-tuple $\langle \mathcal{S}, \mathcal{A}, \mathcal{P}_{ss'}, \mathcal{R} \rangle$, where $\mathcal{S}$ is the state space of the UAV, $\mathcal{A}$ is the action space, $\mathcal{R}$ is the expected reward of the UAV and $\mathcal{P}_{ss'}$ is the probability of transition from state $s$ to state $s'$, where we have $s' = s^{t+1}| s = s^t$. Through learning, the UAV can find the optimal policy $\pi^* : \mathcal{S} \rightarrow \mathcal{A}$ for maximising the reward $\mathcal{R}$. More particularly, we formulate the trajectory and data collection game of UAV-aided IoT networks as follows:
\begin{itemize}
\item \textit{Agent}: The UAV acts like an agent interacting with the environment to find the peak of the reward.
\item \textit{State space}: We define the state space by the position of UAV as
\begin{equation}
\mathcal{S} = \{x, y, H\}.
\end{equation}
At time step $t$, the state of the UAV is defined as $s^t = ( x^t, y^t, H^t )$.
\item \textit{Action space}: The UAV at state $s^t$ can choose an action $a^t$ of the action space by following the policy at time-step $t$. By dividing the area into a grid, we can define the action space as follows:
\begin{equation}
\mathcal{A} = \{ \text{left}, \text{right}, \text{forward}, \text{backward}, \text{upward}, \text{downward}, \text{hover} \}.
\end{equation}
The UAV moves in the environment and begins collecting information when the users are in the coverage of the UAV. When the UAV has sufficient information $R_{m,k} \ge r_{min}$ from the $k$th user in the $m$th cluster, that user will be marked as collected in this mission and may not be visited by the UAV again.
\item \textit{Reward function}: In joint trajectory and data collection optimisation, we design the reward function to be dependent on both the total sum-rate of ground users associated with the UAV plus the reward gleaned when the UAV completes one route, which is formulated as follows:
\begin{equation}\label{equ:reward}
R = \frac{\beta}{MK}\left(\sum_m^M \sum_k^K P(m,k) R_{m, k}\right) + \zeta R_{plus},
\end{equation}
where $\beta$ and $\zeta$ are positive variables that represent the trade-off between the network's sum-rate and the UAV's movement, which will be described in the sequel. Here, $P(m, k) \in \{0,1\}$ indicates whether or not user $k$ of cluster $m$ is associated with the UAV, while $R_{plus}$ is the reward acquired when the UAV completes a mission by reaching the final destination. Note that the term $ \frac{\sum_m^M \sum_k^K P(m,k) R_{m, k}}{MK}$ defines the average throughput of all users.
\item \textit{Probability}: We define $\mathcal{P}_{s^t s^{t+1}}(a^t, \pi)$ as the probability of transition from state $s^t$ to state $s^{t+1}$ by taking the action $a^t$ under the policy $\pi$.
\end{itemize}
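The state and action spaces above admit a compact grid-world sketch. The following hypothetical Python step function illustrates the seven-action space on a bounded grid, assuming the UAV simply stays put when an action would take it outside the area (the boundary behaviour and grid size are illustrative assumptions, not specified by the model):

```python
# Seven actions of the action space A, mapped to grid displacements (dx, dy, dh).
ACTIONS = {
    "left": (-1, 0, 0), "right": (1, 0, 0),
    "forward": (0, 1, 0), "backward": (0, -1, 0),
    "upward": (0, 0, 1), "downward": (0, 0, -1),
    "hover": (0, 0, 0),
}

def step(state, action, grid=(10, 10, 5)):
    """Apply one action to the state (x, y, H); the UAV stays put
    if the move would leave the gridded area."""
    dx, dy, dh = ACTIONS[action]
    x, y, h = state[0] + dx, state[1] + dy, state[2] + dh
    if 0 <= x < grid[0] and 0 <= y < grid[1] and 0 <= h < grid[2]:
        return (x, y, h)
    return state
```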
At each time step $t$, the UAV chooses the action $a^t$ based on its local information to obtain the reward $r^t$ under the policy $\pi$. Then the UAV moves to the next state $s^{t+1}$ by taking the action $a^t$ and starts collecting information from the users if any available node in the network satisfies the distance constraint. Again, we use the DRL techniques to find the optimal policy $\pi^*$ for the UAV to maximise the reward attained (\ref{equ:reward}). Following the policy $\pi$, the UAV forms a chain of actions $( a^0, a^1, \dots, a^t, \dots, a^{final} )$ to reach the landing dock.
Our target is to maximise the reward expected by the UAV upon completing a single mission during which the UAV flies from the initial position over the clusters and lands at the destination. Thus, we design the trajectory reward $R_{plus}$ when the UAV reaches the destination in two different ways. Firstly, the binary reward function is defined as follows:
\begin{equation}\label{equ:Rplus1}
R_{plus} = \left\{ \begin{array}{rcl} 1 & \mbox{,} & X_{final} \in X_{target} \\
0 & \mbox{,} & \mbox{otherwise,} \end{array} \right.
\end{equation}
where $X_{final}$ and $X_{target}$ are the final position of the UAV and the destination, respectively. However, the UAV has to move a long distance to reach the final destination. It may also become trapped in a zone and fail to complete the mission. These situations lead to increased energy consumption and slower convergence. Thus, we consider the value of $R_{plus}^t$ in a different form by calculating the horizontal distance between the UAV and the final destination at time step $t$, yielding:
\begin{equation}\label{equ:Rplus2}
R^t_{plus} = \left\{ \begin{array}{rcl} 1 & \mbox{,} & X_{final} \in X_{target} \\
\Big (\exp \sqrt{(x_{target}-x_{0}^t)^2 + (y_{target} - y_{0}^t)^2} \Big)^{-1} & \mbox{,} & \mbox{otherwise.} \end{array} \right.
\end{equation}
When we design the reward function as in (\ref{equ:Rplus2}), the UAV is motivated to move ahead to reach the final destination. However, one of the disadvantages is that the UAV only moves forward. Thus, the UAV is unable to attain the best performance in terms of its total sum-rate in some environmental settings. We compare the performance of the two trajectory reward function definitions in Section \ref{Sec:Results} to evaluate the pros and cons of each approach.
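The two trajectory-reward designs of (\ref{equ:Rplus1}) and (\ref{equ:Rplus2}) may be sketched as follows, using hypothetical 2D horizontal position tuples for the UAV and the target:

```python
import math

def r_plus_binary(pos, target):
    """Binary trajectory reward: 1 only upon reaching the destination."""
    return 1.0 if pos == target else 0.0

def r_plus_shaped(pos, target):
    """Distance-shaped trajectory reward: exp(-horizontal distance)
    away from the destination, i.e. (e^dist)^{-1}, and 1 at the destination."""
    if pos == target:
        return 1.0
    dist = math.hypot(target[0] - pos[0], target[1] - pos[1])
    return math.exp(-dist)
```

The shaped variant grows as the UAV approaches the destination, which is what motivates the UAV to move ahead.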
We design the reward function as a trade-off game with the parameters $\beta, \zeta$ to make our approach more adaptive and flexible. By modifying the ratio $\beta/\zeta$, the UAV adapts to several scenarios: a) fast deployment for emergency services, b) maximising the total sum-rate, and c) maximising the number of connections between the UAV and users. Depending on the specific problem, we can adjust the values of the trade-off parameters $\beta, \zeta$ to achieve the best performance. Thus, the game formulation is defined as follows:
\begin{equation}
\begin{split}
\max R = \quad & \frac{\beta}{MK} \left(\sum_m^M \sum_k^K P(m,k) R_{m, k}\right) + \zeta R_{plus},\\
s.t. \quad&
X_{final} = X_{target},\\
&d_{m, k} \le d_{cons},\\
&R_{m,k} \ge r_{min},\\
&P(m, k) \in \{0, 1\},\\
&T \le T_{cons},\\
&\beta \ge 0, \zeta \ge 0,
\end{split}
\end{equation}
where $T$ and $T_{cons}$ are the number of steps that the UAV takes in a single mission and the maximum number of the UAV's steps given its limited power, respectively. The distance constraint $d_{m, k} \le d_{cons}$ indicates that the served $(m,k)$th user is within the admissible distance of the UAV. These stringent constraints, such as the transmission distance, the position and the flight time, make the optimisation problem challenging. Thus, we propose DRL techniques for the UAV in order to attain the optimal performance.
\section{Preliminaries}\label{Sec:BG}
In this section, we introduce the fundamental concept of Q-learning, where the so-called value function is defined by a reward of the UAV at state $s^t$ as follows:
\begin{equation}
V(s, \pi) = \mathbb{E} \bigg[ \sum_{t=0}^T \gamma^t \mathcal{R}^t (s^t, \pi) \Big| s^0 = s\bigg],
\end{equation}
where $\mathbb{E}[\cdot]$ denotes the expectation operator and $0 \le \gamma \le 1$ denotes the discount factor. The value function can be rewritten by exploiting the Markov property as follows:
\begin{equation}
V(s, \pi) = \mathbb{E} \bigg[ \mathcal{R}^t(s^t, \pi)\bigg] + \gamma \sum_{s' \in \mathcal{S}} P_{ss'}(a, \pi) V(s', \pi).
\end{equation}
In a finite game, there is always an optimal policy $\pi^*$ that satisfies the Bellman optimality equation \cite{BD:95:Book:v1}
\begin{equation}
\begin{split}
V^* (s, \pi) &= V (s, \pi^*)\\
& = \max_{a \in \mathcal{A}} \Bigg[ {\mathbb{E} \bigg[ \mathcal{R}^t(s^t, \pi^*)\bigg] + \gamma \sum_{s' \in S} P_{ss'}(a, \pi^*) V(s', \pi^*)} \Bigg] .
\end{split}
\end{equation}
The action-value function is obtained, when the agent at state $s^t$ takes action $a^t$ and receives the reward $r^t$ under the agent policy $\pi$. The optimal Q-value can be formulated as:
\begin{equation}\label{equ:Q}
Q^*(s, a, \pi) = {\mathbb{E} \bigg[ \mathcal{R}^t(s^t, \pi^*)\bigg] + \gamma \sum_{s' \in S} P_{ss'}(a, \pi^*) V(s', \pi^*)}.
\end{equation}
The optimal policy $\pi^*$ can be obtained from $Q^*(s, a, \pi)$ as follows:
\begin{equation}\label{equ:V}
V^*(s, \pi) = \max_{a \in \mathcal{A}} Q(s, a, \pi).
\end{equation}
From (\ref{equ:Q}) and (\ref{equ:V}), we have
\begin{equation}
\begin{split}
Q^*(s, a, \pi) \; & = \mathbb{E} \bigg[ \mathcal{R}^t(s^t, \pi^*)\bigg] + \gamma \sum_{s' \in S} P_{ss'}(a, \pi^*) \max_{a' \in \mathcal{A}} Q(s', a', \pi), \\
&= \mathbb{E} \bigg[ \mathcal{R}^t(s^t, \pi^*)+ \gamma \max_{a' \in \mathcal{A}} Q(s', a', \pi)\bigg] ,
\end{split}
\end{equation}
where the agent takes the action $a' = a^{t+1}$ at state $s^{t+1}$.
Through learning, the Q-value is updated based on the available information as follows:
\begin{equation}\label{equ:Qupdate}
\begin{split}
Q(s, a, \pi) = \; Q(s, a, \pi) + \alpha \bigg[ \mathcal{R}^t(s^t, \pi^*)
+ \gamma \max_{a' \in \mathcal{A}} Q(s', a', \pi) - Q(s, a, \pi) \bigg],
\end{split}
\end{equation}
where $\alpha$ denotes the learning rate of the Q-value update.
In RL algorithms, it is challenging to balance \textit{exploration} and \textit{exploitation} for appropriately selecting the action. The most common approach relies on the $\epsilon$-greedy policy as the action selection mechanism, which is given by:
\begin{equation}\label{equ:ep}
a = \left\{ \begin{array}{ll} \argmax_{a \in \mathcal{A}} Q(s, a, \pi) & \mbox{with probability} \; \epsilon, \\
\mbox{a random action} & \mbox{with probability} \; 1- \epsilon. \end{array} \right.
\end{equation}
Upon assuming that each episode lasts $T$ steps, the action $a^t$ at time step $t$ is selected by following the $\epsilon$-greedy policy of (\ref{equ:ep}). The UAV at state $s^t$ communicates with the ground user nodes if the distance constraint $d_{m, k} \le d_{cons}$ is satisfied. Following the information transmission phase, the user nodes are marked as collected users and may not be revisited later during that mission. Then, after obtaining the immediate reward $r(s^t, a^t)$, the agent at state $s^t$ takes the action $a^t$ to move to state $s^{t+1}$ and updates the Q-value function according to (\ref{equ:Qupdate}). Each episode ends when the UAV reaches the final destination or when the flight-duration constraint is violated.
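The tabular update of (\ref{equ:Qupdate}) and the $\epsilon$-greedy rule of (\ref{equ:ep}) can be sketched as follows; note that, following the convention used in (\ref{equ:ep}), the greedy action is taken with probability $\epsilon$ here, and the learning rate and discount factor are illustrative values:

```python
import random
from collections import defaultdict

Q = defaultdict(float)  # Q[(state, action)], zero-initialised Q-table

def select_action(state, actions, eps):
    """Epsilon-greedy selection: greedy with probability eps, random otherwise."""
    if random.random() < eps:
        return max(actions, key=lambda a: Q[(state, a)])
    return random.choice(actions)

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update step."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```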
\section{An effective deep reinforcement learning approach for UAV-assisted IoT networks}\label{Sec:Alg1}
In this section, we conceive the DQL algorithm for trajectory and data collection optimisation in UAV-aided IoT networks. However, the Q-learning technique typically falters for large state and action spaces due to its excessive Q-table size. Thus, instead of relying on the Q-table of Q-learning, we use deep neural networks to represent the relationship between the action and state spaces. Furthermore, we employ a pair of techniques for stabilising the neural network's performance in our DQL algorithm as follows:
\begin{itemize}
\item \textit{Experience replay buffer}: Instead of using only the current experience, we use a so-called replay buffer $\mathcal{B}$ to store the transitions $(s, a, r, s')$ for supporting the neural network in overcoming any potential instability. When the buffer $\mathcal{B}$ is filled with transitions, we randomly select a mini-batch of $K$ samples for training the networks. The finite size of the buffer $\mathcal{B}$ ensures that it stays up-to-date, so that the neural networks also learn from new samples.
\item \textit{Target networks}: If we used the same network for calculating both the state-action value $Q$ and the target value, the target could shift dramatically during the training phase. Thus, we employ a target network $Q'$ as the target value estimator. After a number of iterations, the parameters of the target network $Q'$ are updated by copying those of the network $Q$.
\end{itemize}
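A minimal sketch of such an experience replay buffer, assuming uniform mini-batch sampling and a hypothetical capacity, might look as follows:

```python
import random
from collections import deque

class ReplayBuffer:
    """Finite FIFO replay buffer storing (s, a, r, s') transitions."""
    def __init__(self, capacity=10000):
        # deque with maxlen silently discards the oldest transitions,
        # which keeps the buffer up-to-date as new samples arrive.
        self.buffer = deque(maxlen=capacity)

    def store(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size):
        """Uniformly sample a mini-batch for one training step."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def __len__(self):
        return len(self.buffer)
```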
\begin{algorithm}[t!]
\caption{The deep Q-learning algorithm for trajectory and data collection optimisation in UAV-aided IoT networks}
\begin{algorithmic}[1]
\label{alg:DQL}
\STATE Initialise the network $Q$ and the target network $Q'$ with the random parameters $\theta$ and $\theta'$, respectively
\STATE Initialise the replay memory pool $\mathcal{B}$
\FOR{episode = $1,\dots, L$}
\STATE Receive initial observation state $s^0$
\WHILE{$X_{final} \notin X_{target}$ and $T \le T_{cons}$}
\STATE Obtain the action $a^t$ of the UAV according to the $\epsilon$-greedy mechanism (\ref{equ:ep})
\STATE Execute the action $a^t$ and estimate the reward $r^t$ according to (\ref{equ:reward})
\STATE Observe the next state $s^{t+1}$
\STATE Store the transition $(s^t, a^t, r^t, s^{t+1})$ in the replay buffer $\mathcal{B}$
\STATE Randomly select a mini-batch of $K$ transitions $(s^k, a^k, r^k, s^{k+1})$ from $\mathcal{B}$
\STATE Update the network parameters using gradient descent to minimise the loss
\begin{equation}
\mathbb{L}(\theta) = \mathbb{E}_{s, a, r, s'} \Bigg[\bigg(y^{DQL} - Q(s, a; \theta)\bigg)^2 \Bigg],
\end{equation}
The gradient update is
\begin{equation}
\nabla_\theta \mathbb{L}(\theta) = \mathbb{E}_{s, a, r, s'} \Bigg[\bigg(y^{DQL} - Q(s, a; \theta)\bigg)\nabla_\theta Q(s, a;\theta) \Bigg],
\end{equation}
\STATE Update the state $s^t = s^{t+1}$
\STATE Update the target network parameters after a number of iterations as $\theta' = \theta$
\ENDWHILE
\ENDFOR
\end{algorithmic}
\end{algorithm}
The neural network parameters are updated by minimising the loss function defined as follows:
\begin{equation}\label{equ:DQLloss}
\mathbb{L}(\theta)= \mathbb{E}_{s, a, r, s'} \Bigg[ \bigg(y^{DQL} - Q(s, a; \theta)\bigg)^2 \Bigg],
\end{equation}
where $\theta$ denotes the parameters of the network $Q$ and the target is given by
\begin{equation}
y^{DQL} = \left\{ \begin{array}{ll} r^t & \mbox{if terminated at} \; s^{t+1}, \\
r^t + \gamma \max_{a' \in \mathcal{A}} Q'(s', a'; \theta') & \mbox{otherwise.} \end{array} \right.
\end{equation}
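The target $y^{DQL}$ of the loss in (\ref{equ:DQLloss}) with terminal-state masking may be sketched as follows, where `q_next_values` stands for the target network's outputs $Q'(s', a'; \theta')$ over all actions (the function name and signature are illustrative):

```python
def dql_target(r, q_next_values, done, gamma=0.9):
    """Bootstrapped DQL target: plain reward at terminal states,
    reward plus discounted max target-network Q-value otherwise."""
    if done:  # episode terminated at s^{t+1}: no bootstrap term
        return r
    return r + gamma * max(q_next_values)
```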
The details of the DQL approach conceived for our joint trajectory and data collection trade-off game designed for UAV-aided IoT networks are presented in Alg. \ref{alg:DQL}, where $L$ denotes the number of episodes. Moreover, in this paper, we design the reward obtained in each step to assume one of two different forms and compare them in our simulation results. Firstly, we calculate the difference between the current and the previous reward of the UAV as follows:
\begin{equation}\label{equ:R1}
r_1^t (s^t, a^t) = r^t (s^t, a^t) - r^{t-1}(s^{t-1}, a^{t-1}).
\end{equation}
Secondly, we design the total episode reward as the accumulation of all immediate rewards of each step within one episode as
\begin{equation}\label{equ:R2}
r_2^t (s^t, a^t) = \sum^t_{i=0} r_1^i(s^i, a^i).
\end{equation}
\section{Deep reinforcement learning approach for UAV-assisted IoT networks: A dueling deep Q-learning approach }\label{Sec:Alg2}
According to Wang \textit{et al.} \cite{Wang:15}, the standard Q-learning algorithm often falters, because it estimates the value of all state-action pairs, even though it is unnecessary to estimate the value of each action choice in a particular state. For example, in our environment setting, the UAV only has to consider moving either to the left or to the right when it hits the boundaries. Thus, we can improve the convergence speed by avoiding visiting all state-action pairs. Instead of using the Q-value function of the conventional DQL algorithm, the dueling neural network of \cite{Wang:15} is introduced for improving the convergence rate and stability. The so-called advantage function $A (s, a) = Q(s, a) - V( s)$, which is related both to the value function and to the Q-value function, quantifies the importance of each action in each state.
\begin{algorithm}[t!]
\caption{The dueling deep Q-learning algorithm for trajectory and data collection optimisation in UAV-aided IoT networks}
\begin{algorithmic}[1]
\label{alg:DuelingDQL}
\STATE Initialise the network $Q$ and the target network $Q'$ with the random parameters, $\theta$ and $\theta'$, respectively
\STATE Initialise the replay memory pool $\mathcal{B}$
\FOR{episode = $1,\dots, L$}
\STATE Receive the initial observation state $s^0$
\WHILE{$X_{final} \notin X_{target}$ and $T \le T_{cons}$}
\STATE Obtain the action $a^t$ of the UAV according to the $\epsilon$-greedy mechanism (\ref{equ:ep})
\STATE Execute the action $a^t$ and estimate the reward $r^t$ according to (\ref{equ:reward})
\STATE Observe the next state $s^{t+1}$
\STATE Store the transition $(s^t, a^t, r^t, s^{t+1})$ in the replay buffer $\mathcal{B}$
\STATE Randomly select a mini-batch of $K$ transitions $(s^k, a^k, r^k, s^{k+1})$ from $\mathcal{B}$
\STATE Estimate the Q-value function by combining the two streams as follows:
\begin{equation}
\begin{split}
Q(s, a; \; \theta, \theta_A, \theta_V) = V(s ;\theta_V) + \Bigg( A(s, a; \theta_A) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a'; \theta_A) \Bigg).
\end{split}
\end{equation}
\STATE Update the network parameters using gradient descent to minimise the loss
\begin{equation}
\mathbb{L} (\theta) = \mathbb{E}_{s, a, r, s'} \Bigg[ \bigg(y^{DuelingDQL} - Q(s, a; \theta, \theta_A, \theta_V)\bigg)^2 \Bigg],
\end{equation}
\STATE where
\begin{equation}
y^{DuelingDQL} = r^t +\gamma \max_{a' \in \mathcal{A}} Q'(s', a'; \theta', \theta_A, \theta_V).
\end{equation}
\STATE Update the state $s^t = s^{t+1}$
\STATE Update the target network parameters after a number of iterations as $\theta' = \theta$
\ENDWHILE
\ENDFOR
\end{algorithmic}
\end{algorithm}
The idea of the dueling deep network is to combine two streams, namely the value function stream and the advantage function stream, for estimating the single output $Q$-function. One stream of fully-connected layers estimates the value function $V(s; \theta_V)$, while the other stream outputs a vector $A(s, a; \theta_A)$, where $\theta_A$ and $\theta_V$ represent the parameters of the two streams. The $Q$-function can be obtained by combining the two streams' outputs as follows:
\begin{equation}\label{equ:A}
Q(s, a; \theta, \theta_A, \theta_V) = V( s; \theta_V) + A(s, a; \theta_A).
\end{equation}
Equation (\ref{equ:A}) applies to all $(s, a)$ instances; thus, we have to replicate the scalar $V(s; \theta_V)$ $|\mathcal{A}|$ times to form a matrix. However, $Q(s, a; \theta, \theta_A, \theta_V)$ is only a parameterised estimator of the true Q-function; thus, we cannot uniquely recover the value function $V$ and the advantage function $A$. Therefore, (\ref{equ:A}) results in poor practical performance when used directly. To address this problem, we can force the advantage function estimator to have zero advantage at the chosen action by combining the two streams as follows:
\begin{equation}\label{equ:Amax}
\begin{split}
Q(s, a; \theta, \theta_A, \theta_V) = V(s ;\theta_V) + \bigg( A(s, a; \theta_A) - \max_{a' \in |\mathcal{A}|} A(s, a'; \theta_A) \bigg).
\end{split}
\end{equation}
Intuitively, for $a^* = \argmax_{a' \in \mathcal{A}}Q(s, a'; \theta, \theta_A, \theta_V) = \argmax_{a' \in \mathcal{A}} A(s, a'; \theta_A)$, we have $\linebreak Q(s, a^*; \theta, \theta_A, \theta_V) = V(s; \theta_V)$. Hence, the stream $V(s; \theta_V)$ estimates the value function, while the other stream is the advantage function estimator. We can transform (\ref{equ:Amax}) using an average instead of the \textit{max} operator as follows:
\begin{equation}\label{equ:Aavg}
\begin{split}
Q(s, a; \theta, \theta_A, \theta_V) = V(s ;\theta_V) + \Bigg( A(s, a; \theta_A) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a'; \theta_A) \Bigg).
\end{split}
\end{equation}
Now, we can solve the identifiability problem by subtracting the mean as in (\ref{equ:Aavg}). Based on (\ref{equ:Aavg}), we propose a dueling DQL algorithm for our joint trajectory and data collection problem in UAV-assisted IoT networks, as presented in Alg. \ref{alg:DuelingDQL}. Note that estimating $V(s; \theta_V)$ and $A(s, a ; \theta_A)$ does not require any extra supervision; they are computed automatically.
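The mean-subtracted aggregation of (\ref{equ:Aavg}) amounts to the following one-line combination of the two streams' outputs, sketched here with plain Python lists standing in for the network activations:

```python
def dueling_q(v, advantages):
    """Combine the scalar value stream v with the advantage vector
    by subtracting the mean advantage, as in the dueling aggregation."""
    mean_adv = sum(advantages) / len(advantages)
    return [v + a - mean_adv for a in advantages]
```

A useful sanity check of this aggregation is that the mean of the resulting Q-values equals the value estimate $V(s; \theta_V)$, which is what makes the decomposition identifiable.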
\section{Simulation Results}\label{Sec:Results}
In this section, we present our simulation results characterising the joint optimisation problem of UAV-assisted IoT networks. To highlight the efficiency of our proposed model and of the DRL methods, we consider a pair of scenarios: a simple one having three clusters and a more complex one with five clusters in the coverage area. We use Tensorflow 1.13.1 \cite{Abadi:16} and the Adam optimiser of \cite{DJ:14} for training the neural networks. All the other parameters are provided in Table \ref{tab:Params}.
\begin{table}[t!]
\renewcommand{\arraystretch}{1.2}
\caption{SIMULATION PARAMETERS}
\label{tab:Params}
\centering
\begin{tabular}{l|l}
\hline
Parameter & Value \\
\hline
Bandwidth ($W$) & $1$ MHz \\
UAV transmission power & $5$ W \\
Start position of the UAV & $(0, 0, 200)$\\
Discount factor & $\gamma = 0.9$\\
Max number of users per cluster & $10$\\
Noise power & $\alpha^2 = -110$ dBm \\
Reference channel power gain & $\beta_0 = -50$ dB\\
Path-loss exponent & $2$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}[t!]
\centering
\subfigure{\includegraphics[width=0.65\textwidth]{Figs/Trajectory}}
\caption{Trajectory obtained by using our DQL algorithm}
\label{fig:Traj}
\end{figure}
In Fig. (\ref{fig:Traj}), we present the trajectory obtained after training with the DQL algorithm in the $5$-cluster scenario. The green circles and blue dots represent the clusters' coverage and the user nodes, respectively, while the red dots and black triangles represent the UAV's state after taking each action. The UAV starts at $(0, 0)$, visits about $40$ users, and lands at the destination denoted by the black square. In such a complex environment, the UAV cannot be expected to visit all users while satisfying the flight-duration and power-level constraints.
\subsection{Expected reward}
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[width=0.65\textwidth]{Figs/ALLreward3clusters2}}
\subfigure[]{\includegraphics[width=0.65\textwidth]{Figs/ALLreward3clusters}}
\caption{The performance when using the DQL and dueling DQL algorithms with 3 clusters while considering different $\beta$/$\zeta$ values}
\label{fig:reward3clusters}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure[With (\ref{equ:Rplus1})]{\includegraphics[width=0.65\textwidth]{Figs/ALLreward5clusters}}
\subfigure[With (\ref{equ:Rplus2})]{\includegraphics[width=0.65\textwidth]{Figs/ALLreward5clusters2}}
\caption{The expected reward when using the DQL and dueling DQL algorithms in the 5-cluster scenario}
\label{fig:reward5clusters}
\end{figure}
For purposes of comparison, we run each algorithm five times in five different environmental settings and take the average when drawing the figures. Firstly, we compare the reward obtained following (\ref{equ:reward}). Let us consider the $3$-cluster scenario and $\beta/\zeta = 2:1$ in Fig. (\ref{fig:reward3clusters}a), where the DQL and dueling DQL algorithms using the exponential function (\ref{equ:Rplus2}) reach the best performance. When using the exponential trajectory design function (\ref{equ:Rplus2}), the performance converges faster than that of the DQL and dueling DQL methods using the binary trajectory function (\ref{equ:Rplus1}). In addition, in Fig. (\ref{fig:reward3clusters}b), we compare the performance of the DQL and dueling DQL techniques using different $\beta/\zeta$ values. The average performance of the dueling DQL algorithm is better than that of the DQL algorithm. Furthermore, the results obtained using the exponential function (\ref{equ:Rplus2}) are better than those using the binary function (\ref{equ:Rplus1}).
Furthermore, we compare the rewards obtained by the DQL and dueling DQL algorithms in complex scenarios with $5$ clusters and $50$ user nodes in Fig. (\ref{fig:reward5clusters}). The performance when using the episode reward (\ref{equ:R2}) is better than when using the immediate reward (\ref{equ:R1}) for both trajectory designs relying on the DQL and dueling DQL algorithms. In Fig. (\ref{fig:reward5clusters}a), we compare the performance in conjunction with the binary trajectory design, while in Fig. (\ref{fig:reward5clusters}b) the exponential trajectory design is considered. For $\beta/\zeta = 1:1$, the rewards obtained by the DQL and dueling DQL are similar and stable after about $400$ episodes. When using the exponential function (\ref{equ:Rplus2}), the dueling DQL algorithm reaches the best performance. Moreover, the convergence of the dueling DQL technique is faster than that of the DQL algorithm.
\begin{figure}[h!]
\centering
\subfigure{\includegraphics[width=0.65\textwidth]{Figs/ALLtradeoff5clusters}}
\caption{The performance when using the DQL and dueling DQL algorithms with 5 clusters and different $\beta/\zeta$ values}
\label{fig:reward5clusters2}
\end{figure}
In Fig. (\ref{fig:reward5clusters2}), we compare the performance of the DQL and of the dueling DQL algorithms while considering different $\beta/\zeta$ parameter values. The dueling DQL algorithm attains higher rewards for all the $\beta/\zeta$ pairs considered. In addition, when using the exponential function (\ref{equ:Rplus2}), both proposed algorithms show better performance than those using the binary function (\ref{equ:Rplus1}) if $\beta/\zeta \le 1:1$, but become less effective when $\beta/\zeta$ is set higher.
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[width=0.65\textwidth]{Figs/DQLreward5clusters}}
\subfigure[]{\includegraphics[width=0.65\textwidth]{Figs/DQLtradeoff5clusters}}
\caption{The expected reward when using the DQL algorithm with 5 clusters and different reward function settings}
\label{fig:reward5clusters3}
\end{figure}
\begin{figure}[h!]
\centering
\subfigure{\includegraphics[width=0.65\textwidth]{Figs/Dueltradeoff5clusters2}}
\caption{The performance when using the dueling DQL with 5 clusters, and different $\beta/\zeta$ values}
\label{fig:reward5clusters4}
\end{figure}
We compare the performance of the DQL and of the dueling DQL algorithm using different reward function settings in Fig. (\ref{fig:reward5clusters3}) and in Fig. (\ref{fig:reward5clusters4}), respectively. The DQL algorithm reaches its best performance when using the episode reward (\ref{equ:R2}) in Fig. (\ref{fig:reward5clusters3}a), while the fastest convergence is achieved by using the exponential function (\ref{equ:Rplus2}). When $\beta/\zeta \ge 1:1$, the DQL algorithm relying on the episode reward function (\ref{equ:R2}) outperforms that using the immediate reward function (\ref{equ:R1}) in Fig. (\ref{fig:reward5clusters3}b). The reward (\ref{equ:reward}) using the exponential trajectory design (\ref{equ:Rplus2}) performs better than that using the binary trajectory design (\ref{equ:Rplus1}) for all the $\beta/\zeta$ values. Similar results are observed when using the dueling DQL algorithm in Fig. (\ref{fig:reward5clusters4}): the immediate reward function (\ref{equ:R1}) is less effective than the episode reward function (\ref{equ:R2}).
\subsection{Throughput comparison}
\begin{figure}[h!]
\centering
\subfigure[With (\ref{equ:Rplus1})]{\includegraphics[width=0.65\textwidth]{Figs/Throughput3clusters}}
\subfigure[]{\includegraphics[width=0.65\textwidth]{Figs/ALLthroughput3clusters}}
\caption{The network's sum-rate when using the DQL and dueling DQL algorithms with 3 clusters}
\label{fig:throughput3clusters}
\end{figure}
In (\ref{equ:reward}), we consider two elements: the trajectory cost and the average throughput. In order to quantify the communication efficiency, we compare the total throughput in different scenarios. In Fig. (\ref{fig:throughput3clusters}), the performance of the DQL algorithm associated with several $\beta/\zeta$ values is considered, while using the binary trajectory function (\ref{equ:Rplus1}), the episode reward (\ref{equ:R2}) and $3$ clusters. The throughput obtained for $\beta/\zeta = 1:1$ is higher than that of the others, and the performance degrades as $\beta$ increases. However, when comparing with Fig. (\ref{fig:reward3clusters}b), we observe that in some scenarios the UAV got stuck and could not find its way to the destination, which increases both the flight time and the distance travelled. More details are shown in Fig. (\ref{fig:throughput3clusters}b), where we compare the expected throughput of both the DQL and dueling DQL algorithms. The best throughput is achieved when using the dueling DQL algorithm with $\beta/\zeta = 1:1$ in conjunction with (\ref{equ:Rplus1}), which is higher than the peak of the DQL method with $\beta/\zeta = 1:2$.
\begin{figure}[h!]
\centering
\subfigure[With (\ref{equ:Rplus1}), (\ref{equ:R2})]{\includegraphics[width=0.65\textwidth]{Figs/Throughput5clusters}}
\subfigure[With (\ref{equ:Rplus2}), (\ref{equ:R2})]{\includegraphics[width=0.65\textwidth]{Figs/Throughput5clusters4}}
\caption{The obtained total throughput when using the DQL algorithm with 5 clusters}
\label{fig:throughput5clusters}
\end{figure}
In Fig. (\ref{fig:throughput5clusters}), we compare the throughput of the different techniques in the $5$-cluster scenario. Let us now consider the binary trajectory design function (\ref{equ:Rplus1}) in Fig. (\ref{fig:throughput5clusters}a), where the DQL algorithm achieves the best performance using $\beta/\zeta = 1:1$ and $\beta/\zeta = 2:1$. There is only a slight difference between the DQL configurations when using the exponential trajectory design function (\ref{equ:Rplus2}), as shown in Fig. (\ref{fig:throughput5clusters}b).
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[width=0.65\textwidth]{Figs/Throughput5clusters3}}
\subfigure[]{\includegraphics[width=0.65\textwidth]{Figs/Duelthroughput5clusters}}
\caption{The obtained throughput when using the DQL and dueling DQL algorithms in the 5-cluster scenario}
\label{fig:throughput5clusters2}
\end{figure}
In Fig. (\ref{fig:throughput5clusters2}) and Fig. (\ref{fig:throughput5clusters3}), we compare the throughput of different $\beta/\zeta$ pairs. The DQL algorithm reaches the optimal throughput by trial-and-error learning; hence it is important to carefully design the reward function to avoid excessive offline training. As shown in Fig. (\ref{fig:throughput5clusters2}), the DQL and dueling DQL algorithms exhibit reasonable stability for several $\beta/\zeta \le 1:1$ pairs as well as reward functions. While similar expected rewards can be achieved with different reward settings in Fig. (\ref{fig:reward5clusters3}), the throughput degrades as $\beta/\zeta$ increases. In contrast, with higher $\beta$ values the UAV can finish its mission faster; thus there is a trade-off, and an appropriate $\beta/\zeta$ value can be chosen for the specific purpose at hand. When we employ the DQL and the dueling DQL algorithms with the episode reward (\ref{equ:R2}), the throughput attained is higher than that using the immediate reward (\ref{equ:R1}) for the different $\beta/\zeta$ values.
\begin{figure}[h!]
\centering
\subfigure[With (\ref{equ:Rplus2})]{\includegraphics[width=0.65\textwidth]{Figs/ALLthroughput5clusters}}
\subfigure[With (\ref{equ:R2})]{\includegraphics[width=0.65\textwidth]{Figs/ALLthroughput5clusters2}}
\caption{The expected throughput when using the DQL and dueling DQL algorithms with 5 clusters}
\label{fig:throughput5clusters3}
\end{figure}
Furthermore, we compare the expected throughput of the DQL and of the dueling DQL algorithm when using the exponential trajectory design (\ref{equ:Rplus2}) in Fig. (\ref{fig:throughput5clusters3}a) and the episode reward (\ref{equ:R2}) in Fig. (\ref{fig:throughput5clusters3}b). In Fig. (\ref{fig:throughput5clusters3}a), the dueling DQL method outperforms the DQL algorithm for almost all $\beta/\zeta$ values under both functions (\ref{equ:R1}) and (\ref{equ:R2}). When we use the episode reward (\ref{equ:R2}), the obtained throughput is stable across the different $\beta/\zeta$ values. The throughput attained using the exponential function (\ref{equ:Rplus2}) is higher than that using the binary trajectory (\ref{equ:Rplus1}), and that using the episode reward (\ref{equ:R2}) is higher than that using the immediate reward (\ref{equ:R1}). The best performance is achieved when using the dueling DQL algorithm with (\ref{equ:Rplus2}) and (\ref{equ:R2}). However, in some scenarios better performance can be achieved with different algorithmic settings, as seen in Fig. (\ref{fig:throughput3clusters}b) and Fig. (\ref{fig:throughput5clusters2}a). Thus, there is a trade-off governing the choice of algorithm and function design.
\subsection{Parametric Study}
\begin{figure}[t!]
\centering
\subfigure{\includegraphics[width=0.65\textwidth]{Figs/gammaepsilon}}
\caption{The performance when using the DQL algorithm with different discount factors, $\gamma$, and exploration factors, $\epsilon$}
\label{fig:exp}
\end{figure}
In Fig. (\ref{fig:exp}), we compare the performance of our DQL technique using different discount factors $\gamma$ and exploration factors $\epsilon$ in our $\epsilon$-greedy method. The DQL algorithm achieves the best performance with $\gamma = 0.9$ and $\epsilon = 0.9$ in the $5$-cluster scenario of Fig.~(\ref{fig:exp}). Balancing \textit{exploration} and \textit{exploitation} when choosing actions is quite challenging if a steady performance of the DQL algorithm is to be maintained. Based on the results of Fig. (\ref{fig:exp}), we opted for $\gamma = 0.9$ and $\epsilon = 0.9$ in our algorithmic setting.
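For clarity, the $\epsilon$-greedy rule can be sketched as follows. Note that this sketch assumes the common convention in which $\epsilon$ denotes the exploration probability; conventions differ between implementations, so this is illustrative rather than a statement of our exact setting:

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Epsilon-greedy action selection: with probability epsilon pick a
    random action (exploration), otherwise pick the action with the
    highest current Q-estimate (exploitation)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                        # explore
    return max(range(len(q_values)), key=q_values.__getitem__)     # exploit

# With epsilon = 0 the choice is purely greedy
action = epsilon_greedy([0.1, 0.5, 0.2], epsilon=0.0)
```

In practice, $\epsilon$ is often annealed over the training episodes so that exploration dominates early and exploitation dominates later.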
\begin{figure}[h!]
\centering
\subfigure{\includegraphics[width=0.65\textwidth]{Figs/Batchsize5clusters}}
\caption{The performance when using the DQL algorithm in the $5$-cluster scenario with different batch sizes, $K$}
\label{fig:batch3clusters}
\end{figure}
Next, we compare the expected reward for different mini-batch sizes, $K$. In the $5$-cluster scenario of Fig. (\ref{fig:batch3clusters}), the DQL achieves the optimal performance with a batch size of $K = 32$. There is a slight difference in terms of convergence speed, with $K = 32$ being the fastest. Overall, we set the mini-batch size to $K = 32$ for our DQL algorithm.
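The mini-batch sampling from the replay memory can be sketched as follows; this is a minimal illustration, where the capacity and the stored transition fields are placeholders rather than our actual configuration:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience replay: old transitions are evicted
    automatically, and mini-batches of size K are drawn uniformly at
    random (without replacement) for each gradient step."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, k):
        return random.sample(self.buffer, k)

# Fill the buffer with placeholder transitions, then draw a mini-batch
buf = ReplayBuffer()
for t in range(100):
    buf.push((t, 'state', 'action', 'reward'))
batch = buf.sample(32)   # K = 32, the batch size adopted in our experiments
```

Sampling uniformly from the replay memory decorrelates consecutive transitions, which stabilises the gradient updates of the Q-network.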
\begin{figure}[h!]
\centering
\subfigure{\includegraphics[width=0.65\textwidth]{Figs/lr5clusters}}
\caption{The performance when using the DQL algorithm with different learning rates}
\label{fig:lr}
\end{figure}
Fig. (\ref{fig:lr}) shows the performance of the DQL algorithm with different learning rates for updating the neural network parameters in the $5$-cluster scenario. When the learning rate is as high as $\alpha = 0.01$, the fast pace of the network updates may result in fluctuating performance. Moreover, when $\alpha = 0.0001$ or $\alpha = 0.00001$, convergence is slower and the algorithm may get stuck in a local optimum instead of reaching the global optimum. Thus, based on our experiments, we opted for a learning rate of $\alpha = 0.001$ for the algorithms.
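The qualitative effect of the step size can be illustrated on a toy quadratic objective; this is only a conceptual sketch of why overly large learning rates fluctuate or diverge while overly small ones crawl, not a depiction of our network training:

```python
def gradient_descent(lr, steps=50, x0=1.0):
    """Minimise f(x) = x^2 with a fixed step size lr; the gradient is 2x,
    so each step multiplies x by (1 - 2*lr)."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

stable = gradient_descent(lr=0.1)      # shrinks towards the optimum at 0
unstable = gradient_descent(lr=1.1)    # overshoots, oscillates and diverges
crawling = gradient_descent(lr=0.00001)  # barely moves in 50 steps
```

The same trade-off governs the choice of $\alpha$ for the Q-network: too large a step destabilises training, too small a step slows convergence.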
\section{Conclusion}\label{Sec:Con}
In this paper, a DRL technique has been proposed for jointly optimising the flight trajectory and the data collection performance of UAV-assisted IoT networks. The optimisation problem has been formulated to balance the flight time and the total throughput while guaranteeing the quality-of-service constraints. Bearing in mind the limited UAV power level and the associated communication constraints, we proposed a DRL technique for maximising the throughput while the UAV moves along the shortest path to its destination. Both the DQL and the dueling DQL techniques, which have a low computational complexity, have been conceived. Our simulation results showed the efficiency of our techniques in both simple and complex environmental settings.
\bibliographystyle{IEEEtran}
\section{Code}
\label{sec:code}
\subsection{Data Collection}
\label{sec:collectionCode}
\begin{lstlisting}[language=Java]
package com.example.readsensors;
import androidx.annotation.RequiresApi;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import android.content.BroadcastReceiver;
import android.content.ContentResolver;
import android.content.ContentValues;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.content.pm.PackageManager;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.media.MediaRecorder;
import android.net.Uri;
import android.net.wifi.ScanResult;
import android.net.wifi.WifiInfo;
import android.net.wifi.WifiManager;
import android.net.wifi.rtt.RangingRequest;
import android.net.wifi.rtt.RangingResult;
import android.net.wifi.rtt.RangingResultCallback;
import android.net.wifi.rtt.WifiRttManager;
import android.os.Build;
import android.os.Bundle;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.os.Environment;
import android.provider.MediaStore;
import android.util.Log;
import android.view.View;
import android.widget.ArrayAdapter;
import android.widget.TextView;
import android.widget.Toast;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;
import java.util.concurrent.Executor;
public class MainActivity extends AppCompatActivity implements SensorEventListener {
private SensorManager senSensorManager;
private Sensor senAccelerometer;
public static String data = "X, Y, Z, Basement, Kitchen, Upstairs, Dining Room" + "\r\n";
ArrayList<Integer> xArray = new ArrayList<Integer>();
ArrayList<Integer> yArray = new ArrayList<Integer>();
ArrayList<Integer> zArray = new ArrayList<Integer>();
public static float x;
public static float y;
public static float z;
public static String rssi;
public static boolean record = false;
public static String D1;
public static String D2;
public static String D3;
public static String D4;
public static int save = 0;
public static File audiofile;
public static long current;
long start;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
senSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
senAccelerometer = senSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
senSensorManager.registerListener(this, senSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER), SensorManager.SENSOR_DELAY_NORMAL);
}
Context context;
MediaRecorder recorder = new MediaRecorder();
@Override
public void onSensorChanged(SensorEvent event) {
Sensor mySensor = event.sensor;
if (mySensor.getType() == Sensor.TYPE_ACCELEROMETER) {
x = event.values[0];
y = event.values[1];
z = event.values[2];
TextView textViewx = findViewById(R.id.textViewX);
textViewx.setText(new String(String.valueOf(x)));
TextView textViewy = findViewById(R.id.textViewY);
textViewy.setText(new String(String.valueOf(y)));
TextView textViewz = findViewById(R.id.textViewZ);
textViewz.setText(new String(String.valueOf(z)));
}
context = getApplicationContext();
wifiManager = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
wifiScan(null);
if ((save == 0) && (recorder != null) && record) {
File dir = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS);
try {
audiofile = File.createTempFile("sound" +(current/1000), ".wav", dir);
} catch (IOException e) {
System.out.println("audio not working");
}
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
recorder.setOutputFile(audiofile.getAbsolutePath());
try {
recorder.prepare();
} catch (IOException e) {
e.printStackTrace();
}
recorder.start();
start = System.currentTimeMillis();
save = 1;
}
if ((System.currentTimeMillis() >= start + 3000) && (save != 0) && record) {
recorder.stop();
addRecordingToMediaLibrary();
save = 0;
int total = 0;
int Xavg = 0;
for(int i = 0; i < xArray.size(); i++)
{
total += Math.abs(xArray.get(i));
Xavg = total / xArray.size();
}
total = 0;
int Yavg = 0;
for(int i = 0; i < yArray.size(); i++)
{
total += Math.abs(yArray.get(i));
Yavg = total / yArray.size();
}
total = 0;
int Zavg = 0;
for(int i = 0; i < zArray.size(); i++)
{
total += Math.abs(zArray.get(i));
Zavg = total / zArray.size();
}
data = data + new String(String.valueOf(Xavg)) + "," + new String(String.valueOf(Yavg)) + "," + new String(String.valueOf(Zavg)) + "," + D1 + "," + D2 + "," + D3 + "," + D4 +"\n";
xArray.clear();
yArray.clear();
zArray.clear();
}
if (record) {
xArray.add((int) x);
yArray.add((int) y);
zArray.add((int) z);
}
}
protected void addRecordingToMediaLibrary() {
ContentValues values = new ContentValues(4);
current = System.currentTimeMillis();
values.put(MediaStore.Audio.Media.TITLE, "audio" + audiofile.getName());
values.put(MediaStore.Audio.Media.DATE_ADDED, (int) (current / 1000));
values.put(MediaStore.Audio.Media.MIME_TYPE, "audio/3gpp");
values.put(MediaStore.Audio.Media.DATA, audiofile.getAbsolutePath());
ContentResolver contentResolver = getContentResolver();
Uri base = MediaStore.Audio.Media.EXTERNAL_CONTENT_URI;
Uri newUri = contentResolver.insert(base, values);
sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, newUri));
Toast.makeText(this, "Added File " + newUri, Toast.LENGTH_LONG).show();
}
@Override
public void onAccuracyChanged(Sensor sensor, int accuracy) {
}
protected void onPause() {
super.onPause();
}
protected void onResume() {
super.onResume();
}
public void startRecording(View view) {
record = true;
}
public void stopRecording(View view) {
record = false;
data = data + "\r\n";
}
public void sendData(View view) throws IOException {
String FILENAME = "phone_data" + System.currentTimeMillis() / 1000L + ".csv";
File folder = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS);
File myFile = new File(folder, FILENAME);
FileOutputStream fstream = new FileOutputStream(myFile);
fstream.write(data.getBytes());
fstream.close();
TextView textViewMessage = findViewById(R.id.textViewMessage);
textViewMessage.setText("worked");
}
private List<ScanResult> results;
private ArrayList<String> arrayList = new ArrayList<>();
private ArrayAdapter adapter;
public WifiManager wifiManager;
public void wifiScan(View view) {
IntentFilter intentFilter = new IntentFilter();
intentFilter.addAction(WifiManager.SCAN_RESULTS_AVAILABLE_ACTION);
context.registerReceiver(wifiScanReceiver, intentFilter);
boolean success = wifiManager.startScan();
if (!success) {
scanFailure();
}
}
BroadcastReceiver wifiScanReceiver = new BroadcastReceiver() {
@RequiresApi(api = Build.VERSION_CODES.P)
@Override
public void onReceive(Context c, Intent intent) {
boolean success = intent.getBooleanExtra(
WifiManager.EXTRA_RESULTS_UPDATED, false);
if (success) {
scanSuccess();
} else {
scanFailure();
}
}
};
Hashtable<Integer, String> levels = new Hashtable<Integer, String>();
@RequiresApi(api = Build.VERSION_CODES.P)
private void scanSuccess() {
System.out.println("Scan Success");
List<ScanResult> results = wifiManager.getScanResults();
ScanResult scanResult = null;
ScanResult scanResult2 = null;
ScanResult scanResult3 = null;
ScanResult scanResult4 = null;
int z = 0;
for (int j = 0; j < results.size(); j++) {
if (results.get(j).SSID.equals("HomeG") && (results.get(j).is80211mcResponder())) {
levels.put(z, results.get(j).BSSID);
if (z == 0) {
scanResult = results.get(j);
}
if (z == 1) {
scanResult2 = results.get(j);
}
if (z == 2) {
scanResult3 = results.get(j);
}
if (z == 3) {
scanResult4 = results.get(j);
}
z++;
}
}
WifiRttManager mgr = (WifiRttManager) context.getSystemService(Context.WIFI_RTT_RANGING_SERVICE);
if (scanResult != null && scanResult2 != null && scanResult3 != null && scanResult4 != null) {
final RangingRequest request = new RangingRequest.Builder()
.addAccessPoint(scanResult)
.addAccessPoint(scanResult2)
.addAccessPoint(scanResult3)
.addAccessPoint(scanResult4)
.build();
final RangingResultCallback callback = new RangingResultCallback() {
public void onRangingResults(List<RangingResult> resultsRTT) {
System.out.println(resultsRTT);
System.out.println("Ranging Result");
try {
for (int k = 0; k < 4; k++) {
if (String.valueOf(resultsRTT.get(k).getMacAddress()).equals("b0:e4:d5:04:8a:c5")) {
System.out.println(resultsRTT.get(k).getMacAddress() + ": " + resultsRTT.get(k).getDistanceMm());
D1 = String.valueOf(resultsRTT.get(k).getDistanceMm());
TextView textViewD1 = findViewById(R.id.textViewD1);
textViewD1.setText(D1);
break;
}
}
for (int k = 0; k < 4; k++) {
if (String.valueOf(resultsRTT.get(k).getMacAddress()).equals("f0:72:ea:48:bc:95")) {
System.out.println(resultsRTT.get(k).getMacAddress() + ": " + resultsRTT.get(k).getDistanceMm());
D2 = String.valueOf(resultsRTT.get(k).getDistanceMm());
TextView textViewD2 = findViewById(R.id.textViewD2);
textViewD2.setText(D2);
break;
}
}
for (int k = 0; k < 4; k++) {
if (String.valueOf(resultsRTT.get(k).getMacAddress()).equals("cc:f4:11:4a:49:c4")) {
System.out.println(resultsRTT.get(k).getMacAddress() + ": " + resultsRTT.get(k).getDistanceMm());
D3 = String.valueOf(resultsRTT.get(k).getDistanceMm());
TextView textViewD3 = findViewById(R.id.textViewD3);
textViewD3.setText(D3);
break;
}
}
for (int k = 0; k < 4; k++) {
if (String.valueOf(resultsRTT.get(k).getMacAddress()).equals("b0:e4:d5:17:63:65")) {
System.out.println(resultsRTT.get(k).getMacAddress() + ": " + resultsRTT.get(k).getDistanceMm());
D4 = String.valueOf(resultsRTT.get(k).getDistanceMm());
TextView textViewD4 = findViewById(R.id.textViewD4);
textViewD4.setText(D4);
break;
}
}
} catch(Exception e) {
System.out.println(e);
TextView textViewD1 = findViewById(R.id.textViewD1);
}
}
public void onRangingFailure(int code) {
// Handle failure
List<ScanResult> results = wifiManager.getScanResults();
}
};
if (ActivityCompat.checkSelfPermission(this, android.Manifest.permission.ACCESS_FINE_LOCATION) != PackageManager.PERMISSION_GRANTED) {
return;
}
final Executor mainExecutor;
mainExecutor = context.getMainExecutor();
mgr.startRanging(request, mainExecutor, callback);
}
}
private void scanFailure() {
System.out.println("not working");
List<ScanResult> results = wifiManager.getScanResults();
}
}
\end{lstlisting}
\subsection{Feature Extraction}
\label{sec:extractioncode}
\begin{lstlisting}[language=Python]
import librosa
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
from PIL import Image
import pathlib
import csv
import librosa
import librosa.display
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
import keras
import warnings
warnings.filterwarnings('ignore')
all_files = ['PhoneAudio/phone_datatyping.csv', 'PhoneAudio/phone_datavaccuming.csv',
'PhoneAudio/phone_datavaccuming2.csv', 'PhoneAudio/phone_datafaucet.csv',
'PhoneAudio/phone_datafaucet2.csv', 'PhoneAudio/phone_databrushingTeeth.csv',
'PhoneAudio/phone_databrushingTeeth2.csv', 'PhoneAudio/phone_datacoffee.csv',
'PhoneAudio/phone_datacoffee2.csv', 'PhoneAudio/phone_datamedicine.csv',
'PhoneAudio/phone_datashaving.csv', 'PhoneAudio/phone_datacooking1.csv',
'PhoneAudio/phone_datacooking2.csv', 'PhoneAudio/phone_datafalse.csv']
df_from_each_file = (pd.read_csv(f, sep=',') for f in all_files)
df_merged = pd.concat(df_from_each_file, ignore_index=True)
df_merged.to_csv( "phone_data.csv")
header = 'filename chroma_stft rmse spectral_centroid spectral_bandwidth rolloff zero_crossing_rate'
for i in range(1, 21):
header += f' mfcc{i}'
header += ' label'
header = header.split()
file = open('data.csv', 'w', newline='')
with file:
writer = csv.writer(file)
writer.writerow(header)
activities = 'typing vaccuming faucet brushingTeeth coffee medicine shaving cooking false'.split()
for a in activities:
for filename in os.listdir(f'./PhoneAudio/{a}'):
audioname = f'./PhoneAudio/{a}/{filename}'
y, sr = librosa.load(audioname, mono=True, duration=3)
chroma_stft = librosa.feature.chroma_stft(y=y, sr=sr)
rmse = librosa.feature.rmse(y=y)[0]
spec_cent = librosa.feature.spectral_centroid(y=y, sr=sr)
spec_bw = librosa.feature.spectral_bandwidth(y=y, sr=sr)
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
zcr = librosa.feature.zero_crossing_rate(y)
mfcc = librosa.feature.mfcc(y=y, sr=sr)
to_append = f'{filename} {np.mean(chroma_stft)} {np.mean(rmse)} {np.mean(spec_cent)} {np.mean(spec_bw)} {np.mean(rolloff)} {np.mean(zcr)}'
for e in mfcc:
to_append += f' {np.mean(e)}'
to_append += f' {a}'
file = open('data.csv', 'a', newline='')
with file:
writer = csv.writer(file)
writer.writerow(to_append.split())
data = pd.read_csv('data.csv')
# Dropping unused columns and merging data
data = data.drop(['filename'],axis=1)
genre_list = data.iloc[:, -1]
loc = pd.read_csv("phone_data.csv")
merged = pd.concat([data, loc], axis=1)
merged.to_csv("data.csv", index=False)
data = pd.read_csv('data.csv')
encoder = LabelEncoder()
y = encoder.fit_transform(genre_list)
data=data.drop(['label'],axis=1)
data=data.drop(['Unnamed: 0'],axis=1)
scaler = StandardScaler()
scaler.fit(np.array(data.iloc[:]))
X = scaler.transform(np.array(data.iloc[:]))
\end{lstlisting}
\subsection{Model Training}
\label{sec:trainingcode}
\begin{lstlisting}[language=Python]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(256, activation='relu', input_shape=(X_train.shape[1],)))
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(9, activation='softmax'))
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(X_train,
y_train,
epochs=1000,
validation_data=(X_test, y_test))
\end{lstlisting}
\subsection{Activity Inference}
\label{sec:inferencecode}
\begin{lstlisting}[language=Python]
header = 'filename chroma_stft rmse spectral_centroid spectral_bandwidth rolloff zero_crossing_rate'
for i in range(1, 21):
header += f' mfcc{i}'
header = header.split()
file = open('testdata.csv', 'w', newline='')
with file:
writer = csv.writer(file)
writer.writerow(header)
for filename in os.listdir(f'./testData/typing'):
audioname = f'./testData/typing/{filename}'
y, sr = librosa.load(audioname, mono=True, duration=3)
chroma_stft = librosa.feature.chroma_stft(y=y, sr=sr)
rmse = librosa.feature.rmse(y=y)[0]
spec_cent = librosa.feature.spectral_centroid(y=y, sr=sr)
spec_bw = librosa.feature.spectral_bandwidth(y=y, sr=sr)
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
zcr = librosa.feature.zero_crossing_rate(y)
mfcc = librosa.feature.mfcc(y=y, sr=sr)
to_append = f'{filename} {np.mean(chroma_stft)} {np.mean(rmse)} {np.mean(spec_cent)} {np.mean(spec_bw)} {np.mean(rolloff)} {np.mean(zcr)}'
for e in mfcc:
to_append += f' {np.mean(e)}'
file = open('testdata.csv', 'a', newline='')
with file:
writer = csv.writer(file)
writer.writerow(to_append.split())
data = pd.read_csv('testdata.csv')
# Dropping unnecessary columns
data = data.drop(['filename'],axis=1)
loc = pd.read_csv("testData/testphone_datatyping.csv")
merged = pd.concat([data, loc], axis=1)
merged.to_csv("testdata.csv", index=False)
data = pd.read_csv('testdata.csv')
X = scaler.transform(np.array(data.iloc[:]))
results = model.predict(X)
for element in results:
print(np.where(element == max(element)), np.where(element == np.unique(element)[-2]))
\end{lstlisting}
\section{Conclusion}
\label{sec:conclusion}
The results of the study demonstrate that high-accuracy human activity recognition is achievable using a combination of accelerometer, audio, and Wi-Fi Round Trip Time localization data. This is a promising first step towards creating a simple, inexpensive, unobtrusive, and accurate system for monitoring human activity.
The \mbox{\textsc{LEHAR}}\xspace system fulfills the requirements determined in this project.
\begin{itemize}
\item The system is more accurate than existing systems at detecting human activity.
\item No significant prior installation of materials or change to the room is required for activity monitoring.
\item Extending the system to include more activities is easily automated by retraining the machine learning model.
\end{itemize}
One next step is to collect more data for each activity: the more data and the more variety, the more accurate the model will be.
In addition, an important next step is to train new activities. The more activities the system can detect, the more useful it will be in the real world. The system has already proven that it can detect a larger variety of activities than current methods, but this can be taken further.
Finally, other smartphone sensors can be incorporated into the system to assess how much they improve its accuracy.
\section{Data Collection}
\label{sec:collect}
One of the key design decisions in \mbox{\textsc{LEHAR}}\xspace is what data to collect. While modern smartphones have a wide range of sensors, some of them are not highly relevant to HAR, and collecting excessive data consumes resources and reduces the battery life of the smartphone. Most deployed smartphone HAR systems collect only accelerometer readings. Unfortunately, to be distinguishable, the recognized activities must have dramatically different motion properties. As a result, the range of activities that can be recognized using just acceleration data is relatively limited (just running, walking, sitting, etc.). Adding audio information can help broaden the set of activities since sounds associated with activities such as cooking or brushing teeth can be distinctive. However, many activities may have similar environmental sounds, such as those associated with small motors (e.g. electric toothbrush, microwave oven, blender). In addition, audio features may be error prone since the environment may be noisy or the smartphone may not be optimally located to record sound (e.g. inside a pocket). To further distinguish between activities, \mbox{\textsc{LEHAR}}\xspace also collects location information. As shown in Figure~\ref{fig:home}, many activities are location specific; for example, tooth brushing typically only occurs in the bathroom. \mbox{\textsc{LEHAR}}\xspace is designed on the premise that the combination of acceleration, audio, and location information suffices for performing highly accurate HAR. This section describes what data \mbox{\textsc{LEHAR}}\xspace collects and how it implements this data collection.
\subsection{Indoor Localization}
The most common approach to obtaining location information on mobile devices is to rely on GPS (Global Positioning System~\cite{GPS}). While GPS is relatively accurate, GPS signals do not propagate through walls and, as a result, GPS cannot provide location information indoors. Unfortunately, there are no widely deployed standards for getting location information indoors. Existing systems use one of three basic techniques: angle of arrival, signal strength, and time difference of arrival.
\subsubsection{Angle of Arrival Localization}
~
Angle of arrival localization systems~\cite{aoa1,aoa2} work by measuring the angle at which a transmitter's signal arrives at a user's device. If there are multiple access points in a home, the user's device can observe the angle to each of them. If the locations of the access points are known, the device can then compute its own location from this angle information. Unfortunately, measuring the angle of arrival requires the mobile device to have some form of directional antenna. These are typically large and impractical for something like a smartphone. This leaves the two other alternatives that I consider below.
\subsubsection{Signal Strength Localization}
\label{sec:rssi}
~
Signal strength based localization systems (e.g.~\cite{RADAR}) leverage the principle that Wi-Fi signals typically get weaker further away from a transmitter. In an ideal setting, the signal strength would vary inversely with the square of the distance; however, in practice, many factors beyond distance impact the signal strength at a location. For example, signals typically attenuate significantly as they pass through obstacles such as walls. As a result, walls between a transmitter and a user may make the signal much weaker and make the transmitter seem further away than it is. Wi-Fi signals also suffer from multi-path interference. When a transmitter sends a message, the signal propagates in all directions. The signal that travels directly between the transmitter and the receiver may interfere with the signal that reflects off a surface and then propagates to the receiver. This interference can be constructive or destructive (increasing or reducing the signal strength, respectively) depending on the difference in length between the direct and indirect paths. The end result of these factors is that signal strength provides only a rough approximation of distance.
Practical systems that use signal strength for localization~\cite{RADAR} rely on someone generating a map of signal strength to location for every single access point in a building. Surveying a building to generate this map usually takes significant effort and impedes wide use of this approach. Devices that want to determine their location measure the signal strength from all nearby access points and then look up these signal strengths on the map. Even with detailed maps, the accuracy of these systems is relatively low, allowing users to localize themselves to only a 3-4 meter region.
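To illustrate why signal strength is only a rough proxy for distance, the following sketch implements the standard log-distance path loss model (not the specific model used by RADAR; the reference power and path loss exponent here are illustrative assumptions):

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Estimate distance (m) from RSSI with the log-distance path loss model.

    tx_power_dbm is the RSSI assumed at a 1 m reference distance, and
    path_loss_exp is the path loss exponent (2.0 in free space, larger
    indoors where walls attenuate the signal). Both are illustrative.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

# In free space (exponent 2), each 6 dB drop roughly doubles the distance.
print(rssi_to_distance(-40.0))   # ~1 m
print(rssi_to_distance(-46.0))   # ~2 m
# With walls (a larger exponent), the same RSSI maps to a shorter distance,
# which is why raw signal strength is such an ambiguous distance estimator.
print(rssi_to_distance(-46.0, path_loss_exp=3.0))
```

Because the exponent depends on the (unknown) number of walls and reflections along the path, the same RSSI reading is consistent with very different distances.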
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[width=.75\linewidth]{Figures/propagation.png} \\
(a) \\
\\
\includegraphics[width=.75\linewidth]{Figures/tdoa.png} \\
(b) \\
\\
\end{tabular}
\caption{Illustration of TDoA localization. (a) shows the time for a signal to propagate and (b) shows multilateration used to localize the device from multiple distance measurements.}
\label{fig:tdoa}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\linewidth]{Figures/wifirtt.png}
\caption{Wi-Fi RTT measurement protocol message exchange.}
\label{fig:wifirtt}
\end{figure}
\subsubsection{Time Difference of Arrival Localization}
\label{sec:tdoa}
~
Figure~\ref{fig:tdoa} provides a 2-dimensional illustration of the concepts behind Time Difference of Arrival (TDoA) localization.
TDoA leverages the observation that signals take time to travel between a transmitter and a receiver. This is shown in Figure~\ref{fig:tdoa}(a), where the signal propagates outward from the access point on the right, reaching the smartphone at $t_4 = 0.008s$. If a device can measure the time it takes for the signal to travel, it can use the propagation speed of the signal and the measured time to compute a distance to the transmitter. This narrows the possible locations of the receiver to a spherical shell at the measured distance from the transmitter; in 2 dimensions, this is a circle instead of a sphere. The circles around the access points in Figure~\ref{fig:tdoa}(b) show the potential locations of the smartphone based on the distance measurements to the respective access points. If this measurement is performed with two different transmitters, the receiver's location can be narrowed further to the intersection of the two associated spherical shells -- this typically produces a circle of possible locations. Intersecting this circle with the spherical shell from another measurement results in a pair of points where the receiver may be located. Collecting measurements from four transmitters narrows the potential location of the receiver to a single spot; in the 2-dimensional example shown, only three access points are needed. This technique has been used since World War II in navigation systems~\cite{williams2003loran} and underlies the Global Positioning System (GPS)~\cite{GPS}, which relies on TDoA measurements from satellites.
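The multilateration step above can be sketched in a few lines of Python: subtracting the first circle equation from the others linearizes the system, which can then be solved by least squares. The access point positions and coordinates below are hypothetical:

```python
import numpy as np

def multilaterate(anchors, distances):
    """Estimate a 2-D position from distances to known anchor points.

    Subtracting the first circle equation from the others turns the
    nonlinear system into a linear one solvable by least squares.
    anchors: (n, 2) positions of the access points (n >= 3);
    distances: length-n array of measured distances.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical access points at the corners of a 10 m x 10 m room.
aps = [(0, 0), (10, 0), (0, 10)]
true = np.array([3.0, 4.0])
dists = [np.linalg.norm(true - np.array(a)) for a in aps]
print(multilaterate(aps, dists))  # ~[3. 4.]
```

With noisy measurements, using more than the minimum number of access points lets the least-squares solution average out the errors.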
A key challenge in TDoA systems is actually measuring the propagation time. Imagine a simple strawman design in which an access point reads its clock and transmits the local time in a packet. The receiver could then simply read the clock when the message is received and subtract the local time from the time in the packet. This would seem to provide a simple way to determine the time it took for the message to travel from the access point to the receiver. However, there are three major flaws with this approach.
The first issue is clock synchronization. The clock at the access point and the clock at the receiver may not be synchronized and may read different times at the same moment. In addition, one clock may run faster than the other, so the two drift further and further apart over time. Any measurement of the propagation time would include this additional offset (either positive or negative) from the lack of synchronization. This can be addressed by measuring the propagation time in both directions (also called a round trip time) and dividing by two. The measurement would then incorporate both a positive and a negative offset, which cancel each other out.
The second issue is that the clocks need to be very precise. Wi-Fi signals propagate at the speed of light ($3.0 \times 10^8 m/s$). This means that the signal travels 1 meter in 3.34 nanoseconds.
If a clock has a precision of only $1\mu s$, it can only measure distances to within $\pm 300m$. Extra hardware support is needed to provide clock readings with nanosecond precision.
The third issue is processing time. Computers process data and messages at a finite speed. The processing time will get added into the propagation time measurements. Hardware support to timestamp messages as the signal arrives is needed to obtain meter level accuracy in measurements.
Wi-Fi added support for TDoA localization to the standard in 2016~\cite{802.11mc}. This standard, called 802.11mc or Wi-Fi Round Trip Time (Wi-Fi RTT), enables smartphones to determine their distance from access points with a precision of 1-2 meters, based on the time it takes a signal to travel to the device and back.
The basic message exchange used in Wi-Fi RTT is shown in Figure~\ref{fig:wifirtt}. The smartphone normally scans for nearby access points to associate with for network connectivity. As part of this scan, the smartphone learns about the capabilities of the nearby access points including whether they support the Wi-Fi RTT protocol. When a smartphone wishes to measure the distance to an access point that supports Wi-Fi RTT, it initiates a ranging request by transmitting a Fine Timing Measurement (FTM) request to the access point. The access point immediately acknowledges receipt of this request by sending an ACK response and scheduling a measurement exchange. At some later time ($\tau_1$), the access point transmits an FTM Measurement packet. This arrives at the smartphone whose Wi-Fi interface records time $\tau_2$. The measurement packet is processed by the smartphone and an acknowledgement (ACK) is transmitted at time $\tau_3$. The access point records the time ($\tau_4$) when this ACK message is received. It then transmits the times $\tau_1$ and $\tau_4$ to the smartphone.
Note that the times $\tau_1$ and $\tau_4$ are recorded using the access point's clock and times $\tau_2$ and $\tau_3$ are recorded using the smartphone's clock. These clocks are not synchronized in any way. Despite this, the round trip propagation time for the messages can still be calculated as:
\begin{equation} \label{eq:1}
Round~~Trip~~Time = (\tau_4 - \tau_1) - (\tau_3 - \tau_2)
\end{equation}
\noindent Note that the first term ($\tau_4 - \tau_1$) only uses clock values from the access point and the second term ($\tau_3 - \tau_2$) only uses times from the smartphone. As a result, the synchronization issues do not impact this computation in any way. With this measurement complete, the distance to the access point can be computed by the smartphone as:
\begin{equation} \label{eq:2}
Distance = c \times \frac{Round~~Trip~~Time}{2}
\end{equation}
\noindent
Here $c$ is the speed of light ($3.00 \times 10^8~m/s$), and the $Round~~Trip~~Time$ is divided by two to obtain the one-way propagation delay of the signal.
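The timestamp bookkeeping in Equations~\ref{eq:1} and~\ref{eq:2} can be checked with a short sketch; the timestamps below are hypothetical and include both a large clock offset and a processing delay to show that each cancels out:

```python
# Speed of light in m/s.
C = 3.0e8

def rtt_distance_m(t1, t2, t3, t4):
    """Distance from the Wi-Fi RTT timestamp exchange (times in seconds).

    t1/t4 come from the access point's clock and t2/t3 from the
    smartphone's clock; each parenthesized difference uses only one
    clock, so the clocks need not be synchronized.
    """
    round_trip = (t4 - t1) - (t3 - t2)
    return C * round_trip / 2

# Hypothetical exchange: a 5 m one-way flight (16.7 ns each way), a
# 100 microsecond processing delay at the smartphone, and a phone clock
# that is a full second ahead of the access point's clock.
one_way = 5.0 / C
t1 = 0.0
t2 = 1.0 + one_way          # phone clock is 1 s ahead
t3 = t2 + 100e-6            # processing time before the ACK
t4 = t3 - 1.0 + one_way     # back on the access point's clock
print(rtt_distance_m(t1, t2, t3, t4))  # ~5.0 despite offset and delay
```

Both the 1 s clock offset and the 100 microsecond processing delay drop out of the computed distance, which is the whole point of the four-timestamp exchange.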
\begin{figure*}[t]
\centering
\includegraphics[width=.75\textwidth]{Figures/DataCollectionFlowChart.png}
\caption{Flowchart of the high-level software structure of the data collection Android application.}
\label{fig:flowchart}
\end{figure*}
\subsubsection{Discussion}
~
The first observation is that the design of \mbox{\textsc{LEHAR}}\xspace does not require the exact location of the user in physical space. What \mbox{\textsc{LEHAR}}\xspace requires is a set of location-related sensor readings that are closely correlated with the possible set of activities performed. The reason is that a machine learning based classifier is used to convert the sensor readings into recognized activities. This simplifies the collection of location information since \mbox{\textsc{LEHAR}}\xspace does not require three dimensional coordinates for an end user. \mbox{\textsc{LEHAR}}\xspace can just as easily use the array of distances to the different access points as features in its machine learning classifier.
The second observation is that Wi-Fi RTT is a much better choice for \mbox{\textsc{LEHAR}}\xspace than a signal strength based approach due to its much higher accuracy. Measurement studies of the accuracy of Wi-Fi RTT~\cite{wifirtt_accuracy} have shown that the variation in distance estimates for the same location is approximately 1 meter. This level of accuracy allows \mbox{\textsc{LEHAR}}\xspace to differentiate between activities within a single room. For example, it can distinguish making coffee (near the coffee machine) from using the microwave (near the microwave oven), as long as the coffee machine and microwave oven are not in the exact same location.
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[width=\linewidth]{Figures/wifirtt2.png} \\
(a) \\
\\
\includegraphics[width=\linewidth]{Figures/wifirttewma.png} \\
(b) \\
\\
\end{tabular}
\caption{Measurement of Wi-Fi RTT accuracy. (a) shows the raw measurements of distance to the access points, plotted as a CDF of the difference from the mean distance, and (b) shows distance measurements passed through different filters. The EWMA lines represent an exponentially weighted moving average of the readings and the outlier removal lines present the results of filtering based on the range of recent measurements.}
\label{fig:wifirttmeas}
\end{figure}
To better understand the potential of Wi-Fi RTT, I performed a measurement study of its accuracy in the home shown in Figure~\ref{fig:home}. With the phone left at a fixed location, I recorded the distances measured to all four access points, collecting 2286 distance measurements to each. For each access point, I computed the mean distance across all its measurements. In Figure~\ref{fig:wifirttmeas}(a), I plot a cumulative distribution function (CDF) of each measurement minus the mean distance to that access point. Ideally, the measurements to a single access point would all be identical since the phone was not moving, and the CDF would be a sharp step function at 0. In practice, the measurements show some variation. The distribution of values around 0 appears symmetric and roughly normal. The standard deviations of the access points differ slightly from each other, with the kitchen access point having the worst standard deviation of 630 mm. At first glance, this would seem to match well with the reported accuracy of 1 to 2 meters. However, note that in Figure~\ref{fig:wifirttmeas} there are a number of outlier measurements that are between 4000 and 6000 mm from the mean. This variation would make it difficult to use the distance measurements effectively for activity recognition.

I tested a few different algorithms to remove these outliers. First, I tested a simple outlier removal that looked at the range of the last 5 readings. If the range was less than 1000 mm, the algorithm simply reported the current value. If the range exceeded 1000 mm, the algorithm removed the largest and smallest of the 5 readings and reported the average of the remaining 3. I also tested an exponentially weighted moving average (EWMA), where a new average value is calculated from each new reading as:
\begin{equation}
average_{new} = \alpha \times reading_{new} + (1 - \alpha) \times average_{old}
\end{equation}
I used an EWMA with $\alpha = 0.1$. This EWMA acts as a low-pass filter, eliminating spurious spikes and dips in the readings. The results of using these two algorithms are shown in Figure~\ref{fig:wifirttmeas}(b). As can be seen from the graph, the spread of readings is significantly reduced and outliers are eliminated. Applying the outlier removal and the EWMA to the kitchen access point measurements results in standard deviations of 368 mm and 243 mm, respectively. In general, the EWMA produced narrower distributions, and I use it for all my experiments.
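A minimal sketch of the two filters described above (the exact windowing in \mbox{\textsc{LEHAR}}\xspace may differ slightly; the distance stream below is hypothetical):

```python
def ewma(readings, alpha=0.1):
    """Exponentially weighted moving average: the low-pass filter used
    to smooth the Wi-Fi RTT distance readings."""
    out = []
    avg = readings[0]
    for r in readings:
        avg = alpha * r + (1 - alpha) * avg
        out.append(avg)
    return out

def outlier_filter(readings, window=5, max_range=1000):
    """Sliding-window outlier removal: if the last `window` readings span
    more than `max_range` mm, drop the min and max and average the rest."""
    out = []
    for i in range(len(readings)):
        w = readings[max(0, i - window + 1): i + 1]
        if max(w) - min(w) <= max_range:
            out.append(readings[i])
        else:
            trimmed = sorted(w)[1:-1]
            out.append(sum(trimmed) / len(trimmed))
    return out

# Hypothetical distance stream (mm) with a single 8 m spike.
stream = [3000, 3100, 2950, 8000, 3050, 3000]
print(outlier_filter(stream))   # the 8000 mm spike is suppressed
print(ewma(stream))             # smoothed, no value near 8000
```

The outlier filter reacts instantly but only handles isolated spikes; the EWMA suppresses all high-frequency noise at the cost of responding to genuine movement with a short lag.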
\subsection{Implementation}
The flowchart in Figure~\ref{fig:flowchart} provides a high-level view of the code used to collect the needed sensor readings from an Android smartphone. The Java app continuously collects x, y, and z acceleration data as well as the Wi-Fi RTT distances from the 4 routers. Every 3 seconds, it saves an audio file and starts recording a new clip. Additionally, it saves the acceleration and location data to a file.
In order to collect acceleration data, the code creates a SensorManager object, which monitors the activity of the accelerometer. When an update to the acceleration is received, it records the data to an array. The absolute value of the acceleration data is averaged across a 3 second period.
The next step in the code is to collect the location data. To do that it needs to first find all of the access points in range that support Wi-Fi RTT. It does this by using the WifiManager class to interact with Wi-Fi access points. The code records the BSSID for each of the access points. BSSID stands for basic service set identifier and is a unique address for each access point.
The code then creates a WifiRttManager object to measure the distance in millimeters to each of the 4 access points. It does this by defining and executing a ranging request to receive the RTT information.
The average acceleration reading and the Wi-Fi RTT data are saved to a new line in a csv file every 3 seconds.
The last piece of information the code needs to collect is audio. It creates a MediaRecorder object and saves audio clips as a wav file every 3 seconds, synchronized with the collection of the other sensor readings.
\section{Discussion}
\label{sec:discussion}
The results shared in the previous section indicate that \mbox{\textsc{LEHAR}}\xspace offers a viable method for improving human activity recognition. Furthermore, the project shows that an easily deployable, accurate, and inexpensive solution can be created for activity recognition inside homes. No prior instrumentation is necessary, allowing for an easy-to-implement solution. Although the project met the main constraints defined at the beginning of the paper, there are some limitations, which include the following.
\begin{itemize}
\item \textbf{Activities.} Currently, \mbox{\textsc{LEHAR}}\xspace has been tested on twelve common activities. While these activities were clearly distinguishable, there may be activities that are more similar in audio. If the end goal is to help senior citizens, the system will have to be able to distinguish similar activities, such as taking medicines with water versus drinking orange juice. Further testing will be conducted as more activities are added.
\item \textbf{Hardware.} The \mbox{\textsc{LEHAR}}\xspace hardware was designed using technology that is not currently ubiquitous in homes. Wi-Fi RTT is still relatively new and will most likely be integrated into new products in the coming years.
\item \textbf{User testing.} Due to COVID-19, \mbox{\textsc{LEHAR}}\xspace was tested only by the author of the paper in the author's home. Future iterations will include having others conduct the activities to test the robustness of the system.
\end{itemize}
\section{Related Work}
\label{sec:related}
As mentioned earlier, Human Activity Recognition is not a new concept. Existing approaches fall into two broad categories: mobile device based and Smart Home based. At a very high level, the two approaches differ in the availability of activity recognition and the sensor information used. Mobile device-based designs are available as long as the user carries the device, while smart home systems can only recognize activities when users are in the monitored spaces of the home. Smart Home designs benefit from the wide range of sensors that can be deployed throughout the home, including cameras, microphones, motion detectors, etc. In contrast, mobile device designs only have access to the sensor information available in the mobile device. As a result, smart home HAR systems achieve much higher activity recognition accuracy than mobile device-based systems. I describe some examples in each category below.
\paragraph{Mobile Device HAR.} Limited forms of activity detection are deployed in a wide variety of mobile devices. For example, many smartphones keep track of walking and running. The Apple Watch~\cite{applewatch}, a widely deployed commercial device, automatically detects when you have begun specific types of exercises~\cite{appleworkout}. This system only uses the accelerometer on the watch and is limited to detecting a small number of exercises, including indoor walk, outdoor walk, indoor run, outdoor run, elliptical, rower, pool swim, and open water swim. A similar system designed by D. Anguita, A. Ghio, et al.~\cite{2013.Anguita} used a dataset of accelerometer and gyroscope information from smartphones. The data was used to identify six activities that were all motion or posture oriented, such as standing, walking, and lying down. \mbox{\textsc{LEHAR}}\xspace improves upon both these designs by using audio and location data to perform accurate activity recognition on a larger variety of activities, useful in a wider variety of applications. The ability to use additional sensor information also greatly improves the accuracy of \mbox{\textsc{LEHAR}}\xspace over the approaches used in these other systems.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/SmartHome.png}
\caption{Layout of sensors in a Smart Home to enable activity recognition. Reproduced from~\cite{2014.Krishnan}.}
\label{fig:smarthome}
\end{figure}
\paragraph{Smart Home HAR.} Most Smart Home designs rely on external sensors placed on specific target objects: the microwave, the stove, the faucet, etc. This makes it relatively straightforward to recognize normal interactions with these devices, such as turning on the faucet or cooking food. These systems are typically limited to activities recognizable within the capabilities or coverage of their sensors. Furthermore, installing and maintaining the sensors in the locations to be monitored adds to the overall cost of the system. Examples of such Smart Home projects include the Aware Home~\cite{aware}, CASAS~\cite{casas}, and PlaceLab~\cite{placelab}. Figure~\ref{fig:smarthome} illustrates the sensor placement associated with a Smart Home HAR deployment and demonstrates how sensors must be actively placed for the system to work. The benefit of \mbox{\textsc{LEHAR}}\xspace is that it requires no extensive installation of sensors in the home.
\section{Feature Extraction}
\label{sec:extraction}
While the accelerometer and Wi-Fi RTT readings are simple scalar values that are easily used in a machine learning classifier, the audio signal is a more complex data type.
In order to classify the audio signals, the system must extract simple features that are unique to the particular sound. \mbox{\textsc{LEHAR}}\xspace uses seven features of the audio, extracted in both the time and frequency domains.
\subsection{Time Domain Features}
The time domain features used by \mbox{\textsc{LEHAR}}\xspace focus on the amplitude and shape of the audio signal.
The first feature, root-mean-square energy, focuses on the raw amplitude of the audio signal. The energy of a signal is the sum of the squared magnitudes of its samples. The root-mean-square energy is then the square root of the mean energy. It provides a useful signature of the loudness of the recorded sound.
The second feature, zero crossing rate, summarizes the shape of the signal. The computation of the zero crossing rate is illustrated in Figure~\ref{fig:zerocrossing}. The zero crossing rate is the number of times an audio signal crosses zero, shown as the red dots in the figure. Past work~\cite{zerocrossing} has shown that this value is useful for distinguishing between a range of naturally occurring sounds.
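Both time domain features can be computed directly from the raw samples; the following sketch (using NumPy rather than the librosa calls in the listing) checks them on a pure tone:

```python
import numpy as np

def rms_energy(y):
    """Root-mean-square energy of an audio frame: the square root of
    the mean of the squared samples (a loudness signature)."""
    return np.sqrt(np.mean(np.square(y)))

def zero_crossing_rate(y):
    """Fraction of consecutive sample pairs whose signs differ,
    i.e. how often the waveform crosses zero."""
    signs = np.signbit(y)
    return np.mean(signs[1:] != signs[:-1])

# A pure 440 Hz tone sampled at 22050 Hz crosses zero twice per cycle.
sr = 22050
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
print(rms_energy(tone))          # ~0.354 (amplitude / sqrt(2))
print(zero_crossing_rate(tone))  # ~0.04 (about 2 * 440 / 22050)
```

A low-pitched hum yields a low zero crossing rate while hiss-like noise yields a high one, which is why the two features together separate many everyday sounds.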
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/FeatureExtractionTimeDomain.png}
\caption{This graph shows what the zero crossing rate represents. The feature is computed on a time domain meaning that amplitude is on the y-axis and time is on the x-axis. Zero crossing rate is the number of times the signal amplitude crosses zero.}
\label{fig:zerocrossing}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/FaucetSpectrogram.png}
\caption{This graph describes the energy level at each frequency over time in an audio recording of a running faucet.}
\label{fig:faucetSpec}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/TypingSpectrogram.png}
\caption{This graph describes the energy level at each frequency over time in an audio recording of typing.}
\label{fig:typingSpec}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/FeatureExtractionFrequencyDomain.png}
\caption{This graph shows the features that can be extracted on a frequency domain, meaning that intensity is on the y-axis and frequency is on the x-axis.}
\label{fig:freqDomain}
\end{figure}
\subsection{Frequency Domain Features}
The frequency domain features used by \mbox{\textsc{LEHAR}}\xspace focus on how the energy of the audio signal is distributed across different sound frequencies.
The first step in obtaining any of these features is to perform a Fourier transform~\cite{fourier} on the audio signal. This converts the time domain samples to an equivalent set of frequencies and amplitudes. From this frequency domain representation, \mbox{\textsc{LEHAR}}\xspace computes five different features.
For \mbox{\textsc{LEHAR}}\xspace's first frequency domain feature, it uses a variant of the Fourier transform called short-time Fourier transform or STFT~\cite{STFT}. This variant is particularly useful for audio signals since it provides frequency content of local sections of the audio as it changes over time.
Spectrograms, such as the ones in Figure~\ref{fig:faucetSpec} and Figure~\ref{fig:typingSpec}, are a visual representation of STFT output. Frequency is on the y-axis, time on the x-axis, and the brighter the color, the greater the amplitude (i.e. intensity) of the audio signal at that frequency. Notice the thin horizontal bar of brighter color in both figures, which indicates a greater intensity at that frequency in the audio clip. For example, in Figure~\ref{fig:faucetSpec}, the horizontal band at 3700Hz is a unique characteristic of this faucet's noise, and the vertical lines near 1.9 and 2.3 seconds represent short bursts of additional wide-frequency noise, possibly created by a background event or splashing water. The entire set of coefficients is produced from the STFT of the 3 second audio clip, and their mean is used as a feature.
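A bare-bones STFT, sketched below with NumPy for illustration (librosa provides an equivalent, more complete implementation), shows how a tone at the faucet's 3700Hz band appears as a single bright frequency row:

```python
import numpy as np

def stft_magnitude(y, n_fft=2048, hop=512):
    """Magnitude short-time Fourier transform: slide a Hann window over
    the signal and FFT each frame, yielding a (frequency x time) array
    like the spectrograms in the figures."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack([y[i * hop: i * hop + n_fft] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T

# A 3700 Hz tone (the faucet's band) concentrates energy in one row.
sr = 22050
t = np.arange(3 * sr) / sr           # a 3 s clip, as in LEHAR
y = np.sin(2 * np.pi * 3700 * t)
S = stft_magnitude(y)
peak_bin = S.mean(axis=1).argmax()
print(peak_bin * sr / 2048)          # ~3700 Hz
```

Each column of the result is the spectrum of one short window, so features of the sound that change over time (the splashing bursts, for example) remain visible.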
The remaining frequency domain features attempt to summarize the shape of the audio signal in the frequency domain. Figure~\ref{fig:freqDomain} shows three of these features: centroid, rolloff, and bandwidth. The spectral centroid describes the frequency that the energy is centered on. The spectral rolloff is the frequency below which most (typically 85\%) of the spectral energy is contained. The spectral bandwidth is the range of frequencies with significant intensity. The final feature is the Mel-Frequency Cepstral Coefficients~\cite{MFCC} of the audio signal. The MFCC measure is similar to the STFT coefficients; however, the MFCC values are taken from frequency bands on the mel scale, which are more representative of how humans hear sounds. As a result, it is often especially useful for speech recognition and other human-generated sounds.
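The three spectral shape features can be computed from a single magnitude spectrum as follows; the narrow synthetic spectrum below is purely illustrative:

```python
import numpy as np

def spectral_features(magnitudes, freqs, rolloff_pct=0.85):
    """Centroid, rolloff, and bandwidth of one magnitude spectrum.

    Centroid: magnitude-weighted mean frequency. Rolloff: frequency
    below which rolloff_pct of the total magnitude lies. Bandwidth:
    magnitude-weighted spread of frequencies around the centroid.
    """
    m = magnitudes / magnitudes.sum()
    centroid = np.sum(freqs * m)
    rolloff = freqs[np.searchsorted(np.cumsum(m), rolloff_pct)]
    bandwidth = np.sqrt(np.sum(m * (freqs - centroid) ** 2))
    return centroid, rolloff, bandwidth

# A spectrum concentrated near 3700 Hz (like the faucet's band) has its
# centroid there and a small bandwidth.
freqs = np.linspace(0, 11025, 1025)
mags = np.exp(-((freqs - 3700) ** 2) / (2 * 50 ** 2))  # narrow peak
c, r, b = spectral_features(mags, freqs)
print(round(c), round(r), round(b))
```

A broadband sound such as typing clicks would instead produce a large bandwidth and a high rolloff, which is what makes these summary numbers discriminative.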
\section{Activity Inference}
\label{sec:inference}
In order to identify the activity being performed from the raw sensor readings and extracted features, \mbox{\textsc{LEHAR}}\xspace uses a neural network. Neural networks are a type of supervised machine learning commonly used for such classification tasks. Like any other supervised machine learning system, a neural network needs data labeled with the ground truth, such as a set of sensor readings labeled with the associated activities. Section~\ref{sec:data} describes the data set collected for this purpose. Once this data is collected, I also need to determine the structure of the neural network (Section~\ref{sec:structure}) and train it using the collected data (Section~\ref{sec:training}).
\subsection{Data Collection}
\label{sec:data}
As described in Section~\ref{sec:collect}, \mbox{\textsc{LEHAR}}\xspace identifies activities using x, y, and z acceleration, four distance values (one from each access point), and three second audio clips collected on a smartphone. To train \mbox{\textsc{LEHAR}}\xspace's neural network, I collected these sensor readings, including around 1500 audio clips.
Data was collected for twelve common activities (typing, vacuuming, washing dishes, running the blender, brushing teeth - electric, brushing teeth - regular, making coffee, taking medicine, using microwave, shaving, drying hair, and doing nothing). The activities were specifically chosen because they are difficult to detect using existing smartphone HAR methods. Data was manually collected by running the smartphone application and performing the activities while the data was saved.
The data was then manually labelled with the performed activity. Labelling was done by placing the audio into a folder with the name of the activity. The code reads the name of the folder and appends it to a text file with the acceleration and location data, along with the seven audio features described in Section~\ref{sec:extraction}.
Before training the model, the data is split into train and test data in order to evaluate the model accuracy on data it has not been trained on. 80\% of the acceleration, audio, and location data is used to train the data while the other 20\% is set aside for testing the \mbox{\textsc{LEHAR}}\xspace system.
In addition, the data is scaled using StandardScaler(), resulting in each feature having zero mean and unit standard deviation. The reason for scaling is that features with a larger range tend to hold greater significance in training the model compared to features with smaller ranges. Scaling each feature to the same standard deviation normalizes the data and removes this bias.
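A sketch of the split-then-scale step with scikit-learn, on hypothetical feature data; note that the scaler is fit on the training split only, so no information about the test data leaks into training:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: 100 samples, 3 features with very
# different ranges (e.g. acceleration vs. distance in millimeters).
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(0, 1, 100),
                     rng.normal(3000, 500, 100),
                     rng.normal(0.5, 0.1, 100)])
y = rng.integers(0, 12, 100)  # twelve activity labels

# 80/20 train/test split, as used for LEHAR.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Fit the scaler on the training data, then apply it to both sets.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
print(X_train.mean(axis=0).round(6))  # ~[0 0 0]
print(X_train.std(axis=0).round(6))   # ~[1 1 1]
```

The same fitted scaler must also be reused at inference time (the `scaler.transform` call in the listing earlier), otherwise the live features would be on a different scale than the training features.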
\subsection{Neural Network Structure}
\label{sec:structure}
Figure~\ref{fig:neuralNet} shows the structure of the trained neural network. The colored dots represent artificial neurons that are connected to each other in a network. The neurons are separated into layers where the first layer (highlighted in green) is the input layer, the middle layers (highlighted in blue) are hidden layers, and the last layer (highlighted in red) is the output layer.
Each neuron computes a weighted sum of its inputs and adds a bias. Each added layer adds a level of complexity to the decision making process of the model, because the neurons in a layer weigh the results of the previous layer to make a decision. After several tests with different numbers of layers, three layers proved sufficient to produce satisfactory results.
Each artificial neuron in the network includes an activation function, which determines what is fired to the next neurons in the network: it takes input data and produces an output. A variety of functions can be used when designing a neural network, and each layer can be assigned a different activation function for its neurons~\cite{kerasapi}. This project uses four Rectified Linear Unit (ReLU)~\cite{relu} and two Softmax~\cite{generalML} functions; based on testing with different activation functions, this combination worked best. ReLU and Softmax are very commonly used activation functions in machine learning. The ReLU activation function $f(z)$ is equal to zero if $z$ is less than zero, and equal to $z$ if $z$ is greater than or equal to zero. It is defined by the following equation:
\begin{equation} \label{eq:3}
f(z)=\max(0,z)
\end{equation}
The Softmax function is used in the last layer to normalize the output of the neural network to a probability distribution. It is defined by the following equation:
\begin{equation} \label{eq:4}
\sigma(\vec{z})_{i}=\frac{e^{z_{i}}}{\sum_{j=1}^{K} e^{z_{j}}}
\end{equation}
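The two activation functions above can be written directly in NumPy; this is a small illustrative sketch, not the project's training code:

```python
import numpy as np

def relu(z):
    # ReLU: f(z) = max(0, z), applied element-wise.
    return np.maximum(0.0, z)

def softmax(z):
    # Subtracting the maximum before exponentiating is a standard
    # numerical-stability trick; it leaves the distribution unchanged.
    e = np.exp(z - np.max(z))
    return e / e.sum()
```

Applied to a vector of network outputs, `softmax` returns non-negative values that sum to one, which is what makes the last layer's output interpretable as a probability distribution over activities.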
In the code, the input layer is defined with an input shape, which specifies the shape of the starting tensor and depends on the shape of the data (the amount of data and the number of features). The input layer has 2430 neurons that the input data is fed into. The output layer is where the model outputs which activity is being performed; since there are twelve activities, there are twelve neurons in the output layer.
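A forward pass through a network of this shape can be sketched in plain NumPy. The 2430 input neurons and 12 output neurons come from the text; the hidden-layer widths (64 each) and the random weights are placeholders for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Layer widths: 2430 inputs and 12 outputs are from the text;
# the three hidden widths are illustrative placeholders.
sizes = [2430, 64, 64, 64, 12]
weights = [rng.normal(0, 0.01, (m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Each layer computes a weighted sum of its input plus a bias,
    then applies an activation; the last layer uses softmax."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return softmax(x @ weights[-1] + biases[-1])

probs = forward(rng.normal(size=2430))
```

The final vector has one entry per activity, and its largest entry is the model's prediction.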
In addition to training a model using all of the features, I trained two more models. One of them was trained solely on the x, y, and z acceleration data, and one was solely trained on the audio features. The goal was to see what impact Wi-Fi RTT technology had on the \mbox{\textsc{LEHAR}}\xspace system. The next section describes the results of the model training.
\subsection{Training}
\label{sec:training}
Code was written to create a neural network of the appropriate structure and train it. The blue (train) line in Figure~\ref{fig:loss} shows the accuracy and loss when training the neural network at each iteration of optimization (epoch). Accuracy is the percentage of predictions the model got correct out of the total, and loss describes how far a prediction was from the actual value. At the end of training, the training accuracy and loss have leveled out, suggesting that additional training time would not help. The accuracy on the training data set reaches 100\% at the end of training.
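Concretely, the two training metrics can be computed as follows. This is a toy batch with made-up probabilities, not the project's actual training output:

```python
import numpy as np

# Toy batch: predicted class probabilities for four samples over
# three classes, plus the true class indices (all values made up).
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4],
                  [0.6, 0.3, 0.1]])
labels = np.array([0, 1, 2, 1])

# Accuracy: fraction of samples whose highest-probability class
# matches the true label.
accuracy = np.mean(probs.argmax(axis=1) == labels)

# Categorical cross-entropy loss: penalizes low probability assigned
# to the true class, averaged over the batch.
loss = -np.mean(np.log(probs[np.arange(len(labels)), labels]))
```

Here the fourth sample is misclassified, so the accuracy is 0.75, while the loss stays positive and shrinks as the predicted probabilities concentrate on the true classes.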
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/ModelAccuracy2.png}
(a)
\centering
\includegraphics[width=\linewidth]{Figures/ModelLoss2.png}
(b)
\caption{(a) shows the accuracy and (b) shows the loss of the model as it is being trained.}
\label{fig:loss}
\end{figure}
\section{Introduction}
\label{sec:intro}
Computing devices today have little awareness of their surroundings. This shortcoming is slowly being addressed by the addition of many different sensor types to mobile devices such as smartphones. The added sensors have enabled a variety of popular applications ranging from GPS-based navigation to step-counting. However, these sensors and enabled applications are just a stepping stone towards having computing devices understand what physical activities users are performing and providing them useful guidance in these tasks.
There are many potential applications of Human Activity Recognition, or HAR. For example, HAR can be used in a device that determines that a user is cooking and prompts them with recipes and even step-by-step guidance in the preparation. HAR could also play an important role in health care. A HAR system could assist senior citizens by analyzing patterns in activities over long periods of time and looking for discontinuities, such as a day on which they forgot to take their medicine. This could help seniors achieve greater independence in their daily lives and use changes in behavior to identify the onset of potential diseases.
Unfortunately, these applications have been hampered by the fact that accurately identifying what activity a user is performing has proven difficult for a variety of reasons. At its core, HAR involves the classification of human activities through the analysis of sensor data. This, in turn, creates two sub-problems: 1) the collection of sensor data, and 2) the classification into activities.
The sensor data necessary for HAR can come from a variety of sources and existing systems generally fit into two categories: sensors in smart homes and smartphones. Smart home-based HAR relies on cameras placed around the household and pre-installed sensors to recognize specific activities. This method tends to be very accurate, but expensive and hard to install. On the other hand, smartphone-based HAR is mainly accelerometer-based -- using a user's movement patterns to identify activities. It is very easy to deploy since smartphones are ubiquitous, but inaccurate at detecting activities.
On the classification front, smartphone-based systems are limited to identifying activities that are easily recognized from movement patterns, such as running, driving and sitting. Smart home designs typically are specialized and require additional hardware to identify any specific activity. Ideally, a system should classify a wide range of activities (cooking, brushing teeth, etc.) and even be able to identify progress or steps within an activity (e.g. mixing ingredients vs. cooking them). It should also be relatively easy to add new activities that the system can recognize.
Existing systems all suffer from some combination of inaccuracy, difficult deployment, and narrow range of recognized activities.
The goal of this project is to see if it is possible to perform practical, high-accuracy HAR that can recognize a wide range of activities using only a smartphone. A key enabler is the continued improvement of smartphone sensor technology, which has given smartphones a wide variety of sensors. A particularly important addition in recent smartphones is hardware support for Wi-Fi Round Trip Time (also called Wi-Fi RTT or 802.11mc)~\cite{802.11mc}, which enables indoor localization with a precision of one to two meters. The addition of location information is particularly useful for HAR since many activities are performed in specific locations. For example, cooking is typically done only in the kitchen and brushing teeth is typically done in a bathroom. As a result, location information can help differentiate between activities that may seem similar when observed using other sensors, opening the potential for significantly improved accuracy. In fact, fine-grain localization at meter-level accuracy may help in recognizing different activities within an area, such as eating at the kitchen table, cooking at the stove or making coffee at a coffee machine.
The Location-Enhanced HAR (\mbox{\textsc{LEHAR}}\xspace) system presented in this paper leverages acceleration, audio, and location data collected from a smartphone. The classification of these sensor readings into human activities relies on a neural network-based classifier. This neural network was trained using a labeled data set of sensor readings that I created for a set of twelve common activities (typing, vacuuming, washing dishes, running the blender, brushing teeth - electric, brushing teeth - regular, making coffee, taking medicine, using microwave, shaving, drying hair, and doing nothing). To better illustrate the value of adding new sensors, I also trained machine learning models using just the accelerometer readings and using a combination of accelerometer and microphone data.
\mbox{\textsc{LEHAR}}\xspace achieved an $F_1$-score of 0.965 at recognizing the twelve common activities described above. Using just acceleration, an $F_1$-score of 0.660 was achieved. Note that most existing smartphone systems use only acceleration for HAR; this shows that acceleration data alone is not very accurate at predicting activities similar to the ones I chose to test. Using only audio, an $F_1$-score of 0.865 was achieved, showing that \mbox{\textsc{LEHAR}}\xspace's use of location provides a more accurate approach to human activity recognition.
The rest of this paper is organized as follows. Section~\ref{sec:system} provides an overview of the \mbox{\textsc{LEHAR}}\xspace system, including the system requirements and the key techniques used. Sections~\ref{sec:collect},~\ref{sec:extraction}, and~\ref{sec:inference} focus on the design of the core software components of \mbox{\textsc{LEHAR}}\xspace. Section~\ref{sec:results} provides a detailed description of the results and an analysis of the data from this study. Discussion, Related Work and Conclusions are presented in Sections~\ref{sec:discussion},~\ref{sec:related} and~\ref{sec:conclusion}.
\section{Results}
\label{sec:results}
In my experimental evaluation of \mbox{\textsc{LEHAR}}\xspace, I consider three key questions:
\begin{enumerate}
\item How accurate is \mbox{\textsc{LEHAR}}\xspace at identifying activities and is it significantly more accurate than existing approaches? (Section~\ref{sec:accuracy})
\item How does the addition of more recognized activities impact the accuracy of \mbox{\textsc{LEHAR}}\xspace? (Section~\ref{sec:scaling})
\item How much location information does \mbox{\textsc{LEHAR}}\xspace need, and how accurate must it be, to provide its benefits? (Section~\ref{sec:locVacc})
\end{enumerate}
\subsection{\mbox{\textsc{LEHAR}}\xspace Accuracy}
\label{sec:accuracy}
To understand how well \mbox{\textsc{LEHAR}}\xspace performs, I tested \mbox{\textsc{LEHAR}}\xspace on the 20\% of data set aside for testing and validation (see Section~\ref{sec:data}). Figure~\ref{fig:loss} displays the accuracy and loss of the neural network at each iteration of training (epoch) on this test data (the red test line).
\mbox{\textsc{LEHAR}}\xspace achieved a validation accuracy of 96.5\% (accuracy when attempting to predict the activities in the testing data).
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/LEHARConfusionMatrix.png}
\caption{Confusion matrix for the \mbox{\textsc{LEHAR}}\xspace system: counts of the model's predictions on the test data versus the actual labels.}
\label{fig:LEHARconfusionMatrix}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/AudioConfusionMatrix.png}
\caption{Confusion matrix for the audio-only model: counts of the model's predictions on the test data versus the actual labels.}
\label{fig:audioconfusionMatrix}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/AccelerationConfusionMatrix.png}
\caption{Confusion matrix for the acceleration-only model: counts of the model's predictions on the test data versus the actual labels.}
\label{fig:accconfusionMatrix}
\end{figure}
To better understand the nature of the errors made by \mbox{\textsc{LEHAR}}\xspace in recognizing activities, I created a confusion matrix for its activity predictions (Figure~\ref{fig:LEHARconfusionMatrix}). The rows represent the true activity and the columns represent the activity the model predicted; each box contains the count of such prediction/reality pairs. The boxes along the diagonal (highlighted in blue) represent situations where the actual activity and predicted activity match (where the model correctly identified the performed activity). The yellow boxes indicate situations where the model incorrectly identified an activity. As indicated by the high counts in the blue boxes compared to the yellow ones, the model performed well. The overall $F_1$-score for the \mbox{\textsc{LEHAR}}\xspace model was 0.965. There is no clear, statistically significant pattern to the errors to suggest a common source of error.
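The per-class $F_1$-scores discussed above follow directly from a confusion matrix. The sketch below uses a small made-up 3-class matrix, not the actual matrix from the figure:

```python
import numpy as np

# Illustrative confusion matrix (rows: true class, columns: predicted
# class). The counts are made up; the real matrix has twelve classes.
cm = np.array([[10,  1,  0],
               [ 0, 12,  2],
               [ 1,  0,  9]])

tp = np.diag(cm).astype(float)       # correct predictions per class
precision = tp / cm.sum(axis=0)      # column sums: all predicted as class i
recall = tp / cm.sum(axis=1)         # row sums: all truly class i
f1 = 2 * precision * recall / (precision + recall)
macro_f1 = f1.mean()                 # unweighted average over classes
```

Off-diagonal counts lower the precision of the predicted column and the recall of the true row simultaneously, which is why systematic confusions (such as microwave versus coffee in the audio-only model) show up clearly in per-class $F_1$-scores.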
Figures~\ref{fig:audioconfusionMatrix} and~\ref{fig:accconfusionMatrix} are confusion matrices for the audio-only and acceleration-only models. Note that the audio-only model made several incorrect predictions by classifying audio clips of using the microwave as making coffee, most likely because the two are similar-sounding activities. The \mbox{\textsc{LEHAR}}\xspace system did not make the same mistake as often because it includes location data, and using the microwave and making coffee do not occur at the same location within the kitchen. The acceleration-only model made incorrect predictions much more often than the other two models. Activities such as typing are identified reliably since typing is the only activity in the list performed while sitting\footnote{While sitting, the phone is consistently in a different orientation, since the phone was in my pant pocket.}.
The model trained solely on acceleration data produced an $F_1$-score of 0.660. This low accuracy indicates that most current HAR systems that rely solely on acceleration information would not be capable of identifying complex activities such as the ones chosen in this project, which explains why current smartphone HAR systems focus only on movement-based activities such as running and walking.
The model trained solely on the audio features achieved an $F_1$-score of 0.865. The use of audio data is an important factor in accurately predicting the activities. In addition, the confusion between making coffee and using the microwave suggests that other similar-sounding activities will cause issues as more and more activities are added to the system. \mbox{\textsc{LEHAR}}\xspace's combined model performed significantly better, showing that the use of Wi-Fi RTT and of multiple smartphone sensors enables a highly accurate HAR system.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/numactivity.png}
\caption{This figure shows how the accuracy of the model varies with the number of activities.}
\label{fig:scaling}
\end{figure}
\subsection{Scaling with Activities}
\label{sec:scaling}
One key concern is whether such a system can handle a large number of unique activities. To understand the scaling properties of \mbox{\textsc{LEHAR}}\xspace, I evaluated the accuracy of the system while adding one new activity at a time to the set of identifiable activities and retraining the model. I also trained audio-only and acceleration-only models while varying the number of activities. The results of this experiment are shown in Figure~\ref{fig:scaling}. As the graph shows, the performance of the acceleration-only model degrades rapidly with the addition of activities, and even the audio-only model degrades gradually, because added activities introduce motions or sounds similar to existing ones. In contrast, \mbox{\textsc{LEHAR}}\xspace is able to provide consistently high accuracy despite the addition of more activities. I believe this is because the addition of location effectively limits the set of activities and associated sounds that the system is trying to match at any time to a small enough set for high accuracy.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/numap.png}
\caption{This figure shows how the accuracy of the model varies with the number of access points and with the use of RSSI instead of Wi-Fi RTT.}
\label{fig:location}
\end{figure}
\subsection{Impact of Location Accuracy}
\label{sec:locVacc}
In this section, I consider the impact of access point availability on the accuracy of the whole system. In particular, I consider the issues of having fewer access points or having access points that do not support the Wi-Fi RTT protocol. To understand the value of additional access points, I varied the number of access points available in my training and testing data sets. I also considered the use of Received Signal Strength Indication (RSSI) instead of Wi-Fi RTT information; as mentioned in Section~\ref{sec:rssi}, signal strength can be used for localization, but at some loss of accuracy. To clearly quantify the accuracy gained from the location information, I chose to train a location-only model for this experiment. Figure~\ref{fig:location} shows the results. As the graph shows, the Wi-Fi RTT model performs significantly better than the RSSI-based model, likely because Wi-Fi RTT provides more accurate location information than RSSI. This gap decreases as more access points are added, since the redundancy in information provided by multiple access points compensates for the inaccuracy of each measurement. It is also worth noting that the accuracy of the Wi-Fi RTT system levels out at three access points. As mentioned in Section~\ref{sec:tdoa}, a measurement to a single access point localizes the user to a spherical shell, measurements to two access points localize the user to a circular region, measurements to three access points narrow the position down to two possible locations, and measurements to four access points yield a single spot. As a result, the gain in location information beyond three Wi-Fi RTT measurements is minimal. These measurements do indicate that even two Wi-Fi RTT access points, or three standard access points for RSSI measurement, provide significant value for HAR systems.
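The way several distance measurements pin down a single position can be sketched as a small least-squares trilateration. The access-point coordinates and the phone position below are made up for illustration; the sketch is not the localization code used by the system:

```python
import numpy as np

# Hypothetical access-point positions (metres) and a hypothetical
# true phone position; all coordinates are made up for this sketch.
aps = np.array([[0.0,  0.0, 0.0],
                [8.0,  0.0, 0.0],
                [0.0, 10.0, 0.0],
                [8.0, 10.0, 3.0]])
true_pos = np.array([3.0, 4.0, 1.0])

# Wi-Fi RTT yields one distance estimate per access point; here we
# use the exact distances (no measurement noise).
d = np.linalg.norm(aps - true_pos, axis=1)

# Each distance defines a sphere ||x - a_i||^2 = d_i^2. Subtracting
# the first sphere equation from the others cancels the quadratic
# term and leaves a linear system A x = b.
A = 2 * (aps[1:] - aps[0])
b = (d[0]**2 - d[1:]**2
     + (aps[1:]**2).sum(axis=1) - (aps[0]**2).sum())
est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With exact distances to four well-spread access points the system has a unique solution, matching the geometric argument above; with noisy measurements, the least-squares fit degrades gracefully as access points are removed.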
\section{\mbox{\textsc{LEHAR}}\xspace Software Design}
\input{_dataCollection}
\input{_extraction}
\input{_inference}
\section{System Overview}
\label{sec:system}
In this section, I describe the design of \mbox{\textsc{LEHAR}}\xspace. Section~\ref{sec:require} describes the key requirements that any system designed to perform HAR must address. Section~\ref{sec:hardware} provides an overview of the \mbox{\textsc{LEHAR}}\xspace hardware design and Section~\ref{sec:software} discusses the software design.
\subsection{System Requirements}
\label{sec:require}
The goal is to develop a HAR system that is easy to deploy, accurately identifies activities, and is easily extended to support new activities. This would enable a wide range of applications that need such activity context to provide useful information. For example, it could be used in applications that analyze the daily activity patterns of seniors to detect abnormal behavior or other health deterioration.
In addition to meeting this high-level goal, the design of the \mbox{\textsc{LEHAR}}\xspace system needs to meet the following important requirements:
\begin{itemize}
\item {\bf Easy Deployment.} There should be no significant prior instrumentation in the household required. Many existing approaches to enabling HAR in homes require significant specialized infrastructure to be added around the house~\cite{2014.Krishnan}.
\item {\bf Portable.} Any sensors or components needed by the system must be either integrated into the mobile device or be similar in size/portability as the mobile device. Given the battery life constraints of mobile devices, the solution should use little if any power.
\item {\bf Unobtrusive.} The system should work without being intrusive to the user and not require the user to do anything for the system to operate.
\item {\bf Accurate.} The system must have high accuracy. In order to support applications that depend on the recognition of multiple related activities, a HAR system must identify activities with an accuracy of at least 95\%.
\item {\bf Inexpensive.} The system components must not add significant expense to the device.
\item {\bf Extendable.} It should be easy to add new activities that the system can recognize.
\end{itemize}
\mbox{\textsc{LEHAR}}\xspace employs a few main techniques to meet these requirements. First, it is a smartphone-only system; it relies only on the sensors that are available on a modern smartphone to collect data about a user's activity. This allows it to meet the Easy Deployment, Portable, Unobtrusive, and Inexpensive requirements. Second, it leverages a combination of different sensors, with fine-grain indoor localization as a key addition over any previous system, to recognize activities. This allows \mbox{\textsc{LEHAR}}\xspace to provide accurate predictions, meeting the Accurate requirement. Finally, \mbox{\textsc{LEHAR}}\xspace relies on a machine learning model to classify sensor observations to recognized activities. Retraining the machine learning model to recognize new activities can be easily automated, addressing the Extendable requirement.
\subsection{\mbox{\textsc{LEHAR}}\xspace Hardware Overview}
\label{sec:hardware}
Given the smartphone-based design of \mbox{\textsc{LEHAR}}\xspace, the hardware consists solely of the smartphone itself and the communication infrastructure associated with it. Since my objective was to incorporate fine-grain location, I chose a smartphone and communication infrastructure that incorporates support for some of the newest localization techniques (802.11mc, aka. Wi-Fi RTT~\cite{802.11mc}). Google maintains a list of devices that support RTT-based localization on its Android Developer Guide~\cite{androiddev}. At the time of this project, a wide range of phones support this standard, including phones from Google, Samsung, LG, and Xiaomi. I chose a relatively inexpensive and easy-to-obtain phone, the Google Pixel 4a~\cite{pixel4a}. While phone support for the protocol is widespread, only Google-branded Wi-Fi access points support the protocol from an infrastructure perspective. I chose to use the Google Nest Wi-Fi Router and Google Nest Wi-Fi Point devices~\cite{nestwifi} for my infrastructure.
Figure~\ref{fig:home} provides a pictorial view of the communication infrastructure within the home used for testing. The home has three floors -- a basement and two above-ground floors. The Wi-Fi RTT protocol provides distance measurements to any access point that is within communication range (shown using the blue-green lines). Since Wi-Fi RTT only provides a distance estimate, measurements to four different access points are necessary to localize a device to a single location. This is explained in greater detail in Section~\ref{sec:tdoa}.
For this reason, I chose to deploy four access points in the home. In addition, I placed the access points at locations near the edge of the home, far from each other. This minimizes the localization error, since measurements to widely spaced access points produce a smaller intersection of potential locations.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{Figures/SoftwareDesign.png}
\caption{Diagram outlining the components of the \mbox{\textsc{LEHAR}}\xspace software.}
\label{fig:softwareDesign}
\end{figure*}
\subsection{\mbox{\textsc{LEHAR}}\xspace Software Overview}
\label{sec:software}
The overall structure of the \mbox{\textsc{LEHAR}}\xspace software is shown in Figure~\ref{fig:softwareDesign}. This software, which runs on the smartphone, consists of three main sections: data collection, feature extraction, and activity inference. The data collection software component of the system determines what data to collect and how often to collect it. I describe the software that performs this task in greater detail in Section~\ref{sec:collect}.
The feature extraction component of the code, described in Section~\ref{sec:extraction}, computes several defining features of the audio in the time and frequency domains. The goal of feature extraction is to process the sensor data into a format that is more applicable to machine learning. The activity inference component uses a trained neural network classifier to convert the raw sensor data and extracted features into recognized activities. The operation and training of this classifier is described in Section~\ref{sec:inference}.
\section{Acknowledgments}
\bibliographystyle{ACM-Reference-Format}
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of
the three is discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an
inline or in-text formula. It is produced by the
\textbf{math} environment, which can be
invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end}
construction or with the short form \texttt{\$\,\ldots\$}. You
can use any of the symbols and structures,
from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a
few examples of in-text equations in context. Notice how
this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols
and structures available in \LaTeX\@; this section will just
give a couple of examples of display equations in context.
First, consider the equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{displaymath}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or more images can be placed within a figure. If your figure contains third-party material, you must clearly identify it as such, as shown in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \& Ewing, Inc. [Public domain], via Wikimedia Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{The 1907 Franklin Model D roadster.}
\end{figure}
Your figures should contain a caption which describes the figure to the reader. Figure captions go below the figure. Your figures should {\bf also} include a description suitable for screen readers, to assist the visually-challenged to better understand your work.
Figure captions are placed {\it below} the figure.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that are placed after all author and affiliation information, and before the body of the article, spanning the page. If you wish to have such a figure in your article, place the command immediately before the \verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of {\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08emT\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}\ for the preparation and formatting of one's references is strongly recommended. Authors' names should be complete --- use full first names (``Donald E. Knuth'') not initials (``D. E. Knuth'') --- and the salient identifying features of a reference should be included: title, year, volume, number, pages, article DOI, etc.
The bibliography is included in your source document with these two commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\end{verbatim}
\section{Introduction}
In recent years, achievements in laser technology have led to
remarkable progress in the analysis of electronic degrees of freedom in
atomic clusters, for an extensive review see \cite{Reinbook}.
For example, photoelectron spectra (PES) can now be measured with a high
accuracy and for a broad variety of clusters
\cite{Wrigge,Moseler,Alum,semi,SIC_rost,c60_cam,c60_cam2}.
However, a wealth of electronic modes still remains unexplored.
In clusters with diameters far below the laser wavelength (i.e. in clusters with
the number of atoms $N < 10^6 - 10^8$), the laser light couples only to dipole
($\lambda =1$) states. Hence one obtains the well-known dipole plasmon.
At the same time, it is still very hard to access the electronic modes with
higher multipolarity $\lambda > 1$. Multi-photon processes can in general
provide access to these modes. For example, two photons couple to quadrupole
($\lambda =2$) modes \cite{Ne_PRA_2004}. Precisely this case, the
investigation of quadrupole modes
of valence electrons in two-photon processes, will be scrutinized in the
present paper. In a previous publication \cite{Ne_PRA_2006}, we have studied
the scenario of two-photon processes where both photons originate from
the same laser
pulse and so have the same frequency. In this paper, we will analyze an
alternative two-photon technique employing two different laser
frequencies. It will be
shown that the combined implementation of both techniques
is best suited for the exploration of electronic quadrupole states.
A particularly interesting aspect emerges for low-frequency (infrared) quadrupole
modes in small deformed clusters.
It was found that these modes are dominated by a single electron-hole (1eh) pair
\cite{Ne_PRA_2004,Ne_PRA_2006} and their spectra are close to the pure
1eh energy
differences $\epsilon_{eh}= \epsilon_{e}-\epsilon_{h}$.
As a result, measuring the energy
$\epsilon_{eh}$ in a two-photon process and the energy of the hole (occupied) state
$\epsilon_{h}$ by PES \cite{Wrigge,Moseler,Alum}, we gain information on the
particle energy $\epsilon_{e}$. This makes it possible to determine the full single-particle
spectrum of valence electrons near the HOMO-LUMO (highest occupied
molecular orbital - lowest unoccupied molecular orbital) gap. Being sensitive
to diverse cluster features (equilibrium shape, ionic structure, ...), the
electronic spectra can in turn serve for investigations of these features. Besides, they
constitute a critical test of any theoretical description.
It is worth noting that,
unlike the dipole states with their high frequencies and strong collective mixing,
the quadrupole states of interest mainly originate from the deformation
splitting \cite{Ne_PRA_2004,Ne_PRA_2006}. Their energy scale is thus quite
small and they usually lie in the infrared region $< 1 eV$. In small clusters
the spectrum of these states is very dilute. This prevents collective mixing
of the states, favors their $1eh$ nature, and simplifies the
experimental discrimination. Different kinds of small deformed clusters
(free, supported, embedded) can be explored for the infrared quadrupole modes.
In this paper we will consider the simplest case of free clusters.
Two-photon processes allow one to excite not only the low-frequency quadrupole
modes, but also high-frequency quadrupole states in the regime of the
quadrupole plasmon. In fact, these plasmon states carry a large quadrupole
strength and thus rather easily respond to two-photon probes. The two sorts of
quadrupole modes can be characterized in terms of transitions between major
quantum shells of the cluster mean field \cite{Ne_PRA_2004}.
The low-frequency quadrupole modes correspond to $1eh$ excitations within the
valence quantum shell $N$ ($\Delta N =0$ modes). The high-frequency states in
the quadrupole-plasmon range correspond to the excitations over two major
shells ($\Delta N =2$ modes). There are still no experimental
data on either kind of mode, but their investigation could
deliver valuable spectroscopic information.
As mentioned above, the excitation of quadrupole states needs at least
a two-photon process. A variety of such processes is known in atomic and
molecular spectroscopy \cite{Scoles,SEP,Vitanov,Berg}. However, to our knowledge,
none has been applied so far in experimental investigations of atomic
clusters. Some of these processes, namely the direct two-photon population (DTP)
\cite{Ne_PRA_2006}, the off-resonant stimulated Raman scattering (ORSR)
\cite{SEP}, and the stimulated Raman adiabatic passage (STIRAP)
\cite{Vitanov,Berg}, seem to be quite promising \cite{NY} and are thus worth a
closer inspection. As is shown below, some particular cluster properties, e.g.
the high probability of undesirable plasmon population, complicate the
implementation of two-photon schemes. Hence we need a detailed analysis
based on realistic calculations.
As a first step in this direction, the pump-probe DTP method was recently
investigated \cite{Ne_PRA_2006}. In this method the electronic infrared
quadrupole state is generated via the direct resonant two-photon (two-dipole)
excitation by the pump laser, see the DTP scheme at the left part of
figure \ref{fig:lam_sys_fig1}. The population of the quadrupole state
is detected through the
appearance of satellites in the photoelectron spectra produced by a probe
pulse (not shown in figure \ref{fig:lam_sys_fig1}). Femtosecond pump and probe pulses with
intensities $I = 2\cdot 10^{10} - 2\cdot 10^{11} W/cm^2$ and pulse duration $T
= 200 - 500$ fs were found to be optimal. The systematic TDLDA
calculations have shown that the method is very robust and delivers not only
the $1eh$ spectrum but also the lifetime of the $1eh$ pairs.
\begin{figure}
\centerline{
\includegraphics[height=10cm,width=3cm,angle=-90]{fig1.eps}
}
\caption{\label{fig:lam_sys_fig1}
Schemes of two-photon processes: direct two-photon (DTP) in a two-level system
and off-resonant stimulated Raman (ORSR) in a three-level $\Lambda$-system.
The initial $|0\rangle$, intermediate $|1\rangle$, and target $|2\rangle$ states have
the orbital moments $\lambda=$0, 1, and 2, respectively.
In ORSR the pump dipole pulse
couples the ground and intermediate states while the Stokes dipole pulse
provides the coupling of the intermediate and target states.
$\Delta$ is the detuning from the intermediate dipole state $|1\rangle$.
The purpose of both DTP and ORSR is the population of the
target quadrupole state $|2\rangle$.
}
\end{figure}
In the present paper, we aim to inspect an alternative two-photon method,
off-resonant stimulated Raman scattering (ORSR). In this method, the target
quadrupole state is populated by two different dipole transitions via an
intermediate dipole state. Hence we deal with the so called $\Lambda$-system, see
right part of figure \ref{fig:lam_sys_fig1}. The pump pulse provides the coupling of the initial
(ground) state $|0\rangle$ to the intermediate state $|1\rangle$. The Stokes pulse
couples $|1\rangle$ with the quadrupole target state $|2\rangle$, altogether stimulating
the transition to the target state. The pulses have to maintain the two-photon
resonance condition $\omega_p - \omega_s = \omega_2$, i.e. the
difference of the pump and Stokes frequencies must coincide with the frequency of the
target quadrupole state. Isolated dipole states of $1eh$ nature as well as
the dipole plasmon can serve as the intermediate state $|1\rangle$. However, one
should avoid a real population of the intermediate state to prevent
undesirable leaking into competing channels. This is especially important for
the dipole plasmon which decays via a fast Landau damping associated
with a short lifetime ($\sim 10-20$ fs) \cite{Reinbook}. To avoid the actual
population of $|1\rangle$, we will use an appreciable detuning $\Delta$ from the
energy of this state, hence the name {\it off-resonant} process. A
considerable detuning is the crucial point in our scheme. Detection of the
population of the target
quadrupole states (both at low- and high-frequency) in the ORSR can be done by
a probe pulse in the same way as in the DTP \cite{Ne_PRA_2006}.
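As a minimal numeric illustration, the resonance condition can be checked for the frequencies used below for Na$_{11}^+$ (pump 1.27 eV, Stokes 0.46 eV, target mode 0.81 eV); the 0.03 eV tolerance standing in for the finite mode width is our assumption:

```python
# Two-photon (Raman) resonance condition: omega_p - omega_s = omega_2.
# Frequencies in eV, taken from the Na_11^+ ORSR case discussed in Sec. 3.
OMEGA_P = 1.27   # pump
OMEGA_S = 0.46   # Stokes
OMEGA_2 = 0.81   # target low-frequency quadrupole mode

def two_photon_resonant(omega_p, omega_s, omega_2, tol=0.03):
    """True if the pump/Stokes pair hits the target mode within `tol` (eV).
    The tolerance is a hypothetical stand-in for the finite mode width."""
    return abs((omega_p - omega_s) - omega_2) <= tol

print(two_photon_resonant(OMEGA_P, OMEGA_S, OMEGA_2))   # True
print(two_photon_resonant(OMEGA_P, 0.40, OMEGA_2))      # False (detuned Stokes)
```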
As compared with DTP, the ORSR scheme is more involved since it requires not two
but three different pulses (pump, Stokes and probe). At the same time, the
ORSR allows to explore the low-lying quadrupole states by lasers in the region
of visible light and hence is an interesting alternative to DTP. Besides,
ORSR is widely used in atomic and molecular physics. It is certainly
worthwhile to test this method for atomic clusters as well.
In the present study, we will apply the ORSR to low-frequency quadrupole
states. However, as is shown below, it is hard to explore these states
without touching the high-frequency quadrupoles which very easily respond
to any two-photon probes. Both kinds of quadrupole states should thus be
involved into a realistic exploration scheme. The high-frequency quadrupoles
can hardly be studied within the ORSR in $\Lambda$-configuration because in
this case we would need the intermediate dipole state lying above the
quadrupole plasmon \cite{comm1}.
It is hence better to explore these
quadrupoles by DTP. As a result, we naturally come to a combined analysis
implementing the ORSR for the low-frequency quadrupoles and the DTP for
their high-frequency counterparts. The aim of the present paper is to develop
optimal schemes for the combined DTP-ORSR method.
The paper is outlined as follows. In Section
2 the calculation scheme is sketched. In Sec. 3 the ORSR excitation of the
low-frequency quadrupole is discussed for two cases of the intermediate state:
an isolated dipole state and the tail of the dipole plasmon. The stability of the
process against variations of the main parameters is scrutinized. The DTP population of the
quadrupole plasmon states is outlined and the general scheme for the combined
DTP-ORSR experiment is developed. The conclusions are drawn in Sec. 4.
\section{Calculation scheme and test case}
The theoretical and numerical framework of our study is explained in detail
elsewhere \cite{Reinbook,Cal}. So we summarize here only the key points. The
calculations are performed within the time-dependent local density
approximation (TDLDA) using the exchange-correlation functional of \cite{ex_corr}
and an averaged self-interaction correction \cite{adsic}. As a first step, the static
single-electron wave functions $\bar\phi_i (\vec r)$ are calculated from the
stationary Kohn-Sham equations. Then time evolution of the single-electron
wave functions $\phi_i (\vec r,t)$ is computed starting from the initial
condition $\phi_i (\vec r,t=0)=\bar\phi_i (\vec r)$. The ionic background is
treated in the soft jellium approximation \cite{Reinbook}.
This approximation, although a bit daring, is capable of reproducing the basic
trends of shapes and subsequent plasmon spectra of Na clusters
\cite{Brack,Hee93,Mon95b}. In the present paper we consider axially deformed
clusters and choose Na$_{11}^+$ as a particularly suitable test case.
The axial symmetry and jellium approximation together greatly reduce the computational
effort and thus allow huge scans in the multi-parameter space of multi-photon
processes even in deformed clusters. Absorbing boundary conditions are employed
for the description of photoionization. The numerical handling is performed by
standard methods (gradient iterations for the ground state, time splitting for time
propagation). The excitation spectra in the linear response regime are computed
in TDLDA by standard techniques of spectral analysis \cite{Cal,spectran}. The
laser induced dynamics is simulated by adding to the TDLDA the laser pulses as
classical external dipole fields of the form $W(t)=E_0\,z\,\sin^2(\pi t/T)\cos
(\omega t)$ with the field strength $E_0$ ($\propto$ square root of the intensity
$I$) lasting for one half-cycle of the profile function $\sin^2(\pi t/T)$. The field
is applied in $z$-direction (the symmetry axis of the system);
$\omega$ is the frequency and $T$ is the pulse duration.
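A minimal sketch of this pulse profile (the spatial factor $z$ and physical units are omitted; the function name is ours):

```python
import numpy as np

def laser_field(t, e0, omega, T):
    """Temporal part of W(t) = E0 * sin^2(pi t / T) * cos(omega t):
    a sin^2 envelope lasting one half-cycle (0 <= t <= T), zero outside."""
    t = np.asarray(t, dtype=float)
    envelope = np.where((t >= 0.0) & (t <= T), np.sin(np.pi * t / T) ** 2, 0.0)
    return e0 * envelope * np.cos(omega * t)

# The envelope rises smoothly, peaks at t = T/2, and vanishes at t = 0 and t = T.
```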
\begin{table}[t]
\begin{center}
\caption{\label{tab:spectrum}
The spectrum of the quadrupole states below 1 eV in Na$_{11}^+$,
approximated by the energies of the dominant $1eh$ pairs. The structure of
$1eh$ pairs is done in terms of the Nilsson-Clemenger quantum numbers
$[Nn_z\Lambda]$ \protect\cite{Clem}. See text for more details.
}
\begin{tabular}{ccc}
\hline
$\lambda\mu$ & $\hbar\omega $ [eV] & $[Nn_z\Lambda]_e \; [Nn_z\Lambda]_h$ \\
\hline
21 & 0.41 & [211] [220]\\
22 & 0.60 & [202] [220]\\
20 & 0.75 & [200] [220]\\
\hline
\end{tabular}
\end{center}
\end{table}
Small deformed sodium clusters are optimal for a first exploratory analysis. We
consider here the test case Na$_{11}^+$. It is strongly prolate, which
provides a comfortably strong collective splitting of the plasmon
resonance and of the single-electron spectrum. Its infrared spectrum below 1
eV is very dilute and displays only three well separated electronic
levels, namely the quadrupole modes of multipolarity $\lambda\mu$=20,
21 and 22 (see Table 1). According to our estimates \cite{Ne_PRA_2004},
these modes almost correspond to pure $1eh$ states as indicated in
Table 1. The collective shifts of these states through the Coulomb
residual interaction are modest, e.g. $\sim 0.05$ eV for
$\lambda\mu$=20, which corroborates their $1eh$ structure. This
feature is a direct consequence of the dilute spectrum, because large
energy intervals between the levels prevent their collective mixing.
In Na$_{11}^+$ dipole states start above 1 eV and so are well
separated from the low-frequency quadrupole modes. Such a spectral
separation is a general feature of small deformed clusters. As was
mentioned above and as can be seen from Table 1, the low-frequency
quadrupole modes are represented by the $1eh$ transitions inside the
valence shell ($\Delta N =0$). Moreover, most of these
modes arise due to the deformation splitting of the levels and so
their energy scale is small. Instead, the dipole modes are generated
by $\Delta N =1$ electron-hole transitions between the neighboring quantum
shells and thus acquire much higher energies. The effective energy
separation of the quadrupole and dipole modes favors discrimination of
the low-frequency quadrupole spectrum.
\begin{figure}
\centerline{
\includegraphics[height=9cm,width=6.5cm,angle=-90]{fig2.eps}
}
\caption{\label{fig:be12_fig2}
Dipole ($\lambda\mu =$10) and quadrupole ($\lambda\mu =$20)
strength distributions in Na$^+_{11}$. The large peaks above 2 eV represent
the states of the dipole and quadrupole plasmons. The sets of
horizontal arrows depict
two-photon (two-dipole) processes: DTP (pump + pump) and ORSR (pump + Stokes).
The latter runs via the isolated dipole state at 1.35 eV and the tail of the
dipole plasmon. The low- and high-frequency quadrupole states of interest
are seen as peaks at 0.8 and 2.6 eV.
}
\end{figure}
In what follows, we will concentrate on the states with
$\lambda\mu$=20. This suffices for our exploration. Besides, the limitation
to $\lambda\mu$=20 allows us to maintain the axial symmetry which, in turn,
reduces computational expense. Figure \ref{fig:be12_fig2} shows the relevant part of
the excitation spectrum in terms of the dipole ($\lambda\mu=10$) and
quadrupole ($\lambda\mu=20$) photo-absorption strengths. The low- and
high-frequency quadrupole modes of interest are seen at the energies
$e_{20}=$0.8 and 2.6 eV. The horizontal arrows depict the DTP and
ORSR processes considered below. Two ORSR versions are discussed. In
the first case, the process runs via the isolated dipole state at 1.35
eV. In the second case, it proceeds via the intermediate region
between the isolated dipole state at 1.9 eV and the dipole plasmon. In
both cases, there is an appreciable detuning from the dipole states
such that one deals only with remote tails of the states.
\begin{figure}
\centerline{
\includegraphics[height=11cm,width=8cm,angle=-90]{fig3.eps}
}
\caption{\label{fig:orsr_iso_fig3}
ORSR in Na$^+_{11}$ via the isolated dipole state at 1.35 eV.
The left panels show quadrupole and dipole strengths as a function of
the excitation energy. The right panels display the electronic
quadrupole and dipole moments (in atomic units) as a function of
time. It is seen that, while the quadrupole oscillation is persistent
(right-top plot), the dipole signal exists only during the coincident
pump and Stokes pulses (right-bottom plot). The calculations were
performed for laser frequencies $\hbar \omega_{s}=$0.46 eV and $\hbar
\omega_{p}=$1.27 eV, intensities $I_{s}=1.5 I_{p}= 2.2 \cdot 10^{10}
W/cm^2$ and pulse durations $T_{s}=T_{p}= 300$ fs. The detuning from
the intermediate state is $\Delta \sim $0.08 eV. }
\end{figure}
\section{Results and discussion}
\subsection{ORSR for low-frequency isolated quadrupole}
Figure \ref{fig:orsr_iso_fig3} shows the ORSR via the isolated dipole
state at 1.35 eV with a detuning of $\Delta \sim $0.08 eV. The right
panels show time evolution of the dipole and quadrupole moments. It
is seen that the ORSR mechanism leads to {\it enduring} quadrupole
oscillation. Since electron-ion and electron-electron relaxations are
not taken into account here, these oscillations persist for several ps
and beyond. The dipole oscillations, on the other hand, exist only
during the pulses at $t=0\!-\!300$ fs. The left panels display the
corresponding dipole and quadrupole strengths in the frequency domain,
obtained as the Fourier transforms of the oscillating moments. The
quadrupole mode of interest at 0.81 eV dominates all other quadrupole
excitations, even the quadrupole plasmon. So, just this mode is present
in the enduring oscillation seen in the right-top panel. The dipole
strength is negligible (compare different scales of the top and bottom
panels) and so should not noticeably compete with the quadrupole mode
of interest. As was shown in \cite{Ne_PRA_2006}, a significant decoupling of
competing modes is crucial for detection of the target quadrupole
state.
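A schematic stand-in for this spectral analysis, assuming a simple windowed Fourier transform of a recorded moment (the actual techniques of \cite{Cal,spectran} are more refined):

```python
import numpy as np

def strength(signal, dt):
    """Crude spectral strength of a recorded multipole moment:
    windowed |FFT|^2 versus frequency (arbitrary units)."""
    window = np.hanning(len(signal))          # suppress edge artifacts
    spec = np.abs(np.fft.rfft(signal * window)) ** 2
    freq = np.fft.rfftfreq(len(signal), d=dt)
    return freq, spec

# Toy signal: a persistent oscillation at frequency 0.2 (arbitrary units).
dt = 0.1
t = np.arange(0.0, 500.0, dt)
q_moment = np.cos(2.0 * np.pi * 0.2 * t)
freq, spec = strength(q_moment, dt)
print(freq[np.argmax(spec)])                  # the peak recovers the mode frequency, 0.2
```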
\begin{figure}
\centerline{
\includegraphics[height=11cm,width=8cm,angle=-90]{fig4.eps}
}
\caption{\label{fig:orsr_pla_fig4}
Same as figure \ref{fig:orsr_iso_fig3}, but for an intermediate dipole frequency of
2.05 eV. The calculations were performed for the laser frequencies
$\hbar \omega_{s}=$1.24 eV and $\hbar \omega_{p}=$2.05 eV, intensities
$I_{s}=1.5 I_{p}= 1.44 \cdot 10^{10} W/cm^2$ and pulse durations
$T_{s}=T_{p}= 300$ fs. The detuning is $\Delta \sim$0.15 eV with
respect to the dipole state at 1.9 eV and $\Delta \sim$0.25 eV with
respect to the dipole plasmon at 2.3 eV. }
\end{figure}
Figure \ref{fig:orsr_pla_fig4} shows results for the ORSR via the
region between two close dipole structures. The pump laser frequency
2.05 eV is placed between the isolated $1eh$ peak at 1.9 eV
and the tail of the dipole plasmon lying at 2.3 eV, thus representing a
considerable detuning from both dipole structures. As these structures
are much stronger than the intermediate dipole in the previous case,
it becomes possible to get a twice larger quadrupole signal even at a
lower laser intensity ($I_{s}=1.5 I_{p}= 1.44 \cdot 10^{10} W/cm^2$
against $I_{s}=1.5 I_{p}= 2.2 \cdot 10^{10} W/cm^2$ in
figure \ref{fig:orsr_iso_fig3}). Like in the previous case, we have no
appreciable competitors though now the pump frequency is rather close
to the dipole plasmon. Altogether, figures \ref{fig:orsr_iso_fig3} and
\ref{fig:orsr_pla_fig4} demonstrate that one can get robust ORSR signals
in clusters via both isolated dipole states and the tail of the
dipole plasmon.
One may observe in the top-right panel of figure \ref{fig:orsr_pla_fig4}
that the quadrupole oscillation leads to some shift of
the average quadrupole moment. Indeed, the oscillation starts at the value
26.6 $a_0^2$ but then proceeds around a slightly lower average value $\sim 26.4 \; a_0^2$.
Our analysis shows that this effect is caused by a non-isotropic emission of
electrons from the cluster. Indeed, the axial cluster Na$^{+}_{11}$
has the shape of a prolate ellipsoid. The quadrupole oscillation of multipolarity
$\lambda\mu=20$ drives electrons along the symmetry axis of the cluster and thus favors
emission of electrons from the poles of the cluster ellipsoid. This makes
the shape of the electron subsystem more spherical and therefore
effectively decreases the quadrupole moment. The stronger the oscillation, the
larger the moment shift. Hence the effect is most apparent in figure \ref{fig:orsr_pla_fig4}
which exhibits the strongest quadrupole mode. However, even in this case the moment shift
is quite modest ($\sim$2\%) and thus should not noticeably influence the accuracy
of measurements of the energy of the quadrupole mode in ORSR experiments.
\subsection{Coherence and population for the target state}
\begin{figure}[t]
\centerline{
\includegraphics[height=8cm,width=7cm,angle=-90]{fig5.eps}
}
\caption{
\label{fig:fig5}
The quadrupole strength as a function of the pulse shift for ORSR
via the isolated dipole state and the dipole plasmon.
}
\end{figure}
Figures \ref{fig:orsr_iso_fig3} and \ref{fig:orsr_pla_fig4} were obtained
with simultaneously active pump and Stokes pulses, i.e. with the relative
time shift $T_{\rm shift}=0$. The dependence of the quadrupole peak
height on the time shift is shown in figure \ref{fig:fig5}. It is
seen that the maximal quadrupole strength is achieved at coincident
pump and Stokes pulses. A similar picture emerges for stimulated Raman
scattering, when plotting the population of the state instead of its strength
\cite{Scoles,SEP,Vitanov}. However, such a correlation between the
strength and population arises only at low population and does not
imply equality of these two characteristics. This point deserves a closer
inspection.
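The trend that coincident pulses give the strongest signal can be caricatured by the temporal overlap of the two $\sin^2$ envelopes (a deliberately crude picture that ignores the actual Raman dynamics):

```python
import numpy as np

def sin2_envelope(t, T, shift=0.0):
    """sin^2 pulse envelope of duration T, delayed by `shift` (zero outside)."""
    u = t - shift
    return np.where((u >= 0.0) & (u <= T), np.sin(np.pi * u / T) ** 2, 0.0)

def pulse_overlap(shift, T=300.0, n=6001):
    """Temporal overlap integral of pump and Stokes envelopes vs. relative delay."""
    t = np.linspace(-T, 2.0 * T, n)
    dt = t[1] - t[0]
    return float(np.sum(sin2_envelope(t, T) * sin2_envelope(t, T, shift)) * dt)

shifts = np.linspace(-300.0, 300.0, 61)
overlaps = [pulse_overlap(s) for s in shifts]
print(shifts[int(np.argmax(overlaps))])   # 0.0: coincident pulses overlap most
```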
Let us confine the formal considerations to the active subspace. At
any time $t$, the many-body wave function of the three-level $\Lambda$
system can be represented as a superposition
\begin{equation}
\psi(t)=c_0(t)|0\rangle+c_1(t)|1\rangle+c_2(t)|2\rangle
\end{equation}
where $c_0(t)$, $c_1(t)$ and $c_2(t)$ are time-dependent amplitudes of
the initial, intermediate and final bare states, respectively. The
population of the target quadrupole state then reads $|c_2(t)|^2$. But
the strength of the quadrupole transition (considered in figures
\ref{fig:orsr_iso_fig3}-\ref{fig:fig5}) is
$\sim c^*_0(t)c_2(t)\langle2|E2|0\rangle^2$ and so corresponds not to
the population but to the coherence $c^*_0(t)c_2(t)$ of the initial and
target states. The population and coherence have different
behaviors. Both grow at the onset of the population of the level
$|2\rangle$, but then the coherence reaches its maximum at
$|c_0(t)|^2=|c_2(t)|^2=0.5$ and begins to vanish with further increase of
$|c_2(t)|^2$ from 0.5 to 1.
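This opposite behavior of population and coherence is easy to visualize in a two-amplitude caricature (real amplitudes assumed; the intermediate state is dropped, since at large detuning it stays essentially unpopulated):

```python
import numpy as np

# Two-amplitude caricature c_0|0> + c_2|2> with |c_0|^2 + |c_2|^2 = 1.
p2 = np.linspace(0.0, 1.0, 201)          # population |c_2|^2 of the target state
coherence = np.sqrt(p2 * (1.0 - p2))     # |c_0 c_2|, to which the strength is proportional

# The population grows monotonically, while the coherence peaks at half population.
print(p2[np.argmax(coherence)])          # 0.5
```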
The calculation of the population $|c_2(t)|^2$ in TDLDA is somewhat
involved. So, in the present study, we will only consider a simple
estimate. We know that the dominant component of the low-frequency
quadrupole state is the $1eh$ configuration $[200]_e[220]_h$ (see
Table 1). To populate this configuration, the electron from the
occupied state $[220]$ should be transferred to the unoccupied state
$[200]$. This should manifest itself in the time dependent
single-particle wave function $\phi_{[220]}(\vec r,t)$. Namely, it has
to coincide with the initial state $\bar{\phi}_{[220]}(\vec r)$ at $t=0$
and then acquire a large contribution of $\bar{\phi}_{[200]}(\vec r)$ in
the course of time. Hence the population $P^{eh}_2(t)$ of the
$[200]_e[220]_h$ quadrupole component can be estimated as the squared
overlap
\begin{equation}
P^{eh}_2(t)= |\int d\vec r \phi_{[220]}(\vec r,t) \bar{\phi}_{[200]}(\vec r)|^2 .
\end{equation}
This estimate yields for $t>600$ fs (i.e. for the time when, following
figures \ref{fig:orsr_iso_fig3} and \ref{fig:orsr_pla_fig4}, only the
low-lying quadrupole mode survives) a quite
small population of 0.01--0.03. Instead, the overlap with the initial static state
\begin{equation}
P^{in}_2(t)= |\int d\vec r \phi_{[220]}(\vec r,t) \bar{\phi}_{[220]}(\vec r)|^2
\end{equation}
turns out to contribute the complementary 97--99\%. Similar relations
were obtained for the direct two-photon (DTP) excitation of the
quadrupole, described in \cite{Ne_PRA_2006}. During the time
evolution, the single-electron wave functions thus mainly keep their
initial structure and only a small fraction (a few per cent) of the
intended $1eh$ quadrupole configuration $[200]_e[220]_h$ is really
populated. This is mainly the consequence of using the
large detuning. One might be tempted to decrease the detuning
or, alternatively, to enhance the population by increasing the laser
intensity. But then we would run into the non-linear regime of TDLDA where
the cross coupling between numerous states of the system takes place
and hence the three-level picture most probably fails.
This trouble reflects the more complex dynamics of metal clusters
as compared to simple molecules. In any case, even
a population of a few percent should be sufficient to detect the quadrupole
state in experiment. The TDLDA calculations for the DTP in
Na$^+_{11}$ showed measurable signatures of the low-lying quadrupole
state in PES \cite{Ne_PRA_2006}. A similar situation is expected for
the ORSR.
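The overlap estimates above can be illustrated on a grid; the following one-dimensional toy model uses purely hypothetical orbitals and a hypothetical 2\% admixture in place of the actual Kohn-Sham states:

```python
import numpy as np

def squared_overlap(phi_t, phi_ref, dx):
    """|<phi_ref | phi_t>|^2 on a 1D grid, a stand-in for the 3D overlap
    integrals defining P2^eh and P2^in (phi_t may be complex)."""
    return float(abs(np.sum(np.conj(phi_ref) * phi_t) * dx) ** 2)

# Toy orbitals: two orthonormal functions playing the roles of the occupied
# "hole" and unoccupied "particle" states; the evolved wave function carries
# a hypothetical 2% admixture of the particle orbital.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
hole = np.exp(-x**2 / 2.0)
hole /= np.sqrt(np.sum(hole**2) * dx)
particle = x * np.exp(-x**2 / 2.0)
particle /= np.sqrt(np.sum(particle**2) * dx)
phi_t = np.sqrt(0.98) * hole + np.sqrt(0.02) * particle

print(squared_overlap(phi_t, particle, dx))   # ~0.02  (analogue of P2^eh)
print(squared_overlap(phi_t, hole, dx))       # ~0.98  (analogue of P2^in)
```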
\subsection{ORSR stability}
\begin{figure}
\centerline{
\includegraphics[height=14cm,width=9cm,angle=-90]{fig6a_6c.eps}
}
\caption{
\label{fig:fig6}
The strengths calculated as in figure \protect\ref{fig:orsr_iso_fig3} but
a) for larger intensities $I_{s}=1.5 I_{p}=7.44 \cdot 10^{10} W/cm^2$;
b) with small deviation 0.03 eV from the two-photon resonance;
c) without detuning from the intermediate dipole state. See text for
more details.
}
\end{figure}
It is still necessary to check the stability of the ORSR scheme with respect
to variations of the process parameters. Our tests are illustrated in
figure \ref{fig:fig6}. It shows ORSR via the isolated dipole. The
process via the tail of the dipole plasmon (not shown here) produces
the same features.
The panels a) of figure \ref{fig:fig6} show the quadrupole and dipole
strengths at larger pulse intensities. The intensities are increased
by a factor of 3.4 as compared to figure \ref{fig:orsr_iso_fig3}. This
yields a twice stronger quadrupole mode at 0.81 eV. However, we
are penalized by stronger dipole excitations and a coupling to the
quadrupole mode at 2.8 eV (as a part of the quadrupole plasmon), which
can complicate the discrimination of the target state in PES. So, though
the ORSR works even at high intensities, the lower intensities are
better suited for the experimental analysis. On the other hand, it is not
worthwhile to go below the optimal intensities of figure \ref{fig:orsr_iso_fig3}
since this would lead to an unnecessary weakening of the quadrupole strength.
The panels b) in figure \ref{fig:fig6} show that small deviations from
the two-photon resonance condition $\omega_p-\omega_s=\omega_2$ lead
to weakening of the target mode. Nevertheless, because of the finite
width of the mode, the signal does not vanish too rapidly. In fact,
the width of the mode determines the maximal allowable deviation of
$\omega_s$ when searching for the mode in the experiment.
Finally, the panels c) of the figure demonstrate what
happens without detuning from the intermediate dipole state. In this
case, the dipole strength of the intermediate state is considerably
increased while the population of the target quadrupole state
shrinks. Besides that, a large fraction of competing high-frequency
quadrupoles appear. So, a considerable detuning is crucial for the
success of the ORSR scheme.
\subsection{DTP exploration of quadrupole plasmon}
The top panel of figure \ref{fig:fig6}c deserves a deeper analysis because
it demonstrates an important feature of ORSR and similar two-photon
processes in metal clusters. Indeed, if two photons from the pump and/or
Stokes pulses happen to come into resonance with one of the peaks of the
quadrupole plasmon, then this peak is strongly excited. Such a case is
observed in the top panel of figure \ref{fig:fig6}c). Comparison of
this plot with Fig. 2 shows that $2\hbar\omega_p = 2.7$ eV and
$2\hbar\omega_p + 2\hbar\omega_s = 3.7$ eV: this approximately covers two
of the plasmon states and thus results in two prominent peaks at these
energies.
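The bookkeeping behind this observation is simple: one lists the available two-photon frequency combinations and checks which of them fall within a resonance width of a plasmon peak. A toy check (the peak list, frequencies and width below are illustrative assumptions, not the computed TDLDA values):

```python
# peak energies (eV) read off a quadrupole-plasmon spectrum -- illustrative only
plasmon_peaks = [2.7, 3.7]
hbar_omega_p, hbar_omega_s = 1.35, 0.54  # assumed pump and Stokes photon energies (eV)

combos = {
    "2wp": 2 * hbar_omega_p,
    "wp+ws": hbar_omega_p + hbar_omega_s,
    "2ws": 2 * hbar_omega_s,
    "2wp+2ws": 2 * (hbar_omega_p + hbar_omega_s),
}
width = 0.1  # assumed resonance half-width (eV)

# keep only the combinations that land on a plasmon peak
hits = {name: e for name, e in combos.items()
        if any(abs(e - peak) < width for peak in plasmon_peaks)}
```

With these assumed numbers, both $2\hbar\omega_p$ and $2\hbar\omega_p+2\hbar\omega_s$ land on peaks, reproducing the double-resonance situation described in the text.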
Obviously, such undesirable excitation of the quadrupole plasmon can
spoil the discrimination of low-frequency quadrupole modes in PES. At
the same time, this provides new possibilities for
simultaneous investigation of low- and high-frequency quadrupoles.
\begin{figure}[t]
\centerline{
\includegraphics[height=11.0cm,width=9.5cm,angle=-90]{fig7.eps}
}
\caption{
\label{fig:fig7}
Direct two-photon excitation of the quadrupole plasmon state at 2.6 eV
in Na$^+_{11}$.
The cluster is irradiated by the pump laser only, with characteristics
$\hbar \omega_{p}=$1.31 eV, $I_{p}= 2.2 \cdot 10^{10} W/cm^2$ and
$T_{p}=300$ fs.
}
\end{figure}
The example above indicates that the use of two lasers with
different frequencies complicates the analysis of the quadrupole
plasmon because more than one resonance may appear at once.
This would hinder the detection procedure. For the same reason,
it is not optimal to use the ORSR in the ladder configuration
for the investigation of high-frequency states.
It is much better to apply the DTP method proposed in
\cite{Ne_PRA_2006}, where the quadrupole state of interest is
populated by absorption of two photons from one laser, as shown in
figures \ref{fig:lam_sys_fig1} and \ref{fig:be12_fig2}.
An example of such a DTP process is presented in figure \ref{fig:fig7}
where the resonant absorption of two photons from the pump laser
results in a strong excitation of the quadrupole plasmon state at
2.6 eV. Unlike the top panel of figure \ref{fig:fig6}c), only one frequency is
involved and thus only one resonant peak is excited. A properly tuned DTP
with scanning frequencies allows one to investigate the quadrupole plasmon
state by state.
\subsection{Proposed experiment}
The discussion above shows that the optimal strategy is a
simultaneous investigation of the low- and high-frequency quadrupole
states by a combination of ORSR and DTP methods, respectively.
For this aim we need three synchronized lasers: tunable infrared pump
and Stokes lasers and an ultraviolet probe laser.
The detection scheme can be the same as proposed in \cite{Ne_PRA_2006}.
Namely,
the cluster is irradiated by a probe pulse (with an appropriate delay)
leading to a direct emission of an electron out of the excited quadrupole state.
The coupling of the quadrupole oscillation to the single-electron PES
structures will create the satellites in the PES.
Thus, by recording the PES and measuring the relative frequencies of
the satellites, one may determine the frequency of the quadrupole state.
The experiment should follow three steps.
1) As a first step, we should find the optimal parameters
(intensity, duration, ...) for the probe pulse responsible
for the photoionization of the cluster and detection process.
For this aim we should scan the pulse parameters so as to get
the strongest and, at the same time, distinct single-particle PES
from the ground state. Our predictions \cite{Ne_PRA_2006}
for the optimal pulse intensities and durations
($I =2\cdot 10^{10} - 2\cdot 10^{11} W/cm^2$ and $T =
200 - 500$ fs) can be used here as a first guess. These predictions
are relevant for all three pulses (pump, Stokes and probe)
which may have similar characteristics, apart from
the photon frequency.
In principle, the photoionization can also be provided by a coherent
synchrotron source with high-frequency photons. Then the two-photon ionization
proposed in \cite{Ne_PRA_2006} can be replaced by the more effective
one-photon ionization. In this case, the characteristics of the pump and
Stokes pulses can again be taken from \cite{Ne_PRA_2006}, while the parameters
of the probe irradiation need independent adjustment.
2) Now we can proceed with the next step: to explore the high-frequency
states of the quadrupole plasmon. We use here only the pump and probe
lasers. The pump frequency is scanned until its double value comes into
resonance with the states of the quadrupole plasmon. The probe pulse
should have sufficient delay with respect to the pump pulse so as to be
safely decoupled from it and to measure self-sustaining oscillations only.
Since the quadrupole
plasmon has high energy, one-photon ionization by a probe laser in the
visible range suffices for our aims. The maximum satellite
signal provides the quadrupole energy in two ways, first as the double
pump frequency and second (as a countercheck) from the offset of the
satellites. Thus one can explore, state by state, the whole quadrupole
plasmon \cite{comm2}.
3) In a final step, one can refine the measurement of the low-frequency
quadrupole state. First, one should choose the optimal intermediate
dipole frequency by combining a suitable detuning above or below the
dipole state with the requirement that the double pump frequency lies
between the peaks of the quadrupole plasmon, thus avoiding their resonant
excitation. In this way we hopefully minimize the influence of the
quadrupole plasmon. Then one should scan the Stokes laser until the
two-photon resonance $\omega_p-\omega_s=\omega_2$ with a low-frequency
quadrupole is achieved.
The proposed scheme allows one to obtain not only the frequencies of the
quadrupole states but also their lifetimes. To that end, one
should simply increase, step by step, the delay between the pump and
probe pulses. The relaxation of quadrupole oscillation will finally
lead to an extinction of the satellites from which one can read off
the lifetime.
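Reading off the lifetime from the satellite extinction amounts to fitting an exponential decay of the satellite intensity versus pump-probe delay. A hedged sketch with synthetic data (the delay values, intensities and the 400 fs lifetime are invented for illustration):

```python
import numpy as np

def lifetime_from_decay(delays_fs, intensities):
    """Fit I(t) = I0 * exp(-t/tau) via a log-linear least-squares fit.

    Returns tau in the same units as delays_fs.
    """
    slope, _ = np.polyfit(delays_fs, np.log(intensities), 1)
    return -1.0 / slope

# synthetic satellite intensities generated with tau = 400 fs
delays = np.array([100.0, 300.0, 500.0, 700.0, 900.0])
intensities = np.exp(-delays / 400.0)
tau_fit = lifetime_from_decay(delays, intensities)
```

In an actual measurement one would of course fit noisy satellite amplitudes extracted from the PES at each delay, but the extraction of $\tau$ proceeds in the same way.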
\section{Conclusions}
In this paper, we have proposed a combined exploration of electronic
quadrupole low-frequency (infrared region) and high-frequency (region of the
quadrupole plasmon) states in free
metal clusters by means of two-photon processes: direct
two-photon (DTP) excitation and off-resonant stimulated Raman (ORSR)
scattering. The DTP uses one pump laser and retrieves both photons from
it, while the ORSR employs pump and Stokes pulses with different
frequencies. The final proof of the successful excitation of a quadrupole
mode is achieved by a probe pulse with a subsequent measurement of the
photo-electron spectra (PES). The present analysis
is based on realistic simulations within the time-dependent local density
approximation. The main attention is paid to the ORSR which, to our knowledge,
has never been considered for atomic clusters.
The calculations show that the high-frequency quadrupole states in the
regime of the quadrupole plasmon are preferably explored by DTP. The
more flexible ORSR is a powerful tool to investigate the isolated
low-frequency (infrared) quadrupoles. ORSR allows one to use various
intermediate dipole states, from the isolated infrared dipoles to
the dipole plasmon. In all the cases, an appreciable
detuning from true dipole states is crucial to avoid unwanted cross
talk. A proper combination of both methods (DTP, ORSR) should allow one to explore
both spectra and lifetimes of the quadrupole states. We have worked out
optimal parameters of DTP and ORSR schemes and checked the sensitivity
of the schemes to parameter variations. The proposed two-photon schemes
are quite general and, in principle, can be used for a variety
of clusters including supported and embedded ones.
The low- and high-frequency quadrupole states can deliver important
information on electron-hole excitations inside the valence shell
($\Delta N =0$) and through two shells $(\Delta N =2)$. By combining
the two-photon and photoelectron data, one can get the single-electron
energies above the Fermi level and thus greatly enlarge our knowledge
of the mean-field spectra of valence electrons. These spectra give
access to other cluster features (mean field, deformation) and
provide a critical test for the theory of cluster structure.
\begin{acknowledgments}
V.O.N. thanks Profs. K. Bergmann and B.W. Shore for valuable discussions.
The work was partly supported by the DFG grant GZ:436 RUS 17/104/05, the grant of
University of Paul Sabatier (Toulouse, France) and
Heisenberg-Landau (Germany-BLTP JINR) grants for the years 2005 and 2006.
\end{acknowledgments}
\section*{References}
\section{Introduction}
Self-driving has recently benefited from deep learning breakthroughs, which have enhanced the performance of autonomy systems significantly.
The performance achieved by these systems is tightly coupled to the quality, size and richness of training datasets. Furthermore, as self-driving is a safety critical application it is very important to have a diverse set of testing scenarios that are representative of driving.
Collecting data is a fairly easy process -- a single vehicle can generate several TB of data a day.
However, it is not feasible to label everything that has been collected. For instance, it could cost around \$150K\footnote{scale.com} to simply annotate the bounding-box of objects in one hour of camera data, assuming an average density of 50 objects per image.
Hence, it is of key importance to have a mechanism to identify "what to label" such that we can get the most relevant labeled dataset to achieve the highest autonomy performance given a labeling budget.
One of the most relevant areas of research in this spirit is active learning, which has been used as a mechanism to identify interesting examples to label. However, active learning approaches are tied to a model solving a given task\footnote{We consider an ensemble to be a model belonging to a specific model class}, while in our setting many tasks need to be performed -- a self-driving car needs to perceive the world, predict the future trajectory of all the actors in the scene and perform safe motion planning.
Furthermore, the assumption in active learning approaches is that the model is fixed and we are interested in improving its performance via additional labels.
However, in current modern self-driving approaches the autonomy stack is constantly changing, and as a consequence, examples that might be informative to label a few weeks ago might not be interesting anymore under the evolution of the perception, and motion forecasting modules.
As a consequence, existing self-driving benchmarks have not been created by using active learning, but instead by exploiting random sampling, handcrafted sets of heuristics, or manual selection.
In this paper we look at this task with a new lens and define specific criteria to quantify interestingness in order to identify a dataset of challenging and diverse scenarios for self-driving tasks. These criteria are not bound to a specific autonomy architecture or model and do not require multiple iterations of training and evaluation.
More specifically, we propose a set of complexity measures to characterize self-driving scenarios.
These measures include various factors such as the map, static and dynamic objects surrounding the ego-vehicle, and the executed trajectory of the data collection platform as well as the possible interactions it performs with other road-users.
We then utilize these complexity measures inside a novel data selection approach to curate challenging and diverse dataset of traffic scenes.
Through extensive experiments using various models in the literature, we show that such a curation approach leads to better generalization of a wide variety of models in different autonomy tasks. More specifically, we show an average 6.5\% improvement in performance of various perception models and an average 6\% improvement in motion forecasting models from the recent state-of-the-art compared to other approaches such as active learning.
\section{Conclusion}
In this paper we presented a dataset curation pipeline for self-driving to select unlabeled data for labeling. We described a set of intuitive complexity measures to characterize the traffic scene of the collected data with respect to various aspects including the topology of the map, the diversity and complexity of surrounding actors and their behaviors, and the executed maneuver of the SDV. We also presented a method to select interesting and challenging sections of the collected logs using the described measures, together with adding diverse examples to the final selected set of scenarios. Through extensive experiments using various models for the main tasks of perception and prediction, we demonstrated the effectiveness of our curation strategy compared to active learning methods. We believe future extensions of our method will be impactful in areas of robotics where complex perception, prediction and ego-action play a significant role.
\section{Related Work}
\subsection{Data Selection}
Finding hard examples has been used to train models more effectively.
In \cite{felzenszwalb2009object} an active training set is maintained and iteratively updated during training by removing easy examples and adding harder ones based on, e.g., classification margins.
In \cite{shrivastava2016training} hard examples are mined online at ROI level for object detection task. However such methods deal with data that is already labeled.
Data selection has been studied in the \textit{active learning} literature where batches of unlabeled data are selected for labeling by iteratively training a model on the labeled set and finding informative unlabeled data.
Uncertainty-based methods select difficult examples by considering entropy in predicted distributions \cite{joshi2009multi, li2013adaptive}, uncertainty across a model ensemble \cite{beluch2018power, gal2017deep} or the estimated loss of each example \cite{yoo2019learning}.
To avoid training a full model multiple times, \cite{coleman2019selection} proposed to use a simpler model as a proxy to perform efficient data selection.
The proxy model can be either a simpler architecture or the original model trained with fewer iterations. Diversity-based approaches \cite{nguyen2004active, Sener2018ActiveLF} aim to select a subset of the unlabeled pool that best represents the entire dataset. \cite{haussmann2020scalable} describes a scalable production system for active learning in object detection where various scoring and sampling strategies are compared.
\textit{Semi-supervised} active learning frameworks have also been proposed recently. In \cite{simeoni2019rethinking} labeled data as well as unlabeled data are used during model training in each active learning cycle, for example by treating the most confident prediction for unlabeled data as a pseudo-label.
Similarly, in \cite{gao2019consistency} the model is trained with a usual loss on labeled data (e.g. cross-entropy) and a consistency loss on unlabeled data which penalizes highly inconsistent predictions on slightly distorted samples of an example.
Active learning and semi-supervised approaches above assume that the model is fixed and we are interested in improving its performance via additional labels.
However, our work does not make this assumption since examples that might be informative to label a few weeks ago might not be interesting anymore under the evolution of the different self-driving stack modules.
\subsection{Self-driving Datasets}
Many self-driving datasets have been released publicly in the past few years.
Kitti \cite{geiger2012we}, most notably, was the first benchmark with a dataset that included multiple sensors of LiDAR and camera supporting the tasks of stereo, optical flow, visual odometry, and 3D object detection.
The Cityscapes dataset \cite{cordts2016cityscapes} includes annotated images from various cities in multiple seasons, suitable for semantic and instance segmentation.
BDD100K dataset \cite{yu2020bdd100k} offers a diverse set of images that has been crowd-sourced with annotations such as 2D labels, lane markers and weather.
LiDAR sensor data particularly has rich 3D information making such datasets suitable for tasks of 3D detection and possibly tracking and motion forecasting \cite{wang2019apolloscape, geyer2020a2d2}.
Waymo open dataset \cite{sun2019scalability} includes 2D/3D bounding boxes with camera and LiDAR data collected over large geographical and time of day extent. Argoverse \cite{chang2019argoverse} also provides map rasters and graph with lane and intersection annotations.
In order to create an interesting and challenging benchmark for the trajectory forecasting task, they mine vehicles that are at intersections, performing a turn action or in dense traffic.
nuScenes \cite{caesar2020nuscenes} is the first dataset that includes radar data and supports tasks of 3D detection and tracking as well as behavior prediction.
The scene selection is achieved manually to include dense traffic, rare classes, dangerous traffic situations, and difficult AV maneuvers.
Other datasets include only sensor data without any human annotations of objects \cite{santana2016learning, agarwal2020ford}.
In contrast to the above mentioned works, we propose a method that combines a systematic semantic analysis of the raw data from all aspects relevant to driving: the complexity of the map, other actor motion and configurations, and last but not least the SDV pose and motion. We utilize these complexity measures in a framework that encourages not only a challenging but also a diverse selection to arrive at our curated dataset.
\section{Experiments}
\paragraph{Experimental Setup:}
We use a very large labeled dataset that consists of 140 hours of manual driving data (20k+ snippets of 25s).
The dataset is split into test and validation sets as well as a base training set $\Psi$, ensuring no geographical overlap and similar label distributions.
Using our proposed approach (CR), random sampling (RN), and an active learning baseline (AL), we select two separate training sets of size 1,000 and 3,000 snippets from $\Psi$.
Similarly, we select two test sets out of the original test set: \textit{Easy}, which is selected randomly, and \textit{Hard}, which is selected using our proposed approach.
We train various perception and motion-forecasting models using the curated training sets, and evaluate them on the two test sets.
Specifically for perception-only task we use PointPillars\cite{lang2019pointpillars}, and PointRCNN\cite{shi2019pointrcnn}.
Additionally, we use joint perception-prediction models of ILVM\cite{casas2020implicit}, MultiPath\cite{chai2019multipath}, and ESP\cite{rhinehart2019precog} in which both tasks are trained end-to-end.
\paragraph{Metrics:} For the perception task, we use mean average precision (mAP) to compare models. Note that this metric is computed for each class of \textit{Vehicles}, \textit{Bicycles}, and \textit{Pedestrians}.
For motion forecasting we use the following metrics: mean average displacement error (meanADE), minimum average displacement error (minADE), and collision rate among actors.
The motion prediction metrics are computed over 5s horizon, similar to how they are trained.
Following \cite{casas2020implicit}, we evaluate prediction on true positive object detections at a common recall point across models. That is, we find the detection threshold for each model such that all models are evaluated at the same recall point of the detection Precision-Recall curve.
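Finding such a common recall point can be sketched as a scan over each model's detection confidences (the scores, match flags and array layout below are hypothetical, not the paper's evaluation code):

```python
import numpy as np

def threshold_at_recall(scores, is_true_positive, target_recall):
    """Return the detection-score threshold at which recall first reaches target.

    scores           : confidence of every detection from one model
    is_true_positive : bool array, whether the detection matches a ground-truth box
    target_recall    : desired recall; simplification -- total positives are taken
                       as the number of true detections (missed objects ignored),
                       and the target is assumed attainable
    """
    order = np.argsort(-scores)                   # descending confidence
    tp_cum = np.cumsum(is_true_positive[order])
    recall = tp_cum / is_true_positive.sum()      # non-decreasing along the sweep
    idx = np.searchsorted(recall, target_recall)  # first index reaching the target
    return scores[order][idx]
```

Each model is then evaluated using only detections above its own threshold, so the prediction metrics are compared at matched operating points.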
\paragraph{Active Learning Baseline (AL):}
We compare our dataset curation method against an uncertainty-based active learning approach. More specifically, we select snippets with high entropy predictions generated by a trained prediction model.
We use a prediction model with the same backbone as ILVM \cite{casas2020implicit}.
However, in order to compute entropy easily, the header is replaced with a simplified model that outputs a distribution $p(y)$ with an independent 2D Gaussian for each actor $i$ and time-step $t$,
\begin{equation}
p(y) = \prod_{i}^{N} \prod_{t}^{T} p(y^{i}_{t}; \mu^{i}_{t}, \Sigma^{i}_{t})~~.
\end{equation}
\noindent Afterwards, for each frame, we can compute the entropy of the output distribution as,
\begin{equation}
H(y) = \sum_{i}^{N} \sum_{t}^{T} \frac{1}{2} \log \lvert 2 \pi e \Sigma^{i}_{t} \rvert~~,
\end{equation}
and sum the entropies across all frames in a snippet to obtain a final uncertainty score.
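The per-snippet uncertainty score above is a closed-form sum of Gaussian differential entropies. A minimal sketch, assuming the predicted $2\times2$ covariances are stacked in a single array (the layout is an assumption for illustration):

```python
import numpy as np

def snippet_entropy(covariances):
    """Sum of Gaussian differential entropies, 0.5 * log|2*pi*e*Sigma|.

    covariances : array of shape (..., 2, 2), one covariance matrix per
                  (frame, actor, time-step); leading dimensions are summed over.
    """
    _, logdet = np.linalg.slogdet(covariances)   # log|Sigma| per matrix
    d = covariances.shape[-1]
    # |2*pi*e*Sigma| = (2*pi*e)^d * |Sigma| for a d-dimensional Gaussian
    return np.sum(0.5 * (logdet + d * np.log(2.0 * np.pi * np.e)))

# toy score: 3 actors x 5 time-steps with isotropic unit covariances
covs = np.broadcast_to(np.eye(2), (3, 5, 2, 2))
score = snippet_entropy(covs)
```

Snippets are then ranked by this score and the highest-entropy ones are sent for labeling.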
Given the size of the base set $\Psi$, it is infeasible to iteratively re-train and re-score all examples more than once.
Therefore, we train the prediction model initially on a random subset of $250$ snippets and then select the remaining snippets with the highest entropies to obtain datasets of size 1k and 3k snippets.
Specific model details are available in the supplementary materials.
\subsection{Results}
Table \ref{tbl:sum} shows the summary of all results, where the metrics are averaged across models and object classes (macro-averaging).
It is clear that our curation method outperforms the baselines on average in all tasks and training-set sizes.
Specifically, in object detection we improve by 7.5\% and 6.3\% over active learning in the \textit{Easy} set in the 1k and 3k settings respectively, and by 7.8\% and 6.7\% in the \textit{Hard} set.
A similar trend is observed in the motion forecasting task.
Compared to active learning, our approach gains 10.3\% and 8\% improvement in prediction metrics for 1k and 3k settings in the \textit{Easy} test set, and 6.3\% and 4\% in the \textit{Hard} set.
The results indicate that using the data selected by our proposed method, we can achieve significantly better generalizations in various models and tasks in self-driving.
In the following sections we present the result of each model for each task.
\input{sections/tables/summary.tex}
\paragraph{Detection:}
Table \ref{tbl:detection} shows the detection metrics for detection-only models as well as joint perception-prediction models.
The results indicate that our approach consistently improves Bicycle detections significantly across all models and training set sizes.
Similarly for Pedestrians, our approach leads to better mAP in the majority of the models.
Interestingly for Vehicle detection both our approach and random sampling show strong performance in the \textit{Hard} test set across different models.
\input{sections/tables/det}
\paragraph{Motion Prediction:}
Table \ref{tbl:prediction} shows the motion forecasting results for various models and curation methods.
Similar to the detection task, we can see that our approach performs significantly better across the majority of models for all classes.
For the ILVM model, however, the active learning baseline outperforms our approach on some of the metrics.
This can be expected as our active learning baseline uses the ILVM backbone for its prediction model.
This example validates our assumption that even though active learning can be used to select informative examples to improve the performance of a model, the selected training set is not necessarily useful for training other models in the same or different tasks.
Another interesting metric for motion prediction is collision between the predicted trajectories of actors.
Both ILVM and MultiPath prediction models generate trajectories at scene level, modeling the joint probability distribution among actor trajectories. As shown in the last column of Table \ref{tbl:prediction}, using our curated data, the models are able to generate more consistent predictions for actors.
\input{sections/tables/pred}
\section{Data Selection via Complexity Measures}
In this section we propose a novel approach to decide which frames are the most interesting to label in order to create a rich and diverse dataset for training and evaluation.
Towards this goal, we define a number of complexity measures driven by the static constructs in a driving scenario (e.g.,
lane topology, presence of an intersection), the traffic participants (e.g., other vehicles, pedestrians) and the maneuvers that the ego-car performs.
Note that our data collection platform is a self-driving vehicle (SDV) and thus we will use these two terms interchangeably.
We assume the existence of a prebuilt HD map for the areas where we will drive, a localization system that can localize the vehicle with respect to this map, as well as a
perception software stack that can be run on the collected raw data, whose by-products are
object detections for each frame and their associated tracks that link those detections across time.
In the following, we first introduce our method to select challenging and diverse scenarios given a set of complexity measures.
We then discuss in detail the complexity measures we exploit in our work.
\subsection{Data Selection}
Our selection process operates at the snippet level (\ie a sequence of frames) instead of frame level.
This allows observing phenomena that have a temporal component such as a lane-change or an interaction between actors, or the change in the speed of the SDV.
Moreover, labeling a sequence of consecutive sensor data can be done more efficiently compared to scattered individual frames.
More formally, we define a snippet
$s=\left\{f_i|i=0,...,T-1\right\}$ as a set of $T$ consecutive frames of data. In our experiments, each snippet represents 25 seconds of data with 250 frames.
Throughout this paper, we will interchangeably refer to a snippet as a scenario.
The purpose of the data selection process is then to pick $K$ non-overlapping snippets from the pool of unlabeled data.
We formulate the snippet selection as an optimization problem where first interesting scenarios are selected, and then the dataset is enhanced to be diverse and complete.
\paragraph{Selecting Challenging Scenarios:}
In order to identify a set of challenging scenarios, we first rank the snippets using a scoring function $g$.
In particular, we utilize a linear combination of the complexity measures as our "interestingness" score for a given snippet $s$
$$g(s;w)=w^TE(s)$$
where $E(s)$ is the vector of complexity measures.
It is important to mention that what is considered to be challenging/interesting depends on the target task.
For example, imagine a scenario where the SDV is stopped at a busy intersection with a red light.
As there are many interactions between actors, this is an interesting scenario for the tasks of perception and motion forecasting, however, for motion planning, this scenario may not be very useful as the SDV is not moving.
On the other hand, there can be many scenarios where the SDV needs to interact with one or two actors of interest in an empty intersection, making this scenario interesting for planning and motion forecasting tasks, but not for detection.
Therefore, we use a different weight vector $w$ when ranking the snippets for each specific task.
We assume there are $n$ tasks and the goal is to select $c_i, i=1,...,n$ snippets for each task $i$. Therefore we will have the following optimization:
\begin{align}
\label{eq:opt}
S^*_1, ..., S^*_n = \argmax_{\substack{S_1, ..., S_n \subset \Psi \\ |S_i| = c_i, \forall i \\ s_i \cap s_j = \emptyset, \forall s_i,s_j \in \cup_i S_i}} \sum_i^n \sum_{s \in S_i} g(s;w_i)
\end{align}
where $\Psi$ is the set of all snippets, $w_i$ corresponds to the complexity weight vector specific for task $i$.
The constraints simply ensure that there are no overlapping snippets within or across the selected sets.
Note that for our experiments, we consider two tasks: perception and motion forecasting (i.e. prediction).
However, we still consider motion planning when shaping the distribution of the data, since as the downstream task it has an indirect effect on both perception and motion forecasting performance.
The weight vector for each task is tuned empirically.
We solve this optimization problem by first ranking all the snippets and then iteratively picking the next most "interesting" snippet from the queue and removing any overlapping ones from the candidate set.
We repeat this process until the labeling budget is reached.
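A greedy solution of the optimization above can be sketched as follows (the array layout, the sequential per-task loop in place of a round-robin order, and the overlap test via frame-id sets are all assumptions for illustration):

```python
import numpy as np

def select_challenging(features, frame_sets, weights, budgets):
    """Greedily pick the top-scoring, non-overlapping snippets per task.

    features   : (num_snippets, num_measures) complexity vectors E(s)
    frame_sets : list of frozensets of frame ids, one per snippet
    weights    : list of weight vectors w_i, one per task
    budgets    : list of snippet counts c_i, one per task
    """
    selected = [[] for _ in weights]
    used_frames = set()
    for task, (w, c) in enumerate(zip(weights, budgets)):
        scores = features @ w                  # g(s; w) = w^T E(s)
        for idx in np.argsort(-scores):        # most "interesting" first
            if len(selected[task]) == c:
                break
            if used_frames.isdisjoint(frame_sets[idx]):
                selected[task].append(int(idx))
                used_frames |= frame_sets[idx]
    return selected
```

The shared `used_frames` set enforces the non-overlap constraint both within and across the per-task selections.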
\paragraph{Selecting Diverse Scenarios:}
Limiting the data selection to only challenging scenarios will not necessarily lead to a diverse dataset, or to a complete set of scenarios that we might encounter in the real world. The goal of this additional selection step is to identify a set of snippets that ensures completeness and diversity.
We quantify the dissimilarity between snippets as a function of their difference in the complexity measures, where in order to obtain geo-diversity we expand the complexity vectors with the latitude and longitude coordinates of the frames.
We then iteratively look for the snippet that is farthest from the currently selected set and add it to the set to be labeled. In particular, at each iteration we select
\begin{equation}
\label{eq:diversity}
s^* = \argmax_{s\in\{\Psi-\mathcal{S}\}} \bigl( \min_{s^\prime \in \mathcal{S}} d(s, s^\prime)\bigr),
\end{equation}
where $\Psi$ is the set of all unlabeled snippets, $\mathcal{S}$ is the set of already selected snippets, and $d$ is a dissimilarity function.
In order to capture diversity at a granular level, we define the dissimilarity between snippets as the maximum distance from any frame of one snippet to the closest frame of the other
\begin{equation}
d(s_i, s_j) = \max_{k} \min_{l} ||E(s_i,k)-E(s_j,l)||_2
\end{equation}
where $k$, $l$ index over the frames of the $s_i$ and $s_j$ snippets respectively.
The full process of data selection is depicted in Algorithm \ref{alg:selection}.
\begin{algorithm}[t!]\small
\caption{Data Selection}
\label{alg:selection}
\begin{algorithmic}[1]
\Procedure{Select}{$\Psi$, $E$, weight vectors $w_i$ for each task, desired number of snippets $k_i$ for each task, desired number of diverse snippets $k_{div}$}
\State $S_1, ..., S_k$ $\gets \emptyset$
\Statex \Comment{Select challenging scenarios}
\While{$|S_i| < k_i, \exists i$}
\For{ each task $j$}
\If{$|S_j| < k_j$}
\State $s^* \gets \argmax_{s \in \Psi-\cup_i S_i} g(s; w_j)$
\State $S_j \gets S_j \cup \{s^*\}$
\EndIf
\EndFor
\EndWhile
\Statex \Comment{Select diverse scenarios}
\State $S \gets S_1 \cup \dots \cup S_k$
\State $S_{div} \gets \emptyset$
\For{$i = 1, \dots, k_{div}$}
\State $s^* \gets \argmax_{s \in \Psi-(S\cup S_{div})} \min_{s^\prime \in S\cup S_{div}} d(s, s^{\prime})$
\State $S_{div} \gets S_{div} \cup \{s^*\}$
\EndFor
\Return $S_1, \dots, S_k, S_{div}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Complexity Measures}
In order to characterize a traffic scene, we consider the map surrounding the location of the SDV and the detected objects within a region of interest (ROI) around it.
We also consider elements from the scenario that are directly related to the SDV maneuver.
\begin{figure*}
\centering
\includegraphics[width=0.90\textwidth]{imgs/map_comp}
\caption[]
{Examples of the infrastructure-related complexity measures. In the top row, the path complexity measure of the lanes increases, from zero curvature to large, varying curvature. In the bottom row, the map-crossing measure indicates the complexity of the lane-graph topology, as can be seen in the scene visualizations.}
\label{fig:map_comp}
\end{figure*}
\subsubsection{Infrastructure-related Complexity Measures}
We utilize a diverse set of complexity measures related to the static part of the environment.
This includes information about how vehicles might drive in the scene, the presence of intersections, as well as traffic-control elements and other road elements such as bike lanes and crosswalks.
\paragraph{Geometry and topology of driving paths:}
We define a \textit{driving path} as a plane curve in $\mathbb{R}^2$, representing the center-line of a map lane.
A driving path of constant curvature is a straight line or a circle, and a vehicle can follow such a path simply with a constant steering-wheel angle. On the other hand, paths with variable curvature require more complex steering, as shown in Figure \ref{fig:map_comp}.
We use this intuition to define the complexity of a path.
Specifically, we represent a path, $C(s)$, as a finite set of $K$ way points sampled along its arc-length,
$s$: $\mathcal{C} = \left\{C(s_i) \,|\, 0 \leq s_i \leq 1, i=0,...,K\right\}$.
Using a finite-difference method, the curvature (and its rate of change) can be computed for each way point, resulting in the
set $\mathcal{K}_\mathcal{C}=\left\{\kappa(s_i) \,|\, 0 \leq s_i \leq 1, i=0,...,K\right\}$.
Finally, we propose to use the mean of curvature $\mu(\mathcal{K}_\mathcal{C})$, and the mean of its derivative, $\mu(\dot{\mathcal{K}}_\mathcal{C})$,
as the measure of complexity of the curve:
\begin{equation}
\label{eq:curve_complexity}
E^{\text{curve}} = \mu(\mathcal{K}_\mathcal{C}) + \mu(\dot{\mathcal{K}}_\mathcal{C}).
\end{equation}
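The finite-difference computation of $E^{\text{curve}}$ can be sketched as below. One assumption to flag: the sketch averages curvature magnitudes so that left and right turns do not cancel, whereas the formula above uses the plain means $\mu(\mathcal{K}_\mathcal{C})$ and $\mu(\dot{\mathcal{K}}_\mathcal{C})$.

```python
import math

def curve_complexity(pts):
    """E_curve ~ mean |kappa| + mean |delta kappa|, with curvature from
    central finite differences (the formula is parameterization-invariant)."""
    kappas = []
    for i in range(1, len(pts) - 1):
        (x0, y0), (x1, y1), (x2, y2) = pts[i - 1], pts[i], pts[i + 1]
        dx, dy = (x2 - x0) / 2.0, (y2 - y0) / 2.0        # first derivatives
        ddx, ddy = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0    # second derivatives
        denom = (dx * dx + dy * dy) ** 1.5
        kappas.append((dx * ddy - dy * ddx) / denom)     # signed curvature
    dk = [abs(b - a) for a, b in zip(kappas, kappas[1:])]
    return (sum(map(abs, kappas)) / len(kappas)
            + sum(dk) / max(len(dk), 1))
```

A straight line yields zero complexity, while a circular arc of radius $R$ yields approximately $1/R$, matching the intuition in the text.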
The driving paths in the map can cross each other, creating scenes where vehicles can have potentially conflicting goals, and which are hence interesting. We measure such complexity by $E^{\text{crossing}}=\sum_c v_c$, with $v_c$ being the number of times a driving path $c$ is crossed by other lanes. Figure \ref{fig:map_comp} shows various examples of lanes and their complexity measures.
\paragraph{Intersections, traffic-lights, and signage:}
Traffic scenes are generally more interesting at intersections.
Hence, we consider whether the SDV is at an intersection or not, as well as the complexity of the topology of the intersection, by counting the number of roads reaching the intersection and the number of lanes on each road.
In addition, we count the number of traffic-lights and other signage such as stop-signs or yield-signs.
\paragraph{Bike-lanes and crosswalks:}
The existence of bike-lanes and how they interact with vehicle lanes can increase the complexity of a traffic scene.
To capture this we use the same vehicle-lane geometry and topology measures mentioned above for bike-lanes.
Similarly, we consider crosswalks and how they expand over the vehicle lanes.
\paragraph{Drivable-area height variation:}
Driving on hilly areas can be more challenging compared to flat roads.
We define an additional complexity as the height variance in the drivable surface, \ie $E^{\text{height}} = \sigma^2(\mathcal{Z})$ where $\mathcal{Z}$ is a set representing the map height of points uniformly sampled on lanes.
\subsubsection{Traffic Participants}
Other important aspects of a scenario are how crowded the scene is as well as the diversity of its actors.
We thus define the following complexity measures.
\paragraph{Crowdedness:}
We measure the number of objects in an ROI around the SDV to capture how crowded a traffic scene is, using the following:
\begin{equation}
\label{eq:crowd}
E^{\text{crowd}} = \frac{1}{T}\sum_{t=1}^T|\mathcal{D}_{t}|,
\end{equation}
where $\mathcal{D}_{t}$ is the set of detections in frame $f_t$.
We measure this separately for static and dynamic actors to obtain more granular information.
Note that this does not measure how the objects can potentially interact with the SDV, which is covered by other complexity measures.
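A minimal sketch of the crowdedness measure, split into static and dynamic actors; representing a frame by its list of actor speeds is an assumption for the example, and the 0.5 m/s static threshold follows the convention used later in this supplement.

```python
def crowdedness(frames, static_thresh=0.5):
    """E_crowd = (1/T) * sum_t |D_t|, computed separately for static and
    dynamic actors. Each frame is a list of actor speeds in m/s; actors
    slower than `static_thresh` are treated as static."""
    T = len(frames)
    static = sum(sum(1 for v in f if v < static_thresh) for f in frames) / T
    dynamic = sum(sum(1 for v in f if v >= static_thresh) for f in frames) / T
    return static, dynamic
```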
\paragraph{Class and spatial diversity:}
Many interesting interactions can happen when there are multiple types of actors in a traffic scene.
Some classes of actors (e.g., bicyclists) are orders of magnitude rarer than others (e.g., vehicles).
We thus measure the diversity of actors by:
\begin{equation}
\label{eq:class}
E^{\text{class}} = \frac{1}{T}\sum_{t=1}^{T}\frac{1}{|\mathcal{D}_{t}|}\prod_c\left(1+|{}_c\mathcal{D}_{t}|\right),
\end{equation}
where ${}_c\mathcal{D}_{t}$ is the set of detections that belong to class $c$ in frame $f_t$. Figure \ref{fig:actor_sdv_comp} shows various traffic scenes and their actor class diversity measure.
In addition, we measure the variance of the distance of those actors to the SDV.
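Eq.~(\ref{eq:class}) can be sketched as follows, with each frame given as a list of detection class labels; the label names are placeholders for this example.

```python
def class_diversity(frames):
    """E_class = (1/T) * sum_t (1/|D_t|) * prod_c (1 + |D_t^c|).
    Each frame is a list of class labels; classes absent from a frame
    contribute a factor of 1 and leave the product unchanged."""
    classes = sorted({c for f in frames for c in f})
    total = 0.0
    for f in frames:
        prod = 1
        for c in classes:
            prod *= 1 + f.count(c)
        total += prod / len(f)
    return total / len(frames)
```

The product form rewards frames mixing several actor classes: a frame with two vehicles and one pedestrian scores $(1+2)(1+1)/3 = 2$, the same as a single-vehicle frame, but adding a bicyclist to the first frame would double its contribution.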
\paragraph{Path and speed diversity:}
Up to now we have introduced measures related to the existence of certain actors.
However, it is important to take into account how those actors move.
Similar to the complexity of the driving-paths in (\ref{eq:curve_complexity}), we use the curvature of the path that each actor took, along with its first derivative to measure the complexity of the actor's behavior.
This measure can capture many interesting events.
For example, a vehicle that is making a lane-change follows a path with high curvature change. Similarly, pedestrians that change the direction of their motion will lead to high path complexity.
Such behaviors of actors will serve as rich examples for training prediction models.
The variation in the speed of actors can also add to the complexity of the traffic scene, indicating an interesting interaction of an actor with another one or with a traffic-control element, or simply showing an intention to change path. We define a measure reflecting the speed variance of each actor as well as the variance of the mean speeds of all actors in a given scene:
\begin{equation}
\label{eq:speed}
E^{\text{speed}} = \sigma^2(\Omega)+\sum_{i=1}^{|\Omega|}\sigma^2(\omega_i),
\end{equation}
where $\omega_i$ is the set of discrete speeds computed for the $i^{\text{th}}$ actor and $\Omega$ is the set of average speeds of all the actors.
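A sketch of Eq.~(\ref{eq:speed}) using population variances; the per-actor speed samples below are assumed inputs for illustration.

```python
def pvar(xs):
    """Population variance of a sequence."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def speed_complexity(actor_speeds):
    """E_speed = var of per-actor mean speeds + sum of per-actor speed
    variances. `actor_speeds` holds one sequence of discrete speed
    samples (omega_i) per actor."""
    means = [sum(w) / len(w) for w in actor_speeds]  # the set Omega
    return pvar(means) + sum(pvar(w) for w in actor_speeds)
```

The first term is large when actors move at very different average speeds; the second is large when individual actors accelerate or brake within the snippet.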
\subsubsection{SDV Maneuvers}
Finally, it is important to identify scenarios where the SDV performs complex maneuvers.
We thus introduce the following measures to capture how interesting a scenario is as it relates to the SDV.
\paragraph{Path and speed:}
We use the same measures of (\ref{eq:curve_complexity}, \ref{eq:speed}) introduced for actors to obtain the complexity of the ego vehicle path and motion.
Figure \ref{fig:actor_sdv_comp} shows various SDV trajectories and their speed and path complexities.
\paragraph{Route:}
Some of the high-level maneuvers of the ego vehicle are naturally less frequent than others, e.g., lane-changes and turns \textit{vs.}\ lane-following.
Therefore, we count such maneuvers to measure the complexity of the SDV route.
We also consider whether the SDV interacts with traffic-lights or stop-signs while following the route.
\paragraph{Interactions with other actors:} We obtain the number of static and dynamic objects that are within a given distance of the SDV path as a measure of how other objects affect SDV behavior.
However, distance is not always a sufficient measure.
For example, vehicles that are waiting to make an unprotected left turn, or passing through an intersection controlled by a stop-sign, may not be close to the ego path, but their behaviors affect the SDV's decisions, and vice versa.
To capture this, we measure the number of vehicles that pass through a lane that is in conflict with the SDV route.
We also count the vehicles that can reach such lanes within a short time horizon, to capture actors potentially interacting with the SDV.
Figure \ref{fig:actor_sdv_comp} shows an example of SDV interacting with another vehicle at an intersection.
\paragraph{Nudging maneuvers:}
In some scenarios the SDV needs to partially move out of its lane and come back in order to pass an object or vehicle that is partially blocking the path. We specifically capture these maneuvers as they make the scenario very complex.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{imgs/sdv_actor_comp}
\caption[]
{Examples of complexity measures related to the traffic participants and the SDV maneuver. The top row compares the static-crowd measure, increasing from left to right, with the dynamic-crowd complexity measure. In the middle row, the class diversity measure increases from right to left, increasing the complexity of the traffic scene; it also indicates how the complexity of the SDV path corresponds to the complexity of the scene. Lastly, in the bottom row, the paths and velocities of the actors become more complex from right to left, leading to increasingly more challenging scenarios for tasks such as motion forecasting.}
\label{fig:actor_sdv_comp}
\end{figure*}
\section{Comparison of Selected Datasets}
Figure \ref{fig:stats} shows the mean number of static\footnote{We consider actors with less than $0.5\,\mathrm{m/s}$ speed as static.} and dynamic actors available in the datasets selected by the baseline methods and our proposed approach.
This indicates that the random sampling (RN) baseline, as expected, selects scenes that have many more static actors, as the majority of the actors in the snippets are likely to be stopped.
On the other hand, both active learning (AL) and our curation method (CR) are able to pick snippets with significantly more dynamic actors.
Note that AL selects scenes with marginally more dynamic vehicles than CR in both the 1k and 3k settings.
For dynamic pedestrians, the sets selected by AL and CR have an equal number of labels in the 3k setting, with CR having marginally more pedestrians in the 1k setting. Finally, CR selects snippets with a significantly larger number of bicyclists.
Additionally, even though AL and CR have close label statistics for dynamic vehicles and pedestrians, our approach yields significantly better perception and prediction results, as discussed in the main paper.
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{imgs/stats-st.pdf}
\includegraphics[width=0.49\textwidth]{imgs/stats-dy.pdf}
\caption[]
{Label statistics. The plots show the mean number of actors per class over all the frames in the datasets selected by random sampling (RN), the active learning baseline (AL), and our proposed curation method (CR).}
\label{fig:stats}
\end{figure}
\section{Qualitative Examples of Complexity Measures}
Figures \ref{fig:sdv}-\ref{fig:actors} present qualitative examples of our proposed complexity measures. Each row provides examples for a specific complexity measure, decreasing from left to right.
Specifically, in Figure \ref{fig:sdv}, examples of the SDV path are shown. In the top-left scene, the SDV is performing a very sharp turn and hence has a high SDV-path complexity measure.
Similarly, in the top-middle scene, the SDV performs a nudging maneuver, which results in a path with some curvature and a varying curvature rate of change, compared to the top-right scene, which has a fixed curvature.
In the bottom row, examples of the interaction complexity measures are presented.
In the bottom-left scene, another vehicle is merging into the SDV lane, with both the SDV and the other vehicle traveling at high speed. The bottom-middle and bottom-right scenes show similar interactions where the time-to-interaction is higher, and which are thus less interesting.
In the bottom row of Figure \ref{fig:map}, the crosswalk measure is shown, where the number of motion paths that intersect with the crosswalk polygon indicates the complexity.
Similarly, the lane-crossing measure, shown in the second row from the bottom, is directly correlated with how complex the traffic situation can be.
In Figure \ref{fig:lane}, the lane-curve complexity is shown.
Note that the curvature of the lanes in the bottom-left scene is high, as the turns are sharp, compared to the other scenes, where they are more open.
\begin{figure*}[h]
\centering
\includegraphics[width=0.77\textwidth]{imgs/sdv.pdf}
\caption[]
{Qualitative examples of complexity measures related to SDV path (top row) and the interactions between SDV and actors that are merging into SDV's lane (bottom row).}
\label{fig:sdv}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.77\textwidth]{imgs/curves.pdf}
\caption[]
{Qualitative examples of complexity measures related to vehicle (top row) and bike (bottom row) motion-paths.}
\label{fig:lane}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=1.0\textwidth]{imgs/map.pdf}
\caption[]
{Qualitative examples of complexity measures related to map.}
\label{fig:map}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=1.0\textwidth]{imgs/actors.pdf}
\caption[]
{Qualitative examples of complexity measures related to actors in the traffic-scene.}
\label{fig:actors}
\end{figure*}
\section{Introduction}
\label{sect:intr}
This paper is concerned with energy and time constraints and performance tradeoff issues one frequently encounters in distributed formation control of multi-agent systems. A fundamental problem under investigation is how energy level, mission termination time, and steady-state error tolerance may inherently impact on the achievable performance of formation control, and how such impacts may be quantified analytically. Formation control problems have been widely studied in the recent literature (see, e.g., \cite{oh2015survey,su2009flocking,chen2017connection,beard2001coordination,balch1998behavior,lin2005necessary} and the references therein). However, only a rather limited number of works have considered energy constraints \cite{weimerskirch2001energy,derenick2011energy,papakostas2018energy,sardellitti2011optimal}, though the issue is of significant importance for agents with limited energy supplied by on-board batteries.
The energy and time constraints impose severe limitations on distributed cooperative control design and have motivated several existing works involving various cooperative tasks \cite{babazadeh2018cooperative,babazadeh2018anoptimal,zhang2018consensus,moarref2014optimal,mei2015distributed,xiang2019advances}, wherein the energy cost is defined as an integral of the square of the input, and is to be minimized, together, with certain control error functions. Other relevant attempts have been pursued by researchers to reduce redundant communication to decrease the energy cost \cite{demirel2017trade,varma2019energy}. In addition, it has been recognized that the resistance caused by velocity mismatches may also contribute to the energy expenditure, which cannot be ignored for systems with relatively high velocities \cite{niu2017numerical,chu2014numerical}.
The LQR-based method is just one case of many efforts which seek to limit the energy consumption. It is noted that a direct application of the LQR-based method to multi-agent systems will generically require an all-to-all network topology (see, e.g., \cite{cao2009optimal,di2012rendezvous}). That is, there is a dilemma between distributed control and LQR-based optimal control. Very recently, a network approximation approach is developed in \cite{chen2019minimum} by introducing a ``minimal'' distribution cost in the LQR function, which guarantees that the resulting control law is optimal in the global sense.
The present paper continues the aforementioned development in the study of energy-aware formation control of multi-agent systems. The main contributions are three-fold. Firstly, a distributed formation control law is derived which is globally optimal with respect to a cost pertinent to energy and control error of the multi-agent system under the LQR framework. To the best of the authors' knowledge, the proposed algorithm is the first formation control algorithm that is concurrently distributed and optimal while satisfying the hard constraints on energy expenditure and convergence time. Secondly, the conditions on the feasibility of the formation control problem are derived analytically, which depends upon the initial energy level, the formation termination time, the steady-state error tolerance, the network topology, as well as the control parameters. Thirdly, monotonicity properties of the achievable termination time and the required minimum initial energy with respect to the control parameters are further revealed, which provides some design guidelines in achieving formation control missions under time and energy constraints. A preliminary version of the results discussed here has appeared in \cite{jia2020distributed}. With respect to \cite{jia2020distributed}, the current version provides a comprehensive analysis on the monotonicity properties of the PARE solution, the termination time, as well as the energy expenditure. Moreover, numerical examples are also provided to illustrate the validity of the proposed results.
The rest of this paper is organized as follows. In Section~\ref{sect:prel}, preliminaries are presented and the problem is formulated. Section~\ref{sect:contr} is devoted to the development of the optimal distributed control algorithm and its analysis. Section~\ref{sec:IV} discusses the monotonicity properties of the achievable termination time and the required minimum energy with respect to the control parameters. Simulation results are presented in Section~\ref{sect:sim}. Finally, Section~\ref{sect:concl} concludes the paper.
\section{Preliminaries and problem statement}
\label{sect:prel}
\subsection{Notation}
Let $\mathbb{R}$ denote the set of real numbers, $\mathbb{R}^+$ the set of positive real numbers, $\mathbb{R}^n$ the set of $n$-dimensional real vectors, and $\mathbb{R}^{n\times n}$ the set of $n\times n$ real matrices. Let $I_n\in \mathbb{R}^{n\times n}$ be the $n$-dimensional identity matrix, $\mathbf{0}_n\in \mathbb{R}^n$ the vector with all zeros, and $\mathbf{1}_n\in \mathbb{R}^n$ the vector with all ones. The subscripts of $I_n$, $\mathbf{0}_n$, and $\mathbf{1}_n$ might be dropped if no confusion arises from the context. The superscript $T$ denotes the transpose of a matrix or a vector. The set of the eigenvalues of $A$ is denoted by $\mathrm{spec}(A)$. The Euclidean norm is given by $\Vert\cdot\Vert$. For two matrices $A\in \mathbb{R}^{m\times n}$ and $B\in \mathbb{R}^{p\times q}$, their Kronecker product is denoted by
\begin{align*}
A\otimes B=\left[
\begin{array}{ccc}
a_{11}B & \cdots & a_{1n}B \\
\vdots & \ddots & \vdots \\
a_{m1}B & \cdots & a_{mn}B \\
\end{array}
\right].
\end{align*}
The abbreviation ``iff'' means ``if and only if''.
\subsection{Graph theory}
The information exchange among the agents is described by a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\{\nu_1,\dots,\nu_N\}$ is the set of nodes and $\mathcal{E} \subseteq \mathcal{V}\times \mathcal{V}$ is the set of edges. In this paper, the graph $\mathcal{G}$ is assumed to be undirected. The adjacency matrix $\mathcal{A}=[a_{ij}]\in \mathbb{R}^{N\times N}$ of $\mathcal{G}$ is defined as: $a_{ij}=1$ if $(i,j)\in \mathcal{E}$, and $a_{ij}=0$ otherwise. The degree matrix is then given by $\mathcal{D}=\mathrm{diag}([d_1,\dots,d_N])$, where $d_i=\sum_{j=1}^{N}a_{ij}$. A path from node $\nu_i$ to node $\nu_j$ is a sequence of nodes $\nu_i,\dots,\nu_j$, such that each two consecutive nodes in the sequence is connected by an edge. An undirected graph is connected if for any two vertices in $\mathcal{V}$, there always exists a path connecting them. Throughout the paper, the following assumption is made.
\begin{assumption}\label{assump1}
Graph $\mathcal{G}$ is undirected and connected.
\end{assumption}
The Laplacian matrix of the undirected graph $\mathcal{G}$ is given by $\mathcal{L}=\mathcal{D}-\mathcal{A}\in \mathbb{R}^{N\times N}$, which is known to be symmetric and positive semi-definite. It has a zero eigenvalue whose normalized eigenvector is $\frac{1}{\sqrt{N}}\mathbf{1}_N$, where $\textbf{1}_N\in \mathbb{R}^N$ is the vector with all ones. The $N$ real eigenvalues of $\mathcal{L}$ can be ordered as $0=\lambda_1\leq\lambda_2\leq\dots\leq\lambda_N$. Let $\mathcal{W}=[w_1,\dots, w_N]^T$ be the matrix comprising orthonormal eigenvectors of $\mathcal{L}$. The Laplacian matrix $\mathcal{L}$ can be diagonalized as follows:
\begin{align}\label{1}
\mathcal{L}=\mathcal{W}^T \mathcal{J} \mathcal{W},
\end{align}
where $\mathcal{J}=\mathrm{diag}([\lambda_1,\dots,\lambda_N])$.
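The Laplacian construction $\mathcal{L}=\mathcal{D}-\mathcal{A}$ and the identity $x^T\mathcal{L}x=\frac{1}{2}\sum_{i,j}a_{ij}(x_i-x_j)^2$, which underlies its positive semi-definiteness, can be checked numerically. The 3-node path graph below is only an example (its Laplacian spectrum is $\{0, 1, 3\}$, so $\lambda_2 = 1$).

```python
def laplacian(adj):
    """Graph Laplacian L = D - A for an undirected graph
    given as a 0/1 adjacency matrix."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def quad_form(L, x):
    """Quadratic form x^T L x."""
    n = len(x)
    return sum(L[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Path graph on three nodes: 1 -- 2 -- 3
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
L = laplacian(A)
```

Every row of $L$ sums to zero (so $\mathbf{1}_N$ is an eigenvector for eigenvalue $0$), and the quadratic form equals the edge-difference sum, hence is non-negative.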
\subsection{Problem statement}
Consider a multi-agent system consisting of $N$ agents moving in the $n$-dimensional space. Each agent is governed by the following equations:
\begin{align}
\dot{p}_i(t)&=v_i(t), \qquad \dot{v}_i(t)=u_{i}(t), \label{eq:dyn}\\
\dot{E}_i(t)&=-u_i^T(t)u_i(t)-\frac{\beta}{2}\sum_{j=1}^Na_{ij}\Vert v_i(t)-v_j(t)\Vert^2,\label{eq:enDyn}\\
p_i(0)&=p_i^0, \quad v_i(0)=v_i^0, \quad E_i(0)=E_i^0,\quad i=1,\dots,N, \nonumber
\end{align}
where $p_i(t)\in \mathbb{R}^n$, $v_i(t)\in \mathbb{R}^n$, $u_i(t)\in \mathbb{R}^n$, and $E_i(t) \in \mathbb{R}$ denote, respectively, the position, velocity, input, and energy level of agent $i$, and $p_i^0\in \mathbb{R}^n$, $v_i^0\in \mathbb{R}^n$, and $E_i^0\in \mathbb{R}$ are their initial values. Equation~\eqref{eq:dyn} describes the double-integrator dynamics of the agents, while Eq.~\eqref{eq:enDyn} delineates how the energy level of the agents changes. The first term of \eqref{eq:enDyn} represents the energy expenditure caused by the control input, while the second term represents the energy expenditure due to the resistance of velocity mismatch, where $\beta$ is a positive constant. Let
\begin{align}
J_E^i(t)&=\int_0^t -\dot{E}_i(\tau)d\tau \nonumber
\end{align}
be the energy consumed by agent $i$ till time $t$. The energy cost of the multi-agent system is given by
\begin{align}\label{5}
J_E(t)&=\sum_{i=1}^N\int_0^t -\dot{E}_i(\tau)d\tau\nonumber\\
&=\int_0^t\{u^T(\tau)u(\tau)+\beta v^T(\tau)(\mathcal{L}\otimes I_n)v(\tau)\}d\tau,
\end{align}
where $u(\tau)=[u_1^T(\tau),\dots,u_N^T(\tau)]^T\in \mathbb{R}^{Nn}$ and $v(\tau)=[v_1^T(\tau),\dots,v_N^T(\tau)]^T\in\mathbb{R}^{Nn}$.
For notational convenience, $J_E(\infty)$ will be simplified as $J_E$ in the rest of the paper.
Define $x_i(t)=[p_i^T(t)\quad v_i^T(t)]^T\in \mathbb{R}^{2n}$. Equation~(\ref{eq:dyn}) can be written compactly as
\begin{align}\label{6}
\dot{x}_i(t)&=Ax_i(t)+Bu_i(t),
\end{align}
where
$A=\left[\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}\right]\otimes I_n$ and
$
B=\left[\begin{array}{c}
0 \\
1 \\
\end{array}\right]\otimes I_n$.
Let $x^d=[(p^d)^T\quad (v^d)^T]^T\in \mathbb{R}^{2Nn}$ represent the desired state, with $p^d=[(p_1^d)^T,\dots,(p_N^d)^T]^T\in \mathbb{R}^{Nn}$ and $v^d=[(v_1^d)^T,\dots,(v_N^d)^T]^T\in \mathbb{R}^{Nn}$ denoting, respectively, the desired position and velocity. To guarantee the tracking result, it is necessary that all agents have the same desired velocity. In particular, for notational convenience, it is assumed that $v^d=\mathbf{0}$. Accordingly, the energy cost function (\ref{5}) can be rewritten in terms of $u(t)$ and $x(t)$ as
\begin{align}\label{7}
J_E=&\int_0^\infty\{u^T(t)u(t)+\beta[x(t)-x^d]^T(\mathcal{L}\otimes Q)\nonumber\\
&\times[x(t)-x^d]\}dt,
\end{align}
where $Q=\mathrm{diag}([0\quad1])\otimes I_n$ and $x(t)=[x_1^T(t),\dots,x_N^T(t)]^T\in \mathbb{R}^{2Nn}$. Let $d_{ij}=[(p_{ij}^d)^T\quad (v_{ij}^d)^T]^T$ denote the prespecified relative state between agents $i$ and $j$, i.e., $p_{ij}^d=p_i^d-p_j^d$ and $v_{ij}^d=v_i^d-v_j^d=0$. Let $T$ be the termination time of the formation task, and $\varepsilon \in \mathbb{R}^+$ the parameter of the steady-state error tolerance. The following problem is investigated in this paper.
\begin{problem}\label{prob1}
Design a distributed control input $u_i(t)$ for the system \eqref{6}, based on local information, such that for some $t_f \in \mathbb{R}^+$,
\begin{align}
& \quad \limsup_{t\rightarrow t_f}\Vert x_i(t)-x_j(t)-d_{ij}\Vert\leq\varepsilon \nonumber \\
\mathrm{s. t.} & \quad t_f\leq T, \quad J_E^i(T)<E_i^0.
\end{align} \nonumber
\end{problem}
It is worth pointing out that $t_f\leq T$ and $J_E^i(T)<E_i^0$ are two ``hard'' constraints on the formation task. If $t_f>T$, the formation task fails to be achieved since it is not accomplished in a timely manner. On the other hand, $J_E^i(T) \geq E_i^0$ means that the energy is exhausted before the mission is completed.
\section{Distributed optimal energy-aware formation control}
\label{sect:contr}
This section is devoted to the development of an energy-aware distributed formation control algorithm by employing solely local information.
To this aim, define the performance measure
\begin{align}
J=J_E+J_x^f+J_x^{NA}, \nonumber
\end{align}
where the energy cost $J_E$ is defined in \eqref{7}, and
\begin{align*}
J_x^f&=\alpha\int_0^\infty [x(t)-x^d]^T(\mathcal{L}\otimes I_{2n})[x(t)-x^d]dt\\
&=\frac{\alpha}{2}\int_0^\infty\sum_{i=1}^Na_{ij}\|x_i(t)-x_j(t)-d_{ij}\|^2dt,\\
J_x^{NA}&=\alpha\int_0^\infty [x(t)-x^d]^T[M\otimes S][x(t)-x^d]dt.
\end{align*}
Here, $\alpha>0$ is a tradeoff parameter,
$M=\alpha(\mathcal{L}^2-\sigma \mathcal{L})$ with $0<\sigma<\lambda_2$, and $S\geq0$ is a positive semi-definite matrix to be designed. The formation cost term $J_x^f$ represents the accumulated formation error, and ensures that the formation is reached asymptotically. It has been recognized that for a multi-agent system, the LQR-based optimal control law only exists under an all-to-all network topology \cite{cao2009optimal}. To circumvent the difficulty, the distribution cost term $J_x^{NA}$ is introduced to warrant that the optimal distributed control law exists for a generic connected network topology \cite{chen2019minimum}. The main results of this section are given as follows.
\begin{theorem}\label{the1}
Let
\begin{align}
P=\frac{1}{\sqrt{\sigma\alpha}}\left[
\begin{array}{cc}
\sqrt{\sigma\alpha+\beta\sigma+2\sqrt{\sigma\alpha}} & 1 \\
1 & \sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}} \\
\end{array}
\right]\otimes I_n, \nonumber
\end{align}
where $0<\sigma<\lambda_2$ with $\lambda_2$ being the second smallest eigenvalue of the Laplacian matrix $\mathcal{L}$. If $M=\alpha(\mathcal{L}^2-\sigma \mathcal{L})$, $S=PBB^TP$, and Assumption \ref{assump1} holds, then
\begin{enumerate}
\item the optimal distributed control input of (\ref{6}) that minimizes $J$ is given by
\begin{align}\label{11}
u_i^*(t)=-\alpha\sum_{j=1}^Na_{ij}B^TP[x_i^*(t)-x_j^*(t)-d_{ij}],
\end{align}
where $x_i^*(t)$ is the state under the optimal input $u_i^*(t)$ at time $t$;
\item for given initial energy $E(0)=[E_1(0),\dots,E_N(0)]^T\in \mathbb{R}^N$, termination time $T>0$, and steady-state error tolerance $\varepsilon>0$, if the following inequalities hold
\begin{numcases}{}
T\geq \lambda_{\min}(P)\ln{\frac{V(x(0))}{\lambda_{\min}(P)(N-1)\varepsilon^2}},\label{12}\\
E_i(0)\geq \frac{V_\mathcal{L}(0)}{2}\bigg[\lambda_N\bigg(\alpha+\frac{1}{\sigma}\bigg)\bigg(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}}\bigg)\nonumber\\
+\beta\bigg]\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}\bigg(1-e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\bigg),\nonumber\\ \qquad i\in\{1,\dots,N\}, \label{13}
\end{numcases}
where
\begin{align*}
&\lambda_{\min}(P)=\frac{1}{2\sqrt{\sigma\alpha}}\bigg(\bigg(1+\sqrt{\sigma\alpha}\bigg)\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}}\\
&\quad-\sqrt{\bigg(1+\sigma\alpha-2\sqrt{\sigma\alpha}\bigg)\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)+4}\bigg),\\
&V(x(0))=[x(0)-x^d]^T\left[\left(I_N-\frac{1}{N}\textbf{11}^T\right)\otimes P\right]\nonumber \\
& \hspace*{40pt} \times [x(0)-x^d],\\
&V_\mathcal{L}(0)=[x(0)-x^d]^T\left(\mathcal{L}\otimes I_{2n}\right)[x(0)-x^d],
\end{align*}
then Problem \ref{prob1} is solved under the distributed optimal control algorithm \eqref{11}.
\end{enumerate}
\end{theorem}
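To illustrate the control law (\ref{11}), the following sketch simulates three double integrators on a path graph in one dimension ($n=1$) with forward-Euler integration. All numeric choices (parameters, graph, initial conditions) are assumptions for the example, with $\sigma$ chosen to satisfy $0<\sigma<\lambda_2$ (here $\lambda_2=1$); the feedback gains $\alpha P_{21}$ and $\alpha P_{22}$ are read off the second row of the matrix $P$ in Theorem \ref{the1}.

```python
import math

# Illustrative parameters; sigma must satisfy 0 < sigma < lambda_2 = 1.
alpha, beta, sigma = 1.0, 0.1, 0.5
r = math.sqrt(sigma * alpha)
kp = alpha / r                                        # alpha * P_21
kv = alpha * math.sqrt(1 + beta / alpha + 2 / r) / r  # alpha * P_22

adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # path graph 1 -- 2 -- 3
p = [0.0, 3.0, -1.0]                     # initial positions (n = 1)
v = [0.5, -0.2, 0.1]                     # initial velocities
pd = [0.0, 1.0, 2.0]                     # desired positions; d_ij = pd[i] - pd[j]

dt, steps = 0.01, 2000
for _ in range(steps):
    # u_i = -alpha * sum_j a_ij * B^T P (x_i - x_j - d_ij), written out for n = 1
    u = [-sum(adj[i][j] * (kp * (p[i] - p[j] - (pd[i] - pd[j]))
                           + kv * (v[i] - v[j])) for j in range(3))
         for i in range(3)]
    p = [p[i] + dt * v[i] for i in range(3)]
    v = [v[i] + dt * u[i] for i in range(3)]

# Residual formation error: max_ij |(p_i - p_j) - (pd_i - pd_j)|
err = max(abs((p[i] - p[j]) - (pd[i] - pd[j]))
          for i in range(3) for j in range(3))
```

After 20 seconds of simulated time, the relative positions settle to the prescribed offsets and the relative velocities vanish, consistent with Problem \ref{prob1}; the group may still drift at its (conserved) average velocity, which is why only relative errors are checked.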
\begin{proof}
1) Define
\begin{align}
&\quad J(t_f,x(t_f))\nonumber\\
&=\int_0^{t_f}\{u^T(t)u(t)+\beta [x(t)-x^d]^T(\mathcal{L}\otimes Q)[x(t)-x^d]\}dt\nonumber\\
&\quad +\alpha \bigg\{\int_0^{t_f}[x(t)-x^d]^T(\mathcal{L}\otimes I_{2n})[x(t)-x^d]dt\nonumber\\
&\quad +\int_0^{t_f}[x(t)-x^d]^T[M\otimes S(t)][x(t)-x^d]dt\nonumber\\
&\quad +[x(t_f)-x^d]^T(\mathcal{L}\otimes I_{2n})[x(t_f)-x^d]\bigg\},\nonumber
\end{align}
where $S(t)\in \mathbb{R}^{2n\times 2n}$ is a time-varying positive semi-definite matrix, and $t_f$ is the actual convergence time defined in Problem \ref{prob1}. Let $\tilde{x}_i(t)$ and $\tilde{u}_i(t)$ denote, respectively, the $i$th component of $\tilde{x}(t)\triangleq(\mathcal{W} \otimes I_{2n})[x(t)-x^d]$ and $\tilde{u}(t)\triangleq(\mathcal{W} \otimes I_{2n})u(t)$, where $\mathcal{W}$ is defined in \eqref{1}. The multi-agent system (\ref{6}) can be written equivalently as
\begin{align}\label{15}
\dot{\tilde{x}}_i(t)=A\tilde{x}_i(t)+B\tilde{u}_i(t),\quad i=1,\dots,N,
\end{align}
where $Ax_i^d=0$, $i=1,\dots,N,$ is used. Due to Assumption \ref{assump1}, $J(t_f,x(t_f))$ can be written equivalently as $J(t_f,x(t_f))=\sum_{i=1}^NJ_i(t_f,\tilde{x}_i(t_f)),$
where
\begin{align}\label{16}
J_1(t_f,\tilde{x}_1(t_f))&=\int_0^{t_f}\tilde{u}_1^T(t)\tilde{u}_1(t)dt,\nonumber\\
J_i(t_f,\tilde{x}_i(t_f))&=\int_0^{t_f}\{\tilde{u}_i^T(t)\tilde{u}_i(t)+ \lambda_i\beta \tilde{x}_i^T(t)Q\tilde{x}_i(t)\}dt\nonumber\\
& +\alpha \bigg\{\int_0^{t_f}\tilde{x}_i^T(t)[\lambda_iI_{2n}+m_iS(t)]\tilde{x}_i(t)dt\nonumber\\
&+\lambda_i\tilde{x}_i^T(t_f)\tilde{x}_i(t_f)\bigg\},\quad i=2,\dots,N
\end{align}
with $m_i\triangleq \alpha(\lambda_i^2-\sigma\lambda_i)$. It is straightforward to obtain that $\tilde{u}_1^*\equiv0$.
Next, the optimal input $\tilde{u}_i^*$ is derived for $i=2,\dots,N$. Let $\tilde{x}_i^*(t)$ denote the state of (\ref{15}) under the optimal input $\tilde{u}_i^*(t)$, i.e.,
\begin{align}\label{17}
\dot{\tilde{x}}_i^*(t)=A\tilde{x}_i^*(t)+B\tilde{u}_i^*(t)
\end{align}
with the initial condition $\tilde{x}_i^*(0)=\tilde{x}_i^0$, where $\tilde{x}_i^0$ is the $i$th component of $\tilde{x}^0=(\mathcal{W}\otimes I_{2n})(x^0-x^d)$. Consider a new input vector
\begin{align}\label{18}
\tilde{u}_i(t)=\tilde{u}_i^*(t)+\epsilon \hat{u}_i(t)
\end{align}
for (\ref{15}), where $\hat{u}_i(t)$ is an arbitrary function of time, and $\epsilon\in \mathbb{R}$ is an arbitrary number. Due to the variation of the input vector, the state of the system (\ref{15}) will change from $\tilde{x}_i^*(t)$ to
\begin{align}\label{19}
\tilde{x}_i(t)=\tilde{x}_i^*(t)+\epsilon\hat{x}_i(t), \quad 0\leq t\leq t_f,
\end{align}
where $\hat{x}_i(t)$ is some function of time. Substitution of (\ref{18}) and (\ref{19}) into (\ref{15}) yields
\begin{align}\label{20}
\dot{\tilde{x}}_i^*(t)+\epsilon\dot{\hat{x}}_i(t)=&A[\tilde{x}_i^*(t)+\epsilon\hat{x}_i(t)]+B[\tilde{u}_i^*(t)+\epsilon\hat{u}_i(t)].
\end{align}
Subtraction of \eqref{17} from \eqref{20} and cancellation of $\epsilon$ lead to
\begin{align}\label{21}
\dot{\hat{x}}_i(t)=A\hat{x}_i(t)+B\hat{u}_i(t)
\end{align}
with the initial condition $\hat{x}_i(0)=0$. The solution of (\ref{21}) is
\begin{align}\label{22}
\hat{x}_i(t)=\int_0^te^{A(t-\tau)}B\hat{u}_i(\tau)d\tau.
\end{align}
Using (\ref{18}) and (\ref{19}), the cost (\ref{16}) can be rewritten as a function of $\epsilon$, denoted by $J_i(t_f,\tilde{x}_i(t_f),\epsilon)$. Since $\tilde{u}_i^*(t)$ minimizes the cost, $J_i(t_f,\tilde{x}_i(t_f),\epsilon)$ attains its minimum at $\epsilon=0$, so its first derivative with respect to $\epsilon$ must vanish at $\epsilon=0$. It thus follows that
\begin{align}\label{23}
&\int_0^{t_f}\{\hat{u}_i^T(t)\tilde{u}_i^*(t)+\hat{x}_i^T(t)[\lambda_i\beta Q+\lambda_i\alpha I_{2n}+m_i\alpha S(t)]\nonumber\\
&\times\tilde{x}_i^*(t)\}dt+\lambda_i\alpha\hat{x}_i^T(t_f)\tilde{x}_i^*(t_f)=0.
\end{align}
Substitution of (\ref{22}) into (\ref{23}) together with some rearrangements leads to
\begin{align}\label{24}
&\int_0^{t_f}\hat{u}_i^T(t)\bigg\{\tilde{u}_i^*(t)+B^T\int_t^{t_f}e^{A^T(\tau-t)}[\lambda_i\beta Q+\lambda_i\alpha I_{2n}\nonumber\\
&+m_i\alpha S(\tau)]\tilde{x}_i^*(\tau)d\tau+\lambda_i\alpha B^Te^{A^T(t_f-t)}\tilde{x}_i^*(t_f)\bigg\}dt=0.
\end{align}
Let
\begin{align}\label{25}
p_i(t)\triangleq&\int_t^{t_f}e^{A^T(\tau-t)}[\lambda_i\frac{\beta}{\alpha}Q+\lambda_i I_{2n}+m_i S(\tau)]\tilde{x}_i^*(\tau)d\tau\nonumber\\
&+\lambda_i e^{A^T(t_f-t)}\tilde{x}_i^*(t_f).
\end{align}
Equation~(\ref{24}) can be written compactly as
\begin{align}\label{26}
\int_0^{t_f}\hat{u}_i^T(t)\{\tilde{u}_i^*(t)+\alpha B^Tp_i(t)\}dt=0.
\end{align}
Since (\ref{26}) holds for all possible $\hat{u}_i(t)$, it follows that
\begin{align}\label{27}
\tilde{u}_i^*(t)=-\alpha B^Tp_i(t).
\end{align}
Therefore, the problem of finding the optimal input $\tilde{u}_i^*(t)$ is transformed into the problem of finding the solution of $p_i(t)$ that satisfies (\ref{25}). Similar to the process of obtaining $p_i(t)$ in \cite{chen2019minimum}, it can be shown that
\begin{align}\label{28}
p_i(t)=\lambda_iP(t)\tilde{x}_i^*(t),
\end{align}
where $P(t)$ is the solution to the following parametric differential Riccati equation (PDRE)
\begin{equation}\label{29}
\dot{P}(t)+ I_{2n}+\frac{\beta}{\alpha}Q+A^TP(t)+P(t)A-\sigma\alpha P(t)BB^TP(t)=0,
\end{equation}
\begin{align}
P(t_f)= I_{2n}, \nonumber
\end{align}
where $0<\sigma<\lambda_2$. Substitution of (\ref{28}) into (\ref{27}) yields
\begin{align*}
\tilde{u}_i^*(t)=-\alpha\lambda_iB^TP(t)\tilde{x}_i^*(t),
\end{align*}
or equivalently
\begin{align}
\tilde{u}^*(t)=-\alpha[\mathcal{J}\otimes B^TP(t)]\tilde{x}^*(t), \nonumber
\end{align}
which can be further written as
\begin{align}
u^*(t)=-\alpha[\mathcal{L}\otimes B^TP(t)][x^*(t)-x^d]. \nonumber
\end{align}
Let $P$ be the solution to the following parametric algebraic Riccati equation (PARE)
\begin{align}\label{33}
I_{2n}+\frac{\beta}{\alpha} Q+A^TP+PA-\sigma\alpha PBB^TP=0.
\end{align}
Since $(A,B)$ is stabilizable and $(A,I_{2n})$ is detectable, the solution to (\ref{29}) converges to that of (\ref{33}) as $t_f\rightarrow \infty$. This leads to the optimal control input in the infinite-horizon case,
\begin{align}\label{34}
\tilde{u}_i^*(t)=-\alpha\lambda_iB^TP\tilde{x}_i^*(t),
\end{align}
or equivalently
\begin{align}
u^*(t)=-\alpha(\mathcal{L}\otimes B^TP)[x^*(t)-x^d]. \nonumber
\end{align}
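The convergence of the PDRE solution to the PARE solution can be spot-checked numerically. The sketch below uses hypothetical parameter values with $n=1$ and assumes a velocity-only state weight $Q=\mathrm{diag}(0,1)$; it integrates \eqref{29} backward from $P(t_f)=I_{2n}$ and verifies that the limit satisfies \eqref{33}:

```python
import numpy as np

# Double-integrator matrices for n = 1; hypothetical parameter values,
# with the state weight Q assumed to be diag(0, 1) (velocity weighting)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
alpha, sigma, beta = 2.0, 0.8, 0.5
Q = np.diag([0.0, 1.0])

def pdre_rhs(P):
    # Right-hand side of dP/ds = I + (beta/alpha) Q + A^T P + P A
    # - sigma alpha P B B^T P, where s = t_f - t (the PDRE (29) run backward)
    return (np.eye(2) + (beta / alpha) * Q + A.T @ P + P @ A
            - sigma * alpha * P @ B @ B.T @ P)

# Euler integration backward from the terminal condition P(t_f) = I
P = np.eye(2)
dt = 1e-3
for _ in range(20000):  # horizon length 20, long enough to converge
    P = P + dt * pdre_rhs(P)

# At convergence the derivative vanishes, i.e., P solves the PARE (33)
pare_residual = np.linalg.norm(pdre_rhs(P))
```

Since $(A,B)$ is stabilizable, the backward Riccati flow settles at the stabilizing PARE solution, so the residual of \eqref{33} vanishes at convergence.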
Substituting $A=\left[\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}\right]\otimes I_n$ and
$
B=\left[\begin{array}{c}
0 \\
1 \\
\end{array}\right]\otimes I_n$ into \eqref{33}, the solution $P$ to \eqref{33} is given by
\begin{align}\label{36}
P=\frac{1}{\sqrt{\sigma\alpha}}\left[
\begin{array}{cc}
\sqrt{\sigma\alpha+\beta\sigma+2\sqrt{\sigma\alpha}} & 1 \\
1 & \sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}} \\
\end{array}
\right]\otimes I_n.
\end{align}
The proof of the first part is thus completed.
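The closed form \eqref{36} can be checked numerically. In the sketch below, the parameter values are hypothetical and the state weight is assumed to be $Q=\mathrm{diag}(0,1)\otimes I_n$ (velocity-only weighting), the choice under which \eqref{36} satisfies the PARE \eqref{33} exactly; the residual of \eqref{33} is evaluated for $n=1$:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
alpha, sigma, beta = 2.0, 0.8, 0.5   # hypothetical values
Q = np.diag([0.0, 1.0])              # assumed velocity-only weighting

# Closed-form solution (36) for n = 1
sa = np.sqrt(sigma * alpha)
P = (1 / sa) * np.array(
    [[np.sqrt(sigma * alpha + beta * sigma + 2 * sa), 1.0],
     [1.0, np.sqrt(1 + beta / alpha + 2 / sa)]])

# Residual of the PARE (33): I + (beta/alpha) Q + A^T P + P A - sigma alpha P B B^T P
residual = (np.eye(2) + (beta / alpha) * Q + A.T @ P + P @ A
            - sigma * alpha * P @ B @ B.T @ P)
residual_norm = np.linalg.norm(residual)
is_pos_def = np.all(np.linalg.eigvalsh(P) > 0)
```

The residual vanishes to machine precision and $P$ is positive definite, as required for the Lyapunov argument in the second part of the proof.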
2) Substituting the optimal control law (\ref{11}) into the system \eqref{6} yields the following closed-loop system:
\begin{align}\label{37}
\dot{x}^*(t)=(I_N\otimes A)x^*(t)-\alpha(\mathcal{L}\otimes BB^TP)[x^*(t)-x^d].
\end{align}
Define $V(x)=[x(t)-x^d]^T\left[(I_N-\frac{1}{N}\textbf{11}^T)\otimes P\right][x(t)-x^d],$
where $P$ is given by (\ref{36}). Note that $V(x)=0$ iff the formation is reached.
It follows from (\ref{37}) that
\begin{align}\label{38}
&\quad\dot{V}(x^*)\nonumber\\
&=2[x^*(t)-x^d]^T\bigg[\bigg(I_N-\frac{1}{N}\textbf{11}^T\bigg)\otimes PA\bigg][x^*(t)-x^d]\nonumber\\
&-2[x^*(t)-x^d]^T[\mathcal{L}\otimes \alpha PBB^TP][x^*(t)-x^d].
\end{align}
The first term in (\ref{38}) can be written as
\begin{align}\label{39}
&\quad 2[x^*(t)-x^d]^T\bigg[\bigg(I_N-\frac{1}{N}\textbf{11}^T\bigg)\otimes PA\bigg][x^*(t)-x^d]\nonumber\\
&=2[x^*(t)-x^d]^T(\mathcal{W}^T\Upsilon \mathcal{W}\otimes PA)[x^*(t)-x^d]\nonumber\\
&=2\sum_{i=2}^N[\tilde{x}_i^*(t)]^TPA\tilde{x}_i^*(t),
\end{align}
where $\Upsilon=\mathrm{diag}([0,1,\dots,1])\in \mathbb{R}^{N\times N}$. Similarly, the second term can be rewritten as
\begin{align}\label{40}
&\quad2[x^*(t)-x^d]^T[\mathcal{L}\otimes\alpha PBB^TP][x^*(t)-x^d]\nonumber\\
&=2\sum_{i=2}^N\alpha\lambda_i[\tilde{x}_i^*(t)]^TPBB^TP\tilde{x}_i^*(t).
\end{align}
Substituting (\ref{40}) and (\ref{39}) into (\ref{38}) yields
\begin{align}\label{41}
\dot{V}(x^*)&=\sum_{i=2}^N[\tilde{x}^*_i(t)]^T[(A-\lambda_i\alpha BB^TP)^TP \nonumber\\
&\quad +P(A-\lambda_i\alpha BB^TP)]\tilde{x}_i^*(t)\nonumber\\
&=-\sum_{i=2}^N[\tilde{x}^*_i(t)]^T\bigg\{I_{2n}+\frac{\beta}{\alpha} Q+\alpha[\sigma+2(\lambda_i-\sigma)] \nonumber\\
&\quad\times PBB^TP\bigg\}\tilde{x}^*_i(t)\nonumber\\
&\leq-\sum_{i=2}^N[\tilde{x}_i^*(t)]^T\tilde{x}_i^*(t),
\end{align}
where the second equality holds because
\begin{align*}
&\quad(A-\lambda_i\alpha BB^TP)^TP+P(A-\lambda_i\alpha BB^TP)\\
&=A^TP+PA-\sigma\alpha PBB^TP-\sigma\alpha PBB^TP\\
&\quad-2\alpha(\lambda_i-\sigma)PBB^TP\\
&=-\bigg( I_{2n}+\frac{\beta}{\alpha} Q+\alpha[\sigma+2(\lambda_i-\sigma)]PBB^TP\bigg),
\end{align*}
where the last step uses the PARE \eqref{33}.
Additionally,
\begin{align}\label{42}
V(x^*)&=[x^*(t)-x^d]^T\bigg[(I_N-\frac{1}{N}\textbf{11}^T)\otimes P\bigg][x^*(t)-x^d]\nonumber\\
&\geq\lambda_{\min}(P)\sum_{i=2}^N[\tilde{x}_i^*(t)]^T\tilde{x}_i^*(t).
\end{align}
It follows from \eqref{41} and \eqref{42} that $\frac{\dot{V}(x^*)}{V(x^*)}\leq-\frac{1}{\lambda_{\min}(P)},$
which gives
\begin{align}\label{44}
V(x^*(t))\leq e^{-\frac{1}{\lambda_{\min}(P)}t}V(x(0)).
\end{align}
Moreover,
\begin{align*}
V(x^*(t_f))&=\sum_{i=2}^N[\tilde{x}_i^*(t_f)]^TP\tilde{x}_i^*(t_f)\\
&\geq \lambda_{\min}(P)\sum_{i=2}^N\Vert \tilde{x}_i^*(t_f) \Vert^2\\
&\geq \lambda_{\min}(P)(N-1)\varepsilon^2.
\end{align*}
By \eqref{44}, the upper bound on the formation time is given by
\begin{align}
t_f&\leq \lambda_{\min}(P)\ln{\frac{V(x(0))}{V(x^*(t_f))}}\nonumber\\
&\leq\lambda_{\min}(P)\ln{\frac{V(x(0))}{\lambda_{\min}(P)(N-1)\varepsilon^2}}.\nonumber
\end{align}
Therefore, for the given steady-state error tolerance $\varepsilon$ and the termination time $T$, the formation task can be achieved if
\begin{align}
T\geq \lambda_{\min}(P)\ln{\frac{V(x(0))}{\lambda_{\min}(P)(N-1)\varepsilon^2}},\nonumber
\end{align}
where by \eqref{36}
\begin{align}
&\lambda_{\min}(P)=\frac{1}{2\sqrt{\sigma\alpha}}\bigg(\bigg(1+\sqrt{\sigma\alpha}\bigg)\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}}\nonumber\\
&-\sqrt{\bigg(1+\sigma\alpha-2\sqrt{\sigma\alpha}\bigg)\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)+4}\bigg).\nonumber
\end{align}
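The closed-form expression for $\lambda_{\min}(P)$ above can be cross-checked against a direct eigenvalue computation. The sketch below uses hypothetical parameter values, $n=1$, and the assumed velocity-only weight $Q=\mathrm{diag}(0,1)$ so that \eqref{36} applies:

```python
import numpy as np

alpha, sigma, beta = 2.0, 0.8, 0.5   # hypothetical values
sa = np.sqrt(sigma * alpha)
s2 = 1 + beta / alpha + 2 / sa       # recurring term 1 + beta/alpha + 2/sqrt(sigma*alpha)

# P from (36) for n = 1; note sqrt(sigma*alpha + beta*sigma + 2*sa)/sa = sqrt(s2)
P = np.array([[np.sqrt(s2), 1 / sa],
              [1 / sa, np.sqrt(s2) / sa]])

# Closed-form lambda_min(P) stated above
lam_min_formula = (1 / (2 * sa)) * (
    (1 + sa) * np.sqrt(s2)
    - np.sqrt((1 + sigma * alpha - 2 * sa) * s2 + 4))

lam_min_numeric = np.linalg.eigvalsh(P)[0]   # smallest eigenvalue of P
```

The two values agree to machine precision, since the closed form is exactly the smaller root of the characteristic polynomial of the $2\times 2$ block of \eqref{36}.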
Let $J_{E^*}$ denote the energy consumption during $[0,T]$ under the optimal control law (\ref{11}). Due to Assumption \ref{assump1}, $J_{E^*}$ can be written as $J_{E^*}=\sum_{i=2}^NJ_{\tilde{E}_i^*},$
where $$J_{\tilde{E}_i^*}=\int_0^{T}\{[\tilde{u}_i^*(t)]^T\tilde{u}_i^*(t)+\beta\lambda_i[\tilde{x}_i^*(t)]^TQ\tilde{x}_i^*(t)\}dt.$$
It follows from (\ref{34}) that
\begin{align}\label{51}
J_{\tilde{E}_i^*}&=\int_0^{T}[\tilde{x}_i^*(t)]^T(\alpha^2\lambda_i^2PBB^TP+\beta\lambda_iQ)\tilde{x}_i^*(t)dt\nonumber\\
&\leq[\alpha^2\lambda_i\lambda_{\max}(PBB^TP)+\beta]\lambda_i\int_0^{T}[\tilde{x}_i^*(t)]^T\tilde{x}_i^*(t)dt\nonumber\\
&\leq\bigg[\lambda_N\bigg(\alpha+\frac{1}{\sigma}\bigg)\bigg(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}}\bigg)+\beta\bigg]\lambda_i\nonumber\\
&\quad \times\int_0^{T}[\tilde{x}_i^*(t)]^T\tilde{x}_i^*(t)dt.
\end{align}
On the other hand, the solution of (\ref{17}) is given by
\begin{align}
\tilde{x}_i^*(t)=e^{(A-\lambda_i\alpha BB^TP)t}\tilde{x}_i(0). \nonumber
\end{align}
It hence follows that
\begin{align*}
&\quad\int_0^{T}[\tilde{x}_i^*(t)]^T\tilde{x}_i^*(t)dt\\
&=\int_0^{T}\tilde{x}_i^T(0)e^{(A-\lambda_i\alpha BB^TP)^Tt}e^{(A-\lambda_i\alpha BB^TP)t}\tilde{x}_i(0)dt\\
&\leq \Vert\tilde{x}_i(0)\Vert^2\int_0^{T}\Vert e^{(A-\lambda_i\alpha BB^TP)t}\Vert^2dt.
\end{align*}
Since
\begin{align}
\Vert e^{(A-\lambda_i\alpha BB^TP)t}\Vert\leq e^{[\max\limits_{i=2,\dots,N}\lambda_{\max}(A-\lambda_i\alpha BB^TP)]t}, \nonumber
\end{align}
it follows that
\begin{align}\label{55}
& \quad\int_0^{T}[\tilde{x}_i^*(t)]^T\tilde{x}_i^*(t)dt\nonumber\\
&\leq \Vert\tilde{x}_i(0)\Vert^2\int_0^{T}e^{2[\max\limits_{i=2,\dots,N}\lambda_{\max}(A-\lambda_i\alpha BB^TP)]t}dt\nonumber\\
&= \frac{\Vert\tilde{x}_i(0)\Vert^2}{2\vert\max\limits_{i=2,\dots,N}\lambda_{\max}(A-\lambda_i\alpha BB^TP)\vert} \nonumber\\
&\quad \times\bigg(1-e^{2[\max\limits_{i=2,\dots,N}\lambda_{\max}(A-\lambda_i\alpha BB^TP)]T}\bigg).
\end{align}
It can be verified that
\begin{align}
&\quad \lambda_{\max}(A-\lambda_i\alpha BB^TP) \nonumber \\
&=\frac{1}{2}\bigg(-\lambda_i\sqrt{\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)}\nonumber\\
&\quad +\sqrt{\lambda_i^2\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)-4\lambda_i\sqrt{\frac{\alpha}{\sigma}}}\bigg). \nonumber
\end{align}
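The expression for $\lambda_{\max}(A-\lambda_i\alpha BB^TP)$ can likewise be verified numerically. The sketch below uses hypothetical values with $n=1$ and the assumed weight $Q=\mathrm{diag}(0,1)$; the parameters are chosen so that the inner square root is real:

```python
import numpy as np

alpha, sigma, beta, lam_i = 2.0, 0.8, 0.5, 1.5   # hypothetical values
sa = np.sqrt(sigma * alpha)
s2 = 1 + beta / alpha + 2 / sa

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
P = np.array([[np.sqrt(s2), 1 / sa],              # closed form (36), n = 1
              [1 / sa, np.sqrt(s2) / sa]])

# Closed-form lambda_max(A - lam_i alpha B B^T P); real for this parameter choice
disc = lam_i**2 * (alpha / sigma) * s2 - 4 * lam_i * np.sqrt(alpha / sigma)
lam_max_formula = 0.5 * (-lam_i * np.sqrt((alpha / sigma) * s2) + np.sqrt(disc))

M = A - lam_i * alpha * B @ B.T @ P
lam_max_numeric = np.max(np.linalg.eigvals(M).real)
```

Here $\alpha p_2=\sqrt{\alpha/\sigma}$ and $\alpha p_3=\sqrt{(\alpha/\sigma)s^2}$, so the characteristic polynomial of $A-\lambda_i\alpha BB^TP$ reproduces the stated formula.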
Additionally,
\begin{align}\label{57}
& \quad \lambda_i\Vert \tilde{x}_i(0)\Vert^2 \nonumber \\
&\leq\sum_{i=1}^N\lambda_i \Vert \tilde{x}_i(0)\Vert^2\nonumber\\
&\leq [x(0)-x^d]^T(\mathcal{L}\otimes I_{2n})[x(0)-x^d].
\end{align}
Combining (\ref{51}), (\ref{55}), and (\ref{57}) leads to
\begin{align}\label{58}
J_{\tilde{E}_i^*} \leq & \frac{V_\mathcal{L}(0)[\lambda_N(\alpha+\frac{1}{\sigma})(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}})+\beta]}{2\vert\max\limits_{i=2,\dots,N}\lambda_{\max}(A-\lambda_i\alpha BB^TP)\vert}\nonumber\\
&\times\bigg(1-e^{2[\max\limits_{i=2,\dots,N}\lambda_{\max}(A-\lambda_i\alpha BB^TP)]T}\bigg),
\end{align}
where
\begin{align*}
V_\mathcal{L}(0)=[x(0)-x^d]^T(\mathcal{L}\otimes I_{2n})[x(0)-x^d].
\end{align*}
Let $\mathrm{Max}(\lambda)$ denote the value of $\lambda_i$ that maximizes $\lambda_{\max}(A-\lambda_i\alpha BB^TP)$. Equation~\eqref{58} can then be written as
\begin{align}\label{59}
J_{\tilde{E}_i^*} \leq & \frac{V_\mathcal{L}(0)[\lambda_N(\alpha+\frac{1}{\sigma})(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}})+\beta]}{2\vert\lambda_{\max}(A-\mathrm{Max}(\lambda)\alpha BB^TP)\vert}\nonumber\\
&\times\bigg(1-e^{2\lambda_{\max}(A-\mathrm{Max}(\lambda)\alpha BB^TP)T}\bigg).
\end{align}
Since
\begin{align}
& \quad \lambda_{\max}(A-\mathrm{Max}(\lambda)\alpha BB^TP)\nonumber \\
&=\frac{1}{2}\bigg(-\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}
+\frac{2}{\sqrt{\sigma\alpha}}\bigg)}\nonumber\\
&\quad+\sqrt{\mathrm{Max}^2(\lambda)\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)- 4\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}}}\bigg)\nonumber\\
&\geq -\frac{1}{2}\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)}, \nonumber
\end{align}
it follows that
\begin{align}\label{60}
&\quad1-e^{2\lambda_{\max}(A-\mathrm{Max}(\lambda)\alpha BB^TP)T}\nonumber\\
&\leq1-e^{-\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\nonumber\\
&\leq1-e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}.
\end{align}
Combining \eqref{59} and \eqref{60} leads to
\begin{align}\label{61}
J_{\tilde{E}_i^*}&\leq \frac{V_\mathcal{L}(0)[\lambda_N(\alpha+\frac{1}{\sigma})(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}})+\beta]}{2\vert\lambda_{\max}(A-\mathrm{Max}(\lambda)\alpha BB^TP)\vert}\nonumber\\
&\quad\times\bigg(1-e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\bigg)\nonumber\\
&= \frac{V_\mathcal{L}(0)[\lambda_N(\alpha+\frac{1}{\sigma})(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}})+\beta]}{4\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}}}\nonumber\\
&\quad\times\bigg(\sqrt{\mathrm{Max}^2(\lambda)\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})-4\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}}}\nonumber\\
&\quad +\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})} \bigg)\nonumber\\
&\quad\times\bigg(1-e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\bigg),
\end{align}
which holds by multiplying the numerator and the denominator by $\sqrt{\mathrm{Max}^2(\lambda)\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)-4\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}}}+\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)}$.
Additionally,
\begin{align}\label{62}
&\quad\sqrt{\mathrm{Max}^2(\lambda)\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)-4\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}}}\nonumber\\
&\quad+\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)}\nonumber\\
&\leq2\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)}.
\end{align}
Substituting \eqref{62} into \eqref{61} yields
\begin{align*}
&\quad J_{\tilde{E}_i^*}\\
&\leq\frac{V_\mathcal{L}(0)[\lambda_N(\alpha+\frac{1}{\sigma})(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}})+\beta]}{4\mathrm{Max}(\lambda)\sqrt{\frac{\alpha}{\sigma}}}2\mathrm{Max}(\lambda)\\
&\times\sqrt{\frac{\alpha}{\sigma}\bigg(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}\bigg)}\bigg(1-e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\bigg)\\
&=\frac{1}{2}V_\mathcal{L}(0) \bigg[\lambda_N\bigg(\alpha+\frac{1}{\sigma}\bigg)\bigg(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}}\bigg)+\beta\bigg]\\
&\times\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}\bigg(1-e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\bigg).
\end{align*}
The energy constraint is given by $J_{\tilde{E}_i^*}\leq E_i(0).$
Thus, the energy requirement can be met if
\begin{align}
E_i(0)&\geq \frac{1}{2}V_\mathcal{L}(0) \bigg[\lambda_N\bigg(\alpha+\frac{1}{\sigma}\bigg)\bigg(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}}\bigg)+\beta\bigg] \nonumber\\ &\times\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}\bigg(1-e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\bigg),\nonumber\\
& \quad i\in\{1,\dots,N\}. \nonumber
\end{align}
The proof is thus completed.
\end{proof}
According to Theorem~\ref{the1}, if the time constraint is removed, i.e., $T\rightarrow\infty$, the energy bound simplifies to
\begin{align*}
E_i(0)&\geq \frac{1}{2}V_\mathcal{L}(0) \bigg[\lambda_N\bigg(\alpha+\frac{1}{\sigma}\bigg)\bigg(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}}\bigg)+\beta\bigg]\\
&\quad\times\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}},\quad i\in\{1,\dots,N\}.
\end{align*}
Additionally, if the initial formation error is large, a longer termination time $T$ and a higher initial energy level $E_i(0)$ are required to achieve the formation of the multi-agent system. Moreover, the smaller the formation threshold $\varepsilon$, the longer the required termination time and the higher the energy consumption.
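The dependence of the energy bound on the termination time can be illustrated numerically. In the sketch below, all values, including $V_\mathcal{L}(0)$ and $\lambda_N$, are hypothetical stand-ins for what a concrete initial condition and graph Laplacian would give; the finite-$T$ bound of Theorem~\ref{the1} is evaluated and compared with its $T\rightarrow\infty$ limit:

```python
import numpy as np

# Hypothetical values; V_L(0) and lambda_N would come from the initial
# condition and the graph Laplacian in a concrete problem
alpha, sigma, beta = 2.0, 0.8, 0.5
lam_N, V_L0 = 3.618, 10.0

s2 = 1 + beta / alpha + 2 / np.sqrt(alpha * sigma)
# T-free bound obtained as T -> infinity
coeff = 0.5 * V_L0 * (lam_N * (alpha + 1 / sigma)
                      * (alpha + beta + 2 * np.sqrt(alpha / sigma)) + beta) * np.sqrt(s2)

def energy_bound(T):
    """Lower bound on E_i(0) for a finite termination time T (Theorem 1)."""
    return coeff * (1 - np.exp(-lam_N * np.sqrt((alpha / sigma) * s2) * T))

bounds = [energy_bound(T) for T in (0.1, 0.3, 1.0)]
```

The finite-$T$ bound is always below the $T$-free bound and increases monotonically toward it as $T$ grows.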
\section{Monotonicity properties of the optimal formation algorithm}
\label{sec:IV}
This section is devoted to the discussion of the relationships between the lower bound of the required initial energy $E_i(0)$, the lower bound of the achievable termination time $T$, and the algorithm parameters.
\subsection{Monotonicity of the PARE solution}
The following result presents the monotonicity of the solution $P$ of the PARE \eqref{33} with respect to the parameters $\alpha$, $\sigma$, and $\beta$.
\begin{theorem}\label{the2}
The solution $P$ of the PARE \eqref{33} is a decreasing function of $\alpha$ and $\sigma$, and an increasing function of $\beta$, i.e.,
\begin{align}
\frac{\partial P}{\partial \alpha}\leq 0,\quad \frac{\partial P}{\partial \sigma}\leq 0,\quad \frac{\partial P}{\partial \beta}\geq 0, \quad \forall {\alpha, \sigma, \beta}>0. \nonumber
\end{align}
\end{theorem}
\begin{proof}
It follows from \eqref{33} that
\begin{align}\label{64}
&\quad(A-\alpha\sigma BB^TP)^TP+P(A-\alpha\sigma BB^TP)\nonumber\\
&=-\bigg(I_{2n}+\frac{\beta}{\alpha}Q+\alpha\sigma PBB^TP\bigg).
\end{align}
Since $I_{2n}+\frac{\beta}{\alpha}Q+\alpha\sigma PBB^TP$ is positive definite, it follows from \eqref{64} that $(A-\alpha\sigma BB^TP)$ is Hurwitz. To show the relationship between $P$ and $\alpha$, differentiating both sides of \eqref{64} with respect to $\alpha$ yields
\begin{align}\label{65}
&\quad \frac{\partial P}{\partial\alpha}(A-\alpha\sigma BB^TP)+(A-\alpha\sigma BB^TP)^T\frac{\partial P}{\partial \alpha}\nonumber\\
&=\sigma PBB^TP+\frac{\beta}{\alpha^2}Q.
\end{align}
Since $(A-\alpha\sigma BB^TP)$ is Hurwitz, and the right-hand side of \eqref{65} is positive semidefinite, \eqref{65} has the following unique solution
\begin{align}
\frac{\partial P}{\partial \alpha}&=-\int_0^\infty e^{(A-\alpha\sigma BB^TP)^Tt}\bigg(\sigma PBB^TP+\frac{\beta}{\alpha^2}Q\bigg)\nonumber\\
&\quad \times e^{(A-\alpha\sigma BB^TP)t}dt\nonumber\\
&\leq 0. \nonumber
\end{align}
Thus, $P$ is monotonically decreasing with $\alpha$.
Similarly, it can be shown that
\begin{align}
\frac{\partial P}{\partial\sigma}(A-\alpha\sigma BB^TP)+(A-\alpha\sigma BB^TP)^T\frac{\partial P}{\partial \sigma}=\alpha PBB^TP, \nonumber
\end{align}
which has the following unique solution
\begin{align}
\frac{\partial P}{\partial \sigma}&=-\int_0^\infty e^{(A-\alpha\sigma BB^TP)^Tt}\alpha PBB^TPe^{(A-\alpha\sigma BB^TP)t}dt\nonumber\\
&\leq 0. \nonumber
\end{align}
Similarly, it can be shown that
\begin{align}
\frac{\partial P}{\partial\beta}(A-\alpha\sigma BB^TP)+(A-\alpha\sigma BB^TP)^T\frac{\partial P}{\partial \beta}=-\frac{1}{\alpha}Q, \nonumber
\end{align}
which has the unique solution
\begin{align}
\frac{\partial P}{\partial \beta}&=\int_0^\infty e^{(A-\alpha\sigma BB^TP)^Tt}\frac{1}{\alpha}Qe^{(A-\alpha\sigma BB^TP)t}dt\geq 0. \nonumber
\end{align}
Thus, $P$ is monotonically decreasing with $\sigma$ and monotonically increasing with $\beta$. The proof is thus completed.
\end{proof}
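The monotonicity in Theorem~\ref{the2} can be spot-checked with the closed form \eqref{36}. The sketch below uses hypothetical parameter values, $n=1$, and the assumed weight $Q=\mathrm{diag}(0,1)$, and verifies the Loewner ordering of $P$ between two parameter values:

```python
import numpy as np

def pare_solution(alpha, sigma, beta):
    """Closed-form PARE solution (36) for n = 1 (Q = diag(0,1) assumed)."""
    sa = np.sqrt(sigma * alpha)
    s = np.sqrt(1 + beta / alpha + 2 / sa)
    # note sqrt(sigma*alpha + beta*sigma + 2*sa)/sa simplifies to s
    return np.array([[s, 1 / sa],
                     [1 / sa, s / sa]])

base = pare_solution(2.0, 0.8, 0.5)          # hypothetical base point

# P decreases (in the Loewner order) as alpha or sigma grows ...
d_alpha = np.linalg.eigvalsh(base - pare_solution(4.0, 0.8, 0.5))
d_sigma = np.linalg.eigvalsh(base - pare_solution(2.0, 1.2, 0.5))
# ... and increases as beta grows
d_beta = np.linalg.eigvalsh(pare_solution(2.0, 0.8, 1.0) - base)
```

All three difference matrices are positive semidefinite, in agreement with $\frac{\partial P}{\partial \alpha}\leq 0$, $\frac{\partial P}{\partial \sigma}\leq 0$, and $\frac{\partial P}{\partial \beta}\geq 0$.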
\subsection{Termination time}
\label{sect:time}
The following result discusses the monotonicity of the lower bound of the termination time $T$ in \eqref{12}.
\begin{theorem}\label{the3}
The lower bound of the achievable termination time $T$ in (\ref{12}) is a decreasing function of both $\sigma$ and $\alpha$ and an increasing function of $\beta$.
\end{theorem}
\begin{proof}
For notational convenience, define the lower bound of the termination time $T$ as $T_l$, i.e.,
\begin{align}\label{71}
T_l=\lambda_{\min}(P)\ln{\frac{V(x(0))}{\lambda_{\min}(P)(N-1)\varepsilon^2}}.
\end{align}
Differentiating both sides of \eqref{71} with respect to $\alpha$ yields
\begin{align}\label{72}
\frac{\partial {T}_l}{\partial \alpha}=&\frac{\partial\lambda_{\min}(P)}{\partial\alpha}\bigg(\ln{\frac{V(x(0))}{\lambda_{\min}(P)(N-1)\varepsilon^2}}-1\bigg)\nonumber\\
&+\frac{\lambda_{\min}(P)}{V(x(0))}\frac{\partial V(x(0))}{\partial \alpha}.
\end{align}
It is straightforward to see that $\frac{\partial T_l}{\partial \alpha}\leq 0$ if the two terms on the right-hand side of \eqref{72} are non-positive. According to the relationship between $P$ and $\alpha$ established in Theorem~\ref{the2}, it follows that
\begin{align}
\frac{\partial V(x(0))}{\partial \alpha}\leq 0. \nonumber
\end{align}
Since $P$ is symmetric and positive semidefinite, it can be diagonalized as
\begin{align}\label{74}
P=\mathcal{M}^T\Lambda(P)\mathcal{M},
\end{align}
where $\mathcal{M}=[m_1,\dots,m_{2n}]$ is the matrix comprising the orthonormal eigenvectors of $P$ and $\Lambda(P)=\mathrm{diag}([\lambda_1(P),\dots,\lambda_{2n}(P)])$ with $\lambda_i(P)$ being the $i$th eigenvalue of $P$.\\
Differentiating both sides of \eqref{74} with respect to $\alpha$ yields
\begin{align}
\frac{\partial P}{\partial \alpha}&=\frac{\partial \mathcal{M}^T}{\partial \alpha}(\Lambda(P)\mathcal{M})+\mathcal{M}^T\frac{\partial (\Lambda(P)\mathcal{M})}{\partial \alpha}\nonumber\\
&=0+\mathcal{M}^T(\frac{\partial \Lambda(P)}{\partial \alpha}\mathcal{M}+\Lambda(P)\frac{\partial \mathcal{M}}{\partial \alpha})\nonumber\\
&=\mathcal{M}^T\frac{\partial \Lambda(P)}{\partial \alpha}\mathcal{M}. \nonumber
\end{align}
Since $\frac{\partial P}{\partial \alpha}\leq 0$, each eigenvalue of $\frac{\partial P}{\partial \alpha}$ must be non-positive, i.e.,
\begin{align}
\frac{\partial \lambda_i(P)}{\partial \alpha}\leq 0,\quad i\in\{1,\dots,2n\}, \nonumber
\end{align}
which gives $\frac{\partial\lambda_{\min}(P)}{\partial \alpha}\leq 0.$
Additionally,
\begin{align*}
&\quad\ln{\frac{V(x(0))}{\lambda_{\min}(P)(N-1)\varepsilon^2}}-1 \\ &\geq \ln{\frac{\lambda_{\min}(P)\sum_{i=2}^N\Vert\tilde{x}_i(0)\Vert^2}{\lambda_{\min}(P)(N-1)\varepsilon^2}}-1\\
&=\ln{\frac{\sum_{i=2}^N\Vert\tilde{x}_i(0)\Vert^2}{(N-1)\varepsilon^2}}-1\geq 0,
\end{align*}
which leads to $\frac{\partial {T}_l}{\partial \alpha}\leq 0.$
Similarly, it can be shown that
\begin{align*}
\frac{\partial {T}_l}{\partial \sigma}\leq 0, \quad \frac{\partial {T}_l}{\partial \beta}\geq 0.
\end{align*}
The proof is thus completed.
\end{proof}
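A numerical spot-check of Theorem~\ref{the3} is sketched below. The formation errors and parameter values are hypothetical, with $n=1$ and $Q=\mathrm{diag}(0,1)$ assumed so that \eqref{36} gives $P$ in closed form:

```python
import numpy as np

def pare_solution(alpha, sigma, beta):
    # closed-form (36) for n = 1, Q = diag(0,1) assumed
    sa = np.sqrt(sigma * alpha)
    s = np.sqrt(1 + beta / alpha + 2 / sa)
    return np.array([[s, 1 / sa], [1 / sa, s / sa]])

def T_lower(alpha, sigma, beta, errors, eps):
    """Lower bound (71) on the termination time, with
    V(x(0)) = sum_i x_i^T P x_i over the transformed errors (i >= 2)."""
    P = pare_solution(alpha, sigma, beta)
    lam = np.linalg.eigvalsh(P)[0]           # lambda_min(P)
    V0 = sum(x @ P @ x for x in errors)
    return lam * np.log(V0 / (lam * len(errors) * eps**2))

errors = [np.array([3.0, 1.0]), np.array([-2.0, 2.0])]  # hypothetical x~_i(0)
eps = 0.1
T_base = T_lower(2.0, 0.8, 0.5, errors, eps)
T_alpha = T_lower(4.0, 0.8, 0.5, errors, eps)   # larger alpha -> smaller bound
T_sigma = T_lower(2.0, 1.2, 0.5, errors, eps)   # larger sigma -> smaller bound
T_beta = T_lower(2.0, 0.8, 1.0, errors, eps)    # larger beta  -> larger bound
```

The three comparisons reproduce the monotonicity directions stated in the theorem at this particular operating point.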
\subsection{Energy expenditure}
\label{sect:ener}
Next, the effect of the parameters $\alpha$, $\sigma$, and $\beta$ on the lower bound of the energy level $E_i(0)$ in \eqref{13} is investigated. The following assumption is made in this subsection.
\begin{assumption}\label{assump2}
Suppose that
\begin{align*}
\lambda_N \left(\frac{3}{2}\alpha+\frac{1}{2}\beta+2\sqrt{\frac{\alpha}{\sigma}}+\frac{1}{2\sigma}-\frac{\beta}{2\alpha\sigma}\right)-\frac{\beta}{2\alpha}\geq 0,
\end{align*}
where $\lambda_N$ denotes the largest eigenvalue of the Laplacian matrix.
\end{assumption}
\begin{theorem}\label{the4}
If Assumption \ref{assump2} holds, then the lower bound of the required initial energy $E_i(0)$ in \eqref{13} is an increasing function of $\alpha$ and $\beta$ and a decreasing function of $\sigma$.
\end{theorem}
\begin{proof}
For notational convenience, define the lower bound of $E_i(0)$ as $E_{i_l}$, i.e.,
\begin{align}\label{80}
&E_{i_l}=\frac{1}{2}V_\mathcal{L}(0)\bigg[\lambda_N\bigg(\alpha+\frac{1}{\sigma}\bigg)\bigg(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}}\bigg)+\beta\bigg]\nonumber\\
&\times\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}\bigg(1-e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\bigg).
\end{align}
Differentiating both sides of \eqref{80} with respect to $\alpha$ yields
\begin{align}\label{81}
\frac{\partial {E_{i_l}}}{\partial \alpha}= \frac{\partial H_1}{\partial \alpha}V_\mathcal{L}(0)H_2+\frac{\partial H_2}{\partial \alpha}V_\mathcal{L}(0)H_1,
\end{align}
where
\begin{align*}
H_1&=\bigg[\lambda_N\bigg(\alpha+\frac{1}{\sigma}\bigg)\bigg(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}}\bigg)+\beta\bigg] \sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}},\\
H_2&=\frac{1}{2}\bigg(1-e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\bigg),\\
\frac{\partial H_1}{\partial \alpha}&=\lambda_N\bigg(2\alpha+\beta+3\sqrt{\frac{\alpha}{\sigma}}+\frac{1}{\sigma}+\frac{1}{\sigma\sqrt{\sigma\alpha}}\bigg)\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}\\
&\quad-\bigg[\lambda_N\bigg(\alpha+\frac{1}{\sigma}\bigg)\bigg(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}}\bigg)+\beta\bigg] \frac{\frac{\beta}{\alpha^2}+\frac{1}{\alpha\sqrt{\alpha\sigma}}}{2\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}},\\
\frac{\partial H_2}{\partial \alpha}&=\lambda_NT\frac{\frac{1}{\sigma}(1+\frac{1}{\sqrt{\sigma\alpha}})}{4\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}}e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}.
\end{align*}
It can be seen that $\frac{\partial {E_{i_l}}}{\partial \alpha}\geq 0$ when the two terms on the right-hand side of (\ref{81}) are non-negative. Since $V_\mathcal{L}(0)\geq 0,\quad H_1\geq 0,\quad H_2\geq 0$, and $\frac{\partial H_2}{\partial \alpha}\geq 0,$
the second term of (\ref{81}) is non-negative. In the following, the sign of $\frac{\partial H_1}{\partial \alpha}$ is discussed. It follows that
\begin{align}
\frac{\partial H_1}{\partial \alpha}&\geq\lambda_N\bigg(2\alpha+\beta+3\sqrt{\frac{\alpha}{\sigma}}+\frac{1}{\sigma}+\frac{1}{\sigma\sqrt{\sigma\alpha}}\bigg)\nonumber\\
&\times\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}-\bigg[\lambda_N\bigg(\alpha+\frac{1}{\sigma}\bigg)\bigg(\alpha+\beta\nonumber\\
&+2\sqrt{\frac{\alpha}{\sigma}}\bigg)+\beta\bigg]
\frac{\frac{1}{\alpha}[(\frac{\beta}{\alpha}+\frac{1}{\sqrt{\alpha\sigma}})+1+\frac{1}{\sqrt{\alpha\sigma}}]}{2\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}},
\end{align}
which leads to
\begin{align*}
\frac{\partial H_1}{\partial \alpha} &\geq \lambda_N\bigg(2\alpha+\beta+3\sqrt{\frac{\alpha}{\sigma}}+\frac{1}{\sigma}+\frac{1}{\sigma\sqrt{\sigma\alpha}}\bigg) \sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}\\
&\quad-\bigg[\lambda_N\bigg(\alpha+\frac{1}{\sigma}\bigg)\bigg(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}}\bigg)+\beta\bigg] \frac{\frac{1}{\alpha}\big(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}\big)}{2\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}}\\
&= \bigg[\lambda_N\bigg(\frac{3}{2}\alpha+\frac{1}{2}\beta+2\sqrt{\frac{\alpha}{\sigma}}+\frac{1}{2\sigma}-\frac{\beta}{2\alpha\sigma}\bigg)-\frac{\beta}{2\alpha}\bigg]\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}.
\end{align*}
When Assumption \ref{assump2} holds, one has $ \frac{\partial H_1}{\partial \alpha}\geq 0,$
which gives $\frac{\partial {E_{i_l}}}{\partial \alpha}\geq 0.$
Thus, the lower bound of the required initial energy $E_i(0)$ is an increasing function of $\alpha$.
Similarly, it can be shown that
\begin{align}
\frac{\partial {E_{i_l}}}{\partial \sigma}=V_\mathcal{L}(0)\frac{\partial H_1}{\partial \sigma}H_2+V_\mathcal{L}(0)\frac{\partial H_2}{\partial \sigma}H_1, \nonumber
\end{align}
where
\begin{align*}
\frac{\partial H_1}{\partial \sigma}&=-\frac{\lambda_N}{\sigma^2}\bigg(\alpha+\beta+3\sqrt{\frac{\alpha}{\sigma}}+\alpha\sqrt{\alpha\sigma}\bigg)\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\sigma\alpha}}}\\
&\quad -\frac{\lambda_N(\alpha+\frac{1}{\sigma})(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}})+\beta}{2\sigma\sqrt{\alpha\sigma}\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}} \nonumber \\
& \leq 0,\\
\frac{\partial H_2}{\partial \sigma}&=-\frac{\lambda_NT\frac{\alpha}{\sigma^2}(1+\frac{\beta}{\alpha}+\frac{3}{\sqrt{\alpha\sigma}})}{4\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}}e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\nonumber \\
& \leq 0,
\end{align*}
which further leads to $\frac{\partial {E_{i_l}}}{\partial \sigma}\leq 0.$
Also, one has $\frac{\partial {E_{i_l}}}{\partial \beta}=V_\mathcal{L}(0)\frac{\partial H_1}{\partial \beta}H_2+V_\mathcal{L}(0)\frac{\partial H_2}{\partial \beta}H_1,$ and
\begin{align*}
\frac{\partial H_1}{\partial \beta}&=[\lambda_N(\alpha+\frac{1}{\sigma})+1]\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}\\
&\quad+\frac{[\lambda_N(\alpha+\frac{1}{\sigma})(\alpha+\beta+2\sqrt{\frac{\alpha}{\sigma}})+\beta]}{2\alpha\sqrt{1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}}}}\geq0,\\
\frac{\partial H_2}{\partial \beta}&=\frac{\lambda_NT}{4\sigma\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}}e^{-\lambda_N\sqrt{\frac{\alpha}{\sigma}(1+\frac{\beta}{\alpha}+\frac{2}{\sqrt{\alpha\sigma}})}T}\geq0,
\end{align*}
which leads to $\frac{\partial {E_{i_l}}}{\partial \beta}\geq 0.$
Thus, the lower bound of the required initial energy $E_i(0)$ is a decreasing function of $\sigma$ and an increasing function of $\beta$. The proof is hence completed.
\end{proof}
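A numerical spot-check of Theorem~\ref{the4} is sketched below. The values of $V_\mathcal{L}(0)$, $\lambda_N$, $T$, and the parameters are hypothetical; Assumption~\ref{assump2} is verified at the base point before the comparisons are made:

```python
import numpy as np

lam_N, V_L0, T = 3.618, 10.0, 3.0   # hypothetical problem data

def E_lower(alpha, sigma, beta):
    """Lower bound (80) on the required initial energy."""
    s2 = 1 + beta / alpha + 2 / np.sqrt(alpha * sigma)
    H1 = (lam_N * (alpha + 1 / sigma)
          * (alpha + beta + 2 * np.sqrt(alpha / sigma)) + beta) * np.sqrt(s2)
    H2 = 0.5 * (1 - np.exp(-lam_N * np.sqrt((alpha / sigma) * s2) * T))
    return V_L0 * H1 * H2

def assumption2(alpha, sigma, beta):
    """Left-hand side of Assumption 2; must be non-negative."""
    return (lam_N * (1.5 * alpha + 0.5 * beta + 2 * np.sqrt(alpha / sigma)
                     + 0.5 / sigma - beta / (2 * alpha * sigma))
            - beta / (2 * alpha))

E_base = E_lower(2.0, 0.8, 0.5)
E_alpha = E_lower(4.0, 0.8, 0.5)    # larger alpha -> larger bound
E_sigma = E_lower(2.0, 1.2, 0.5)    # larger sigma -> smaller bound
E_beta = E_lower(2.0, 0.8, 1.0)     # larger beta  -> larger bound
assumption_ok = assumption2(2.0, 0.8, 0.5) >= 0
```

The comparisons reproduce the monotonicity directions of Theorem~\ref{the4} at this operating point.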
It follows from Theorems~\ref{the3} and \ref{the4} that the lower bounds on the achievable termination time and the required initial energy are both decreasing functions of $\sigma$. Hence, one can increase $\sigma$ to reduce both the formation time and the energy consumption. However, $\sigma$ cannot be made arbitrarily large, since the condition $\sigma<\lambda_2$ must hold, as indicated by Theorem~\ref{the1}. Meanwhile, a large value of $\alpha$ speeds up the convergence of the formation algorithm, but at the cost of higher energy consumption. Finally, the resistance coefficient $\beta$ is detrimental to both the convergence time and the energy consumption: a larger $\beta$ leads to a longer convergence time and higher energy consumption.
\section{Simulation}
\label{sect:sim}
In this section, numerical examples are presented to verify the theoretical results. Let $N=5$ and $n=2$. The initial states of the agents are given by $x_1(0)=(0,4,0,0)$, $x_2(0)=(12,9,0,0)$, $x_3(0)=(5,3,0,0)$, $x_4(0)=(9,3,0,0)$, and $x_5(0)=(4,0,0,0)$. The desired relative states are set to $d_{12}=(5,-2.5,0,0)$, $d_{23}=(5,2.5,0,0)$, $d_{34}=(-5,2.5,0,0)$, $d_{14}=(-5,-2.5,0,0)$, $d_{15}=(5,0,0,0)$, and $d_{53}=(5,0,0,0)$. The initial energy levels are given by $E(0)=\{1000,1200,700,900,500\}$, the termination time is $T=3\,\mathrm{s}$, and the steady-state error tolerance is $\varepsilon=0.1$. The network topology is given in Fig.~\ref{Fig.1}, for which the eigenvalues of the Laplacian matrix $\mathcal{L}$ are $\mathrm{spec}(\mathcal{L})=\{0,1.382,1.382,3.618,3.618\}$, and the second smallest eigenvalue is $\lambda_2=1.382$.
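The stated spectrum can be reproduced from the cycle topology of Fig.~\ref{Fig.1}; a quick check, with the edge list taken from the figure:

```python
import numpy as np

# Laplacian of the 5-cycle in Fig. 1 (edges 1-2, 2-3, 3-4, 4-5, 5-1)
N = 5
Adj = np.zeros((N, N))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    Adj[i, j] = Adj[j, i] = 1.0
L = np.diag(Adj.sum(axis=1)) - Adj

eigs = np.sort(np.linalg.eigvalsh(L))
# spec(L) = {0, 1.382, 1.382, 3.618, 3.618}; lambda_2 = 1.382
lam_2, lam_N = eigs[1], eigs[-1]
```

For a cycle graph the Laplacian eigenvalues are $2-2\cos(2\pi k/N)$, which for $N=5$ gives exactly $\{0,\,3-\sqrt5,\,3-\sqrt5,\,3+\sqrt5,\,3+\sqrt5\}/1\approx\{0,1.382,1.382,3.618,3.618\}$.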
\begin{figure}[htp!]
\centering
\begin{tikzpicture}[-,auto,node distance=1.5cm, thick,main node/.style={circle,scale=.6,fill=blue!20,draw,font=\sffamily\bfseries}]
\node[main node] (1) at(2.25,2.7027) {1};
\node[main node] (2) at(3.4615,1.42665) {2};
\node[main node] (3) at(3,0) {3};
\node[main node] (4) at(1.5,0) {4};
\node[main node] (5) at(1.0365,1.42665) {5};
\path[every node/.style={font=\sffamily\small}]
(1) edge node [right] {} (2)
(2) edge node [right] {} (3)
(3) edge node [right] {} (4)
(4) edge node [right] {} (5)
(5) edge node [right] {} (1);
\end{tikzpicture}
\caption{The network topology}
\label{Fig.1}
\end{figure}
\begin{figure*}[htp!]
\centering
\setcounter{figure}{1}
\subfloat[$T_l$ vs. $\alpha$ ($\beta=0.2$)]{\label{Fig.2(a)}\includegraphics[width=0.3\textwidth]{1.eps}}
\quad
\subfloat[$T_l$ vs. $\sigma$ ($\beta=0.2$)]{\label{Fig.2(b)}\includegraphics[width=0.3\textwidth]{2.eps}}
\quad
\subfloat[$T_l$ vs. $\beta$ ($\sigma=1.3$)]{\label{Fig.2(c)}\includegraphics[width=0.3\textwidth]{3.eps}}\\
\caption{The lower bound of the achievable termination time for the multi-agent system.}
\label{Fig.2}
\end{figure*}
\begin{figure*}[htp!]
\centering
\setcounter{figure}{2}
\subfloat[$E_l$ vs. $\alpha$ ($\beta=0.2$)]{\label{Fig.3(a)}\includegraphics[width=0.3\textwidth]{4.eps}}
\quad
\subfloat[$E_l$ vs. $\sigma$ ($\beta=0.2$)]{\label{Fig.3(b)}\includegraphics[width=0.3\textwidth]{5.eps}}
\quad
\subfloat[$E_l$ vs. $\beta$ ($\sigma=1.3$)]{\label{Fig.3(c)}\includegraphics[width=0.3\textwidth]{6.eps}}\\
\caption{The lower bound of the required initial energy for all agents.}
\label{Fig.3}
\end{figure*}
Fig.~\ref{Fig.2} shows the curves of the lower bound of the achievable termination time versus the parameters $\alpha$, $\sigma$ and $\beta$. It can be observed that the lower bound of the achievable termination time is a decreasing function of both $\alpha$ and $\sigma$ and is an increasing function of $\beta$. This is consistent with the theoretical results in Section~\ref{sec:IV}.
Fig.~\ref{Fig.3} shows the curves of the lower bound of the required total initial energy, i.e., $E_l=\sum_{i=1}^NE_{i_l}$, versus the parameters $\alpha$, $\sigma$ and $\beta$. It can be observed that the lower bound of the required total initial energy is an increasing function of $\alpha$ and $\beta$ and is a decreasing function of $\sigma$. In the following simulation, $\sigma=1.3$ is employed, which is smaller than $\lambda_2$.
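The choice $\sigma=1.3<\lambda_2$ can be verified directly for the five-node ring network of Fig.~\ref{Fig.1}, whose algebraic connectivity is known in closed form. The sketch below is illustrative only and assumes unit edge weights (the paper does not state the weights in this excerpt).

```python
import math

# Graph Laplacian of the 5-node ring in Fig. 1 (edges 1-2, 2-3, 3-4, 4-5, 5-1),
# assuming unit edge weights.
N = 5
L = [[0.0] * N for _ in range(N)]
for i in range(N):
    j = (i + 1) % N
    L[i][i] += 1.0
    L[j][j] += 1.0
    L[i][j] -= 1.0
    L[j][i] -= 1.0

# Every Laplacian row sums to zero (eigenvalue 0 with the all-ones eigenvector).
assert all(abs(sum(row)) < 1e-12 for row in L)

# For a cycle graph C_N the Laplacian spectrum is 2 - 2*cos(2*pi*k/N), k = 0..N-1,
# so the algebraic connectivity lambda_2 is the smallest nonzero value (k = 1).
lambda_2 = 2.0 - 2.0 * math.cos(2.0 * math.pi / N)  # ~1.382, so sigma = 1.3 < lambda_2
```

Under these assumptions $\lambda_2\approx 1.382$, consistent with the simulation choice $\sigma=1.3$.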
\begin{figure*}[htp!]
\centering
\setcounter{figure}{3}
\subfloat[Simulation I: $\alpha=450$, $\beta=0.2$]{\label{Fig.4(a)}\includegraphics[width=0.3\textwidth]{7.eps}}
\quad
\subfloat[Simulation II: $\alpha=5$, $\beta=0.3$]{\label{Fig.4(b)}\includegraphics[width=0.3\textwidth]{8.eps}}
\quad
\subfloat[Simulation III: $\alpha=853$, $\beta=0.7$]{\label{Fig.4(c)}\includegraphics[width=0.3\textwidth]{9.eps}}\\
\caption{The final formation shape of the multi-agent system. The position of each agent is indicated by $*$.}
\label{Fig.4}
\end{figure*}
\begin{figure*}[htp!]
\centering
\setcounter{figure}{4}
\subfloat[]{\label{Fig.5(a)}\includegraphics[width=0.3\textwidth]{10.eps}}
\quad
\subfloat[]{\label{Fig.5(b)}\includegraphics[width=0.3\textwidth]{11.eps}}
\quad
\subfloat[]{\label{Fig.5(c)}\includegraphics[width=0.3\textwidth]{12.eps}}\\
\caption{The energy consumption of the multi-agent system. The red color indicates the energy consumed, whilst the blue color indicates the remaining energy level.}
\label{Fig.5}
\end{figure*}
\begin{table}[h]
\setlength{\tabcolsep}{1.45mm}
\centering
\caption{Values of the parameters}
\begin{tabular}{c|c|c|c|c}
\hline
Simulation & Value of & Value of &Value of &Formation \\
Number & $\alpha$ & $\sigma$ & $\beta$ &Time $t_f$(s) \\
\hline
\uppercase\expandafter{\romannumeral1} & 450& 1.3 & 0.2 &0.49\\
\uppercase\expandafter{\romannumeral2} & 5& 1.3& 0.3 &N/A\\
\uppercase\expandafter{\romannumeral3} & 853& 1.3 &0.7 &N/A\\
\hline
\end{tabular}
\label{tab1}
\end{table}
Table~\ref{tab1} shows three sets of values of $\alpha$, $\sigma$, and $\beta$. Only the first set satisfies the energy and time constraints, i.e., Eqs.~(\ref{12}) and (\ref{13}), simultaneously. The second set violates the termination time constraint~(\ref{12}), while the third set violates the energy constraint~(\ref{13}). Fig.~\ref{Fig.4} depicts the final formation shape of the multi-agent system in each case, and Fig.~\ref{Fig.5} shows the energy consumption of the agents during the formation task. It can be observed that in the first case, the formation is achieved and no agent exhausts its energy; in the second case, the formation task is not accomplished by the termination time $T=3$\,s; and in the third case, the energy of agent~$4$ is exhausted before the formation mission is accomplished.
\section{Conclusions}
\label{sect:concl}
This paper presents a globally optimal distributed formation control algorithm together with a comprehensive analysis of how the energy levels, the termination time, the control parameters, and the network topology affect energy- and time-constrained formation control. Explicit lower bounds on the required initial energy levels and on the achievable termination time are derived, which help determine whether a distributed formation control problem is feasible under prescribed hard constraints on the termination time and energy expenditure. In addition, several monotonicity properties of the achievable termination time and the required initial energy with respect to the control parameters are established; these properties can be exploited to facilitate the formation control design. The formulation of this paper provides a solution to LQR-based formation control under constraints on both termination time and energy. Future work will address nonlinear agent dynamics and directed network topologies.
\bibliographystyle{IEEEtran}
\section{White Paper Information}
\begin{enumerate}
\item {\bf Science Category:} Taking an Inventory of the Solar System
\item {\bf Survey Type Category:} mini survey
\item {\bf Observing Strategy Category:} An integrated program with science that hinges on the combination of pointing and detailed observing strategy
\end{enumerate}
\clearpage
\section{Scientific Motivation}\label{sec:motivation}
A foundational goal of the Large Synoptic Survey Telescope (LSST) is to map the Solar System \citep{2008arXiv0805.2366I,2009arXiv0912.0201L}.
Multiple major small body populations (described below) are key windows into understanding our Solar System's formation and evolution, but are asymmetrically distributed on the sky.
They will be partially mapped or completely missed without coverage of the Northern ecliptic, which is absent from the Wide-Fast-Deep (WFD) footprint.
Other yet-unseen asymmetric distributions are likely to exist, and will only be found by surveying the entire ecliptic. To achieve the goals in the LSST Solar System Science Collaboration's roadmap \citep{2018arXiv180201783S}, we propose a Northern Ecliptic Spur (NES) mini survey covering up to $+10^\circ$ ecliptic latitude.
\\
\\
\textbf{Main-Belt Comets:} Main-belt comets (MBCs) occupy dynamically asteroidal orbits between Mars and Jupiter, yet exhibit comet-like activity near perihelion due to sublimation of volatile ices \citep{2006Sci...312..561H}. Fewer than a dozen are currently known, and they are considered valuable probes of the volatile distribution in the Solar System's primordial disk. They comprise a subset of the active asteroids, which are dynamically asteroidal objects that exhibit activity due to sublimation, rotational destabilization, impacts, and other effects \citep[][]{2015aste.book..221J}. \cite{2018AJ....155..142K} find that almost all of the known MBCs reach perihelion (and thus become active) in the same direction as Jupiter's perihelion, clustering in our proposed NES survey region (Figure \ref{fig:mbcs}). A NES mini survey is needed to (a) determine whether this alignment of MBC perihelia is maintained as more MBCs are discovered, and if this apparent alignment is verified to be real, (b) discover smaller MBCs, which will only be bright enough to detect when near perihelion and active, and (c) monitor known main-belt asteroids for activity at times when they are most likely to become active.
\\
\\
\textbf{The Kuiper Belt's Structure and Neptune's Migration:} The detailed structure of the Kuiper belt, the swarm of planetesimals orbiting beyond Neptune, provides important constraints on early Solar System dynamical history.
The populations now in mean-motion resonance constrain Neptune's orbit during its outward migration.
The number of Kuiper belt objects (KBOs) in each resonance constrain high-eccentricity phases and/or semi-major axis jumps during Neptune's migration \citep[e.g.,][]{1995AJ....110..420M,Levison:2008,Nesvorny:2016}.
The detailed distribution within resonances is also valuable. For example, the ratio of KBOs in the leading and trailing libration islands of Neptune's 2:1 resonance can act as a speedometer for Neptune's migration \citep{Murray-Clay:2005}. KBOs are exceptionally distant, have a steep size distribution, and are thus faint: their discoveries are strongly biased toward detection at perihelion.
Detectable resonant KBOs are asymmetrically distributed on the sky, as they come to perihelion at specific geometries relative to Neptune \citep[e.g.,][]{Gladman:2012}. KBOs traverse a minute fraction of their orbit during LSST's 10-year baseline. Surveying the entire ecliptic is critical to observing enough resonant KBOs to make these tests.
In the absence of the NES, a substantial fraction of key orbital groupings within these important resonances will be completely missed.
\\
\\
\textbf{L4 Neptune Trojans:} Neptune Trojans co-orbit with Neptune around its L4 and L5 Lagrangian points, emplaced during Neptune's migration. Their orbital/physical property dependencies and L4/L5 population asymmetries are important probes both of Neptune's dynamical history and the Solar System's primordial disk. \cite{Lin:2018} discovered the first ultra-red Neptune Trojan, similar to the ultra-red surfaces seen residing within the Kuiper belt (see Figure \ref{fig:NT}). With an inclination $\sim 31^{\circ}$, this discovery may show that ultra-red surfaces only occur at high inclinations, but the origin of this surface type remains unknown.
\cite{Lin:2016} also find that the larger (H $<$ 8) Neptune Trojans have lower inclinations. Only 19 L4 and 3 L5 Neptune Trojans are known to date; more detections with LSST are needed to confirm these correlations. Figure \ref{fig:NT} plots the on-sky positions of simulated Neptune Trojans. L5 Neptune Trojans will have good coverage within the WFD survey, and the wide inclination distribution of Neptune Trojans makes part of the high-inclination L4 Trojans detectable in the WFD. However, the majority of low-inclination L4 Trojans will be missed without the NES. The NES is crucial to test the size-inclination and color-inclination dependencies and the symmetry in properties between the L4 and L5 Neptune Trojans.
\\
\\
\textbf{Planet 9 and the Origin of the Inner Oort Cloud:}
Inner Oort Cloud objects (IOCs) are on highly elongated orbits with perihelia beyond 45 au and semi-major axes greater than 250 au and less than 2000 au \citep{2015MNRAS.446.3788B}. IOCs are not significantly influenced by the known inner giant planets or outside forces, but were emplaced by some sort of dynamical interactions, possibly from past stronger outside forces, such as would happen in a dense stellar cluster or from an unseen massive planet \citep{2004ApJ...617..645B}. As shown in Figure \ref{fig:IOCs}, all of the few known IOCs come to perihelion at similar locations on the sky, which is proposed to be from a distant planet gravitationally shepherding the IOCs onto similar orbits \citep{2014Natur.507..471T,2016AJ....151...22B}. The IOCs appear to cluster in longitude of perihelia near an RA of $4\pm 3$ hrs, placing most detectable IOCs in the NES fields.
The NES is thus vital to understand this enigmatic population and determine the true clustering of the IOCs across the sky. If the orbit clustering is real, the NES can expect to find a significant number of IOCs every few hundred square degrees at 24th mag \citep{2016AJ....152..221S}, constraining the orbit of the proposed planet. In addition, \cite{2016ApJ...824L..23B} predict the planet itself would be in the NES fields and detectable by LSST. Alternatively, the current wide range of IOC formation scenarios, e.g. the size of the Sun's birth cluster, will be significantly constrained.
\\
\\
\textbf{KBO Stellar Occultations:} A NES mini survey will recover most of the $\sim$1800 known northern KBOs, providing precise 10-year observation arcs calibrated to Gaia astrometry \citep{2018A&A...616A...2L}. It will be possible to accurately predict stellar occultations of these bodies. Stellar occultations enable accurate measures of sizes, albedos, and binarity, and also probe for rings, atmospheres, and topographic features \citep[e.g.][]{2013ApJ...773...26B,2014Natur.508...72B,2017AJ....154...22D,2017Natur.550..219O}. No planned survey will reach the same depth as LSST in the NES: the occultation science gains are unique to the NES mini survey.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.89\columnwidth]{fig_mbc_alignment_c}
\caption{\label{fig:mbcs}\textbf{(a)} Directions of the longitudes of perihelion of outer-main-belt (OMB) MBCs whose activity is attributed to sublimation or a combination of sublimation and rotational destabilization. Adapted from \citet{2018AJ....155..142K}. \textbf{(b)} Sky positions of sublimation-driven OMB MBCs, and sublimation and rotation-driven OMB MBCs when at $\nu=30^{\circ}$ (i.e., when peak activity is expected) over the course of the LSST survey.}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.87\columnwidth]{NT}
\caption{ \label{fig:NT}\textbf{Left:} The color-inclination relation of Neptune Trojans. The only known ultra-red Neptune Trojan has the second highest inclination, $\sim 31^{\circ}$ (red circle). \textbf{Right:} The on-sky positions of Neptune Trojans in 2022 (grey) and 2032 (yellow).}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.44\columnwidth]{ETNOplan2018lsst}
\caption{\label{fig:IOCs} Alignment of known inner Oort cloud orbits, with perihelia beyond 45 au and semi-major axes $250 < a < 2000$~au. Orbits in red have perihelia greater than 60 au. All of these extreme objects come to perihelion within a few hours of RA~4~hr.
}
\end{center}
\end{figure}
\section{Technical Description}
\subsection{High-level description}
We propose a mini survey with observations covering the ecliptic plane beyond the region covered within the main Wide-Fast-Deep (WFD) Survey footprint, in order to fully sample small body populations throughout the Solar System. Our proposed NES contains the missing 50\% of the total area of the ecliptic on the sky that is not contained within the WFD survey -- and holds about the same fraction of Solar System small bodies at any time. In the absence of the NES, a substantial fraction of objects near the ecliptic in the Northern hemisphere would be completely missed. We are requesting 255 visits per field over ten years in $grz$ across our mini survey region (the `Northern Ecliptic Spur' region, NES)
reaching from the northernmost limit of the WFD up to an ecliptic latitude of $+10^{\circ}$ (see Figure~\ref{fig:footprint}). We assume LSST's discovery and attribution performance will be as described in \cite{2018Icar..303..181J}. The cadence of observations is important, in order to enable linking and tracking;
we are requesting 6 visits per field per month, acquired in nightly pairs, in each of 5 months during 7 `Discovery' years,
and 15 visits per field per year (split over 2-3 months) in 3 `Tracking' years;
this is similar to but dramatically less densely sampled than the WFD baseline strategy. Section \ref{sec:distvisits} describes in detail our preference for how these observations should be divided over the 10-year LSST operational baseline. The details in this section, including the number of fields and the fraction of time required, are based on analysis of a series of simulations created with the LSST Operations Simulator \citep[{\tt OpSim},][]{2014SPIE.9150E..15D} for the call for cadence optimization white papers.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\columnwidth]{NES_requested_coverage}
\caption{Light blue represents the pointings requested in our proposed Northern Ecliptic Survey. The solid blue line represents the ecliptic. The dashed blue lines represent $\pm$ 10 degrees ecliptic latitude. The solid green line plots the center of the galactic plane. The dashed green lines reflect $\pm$ 10 degrees galactic latitude.}
\label{fig:footprint}
\end{center}
\end{figure}
\clearpage
\subsection{Footprint -- pointings, regions and/or constraints}
In order to complete coverage of the ecliptic plane, we propose a mini survey footprint ranging from the
northernmost limit of the WFD ($\approx \delta = 0^\circ$) up to an ecliptic latitude of $+10^{\circ}$
(see Figure~\ref{fig:footprint}). This corresponds to 604 distinct LSST fields, using the pointing tessellation
provided in the current simulations. This sky coverage is the best compromise between the northern declination limit of the telescope and the inclination distributions of the Solar System's small body reservoirs. Although orbits of all inclinations cross the ecliptic, bodies on higher-inclination orbits spend most of their time away from the ecliptic plane. Additionally, some of the KBO resonant populations are perturbed by Neptune such that they come to closest approach off of the ecliptic. By covering fields down to 0 degrees declination as the lower extent of the NES, we ensure adequate coverage of all our key Northern Solar System populations.
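As an illustration of the footprint limits, the declination of a field center at given ecliptic coordinates follows from the standard spherical transformation. The sketch below is a simplified check (it assumes a J2000 mean obliquity and ignores the actual camera tessellation) of whether a pointing satisfies the $\delta \ge 0^{\circ}$ and $\beta \le +10^{\circ}$ bounds described above.

```python
import math

# Mean obliquity of the ecliptic; the J2000 value below is an assumption,
# adequate for a footprint illustration but not for precise astrometry.
EPS = math.radians(23.4393)

def ecliptic_to_dec(lam_deg, beta_deg):
    """Declination (deg) of a point at ecliptic longitude/latitude (deg)."""
    lam, beta = math.radians(lam_deg), math.radians(beta_deg)
    sin_dec = (math.sin(beta) * math.cos(EPS)
               + math.cos(beta) * math.sin(EPS) * math.sin(lam))
    return math.degrees(math.asin(sin_dec))

def in_nes(lam_deg, beta_deg):
    """NES membership test: field center at dec >= 0 deg and beta <= +10 deg."""
    return ecliptic_to_dec(lam_deg, beta_deg) >= 0.0 and beta_deg <= 10.0
```

For example, a field on the ecliptic at longitude $90^{\circ}$ sits at $\delta \approx +23.4^{\circ}$ and qualifies, while the same latitude at longitude $270^{\circ}$ falls below $\delta = 0^{\circ}$ and does not.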
\subsection{Image quality}
The image quality used in the NES images should be similar to what is used in the WFD. There are no special constraints on image quality beyond what has already been set in the current operation simulations of NES fields and what is needed to achieve our desired individual image depths. In particular, the image quality (seeing) constraints for northern ecliptic observations in the baseline2018a and kraken\_2026 {\tt OpSim} runs are sufficient for our needs.
\subsection{Individual image depth and/or sky brightness}
The individual image depth is important (and thus its implied constraints on image quality and sky brightness),
as moving objects must be detectable in individual images. As the goal is to discover small bodies at all
locations in the ecliptic plane without introducing significant bias, the individual image depths in this mini survey region should be similar to those in the WFD footprint. There are no hard cutoffs, but there is an overall preference for visits to be as deep as possible without sacrificing sky coverage. Past wide-field Solar System surveys have reached a limiting magnitude of $\sim$22 in $R$ \citep[e.g.,][]{2010ApJ...720.1691S,2016arXiv161205560C,2018ApJ...855L...6H}. Thus, for LSST to make a significant contribution to Solar System science, the 5-$\sigma$ limiting magnitude per exposure in $r$ and $g$ must be fainter than 23rd magnitude with exposure times of 30s or more. We propose the same total exposure time per visit as the WFD (30s per visit), which currently meets our detection goals. In particular, individual image depths in the northern ecliptic observations in the baseline2018a and kraken\_2026 {\tt OpSim} runs are sufficient for our needs. Specific details on the Solar System Object Differential Completeness Goals desired for both the WFD footprint and this proposed NES mini survey are given in the Community Observing Strategy Evaluation Paper \citep[COSEP;][]{2017arXiv170804058L}. Additionally, we note that sky brightness can be a particular concern for this region, as the distance to the moon will tend to be small, but it is not a constraint in and of itself.
\subsection{Co-added image depth and/or total number of visits}
There are no constraints on the co-added image depth.
We do have constraints on the total number of visits, due to cadence preferences and requirements for identifying moving objects and characterizing their orbits and physical properties.
These are discussed below in more detail, but result in a total number of visits on the order
of 255 per field.
\subsection{Number of visits within a night}
At least two visits per night to a field are required to detect and identify moving Solar System objects. At least three separate nights are required to identify and link newly discovered moving objects (see the high-level requirements for LSST project's Moving Object Pipeline System (MOPS) defined
in LSE-30\footnote{\url{http://ls.st/LSE-30}} and LDM-156\footnote{\url{http://ls.st/LDM-156}}).
To guarantee discovery of Solar System bodies at 95$\%$ confidence by MOPS, three tracklets (a pair of images
in the same night, acquired no more than 90 minutes apart) acquired within 15 days are needed. The most distant objects in the Inner Oort cloud region beyond 200 au do not move appreciably within a single night, but with two visits per night for each field we can use the two epochs to weed out the faster/closer-moving Solar System bodies in order to optimize and speed up the search algorithms. Thus, we propose at least two visits per night to each field when possible.
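The discovery requirement above (nightly pairs no more than 90 minutes apart, on at least three nights within 15 days) can be sketched as a simple check on a list of detection times. This is an illustrative simplification, not the actual MOPS implementation: timestamps are MJDs and a "night" is identified by the integer day.

```python
def is_discoverable(mjds, pair_gap_days=90 / 1440.0, window_days=15):
    """Sketch of the 3-tracklet discovery criterion: two same-night visits
    no more than 90 minutes apart form a tracklet; an object counts as
    discoverable if tracklets on 3 distinct nights fall within 15 days.
    (Nights are approximated here by the integer part of the MJD.)"""
    mjds = sorted(mjds)
    nights = []
    i = 0
    while i + 1 < len(mjds):
        if mjds[i + 1] - mjds[i] <= pair_gap_days:
            nights.append(int(mjds[i]))  # night on which the tracklet was made
            i += 2
        else:
            i += 1
    nights = sorted(set(nights))
    # Look for any 3 distinct tracklet nights spanning <= window_days.
    return any(nights[k + 2] - nights[k] <= window_days
               for k in range(len(nights) - 2))
```

For instance, pairs on three nights spread over 12 days satisfy the criterion, while pairs on three nights spread over 40 days do not.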
\subsection{Distribution of visits over time}
\label{sec:distvisits}
To balance our key science cases for the NES we spread the `Discovery' and `Tracking' years as described in Table \ref{tab:obs_summary}, below. This cadence allows orbital recovery for the different populations and maximizes temporal coverage for main-belt comet and asteroid collision discovery. An additional requirement during the `Discovery' years is an observation schedule that supports MOPS discovery of main-belt objects. MOPS requires a pair of visits per night, on at least three separate nights, within 15 days in order to identify and link minor planets (resulting in a 95$\%$ confidence of discovery; LSE-30; LDM-156). Each pair consists of two visits within the same night, no more than 90 minutes apart. Once the discovery criteria are satisfied for a given field in the `Discovery' year, the remaining observations can be scheduled more flexibly. During `Tracking' years, the measured orbits of previously detected asteroids found in `Discovery' years will be used to predict the locations of these bodies in the tracking observations to check on cometary activity.
\begin{table}[h]
\centering
\begin{tabular}{|c|l|l|}
\hline
Year & Observation Type & Summary \\ \hline
1 & Discovery & 6 observations per field per month for 5 months \\ \hline
2 & Discovery & 6 observations per field per month for 5 months \\ \hline
3 & Discovery & 6 observations per field per month for 5 months \\ \hline
4 & Tracking & 15 observations per field divided over 2-3 months \\ \hline
5 & Discovery & 6 observations per field per month for 5 months \\ \hline
6 & Discovery & 6 observations per field per month for 5 months \\ \hline
7 & Tracking & 15 observations per field divided over 2-3 months \\ \hline
8 & Tracking & 15 observations per field divided over 2-3 months \\ \hline
9 & Discovery & 6 observations per field per month for 5 months \\ \hline
10 & Discovery & 6 observations per field per month for 5 months \\ \hline
\end{tabular}
\caption{{\bf Proposed NES Year-by-Year Summary} }
\label{tab:obs_summary}
\end{table}
The MBCs discovered in the past decade are likely the brightest and youngest members of the population. With LSST's superior sensitivity, we will search for activity around all asteroids, with additional scrutiny for those in the outer main belt. For known asteroids, we only require a few observations of MBC-like asteroids near perihelion, rather than the more strict circumstances needed for discovery. This is readily satisfied in both our `Discovery' and `Tracking' years as described above. The `Discovery' years are also crucial for the MBCs. Cometary activity enhances the brightness, and therefore the discoverability, of MBCs. However, detectable activity for the currently known MBCs has been limited to brief periods near perihelion, typically only a few months long but sometimes even shorter \citep{2015Icar..248..289H}. Being outer main-belt objects, MBCs have orbital periods of $\sim$6 years. Thus, we request 7 `Discovery' years in the NES to increase the discoverability of MBCs during their brief active periods. We simulated LSST's discoverability of asteroids with MBC-like orbits based on \cite{2018AJ....155..142K}, and found the rate of discovery during periods of activity is up to 35\% without the NES, and up to 55\% with the NES, corresponding to an enhancement of $\sim$1.6. The distribution of observations during `Tracking' years is flexible, but we nominally request a cadence that maximizes temporal coverage to search for MBC activity over the 5 months an NES field is typically observable. We request that `Tracking' year observations consist, ideally, of nightly pairs of visits made on seven different nights spread out over 5 months centered around opposition for each field, with no specific spacing requirements other than the first and last pairs of visits being spaced by at least two months. This approach would provide somewhat denser and more uniform time sampling of known objects to search for cometary and other activity.
The proposed multi-year observations of the NES footprint are also designed to characterize the orbits of Outer Solar System bodies both in the Kuiper belt and the Inner Oort cloud. At least two years (two oppositions) of observations of the NES are required to test the Planet 9 hypothesis and probe the origin of objects like Sedna in the Inner Oort Cloud region. With a single opposition of observations, we will be unable to distinguish the distant Sedna-like Inner Oort cloud objects from our detections residing within the Kuiper belt: a high-perihelion Inner Oort cloud object and a typical scattered disk KBO near aphelion both provide reasonable Keplerian fits to the observations. Follow-up recovery observations one month or a few months later are crucial in reducing the one-year positional uncertainties, but the two families of orbits only diverge sufficiently after a year. The key to understanding the formation of the Sedna-like Inner Oort cloud objects is through accurate orbits. With firmly characterized orbits that identify whether the distant objects beyond 50 au are at their closest point in their orbit, and therefore detached from Neptune, we can begin to explore the dynamics and structure of the Sedna region. Although two oppositions can separate regular scattered disk KBOs from Sedna-like detached Inner Oort cloud orbits, the semi-major axis uncertainty is sufficiently large that the further oppositions requested in our NES survey are needed to fully characterize these distant orbits.
Observations of the NES at later years are also critical to reducing orbit fit uncertainties enough to allow secure identification of objects in mean motion resonances with Neptune. The distribution of Neptune's resonant population provides a critical test of early Solar System dynamical histories, and securely identifying resonant orbits typically requires 3-$\sigma$ semi-major axis uncertainties below $\sim$1\%. This precision would likely be most easily achieved with moderately dense sampling of the NES over two oppositions (multiple nights for each field at opposition and $\sim2$ months before and after opposition) early in the survey and sparsely sampled follow-up at later oppositions in the survey. Shifting the scheduling of the higher-density sampling years and low-density sampling years during the survey should not significantly affect orbit quality because the most important factor is the overall baseline of observations.
Our Neptune Trojan science case also benefits from the additional `Discovery' years. As shown in Figure \ref{fig:NT}, over the 10-year baseline, the core of the L4 Neptune Trojan cloud moves significantly on the sky. In Years 1-3, the bulk of L4 Neptune Trojans will be further than 10 degrees from the galactic plane, making them easier to detect. In later years, they will move within 10 degrees of the galactic plane; the LSST image subtraction pipeline will have to deal with significant blending in these crowded fields, and the detection efficiency of MOPS will likely be lower in these regions of the sky. The outer edges of the cloud that were in the galactic plane in 2022 will move out by 2032, and may have a better chance of discovery in less crowded stellar fields. To further aid the discoverability of L4 Neptune Trojans, we designed our proposed NES survey to cluster `Discovery' years in the first few years and in the last 2 years of LSST baseline operations.
While the exact ordering of our `Discovery' and `Tracking' years for the NES within LSST is flexible for achieving our science goals, we note that having three `Discovery' years at the beginning of LSST will help maximize the use of ground-based facilities in the Northern hemisphere to enhance LSST Solar System science. Having early `Discovery' years will ensure that there are Northern Solar System targets for follow-up observations early in the 10 year LSST baseline. For example, newly discovered MBCs could be targeted for spectroscopic follow-up. Expected IOCs could also be targeted for additional astrometric observations to aid in orbit determination. Preliminary photometric measurements for detected KBOs could also be used to select scientifically compelling targets for follow-up studies to obtain higher precision lightcurve or color measurements.
\subsection{Filter choice}
\label{sec:filter_selection}
Solar System minor planets, visible in reflected sunlight, are brightest at mid-optical wavelengths. Inner Solar System objects will move sufficiently over LSST's 10-year operational baseline to be imaged at some point in all 5 filters in the WFD footprint. Over a decade, Outer Solar System bodies will not move much in their orbits, such that nearly all of the KBOs imaged in year 1 of the NES survey will remain within the NES footprint in year 10. Thus, our filter choice maximizes the return on Outer Solar System science. Since the NES fields would receive fewer visits than WFD fields, we prioritize the bulk of observations in $r$, with some $g$- and $z$-band imagery for surface color/composition studies.
As LSST does not have a wide-band $gri$-type filter as used, e.g., by Pan-STARRS \citep{2016arXiv161205560C}, optimum detection efficiency will be in the $r$ band; we thus prioritize observations in this filter, requesting 60$\%$ of the total observations in each field be taken in $r$. Spectral slope from solar-neutral to solar-reddened can be minimally parameterized with the use of $g$.
However, red surfaces will be very faint in $g$, thus a comparatively substantial 25\% of time must be allocated to obtain sufficient SNR to ensure moderate-quality and (potentially) non-simultaneous $g-r$ colors for LSST's detections.
\citet{Pike:2017} showed that the cold classical population of TNOs displays a distinct color in the color space of the filters $g,r,z$.
We thus add $z$ as our third filter for the remaining 15\% of time, in preference to $i$, where no such population distinctiveness is seen.
Most minor planets are very faint reflectors in $u$, so we (reluctantly) omit it from our mini survey.
Observations within a single night do not necessarily need to be in the same filter; however, detection efficiency will be constrained by the shallower limiting magnitude of the pair. Outer Solar System objects have much redder surfaces compared to inner Solar System bodies: $(g-r)$ colors range up to 1 for the very red surfaces in the dynamically excited and cold classical Kuiper belt. To maximize the detection of the reddest KBOs and IOCs, we ideally request that, when possible, the nightly pairs be taken in $r$. When this is not possible, we request that the second nightly visit be in $g$, since Solar System objects will be faintest in $z$ band. We request that the various $g,r,z$ nightly filter pair combinations for an NES mini survey with the cadence proposed here be simulated, to better quantify the impact on discovery metrics with an improved KBO SED (spectral energy distribution) in the {\tt OpSim} moving object package.
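The requested 60/25/15 per-cent split in $r$, $g$, $z$ can be turned into integer per-field visit counts out of the 255-visit budget. The largest-remainder rounding used in the sketch below is our assumption; the text only specifies the percentages.

```python
def filter_allocation(total_visits=255, shares=None):
    """Split a per-field visit budget across filters by fractional shares,
    using largest-remainder rounding so the integer counts sum to the total.
    The 60/25/15 r:g:z split is the allocation requested in the text."""
    if shares is None:
        shares = {"r": 0.60, "g": 0.25, "z": 0.15}
    raw = {f: total_visits * s for f, s in shares.items()}
    alloc = {f: int(v) for f, v in raw.items()}
    leftover = total_visits - sum(alloc.values())
    # Hand the remaining visits to the filters with the largest fractional parts.
    for f in sorted(raw, key=lambda k: raw[k] - alloc[k], reverse=True):
        if leftover == 0:
            break
        alloc[f] += 1
        leftover -= 1
    return alloc
```

Under this scheme the 255-visit budget works out to 153 visits in $r$, 64 in $g$, and 38 in $z$.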
\subsection{Exposure constraints}
The goal of the NES is to detect sufficient numbers of Northern Solar System objects to characterize the asymmetric distributions of MBCs, resonant KBOs, Neptune Trojans, and IOCs. Ideally, the proposed NES mini survey would have the same or better sensitivity to Solar System objects as the Southern Ecliptic coverage within the WFD footprint, so we aim for a similar detection threshold in the NES. There are trade-offs between exposure time and coverage (discussed further below), but our nominal plan uses the same 30s exposure time in the NES as in the WFD survey. Though longer exposures go deeper, the loss in coverage is probably detrimental to our coverage goals, which are crucial since these populations are relatively sparse on the sky. Future work could employ metric-driven optimization to investigate these details.
\subsection{Other constraints}
None noted.
\subsection{Estimated time requirement}
\begin{itemize}
\item In total 604 fields with field centers at declination $\geq 0$ and up to an ecliptic latitude of +10 degrees.
\item 7 `Discovery' years with 30 visits per field per year (6 observations per field per month for 5 months)
\item 3 `Tracking' years with 15 observations per field spread out over 2--3 months
\item In total 255 visits for each field over the ten years.
\item Time required per visit is (30 second exposure time + 3 seconds slew/settle + 2 seconds shutter open/close) = 35 seconds for one snap (see Section~\ref{sec:snaps}). It will be (2 $\times$ 15 second exposure time + 3 seconds slew/settle + 2 $\times$ 2 seconds shutter open/close) = 37 seconds for two snaps.
\end{itemize}
The total time request will be (604 fields $\times$ 255 visits $\times$ 35 seconds) = 5,390,700 seconds = 1,497.4 hours for one snap, or (604 fields $\times$ 255 visits $\times$ 37 seconds) = 5,698,740 seconds = 1,583.0 hours for two snaps. This is approximately equivalent to 187 nights total over the lifetime of LSST, or about 5-6$\%$ of the total available time. Note that the NES will request 176.2/186.2 hours (one/two snaps) per `Discovery' year and 88.1/93.1 hours per `Tracking' year.
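The time-budget arithmetic above can be reproduced with a short script (illustrative only; the exposure, slew/settle, and shutter values are the nominal ones quoted above):

```python
# Sketch of the NES time-budget arithmetic quoted above (values from this paper).
FIELDS = 604
DISCOVERY_YEARS, VISITS_PER_DISCOVERY_YEAR = 7, 30   # 6 visits/month for 5 months
TRACKING_YEARS, VISITS_PER_TRACKING_YEAR = 3, 15

def visit_time(n_snaps, exposure=30.0, slew=3.0, shutter=2.0):
    """Seconds per visit: total exposure plus one slew plus one shutter cycle per snap."""
    return exposure + slew + n_snaps * shutter

total_visits = (DISCOVERY_YEARS * VISITS_PER_DISCOVERY_YEAR
                + TRACKING_YEARS * VISITS_PER_TRACKING_YEAR)   # 255 visits per field

for snaps in (1, 2):
    seconds = FIELDS * total_visits * visit_time(snaps)
    print(f"{snaps} snap(s): {seconds:,.0f} s = {seconds / 3600:.1f} h")
# 1 snap(s): 5,390,700 s = 1497.4 h
# 2 snap(s): 5,698,740 s = 1583.0 h
```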
\begin{table}[ht]
\centering
\begin{tabular}{l|l}
\toprule
Property & Importance \\
\hline
Image quality & 1 \\
Sky brightness & 3 \\
Individual image depth & 1 \\
Co-added image depth & 3 \\
Number of exposures in a visit & 3 \\
Number of visits (in a night) & 1 \\
Total number of visits & 2 \\
Time between visits (in a night) & 1 \\
Time between visits (between nights) & 1 \\
Long-term gaps between visits & 2\\
Separation between First and Final observation & 2 \\
Filter Selection & 1 \\
Number of Snaps in a Visit & 3 \\
\hline
\end{tabular}
\caption{{\bf Constraint Rankings:} Summary of the relative importance of various survey strategy constraints, ranked from 1=very important, 2=somewhat important, 3=not important.}
\label{tab:obs_constraints}
\end{table}
\subsection{Technical trades}
\subsubsection{What is the effect of a trade-off between your requested survey footprint (area) and requested co-added depth or number of visits?}
Trading survey area for co-added depth/number of visits will lead to an increasingly biased sample of Solar System object discoveries or a decreased number of discoveries. Decreasing the NES survey area will likely decrease our longitudinal or inclination coverage, adversely affecting the observed distributions of objects in the Kuiper belt resonances, Neptune Trojan clouds, MBC reservoirs, and the Inner Oort Cloud. Increased co-added depth or number of visits per field does not make up for the missing orbital phase space. In addition, discovery of moving targets requires multiple observations of the same field. Reducing the number of visits per field in order to increase areal coverage will affect the moving object pipeline's ability to discover unknown moving objects.
\subsubsection{If not requesting a specific timing of visits, what is the effect of a trade-off between the uniformity of observations and the frequency of observations in time? e.g. a `rolling cadence' increases the frequency of visits during a short time period at the cost of fewer visits the rest of the time, making the overall sampling less uniform.}
Our science goals can be carried out with the NES observed at a higher cadence for 7 `Discovery' years, for discovery and orbit characterization, with 3 years of sparser monitoring observations in between. There is some flexibility in when the `Tracking' years, with their lower number of observations, are scheduled, as described in the Sections above. We also have some flexibility between the frequency and uniformity of observations, as long as the cadence of the visits during `Discovery' years is such that MOPS is able to successfully run and detect moving objects. MOPS needs three tracklets (each a pair of observations in the same night made no more than 90 minutes apart) over 15 days to guarantee minor planet discovery with 95$\%$ confidence (see LSE-30 and LDM-156). Once the discovery criteria are satisfied, additional observations should be scheduled in nightly pairs when possible, but the frequency of the observations from year to year can vary.
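As a concrete illustration, the discovery criterion described above can be sketched as follows (a simplified stand-in, not the actual MOPS implementation):

```python
# Illustrative check of the MOPS-style discovery criterion quoted above:
# three nightly pairs (tracklets), each pair <= 90 minutes apart, within 15 days.
from itertools import combinations

def tracklet_nights(times_mjd, pair_max_min=90.0):
    """Nights (integer MJD) containing at least one pair of visits <= pair_max_min apart."""
    by_night = {}
    for t in sorted(times_mjd):
        by_night.setdefault(int(t), []).append(t)
    return sorted(night for night, ts in by_night.items()
                  if any((b - a) * 1440.0 <= pair_max_min for a, b in zip(ts, ts[1:])))

def satisfies_discovery(times_mjd, window_days=15.0):
    """True if three tracklet nights fall within a window_days span."""
    nights = tracklet_nights(times_mjd)
    return any(c[-1] - c[0] <= window_days for c in combinations(nights, 3))

# Three nightly pairs (each < 90 min apart) spanning 12 days -> discoverable.
assert satisfies_discovery([100.0, 100.04, 105.0, 105.05, 112.0, 112.03])
```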
\subsubsection{What is the effect of a trade-off on the exposure time and number of visits (e.g., increasing the individual image depth but decreasing the overall number of visits)?}
Increasing the individual image depth would increase the 5-$\sigma$ limiting magnitude and hence the number of objects detected, but we require nightly pairs of observations on at least three nights for MOPS identification in our `Discovery' years. The additional visits proposed are to characterize and confirm the orbits of the new NES discoveries. At least two years of observations are needed to fully secure the outer Solar System orbits. Decreasing the number of observations will also lower the opportunities to detect main-belt comets. MBCs are mainly expected to be visible when active (i.e., brighter), so decreasing the number of visits will decrease the chances of finding them.
\subsubsection{What is the effect of a trade-off between uniformity in number of visits and co-added depth? }
All of our science goals are constrained not by co-added image depth but by the 5-$\sigma$ detection depth of individual LSST frames. There is no significant gain in trading off the uniformity in number of visits for increased co-added depth.
\subsubsection{Is there any benefit to real-time exposure time optimization to obtain nearly constant single-visit limiting depth?}
There would be a small benefit from real-time exposure time optimization to obtain a nearly constant single-visit limiting magnitude, as the 5-$\sigma$ limiting depth of exposures drives which Solar System objects will be detectable in the proposed NES observations. Given the high airmass ($>$2) at which these observations are normally scheduled, the expected benefit from exposure time optimization will be small (see \url{http://astro-lsst-01.astro.washington.edu:8080/allMetricResults?runId=2}).
\subsubsection{Are there any other potential trade-offs to consider when attempting to balance this proposal with others which may have similar but slightly different requests?\label{sec:snaps}}
\noindent
\textbf{Snaps:} \\
Aside from questions of image quality and cosmic ray rejection, which we do not consider here, the SSSC finds very little benefit in having two 15-sec snaps co-added to form one 30 sec visit. Rather, the gain in survey efficiency from eliminating the time lost for the shutter throw and CCD (Charge-Coupled Device) read-out between snaps would be better used for additional observing time during the survey. Furthermore, combining the CCD read with the slew between visits allows for slower read times and thus reduced read noise.
There are two possible benefits to SS science from separate snaps:
\begin{enumerate}
\item Snaps would allow us to ascertain the direction of motion of trailed Solar System detections, which could potentially ease linking to companion trails in the transient stream. However, if there is a companion, it must be near one of two obvious positions, with a known length and orientation. The companion can be found by searching both directions, leading to only a twofold increase in computational effort (for a relatively small number of objects with significant trailing).
\item A few small near-Earth asteroids rotate rapidly enough that they have photometric variation on the time scale of 15 s, so rotation information could be extracted from two snaps. However, this is only for a small fraction of small objects, and thus represents a tiny fraction of the small body object catalog. Moreover, it is not clear whether the photometric variation from snaps would be sufficient to constrain the rotation period of so-called super-fast rotators.
\end{enumerate}
Based on the priorities in the SSSC's Science Roadmap \citep{2018arXiv180201783S}, we consider these benefits minor in comparison to gaining an additional 1-2$\times$10$^5$ survey observations, which would increase the number of Outer Solar System detections in key populations. Moving the Wide-Fast-Deep survey to one 30s snap per observation would return about 7 percent of the operations time to on-sky observations. The Solar System metrics described in the COSEP \citep{2017arXiv170804058L} and in this white paper show no negative impact from moving to one 30s snap. Thus, we advocate for the elimination of snaps in order to accommodate observing the Northern Ecliptic Spur and other proposed mini surveys and deep drilling fields. \newline
\\
\textbf{Filter Selection:} \\
As described in Section \ref{sec:filter_selection}, $g$-, $r$-, and $z$-band observations best suit our science cases, with the majority of the proposed observations taken in $r$-band. If the NES is restricted to single-band observations, our minimum discovery needs require $r$-band. We note that observing the NES in filters without $r$ or $g$ observations would result in significant losses of Solar System detections, based on the discovery metrics in the COSEP \citep{2017arXiv170804058L} and the metrics described in Section \ref{sec:metrics}. \newline
\\
\textbf{Extended WFD Footprint:} \\
We have proposed the minimum number of observations and filters that we believe will achieve our key science goals. Increasing the number of filters or the number of visits for all or part of the NES in order to accommodate other science cases, such as an extended WFD footprint, will not negatively impact our science goals, as long as the majority of the 604 fields in the NES region are surveyed. Additional visits would enable better characterization of rotational variability and provide increased sampling in the search for MBCs. \newline
\\
\textbf{Distribution of Observations Within Tracking Years:} \\
We have some flexibility in scheduling observations within `Tracking' years if there are strong tensions with other proposed observing needs. Instead of evenly distributing nightly pairs over the months the field is observable, for example, `Tracking' years could consist of three pairs of visits to a field, using the same cadence as in `Discovery' years, in the month when a field is at opposition, plus one pair of visits per month in the 2 months before and 2 months after opposition. This approach would preserve some minimal ability to discover newly active objects (which may have been too faint, due to the absence of activity, to be detectable in previous years) during `Tracking' years, while also continuing to monitor known objects throughout their available observing windows. \newline
\\
\textbf{A Big Sky Approach to Cadence Diplomacy:} \\
We note that our requested NES footprint is also part of the extended footprint proposed in the ``A Big Sky Approach to Cadence Diplomacy'' white paper (Olsen et al.). Our proposed NES mini survey is compatible with the observing scheme outlined in their proposal. We propose that each NES field receive approximately 255 visits over the baseline survey, very similar to the number of visits proposed for the NES and other extended footprint regions by Olsen et al.
\section{Performance Evaluation}
\label{sec:metrics}
We quantify the impact of including and excluding the NES in LSST survey operations. We simulate LSST observations for representative orbital distributions\footnote{Orbital distributions used in our assessments are available via a public GitHub repo \url{https://github.com/lsst-sssc/SSSC_LSST_Cadence_Optimization_Orbit_Test_Populations}.} based on current observational constraints, using existing {\tt OpSim} runs and Metric Analysis Framework \citep[MAF; ][]{2014SPIE.9149E..0BJ} moving object tools. We find that for our metrics astro-lsst-01$\_$2039, which does not include the NES region, underperforms.
\textbf{Main-Belt Comets:} For outer MBCs on eccentric orbits, activity is confined to time periods near perihelion ($r_h\lesssim2.9$~au).
Our MBC test population has properties consistent with the currently known population
\citep{2018AJ....155..142K}.
Using this test population, we propose the following metric to test the discoverability of MBCs:
(1) We take the ad hoc assumption that activity discovery requirements mirror MOPS discovery requirements (6 visits in 15 days; {\tt DiscoveryMetric}); (2) We require the discovery circumstances to occur in a 1-month period after perihelion \citep{2015Icar..248..289H} (custom metric). The $\sim$10 known MBCs are active near perihelion, with a bias to post-perihelion epochs ($-30$ to $+60$ days about perihelion seems typical). We base our metric on the period 0 to $+$30 days to allow for some additional diversity in the population. Our metric tested with select available {\tt OpSim} runs is shown in Fig.~\ref{fig:metrics} (part c). In summary, excluding the NES reduces the number of MBC discoveries due to the perihelion alignment noted in Section 2. \citet{2015Icar..248..289H} estimated an occurrence rate of about 60 MBCs per 10$^6$ outer main belt asteroids. Taking this rate, the size-frequency distribution of asteroids \citep{2002aste.book...71J}, and the current set of \texttt{OpSim} runs, we estimate the number of LSST MBC discoveries to be 10--15 without the NES, and 20--25 with the NES. The increased number allows us to better estimate how MBC properties vary,
and assess the lifetime of water ice in $\sim200$~m objects in the outer belt.
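A minimal sketch of this two-step metric (a hypothetical simplification; the real test uses MAF's {\tt DiscoveryMetric} on simulated ephemerides):

```python
# Illustrative stand-in for the MBC metric described above: restrict visits to
# the assumed active window (0 to +30 days after perihelion), then require the
# MOPS-like 6-visits-in-15-days discovery condition within that window.
def active_visits(times_mjd, t_peri_mjd, window_days=30.0):
    """Visits falling 0 to +30 days after perihelion, when activity is assumed."""
    return sorted(t for t in times_mjd if 0.0 <= t - t_peri_mjd <= window_days)

def discoverable_as_mbc(times_mjd, t_peri_mjd):
    """Crude discovery requirement: >= 6 active-window visits spanning <= 15 days."""
    ts = active_visits(times_mjd, t_peri_mjd)
    return any(ts[i + 5] - ts[i] <= 15.0 for i in range(len(ts) - 5))

# Six visits 1-11 days after perihelion satisfy the criterion.
assert discoverable_as_mbc([501, 503, 505, 507, 509, 511], t_peri_mjd=500)
```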
\textbf{Resonant KBOs and L4 Neptune Trojans:}
The effect of including the NES on the science return for the outer Solar System can be quantified in terms of two requirements: 1) we need a sufficient number of detections, with accurately determined orbits, across many different dynamical populations to constrain models of the early Solar System; 2) we need sufficiently accurate orbits to classify the detections into these different populations. For each {\tt OpSim} run, the Neptune Trojans and objects in Neptune's 2:1 and 5:1 mean motion resonances are used to determine the expected number of detections for each population. {\tt OpSim} runs with no north ecliptic coverage have extremely few detections of Neptune Trojans in the L4 cloud (none at the low inclinations crucial for testing the color-inclination relationship) and few detections in the leading libration islands of Neptune's N:1 resonances; excluding the north ecliptic cuts the total number of expected discoveries approximately in half (see Fig.~\ref{fig:metrics} a and b). For the case of the 5:1 resonance (Fig.~\ref{fig:metrics} d), there would be too few detections to usefully constrain the ratio of leading to trailing populations as a test of Neptune's migration speed. For distant N:1 resonances, losing half the detections would limit the accuracy of population estimates; we would ideally like the Poisson sample size uncertainties to be less than $\sim15\%$, even for the more distant resonances.
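The quoted $\sim$15$\%$ Poisson target translates directly into a minimum detection count per resonance; a quick sanity check (illustrative):

```python
# Fractional Poisson uncertainty 1/sqrt(N) <= 0.15 implies a minimum sample size.
import math

N_min = math.ceil(1.0 / 0.15**2)   # = 45 detections per resonance
assert N_min == 45
```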
\textbf{Outer Solar System Orbit Metric:} A metric to measure the orbit fit quality for detected objects will need to be constructed, accounting for the total number of observations and assuming an appropriate color distribution. To securely classify objects, we typically need the orbit fit to have a $3$-$\sigma$ semi-major axis uncertainty $\Delta a/a < 0.01$. This is necessary to separate resonant from non-resonant objects, because Neptune's mean motion resonances are of order $\sim 1$~au wide for typical Kuiper belt eccentricities, though resonances embedded in the classical Kuiper belt can be narrower.
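The classification cut just described reduces to a one-line test (the example semi-major axis and uncertainty values below are hypothetical):

```python
# Secure-classification cut quoted above: 3-sigma fractional semi-major-axis
# uncertainty below 1 percent.
def securely_classifiable(a_au, sigma_a_au, frac_threshold=0.01):
    """True if 3 * sigma_a / a is below the fractional threshold."""
    return 3.0 * sigma_a_au / a_au < frac_threshold

# Hypothetical 5:1 resonant candidate near a ~ 88 au:
assert securely_classifiable(88.0, 0.2)        # 3*0.2/88 ~ 0.007 < 0.01
assert not securely_classifiable(88.0, 0.5)    # 3*0.5/88 ~ 0.017
```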
\textbf{IOCs:} The predicted IOC orbital distribution depends on the formation model. Given that IOCs can only be efficiently discovered near perihelion, and that the Planet 9 model predicts perihelion clustering, we suggest that a simple metric based on the fractional coverage of the NES region within 20$^\circ$ of the ecliptic may be most suitable for assessing the success of IOC discovery.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.92\columnwidth]{metrics}
\caption{\label{fig:metrics} Metric results for a) Neptune Trojans excluding the NES, b) Neptune Trojans including the NES, c) MBCs, d) 5:1 KBO mean motion resonance libration islands.}
\end{center}
\end{figure}
\section{Special Data Processing}
For understanding detection efficiency and characterizing the survey losses, having the same detection algorithm in the Northern Ecliptic as in the Southern Ecliptic will be extremely beneficial. In the WFD, MOPS will be the primary moving object search algorithm for Solar System bodies at distances less than approximately 200 au. The proposed observations are designed such that the LSST Project-developed MOPS will be able to generate tracklets and link them to identify moving objects in the NES.
One science case that requires an additional pipeline is the search for very distant small bodies, including additional
Sedna-like objects or Planet 9. MOPS is designed to detect motion between the two visits of the same field within
a night, separated by $\sim$15-90 minutes. Distant Solar System bodies moving slower than this will be identified as
stationary sources by MOPS. Thus for identifying the most distant Solar System objects beyond a heliocentric
distance of 200 au and directly detecting Planet 9 (hypothesized to have a semi-major axis of $\sim$700 au),
a separate detection pipeline will need to be developed by the planetary community. This pipeline task has
been identified as one of the key tasks in the SSSC Software Roadmap. We note that several members of the
SSSC have written versions of a slow moving object pipeline \citep[e.g. ][]{2004ApJ...617..645B,2010ApJ...720.1691S,2016AJ....152..221S,2017AJ....153..262B, 2017ApJ...839L..15G,2018ApJ...855L...6H}
for other outer Solar System surveys and have the expertise to develop such a community pipeline. We also
note that this pipeline could reasonably work on the sources generated from individual images, rather than
requiring the image pixels directly, and can further reject a large majority of the sources in each individual image
immediately as correlated with (long-term) stationary objects; the relevant inputs are relatively small compared to LSST
data processing.
Main-belt comet science will also require specialized data processing in the form of advanced activity detection
and characterization software that go beyond the basic activity detection and characterization performed by the
standard LSST pipelines. These pipelines will build upon the alert stream and LSST produced Solar System data
products. This specialized software is equally essential for comet science in general for LSST (i.e., including
observations as part of the main Wide-Fast-Deep survey in the South), and development of this software is already
a high priority for the SSSC software development and active objects working groups. No additional special
data processing requirements beyond what is already planned to be developed to handle comet data from LSST
in general will be imposed by this mini survey.
\section{Acknowledgements}
The authors thank the Large Synoptic Survey Telescope (LSST) Project Science Team and the LSST Corporation for their support of the LSST Solar System Science Collaboration's (SSSC) efforts. This work was supported in part by an LSST Corporation Enabling Science grant. The authors also thank the B612 Foundation, AURA, and the Simons Foundation for their support of workshops, hackathons, and sprints that led to the development of this white paper. Elements of this work were enabled by the Solar System JupyterHub service at the University of Washington's DIRAC Institute (\url{http://dirac.astro.washington.edu}). This white paper has made use of NASA's Astrophysics Data System Bibliographic Services. This version of our NES white paper was formatted using the AASTeX \LaTeX\ class file and template package from the American Astronomical Society (AAS) Journals \url{http://journals.aas.org/authors/aastex/aasguide.html}.
\bibliographystyle{aasjournal}
\section{Introduction}
Analog models of gravitational systems are useful to understand the
physics of black holes and the early universe \cite{Barcelo2005}. Indeed,
although quantum aspects of black hole evaporation via Hawking
radiation have received much theoretical attention, it is
difficult to observe the phenomenon in our universe because the
temperature is far too low for astrophysical black holes. Concerning
cosmological particle creation, it is likewise hard to observe this
effect in the early stage of the universe directly. However, considering
condensed matter systems, it is possible to design models with causal
horizons for excitations, which have similar properties as black
hole horizons or cosmological horizons for null rays in general
relativity. Thus it is possible to perform experiments of analog
Hawking radiation \cite{Steinhauer2016a,MunozdeNova2019} and particle
creations in expanding universes in laboratories. Quantum Hall (QH)
systems can become one of such analog models
\cite{Hegde2019,Dalui2020,Subramanyan2021}. A QH system emerges when a
strong perpendicular magnetic field is applied to two-dimensional
electrons when the Landau level filling factor becomes an integer or a
rational fraction \cite{Yoshioka2002, Tong2016}. The QH systems are
typical topological materials consisting of the bulk and edge. In the
bulk, the dynamics yields a large energy gap in its dispersion
relation. In the edge, the dispersion relation of the edge current is
protected owing to the topological structure of the system, and the
edge excitations are always gapless. Thus the edge effective theories
are given by free field theories with a chiral condition, and belong
to a class of conformal field theory in 1+1 dimensional spacetime.
Most experiments on QH systems have been performed in a static
situation. The electrons are confined in the bulk region by a static
electric field created by the surface potential of host semiconductors
of the 2D electrons and the edge attached to the bulk remains
unchanged in time. Expanding edges were proposed in \cite{Hotta2014a}
and experimental realization is ongoing \cite{Kamiyama2022a,Kamiyama2022b}; the
edge expands by gradually relaxing the external electric fields
through continuous electron supply into the bulk and the excitations
moving along the edge are affected by the expansion. Thus, it is
possible to perform experiments of the quantum scalar field in an
analog expanding universe. In our previous paper \cite{Hotta2022b}, we
formulated a quantum field theory in 1+1 dimensional curved
spacetime to analyze the edge dynamics. It was shown that the
expanding edges can be regarded as expanding universe simulators of
two dimensional dilaton-gravity models, and pointed out that our
theoretical setup can simulate the classical counterpart of an analog Hawking
radiation with Gibbons-Hawking temperature from the future de
Sitter horizon formed in the expanding edge region.
In this paper, applying the formulation developed in our previous
paper \cite{Hotta2022b}, we investigate quantum aspects of the scalar
field in an expanding edge model which reproduces an analog de Sitter
universe. In particular, we focus on particle creation in the analog
de Sitter universe (Hawking radiation), generation of quantum
fluctuations by edge expansion, and their entanglement behavior. We
will show that thermal radiation with the Gibbons-Hawking
temperature is created from the expanding edge region and that it is
detectable in a static edge region. For this purpose, instead of
introducing a specific detector model to measure the Hawking
radiation, we define local spatial modes of the scalar field using
window functions and consider their correlations. Furthermore, we will
also investigate the entanglement between two spatial regions and show
that it decreases by Hawking radiation coming from the expanding edge
region. We regard this behavior as a feature corresponding to the
disappearance of quantumness of the primordial quantum fluctuations
expected in cosmic inflation \cite{Nambu2008,Nambu2011,
Matsumura2018}. Thus, our analog de Sitter model can serve as a
simulator of the early universe for exploring the generation mechanism
and the features of primordial quantum fluctuations produced by cosmic
inflation.
The plan of the paper is as follows. In Sec.II, we review our setup of
the expanding edge of the QH systems and an analog de Sitter universe. In
Sec. III, we present the behavior of classical wave propagation in the expanding edge
system. In Sec.IV, we formulate the quantum treatment of edge excitations
and investigate the behavior of spatial local modes which are
measurable in an experiment of the QH system. In Sec. V, we discuss entanglement between spatial
modes. Sec. VI is devoted to summary and speculation.
\section{Expanding edge of quantum Hall system}
Let us consider a massless scalar field $\varphi$ on the edge of QH
systems. Based on the effective theory of the edge excitations in QH
systems \cite{Yoshioka2002,Tong2016}, the edge mode is
represented by a massless scalar field $\varphi$ whose wavelength
is far larger ($\sim$100 times or more) than the magnetic length
\begin{equation}
\ell_B=\sqrt{\frac{\hbar}{eB}},
\end{equation}
where $B$ is a perpendicular magnetic field. The edge current and the
edge charge density are given as derivatives of the scalar field. We
derive the wave equation for $\varphi$. The left moving modes and the
right moving modes of $\varphi$ obey
\begin{equation}
\pa_\tau\varphi_L-\frac{v}{a(\tau)}\pa_x\varphi_L=0,\quad
\pa_\tau\varphi_R+\frac{v}{a(\tau)}\pa_x\varphi_R=0,
\end{equation}
where $\tau$ denotes a time variable in a laboratory, $x$ is the
comoving coordinate along the edge and the proper length along the
edge is given by $a(\tau)\int dx$. The scale factor $a(\tau)$
represents the expansion of the edge. Using the trapping potential
$U(y)$ perpendicular to the edges of the QH system, the propagation
speed of the edge excitation $v$ is determined as
\begin{equation}
v=\frac{c\,U'(y)}{eB}=\frac{cE}{B},
\end{equation}
where $E$ is the electric field induced by $U$. This propagation speed
of the edge excitation is the same as the classical drift velocity of
electrons. The solution of these equations is
\begin{equation}
\varphi_L=A\left(v\int\frac{d\tau}{a}+x\right),\quad
\varphi_R=B\left(v\int\frac{d\tau}{a}-x\right),
\end{equation}
where $A,B$ are arbitrary functions. The scalar field
$\varphi:=\varphi_L+\varphi_R$ obeys
\begin{equation}
\ddot\varphi+\frac{\dot
a}{a}\dot\varphi-\frac{v^2}{a^2}\pa_x^2\varphi=0,\quad\dot{}=\frac{\pa}{\pa\tau}.
\end{equation}
This is the Klein-Gordon equation $\square\varphi=0$ in a
1+1 dimensional expanding universe the metric of which is given by
\begin{equation}
ds^2=-v^2d\tau^2+a^2(\tau)dx^2.
\label{eq:metric1}
\end{equation}
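For completeness, the wave equation above follows from expanding $\square\varphi=\frac{1}{\sqrt{-g}}\pa_\mu(\sqrt{-g}\,g^{\mu\nu}\pa_\nu\varphi)$ in this metric, where $\sqrt{-g}=v\,a(\tau)$, $g^{\tau\tau}=-1/v^2$, and $g^{xx}=1/a^2$:

```latex
\begin{equation}
\square\varphi
=-\frac{1}{v^{2}a}\,\pa_\tau\!\left(a\,\pa_\tau\varphi\right)
+\frac{1}{a^{2}}\,\pa_x^{2}\varphi
=-\frac{1}{v^{2}}\left(\ddot\varphi+\frac{\dot a}{a}\dot\varphi\right)
+\frac{1}{a^{2}}\pa_x^{2}\varphi=0.
\end{equation}
```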
The propagation speed of the edge excitation $v$ plays the same role
as the speed of light $c$ in general relativity which determines causal
structures of spacetimes. It is possible to control the expansion law
$a(\tau)$ by tuning the external trapping electric field for the edge
region. We can perform experiments of quantum physics of an early
universe using the analog expanding universe by analysing QH systems
with an expanding edge. From now on, we set $v=1$ and we use $v$ as a
unit of length and time in our analog spacetimes. By introducing the
conformal time $t:=\int d\tau/a$ and null coordinates $x^\pm:=t\pm x$,
the metric is written as the conformally flat form:
\begin{equation}
ds^2=-a^2 dx^+dx^-.
\label{eq:metric2}
\end{equation}
The scalar field is represented as
\begin{equation}
\varphi=\varphi_L(x^+)+\varphi_R(x^-).
\end{equation}
For a given form of $a(\tau)$ which represents the expansion law of the
edge region, it
is possible to identify a corresponding analog universe using the
metric \eqref{eq:metric2}. In the QH systems, either $\varphi_L$ or
$\varphi_R$ is allowed due to the boundary condition of QH systems.
In this paper, we consider an analog de Sitter universe in our setup
of the QH system. Left panel of Fig. \ref{fig:penrose1} depicts a
setup of our QH experiment with the expanding edge of the QH
system: the edge system is composed of an input static region I
($L/2<x$), an expanding region II ($-L/2\le x\le L/2$), and an output
static region III ($x<-L/2$). The analog metric of this system is
written as
\begin{align}
&ds^2=a^2(t)(-dt^2+dx^2),\notag \\
& a(t)=\begin{cases}
\dfrac{1}{\cos(H
t)}\theta(t)+\theta(-t)&\quad\text{for}\quad-L/2\le x\le
L/2\quad\text{(region II)},\\
%
1&\quad\text{for}\quad L/2<|x|\quad\text{(region I,III)}.
\end{cases}
\label{eq:metric}
\end{align}
where $\theta(t)$ is the Heaviside function. Using the proper time
$\tau=\int_0^t dt'a(t')$, the metric in region II is
\begin{equation}
ds^2
=\begin{cases}
-d\tau^2+\cosh^2(H\tau)dx^2,&\quad\text{for}\quad \tau\ge0.\\
-d\tau^2+dx^2, &\quad\text{for}\quad \tau<0.
\end{cases}
\end{equation}
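The equivalence of the two forms of the region-II scale factor can be cross-checked numerically (an illustrative check; $H=1$ and the grid are arbitrary choices):

```python
# a(t) = 1/cos(Ht) in conformal time t corresponds to a(tau) = cosh(H tau),
# with proper time tau = \int_0^t a(t') dt' = arctanh(sin(H t)) / H.
import numpy as np

H = 1.0
t = np.linspace(0.0, 1.4, 200)         # conformal time, H t < pi/2
tau = np.arctanh(np.sin(H * t)) / H    # proper time along the edge
assert np.allclose(np.cosh(H * tau), 1.0 / np.cos(H * t))
```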
Thus, region II is flat Minkowski spacetime for $t<0$, and de Sitter
expansion starts at $t=0$. The global structure of this spacetime is
shown in the right panel of Fig.~\ref{fig:penrose1} for the parameter
range $\pi/4<LH<\pi/2$. A future de Sitter horizon $\mathcal{H}^+$
forms in region II.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{./figs/PenroseMD.pdf}
\caption{Left panel: schematic picture of the expanding edge system. $x$
denotes a coordinate along the edge of the QH system. Region I and
III are static Minkowski regions and the expanding region II
corresponds to a de Sitter universe. Right panel: Penrose diagram
representing the present setup with the expanding edge region II (gray region), which starts
accelerated expansion at $t=0$. A and B denote world lines of
detectors which perform measurements of edge excitations. This
diagram corresponds to $\pi/4<LH<\pi/2$ case. For parameter
values not included in this range, global structure of the
spacetime becomes different (see \cite{Hotta2022b}). }
\label{fig:penrose1}
\end{figure}
It is possible to relate the coordinates of region I and
region III explicitly. Null coordinates in region I and
region III are related by the following formulas (see details in the
Appendix):
\begin{align}
&x_\text{I}^+=\Phi[-L+\Phi^{-1}[x_\text{III}^++\Phi[L/2]]]+L/2=:
f(x_\text{III}^+),\\
&x_\text{III}^-=\Phi[-L+\Phi^{-1}[x_\text{I}^{-}+L/2]]-\Phi[L/2]+L.
\end{align}
where the function $\Phi$ is defined by
\begin{align}
\Phi(x)&=\int_0^x dy\, a(y) \notag \\
&=\begin{cases}
\dfrac{1}{2H}\ln\dfrac{1+\sin Hx}{1-\sin Hx} &\quad
\text{for}\quad 0\le x<\dfrac{\pi}{2H}, \\
%
x&\quad\text{for}\quad x<0.
\end{cases}
\end{align}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{./figs/function-f.pdf}
\caption{Left panel: the function $\Phi(x)$. $\Phi=+\infty$ at $H
x=\pi/2$. Right panel: the function $f(x)$.}
\end{figure}
\noindent
The inverse function is
\begin{equation}
\Phi^{-1}(x)=
\begin{cases}
\dfrac{1}{H}\,\mathrm{arcsin}\tanh(Hx)&\quad\text{for}\quad x>0 \\
x&\quad\text{for}\quad x<0
\end{cases},
\end{equation}
and for $x\rightarrow+\infty$,
\begin{equation}
\Phi^{-1}(x)\sim\frac{\pi}{2H}-\frac{2}{H}e^{-Hx}.
\end{equation}
The asymptotic form of the function $f(x^+_\text{III})=x^+_\text{I}[x^+_\text{III}]$ is
\begin{equation}
f(x_\text{III}^+)\sim
\begin{cases}
c_0-c_1 e^{-Hx_\text{III}^+}&\quad\text{for}\quad
x_\text{III}^+\rightarrow+\infty \\
x_\text{III}^+&\quad\text{for}\quad x_\text{III}^+\rightarrow-\infty
\end{cases}
\label{eq:x1}
\end{equation}
where the constants $c_0$ and $c_1$ are
\begin{equation}
c_0=\frac{L}{2}-\frac{1}{H}\ln\tan(HL/2),\quad c_1=\frac{2}{H\sin
HL}\sqrt{\frac{1-\sin(HL/2)}{1+\sin(HL/2)}}.
\end{equation}
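These asymptotics can be verified numerically (an illustrative script; the values $H=0.8$, $L=1.5$ are arbitrary choices with $0<HL<\pi/2$):

```python
# Numerical check of the exponential approach f(x) -> c0 - c1*exp(-H x),
# using the coordinate relation f and the function Phi defined above.
import numpy as np

H, L = 0.8, 1.5   # H*L = 1.2 < pi/2

def Phi(x):
    # Phi(x) = int_0^x a(y) dy; de Sitter branch for x >= 0, identity for x < 0.
    return np.log((1.0 + np.sin(H * x)) / (1.0 - np.sin(H * x))) / (2.0 * H) if x >= 0 else x

def Phi_inv(x):
    return np.arcsin(np.tanh(H * x)) / H if x > 0 else x

def f(xp):
    # x^+_I as a function of x^+_III, as given above.
    return Phi(-L + Phi_inv(xp + Phi(L / 2))) + L / 2

# Note the 1/H multiplying the logarithm in c0 (required dimensionally).
c0 = L / 2 - np.log(np.tan(H * L / 2)) / H
c1 = (2.0 / (H * np.sin(H * L))) * np.sqrt((1 - np.sin(H * L / 2)) / (1 + np.sin(H * L / 2)))

xp = 12.0
assert abs(f(xp) - (c0 - c1 * np.exp(-H * xp))) < 1e-5
```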
\section{Classical simulation of Hawking radiation}
We send plane waves from region I (in-region) and detect them at a point in region
III (out-region). Normalized wave modes in region I and III are
\begin{equation}
\varphi_k^\text{(I)}=\frac{e^{-ikx_\text{I}^+}}{\sqrt{4\pi
k}}=\frac{e^{-ikf(x_\text{III}^+)}}{\sqrt{4\pi k}},\quad
\varphi_k^\text{(III)}=\frac{e^{-ikx_\text{III}^+}}{\sqrt{4\pi
k}},\quad k>0.
\end{equation}
An input plane wave $e^{-ikx^+_\text{I}}$ in region I has the wave form
$\exp(-ikf(x^+_\text{III}))$ in region III. The distortion of plane waves
due to the de Sitter expansion of region II is encoded in the function
$f(x_\text{III}^+)$. Figure 3 depicts wave forms in region III. The left
panel shows the real and imaginary parts of $\varphi_k^{(\text{I})}$
as a function of $x^+_{\text{III}}$. The right panel shows snapshots of
the wave form $\mathrm{Re}\,\varphi_k^{(\text{I})}$ at $t=0, 3, 6$. We
can observe that the wave is stretched by the de Sitter expansion in region II.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth,clip]{./figs/wave-form.pdf}
\caption{Wave forms in region III ($L=H=1, k=7$). Left panel: real
part (blue) and imaginary part (red) of $\varphi$. Right panel:
Change of the spatial profile of waves (real part of $\varphi$) at different
times ($t=0,3,6$).}
\end{figure}
\noindent
In-mode and out-mode are related by the Bogoliubov transformation
\begin{align}
&\varphi_k^\text{(I)}=\int_0^\infty
dk'\left[\alpha(k,k')\,\varphi_{k'}^\text{(III)}+\beta(k,k')\,
\varphi_{k'}^\text{(III)*}\right], \\
%
&\varphi_k^\text{(III)}=\int_0^\infty
dk'\left[\alpha^*(k',k)\,\varphi_{k'}^\text{(I)}-\beta(k',k)\,
\varphi_{k'}^\text{(I)*}\right],
\end{align}
where the Bogoliubov coefficients $\alpha$ and $\beta$ are obtained from the relation
\begin{equation}
\frac{e^{-ik'f(x_\text{III}^+)}}{\sqrt{k'}}=\int_0^\infty
\frac{dk}{\sqrt{k}}\left[\alpha(k,k')\,e^{-ik x_\text{III}^+}+\beta(k,k')\,e^{ikx_\text{III}^+}\right].
\end{equation}
Thus,
\begin{equation}
\alpha(k,k')=\frac{1}{2\pi}
\sqrt{\frac{k}{k'}}\int^{\infty}_{-\infty}dy\,e^{-ik'f(y)}\,e^{iky},\quad
\beta(k,k')=\frac{1}{2\pi}
\sqrt{\frac{k}{k'}}\int^{\infty}_{-\infty}dy\,e^{-ik'f(y)}\,e^{-iky}.
\end{equation}
Using the asymptotic form \eqref{eq:x1} of $f(x)$, we have \cite{Hotta2022b}
\begin{equation}
|\beta(k,k')|^2\sim
\begin{cases}
0&\quad\text{for}\quad x_\text{III}^+\rightarrow -\infty, \\
\dfrac{1}{2\pi Hk'}\dfrac{1}{\exp(2\pi k/H)-1}&\quad\text{for}\quad
x_\text{III}^+\rightarrow +\infty.
\end{cases}
\end{equation}
For $x_\text{III}^+\rightarrow +\infty$, the Bogoliubov
coefficient $\beta$ shows the Planckian distribution with the
temperature
\begin{equation}
T_H=\frac{H}{2\pi}.
\end{equation}
This temperature coincides with the Gibbons-Hawking temperature in the
de Sitter spacetime. Thus, it is possible to detect the classical
counterpart of Hawking radiation from the cosmological horizon in a de
Sitter universe by measuring the Fourier component of wave signals
in the Minkowski region III.
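As an illustration (not part of the paper), the Gibbons-Hawking temperature can be read off numerically from the Boltzmann tail of the Planckian $|\beta(k,k')|^2$ derived above; the log-slope in $k$ gives $-1/T_H$:

```python
# Reading the temperature off the Boltzmann tail of |beta|^2 (illustrative).
import math

H = 1.0

def beta2(k, kprime=1.0):
    # Planckian form of |beta(k, k')|^2 derived in the text
    return 1 / (2 * math.pi * H * kprime * math.expm1(2 * math.pi * k / H))

k1, k2 = 2.0, 3.0
T_fit = -(k2 - k1) / math.log(beta2(k2) / beta2(k1))
assert abs(T_fit - H / (2 * math.pi)) < 1e-4   # T_H = H / (2 pi)
```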
\section{Quantum simulation of Hawking radiation }
From now on, we consider a quantum scalar field $\hat\varphi$ in the
expanding edge system. Our main purpose is to investigate quantum
effects of the edge mode in the quantum Hall system, which are
measurable through the local charge density $\pa_{x^+}\hat\varphi$ of
the edge excitation.
\subsection{Correlation functions }
Let us consider the setup shown in Fig.~\ref{fig:penrose1}. We prepare
measurement points A and B in region III. A part of
the signals emitted from $\mathcal{I}^-$ of region I cannot reach
region III after the formation of the future de Sitter horizon
$\mathcal{H}^+$ in region II. Thus, the spacetime structure is similar
to that of black hole formation by gravitational collapse. We
consider detection of quantum fluctuations of the scalar field in
region III by imposing the vacuum condition at $\mathcal{I}^-$ in
region I:
\begin{equation}
\hat a_k\ket{0_\text{I}}=0.
\end{equation}
Owing to the chirality of the edge mode, we consider only a left-moving
scalar field, and the field operator in region III is expressed as
\begin{equation}
\hat\varphi(x^{+})=\int_0^\infty\frac{dk}{\sqrt{4\pi k}}\left[\hat
a_k\,e^{-ik f(x^+)}+\hat
a_k^\dag\,e^{ik f(x^+)}\right].
\end{equation}
In our setup, the gauge-invariant physical quantity is the charge density,
which is given by the derivative of the field operator $\hat\varphi$:
\begin{equation}
\hat\Pi(x^+):=\hat\varphi'(x^+)=-if'(x^+)\int_0^\infty
dk\sqrt{\frac{k}{4\pi}}
\left[\hat
a_k\,e^{-ik f(x^+)}-\hat
a_k^\dag\,e^{ik f(x^+)}\right].
\end{equation}
We investigate quantum effects based on the field operator $\hat\Pi$.
Commutators between field operators are
\begin{align}
&[\hat\varphi(x^+),\hat\varphi(y^+)]=-\frac{i}{4}\mathrm{sign}(f(x^+)-f(y^+)),\\
&[\hat\varphi(x^+),\hat\Pi(y^+)]=\frac{i}{2}f'(y^+)\delta(f(x^+)-f(y^+)),\\
&[\hat\Pi(x^+),\hat\Pi(y^+)]=\frac{i}{2}f'(x^+)f'(y^+)\delta'(f(x^+)-f(y^+)).
\end{align}
The Wightman function for $\hat\varphi$ is
\begin{align}
D(x_1^+,x_2^+)&=\expval{\hat\varphi(x_1^+)\hat\varphi(x_2^+)}=\frac{1}{4\pi}\int_\mu^\infty
\frac{dk}{k}e^{-ik(f(x_1^+)-f(x_2^+)-i\Delta f)}, \notag \\
%
&=-\frac{1}{4\pi}\log[\mu(f(x_1^+)-f(x_2^+)-i\Delta f)],
\end{align}
where we introduced an IR cutoff $\mu$ as the lower bound of the integral,
and a UV cutoff $\Delta f>0$ by
\begin{equation}
\Delta f:=f'\left((x_1^++x^+_2)/2\right)|_{x_1^+=x_2^+}\,\epsilon,
\end{equation}
with the spatial cutoff length $\epsilon$ in the flat region III. In the
Minkowski phase $t<0$, $\Delta f=\epsilon$, and in the de Sitter phase
$t\ge 0$, $\Delta f\sim e^{-Ht}\epsilon$ which corresponds to the comoving
wavelength in the de Sitter region II. The local spatial modes
prepared in the Minkowski region III can thus detect long-wavelength
quantum fluctuations in the de Sitter region II. In our analysis, the
scalar field $\hat\varphi$ is an effective field, and there exists a
short-distance cutoff length $\epsilon$ below which the effective treatment of
the edge mode breaks down. In QH systems, this scale corresponds to
the magnetic length $\ell_B$, and we identify the short-distance cutoff $\epsilon$
with this length.
The Wightman function for $\hat\Pi$ is
\begin{align}
D_\Pi(x_1^+,x_2^+)
&:=\expval{\hat\Pi(x^+_1)\hat\Pi(x^+_2)}=\pa_{x_1^+}\pa_{x_2^+}D(x_1^+,x_2^+) \notag \\
%
&= \frac{f'(x^+_1)f'(x^+_2)}{4\pi}\int_0^\infty dk
k\,e^{-ik(f(x_1^+)-f(x_2^+)-i\Delta f)}
\notag \\
%
&=-\frac{1}{4\pi}\frac{f'(x_1^+)\,f'(x_2^+)}
{\left(f(x_1^+)-f(x_2^+)-i\Delta f\right)^2}.
\end{align}
This quantity is independent of the IR cutoff $\mu$. Using
\eqref{eq:x1}, the asymptotic behavior becomes
\begin{equation}
D_\Pi(x_{1}^+,x_{2}^+)\sim -\frac{1}{4\pi}
\begin{cases}
\dfrac{1}{\left[(2/H)\sinh(H(x_{1}^+-x_{2}^+)/2)-i\epsilon\right]^2}&\quad\text{for}\quad
x_{{1,2}}^+\rightarrow+\infty\\
\dfrac{1}{(x_{1}^+-x_{2}^+-i\epsilon)^2}&\quad\text{for}\quad x_{{1,2}}^+\rightarrow-\infty
\end{cases}
\end{equation}
For $x_1^+, x_2^+\rightarrow+\infty$, $D_\Pi$ has the same behavior
as that of a thermal state with the Gibbons-Hawking temperature $T_H$.
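This thermal asymptote can be checked numerically against the exact expression for $D_\Pi$ (an illustrative sketch, not the paper's code; $H=L=1$, the cutoff $\Delta f$ is set to zero, and $f'$ is evaluated by a finite difference):

```python
# Illustrative check (H = L = 1, cutoff -> 0): deep in region III the exact
# D_Pi approaches the thermal form with sinh(H(x1 - x2)/2).
import math

H, L = 1.0, 1.0

def Phi(x):
    if x >= 0:
        return math.log((1 + math.sin(H * x)) / (1 - math.sin(H * x))) / (2 * H)
    return x

def Phi_inv(x):
    if x > 0:
        return math.asin(math.tanh(H * x)) / H
    return x

def f(x):
    return Phi(-L + Phi_inv(x + Phi(L / 2))) + L / 2

def df(x, h=1e-6):                 # numerical derivative f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def D_Pi(x1, x2):                  # exact expression with epsilon -> 0
    return -df(x1) * df(x2) / (4 * math.pi * (f(x1) - f(x2)) ** 2)

def D_thermal(x1, x2):             # late-time (thermal) asymptote
    s = (2 / H) * math.sinh(H * (x1 - x2) / 2)
    return -1 / (4 * math.pi * s ** 2)

assert abs(D_Pi(8.0, 9.5) / D_thermal(8.0, 9.5) - 1) < 1e-2
```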
The symmetrized correlation functions are
\begin{align}
&\expval{\left\{\hat\varphi(x^+),\hat\varphi(y^+)\right\}}=\frac{1}{2\pi}\int_\mu^\infty
\frac{dk}{k}\cos(k(f(x^+)-f(y^+)))e^{-k\Delta f},\\
%
&\expval{\left\{\hat\varphi(x^+),\hat\Pi(y^+)\right\}}=\frac{f'(y^+)}{2\pi}\int_0^\infty
dk\sin(k(f(x^+)-f(y^+)))e^{-k\Delta f},\\
%
&\expval{\left\{\hat\Pi(x^+),\hat\Pi(y^+)\right\}}=\frac{f'(x^+)f'(y^+)}{2\pi}\int_0^\infty
dk k\cos(k(f(x^+)-f(y^+)))e^{-k\Delta f}.
\end{align}
\subsection{Correlation of local spatial mode}
We consider the measurement of the field $\hat\Pi(x^+)$ at $x_\text{A}$
and $x_\text{B}$ in region III. This measurement process can be represented by
the interaction between the field operator $\hat\Pi$ and the canonical
variables $(\hat Q_D,\hat P_D)$ of the measurement apparatus. In the
present analysis, we do not specify details of the
apparatus. The interaction Hamiltonian between the field operator and
the apparatus is
\begin{equation}
H_\text{int}=\sum_{j=\text{A,B}}\lambda_j(t)g_j(\hat Q_D,\hat P_D)\otimes\int dx\, w_j(x)\hat\Pi(t+x),
\end{equation}
where $g_j(\hat Q_D,\hat P_D)$ is a function of canonical variables of the
measurement apparatus, $w_j(x)$ is a window function defining a
spatial local mode of the field at $x_\text{A,B}$, and $\lambda_j(t)$ is a
switching function of the interaction. After acting on the apparatus state, this
interaction causes a change of the ``reading'' of the apparatus
depending on the state of the quantum field $\hat\Pi$ at $x_\text{A,B}$. In the
present analysis, we do not introduce details of measurement protocols
but just pay attention to the behavior of the local mode of the
quantum field introduced by
the spatial window function $w_\text{A,B}(x)$.
For the purpose of observing spatial correlations of the field, we
define canonical pairs of variables corresponding to the local
spatial modes of the field at $x_\text{A}$ and $x_\text{B}$:
\begin{equation}
\hat Q_j(t)=\int dx\, w_Q(x-x_j)\hat\Pi(t+x),\quad \hat P_j(t)=\int
dx\, w_P(x-x_j)\hat\Pi(t+x),\quad j=\text{A,B}.
\end{equation}
We assume the window functions $w_{P,Q}(x)$ have non-zero values only in a compact spatial region
$x\in[-\ell/2,\ell/2]$. For these variables to form canonical
pairs, the equal-time commutators must satisfy
\begin{align}
&[\hat Q_j, \hat P_k]=\frac{i}{2}\int dx\, w_Q(x-x_j)w_P'(x-x_k)\equiv
i\delta_{jk}, \quad j,k=\text{A,B},
\label{eq:s-mode1}\\
&[\hat Q_j, \hat Q_k]= \frac{i}{2}\int dx\,
w_Q(x-x_j)w_Q'(x-x_k)\equiv 0,\\
&[\hat P_j, \hat P_k]=\frac{i}{2}\int dx\,
w_P(x-x_j)w_P'(x-x_k)\equiv 0.
\end{align}
These equations provide conditions on the window functions and are
independent of the state of the quantum field. Thus, the local
spatial modes $(\hat Q_\text{A},\hat P_\text{A})$,
$(\hat Q_\text{B},\hat P_\text{B})$ associated with spatial regions A
and B can be introduced by using suitably chosen window functions
$w_Q(x)$ and $w_P(x)$, irrespective of the state of the quantum field
(Fig. \ref{fig:setup}). Regions A and B are assumed to have no overlap
and their separation is $dx$. Locality of the spatial modes is guaranteed if
we adopt window functions with compact support. The center of each
region is assumed to be
\begin{equation}
x_\text{A}=-\frac{L}{2}-\frac{3\ell}{2}-dx,\quad
x_\text{B}=-\frac{L}{2}-\frac{\ell}{2},\quad x_\text{B}-x_\text{A}=\ell+dx.
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width=0.45\linewidth]{./figs/setup.pdf}
\caption{Setup defining spatial regions A and B. The centers of the
regions are $x_\text{A}=-L/2-3\ell/2-dx$ and
$x_\text{B}=-L/2-\ell/2$, with $x_\text{B}-x_\text{A}=\ell+dx$. }
\label{fig:setup}
\end{figure}
\noindent
We choose the
following window functions in our analysis:
\begin{equation}
w_Q(x)=\frac{2}{\sqrt{\pi}}\cos\left(\frac{\pi x}{\ell}\right),\quad
w_P(x)=\frac{2}{\sqrt{\pi}}\sin\left(\frac{\pi x}{\ell}\right),\quad x\in[-\ell/2,\ell/2].
\label{eq:window}
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\linewidth]{./figs/window.pdf}
\caption{Spatial profile of adopted window functions.}
\label{fig:window}
\end{figure}
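As a numerical check (not part of the paper), one can verify directly that these window functions satisfy the canonical conditions \eqref{eq:s-mode1} by midpoint-rule integration; the region size $\ell=2$ is an arbitrary choice:

```python
# Numerical check of the canonical conditions for the cosine/sine windows
# (midpoint rule; illustrative, ell = 2).
import math

ell = 2.0
N = 100_000
dx = ell / N

def wQ(x):
    return 2 / math.sqrt(math.pi) * math.cos(math.pi * x / ell)

def wP(x):
    return 2 / math.sqrt(math.pi) * math.sin(math.pi * x / ell)

def dwQ(x):  # derivative of wQ
    return -2 * math.sqrt(math.pi) / ell * math.sin(math.pi * x / ell)

def dwP(x):  # derivative of wP
    return 2 * math.sqrt(math.pi) / ell * math.cos(math.pi * x / ell)

xs = [-ell / 2 + (i + 0.5) * dx for i in range(N)]
comm_QP = 0.5 * sum(wQ(x) * dwP(x) for x in xs) * dx   # should equal 1
comm_QQ = 0.5 * sum(wQ(x) * dwQ(x) for x in xs) * dx   # should vanish
comm_PP = 0.5 * sum(wP(x) * dwP(x) for x in xs) * dx   # should vanish
assert abs(comm_QP - 1) < 1e-6 and abs(comm_QQ) < 1e-9 and abs(comm_PP) < 1e-9
```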
\noindent
These window functions satisfy the condition \eqref{eq:s-mode1}, and
the window functions defining the bipartite state for the canonical
variables
$(\hat Q_\text{A},\hat P_\text{A},\hat Q_\text{B},\hat P_\text{B})$ do
not have spatial overlap for $x_\text{B}-x_\text{A}\ge\ell$. Equal-time
correlations of these canonical variables are
\begin{align}
&\langle\hat Q_\text{A}\hat Q_\text{B}+\hat Q_\text{B}\hat
Q_\text{A}\rangle=
\int dx dy\, w_Q(x-x_\text{A})w_Q(y-x_\text{B})\expval{\left\{\hat\Pi(t+x),\hat\Pi(t+y)\right\}},\\
%
&\langle\hat P_\text{A}\hat P_\text{B}+\hat P_\text{B}\hat
P_\text{A}\rangle=
\int dx dy\, w_P(x-x_\text{A})w_P(y-x_\text{B})\expval{\left\{\hat\Pi(t+x),\hat\Pi(t+y)\right\}},\\
%
&\langle\hat Q_\text{A}\hat P_\text{B}+\hat P_\text{B}\hat
Q_\text{A}\rangle=
\int dx dy\, w_Q(x-x_\text{A})w_P(y-x_\text{B})\expval{\left\{\hat\Pi(t+x),\hat\Pi(t+y)\right\}}.
\end{align}
As the bipartite state $\rho_\text{AB}$ defined by these canonical
variables is Gaussian, the state is determined by the covariance
matrix
\begin{equation}
V_\text{AB}=
\begin{bmatrix}
a_1&a_3&c_1&c_3\\
a_3&a_2&c_4&c_2\\
c_1&c_4&b_1&b_3\\
c_3&c_2&b_3&b_2
\end{bmatrix},
\end{equation}
where its components are defined by
\begin{align}
c_1&=\frac{1}{2}\langle\hat Q_\text{A}\hat Q_\text{B}+\hat Q_\text{B}\hat
Q_\text{A}\rangle, \quad
%
c_2=\frac{1}{2}\langle\hat P_\text{A}\hat P_\text{B}+\hat P_\text{B}\hat
P_\text{A}\rangle, \quad
%
c_3=\frac{1}{2}\langle\hat Q_\text{A}\hat P_\text{B}+\hat P_\text{B}\hat
Q_\text{A}\rangle, \quad
c_4=\frac{1}{2}\langle\hat Q_\text{B}\hat P_\text{A}+\hat P_\text{A}\hat
Q_\text{B}\rangle, \\
%
a_1&=\langle\hat Q^2_\text{A}\rangle,\quad
a_2=\langle\hat P^2_\text{A}\rangle,\quad
a_3=\frac{1}{2}\langle\hat Q_\text{A}\hat P_\text{A}+\hat P_\text{A}\hat
Q_\text{A}\rangle,\quad b_j=a_j(\text{A}\rightarrow \text{B}).
\end{align}
We first show the temporal behavior of the auto-correlation functions of the local
spatial mode in region III. The behavior of the auto-correlation
functions $a_{1,2,3}(t)$ for different region sizes $\ell=1,2$ is shown
in Fig. \ref{fig:ac1}. We can observe a signature of the de Sitter
expansion in region II as a change of correlations around $0<t<2$.
\begin{figure}[H]
\centering
\includegraphics[width=1.01\linewidth]{./figs/c1xx.pdf}
\caption{Behavior of the auto-correlation
functions with different region size $\ell=1$ (blue lines) and $\ell=2$ (red lines). ($H=L=1,\epsilon=0.01$)}
\label{fig:ac1}
\end{figure}
\noindent
These quantities are measurable as output signals of the detector in
our QH experiment. To obtain a qualitative understanding of the
behavior of the auto-correlation functions, we evaluate
$a_1(t)=\langle\hat Q_\text{A}^2\rangle$ analytically with a window
function $w_Q(x)=w_0\,\theta(\ell/2+x)\theta(\ell/2-x)$, where $w_0$ is
a normalization constant whose value is left unspecified:
\begin{align}
\langle\hat Q_\text{A}^2\rangle&=\frac{1}{4\pi}\int_0^\infty dk k\left|\int dx \,w_Q(x)f'(t+x_\text{A}+x)e^{ikf(t+x_\text{A}+x)}\right|^2e^{-\Delta f \,k} \notag \\
&=\frac{w_0^2}{\pi}\int_0^\infty\frac{dk}{k}\sin^2\left[\frac{k}{2}\left(f(t+x_\text{A}+\ell/2)-f(t+x_\text{A}-\ell/2)\right)\right]e^{-\Delta f\,k} \notag\\
&=\frac{w_0^2}{4\pi}\ln\left[1+\frac{(f(t+x_\text{A}+\ell/2)-f(t+x_\text{A}-\ell/2))^2}{(f'(t+x_\text{A})\epsilon)^2}\right].
\label{eq:QA2}
\end{align}
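The $k$-integral in the last step can be checked numerically (illustrative sketch, not from the paper; $b$ stands for the difference $f(t+x_\text{A}+\ell/2)-f(t+x_\text{A}-\ell/2)$ and $c$ for $\Delta f$, with arbitrary values $b=1$, $c=0.1$):

```python
# Numerical check of the k-integral identity used above:
#   int_0^infty dk/k sin^2(b k / 2) e^{-c k} = (1/4) ln(1 + b^2/c^2).
# b plays the role of f(t+x_A+l/2) - f(t+x_A-l/2) and c of Delta f;
# the values b = 1, c = 0.1 are arbitrary.
import math

def lhs(b, c, kmax=250.0, n=125_000):
    dk = kmax / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * dk         # midpoint rule avoids k = 0
        total += math.sin(b * k / 2) ** 2 / k * math.exp(-c * k)
    return total * dk

def rhs(b, c):
    return 0.25 * math.log(1 + (b / c) ** 2)

assert abs(lhs(1.0, 0.1) - rhs(1.0, 0.1)) < 1e-3
```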
Using the asymptotic form \eqref{eq:x1} of $f(x)$,
\begin{equation}
\langle\hat Q_\text{A}^2\rangle\sim
\begin{cases}
\dfrac{w_0^2}{4\pi}\ln\left[1+\left(\dfrac{\ell}{\epsilon}\right)^2\right]\approx\dfrac{w_0^2}{2\pi}\ln\left(\dfrac{\ell}{\epsilon}\right)
&\quad\text{for}\quad
t\rightarrow-\infty,\\
\dfrac{w_0^2}{4\pi}\ln\left[1+\left(\dfrac{\ell}{\epsilon}\right)^2\left(\dfrac{\sinh(H\ell/2)}{(H\ell/2)}\right)^2\right]\approx
\dfrac{w_0^2}{2\pi}\ln\left(\dfrac{\ell}{\epsilon}\right)+\dfrac{w_0^2}{2\pi}\ln\left(\dfrac{\sinh(H\ell/2)}{(H\ell/2)}\right)&\quad\text{for}\quad
t\rightarrow+\infty,
\end{cases}
\label{eq:c1-asym}
\end{equation}
where we assume that the size of the detection region is far larger than the UV cutoff,
$\ell/\epsilon\gg 1$. The difference of the
auto-correlation between $t=-\infty$ and $t=+\infty$ is
\begin{equation}
a_1(+\infty)-a_1(-\infty)\sim \frac{w_0^2}{2\pi}\ln\left[\frac{\sinh(H\ell/2)}{(H\ell/2)}\right].
\label{eq:c1diff}
\end{equation}
This quantity is independent of the UV cutoff and its amount depends
only on $H\ell$. Hence we expect that it reflects the signature of Hawking
radiation from the de Sitter region. For a further understanding of
the behavior of $a_1=\langle\hat Q_\text{A}^2\rangle$, we pay attention to
its $\ell$-dependence. We expand $a_1(\ell)$ as
\begin{equation}
a_1(\ell)=\int_0^\infty dK\, \tilde a_1(K)e^{iK\ell}.
\end{equation}
Then, the power spectrum for $a_1$ is obtained as
\begin{equation}
P(K)=K|\tilde a_1(K)|,
\end{equation}
which represents the power of detected signals with wave number
$K=2\pi/\ell$ corresponding to the size $\ell$ of the detection
region. As we will see, the power spectrum shows the Planckian
distribution with the temperature $T_H=H/(2\pi)$. Using the asymptotic
form of $a_1$ in Eq. \eqref{eq:c1-asym}, the Fourier component of
$a_1(\ell)$ is
\begin{equation}
\tilde a_1(K)\sim
\begin{cases}
-\dfrac{i}{K}\left(\gamma+\ln(K\epsilon)-i\pi/2\right)&\quad\text{for}\quad t\rightarrow-\infty,\\
\dfrac{H}{2K^2}\left[1-i\dfrac{2K}{H}\left(\gamma+\psi(-iK/H)\right)\right]&\quad\text{for}\quad t\rightarrow+\infty,
\end{cases}
\end{equation}
where $\gamma$ is Euler's constant and $\psi(x)=\Gamma'(x)/\Gamma(x)$ is the digamma function. For $t\rightarrow-\infty$, the power spectrum of the signal is
\begin{equation}
P(K)\sim-\ln(K\epsilon).
\label{eq:pow1}
\end{equation}
For $t\rightarrow+\infty$, the power spectrum of the signal is
\begin{align}
P(K)&\sim
\dfrac{\pi}{e^{2\pi K/H}-1}\quad
\text{for}\quad K<H.
\label{eq:pow2}
\end{align}
Therefore, for long wavelength modes larger than the de Sitter horizon
size $H^{-1}$, the power spectrum observed in region III shows the
Planckian distribution with temperature $T_H=H/(2\pi)$ originating from
Hawking radiation in the de Sitter region II. Comparing
\eqref{eq:pow1} and \eqref{eq:pow2}, the power $P(t=+\infty)$ is
larger than $P(t=-\infty)$ in the long wavelength region $K<H$ due to
the Hawking radiation. This enhancement or amplification of the power
of long wavelength fluctuations larger than the de Sitter horizon
$H^{-1}$ has the same physical origin as the generation of
primordial quantum fluctuations in cosmic inflation.
To understand the behavior of Hawking radiation in more detail, we
consider the covariance matrix of a single mode
$(\hat Q_\text{A},\hat P_\text{A})$
\begin{equation}
V_\text{A}=
\begin{bmatrix}
a_1&a_3\\ a_3&a_2
\end{bmatrix},
\end{equation}
and the determinant of this matrix, which equals the square $\nu^2$ of the
symplectic eigenvalue of the reduced state $\rho_\text{A}$. The physical
condition on the state requires $\nu^2\ge1/4$, and $\nu^2=1/4$
corresponds to a pure state. As we are considering a single sub-region
A of the entire space, the state $\rho_\text{A}$ is mixed and the
mixedness represents the amount of entanglement between A and its
complement $\overline{\text{A}}$. The temporal behavior of $\nu^2$
(Fig. \ref{fig:nu2}) shows that Hawking radiation increases the
mixedness of the mode A and enhances entanglement between A and
$\overline{\text{A}}$. The modes A and $\overline{\text{A}}$ constitute
a pure two-mode squeezed state, and the amount of squeezing, and hence the
entanglement between A and $\overline{\text{A}}$, increases due to the Hawking
radiation created by the rapid accelerated expansion of the background
space.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{./figs/nu2.pdf}
\caption{Evolution of the symplectic eigenvalue of the state
$\rho_\text{A}$. $\nu^2-1/4$ represents the mixedness of this state,
which also represents the amount of entanglement between A and $\overline{\text{A}}$.}
\label{fig:nu2}
\end{figure}
\section{Entanglement of spatial modes}
We investigate behavior of entanglement between spatial regions A and
B in region III (Fig. \ref{fig:setup}) using associated spatial local
modes $(\hat Q_\text{A},\hat P_\text{A})$ and
$(\hat Q_\text{B},\hat P_\text{B})$. We can evaluate entanglement
negativity from the symplectic eigenvalues of the covariance matrix
$V_\text{AB}$; the covariance matrix $V_\text{AB}$ has two symplectic
eigenvalues $\nu_{\pm}\ge 1/2$, and the partially transposed
covariance matrix $\tilde V_\text{AB}$ has two symplectic eigenvalues
$\tilde\nu_{\pm}$. Based on the positivity criterion of the partially
transposed covariance matrix for bipartite Gaussian
states \cite{Peres1996,Horodecki1997,Simon2000}, a measure of entanglement between A and B is
given by the logarithmic negativity defined as \cite{Vidal2002a}
\begin{equation}
E_N:=-\mathrm{min}[\log_2(2\tilde\nu_{-}),0].
\end{equation}
For $E_N>0$, the bipartite state $\rho_\text{AB}$ is entangled and the
logarithmic negativity represents the amount of entanglement between A
and B.
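As a worked example (not the paper's data), the logarithmic negativity can be evaluated from a covariance matrix in standard form; for a two-mode squeezed vacuum with squeezing parameter $r$, the known result $E_N=2r/\ln 2$ is reproduced, which also fixes the conventions ($\nu=1/2$ for the vacuum):

```python
# Logarithmic negativity from a two-mode Gaussian covariance matrix in
# standard form (illustrative example, not the paper's data).
import math

def log_neg(a, b, cp, cm):
    """Covariance blocks A = diag(a,a), B = diag(b,b), C = diag(cp,cm);
    vacuum convention nu = 1/2."""
    detV = (a * b - cp * cp) * (a * b - cm * cm)
    delta_pt = a * a + b * b - 2 * cp * cm   # partial transpose flips det C
    nu_minus = math.sqrt((delta_pt - math.sqrt(delta_pt**2 - 4 * detV)) / 2)
    return max(0.0, -math.log2(2 * nu_minus))

# Two-mode squeezed vacuum with squeezing r: E_N = 2 r / ln 2.
r = 0.5
a = math.cosh(2 * r) / 2
c = math.sinh(2 * r) / 2
assert abs(log_neg(a, a, c, -c) - 2 * r / math.log(2)) < 1e-9
assert log_neg(0.5, 0.5, 0.0, 0.0) == 0.0   # vacuum: separable, E_N = 0
```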
\subsection{Minkowski case}
We first show the behavior of entanglement for the case that region II
is static and there is no expanding edge region (Minkowski case). We
confirm that the detection of entanglement between A and B is possible
using local modes defined by window functions \eqref{eq:window}. The
left panel of Fig. \ref{fig:neg-M} shows the symplectic eigenvalues as a
function of separation $dx$ of two regions, and the right panel of
Fig. \ref{fig:neg-M} shows the negativity as a function of $dx$. There
exists a critical separation below which entanglement
between A and B can be detected.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\linewidth]{./figs/neg-M.pdf}
\caption{Left panel: separation dependence of symplectic eigenvalues
$\nu_{-}$(blue), $\tilde\nu_{-}$(red). Positivity of the bipartite
state $\nu_{-}\ge 1/2$ is preserved for $dx\ge 0$. For
$\tilde\nu_{-}<1/2$, A and B are entangled. Right panel:
separation dependence of logarithmic negativity for Minkowski
case. The bipartite state $\rho_\text{AB}$ becomes separable for
large separation ($\epsilon=0.01, \ell=1, L=1$). The critical
separation depends on the value of the UV cutoff $\epsilon$.}
\label{fig:neg-M}
\end{figure}
\noindent
The left panel of Fig. \ref{fig:neg-ep} shows the dependence of the
negativity on the cutoff parameter $\ell/\epsilon$ with $dx=0$. For large
values of the ratio $\ell/\epsilon$, the bipartite system AB becomes
separable and local modes cannot detect entanglement of the scalar
field. It is possible to understand this behavior from the viewpoint
of entanglement monogamy \cite{Hiroshima2007}. Let us focus on the
entanglement entropy of region A which is a subsystem of the entire
spatial region. The entanglement entropy for a single Gaussian mode
$(\hat Q_\text{A},\hat P_\text{A})$ is given by \cite{Holevo2001}
\begin{equation}
S_\text{A}=(\nu+1/2)\log_2(\nu+1/2)-(\nu-1/2)\log_2(\nu-1/2)
\end{equation}
with the symplectic eigenvalue $\nu$ of the covariance matrix
$V_\text{A}$ for the mode A. As is shown in the right panel of
Fig. \ref{fig:neg-ep}, $S_\text{A}$ behaves
$\propto\log(\ell/\epsilon)$,\footnote{This behavior is confirmed
numerically and we do not derive this relation analytically.}
which is the typical scaling behavior of entanglement entropy of the
massless scalar field in the 1+1 dimensional case
\cite{Bombelli1986,Srednicki1993}. This behavior implies that
entanglement between region A and its complement becomes larger as
$\ell/\epsilon$ increases. Concerning entanglement between A and B,
because A and its complement, and B and its complement, become
strongly entangled as $\ell/\epsilon$ becomes larger, owing to the monogamy
property of multipartite entanglement
\cite{Bengtsson2016,Hiroshima2007}, the entanglement between A and B
should become smaller, and it vanishes
above some critical value of $\ell/\epsilon$.
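The entropy formula above is straightforward to evaluate; a minimal sketch (illustrative) showing $S_\text{A}=0$ in the pure-state limit $\nu=1/2$ and its growth with the mixedness:

```python
# Entanglement entropy of a single Gaussian mode as a function of its
# symplectic eigenvalue nu (vacuum convention nu = 1/2); illustrative.
import math

def S_A(nu):
    if nu <= 0.5:
        return 0.0                 # pure-state limit
    return ((nu + 0.5) * math.log2(nu + 0.5)
            - (nu - 0.5) * math.log2(nu - 0.5))

assert S_A(0.5) == 0.0             # a pure state carries no entropy
assert S_A(2.0) > S_A(1.0) > 0.0   # entropy grows with mixedness
```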
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{./figs/neg-M-ep.pdf}
\caption{Left panel: $\ell/\epsilon$ dependence of negativity with
$dx=0$. Right panel: $\ell/\epsilon$ dependence of entanglement entropy
for a single region A. For $\ell/\epsilon\gg1$, the entanglement
entropy behaves as $S_A\propto \log(\ell/\epsilon)$.}
\label{fig:neg-ep}
\end{figure}
\subsection{De Sitter case}
We move on to the expanding edge case which mimics a de Sitter
universe and consider entanglement between adjacent regions A and B in
region III under the influence of Hawking radiation from the de Sitter
region II. Figure \ref{fig:neg-D} shows the evolution of entanglement
between A and B with different sizes of A and B with $dx=0$. As we can
observe from Fig. \ref{fig:neg-D}, following the transient change of
negativity during $0<t<3$, which is determined by the shape of the
window function, the negativity becomes asymptotically constant. The
final amount of entanglement is reduced compared to the initial
Minkowski value. The reduction of entanglement depends on the size of
spatial regions $\ell$. For $\ell=1.0-1.4$, a non-zero value of
negativity survives at $t=6$. On the other hand, for sufficiently
large region size $\ell\gg H^{-1}$, which corresponds to detection of
long-wavelength super-horizon fluctuations in the de Sitter universe,
the entanglement between A and B becomes zero after the arrival of
Hawking radiation. This behavior of ``entanglement death'' is the same
as that confirmed in inflationary models
\cite{Nambu2008,Nambu2011,Matsumura2018} and is responsible for the
emergence of classical behavior from quantum fluctuations. Thus, using
our setup of the QH experiment, it is possible to simulate the ``quantum
to classical transition'' of primordial quantum fluctuations in a
laboratory.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth]{./figs/neg-D.pdf}
\caption{Evolution of negativity between regions A and B for the de Sitter case
with different spatial region size $\ell$ ($dx=0, \epsilon=0.01, H=L=1$). Due to Hawking
radiation from the de Sitter region, negativity decreases
around $t=0\sim2$. The final value of negativity becomes smaller than
the initial negativity in the Minkowski region. For
$\ell=1.8,2.0$, the final value of negativity becomes zero and entanglement death occurs. For $\ell=1.6$, both death and revival of
entanglement are observed.}
\label{fig:neg-D}
\end{figure}
Figure \ref{fig:neg-D2} shows the region size dependence of the negativity at
$t=6$. For $\ell\ge 1.65$, the negativity becomes zero and the two regions A
and B become separable. The quantum correlation between the two regions is
lost for large scales compared to the de Sitter horizon length
$H^{-1}$. For these large scales, spatial correlations between A and B
exist only as classical correlations. Therefore, the long wavelength
Hawking radiation can be treated as classical stochastic fluctuations,
and we can confirm the classicality of Hawking radiation originating from
zero-point quantum fluctuations of the scalar field.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{./figs/DS-sepa.pdf}
\caption{Region size dependence of the negativity at $t=6$
($dx=0,L=H=1, \epsilon=0.01$). A and B become separable for large
scales $\ell\ge1.65H^{-1}$.}
\label{fig:neg-D2}
\end{figure}
\section{Summary and Speculation}
We considered the analog de Sitter universe realized by the expanding
edge of a QH system. We investigated the behavior of the chiral
massless scalar field corresponding to an edge excitation, and
discussed the detection of Hawking radiation from the de Sitter
region. In our setup of the expanding edge system, the spacetime
structure is similar to that of black hole formation via
gravitational collapse; the future event horizon is formed, and Hawking
radiation with a thermal spectrum from the vicinity of the future event
horizon is expected. The entanglement between spatial regions A and B
in the flat region is also evaluated, and we found that Hawking
radiation from the de Sitter region reduces the entanglement that existed
before its arrival; for a sufficiently large size
of the detection regions compared to the Hubble length in the de Sitter
region, the two regions become separable and only classical
correlations survive. This behavior is the same as that appearing in
cosmic inflation. To regularize the UV divergence of the quantum scalar
field, we introduced a UV cutoff as the scale at which the effective
field treatment of the edge excitation breaks down. The correlation
functions of the local spatial modes also contain this cutoff
dependence, and it is possible to examine the impact of the cutoff on
Hawking radiation using our experiment, which is related to the
trans-Planckian problem in black hole evaporation and cosmic
inflation. It is important to investigate how the effective theory
for the edge excitation breaks down below the cutoff length, since the
deviation from the effective theory may introduce corrections to the
massless Klein-Gordon equation adopted in this paper. Besides applying
the expanding edge of the QH system as a simulator of quantum
cosmology, it is also possible to explore fundamental
aspects of quantum mechanics and quantum field theory, because this
system can provide the squeezed vacuum state by amplification of the
vacuum fluctuations in the expanding edge region. Thus, investigation
of the violation of macrorealism (the Leggett-Garg inequality
\cite{Emary2014a}) with the quantum field and the realization of
quantum energy teleportation \cite{Hotta2014a} are possible.
In this paper, the Hall edges are described by quantum field theory in
curved space in the long-wavelength regime compared to the magnetic
length $\ell_{B}$. As seen in the above analysis, the edge can be
regarded as a fixed 1+1 dimensional universe. It may be interesting to
point out a possibility that the same system can be described by
different effective theories of quantum gravity. For instance, let us
consider a static QH system confined by a circular edge. The edge is
regarded as a closed 1+1 dimensional universe, the spacetime
curvature of which vanishes. Since the electrons located at the edge
are in a quantum state with position fluctuation, the edge fluctuates
quantum mechanically. This yields quantum superposition of edge
configurations with different edge lengths. In this sense, quantum
universes with different sizes are quantum mechanically
superposed. This suggests a realization of quantum gravity at the QH
edge. Though the precise model for the static quantum universe has not
yet been specified, the classical action may be given by the following
dilaton gravity model:
\begin{equation}
S=\int d^2 x \sqrt{-g(x)} \left(\Phi(x) R(x) +\frac{\Lambda}{\ell_B} \right),
\end{equation}
where $\Lambda$ is a positive constant, $\Phi(x)$ is a real scalar
field referred to as the dilaton field, and $R(x)$ is the scalar curvature
of the 1+1 dimensional universe. Taking the variation of $S$ with
respect to $\Phi(x)$ yields $R(x)=0$ as the equation of motion. Thus,
the classical action is evaluated as
\begin{equation}
S_\text{QG}=\frac{\Lambda}{\ell_B}\int d^2 x \sqrt{-g(x)} .
\end{equation}
By using a static configuration of $\varphi$ independent of $t$, let us parameterize the edge in the $x$-$y$ plane as
\begin{equation}
(x,y)=\left(x,\ell_B \varphi(x) \right),
\end{equation}
where the edge fluctuation occurs in the $y$ direction. The induced
metric for the edge is given by
\begin{equation}
ds^2 =-dt^2+dx^2+dy^2=-dt^2 +h(x)dx^2,
\end{equation}
where $h(x)=1+\ell_B^2 (\partial_x \varphi(x))^2$. Then the value of $S_\text{QG}$ is computed as
\begin{equation}
S_\text{QG}=\frac{\Lambda}{\ell_B}\int d^2x \sqrt{1+\ell_B^2 (\partial_x \varphi(x))^2}. \label{01}
\end{equation}
It may be worth noting that the above value of $S_\text{QG}$ can be reproduced by the classical action of the field theory of QH edges:
\begin{equation}
S_\text{QH}=\int d^2 x \left( N(x) \frac{\Lambda^2}{4} +\frac{1}{N} \left( (\partial_x \varphi)^2 +\frac{1}{\ell_B^2} \right) \right),
\end{equation}
where $N(x)$ is a lapse function. When we take $N(x)=1$, $S_\text{QH}$ yields the action of the quantum field theory for the edges. By taking the variation of $S_\text{QH}$ with respect to $N(x)$, we get
\begin{equation}
N(x)=\frac{2}{\Lambda \ell_B}\sqrt{1 +\ell_B^2 (\partial_x \varphi)^2}.
\end{equation}
It turns out that substitution of the above $N(x)$ into $S_\text{QH}$
reproduces the value of $S_\text{QG}$ in (\ref{01}). Though the
correct relation between the theories with $S_\text{QG}$ and
$S_\text{QH}$ remains vague at present, it may be interesting to
explore the correspondence and quantum gravity effective theory for
the QH edges. In this case, the circular edge corresponds to a closed
universe. The interpretation of the wave functions of the quantum
closed universe, which satisfy the Wheeler-DeWitt equation, can be
developed from the viewpoint of the many-body wave functions of the QH
systems.
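The consistency claimed above can be checked symbolically. The following sketch (an illustrative sympy check, not part of the derivation itself) verifies that the lapse $N(x)$ quoted above extremizes the $S_\text{QH}$ density and that substituting it back reproduces the $S_\text{QG}$ density $(\Lambda/\ell_B)\sqrt{1+\ell_B^2(\partial_x\varphi)^2}$:

```python
import sympy as sp

# Symbolic check of the S_QH -> S_QG reduction. Symbols follow the text:
# Lam = Lambda, lB = l_B, dphi stands for partial_x(varphi); N is the lapse.
Lam, lB, dphi, N = sp.symbols('Lambda ell_B dphi N', positive=True)

# Lagrangian density of S_QH.
density_QH = N * Lam**2 / 4 + (dphi**2 + 1 / lB**2) / N

# The lapse quoted in the text satisfies the stationarity condition dL/dN = 0.
N_text = (2 / (Lam * lB)) * sp.sqrt(1 + lB**2 * dphi**2)
stationarity = sp.diff(density_QH, N).subs(N, N_text)
assert sp.simplify(stationarity) == 0

# Substituting N back reproduces the S_QG density (Lambda/l_B)*sqrt(1 + l_B^2 phi'^2).
density_QG = sp.simplify(density_QH.subs(N, N_text))
assert sp.simplify(density_QG - (Lam / lB) * sp.sqrt(1 + lB**2 * dphi**2)) == 0
```

Here `dphi` is treated pointwise, which suffices because no derivatives of $N$ appear in $S_\text{QH}$.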
\begin{acknowledgements}
We would like to thank A. Matsumura, Y. Osawa, Y. Sugiyama and K. Yamamoto for providing
their valuable insight on the subject. This research was supported
in part by a Grant-in-Aid for Scientific Research, Grant No. 21H05188 (M.H.), No. 21H05182 (M.H.), No. JP19K03838 (M.H.), No. 19K03866 (Y.N.) and No. 22H05257 (Y.N.) from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan.
\end{acknowledgements}
\section{Introduction}
The anomalous transport induced by electromagnetic fields has been widely studied recently. In the presence of an axial chemical potential, a vector current will propagate parallel to an applied magnetic field, driven by triangle anomalies, which is the renowned chiral magnetic effect(CME)\cite{Kharzeev:2007tn,Kharzeev:2007jp,Kharzeev:2010gd,Son:2004tq}. Although this effect was initially found in the deconfined phase, it may exist in the hadronic phase as well\cite{Asakawa:2010bu}. Analogous to CME, a vector chemical potential can generate an axial current along the magnetic field, which is the so-called chiral separation effect(CSE)\cite{Fukushima:2008xe}.
These effects have been further derived from a variety
of different approaches, including
relativistic hydrodynamics \cite{Son:2009tf,Pu:2010as,Sadofyev:2010pr,Kharzeev:2011ds,Nair:2011mk},
kinetic theory \cite{Gao:2012ix,Son:2012wh,Stephanov:2012ki,Son:2012zy,Chen:2012ca,Pu:2012wn,Chen:2013iga,Manuel:2013zaa,Manuel:2014dza,Satow:2014lva,Duval:2014ppa},
and lattice simulations\cite{Abramczyk:2009gb,Buividovich:2009wi,Buividovich:2010tn,Yamamoto:2011gk,Bali:2014vja}.
They were also analyzed in strongly coupled plasmas through the AdS/CFT correspondence\cite{Yee:2009vw,Rebhan:2009vc,Gorsky:2010xu,Gynther:2010ed,Kalaydzhyan:2011vx,Hoyos:2011us,Gahramanov:2012wz}.
However, in the Sakai-Sugimoto(SS) model as a commonly used model for AdS/QCD\cite{Sakai:2004cn,Sakai:2005yt}, CME may disappear when requiring both gauge-invariance and conservation of the vector current\cite{Rebhan:2009vc,Yee:2009vw,Gynther:2010ed,Rubakov:2010qi}.
For a recent review of CME/CSE and related topics, see e.g. \cite{Kharzeev:2012ph,Liao:2014ava} and the references therein.
The effects are particularly important in the heavy ion experiments, where the charge separation could arise from the strong magnetic field produced from the colliding nuclei and non-vanishing chemical potentials in the quark gluon plasma(QGP). In light of CME/CSE, it was proposed that the thermal fluctuations of the vector and axial chemical potentials in thermal plasmas can further result in density waves propagating along the magnetic field as the chiral magnetic waves(CMW)\cite{Kharzeev:2010gd}. In \cite{Kharzeev:2010gd}, the dispersion relation of CMW was investigated in the framework of the SS model with zero chemical potentials. As shown in \cite{Burnier:2011bf}, the CMW could generate a chiral dipole and a charge quadrapole in QGP, which may contribute to the charge asymmetry of elliptic flow $v_2$ measured in the relativistic heavy ion collider(RHIC)\cite{Wang:2012qs,Ke:2012qb}. Further study of CMW in an expanding QGP can be found in \cite{Taghavi:2013ena}. In addition to the anomalous effects, the strong magnetic field also gives rise to profound phenomena such as the enhanced photon production \cite{Basar:2012bp,Fukushima:2012fg,Bzdak:2012fr,Wu:2013qja,Yee:2013qma,Muller:2013ila},
which could be crucial for the large elliptic flow observed in RHIC \cite{Adare:2011zr} and in the large hadron collider(LHC)\cite{Lohner:2012ct},
the production of heavy quarkonia\cite{Yang:2011cz,Machado:2013rta,Alford:2013jva}, and the modified shear viscosity of QGP\cite{Nam:2013fpa,Critelli:2014kra}.
In addition to the strong magnetic field, a strong electric field could be produced in heavy ion collisions as well. An electric field with a magnitude of order $m_{\pi}^2$, with $m_{\pi}$ being the pion mass, could exist at early times in asymmetric collisions such as Au+Cu\cite{Hirono:2012rt}.
Furthermore, the electric field can be comparable to the magnetic field on the basis of event-by-event fluctuations even in symmetric collisions\cite{Bzdak:2011yy,Deng:2012pc}. A novel phenomenon called the chiral electric separation effect(CESE) has been proposed in \cite{Huang:2013iia}, where an axial current can be produced parallel to the electric field in the presence of both vector and axial chemical potentials. The direct-current(DC) conductivity of the axial charge was found to be proportional to the product of the axial chemical potential and the vector chemical potential in weakly coupled QED with chemical potentials small compared to the temperature of the medium. Such a relation was later verified in the strongly coupled scenario in the SS model\cite{Pu:2014cwa}. Moreover, the relation approximately holds even for large chemical potentials. Unlike CME/CSE, since CESE does not originate from the Chern-Simons(CS) term related to the axial anomaly but only from the nonzero vector and axial chemical potentials, the axial current from CESE in the SS model is well defined. Besides, in Ref.\cite{Chen:2013tra}, studies of the electric conductivities of
non-singlet currents in a weakly coupled QCD system with multiple flavors
imply that similar behavior of the axial conductivities at small
chemical potentials could also be observed in QCD. Similar to CMW, the density fluctuations may induce propagating waves along the electric field, the chiral electric waves(CEW)\cite{Huang:2013iia}. In phenomenology, the combination of CME and CESE could possibly generate a quadrupole distribution of charged particles when the electric field and magnetic field are perpendicular to each other, as in asymmetric collisions. It is thus imperative to further investigate CESE and CEW.
We will continue our study in \cite{Pu:2014cwa} to further explore the CESE and CEW with arbitrary chemical potentials. From classical electrodynamics, the presence of an electric field and a magnetic field perpendicular to each other should yield a Hall current perpendicular to both applied fields. Since the CESE is analogous to a normal transport process governed by the interaction between the chiral particles, we will find an axial Hall current, similar to the axial current parallel to the electric field, in the absence of the axial anomaly.
In general, we analyze the CESE, classical Hall effect, and chiral Hall effect(CHE) in chiral systems in the presence of external electromagnetic fields and also investigate the propagating waves caused by the density fluctuations with arbitrary chemical potentials. Nevertheless, we will assume that the interaction between the chiral particles dominates the topological effect and thus neglect the CME/CSE. In addition, we will implement the SS model to compute the transport coefficients including the damping times, wave velocities, and diffusion constants of CEW.
For convenience, we briefly summarize CME, CSE, CESE,
CHE, CMW and CEW in Tab. \ref{tab:A-brief-summary}.
This paper is organized in the following order. In section \ref{Hall_effects}, we review the classical Hall effect and derive the axial Hall current. In section \ref{ph_implications}, we will discuss the phenomenological implications of the CESE and CHE. In section \ref{density_waves}, we then generalize both the CMW and CEW to the cases with arbitrary chemical potentials. Also, we analyze the CEW on the basis of the CESE and CHE. In section \ref{SS_model}, we review the setup of the SS model in a chiral symmetric phase at finite temperature with chemical potentials and a constant electric field perpendicular to a constant magnetic field, where we further derive the axial Hall current. In section \ref{CHE_holography}, we will analyze the CESE and CHE in different limits and present the numerical results in the framework of the SS model. In section \ref{CEW_holography}, we numerically solve for CEW in the SS model. In addition, we briefly compare the CEW at small chemical potentials in the strongly coupled QCD with that in the weakly coupled QED. Finally, we make a brief summary and outlook in section \ref{sum_outlook}. Throughout the paper, we will set ${\bf B}=B_x\hat{x}$, ${\bf E}=E_y\hat{y}$ when we discuss the Hall and chiral Hall effects, where ${\bf E}$ and ${\bf B}$ denote the external electric and magnetic fields in our systems.
\begin{table}
\caption{A brief summary of CME, CSE, CESE, CHE, CMW and CEW. Here $\mu_{V},\mu_{A}$
are the vector and axial chemical potentials, respectively. $\mathbf{j}_{v}$
and $\mathbf{j}_{a}$ are the vector and axial vector currents. $\sigma_{a},(\sigma_{v})_{zy},(\sigma_{a})_{zy}$
are transport coefficients. \label{tab:A-brief-summary} }
\centering{
\begin{tabular}{|c|c|c|}
\hline
& Currents & Possible phenomena\tabularnewline
\hline
\hline
Chiral Magnetic Effect & $\mathbf{j}_{v}=\frac{e}{2\pi^{2}}\mu_{A}\mathbf{B},$ & charge separation along $\mathbf{B}$ field \tabularnewline
\hline
Chiral Separation Effect & $\mathbf{j}_{a}=\frac{e}{2\pi^{2}}\mu_{V}\mathbf{B},$ & chirality separation along $\mathbf{B}$ field \tabularnewline
\hline
Chiral Electric Separation & $\mathbf{j}_{a}=\sigma_{a}\mathbf{E},$ & charge and chirality separation \tabularnewline
Effect & & along $\mathbf{E}$ field\tabularnewline
\hline
Chiral Hall Effect & $j_{v,z}=(\sigma_{v})_{zy}E_{y},$ & charge and chirality separation\tabularnewline
& $j_{a,z}=(\sigma_{a})_{zy}E_{y},$ & in rapidity direction\tabularnewline
\hline
\hline
Chiral Magnetic Wave & Evolution equations for & density wave induced by magnetic field\tabularnewline
& currents with CME, CSE & and charge separation along $\mathbf{B}$ field\tabularnewline
\hline
Chiral Electric Wave & Evolution equations for & density wave induced by electric field,\tabularnewline
& currents with CESE, CHE & charge separation along $\mathbf{E}$ field\tabularnewline
& & and rapidity direction\tabularnewline
\hline
\end{tabular}
\end{table}
\section{Hall effect and chiral Hall effect}\label{Hall_effects}
In classical physics, the Hall current arises from the balance
of two forces in a conductor, i.e. the electric and magnetic forces,
\begin{equation}
e\mathbf{E}=-e\mathbf{v}\times\mathbf{B},\label{eq:Lorentz_force_01}
\end{equation}
where $\mathbf{v}$ is the velocity of a single electron or positron and
$e$ is the particle charge. In a many-body system, multiplying
both sides of the above equation by the particle number density $n$
yields
\begin{equation}
ne\mathbf{E}=-ne\mathbf{v}\times\mathbf{B}.
\end{equation}
Recall the charge currents in an equilibrium state, $j_{eq0}=n$ and
$\mathbf{j}_{eq}(x)=n\mathbf{\bar{v}}$, with $\bar{\mathbf{v}}$
the average of the particles' velocities at point $x$. Without external
fields, the system is homogeneous and $\mathbf{j}_{eq}(x)=n\mathbf{\bar{v}}\rightarrow0$
in the local rest frame. In the presence of external fields, most
particles will be accelerated by the $\mathbf{E}$ field and form
the normal electric conducting flow, while a few particles, which move
orthogonally to the $\mathbf{E}$ and $\mathbf{B}$ fields and satisfy Eq. (\ref{eq:Lorentz_force_01}),
will not feel the external fields and give rise to a new current $\mathbf{j}$.
Neglecting higher-order terms in \textbf{$\mathbf{E},\mathbf{B}$},
this new current satisfies
\begin{equation}
j_{0}\mathbf{E}=-\mathbf{j}\times\mathbf{B}.
\end{equation}
Since this current is proportional to the magnitude of the \textbf{$\mathbf{E}$}
field, one can regard it as another conducting flow and introduce
the conductivity tensor,
\begin{equation}
j_{i}=\sigma_{ij}eE_{j}.
\end{equation}
If $\mathbf{E}=E\hat{y}$ and $\mathbf{B}=B\hat{x}$, we find
\begin{equation}
\sigma_{zy}=-\frac{n}{eB},\label{eq:Hall_strong_B_01}
\end{equation}
which is the Hall conductivity. Note that the above discussion cannot
be applied to the case of a small $\mathbf{B}$ field: the balance
of the two forces can never be reached if $|\mathbf{E}|>c|\mathbf{B}|$,
with $c$ the speed of light. Since there is no Hall effect at $B=0$,
we expect that for small $\mathbf{B}$ the Hall conductivity takes the form
\begin{equation}
\sigma_{zy}=-n\tau_{H}^{2}eB,\label{eq:Hall_weak_B_01}
\end{equation}
where $\tau_{H}$ is a parameter with dimension $\text{MeV}^{-2}$. Physically,
$\tau_{H}$ is related to the interaction between particles: when
$\mathbf{B}$ is weak, the interaction among particles provides an
effective force on each particle that helps to satisfy
Eq. (\ref{eq:Lorentz_force_01}). As shown in Eq. (\ref{eq:sol_Langevin_01})
of Appendix A, $\tau_{H}$ can be obtained in the weak-magnetic-field
limit from the Langevin equations (\ref{lorentzf}), i.e. $\tau_{H}=\xi M$,
with $\xi$ the drag coefficient related to the interactions and
$M$ the mass of the particles. A systematic discussion of both the strong
and weak $B$ limits via the Langevin equation and the Boltzmann equation
in the relaxation-time approximation is given in Appendix A.
Although it may seem that the normal electric conductivities $\sigma_{ii}$
vanish in this discussion, for fixed \textbf{$\mathbf{E}$} and
$\mathbf{B}$ fields only a few particles can
satisfy Eq. (\ref{eq:Lorentz_force_01}), as mentioned above, and the
others are still accelerated by the \textbf{$\mathbf{E}$} field.
Therefore, the normal electric conducting flow remains. This can be
understood in the language of the Langevin or Boltzmann equations, as shown in Appendix A.
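The two limits (\ref{eq:Hall_strong_B_01}) and (\ref{eq:Hall_weak_B_01}) can be illustrated with a minimal numerical sketch. The relaxation-type drag force $-\gamma M\mathbf{v}$ below and all parameter values are illustrative assumptions standing in for the Langevin dynamics of Appendix A, not the text's actual setup:

```python
import numpy as np

# Toy steady-state check of the Hall conductivity in the strong- and weak-B
# limits. Assumption (not from the text): a relaxation-type drag -gamma*M*v,
# so the steady state solves  gamma*M*v = e*(E + v x B), E = E*yhat, B = B*xhat.
def hall_sigma_zy(n, e, E, B, gammaM):
    # Linear system A v = b for the steady-state velocity v = (v_x, v_y, v_z).
    A = np.array([[gammaM, 0.0, 0.0],
                  [0.0, gammaM, -e * B],
                  [0.0, e * B, gammaM]])
    b = np.array([0.0, e * E, 0.0])
    v = np.linalg.solve(A, b)
    return n * v[2] / (e * E)   # sigma_zy defined through j_z = sigma_zy * e * E

n, e, E = 1.0, 1.0, 1e-3
# Strong-B limit: sigma_zy -> -n/(eB), independent of the drag.
B = 1e3
assert abs(hall_sigma_zy(n, e, E, B, gammaM=1.0) - (-n / (e * B))) < 1e-6
# Weak-B limit: sigma_zy is linear in B, here with tau_H = 1/(gamma*M).
B = 1e-3
assert abs(hall_sigma_zy(n, e, E, B, gammaM=1.0) - (-n * e * B)) < 1e-6
```

In this toy parameterization the weak-field coefficient corresponds to $\tau_H=1/(\gamma M)$; the precise relation $\tau_H=\xi M$ follows from the Langevin equations of Appendix A.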
Now let us extend our discussion to a chiral fermion system. In this
case, the single charge current is replaced by the right- and left-handed
currents, $\mathbf{j}_{R}$ and $\mathbf{j}_{L}$. In the presence
of an axial chemical potential $\mu_{A}$, the Hall conductivities in
Eqs. (\ref{eq:Hall_strong_B_01}, \ref{eq:Hall_weak_B_01}) for $\mathbf{j}_{R/L}$
will differ because $n_{R}\neq n_{L}$,
\begin{equation}
(j_{R/L})_i=(\sigma_{R/L})_{ij}E_{j}.
\end{equation}
Therefore, the vector and axial vector currents are defined as,
\[
\mathbf{j}_{v}=\frac{1}{2}(\mathbf{j}_{R}+\mathbf{j}_{L}),\;\mathbf{j}_{a}=\frac{1}{2}(\mathbf{j}_{R}-\mathbf{j}_{L}).
\]
There will be a chiral Hall effect(CHE) caused by the difference between the Hall
conductivities of right- and left-handed fermions. If $\mathbf{E}=E_y\hat{y}$ and
$\mathbf{B}=B_x\hat{x}$, we can define the normal Hall conductivity,
\begin{equation}
(\sigma_{v})_{zy}=-(\sigma_v)_{yz}=\frac{1}{2}(\sigma_{R}+\sigma_{L})_{zy},
\end{equation}
and the chiral Hall conductivity,
\begin{equation}
(\sigma_a)_{zy}=-(\sigma_a)_{yz}=\frac{1}{2}(\sigma_{R}-\sigma_{L})_{zy}.
\end{equation}
Now we can discuss the properties of the normal and chiral Hall conductivities. The parity transformation, $\mathbf{x\rightarrow-x}$, leads to
\begin{equation}
(\sigma_{a})_{zy}(\mathbf{x})=-(\sigma_{a})_{zy}(-\mathbf{x}),\;(\sigma_{v})_{zy}(\mathbf{x})=(\sigma_{v})_{zy}(-\mathbf{x}),
\end{equation}
which implies that $(\sigma_{a})_{zy}\propto\mu_{A}$, since at the macroscopic
scale the only pseudoscalar in our system is $\mu_{A}$.
In the limit of small $\mu_{V}$ and $\mu_{A}$, from Eqs. (\ref{eq:Hall_strong_B_01},
\ref{eq:Hall_weak_B_01}) we find, in the weak $\mathbf{B}$ field
case,
\begin{eqnarray}\label{paritysmallmu}
(\sigma_{v})_{zy} & = & \chi_{e}eB_x\mu_V,\nonumber \\
(\sigma_{a})_{zy} & = & \chi_{5e}eB_x\mu_A,\label{eq:power_counting_Hall_weak_01}
\end{eqnarray}
and in a strong $\mathbf{B}$ field case,
\begin{eqnarray}
(\sigma_{v})_{zy} & = & \chi_{e}^{\prime}T^{2}\mu_V/(eB_x),\nonumber \\
(\sigma_{a})_{zy} & = & \chi_{5e}^{\prime}T^{2}\mu_A/(eB_x),\label{eq:power_counting_hall_strong_02}
\end{eqnarray}
with $\chi_{e,5e},\chi_{e,5e}^{\prime}$ dimensionless functions of
$T$ and $\mathbf{E}$.
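The parity counting can be illustrated with a short symbolic check. Assuming, as in the decoupled picture used later, that the R/L Hall conductivities are a single smooth function of the corresponding chemical potential, $\sigma_{R/L}=f(\mu_V\pm\mu_A)$ (an illustrative assumption), the chiral combination is odd in $\mu_A$ and starts at linear order:

```python
import sympy as sp

# Illustrative assumption: sigma_{R/L} = f(mu_V +/- mu_A) for one smooth f.
muV, muA = sp.symbols('mu_V mu_A')
f = sp.Function('f')

sigma_a = (f(muV + muA) - f(muV - muA)) / 2   # chiral Hall combination
sigma_v = (f(muV + muA) + f(muV - muA)) / 2   # normal Hall combination

# sigma_a is odd and sigma_v is even under mu_A -> -mu_A, as parity requires.
assert sp.simplify(sigma_a.subs(muA, -muA) + sigma_a) == 0
assert sp.simplify(sigma_v.subs(muA, -muA) - sigma_v) == 0

# Small-mu_A expansion: sigma_a = mu_A * f'(mu_V) + O(mu_A^3), linear in mu_A.
lead = sigma_a.series(muA, 0, 2).removeO().doit()
assert sp.simplify(lead - muA * sp.diff(f(muV), muV)) == 0
```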
A similar effect can be observed in an anisotropic fluid with a Berry
phase. Neglecting the interactions between particles, in external
electric and magnetic fields the effective velocity of a single right-handed
Weyl fermion reads \cite{Son:2012zy,Stephanov:2012ki,Chen:2012ca},
\begin{equation}
\dot{\mathbf{x}}=\frac{\mathbf{p}}{|\mathbf{p}|}+\mathbf{E}\times\boldsymbol{\Omega}+\mathbf{B}(\frac{\mathbf{p}}{|\mathbf{p}|}\cdot\boldsymbol{\Omega}),
\end{equation}
where $\mathbf{p}$ is the momentum of that particle and $\boldsymbol{\Omega}=\mathbf{p}/(2|\mathbf{p}|^{3})$
is the Berry curvature. The right handed current is defined by
\begin{equation}
\mathbf{j}_{R}=\int\frac{d^{3}p}{(2\pi)^{3}}\dot{\mathbf{x}}f(x,p)=n_{R}\mathbf{v}+\mathbf{E}\times\int\frac{d^{3}p}{(2\pi)^{3}}\boldsymbol{\Omega}f(x,p)+\frac{\lambda}{2}\mu_{A}\mathbf{B},
\end{equation}
where $f(x,p)$ is the distribution function. The third term gives
the CME. Once $f(x,p)$ is anisotropic in momentum space, the second
term will induce a current perpendicular to the electric field. However,
this current can survive even if \textbf{$\mathbf{B}=0$}.
In a 2+1 dimensional non-interacting fermion system, similar effects
from the Chern-Simons term in an effective action of 2+1 dimensional
QED also appear \cite{Chen:2013dca}.
Quite different from the above effects, the Hall and chiral Hall effects
depend on interactions and can survive without topological effects
or the Berry phase.
\section{Phenomenological Implications}\label{ph_implications}
The CESE and CHE may have important implications for the phenomenology of heavy ion collisions. For simplicity, we consider a system with only $u$ and $\bar{u}$
quarks in the following discussion. If $\mu_{V}>0$ or $\mu_{V}<0$,
there will be more particles or anti-particles, respectively. Similarly,
if $\mu_{A}>0$ or $\mu_{A}<0$, there are more right- or left-handed
fermions, respectively.
In the following discussion, we will assume there is a small net positive
$\mu_{V}$ after the two nuclei collide with each other, since in total
there are more particles than anti-particles. For the CME and CSE, a finite
$\mu_{A}$ is not necessary, since the CSE will induce a finite $\mu_{A}$
during the evolution. Nevertheless, to simplify the setup in the presence of
both electric and magnetic fields, we ignore the details of the axial charge distribution
from the CSE and simply assume there
exists a net positive $\mu_{A}$
as an initial condition when we discuss CESE and CHE. One can consider
the net $\mu_{A}$ to be induced by the CSE
or by fluctuations or topological
transitions of the QCD vacuum in each event.
\begin{figure}
\begin{minipage}[t]{1\columnwidth}
\subfigure[CSE and CME \label{fig:CSE-and-CME}]{\includegraphics[scale=0.42]{figs1.eps}
}
\hspace {1cm}
\subfigure[CESE and CME \label{fig:CESE-and-CME}]{\includegraphics[scale=0.42]{figs2.eps}
}
\end{minipage}
\par
\begin{centering}
\subfigure[Hall and chiral Hall effects\label{fig:Hall-and-chiral}]{\includegraphics[scale=0.42]{figs3.eps}
}
\par\end{centering}
\caption{A schematic illustration of (a) the CSE and CME, (b) the CESE and CME and (c) the Hall and chiral
Hall effects. In (b,c), for simplicity, we have assumed the system
has $\mu_{A}>0$. In these figures, the two nuclei collide along the $z$
direction. The strong magnetic and electric fields point in the $x$ and
$y$ directions. The origin of the frame is set at the center
of the fireball. In (c), we find a possible charge and chirality separation
induced by the Hall and chiral Hall effects in the $z$ direction. \label{fig:cartoon}}
\end{figure}
First, we give a brief review of the scenario caused by the CME and CSE. In
relativistic non-central heavy ion collisions, the two nuclei collide
along the $z$ (beam) direction shown in Fig. \ref{fig:cartoon},
and a very strong magnetic field $\mathbf{B}$ appears perpendicular
to the reaction plane, along the $x$ direction in Fig. \ref{fig:CSE-and-CME}.
According to the CSE, because of the nonzero net baryon chemical potential,
the strong magnetic field will induce an axial current and a local
axial chemical potential $\mu_{A}$. For example, assuming the reaction plane is the $y$-$z$ plane in Fig.\ref{fig:cartoon}, the CSE leads to $\mu_{A}>0$ in the $x>0$ region and $\mu_{A}<0$ in the $x<0$ region. When
there exists a local axial chemical potential, the CME gives rise
to charge separation, where the positively charged particles are pushed away from the reaction plane as illustrated in the right panel of Fig.\ref{fig:CSE-and-CME}.
These dynamical and reaction-plane-dependent fluctuations
of electric charge are expected not to vanish when averaged over many
events. A possible result of these effects is the charge asymmetry encoded in the $v_{2}$ difference
of $\pi^{\pm}$\cite{Burnier:2011bf}.
In \cite{Huang:2013iia}, the authors considered a
small global axial chemical potential induced by fluctuations or topological
transitions of the QCD vacuum in each event. For example, as shown in
Fig.\ref{fig:CESE-and-CME}, we assume there is a global $\mu_{A}>0$
in a certain event. In Cu+Au collisions, because of the geometric asymmetry
of the nuclei, there will be a large electric field pointing from the Au nucleus to the Cu nucleus in the
early stage\cite{Hirono:2012rt}; e.g., as
shown in Fig.\ref{fig:CESE-and-CME}, the $\mathbf{E}$ field is
along the $y$ direction. Because of the normal electric conduction $\mathbf{j}_{v}\propto\mathbf{E}$,
the positively and negatively charged particles will be dragged to the $y>0$
and $y<0$ regions, respectively. Moreover, since the CESE yields $\mathbf{j}_{a}\propto\mu_{V}\mu_{A}\mathbf{E}$,
the right- and left-handed quarks will also be pushed to the $y>0$
and $y<0$ regions, respectively. Therefore, the electric field enhances
the charge and chirality separation.
Now in the $y>0$
region there are more positively charged particles and more right-handed
particles, i.e. locally $\mu_{V}>0,\mu_{A}>0$, while in the $y<0$ region
there are more negatively charged particles and more left-handed particles,
i.e. locally $\mu_{V}<0,\mu_{A}<0$.
Now we can add the CME and CSE to the system.
As shown in the right panel of Fig.\ref{fig:CESE-and-CME}, in the $y>0$ region, since
$\mathbf{j}_v\propto\mu_{A}\mathbf{B}$ and $\mathbf{j}_{a}\propto\mu_{V}\mathbf{B}$
with $\mu_{V},\mu_{A}>0$, the positively charged and right-handed
(or negatively charged and left-handed) quarks will move
along (or opposite to) the $\mathbf{B}$ field
and accumulate on the $x>0$ (or $x<0$) side. Similarly,
in the $y<0$ region the opposite processes occur because $\mu_{V},\mu_{A}<0$.
On the $x>0$ side
of the $y<0$ region, the positively charged and right-handed particles
will move opposite to the $\mathbf{B}$ field.
Note that initially there is a net $\mu_{V}>0$ after the collisions.
Therefore, after the evolution, on the $x>0$ side there will still be
more positively charged particles in the $y>0$ region than negatively charged
particles in the $y<0$ region.
Eventually, the combination of magnetic and electric fields might
cause a quadrupole distribution at a certain angle $\Psi_{q}$ with respect to the reaction plane.
The Hall and chiral
Hall effects are expected to play a role in such strong electric and
magnetic fields. However, the dynamical evolution is very complicated,
and quantitative predictions require numerical
studies in hydrodynamics. Here, we only discuss some possible
phenomena at a qualitative level. For simplicity,
we neglect all chiral effects except the Hall and chiral Hall effects.
As illustrated in Fig. \ref{fig:Hall-and-chiral}, in heavy ion collisions
the fireball is approximately boost-invariant along the $z$ (beam) direction. Since both the magnetic and electric
fields lie in the transverse $(x,y)$ plane in Fig. \ref{fig:Hall-and-chiral},
according to (\ref{eq:Hall_strong_B_01}) and
(\ref{eq:power_counting_hall_strong_02}), the
Hall and chiral Hall effects will only induce currents anti-parallel
or parallel to the $z$ direction. For example, we
assume there are a global net $\mu_{A}>0$ and $\mu_{V}>0$ in the
QGP. Since $j_{v,z}\propto-n_{v}\propto-\mu_{V}$, the positively charged
particles will move anti-parallel to the $z$ direction, while the negatively charged
particles will move parallel to it. From $j_{a,z}\propto-n_{a}\propto-\mu_{A}$,
the chirality separation happens similarly. This further causes a nontrivial charge distribution in rapidity. Note that an axial Hall current can be generated by the CHE even at $\mu_V=0$.
Furthermore, when combining the CESE, CME, and CHE, we might find a difference in the charge asymmetry of the flow coefficients $v_n$ of charged pions at different rapidities. For example, we could expect the quadrupole distribution to be enhanced at backward rapidity but reduced at forward rapidity.
In the next section, we study the propagating waves arising from the density fluctuations and the above effects, where we only consider the fluctuations of the currents,
solve the linearized dispersion relation, and discuss all possible
propagating modes. We leave numerical studies based on hydrodynamic simulations to future work.
\section{Density Waves with finite chemical potentials}\label{density_waves}
\subsection{Chiral Magnetic Waves}
We first review the derivation of the CMW from the CME and CSE in the right-handed and left-handed (R/L) bases in the presence of an external magnetic field, now allowing for nonzero chemical potentials and electric conductivities of the medium. The CME and CSE along with the internal electric fields yield
\begin{eqnarray}\label{jRL}
{\bf j_R}=\lambda\mu_R{\bf B}+e\sigma_R{\bf E}_{in},\quad{\bf j_L}=-\lambda\mu_L{\bf B}+e\sigma_L{\bf E}_{in},
\end{eqnarray}
where $\lambda=N_ce/(2\pi^2)$, $\sigma_{R/L}$ denote the electric conductivities for right/left-handed fermions, and ${\bf B}$ denotes a constant strong background magnetic field. Therefore, the fluctuations of the magnetic fields from the charged particles can be neglected. For simplicity, we further consider a decoupled system, where the right-handed fermions do not interact with the left-handed fermions. Here ${\bf E}_{in}$ represents an ``internal'' electric field, which may come from a charged medium.
Given that the right-handed fermions do not interact with left-handed fermions, we may assume that $\mu_R(\sigma_R)$ and $\mu_L(\sigma_L)$ depend on $j^0_R$ and $j^0_L$, respectively. By implementing the conservation equation $\partial_{\mu}j^{\mu}=0$ and $\nabla\cdot{\bf B}=0$, $\nabla\cdot{\bf E_{in}}=j^0_v$, we obtain
\begin{eqnarray}\nonumber\label{waveeqRL0}
&&\partial_0 j^0_R+\lambda {\bf B\cdot\nabla}\mu_R+e\sigma_Rj^0_v+e{\bf E_{in}\cdot\nabla}\sigma_R=0,\\
&&\partial_0 j^0_L-\lambda {\bf B\cdot\nabla}\mu_L+e\sigma_Lj^0_v+e{\bf E_{in}\cdot\nabla}\sigma_L=0.
\end{eqnarray}
We then introduce the fluctuations of the charge densities in R/L bases,
\begin{eqnarray}
j^0_{R/L}\rightarrow n_{R/L}+\delta j_{R/L}^0.
\end{eqnarray}
Inserting the static charge densities $n_{R/L}$ or $n_{v/a}$ back
into (\ref{waveeqRL0}) and assuming uniform $\sigma_{R/L}$ and $\mu_{R/L}$,
we can solve for the charge densities directly, i.e. $n_{v}=n_{0,v}\exp\left(-e\sigma_{v}t\right)+\text{const.}$,
with $n_{0,v}$ a constant fixed by the initial conditions. This implies that
a nonzero charge density will eventually damp out with the damping
time $\tau_{c}=1/(e\sigma_{v})$, as was also indicated
in \cite{Huang:2013iia}. Therefore, the time scale of the fluctuations
$\delta j_{R/L}$ or $\delta j_{v/a}$ is required to be much smaller
than the damping time $\tau_{c}$. Fortunately, in the model used in
Sec. \ref{CHE_holography} below, we find that the damping
time scale is a few $fm/c$.
By using the results of our previous study of the DC conductivities
in holography in \cite{Pu:2014cwa}, we get $e\sigma_{v}\sim5T\hat{\sigma}_{v}$
with $\hat{\sigma}_{v}$ a dimensionless constant depending
on the ratios of the vector and axial chemical potentials to the temperature.
Taking $T=200$ MeV as the average temperature at RHIC, we obtain
$e\sigma_{v}\sim26$ MeV for $\mu_{V}=\mu_{A}=0$ and $e\sigma_{v}\sim36$
MeV for $\mu_{V}=4T$ and $\mu_{A}=0$. The corresponding characteristic
times are $\tau_{c}\sim7.6fm/c$ and $\tau_{c}\sim5.5fm/c$, respectively.
These damping times are sufficiently long compared with the time scale
of the fluctuations assumed here. In this case, we can simply
treat $n_{R/L}$ or $n_{v/a}$ as constants in the following discussion.
Similarly, according to lattice calculations\cite{Aarts:2007wj,Ding:2010ga,Tuchin:2013ie},
the DC conductivity of a static QGP is $e\sigma_{v}\sim5.8T/T_{c}$
MeV with $T_{c}$ the critical temperature. The damping time scale
is about $\tau_{c}=1/(e\sigma_{v})\sim17-34fm/c$ for $T\sim T_{c}-2T_{c}$,
the typical temperature range of the QGP at RHIC.
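The quoted damping times follow from the unit conversion $\tau_{c}=\hbar c/(e\sigma_{v})$ with $\hbar c\simeq197.33$ MeV fm; a quick numerical cross-check:

```python
# Cross-check of the damping times quoted above: tau_c = 1/(e*sigma_v),
# converted from MeV to fm/c with hbar*c ~ 197.33 MeV*fm.
HBARC = 197.33  # MeV fm

def damping_time_fm(e_sigma_v_MeV):
    return HBARC / e_sigma_v_MeV

# Holographic values from Pu:2014cwa at T = 200 MeV:
assert abs(damping_time_fm(26.0) - 7.6) < 0.1   # mu_V = mu_A = 0
assert abs(damping_time_fm(36.0) - 5.5) < 0.1   # mu_V = 4T, mu_A = 0
# Lattice estimate e*sigma_v ~ 5.8 T/T_c MeV for T between T_c and 2T_c:
assert 17.0 <= damping_time_fm(5.8 * 2.0) <= damping_time_fm(5.8 * 1.0) <= 34.1
```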
From (\ref{jRL}), we find
\begin{eqnarray}
&&{\bf\delta j_R}=\lambda\alpha_R\delta j^0_R{\bf B}+e\beta_{R}\delta j^0_{R}{\bf E_{in}},\quad{\bf\delta j_L}=-\lambda\alpha_L\delta j^0_L{\bf B}+e\beta_{L}\delta j^0_{L}{\bf E_{in}},
\end{eqnarray}
where
\begin{eqnarray}
\alpha_{R/L}=\left(\frac{\partial\mu_{R/L}}{\partial j^0_{R/L}}\right)_{j^0_{R/L}\rightarrow n_{R/L}},\quad \beta_{R/L}=\left(\frac{\partial\sigma_{R/L}}{\partial j^0_{R/L}}\right)_{j^0_{R/L}\rightarrow n_{R/L}}.
\end{eqnarray}
By assuming a uniform charge distribution, where $n_{R/L}$ are spacetime independent, (\ref{waveeqRL0}) becomes
\begin{eqnarray}\nonumber\label{waveeqRL}
&&\partial_0\delta j^0_R+\lambda \alpha_R{\bf B\cdot\nabla}\delta j^0_R
+e\beta_Rn_v\delta j^0_R+e\sigma_R\delta j^0_v+e\beta_R{\bf E_{in}\cdot\nabla}\delta j^0_R=0,\\
&&\partial_0\delta j^0_L-\lambda \alpha_L{\bf B\cdot\nabla}\delta j^0_L
+e\beta_Ln_v\delta j^0_L+e\sigma_L\delta j^0_v+e\beta_L{\bf E_{in}\cdot\nabla}\delta j^0_L=0,
\end{eqnarray}
where the $e\sigma_{R/L}\delta j^0_v$ terms arise from the fluctuation of $j^0_v$ in (\ref{waveeqRL0}).
Here we assume that $\mu_{R/L}$ and $\sigma_{R/L}$ have no spatial dependence, while their fluctuations do.
For $|{\bf E}_{in}|\ll|{\bf B}|$, we may drop the last terms explicitly depending on the electric field, whereas we keep the terms contributed by nonzero $\beta_{R/L}$ and $\sigma_{R/L}$.
We may now rewrite (\ref{waveeqRL}) in terms of the vector/axial(v/a) bases, which reads
\begin{eqnarray}\nonumber\label{eq:perturbation_01}
&&\partial_0\delta j^0_v+\lambda (\alpha_{-}{\bf B\cdot\nabla}\delta j^0_v+\alpha_{+}{\bf B\cdot\nabla}\delta j^0_a)
+en_v(\beta_+\delta j_v^0+\beta_-\delta j_a^0)+e\sigma_v\delta j^0_v=0,\\
&&\partial_0\delta j^0_a+\lambda (\alpha_{-}{\bf B\cdot\nabla}\delta j^0_a+\alpha_{+}{\bf B\cdot\nabla}\delta j^0_v)
+en_v(\beta_-\delta j_v^0+\beta_+\delta j_a^0)+e\sigma_a\delta j^0_v=0,
\end{eqnarray}
where
\begin{eqnarray}
\delta j^{\mu}_{v/a}=\frac{1}{2}(\delta j^{\mu}_R\pm \delta j^{\mu}_L),\quad \alpha_{\pm}=\frac{1}{2}(\alpha_R\pm\alpha_L),\quad
\beta_{\pm}=\frac{1}{2}(\beta_R\pm\beta_L),\quad
\sigma_{v/a}=\frac{1}{2}(\sigma_R\pm\sigma_L).
\end{eqnarray}
By taking $\delta j^0_{v/a}=C_{v/a}e^{-i\omega t+i{\bf k\cdot x}}$ with $C_{v/a}$ being constants, we derive the dispersion relation
\begin{eqnarray}\label{CMWomega}
\omega_{\pm}=\lambda\alpha_-{\bf B\cdot k}-ien_v\beta_+-\frac{ie\sigma_v}{2}\pm
\sqrt{\left(\lambda\alpha_+{\bf B\cdot k}-ien_v\beta_-\right)\left(\lambda\alpha_+{\bf B\cdot k}-ie(n_v\beta_-+\sigma_a)\right)-\frac{e^2\sigma_v^2}{4}},
\end{eqnarray}
where $C_a=\pm C_v$.
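The closed form (\ref{CMWomega}) can be cross-checked numerically: with the plane-wave ansatz, (\ref{eq:perturbation_01}) reduces to a $2\times2$ eigenvalue problem whose eigenvalues must coincide with $\omega_{\pm}$. A minimal sketch with arbitrary test parameters:

```python
import numpy as np

# Numerical check of the dispersion relation: the two branches omega_{+/-}
# must equal the eigenvalues of the 2x2 matrix of the linearized evolution
# equations in the (delta j_v, delta j_a) basis. All values are test numbers.
lam, e = 0.7, 1.0
am, ap = 0.3, 1.1          # alpha_-, alpha_+
bm, bp = 0.2, 0.5          # beta_-,  beta_+
sv, sa = 0.9, 0.4          # sigma_v, sigma_a
nv, Bk = 1.3, 0.8          # n_v, B.k

# omega * delta = M delta, read off from the linearized equations.
M = np.array([
    [lam*am*Bk - 1j*e*(nv*bp + sv), lam*ap*Bk - 1j*e*nv*bm],
    [lam*ap*Bk - 1j*e*(nv*bm + sa), lam*am*Bk - 1j*e*nv*bp],
])
eigs = np.linalg.eigvals(M)

# Closed-form branches omega_{+/-} from the dispersion relation.
root = np.sqrt((lam*ap*Bk - 1j*e*nv*bm) * (lam*ap*Bk - 1j*e*(nv*bm + sa))
               - e**2 * sv**2 / 4 + 0j)
base = lam*am*Bk - 1j*e*nv*bp - 1j*e*sv/2
omega = np.array([base + root, base - root])

assert np.allclose(np.sort_complex(eigs), np.sort_complex(omega))
```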
In the hydrodynamic description, we may make a small-momentum expansion of the right-hand side of (\ref{CMWomega}),
\begin{eqnarray}\nonumber\label{CMWdispersion}
\omega_{\pm}&=& -ie\left(n_v \beta_{+}+\frac{\sigma_v}{2}\right)\mp ie\sqrt{n_v^2 \beta_-^2+ n_v\beta_-\sigma_a+\frac{\sigma_v^2}{4}}+ \lambda \left(\alpha_-\pm\frac{\alpha_+(2n_v\beta_{-}+\sigma_a)}{\sqrt{4n_v^2\beta_-^2+4 n_v \beta_- \sigma_a+\sigma_v^2}}\right){\bf B\cdot k}
\\
&&\pm\frac{i \alpha_+^2 \lambda^2 \left(\sigma_v^2-\sigma_a^2\right) ({\bf B\cdot k})^2}{e\left(4 n_v^2\beta_-^2+4 n_v \beta_- \sigma_a+\sigma_v^2\right)^{3/2}}
+\mathcal{O}\left({({\bf B\cdot k})^3}\right).
\end{eqnarray}
The momentum-independent terms above characterize the damping, the prefactors of the terms linear in ${\bf k}$ correspond to the wave velocities,
and the last term, proportional to ${\bf k}^2$, is associated with diffusion.
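A similar numerical check confirms that the truncated expansion in (\ref{CMWdispersion}) agrees with the exact roots of (\ref{CMWomega}) up to the neglected $\mathcal{O}(({\bf B\cdot k})^3)$ terms; the sketch below again uses hypothetical parameter values:

```python
import numpy as np

# Hypothetical parameter values, as before; K stands for B.k.
lam, e = 0.7, 1.0
a_m, a_p, b_m, b_p = 0.3, 0.9, 0.2, 0.5
s_v, s_a, n_v = 0.4, 0.1, 0.6
S = np.sqrt(n_v**2*b_m**2 + n_v*b_m*s_a + s_v**2/4)
S4 = np.sqrt(4*n_v**2*b_m**2 + 4*n_v*b_m*s_a + s_v**2)

def exact(K):
    # Eigenvalues of the linearized 2x2 system
    M = np.array([
        [lam*a_m*K - 1j*e*n_v*b_p - 1j*e*s_v, lam*a_p*K - 1j*e*n_v*b_m],
        [lam*a_p*K - 1j*e*n_v*b_m - 1j*e*s_a, lam*a_m*K - 1j*e*n_v*b_p]])
    return np.linalg.eigvals(M)

def truncated(K, s):
    # s = +1, -1 labels the two modes of eq. (CMWdispersion)
    return (-1j*e*(n_v*b_p + s_v/2) - s*1j*e*S
            + lam*(a_m + s*a_p*(2*n_v*b_m + s_a)/S4)*K
            + s*1j*a_p**2*lam**2*(s_v**2 - s_a**2)*K**2/(e*S4**3))

K = 1e-3
key = lambda w: (round(w.real, 6), round(w.imag, 6))
approx = np.array(sorted([truncated(K, +1), truncated(K, -1)], key=key))
err = np.max(np.abs(approx - np.array(sorted(exact(K), key=key))))
assert err < 1e-7   # residual is O(K^3)
```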
For a chargeless system ($n_v=0$), the two modes become
\begin{eqnarray}\nonumber
\omega_+&=&-ie\sigma_v+\lambda \left(\alpha_{-} +\alpha_{+}\frac{\sigma_a}{\sigma_v}\right){\bf B\cdot k}+i(e\sigma_v)^{-1}\alpha_+^2 \lambda ^2 \left(1-\frac{\sigma_a^2}{\sigma_v^2}\right) ({\bf B\cdot k})^2+\mathcal{O}\left({({\bf B\cdot k})^3}\right),\\
\omega_-&=& \lambda \left(\alpha_{-}-\alpha_{+}\frac{\sigma_a}{\sigma_v}\right) {\bf B\cdot k}-i(e\sigma_v)^{-1}\alpha_+^2 \lambda ^2 \left(1-\frac{\sigma_a^2}{\sigma_v^2}\right) ({\bf B\cdot k})^2+\mathcal{O}\left({({\bf B\cdot k})^3}\right).
\end{eqnarray}
In the limit of $n_v=0$ and $\sigma_{v/a}=0$, the dispersion relation in (\ref{CMWomega}) further reduces to
\begin{eqnarray}\label{dispCMW1}
\omega_{\pm}=\lambda({\bf B\cdot k})(\alpha_-\mp\alpha_+)=-\lambda({\bf B\cdot k})\alpha_L\mbox{ or }
\lambda({\bf B\cdot k})\alpha_R.
\end{eqnarray}
It turns out that there exist two wave velocities, $v_{\chi}=N_c|e{\bf B}|\alpha_{R/L}/(2\pi^2)$. For small chemical potentials (small charge densities),
$\alpha_R=\alpha_L$ and the two velocities become degenerate. Our result then reduces to that found in \cite{Kharzeev:2010gd}.
\subsection{Chiral Electric Waves}
Generally, in a QCD plasma, the interaction between left- and right-handed
fermions will affect the propagating modes. However,
since we will only investigate these modes in the SS model, in which there
are no effective interactions between fermions of different
chiralities, we will neglect this kind of interaction in
the following discussion; i.e., we assume that $\sigma_{R}$ (or $\sigma_{L}$)
is a function of $T$ and $\mu_{R}$ (or $\mu_{L}$) only.
By following the same strategy, we can derive the CEW in the presence of an external electric field. We may start with
\begin{eqnarray}
{\bf j_R}=e\sigma_R(\mu_R){\bf E}=e\sigma_R(j^0_R){\bf E},\quad{\bf j_L}=e\sigma_L(\mu_L){\bf E}=e\sigma_L(j^0_L){\bf E}.
\end{eqnarray}
In general, we set ${\bf E=E_{ex}+E_{in}}$, where ${\bf E_{ex}}$ and ${\bf E_{in}}$ denote the external and internal electric fields, respectively. We assume that the external electric field is constant, whereas ${\bf \nabla\cdot E_{in}}=j^0_v$.
Similarly, we introduce the fluctuations of the currents,
\begin{eqnarray}\label{flucj}
{\bf\delta j_{R/L}}=e\beta_{R/L}\delta j^0_{R/L}{\bf E}.
\end{eqnarray}
The conservation equation $\partial_{\mu}j^{\mu}=0$ then leads to
\begin{eqnarray}
\partial_0j^0_{R/L}+e{\bf E\cdot\nabla}\sigma_{R/L}+e\sigma_{R/L}{\bf\nabla\cdot E}=0.
\end{eqnarray}
By further perturbing the above equation and utilizing $\nabla\cdot{\bf E}=j_v^0$ and $\delta\sigma_{R/L}=\beta_{R/L}\delta j^0_{R/L}$, we find
\begin{eqnarray}\label{waveRL}
\partial_0\delta j^0_{R/L}+e\beta_{R/L}{\bf E\cdot\nabla}\delta j^0_{R/L}+e\beta_{R/L}n_v\delta j^0_{R/L}+e\sigma_{R/L}\delta j^0_v=0.
\end{eqnarray}
Here $\mathbf{E}$ in the above equation is the total electric field.
When the external field is strong, the contribution from $\mathbf{E_{ex}}$
dominates and that from $\mathbf{E_{in}}$ can be neglected.
In the absence of external fields, however, $\mathbf{E_{in}}$
becomes dominant. In this case, this term plays an important role in guaranteeing the conservation
of the total charge. In particular, in the $n_{v}=0$ limit, this
term is proportional to $\mathbf{E_{in}}\cdot\mathbf{k}$ and
eventually appears in (34). Although it is subleading in terms of the fluctuations in the bulk, it is linear in $\delta j^0_{R/L}$ on the surface of the medium, which yields the propagation of density waves out of the thermal medium. The argument for the Hall current in
(40) is similar.
We may further rewrite (\ref{waveRL}) in terms of the v/a bases,
\begin{eqnarray}\nonumber\label{waveva}
&&\partial_0\delta j^0_v+e (\beta_{+}{\bf E\cdot\nabla}\delta j^0_v+\beta_{-}{\bf E\cdot\nabla}\delta j^0_a+n_v\beta_+\delta j^0_v+n_v\beta_-\delta j^0_a+\sigma_v\delta j^0_v)=0,\\
&&\partial_0\delta j^0_a+e (\beta_{-}{\bf E\cdot\nabla}\delta j^0_v+\beta_{+}{\bf E\cdot\nabla}\delta j^0_a+n_v\beta_-\delta j^0_v+n_v\beta_+\delta j^0_a+\sigma_a\delta j^0_v)=0.
\end{eqnarray}
When taking $\delta j^0_{v/a}=C_{v/a}e^{-i\omega t+i{\bf k\cdot x}}$ with $C_{v/a}$ being constants, the dispersion relation reads
\begin{eqnarray}\label{CEWomega}
\omega_{\pm}=e\beta_+{\bf E\cdot k}-ien_v\beta_+-\frac{ie\sigma_v}{2}\pm e
\sqrt{\left(\beta_-{\bf E\cdot k}-in_v\beta_-\right)\left(\beta_-{\bf E\cdot k}-i(n_v\beta_-+\sigma_a)\right)-\frac{\sigma_v^2}{4}}.
\end{eqnarray}
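The same eigenvalue cross-check applies to (\ref{CEWomega}), with $K$ now standing for ${\bf E\cdot k}$ and the linearized system given by (\ref{waveva}); again a sketch with hypothetical parameter values:

```python
import numpy as np

# Hypothetical parameter values; K stands for E.k.
e = 1.0
b_m, b_p = 0.2, 0.5          # beta_-, beta_+
s_v, s_a, n_v, K = 0.4, 0.1, 0.6, 1.3

# Fourier transform of the linearized equations (waveva)
M = np.array([
    [e*b_p*K - 1j*e*n_v*b_p - 1j*e*s_v, e*b_m*K - 1j*e*n_v*b_m],
    [e*b_m*K - 1j*e*n_v*b_m - 1j*e*s_a, e*b_p*K - 1j*e*n_v*b_p],
])

# Closed-form dispersion relation, eq. (CEWomega)
root = e*np.sqrt((b_m*K - 1j*n_v*b_m)*(b_m*K - 1j*(n_v*b_m + s_a)) - s_v**2/4)
omega = np.array([e*b_p*K - 1j*e*n_v*b_p - 1j*e*s_v/2 + root,
                  e*b_p*K - 1j*e*n_v*b_p - 1j*e*s_v/2 - root])

key = lambda w: (round(w.real, 9), round(w.imag, 9))
match = np.allclose(sorted(omega, key=key), sorted(np.linalg.eigvals(M), key=key))
assert match
```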
By expanding (\ref{CEWomega}) for small momentum in the hydrodynamic approximation, we obtain
\begin{eqnarray}\nonumber
\omega_{\pm}&=& -ie \left(n_v \beta_{+}+\frac{\sigma_v}{2}\pm\sqrt{n_v^2\beta_-^2+n_v \beta_-\sigma_a+\frac{\sigma_v^2}{4}}\right)+
e \left(\beta_{+}\pm\frac{ \beta_- (2 n_v \beta_{-}+\sigma_a)}{\sqrt{4n_v^2 \beta_-^2+4 n_v\beta_{-}\sigma_a+\sigma_v^2}}\right){\bf E\cdot k}\\
&&\pm\frac{ie \beta_-^2 \left(\sigma_v^2-\sigma_a^2\right) ({\bf E\cdot k})^2}{\left(4n_v^2 \beta_-^2+4n_v\beta_-\sigma_a+\sigma_v^2\right)^{3/2}}
+\mathcal{O}\left({({\bf E\cdot k})^3}\right).
\end{eqnarray}
Similar to the CMW case, for a chargeless system ($n_v=0$), we find two modes,
\begin{eqnarray}\nonumber
\omega_+&=&-ie\sigma_v+e\left(\beta_{+} +\beta_{-}\frac{\sigma_a}{\sigma_v}\right){\bf E\cdot k}+ie\sigma_v^{-1}\beta_-^2 \left(1-\frac{\sigma_a^2}{\sigma_v^2}\right) ({\bf E\cdot k})^2+\mathcal{O}\left({({\bf E\cdot k})^3}\right),\\
\omega_-&=&e\left(\beta_{+}-\beta_{-}\frac{\sigma_a}{\sigma_v}\right) {\bf E\cdot k}-ie\sigma_v^{-1}\beta_-^2 \left(1-\frac{\sigma_a^2}{\sigma_v^2}\right) ({\bf E\cdot k})^2+\mathcal{O}\left({({\bf E\cdot k})^3}\right).
\end{eqnarray}
In the further limit $n_v=0$ and $\sigma_{v/a}=0$, the dispersion relation in (\ref{CEWomega}) reduces to
\begin{eqnarray}\label{dispCEW1}
\omega_{\pm}=e({\bf E\cdot k})(\beta_+\mp\beta_-)=e({\bf E\cdot k})\beta_L\mbox{ or }
e({\bf E\cdot k})\beta_R.
\end{eqnarray}
This result is very similar to that for the CMW. Although the wave velocity of the CEW is dictated by the fluctuations of the conductivities, it implicitly depends on the fluctuations of the chemical potentials, which influence the conductivities.
We may now consider the CEW in the limit of small chemical potentials. In light of the symmetry-based assumption in \cite{Huang:2013iia}, the currents in the R/L bases are
\begin{eqnarray}\label{JRLsmallmu}
{\bf j_{R/L}}=e(\sigma_0+\rho\mu_{R/L}^2){\bf E},
\end{eqnarray}
where $\rho$ is a function of temperature. Note that we drop the interaction between the R/L sectors, which is interpreted as the screening in \cite{Huang:2013iia}. From (\ref{JRLsmallmu}), the CESE is given by
\begin{eqnarray}\nonumber
{\bf j_v}&=&e\left(\sigma_0+\rho(\mu_v^2+\mu_a^2)\right){\bf E},\\
{\bf j_a}&=&e\chi_e\mu_v\mu_a{\bf E},
\end{eqnarray}
where $\chi_e=2\rho$. Given that $\mu_{R/L}=\alpha_{R/L}j^0_{R/L}$
\footnote{Although $\delta\mu_{R/L}=\alpha_{R/L}\delta j^0_{R/L}$ always holds, $\mu_{R/L}=\alpha_{R/L}j^0_{R/L}$ only holds for $n_{R/L}$ being small or $n_{R/L}$ depending linearly on $\mu_{R/L}$.}, which corresponds to the case with small densities, we obtain
\begin{eqnarray}
\beta_{R/L}=2\rho\alpha_{R/L}^2n_{R/L}.
\end{eqnarray}
For small chemical potentials, we have $\alpha_R=\alpha_L=\alpha_+$, which yields
\begin{eqnarray}\label{betapm}
\beta_+=2\rho\alpha_+^2n_v,\quad\beta_-=2\rho\alpha_+^2n_a.
\end{eqnarray}
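The decompositions above, including $\chi_e=2\rho$ and (\ref{betapm}), amount to elementary algebra and can be verified symbolically; a minimal sketch using sympy:

```python
import sympy as sp

e, s0, rho, mu_v, mu_a, E, alpha = sp.symbols('e sigma_0 rho mu_v mu_a E alpha')
j_R = e*(s0 + rho*(mu_v + mu_a)**2)*E     # mu_R = mu_v + mu_a
j_L = e*(s0 + rho*(mu_v - mu_a)**2)*E     # mu_L = mu_v - mu_a
dv = sp.expand((j_R + j_L)/2 - e*(s0 + rho*(mu_v**2 + mu_a**2))*E)
da = sp.expand((j_R - j_L)/2 - e*(2*rho)*mu_v*mu_a*E)   # chi_e = 2*rho

n_v, n_a = sp.symbols('n_v n_a')
b_R = 2*rho*alpha**2*(n_v + n_a)          # beta_R with n_R = n_v + n_a
b_L = 2*rho*alpha**2*(n_v - n_a)          # beta_L with n_L = n_v - n_a
dbp = sp.expand((b_R + b_L)/2 - 2*rho*alpha**2*n_v)
dbm = sp.expand((b_R - b_L)/2 - 2*rho*alpha**2*n_a)
assert dv == 0 and da == 0 and dbp == 0 and dbm == 0
```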
The wave equations in (\ref{waveva}) up to $\mathcal{O}(n_{v/a})$ now reduce to
\begin{eqnarray}\nonumber\label{wavevasmallmu}
&&\partial_0\delta j^0_v+2e\rho\alpha_+^2 (n_v{\bf E\cdot\nabla}\delta j^0_v+n_a{\bf E\cdot\nabla}\delta j^0_a)+e\sigma_0\delta j^0_v=0,\\
&&\partial_0\delta j^0_a+2e\rho\alpha_+^2 (n_a{\bf E\cdot\nabla}\delta j^0_v+n_v{\bf E\cdot\nabla}\delta j^0_a)=0.
\end{eqnarray}
We may compare (\ref{wavevasmallmu}) with the result found in \cite{Huang:2013iia}. By definition, we find
\begin{eqnarray}\nonumber
\alpha_v&=&\frac{\partial\mu_v}{\partial j^0_v}=\frac{1}{2}\left(\frac{\partial\mu_R}{\partial j^0_R}\frac{\partial j^0_R}{\partial j^0_v}+\frac{\partial\mu_L}{\partial j^0_L}\frac{\partial j^0_L}{\partial j^0_v}\right)=\alpha_+,\\
\alpha_a&=&\frac{\partial\mu_a}{\partial j^0_a}=\frac{1}{2}\left(\frac{\partial\mu_R}{\partial j^0_R}\frac{\partial j^0_R}{\partial j^0_a}-\frac{\partial\mu_L}{\partial j^0_L}\frac{\partial j^0_L}{\partial j^0_a}\right)=\alpha_+=\alpha_v.
\end{eqnarray}
When turning off the magnetic field and taking $\chi_e=2\rho$ and $\rho=\sigma_2$ as defined in \cite{Huang:2013iia},
we find that (\ref{wavevasmallmu}) is consistent with the result therein.
By further including the Hall effect yet excluding CME and CSE, the fluctuations of the currents become
\begin{eqnarray}
(\delta j_{R/L})_i=e(\beta_{R/L})_{ij}\delta j^0_{R/L}E_j,
\end{eqnarray}
where
\begin{eqnarray}
(\beta_{R/L})_{ij}=\left(\frac{\partial(\sigma_{R/L})_{ij}}{\partial j^0_{R/L}}\right)_{j^0_{R/L}\rightarrow n_{R/L}}.
\end{eqnarray}
The wave equations now take the form
\begin{eqnarray}
\partial_0\delta j^0_{R/L}+e(\beta_{R/L})_{ij}E_j\partial_i\delta j^0_{R/L}+e(\beta_{R/L})_{ii}n_v\delta j^0_{R/L}+e(\sigma_{R/L})_{ii}\delta j^0_v=0.
\end{eqnarray}
We can subsequently work in the $v/a$ bases and derive the dispersion relations.
By taking $\delta j^0_{v/a}=C_{v/a}e^{-i\omega t+i{\bf k\cdot x}}$ with $C_{v/a}$ being constants, the dispersion relation reads
\begin{eqnarray}\label{CEWomegaij}
\omega_{\pm}&=&e(\beta_+)_{ij}E_jk_i-ien_v(\beta_+)_{ii}-\frac{ie(\sigma_v)_{ii}}{2}
\\\nonumber
&&\pm e
\sqrt{\left((\beta_-)_{ij} E_jk_i-in_v(\beta_-)_{ii}\right)\left((\beta_-)_{ij}E_j k_i-i(n_v(\beta_-)_{ii}+(\sigma_a)_{ii})\right)-\frac{(\sigma_v)_{ii}^2}{4}}.
\end{eqnarray}
In our setup, we have
\begin{eqnarray}
(\sigma_{v/a})_{ii}=(\sigma_{v/a})_{yy},\quad (\beta_{\pm})_{ii}=(\beta_{\pm})_{yy},\quad
(\beta_{\pm})_{ij}E_jk_i=(\beta_{\pm})_{zy}E_yk_z+(\beta_{\pm})_{yy}E_yk_y.
\end{eqnarray}
After making the momentum-expansion, the dispersion relation in (\ref{CEWomegaij}) becomes
\begin{eqnarray}
\omega_{\pm}=-i\tau_{\pm}^{-1}+(v_{\pm})_ik^i-i(D_{\pm})_{ij}k^ik^j,
\end{eqnarray}
where
\begin{eqnarray}\nonumber
\tau_{\pm}^{-1}&=&e\left(n_v(\beta_+)_{ii}+\frac{(\sigma_v)_{ii}}{2}
\pm\sqrt{n_v^2(\beta_-)_{ii}^2+n_v(\beta_-)_{ii}(\sigma_a)_{ii}+\frac{(\sigma_v)_{ii}^2}{4}}\right),
\\\nonumber
(v_{\pm})_k&=&e\left((\beta_{+})_{kj}\pm\frac{ (\beta_-)_{kj} (2 n_v (\beta_{-})_{ii}+(\sigma_a)_{ii})}{\sqrt{4n_v^2 (\beta_-)_{ii}^2+4 n_v(\beta_{-})_{ii}(\sigma_a)_{ii}+(\sigma_v)_{ii}^2}}\right)E_j,
\\
(D_{\pm})_{ij}&=&\mp\frac{e (\beta_-)_{ik}(\beta_-)_{jl} \left((\sigma_v)_{mm}^2-(\sigma_a)_{mm}^2\right) (E_k E_l)}{\left(4n_v^2 (\beta_-)_{mm}^2+4n_v(\beta_-)_{mm}(\sigma_a)_{mm}+(\sigma_v)_{mm}^2\right)^{3/2}}.
\end{eqnarray}
Here $\tau_{\pm}$ represent the damping times for two modes of the density wave and $(v_{\pm})_k$ correspond to the wave velocities. The diffusion of the density wave is characterized by $(D_{\pm})_{ij}$.
In the following sections, we will employ the SS model in holography to investigate the CESE, CHE, and CEW in the strongly coupled QGP.
\section{SS model}\label{SS_model}
We will follow the approach in \cite{O'Bannon:2007in,Lifschytz:2009si} to investigate the currents induced by the external electromagnetic fields at finite chemical potentials.
In the SS model at finite temperature, the induced metric of $D8/\overline{D8}$ branes in the chiral symmetric phase is given by
\begin{eqnarray}
ds^2=\left(\frac{U}{L}\right)^{3/2}(-f(U)dt^2+d\vec{x}^2)
+\left(\frac{L}{U}\right)^{3/2}\frac{dU^2}{f(U)}
+\left(\frac{L}{U}\right)^{3/2}U^2d\Omega^2_4,
\end{eqnarray}
where $f(U)=1-U_T^3/U^3$ with $U_T$ being the position of an event horizon and $L=(\pi^3g_sN_cl_s^3)^{1/3}$ is the curvature radius. The temperature of the background reads
\begin{eqnarray}
T=\frac{3}{4\pi}\left(\frac{U_T}{L^3}\right)^{1/2}.
\end{eqnarray}
There are also a background dilaton and a four-form flux,
\begin{eqnarray}
e^{\phi}=g_s\left(\frac{U}{L}\right)^{3/4},\quad F_4=\frac{2\pi N_c}{V_4}\epsilon_4,
\end{eqnarray}
where $V_4$ is the volume of the four-sphere and $\epsilon_4$ is the corresponding volume form.
The full DBI action reads
\begin{eqnarray}
S_{DBI}=S_{D8}+S_{\overline{D8}},
\end{eqnarray}
where
\begin{eqnarray}
S_{D8/\overline{D8}}=-T_{D8}\int d^9xe^{-\phi}\sqrt{-\mbox{det}(g+2\pi\alpha'F_{L/R})}.
\end{eqnarray}
Moreover, we have Chern-Simons (CS) terms
\begin{eqnarray}
S_{D8/\overline{D8}}^{CS}=\mp\frac{N_c}{96\pi^2}\int d^4xdU\epsilon^{MNPQR}(A_{L/R})_M(F_{L/R})_{NP}(F_{L/R})_{QR}.
\end{eqnarray}
By turning on the world-volume gauge fields\footnote{Here $E_y$ and $B_x$ actually correspond to $eE_y$ and $eB_x$. We will hereafter omit $e$ in the holographic computations for simplicity.} $(A_{L/R})_t(U)$, $(A_{L/R})_x(t,U)=(a_{L/R})_x(U)$, $(A_{L/R})_y(t,U)=-E_yt+(a_{L/R})_y(U)$, and $(A_{L/R})_z(t,U)=B_xy+(a_{L/R})_z(U)$, we obtain
\begin{eqnarray}\nonumber
S_{D8/\overline{D8}}&=&-C\int d^4xdUU^{5/2}\sqrt{X},
\end{eqnarray}
where
\begin{eqnarray}\nonumber
X&=&1+\frac{B_x^2 L^3}{U^3}-\frac{E_y^2 L^3}{U^3 f}-A_t'^2\left(1+\frac{B_x^2 L^3 }{U^3}\right)+a_x'^2f\left(1-\frac{E_y^2 L^3}{fU^3}+\frac{B_x^2 L^3 }{U^3}\right)+f a_y'^2\\\nonumber
&&+\frac{2 B_x E_y L^3 A_t' a_z'}{U^3}+a_z'^2f\left(1-\frac{E_y^2 L^3}{fU^3}\right),\\
C&=&\frac{T_{D8}V_4L^{3/2}}{g_s}=\frac{N_c}{96\pi^5l_s^6L^{3/2}}.
\end{eqnarray}
Here the primes denote the derivatives with respect to $U$. We also set $2\pi l_s^2=1$ GeV$^{-2}$ and drop the $L/R$ subscripts above for simplicity. In our setup, the CS terms read
\begin{eqnarray}
S_{D8/\overline{D8}}^{CS}=\mp\frac{8N_c}{96\pi^2}\int d^4xdU\left(B_x(A_ta_x'-a_xA_t')+E_y(a_xa_z'-a_za_x')\right).
\end{eqnarray}
The full actions take the form
\begin{eqnarray}
S_{D8/\overline{D8}}^f=-C\left(\int d^4xdUU^{5/2}\sqrt{X}\pm r\int d^4xdU\left(B_x(A_ta_x'-a_xA_t')+E_y(a_xa_z'-a_za_x')\right) \right),
\end{eqnarray}
where $r=N_c/(12\pi^2C)=(2\pi l_s^2)^3L^{3/2}$. We may add the boundary terms according to \cite{Lifschytz:2009si}, which leads to $r=3/2\times (2\pi l_s^2)^3L^{3/2}$. The value of $r$ actually depends on the renormalization scheme.
The equations of motion are
\begin{eqnarray}\label{conseq}\nonumber
\frac{U^{5/2}\left((A_{L/R})'_t\left(1+\frac{B_x^2L^3}{U^3}\right)-(a_{L/R})'_z\frac{B_xE_yL^3}{U^3}\right)}
{\sqrt{X_{L/R}}}&=&(J_{L/R})_t\mp 2rB_x(a_{L/R})_x,
\\\nonumber
\frac{U^{5/2}f(a_{L/R})_x'\left(1-\frac{E_y^2 L^3}{fU^3}+\frac{B_x^2 L^3}{U^3}\right)}{\sqrt{X_{L/R}}}&=&(J_{L/R})_x\mp 2r (B_x (A_{L/R})_t-E_y (a_{L/R})_z),\\\nonumber
\frac{U^{5/2}f(a_{L/R})_y'}{\sqrt{X_{L/R}}}&=&(J_{L/R})_y,\\
\frac{U^{5/2}\left((A_{L/R})'_t\frac{B_xE_yL^3}{U^3}+
f(a_{L/R})'_z\left(1-\frac{E_y^2L^3}{fU^3}\right)\right)}
{\sqrt{X_{L/R}}}&=&(J_{L/R})_z\mp 2rE_y(a_{L/R})_x,
\end{eqnarray}
where $(J_{L/R})_{\mu}$ are integration constants. In the AdS/CFT correspondence, the electromagnetic currents correspond to the boundary currents of the DBI actions. From the definition of boundary currents,
\begin{eqnarray}
j_{\mu}=J^b_{\mu}=\frac{\delta S_{EOM}}{\delta A_{\mu}(\infty)}=\left(\frac{\delta L_{eff}}{\delta A'_{\mu}}\right)_{U\rightarrow\infty},
\end{eqnarray}
we have
\begin{eqnarray}\label{bcurrent}\nonumber
(J^b_{L/R})_t&=&C\left(\frac{U^{5/2}\left((A_{L/R})'_t\left(1+\frac{B_x^2L^3}{U^3}\right)-(a_{L/R})'_z\frac{B_xE_yL^3}{U^3}\right)}
{\sqrt{X_{L/R}}}\pm rB_x(a_{L/R})_x\right)_{U\rightarrow\infty},
\\\nonumber
(J^b_{L/R})_x&=&C\left(-\frac{U^{5/2}f(a_{L/R})_x'\left(1-\frac{E_y^2 L^3}{fU^3}+\frac{B_x^2 L^3}{U^3}\right)}{\sqrt{X_{L/R}}}\mp r (B_x (A_{L/R})_t-E_y (a_{L/R})_z)\right)_{U\rightarrow\infty},
\\\nonumber
(J^b_{L/R})_y&=&C\left(-\frac{U^{5/2}f(a_{L/R})_y'}{\sqrt{X_{L/R}}}\right)_{U\rightarrow\infty},
\\
(J^b_{L/R})_z&=&C\left(-\frac{U^{5/2}\left((A_{L/R})'_t\frac{B_xE_yL^3}{U^3}+
f(a_{L/R})'_z\left(1-\frac{E_y^2L^3}{fU^3}\right)\right)}
{\sqrt{X_{L/R}}}\mp rE_y(a_{L/R})_x\right)_{U\rightarrow\infty},
\end{eqnarray}
where $L_{eff}$ is the effective Lagrangian.
By comparing (\ref{conseq}) and (\ref{bcurrent}), the boundary currents can be rewritten as
\begin{eqnarray}\nonumber
(J^b_{L/R})_t&=&C\left((J_{L/R})_t\mp r B_x(a_{L/R})_x\right)_{U\rightarrow\infty},
\\\nonumber
(J^b_{L/R})_x&=&C\left(-(J_{L/R})_x\pm r \left(B_x(A_{L/R})_t-E_y(a_{L/R})_z\right)\right)_{U\rightarrow\infty},
\\\nonumber
(J^b_{L/R})_y&=&-C(J_{L/R})_y,
\\
(J^b_{L/R})_z&=&C\left(-(J_{L/R})_z\pm rE_y(a_{L/R})_x\right)_{U\rightarrow\infty}.
\end{eqnarray}
Following \cite{Lifschytz:2009si}, we may define the modified currents,
\begin{eqnarray}\nonumber
(\tilde{J}_{L/R})_t&=&(J_{L/R})_t\mp 2rB_x(a_{L/R})_x,\\\nonumber
(\tilde{J}_{L/R})_x&=&(J_{L/R})_x\mp 2r (B_x (A_{L/R})_t-E_y (a_{L/R})_z),\\\nonumber
(\tilde{J}_{L/R})_y&=&(J_{L/R})_y,\\
(\tilde{J}_{L/R})_z&=&(J_{L/R})_z\mp 2rE_y(a_{L/R})_x.
\end{eqnarray}
By doing some algebra with (\ref{conseq}), we find
\begin{eqnarray}\label{Ap}\nonumber
(A_{L/R})'_t&=&\pm\frac{\Big|\left(1-\frac{E_y^2L^3}{fU^3}\right)(\tilde{J}_{L/R})_t+\frac{E_yB_xL^3}{fU^3}(\tilde{J}_{L/R})_z\Big|}{\sqrt{Z}},\\
(A_{L/R})'_x&=&\pm\frac{\Big|(\tilde{J}_{L/R})_x\Big|}{f\sqrt{Z}},\\\nonumber
(A_{L/R})'_y&=&\pm\frac{\Big|\left(1+\frac{B_x^2L^3}{U^3}-\frac{E_y^2L^3}{fU^3}\right)(\tilde{J}_{L/R})_y\Big|}{f\sqrt{Z}},\\\nonumber
(A_{L/R})'_z&=&\pm\frac{\Big|\left(1+\frac{B_x^2L^3}{U^3}\right)(\tilde{J}_{L/R})_z-\frac{E_yB_xL^3}{U^3}(\tilde{J}_{L/R})_t\Big|}{f\sqrt{Z}},
\end{eqnarray}
where
\begin{eqnarray}\label{Zfun}\nonumber
Z&=&\left(1+\frac{B_x^2L^3}{U^3}-\frac{E_y^2L^3}{fU^3}\right)
\left(U^5+(\tilde{J}_{L/R})_t^2-\frac{(\tilde{J}_{L/R})_y^2+(\tilde{J}_{L/R})_z^2}{f}\right)
\\&&
-\frac{L^3}{U^3}\left(B_x(\tilde{J}_{L/R})_t-\frac{E_y(\tilde{J}_{L/R})_z}{f}\right)^2
-\frac{(\tilde{J}_{L/R})_x^2}{f}.
\end{eqnarray}
By requiring that $(A_{L/R})'_{\mu}$ be real and well defined, we have to make both the numerators and denominators on the right-hand side of (\ref{Ap}) vanish at a critical point $U=U_c$. We thus have
\begin{eqnarray}\label{Jeom}\nonumber
\left(1-\frac{E_y^2L^3}{fU_c^3}\right)(\tilde{J}_{L/R})_t-\frac{E_yB_xL^3}{fU_c^3}(\tilde{J}_{L/R})_z=0,
\\\nonumber
(\tilde{J}_{L/R})_x=0,\\\nonumber
\left(1+\frac{B_x^2L^3}{U_c^3}-\frac{E_y^2L^3}{fU_c^3}\right)=0,\\\nonumber
\left(1+\frac{B_x^2L^3}{U_c^3}\right)(\tilde{J}_{L/R})_z-\frac{E_yB_xL^3}{U_c^3}(\tilde{J}_{L/R})_t=0,\\
Z(U_c)=0.
\end{eqnarray}
Note that the first equation in (\ref{Jeom}) is redundant, as it can be obtained from the third and fourth equations therein.
In fact, (\ref{Jeom}) is equivalent to finding the double zeros of $Z(U_c)$ from the expression in (\ref{Zfun}), where all three terms therein have double zeros at $U_c$.
From the third equation in (\ref{Jeom}), we find the critical point
\begin{eqnarray}
U_c=\frac{U_T}{2^{1/3}}\Bigg(1+\frac{L^3}{U^3_T}(E_y^2-B_x^2)
+\sqrt{\frac{4B_x^2L^3}{U_T^3}+\left(1+\frac{L^3}{U^3_T}(E_y^2-B_x^2)\right)^2}\Bigg)^{1/3}.
\end{eqnarray}
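One may check numerically that this closed form indeed solves the third condition in (\ref{Jeom}); a sketch with hypothetical field values (in the GeV units used later in the text):

```python
import numpy as np

# Hypothetical values: L3 = L^3, U_T as quoted later in the text.
U_T, L3, Ey, Bx = 1.02, 1.44, 0.5, 0.3

# Closed-form critical point
A = 1 + L3*(Ey**2 - Bx**2)/U_T**3
Uc = U_T/2**(1/3)*(A + np.sqrt(4*Bx**2*L3/U_T**3 + A**2))**(1/3)

# Third condition of (Jeom): 1 + B^2 L^3/Uc^3 - E^2 L^3/(f Uc^3) = 0
f = 1 - U_T**3/Uc**3
residual = 1 + Bx**2*L3/Uc**3 - Ey**2*L3/(f*Uc**3)
assert abs(residual) < 1e-10
assert Uc > U_T   # the critical point lies outside the horizon
```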
One may now solve the rest of the equations in (\ref{Jeom}) to derive $(J_{L/R})_i$ for $i=x,y,z$ in terms of $(J_{L/R})_t$. We find
\begin{eqnarray}\label{JRLbases}\nonumber
(J_{L/R})_x&=&\pm 2 r(B_x (A_{L/R})_t-E_y (a_{L/R})_z)_{U=U_c},\\\nonumber
(J_{L/R})_y&=&-\frac{E_yL^{3/2}U_c^{3/2}}{B_x^2L^3+U_c^3}
\left((J_{L/R})_t^2+B_x^2 L^3U^2+U^5\mp 4r B_x (J_{L/R})_t (a_{L/R})_x+4 B_x^2 r^2 (a_{L/R})_x^2\right)^{1/2}_{U=U_c},\\
(J_{L/R})_z&=&\left(\frac{B_x E_y (J_{L/R})_t L^3\pm 2 r E_y U^3 (a_{L/R})_x}{B_x^2 L^3+U^3}\right)_{U=U_c}.
\end{eqnarray}
The boundary currents then become
\begin{eqnarray}\nonumber\label{bcurrentRL}
(J^b_{L/R})_x&=&C\left[\mp 2 r\left(B_x (A_{L/R})_t-E_y (a_{L/R})_z\right)_{U=U_c}\pm r \left(B_x(A_{L/R})_t-E_y(a_{L/R})_z\right)_{U=\infty}\right],
\\\nonumber
(J^b_{L/R})_y&=&C\Bigg[\frac{E_yL^{3/2}U_c^{3/2}}{B_x^2L^3+U_c^3}
\left((J_{L/R})_t^2+B_x^2 L^3U^2+U^5\mp 4r B_x (J_{L/R})_t (a_{L/R})_x+4 B_x^2 r^2 (a_{L/R})_x^2\right)^{1/2}_{U=U_c}\Bigg],
\\
(J^b_{L/R})_z&=&C\left[\left(\frac{-B_x E_y (J_{L/R})_t L^3\mp 2 r E_y U^3 (a_{L/R})_x}{B_x^2 L^3+U^3}\right)_{U=U_c}\pm \left(rE_y(a_{L/R})_x\right)_{U=\infty}\right].
\end{eqnarray}
In the presence of the CS terms, we find that $(J_{L/R})_i$ depend not only on $(J_{L/R})_t$ but also on $(a_{L/R})_x$ and $(a_{L/R})_z$ at the boundary and at $U_c$. It turns out that the gauge invariance of the boundary currents is broken by the CS terms. The nonzero values of $(a_{R/L})_{i}(\infty)$ with $i=x,y,z$ correspond to the pion gradient in the chiral-symmetry-broken phase \cite{Bergman:2008qv}.
In the chiral-symmetry-restored phase, $(a_{R/L})_{i}(\infty)$ become free parameters, which are set to zero in \cite{Lifschytz:2009si}. For simplicity and concreteness, we focus on the situation in which the particle interaction dominates over the topological effect. The axial Hall current should exist without the axial anomaly, while it could vary in the presence of a strong axial anomaly and become non-gauge-invariant in the SS model.
Considering the gauge-invariant currents from interactions, we may turn off $(a_{L/R})_x(U)$ and neglect the effect of the CS terms.
By rewriting (\ref{JRLbases}) in terms of vector/axial bases, we find
\begin{eqnarray}\label{bdcurrents}
\nonumber
(J^b_{v/a})_y&=&\frac{CE_yL^{3/2}U_c^{3/2}}{2(B_x^2L^3+U_c^3)}
\left(\sqrt{((J_v)_t+(J_a)_t)^2+B_x^2 L^3U_c^2+U_c^5}\pm
\sqrt{((J_v)_t-(J_a)_t)^2+B_x^2 L^3U_c^2+U_c^5}\right),
\\\nonumber
(J^b_{v/a})_z&=&-C\frac{B_x E_y (J_{v/a})_t L^3}{B_x^2 L^3+U_c^3},
\\
(J^b_{v/a})_t&=&C(J_{v/a})_t.
\end{eqnarray}
Now both $(J^b_{v/a})_y$ and $(J^b_{v/a})_z$ depend on the boundary charge densities $(J^b_{v/a})_t$, which are functions of the chemical potentials. To find the relations between the charge densities and the chemical potentials, we have to solve the field equation for $(A_{L/R})_t$ in (\ref{Ap}). By utilizing (\ref{JRLbases}), this field equation can be further written as
\begin{eqnarray}\label{solAt}
(A_{L/R})'_t=\frac{\Big|\left(1-\frac{E_y^2L^3U_c^3}{fU^3(B_x^2L^3+U_c^3)}\right)(J_{L/R})_t\Big|}{\sqrt{Z}}.
\end{eqnarray}
We then impose the boundary conditions $(A_{L/R})_t(U_T)=0$ and numerically solve the field equation. The chemical potentials are given by
\begin{eqnarray}
\mu_{L/R}=(A_{L/R})_t(\infty),
\end{eqnarray}
which vary with the values of $(J_{L/R})_t$.
\section{CESE/CHE in holography}\label{CHE_holography}
\subsection{Weak and Strong Electromagnetic Fields}
Although the boundary currents at different chemical potentials can be solved for numerically, we may derive approximate analytic expressions in the limit of weak electromagnetic fields.
In the presence of weak electromagnetic fields, the induced currents should follow linear response theory. When taking $E_y\approx 0$ and $B_x\approx 0$ in (\ref{solAt}), the chemical potentials are given by
\begin{eqnarray}
\frac{\mu_{L/R}}{U_T}=\frac{2}{3\tilde{U}_{L/R}^{5/2}}{}_2F_1
\left(\frac{3}{10},\frac{1}{2},\frac{13}{10},-\frac{1}{\tilde{U}_{L/R}^5}\right),
\quad
\tilde{U}_{L/R}=\frac{U_T}{(J_{L/R})_t^{2/5}}.
\end{eqnarray}
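This hypergeometric representation follows from $\mu_{L/R}/U_T=\int_1^{\infty}dv\,(1+\tilde{U}_{L/R}^5v^5)^{-1/2}$, obtained by integrating (\ref{solAt}) at $E_y=B_x=0$, and can be checked numerically; the sketch below assumes scipy is available and uses the hypothetical value $\tilde{U}_{L/R}=1$:

```python
from scipy.integrate import quad
from scipy.special import hyp2f1

# mu_{L/R}/U_T = int_1^inf dv (1 + Ut^5 v^5)^(-1/2), with Ut = U_T/(J_t)^{2/5}
Ut = 1.0
integral, _ = quad(lambda v: (1.0 + Ut**5*v**5)**-0.5, 1.0, float('inf'))

# Closed form quoted in the text
closed = 2.0/(3.0*Ut**2.5)*hyp2f1(0.3, 0.5, 1.3, -1.0/Ut**5)
assert abs(integral - closed) < 1e-6
```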
In the limit $\tilde{U}_{L/R}\rightarrow 0$, which corresponds to the high-density or low-temperature regime, we find
\begin{eqnarray}
\frac{\mu_{L/R}}{U_T}\approx\frac{2\Gamma\left(\frac{1}{5}\right)\Gamma\left(\frac{13}{10}\right)}{3\sqrt{\pi}\tilde{U}_{L/R}}
-\frac{10\Gamma\left(\frac{13}{10}\right)}{3\Gamma\left(\frac{3}{10}\right)}
+\mathcal{O}(\tilde{U}_{L/R}^5).
\end{eqnarray}
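Numerically, the two displayed terms already capture the full hypergeometric expression well at small $\tilde{U}_{L/R}$; a sketch assuming scipy is available, with the hypothetical value $\tilde{U}_{L/R}=0.05$:

```python
from math import gamma, pi, sqrt
from scipy.special import hyp2f1

Ut = 0.05   # hypothetical small value of tilde{U} (high-density limit)
full = 2.0/(3.0*Ut**2.5)*hyp2f1(0.3, 0.5, 1.3, -1.0/Ut**5)

# Leading terms quoted in the text
lead = (2*gamma(1/5)*gamma(13/10)/(3*sqrt(pi)*Ut)
        - 10*gamma(13/10)/(3*gamma(3/10)))
assert abs(full - lead) < 1e-3
```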
Up to the leading order in the expansion with respect to $\tilde{U}_{L/R}$, we obtain
\begin{eqnarray}
(J_{L/R})_t=\left(\frac{3\sqrt{\pi}}{2\Gamma\left(\frac{1}{5}\right)\Gamma\left(\frac{13}{10}\right)}\right)^{5/2}\mu_{L/R}^{5/2}.
\end{eqnarray}
By expanding the boundary currents in (\ref{bdcurrents}), we derive the relation between the currents and the chemical potentials in the high-density (low-temperature) limit. The currents now take the form
\begin{eqnarray}\nonumber
(J^b_{v/a})_y&=&
\frac{CE_y}{2}\left(\frac{L}{U_T}\right)^{3/2}\left((J_R)_t\pm(J_L)_t\right)
=\frac{CE_y}{2a^3T^3L^3}
\left(\frac{3\sqrt{\pi}}{2\Gamma\left(\frac{1}{5}\right)\Gamma\left(\frac{13}{10}\right)}\right)^{5/2}(\mu_R^{5/2}\pm\mu_L^{5/2}),\\
(J^b_{v/a})_z&=&-\frac{CB_x E_y}{a^6T^6L^6}
\left(\frac{3\sqrt{\pi}}{2\Gamma\left(\frac{1}{5}\right)\Gamma\left(\frac{13}{10}\right)}\right)^{5/2}(\mu_R^{5/2}\pm\mu_L^{5/2}),
\end{eqnarray}
where $a=4\pi/3$.
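The temperature factors here simply follow from $U_T=a^2T^2L^3$, obtained by inverting the background-temperature relation; a quick symbolic check with sympy:

```python
import sympy as sp

a, T, L = sp.symbols('a T L', positive=True)
U_T = a**2*T**2*L**3   # from T = (3/(4*pi))*sqrt(U_T/L^3) with a = 4*pi/3

# Prefactors appearing in the high-density currents
d1 = sp.simplify(L**sp.Rational(3, 2)/U_T**sp.Rational(3, 2) - 1/(a**3*T**3*L**3))
d2 = sp.simplify(L**3/U_T**3 - 1/(a**6*T**6*L**6))
assert d1 == 0 and d2 == 0
```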
In the opposite limit, $\tilde{U}_{L/R}\rightarrow \infty$, which corresponds to the low-density or high-temperature regime, we find
\begin{eqnarray}
\frac{\mu_{L/R}}{U_T}\approx\frac{2}{3}\tilde{U}_{L/R}^{-5/2}-\frac{1}{13}\tilde{U}_{L/R}^{-15/2}
+\mathcal{O}(\tilde{U}_{L/R}^{-25/2}).
\end{eqnarray}
Up to the leading order in the expansion with respect to $\tilde{U}_{L/R}^{-1}$, we obtain
\begin{eqnarray}\label{Jtmurelation}
(J_{L/R})_t=\frac{3}{2}U_T^{3/2}\mu_{L/R}.
\end{eqnarray}
The boundary currents now read
\begin{eqnarray}\nonumber\label{JyJzhighT}
(J^b_{v/a})_y&=&\frac{CE_y}{2}a^2T^2L^{9/2}
\left(\left(1+\frac{9\mu_R^2}{8(a^2T^2L^3)^2}\right)\pm\left(1+\frac{9\mu_L^2}{8(a^2T^2L^3)^2}\right)\right),
\\
(J^b_{v/a})_z&=&-\frac{3CB_x E_y}{2a^3T^3L^{3/2}}(\mu_R\pm\mu_L).
\end{eqnarray}
One may further rewrite (\ref{JyJzhighT}) in terms of $\mu_{V/A}$,
\begin{eqnarray}\nonumber\label{Jbsmallmu}
(J^b_v)_y&=&CE_ya^2T^2L^{9/2}
\left(1+\frac{9}{8(a^2T^2L^3)^2}(\mu_V^2+\mu_A^2)\right),
\\\nonumber
(J^b_a)_y&=&\frac{9CE_y}{4a^2T^2L^{3/2}}\mu_V\mu_A,
\\
(J^b_{v/a})_z&=&-\frac{3CB_x E_y}{a^3T^3L^{3/2}}\mu_{V/A},
\end{eqnarray}
where $\mu_{V/A}=(\mu_R\pm\mu_L)/2$. The small-chemical-potential dependence here is consistent with that found in \cite{Huang:2013iia,Pu:2014cwa} and (\ref{paritysmallmu}).
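The passage from (\ref{JyJzhighT}) to (\ref{Jbsmallmu}) is elementary algebra and can be confirmed symbolically; the sketch below takes the overall factor of $(J^b_{v/a})_y$ to be $a^2T^2L^{9/2}$, consistent with (\ref{Jbsmallmu}):

```python
import sympy as sp

muV, muA, Cc, E, a, T, L = sp.symbols('mu_V mu_A C E a T L', positive=True)
pref = Cc*E/2*a**2*T**2*L**sp.Rational(9, 2)
Jy = lambda mu: pref*(1 + 9*mu**2/(8*(a**2*T**2*L**3)**2))

# mu_R = mu_V + mu_A, mu_L = mu_V - mu_A
Jv_y = Jy(muV + muA) + Jy(muV - muA)
Ja_y = Jy(muV + muA) - Jy(muV - muA)

d1 = sp.simplify(Jv_y - Cc*E*a**2*T**2*L**sp.Rational(9, 2)
                 *(1 + 9*(muV**2 + muA**2)/(8*(a**2*T**2*L**3)**2)))
d2 = sp.simplify(Ja_y - 9*Cc*E*muV*muA/(4*a**2*T**2*L**sp.Rational(3, 2)))
assert d1 == 0 and d2 == 0
```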
In the presence of strong electromagnetic fields, we are unable to solve (\ref{solAt}) analytically even with a strong-field approximation. Nevertheless, it is useful to investigate the explicit dependence of the boundary currents on the electromagnetic fields and charge densities. For large $E_y$ and finite $B_x$, we find $U_c^3\rightarrow L^3E_y^2$. By doing some algebra with (\ref{bdcurrents}), we obtain
\begin{eqnarray}\nonumber\label{JblargeE}
(J^b_v)_y&\approx& C L^{5/2}E_y^{5/3},
\\\nonumber
(J^b_a)_y&\approx& \frac{(J^b_v)_t(J^b_a)_t}{CL^{5/2}E_y^{5/3}},
\\
(J^b_{v/a})_z&\approx&-\frac{B_x(J^b_{v/a})_t}{E_y}.
\end{eqnarray}
Conversely, for large $B_x$ and finite $E_y$, we find $U_c^3\rightarrow U_T^3$, which gives
\begin{eqnarray}\nonumber\label{JblargeB}
(J^b_v)_y&\approx& C \frac{U_T^{5/2}E_y}{B_x},
\\\nonumber
(J^b_a)_y&\approx& \frac{E_yU_T^{1/2}(J^b_v)_t(J^b_a)_t}{CL^{3}B_x^{3}},
\\
(J^b_{v/a})_z&\approx&-\frac{E_y(J^b_{v/a})_t}{B_x}.
\end{eqnarray}
\subsection{Numerical Results}
We now numerically solve (\ref{solAt}) for the boundary currents.
The numerical values of the relevant coefficients are
\begin{eqnarray}\label{parameter1}
2\pi l_s^2=1\text{GeV}^{-2},\quad \lambda_t=g_{YM}^2N_c=17,\quad M_{KK}=0.94\text{GeV},
\end{eqnarray}
which give
\begin{eqnarray}
L^3=(2M_{KK})^{-1}(g_{YM}^2N_cl_s^2)=1.44 \text{GeV}^{-3}.
\end{eqnarray}
We can further set $N_c=3$, which leads to $C=0.0211$ GeV$^{15/2}$.
We then choose the temperature to be a representative average temperature at RHIC,
\begin{eqnarray}
T=200\text{MeV}=0.2\text{GeV},
\end{eqnarray}
which yields
\begin{eqnarray}\label{parameter2}
U_T=1.02\text{GeV}^{-1}.
\end{eqnarray}
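The quoted numbers can be reproduced directly from (\ref{parameter1}); a sketch (the values in the text are rounded):

```python
from math import pi

ls2 = 1.0/(2*pi)          # l_s^2, from 2*pi*l_s^2 = 1 GeV^-2
lam_t, M_KK, Nc, T = 17.0, 0.94, 3, 0.2

L3 = lam_t*ls2/(2*M_KK)           # L^3 = (2 M_KK)^{-1} g_YM^2 N_c l_s^2
C = Nc/(96*pi**5*ls2**3*L3**0.5)  # C = N_c/(96 pi^5 l_s^6 L^{3/2})
U_T = (4*pi*T/3)**2*L3            # inverting T = (3/(4*pi))*sqrt(U_T/L^3)

assert abs(L3 - 1.44) < 0.01      # L^3 in GeV^-3
assert abs(C - 0.0211) < 5e-4     # C in GeV^{15/2}
assert abs(U_T - 1.02) < 0.02     # U_T in GeV^-1
```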
We first evaluate the axial currents generated by weak electromagnetic fields and by the average electromagnetic fields at RHIC \cite{Bzdak:2011yy,Hirono:2012rt} for different chemical potentials. In Fig.\ref{Jay_fixmuV} and Fig.\ref{Jaz_fixmuV}, we fix the vector chemical potentials and vary the axial chemical potentials by implementing the shooting method, where the currents are normalized by $C$. We find that the axial currents generated by the CESE are approximately proportional to $\mu_V\mu_A$ even at finite chemical potentials. Our result is consistent with what has been found by using the Kubo formula in \cite{Pu:2014cwa}. Moreover, the axial Hall currents are approximately linear in $\mu_A$, which matches the approximation for weak electromagnetic fields and small chemical potentials. Analogously, the vector Hall currents are approximately linear in $\mu_V$, as shown in Fig.\ref{Jvz_fixmuV}.
It turns out that the small-chemical-potential approximation remains applicable when the chemical potentials are of the order of the temperature. Also, the average electromagnetic fields at RHIC only result in minor corrections. Similar behaviors of the axial and vector currents can be found in Fig.\ref{Jay_fixmuA}, Fig.\ref{Jaz_fixmuA}, and Fig.\ref{Jvz_fixmuA}, where we fix the axial chemical potentials and vary the vector ones.
\begin{figure}[t]
\begin{minipage}{7cm}
\begin{center}
{\includegraphics[width=7.5cm,height=5cm,clip]{Jay_fixmuV.eps}}
\caption{The green, black, blue, and red (from top to bottom) correspond to $\mu_V=0.002T$, $T$, $1.6T$, and $2T$, respectively. The solid and dashed curves correspond to $(B_x,E_y)=(m_{\pi}^2,m_{\pi}^2)=(0.135^2,0.135^2)$ GeV$^{2}$ and $(B_x,E_y)=(0.001^2,0.01^2)$ GeV$^{2}$, where $\hat{\mu}_{V/A}=\mu_{V/A}/T$.}\label{Jay_fixmuV}
\end{center}
\end{minipage}
\hspace {1cm}
\begin{minipage}{7cm}
\begin{center}
{\includegraphics[width=7.5cm,height=5cm,clip]{Jaz_fixmuV.eps}}
\caption{The color corresponds to the same cases in Fig.\ref{Jay_fixmuV}.}
\label{Jaz_fixmuV}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}[t]
\begin{minipage}{7cm}
\begin{center}
{\includegraphics[width=7.5cm,height=5cm,clip]{Jay_fixmuA.eps}}
\caption{The green, black, blue, and red (from top to bottom) correspond to $\mu_A=0.002T$, $T$, $1.6T$, and $2T$, respectively. The solid and dashed curves correspond to $(B_x,E_y)=(m_{\pi}^2,m_{\pi}^2)=(0.135^2,0.135^2)$ GeV$^{2}$ and $(B_x,E_y)=(0.001^2,0.01^2)$ GeV$^{2}$.}\label{Jay_fixmuA}
\end{center}
\end{minipage}
\hspace {1cm}
\begin{minipage}{7cm}
\begin{center}
{\includegraphics[width=7.5cm,height=5cm,clip]{Jaz_fixmuA.eps}}
\caption{The color corresponds to the same cases in Fig.\ref{Jay_fixmuA}.}
\label{Jaz_fixmuA}
\end{center}
\end{minipage}
\end{figure}
\begin{figure}[t]
\begin{minipage}{7cm}
\begin{center}
{\includegraphics[width=7.5cm,height=5cm,clip]{Jvz_fixmuV.eps}}
\caption{The color coding corresponds to the same cases as in Fig.\ref{Jay_fixmuV}.}\label{Jvz_fixmuV}
\end{center}
\end{minipage}
\hspace {1cm}
\begin{minipage}{7cm}
\begin{center}
{\includegraphics[width=7.5cm,height=5cm,clip]{Jvz_fixmuA.eps}}
\caption{The color corresponds to the same cases in Fig.\ref{Jay_fixmuA}.}
\label{Jvz_fixmuA}
\end{center}
\end{minipage}
\end{figure}
Next, we study the electric and Hall currents as functions of the electromagnetic fields. The numerical results are shown in Figs.\ref{currentsfixB}-\ref{currentsfixE}, where we fix both the vector and axial chemical potentials to be small compared with the temperature. In Fig.\ref{currentsfixB}, we fix $B_x$ to the average value at RHIC and vary $E_y$. In the small-electric-field region, $E_y<20m_{\pi}^2$, the increase of the charge densities induced by $E_y$ is mild, while the currents $(J^b_{v/a})_{y/z}$ are linear in the electric field, as expected from (\ref{Jbsmallmu}). In the region with large $E_y$, the charge densities are increased by the electric field at fixed chemical potentials, while the currents start to decrease except for $(J^b_v)_y$. This is qualitatively consistent with the strong-field approximation in (\ref{JblargeE}). However, the increase of $(J^b_{v/a})_t$ mitigates the decrease of $(J^b_a)_y$ and $(J^b_{v/a})_z$. In Fig.\ref{currentsfixE}, we then fix $E_y$ and vary $B_x$. We observe the linear increase of $(J^b_{v/a})_z$, as expected from (\ref{Jbsmallmu}). Also, the decrease of $(J^b_v)_y$ is mild for small $B_x$, but the nonlinear effect quickly takes over for $(J^b_a)_y$. In the region with large $B_x$, all currents decrease, as anticipated from (\ref{JblargeB}).
\begin{figure}[t]
\begin{center}
\subfigure[vector charge density]{
\label{fig:first}
\includegraphics[height=4cm, width=0.4\textwidth]{Jvt_fixB.eps}}
\subfigure[axial charge density]{
\label{fig:second}
\includegraphics[height=4cm,width=0.4\textwidth]{Jat_fixB.eps}
}\\
\subfigure[vector current]{
\label{fig:third}
\includegraphics[height=4cm,width=0.4\textwidth]{Jvy_fixB.eps}}
\subfigure[axial current]{
\label{fig:fourth}
\includegraphics[height=4cm,width=0.4\textwidth]{Jay_fixB.eps}
}\\% ------- End of the second row ---------------------
\subfigure[vector Hall current]{
\label{fig:fifth}
\includegraphics[height=4cm,width=0.4\textwidth]{Jvz_fixB.eps}}
\subfigure[axial Hall current]{
\label{fig:sixth}
\includegraphics[height=4cm,width=0.4\textwidth]{Jaz_fixB.eps}
}
\end{center}
\caption{Boundary currents normalized by $C$ with $B_x=m_{\pi}^2$, $\mu_V=0.2T$, and $\mu_A=0.1T$.}
\label{currentsfixB}
\end{figure}
\begin{figure}[t]
\begin{center}
\subfigure[vector charge density]{
\label{fig:first}
\includegraphics[height=4cm,width=0.4\textwidth]{Jvt_fixE.eps}}
\subfigure[axial charge density]{
\label{fig:second}
\includegraphics[height=4cm,width=0.4\textwidth]{Jat_fixE.eps}
}\\
\subfigure[vector current]{
\label{fig:third}
\includegraphics[height=4cm,width=0.4\textwidth]{Jvy_fixE.eps}}
\subfigure[axial current]{
\label{fig:fourth}
\includegraphics[height=4cm,width=0.4\textwidth]{Jay_fixE.eps}
}\\% ------- End of the second row ---------------------
\subfigure[vector Hall current]{
\label{fig:fifth}
\includegraphics[height=4cm,width=0.4\textwidth]{Jvz_fixE.eps}}
\subfigure[axial Hall current]{
\label{fig:sixth}
\includegraphics[height=4cm,width=0.4\textwidth]{Jaz_fixE.eps}
}
\end{center}
\caption{Boundary currents normalized by $C$ with $E_y=m_{\pi}^2$, $\mu_V=0.2T$, and $\mu_A=0.1T$.}
\label{currentsfixE}
\end{figure}
\section{CEW in holography}\label{CEW_holography}
\subsection{CEW in the SS Model}
In this section, we will investigate the transport coefficients of the CEW in the framework of the SS model. We focus on the cases with weak electric fields, such that the boundary currents are linear in the electric fields, while preserving the nonlinear effects from the magnetic fields encoded in the conductivities. Also, we will neglect the contributions from the CS terms.
From (\ref{bcurrentRL}), we find
\begin{eqnarray}\nonumber\label{betayyzy}
(\beta_{L/R})_{yy}&=&\frac{L^{3/2}U_T^{3/2}}{B_x^2L^3+U_T^3}
\frac{(J_{L/R})_t}{\sqrt{(J_{L/R})_t^2+B_x^2L^3U_T^3+U_T^5}},\\
(\beta_{L/R})_{zy}&=&\frac{-B_xL^3}{B_x^2L^3+U_T^3},
\end{eqnarray}
where we take $U_c\approx U_T$ for small $E_y$. Since $(\beta_{L/R})_{zy}$ are independent of $(J_{L/R})_t$, we directly obtain $(\beta_-)_{zy}=0$ for arbitrary chemical potentials. It follows that
\begin{eqnarray}\nonumber
(\delta j_{v})_y&=&\left((\beta_+)_{yy}\delta j^0_{v}+(\beta_-)_{yy}\delta j^0_{a}\right)E_y,\\\nonumber
(\delta j_{a})_y&=&\left((\beta_-)_{yy}\delta j^0_{v}+(\beta_+)_{yy}\delta j^0_{a}\right)E_y,\\
(\delta j_{v/a})_z&=&(\beta_+)_{zy}\delta j^0_{v/a}E_y.
\end{eqnarray}
The transport coefficients in the dispersion relation read
\begin{eqnarray}\nonumber\label{transptcoeff}
\tau_{\pm}^{-1}&=&\left(n_v(\beta_+)_{yy}+\frac{(\sigma_v)_{yy}}{2}
\pm\sqrt{n_v^2(\beta_-)_{yy}^2+n_v(\beta_-)_{yy}(\sigma_a)_{yy}+\frac{(\sigma_v)_{yy}^2}{4}}\right),
\\\nonumber
(v_{\pm})_y&=&\left((\beta_{+})_{yy}\pm\frac{ (\beta_-)_{yy} (2 n_v (\beta_{-})_{yy}+(\sigma_a)_{yy})}{\sqrt{4n_v^2 (\beta_-)_{yy}^2+4 n_v(\beta_{-})_{yy}(\sigma_a)_{yy}+(\sigma_v)_{yy}^2}}\right)E_y,
\\\nonumber
(v_{\pm})_z&=&(\beta_{+})_{zy}E_y,
\\\nonumber
(D_{\pm})_{yy}&=&\mp\frac{(\beta_-)^2_{yy} \left((\sigma_v)_{yy}^2-(\sigma_a)_{yy}^2\right) (E_y^2)}{\left(4n_v^2 (\beta_-)_{yy}^2+4n_v(\beta_-)_{yy}(\sigma_a)_{yy}+(\sigma_v)_{yy}^2\right)^{3/2}},
\\
(D_{\pm})_{zz}&=&(D_{\pm})_{zy}=(D_{\pm})_{yz}=0.
\end{eqnarray}
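As a quick structural check of these coefficient formulas (a standalone sketch with arbitrary illustrative inputs, not part of the holographic computation), one can verify numerically that only $\tau_-^{-1}$ vanishes at $n_v=0$ and that $(D_+)_{yy}=-(D_-)_{yy}$:

```python
import math

# Hypothetical illustrative inputs (units irrelevant for this structural check)
bp, bm = 0.30, 0.12          # (beta_+)_yy, (beta_-)_yy
sv, sa = 0.50, 0.20          # (sigma_v)_yy > (sigma_a)_yy at small chemical potentials
Ey = 1.0

def coeffs(nv):
    # transport coefficients of the CEW dispersion relation, Eq. (transptcoeff)
    root = math.sqrt(4*nv**2*bm**2 + 4*nv*bm*sa + sv**2)
    tau_inv = {s: nv*bp + sv/2 + s*root/2 for s in (+1, -1)}
    vy = {s: (bp + s*bm*(2*nv*bm + sa)/root)*Ey for s in (+1, -1)}
    Dyy = {s: -s*bm**2*(sv**2 - sa**2)*Ey**2/root**3 for s in (+1, -1)}
    return tau_inv, vy, Dyy

tau_inv, vy, Dyy = coeffs(nv=0.0)
assert abs(tau_inv[-1]) < 1e-12        # only the "-" mode loses its damping at n_v = 0
assert abs(tau_inv[+1] - sv) < 1e-12   # "+" mode is damped by the normal conductivity
assert abs(Dyy[+1] + Dyy[-1]) < 1e-12  # (D_+)_yy = -(D_-)_yy
assert Dyy[+1] < 0                     # "+" mode has a negative diffusion constant
```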
Recall that $(\sigma_v)_{yy}>(\sigma_a)_{yy}$ in the limit of small chemical potentials. By further turning off $n_v$, we find that only $\tau_-^{-1}$ vanishes. Therefore, when $n_v=0$, the dissipation of the ``$-$'' mode of the CEW comes only from the diffusion. Although the diffusion constant for the ``$+$'' mode here is negative, the finite damping time should dominate the dissipation. The same argument can be applied to the CMW shown in (\ref{CMWdispersion}) as well. Moreover, the ``$-$'' mode of the Hall CEW becomes non-dissipative when $n_v=0$ and $k_y=0$. This may be somewhat anticipated, since the Hall currents are not influenced by the collisional effect in the ``stationary state'' in the absence of the currents along the electric field, which is equivalent to the condition with zero drag force or infinite relaxation time, as discussed in Sec.\ref{sub:BE}.
We now evaluate the transport coefficients in (\ref{transptcoeff}) numerically. We first consider the cases with fixed electromagnetic fields and different magnitudes of the chemical potentials. The results are shown in Fig.\ref{rtimemfixEB}-\ref{DyyfixEB}. As illustrated in Fig.\ref{rtimemfixEB} and Fig.\ref{rtimepfixEB}, the damping is more prominent for the ``$+$'' mode, which mainly stems from the nonzero normal conductivity. For both modes, the damping is increased by the vector chemical potential, while it is less affected by the axial chemical potential. Similarly, the wave velocities along the electric field of the two modes are enhanced by the vector chemical potential and degenerate in the presence of an axial chemical potential, as shown in Fig.\ref{vyfixEB}.
In contrast, as expected from (\ref{betayyzy}) and (\ref{transptcoeff}), the Hall velocities of the two modes, as illustrated in Fig.\ref{vzfixEB}, are degenerate and independent of the chemical potentials. As shown in Fig.\ref{DyyfixEB}, the diffusion constant vanishes at zero axial chemical potential and increases when the axial chemical potential is increased. However, the diffusion constant is reduced by the vector chemical potential, due to the presence of $n_v$ in the denominator shown in (\ref{transptcoeff}).
Next, we may fix the chemical potentials and vary the magnitudes of the constant electromagnetic fields. In Fig.\ref{rtimemfixmuVT}-\ref{vzfixmuVT},
we plot the coefficients with $\mu_V=T$ and $\mu_A=0$. Since $(\beta_-)_{yy}=0$ when $\mu_A=0$, $(v_+)_y$ and $(v_-)_y$ are degenerate, as illustrated in Fig.\ref{vyfixmuVT}. Also, $(D_\pm)_{yy}$ vanish under this condition. In Fig.\ref{rtimemfixmuV2T}-\ref{DyyfixmuV2T}, we take $\mu_V=2T$ and $\mu_A=T$, where the degeneracy of $(v_+)_y$ and $(v_-)_y$ is broken and $(D_{\pm})_{yy}$ are nonzero. Recall that $(D_+)_{yy}=-(D_-)_{yy}$. In addition, the magnitudes of $(D_{\pm})_{yy}$ approach zero at large $B_x$, which could be expected from (\ref{transptcoeff}) since $(\beta_-)_{yy}$ drops to zero at large $B_x$ according to (\ref{betayyzy}). In general, when we increase the chemical potentials, the wave velocities increase, while the damping and diffusion contributing to the dissipation of the CEW are enhanced as well. Nonetheless, with zero chemical potentials, the CEW may only propagate perpendicular to the applied fields without dissipation. Although the damping effect is absent only for the ``$-$'' mode in the SS model, due to the presence of a nonzero conductivity at zero chemical potentials, both ``$\pm$'' modes of the Hall CEW will be non-dissipative in a system with zero conductivity and zero chemical potentials.
\begin{figure}[t]
\begin{center}
\subfigure[]{
\label{rtimemfixEB}
\includegraphics[height=4cm,width=0.4\textwidth]{rtimemfixEB.eps}}
\subfigure[]{
\label{rtimepfixEB}
\includegraphics[height=4cm,width=0.4\textwidth]{rtimepfixEB.eps}
}\\
\subfigure[]{
\label{vyfixEB}
\includegraphics[height=4cm,width=0.4\textwidth]{vyfixEB.eps}}
\subfigure[]{
\label{vyfixEB0mu}
\includegraphics[height=4cm,width=0.4\textwidth]{vyfixEB0mu.eps}
}\\% ------- End of the second row ---------------------
\subfigure[]{
\label{vzfixEB}
\includegraphics[height=4cm,width=0.4\textwidth]{vzfixEB.eps}}
\subfigure[]{
\label{DyyfixEB}
\includegraphics[height=4cm,width=0.4\textwidth]{DyyfixEB.eps}
}
\end{center}
\caption{The green, black, blue, and red curves (from bottom to top in (a)-(e) and from top to bottom in (f)) correspond to $\mu_V=0.002T$, $T$, $1.6T$, and $2T$, respectively. Here we take $E_y=B_x=10m_{\pi}^2$. The unit of $\tau_-$ is GeV$^{-1}$. In (c), the solid and dashed curves represent $(v_-)_y$ and $(v_+)_y$, respectively.}
\end{figure}
\begin{figure}[t]
\begin{center}
\subfigure[]{
\label{rtimemfixmuVT}
\includegraphics[height=4cm,width=0.4\textwidth]{rtimemfixmuVT.eps}}
\subfigure[]{
\label{rtimepfixmuVT}
\includegraphics[height=4cm,width=0.4\textwidth]{rtimepfixmuVT.eps}
}\\
\subfigure[]{
\label{vyfixmuVT}
\includegraphics[height=4cm,width=0.4\textwidth]{vyfixmuVT.eps}}
\subfigure[]{
\label{vzfixmuVT}
\includegraphics[height=4cm,width=0.4\textwidth]{vzfixmuVT.eps}
}
\end{center}
\caption{The green (long-dashed), black (dot-dashed), blue (dashed), and red (solid) curves correspond to $eE_y=m_{\pi}^2$, $5m_{\pi}^2$, $10m_{\pi}^2$, and $20m_{\pi}^2$, respectively. Here we take $\mu_V=T$ and $\mu_A=0$. The unit of $\tau_-$ is GeV$^{-1}$.}
\end{figure}
\begin{figure}[t]
\begin{center}
\subfigure[]{
\label{rtimemfixmuV2T}
\includegraphics[height=4cm,width=0.4\textwidth]{rtimemfixmuV2T.eps}}
\subfigure[]{
\label{rtimepfixmuV2T}
\includegraphics[height=4cm,width=0.4\textwidth]{rtimepfixmuV2T.eps}
}\\
\subfigure[]{
\label{vyfixmuV2T}
\includegraphics[height=4cm,width=0.4\textwidth]{vyfixmuV2T.eps}}
\subfigure[]{
\label{vzfixmuV2T}
\includegraphics[height=4cm,width=0.4\textwidth]{vzfixmuV2T.eps}
}\\% ------- End of the second row ---------------------
\subfigure[]{
\label{DyyfixmuV2T}
\includegraphics[height=4cm,width=0.4\textwidth]{DyyfixmuV2T.eps}}
\end{center}
\caption{The green (long-dashed), black (dot-dashed), blue (dashed), and red (solid) curves correspond to $eE_y=m_{\pi}^2$, $5m_{\pi}^2$, $10m_{\pi}^2$, and $20m_{\pi}^2$, respectively. Here we take $\mu_V=2T$ and $\mu_A=T$. The unit of $\tau_-$ is GeV$^{-1}$. In \ref{vyfixmuV2T}, the solid and dashed curves represent $(v_-)_y$ and $(v_+)_y$, respectively.}
\end{figure}
\subsection{CEW in the Weakly/Strongly Coupled Scenarios at Small Chemical Potentials}
In this subsection, we focus on the CEW at small chemical potentials in the absence of a magnetic field, where the transport coefficients of the CEW can be derived analytically both in the SS model and in weakly coupled QED, through the conductivities obtained from the hard-thermal-loop approximation in \cite{Huang:2013iia}.
For the weakly coupled scenario, we may consider an ideal gas at finite temperature and chemical potentials. The bookkeeping result (see also, for example, the number density of massless quarks in the QGP in \cite{Pu:2011vr}) shows that
\begin{eqnarray}
j^0_{R/L}=\frac{Q_f\mu_{R/L}}{6}\left(T^2+\frac{\mu_{R/L}^2}{\pi^2}\right),
\end{eqnarray}
which results in
\begin{eqnarray}
\alpha_{R/L}=\frac{6}{Q_fT^2\left(1+\frac{3\mu_{R/L}^2}{\pi^2T^2}\right)}
\approx\frac{6}{Q_fT^2}\left(1-\frac{3\mu_{R/L}^2}{\pi^2T^2}\right)
\end{eqnarray}
for small chemical potentials, where $Q_f$ denotes the degrees of freedom of the chiral fermions.
By definition, we find
\begin{eqnarray}
\beta_{R/L}=2\rho\mu_{R/L}\alpha_{R/L}
\approx\frac{12\tilde{\rho}\mu_{R/L}}{Q_fT^3}\left(1-\frac{3\mu_{R/L}^2}{\pi^2T^2}\right),
\end{eqnarray}
where $\tilde{\rho}=\rho T$ is dimensionless. We thus have
\begin{eqnarray}
\beta_{+/-}=\frac{12\tilde{\rho}}{Q_fT^3}\mu_{V/A}+\mathcal{O}(\mu_{R/L}^3/T^3).
\end{eqnarray}
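The leading-order expressions above can be checked numerically against the exact $\alpha_{R/L}$; the sketch below (with illustrative values, assuming the combinations $\beta_\pm=(\beta_R\pm\beta_L)/2$ and $\mu_{R/L}=\mu_V\pm\mu_A$) confirms that the corrections are of order $\mu^2/T^2$:

```python
import math

T, Qf, rho_t = 1.0, 2.0, 0.8     # rho_t = rho*T is the dimensionless coefficient
muV, muA = 0.01*T, 0.005*T       # small chemical potentials

def alpha(mu):                   # exact: alpha = 6 / (Qf T^2 (1 + 3 mu^2 / (pi^2 T^2)))
    return 6.0/(Qf*T**2*(1 + 3*mu**2/(math.pi**2*T**2)))

def beta(mu):                    # beta_{R/L} = 2 rho mu alpha, with rho = rho_t / T
    return 2*(rho_t/T)*mu*alpha(mu)

muR, muL = muV + muA, muV - muA
beta_p = 0.5*(beta(muR) + beta(muL))
beta_m = 0.5*(beta(muR) - beta(muL))

# leading-order result: beta_{+/-} = 12 rho_t mu_{V/A} / (Qf T^3)
lead_p = 12*rho_t*muV/(Qf*T**3)
lead_m = 12*rho_t*muA/(Qf*T**3)
assert abs(beta_p/lead_p - 1) < 1e-3   # corrections are O(mu^2/T^2) ~ 1e-4 here
assert abs(beta_m/lead_m - 1) < 1e-3
```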
In the limit $\mu_R=-\mu_L=\mu_A$ and $\sigma_{v/a}=0$, from (\ref{dispCMW1}), the dispersion relation for CMW reads
\begin{eqnarray}
\omega_{\pm}=\pm\lambda({\bf B\cdot k})\alpha_{R/L}=\pm\frac{eN_c{\bf B\cdot k}}{2\pi^2T^2}\left(\frac{6}{Q_f}\right)\left(1-\frac{3\mu_A^2}{\pi^2T^2}\right).
\end{eqnarray}
Analogously, from (\ref{dispCEW1}), the dispersion relation for CEW is given by
\begin{eqnarray}
\omega_{\pm}=\pm\lambda({\bf E\cdot k})\beta_{R/L}=\pm e{\bf E\cdot k}\frac{2\tilde{\rho}\mu_A}{T^3}\left(\frac{6}{Q_f}\right)\left(1-\frac{3\mu_A^2}{\pi^2T^2}\right).
\end{eqnarray}
The numerical value of $\tilde{\rho}$ depends on the properties of the medium. In weakly coupled QED, one can read off $\sigma_0$ and $\tilde{\rho}$ defined in (\ref{JRLsmallmu}) from \cite{Huang:2013iia} by turning off the contributions from the interaction between the right-handed and left-handed sectors
\footnote{We simply drop the terms proportional to $\mu_R^2+\mu_L^2$ in $\sigma_{R/L}$ in \cite{Huang:2013iia}. One may choose an alternative way to truncate the interaction by dropping the term proportional to $\mu_R^2(\mu_L^2)$ in $\sigma_L(\sigma_R)$. In such a case, we have $\tilde{\rho}\approx 9.005/(e^4\ln (1/e))$.}
, where
\begin{eqnarray}
\sigma_0=15.6952\frac{T}{e^4\ln(1/e)},\quad\tilde{\rho}=10.2495\frac{1}{e^4\ln(1/e)}.
\end{eqnarray}
Here we may consider two particular cases for the CEW. When $n_v=0$ $(\beta_+=0)$, from (\ref{CEWomega}) and (\ref{betapm}),
we find
\begin{eqnarray}
\omega_{\pm}\approx\pm e\sqrt{\beta_-^2E_y^2k_y^2-\frac{\sigma_v^2}{4}}-\frac{ie\sigma_v}{2}
\approx\pm e\sqrt{\left(\frac{6\tilde{\rho}\mu_AE_y}{T^3}\right)^2k_y^2-\frac{\sigma_0^2}{4}}-\frac{ie\sigma_0}{2},
\end{eqnarray}
where the contribution from $\sigma_A$ is dropped since $\sigma_A\sim\mathcal{O}(n_{R/L}^2)$. Here we take $Q_f=2$ by summing over the spins of electrons in QED.
The small-momentum expansions of two modes up to the leading order of $k_y$ are
\begin{eqnarray}\nonumber\label{CEWdispnv0}
\omega_+&=&-ie\left(\frac{6\tilde{\rho}\mu_AE_y}{T^3}\right)^2\frac{k_y^2}{\sigma_0}
=-i\frac{240.957(eE_y)^2\mu_A^2}{e^5\ln(1/e)T^7}k_y^2,
\\
\omega_-&=&-ie\sigma_0+ie\left(\frac{6\tilde{\rho}\mu_AE_y}{T^3}\right)^2\frac{k_y^2}{\sigma_0}
=-\frac{iT}{e^3\ln(1/e)}\left(15.6952-\frac{240.957(eE_y)^2\mu_A^2}{e^2T^8}k_y^2\right),
\end{eqnarray}
where neither mode propagates.
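A quick numerical check of this small-momentum expansion (with illustrative values for $\beta_-E_y$ and $\sigma_0$): for $bk_y\ll\sigma_0$ the square root is imaginary, and the ``$+$'' mode reduces to a purely diffusive, non-propagating mode:

```python
import cmath

e_c, sigma0, b = 1.0, 0.7, 0.3   # illustrative values; b plays the role of beta_- * E_y
k = 1e-3                          # small momentum, b*k << sigma0

omega_p = e_c*cmath.sqrt(b**2*k**2 - sigma0**2/4) - 1j*e_c*sigma0/2
omega_m = -e_c*cmath.sqrt(b**2*k**2 - sigma0**2/4) - 1j*e_c*sigma0/2

# leading small-momentum behaviour: omega_+ = -i e b^2 k^2 / sigma0 (purely damped)
approx_p = -1j*e_c*b**2*k**2/sigma0
assert abs(omega_p - approx_p) < 1e-12 + abs(approx_p)*1e-3
assert abs(omega_m - (-1j*e_c*sigma0 - approx_p)) < abs(approx_p)*1e-3
```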
On the other hand, when $n_a=0$ $(\beta_-=0)$, we have
\begin{eqnarray}\nonumber\label{CEWdispna0}
\omega_{+}&=&e\left(\frac{6\tilde{\rho}\mu_VE_y}{T^3}\right)k_y
=\frac{61.4969\mu_V(eE_y)}{e^4\ln(1/e)T^3}k_y
\\
\omega_{-}&=&e\left(\frac{6\tilde{\rho}\mu_VE_y}{T^3}\right)k_y
-i e\sigma_0
=\frac{61.4969\mu_V(eE_y)}{e^4\ln(1/e)T^3}k_y-i\frac{15.6952\,T}{e^3\ln(1/e)},
\end{eqnarray}
where we drop $n_v\beta_+\sim\mathcal{O}(n_{R/L}^2)$. In \cite{Huang:2013iia}, the interaction between the right-handed and left-handed fermions was included. When $n_v=0$, in our convention, the dispersion relation of the CEW reads
\begin{eqnarray}\label{Huangnv0}
\omega_{\pm}=\pm e\sqrt{(v_ek_y)^2-(\sigma_0/2)^2}-ie\sigma_0/2,\quad v_e=\alpha_An_a\sqrt{2\sigma_2\chi_e\alpha_V\alpha_A}E_y,
\end{eqnarray}
where
\begin{eqnarray}
\sigma_2=7.76052\frac{1}{Te^4\ln(1/e)},\quad\chi_e=20.499\frac{1}{Te^4\ln(1/e)},
\quad\alpha_{V/A}=\frac{\partial\mu_{V/A}}{\partial j^0_{V/A}}\approx\frac{3}{T^2}.
\end{eqnarray}
By making the small-momentum expansion, (\ref{Huangnv0}) becomes
\begin{eqnarray}\nonumber
\omega_+&=&-ie\frac{v_e^2k_y^2}{\sigma_0}=-i\frac{182.444(eE_y)^2\mu_A^2}{e^5\ln(1/e)T^7}k_y^2,
\\
\omega_-&=&-ie\sigma_0+ie\frac{v_e^2k_y^2}{\sigma_0}
=-\frac{iT}{e^3\ln(1/e)}\left(15.6952-\frac{182.444(eE_y)^2\mu_A^2}{e^2T^8}k_y^2\right),
\end{eqnarray}
where the diffusion is reduced by the interaction between the R/L sectors.
When $n_a=0$, the two modes read
\begin{eqnarray}\nonumber\label{Huangna0}
\omega_{+}&=&ev_ak_y
=\frac{61.4969\mu_V(eE_y)}{e^4\ln(1/e)T^3}k_y
\\
\omega_{-}&=&ev_vk_y
-i e\sigma_0
=\frac{46.5631\mu_V(eE_y)}{e^4\ln(1/e)T^3}k_y-i\frac{15.6952T}{e^3\ln(1/e)},
\end{eqnarray}
where
\begin{eqnarray}
v_a=\chi_e\alpha_V\alpha_An_vE_y,\quad v_v=2\sigma_2\alpha_V^2n_vE_y.
\end{eqnarray}
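The quoted prefactors can be cross-checked from these definitions: with $\alpha_V=\alpha_A=3/T^2$, $Q_f=2$, and $n_v=Q_f\mu_VT^2/6$ (working in units $\mu_V=T=1$, so $n_v=1/3$), one recovers the numbers $61.4969$ and $46.5631$ in (\ref{Huangna0}):

```python
chi_e, sigma_2 = 20.499, 7.76052   # in units of 1/(T e^4 ln(1/e)), as quoted above
alpha = 3.0                        # alpha_V = alpha_A = 3/T^2, in units of 1/T^2
nv = 1.0/3.0                       # n_v = Qf*muV*T^2/6 = muV*T^2/3 for Qf = 2

# omega_+ = e v_a k_y with v_a = chi_e alpha_V alpha_A n_v E_y
pref_a = chi_e*alpha*alpha*nv
# omega_- propagates with v_v = 2 sigma_2 alpha_V^2 n_v E_y
pref_v = 2*sigma_2*alpha*alpha*nv

assert abs(pref_a - 61.4969) < 1e-3   # prefactor of mu_V (eE_y) k_y / (e^4 ln(1/e) T^3)
assert abs(pref_v - 46.5631) < 1e-3
```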
Similar to (\ref{CEWdispna0}), the $\omega_{-}$ mode will be damped out, but the velocities of the two modes in (\ref{Huangna0}) are different due to the interactions between the R/L sectors. When the interactions are turned off, the two velocities become degenerate. In \cite{Huang:2013iia}, the $\omega_-$ and $\omega_+$ modes are called the ``vector density wave'' and the ``axial density wave'', respectively. Here we find that only the axial density wave is unaffected by the interaction.
We may compare the results obtained from weakly coupled QED with those found in strongly coupled QCD (the SS model). From (\ref{bdcurrents}) and (\ref{Jtmurelation}), we find
\begin{eqnarray}
\beta_{+/-}=\frac{3\mu_{V/A}(eE_y)}{2a^5T^5L^6}(2\pi l_s^2)^2,
\quad\sigma_v=Ca^2T^2L^{9/2}(2\pi l_s^2)^2,
\end{eqnarray}
where we write out the dependence on $2\pi l_s^2$ explicitly for dimensional analysis. When $n_v=0$, we have
\begin{eqnarray}\nonumber
\omega_+&=&-i\frac{9(2\pi l_s^2)^2(eE_y)^2\mu_A^2}{4Ca^{12}L^{33/2}T^{12}}k_y^2,
\\
\omega_-&=&-i(2\pi l_s^2)^2\left(Ca^2T^2L^{9/2}+\frac{9(eE_y)^2\mu_A^2}{4Ca^{12}L^{33/2}T^{12}}k_y^2\right).
\end{eqnarray}
When $n_a=0$, we have
\begin{eqnarray}\nonumber\label{omegana0}
\omega_+&=&\frac{3\mu_V(2\pi l_s^2)^2(eE_y)k_y}{2a^5T^5L^6},
\\
\omega_-&=&(2\pi l_s^2)^2\left(\frac{3\mu_V(eE_y)k_y}{2a^5T^5L^6}
-iCa^2T^2L^{9/2}\right).
\end{eqnarray}
It turns out that the CEW in weakly coupled and in strongly coupled systems have different temperature dependences. In weakly coupled QED, the hard-thermal-loop approximation assumes that the temperature dominates all other scales in the system. However, the SS model contains $M_{KK}$, corresponding to the mesonic scale, which should also be involved in the CEW. We may now focus on the propagating waves for $n_a=0$. By using $L^3=(4\pi M_{KK})^{-1}\lambda_t$ and
$C=(12\pi^2L^{3/2})^{-1}N_c$ with $2\pi l_s^2=1$ GeV$^{-2}$, (\ref{omegana0}) can be written as
\begin{eqnarray}\nonumber
\omega_+&=&\frac{729M_{KK}^2}{128\pi^2\lambda_t^2T^2}\frac{(eE_y)\mu_V}{T^3}k_y,
\\
\omega_-&=&\frac{729M_{KK}^2}{128\pi^2\lambda_t^2T^2}\frac{(eE_y)\mu_V}{T^3}k_y
-i\frac{2\lambda_tN_cT^2}{54\pi M_{KK}}.
\end{eqnarray}
In comparison with (\ref{Huangna0}), the diffusion constants for $\omega_-$ in the weakly coupled and strongly coupled scenarios have distinct dependences on both the temperature and the coupling constants.
\section{Summary and Outlook}\label{sum_outlook}
In this work, we have proposed the chiral Hall effect (CHE) generated by the applied electromagnetic fields and an axial chemical potential. In the presence of an electric field and a magnetic field perpendicular to each other, collective excitations of thermal plasmas with nonzero vector and axial chemical potentials will result in density waves, the chiral electric waves (CEW), propagating along the directions parallel to the electric field and perpendicular to both applied fields. Although the CEW induced by the CESE only exist with nonzero chemical potentials, the CEW led by the CHE should survive even at zero chemical potentials. Such Hall CEW become non-dissipative at zero conductivity. Phenomenologically, we have argued that the CHE could lead to a rapidity-dependent charge asymmetry in asymmetric heavy ion collisions. Combining with the CME and CESE, we may find different charge asymmetries of the flow harmonics $v_n$ at distinct rapidities.
Nevertheless, we are unable to draw conclusions on the magnitudes of the charge asymmetry of $v_n$, since the axial chemical potential in the QGP is unknown. Moreover, to describe the practical conditions in heavy ion collisions, numerical simulations based on the wave equations derived in our work, with proper initial charge distributions and the hydrodynamic evolution of the QGP, are needed. On the other hand, the topological effect in the QGP could be pronounced; we thus have to couple the CEW with the CMW. Also, in our work, we only consider the density fluctuations and neglect the fluctuations of the induced electromagnetic fields.
It has been indicated in \cite{Akamatsu:2013pjd,Akamatsu:2014yza} that the induced electromagnetic fields could further cause chiral-plasma instabilities in the presence of an external magnetic field. Such instabilities will reduce the CME. Therefore, it is tempting to explore the existence of similar instabilities for the CESE and CHE in the future.
In holography, a substantial problem occurs when we try to compute all the currents generated by the CME, CESE, and CHE, where the currents are not gauge invariant when incorporating the contributions from the CS terms in the SS model.
Moreover, there exists a persistent debate on the presence of the CME in the SS model, where the CME current cannot be both conserved and gauge invariant. On the other hand, in \cite{Hoyos:2011us}, the CME is reproduced in holography via a different definition of the axial chemical potential in the D3/D7 system, where the axial chemical potential comes from the rotating D7 branes instead of the temporal gauge fields in the gravity dual. It is thus intriguing to investigate the CESE and CHE, along with the CME, in the framework of the D3/D7 system.
Furthermore, the Hall and chiral Hall effects can still survive
in non-relativistic systems, e.g. Weyl semi-metals. Quite differently
from the spin Hall effect in Weyl semi-metals induced by axion fields
or the Berry phase, the chiral Hall effect in our work is caused by interactions, which
will play a role if there is an effective $\mu_{A}$. We leave
the applications to condensed matter systems for future work.
\section{Acknowledgment}
The authors thank Jiunn-wei Chen and Xu-guang Huang for helpful
discussions and valuable comments from the referee of Physical Review D. This work was supported by the NSFC under grant No.
11205150. SP was supported in part by the NSC, NTU-CTS, and the
NTU-CASTS of R.O.C. SYW was supported
by the National Science Council under the grant NSC 102-2811-M-009-057 and the National Center
for Theoretical Science, Taiwan. S.P. also acknowledges the support from the Alexander von Humboldt Foundation.
DLY was supported by Duke University under the DOE grant DE-FG02-05ER41367 and CYCU under the grant MOST 103-2811-M-033-002.
\section{Appendices}
\subsection{Hall conductivity from the Langevin equation and Boltzmann equations \label{sub:BE}}
In the presence of quasi-particles, we may incorporate the drag force coming from the medium. The equation of motion for the quasi-particles with charge $+1$ then reads
\begin{eqnarray}\label{lorentzf}
\left(\frac{d{\bf p}}{dt}\right)_{R/L}={\bf E}+{\bf v}_{R/L}\times {\bf B}-\xi{\bf p}_{R/L},
\end{eqnarray}
where ${\bf p}$ is the momentum of the quasi-particles and $\xi$ is the drag coefficient. This is basically the Langevin equation in the absence of noise terms. We then take ${\bf v}={\bf j}/j^t$ and ${\bf p}=M{\bf v}$, with $M=M_L=M_R$ being the mass of the quasi-particles. We further assume $M\ll T$ such that the chiral symmetry is approximately preserved. Here we also assume that $\xi$ is the same for left/right-handed particles and isotropic. In the equilibrium state, when $d{\bf p}/dt=0$, (\ref{lorentzf}) can be rewritten as
\begin{eqnarray}
E_i=-\epsilon_{ijk}\frac{(j_{R/L})_j}{(j_{R/L})_0}B_k+\xi M\frac{(j_{R/L})_i}{(j_{R/L})_0}.
\end{eqnarray}
By solving the coupled equations for $i=x,y,z$, we find
\begin{eqnarray}
(j_{R/L})_x=0,\quad (j_{R/L})_y=(\sigma_{R/L})_{yy}E_y,\quad (j_{R/L})_z=(\sigma_{R/L})_{zy}E_y,
\end{eqnarray}
where
\begin{eqnarray}\label{eq:sol_Langevin_01}
(\sigma_{R/L})_{yy}=\frac{(j_{R/L})_0}{\xi M\left(1+\frac{B_x^2}{\xi^2M^2}\right)},\quad
(\sigma_{R/L})_{zy}=-\frac{(j_{R/L})_0B_x}{\xi^2 M^2\left(1+\frac{B_x^2}{\xi^2M^2}\right)}.
\end{eqnarray}
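The steady-state solution above can be double-checked by solving $\xi M\,{\bf v}-{\bf v}\times{\bf B}={\bf E}$ directly; a small stdlib sketch (Cramer's rule for the $3\times3$ system, all numbers illustrative) confirms the two conductivities:

```python
def solve3(A, b):
    # Cramer's rule for a 3x3 linear system A v = b
    def det(M):
        return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
              - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
              + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))
    d = det(A)
    return [det([[A[i][j] if j != k else b[i] for j in range(3)]
                 for i in range(3)])/d for k in range(3)]

xiM, Bx, Ey, j0 = 0.7, 1.3, 0.2, 1.0   # illustrative drag, field, and density values

# steady state: E_i = -(v x B)_i + xi*M*v_i with v = j/j0, B = Bx x^, E = Ey y^
# components: xiM*vx = 0 ; xiM*vy - Bx*vz = Ey ; Bx*vy + xiM*vz = 0
A = [[xiM, 0, 0], [0, xiM, -Bx], [0, Bx, xiM]]
vx, vy, vz = solve3(A, [0.0, Ey, 0.0])

sig_yy = j0/(xiM*(1 + Bx**2/xiM**2))          # longitudinal conductivity
sig_zy = -j0*Bx/(xiM**2*(1 + Bx**2/xiM**2))   # Hall conductivity
assert abs(vx) < 1e-12                        # no current along B from the drag balance
assert abs(j0*vy - sig_yy*Ey) < 1e-12
assert abs(j0*vz - sig_zy*Ey) < 1e-12
```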
One may expect that CME and CSE should lead to non-vanishing $(j_x)_{R/L}$. However, the currents along the magnetic field should deplete in the presence of the drag force, while the currents parallel to the electric field and perpendicular to both the electric and magnetic fields are steady.
On the other hand, we can derive the classical Hall effect from the Boltzmann equation.
In the presence of external $\mathbf{E}$ and $\mathbf{B}$ fields,
the Boltzmann equation can be written as
\begin{equation}
\frac{df}{dt}=\partial_{t}f+\mathbf{v}\cdot\partial_{x}f-e[\mathbf{E}+\mathbf{v}\times\mathbf{B}]\cdot\frac{\partial}{\partial\mathbf{p}}f=-\frac{f-f_{0}}{\tau},\label{eq:BE_01}
\end{equation}
where $\mathbf{v}$ is the velocity of a single particle
with momentum $\mathbf{p}$, $f(x,p)$ is the distribution function,
and $f_{0}$ is $f$ at an equilibrium state. Here we will drop the $R/L$ indices in the derivations for simplicity. On the right-hand side,
we use the relaxation time $\tau$ instead of the full collision term.
We assume that the system is very close to an equilibrium state, which
allows us to expand $f$ around $f_{0}$,
\begin{equation}
f=f_{0}+\delta f,
\end{equation}
with
\begin{equation}
f_{0}=\frac{1}{e^{(E_{p}-\mu)/T}+1},
\end{equation}
where $E_{p}=|\mathbf{p}|$ is the energy of a massless single particle, $\mu$
is the chemical potential, and $T$ is the temperature. Inserting this back
into Eq.(\ref{eq:BE_01}) yields
\begin{equation}
\frac{\partial}{\partial t}\delta f+\mathbf{v}\cdot\partial_{x}\delta f-e\left[\mathbf{E}+\mathbf{v}\times\mathbf{B}\right]\cdot\frac{\partial}{\partial\mathbf{p}}\delta f+\mathbf{v}\cdot\left[e\mathbf{E}-\nabla\mu+\frac{E_{p}-\mu}{T}\nabla T\right](-\frac{\partial f_{0}}{\partial E_{p}})=-\frac{\delta f}{\tau},
\end{equation}
For simplicity, we assume that $\delta f(x,p)$, $\mu$, and $T$ are
homogeneous in space. In the case of a weak $\mathbf{E}$ field and a strong $\mathbf{B}$
field, i.e. $\mathbf{E}\ll O(\partial_{x})\ll\mathbf{B}$, we
can also neglect the higher-order correction $-e\mathbf{E}\cdot\frac{\partial}{\partial\mathbf{p}}\delta f$.
Finally, we get
\begin{equation}
\frac{\partial}{\partial t}\delta f-e\mathbf{v}\times\mathbf{B}\cdot\frac{\partial}{\partial\mathbf{p}}\delta f+\mathbf{v}\cdot e\mathbf{E}(-\frac{\partial f_{0}}{\partial E_{p}})=-\frac{\delta f}{\tau}.
\end{equation}
By using the ansatz, $\delta f=\mathbf{p}\cdot\mathbf{G}(E_{p})e^{i\omega t},$
$\mathbf{E}(t)=\mathbf{E}_{0}e^{-i\omega t}$, the Boltzmann equation
can be further simplified as,
\begin{equation}
(\tau^{-1}-i\omega)\mathbf{p}\cdot\mathbf{G}-e(\mathbf{v}\times\mathbf{B})\cdot\nabla_{p}(\mathbf{p}\cdot\mathbf{G})=e\mathbf{v}\cdot\mathbf{E}_{0}\frac{\partial f_{0}}{\partial E_{p}},
\end{equation}
and the solution is,
\begin{equation}
\mathbf{G}_{i}=\Gamma_{ji}^{-1}e\mathbf{E}_{0j}\frac{\partial f_{0}}{\partial E_{p}},
\end{equation}
with $\Gamma$ matrix,
\[
\Gamma_{ij}=(\tau^{-1}-i\omega)\delta_{ij}-\epsilon_{ijk}e\mathbf{B}_{k}.
\]
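As a standalone numerical check (with illustrative parameter values), this $\Gamma$ matrix with $\mathbf{B}=B\hat{x}$ and $\omega\to0$ inverts to the familiar Drude-Hall form, whose Hall entry approaches $1/(eB)$ at strong fields:

```python
tau, eB = 0.9, 2.4     # relaxation time and field strength, illustrative; omega -> 0
a = 1.0/tau

# Gamma_ij = (1/tau) delta_ij - eps_ijk e B_k, with B along x
G = [[a, 0, 0], [0, a, -eB], [0, eB, a]]

# analytic inverse: the lower 2x2 block [[a,-b],[b,a]] inverts to [[a,b],[-b,a]]/(a^2+b^2)
den = a*a + eB*eB
Ginv = [[1/a, 0, 0], [0, a/den, eB/den], [0, -eB/den, a/den]]

# verify G * Ginv = identity
for i in range(3):
    for j in range(3):
        s = sum(G[i][k]*Ginv[k][j] for k in range(3))
        assert abs(s - (1.0 if i == j else 0.0)) < 1e-12

# Drude-Hall structure: longitudinal ~ tau/(1+(eB tau)^2),
# Hall ~ eB tau^2/(1+(eB tau)^2), i.e. -> 1/(eB) at large eB tau
assert abs(Ginv[1][1] - tau/(1 + (eB*tau)**2)) < 1e-12
assert abs(Ginv[1][2] - eB*tau**2/(1 + (eB*tau)**2)) < 1e-12
```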
Then the current induced by the external fields is given by,
\[
\delta\mathbf{J}_{i}=\int\frac{d^{3}p}{(2\pi)^{3}}\mathbf{v}_{i}\delta f\equiv e\sigma_{ij}\mathbf{E}_{j}(t),
\]
where
\begin{eqnarray*}
\sigma_{ij} & = & \int\frac{d^{3}p}{(2\pi)^{3}}\mathbf{v}_{i}p_{l}\Gamma_{jl}^{-1}=n\Gamma_{ji}^{-1}.
\end{eqnarray*}
and $n$ is the number density, $n=\frac{1}{3}\int\frac{d^{3}p}{(2\pi)^{3}}f_{0}$.
Note that we have assumed $\tau$ to be a constant. In the stationary
limit, $\omega\rightarrow0$, if $\mathbf{B}=B\hat{x}$, we get
\begin{equation}
\sigma_{zy}=\int\frac{d^{3}p}{(2\pi)^{3}}\mathbf{v}_{i}p_{l}\frac{\partial f_{0}}{\partial E_{p}}\frac{eB\tau^{2}}{E_{p}^{2}+(eB)^{2}\tau^{2}}=\begin{cases}
-\frac{n}{eB},\quad & B\rightarrow\infty,\\
-eI_{10}\tau^{2}B,\quad & B\rightarrow0,
\end{cases}\label{eq:Hall_tensor_BE_01}
\end{equation}
which is consistent with Eq. (\ref{eq:Hall_strong_B_01}) and (\ref{eq:Hall_weak_B_01}),
and
\begin{equation}
I_{10}=\frac{1}{6\pi^{2}}\int dE_{p}f_{0}(E_{p}),
\end{equation}
is a quantity of mass dimension one.
\input{chiral_electric_wave.bbl}
\end{document}
We introduce here the family of Poisson-Dirichlet distributions and explain how they relate to split-merge processes in general. This also helps to understand why these distributions appear in the loop models, and more importantly, it allows us to later calculate the PD parameter $\theta$ that identifies the distribution of loop lengths.
The relevant objects here are partitions $(X_1,X_2\dots)$ of $[0,1]$. That is, the numbers $X_i$ satisfy $X_1 \geq X_2 \geq \dots \geq 0$ and $\sum_{i=1}^\infty X_i = 1$. The simplest definition of Poisson-Dirichlet involves the related Griffiths-Engen-McCloskey (GEM) distribution. The latter is a residual allocation measure built from Beta$(1,\theta)$ random variables. Recall that a Beta$(1,\theta)$ random variable has probability density function $\theta (1-s)^{\theta-1}$, $s \in [0,1]$, where the parameter $\theta$ is positive. Then, with $Y_1, Y_2, \dots$ being independent Beta$(1,\theta)$ random variables, we consider the vector
\[
\bigl( Y_1, (1-Y_1) Y_2, (1-Y_1) (1-Y_2) Y_3, \dots \bigr).
\]
One can check that these positive numbers add up to 1. Rearranging them in decreasing order, one gets a random partition of $[0,1]$ selected with the Poisson-Dirichlet distribution PD$(\theta)$.
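The stick-breaking construction above is easy to simulate; the sketch below samples GEM$(\theta)$ and checks the standard identity $\mathbb E_{{\rm PD}(\theta)}[\sum_i X_i^2]=1/(1+\theta)$ (the probability that two independent uniform points of $[0,1]$ fall in the same partition element), a fact not stated above but standard for Poisson-Dirichlet:

```python
import random

def sample_pd(theta, rng, n_sticks=200):
    # GEM stick-breaking with Beta(1, theta) sticks; sorting decreasingly gives PD(theta)
    xs, rest = [], 1.0
    for _ in range(n_sticks):
        y = 1.0 - (1.0 - rng.random())**(1.0/theta)  # Beta(1, theta) via inverse CDF
        xs.append(rest*y)
        rest *= 1.0 - y
    return sorted(xs, reverse=True)

rng = random.Random(0)
theta = 2.0
n_samples = 4000
est = sum(sum(x*x for x in sample_pd(theta, rng)) for _ in range(n_samples))/n_samples
assert abs(est - 1.0/(1.0 + theta)) < 0.02   # Monte Carlo check of E[sum X_i^2] = 1/(1+theta)
```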
We denote by $\mathbb E_{{\rm PD}(\theta)}$ the expectation with respect to the Poisson-Dirichlet distribution PD$(\theta)$. We will apply it to functions of the form $\prod_{i=1}^\infty f(X_i)$, where $f$ is a bounded function $[0,1] \to \mathbb C$ such that $f(s) = 1 + o(s)$ around $s=0$ --- this guarantees that the infinite product converges, and also that small loops do not contribute. Since the order of partition elements is not important, we can directly use the GEM measure. Concretely, this gives
\begin{equation}
\begin{split}
\mathbb E_{{\rm PD}(\theta)} &\biggl[ \prod_{i=1}^\infty f(X_i) \biggr] = \biggl( \prod_{i=1}^\infty \int_0^1 \theta (1-s_i)^{\theta-1} d s_i \biggr) \\
&f(s_1) f \bigl( (1-s_1) s_2 \bigr) f \bigl( (1-s_1) (1-s_2) s_3 \bigr) \dots
\end{split}
\end{equation}
If the function $f$ has the Taylor series $f(s) = 1 + \sum_{k\geq1} a_k s^k$, the expectation above can be computed with the help of the moments formula obtained in \cite{Nahum2013}; we get
\begin{equation}
\label{formula PD}
\begin{split}
&{\mathbb E}_{{\rm PD}(\theta)} \biggl[ \prod_{i=1}^\infty f(X_i) \biggr] \\
&= \sum_{n=0}^\infty \frac1{n!} \sum_{k_1,\dots,k_n=1}^\infty a_{k_1} \dots a_{k_n} \frac{\theta^{n} \, \Gamma(\theta) \, \Gamma(k_{1}) \dots \Gamma(k_{n})}{\Gamma(\theta + k_{1} + \dots + k_{n})}.
\end{split}
\end{equation}
See \cite[Eq.\ (4.16)]{Ueltschi2017}.
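For a concrete check of this formula (a Monte Carlo sketch with illustrative $\theta$, using $f(s)=1+as$ so that all $\Gamma(k_i)=\Gamma(1)=1$), the series can be compared with a direct average over stick-breaking samples:

```python
import math
import random

theta, a = 1.5, 0.5   # check f(s) = 1 + a*s, i.e. a_1 = a, all other a_k = 0

# the series above: sum_n a^n theta^n Gamma(theta) Gamma(1)^n / (n! Gamma(theta + n))
series = sum((a*theta)**n*math.gamma(theta)/(math.factorial(n)*math.gamma(theta + n))
             for n in range(40))

rng = random.Random(1)
def gem_product(n_sticks=200):
    # product of f over one GEM(theta) partition, via stick breaking
    prod, rest = 1.0, 1.0
    for _ in range(n_sticks):
        y = 1.0 - (1.0 - rng.random())**(1.0/theta)  # Beta(1, theta) stick
        prod *= 1.0 + a*rest*y
        rest *= 1.0 - y
    return prod

mc = sum(gem_product() for _ in range(4000))/4000
assert abs(series - 1.5806) < 0.001   # value of the series for theta=1.5, a=0.5
assert abs(mc - series) < 0.03        # Monte Carlo agreement
```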
The split-merge process (also called coagulation-fragmentation) is a Markov process on partitions of $[0,1]$. Each step consists of either merging two distinct elements, or splitting in two a given element (in which case it is split uniformly). Let $g_m,g_s$ be two positive parameters.
In its continuous-time version, the partition elements $X_i, X_j$ ($i \neq j$) are merged at rate $2 g_m X_i X_j$; the element $X_i$ is split at rate $g_s X_i^2$. The invariant measure is Poisson-Dirichlet with parameter $\theta = g_s/g_m$ \cite{Goldschmidt2011,Tsilevich2000,Diaconis2004}.
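The invariant measure can be probed directly with a small Gillespie-type simulation of these rates (a sketch with illustrative parameters, not from the source, using the standard PD$(\theta)$ value $\mathbb E[\sum_i X_i^2]=1/(\theta+1)$ as the observable):

```python
import random

def pick(parts, rng, power):
    # index i chosen with probability proportional to parts[i]**power
    w = [x**power for x in parts]
    r = rng.random()*sum(w)
    for i, wi in enumerate(w):
        r -= wi
        if r <= 0.0:
            return i
    return len(parts) - 1

def split_merge(theta=2.0, n_events=20000, burn=4000, seed=3):
    # merge X_i, X_j at rate 2*g_m*X_i*X_j; split X_i uniformly at rate g_s*X_i^2,
    # with g_m = 1 and g_s = theta, so the invariant measure should be PD(theta)
    rng = random.Random(seed)
    parts = [1.0]
    t_acc = s2_acc = 0.0
    for step in range(n_events):
        s2 = sum(x*x for x in parts)
        rate_split, rate_merge = theta*s2, 1.0 - s2   # sum_{i<j} 2 X_i X_j = 1 - s2
        dt = rng.expovariate(rate_split + rate_merge)
        if step >= burn:                              # time-weighted average after burn-in
            t_acc += dt
            s2_acc += s2*dt
        if len(parts) == 1 or rng.random()*(rate_split + rate_merge) < rate_split:
            i = pick(parts, rng, 2)                   # split a block chosen with prob ~ X_i^2
            x = parts.pop(i)
            u = rng.random()
            parts += [u*x, (1.0 - u)*x]
        else:                                         # merge a pair chosen with prob ~ X_i X_j
            while True:
                i, j = pick(parts, rng, 1), pick(parts, rng, 1)
                if i != j:
                    break
            x = parts.pop(max(i, j))
            y = parts.pop(min(i, j))
            parts.append(x + y)
    return parts, s2_acc/t_acc

parts, mean_s2 = split_merge()
assert abs(sum(parts) - 1.0) < 1e-9   # still a partition of [0,1]
assert 0.2 < mean_s2 < 0.5            # PD(2) predicts E[sum X_i^2] = 1/(theta+1) = 1/3
```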
\section{Identifying the Poisson-Dirichlet parameter}
We first present a Markov process whose invariant measure is the measure obtained from the loop representation of the quantum partition function $Z$. Based on this formulation, we can then obtain the PD parameter $\theta$.
It is convenient to first discretise the ``time'' interval $[0,\beta]$ with mesh $1/n$. Given a realisation $\omega$ of crosses and double bars, let $C(\omega)$ and $B(\omega)$ denote the number of crosses and double bars, respectively. On an arbitrary finite lattice $\Lambda$ with set of bonds ${\mathcal B}_\Lambda$, the measure can be written as
\begin{equation}
\label{measure on omega}
\begin{split}
\mu(\omega) = & \frac1Z 3^{|{\mathcal L}(\omega)|} \bigl( \tfrac{u}n \bigr)^{C(\omega)} \bigl( \tfrac{1-u}n \bigr)^{B(\omega)} \\
& \times \bigl( 1 - \tfrac1n \bigr)^{|{\mathcal B}_\Lambda| \beta n - C(\omega) - B(\omega)}.
\end{split}
\end{equation}
Let $R(\omega,\omega')$ denote the transition matrix for $\omega \mapsto \omega'$; the detailed balance equation is
\begin{equation}
\begin{split}
&3^{|{\mathcal L}(\omega)|} \left( \tfrac{u}n \right)^{C(\omega)} \left( \tfrac{1-u}n \right)^{B(\omega)} R(\omega,\omega') = \\
&3^{|{\mathcal L}(\omega')|} \left( \tfrac{u}n \right)^{C(\omega')} \left( \tfrac{1-u}n \right)^{B(\omega')} R(\omega',\omega).
\end{split}
\end{equation}
Here is a natural process that satisfies the equation above:
\begin{itemize}
\item A new cross appears in $\{i,j\} \times [t,t+\frac1n]$ at rate $\sqrt3 \frac{u}n$ if it causes a loop to split; at rate $\frac1{\sqrt3} \frac{u}n$ if it causes two loops to merge; at rate $\frac{u}n$ if the number of loops does not change.
\item Same with double bars, but with $1-u$ instead of $u$.
\item An existing cross or double bar is removed at rate $\sqrt3$ if its removal causes a loop to split; at rate $\frac1{\sqrt3}$ if its removal causes two loops to merge; at rate 1 if the number of loops remains constant.
\end{itemize}
Notice that any new cross or double bar between two loops causes them to merge. When $u \in (0,1)$, a subtle phenomenon occurs: a new cross or double bar within a single loop may either cause the loop to split, or reorganise it without splitting it (this is akin to $0 \leftrightarrow 8$); either occurs with probability $\frac12$.
Let $\gamma, \gamma'$ be two macroscopic loops of lengths $\ell(\gamma), \ell(\gamma')$. They are spread all over $\Lambda$ and they interact with one another, and with themselves, in an essentially mean-field fashion. There exists a constant $c_{1}$ such that a new cross or double bar that causes $\gamma$ to split appears at rate $\tfrac14 \sqrt3 \, c_{1} \frac{\ell(\gamma)^{2}}{\beta |\Lambda|}$; a new cross or double bar that causes $\gamma$ and $\gamma'$ to merge appears at rate $(c_{1} / \sqrt3) \frac{\ell(\gamma) \ell(\gamma')}{\beta |\Lambda|}$. There exists another constant $c_{2}$ such that the rate for an existing cross or double bar to disappear is $\tfrac14 \sqrt3 \, c_{2} \frac{\ell(\gamma)^{2}}{\beta |\Lambda|}$ if $\gamma$ is split, and $(c_{2} / \sqrt3) \frac{\ell(\gamma) \ell(\gamma')}{\beta |\Lambda|}$ if $\gamma$ and $\gamma'$ are merged. Consequently, $\gamma$ splits at rate
\begin{equation}
\tfrac14 \sqrt3 (c_{1}+c_{2}) \frac{\ell(\gamma)^{2}}{\beta |\Lambda|} \equiv \tfrac12 r_{\rm s} \ell(\gamma)^{2}
\end{equation}
and $\gamma, \gamma'$ merge at rate
\begin{equation}
\frac1{\sqrt3} (c_{1}+c_{2}) \frac{\ell(\gamma) \ell(\gamma')}{\beta |\Lambda|} \equiv r_{\rm m} \ell(\gamma) \ell(\gamma').
\end{equation}
Because of effective averaging over the whole domain, the constants $c_{1}$ and $c_{2}$ are the same for all loops and for both the split and merge events. This key property is certainly not obvious and the interested reader is referred to a detailed discussion for lattice permutations with numerical checks \cite{Grosskinsky2012}. It follows that the lengths of macroscopic loops satisfy an effective split-merge process, and the invariant distribution is Poisson-Dirichlet with parameter $\theta = r_{\rm s} / r_{\rm m} = 3/2$ \cite{Tsilevich2000,Diaconis2004,Goldschmidt2011}.
For $u = 0$ or $u = 1$, the "subtle phenomenon" above does not occur; splits then happen at twice the rate, and the Poisson-Dirichlet parameter is $\theta = 3$.
\section{Poisson-Dirichlet calculation of "nematic histogram"}
We study the distribution of the operator $Q=\frac1{|\Lambda|} \sum_{i \in \Lambda} Q_i$ in the Gibbs state $\langle \cdot \rangle_{\beta}$. To be precise, we seek to identify the density $\rho_Q$ such that for any function $g$, we have
\begin{equation}
\langle g( Q ) \rangle_{\beta} = \int_{-\infty}^\infty \rho_Q(s) g(s) ds.
\end{equation}
Choosing $g(s) = e^{i ks}$ gives the characteristic function of $\rho_Q$. Happily, we can use Eq.\ (3) from the main text to get an expression that involves the lengths of the loops. The Poisson-Dirichlet conjecture states that, as $\Lambda \to {\mathbb Z}^3$, we can replace the expectation in the loop model by the expectation with respect to PD(3/2), scaled by a number $\eta = \eta(\beta)$ that represents the fraction of sites in long loops at imaginary time 0 ($\eta \in [0,1]$). We then get
\begin{equation}
\begin{split}
\lim_{L\to\infty} \langle e^{ikQ} \rangle_{\beta}
&= {\mathbb E}_{{\rm PD}(\frac32)} \biggl[ \prod_{j=1}^\infty \bigl( \tfrac13 e^{-\frac23 ik \eta X_j} + \tfrac23 e^{\frac13 ik \eta X_j} \bigr) \biggr] \\
&= e^{-\frac23 ik \eta} \, {\mathbb E}_{{\rm PD}(\frac32)} \Bigl[ \prod_{j\geq1} \bigl( \tfrac13 + \tfrac23 e^{ik \eta X_j} \bigr) \Bigr].
\end{split}
\end{equation}
We can use Eq.\ \eqref{formula PD} and we get (see \cite{Ueltschi2017} for more details)
\begin{equation}
\label{result PD}
\lim_{L\to\infty} \langle e^{ikQ} \rangle_{\beta}= e^{-\frac23 ik \eta} \, \Gamma(\tfrac32) \sum_{r=0}^\infty \frac{(ik \eta)^r}{\Gamma(r+ \frac32)}.
\end{equation}
We calculate below its inverse Fourier transform, see Eq.\ \eqref{eq rho_Q}.
\section{Symmetry breaking calculation of "nematic histogram"}
Let $n^*$ denote the "spontaneous nematisation" in the $\vec e_{\rm z}$ direction,
\begin{equation}
n^* = \langle Q_i \rangle_{\vec e_{\rm z}},
\end{equation}
where $i$ is any site. Let $Q_i^{\vec a}$ be the spin rotation of the operator $Q_i$, namely
\begin{equation}
Q_i^{\vec a} = \bigl( a_1 S_i^{\rm x} + a_2 S_i^{\rm y} + a_3 S_i^{\rm z} \bigr)^2 - \tfrac23.
\end{equation}
Its expectation can be expressed in terms of $n^*$:
\begin{equation}
\langle Q_i^{\vec a} \rangle_{\vec e_{\rm z}} = \sum_{j={\rm x,y,z}} a_j^2 \langle (S_i^j)^2 - \tfrac23 \rangle_{\vec e_{\rm z}} + \sumtwo{j,k={\rm x,y,z}}{j\neq k} a_j a_k \langle S_i^j S_i^k \rangle_{\vec e_{\rm z}}.
\end{equation}
It is clear that $\langle \cdot \rangle_{\vec e_{\rm z}}$ is invariant under spin rotations around $\vec e_{\rm z}$, and also that $\langle S_i^{\rm z} \rangle_{\vec e_{\rm z}} = 0$, so that $\langle S_i^j S_i^k \rangle_{\vec e_{\rm z}} = 0$ for all $j \neq k$. Further, since $(S_i^{\rm x})^2 + (S_i^{\rm y})^2 + (S_i^{\rm z})^2 = 2$, we have
\begin{equation}
\langle (S_i^{\rm x})^2 - \tfrac23 \rangle_{\vec e_{\rm z}} = \langle (S_i^{\rm y})^2 - \tfrac23 \rangle_{\vec e_{\rm z}} = -\tfrac12 \langle (S_i^{\rm z})^2 - \tfrac23 \rangle_{\vec e_{\rm z}}.
\end{equation}
This gives
\begin{equation}
\label{rotated state}
\langle Q_i^{\vec a} \rangle_{\vec e_{\rm z}} = n^* (a_3^2 - \tfrac12 a_1^2 - \tfrac12 a_2^2).
\end{equation}
This allows us to calculate
\begin{equation}
\label{symm break}
\begin{split}
\lim_{L\to\infty} \langle & e^{ikQ} \rangle_{\beta} = \lim_{L \to \infty} \int_{P{\mathbb S}^2} \Bigl\langle e^{\frac{ik}{|\Lambda|} \sum_{i\in\Lambda} Q_i} \Bigr\rangle_{\vec a} d\vec a \\
&= \lim_{L \to \infty} \int_{P{\mathbb S}^2} \Bigl\langle e^{\frac{ik}{|\Lambda|} \sum_{i\in\Lambda} Q_i^{\vec a}} \Bigr\rangle_{\vec e_{\rm z}} d\vec a \\
&= \int_{P{\mathbb S}^2} e^{ik n^* (a_3^2 - \frac12 a_1^2 - \frac12 a_2^2)} d\vec a \\
&= e^{ik n^*} \int_0^{\pi/2} d\theta \sin\theta \, e^{-\frac32 ik n^* \sin^2 \theta}.
\end{split}
\end{equation}
Expanding the exponential in Taylor series and calculating the trigonometric integrals, we obtain
\begin{equation}
\label{result symm break}
\lim_{L\to\infty} \langle e^{ikQ} \rangle_{\beta} = e^{ik n^*} \Gamma(\tfrac32) \sum_{r=0}^\infty \frac{(-\frac32 ik n^*)^r}{\Gamma(r+\frac32)}.
\end{equation}
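The expansion leading to Eq.\ \eqref{result symm break} rests on the identity $\int_0^{\pi/2} \sin\theta \, e^{a \sin^2\theta} d\theta = \Gamma(\tfrac32) \sum_{r\geq0} a^r/\Gamma(r+\tfrac32)$ with $a = -\tfrac32 ik n^*$. This is easy to confirm numerically; the sketch below is our own check (function names and the quadrature scheme are ours).

```python
import cmath, math

def chi_series(k, n_star, terms=60):
    """Gamma-series side: e^{ik n*} Γ(3/2) Σ_r (-3/2 ik n*)^r / Γ(r+3/2)."""
    a = -1.5j * k * n_star
    s = sum(a**r / math.gamma(r + 1.5) for r in range(terms))
    return cmath.exp(1j * k * n_star) * math.gamma(1.5) * s

def chi_integral(k, n_star, m=20000):
    """Midpoint quadrature of e^{ik n*} ∫_0^{π/2} sinθ e^{-(3/2) ik n* sin²θ} dθ."""
    h = (math.pi / 2) / m
    tot = 0j
    for j in range(m):
        t = (j + 0.5) * h
        tot += math.sin(t) * cmath.exp(-1.5j * k * n_star * math.sin(t) ** 2) * h
    return cmath.exp(1j * k * n_star) * tot
```

At $k=0$ both sides equal $1$, as required for a characteristic function.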
We recover the result in Eq.\ \eqref{result PD} provided that
\begin{equation}
\label{value n*}
n^* = -\tfrac23 \eta.
\end{equation}
It is worth pointing out that $n^*$ is negative. This allows us to understand the nature of the nematic extremal states. Indeed, a natural candidate is the "axial nematic" state
\begin{equation}
\langle \cdot \rangle_{\vec a} = \lim_{h \to 0+} \lim_{L\to\infty} \langle \cdot \rangle_{H - h \sum_{i \in \Lambda} (\vec a \cdot \vec S_i)^2}.
\end{equation}
One can write a loop representation for the state $\langle \cdot \rangle_{\vec e_{\rm z}}$ where short loops have spin values $-1,0,+1$ and long loops have spin values $-1,+1$. The nematic order parameter would then be equal to
\begin{equation}
\tilde n = \lim_{L\to\infty} \langle Q \rangle_{\vec e_{\rm z}} = \tfrac13 \eta.
\end{equation}
This contradicts Eq.\ \eqref{value n*}. Instead, it turns out that extremal states are "planar nematic":
\begin{equation}
\label{extremal state}
\langle \cdot \rangle_{\vec a} = \lim_{h \to 0+} \lim_{L\to\infty} \langle \cdot \rangle_{H + h \sum_{i \in \Lambda} (\vec a \cdot \vec S_i)^2}.
\end{equation}
(Notice the "$+$" sign in front of $h$). In its loop representation, long loops have the spin value 0, and Eq.\ \eqref{value n*} holds true. The fact that extremal states are planar nematic was pointed out in \cite{Fridman2013}.
We can calculate the density $\rho_Q$ starting from Eq.\ \eqref{symm break}.
\begin{equation}
\begin{split}
\rho_Q(s) &= \frac1{2\pi} \int_{-\infty}^\infty dk \, e^{-isk} e^{-\frac23 ik \eta} \int_0^{\pi/2} d\theta \, \sin\theta \, e^{ik \eta \sin^2\theta} \\
&= \frac1{2\pi} \int_0^{\pi/2} d\theta \, \sin\theta \int_{-\infty}^\infty dk \, e^{ik (-s -\frac23 \eta + \eta \sin^2\theta)} \\
&= \int_0^{\pi/2} d\theta \, \sin\theta \, \delta(\eta \sin^2\theta -s -\tfrac23 \eta) \\
&= \frac1{2\eta} \int_0^\eta \frac{dt}{\sqrt{1 - \frac t\eta}} \delta(t - s -\tfrac23 \eta).
\end{split}
\end{equation}
We used the change of variables $t = \eta \sin^2 \theta$. We finally obtain the density for the nematic observable:
\begin{equation}
\label{eq rho_Q}
\rho_Q(s) = \begin{cases} \frac1{2\sqrt\eta \sqrt{\frac13 \eta - s}} & \text{if } -\frac23 \eta \leq s \leq \frac13 \eta, \\ 0 & \text{otherwise.} \end{cases}
\end{equation}
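As a quick numerical sanity check (ours, not part of the derivation), one can verify that this density is normalised, has zero mean, and reproduces $\langle Q^2 \rangle = \tfrac15 (n^*)^2 = \tfrac4{45}\eta^2$. The substitution $s = \tfrac13\eta - w^2$ removes the integrable square-root singularity at the upper edge of the support.

```python
import math

def rho_Q_moment(p, eta, m=40000):
    """p-th moment of rho_Q via the substitution s = eta/3 - w^2, w in [0, sqrt(eta)],
    under which rho_Q(s) ds becomes simply dw/sqrt(eta) (midpoint rule)."""
    h = math.sqrt(eta) / m
    tot = 0.0
    for j in range(m):
        w = (j + 0.5) * h
        tot += (eta / 3 - w * w) ** p * h
    return tot / math.sqrt(eta)
```

With, say, $\eta = 0.9$, the zeroth, first and second moments come out as $1$, $0$ and $4\eta^2/45$ to high accuracy.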
\section{Binder cumulants}
We can use the loop representation to get expressions for the moments of the operator $Q = \frac1{|\Lambda|} \sum_{i\in\Lambda} Q_i$; then we use the Poisson-Dirichlet conjecture to write them all in terms of a single unknown variable, the fraction of sites in long loops $\eta$. The Binder cumulants follow, and they are given by ratios that do not depend on $\eta$ any more.
We write out the calculations in some detail since, to the best of our knowledge, they cannot be found in the literature.
Here are identities that are exact in the infinite-volume limit:
\begin{equation}
\begin{split}
&\langle Q^2 \rangle_{\beta} = \tfrac29 {\mathbb P}_{\beta}^{\rm loops} [ i_1, i_2 \text{ in same loop}], \\
&\langle Q^3 \rangle_{\beta} = -\tfrac2{27} {\mathbb P}_{\beta}^{\rm loops} [ i_1, i_2, i_3 \text{ in same loop}].
\end{split}
\end{equation}
Here $i_1, i_2, i_3$ (and $i_4$ below) are sites that are very distant from one another. The first identity can be found in \cite{Ueltschi2013}; the second identity is similar. As for the 4th moment, we have
\begin{equation}
\begin{split}
&\langle Q^4 \rangle_{\beta} = \tfrac2{27} {\mathbb P}_{\beta}^{\rm loops} [ i_1, i_2, i_3, i_4 \text{ in same loop}] \\
&+ \tfrac4{27} {\mathbb P}_{\beta}^{\rm loops} [ i_1, i_2 \text{ in same loop}, i_3, i_4 \text{ in other loop}].
\end{split}
\end{equation}
Since the sites are distant, it is necessary that they belong to long loops in order to have a chance to be in the same loop. We can then use the Poisson-Dirichlet conjecture and we get
\begin{equation}
\begin{split}
&{\mathbb P}_{\beta}^{\rm loops} [ i_1, \dots, i_n \text{ in same loop}] \\
&= \eta^n {\mathbb P}_{{\rm PD}(\theta)} [n \text{ random points in same partition element}].
\end{split}
\end{equation}
The latter is the probability that, if we choose a random partition of $[0,1]$ according to PD($\theta$), and $n$ independent points in $[0,1]$, all $n$ points find themselves in the same partition element. This does not depend on the order of the elements so we can replace the Poisson-Dirichlet distribution by the GEM distribution. We calculate it by summing over the probability that the $n$ random points belong to the $k$th element; namely,
\begin{equation}
\begin{split}
&{\mathbb P}_{\beta}^{\rm loops} [ i_1, \dots, i_n \text{ in same loop}] \\
&= \eta^n \sum_{k=1}^\infty {\mathbb P}_{{\rm PD}(\theta)} [n \text{ points in $k$th partition element}] \\
&= \eta^n \sum_{k=1}^\infty {\mathbb E}_{\{Y_i\}}[(1-Y_1)^n \dots (1-Y_{k-1})^n Y_k^n].
\end{split}
\end{equation}
Since the $\{Y_i\}$ are independent, the expectation factorises and we get
\begin{equation}
\begin{split}
{\mathbb P}_{\beta}^{\rm loops} [ i_1, \dots, i_n & \text{ in same loop}] = \eta^n \frac{{\mathbb E}_{{\rm Beta}(1,\theta)}[Y^n]}{1 - {\mathbb E}_{{\rm Beta}(1,\theta)}[(1-Y)^n]} \\
&= \eta^n \frac{\Gamma(1+\theta) \Gamma(n)}{\Gamma(n+\theta)}.
\end{split}
\end{equation}
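This geometric-series step can be cross-checked numerically. The sketch below (our own, with our own function names) uses the closed forms ${\mathbb E}[Y^n] = \theta\Gamma(n+1)\Gamma(\theta)/\Gamma(n+1+\theta)$ and ${\mathbb E}[(1-Y)^n] = \theta/(\theta+n)$ for $Y \sim {\rm Beta}(1,\theta)$ and confirms the closed form $\Gamma(1+\theta)\Gamma(n)/\Gamma(n+\theta)$.

```python
import math

def beta1_moment(n, theta):
    """E[Y^n] for Y ~ Beta(1, theta), i.e. density theta*(1-y)^(theta-1) on [0,1]."""
    return theta * math.gamma(n + 1) * math.gamma(theta) / math.gamma(n + 1 + theta)

def same_loop_prob_series(n, theta):
    """Geometric-series form E[Y^n] / (1 - E[(1-Y)^n]), with E[(1-Y)^n] = theta/(theta+n)."""
    return beta1_moment(n, theta) / (1.0 - theta / (theta + n))

def same_loop_prob_closed(n, theta):
    """Closed form Gamma(1+theta) Gamma(n) / Gamma(n+theta)."""
    return math.gamma(1 + theta) * math.gamma(n) / math.gamma(n + theta)
```

For instance, for $\theta = 3/2$ and $n = 2$ both expressions give $2/5$, consistent with $\langle Q^2 \rangle = \tfrac29 \cdot \tfrac25 \eta^2 = \tfrac4{45}\eta^2$.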
A similar calculation gives
\begin{equation}
\begin{split}
&{\mathbb P}_{\beta}^{\rm loops} [i_1, i_2 \text{ in same loop}, i_3, i_4 \text{ in other loop}] \\
&= 2\eta^4 \sum_{k<\ell} {\mathbb P}_{{\rm PD}(\theta)} [i_1, i_2 \text{ in $k$th element}, i_3, i_4 \text{ in $\ell$th el.}] \\
&= 2\eta^4 \sum_{k,\ell=1}^\infty {\mathbb E}_{\{Y_i\}}[(1-Y_1)^4 \dots (1-Y_{k-1})^4 \\
& \hspace{25mm} \cdot Y_k^2 (1-Y_k)^2 \dots (1-Y_{k+\ell-1})^2 Y_{k+\ell}^2] \\
&= 2\eta^4 \frac{{\mathbb E}_{{\rm Beta}(1,\theta)}[Y^2 (1-Y)^2] {\mathbb E}_{{\rm Beta}(1,\theta)}[Y^2]}{(1-{\mathbb E}_{{\rm Beta}(1,\theta)}[(1-Y)^4]) (1-{\mathbb E}_{{\rm Beta}(1,\theta)}[(1-Y)^2])} \\
&= \eta^4 \frac{\theta \Gamma(1+\theta)}{\Gamma(4+\theta)}.
\end{split}
\end{equation}
Combining the terms above we get the moments for the nematic phase ($\theta = 3/2$) and for the SU(3) phases ($\theta=3$) given in Tab. I of the main text. Alternatively we could have looked at the Taylor series of $\langle e^{i k Q} \rangle_\beta$ from the expression in Eq.~\eqref{expression for exp(ikQ)}.
We now calculate the moments using symmetry breaking; we express them in terms of $n^*$. The $k$th moment is given by
\begin{equation}
\begin{split}
\langle Q^k \rangle_{\beta} &= \frac1{|\Lambda|^k} \sum_{i_1,\dots,i_k} \langle Q_{i_1} \dots Q_{i_k} \rangle_{\beta} \\
&=\frac1{|\Lambda|^k} \sum_{i_1,\dots,i_k} \int_{P{\mathbb S}^2} d\vec a \, \langle Q_{i_1} \rangle_{\vec a} \dots \langle Q_{i_k} \rangle_{\vec a}.
\end{split}
\end{equation}
We used the fact that extremal states are "clustering" and that for $\Lambda$ large, the main contribution in the sum comes from distant sites. We now use translation invariance and we rotate the observable rather than the state, so as to get
\begin{equation}
\begin{split}
\langle Q^k \rangle_{\beta} &= \int_{P{\mathbb S}^2} d\vec a \, \langle Q_i^{\vec a} \rangle_{\vec e_{\rm z}}^k \\ &= (n^*)^k \int_{P{\mathbb S}^2} d\vec a \, (a_3^2 - \tfrac12 a_1^2 - \tfrac12 a_2^2)^k \\
&= (n^*)^k \int_0^{\pi/2} d\theta \, \sin\theta \, (\tfrac32 \cos^2\theta - \tfrac12)^k.
\end{split}
\end{equation}
We used Eq.\ \eqref{rotated state} to get the second line. Calculating the integral we finally get
\begin{equation}
\begin{split}
&\langle Q^2 \rangle = \tfrac15 (n^*)^2, \\
&\langle Q^3 \rangle = \tfrac2{35} (n^*)^3, \\
&\langle Q^4 \rangle = \tfrac3{35} (n^*)^4.
\end{split}
\end{equation}
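These trigonometric integrals are elementary but easy to get wrong; the following midpoint quadrature (our own check) confirms the values $\tfrac15$, $\tfrac2{35}$ and $\tfrac3{35}$ for $k = 2, 3, 4$.

```python
import math

def angular_moment(k, m=100000):
    """Midpoint quadrature of the angular integral
    int_0^{pi/2} sin(t) * (3/2 cos^2(t) - 1/2)^k dt."""
    h = (math.pi / 2) / m
    tot = 0.0
    for j in range(m):
        t = (j + 0.5) * h
        tot += math.sin(t) * (1.5 * math.cos(t) ** 2 - 0.5) ** k * h
    return tot
```

Equivalently, substituting $c = \cos\theta$ reduces each case to a polynomial integral over $[0,1]$.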
This is compatible with the values in Table \ref{Table:Qn} if we assume the validity of relation \eqref{value n*}. Notice that $n^*$ indeed comes out negative.
The calculation using symmetry breaking is simpler than that with Poisson-Dirichlet. However, the heuristics behind it are more subtle and the result may be uncertain. For $u=0$ and $u=1$, the Poisson-Dirichlet calculations can be carried out without much hesitation (with $\theta=3$), but symmetry breaking is not immediate.
\medskip
\section{Energy histograms}
\begin{figure}[t]
\centering
\includegraphics{FigS1.pdf}
\caption{Energy histograms $P_E$ for $u=\cot(\phi=3\pi/8)=0.41412...$ for various system sizes $L$ at temperatures with equal peak height.}
\label{figShist}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics{FigS2.pdf}
\caption{Left panel: Cumulant $U_E$ of the energy distribution as a function of temperature near $T_c$ at $u=\cot(\phi=3\pi/8)=0.41412...$ for various system sizes $L$. Right panel: Finite-size extrapolation of the minimum value $U_E^\text{min}$ of $U_E$. }
\label{figSUE}
\end{figure}
Further evidence for the first-order character of the planar spin nematic melting transition is obtained from analysing energy histograms $P_E$ near the transition temperature. Within the stochastic series expansion QMC approach, the energy histogram $P_E$ is readily available from the histogram of the expansion order~\cite{Sandvik1999}. We obtain histograms with a pronounced two-peak structure for sufficiently large system sizes, indicative of phase coexistence. In particular,
we can use standard histogram-reweighting~\cite{Ferrenberg1988} in order to access the energy histograms at any temperature $T$ in the vicinity of a base temperature, at which the QMC simulations were actually performed. For each system size, this base temperature was taken from the peak position of the specific heat. This reweighting approach allows us to adjust $T$ so as to obtain histograms $P_E$ with equal heights of the two peaks~\cite{Lee1990}. These are shown in Fig.~\ref{figShist} for our reference value of $u=\cot(\phi=3\pi/8)=0.41412...$. We identify a pronounced two-peak structure for $L\gtrsim 64$. While the dip for $L=64$ is still shallow, it becomes deeper for increasing values of $L$, in agreement with the predictions by Binder~\cite{Binder1987} and Lee and Kosterlitz~\cite{Lee1990,Lee1991} for a first-order transition. The fact that the minimum takes on a substantial value even for $L=128$ reflects the fact that the transition is weakly first-order.
\begin{figure}[t]
\centering
\includegraphics{FigS3.pdf}
\caption{Energy histograms $P_E$ for various values of $u$ for $L=96$ at temperatures with equal peak height. For better comparison the individual histograms are shown shifted with respect to the energy $E_0$ of the minimum.}
\label{figShistvar}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics{FigS4.pdf}
\caption{Finite-size extrapolation of different estimators for the transition temperature $T_c$ at $u=\cot(\phi=3\pi/8)=0.41412...$.}
\label{figSTc}
\end{figure}
We also analyzed the fourth-order cumulant
\begin{equation}
U_E=1-\frac{1}{3}\frac{\langle H^4\rangle_\beta}{\langle H^2\rangle_\beta^2}
\end{equation}
of the energy distribution in the vicinity of $T_c$. This is shown for different system sizes in Fig.~\ref{figSUE}. We observe a narrow dip in $U_E$ at a temperature that approaches $T_c$ upon increasing $L$. While the minimum value $U_E^\text{min}$ tends towards $2/3$ for increasing $L$, a finite-size extrapolation with a $1/L^3$-scaling (cf. the right panel of Fig.~\ref{figSUE}) shows that in the thermodynamic limit $U_E$ remains well below $2/3$. This is another strong indication for the first-order character of the phase transition~\cite{Janke1993}.
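The connection between the two-peak structure and the dip in $U_E$ can be made explicit with a toy calculation (ours, not from the simulations): for an idealised coexistence histogram consisting of two sharp peaks at distinct energies, $U_E$ stays strictly below $2/3$, while a single peak gives exactly $2/3$.

```python
def energy_cumulant_two_peaks(e1, e2, p=0.5):
    """U_E = 1 - <H^4>/(3 <H^2>^2) for an idealised coexistence histogram with
    two sharp peaks at energies e1, e2 carrying weights p and 1-p."""
    m2 = p * e1 ** 2 + (1 - p) * e2 ** 2
    m4 = p * e1 ** 4 + (1 - p) * e2 ** 4
    return 1.0 - m4 / (3.0 * m2 ** 2)
```

For two equal-weight peaks at (hypothetical) energy densities $-1.3$ and $-1.1$ this gives $U_E \approx 0.658 < 2/3$, mimicking the residual dip seen in the extrapolation.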
Thus far, we concentrated on $u=\cot(\phi=3\pi/8)=0.414...$, but we also performed QMC simulations at different values of $u$ across the planar spin nematic regime. Energy histograms $P_E$ for several values of $u=\cot(\phi)$ are shown in Fig.~\ref{figShistvar}. Here, the energy has been shifted with respect to the dip position, denoted $E_0$, for better comparison. The energy histograms $P_E$ all exhibit a characteristic two-peak structure. We furthermore find that upon approaching the $u=0$ ($\phi=\pi/2$) end point of the planar spin nematic phase, the relative value of the minimum between the peaks increases slightly. For example, at $u=0.414...$ the ratio between the local minimum value of $P_E$ and its maximum value is about 0.51, while at $u=0.031...$, this ratio has increased to about 0.71.
This indicates that the transition becomes even weaker first-order upon approaching this SU(3) point. Moving towards the other end point of the planar spin nematic phase at $u=1$ ($\phi=\pi/4$), we do not observe a similar weakening of the first-order character of the nematic transition. Previous work on the thermal transitions out of the ferromagnetic phase at both SU(3) points claims that both transitions are continuous~\cite{Harada2002}. We note that on the $L=96$ system size, we similarly were not able to resolve any two-peak structure in $P_E$ at the SU(3) point at $u=1$.
It would thus certainly be interesting to examine the SU(3) points (where the low-$T$ phase is ferromagnetically ordered) in more detail in future work, on even larger lattices than accessible to us, in order to assess the conclusion of Ref.~\cite{Harada2002} regarding the nature of the phase transitions at the SU(3) points.
\section{Determination of $T_c$}
Here, we detail the estimation of the transition temperature $T_c$, focusing again on our reference value $u=\cot(\phi=3\pi/8)=0.41412...$. We consider three different estimators for $T_c$, obtained upon performing an extrapolation to the thermodynamic limit of (i) the position of the maximum in the specific heat $C$, (ii) the position of the minimum in $U_E$, and (iii) the temperature for which the peaks in the energy histogram $P_E$ have equal height. As shown in Fig.~\ref{figSTc}, all three quantities extrapolate with a $1/L^3$-scaling for large system sizes to a mean estimate $T_c=1.64900(1)$ for the transition temperature, as quoted in the main text.
\end{document}
\section{Introduction}
\subsection{Motivation}
Heterogeneous cellular networks (HCNs), consisting of various types of base stations such as macro, pico and femto, are a necessary step in the evolution of cellular networks to meet the explosive demand in mobile data traffic growth and various emerging applications \cite{GhoshHCN12}. For seamless coverage, it is essential to understand the signal-to-interference ratio (SIR) distribution, especially at high deployment densities, which makes the network interference-limited. In the literature, the mathematical analysis for the SIR distribution in conventional single-tier cellular network and HCNs mainly relies on the application of Poisson point process (PPP) theory in stochastic geometry \cite{Haenggi18book, Andrews11, Dhillon12, Mukherjee12, nigam2014, Singh13, Jo12, Keeler13, Blaszczyszyn15, XZhang14}, which has been shown to be a powerful tool in recent years.
However, the conventional SIR analysis for the HCNs is restricted to the mean success probability $\ps(\theta)\triangleq\P(\SIR>\theta)$, defined as the complementary cumulative distribution function (CCDF) of the SIR evaluated at the typical link. Such a performance metric is merely a macroscopic quantity by averaging the conditional success probability (link reliability) $\Ps(\theta)\triangleq \P(\SIR>\theta\mid\Phi)$ over the underlying point process $\Phi$, hence it provides no information about the difference between links. In contrast, the network operators' concerns for the real deployment of HCNs are questions such as ``How are the link reliabilities distributed among users in different tiers and/or in the whole network?'', or ``How will the offloading affect the SIR performance of different tiers?'', or ``What is the reliability level that the `5\% user'\footnote[1]{The ``5\% user'' refers to the user whose performance ranks at the 5th-percentile.} can achieve in each tier?''
To obtain such fine-grained information on the SIR performance, the meta distribution concept was introduced in \cite{MHmeta}, which characterizes the distribution of the conditional success probabilities of the individual links given the point process. The lack of study of the meta distribution for HCNs with offloading biasing among different tiers motivates our study in this paper. We shall see that the meta distribution of SIR is a framework that facilitates the analysis for a series of performance metrics including the variances of the link reliability, the mean local delay and the asymptotic gains for HCNs.
\subsection{Related Work}
For the SIR-related analysis based on stochastic geometry in HCNs, the most commonly used model is the homogeneous independent Poisson (HIP) model, where BSs of each tier follow a homogeneous independent Poisson point process \cite[Def.~2]{Haenggi14wcl}. \cite{Jo12} utilized the HIP model with the (biased) nearest-BS association and considered offloading between different tiers, where offloading was implemented by biasing the transmit power of different tiers. \cite{Singh13} studied an extended heterogeneous network scenario where multiple radio access technologies (RATs) including cellular and Wi-Fi coexist, with each RAT consisting of multiple tiers and modeled by the HIP model, and the biasing association is also considered. The distribution of the SINR at the typical user was derived and applied to the analysis of rate coverage. In \cite{nigam2014}, coordinated multipoint joint transmission (CoMP) in HCN was analyzed and it was shown, as a special case (namely no-CoMP), that the result for a single tier in \cite{Andrews11} also holds for arbitrary tiers.
Instead of the (biased) nearest-BS association adopted in the above-mentioned works, there is also the line of work using the maximum instantaneous SINR association, such as \cite{Dhillon12, Mukherjee12, Keeler13, Blaszczyszyn15, XZhang14}. \cite{Dhillon12} studied the coverage (success) probability and the average rate of the HIP model for SINR thresholds greater than $0$ dB under both open and closed access. \cite{Mukherjee12} utilized the HIP model and determined the coverage probability from the joint CCDF of the SINR at the typical user, with the SINR thresholds extended to the entire range. \cite{Keeler13} and \cite{Blaszczyszyn15} also extended the SINR threshold to less than 0 dB and established the exact results for the maximum instantaneous SINR association rule with arbitrary shadowing in HCNs by the $K$-coverage probability. As for the fading model, it should be noted that different from \cite{Dhillon12}, where only Rayleigh fading is considered, it has been shown that the same result applies to arbitrary fading in \cite{XZhang14}.
As for modeling the HCNs with more general point processes, \cite{Deng15} proposed two models for the two-tier HCN, with the inter-tier dependence modeled by combining the PPP and the Poisson hole process, and the intra-tier dependence taken into account by combining the PPP and the Matern cluster process, respectively, yielding more accurate results for the outage probability and the area spectral efficiency. In \cite{WeiASAPPP16}, for HCNs consisting of general point processes as each tier with unbiased association, the authors studied the SIR distribution by using the shifted versions of the PPP SIR distributions as approximations.
Most of these above-discussed works related to SIR analysis in HCNs only analyze the mean success probability without delving into the SIR performance at the individual link level. To overcome this limitation, we need to develop the meta distribution framework for the HCNs.
The meta distribution has been applied to different scenarios since it was formally formulated in \cite{MHmeta}, where the analysis of single-tier Poisson bipolar networks with ALOHA channel access and the downlink of Poisson cellular networks laid the foundation of the concept. It was applied to study D2D communication underlaid with the downlink of Poisson cellular networks \cite{2017mHaenggiTcom}, uplink and downlink Poisson cellular networks with fractional power control \cite{YWmetaPower}, D2D communications with interference cancellation \cite{YWmetaIC}, millimeter-wave D2D networks \cite{Deng17}, the spatial outage capacity \cite{SOCmeta}, and downlink coordinated multi-point transmission/reception (CoMP) in cellular networks \cite{CuiCoMP17}. These studies revealed some interesting new insights that are of significance to the deployment of real networks.
\subsection{Contributions}
In this paper, we develop an SIR meta distribution analysis framework for the HIP downlink model under Rayleigh fading. We show that this framework enables a comprehensive understanding of a series of key performance metrics and network design problems. Specifically,
\begin{itemize}
\item We derive exact analytical expressions of the $b$-th moment of the conditional success probability for both the overall typical user and the typical user in each tier under Rayleigh fading.
\item We show that the beta distribution is an excellent approximation for the exact meta distribution of both the entire network and each tier.
\item We reveal that both the $b$-th moment and the variance of the conditional success probability for each tier can be efficiently approximated by horizontally shifting the mean success probability curve of the single-tier PPP according to the asymptotic SIR gains, whose expressions are given explicitly.
\item We rigorously study the effects of the offloading biases on both the entire network and each tier in terms of the first moment and variance of the conditional success probability.
\item We extend the model to include random base station activity by ALOHA and derive analytical expressions of the $b$-th moment of the conditional success probability for both the overall typical user and the typical user in each tier.
\item We derive lower bounds of the region of ALOHA probabilities so that the mean local delay remains finite under the effect of random base station activity.
\end{itemize}
\subsection{Organization}
The rest of the paper is organized as follows: Section~\ref{sec:SystemModel} introduces the system model and the concept of the SIR meta distribution in HCNs. Section~\ref{sec:MainResults} develops the general framework for the analysis of HCNs using the meta distribution, wherein we derive the exact analytical expressions of the $b$-th moment of the conditional success probability, both for the entire network and for each individual tier, and discuss various key performance metrics and some network design problems related to offloading. Section~\ref{sec:BSactivity} extends the SIR meta distribution to the analysis of random base station activity. Section~\ref{sec:Conclusion} concludes the paper.
\section{System Model}\label{sec:SystemModel}
\subsection{SIR Model}
We consider a general $K$-tier heterogeneous cellular network model, where BSs of each tier follow a homogeneous independent Poisson point process $\Phi_i$ with intensity $\lambda_i$. This is the so-called homogeneous independent Poisson (HIP) model \cite[Def.~2]{Haenggi14wcl}. For the BSs of the $i$-th tier, the transmit power is $P_i$, and the range expansion bias is $B_i$. For BS ${\rm x}\in \Psi=\bigcup\limits_{i\in[K]}\Phi_i$, $\iota({\rm x})\in[K]$ denotes its tier index, where $[K]=\{1,2,\dots,K\}$. We assume the standard power-law path loss model with exponent $\alpha>2$, and define $\delta = 2/\alpha$. The downlink association rule is the biased nearest-BS association, \ie, for the typical user at the origin $o$, its serving BS $\nu(o)$ is drawn from all BSs according to
\begin{equation}\label{eq:nu_o}
\nu(o)= \mathop{\arg\max}\limits_{{\rm x}\in\Psi}\{P_{\iota({\rm x})}B_{\iota({\rm x})}\|{\rm x}\|^{-\alpha}\},
\end{equation}
where $\iota({\rm x})$ is the tier index of BS $\rm x$.
The power fading coefficient associated with BS ${\rm x}\in\Psi$ is denoted by $h_{\rm x}$, which is exponentially distributed with $\E(h_{\rm x})=1$ (Rayleigh fading). $R_j$ is the distance from the typical user to the nearest BS in $\Phi_j$. First we focus on the fully loaded case on a certain resource block (RB), \ie, all BSs are always active on the RB in consideration.
Letting ${\rm x_0} = \nu(o)$, for the typical user at the origin, the received signal-to-interference ratio (SIR) is given by
\begin{equation}\label{eq:SIR_typ}
\SIR_o = \frac{P_{\iota({\rm x_0})}h_{\rm x_0}\|\rm x_0\|^{-\alpha}}{\sum\limits_{{\rm x}\in\Psi\setminus\{\rm x_0\}}P_{\iota({\rm x})}h_{\rm x}\|{\rm x}\|^{-\alpha}}.
\end{equation}
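A minimal Monte Carlo sketch of this setup (our own illustration; the finite-disk truncation, parameter values and function names are our choices, not part of the analysis) samples each tier's PPP in a disk, applies the biased nearest-BS association of Eq.\ \eqref{eq:nu_o}, and evaluates Eq.\ \eqref{eq:SIR_typ} with exponential fades:

```python
import math, random

def poisson_sample(mean, rng):
    """Inverse-transform sampling of a Poisson variate (suitable for small means)."""
    p = math.exp(-mean)
    k, cdf, u = 0, p, rng.random()
    while u > cdf:
        k += 1
        p *= mean / k
        cdf += p
    return k

def sample_sir(lams, powers, biases, alpha, radius, rng):
    """Downlink SIR at the origin for one realisation of the HIP model,
    truncated to a disk of the given radius (finite-window approximation)."""
    bs = []  # list of (tier, distance-to-origin)
    for tier, lam in enumerate(lams):
        for _ in range(poisson_sample(lam * math.pi * radius ** 2, rng)):
            bs.append((tier, radius * math.sqrt(rng.random())))  # uniform in disk
    # biased nearest-BS association: maximise P_i * B_i * r^{-alpha}
    serving = max(range(len(bs)),
                  key=lambda m: powers[bs[m][0]] * biases[bs[m][0]] * bs[m][1] ** (-alpha))
    signal = interference = 0.0
    for m, (tier, r) in enumerate(bs):
        rx = powers[tier] * rng.expovariate(1.0) * r ** (-alpha)  # Rayleigh fading
        if m == serving:
            signal = rx
        else:
            interference += rx
    return signal / interference
```

Averaging the indicator $\mathbf 1\{\mathrm{SIR} > \theta\}$ over many realisations estimates the mean success probability discussed next.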
\subsection{Meta Distribution for HCNs}
The SIR meta distribution for single-tier cellular networks is the two-parameter function defined as \cite{MHmeta}
\begin{equation}
\bar F(\theta, t) \triangleq \bar F_{\Ps}(t) = \P(\Ps(\theta)>t),\quad \theta\in\R^+,\:t\in [0,1],
\label{eq:meta}
\end{equation}
which is the CCDF of the conditional success probability (link reliability) $\Ps$. The $b$-th moment of the meta distribution is denoted by $M_b(\theta)\triangleq \E(\Ps(\theta)^b)$.
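As announced in the contributions, the meta distribution is well approximated by a beta distribution matched to the first two moments $M_1, M_2$. A minimal sketch of this standard moment matching follows (helper names are ours), assuming the moments are realisable, \ie, $M_1^2 < M_2 < M_1$:

```python
def beta_params_from_moments(m1, m2):
    """(alpha, beta) of the beta distribution with mean m1 and second moment m2.

    Uses E[X] = alpha/(alpha+beta) and Var X = m2 - m1^2."""
    var = m2 - m1 * m1
    assert 0.0 < var < m1 * (1.0 - m1), "moments not realisable by a beta law"
    alpha = m1 * (m1 * (1.0 - m1) / var - 1.0)
    beta = alpha * (1.0 - m1) / m1
    return alpha, beta

def beta_moments(alpha, beta):
    """First two moments of Beta(alpha, beta)."""
    m1 = alpha / (alpha + beta)
    return m1, m1 * (alpha + 1.0) / (alpha + beta + 1.0)
```

For instance, hypothetical moments $M_1 = 0.8$, $M_2 = 0.68$ map to $(\alpha, \beta) = (2.4, 0.6)$, and the inverse map recovers the moments exactly.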
We consider two types of SIR meta distributions, one is for the overall network (\ie, the overall typical user) and the other is specific to the $i$-th tier, obtained by conditioning on the typical user connecting to that tier. In the following, we use the label $(i)$ for the quantities related to the $i$-th tier meta distributions.
\section{SIR Meta Distribution Framework}\label{sec:MainResults}
In this section we derive the general analytical expression for the $b$-th moment of the meta distribution in the HIP model with biasing.
\subsection{Moments of the Conditional Success Probability}
First, we state a lemma on the conditional and average access probabilities for the typical user connecting to a given tier $i$, which is a slight reformulation of \cite[Lemma 1]{Jo12}; hence the proof is omitted.
\allowdisplaybreaks
\begin{lemma}[Access probability]
\label{lem:AccessProb}
Defining $\iota(\rm x_0)\triangleq\iota(\nu(o))$, the conditional access probability for the typical user connecting to the $i$-th tier given $R_i$ is
\begin{equation}\label{eq:condi_acc_Prob}
\P(\iota({\rm x_0})=i \mid R_i) = \prod_{j\neq i} e^{-\lambda_j \pi ({\hat P_{ij}} {\hat B_{ij}})^{\delta} R_i^2},
\end{equation}
and the access probability that the typical user is associated with the $i$-th tier is
\begin{equation}\label{eq:AccessProb}
p_{\rm a}^{(i)}\triangleq\P(\iota({\rm x_0})=i) = \frac{1}{\sum\limits_{j\in[K]} \hat{\lambda}_{ij}(\hat{P}_{ij}\hat{B}_{ij})^\delta},
\end{equation}
where $\hat\lambda_{ij} = \lambda_j/\lambda_i$, $\hat P_{ij} = P_j/P_i$ and $\hat B_{ij} = B_j/B_i$.
\end{lemma}
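A numeric sanity check of this lemma: de-conditioning \eqref{eq:condi_acc_Prob} over the nearest-neighbor distance density $f_{R_i}(r)=2\pi\lambda_i r e^{-\lambda_i\pi r^2}$ must reproduce the closed form \eqref{eq:AccessProb}. The sketch assumes numpy/scipy, with illustrative two-tier parameters.

```python
# Check: integrating the conditional access probability against the Rayleigh
# density of R_i reproduces the closed-form access probability p_a^(i).
import numpy as np
from scipy.integrate import quad

def p_access(i, lams, powers, biases, delta):
    """Closed-form access probability of eq. (AccessProb)."""
    w = [l * (P * B) ** delta for l, P, B in zip(lams, powers, biases)]
    return w[i] / sum(w)

def p_access_numeric(i, lams, powers, biases, delta):
    """De-condition eq. (condi_acc_Prob) over f_{R_i}(r) = 2*pi*lam_i*r*exp(-lam_i*pi*r^2)."""
    def integrand(r):
        pdf = 2 * np.pi * lams[i] * r * np.exp(-lams[i] * np.pi * r ** 2)
        cond = 1.0
        for j in range(len(lams)):
            if j != i:
                ratio = (powers[j] * biases[j] / (powers[i] * biases[i])) ** delta
                cond *= np.exp(-lams[j] * np.pi * ratio * r ** 2)
        return pdf * cond
    return quad(integrand, 0.0, np.inf)[0]

pars = ((1.0, 5.0), (1.0, 0.2), (1.0, 10.0), 0.5)  # lams, powers, biases, delta
```

With these (illustrative) values, $p_{\rm a}^{(1)}\approx 0.124$, and the tier access probabilities sum to one by construction.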
Next we present the first main result on the moments of the conditional success probability.
\begin{theorem}[Moments for the $K$-tier HCNs]
\label{thm:Mb_RE}
For the overall typical user in the $K$-tier HIP model with range expansion, the $b$-th moment of the conditional success probability is given by
\begin{equation}
\label{eq:Mb-typ-Bias}
M_b = \sum\limits_{i} \frac{1}{\sum\limits_{j}\hat\lambda_{ij} (\hat P_{ij} \hat B_{ij})^\delta~ _2F_1(b,-\delta; 1-\delta; -\theta\hat B_{ij}^{-1})},
\end{equation}
where $i,~j\in[K]$, $\hat\lambda_{ij} = \lambda_j/\lambda_i$, $\hat P_{ij} = P_j/P_i$ and $\hat B_{ij} = B_j/B_i$.
\end{theorem}
\begin{IEEEproof}
See Appendix A.
\end{IEEEproof}
\begin{corollary}[Moments without range expansion]
\label{thm:Mb-WO-RE}
For the overall typical user, the $b$-th moment $M_b$ with no range expansion in any tier, \ie, $B_i=1$ for $i\in[K]$, is given by
\begin{equation}\label{eq:Mb_typ}
M_b=\frac{1}{_2F_1(b,-\delta; 1-\delta; -\theta)} ,\quad b\in \mathbb{C}.
\end{equation}
\end{corollary}
\begin{IEEEproof}
This can be easily obtained by setting $B_i=1$ for $i\in[K]$ in \eqref{eq:Mb-typ-Bias}.
\end{IEEEproof}
\begin{remark}
The $b$-th moment of the meta distribution of the overall typical user in a HIP-based $K$-tier downlink HCN without range expansion in any tier is the same as that in a single-tier network \cite[Thm.~2]{MHmeta}. Hence the meta distribution is the same. This shows that the multitier architecture does not improve the performance of the 5\% user (or, more generally, the fairness between the users).
\end{remark}
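The collapse described in this remark can be made concrete numerically. The sketch below evaluates \eqref{eq:Mb-typ-Bias} with scipy's Gauss hypergeometric function (parameter values are illustrative): with all $B_i=1$, the moment is independent of the densities and powers and equals the single-tier value.

```python
# Evaluate the b-th moment of Thm. (Mb-typ-Bias); with B_i = 1 for all i,
# it collapses to the single-tier expression 1 / 2F1(b,-delta;1-delta;-theta).
import numpy as np
from scipy.special import hyp2f1

def M_b(b, theta, delta, lams, powers, biases):
    """b-th moment of the conditional success probability, overall typical user."""
    total = 0.0
    for i in range(len(lams)):
        den = 0.0
        for j in range(len(lams)):
            lam_hat = lams[j] / lams[i]
            PB_hat = (powers[j] * biases[j]) / (powers[i] * biases[i])
            den += lam_hat * PB_hat ** delta * hyp2f1(
                b, -delta, 1 - delta, -theta * biases[i] / biases[j])
        total += 1.0 / den
    return total

theta, delta = 1.0, 0.5   # alpha = 4 (illustrative)
m1_nobias = M_b(1, theta, delta, (1.0, 5.0), (1.0, 0.2), (1.0, 1.0))
```

For $\delta=1/2$ one also has the classical identity $_2F_1(1,-\tfrac12;\tfrac12;-\theta)=1+\sqrt\theta\arctan\sqrt\theta$, which the test below uses as a cross-check.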
\begin{corollary}[Moments for the typical user in the $\bm i$-th tier]
\label{thm:Mb-ith-tier}
Conditioned on the typical user connecting to the $i$-th tier, the $b$-th moment of the meta distribution is given by
\begin{equation}\label{eq:Mb-ith-tier}
M_{b \mid (i)} = \frac{\sum_{j} \hat{\lambda}_{ij}(\hat{P}_{ij}\hat{B}_{ij})^\delta}{\sum_j \hat{\lambda}_{ij}(\hat{P}_{ij}\hat{B}_{ij})^\delta ~_2F_1(b,-\delta; 1-\delta; -\theta \hat B_{ij}^{-1})},
\end{equation}
where $\hat\lambda_{ij} = \lambda_j/\lambda_i$, $\hat P_{ij} = P_j/P_i$ and $\hat B_{ij} = B_j/B_i$.
\end{corollary}
\begin{IEEEproof}
This follows directly from the proof of Thm.~\ref{thm:Mb_RE}.
\end{IEEEproof}
\begin{corollary}[Mean local delay]
For the typical user in the $i$-th tier, the mean local delay is given by
\begin{equation}\label{eq:MLD_ith}
M_{-1\mid (i)} = \frac{(1-\delta)\sum_j \hat \lambda_{ij} (\hat P_{ij} \hat B_{ij})^\delta}{\sum_j \hat \lambda_{ij} (\hat P_{ij} \hat B_{ij})^\delta (1-\delta-\delta\theta \hat B_{ij}^{-1})}.
\end{equation}
\end{corollary}
\begin{IEEEproof}
The mean local delay is the $(-1)$-st moment of the conditional success probability in Cor.~\ref{thm:Mb-ith-tier}. Using the identity $_2F_1(-1,b;c;z) \equiv 1-\frac{bz}{c}$, \eqref{eq:MLD_ith} follows.
\end{IEEEproof}
Viewed as a function of the SIR threshold $\theta$ with the other parameters fixed, the mean local delay $M_{-1\mid (i)}$ exhibits a phase transition at
\begin{equation}\label{eq:theta_c}
\theta_{{\rm c}\mid(i)}=\frac{(1-\delta)\sum_j \hat \lambda_{ij} (\hat P_{ij} \hat B_{ij})^\delta}{\delta \sum_j \hat \lambda_{ij} \hat P_{ij}^\delta \hat B_{ij}^{\delta-1}},
\end{equation}
\ie, the mean local delay is finite for $\theta < \theta_{{\rm c}\mid(i)}$ and infinite for $\theta \geq \theta_{{\rm c}\mid(i)}$.
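The phase transition is easy to observe numerically. The sketch below (assuming numpy; the two-tier parameter values are illustrative) evaluates $M_{-1\mid(i)}$ from \eqref{eq:MLD_ith} and $\theta_{{\rm c}\mid(i)}$ from \eqref{eq:theta_c}: the delay grows without bound as $\theta\uparrow\theta_{{\rm c}\mid(i)}$.

```python
# Mean local delay of the i-th tier and its critical threshold theta_c.
import numpy as np

pars = dict(delta=0.5, lams=(1.0, 5.0), powers=(1.0, 0.2), biases=(1.0, 10.0))

def mld_tier(i, theta, delta, lams, powers, biases):
    """M_{-1|(i)}; returns np.inf beyond the phase transition."""
    num = den = 0.0
    for j in range(len(lams)):
        w = (lams[j] / lams[i]) * ((powers[j] * biases[j]) /
                                   (powers[i] * biases[i])) ** delta
        num += w * (1.0 - delta)
        den += w * (1.0 - delta - delta * theta * biases[i] / biases[j])
    return num / den if den > 0 else np.inf

def theta_c(i, delta, lams, powers, biases):
    """Critical SIR threshold of the i-th tier, eq. (theta_c)."""
    num = den = 0.0
    for j in range(len(lams)):
        lam_hat, P_hat = lams[j] / lams[i], powers[j] / powers[i]
        B_hat = biases[j] / biases[i]
        num += (1.0 - delta) * lam_hat * (P_hat * B_hat) ** delta
        den += delta * lam_hat * P_hat ** delta * B_hat ** (delta - 1)
    return num / den

tc = theta_c(0, **pars)
```

The mean local delay always exceeds 1 (at least one transmission is needed), increases monotonically towards the threshold, and is infinite above it.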
\subsection{Approximations of the Meta Distribution}
According to the Gil-Pelaez theorem \cite{gil1951note}, for a random variable $X>0$ with imaginary moments $M_{jt}\triangleq\E(X^{jt})$, where $j\triangleq\sqrt{-1}$ and $t\in\R$, the CCDF of $X$ is given by
\begin{equation}\label{eq:Gil}
\bar F_X(x)=\frac12+\frac1\pi\int_0^\infty \frac{\Im(e^{-jt\log x}M_{jt})}{t}\dd t,
\end{equation}
where $\Im(z)$ denotes the imaginary part of $z\in\mathbb{C}$.
Letting $X\triangleq\Ps(\theta)$ (or $X\triangleq P_{s\mid(i)}(\theta)$), the imaginary moments $M_{jt}$ (or $M_{jt\mid(i)}$) are obtained by setting $b=jt$ in \eqref{eq:Mb-typ-Bias} (or \eqref{eq:Mb-ith-tier}). Hence, the meta distribution of the conditional success probability for the whole network (and for the specific $i$-th tier) can be calculated.
Calculating the exact meta distribution via the Gil-Pelaez theorem usually involves evaluating many imaginary moments, which prohibits direct insight into the meta distribution and its application in mappings to other performance metrics such as the ergodic data rate \cite{Deng17}. An efficient approximation of the meta distribution is obtained by using the beta distribution, matching the first and second moments, which has been verified in \cite{MHmeta, Deng17, 2017mHaenggiTcom, YWmetaPower, YWmetaIC} for various network scenarios.
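The beta approximation is a two-line computation once $M_1$ and $M_2$ are available. The sketch below assumes scipy and uses single-tier moments with $\alpha=4$ and $\theta=1$ as illustrative inputs; the two shape parameters are fixed by matching mean and variance.

```python
# Beta approximation of the meta distribution by matching M_1 and M_2.
import numpy as np
from scipy.special import hyp2f1
from scipy.stats import beta as beta_dist

theta, delta = 1.0, 0.5
M1 = 1.0 / hyp2f1(1, -delta, 1 - delta, -theta)
M2 = 1.0 / hyp2f1(2, -delta, 1 - delta, -theta)
var = M2 - M1 ** 2
# Beta(a,b): mean = a/(a+b), variance = mean*(1-mean)/(a+b+1)
a = M1 * (M1 * (1 - M1) / var - 1.0)
b = a * (1.0 - M1) / M1

def meta_approx(t):
    """Approximate bar F(theta, t) = P(P_s(theta) > t) by the beta CCDF."""
    return beta_dist.sf(t, a, b)
```

By construction the fitted beta distribution reproduces the first two moments exactly, and `meta_approx` is decreasing in the reliability threshold $t$.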
\subsection{Asymptotic SIR Gains}
As shown in \cite{Haenggi14wcl,AGuo15ADG,Ganti16}, the SIR CCDFs $\bar F_{\SIR}(\theta)$ at the typical user in different general single-tier networks with nearest-BS association are essentially horizontally shifted versions of each other (with $\theta$ in dB), as long as they have the same diversity gain. The horizontal gap (or ``SIR gain'') relative to a reference network model at the target success probability $p_{\rm t}$ is given by
\begin{equation}\label{eq:HorizonGap}
G_{\rm p}(p_{\rm t}) \triangleq \frac{\bar F_{\SIR}^{-1}(p_{\rm t})}{\bar F_{\SIR_{\rm ref}}^{-1}(p_{\rm t})},
\end{equation}
where $\bar F_{\SIR}^{-1}$ is the inverse function of $\bar F_{\SIR}(\theta)$.
Usually it is more convenient to write $G_{\rm p}(p_{\rm t})$ as a function of $\theta$ by $G(\theta)=\theta'/\theta$,
where $\theta'$ is given by $\bar F_{\SIR}(\theta')=\bar F_{\SIR_{\rm ref}}(\theta)=p_{\rm t}$.
The asymptotic SIR gain at the high-reliability regime is defined by
\begin{equation}\label{eq:HorizonGap3}
G_0 \triangleq \lim_{\theta\to 0}G(\theta).
\end{equation}
Similarly, the asymptotic SIR gain at the low-reliability regime is defined as
\begin{equation}\label{eq:HorizonGapInf}
G_\infty \triangleq \lim_{\theta\to \infty}G(\theta).
\end{equation}
Usually, the most sensible reference network model is the homogeneous PPP. If $G_0$ (or $G_\infty$) exists, then a rather convenient way to estimate $p_{\rm s}(\theta)$ of the network in focus is by using $G_0$ (or $G_\infty$) as the scaling factor $G$ for $\theta$, \ie,
\begin{equation}\label{eq:HorizonGap4}
p_{\rm s}(\theta) \approx p_{\rm s, PPP}({\theta/G}).
\end{equation}
$G(\theta)$ in dB quantifies the horizontal gap between $p_{\rm s}(\theta)$ and $p_{\rm s, PPP}(\theta)$ for $\theta$ in dB.
Next, we extend the above-mentioned SIR asymptotic gain in single-tier networks to HCNs based on the HIP model.
\begin{definition}[Asymptotic SIR gains in HCNs]
For the HCN model in this paper, the asymptotic SIR gains of the $b$-th moment of the conditional success probability for each tier, at both the high-reliability and low-reliability regimes, with the standard success probability of the single-tier PPP as the reference, are, respectively, given by
\begin{equation}\label{eq:AsymptGainMb}
G_{0,b}^{(i)} = \lim_{\theta\to 0}\frac{M_{b\mid(i)}^{-1}(p_{\rm s, PPP}(\theta))}{\theta},
\end{equation}
and
\begin{equation}\label{eq:AsymptGainMbInf}
G_{\infty,b}^{(i)} = \lim_{\theta\to \infty}\frac{M_{b\mid(i)}^{-1}(p_{\rm s, PPP}(\theta))}{\theta},
\end{equation}
where $M_{b\mid(i)}^{-1}$ is the inverse function of $M_{b\mid(i)}$ and $p_{\rm s, PPP}(\theta) = M_1$ in \eqref{eq:Mb_typ}.
\end{definition}
We will show that, remarkably, the horizontal shift is applicable to each tier in the HCN. Before deriving the asymptotic gains, we first state a lemma about the asymptotics of the hypergeometric function $_2F_1$.
\begin{lemma}\label{lem:Asymp2F1}
For $b\in\mathbb{C}$,
\begin{equation}\label{eq:Asymp2F1_0}
_2F_1(b,-\delta;1-\delta;-z) \sim 1+bz\frac{\delta}{1-\delta}, ~~z\to 0,
\end{equation}
and
\begin{equation}\label{eq:Asymp2F1_Inf}
_2F_1(b,-\delta;1-\delta;-z) \sim z^\delta T(b), ~~z\to \infty,
\end{equation}
where $T(b)=\int_0^\infty(1-(1+r^{-\frac{1}{\delta}})^{-b})\dd r$.
\end{lemma}
\begin{IEEEproof}
By Taylor expansion, at $z=0$,
\begin{equation}\label{eq:Taylor}
\frac{1}{(1+z)^b} \sim 1-bz,
\end{equation}
hence
\begin{align}\label{eq:2F1_asymp}
_2F_1(b,-\delta;1-\delta;-z) &= 1+ \int_1^\infty \Big(1-\frac{1}{(1+z s^{-1/\delta})^b} \Big)\dd s \nonumber \\
&\sim 1+ \int_1^\infty \Big(1-(1- bz s^{-1/\delta}) \Big)\dd s \nonumber \\
&= 1+bz\frac{\delta}{1-\delta}.
\end{align}
\allowdisplaybreaks
When $z\to \infty$, we have
\begin{align}
_2F_1(b,-\delta; 1-\delta; -z) &= 1 + 2\int_0^1 \Big(1-\frac{1}{(1+z r^\alpha)^b}\Big) r^{-3} \dd r \nonumber \\
&= 1+ z^\delta 2\int_0^{z^{\frac{1}{\alpha}}} \Big(1-\frac{1}{(1+r^\alpha)^b}\Big) r^{-3} \dd r \nonumber \\
&\sim z^\delta 2\int_0^{\infty} \Big(1-\frac{1}{(1+r^\alpha)^b}\Big) r^{-3} \dd r \nonumber \\
&\sim z^\delta \int_0^{\infty} \Big(1-\frac{1}{(1+r^{-\frac{1}{\delta}})^b}\Big) \dd r,
\end{align}
where the first step follows from \cite[eq.~(23)]{MHmeta}, the second step from the substitution $z^{\frac{1}{\alpha}}r\to r$, the third step holds since $z\to \infty$, and the last step follows from the substitution $r\to r^{-\frac{1}{2}}$.
\end{IEEEproof}
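Both asymptotics of the lemma are easy to verify numerically. The sketch below (assuming scipy) evaluates $T(b)$ by direct quadrature and checks the two regimes for the illustrative choice $\delta=1/2$, $b=2$; for $\delta=1/2$, $T(1)=1/\sinc\delta=\pi/2$ and $T(2)=(1+\delta)/\sinc\delta=3\pi/4$.

```python
# Numeric check of the small-z and large-z asymptotics of 2F1(b,-d;1-d;-z).
import numpy as np
from scipy.special import hyp2f1
from scipy.integrate import quad

delta, b = 0.5, 2.0

def T(b):
    """T(b) = int_0^inf (1 - (1 + r^{-1/delta})^{-b}) dr, by quadrature."""
    def f(r):
        if r < 1e-8:
            return 1.0  # (1 + r^{-1/delta})^{-b} is negligible here
        return 1.0 - (1.0 + r ** (-1.0 / delta)) ** (-b)
    return quad(f, 0.0, np.inf)[0]

z_small, z_large = 1e-4, 1e8
lhs_small = hyp2f1(b, -delta, 1 - delta, -z_small)   # ~ 1 + b*z*delta/(1-delta)
lhs_large = hyp2f1(b, -delta, 1 - delta, -z_large)   # ~ z^delta * T(b)
```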
\begin{corollary}[Asymptotic SIR gains relative to PPP]
\label{cor:AsymptGain}
Conditioned on the typical user connecting to the $i$-th tier, the asymptotic SIR gains of the $b$-th moment of the meta distribution relative to $M_1$ of the single-tier homogeneous PPP are given by
\begin{equation}\label{eq:AsymptGain}
G_{0,b}^{(i)} = \frac{\sum_{j} \hat{\lambda}_{ij}\hat{P}_{ij}^\delta \hat{B}_{ij}^\delta}{b\sum_j \hat{\lambda}_{ij}\hat{P}_{ij}^\delta \hat{B}_{ij}^{\delta-1}},
\end{equation}
and
\begin{equation}\label{eq:AsymptGainInf}
G_{\infty,b}^{(i)} = \Big(\frac{T(1)}{T(b)}\frac{\sum_{j} \hat{\lambda}_{ij}\hat{P}_{ij}^\delta \hat{B}_{ij}^\delta}{\sum_j \hat{\lambda}_{ij}\hat{P}_{ij}^\delta}\Big)^{\frac{1}{\delta}},
\end{equation}
where $b\in\mathbb{C}$, $\hat\lambda_{ij} = \lambda_j/\lambda_i$, $\hat P_{ij} = P_j/P_i$ and $\hat B_{ij} = B_j/B_i$.
\end{corollary}
\begin{IEEEproof}
To determine $G_{0,b}^{(i)}$, we need to evaluate the limit of $M_{b\mid(i)}(\theta)$ at $\theta\to 0$.
Applying \eqref{eq:Asymp2F1_0} in \eqref{eq:Mb-ith-tier},
\begin{align}\label{eq:Mb_ith_asymp}
M_{b\mid(i)}(\theta) &\sim \frac{\sum_{j} \hat{\lambda}_{ij}\hat{P}_{ij}^\delta \hat{B}_{ij}^\delta}{\sum_{j} \hat{\lambda}_{ij}\hat{P}_{ij}^\delta \hat{B}_{ij}^\delta \Big(1+b\theta\hat{B}_{ij}^{-1} \frac{\delta}{1-\delta} \Big)} \nonumber\\
&=\frac{1}{1+ \frac{\delta}{1-\delta}\frac{\theta}{G_{0,b}^{(i)}}} \nonumber\\
&\sim 1- \frac{\delta}{1-\delta}\frac{\theta}{G_{0,b}^{(i)}}.
\end{align}
Since for the PPP,
\begin{equation}\label{eq:M1_PPP_asymp}
M_{1,{\rm PPP}}(\theta) = \frac{1}{_2F_1(1,-\delta; 1-\delta; -\theta)} \sim 1-\frac{\theta\delta}{1-\delta},
\end{equation}
it is clear that $G_{0,b}^{(i)}$ is exactly the asymptotic gain for $\theta\to 0$.
To determine $G_{\infty,b}^{(i)}$, applying \eqref{eq:Asymp2F1_Inf} in \eqref{eq:Mb-ith-tier}, we have
\begin{equation}\label{eq:Mb_ith_asympinf}
M_{b\mid(i)}(\theta) \sim \bigg(\frac{\sum_{j} \hat{\lambda}_{ij}\hat{P}_{ij}^\delta}{\sum_{j} \hat{\lambda}_{ij}\hat{P}_{ij}^\delta \hat{B}_{ij}^\delta}T(b)\theta^\delta\bigg)^{-1}.
\end{equation}
$G_{\infty,b}^{(i)}$ is then obtained by comparing \eqref{eq:Mb_ith_asympinf} and \eqref{eq:M1_PPP_asymp}.
\end{IEEEproof}
\begin{remark}
In Cor.~\ref{cor:AsymptGain}, the reference model is the first moment of the conditional success probability of the single-tier PPP. Alternatively, the $b$-th moment of the conditional success probability of the single-tier PPP can serve as the reference; in that case, the variable $b$ in \eqref{eq:AsymptGain} and \eqref{eq:AsymptGainInf} vanishes, and the two asymptotic gains become constants. From this it is easy to infer that the variances $V^{(i)}(\theta)$ of the tiers are also shifted versions of each other, as shown in \figref{fig:HorizonShiftVar_B2_10} in Sec.~\ref{sec:Apps}.
\end{remark}
\subsection{Base Station Activity}\label{sec:BSactivity}
In this section, we model the random activity of the interfering base stations in each tier by the ALOHA model, \ie, the interfering BSs of tier $i$ are active only with probability $p_i$, independently across base stations. We first derive the general $b$-th moment for the typical user of each individual tier and of the whole network, and then an inner bound on the region of activity probabilities for which the mean local delay remains finite.
\begin{theorem}
\label{thm:Mb-ith-tier_BSActivity}
Given that the typical user connects to the $i$-th tier with the serving BS always being active, and the interfering BSs in tier $j\in[K]$ are active independently with probability $p_j$, the $b$-th moment of the meta distribution can be expressed as
\begin{equation}\label{eq:Mb-ith-tier-BSActivity}
M_{b \mid (i)}(\bm p) = \frac{\sum_{j} \hat{\lambda}_{ij}(\hat{P}_{ij}\hat{B}_{ij})^\delta}{\sum\limits_j \hat{\lambda}_{ij}(\hat{P}_{ij}\hat{B}_{ij})^\delta \big(1-\sum\limits_{k=1}^{\infty}\binom bk (-p_j \theta \hat B_{ij}^{-1})^k \frac{\delta}{k-\delta} ~_2F_1(k,k-\delta; k-\delta+1; -\theta \hat B_{ij}^{-1})\big)},
\end{equation}
where $\bm p=(p_1,p_2,...p_K)$, $\hat\lambda_{ij} = \lambda_j/\lambda_i$, $\hat P_{ij} = P_j/P_i$, and $\hat B_{ij} = B_j/B_i$.
\end{theorem}
\begin{IEEEproof}
See Appendix B.
\end{IEEEproof}
As expected, for $K=1$, \eqref{eq:Mb-ith-tier-BSActivity} retrieves the single-tier result in \cite[Thm.~3]{MHmeta}; also, for $K=2$ with both tiers sharing the same parameters, the result for each tier coincides with the single-tier result.
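This reduction can be confirmed numerically: for a positive integer $b$ the $k$-series is a finite binomial sum, and at $p=1$ the denominator factor for $\hat B_{ij}=1$ (\eg, the $j=i$ term) must reproduce the hypergeometric factor of the always-active case. The sketch below assumes scipy, with $\delta=1/2$ and $\theta=1$ as illustrative values.

```python
# Check: the ALOHA series factor at p = 1 equals 2F1(b,-delta;1-delta;-theta).
from math import comb
from scipy.special import hyp2f1

theta, delta = 1.0, 0.5

def aloha_factor(b, p):
    """1 - sum_k C(b,k)(-p*theta)^k * delta/(k-delta) * 2F1(k,k-delta;k-delta+1;-theta)."""
    s = sum(comb(b, k) * (-p * theta) ** k * delta / (k - delta) *
            hyp2f1(k, k - delta, k - delta + 1, -theta)
            for k in range(1, b + 1))
    return 1.0 - s
```

For $p<1$ the factor is smaller, reflecting the milder interference under thinning.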
\begin{theorem}
\label{thm:Mb-typ_BSActivity}
For the overall typical (active) user, with the interfering BSs in tier $j\in[K]$ active independently with probability $p_j$, the $b$-th moment of the meta distribution can be expressed as
\begin{equation}\label{eq:Mb-typ-BSActivity}
M_{b}(\bm p) = \sum\limits_i \frac{1}{\sum\limits_j \hat{\lambda}_{ij}(\hat{P}_{ij}\hat{B}_{ij})^\delta \big(1-\sum\limits_{k=1}^{\infty}\binom bk (-p_j \theta \hat B_{ij}^{-1})^k \frac{\delta}{k-\delta} ~_2F_1(k,k-\delta; k-\delta+1; -\theta \hat B_{ij}^{-1})\big)},
\end{equation}
where $\bm p=(p_1,p_2,...p_K)$, $\hat\lambda_{ij} = \lambda_j/\lambda_i$, $\hat P_{ij} = P_j/P_i$, and $\hat B_{ij} = B_j/B_i$.
\end{theorem}
From \eqref{eq:Mb-ith-tier-BSActivity}, the mean local delay of the typical user connecting to the $i$-th tier is given by
\begin{equation}\label{eq:MLDp}
M_{-1\mid(i)}(\bm p) = \frac{1}{D_i(\bm p)},~~{\bm p}\in \mathcal{S}_i,
\end{equation}
where
\begin{align}\label{eq:Di}
D_i(\bm p) &= 1-\frac{p_i\theta\delta}{1-\delta}\:_2F_1(1,1-\delta;2-\delta;-\theta(1-p_i)) \nonumber\\
&~~~~+\sum\limits_{j\neq i}\frac{\lambda_j}{\lambda_i}\Big(\frac{P_jB_j}{P_iB_i}\Big)^\delta\Big(1-\frac{p_j\theta\delta}{1-\delta}\:_2F_1(1,1-\delta;2-\delta;-\theta B_iB_j^{-1}(1-p_j))\Big),
\end{align}
and $\mathcal{S}_i$ is the region for $\bm p$ in which the mean local delay is finite for the $i$-th tier, defined by
\begin{equation}\label{eq:FiniteRegion}
\mathcal{S}_i\triangleq\{(p_1,p_2,...p_K)\in[0,1]^K:D_i(\bm p)>0\} .
\end{equation}
The boundary of the region for the finite mean local delay for the $i$-th tier is then defined as
\begin{equation}\label{eq:FiniteBoundary}
\partial\mathcal{S}_i\triangleq\{(p_1,p_2,...p_K)\in[0,1]^K:D_i(\bm p)=0\} .
\end{equation}
The region of all tiers is then given by the intersection
\begin{equation}\label{eq:FiniteRegionAll}
\mathcal{S}\triangleq\bigcap\limits_{i\in[K]}\mathcal{S}_i.
\end{equation}
A simple but reasonable inference from \eqref{eq:MLDp} and \eqref{eq:Di} is the following: for small $\bm p$, the mean local delay is finite, since the interference is low and most users in each tier have a high conditional success probability. As $\bm p$ grows, the interference becomes more severe, and once $\bm p$ crosses a critical threshold, $D_i(\bm p)$ reaches zero, resulting in an infinite mean local delay.
It is hard to characterize $\mathcal{S}_i$ exactly; next, we provide an inner bound $\mathcal{\check S}_i\subseteq\mathcal{S}_i$ to shed light on the effect of the base station activity probabilities $\bm{p}$. Noticing that
\begin{align}
_2F_1(1,1-\delta;2-\delta;-z) &\eqa (1+z)^{-1} \:_2F_1\Big(1,1;2-\delta;\frac{z}{1+z}\Big) \nonumber\\
&\eqb (1+z)^{-1} \sum\limits_{m=0}^{\infty} \frac{(1)_m (1)_m}{(2-\delta)_m}\frac{u^m}{m!} \nonumber\\
&= (1+z)^{-1} \sum\limits_{m=0}^{\infty} \frac{(1)_m}{(2-\delta)_m} u^m \nonumber\\
&< (1+z)^{-1}\Big(1+ \frac{1}{2-\delta}u +\frac{1}{2-\delta}u^2 + ... \Big) \nonumber\\
&= (1+z)^{-1}\Big(1+\frac{1}{2-\delta}\frac{u}{1-u}\Big) \nonumber\\
&= (1+z)^{-1}\Big(1+\frac{1}{2-\delta}z\Big),
\end{align}
where (a) follows from the Pfaff transformation, (b) from the series form of the Gauss hypergeometric function with $u\triangleq \frac{z}{1+z}$, and $(q)_m\equiv\frac{\Gamma(q+m)}{\Gamma(q)}$ is the Pochhammer symbol (rising factorial); the inequality holds since $\frac{(1)_m}{(2-\delta)_m}\leq\frac{1}{2-\delta}$ for $m\geq 1$. The boundary of the inner bound is then given by
\begin{equation}\label{eq:FiniteInnerBound}
\partial\mathcal{\check S}_i=\{(p_1,p_2,...p_K)\in[0,1]^K:\check D_i(\bm p)=0\},
\end{equation}
where
\begin{equation}\label{eq:checkDfun}
\check D_i(\bm p) = \sum\limits_j\frac{\lambda_j}{\lambda_i}\Big(\frac{P_jB_j}{P_iB_i}\Big)^\delta \bigg(1-\frac{p_j\theta\delta}{1-\delta}\Big(1+\theta\frac{B_i}{B_j}(1-p_j)\Big)^{-1}\Big(1+\frac{\theta B_i(1-p_j)}{(2-\delta)B_j}\Big)\bigg).
\end{equation}
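That $\check D_i$ indeed lower-bounds $D_i$ (so that $\check D_i(\bm p)>0$ implies a finite mean local delay) can be verified numerically on a grid. The sketch assumes scipy, with illustrative two-tier parameters; note that $\theta=1$ with $\delta=1/2$ sits exactly at the boundary $\theta=(1-\delta)/\delta$, so $D_i$ vanishes at $\bm p=(1,1)$.

```python
# Grid check: check-D(p) <= D(p), confirming the inner-bound direction.
import numpy as np
from scipy.special import hyp2f1

delta, theta = 0.5, 1.0
lams, powers, biases = (1.0, 5.0), (1.0, 0.2), (1.0, 2.0)

def D(i, p):
    tot = 0.0
    for j in range(2):
        w = (lams[j] / lams[i]) * ((powers[j] * biases[j]) /
                                   (powers[i] * biases[i])) ** delta
        z = theta * (biases[i] / biases[j]) * (1.0 - p[j])
        tot += w * (1.0 - p[j] * theta * delta / (1 - delta) *
                    hyp2f1(1, 1 - delta, 2 - delta, -z))
    return tot

def D_check(i, p):
    tot = 0.0
    for j in range(2):
        w = (lams[j] / lams[i]) * ((powers[j] * biases[j]) /
                                   (powers[i] * biases[i])) ** delta
        z = theta * (biases[i] / biases[j]) * (1.0 - p[j])
        tot += w * (1.0 - p[j] * theta * delta / (1 - delta) *
                    (1.0 + z / (2 - delta)) / (1.0 + z))  # upper bound on 2F1
    return tot

grid = np.linspace(0.0, 1.0, 11)
gap_ok = all(D_check(i, (p1, p2)) <= D(i, (p1, p2)) + 1e-9
             for i in range(2) for p1 in grid for p2 in grid)
```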
\section{Applications in Two-tier HCNs}\label{sec:Apps}
In this section, we apply the meta distribution framework developed in Sec.~\ref{sec:MainResults} to the two-tier HIP model and show the corresponding numerical results. Since the performance depends only on the ratios of the densities, transmit powers, and biases of the two tiers, we set $P_1=\lambda_1=B_1=1$ without loss of generality.
\subsection{Moments}
Defining $f_b(x)\triangleq \:_2F_1(b,-\delta; 1-\delta; -x)$, we obtain the first moment and variance for each tier from Cor.~\ref{thm:Mb-ith-tier},
\begin{equation}\label{eq:M1tier1}
M_{1\mid (1)} = \frac{1+\lambda_2 (P_2 B_2)^\delta}{f_1(\theta)+\lambda_2 (P_2 B_2)^\delta~ f_1(\theta B_2^{-1})},
\end{equation}
\begin{equation}\label{eq:M1tier2}
M_{1\mid (2)} = \frac{1+\lambda_2^{-1} (P_2 B_2)^{-\delta}}{f_1(\theta)+\lambda_2^{-1} (P_2 B_2)^{-\delta}~ f_1(\theta B_2)},
\end{equation}
\begin{equation}\label{eq:V1}
V_{(1)} = \frac{1+\lambda_2 (P_2 B_2)^\delta}{f_2(\theta)+\lambda_2 (P_2 B_2)^\delta~ f_2(\theta B_2^{-1})}-\Big(\frac{1+\lambda_2 (P_2 B_2)^\delta}{f_1(\theta)+\lambda_2 (P_2 B_2)^\delta~ f_1(\theta B_2^{-1})}\Big)^2,
\end{equation}
\begin{equation}\label{eq:V2}
V_{(2)} = \frac{1+\lambda_2^{-1} (P_2 B_2)^{-\delta}}{f_2(\theta)+\lambda_2^{-1} (P_2 B_2)^{-\delta}~ f_2(\theta B_2)}-\Big(\frac{1+\lambda_2^{-1} (P_2 B_2)^{-\delta}}{f_1(\theta)+\lambda_2^{-1} (P_2 B_2)^{-\delta}~ f_1(\theta B_2)}\Big)^2.
\end{equation}
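The per-tier expressions above are straightforward to evaluate; the sketch below (assuming scipy, with illustrative parameters $\alpha=4$, $\theta=1$, $\lambda_2=5$, $P_2=0.2$) verifies the no-bias symmetry between the tiers and the ordering $M_{1\mid(1)}>M_{1\mid(2)}$ for $B_2>1$.

```python
# Two-tier first moments and variances, with f_b(x) = 2F1(b,-delta;1-delta;-x).
import numpy as np
from scipy.special import hyp2f1

delta, theta, lam2, P2 = 0.5, 1.0, 5.0, 0.2

def f(b, x):
    return hyp2f1(b, -delta, 1 - delta, -x)

def tier_moments(B2):
    """Returns (M_{1|(1)}, M_{1|(2)}, V_{(1)}, V_{(2)}) per eqs. (M1tier1)-(V2)."""
    c1 = lam2 * (P2 * B2) ** delta
    c2 = 1.0 / c1
    M1_1 = (1 + c1) / (f(1, theta) + c1 * f(1, theta / B2))
    M1_2 = (1 + c2) / (f(1, theta) + c2 * f(1, theta * B2))
    V1 = (1 + c1) / (f(2, theta) + c1 * f(2, theta / B2)) - M1_1 ** 2
    V2 = (1 + c2) / (f(2, theta) + c2 * f(2, theta * B2)) - M1_2 ** 2
    return M1_1, M1_2, V1, V2
```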
\begin{figure} [!t]
\begin{center}
\includegraphics[width=0.95\figwidth]{Figs/M1vsTheta.eps}
\caption{$M_1$ of the typical user in each tier versus $\theta$ with $\alpha=4$, $P_2=0.2$ and $\lambda_2=5$. In this case, for $B_2=1$, $p_{\rm a}^{(1)}=0.5$ and $p_{\rm a}^{(2)}=0.5$; for $B_2=0.1$, $p_{\rm a}^{(1)}=0.59$ and $p_{\rm a}^{(2)}=0.41$; for $B_2=10$, $p_{\rm a}^{(1)}=0.12$ and $p_{\rm a}^{(2)}=0.88$.}
\label{fig:M1vsTheta}
\end{center}
\end{figure}
\begin{figure} [!t]
\begin{center}
\includegraphics[width=0.95\figwidth]{Figs/VarvsTheta.eps}
\caption{$V$ of the typical user in each tier versus $\theta$ with $\alpha=4$, $P_2=0.2$ and $\lambda_2=5$. In this case, for $B_2=1$, $p_{\rm a}^{(1)}=0.5$ and $p_{\rm a}^{(2)}=0.5$; for $B_2=0.1$, $p_{\rm a}^{(1)}=0.59$ and $p_{\rm a}^{(2)}=0.41$; for $B_2=10$, $p_{\rm a}^{(1)}=0.12$ and $p_{\rm a}^{(2)}=0.88$.}
\label{fig:VarvsTheta}
\end{center}
\end{figure}
\figref{fig:M1vsTheta} and \figref{fig:VarvsTheta} show $M_1$ and $V$ of each tier in a two-tier HCN. If there is no bias (\ie, $B_1=B_2=1$), the curves of $M_1$ and $V$ of the two tiers coincide, which implies that the two tiers have the same SIR statistics despite their different densities and powers. Unequal range expansion biases, however, separate the two tiers in terms of $M_1$ and $V$. Specifically, since biasing means offloading, we conclude that offloading from one tier to the other always benefits $M_1$ of the former while harming that of the latter, for any given $\theta$.
\subsection{Beta Approximations}
In \figref{fig:MetaDistr_theta0dB_B2_10dB}, we see that the beta approximation is also excellent for HCNs with biasing.
\begin{figure} [!t]
\begin{center}
\includegraphics[width=0.95\figwidth]{Figs/MetaDistr_theta0dB_B2_10dB.eps}
\caption{The exact meta distribution for the overall network and for each tier of a two-tier HCN with $\theta=0$ dB, $\alpha=4$, $\lambda_1=1$, $\lambda_2=5$, $P_1=1$, $P_2=0.2$, $B_1=1$ and $B_2=10$. The solid lines correspond to the exact results and the dashed lines are the beta approximations.}
\label{fig:MetaDistr_theta0dB_B2_10dB}
\end{center}
\end{figure}
\subsection{Horizontal Shifting via Asymptotic SIR Gains}
For the two-tier HCN example, the asymptotic SIR gain for $M_1$ of each tier is respectively given by $G_{0,1}^{(1)} = \frac{1+\lambda_2 P_2^\delta B_2^\delta}{1+\lambda_2 P_2^\delta B_2^{\delta-1}}$ and $G_{0,1}^{(2)} = \frac{1+\lambda_2^{-1} P_2^{-\delta} B_2^{-\delta}}{1+\lambda_2^{-1} P_2^{-\delta} B_2^{1-\delta}}$.
Numerically, for the case $B_2=10$ dB shown in \figref{fig:M1vsTheta} and \figref{fig:VarvsTheta}, $G_{0,1}^{(1)} = 6.75$ dB, $G_{0,1}^{(2)} = -3.25$ dB, $G_{0,2}^{(1)} = 3.74$ dB, $G_{0,2}^{(2)} = -6.27$ dB, $G_{\infty,1}^{(1)} = 7.94$ dB, $G_{\infty,1}^{(2)} = -2.06$ dB, $G_{\infty,2}^{(1)} = 4.42$ dB and $G_{\infty,2}^{(2)} = -5.58$ dB. \figref{fig:HorizonShift2_B2_10} compares the exact $b$-th moment curves with the shifted versions of $M_1$ of a single-tier PPP as the reference model, and \figref{fig:HorizonShiftVar_B2_10} compares the exact variance curves with the shifted versions of the variance of a single-tier PPP. The shifted versions obtained via the asymptotic gains are excellent approximations of the exact results.
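These gains follow directly from Cor.~\ref{cor:AsymptGain}; the sketch below (assuming scipy) recomputes them for $\alpha=4$, $P_2=0.2$, $\lambda_2=5$, and $B_2=10$ (linear), with $T(b)$ evaluated by quadrature.

```python
# Asymptotic SIR gains per tier, in dB, per Cor. (AsymptGain)/(AsymptGainInf).
import numpy as np
from scipy.integrate import quad

delta = 0.5
lams, powers, biases = (1.0, 5.0), (1.0, 0.2), (1.0, 10.0)

def T(b):
    def f(r):
        if r < 1e-8:
            return 1.0  # (1 + r^{-1/delta})^{-b} is negligible here
        return 1.0 - (1.0 + r ** (-1.0 / delta)) ** (-b)
    return quad(f, 0.0, np.inf)[0]

def gains_db(i, b):
    """(G_{0,b}^{(i)}, G_{inf,b}^{(i)}) in dB for the i-th tier."""
    lam = [l / lams[i] for l in lams]
    P = [p / powers[i] for p in powers]
    B = [x / biases[i] for x in biases]
    pb = sum(l * (p * x) ** delta for l, p, x in zip(lam, P, B))
    g0 = pb / (b * sum(l * p ** delta * x ** (delta - 1)
                       for l, p, x in zip(lam, P, B)))
    ginf = (T(1) / T(b) * pb /
            sum(l * p ** delta for l, p in zip(lam, P))) ** (1 / delta)
    return 10 * np.log10(g0), 10 * np.log10(ginf)
```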
\begin{figure} [!t]
\begin{center}
\includegraphics[width=0.95\figwidth]{Figs/HorizonShift2_B2_10.eps}
\caption{Illustration for the asymptotic gain of $M_b$ in each tier of a two-tier HCN relative to $M_1$ of a single-tier PPP. In this case, $\alpha=4$, $P_2=0.2$ and $\lambda_2=5$, $B_2=10$ dB. The solid lines correspond to the exact results and the dashed lines are the shifted versions of $M_1$ of the single-tier PPP by $G_{0,b}^{(i)}$ and $G_{\infty,b}^{(i)}$, $i=1,2$, respectively.}
\label{fig:HorizonShift2_B2_10}
\end{center}
\end{figure}
\begin{figure} [!t]
\begin{center}
\includegraphics[width=0.95\figwidth]{Figs/HorizonShiftVar_B2_10.eps}
\caption{Illustration for the asymptotic gain of $V^{(i)}(\theta)$ of a two-tier HCN relative to the variance of a single-tier PPP. In this case, $\alpha=4$, $P_2=0.2$ and $\lambda_2=5$, $B_2=10$ dB. The solid lines correspond to the exact results and the dashed lines are the shifted versions of $V(\theta)$ of the single-tier PPP by the asymptotic gains $G_{0,b}^{(i)}$ and $G_{\infty,b}^{(i)}$, $i=1,2$, respectively.}
\label{fig:HorizonShiftVar_B2_10}
\end{center}
\end{figure}
\subsection{Effects of Biasing}
In this section, we study the effects of range expansion biases on the coverage performance of each individual tier and the whole network.
It is also convenient and insightful to consider the asymptotic regimes of the range expansion biases.
\begin{corollary}\label{cor:AsympB2}
For $B_2\to \infty$, which means that tier 1 is closed-access, we have
\begin{itemize}[\rm (a)]
\item[\rm (a)] $\displaystyle M_{1\mid (1)}\sim 1,~ M_{1\mid (2)}\sim \frac{\lambda_2 P_2^\delta \sinc\delta}{F(\delta,\theta)\lambda_2 P_2^\delta \sinc\delta + \theta^\delta}$; further, for $\displaystyle \theta\to\infty,~ M_{1\mid (2)}\sim \frac{\lambda_2 P_2^\delta \sinc\delta}{\theta^\delta(1+\lambda_2 P_2^\delta)}$;
\vspace{2pt}
\item[\rm (b)] $\displaystyle V_{(1)}\to 0,~ V_{(2)}\sim \frac{1}{F(\delta,\theta)+\frac{3-2\delta}{(2-\delta)\lambda_2 P_2^\delta \sinc\delta}\theta^\delta}-\Big(\frac{\lambda_2 P_2^\delta \sinc\delta}{F(\delta,\theta)\lambda_2 P_2^\delta \sinc\delta + \theta^\delta}\Big)^2$; further, for $\displaystyle \theta\to\infty,~ V_{(2)}\sim \frac{\lambda_2 P_2^\delta \sinc\delta}{\theta^\delta (1+\delta)(1+\lambda_2 P_2^\delta)} - \frac{\lambda_2^2 P_2^{2\delta} \sinc^2\delta}{\theta^{2\delta} (1+\lambda_2 P_2^\delta)^2}$,
\end{itemize}
where $F(\delta,\theta)=\:_2F_1(1,-\delta; 1-\delta; -\theta)$.
\end{corollary}
\begin{IEEEproof}
These results are easily obtained by applying \eqref{eq:Asymp2F1_Inf} in Lem.~\ref{lem:Asymp2F1}, noting that $T(1)=\frac{1}{\sinc\delta}$ and $T(2)=\frac{1+\delta}{\sinc\delta}$, and using the identity $_2F_1(a, b; c; 0)\equiv 1$.
\end{IEEEproof}
\begin{corollary}\label{cor:EffectBiasM1}
$B_2>1 \Leftrightarrow M_{1\mid (1)}>M_{1\mid (2)}$.
\end{corollary}
\begin{IEEEproof}
Since $f_1(x)$ is monotonically increasing, we have $f_1(\theta B_2)>f_1(\theta)>f_1(\theta B_2^{-1})$ for $B_2>1$. Then from \eqref{eq:M1tier1} and \eqref{eq:M1tier2} we obtain $M_{1\mid (1)}>\frac{1}{f_1(\theta)}$ while $M_{1\mid (2)}<\frac{1}{f_1(\theta)}$. The case $B_2<1$ follows by symmetry, and $B_2=1$ yields equality.
\end{IEEEproof}
In words, offloading from one tier to the other will harm the average success probability of the latter tier.
As for the overall typical user, according to Thm.~\ref{thm:Mb_RE}, its first moment and variance of the conditional success probability are, respectively, given by
\begin{equation}
M_1(B_2) = \frac{1}{f_1(\theta)+\lambda_2 (P_2 B_2)^\delta f_1(\theta B_2^{-1})} + \frac{1}{f_1(\theta)+\lambda_2^{-1} (P_2 B_2)^{-\delta}f_1(\theta B_2)},
\end{equation}
\begin{align}
V(B_2) &= \frac{1}{f_2(\theta)+\lambda_2 (P_2 B_2)^\delta f_2(\theta B_2^{-1})} + \frac{1}{f_2(\theta)+\lambda_2^{-1} (P_2 B_2)^{-\delta} f_2(\theta B_2)} \nonumber \\
&~ -\Bigg(\frac{1}{f_1(\theta)+\lambda_2 (P_2 B_2)^\delta f_1(\theta B_2^{-1})} + \frac{1}{f_1(\theta)+\lambda_2^{-1} (P_2 B_2)^{-\delta} f_1(\theta B_2)} \Bigg)^2.
\end{align}
We can prove that $\frac{\partial M_1}{\partial B_2}\big |_{B_2=1} = 0$ and $\frac{\partial V}{\partial B_2}\big |_{B_2=1} = 0$, which means $B_2=1$ is an extreme point. Moreover, $\frac{\partial^2 M_1}{\partial B_2^2}\big |_{B_2=1} \leq 0$, hence $B_2=1$ is a maximum of $M_1$ (see \figref{fig:M1B2}). The sign of the second derivative of $V$ at $B_2=1$ is harder to determine since it depends on $\theta$, but the analytical curves in \figref{fig:VarV2} indicate that $B_2 = 1$ is a local minimum of $V$.
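The extremum at $B_2=1$ is also visible in a direct sweep. The sketch below assumes scipy; $\theta=1$ is an illustrative choice, and the other parameters match the figure ($\alpha=4$, $P_2=0.2$, $\lambda_2=4$).

```python
# Sweep of the overall M_1 over the bias B_2: the curve peaks at B_2 = 1.
import numpy as np
from scipy.special import hyp2f1

delta, theta, lam2, P2 = 0.5, 1.0, 4.0, 0.2

def f(b, x):
    return hyp2f1(b, -delta, 1 - delta, -x)

def M1_overall(B2):
    """Overall first moment M_1(B_2) for the two-tier example."""
    c = lam2 * (P2 * B2) ** delta
    return (1.0 / (f(1, theta) + c * f(1, theta / B2)) +
            1.0 / (f(1, theta) + f(1, theta * B2) / c))

B2_grid = np.logspace(-2, 2, 81)
vals = np.array([M1_overall(B2) for B2 in B2_grid])
```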
\begin{figure} [!t]
\begin{center}
\includegraphics[width=\figwidth]{Figs/M1_B2.eps}
\caption{Analytical results for $M_1$ of the typical user of the entire network versus $B_2$ with $\alpha=4$, $P_2=0.2$ and $\lambda_2=4$.}
\label{fig:M1B2}
\end{center}
\end{figure}
\begin{figure} [!ht]
\begin{center}
\includegraphics[width=\figwidth]{Figs/Var_B2.eps}
\caption{Analytical results for $V$ of the typical user of the entire network versus $B_2$ with $\alpha=4$, $P_2=0.2$ and $\lambda_2=4$.}
\label{fig:VarV2}
\end{center}
\end{figure}
\begin{figure} [!t]
\begin{center}
\includegraphics[width=0.95\figwidth]{Figs/Asym_V_B2_60dB.eps}
\caption{Asymptotic $V$ of the typical user in the pico tier versus $\theta$ with $\alpha=4$, $P_2=0.2$ and $\lambda_2=5$.}
\label{fig:Asym_V2}
\end{center}
\end{figure}
\begin{figure*}[!t]
\begin{minipage}{0.45\linewidth}
\centerline{\includegraphics[width=0.95\figwidth]{Figs/Asym_M1_B2_30dB.eps}}
\caption{Asymptotic $M_1$ of the typical user in the pico tier versus $\theta$ with $\alpha=4$, $P_2=0.2$ and $\lambda_2=5$.}
\label{fig:Asym_M1_B2_30dB}
\end{minipage}
\hfill
\begin{minipage}{0.45\linewidth}
\centerline{\includegraphics[width=0.95\figwidth]{Figs/Asym_V_B2_30dB.eps}}
\caption{Asymptotic $V$ of the typical user in the pico tier versus $\theta$ with $\alpha=4$, $P_2=0.2$ and $\lambda_2=5$.}
\label{fig:Asym_V_B2_30dB}
\end{minipage}
\end{figure*}
Based on the above analysis, for $M_1$ of the overall typical user in a general $K$-tier HCN, we have the following corollary.
\begin{corollary}
For the $K$-tier HIP model, setting all bias terms $B_i$ to the same value (i.e., no biasing) maximizes $M_1(\theta)$ of the overall typical user for all $\theta>0$.
\end{corollary}
\begin{IEEEproof}
For an arbitrary realization of the point process $\Psi$, consider the local-average SIR, which equals \eqref{eq:SIR_typ} without the fading coefficients (see \cite[Eqn.~(11)]{George17}), for all users with $B_i=1$, $i\in[K]$ (no biasing). By definition, this is the best local-average SIR each user can achieve. Consequently, if $B_i\neq 1$ for any tier $i$, there are users whose local-average SIR decreases since they are no longer associated with the strongest-on-average BS. This implies that $M_1$ decreases.
\end{IEEEproof}
\begin{remark}
For a general $K$-tier HCN with range expansion bias $B_i$ in the $i$-th tier, it is not easy to determine whether $B_i>1$ harms the coverage performance of the $i$-th tier in terms of $M_1$ relative to $B_i=1$, since the decisive quantities are the ratios of $B_i$ to the biases of the other tiers, which, in essence, reflect the offloading relationships among the tiers. In particular, $B_i/B_j < 1$ means offloading from the $i$-th tier to the $j$-th tier, and vice versa. Hence, in a two-tier network, if some users of the first tier are offloaded to the second tier, the latter definitely suffers a loss in $M_1$; in a three-tier network, however, if some users of the first tier are offloaded to the second tier while some users of the second tier are in turn offloaded to the third, then $M_1$ of the second tier may improve.
\end{remark}
\subsection{Lower Bounds of the Mean Local Delay with Random BS Activity}
Specifically, for a two-tier HIP model, we have the following corollary.
\begin{corollary}\label{cor:2tierProbBound}
For a two-tier HCN, given all the other parameters,
\begin{enumerate}[(1)]
\item if $B_1=B_2$, then $\mathcal{S}=\mathcal{S}_1 = \mathcal{S}_2$;
\item if $B_i>B_j$, then $\mathcal{S}=\mathcal{S}_j$, $i,j\in\{1,2\}$;
\item if $\theta < \frac{1-\delta}{\delta}$, then $\mathcal{S}=\mathcal{S}_1 = \mathcal{S}_2=[0,1]^2$.
\end{enumerate}
\end{corollary}
\begin{proof}
For a two-tier HCN, we have
\begin{align}\label{eq:D_1st}
D_1(p_1,p_2) &= \underbrace{1-\frac{p_1\theta\delta}{1-\delta} \:_2F_1(1,1-\delta;2-\delta;-\theta(1-p_1))}_{A_1} \nonumber\\
&~~+ \frac{\lambda_2}{\lambda_1}\Big(\frac{P_2B_2}{P_1B_1}\Big)^\delta \bigg(\underbrace{1-\frac{p_2\theta\delta}{1-\delta} \:_2F_1\Big(1,1-\delta;2-\delta;-\theta(1-p_2)\frac{B_1}{B_2}\Big)}_{G_2}\bigg),
\end{align}
\begin{align}\label{eq:D_2nd}
D_2(p_1,p_2) &= \underbrace{1-\frac{p_2\theta\delta}{1-\delta} \:_2F_1(1,1-\delta;2-\delta;-\theta(1-p_2))}_{G_1} \nonumber\\
&~~+ \frac{\lambda_1}{\lambda_2}\Big(\frac{P_1B_1}{P_2B_2}\Big)^\delta \bigg(\underbrace{1-\frac{p_1\theta\delta}{1-\delta} \:_2F_1\Big(1,1-\delta;2-\delta;-\theta(1-p_1)\frac{B_2}{B_1}\Big)}_{A_2}\bigg).
\end{align}
\begin{enumerate}[(1)]
\item For $B_1=B_2$, let $g(x) = 1-\frac{x\theta\delta}{1-\delta} \:_2F_1(1,1-\delta;2-\delta;-\theta(1-x))$ and $c=\frac{\lambda_2}{\lambda_1}\big(\frac{P_2}{P_1}\big)^\delta$. Then $D_1(p_1,p_2) = g(p_1) + cg(p_2)$ and $D_2(p_1,p_2) = \frac{D_1(p_1,p_2)}{c}$. Since $c>0$, $D_1(p_1,p_2)$ and $D_2(p_1,p_2)$ become negative at the same $(p_1,p_2)$. Hence $\mathcal{S}_1$ and $\mathcal{S}_2$ share the same boundary, and thus $\mathcal{S}_1=\mathcal{S}_2$.
\item Without loss of generality, assume $B_2>B_1$. Let $d=\frac{B_2}{B_1}>1$; then $D_1(p_1,p_2) = A_1+cd^\delta G_2$ and $D_2(p_1,p_2) = \frac{A_2+cd^\delta G_1}{cd^\delta}$. Since $_2F_1(1,1-\delta;2-\delta;-z)$ is a monotonically decreasing function of $z$ for $z\geq0$, which is easily verified from its first-order derivative, for given $p_1, p_2$ we have $A_1<A_2$ and $G_1>G_2$. Hence, as $p_1$ and/or $p_2$ increase, $D_1(p_1,p_2)$ decreases to zero first, resulting in $\mathcal{S}_1\subset\mathcal{S}_2$.
\item Let $p_1=p_2=1$, then $\check D_i(1,1) = \big(1+\sum_{j\neq i} \frac{\lambda_j}{\lambda_i} \big(\frac{P_j B_j}{P_i B_i}\big)^\delta\big) \big(1-\theta\frac{\delta}{1-\delta}\big)$, $\check D_i(1,1)>0$ requires $\theta<\frac{1-\delta}{\delta}$.
\end{enumerate}\vspace{-0.8cm}
\end{proof}
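The functions $D_1$ and $D_2$ are directly computable with \texttt{scipy.special.hyp2f1}. The sketch below (our own parameter names: \texttt{lam\_ratio} $=\lambda_2/\lambda_1$, \texttt{PB\_ratio} $=P_2B_2/(P_1B_1)$, \texttt{B\_ratio} $=B_1/B_2$) evaluates \eqref{eq:D_1st} and exhibits the sign change of $D_1(1,1)$ at the case-(3) threshold $\theta=(1-\delta)/\delta$.

```python
from scipy.special import hyp2f1

def D1(p1, p2, theta, delta, lam_ratio, PB_ratio, B_ratio):
    """Evaluate D_1(p_1, p_2) for a two-tier HCN (parameter names are ours)."""
    A1 = 1 - p1 * theta * delta / (1 - delta) \
           * hyp2f1(1, 1 - delta, 2 - delta, -theta * (1 - p1))
    G2 = 1 - p2 * theta * delta / (1 - delta) \
           * hyp2f1(1, 1 - delta, 2 - delta, -theta * (1 - p2) * B_ratio)
    return A1 + lam_ratio * PB_ratio**delta * G2

# With delta = 1/2 (alpha = 4), the case-(3) threshold is theta = (1-delta)/delta = 1;
# D_1(1,1) is positive just below it and negative just above it.
assert D1(1, 1, 0.9, 0.5, 25, 0.05, 0.1) > 0
assert D1(1, 1, 1.1, 0.5, 25, 0.05, 0.1) < 0
```

The numerical values correspond to the setting of \figref{fig:InnerBound}: $\lambda_2/\lambda_1=25$, $P_1/P_2=200$, $B_2/B_1=10$.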
\begin{figure} [!t]
\begin{center}
\includegraphics[width=0.95\figwidth]{Figs/InnerBound2.eps}
\caption{The exact boundary $\partial\mathcal{S}_1$ and its lower bound $\partial\mathcal{\check S}_1$ of a two-tier HCN with $\alpha=4$, $\lambda_2/\lambda_1=25$, $P_1/P_2=200$ and $B_2/B_1=10$. In this case, $\mathcal{S}=\mathcal{S}_1$.}
\label{fig:InnerBound}
\end{center}
\end{figure}
\vspace{0.5cm}
In \figref{fig:InnerBound}, the exact boundary $\partial\mathcal{S}_1$ and its lower bound $\partial\mathcal{\check S}_1$ of a two-tier HCN are shown. As we see, the lower bound becomes tighter as $\theta$ decreases. In this case, according to Cor.~\ref{cor:2tierProbBound}(2), $\mathcal{S}=\mathcal{S}_1$. We also observe that as $\theta$ decreases, $\mathcal{S}$ grows towards $[0,1]^2.$
\section{Conclusions}\label{sec:Conclusion}
In this paper, we developed the SIR meta distribution framework for the analysis of HIP-based $K$-tier HCNs with offloading biases and Rayleigh fading and performed a systematic study of a series of key performance metrics, revealing fine-grained information on the per-user performance. We first derived the $b$-th moment of the conditional success probability for both the entire network and each single tier. Based on the $b$-th moment, the exact meta distribution as well as a simple yet accurate approximation based on the beta distribution was provided. We derived the asymptotic gains and found that, for any specific tier, the $b$-th moment as well as the variance of the conditional success probability is approximately a horizontally shifted version of that in a single-tier PPP, and hence these quantities are horizontally shifted versions of each other across tiers.
Regarding the effect of the offloading biases, we proved that $M_1$ of the whole network is always harmed by any biasing; for HIP-based HCNs with three or more tiers, users of certain tiers benefit while the others suffer, depending on the relative ratios of the biases between different tiers. The effect on the per-tier success probability can be quantified by a horizontal shift of the SIR distribution.
The $b$-th moment of the conditional success probability under independent ALOHA-like random base station activity was also addressed. The region of the activity probabilities in which the mean local delay of each tier remains finite was characterized by a lower bound, which was shown to closely match the exact region.
Overall, the SIR meta distribution framework offers several new and interesting insights into the performance of HCNs, deepening our understanding of such networks and thereby benefiting practical network design and optimization.
\section{Introduction}
Position operators are key but controversial objects in the study of particle
localization in field theory. Newton and Wigner (NW) found position operators
with commuting components and spherically symmetric eigenvectors for massive
particles and for massless particles with spin $0$ and $\frac{1}{2}$
\cite{NewtonWigner}, but their construction failed for photons. Pryce derived
a photon position operator, $\widehat{\mathbf{x}}_{P}$, consistent with the NW
axioms but its components do not commute so it does not have localized
eigenvectors \cite{Pryce}. A photon position operator with commuting
components, $\widehat{\mathbf{x}}$, does exist \cite{Hawton}, but its
eigenvectors are cylindrically symmetrical and it does not transform like a
vector under rotations \cite{HawtonBaylis}.
We show here that the components of $\widehat{\mathbf{x}}$ perpendicular to
the axis of symmetry of its eigenvectors together with the rotation operator
parallel to this axis are a realization of the two dimensional Euclidean
little group ($E\left( 2\right) $) \cite{Weinberg}. The operator
$\widehat{\mathbf{x}}$ does not transform like a vector under rotations and
boosts because an additional term is required to rotate the axis of symmetry
of its eigenvectors. This additional term describes the Berry phase
\cite{Berry} displacement of photon position.
The form of the position eigenvectors used here is flexible enough to
accommodate Newton-Wigner and covariant normalization. For photons the
Heisenberg picture position eigenvectors are
\begin{equation}
c_{\sigma\mathbf{x}}^{\mu}(\mathbf{k})=k^{\alpha}e^{\mathrm{i}\left( \mathbf{k}\cdot\mathbf{x}-kct-\sigma\chi\right) }\frac{e_{\theta}^{\mu}+\mathrm{i}\sigma e_{\phi}^{\mu}}{\sqrt{2}} \label{evec}
\end{equation}
in momentum space spherical polar coordinates where $\sigma$ is helicity,
$\chi$ is the Euler rotation angle about $\mathbf{k}$, and $\mathbf{x}$ is
displacement from the origin. Each choice of $\alpha$ corresponds to a choice
of normalisation for the position eigenvectors. In the NW case for which
$\alpha=1/2$ these eigenvectors are not covariant and they are nonlocal in
configuration space due to the factor $k^{1/2}$. If $\alpha=0$, $c_{\sigma
\mathbf{x}}^{\mu}(\mathbf{k})$ is a four-vector and transformation to
configuration space using the Lorentz invariant measure \textrm{d}$^{3}k/k$ is
a four-vector proportional to the electromagnetic four-potential. The inverse
Fourier transform obtained using the trivial measure is the time derivative of
the vector potential, proportional to the electric field describing this
instantaneously localized position eigenvector. For the definite helicity
transverse modes $\sigma=$ $\pm$, $\mathbf{B}=-\mathrm{i}\sigma\mathbf{E}$ so
the Riemann-Silberstein vector $\mathbf{E}+\mathrm{i}\sigma\mathbf{B}=2\mathbf{E}$ is again an electric field. Details of the normalization of
these position eigenvectors are discussed in \cite{NewtonWigner} and
\cite{HawtonDebierre}, but these details do not affect the expressions derived
in this paper.
The basis of eigenvectors of $\widehat{\mathbf{x}}$ is ideally suited to the
description of optical beams with definite angular momentum (AM) in a fixed
direction. Since orbital AM results in a helical wave front, these beams are
referred to as "twisted light" \cite{Padgett}. It has been observed that total
angular and linear momentum can be transferred from a photon to a particle
trapped in a twisted light beam \cite{ONeil,Zhao}. Focussing of a beam leads
to localization on the axis of symmetry \cite{Zhao} so a basis of localized
states is well suited to the theoretical description of focusing. Berry's
topological phase has been observed using light beams and optical fibers
\cite{Chiao,TomitaChiao,Onoda,Bliokh} as a sideways shift of the beam
centroid. Twisted light beams are currently very topical as they are of
interest as candidates for manipulation of particles, imaging and optical
communications based on violation of local realism \cite{Zeilinger}.
The plan of the paper is as follows: In Section II the Poincar\'{e} and
position operators and their commutation relations are discussed and the AM
and boost operators are separated into intrinsic and extrinsic parts by
writing them in terms of $\widehat{\mathbf{x}}$. In Section III the Wigner
little group algebra is briefly summarized and then extended to include
$\widehat{\mathbf{x}}$ and we prove that the transverse components of
$\widehat{\mathbf{x}}$ together with rotation about its axis of cylindrical
symmetry are a realization of the photon little group. In Section IV
experimental and theoretical work on optical beams is discussed and in Section
V we conclude.
\section{Poincar\'{e} and position operators}
In this Section, after a brief review of the Poincar\'{e} operators, we
introduce $\widehat{\mathbf{x}}$ and its associated Berry phase and then write
the AM and boost operators in terms of position operators.
The Poincar\'{e} group describes the fundamental kinematic symmetry of a
relativistic particle \cite{Stone}. The generators of translations in space
and time, rotations and boosts are the momentum, Hamiltonian, AM and Lorentz
boost operators, $\widehat{\mathbf{P}},$ $\widehat{H},$ $\widehat{\mathbf{J}}
$ and $\widehat{\mathbf{K}}$ respectively. These Poincar\'{e} operators
satisfy the commutation relations $\left[ \widehat{J}_{i},\widehat{J}_{j}\right] =\mathrm{i}\hbar\epsilon_{ijk}\widehat{J}_{k},$ $\left[ \widehat{J}_{i},\widehat{K}_{j}\right] =\mathrm{i}\hbar\epsilon_{ijk}\widehat{K}_{k},$ $\left[ \widehat{K}_{i},\widehat{K}_{j}\right] =-\mathrm{i}\hbar\epsilon_{ijk}\widehat{J}_{k},$ $\left[ \widehat{J}_{i},\widehat{P}_{j}\right] =\mathrm{i}\hbar\epsilon_{ijk}\widehat{P}_{k},$ $\left[ \widehat{K}_{i},\widehat{P}_{j}\right] =\mathrm{i}\hbar\delta_{ij}\widehat{H},$ $\left[ \widehat{K}_{i},\widehat{H}\right] =-\mathrm{i}\hbar\widehat{P}_{i},$ $\left[ \widehat{J}_{i},\widehat{H}\right] =\left[ \widehat{P}_{i},\widehat{H}\right] =\left[ \widehat{P}_{i},\widehat{P}_{j}\right] =0$ for $i=1,2,3$ \cite{Weinberg}. This algebra will next be extended to include position operators.
In $\mathbf{k}$-space $\widehat{\mathbf{P}}=\hbar\mathbf{k}$ and the photon
position operator with commuting components, $\widehat{\mathbf{x}}$, is
related to the spinless nonrelativistic momentum space position operator $\mathrm{i}\mathbf{\partial}_{\mathbf{k}}$ by $\widehat{\mathbf{x}}=k^{\alpha}\widehat{D}\,\mathrm{i}\mathbf{\partial}_{\mathbf{k}}\widehat{D}^{-1}k^{-\alpha}$ where \cite{HawtonBaylis}
\begin{equation}
\widehat{D}=\exp\left( -\mathrm{i}\widehat{\sigma}\chi\right) \exp\left( -\mathrm{i}\widehat{S}_{3}\phi\right) \exp\left( -\mathrm{i}\widehat{S}_{2}\theta\right) . \label{D}
\end{equation}
Here $\partial_{\mathbf{k}}$ is the $\mathbf{k}$-space gradient,
$\widehat{S}_{i}$ are the Cartesian components of the spin operator
$\widehat{\mathbf{S}}$, $\widehat{\sigma}=\mathbf{e}_{\mathbf{k}}\cdot\widehat{\mathbf{S}}$ is the helicity operator, $\theta$ and $\phi$ are
the $\mathbf{k}$-space spherical polar angles, $\chi\left( \theta
,\phi\right) $ is the Euler angle and the $\mathbf{k}$-space spherical polar
unit vectors are $\mathbf{e}_{\theta},$ $\mathbf{e}_{\phi}$ and $\mathbf{e}_{\mathbf{k}}$ as sketched in Fig. 1. The definite helicity transverse unit
vectors, equal to $\widehat{D}\left( \mathbf{e}_{1}+\mathrm{i}\sigma
\mathbf{e}_{2}\right) $, are
\begin{equation}
\mathbf{e}_{\sigma}^{\left( \chi\right) }=\frac{1}{\sqrt{2}}\left( \mathbf{e}_{\theta}+\mathrm{i}\sigma\mathbf{e}_{\phi}\right) \mathrm{e}^{-\mathrm{i}\sigma\chi}. \label{e_chi}
\end{equation}
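As a quick numerical consistency check (our own sketch, with arbitrarily chosen angles), the vectors (\ref{e_chi}) are transverse to $\mathbf{e}_{\mathbf{k}}$ and unit-normalized for any $\chi$:

```python
import numpy as np

theta, phi, chi, sigma = 0.8, 2.1, 0.3, +1   # arbitrary k-space angles, helicity +1

# spherical polar unit vectors in Cartesian components
e_k = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
e_theta = np.array([np.cos(theta)*np.cos(phi), np.cos(theta)*np.sin(phi), -np.sin(theta)])
e_phi = np.array([-np.sin(phi), np.cos(phi), 0.0])

# definite-helicity transverse unit vector of Eq. (e_chi)
e_sigma = (e_theta + 1j*sigma*e_phi) / np.sqrt(2) * np.exp(-1j*sigma*chi)

assert abs(np.dot(e_k, e_sigma)) < 1e-12            # transverse: e_k . e_sigma = 0
assert abs(np.vdot(e_sigma, e_sigma) - 1) < 1e-12   # unit norm
```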
The position operator with commuting components is \cite{Hawton,HawtonBaylis}
\begin{equation}
\widehat{\mathbf{x}}=\mathrm{i}\mathbf{\partial}_{\mathbf{k}}-\mathrm{i}\alpha\frac{\mathbf{k}}{k^{2}}+\frac{1}{k^{2}}\mathbf{k\times}\widehat{\mathbf{S}}-\widehat{\sigma}\mathbf{a}\left( \theta,\phi\right) \label{x}
\end{equation}
where $k=\left\vert \mathbf{k}\right\vert $, $\alpha=\frac{1}{2}$ for the NW basis and
\begin{equation}
\mathbf{a}=\frac{\cos\theta}{k\sin\theta}\mathbf{e}_{\phi}+\mathbf{\partial}_{\mathbf{k}}\chi. \label{a}
\end{equation}
Inspection of Fig. 1 shows that rotation about $\mathbf{k}$ does not change
$\theta$ or $\phi$. The Euler angle $\chi\left( \theta,\phi\right) $ is
defined as a general rotation about $\mathbf{k}$. Any possible transverse
basis is the set of eigenvectors of (\ref{x}) for some $\chi\left(
\theta,\phi\right) $. Since experiments are often performed on optical beams
with definite angular momentum, the case $\chi=-m\phi$ for which the position
eigenvectors have intrinsic AM $\hbar m\sigma$ in some arbitrary but fixed
direction is of special interest. For this choice of $\chi$ (\ref{a}) becomes
\begin{equation}
\mathbf{a}^{\left( m\right) }=\frac{\cos\theta-m}{k\sin\theta}\mathbf{e}_{\phi}. \label{am}
\end{equation}
It is known that
\begin{equation}
\widehat{\sigma}a_{i}^{\left( m\right) }=\mathrm{i}\mathbf{e}_{\sigma}^{\ast}\cdot\mathbf{\partial}_{\mathbf{k}_{i}}\mathbf{e}_{\sigma} \label{Onoda}
\end{equation}
is a Berry connection with curvature $\mathbf{\partial}_{\mathbf{k}}\times\widehat{\sigma}\mathbf{a}^{\left( m\right) }=-\widehat{\sigma}\mathbf{e}_{\mathbf{k}}/k^{2}$ \cite{Onoda,HawtonBaylis,Bliokh3}. For parallel
transport generated by the rotation $\mathrm{d}\bm{\xi}$, we have
$\mathrm{d}\mathbf{k}=\mathrm{d}\bm{\xi}\times\mathbf{k}$, and hence
\begin{equation}
\left( \mathbf{a}^{\left( m\right) }\mathbf{\times k}\right) \cdot\mathrm{d}\bm{\xi}=-\mathbf{a}^{\left( m\right) }\cdot\mathrm{d}\mathbf{k}, \label{parallel}
\end{equation}
so the Berry phase shift is $\sigma\Omega$ where the $m$-independent solid
angle subtended by a loop of the photon's path is \cite{Chiao,HawtonBaylis}
\begin{equation}
\Omega={\displaystyle\oint}\mathbf{a}^{\left( m\right) }\cdot\mathrm{d}\mathbf{k}=2\pi\left( 1-\cos\theta\right) . \label{loop}
\end{equation}
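This can be checked numerically. In the sketch below (our own construction: unit $\left\vert \mathbf{k}\right\vert $, the loop of constant $\theta$ traversed in the $+\phi$ sense) the line integral of $\mathbf{a}^{\left( m\right) }$ is $m$-independent modulo $2\pi$ and reproduces $\Omega=2\pi\left( 1-\cos\theta\right) $ modulo $2\pi$, up to the orientation convention:

```python
import numpy as np

def berry_loop(m, theta, n=20000):
    """Line integral of a^(m) around a circle of constant polar angle theta
    on the unit k-sphere, traversed in the +phi direction."""
    phi = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    dphi = 2*np.pi / n
    e_phi = np.stack([-np.sin(phi), np.cos(phi), np.zeros_like(phi)])
    a = (np.cos(theta) - m) / np.sin(theta) * e_phi   # a^(m), with k = 1
    dk = np.sin(theta) * dphi * e_phi                 # dk along the loop
    return np.sum(np.einsum('ip,ip->p', a, dk))

theta = 1.0
I0, I3 = berry_loop(0, theta), berry_loop(3, theta)
assert abs((I0 - I3) - 3 * 2*np.pi) < 1e-8            # m-independent modulo 2*pi
Omega = 2*np.pi * (1 - np.cos(theta))
assert abs((-I0) % (2*np.pi) - Omega) < 1e-8          # solid angle modulo 2*pi
```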
The position operator with commuting components, $\widehat{\mathbf{x}}$, will
be emphasized here but the properties of the Pryce operator which does not
have commuting components will also be discussed because it is this operator
that is commonly used. The Pryce operator is
\begin{equation}
\widehat{\mathbf{x}}_{P}=\mathrm{i}\mathbf{\partial}_{\mathbf{k}}-\frac{\mathrm{i}}{2}\frac{\mathbf{k}}{k^{2}}+\frac{1}{k^{2}}\mathbf{k\times}\widehat{\mathbf{S}} \label{Pryce}
\end{equation}
where, from (\ref{x})
\begin{equation}
\widehat{\mathbf{x}}=\widehat{\mathbf{x}}_{P}-\widehat{\sigma}\mathbf{a}. \label{xP}
\end{equation}
Since $\left[ \widehat{x}_{Pi},\widehat{x}_{Pj}\right] =-\mathrm{i}\widehat{\sigma}\epsilon_{ijk}k_{k}/k^{3},$ which can be written as $\widehat{\mathbf{x}}_{P}\times\widehat{\mathbf{x}}_{P}=-\mathrm{i}\widehat{\sigma}\mathbf{k}/k^{3}$, it is straightforward to verify that the position operator (\ref{x}) does indeed have commuting components: $\widehat{\mathbf{x}}\times\widehat{\mathbf{x}}=\widehat{\mathbf{x}}_{P}\times\widehat{\mathbf{x}}_{P}-\widehat{\sigma}\left( \mathrm{i}\mathbf{\partial}_{\mathbf{k}}\mathbf{\times a}\right) =\mathbf{0}$.
In addition to having commuting components, the position operator (\ref{x})
commutes with the helicity operator and satisfies the usual momentum-position
commutation relations. Using $\widehat{\mathbf{P}}=\hbar\mathbf{k}$ and
$\widehat{H}=\hbar ck$, we write
\begin{align}
\left[ \widehat{x}_{i},\widehat{x}_{j}\right] & =0,\ \left[
\widehat{x}_{i},k_{j}\right] =\mathrm{i}\delta_{ij},\nonumber\\
\left[ \widehat{x}_{i},k\right] & =\mathrm{i}\frac{k_{i}}{k},\ \left[
\widehat{x}_{i},\widehat{\sigma}\right] =0. \label{commutation}
\end{align}
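The commutator $\left[ \widehat{x}_{i},k\right] =\mathrm{i}k_{i}/k$ comes entirely from the $\mathrm{i}\mathbf{\partial}_{\mathbf{k}}$ term in (\ref{x}), since the remaining terms are multiplication operators that commute with $k$. A symbolic sketch of this step (our own check):

```python
import sympy as sp

kx, ky, kz = sp.symbols('k_x k_y k_z', real=True, positive=True)
k = sp.sqrt(kx**2 + ky**2 + kz**2)
f = sp.Function('f')(kx, ky, kz)          # arbitrary test function

# [i d/dk_x, k] f = i d/dk_x (k f) - k i d/dk_x f = i (k_x / k) f
lhs = sp.I*sp.diff(k*f, kx) - k*sp.I*sp.diff(f, kx)
assert sp.simplify(sp.expand(lhs - sp.I*kx/k*f)) == 0
```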
In the Heisenberg picture where \textrm{d}$\widehat{O}/\mathrm{d}t=\left[
\widehat{O},\widehat{H}\right] /\mathrm{i}\hbar$, the momentum space photon
velocity operator is thus
\begin{equation}
\dot{\widehat{\mathbf{x}}}=c\mathbf{e}_{\mathbf{k}}. \label{v}
\end{equation}
The Foldy representation \cite{Foldy} of the Poincar\'{e} AM and boost
operators is $\widehat{\mathbf{J}}=\mathrm{i}\hbar\mathbf{\partial}_{\mathbf{k}}\times\mathbf{k+}\widehat{\mathbf{S}}$ and $\widehat{\mathbf{K}}=\frac{1}{2}\mathrm{i}\hbar\left( k\mathbf{\partial}_{\mathbf{k}}+\mathbf{\partial}_{\mathbf{k}}k\right) +\hbar\mathbf{e}_{\mathbf{k}}\mathbf{\times}\widehat{\mathbf{S}}$. In terms of the Pryce position operator, $\widehat{\mathbf{J}}=\hbar\widehat{\mathbf{x}}_{P}\times\mathbf{k}+\widehat{\sigma}\hbar\mathbf{e}_{\mathbf{k}}$ and $\widehat{\mathbf{K}}=\frac{1}{2}\hbar\left( k\widehat{\mathbf{x}}_{P}+\widehat{\mathbf{x}}_{P}k\right) $. In terms of $\widehat{\mathbf{x}}$, whose eigenvectors (\ref{e_chi}) are localized, these operators are partitioned into intrinsic and extrinsic parts \cite{HawtonBaylis}. For the AM operator
\begin{align}
\widehat{\mathbf{J}} & =\hbar\widehat{\mathbf{x}}\times\mathbf{k}+\widehat{\mathbf{J}}^{\left( 0,\mathbf{a}\right) },\label{J}\\
\widehat{\mathbf{J}}^{\left( 0,\mathbf{a}\right) } & =\widehat{\sigma}\hbar\left( \mathbf{a\times k+e}_{\mathbf{k}}\right) \label{Ja}
\end{align}
where $\widehat{\mathbf{J}}^{\left( 0,\mathbf{a}\right) }$ and
$\hbar\widehat{\mathbf{x}}\times\mathbf{k}$ are its intrinsic and extrinsic
parts. The superscript $\left( 0,\mathbf{a}\right) $ refers to the position
eigenvector at the origin for a particular choice of $\mathbf{a}$. The boost
operator is
\begin{align}
\widehat{\mathbf{K}} & =\frac{\hbar}{2}\left( k\widehat{\mathbf{x}}+\widehat{\mathbf{x}}k\right) +\widehat{\mathbf{K}}^{\left( 0,\mathbf{a}\right) },\label{K}\\
\widehat{\mathbf{K}}^{\left( 0,\mathbf{a}\right) } & =\widehat{\sigma}\hbar k\mathbf{a}. \label{Ka}
\end{align}
Poincar\'{e} transformations are generated by the unitary operator
\cite{Weinberg}
\begin{equation}
\widehat{U}\left( \bm{\xi},\bm{\beta},\mathbf{x},t\right) =\exp\left[ \frac{\mathrm{i}}{\hbar}\left( \widehat{\mathbf{J}}\cdot\bm{\xi}\mathbf{-}\widehat{\mathbf{K}}\cdot\bm{\beta}\right) +\mathrm{i}\left( \widehat{H}t\mathbf{-}\widehat{\mathbf{P}}\cdot\mathbf{x}\right) /\hbar\right] \label{U}
\end{equation}
in which $\bm{\xi}$ is the rotation angle, $\bm{\beta}=\mathbf{v}/c$,
$\mathbf{x}$ a spatial displacement and $t$ is time. The infinitesimal change
in an operator $\widehat{O}$ due to the unitary transformation $\widehat{U}^{\dagger}\widehat{O}\widehat{U}$ for $\widehat{U}\left( \mathrm{d}\bm{\xi},\mathrm{d}\bm{\beta},\mathrm{d}\mathbf{x},\mathrm{d}t\right) $ is
then
\begin{align}
\mathrm{d}\widehat{O} & =-\frac{\mathrm{i}}{\hbar}\left\{ \mathrm{d}\bm{\xi}\cdot\left[ \widehat{\mathbf{J}},\widehat{O}\right] -\mathrm{d}\bm{\beta}\cdot\left[ \widehat{\mathbf{K}},\widehat{O}\right] \right\} \label{Uinf}\\
& +\frac{\mathrm{i}}{\hbar}\left\{ \mathrm{d}t\left[ \widehat{H},\widehat{O}\right] -\mathrm{d}\mathbf{x\cdot}\left[ \widehat{\mathbf{P}},\widehat{O}\right] \right\} .\nonumber
\end{align}
\section{Little group and Wigner translations}
In this Section the properties of the Wigner little group of massless
particles will first be summarized and then their relationship to the position
operators $\widehat{\mathbf{x}}_{P}$ and $\widehat{\mathbf{x}}$ will be
discussed. We will prove that the operators $\left\{ \widehat{x}_{1},\widehat{x}_{2},\widehat{J}_{3}\right\} $ are a realization of the
photon little group. Finally, rotation of the axis of symmetry of the basis
will be investigated.
The Wigner little group operators for a specific four-momentum $k^{\mu}$ are
defined by $L_{\nu}^{\mu}k^{\nu}=k^{\mu}$. For a zero mass particle, it is
common, for convenience, to take $\mathbf{k}$ parallel to the $3$-axis,
$k^{\mu}=\left( k,0,0,k\right) $, so that the little group operators read
$\widehat{L}=\left\{ \widehat{L}_{1},\widehat{L}_{2},\widehat{J}_{3}\right\}
$, $\widehat{L}_{1}=\widehat{J}_{2}+\widehat{K}_{1}$ and $\widehat{L}_{2}=-\widehat{J}_{1}+\widehat{K}_{2}$ \cite{Weinberg,Debierre}. In
$\widehat{L}_{1}$ and $\widehat{L}_{2}$ the component of $\widehat{\mathbf{J}}$ needed to compensate for the boost is rotated about $\mathbf{e}_{3}$
relative to $\widehat{\mathbf{K}}$ from $\mathbf{e}_{1}$ to $-\mathbf{e}_{2}$
or from $\mathbf{e}_{2}$ to $\mathbf{e}_{1}$, that is it lags
$\widehat{\mathbf{K}}$ by $\pi/2$. These operators are an $e\left( 2\right)
$ subalgebra of the Poincar\'{e} group that satisfy the commutation relations
$\left[ \widehat{L}_{1},\widehat{L}_{2}\right] =0,\ \left[ \widehat{J}_{3},\widehat{L}_{1}\right] =\mathrm{i}\hbar\widehat{L}_{2},$ $\left[ \widehat{J}_{3},\widehat{L}_{2}\right] =-\mathrm{i}\hbar\widehat{L}_{1}$.
Since $\widehat{L}_{1}$ and $\widehat{L}_{2}$ commute, they can be simultaneously diagonalized, and their linear combinations would have a continuum of eigenvalues that is not observed; therefore, their common eigenvalue must be $0$. The operators $\widehat{L}_{1}$ and $\widehat{L}_{2}$ generate gauge
transformations \cite{Weinberg,Kim}.
Photons in a twisted light beam have definite total AM in some fixed direction
that here will be called $\mathbf{e}_{3}$. With $\chi=-m\phi$ the $\mathbf{k}$-space unit vectors given by (\ref{e_chi}) become
\begin{equation}
\mathbf{e}_{\sigma}^{\left( m\right) }\left( \mathbf{k}\right) =\frac{1}{\sqrt{2}}\left( \mathbf{e}_{\theta}+\mathrm{i}\sigma\mathbf{e}_{\phi}\right) e^{\mathrm{i}\sigma m\phi}. \label{e1}
\end{equation}
Position eigenvectors at arbitrary $\mathbf{x}$ and $t$ can be obtained by
applying the unitary transformation $U^{\dag}$ given by (\ref{U}) to the
position eigenvectors at the origin, $k^{\alpha}\mathbf{e}_{\sigma}^{\left(
m\right) }\left( \mathbf{k}\right) $, to obtain
\begin{equation}
\mathbf{c}_{\sigma\mathbf{x}}^{\left( m\right) }\left( \mathbf{k}\right)
=k^{\alpha}\mathbf{e}_{\sigma}^{\left( m\right) }\left( \mathbf{k}\right)
\exp\left[ \mathrm{i}\left( \mathbf{k}\cdot\mathbf{x}-\omega t\right)
\right] \label{ex}
\end{equation}
consistent with $c_{\sigma\mathbf{x}}^{\mu}=\left( c_{\sigma\mathbf{x}}^{0},\mathbf{c}_{\sigma\mathbf{x}}\right) $ in (\ref{evec}). Experiments are
usually performed on optical beams for which focusing leads to photon
localization in two dimensions. In these beams photon density is independent
of $x_{3}$ and $t$. Using $\int_{-\infty}^{\infty}\mathrm{d}z\exp\left[ \mathrm{i}\left( k_{z}-k_{z_{0}}\right) z\right] /2\pi=\delta\left( k_{z}-k_{z_{0}}\right) $ the integral of (\ref{e1}) over $z$ gives
\begin{equation}
\mathbf{e}_{\sigma\bot}^{\left( m\right) }\left( \mathbf{k}_{\bot}\right) =\mathbf{e}_{\sigma}^{\left( m\right) }\left( \mathbf{k}_{\bot},k_{z_{0}}\right) \exp\left[ \mathrm{i}\left( \mathbf{k}_{\bot}\cdot\mathbf{x}_{\bot}-k_{z_{0}}z-\omega t\right) \right] \label{eperp}
\end{equation}
with $\omega=c\sqrt{\mathbf{k}_{\bot}^{2}+k_{z_{0}}^{2}}$. This is a good basis
for the description of optical beams.
To describe rotations about fixed axes, $\mathbf{a}^{\left( m\right) }$ will
be written in terms of the Cartesian unit vectors $\mathbf{e}_{1}$,
$\mathbf{e}_{2}$ and $\mathbf{e}_{3}$ sketched in Fig. 1.
\begin{figure}[ptb]
\centering
\includegraphics[natheight=3.261200in, natwidth=2.355700in, height=3.2612in, width=2.3557in]{C:/Users/user/Dropbox/graphics/Fig1__1.pdf}
\caption{Spherical, cylindrical and Cartesian coordinates.}
\end{figure}
In $\mathbf{k}$-space
\begin{equation}
\mathbf{e}_{\mathbf{k}}=\cos\theta\mathbf{e}_{3}+\sin\theta\mathbf{e}_{\kappa},\ \mathbf{e}_{\kappa}=\cos\phi\mathbf{e}_{1}+\sin\phi\mathbf{e}_{2}.\label{cylindrical}
\end{equation}
Substitution of (\ref{cylindrical}) in (\ref{Ja}) gives the $\mathbf{e}_{3}$
component of the intrinsic part of the AM operator as
\begin{equation}
\widehat{J}_{3}^{\left( 0,-m\phi\right) }=\widehat{\sigma}m\hbar. \label{J3}
\end{equation}
Thus the localized states have intrinsic AM $\hbar\widehat{\sigma}m\mathbf{e}_{3}$.
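The value (\ref{J3}) can be verified componentwise: with $\mathbf{a}^{\left( m\right) }$ from (\ref{am}), $\left( \mathbf{a}^{\left( m\right) }\times\mathbf{k}+\mathbf{e}_{\mathbf{k}}\right) \cdot\mathbf{e}_{3}=m$ for any $\theta$ and $\phi$. A numerical sketch (our own, with $k=1$, in units of $\widehat{\sigma}\hbar$):

```python
import numpy as np

def intrinsic_J3(theta, phi, m, k=1.0):
    """(a^(m) x k + e_k) . e_3 in units of sigma*hbar; should equal m."""
    e_k = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
    e_phi = np.array([-np.sin(phi), np.cos(phi), 0.0])
    a_m = (np.cos(theta) - m) / (k * np.sin(theta)) * e_phi   # Eq. (am)
    return (np.cross(a_m, k * e_k) + e_k)[2]

assert abs(intrinsic_J3(0.7, 1.3, m=2) - 2.0) < 1e-12
assert abs(intrinsic_J3(1.1, 0.4, m=5) - 5.0) < 1e-12
```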
While the momentum, angular momentum and boost operators transform like three
dimensional vectors, for $\widehat{\mathbf{x}}$ the operator algebra based on (\ref{commutation}) and (\ref{J}) to (\ref{Ka}) gives
\begin{align}
\left[ \widehat{J}_{i},\widehat{x}_{j}\right] & =\mathrm{i}\hbar\epsilon_{ijk}\widehat{x}_{k}-\mathrm{i}\partial_{k_{j}}\widehat{J}_{i}^{\left( 0,\mathbf{a}\right) },\label{Jx}\\
\left[ \widehat{K}_{i},\widehat{x}_{j}\right] & =-\mathrm{i}\frac{\hbar}{2}\left( \frac{k_{j}}{k}\widehat{x}_{i}+\widehat{x}_{i}\frac{k_{j}}{k}\right) -\mathrm{i}\partial_{k_{j}}\widehat{K}_{i}^{\left( 0,\mathbf{a}\right) }. \label{Kx}
\end{align}
Since $\widehat{J}_{3}^{\left( 0,-m\phi\right) }=\widehat{\sigma}m\hbar$
given by (\ref{J3}) does not depend on $\mathbf{k}$ in (\ref{Jx}) and the
components of $\widehat{\mathbf{x}}$ commute, in the basis $\chi=-m\phi$
\begin{align}
\left[ \widehat{x}_{1},\widehat{x}_{2}\right] & =0,\label{xx}\\
\left[ \widehat{J}_{3},\widehat{x}_{1}\right] & =\mathrm{i}\hbar
\widehat{x}_{2},\label{Jx1}\\
\left[ \widehat{J}_{3},\widehat{x}_{2}\right] & =-\mathrm{i}\hbar\widehat{x}_{1}. \label{Jx2}
\end{align}
Thus $\left\{ \widehat{x}_{1},\widehat{x}_{2},\widehat{J}_{3}\right\} $ is a
realization of the two dimensional Euclidean $e\left( 2\right) $ algebra
that effects genuine infinitesimal transformations in configuration space.
\emph{This is the primary result of this paper.}
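The structure constants can be cross-checked with the standard $3\times3$ matrix realization of $e\left( 2\right) $ acting on the plane in homogeneous coordinates, absorbing the factors of $\mathrm{i}\hbar$ into the generators (our own sketch):

```python
import numpy as np

# Generators of e(2): rotation J3 about the origin and translations X1, X2,
# acting on homogeneous coordinates (x, y, 1).
J3 = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 0.]])
X1 = np.array([[0., 0., 1.], [0., 0., 0.], [0., 0., 0.]])
X2 = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])

comm = lambda A, B: A @ B - B @ A

assert np.allclose(comm(X1, X2), 0)      # translations commute, cf. Eq. (xx)
assert np.allclose(comm(J3, X1), X2)     # cf. [J_3, x_1] = i hbar x_2, Eq. (Jx1)
assert np.allclose(comm(J3, X2), -X1)    # cf. [J_3, x_2] = -i hbar x_1, Eq. (Jx2)
```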
The Poincar\'{e}, little group and Pryce position operators are discussed in
\cite{Stone}. The commutators
\begin{align}
\left[ \widehat{J}_{i},\widehat{x}_{Pj}\right] & =\mathrm{i}\hbar\epsilon_{ijk}\widehat{x}_{Pk},\label{JxP}\\
\left[ \widehat{K}_{i},\widehat{x}_{Pj}\right] & =-\mathrm{i}\frac{\hbar}{2}\left( \frac{k_{j}}{k}\widehat{x}_{Pi}+\widehat{x}_{Pi}\frac{k_{j}}{k}\right) -\mathrm{i}\widehat{\sigma}\hbar\epsilon_{ijk}\frac{k_{k}}{k^{2}}. \label{KxP}
\end{align}
are equivalent to (4) and (10) in \cite{Stone}. The position operator
$\widehat{\mathbf{x}}$ whose components commute has the additional features
that it has an axis of symmetry and localized eigenvectors.
To simplify (\ref{Jx}) and (\ref{Kx}) and obtain their physical interpretation
we will write them in vector form and assume constant $\mathrm{d}\bm{\xi}$,
$\mathrm{d}\bm{\beta}$, \textrm{d}$t$ and \textrm{d}$\mathbf{x}=0$. We return
to the general case since $\mathbf{a}\left( \theta,\phi\right) $ is needed
to describe a basis with axis of symmetry not parallel to $\mathbf{e}_{3}$. A
nonzero commutator between $\widehat{\mathbf{x}}$ and $\widehat{\mathbf{J}}$,
$\widehat{\mathbf{K}}$ or $\widehat{H}$ implies an infinitesimal change in the
position operator. For a rotation through the angle \textrm{d}$\bm{\xi}$ and a
velocity change $c$\textrm{d}$\bm{\beta},$ (\ref{Uinf}) gives $\mathrm{d}\widehat{x}_{j}^{\xi}=-\left( \mathrm{i}/\hbar\right) \mathrm{d}\bm{\xi}\cdot\left[ \widehat{\mathbf{J}},\widehat{x}_{j}\right] $ and $\mathrm{d}\widehat{x}_{j}^{\beta}=\left( \mathrm{i}/\hbar\right) \mathrm{d}\bm{\beta}\cdot\left[ \widehat{\mathbf{K}},\widehat{x}_{j}\right] $. The corresponding changes in the position operator are
\begin{align}
\mathrm{d}\widehat{\mathbf{x}}^{\xi} & =\mathrm{d}\bm{\xi}\mathbf{\times
}\widehat{\mathbf{x}}-\partial_{\mathbf{k}}\left( \mathrm{d}\bm{\xi}\cdot
\widehat{\mathbf{J}}^{\left( 0,\mathbf{a}\right) }\right) ,\label{dxeta}\\
\mathrm{d}\widehat{\mathbf{x}}^{\beta} & =\frac{\mathbf{k}}{k}\mathrm{d}\bm{\beta}\cdot\widehat{\mathbf{x}}-\partial_{\mathbf{k}}\left(
-\mathrm{d}\bm{\beta}\cdot\widehat{\mathbf{K}}^{\left( 0,\mathbf{a}\right) }\right) . \label{dxbeta}
\end{align}
The first term on the right hand side of (\ref{dxbeta}) arises because the
energy and position operators do not commute. This corresponds to the
$\mathrm{d}t$ term of (\ref{Uinf}) and is a feature of the quantum mechanics
of both massive and massless particles. The terms in round brackets are
$\widehat{\sigma}$ multiplied by a change in the Euler angle $\chi\left(
\theta,\phi\right) $ where, in (\ref{dxeta}),
\begin{equation}
\widehat{\sigma}\mathrm{d}\chi^{\xi}=\frac{1}{\hbar}\mathrm{d}\bm{\xi}\cdot
\widehat{\mathbf{J}}^{\left( 0,\mathbf{a}\right) }\left( \theta
,\phi\right) . \label{delta}
\end{equation}
Rotation about an axis in the $12$-plane will change the axis of symmetry of
the basis. For rotation through an angle $\mathrm{d}\theta$ about a fixed axis
that makes an angle $\varphi$ with the $\mathbf{e}_{1}$ axis,
$\mathrm{d}\bm{\xi}=-\mathrm{d}\theta\left( \cos\varphi\mathbf{e}_{1}+\sin\varphi\mathbf{e}_{2}\right) =-\mathrm{d}\theta\mathbf{e}_{\mathbf{\kappa}_{\varphi}}$.
A boost described by $\mathrm{d}\bm{\beta}=\mathrm{d}\theta\left( -\sin\varphi\mathbf{e}_{1}+\cos\varphi\mathbf{e}_{2}\right) =\mathrm{d}\theta\mathbf{e}_{\varphi}$ leads $\mathrm{d}\bm{\xi}$
by $\pi/2$ so that $\mathrm{d}\chi^{\xi}=-\mathrm{d}\chi^{\beta}=ka^{\left( m\right) }\left( \theta\right) \cos\left( \phi-\varphi\right) $ and the
change in Euler angle introduced by the boost cancels that due to the
rotation. For a finite rotation about $\mathbf{e}_{\mathbf{\kappa}_{\varphi}}$
this Euler angle can be integrated over $\theta$ to give $\Delta\chi^{\xi
}=\left[ \int_{\theta_{i}}^{\theta_{f}}a^{\left( m\right) }\left(
\theta\right) \mathrm{d}\theta\right] \cos\left( \phi-\varphi\right) $.
Since, according to (\ref{x}) and (\ref{a}), $\widehat{\mathbf{x}}$ includes a
term $-\widehat{\sigma}\partial_{\mathbf{k}}\chi\left( \theta,\phi\right) $
it follows that
\begin{align}
\mathrm{d}\widehat{\mathbf{x}}^{\xi}-\mathrm{d}\bm{\xi}\times
\widehat{\mathbf{x}} & =\mathrm{d}\widehat{\mathbf{x}}^{\beta}-\mathbf{e}_{\mathbf{k}}\mathrm{d}\bm{\beta}\cdot\widehat{\mathbf{x}}\nonumber\\
& =-\widehat{\sigma}\mathrm{d}\theta\left[ \mathbf{e}_{\theta}\frac{\partial
a^{\left( m\right) }\left( \theta\right) }{\partial\theta}\cos\left(
\phi-\varphi\right) \right. \nonumber\\
& \left. +\mathbf{e}_{\phi}\frac{a^{\left( m\right) }\left(
\theta\right) }{\sin\theta}\sin\left( \phi-\varphi\right) \right] .
\label{dx}
\end{align}
The position operator $\widehat{\mathbf{x}}$ describes the center of AM, while
the Pryce operator $\widehat{\mathbf{x}}_{P}=\widehat{\mathbf{x}}+\widehat{\sigma}\mathbf{a}^{\left( 1\right) }$ implies orbital AM relative
to this center. When applied to the position eigenvector at the origin,
$\widehat{\mathbf{x}}_{P}\mathbf{e}_{\sigma}^{\left( m\right) }=\left(
\widehat{\mathbf{x}}+\widehat{\sigma}\mathbf{a}^{\left( m\right) }\right)
\mathbf{e}_{\sigma}^{\left( m\right) }=\sigma\mathbf{a}^{\left( m\right)
}\mathbf{e}_{\sigma}^{\left( m\right) }$ so the orbital AM is $\sigma
\hbar\mathbf{a}^{\left( m\right) }\times\mathbf{k}$. The position operator
$\widehat{\mathbf{x}}$ obeys the commutation relations (\ref{Jx}) and
(\ref{Kx}), while for $\widehat{\mathbf{x}}_{P}$ (\ref{JxP}) and (\ref{KxP})
are satisfied. These commutation relations are similar except that (\ref{Jx})
and (\ref{Kx}) contain a term that rotates the axis of symmetry, while
(\ref{KxP}) contains a term $-\mathrm{i}\widehat{\sigma}\hbar\epsilon
_{ijk}k_{k}/k^{2}$ not present in (\ref{Kx}) due to noncommutativity of the
components of the Pryce position operator. The extra term in (\ref{KxP}) is
equivalent to the second term on the right hand side of (10) and the right
hand side of (13) in \cite{Stone}. From (\ref{xP}) and the
$\widehat{\mathbf{x}}_{P}$ commutation relation following it, $\left[
\widehat{x}_{i},\widehat{x}_{j}\right] =\left[ \widehat{x}_{Pi},\widehat{x}_{Pj}\right] -\widehat{\sigma}\left( \left[ \widehat{x}_{i},a_{j}^{\left( m\right) }\right] +\left[ a_{i}^{\left( m\right) },\widehat{x}_{j}\right] \right) =\mathrm{i}\widehat{\sigma}\mathbf{\partial}_{\mathbf{k}}\mathbf{\times}\mathbf{a}^{\left( m\right) }-\mathrm{i}\widehat{\sigma}\mathbf{k}/k^{3}=0$, so in (\ref{Kx}) this `Wigner' term is absorbed into
$\widehat{\mathbf{x}}$, which has commuting components.
\section{Optical beams}
In this Section the application of the position eigenvectors to optical beams
is discussed in the context of the recent experimental and theoretical
literature. We consider the relationship of the configuration space basis
to the transfer of linear and angular momentum to a particle, focusing, phase
shifts in optical fibers, and optical communications.
The linear and angular momentum of a photon can be transferred to a particle.
It is observed that the optical intensity in a high-order Bessel beam is
independent of $t$ and $x_{3}$ and its transverse profile is a series of
bright rings. A small particle trapped in a bright ring of such a beam
simultaneously spins on its axis and orbits the beam centroid \cite{ONeil}. A
photon in this beam has transverse wave vector $\mathbf{k}_{\perp}=k_{\perp
}\mathbf{e}_{\phi},$ a radial position vector $\mathbf{x}_{\perp}$ pointing
outward from the beam axis and extrinsic orbital AM $l\hbar\mathbf{e}_{3}$
\cite{Padgett}. If it is absorbed, its linear momentum $\hbar\mathbf{k
_{\perp}$ and total AM $\left( \sigma+l\right) \hbar\mathbf{e}_{3}$ will be
transferred to the particle causing it to spin on its axis and orbit the beam
axis \cite{Zhao}. At a fundamental level it is total angular and linear
momentum that is conserved \cite{CT}.
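This momentum bookkeeping admits a rough numerical illustration (the beam parameters below are hypothetical, not taken from the cited experiments): if a particle absorbs every photon of a beam of power $P$ and angular frequency $\omega$, each photon delivers total AM $\left( \sigma+l\right) \hbar$ and energy $\hbar\omega$, so the torque is $\left( \sigma+l\right) \hbar\,P/\left( \hbar\omega\right) =\left( \sigma+l\right) P/\omega$.

```python
import math

C = 299792458.0  # speed of light, m/s

# Torque on a fully absorbing particle in a beam of power `power` (W) and
# vacuum wavelength `lam` (m), with each photon carrying total angular
# momentum (sigma + l) * hbar; hbar cancels against the photon energy.
def absorbed_torque(power, lam, sigma, l):
    omega = 2.0 * math.pi * C / lam
    return (sigma + l) * power / omega

# Hypothetical numbers: a 1 mW, 1064 nm beam with sigma = 1, l = 3.
tau = absorbed_torque(1e-3, 1064e-9, 1, 3)
```

The torque scales linearly with the total AM per photon, consistent with the spinning-plus-orbiting motion observed in \cite{ONeil,Zhao}.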
The eigenvectors of $\widehat{\mathbf{x}}$ are an idealization of an
ultrashort pulse focused at $\mathbf{x}$ for an instant. Experiments are
usually performed on optical beams for which focusing leads to photon
localization in only two dimensions. Eq. (\ref{eperp}) is a good basis for
description of focusing of a beam. A CP beam with incident center wave vector
$\mathbf{k}_{i}=k_{i}\mathbf{e}_{3}$ and final wave vector $\mathbf{k}_{f}$ is
focused to the point $\left( 0,0,x_{3}\right) $ as sketched in Fig. 2
\begin{figure}[ptb]
\centering
\includegraphics[height=2.8331in,width=2.2736in]{Fig2__2.pdf}
\caption{Focusing of a light beam.}
\end{figure}
Since refraction by the lens conserves the component of AM parallel to its
axis of symmetry, the total AM per photon at the focal point is still $\sigma
m\hbar\mathbf{e}_{3}$. This conversion has been observed: focusing of a beam
carrying spin AM can induce orbital AM which drives the orbital motion of
micron-sized metal particles \cite{Zhao}.
In an optical fiber photon position is limited by the diameter of the fiber
and the longitudinal component of momentum is determined by its orientation. A
right-handed CP beam cycling around a closed circuit in $\mathbf{k}$-space
acquires a Berry phase shift relative to a left-handed CP beam of
$2\Omega=4\pi\left( 1-\cos\theta\right) $ per loop as predicted in
\cite{Chiao}, confirmed experimentally in \cite{TomitaChiao}, and given here
by (\ref{loop}). It was predicted in \cite{vanEnk} that electrons and photons
experience a universal geometric phase shift even in a straight waveguide that
can be described in perturbation theory by spin-orbit coupling. This effect has
recently been observed in dispersion-tailored straight few-mode fibers
\cite{Raymer} where $x_{3}$ was varied by cutting the fiber. The linear
polarization was found to rotate with $x_{3}$ at a rate proportional to the
spin-orbit coupling strength. In these experiments, photons in the input beam
have extrinsic orbital AM $\hbar l_{3}$.
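The geometry behind the helical-fiber phase can be made concrete (with illustrative numbers; the cone angles below are hypothetical): a wave vector tracing a cone of half-angle $\theta$ in $\mathbf{k}$-space subtends solid angle $\Omega=2\pi\left( 1-\cos\theta\right) $ per loop, giving the relative phase $2\Omega$ between the two CP states.

```python
import math

# Relative Berry phase per k-space loop between right- and left-handed CP
# photons, 2*Omega = 4*pi*(1 - cos(theta)), for a wave vector tracing a
# cone of half-angle theta (e.g. light guided along a uniform helix).
def relative_berry_phase(theta):
    return 4.0 * math.pi * (1.0 - math.cos(theta))

# A nearly straight fiber (theta -> 0) accumulates no phase per loop; a
# planar loop (theta = pi/2) gives the maximal value 4*pi.
```

Tomita and Chiao's fiber experiment \cite{TomitaChiao} verified exactly this solid-angle dependence.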
Twisted photons can be used to encode information beyond one bit per single
photon \cite{Zeilinger}. Secure communication requires entangled photons and
entanglement is the most mysterious property of quantum particles. Tests of
quantum mechanics are often performed on photons so the controversy regarding
photon localization is relevant to many experiments that have great potential
for the performance of quantum tasks.
\section{Conclusion}
Photons are the most important but also the most problematic neutral bosons.
They are important because they are the subject of many experiments, including
some intended as tests of quantum mechanics itself. They are controversial
because most theorists believe that there is no acceptable photon position
operator with commuting components that would lead to a basis of position
eigenvectors. But such an operator does in fact exist and we show here that
its properties are a consequence of the symmetry of the photon little group.
The extra term in its commutation relations with the rotation and boost
operators describes rotation of the axis of symmetry of its eigenvectors and
the observed Berry phase shift.
We have proved in Section III that $\left\{ \widehat{x}_{1},\widehat{x}_{2},\widehat{J}_{3}\right\} $ is a realization of the two-dimensional
Euclidean $e\left( 2\right) $ algebra that effects genuine infinitesimal
transformations in configuration space. This answers the question ``What is
$\mathbf{x}$?'' posed by Stone, Dwivedi and Zhou \cite{Stone}: $\mathbf{x}$ is
photon position. While still controversial, this conclusion illuminates the
debate surrounding photon wave mechanics and the localization of light.
\section{Introduction}
The characteristic variety of a coherent $\mathscr{D}$-module with a good filtration is the support of the associated graded module on the cotangent bundle (see \cite{Kasbook}). Characteristic cycles can be obtained with multiplicities taken into account. They can also be considered relative to smooth morphisms (or holomorphic submersions under the analytic setting) and from a logarithmic perspective.
See \S\ref{subsec:relchcc} and \S\ref{sec:logdmodule} for definitions.
In this paper, we study characteristic cycles of relative $\mathscr{D}$-modules associated to (regular) holonomic $\mathscr{D}$-modules. We apply the relative characteristic cycles to studying the logarithmic characteristic cycles for lattices of regular holonomic $\mathscr{D}$-modules and the constructibility of logarithmic de Rham complexes.
\subsection{Constructibility of log de Rham complexes for lattices}
For holonomic $\mathscr{D}$-modules on complex manifolds, Kashiwara's constructibility theorem \cite{Kascons} says that the de Rham complexes of the holonomic modules are constructible (they are indeed perverse, see also \cite[Theorem 4.6.6]{HTT}). Our first two main results are constructibility and perversity of log de Rham complexes of lattices.
Suppose that $(X,D)$ is a pair consisting of a complex manifold together with a reduced normal crossing divisor $D$, called an \emph{analytic smooth log pair}.
Let $\mathscr{D}_{X,D}$ be the sheaf of rings of holomorphic logarithmic differential operators, that is, the sub-sheaf of $\mathscr{D}_X$ consisting of differential operators preserving the defining ideal of $D$. Let $\mathcal{M}$ be a coherent $\mathscr{D}_X$-module. We now consider a $\mathscr{D}_{X,D}$-\emph{lattice} $\bar \mathcal{M}$ of $\mathcal{M}$,
a special $\mathscr{D}_{X,D}$-module associated to $\mathcal{M}$ (see \S\ref{subsec:lattices} for definition). Typical examples of lattices include Deligne lattices (cf. \cite[\S4.4]{WZ}) and lattices given by the graph embedding construction of Malgrange (see \S\ref{sec:gemMal} for details).
\begin{theorem}\label{thm:constrlattice}
Suppose that $(X,D)$ is an analytic smooth log pair and that $\mathcal{M}$ is a holonomic $\mathscr{D}_X$-module. If $\bar\mathcal{M}$ is a $\mathscr{D}_{X,D}$-lattice of $\mathcal{M}$, then the log de Rham complex $\textup{DR}_{X,D}(\bar\mathcal{M})$
is constructible.
\end{theorem}
The above theorem naturally generalizes Kashiwara's constructibility theorem and answers the question at the beginning of \cite{WZ}. The constructible complex $\textup{DR}_{X,D}(\bar\mathcal{M})$ is not perverse in general, and its stratification is determined by the stratification of $\textup{Ch}(\mathcal{M}(*D))$ (see Remark \ref{rmk:stratificationlogDR}).
\begin{theorem}\label{thm:j*j!DR}
In the situation of Theorem \ref{thm:constrlattice}, $\textup{DR}_{X,D}(\bar\mathcal{M}(kD))$ are perverse locally on a relatively compact open subset of $X$ $($or globally when $X$ is algebraic$)$ for all $|k|\gg 0$. Moreover,
if $\mathcal{M}$ is regular holonomic, then locally on a relatively compact open subset of $X$ $($or globally when $X$ is algebraic$)$ we have natural quasi-isomorphisms
\begin{enumerate}
\item $\textup{DR}_{X,D}(\bar\mathcal{M}(kD))\stackrel{q.i.}{\simeq} Rj_*\textup{DR}(\mathcal{M}|_U)$,
\item $\textup{DR}_{X,D}(\bar\mathcal{M}(-kD))\stackrel{q.i.}{\simeq} j_!\textup{DR}(\mathcal{M}|_U)$
\end{enumerate}
for all integral $k\gg 0$, where $j\colon U=X\setminus D\hookrightarrow X$ is the open embedding.
\end{theorem}
Taking the lattice $\bar\mathcal{M}=\mathscr{O}_X$ in Theorem \ref{thm:j*j!DR} (1), we recover the Grothendieck comparison \cite{Grocm},
\[[\mathscr{O}_X\rightarrow \Omega^1_X(\log D)\rightarrow\cdots \rightarrow\Omega^n_X(\log D)]\stackrel{q.i.}{\simeq}Rj_*\C_U[n],\]
where $n$ is the dimension of $X$. See also \cite[Theorem 1.2]{WZ}. Theorem \ref{thm:j*j!DR} for lattices given by the graph embedding construction has been used in studying the cohomology support loci of rank one complex local systems \cite{BVWZ,BVWZ2}.
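The one-dimensional case already makes the comparison transparent (a standard check, written out for concreteness): for $X=\C$ with coordinate $t$ and $D=\{0\}$, the complex reduces to two terms.

```latex
% For X = \C, D = \{0\}, n = 1, the log de Rham complex at the stalk over 0 is
\[
\mathscr{O}_{X,0}\xrightarrow{\ d\ }\Omega^1_{X}(\log D)_0
=\mathscr{O}_{X,0}\cdot\frac{dt}{t},
\qquad
d\Big(\sum_{k\ge 0}a_kt^k\Big)=\Big(\sum_{k\ge 1}k\,a_kt^k\Big)\frac{dt}{t},
\]
% so the kernel of d is \C (the constants) and the cokernel is \C\cdot(dt/t).
% This reproduces the cohomology of Rj_*\C_U on a small disk around 0, i.e.
% H^0 = \C and H^1 = \C of a punctured disk, with H^1 generated by dt/t.
```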
Kashiwara's constructibility theorem has been extended to the Riemann-Hilbert correspondence, in the regular case by Kashiwara and Mebkhout independently (see for instance \cite[\S 7]{HTT}) and in the irregular case by D'Agnolo and Kashiwara \cite{DAK15}. Under the logarithmic setting, Kato and Nakayama \cite{KN99} and Ogus \cite{OglogRH} studied the Riemann-Hilbert correspondence for log connections on smooth log schemes, and Koppensteiner and Talpo \cite{KT} further studied the theory of logarithmic $\mathscr{D}$-modules on smooth log schemes. Koppensteiner \cite{K20} (based on the work of Ogus) augmented the log de Rham complexes to graded complexes on Kato-Nakayama spaces and proved a finiteness result for logarithmic holonomic $\mathscr{D}$-modules. Since smooth log pairs are smooth log schemes, lattices in this paper are special examples of log $\mathscr{D}$-modules in \cite{KT}.
Therefore, one can naturally ask whether Theorem \ref{thm:constrlattice} together with Theorem \ref{thm:j*j!DR} can be enhanced to a Riemann-Hilbert correspondence on smooth log pairs in the logarithmic category.
Notice that our proof of Theorem \ref{thm:constrlattice} is logarithmic in nature, since it depends on the natural stratification of the normal crossing divisor $D$, which gives evidence of the existence of the log Riemann-Hilbert correspondence.
A similar logarithmic Riemann-Hilbert program for log holonomic modules has also been proposed in \cite{KT}. However, lattices are not logarithmic holonomic \cite[Definition 3.22]{KT} in general and hence Theorem \ref{thm:constrlattice} is different from the finiteness result in \cite{K20}. It would be interesting to further understand relations between lattices and logarithmic holonomic modules.
Our proof of Theorem \ref{thm:constrlattice} and Theorem \ref{thm:j*j!DR} depends on ``transforming'' log $\mathscr{D}$-modules to relative $\mathscr{D}$-modules and on the study of relative characteristic cycles. Typical examples of non-trivial relative $\mathscr{D}$-modules arise from the construction of the generalized Kashiwara-Malgrange filtrations. We then discuss relative characteristic cycles associated to Kashiwara-Malgrange filtrations.
\subsection{$V$-filtrations along slopes of smooth complete intersections and their relative characteristic cycles}
Suppose that $X$ is a smooth algebraic variety over $\C$ (or a complex manifold) and that $Y\subseteq X$ is a smooth complete intersection of codimension $r$, that is,
\[Y=\bigcap_{j=1}^r H_j\]
where $\sum_jH_j$ is a normal crossing divisor. Let $\mathscr{D}_X$ be the sheaf of rings of algebraic (or holomorphic) differential operators. Define a $\mathbb{Z}^r$-filtration on $\mathscr{D}_X$ along $Y$ by
\begin{equation} \label{eq:kKMD}
V_{{\bf s}}\mathscr{D}_X=\bigcap_{j=1}^r V_{s_j}^{H_j}\mathscr{D}_X
\end{equation}
for every ${\bf s}=(s_1,s_2,\dots,s_r)\in \mathbb{Z}^r$, where $V_{\bullet}^{H_j}\mathscr{D}_X$ is the Kashiwara-Malgrange filtration on $\mathscr{D}_X$ along $H_j$ (see Definition \ref{def:KMalongY}). Following ideas of Sabbah \cite{Sab}, for a nondegenerate slope $L$ in the dual cone $(\mathbb{Z}_{\ge 0}^r)^\vee$, we define the (generalized) Kashiwara-Malgrange filtration on $\mathscr{D}_X$ along $L$ by
\[^LV_{L({\bf s})}\mathscr{D}_X=\sum_{L({\bf s}')\le L({\bf s})}V_{{\bf s}'}\mathscr{D}_X.\]
This gives a $\mathbb{Z}$-filtration $^LV_{\bullet}\mathscr{D}_X$ via the isomorphism $\mathbb{Z}\simeq\mathbb{Z}^r/L^\perp$.
For a coherent $\mathscr{D}_X$-module $\mathcal{M}$, one can then define the Kashiwara-Malgrange filtration (or $V$-filtration for short) on $\mathcal{M}$ along $L$ (see Definition \ref{def:fltLV}). For degenerate slopes, one can reduce to the nondegenerate ones by ignoring the unrelated $H_j$.
The $V$-filtration of a coherent $\mathscr{D}_X$-module $\mathcal{M}$ along $L$ (if exists) contains the ``deformation'' information of $\mathcal{M}$. More precisely, the Rees ring $^LR_V\mathscr{D}_X$ associated to $^LV_\bullet \mathscr{D}_X$ gives the sheaf of differential operators relative to the (algebraic) normal deformation of $Y$ in $X$ along $L$,
\[\varphi^L:\widetilde X^L\to \mathbb{C},\]
where $\widetilde X^L$ is the ambient space of the deformation. Hence, the Rees module $^LR_V\mathcal{M}$ associated to $^LV_\bullet\mathcal{M}$ is a $\mathscr{D}$-module relative to $\varphi^L$. See \S\ref{subsect:spalongL} for details.
\begin{theorem}[Sabbah]\label{thm:relccL}
Suppose that $\mathcal{M}$ is a holonomic $\mathscr{D}_X$-module and that $Y$ is a smooth complete intersection of codimension $r$. Then $\mathcal{M}$ is specializable along every slope vector $L$ (i.e. the $V$-filtration on $\mathcal{M}$ along $L$ uniquely exists). Moreover, if $\mathcal{M}$ is regular holonomic and the slope vector $L$ is nondegenerate, then
$\textup{gr}^{^LV}_\bullet\mathcal{M}$ gives a regular holonomic $\mathscr{D}$-module on $T_YX$ and we have the following formulas for characteristic cycles,
\[\textup{CC}_{\widetilde X^L/\mathbb{C}}({ }^LR_V\mathcal{M})=\overline{q^{*}_L(\textup{CC}(\mathcal{M}))}\subseteq T^*(\widetilde X^L/\mathbb{C})\]
and
\[\textup{CC}(\widetilde\textup{gr}^{^LV}_\bullet\mathcal{M})=\overline{q^{*}_L(\textup{CC}(\mathcal{M}))}|_{T^*T_YX}\subseteq T^*T_YX,\]
where $q_L:T^*(\widetilde X^L/\mathbb{C})\setminus T^*T_YX\simeq T^*X\times\mathbb{C}^\star\to T^*X$ is the natural projection.
\end{theorem}
One can also consider the Rees ring $R_V\mathscr{D}_X$ associated to the $\mathbb{Z}^r$-filtration $V_\bullet\mathscr{D}_X$. Similar to $^LR_V\mathscr{D}_X$, $R_V\mathscr{D}_X$ can be seen as the sheaf of differential operators relative to the (refined) normal deformation of $Y$ in $X$ (see \S\ref{subsec:refnormde}),
\[\varphi: \widetilde X\to \mathbb{C}^r,\]
with the $\mathbb{Z}^r$-grading on $R_V\mathscr{D}_X$ induced from the natural toric structure on the base $\mathbb{C}^r$. Then $\varphi^L$, as well as $^LR_V\mathscr{D}_X$, is obtained from $\varphi$ and $R_V\mathscr{D}_X$ respectively through the base-change,
\[\iota_L\colon\C \hookrightarrow \C^r \]
induced by the one parameter subgroup of $L$. For a $\mathscr{D}_X$-module $\mathcal{M}$ with a good filtration $U_\bullet\mathcal{M}$ over $V_\bullet\mathscr{D}_X$, the associated Rees module $R_U\mathcal{M}$ is then a coherent relative $\mathscr{D}$-module with respect to $\varphi$.
\begin{theorem}[Sabbah]\label{thm:CCrelRU}
Suppose that $\mathcal{M}$ is a regular holonomic $\mathscr{D}_X$-module and that $Y$ is a smooth complete intersection of codimension $r$. Let $U_\bullet\mathcal{M}$ be a good $\mathbb{Z}^r$-filtration over $V_\bullet\mathscr{D}_X$. Then we have
\[\textup{CC}_{\widetilde X/\mathbb{C}^r}(R_U\mathcal{M})=\overline{q^{*}(\textup{CC}(\mathcal{M}))}\subseteq T^*(\widetilde X/\mathbb{C}^r)\]
where $q:T^*(\widetilde X/\mathbb{C}^r)\setminus (\prod^r_{j=1}u_j=0)\simeq T^*X\times(\mathbb{C}^\star)^r\to T^*X$ is the natural projection and $(u_1,u_2,\dots,u_r)$ are coordinates of $\mathbb{C}^r$.
\end{theorem}
Theorem \ref{thm:relccL} and \ref{thm:CCrelRU} are essentially due to Sabbah (see \cite[\S 2]{Sab2}). Their proofs rely on the study of characteristic cycles for relative $\mathscr{D}$-modules (see \S\ref{sec:vfilreld}). When $L=(\underbrace{1,1,\dots,1}_r)$ (using the standard dual basis of $(\mathbb{Z}^r)^\vee$), the characteristic cycle formula for $\textup{gr}^{^LV}_\bullet\mathcal{M}$ in Theorem \ref{thm:relccL} is due to Ginsburg (\cite[Theorem 5.8]{Gil}). But the precise characteristic cycle formulas in Theorem \ref{thm:relccL} and \ref{thm:CCrelRU} seem to be missing in the literature. It is worth mentioning that Ginsburg uses the characteristic cycle formula in \cite[Theorem 5.8]{Gil} to study index theorems. It would be interesting to know whether the characteristic cycle formulas in Theorem \ref{thm:relccL} are related to index theorems or even Fukaya categories (cf. \cite{NZ}).
\subsection{Relative Riemann-Hilbert correspondence}
We give an explicit relative Riemann-Hilbert correspondence for $^LR_V\mathcal{M}$. We assume that $\mathcal{M}$ is a regular holonomic $\mathscr{D}_X$-module. The relative $\mathscr{D}$-module $^LR_V\mathcal{M}$ has fibers
\[ \label{eq:spdefL}
\mathbf{L} i_{\alpha}^*({ }^LR_V\mathcal{M})\stackrel{q.i.}{\simeq}\mathcal{M} \textup{ for } 0\not=\alpha\in \mathbb{C} \textup{ and } \mathbf{L} i_{0}^*({ }^LR_V\mathcal{M})\stackrel{q.i.}{\simeq}\widetilde\textup{gr}^{^LV}_\bullet\mathcal{M},
\]
where $\stackrel{q.i.}{\simeq}$ denotes quasi-isomorphism
and $ i_\alpha:\widetilde X^L_\alpha=(\varphi^L)^{-1}(\alpha)\hookrightarrow \widetilde X^L$ is the closed embedding. Thus, $^LR_V\mathcal{M}$ deforms $\mathcal{M}$ into $\textup{gr}^{^LV}_\bullet\mathcal{M}$.
By Theorem \ref{thm:relccL}, $^LR_V\mathcal{M}$ provides an example of relative regular holonomic $\mathscr{D}$-modules (see Definition \ref{def:relholo} and \S\ref{subsec:bashchangerelD}). Moreover, using Lemma \ref{lm:cmmpbdr} the relative \emph{de Rham complex},
\[\omega_{\widetilde X^L/\mathbb{C}}\otimes^\mathbf{L}_{\mathscr{D}} { }^LR_V\mathcal{M},\]
has fibers $\textup{DR}(\mathcal{M})$ for $\alpha\not=0$ and the central fiber
$\textup{DR}_{T_YX}(\widetilde\textup{gr}^{^LV}_\bullet\mathcal{M})$,
where $\omega_{\widetilde X^L/\mathbb{C}}$ is the relative canonical sheaf of $\varphi^L$ (see \S \ref{subsect:scrm} and Remark \ref{rmk:gndlw} for explicit formulas).
But the central fiber satisfies (applying Theorem \ref{thm:holnb} and Lemma \ref{lm:Lgrnearby})
\[\label{eq:Lvsp}
\textup{DR}_{T_YX}(\widetilde\textup{gr}^{^LV}_\bullet\mathcal{M})\simeq \psi_{T_YX}(Rj^L_*p^{-1}(\textup{DR}(M))) \eqqcolon \textup{Sp}^L_{T_YX}(\textup{DR}(\mathcal{M}))
\]
where $\textup{Sp}^L_{T_YX}$ is the \emph{Verdier specialization} along $L$ by definition, $p: \widetilde X^L\setminus T_YX\to X$ is the natural projection and $j^L:\widetilde X^L\setminus T_YX\hookrightarrow \widetilde X^L$ is the open embedding. Thus, the relative de Rham complex
deforms $\textup{DR}(\mathcal{M})$ into its Verdier specialization along $L$. In particular, the relative de Rham complex gives a relative constructible complex (cf. \cite{FS17}).
In general, a relative regular Riemann-Hilbert correspondence for relative regular holonomic $\mathscr{D}$-modules over curves is established in \cite{FFS19}. However, the relative holonomicity in \emph{loc. cit.} is more restrictive. Motivated by the above example, one might ask whether the relative Riemann-Hilbert correspondence over curves can be extended to this setting by using the relative holonomicity in Definition \ref{def:relholo}.
In contrast to $^LR_V\mathcal{M}$, $R_U\mathcal{M}$ is not necessarily relatively holonomic; the main obstruction is that $R_U\mathcal{M}$ is only torsion-free but not necessarily flat over $\C^r$. Consequently, one cannot ``normalize'' $U_\bullet\mathcal{M}$ into a $\mathbb{Z}^r$-indexed $V$-filtration in general. See Remark \ref{rmk:nabmkm} for more discussions.
\begin{prop}\label{prop:flatrelhol}
In the situation of Theorem \ref{thm:CCrelRU}, if $R_U\mathcal{M}$ is flat over $\C^r$, then $R_U\mathcal{M}$ is relative holonomic.
\end{prop}
\subsection{Logarithmic characteristic cycles of lattices}
If $D=\sum_{j=1}^r H_j$ is a normal crossing divisor, then
\[\mathscr{D}_{X,D}=V_{\vec0}\mathscr{D}_X\]
where the latter is defined in Eq.\eqref{eq:kKMD}.
This means that if $U_\bullet(\mathcal{M}(*D))$ is a good filtration over the $\mathbb{Z}^r$-filtration $V_\bullet\mathscr{D}_X$, then each filtrant $U_{{\bf s}}\mathcal{M}(*D)$ is a lattice of $\mathcal{M}$. This is an easy relation between log $\mathscr{D}$-modules and relative $\mathscr{D}$-modules. See \S\ref{subsec:logtorel} for a more subtle relation between them. Our next main result is an alternative proof of Ginsburg's log characteristic cycle formula based on the latter relation.
\begin{theorem}[Ginsburg]\label{thm:CClogC}
Suppose that $(X,D)$ is an analytic smooth log pair and that $\mathcal{M}$ is a regular holonomic $\mathscr{D}_X$-module. If $\bar\mathcal{M}$ is a $\mathscr{D}_{X,D}$-lattice of $\mathcal{M}$, then
\[\textup{CC}_{\log}(\bar \mathcal{M})=\overline{\textup{CC}}^{\log}(\mathcal{M}|_{U}),\]
where $\overline{\textup{CC}}^{\log}(\mathcal{M}|_U)$ denotes the closure of $\textup{CC}(\mathcal{M}_U)\subseteq T^*U$ inside the logarithmic cotangent bundle $T^*(X,D)$.
\end{theorem}
Ginsburg's proof of the above theorem under the algebraic setting in \cite[Appendix A]{Gil1} uses microlocalization of $\mathscr{D}$-modules and resolution of singularities. Our proof under the analytic setting relies on converting log $\mathscr{D}$-modules to relative $\mathscr{D}$-modules, with some ideas due to Sabbah and Brian\c{c}on-Maisonobe-Merle.
Theorem \ref{thm:CClogC} has a long history. A characteristic variety formula of the holonomic system for $\prod_{l=1}^N(f_l+\sqrt{-1}0)^{\lambda_l}$ first appears in \cite{KK79}. For the lattice $\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot m)$ given by the graph embedding construction (see \S\ref{sec:gemMal}), the formula for characteristic varieties is proved by Brian\c{c}on-Maisonobe-Merle \cite[Th\'eor\`eme 2.1]{BMM} and the characteristic cycle formula in this case is obtained in \cite{BVWZ2}. Ginsburg \cite{Gil1} made it in its most general form as in Theorem \ref{thm:CClogC} under the algebraic setting. See \cite{KasBf,Gil,Gil1,BVWZ2,Mai,Maihyp} for applications of Theorem \ref{thm:CClogC}. See also \cite{Callog,Cal20} for related applications.
In a follow-up paper, Theorem \ref{thm:CClogC} in its general form is used to obtain the conclusion that the zero loci of Bernstein-Sato ideals for regular holonomic $\mathscr{D}$-modules in general are always of codimension-one \cite[Theorem 3.11]{WuRHA}. This conclusion in turn plays an important role in the establishment of the Riemann-Hilbert correspondence for Alexander complexes (see \cite[\S 3]{WuRHA}).
The following example shows that regularity in both Theorem \ref{thm:CClogC} and Theorem \ref{thm:relccL} is needed.
\begin{example}
We consider $\mathcal{M}=\C[t,1/t]\cdot e^{1/t}$, the algebraic irregular holonomic $\mathscr{D}_X$-module generated by the function $e^{1/t}$ for $X=\C$, where $t$ is the coordinate of the complex plane $\C$. Let $H$ be the divisor $\{0\}\subseteq X$. Since
$$t\partial_t\cdot e^{1/t}=-e^{1/t}/t,$$
$\mathcal{M}$ is coherent over $\mathscr{D}_{X,H}$. Hence, $\mathcal{M}$ is a $\mathscr{D}_{X,H}$-lattice of itself and its $V$-filtration along $H$ is the trivial filtration with $\textup{gr}^V_\bullet\mathcal{M}=0.$
Moreover, since
$$(1+t^2\partial_t)\cdot e^{1/t}=0,$$
one can see that $\textup{Ch}_{\log}(\mathcal{M})$ has a component over $H$.
\end{example}
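The two operator identities used in this example can be sanity-checked numerically (a finite-difference sketch; $t=0.5$ is an arbitrary sample point away from the irregular singularity at $0$):

```python
import math

# Check t * d/dt e^{1/t} = -e^{1/t} / t and (1 + t^2 d/dt) e^{1/t} = 0
# at a sample point, using a central difference for d/dt.
def f(t):
    return math.exp(1.0 / t)

def df(t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2.0 * h)

t = 0.5
assert abs(t * df(t) + f(t) / t) < 1e-3   # t d_t e^{1/t} = -e^{1/t}/t
assert abs(f(t) + t**2 * df(t)) < 1e-3    # (1 + t^2 d_t) e^{1/t} = 0
```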
\subsection{Key ideas in proving main results}
For Theorem \ref{thm:constrlattice}, we first use direct images of log $\mathscr{D}_X$-modules (see \S\ref{subsec:dimagelogD}) under the graph embedding of the defining functions of the normal crossing divisor locally to reduce to the case for lattices $\mathscr{D}_{X}[{\bf s}]({\bf f}^{\bf s}\cdot \mathcal{M}_0)$. The lattices $\mathscr{D}_{X}[{\bf s}]({\bf f}^{\bf s}\cdot \mathcal{M}_0)$ can be treated as relative $\mathscr{D}$-modules over $\C[{\bf s}]$ with independent parameters
$${\bf s}=(s_1,s_2,\dots,s_r).$$
We then use the fact that $\mathscr{D}_{X}[{\bf s}]({\bf f}^{\bf s}\cdot \mathcal{M}_0)$ is indeed relative holonomic, see Theorem \ref{thm:mairelhol}, which generalizes a relative holonomicity result of Maisonobe \cite{Mai}. We then identify the log de Rham complex of $\mathscr{D}_{X}[{\bf s}]({\bf f}^{\bf s}\cdot \mathcal{M}_0)$ as the fiber of the relative de Rham complexes over the origin ${\mathbf 0}\in \C^r=\textup{Spec }\C[{\bf s}]$. Finally, Theorem \ref{thm:constrlattice} follows from the relative holonomicity result and Kashiwara's constructibility theorem for complexes of $\mathscr{D}$-modules with holonomic cohomologies. To prove Theorem \ref{thm:j*j!DR}, we reduce the required perversity to the flatness of the twisted lattices
\[\mathscr{D}_{X}[{\bf s}]({\bf f}^{\bf s}\cdot \mathcal{M}_0(kD))\]
over a small (analytic) neighborhood of ${\mathbf 0}\in \C^r$ for $|k|\gg0$ by applying Sabbah's generalized $b$-functions.
The key point of the proof of
Theorem \ref{thm:CClogC} is to interpret log $\mathscr{D}$-modules as certain relative $\mathscr{D}$-modules.
More precisely, we use what we call the log rescaled families (locally) to convert lattices to relative $\mathscr{D}$-modules over the log factor. See \S\ref{subsec:logtorel} for constructions. Finally we apply a relative characteristic variety formula of Sabbah and Brian\c{c}on-Maisonobe-Merle (see Lemma \ref{lm:BMM}).
\subsection{Relations to singularities in algebraic geometry} We first clarify the notion of Bernstein-Sato polynomials (or $b$-functions) used in this paper and in the literature.
The $b$-functions in Definition \ref{def:fltLV} are the natural generalization of the $b$-functions for the usual $V$-filtrations (see \S\ref{subsec:KMVsp} and also \cite[\S III.7]{Bj}). For a holomorphic function $f$ and for the lattices $\mathscr{D}_X[s](f^s\cdot\mathcal{M}_0)$ (that is, $r=1$), the associated $b$-function is the monic polynomial $b(s)\in\C[s]$ of the least degree such that
\[b(s)\cdot \frac{\mathscr{D}_X[s](f^s\cdot\mathcal{M}_0)}{\mathscr{D}_X[s](f^{s+1}\cdot\mathcal{M}_0)}=0,\]
where $\mathcal{M}_0$ is an $\mathscr{O}_X$-coherent submodule of a holonomic $\mathscr{D}_X$-module.
See \cite[III.2 and VI.1]{Bj} for this case. If $\mathcal{M}_0=\mathscr{O}_X$, the above definition gives in particular the $b$-function of $f$. For the lattice $\mathscr{D}_{X}[{\bf s}]({\bf f}^{\bf s}\cdot \mathcal{M}_0)$ in general, since $s_i$ is identified with $-t_i\partial_{t_i}$,
Theorem \ref{thm:mibfs} gives a polynomial $b({\bf s})\in\C[{\bf s}]$ such that
\[b({\bf s})\cdot \frac{\mathscr{D}_X[{\bf s}](
{\bf f}^{\bf s}\cdot\mathcal{M}_0)}{\mathscr{D}_X[{\bf s}]({\bf f}^{{\bf s}+\vec1}\cdot\mathcal{M}_0)}=0,\]
with $\vec 1=(1,1,\dots,1)$. Such $b({\bf s})$ are what we mean by Sabbah's generalized $b$-functions. This can be further generalized to the definition of the Bernstein-Sato ideal $B_{\bf f}$ of ${\bf f}$ (see for instance \cite{Budur}).
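For a quick illustration in the case $r=1$ and $\mathcal{M}_0=\mathscr{O}_X$ (classical computations, recalled here only for convenience): for $f=x$ on $X=\C$,
\[\partial_x\cdot x^{s+1}=(s+1)x^s,\]
so $b(s)=s+1$; and for $f=x_1^2+\cdots+x_n^2$ on $X=\C^n$,
\[\Big(\sum_{i=1}^n\partial_{x_i}^2\Big)\cdot f^{s+1}=4(s+1)\Big(s+\frac{n}{2}\Big)f^s,\]
so $b(s)$ divides $(s+1)(s+\frac{n}{2})$ (and equals it in this classical example).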
Now we discuss the relations between $V$-filtrations (and/or $b$-functions) and singularities in algebraic geometry. The $b$-function of $f$ provides a useful tool to study singularities of the divisor of $f$; see for instance \cite{KasBf,ELSV,BSaito,Kolpair,Ste}. For multiple functions, Budur, Musta\c{t}\v{a} and Saito \cite{BMS} defined the Bernstein-Sato polynomials (or $b$-functions) for arbitrary schemes in $X=\mathbb{C}^n$ by considering the $V$-filtration along smooth subvarieties (see Definition \ref{def:fltV}).\footnote{The $V$-filtration in \emph{loc. cit.} is the $\mathbb{Q}$-indexed one. One can refine the $\mathbb{Z}$-index to the $\mathbb{Q}$-index by a standard procedure using $b$-functions.} More precisely, they considered the $V$-filtration and $b$-functions along the slope $L=(1,1,\dots,1)$ for the holonomic module $\iota_{{\bf f}+}\mathscr{O}_X$, where ${\bf f}=(f_1,\dots,f_r)$ generate the ideal of an affine scheme. They reinterpreted the log-canonical threshold as well as the other jumping coefficients of the multiplier ideals of the scheme \cite{Laz} as roots of the $b$-function of the $V$-filtration (\cite[Theorem 2]{BMS}). By Theorem \ref{thm:mibfs}, one can see that if $L=(1,\dots,1)$ is not a slope of an adapted cone of a certain good $\mathbb{Z}^r$-filtration on $\iota_{{\bf f}+}\mathscr{O}_X$, then the $b$-function of the scheme is not directly related to the generalized $b$-function of Sabbah.
In fact, this can be further refined in terms of Bernstein-Sato ideals with the help of a result of Maisonobe. More precisely, by \cite[R\'esultat 4]{Mai}, if $L=(1,\dots,1)$ is not a slope of the codimension-one components of the zero locus of $B_{\bf f}$, then the $b$-function of the scheme defined by ${\bf f}$ is not directly related to the Bernstein-Sato ideal of ${\bf f}$.
By Theorem \ref{thm:relccL}, one can now consider the $V$-filtration and the $b$-function of $\iota_{{\bf f}+}\mathscr{O}_X$ along an arbitrary slope $L$. It would be interesting to know whether there exist algebro-geometric interpretations of the jumping indices of the $V$-filtration and the roots of the $b$-function of $\iota_{{\bf f}+}\mathscr{O}_X$ along $L$.
Musta\c{t}\v{a} and Popa \cite{MP1,MP2,MP3} defined Hodge ideals (see also \cite{Saitohi}) by considering the Hodge filtration of the $\mathscr{D}$-module $\mathscr{O}_X(*D)$ using mixed Hodge modules. It is then natural to ask whether there exist connections between $V$-filtrations of $\iota_{{\bf f}+}\mathscr{O}_X(*D)$ along $L$ and Hodge ideals of $D$, where $\prod_i f_i=0$ defines the divisor $D$.
\subsection{Outline}In \S\ref{sec:reldmodule}, we discuss the general theory of relative $\mathscr{D}$-modules. Then we discuss log $\mathscr{D}$-modules and the proof of Theorem \ref{thm:CClogC} in \S\ref{sec:logdmodule}. \S\ref{sec:vfilreld} is about the generalized $V$-filtrations and their relations with relative $\mathscr{D}$-modules. Most of the results in \S\ref{sec:vfilreld} are essentially due to Sabbah. We give a down-to-earth exposition in \S\ref{sec:vfilreld}, because the beautiful theory of multi-indexed filtrations of Sabbah seems not to be widely known. Also, the construction of $V$-filtrations along arbitrary slopes seems to be missing from the literature, to the best of our knowledge. For instance, as mentioned above, only the $V$-filtration along the slope $L=(1,\dots,1)$ was studied in \cite{BMS}.
Finally, we recall the graph construction of Malgrange and prove the constructibility of log de Rham complexes in \S\ref{sec:gemMal}.
\subsection{Convention} Throughout this paper, we discuss sheaves of modules on either algebraic or analytic spaces (or both). If the underlying space is algebraic (resp. analytic), then the sheaves of modules on it are all assumed to be algebraic (resp. analytic). But, when we discuss constructible complexes (or perverse sheaves) on a complex algebraic variety, we always use the Euclidean topology. If $f:X\to Y$ is a morphism of (algebraic or analytic) schemes, we use $f^{-1}$ and $f_*$ to denote the sheaf-theoretical inverse and direct image functors respectively.
\subsection*{Acknowledgement}
The author thanks Peng Zhou and Nero Budur for useful discussions, Claude Sabbah for answering questions and Yajnaseni Dutta and Ruijie Yang for useful comments.
\section{Relative $\mathscr{D}$-modules}\label{sec:reldmodule}
\subsection{Relative characteristic cycles}\label{subsec:relchcc}
We recall the theory of $\mathscr{D}$-modules under the relative setting. We mainly follow \cite[Chapter III. 1.3]{Schpbook}.
Suppose that $\varphi\colon \mathcal X\to \mathcal{S}$ is a smooth morphism (i.e. $d\varphi$ is surjective everywhere) of complex smooth algebraic varieties (or complex manifolds). We denote by $\mathscr{T}_{\mathcal{X}/\mathcal{S}}$ the sheaf of vector fields tangent to the fibers of $\varphi$. We then have an inclusion
\[\mathscr{T}_{\mathcal{X}/\mathcal{S}}\hookrightarrow \mathscr{T}_{\mathcal{X}}.\]
Then the sheaf of rings of relative differential operators associated to $\varphi$ is defined to be the subalgebra
\[\mathscr{D}_\varphi=\mathscr{D}_{\mathcal X/\mathcal{S}}\subseteq \mathscr{D}_{\mathcal{X}}\]
generated by $\mathscr{T}_{\mathcal{X}/\mathcal{S}}$ and $\mathscr{O}_{\mathcal{X}}$. Similar to the absolute case, $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$ is a coherent and noetherian sheaf of rings. Modules over $\mathscr{D}_{\mathcal X/\mathcal{S}}$ are called relative $\mathscr{D}$-modules over $\mathcal{S}$. We also denote by $\Omega^1_{\mathcal X/\mathcal S}$ the relative cotangent sheaf, defined as the $\mathscr{O}$-dual of $\mathscr{T}_{\mathcal X/\mathcal S}$. Since $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$ is not commutative, we have both right and left $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-modules; the side-change operator is given by tensoring with $\omega_{\mathcal{X}/\mathcal{S}}\coloneqq \wedge^m\Omega^1_{\mathcal X/\mathcal S}$, with quasi-inverse given by tensoring with $\omega^{-1}_{\mathcal{X}/\mathcal{S}}$, where $m=\dim \mathcal{X}-\dim \mathcal{S}$.
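In local coordinates adapted to $\varphi$, this can be made explicit (a standard local description, included only for the reader's convenience): if $\varphi$ is locally the projection
\[(x_1,\dots,x_m,s_1,\dots,s_\ell)\mapsto (s_1,\dots,s_\ell),\]
then $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$ is locally generated over $\mathscr{O}_{\mathcal{X}}$ by $\partial_{x_1},\dots,\partial_{x_m}$ alone; the vector fields $\partial_{s_1},\dots,\partial_{s_\ell}$ do not belong to $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$, while functions pulled back from $\mathcal{S}$ are central in it.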
Since $\varphi$ is smooth, we have a short exact sequence of cotangent bundles
\[0\to \mathcal X\times_\mathcal{S} T^*\mathcal{S} \to T^*\mathcal{X}\to T^*(\mathcal X/\mathcal{S})\to 0.\]
The filtration $F_\bullet\mathscr{D}_{\mathcal{X}}$ by the orders of differential operators induces on $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$ the order filtration $F_\bullet\mathscr{D}_{\mathcal X/\mathcal{S}}$. Then the associated graded sheaf of rings $\textup{gr}_\bullet^F\mathscr{D}_{\mathcal X/\mathcal{S}}$ gives the algebraic structure sheaf of $T^*(\mathcal X/\mathcal{S})$ via the $\sim$-functor.
In the analytic case, $\mathscr{O}_{T^*(\mathcal X/\mathcal{S})}$ is a faithfully flat ring extension of $\widetilde{\textup{gr}}_\bullet^F\mathscr{D}_{\mathcal X/\mathcal{S}}$ by GAGA.
For a coherent $\mathscr{D}_{\mathcal X/\mathcal{S}}$-module $\mathscr M$, a filtration $F_\bullet\mathscr{M}$ over $F_\bullet\mathscr{D}_{\mathcal{X}/\mathcal{S}}$ is called \emph{good} if $\gr^F_\bullet\mathscr{M}$ is coherent over $\gr^F_\bullet\mathscr{D}_{\mathcal{X}/\mathcal{S}}$. Conversely, if there exists a filtration $F_\bullet\mathscr{M}$ satisfying that $\gr^F_\bullet\mathscr{M}$ is coherent over $\gr^F_\bullet\mathscr{D}_{\mathcal{X}/\mathcal{S}}$, then $\mathscr{M}$ is coherent over $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$.
Good filtrations for coherent $\mathscr{D}_{\mathcal X/\mathcal{S}}$-modules exist locally in the analytic category and globally in the algebraic category.
We define the relative characteristic variety by
\[\textup{Ch}_{{\textup{rel}}}(\mathscr M)=\textup{Ch}_{\mathcal{X}/\mathcal{S}}(\mathscr{M})\coloneqq \textup{supp}(\widetilde{\textup{gr}}^F_\bullet\mathscr{M})\subseteq T^*(\mathcal X/\mathcal{S}),\]
where we use the $\sim$-functor of the affine morphism
$$\pi: T^*(\mathcal X/\mathcal{S})\to \mathcal{X}.$$
By construction, $\textup{Ch}_{\textup{rel}}(\mathscr{M})$ is a conic subvariety of $T^*(
\mathcal{X}/\mathcal{S})$, where ``conic" means that it is invariant under the natural $\C^\star$-action induced by the grading on $\textup{gr}^F_\bullet\mathscr{D}_{\mathcal{X}/\mathcal{S}}$. Each irreducible component of $\textup{Ch}_{{\textup{rel}}}(\mathscr M)$ carries a multiplicity. Similar to the absolute case, $\textup{Ch}_{{\textup{rel}}}(\mathscr M)$ and the multiplicities are independent of the choice of good filtration. The relative characteristic cycle, denoted by $\textup{CC}_{{\textup{rel}}}(\mathscr M)$, is the locally finite cycle associated to $\textup{Ch}_{{\textup{rel}}}(\mathscr M)$ with these multiplicities. If $\varphi$ is an algebraic smooth morphism between smooth varieties over $\mathbb{C}$, then $\textup{CC}_{{\textup{rel}}}(\mathscr M)$ is an algebraic cycle inside the algebraic relative cotangent bundle $T^*(\mathcal{X}/\mathcal{S})$.
Similar to the absolute case, for a relative differential operator $P\in \mathscr{D}_{\mathcal{X}/\mathcal{S}}$ of order $k$, we can define its principal symbol, which gives a section of homogeneous degree $k$ in $\textup{gr}^F_\bullet \mathscr{D}_{\mathcal{X}/\mathcal{S}}$. By \cite[3.24 Definition]{Bj}, we obtain the relative Poisson bracket on $\textup{gr}^F_\bullet \mathscr{D}_{\mathcal{X}/\mathcal{S}}$ and hence the relative Poisson bracket on $\mathscr{O}_{T^*(\mathcal{X}/\mathcal{S})}$ (by faithful flatness). A subvariety of $T^*(\mathcal{X}/\mathcal{S})$ is called (relative) \emph{involutive} if its radical ideal sheaf is closed under the Poisson bracket. Then by Gabber's involutive theorem (see \cite[Appendix III, 3.25 Theorem]{Bj}), we obtain:
\begin{theorem}[Gabber's Involutivity]\label{thm:Ginv}
Suppose that $\mathscr{M}$ is a coherent $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-module $($left or right$)$. Then $\textup{Ch}_{\textup{rel}}(
\mathscr{M})$ is $($relative$)$ involutive.
\end{theorem}
Notice that the fibers of a relative involutive subvariety $\mathcal{Z}\subseteq T^*(\mathcal{X}/\mathcal{S})$ are not necessarily involutive. One reason is that the intersections
\[\mathcal{Z}_s=\mathcal{Z}\cap T^*(\mathcal{X}/\mathcal{S})_s\subseteq T^* \mathcal{X}_s\]
are not always proper intersections for $s\in \mathcal{S}$. If additionally $\mathcal{Z}$ is smooth over $\mathcal{S}$, then one can easily check that $\mathcal{Z}_s\subseteq T^*\mathcal{X}_s$ is either empty or involutive.
However, we have the following relative Bernstein inequality:
\begin{theorem}[Relative Bernstein Inequality of Maisonobe]\label{thm:rbernsteinM}
Suppose that $\mathscr{M}$ is a coherent $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-module $($left or right$)$. If $\textup{Ch}_{\textup{rel}}(
\mathscr{M})_s$ is not empty for $s\in\mathcal{S}$, then all the irreducible components of $\textup{Ch}_{\textup{rel}}(
\mathscr{M})_s$ are of dimension $\ge \dim\mathcal{X}-\dim\mathcal{S}$.
\end{theorem}
\begin{proof}
The proof is essentially the same as that of \cite[Proposition 5]{Mai}, where the author only discussed relative $\mathscr{D}$-modules over $\mathcal{S}=\mathbb{C}^r$. For completeness, we sketch the proof in general. We take a smooth point of $\textup{Ch}_{\textup{rel}}(
\mathscr{M})$ and focus on an open neighborhood $W$ around it. By generic smoothness (or Morse-Sard Theorem for critical values in the analytic case), $\textup{Ch}_{\textup{rel}}(
\mathscr{M})\cap W$ is smooth over an open subset $U\subseteq\mathcal{S}$ (shrinking $W$ if necessary). Then the relative involutivity in Theorem \ref{thm:Ginv} and the relative smoothness imply that $(\textup{Ch}_{\textup{rel}}(
\mathscr{M})\cap W)\cap T^*\mathcal{X}_s$ is involutive in $T^*\mathcal{X}_s$ for $s\in U$. In particular, the dimension of
$$(\textup{Ch}_{\textup{rel}}(
\mathscr{M})\cap W)\cap T^*\mathcal{X}_s$$
is $\ge \dim\mathcal{X}-\dim\mathcal{S}$. Therefore, the required statement follows from the upper semicontinuity of the dimension of fibers of $\textup{Ch}_{\textup{rel}}(\mathscr{M})$.
\end{proof}
The following lemma is the relative analogue of \cite[Proposition 2.10]{Kasbook}. We leave its proof to interested readers. See also \cite[Lemma 3.2.2]{BVWZ}.
\begin{lemma}\label{lm:unionchrel}
If
\[0\to \mathscr{M}'\longrightarrow \mathscr{M}\longrightarrow\mathscr{M}''\to 0\]
is a short exact sequence of coherent $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-modules, then
\[\textup{Ch}_{\textup{rel}}(\mathscr{M})=\textup{Ch}_{\textup{rel}}(\mathscr{M}')\cup \textup{Ch}_{\textup{rel}}(\mathscr{M}'').\]
\end{lemma}
Following \cite{Sab2}, we define the relative holonomicity as follows.
\begin{definition}[Relative holonomicity]\label{def:relholo}
A coherent $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-module $\mathscr{M}$ is called relative holonomic over $\mathcal{S}$ (or $\mathscr{O}_{\mathcal{S}}$) if its relative characteristic variety $\textup{Ch}_{\textup{rel}}(
\mathscr{M})$ is relative Lagrangian, that is, the fiber $\textup{Ch}_{\textup{rel}}(
\mathscr{M})_s$ is either empty or a (possibly reducible) Lagrangian subvariety in $T^*\mathcal{X}_s$ for every $s\in \mathcal{S}$.\footnote{This relative holonomicity is slightly more general than the ones in \cite{Mai, FS17, BVWZ, BVWZ2}, where the latter requires additionally that the relative Lagrangian subvarieties are independent of $s\in \mathcal{S}=\C^r$.}
\end{definition}
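As a toy example illustrating the definition (our own, included only to make the notion concrete): let $\mathcal{X}=\C_x\times\C_s$, $\mathcal{S}=\C_s$, and
\[\mathscr{M}=\mathscr{D}_{\mathcal{X}/\mathcal{S}}/\mathscr{D}_{\mathcal{X}/\mathcal{S}}\cdot\partial_x.\]
With coordinates $(x,s,\xi)$ on $T^*(\mathcal{X}/\mathcal{S})$, we have $\textup{Ch}_{\textup{rel}}(\mathscr{M})=\{\xi=0\}$; its fiber over every $s\in\mathcal{S}$ is the zero section of $T^*\mathcal{X}_s=T^*\C_x$, a Lagrangian, so $\mathscr{M}$ is relative holonomic.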
From Lemma \ref{lm:unionchrel} and Theorem \ref{thm:rbernsteinM}, we immediately obtain:
\begin{coro}\label{cor:relholab}
Relative holonomicity is preserved by subquotients and extensions. In particular, the category of relative holonomic modules is abelian.
\end{coro}
\subsection{Base change for relative $\mathscr{D}$-modules}\label{subsec:bashchangerelD}
We now discuss base change for relative $\mathscr{D}$-modules. Suppose we have the following commutative diagram,
\begin{equation} \label{diag:basechanges1}
\begin{tikzcd}
\mathcal{X}_\mathcal{T}\arrow[r,"\mu"]\arrow[d]& \mathcal{X}\arrow[d]\\
\mathcal{T}\arrow[r,"\nu"]& \mathcal{S}
\end{tikzcd}
\end{equation}
so that $\mathcal{X}_\mathcal{T}=\mathcal{X}\times_\mathcal{S}\mathcal{T}$. Suppose $\mathscr{M}$ is a (left) $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-module. We consider the $\mathscr{O}$-pullback through $\mu$:
\[\mu^*(\mathscr{M})=\mathscr{O}_{\mathcal{X}_\mathcal{T}}\otimes_{\mu^{-1}\mathscr{O}_{\mathcal{X}}}\mu^{-1}\mathscr{M}.\]
Since $\mu^*{\mathscr{D}_{\mathcal{X}/\mathcal{S}}}=\mathscr{D}_{\mathcal{X}_{\mathcal{T}}/\mathcal{T}}$, $\mu^*(\mathscr{M})$ is naturally a relative $\mathscr{D}$-module over $\mathcal{T}$. We then have the derived pullback functor $\mathbf{L}\mu^{*}$ for relative $\mathscr{D}$-modules. When the relative $\mathscr{D}$-module structure is forgotten, it is exactly the derived $\mathscr{O}$-module pullback functor. One can easily see that the derived functor $\mathbf{L}\mu^*$ for relative $\mathscr{D}$-module preserves coherence, thanks to the identification $\mu^*{\mathscr{D}_{\mathcal{X}/\mathcal{S}}}=\mathscr{D}_{\mathcal{X}_{\mathcal{T}}/\mathcal{T}}$ again.
We denote by $i_s\colon \mathcal{X}_s\hookrightarrow \mathcal{X}$ the closed embedding of the fiber over $s\in \mathcal{S}$. A relative holonomic $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-module $\mathscr{M}$ is \emph{regular} if $\mathbf{L} i^*_s(\mathscr{M})$ is a complex of $\mathscr{D}_{\mathcal{X}_s}$-modules with regular holonomic cohomology sheaves for every $s\in\mathcal{S}$. The author was informed by C. Sabbah that it is not known whether the category of regular relative holonomic $\mathscr{D}$-modules is closed under taking subquotients.
For a closed subvariety $\mathcal{Z}\subseteq \mathcal{S}$, we denote by $i_\mathcal{Z}\colon \mathcal{X}_\mathcal{Z}\hookrightarrow \mathcal{X}$ the closed embedding with $\mathcal{X}_\mathcal{Z}=\mathcal{X}\times_\mathcal{S} \mathcal{Z}$.
\begin{lemma}\label{lm:spsmsubvar}
If $\mathscr{M}$ is relative holonomic over $\mathcal{S}$ and $\mathcal{Z}$ is a smooth subvariety, then $\mathbf{L} i_\mathcal{Z}^*(\mathscr{M})$ is a complex with relative holonomic cohomology sheaves over $\mathcal{Z}$. In particular, $\mathbf{L}^k i_s^*\mathscr{M}$ is a holonomic $\mathscr{D}_{\mathcal{X}_s}$-module for each $k$.
\end{lemma}
\begin{proof}
It is obvious that $\mathbf{L}^k i_\mathcal{Z}^*(\mathscr{M})$ is coherent over $\mathscr{D}_{\mathcal{X}_\mathcal{Z}/\mathcal{Z}}$ and that $$\textup{Ch}_{\textup{rel}}(\mathbf{L}^k i_\mathcal{Z}^*(\mathscr{M}))\subseteq \textup{Ch}_{\textup{rel}}(\mathscr{M})|_{\mathcal{Z}}\coloneqq\textup{Ch}_{\textup{rel}}(\mathscr{M})\cap \pi^{-1}(\mathcal{X}_\mathcal{Z}).$$
But $\textup{Ch}_{\textup{rel}}(\mathscr{M})|_{\mathcal{Z}}$ is relative Lagrangian over $\mathcal{Z}$. Therefore, $\mathbf{L}^k i_\mathcal{Z}^*(\mathscr{M})$ is relative holonomic by Theorem \ref{thm:rbernsteinM}.
\end{proof}
\begin{prop}\label{prop:spnotorsioncc}
Suppose that $\varphi:\mathcal{X}\to \mathcal{S}$ is smooth
with $\mathcal{H}\subset\mathcal{S}$ a smooth divisor,
and that $\mathscr{M}$ is a coherent $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-module. If $\mathscr{M}$ has no torsion subsheaf supported on $\mathcal{X}_\mathcal{H}$ and if the cycle $\textup{CC}_{\textup{rel}}(\mathscr{M})$ does not have components over $\mathcal{X}_\mathcal{H}$, then
\[\textup{CC}_{\textup{rel}}(i^*_\mathcal{H}\mathscr{M})=\textup{CC}_{\textup{rel}}(\mathscr{M})|_{\mathcal{X}_\mathcal{H}},\]
where $i_\mathcal{H}:\mathcal{X}_\mathcal{H}\hookrightarrow\mathcal{X}$ is the closed embedding with the fiber product $\mathcal{X}_\mathcal{H}=\mathcal{X}\times_\mathcal{S}\mathcal{H}$.
\end{prop}
\begin{proof}
Since characteristic cycles are local, it is enough to assume that $\mathcal{H}$ is defined by a regular (or holomorphic) function $h$. The torsion-free assumption implies that we have a short exact sequence
\[0\to \mathscr{M}\xrightarrow{\cdot h} \mathscr{M}\rightarrow i^*_\mathcal{H}(\mathscr{M})\to 0.\]
Now we pick a good filtration $F_\bullet\mathscr{M}$ over $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$ and the filtration induces a filtered complex
\[F_\bullet\mathscr{M}\xrightarrow{\cdot h} F_\bullet\mathscr{M}.\]
We then obtain a convergent spectral sequence with the $E^0$-page given by the two-term complex
\[\eta\colon\textup{gr}^F_\bullet\mathscr{M}\xrightarrow{\cdot h}\textup{gr}^F_\bullet\mathscr{M}.\]
Then, $i^*_\mathcal{H}(\mathscr{M})$ has an induced filtration $F_\bullet(i^*_\mathcal{H}(\mathscr{M}))$ (good over $\mathscr{D}_{\mathcal{X}_{\mathcal{H}}/\mathcal{H}}$). By convergence of the spectral sequence (see for instance \cite[Lemme 3.5.13]{Laumon} and also \cite[3.7. Lemme]{Sab2}), we have
\[[\textup{gr}^F_\bullet(i^*_\mathcal{H}(\mathscr{M}))]=[\textup{Coker}(\eta)]-[\textup{Ker}(\eta)]\]
in the Grothendieck group $K_0$. Since the characteristic cycle is well defined for $[\textup{gr}^F_\bullet(i^*_\mathcal{H}(\mathscr{M}))]$, the required statement for characteristic cycles follows from \cite[Appendix IV. 3.13 Proposition]{Bj}.
\end{proof}
Suppose that $\mathscr{N}$ is a (left)
$\mathscr{D}_{\mathcal{X}_{\mathcal{T}}/\mathcal{T}}$-module (or more generally a complex). We consider the derived pushforward functor $\mathbf{R}\mu_*(\mathscr{N})$ with $\mu$ as in diagram \eqref{diag:basechanges1}. Since $\mu^*{\mathscr{D}_{\mathcal{X}/\mathcal{S}}}=\mathscr{D}_{\mathcal{X}_{\mathcal{T}}/\mathcal{T}}$, we have
\[\mathbf{R}\mu_*(\mathscr{N})\simeq \mathbf{R}\mu_*(\mu^*(\mathscr{D}_{\mathcal{X}/\mathcal{S}})\otimes_{\mathscr{D}_{\mathcal{X}_{\mathcal{T}}/\mathcal{T}}}\mathscr{N}),\]
and hence the complex $\mathbf{R}\mu_*(\mathscr{N})$ is a complex of $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-modules by adjunction.
\begin{prop}\label{pro:pfcohrelbasechange}
Suppose that $\nu$ is a proper morphism, $\mathscr{N}$ is coherent over $\mathscr{D}_{\mathcal{X}_{\mathcal{T}}/\mathcal{T}}$ $($or more generally a complex of $\mathscr{D}_{\mathcal{X}_{\mathcal{T}}/\mathcal{T}}$-modules with coherent cohomology sheaves$)$ and $\mathscr{N}$ $($or each of its cohomology sheaves$)$ admits a good filtration over $F_\bullet\mathscr{D}_{\mathcal{X}_{\mathcal{T}}/\mathcal{T}}$. Then $\mathbf{R}^{i}\mu_*(
\mathscr{N})$ is coherent over $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$ for each $i\in\mathbb{Z}$.
\end{prop}
\begin{proof}
The idea of the proof of the required coherence result is similar to that of the case for absolute $\mathscr{D}$-modules (see for instance \cite[Theorem 2.8.1]{Bj}). We sketch here its proof for completeness.
By a standard procedure (see for instance the proof of \cite[Theorem 1.5.8]{Bj}), the required statement can be reduced to the case for
\[\mathscr{N}=\mathscr{D}_{\mathcal{X}_{\mathcal{T}}/\mathcal{T}}\otimes_\mathscr{O} \mathscr{L}\]
where $\mathscr{L}$ is a coherent $\mathscr{O}$-module. But this case follows immediately from Grauert's direct image theorem for $\mathscr{O}$-modules and the projection formula (since $\mu^*{\mathscr{D}_{\mathcal{X}/\mathcal{S}}}=\mathscr{D}_{\mathcal{X}_{\mathcal{T}}/\mathcal{T}}$).
\end{proof}
It is worth mentioning that $\mathbf{R}\mu_*$ does not preserve relative holonomicity under proper base changes in general (compared to Proposition \ref{prop:relpushfw}). See \S \ref{subsec:relccU} for related examples.
\subsection{Relative de Rham complexes}
We keep notations as in the previous subsection. Let $\mathscr{M}$ be a (left) $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-module. The \emph{relative de Rham complex} of $\mathscr{M}$ is defined as
\[\textup{DR}_{\mathcal{X}/\mathcal{S}}(\mathscr{M})\coloneqq \omega_{\mathcal{X}/\mathcal{S}}\otimes^\mathbf{L}_{\mathscr{D}_{\mathcal{X}/\mathcal{S}}}\mathscr{M}.\]
\begin{lemma}\label{lm:cmmpbdr}
We have a natural isomorphism
\[\mathbf{L} i^*_s\circ\textup{DR}_{\mathcal{X}/
\mathcal{S}}\simeq \textup{DR}_{\mathcal{X}_s}\circ \mathbf{L} i^*_s\]
for $s\in\mathcal{S}$.
\end{lemma}
\begin{proof}
The functor $\mathbf{L} i^*_s$ can be rewritten as $\otimes^\mathbf{L}_{\mathscr{O}_\mathcal{S}}\C_s$, where $\C_s$ is the residue field of $s\in\mathcal{S}$. Since sections of $\mathscr{O}_{\mathcal{S}}$ lie in the center of $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$, the required statement follows.
\end{proof}
\subsection{Relative direct images}\label{subsec:reldim}
We discuss direct image functors for $\mathscr{D}$-modules under the relative setting. We fix a morphism $f$ over $\mathcal{S}$
\[
\begin{tikzcd}
\mathcal{Y}\arrow[rr,"f"]\arrow[dr,"\varphi_1"] & & \mathcal{X}\arrow[dl,"\varphi"] \\
&\mathcal{S}
\end{tikzcd}
\]
with $\varphi_1$ and $\varphi$ smooth. We recall the definition of the relative direct image:
\[f_+(\mathscr{N})\coloneqq \mathbf{R} f_*(\mathscr{N}\otimes_\mathscr{O} \omega_{f/\mathcal{S}}\otimes^\mathbf{L}_{\mathscr{D}_{\mathcal{Y}/\mathcal{S}}}f^*\mathscr{D}_{\mathcal{X}/\mathcal{S}})\]
for a left $\mathscr{D}_{\mathcal{Y}/\mathcal{S}}$-module $\mathscr{N}$ (or more generally a complex of left $\mathscr{D}_{\mathcal{Y}/\mathcal{S}}$-modules), where $\omega_{f/\mathcal{S}}=\omega_{\mathcal{Y}/\mathcal{S}}\otimes f^*(\omega^{-1}_{\mathcal{X}/\mathcal{S}})$.
The morphism $f$ over $\mathcal{S}$ induces a relative Lagrangian correspondence
\[
\begin{tikzcd}
T^*(\mathcal{Y}/\mathcal{S})\arrow[rd,"\phi_1"] & \mathcal{Y}\times _\mathcal{X} T^*(\mathcal{X}/\mathcal{S})\arrow[l,"\varrho_f"]\arrow[r,"\varpi_f"]\arrow[d] & T^*(\mathcal{X}/\mathcal{S})\arrow[ld,"\phi"]\\
&\mathcal{S}&
\end{tikzcd}
\]
See for instance \cite[\S 2.4]{HTT} for the absolute Lagrangian correspondence.
The following proposition is a generalization of \cite[Theorem 1.17(a)]{FS17}.
\begin{prop}\label{prop:relpushfw}
With above notations, let $\mathscr{N}$ be a coherent $\mathscr{D}_{\mathcal{Y}/\mathcal{S}}$-module with a good filtration over $F_\bullet\mathscr{D}_{\mathcal{Y}/\mathcal{S}}$.
\begin{enumerate}
\item If $f$ is proper over $\mathcal{S}$, then $f_+(\mathscr{N})$ is a complex of $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-modules with coherent cohomology sheaves.
\item If moreover $\mathscr{N}$ is relative holonomic, then $f_+(\mathscr{N})$ is a complex of $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-modules with relative holonomic cohomology sheaves.
\end{enumerate}
\end{prop}
\begin{proof}
Since $(\mathscr{N}, \C_\mathcal{Y})$ gives us a good relative elliptic pair (see \cite[Definition 2.14]{SS94}), the first statement follows from Theorem 4.2 in \emph{loc. cit.} If moreover $\mathscr{N}$ is relative holonomic, then by Corollary 4.3 in \emph{loc. cit.} we have
\[\textup{Ch}_{\textup{rel}}(\mathcal{H}^i f_+(\mathscr{N}))\subseteq \varpi_f(\varrho_f^{-1}(\textup{Ch}_{\textup{rel}}(\mathscr{N})))\]
for each $i$. Thus, for $s\in \mathcal{S}$
\[\textup{Ch}_{\textup{rel}}(\mathcal{H}^i f_+(\mathscr{N}))\cap \phi^{-1}(s) \subseteq \varpi_f(\varrho_f^{-1}(\textup{Ch}_{\textup{rel}}(\mathscr{N})))\cap\phi^{-1}(s)\subseteq\varpi_f(\varrho_f^{-1}(\textup{Ch}_{\textup{rel}}(\mathscr{N})\cap\phi_1^{-1}(s))).\]
By definition $\textup{Ch}_{\textup{rel}}(\mathscr{N})\cap\phi_1^{-1}(s)$ is Lagrangian and hence isotropic. By \cite[(4.9)]{KasBf}, $\varpi_f(\varrho_f^{-1}(\textup{Ch}_{\textup{rel}}(\mathscr{N})\cap\phi_1^{-1}(s)))$ and hence $\textup{Ch}_{\textup{rel}}(\mathcal{H}^i f_+(\mathscr{N}))\cap \phi^{-1}(s)$ are both isotropic. Thus, $\textup{Ch}_{\textup{rel}}(\mathcal{H}^i f_+(\mathscr{N}))\cap \phi^{-1}(s)$ is Lagrangian by Theorem \ref{thm:rbernsteinM} and $\mathcal{H}^i f_+(\mathscr{N})$ is relative holonomic.
\end{proof}
The following lemma is immediate by construction, and we skip its proof.
\begin{lemma}\label{lm:cmspf+}
We have a natural isomorphism
\[\mathbf{L} i^*_s\circ f_+\simeq (f_s)_+\circ \mathbf{L} i^*_s,\]
where $f_s:\mathcal{Y}_s\to \mathcal{X}_s$ is the induced morphism over $s\in \mathcal{S}$.
\end{lemma}
\begin{coro}
If $f$ is (relative) proper over $\mathcal{S}$, then $f_+$ preserves relative regular holonomicity.
\end{coro}
\begin{proof}
If $f$ is proper over $\mathcal{S}$, then $f_s$ is proper for $s\in\mathcal{S}$. Since $(f_s)_+$ preserves regular holonomicity, the required statement follows from Lemma \ref{lm:cmspf+}.
\end{proof}
\section{Logarithmic $\mathscr{D}$-modules}\label{sec:logdmodule}
In this section, we recall $\mathscr{D}$-modules under the logarithmic setting. Let $X$ be a complex manifold of dimension $n$
and let $D$ be a normal crossing divisor. We call such a pair $(X,D)$ an (analytic) smooth log pair. We denote by $\mathscr{D}_{X,D}$ the subalgebra of $\mathscr{D}_X$ consisting of differential operators preserving the ideal sheaf of $D$. In local coordinates $(x_1,x_2,\dots,x_n)$ on an open neighborhood $U$ with $D|_U=(\prod_{i=1}^rx_i=0)$ for some $r\le n$, $\mathscr{D}_{X,D}$ is the subalgebra generated by $\mathscr{O}_U$ and
$$x_1\partial_{x_1},\dots,x_r\partial_{x_r},\partial_{x_{r+1}},\dots,\partial_{x_n}.$$
Since $D$ is normal crossing, $\mathscr{D}_{X,D}$ is a coherent and noetherian sheaf of rings. Modules over $\mathscr{D}_{X,D}$ are called logarithmic (log) $\mathscr{D}$-modules.
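For instance, on $(X,D)=(\C_x,\{0\})$ one checks directly (an elementary verification) that
\[x\partial_x\cdot(xg)=x(g+x\partial_x g)\in(x)\qquad\textup{while}\qquad \partial_x\cdot x=1\notin(x),\]
so $x\partial_x$ preserves the ideal sheaf of $D$ but $\partial_x$ does not; accordingly $x\partial_x\in\mathscr{D}_{X,D}$ and $\partial_x\notin\mathscr{D}_{X,D}$.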
The order filtration $F_\bullet \mathscr{D}_X$ induces the order filtration $F_\bullet \mathscr{D}_{X,D}$ such that the analytification of $\gr^F_\bullet\mathscr{D}_{X,D}$ gives us the structure sheaf of the log cotangent bundle $T^*(X,D)$. For a coherent $\mathscr{D}_{X,D}$-module $\mathcal{M}$, one can define the log characteristic variety
\[\textup{Ch}_{\log}(\mathcal{M})\subseteq T^*(X,D),\]
and the log characteristic cycle $\textup{CC}_{\log}(\mathcal{M})$ similar to the relative case. When $D=\emptyset$, we use $\textup{Ch}(\mathcal{M})$ (resp. $\textup{CC}(\mathcal{M})$) to denote the characteristic variety (resp. cycle) of the $\mathscr{D}_X$-module $\mathcal{M}$.
\subsection{Log de Rham complexes}\label{subsec:logdeRhamcons}
Suppose that $\mathcal{M}$ is a left $\mathscr{D}_{X,D}$-module on a smooth log pair $(X,D)$. Similar to the absolute case, $\omega_X(D)$ is a right $\mathscr{D}_{X,D}$-module, where $\omega_X$ is the sheaf of the top forms on $X$. The \emph{log de Rham complex} of $\mathcal{M}$ is defined as
\[\textup{DR}_{X,D}(\mathcal{M})\coloneqq \omega_X(D)\otimes^\mathbf{L}_{\mathscr{D}_{X,D}}\mathcal{M}. \]
By \cite[Lemma 2.3]{WZ}, we have
\[\textup{DR}_{X,D}(\mathcal{M})\simeq [\mathcal{M}\rightarrow \Omega^1(\log D)\otimes_\mathscr{O}\mathcal{M}\rightarrow\cdots\rightarrow \Omega^n(\log D)\otimes_\mathscr{O}\mathcal{M}]\]
where the complex on the right-hand side starts in degree $-n$ and $\Omega^i(\log D)$ denotes the sheaf of logarithmic $i$-forms.
In local coordinates
$$(x_1,x_2,\dots,x_n) \textup{ with } D|_U=(\prod_{i=1}^rx_i=0)$$ on an open neighborhood $U$ for some $r\le n$, we further have
\begin{equation}
\textup{DR}_{X,D}(\mathcal{M})\simeq \textup{Kos}(\mathcal{M};x_1\partial_{x_1},\dots,x_r\partial_{x_r},\partial_{x_{r+1}},\dots, \partial_{x_n}),
\end{equation}
where $\textup{Kos}$ denotes the Koszul complex of the actions $x_1\partial_{x_1},\dots,x_r\partial_{x_r},\partial_{x_{r+1}},\dots, \partial_{x_n}$ on $\mathcal{M}$.
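In the simplest case $(X,D)=(\C_x,\{0\})$ and $\mathcal{M}=\mathscr{O}_X$, this is the two-term Koszul complex (a standard example, spelled out for concreteness):
\[\textup{DR}_{X,D}(\mathscr{O}_X)\simeq[\mathscr{O}_X\xrightarrow{\;x\partial_x\;}\mathscr{O}_X],\]
placed in degrees $-1$ and $0$; since $x\partial_x\big(\sum_n a_nx^n\big)=\sum_n na_nx^n$, its cohomology sheaves are the constant sheaf $\C_X$ in degree $-1$ and the skyscraper sheaf $\C_0$ in degree $0$.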
\subsection{Direct image functor}\label{subsec:dimagelogD}
Suppose that $f\colon (X,D)\rightarrow(Y,E)$ is a morphism of smooth log pairs, that is, $f\colon X\rightarrow Y$ is a morphism of complex manifolds such that $f^{-1}E\subseteq D$. Then we define the derived direct image functor for left log $\mathscr{D}$-modules $\mathcal{M}$ by
\[f^{\log}_+(\mathcal{M})=\mathbf{R}f_*(\mathcal{M}\otimes_\mathscr{O} \omega^{\log}_f\otimes^\mathbf{L}_{\mathscr{D}_{X,D}}f^*\mathscr{D}_{Y,E}),\]
where $\omega^{\log}_f$ is the relative log canonical sheaf of $f$,
\[\omega^{\log}_f=\omega_X(D)\otimes_\mathscr{O} f^*(\omega^{-1}_{Y}(E)).\]
The above definition is compatible with the direct images in the relative case in \S\ref{subsec:reldim} if one takes $D$ and $E$ to be empty.
\begin{prop}\label{prop:logpushfdr}
Let $f\colon (X,D)\rightarrow(Y,E)$ be a morphism of smooth log pairs and let $\mathcal{M}$ be a $($left$)$ $\mathscr{D}_{X,D}$-module.
Then
\[\mathbf{R}f_*\textup{DR}_{X,D}(\mathcal{M})\simeq \textup{DR}_{Y,E}(f^{\log}_+\mathcal{M}).\]
\end{prop}
\begin{proof}
The proof of this proposition is exactly the same as in the non-log case (cf. \cite[Theorem 4.2.5]{HTT}). We leave the details to interested readers.
\end{proof}
\subsection{Lattices}\label{subsec:lattices}
We recall the definition of lattices in the analytic setting. Suppose that $(X,D)$ is an analytic smooth log pair and $\mathcal{M}$ is a coherent $\mathscr{D}_X$-module. We denote the algebraic localization of $\mathcal{M}$ along $D$ by
\[\mathcal{M}(*D)=\mathcal{M}\otimes_\mathscr{O} \mathscr{O}_X(*D),\]
where
\[\mathscr{O}_X(*D)=\lim_{k\to \infty}\mathscr{O}_X(kD).\]
Notice that in general $\mathcal{M}(*D)$ is not even coherent over $\mathscr{D}_X$. However, if $\mathcal{M}$ is holonomic, then so is $\mathcal{M}(*D)$, since holonomicity is preserved under tensor products over $\mathscr{O}$. A coherent $\mathscr{D}_{X,D}$-submodule
$$\bar\mathcal{M}\subseteq \mathcal{M}(*D)$$ is called a $\mathscr{D}_{X,D}$-\emph{lattice} of $\mathcal{M}(*D)$ (or of $\mathcal{M}$) if $\bar\mathcal{M}|_{X\setminus D}=\mathcal{M}|_{X\setminus D}$. By definition, lattices of $\mathcal{M}$ have no torsion subsheaves supported on $D$ (although $\mathcal{M}$ might). The prototype examples of lattices are Deligne lattices of local systems (see \cite[\S 4.4]{WZ}).
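A basic example (our own, complementing the Deligne lattices just mentioned): on $(X,D)=(\C_x,\{0\})$, each $\mathscr{O}_X(kD)=x^{-k}\mathscr{O}_X$ is a $\mathscr{D}_{X,D}$-lattice of $\mathscr{O}_X(*D)$, since
\[x\partial_x\cdot x^{-k}=-k\,x^{-k}\in x^{-k}\mathscr{O}_X,\]
and $\mathscr{O}_X(kD)|_{X\setminus D}=\mathscr{O}_X|_{X\setminus D}$.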
\subsection{From log to relative}\label{subsec:logtorel}
In this subsection, we discuss the connection between log $\mathscr{D}$-modules and relative $\mathscr{D}$-modules. We focus locally on a polydisc $W$,
\[W=\mathbb{D}^k_x\times \mathbb{D}_t^r\textup{ with coordinates }(x_1,\dots,x_k,t_1,\dots,t_r)\]
and a divisor $D$ given by $t_1\cdot t_2\cdots t_r=0$. In particular, $(W,D)$ is a smooth log pair. We consider the log rescaled families
\[\widetilde W_j=W\times \mathbb{D}_y^j\]
with $y(j)=(y_1,\dots,y_j)$ the coordinates on $\mathbb{D}_y^j$ for $0\le j\le r$ ($\widetilde W_0=W$), and the maps
\[p_j\colon \widetilde W_j\rightarrow W, (x,t,y(j))\mapsto(x,e^{y(j)}t),\]
where we abbreviate $(x_1,\dots,x_k)$ as $x$, $(e^{y_1}t_1,\dots,e^{y_j}t_j,t_{j+1},\dots,t_r)$ as $e^{y(j)}t$, and so on. Then we have the commutative diagram for each $0\le j<r$
\begin{equation}
\begin{tikzcd}
\widetilde W_j \arrow[rr,hook,"i_j"]\arrow[rd,"p_j"] & & \widetilde W_{j+1}\arrow[ld,"p_{j+1}"]\\
& W &
\end{tikzcd}
\end{equation}
where the inclusion is
$$i_j:\widetilde W_j\hookrightarrow \widetilde W_{j+1}, (x,t,y(j))\mapsto (x,t,y(j),0).$$
Let $U = \mathbb{D}^k_x \times (\mathbb{D}^\star)^r_t \subset W$ with $\mathbb{D}^\star$ the punctured disk. Then we define $\widetilde U_j = p^{-1}_j(U)$ and $\widetilde D_j = p^{-1}_j(D)$.
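Concretely, for a local function $f$ on $W$, the chain rule gives
\[\partial_{y_i}\big(f(x,e^{y(j)}t)\big)=e^{y_i}t_i\,(\partial_{t_i}f)(x,e^{y(j)}t)=t_i\partial_{t_i}\big(f(x,e^{y(j)}t)\big)\quad\textup{for } 1\le i\le j,\]
so the log vector fields $t_i\partial_{t_i}-\partial_{y_i}$ annihilate functions pulled back along $p_j$; this computation is used repeatedly below.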
Given a $\mathscr{D}_{W,D}$-lattice $\bar\mathcal{M}$ of a $\mathscr{D}_W$-module $\mathcal{M}$,
we consider the pull-backs
$\mathscr M_j \coloneqq p^*_j
\bar\mathcal{M}$ and write $p=p_r$, $\mathscr{M}=\mathscr{M}_r$, $\widetilde W=\widetilde W_r$, $\widetilde U=\widetilde U_r$ and $\widetilde D=\widetilde D_r$.
\begin{lemma}\label{lm:pblogrel}
With notations as above, let $\mathcal{M}$ be a coherent $\mathscr{D}_W$-module. Then
\begin{enumerate}
\item $p_j$ is smooth $($submersive$)$ for each $j$,
\item $\mathscr{M}_j$ is a coherent $\mathscr{D}_{\widetilde W_j,\widetilde D_j}$-module for each $j$,
\item $\mathscr{M}$ is a coherent $\mathscr{D}_{\widetilde W/\mathbb{D}^r_t}$-module for the natural projection
$$ \pi_t: \widetilde W \to \mathbb{D}^r_t.$$
\end{enumerate}
\end{lemma}
\begin{proof}
Part (1) is obvious. For Part (2), we pick a good filtration $F_\bullet\bar\mathcal{M}$ over $F_\bullet\mathscr{D}_{W,D}$. Since $p_j$ is smooth, $p_j^*(F_\bullet\bar\mathcal{M})$ is a filtration of $\mathscr{M}_j$ over $F_\bullet\mathscr{D}_{\widetilde W_j,\widetilde D_j}$. By construction, we have
\[t_i\partial_{t_i}\cdot (1\otimes p_j^{-1}(m))=1\otimes p_j^{-1} (t_i\partial_{t_i}\cdot m)\]
for sections $m$ of $\bar\mathcal{M}$.
Therefore, $p^*_j(\gr^F_\bullet\bar\mathcal{M})$ is coherent over $\gr^F_\bullet\mathscr{D}_{\widetilde W_j,\widetilde D_j}$ and hence $p_j^*(F_\bullet\bar\mathcal{M})$ is a good filtration of $\mathscr{M}_j$ over $F_\bullet\mathscr{D}_{\widetilde W_j,\widetilde D_j}$. In particular, $\mathscr{M}_j$ is coherent over $\mathscr{D}_{\widetilde W_j,\widetilde D_j}$. For Part (3), one observes that $t_i\partial_{t_i}-\partial_{y_i}$ annihilates $1\otimes p^{-1}(m)$. Thus, $p^*(\gr^F_\bullet\bar\mathcal{M})$ is coherent over $\gr^F_\bullet\mathscr{D}_{\widetilde W/\mathbb{D}^r_t}$. In consequence, $\mathscr{M}$ is coherent over $\mathscr{D}_{\widetilde W/\mathbb{D}^r_t}$.
\end{proof}
\begin{theorem}\label{thm:relchrellog}
Let $\mathcal{M}$ be a regular holonomic $\mathscr{D}_W$-module and let $\bar\mathcal{M}$ be a $\mathscr{D}_{W,D}$-lattice of $\mathcal{M}$. Then we have:
\begin{enumerate}
\item If $r=1$, then $\mathscr{M}$ is a relative regular holonomic $\mathscr{D}_{\widetilde W/\mathbb{D}_t}$-module.
\item If $\mathscr{M}$ is flat over $\mathbb{D}^r_t$ for some $r>1$, then $\mathscr{M}$ is relative holonomic over $\mathbb{D}^r_t$.
\end{enumerate}
\end{theorem}
If $r\ge 2$, then $\mathscr{M}$ is not necessarily relative holonomic. See Example \ref{ex:notrelhollogres} in \S \ref{sec:gemMal}.
\subsection{Proof of Theorem \ref{thm:CClogC}}
Our first goal is to prove that the characteristic variety $\textup{Ch}_{\log} (\bar\mathcal{M})$ is the closure of $\textup{Ch}(\mathcal{M}|_U)$ in the log cotangent bundle $T_{\log}^* W$, which is equivalent to proving that $\textup{Ch}_{\log} (\bar\mathcal{M})$ has no irreducible components over $D$. The statement about characteristic cycles then follows immediately, since multiplicities are generically defined.
Our main tool is a technical result of Sabbah \cite{Sab2} (see also \cite{BMM}). Consider a smooth submersion $\varphi: \mathcal{X} \to \mathcal{S}$. Let $\mathcal{N}$ be a regular holonomic $\mathscr{D}_\mathcal{X}$-module and let $\mathcal{N}_{{\textup{rel}}}$ be a coherent $\mathscr{D}_{\mathcal{X}/\mathcal{S}}$-submodule of $\mathcal{N}$ that generates $\mathcal{N}$ as a $\mathscr{D}_\mathcal{X}$-module. Suppose $\textup{Ch}(\mathcal{N}) = \bigcup_{Y \in L} T_{Y}^* \mathcal{X}$, with $L$ a (locally finite) set of irreducible subvarieties of $\mathcal{X}$. Let $L_1 \subseteq L$ be the subset consisting of those $Y$ such that $\varphi(Y)$ contains a non-empty open subset of $\mathcal{S}$. Denote by $Y^{\textup{sm}}$ the smooth locus of $Y$. By generic smoothness, we may assume $Y^{\textup{sm}}$ is smooth over $\mathcal{S}$ (shrinking $Y^{\textup{sm}}$ if necessary). Then we obtain the relative conormal bundle of $Y^{\textup{sm}}\hookrightarrow\mathcal{X}$ over a certain open subset of $\mathcal{S}$. We denote by $T^*_{\varphi|_Y}(\mathcal{X}/\mathcal{S})$ the closure of this (generically defined) relative conormal bundle, calling it the \emph{relative conormal space} of $\varphi|_Y$. Notice that relative conormal spaces are not necessarily relative Lagrangian.
\begin{lemma}\label{lm:BMM}\cite[3.2.Th\'eor\`eme]{Sab2}, \cite[Lemme 2.2]{BMM}
With notations as above, suppose there is a non-constant holomorphic function $F: \mathcal{X} \to \C$ such that every $Y \in L \setminus L_1$ is contained in $F^{-1}(0)$, and that $\mathcal{N}$ has no $F$-torsion. Then we have
$$ \textup{Ch}_{{\textup{rel}}}(\mathcal{N}_{{\textup{rel}}}) = \bigcup_{Y \in L_1} T^*_{\varphi|_Y}(\mathcal{X}/\mathcal{S}), $$
where $T^*_{\varphi|_Y}(\mathcal{X}/\mathcal{S})$ is the relative conormal space of $\varphi|_Y$.
\end{lemma}
Since characteristic cycles are local, it is enough to assume $X=W$, a polydisc as in \S\ref{subsec:logtorel}.
To apply Lemma \ref{lm:BMM}, we let $\mathcal{N} = p^*(\mathcal{M}(*D))$, $\mathcal{N}_{{\textup{rel}}} = \mathscr{M}$, $\mathcal{X} = \widetilde W$, $\mathcal{S} = \mathbb{D}^r_t$, $\varphi = \pi_t$, and $F = t_1 \cdots t_r$. Suppose we have a decomposition
$$ \textup{Ch}(\mathcal{M}|_U) = \bigcup_{Y \in L_1} T^*_Y U $$
for a set of closed strata $L_1$ in $U$. Then we may define a set of closed strata $\widetilde L_1$ in $\widetilde W$, by sending $Y \in L_1$ to $\widetilde Y = \overline{p^{-1}Y} \subset \widetilde W$. Since $p$ is submersive, we have
$$ \textup{Ch}(p^*\mathcal{M}(*D))|_{\widetilde U} = \bigcup_{\widetilde Y \in \widetilde L_1} T^*_{\widetilde Y} \widetilde W|_{\widetilde U}. $$
\begin{coro}\label{cor:chrellogrel}
$$ \textup{Ch}_{{\textup{rel}}}(\mathscr{M}) = \bigcup_{\widetilde Y \in \widetilde L_1} T^*_{\pi_t|_{\widetilde Y}}(\widetilde W / \mathbb{D}^r_t ). $$
\end{coro}
\begin{proof}
By Lemma \ref{lm:BMM}, we only need to look at strata of $\textup{Ch}(p^*\mathcal{M}(*D))$ whose image under $\pi_t$ contains a non-empty open subset of $\mathbb{D}^r_t$; hence it suffices to consider the strata that intersect $\pi_t^{-1}( (\mathbb{D}^\star_t)^r) = \widetilde U$ (by the construction of $p$). These are labeled exactly by $\widetilde L_1$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:relchrellog}]
By Corollary \ref{cor:chrellogrel}, $\textup{Ch}_{\textup{rel}}(\mathscr{M})$ has no components over $t_1\cdots t_r=0$. For every point $\alpha\in \mathbb{D}^r_t$, we pick $r$ general hyperplanes passing through $\alpha$. By the flatness assumption, the relative holonomicity in Part (2) follows from inductively applying Proposition \ref{prop:spnotorsioncc}. Since $\mathscr{M}\subseteq \mathcal{N}=p^*(\mathcal{M}(*D))$, $\mathscr{M}$ is torsion-free over $\mathbb{D}^r_t$. When $r=1$, torsion-freeness implies flatness and hence the relative holonomicity in Part (1) follows. For the regularity in Part (1), one observes that
\[i_\alpha^*(\mathscr{M})\simeq p_\alpha^*(\mathcal{M}|_U) \]
is regular holonomic for $0\not=\alpha\in \mathbb{D}_t$,
where the morphism $p_\alpha$ is
$$p_\alpha\colon \mathbb{D}_x^k\times\{\alpha\}\times \mathbb{D}_y\to \mathbb{D}_x^k\times \mathbb{D}_t, \quad (x,\alpha,y)\mapsto (x,\alpha e^y).$$
When $\alpha=0$,
\[i_0^*(\mathscr{M})\simeq p_0^*(\mathcal{M}|_{(t=0)}).\]
But
\[[\mathcal{M}|_{(t=0)}]=[\Phi_{t=0}(\mathcal{M})]\]
in the Grothendieck group $K_0$ (by \cite[Proposition 1.1.2]{Gil}). Since regularity is well-defined for objects in $K_0$, $\mathcal{M}|_{(t=0)}$ is regular holonomic by Theorem \ref{thm:holnb}(1). The regularity in Part (2) follows similarly by induction.
\end{proof}
We use $T^*_{\log}$ and $T^*_{\textup{rel}}$ to denote the log and relative cotangent bundles, respectively.
\begin{prop}\label{prop:logrelsp}
We have:
\begin{enumerate}
\item $\textup{Ch}_{\log}(\mathscr{M}) = \iota(\textup{Ch}_{{\textup{rel}}}(\mathscr{M}))$ where
$$ T^*_{\log} \widetilde W = T^*\mathbb{D}_x^k \times T^*_{\log} \mathbb{D}^r_t \times T^* \mathbb{D}^r_y, \quad T^*_{{\textup{rel}}} \widetilde W = T^*\mathbb{D}_x^k \times \mathbb{D}^r_t \times T^* \mathbb{D}^r_y $$
$$ \iota: T^*_{{\textup{rel}}} \widetilde W \to T^*_{\log} \widetilde W, \quad (x,t,y, \xi_x,\xi_y) \mapsto (x,t,y, \xi_x, \widetilde \xi_t = \xi_y, \xi_y),$$
and $\widetilde \xi_t$ is the coefficient in front of $dt / t$.
\item $\tilde\iota(\textup{Ch}_{\log}(\bar\mathcal{M})) = \textup{Ch}_{\log}(\mathscr{M})|_{\{y=0\}}$, where $\tilde\iota\colon T^*_{\log} W\hookrightarrow T^*_{\log} \widetilde W$ is given by
\[(x,t,\xi_x,\widetilde \xi_t)\mapsto(x,t,\xi_x,\widetilde \xi_t, \xi_y=\widetilde\xi_t).\]
\end{enumerate}
\end{prop}
\begin{proof}
For Part (1), for any section $s$ of $p^{-1}(\bar\mathcal{M})$, the operator $t_i \partial_{t_i} - \partial_{y_i}$ annihilates $s$; hence on the level of the associated graded modules, $\widetilde \xi_{t_i} - \xi_{y_i}$ annihilates $\textup{gr}^F_\bullet(\mathscr{M})$.
Now we prove Part(2). By Lemma \ref{lm:pblogrel}, we have
$$\gr^F_\bullet(i_r^*(\mathscr{M}))\simeq i_r^*(\gr^F_\bullet(\mathscr{M}))\simeq p_{r-1}^*(\gr^F_\bullet\bar\mathcal{M}).$$
Meanwhile, since $\gr^F_\bullet(\mathscr{M})$ and $\mathscr{M}$ both have no $y_r$-torsion, we have
\[\widetilde\textup{supp}_{i^*_r(\gr^F_\bullet\mathscr{D}_{\widetilde W,\widetilde D})}(i_r^*(\gr^F_\bullet(\mathscr{M})))=\textup{Ch}_{\log}(\mathscr{M})|_{\{y_r=0\}}\subseteq T^*_{\log} \widetilde W|_{y_r=0}.\]
Since $\widetilde \xi_{t_r} - \xi_{y_r}$ annihilates $\textup{gr}^F_\bullet(\mathscr{M})$, we further have
\[\widetilde\textup{supp}_{i^*_r(\gr^F_\bullet\mathscr{D}_{\widetilde W,\widetilde D})}(i_r^*(\gr^F_\bullet(\mathscr{M})))=\widetilde\iota(\textup{Ch}_{\log}(\mathscr{M}_{r-1})).\]
Part (2) then follows by descending induction.
\end{proof}
Finally, we describe the closure in log cotangent bundle.
\begin{lemma}\label{lm:unpackagelogcl}
Let $Y \subset U$ be a closed stratum, and let $\widetilde Y = \overline{p^{-1}(Y)} \subset \widetilde W$. Then $\tilde\iota(\overline{T_Y^*U}^{\log}) = \iota(T^*_{\pi_t\mid \widetilde Y} (\widetilde W / \mathbb{D}_t^r))|_{\{y=0\}}$.
\end{lemma}
\begin{proof}
Let us unpack the definition of $\widetilde Y$. For each local function $f(x,t) \in \mathscr{O}_U$ that vanishes on $Y$, we consider $f(x, e^y t)$ on $\widetilde W$. The relative conormals with a general $t$ fixed, $T^*_{\pi_t\mid \widetilde Y} (\widetilde W / \mathbb{D}_t^r)$, are generated by the relative differentials
$$ d_{/\pi_t}(p^{-1}f)= (\partial_x f) dx + t (\partial_t f) e^y dy. $$
After we apply $\iota$, we get
$$ \iota(d_{/\pi_t}(p^{-1}f)) = (\partial_x f) dx + t (\partial_t f) e^y \frac{dt}{t} + t (\partial_t f) e^y dy. $$
Then we restrict to $\{y=0\}$, meaning that we set $y_i=0$ and forget $\xi_{y_i}$:
\[ \iota(d_{/\pi_t}(p^{-1}f))|_{\{ y=0\}} = (\partial_x f) dx + t (\partial_t f) \frac{dt}{t}. \]
Indeed, this recovers $df$ expressed in the basis sections of $T_{\log}^* W$, i.e. $dx$ and $dt/t$.
The above argument works generically. Taking closures, the proof is completed by the construction of relative conormal spaces.
\end{proof}
Now the proof of Theorem \ref{thm:CClogC} is accomplished by combining Proposition \ref{prop:logrelsp} and Lemma \ref{lm:unpackagelogcl}.
\section{Relative $\mathscr{D}$-modules and $V$-filtrations}\label{sec:vfilreld}
In this section, we discuss the Kashiwara-Malgrange filtrations for $\mathscr{D}$-modules in the general sense of Sabbah by using relative $\mathscr{D}$-modules.
For simplicity, we focus on the algebraic category unless stated otherwise, that is, all the underlying spaces and sheaves on them are algebraic in this section. See Remark \ref{rmk:gagavfil} for the analytic case.
\subsection{Kashiwara-Malgrange filtrations}\label{subsec:KMVsp}
\begin{definition}\label{def:KMalongY}
Suppose that $X$ is a smooth complex variety and $Y$ is a smooth subvariety of $X$ with its ideal sheaf denoted by $\mathscr{I}_Y$. Then the Kashiwara-Malgrange filtration on $\mathscr{D}_X$ is a $\mathbb{Z}$-indexed increasing filtration defined by
\[V^Y_k\mathscr{D}_X\coloneqq\{P\in \mathscr{D}_X | \quad P\mathscr{I}_Y^{j}\subseteq \mathscr{I}_Y^{j-k} \textup{ for every } j\in \mathbb{Z}\}\]
where $\mathscr{I}_Y^{j}=\mathscr{O}_X$ if $j\le 0$.\footnote{In the literature, some authors define the Kashiwara-Malgrange filtration on $\mathscr{D}_X$ as a decreasing filtration, that is, $V_Y^k\mathscr{D}_X\coloneqq V^Y_{-k}\mathscr{D}_X$.} In particular, if $Y=H$ is a smooth hypersurface, then $V^H_0\mathscr{D}_X=\mathscr{D}_{X,H}$, the sheaf of logarithmic differential operators along $H$.
We then define the associated Rees ring by
\[R^Y_V\mathscr{D}_X\coloneqq \bigoplus_{k\in \mathbb{Z}}V^Y_k\mathscr{D}_X\cdot u^k\subseteq \mathscr{D}_X[u,1/u],\]
where the independent variable $u$ is used to help remember the grading.
\end{definition}
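For instance, if $Y=H=(t=0)$ is a smooth hypersurface with local coordinates $(x_1,\dots,x_{n-1},t)$, then directly from the definition
\[t\in V^H_{-1}\mathscr{D}_X,\qquad t\partial_t,\ x_i,\ \partial_{x_i}\in V^H_0\mathscr{D}_X,\qquad \partial_t\in V^H_1\mathscr{D}_X,\]
and more generally one has locally $V^H_k\mathscr{D}_X=\sum_{j-i\le k}\mathscr{D}_{X,H}\cdot t^i\partial_t^j$.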
\begin{definition}\label{def:fltV}
Suppose that $\mathcal{M}$ is a (left) $\mathscr{D}_X$-module. A $\mathbb{Z}$-indexed increasing filtration $\Omega_\bullet\mathcal{M}$ is \emph{compatible} with $V_\bullet\mathscr{D}_X$ if
\[V^Y_k\mathscr{D}_X\cdot \Omega_j\mathcal{M}\subseteq \Omega_{k+j}\mathcal{M} \textup{ for all } k,j\in \mathbb{Z}.\]
A compatible filtration $\Omega_\bullet\mathcal{M}$ is a \emph{good} filtration over $V_\bullet\mathscr{D}_X$ if the associated Rees module
\[R_\Omega\mathcal{M}\coloneqq \bigoplus_{k\in \mathbb{Z}}\Omega_k\mathcal{M}\cdot u^k\subseteq \mathcal{M}[u,1/u]\]
is coherent over $R^Y_V\mathscr{D}_X$. A good filtration $V_\bullet\mathcal{M}$ is called the Kashiwara-Malgrange filtration on $\mathcal{M}$ if there exists a monic polynomial $b(s)\in\mathbb{C}[s]$ with its roots having real parts in $[0,1)$ so that
\[b(\sum_it_i\partial_{t_i}+k)\textup{ annihilates } \textup{gr}^{V^Y}_k\mathcal{M}\coloneqq V_k\mathcal{M}/V_{k-1}\mathcal{M} \textup{ for each } k\in\mathbb{Z}, \]
where $(t_1=t_2=\cdots=t_r=0)$ locally defines $Y$ and $\partial_{t_i}$ are the local vector fields along the smooth divisors $(t_i=0)$.
The monic polynomial $b(s)$ of the least degree is called the Bernstein-Sato polynomial or $b$-function of $\mathcal{M}$ along $Y$.
\end{definition}
One can check that the Kashiwara-Malgrange filtration on $\mathcal{M}$ is unique if it exists. It is obvious that the existence of the Kashiwara-Malgrange filtration on $\mathcal{M}$ guarantees that $\mathcal{M}$ is coherent over $\mathscr{D}_X$. A coherent $\mathscr{D}_X$-module $\mathcal{M}$ is called \emph{specializable} along $Y$ if the Kashiwara-Malgrange filtration exists along $Y$. Furthermore, it is called $R$-\emph{specializable} if the $b$-function $b(s)$ has roots in $R$, where $R$ is a subring of $\mathbb{C}$.
\begin{theorem}[Kashiwara]\label{thm:sphol}
If $\mathcal{M}$ is holonomic over $\mathscr{D}_X$, then it is specializable along every submanifold $Y\subseteq X$.
\end{theorem}
We will give a proof of the above fundamental theorem under more general settings (cf. Theorem \ref{thm:Lsphol}).
We now recall the definition of nearby cycles and vanishing cycles along smooth hypersurfaces.
\begin{definition}
Suppose that $H$ is a smooth hypersurface and it is defined by $(t=0)$ locally.
We assume that $\mathcal{M}$ is specializable along $H$ with $V_\bullet\mathcal{M}$ its Kashiwara-Malgrange filtration. Then the nearby cycle of $\mathcal{M}$ along $H$ is defined by
\[\Psi_H(\mathcal{M})\coloneqq \textup{gr}^V_0\mathcal{M}\]
and the vanishing cycle is
\[\Phi_H(\mathcal{M})\coloneqq \textup{gr}^V_{1}\mathcal{M}.\]
\end{definition}
Since the morphism
\[t:\textup{gr}^V_k\mathcal{M}\longrightarrow \textup{gr}^V_{k-1}\mathcal{M}\]
is an isomorphism of $\mathscr{D}_H$-modules for all $k\not=1$ (as the roots of $b(s)$ have real parts in $[0,1)$), we then have \[\Phi_H(\mathcal{M})\simeq \textup{gr}^V_k\mathcal{M}\]
for $k>0$ and
\[\Psi_H(\mathcal{M})\simeq \textup{gr}^V_k\mathcal{M}\]
for $k\le -1$.
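To illustrate, take $\mathcal{M}=\mathscr{O}_X$ and $H=(t=0)$. One checks that the Kashiwara-Malgrange filtration is $V_k\mathscr{O}_X=\mathscr{O}_X$ for $k\ge0$ and $V_k\mathscr{O}_X=\mathscr{I}_H^{-k}$ for $k<0$, with $b$-function $b(s)=s$: the operator $t\partial_t+k$ acts by zero on $\textup{gr}^V_k\mathscr{O}_X\simeq t^{-k}\mathscr{O}_X/t^{-k+1}\mathscr{O}_X$ for $k\le 0$. Hence
\[\Psi_H(\mathscr{O}_X)=\textup{gr}^V_0\mathscr{O}_X\simeq\mathscr{O}_H,\qquad \Phi_H(\mathscr{O}_X)=\textup{gr}^V_1\mathscr{O}_X=0,\]
matching the fact that the vanishing cycles of a smooth function vanish.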
For a perverse sheaf $K$ (with complex coefficients) on $X$, we use $\psi_t(K)$ and $\phi_t(K)$ to denote the nearby cycle and vanishing cycle of $K$ along $H=(t=0)$ respectively. Let us refer to \cite[\S 8.6]{KSbook} for their definitions.
The following theorem of Kashiwara is the Riemann-Hilbert correspondence of nearby and vanishing cycles.
\begin{theorem}\cite[Theorem 2]{KasV}\label{thm:holnb}
Suppose that $\mathcal{M}$ is regular holonomic and $V_\bullet\mathcal{M}$ is the Kashiwara-Malgrange filtration along a smooth hypersurface $H=(t=0)$. Then
\begin{enumerate}
\item $\textup{gr}^V_k\mathcal{M}$ is a regular holonomic $\mathscr{D}_H$-module for every $k\in \mathbb{Z}$,
\item $\textup{DR}_H(\Psi_H(\mathcal{M}))\simeq \psi_t(\textup{DR}_X\mathcal{M})$ and $\textup{DR}_H(\Phi_H(\mathcal{M}))\simeq \phi_t(\textup{DR}_X\mathcal{M}).$
\end{enumerate}
\end{theorem}
\subsection{Algebraic normal deformation}\label{subsec:normdef}
We now recall the normal deformation algebraically; see \cite[\S 4.1]{KSbook} for the topological construction.
Suppose that $Y\subseteq X$ is a smooth subvariety with the ideal sheaf $\mathscr{I}_Y$.
We algebraically define a space by
\[\widetilde X_Y\coloneqq \textup{Spec }[\bigoplus_{k\in\mathbb{Z}}\mathscr{I}^{-k}_Y\otimes u^k],\]
where $u$ is an independent variable giving a $\mathbb{C}^\star$-action on $\widetilde X_Y$. Then the natural inclusion $\mathbb{C}[u]\hookrightarrow \bigoplus_{k\in\mathbb{Z}}\mathscr{I}^{-k}_Y\otimes u^k$ gives rise to a smooth family
\[\varphi_Y\colon \widetilde X_Y\to \mathbb{C}\]
so that
\begin{enumerate}
\item $\varphi_Y^{-1}(u)\simeq X$ if $0\not=u\in\mathbb{C}$;
\item $\varphi_Y^{-1}(0)=\textup{Spec }[\bigoplus_{k\in\mathbb{Z}^{\ge0}}\mathscr{I}^{k}_Y/\mathscr{I}_Y^{k+1}]\eqqcolon T_YX$, the algebraic normal bundle of $Y\subseteq X$.
\end{enumerate}
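For instance, if $X=\mathbb{C}^n$ with coordinates $(x_1,\dots,x_{n-1},t)$ and $Y=(t=0)$, then the Rees algebra is generated over $\mathscr{O}_X[u]$ by $v\coloneqq t\otimes u^{-1}$, so that
\[\widetilde X_Y\simeq \mathbb{C}^{n-1}_x\times \mathbb{C}^2_{u,v},\qquad \varphi_Y=u,\]
with $t=uv$; the fibers over $u\neq0$ are copies of $X$, while the fiber over $u=0$ is the normal bundle $T_YX$ with fiber coordinate $v$. This is the classical deformation to the normal cone.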
By the construction of $T_YX$ and $V^Y_\bullet\mathscr{D}_X$, one easily observes
\begin{equation} \label{eq:idgrvd}
\textup{gr}_\bullet^{V^Y}\mathscr{D}_X\simeq \pi_*\mathscr{D}_{T_YX},
\end{equation}
where $\pi:T_YX\to X$ is the natural affine morphism (see \cite[\S II.10]{Bj} for the analytical case).
Under the identification \eqref{eq:idgrvd}, $\sum t_i\partial_{t_i}$ gives a global section of $\mathscr{D}_{T_YX}$ corresponding to the radial vector field of the bundle $T_YX$ (with respect to the natural $\mathbb{C}^\star$-action on $T_YX$), denoted by $\sum\overline{ t_i\partial}_{t_i}\in \textup{gr}^{V^Y}_\bullet\mathscr{D}_X$. In particular, $\sum\overline{ t_i\partial}_{t_i}$ is independent of choices of $t_1,\dots,t_r$.
Now we assume that $\mathcal{M}$ is a coherent $\mathscr{D}_X$-module with a good filtration $\Omega_\bullet\mathcal{M}$ over $V^Y_\bullet\mathscr{D}_X$.
Since $\pi$ is affine, we get a coherent $\mathscr{D}_{T_YX}$-module $\widetilde\textup{gr}^\Omega_\bullet\mathcal{M}$ after applying the $\sim$-functor. We then say that $\textup{gr}^\Omega_\bullet\mathcal{M}$ is holonomic (resp. regular holonomic) over $\textup{gr}^{V^Y}_\bullet\mathscr{D}_X$ if $\widetilde\textup{gr}^\Omega_\bullet\mathcal{M}$ is so over $\mathscr{D}_{T_YX}$.
The deformation $\varphi_Y$ induces a deformation from $T^*X$ to $T^*T_YX$ as follows. We consider the relative cotangent bundle of $\varphi_Y$:
$$T^*\varphi_Y: T^*(\widetilde X_Y/\mathbb{C})\to \mathbb{C}.$$
The fiber of $T^*\varphi_Y$ over $0$ is $T^*T_YX$ and $T^*(\widetilde X_Y/\mathbb{C})\setminus T^*T_YX\simeq T^*X\times \mathbb{C}^\star$.
The following lemma is essentially due to Sabbah (see \cite[Lemme 2.0.1]{Sab}). We rephrase it algebraically.
\begin{lemma}[Sabbah]\label{lm:Rvrel}
For a smooth subvariety $Y\subseteq X$, we have a natural isomorphism
\[R^Y_V\mathscr{D}_X\simeq \mathscr{D}_{\widetilde X_Y/\mathbb{C}}.\]
\end{lemma}
\begin{proof}
We pick (\'etale) local coordinates $(x_1,\dots, x_{n-r},t_1,\dots, t_r)$ so that $(t_1=t_2=\cdots=t_r=0)$ defines $Y$, where $n=\dim X$ and $r$ is the codimension of $Y\subseteq X$ (cf. \cite[\S A.5]{HTT}). Since $d(t^k_i\cdot u)=kt_i^{k-1}dt_i\cdot u$ (taking differentials over $\mathbb{C}[u]$), we have a local decomposition of the relative cotangent sheaf
\[\Omega^1_{\widetilde X_Y/\mathbb{C}}=\bigoplus_{k\in \mathbb{Z}}\mathscr{I}_Y^{-k}(\bigoplus_{i=1}^r dt_i\otimes u^{k-1}\bigoplus_{j=1}^{n-r}dx_j\otimes u^{k})\]
and hence
\begin{equation} \label{eq:decompdefXY}
\mathscr{T}_{\widetilde X_Y/\mathbb{C}}=\bigoplus_{k\in \mathbb{Z}}\mathscr{I}_Y^{-k}(\bigoplus_{i=1}^r \partial_{t_i}\otimes u^{k+1}\bigoplus_{j=1}^{n-r}\partial_{x_j}\otimes u^{k}).
\end{equation}
Since $\mathscr{T}_{\widetilde X_Y/\mathbb{C}}$ and $\mathscr{O}_{\widetilde X_Y}$ generate $\mathscr{D}_{\widetilde X_Y/\mathbb{C}}$, the required isomorphism then follows.
\end{proof}
By Lemma \ref{lm:Rvrel}, we immediately have:
\begin{prop}\label{prop:grrelsp}
\[T^*(\widetilde X_Y/\mathbb{C})=\textup{Spec } [\textup{gr}^F_\bullet R^Y_V\mathscr{D}_X],\]
where $F_\bullet(R^Y_V\mathscr{D}_X)$ is the order filtration for relative differential operators induced from the order filtration on $\mathscr{D}_X$.
\end{prop}
\subsection{Side-change for Rees modules}\label{subsect:scrm}
We discuss side-changes for $R^Y_V\mathscr{D}_X$-modules.
The proof of Lemma \ref{lm:Rvrel} implies that
the relative canonical sheaf of $\varphi_Y$ is
\[\omega_{\widetilde X_Y/\mathbb{C}}=\bigoplus_{k\in \mathbb{Z}}\omega_X\otimes_{\mathscr{O}_X}\mathscr{I}_Y^{-k}\otimes u^{k-r},\]
where we use $\omega_-$ to denote the canonical sheaf of the smooth variety $-$. We then immediately have the side-change operators
\[\omega_{\widetilde X_Y/\mathbb{C}}\otimes_\mathscr{O}(\bullet)\colon\textup{Mod}^l(R^Y_V\mathscr{D}_X)\longrightarrow \textup{Mod}^r(R^Y_V\mathscr{D}_X)\]
and
\[\omega^{-1}_{\widetilde X_Y/\mathbb{C}}\otimes_\mathscr{O}(\bullet)\colon\textup{Mod}^r(R^Y_V\mathscr{D}_X)\longrightarrow \textup{Mod}^l(R^Y_V\mathscr{D}_X),\]
where $\textup{Mod}^l(R^Y_V\mathscr{D}_X)$ (resp. $\textup{Mod}^r(R^Y_V\mathscr{D}_X)$) is the abelian category of left (resp. right) $R_V\mathscr{D}_X$-modules. Similar to the absolute case, the side-change operators give an equivalence between $\textup{Mod}^l(R^Y_V\mathscr{D}_X)$ and $\textup{Mod}^r(R^Y_V\mathscr{D}_X)$.
Since $\omega_{\widetilde X_Y/\mathbb{C}}\otimes_{\mathbb{C}[u]} \mathbb{C}_0\simeq \omega_{T_YX}$ and $R^Y_V\mathscr{D}_X\otimes_{\mathbb{C}[u]} \mathbb{C}_0\simeq \textup{gr}_\bullet^{V^Y}\mathscr{D}_X$ ($\mathbb{C}_0$ is the residue field of $0\in \textup{Spec }\mathbb{C}[u]$), we immediately have the following commutative diagram
\begin{equation} \label{eq:sideccmd}
\begin{tikzcd}
\textup{Mod}^l(R^Y_V\mathscr{D}_X) \arrow[r, "\omega_{\tilde X_Y/\mathbb{C}}\otimes\bullet"]\arrow[d, "\bullet\otimes_{\mathbb{C}[u]}\mathbb{C}_0"] & \textup{Mod}^r(R^Y_V\mathscr{D}_X)\arrow[l, "\omega_{\tilde X_Y/\mathbb{C}}^{-1}\otimes\bullet"]\arrow[d, "\bullet\otimes_{\mathbb{C}[u]}\mathbb{C}_0"] \\
\textup{Mod}^l(\textup{gr}^{V^Y}_\bullet\mathscr{D}_X) \arrow[r, "\omega_{T_YX}\otimes\bullet"] & \textup{Mod}^r(\textup{gr}^{V^Y}_\bullet\mathscr{D}_X)\arrow[l, "\omega_{T_YX}^{-1}\otimes\bullet"]
\end{tikzcd}
\end{equation}
where
$\textup{Mod}^l(\textup{gr}^{V^Y}_\bullet\mathscr{D}_X)$ (resp. $\textup{Mod}^r(\textup{gr}^{V^Y}_\bullet\mathscr{D}_X)$) is the abelian category of graded left (resp. right) $\textup{gr}^{V^Y}_\bullet\mathscr{D}_X$-modules.
Furthermore, since $R_V\mathscr{D}_X\otimes_{\mathbb{C}[u]}\mathbb{C}_1 \simeq \mathscr{D}_X$ where $\mathbb{C}_1$ is the residue field of $1\in \textup{Spec }\mathbb{C}[u]$, we also have the following commutative diagram
\begin{equation} \label{eq:sideccmd2}
\begin{tikzcd}
\textup{Mod}^l(R_V\mathscr{D}_X) \arrow[r, "\omega_{\tilde X_Y/\mathbb{C}}\otimes\bullet"]\arrow[d, "\bullet\otimes_{\mathbb{C}[u]}\mathbb{C}_1"] & \textup{Mod}^r(R_V\mathscr{D}_X)\arrow[l, "\omega_{\tilde X_Y/\mathbb{C}}^{-1}\otimes\bullet"]\arrow[d, "\bullet\otimes_{\mathbb{C}[u]}\mathbb{C}_1"] \\
\textup{Mod}^l(\mathscr{D}_X) \arrow[r, "\omega_{X}\otimes\bullet"] & \textup{Mod}^r(\mathscr{D}_X).\arrow[l, "\omega_{X}^{-1}\otimes\bullet"]
\end{tikzcd}
\end{equation}
\subsection{Characteristic varieties of nearby cycles}
In this subsection, we calculate the characteristic cycles of nearby cycles.
Suppose that $\mathcal{M}$ is specializable along a smooth hypersurface $H\subseteq X$. Then $\textup{gr}_k^V\mathcal{M}$ is both a coherent $\mathscr{D}_{X,H}$-module and a coherent $\mathscr{D}_H$-module for every $k\in\mathbb{Z}$.
\begin{lemma}\label{lm:<0lattice}
If $\mathcal{M}$ is holonomic, then the Kashiwara-Malgrange filtrations satisfy
\[V_k\mathcal{M}=V_{k}(\mathcal{M}(*H))\]
for every $k<0$, where $\mathcal{M}(*H)$ is the algebraic localization of $\mathcal{M}$ along $H$. In particular, $V_k\mathcal{M}$ is a $\mathscr{D}_{X,H}$-lattice of $\mathcal{M}(*H)$ for every $k<0$.
\end{lemma}
\begin{proof}
We consider the exact sequence
\[0\to \mathcal{T}\rightarrow \mathcal{M}\rightarrow \mathcal{M}(*H)\rightarrow \mathcal{Q}\to 0\]
where $\mathcal{T}$ is the torsion subsheaf of $\mathcal{M}$ supported on $H$, namely $\mathcal{T}=\mathcal{H}^0_H(\mathcal{M})$, and $\mathcal{Q}$ is the quotient module, namely $\mathcal{Q}=\mathcal{H}^1_H(\mathcal{M})$.
Since $\mathcal{T}$ and $\mathcal{Q}$ are supported on $H$, by Kashiwara's equivalence (cf. \cite[Theorem 1.6.1]{HTT}) $V_k\mathcal{T}$ and $V_k\mathcal{Q}$ are zero for all $k<0$. The exact sequence induces another exact sequence
\[0\to V_k\mathcal{T}\rightarrow V_k\mathcal{M}\rightarrow V_k\mathcal{M}(*H)\rightarrow V_k\mathcal{Q}\to 0\]
for each $k$. We thus have obtained the required statement.
\end{proof}
The following theorem is equivalent to \cite[Theorem 5.5]{Gil}, where in \emph{loc. cit.} the nearby cycle is alternatively constructed following the algebraic approach of Beilinson and Bernstein. We give it a proof by applying Theorem \ref{thm:CClogC}.
\begin{theorem}\label{thm:ccnearby}
Suppose that $\mathcal{M}$ is a regular holonomic $\mathscr{D}_X$-module and that $H\subseteq X$ is a smooth hypersurface. Then
\[\overline{\textup{CC}(\mathcal{M}|_{U})}|_H\subseteq T^*(X,H)\]
is a Lagrangian cycle in $T^*H\subseteq T^*(X,H)|_H$ with $U=X\setminus H$. Furthermore, the nearby cycle $\Psi_H(\mathcal{M})$ has the characteristic cycle
\[\textup{CC}(\Psi_H(\mathcal{M}))=\overline{\textup{CC}(\mathcal{M}|_U)}|_H\subseteq T^*H.\]
\end{theorem}
\begin{proof}
Since characteristic cycles are local, it is enough to assume $H=(t=0)$ for some local regular (or holomorphic) function $t$.
Since $\Psi_H(\mathcal{M})\simeq\textup{gr}^V_{k}\mathcal{M}$ for $k<0$, we can focus on the short exact sequence of $V_0\mathscr{D}_X=\mathscr{D}_{X,H}$-modules
\[0\to V_{-1}\mathcal{M}\xrightarrow{\cdot t}V_{-1}\mathcal{M}\rightarrow\textup{gr}^V_{-1}\mathcal{M}\to 0.\]
By Lemma \ref{lm:<0lattice}, $V_{-1}\mathcal{M}$ is a $\mathscr{D}_{X,H}$-lattice of $\mathcal{M}$. Then, by Theorem \ref{thm:CClogC}, we have
\[\textup{CC}_{\log}(V_{-1}\mathcal{M})=\overline{\textup{CC}(\mathcal{M}|_U)}.\]
Similar to the proof of Proposition \ref{prop:spnotorsioncc}, considering the above short exact sequence, we conclude that
\[\textup{CC}_{\log}(\textup{gr}^V_{-1}\mathcal{M})=\overline{\textup{CC}(\mathcal{M}|_U)}|_H\subseteq T^*(X,H)\]
and
\[\dim(\overline{\textup{CC}(\mathcal{M}|_U)}|_H)=\dim X-1.\]
Now we pick a good filtration $F_\bullet(\textup{gr}^V_{-1}\mathcal{M})$ over $F_\bullet\mathscr{D}_H$. Furthermore, we have a closed embedding
\[T^*H\hookrightarrow T^*(X,H)\]
defined by $\widetilde\xi_t=0$, where $\widetilde\xi_t$ is the symbol of $t\partial_t$ in $\textup{gr}^F_\bullet\mathscr{D}_{X,H}$. Thus, $F_\bullet(\textup{gr}^V_{-1}\mathcal{M})$ is also good over $F_\bullet\mathscr{D}_{X,H}$. Since characteristic cycles are independent of good filtrations, we therefore have
\[\textup{CC}(\Psi_H(\mathcal{M}))=\textup{CC}_{\log}(\textup{gr}^V_{-1}\mathcal{M})=\overline{\textup{CC}(\mathcal{M}|_U)}|_H\subseteq T^*H.\]
\end{proof}
\subsection{Generalized Kashiwara-Malgrange filtrations}
We discuss refinements of the Kashiwara-Malgrange filtration by using Sabbah's multi-filtrations.
Suppose that $X$ is a smooth complex variety and $Y\subseteq X$ a smooth subvariety of codimension $r$ such that
\[Y=\bigcap_{j=1}^r H_j,\]
where
$H_1, H_2,\dots,H_r$ are smooth hypersurfaces intersecting transversally (that is, the divisor $D=\sum_j H_j$ has simple normal crossings).
We then call $Y$ a \emph{smooth complete intersection} of $H_1,\dots,H_r$.
We use ${V}^{H_j}_\bullet \mathscr{D}_X$ to denote the Kashiwara-Malgrange filtration of $\mathscr{D}_X$ along $H_j$ for $j=1,\dots,r$. For ${\bf s}=(s_1,s_2,\dots, s_r) \in \mathbb{Z}^r$, we set
\[V_{\bf s}\mathscr{D}_X=\bigcap_{j=1}^r { }{V}^{H_j}_{s_j} \mathscr{D}_X.\]
As the index ${\bf s}$ varies in $\mathbb{Z}^r$, we get an increasing $\mathbb{Z}^r$-filtration of $\mathscr{D}_X$ with respect to the natural partial order on $\mathbb{Z}^r$, denoted by $V_\bullet\mathscr{D}_X$. One can easily check
$$V_{\mathbf 0}\mathscr{D}_X=\mathscr{D}_{X,D},$$ where the latter is the sheaf of rings of log differential operators.
We write the associated Rees ring by
\[R_V\mathscr{D}_X:= \bigoplus_{{\bf s}\in\mathbb{Z}^r}V_{\bf s}\mathscr{D}_X\cdot \prod_{j=1}^ru_j^{s_j},\]
where the product $\prod_{j=1}^ru_j^{s_j}$ is used to help us remember the multi-grading of $R_V\mathscr{D}_X$.
For a coherent $\mathscr{D}_X$-module $\mathcal{M}$, similar to Definition \ref{def:fltV}, we say that a $\mathbb{Z}^r$-filtration $U_\bullet\mathcal{M}$ is compatible with $V_\bullet\mathscr{D}_X$ if
\[V_{\bf s}\mathscr{D}_X\cdot U_{\bf k}\mathcal{M}\subseteq U_{{\bf k}+{\bf s}}\mathcal{M}\]
for all ${\bf k}, {\bf s}\in \mathbb{Z}^r$. Such a filtration $U_\bullet\mathcal{M}$ is called good over $V_\bullet\mathscr{D}_X$ if its associated Rees module $R_U\mathcal{M}$ is coherent over $R_V\mathscr{D}_X$.
\subsection{Refinement of normal deformation}\label{subsec:refnormde}
We keep the notations as in the previous subsection. Suppose that $Y\subseteq X$ is a smooth complete intersection of $H_1,\dots,H_r$. We denote by $\mathscr{I}_{H_j}$ the ideal sheaf of $H_j$ for $j=1,2,\dots,r$.
Define
$$ \widetilde X \coloneqq \textup{Spec }(\bigoplus_{k_1, \cdots, k_r \in \mathbb{Z}} \bigotimes_j {\mathscr{I}_{H_j}^{-k_j} \otimes u_j^{k_j} } ).$$
Then the natural inclusion $$\mathbb{C}[u_1,\dots,u_r]\hookrightarrow \bigoplus_{k_1, \cdots, k_r \in \mathbb{Z}} \bigotimes_j {\mathscr{I}_{H_j}^{-k_j} \otimes u_j^{k_j} } $$
gives rise to a smooth family
\[\varphi\colon \widetilde X\to \mathbb{C}^r\]
so that
\begin{enumerate}
\item $\varphi^{-1}(u_1,\dots,u_r)\simeq X$ if $(u_1,\dots,u_r)\in(\mathbb{C}^\star)^r$;
\item $\varphi^{-1}(\mathbf 0)= T_YX$, the algebraic normal bundle of $Y\subseteq X$.
\end{enumerate}
The $\mathbb{Z}^r$-grading of $\bigoplus_{k_1, \cdots, k_r \in \mathbb{Z}} \bigotimes_j {\mathscr{I}_{H_j}^{-k_j} \otimes u_j^{k_j} } $ induces $(\C^\star)^r$-actions on both $\widetilde X$ and $T_YX$. Since $Y$ is a complete intersection, one sees immediately that
$$T_Y X = T_{H_1} X \times_X \cdots \times_X T_{H_r} X$$
and hence $T_Y X \to Y$ is a split rank $r$ vector bundle. Moreover, the induced $(\C^\star)^r$-action on $T_YX$ is given by rescaling the fibers.
Similar to Lemma \ref{lm:Rvrel}, we obtain:
\begin{lemma}\label{lm:Rvrelm}
We have a natural isomorphism
\[R_V\mathscr{D}_X\simeq \mathscr{D}_{\widetilde X/\mathbb{C}^r}.\]
\end{lemma}
From the above lemma, we immediately conclude that $R_V\mathscr{D}_X$ is a coherent and noetherian sheaf of rings.
Similar to Proposition \ref{prop:grrelsp}, we have:
\begin{prop}\label{prop:grrelspT}
\[T^*(\widetilde X/\mathbb{C}^r)=\textup{Spec } [\textup{gr}^F_\bullet R_V\mathscr{D}_X],\]
where $F_\bullet(R_V\mathscr{D}_X)$ is the order filtration for relative differential operators induced from the order filtration on $\mathscr{D}_X$.
\end{prop}
\begin{remark}\label{rmk:gagavfil}
In the case that $X$ is a complex manifold and $Y$ is an analytic smooth complete intersection, one can construct the complex manifold $\widetilde X$ similar to the topological construction in \cite[\S 4.1]{KSbook} or by using open blowups as in \cite[\S 2.1]{Sab}. Then $\mathscr{D}_{\widetilde X/\mathbb{C}^r}$ is a faithfully flat ring extension of $R_V\mathscr{D}_X$ by GAGA, or more precisely
\[\mathscr{D}_{\widetilde X/\mathbb{C}^r}=\mathscr{O}_{\widetilde X}\otimes_{R_V\mathscr{O}_X}R_V\mathscr{D}_X,\]
where
$$R_V\mathscr{O}_X=\bigoplus_{k_1, \cdots, k_r \in \mathbb{Z}} \bigotimes_j {\mathscr{I}_{H_j}^{-k_j} \otimes u_j^{k_j}}.$$
As a consequence, all the results in this section can be extended to the analytic case.
\end{remark}
\subsection{Specializability along arbitrary slopes}\label{subsect:spalongL}
Let $L=(l_1,\dots,l_r)$ be a nonzero primitive covector in $(\mathbb{Z}^r_{\ge0})^\vee$. We also use $L$ to denote the ray it generates. We call such an $L$ a slope for the smooth complete intersection $Y\subseteq X$, and we say that $L$ is non-degenerate if every $l_j$ is nonzero, and degenerate otherwise. We set
$$Y_L\coloneqq \bigcap_{l_j\not=0} H_j.$$
By definition, if $L$ is non-degenerate, then $Y_L=Y$.
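To illustrate, take $r=2$: then
\[Y_{(1,0)}=H_1,\qquad Y_{(0,1)}=H_2,\qquad Y_{(l_1,l_2)}=H_1\cap H_2=Y \textup{ whenever } l_1l_2\neq 0,\]
so the two degenerate rays ${\bf e}_1$ and ${\bf e}_2$ recover the individual hypersurfaces, while every non-degenerate slope recovers $Y$ itself.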
Given a nondegenerate slope $L$, we have a toric embedding
$$\iota\colon\C \hookrightarrow \C^r, \textup{ by } u \mapsto (u^{l_1}, \cdots, u^{l_r}).$$
We can pull back $\widetilde X$ to get a smooth family $\varphi^L$ in the following Cartesian diagram:
\[
\begin{tikzcd}
\widetilde X^L \arrow[r,hook,"\iota_L"]\arrow[d,"\varphi^L"]& \widetilde X\arrow[d,"\varphi"] \\
\mathbb{C}\arrow[r,hook] &\mathbb{C}^r
\end{tikzcd}
\]
This can be constructed directly, as
$$ \widetilde X^L = \textup{Spec }(\bigoplus_{k_1, \cdots, k_r \in \mathbb{Z}} \bigotimes_j {\mathscr{I}_{H_j}^{-k_j} \otimes (u^{l_j})^{k_j} } )$$
and the fiber over $u=0$ is
$$ (\varphi^L)^{-1}(0)=\widetilde X^L|_{u=0} = \textup{Spec }[ (\bigoplus_{k_1, \cdots, k_r \in \mathbb{Z}} \bigotimes_j {\mathscr{I}_{H_j}^{-k_j} \otimes (u^{l_j})^{k_j} } ) \otimes_{\C[u]} \C[u]/(u) ]\simeq T_{Y}X.$$
In other words, $\widetilde X^L$ gives a normal deformation along the slope direction $L$. The isomorphism $\widetilde X^L|_{u=0}\simeq T_{Y}X$ induces a $\C^\star$-action on $T_{Y}X$:
\begin{equation} \label{eq:Lindaction}
\lambda\cdot (y_1,\dots, y_{n-r},\xi_1,\dots,\xi_{r})=(y_1,\dots, y_{n-r},\lambda^{l_1}\cdot\xi_1,\dots,\lambda^{l_{r}}\cdot\xi_{r})
\end{equation}
for $\lambda\in\C^\star=\C\setminus\{0\} $ and $(y_1,\dots, y_{n-r},\xi_1,\dots,\xi_{r})\in T_{Y}X$.
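For instance, when $r=2$ and $L=(1,2)$, the embedding is $\iota(u)=(u,u^2)$ and the action \eqref{eq:Lindaction} becomes
\[\lambda\cdot (y_1,\dots, y_{n-2},\xi_1,\xi_2)=(y_1,\dots, y_{n-2},\lambda\cdot\xi_1,\lambda^{2}\cdot\xi_2),\]
so the two normal directions of $Y$ are rescaled with different weights.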
The construction of $\widetilde X^L$ induces the following Cartesian diagram of relative cotangent bundles:
\begin{equation}
\begin{tikzcd}
T^*(\widetilde X^L/\C)\arrow[r,hook]\arrow[d,"T^*\varphi^L"]& T^*(\widetilde X/\mathbb{C}^r)\arrow[d,"T^*\varphi"]\\
\mathbb{C}\arrow[r,hook,"\iota"]&\mathbb{C}^r.
\end{tikzcd}
\end{equation}
By Lemma \ref{lm:Rvrelm}, we see that $R_V\mathscr{D}_X$ is flat over $\mathbb{C}[u_1,u_2,\dots,u_r]$. We then set:
\[^LR_V\mathscr{D}_X\coloneqq \iota_L^*(R_V\mathscr{D}_X)=\mathscr{D}_{\widetilde X^L/\mathbb{C}}.\]
In particular, $^LR_V\mathscr{D}_X$ is coherent and noetherian. If $L$ is degenerate, then one can replace $Y$ by $Y_L$ to reduce to the non-degenerate case.
\begin{remark}\label{rmk:gndlw}
Similar to $\omega_{\widetilde X_{Y}/\C}$ in \S\ref{subsect:scrm}, one can get the explicit formula for the relative canonical sheaves:
$$\omega_{\widetilde X/\C^r}=\bigoplus_{k_1, \cdots, k_r \in \mathbb{Z}} \omega_X\otimes_{\mathscr{O}_X}(\bigotimes_j {\mathscr{I}_{H_j}^{-k_j} \otimes u_j^{k_j-1} } )\textup{ and }\omega_{\widetilde X^L_{Y}/\C}=\iota_L^*(\omega_{\widetilde X/\C^r}).$$
\end{remark}
By construction, we have the explicit formula for $^LR_V\mathscr{D}_X$:
\[^LR_V\mathscr{D}_X= \bigoplus_{k\in\mathbb{Z}} { }^LV_k\mathscr{D}_X\cdot u^{k}\]
where by definition
\[{ }^LV_k\mathscr{D}_X\coloneqq \sum_{{\bf s}\in \mathbb{Z}^r, L\cdot{\bf s}=k} V_{\bf s}\mathscr{D}_X.\]
The graded ring $^LR_V\mathscr{D}_X$ then induces an increasing $\mathbb{Z}$-filtration $^LV_\bullet\mathscr{D}_X$ on $\mathscr{D}_X$. We might call $^LV_\bullet\mathscr{D}_X$ the Kashiwara-Malgrange filtration of $\mathscr{D}_X$ along the slope $L$. Since $\varphi^L$ is smooth and $\widetilde X^L|_{u=0}\simeq T_{Y_L}X$, we have that
\begin{equation} \label{eq:grLVD}
\textup{gr}^{^LV}_\bullet\mathscr{D}_X\simeq \pi_* \mathscr{D}_{T_{Y_L}X}
\end{equation}
where $\pi: T_{Y_L}X\to Y_L\hookrightarrow X$ is the composition (which is an affine morphism). The $\mathbb{Z}$-grading of $\textup{gr}^{^LV}_\bullet\mathscr{D}_X$ corresponds to the $\mathbb{C}^\star$-action in \eqref{eq:Lindaction}.
The $\mathbb{C}^\star$-action in \eqref{eq:Lindaction} induces a radial vector field on $T_{Y_L}X$, denoted by $v_L$. We assume that locally each $H_j$ is defined by $t_j=0$, where the $t_j$ are part of a local coordinate system $(x_1,\dots,x_{n-r},t_1,\dots,t_r)$.
Then locally
\[v_L=L\cdot (t_1\partial_{t_1},t_2\partial_{t_2},\dots,t_r\partial_{t_r}).\]
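For example, $L=(1,1,\dots,1)$ gives the usual Euler vector field $v_L=\sum_{j=1}^r t_j\partial_{t_j}$ along $Y$, while $L={\bf e}_j$ gives $v_L=t_j\partial_{t_j}$, the Euler vector field along the single hypersurface $H_j$.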
\begin{definition}\label{def:fltLV}
Suppose that $Y\subseteq X$ is a smooth complete intersection of $H_1,\dots,H_r$, $\mathcal{M}$ is a $($left$)$ $\mathscr{D}_X$-module and $L$ is a slope. A $\mathbb{Z}$-indexed increasing filtration $\Omega_\bullet\mathcal{M}$ is \emph{compatible} with $^LV_\bullet\mathscr{D}_X$ if
\[^LV_k\mathscr{D}_X\cdot \Omega_j\mathcal{M}\subseteq \Omega_{k+j}\mathcal{M} \textup{ for all } k,j\in \mathbb{Z}.\]
A compatible filtration $\Omega_\bullet\mathcal{M}$ is a \emph{good} filtration over $^LV_\bullet\mathscr{D}_X$ if the associated Rees module
\[^LR_\Omega\mathcal{M}\coloneqq \bigoplus_{k\in \mathbb{Z}}\Omega_k\mathcal{M}\cdot u^k\subseteq \mathcal{M}[u,1/u]\]
is coherent over $^LR_V\mathscr{D}_X$. A good filtration $^LV_\bullet\mathcal{M}$ is called the Kashiwara-Malgrange filtration on $\mathcal{M}$ along the slope $L$ if there exists a monic polynomial $b(s)\in\mathbb{C}[s]$ with its roots having real parts in $[0,1)$ so that
\[b(v_L+k)\textup{ annihilates } \textup{gr}^{^LV}_k\mathcal{M}\coloneqq {}^LV_k\mathcal{M}/{}^LV_{k-1}\mathcal{M} \textup{ for each } k\in\mathbb{Z}. \]
The monic polynomial $b(s)$ of the least degree is called the Bernstein-Sato polynomial or $b$-function of $\mathcal{M}$ along $L$.
\end{definition}
Kashiwara-Malgrange filtrations along $L$ are unique if they exist. For an arbitrary slope $L$, a coherent $\mathscr{D}_X$-module $\mathcal{M}$ is called $L$-\emph{specializable} if the Kashiwara-Malgrange filtration of $\mathcal{M}$ along $L$ exists. When $L=(1,1,\dots,1)$, Definition \ref{def:fltLV} coincides with Definition \ref{def:fltV}. When $L={\bf e}_j$, the $j$-th unit vector in $\mathbb{Z}^r_{\ge0}$, it coincides with Definition \ref{def:fltV} for the case when $Y=H_j$.
In particular, specializability along $L$ is compatible with the specializability defined in \S\ref{subsec:KMVsp}. We can then similarly define $R$-specializability along $L$ for $R$ a subring of $\mathbb{C}$.
Suppose that $U_\bullet\mathcal{M}$ is a good filtration of $\mathcal{M}$ over $V_\bullet\mathscr{D}_X$. Then for a slope $L$ we can obtain a compatible filtration $^LU_\bullet\mathcal{M}$ over $^LV_\bullet\mathscr{D}_X$ defined by
\begin{equation} \label{eq:filsl}
^LU_k\mathcal{M}\coloneqq \sum_{{\bf s}\in \mathbb{Z}^r, L\cdot{\bf s}=k} U_{\bf s}\mathcal{M}.
\end{equation}
We denote the associated Rees module by
\[
^LR_U\mathcal{M}\coloneqq\bigoplus_{k\in\mathbb{Z}}^LU_k\mathcal{M}\cdot u^k.
\]
Since $^LR_U\mathcal{M}\simeq \iota_L^*(R_U\mathcal{M})/\mathcal{T}_u$, where $\mathcal{T}_u$ is the $u$-torsion subsheaf, and the pullback functor for relative $\mathscr{D}$-modules preserves coherence, we conclude that $^LU_\bullet\mathcal{M}$ is good over $^LV_\bullet\mathscr{D}_X$.
The following result is a natural generalization of Theorem \ref{thm:sphol}, which was first observed by Sabbah \cite[\S3.1]{Sab}. We provide an alternative proof, with the idea essentially due to Bj\"ork.
\begin{theorem}\label{thm:Lsphol}
Suppose that $\mathcal{M}$ is a holonomic $\mathscr{D}_X$-module and $L$ is a slope. Then $\mathcal{M}$ is specializable along $L$. Moreover, if $\Omega_\bullet\mathcal{M}$ is a good filtration over ${}^L V_\bullet\mathscr{D}_X$, then $\textup{gr}^{\Omega}_\bullet\mathcal{M}$ is holonomic over $\textup{gr}^{^LV}_\bullet\mathscr{D}_X$.
\end{theorem}
\begin{proof}
We take a good filtration $\Omega_\bullet\mathcal{M}$ over $^LV_\bullet\mathscr{D}_X$; such a filtration exists at least locally by coherence. Then we apply \cite[Appendix IV. Theorem 4.10]{Bj} and conclude that $j_{\textup{gr}^{^LV}_\bullet\mathscr{D}_X}(\textup{gr}^{\Omega}_\bullet\mathcal{M})=j(\mathcal{M}),$
where the left-hand side is the grade number of $\textup{gr}^{\Omega}_\bullet\mathcal{M}$ over $\textup{gr}^{^LV}_\bullet\mathscr{D}_X$.
Since $\mathcal{M}$ is holonomic, by \cite[Appendix IV. Proposition 3.5(2)]{Bj} $j(\mathcal{M})=n$, the dimension of $X$. Hence $j_{\textup{gr}^{^LV}_\bullet\mathscr{D}_X}(\textup{gr}^{\Omega}_\bullet\mathcal{M})=n$, and so $\textup{gr}^{\Omega}_\bullet\mathcal{M}$ is holonomic over $\textup{gr}^{^LV}_\bullet\mathscr{D}_X\simeq\pi_*\mathscr{D}_{T_{Y_L}X}$ (since $\dim T_{Y_L}X=\dim X=n$).
Now we consider the operator
\[\theta_L\coloneqq \bigoplus_{k\in\mathbb{Z}}(v_L+k)\]
on $\textup{gr}^{\Omega}_\bullet\mathcal{M}$. By construction, one can easily check
\[\theta_L\in \textup{End}_{\textup{gr}^{^LV}_\bullet\mathscr{D}_X}(\textup{gr}^{\Omega}_\bullet\mathcal{M}).\]
Since $\textup{gr}^{\Omega}_\bullet\mathcal{M}$ is holonomic over $\textup{gr}^{^LV}_\bullet\mathscr{D}_X$, we conclude that $\theta_L$ admits a minimal polynomial $b(s)\in\mathbb{C}[s]$. The real parts of the roots of $b(s)$ might not be contained in $[0,1)$; that is, the good filtration $\Omega_\bullet\mathcal{M}$ need not be the Kashiwara-Malgrange filtration along $L$. We then apply the procedure in the proof of \cite[Theorem 1(1)]{KasV} to adjust the roots of $b(s)$ and the filtration $\Omega_\bullet\mathcal{M}$. The output of the procedure gives us the Kashiwara-Malgrange filtration $^LV_\bullet\mathcal{M}$. By uniqueness, the local construction glues to a global $^LV_\bullet\mathcal{M}$. Therefore, $\mathcal{M}$ is specializable along $L$.
\end{proof}
\subsection{Micro nearby cycles along arbitrary slopes}\label{subsec:mncsl}
We keep the notation of the previous subsection and continue to assume that $Y\subseteq X$ is a smooth complete intersection and that $\mathcal{M}$ is a holonomic $\mathscr{D}_X$-module.
By Theorem \ref{thm:Lsphol}, the Kashiwara-Malgrange filtration $^LV_\bullet\mathcal{M}$ of $\mathcal{M}$ exists along every slope $L$. For a nondegenerate slope $L$, the module $p^*_L(\mathcal{M})$ on $\widetilde X^L\setminus T_YX$ gives rise to a holonomic $\mathscr{D}_{\widetilde X^L}$-module, denoted by $\widetilde \mathcal{M}_L$ (that is, $\widetilde \mathcal{M}_L=j^L_*(p_L^*\mathcal{M})$), where $p_L\colon \widetilde X^L\setminus T_YX\simeq X\times\mathbb{C}^\star\to X$ is the natural projection and $j^L\colon \widetilde X^L\setminus T_YX\hookrightarrow \widetilde X^L$ is the open embedding.
\begin{lemma}\label{lm:Lgrnearby}
Suppose that $\mathcal{M}$ is specializable along a slope $L$. Then
\[\widetilde{\textup{gr}}^{^LV}_\bullet\mathcal{M}\simeq \Psi_{u=0}(\widetilde \mathcal{M}_L).\]
\end{lemma}
\begin{proof}
The $\C^\star$-action on $\widetilde X^L$ is induced by the grading of $u$ by construction. Hence, the $\C^\star$-action induces the action of the Euler vector field along the smooth divisor $T_YX=(u=0)\subseteq \widetilde X^L$. But the $\C^\star$-actions on $\widetilde X^L$ and hence $T_{Y_L}X$ are both induced by the operator
\[\bigoplus_{k\in\mathbb{Z}} (L\cdot (t_1\partial_{t_1},t_2\partial_{t_2},\dots,t_r\partial_{t_r})+k).\]
The required statement then follows by definition. See also \cite[\S1.3]{BMS} for the case $L=(1,1,\dots,1)$.
\end{proof}
By the above lemma and Theorem \ref{thm:holnb}, we obtain:
\begin{coro}\label{cor:grlhol}
If $\mathcal{M}$ is regular holonomic, then so is $\textup{gr}^{^LV}_\bullet\mathcal{M}$.
\end{coro}
For a holonomic $\mathscr{D}_X$-module $\mathcal{M}$, we denote
\[^LR\Psi_{T_YX}(\mathcal{M})\coloneqq \widetilde{\textup{gr}}^{^LV}_\bullet\mathcal{M}.\]
Following terminology in \cite{Schpbook} (and motivated by Lemma \ref{lm:Lgrnearby}), we call $^LR\Psi_{T_YX}(\mathcal{M})$ the \emph{micro nearby cycle} of $\mathcal{M}$ along $L$.
\begin{theorem}\label{thm:micccl}
Assume that $\mathcal{M}$ is a regular holonomic $\mathscr{D}_X$-module and $L$ is a nondegenerate slope. Then
\[\textup{CC}_{\widetilde X^L/\mathbb{C}}({}^LR_V\mathcal{M})=\overline{q^{*}_L(\textup{CC}(\mathcal{M}))} \textup{ and }\textup{CC}({ }^LR\Psi_{T_YX}(\mathcal{M}))=\overline{q^{*}_L(\textup{CC}(\mathcal{M}))}|_{T^*T_YX},\]
where $q_L:T^*(\widetilde X^L/\mathbb{C})\setminus T^*T_YX\simeq T^*X\times\mathbb{C}^\star\to T^*X$ is the natural projection.
\end{theorem}
\begin{proof}
We apply Lemma \ref{lm:BMM} and let $
\mathcal{N}_{\textup{rel}}= { }^LR_V\mathcal{M}$, $\mathcal{N}=\widetilde \mathcal{M}_L$ and $F=u$. We thus have
\[\textup{CC}_{\widetilde X^L/\mathbb{C}}({}^LR_V\mathcal{M})=\overline{q^{*}_L(\textup{CC}(\mathcal{M}))}.\]
To obtain
\[\textup{CC}({ }^LR\Psi_{T_YX}(\mathcal{M}))=\overline{q^{*}_L(\textup{CC}(\mathcal{M}))}|_{T^*T_YX},\]
we apply Proposition \ref{prop:spnotorsioncc}.
\end{proof}
Theorem \ref{thm:relccL} follows from combining Theorem \ref{thm:Lsphol}, Corollary \ref{cor:grlhol} and Theorem \ref{thm:micccl}.
\subsection{Sabbah's toric base-change of $R_U\mathcal{M}$}
We now recall Sabbah's toric base-changes of Rees modules.
Let us first introduce some notations. We set
\[M=\mathbb{Z}^r, M^+=(\mathbb{Z}_{\ge0})^r, M_\mathbb{Q}=M\otimes_{\mathbb{Z}}\mathbb{Q} \textup{ and } M^+_\mathbb{Q}=(\mathbb{Q}_{\ge 0})^r.\]
Let $N$ be the dual lattice of $M$ and define $N^+$, $N_\mathbb{Q}$ and $N^+_\mathbb{Q}$ similarly.
We then fix a simplicial fan $\Sigma$ in $N_\mathbb{Q}$. We assume that $\Sigma$ is given by a subdivision of $N^+_\mathbb{Q}$. Then $\Sigma$ gives a toric variety $\mathcal{S}_\Sigma$ (by making further subdivision, one can assume $\mathcal{S}_\Sigma$ is smooth) and a projective birational morphism
\[\nu_\Sigma\colon\mathcal{S}_\Sigma\rightarrow\mathbb{C}^r\simeq \textup{Spec } \mathbb{C}[M^+].\]
For a cone $\Gamma\in \Sigma$, we set
\[\check{\Gamma}\coloneqq \{m\in M \mid \langle m, n\rangle\ge 0, \forall n\in\Gamma\}.\]
Then
\[\mathcal{S}_{\Gamma}\coloneqq \textup{Spec } \mathbb{C}[\check{\Gamma}]\hookrightarrow \mathcal{S}_\Sigma\]
gives an open affine patch of $\mathcal{S}_\Sigma$. We denote a partial ordering induced by $\Gamma$ on $M$ by
\[{\bf s}\le_{\Gamma}{\bf s}' \Leftrightarrow {\bf s}'-{\bf s}\in \check{\Gamma}\]
and we say
\[{\bf s}<_{\Gamma}{\bf s}' \Leftrightarrow {\bf s}\le_{\Gamma}{\bf s}' \textup{ but not } {\bf s}'\le_{\Gamma}{\bf s}.\]
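For instance, if $\Gamma=N^+_\mathbb{Q}$ is the full positive octant, then $\check\Gamma=M^+$ and $\le_\Gamma$ is the componentwise partial order on $M$. If instead $\Gamma$ is the ray generated by a primitive vector $L\in N^+$, then $\check\Gamma=\{m\in M\mid L(m)\ge0\}$ and
\[{\bf s}\le_{L}{\bf s}' \Leftrightarrow L({\bf s})\le L({\bf s}'),\]
so $\le_L$ only compares the $L$-weights of multi-indices.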
We use $\mathcal{L}(\Gamma)$ to denote the finite set of all the primitive generators of $\Gamma$ and write
\[\mathcal{L}(\Sigma)\coloneqq \bigcup_{\Gamma\in \Sigma}\mathcal{L}(\Gamma).\]
We suppose that $Y\subseteq X$ is a smooth complete intersection of $H_1,\dots,H_r$ and that $\mathcal{M}$ is a $\mathscr{D}_X$-module with a $\mathbb{Z}^r$-filtration $U_\bullet\mathcal{M}$ that is good over $V_\bullet\mathscr{D}_X$. It is now natural to regard $U_\bullet\mathcal{M}$ and $V_\bullet\mathscr{D}_X$ as indexed by $M\simeq \mathbb{Z}^r$.
We consider the following fiber-product diagram
\[
\begin{tikzcd}
\widetilde X_\Sigma \coloneqq \widetilde X\times_{\mathbb{C}^r} \mathcal{S}_\Sigma \arrow[r,"\mu_\Sigma"]\arrow[d,"\varphi_\Sigma"] &\widetilde X\arrow[d, "\varphi"]\\
\mathcal{S}_\Sigma\arrow[r,"\nu_\Sigma"]&\mathbb{C}^r.
\end{tikzcd}
\]
Using the flattening theorem over toric varieties, Sabbah and Castro proved the following fundamental theorem:
\begin{theorem}\cite[A.1.1]{Sab}\label{thm:existadfan}
Suppose that $U_\bullet\mathcal{M}$ is a good multi-filtration over $V_\bullet\mathscr{D}_X$ along the smooth complete intersection $Y\subseteq X$. Then there exists a simplicial fan $\Sigma$ subdividing $N^+_\mathbb{Q}$ such that $\widetilde{R_U\mathcal{M}}\coloneqq\mu^*_\Sigma R_U\mathcal{M}/\mathcal T$ is flat over $\mathcal{S}_\Sigma$, where $\mathcal T$ is the torsion subsheaf of $\mu^*_\Sigma R_U\mathcal{M}$ supported over the exceptional locus of $\nu_\Sigma$.
\end{theorem}
Such $\Sigma$ in the above theorem is called a fan \emph{adapted} to $U_\bullet\mathcal{M}$.
By construction, we have a natural inclusion of $\mathbb{Z}^r$-graded modules
\[R_U\mathcal{M}\hookrightarrow \mathcal{M}[u_1,1/u_1,\dots,u_r,1/u_r]\]
or equivalently
\[R_U\mathcal{M}|_{\widetilde X_N}=\mathcal{M}[u_1,1/u_1,\dots,u_r,1/u_r].\]
If we have a cone $\Gamma\in \Sigma$, then we know
\[\mu^*_\Sigma R_U\mathcal{M}|_{\widetilde X_\Gamma}=\mu^*_\Gamma R_U\mathcal{M}= \mathbb{C}[\check{\Gamma}]\otimes_{\mathbb{C}[M^+]}R_U\mathcal{M}.\]
Since $\mu_\Sigma$ and $\mu_{\Gamma}$ are identical over $S_N$, using the pullback functor we have an induced natural morphism
\begin{equation} \label{eq:pbmorphism}
\mathbb{C}[\check{\Gamma}]\otimes_{\mathbb{C}[M^+]}R_U\mathcal{M}\rightarrow \mathcal{M}[u_1,1/u_1,\dots,u_r,1/u_r].
\end{equation}
The above morphism is in general neither injective nor surjective. What is its image? To answer this question, Sabbah introduced a refined filtration for each cone $\Gamma\in\Sigma$ by
\[^\Gamma U_{\bf s}\mathcal{M}=\sum_{{\bf s}'\le_{\Gamma}{\bf s}}U_{{\bf s}'}\mathcal{M}, \quad\forall {\bf s}\in M.\]
If $\Gamma$ is a unimodular cone of dimension $<r$, then one easily sees that $^\Gamma U_{\bf s}\mathcal{M}$ only depends on the image of ${\bf s}$ in $M/\Gamma^\perp$. Hence, in the special case that $L$ is a ray in $\Sigma$, we have
\[^LU_{\bf s}\mathcal{M}={ }^LU_{L({\bf s})}\mathcal{M},\]
that is, $^LU_\bullet\mathcal{M}$ is indexed by $\mathbb{Z}\simeq M/L^{\perp}$. Therefore, $^LU_\bullet\mathcal{M}$ coincides with the $\mathbb{Z}$-indexed filtration defined by \eqref{eq:filsl}.
We write the associated Rees module by
\[^\Gamma R_U\mathcal{M}\coloneqq \bigoplus_{{\bf s}\in M}{}^\Gamma U_{\bf s} \mathcal{M}\cdot\prod_{j=1}^r u_j^{s_j}.\]
One observes that $^\Gamma R_U\mathcal{M}$ is the image of the natural morphism \eqref{eq:pbmorphism} and that its kernel is the torsion subsheaf $\mathcal{T}|_{\widetilde X_\Gamma}$.
Therefore, we have proved that
\[\widetilde{R_U\mathcal{M}}|_{\widetilde X_\Gamma}\simeq { }^\Gamma R_U\mathcal{M}.\]
Furthermore, Sabbah proved:
\begin{lemma}\cite[2.2.2.Lemme]{Sab}\label{lm:flatgamma}
If ${ }^\Gamma R_U\mathcal{M}$ is flat over $\mathbb{C}[\check{\Gamma}]$, then
\[^\Gamma U_{\bf s}\mathcal{M}=\bigcap_{L\in \mathcal{L}(\Gamma)}{}^LU_{L({\bf s})}\mathcal{M}.\]
\end{lemma}
It is obvious that $\mathbb{C}[\check{\Gamma}]\otimes_{\mathbb{C}[M^+]}R_U\mathcal{M}$ is coherent over $^\Gamma R_V\mathscr{D}_X$ and hence $^\Gamma R_U\mathcal{M}$ is coherent over $^\Gamma R_V\mathscr{D}_X$. However, it is not always the case that $^\Gamma R_U\mathcal{M}$ is coherent over $R_V\mathscr{D}_X$ and hence $^\Gamma U_\bullet\mathcal{M}$ is not necessarily a good filtration over $V_\bullet\mathscr{D}_X$. To fix this, Sabbah defined the \emph{saturation} of $U_\bullet\mathcal{M}$ by
\[\bar U_{\bf s} \mathcal{M}\coloneqq \bigcap_{\textup{primitive vectors } L\in N^+} {}^LU_{L({\bf s})}\mathcal{M}.\]
\begin{theorem}[Sabbah]\label{thm:goodsat}
Suppose that $U_\bullet\mathcal{M}$ is a good multi-filtration over $V_\bullet\mathscr{D}_X$ and $\Sigma$ is a simplicial fan adapted to $U_\bullet\mathcal{M}$. Then $\bar U_\bullet\mathcal{M}$ is good over $V_\bullet\mathscr{D}_X$ and
\[\bar U_{\bf s} \mathcal{M}=\bigcap_{L\in \mathcal{L}(\Sigma)}{}^LU_{L({\bf s})}\mathcal{M}.\]
\end{theorem}
\begin{proof}
We take a cone $\Gamma\in \Sigma$ and consider the natural surjection
\[\mu^*_\Gamma R_U\mathcal{M}\rightarrow \widetilde{R_U\mathcal{M}}|_{\widetilde X_\Gamma}\simeq { }^\Gamma R_U\mathcal{M}.\]
By Theorem \ref{thm:existadfan}, $\widetilde{R_U\mathcal{M}}$ is flat over $\mathcal{S}_\Sigma$.
By Lemma \ref{lm:flatgamma}, we hence know
\[\widetilde{R_U\mathcal{M}}|_{\widetilde X_\Gamma}=\bigoplus_{{\bf s}\in M}(\bigcap_{L\in \mathcal{L}(\Gamma)}{}^LU_{L({\bf s})}\mathcal{M})\cdot\prod_{j=1}^r u_j^{s_j}.\]
Since $\{\widetilde X_\Gamma\}_{\Gamma\in \Sigma}$ gives a covering of $\widetilde X_\Sigma$, we hence obtain that
\[\mu_{\Sigma*}(\widetilde{R_U\mathcal{M}})=\bigoplus_{{\bf s}\in M}(\bigcap_{L\in \mathcal{L}(\Sigma)}{}^LU_{L({\bf s})}\mathcal{M})\cdot\prod_{j=1}^r u_j^{s_j}.\]
Since $\nu_\Sigma$ is projective, by Proposition \ref{pro:pfcohrelbasechange} we conclude that $\mu_{\Sigma*}(\widetilde{R_U\mathcal{M}})$ is coherent over $R_V\mathscr{D}_X$.
By construction, we know that
\[^\Gamma U_{\bf s} \mathcal{M} \subseteq \bigcap_{\textup{primitive vectors }L\in\Gamma\cap N^+}{}^LU_{L({\bf s})}\mathcal{M}\]
where on the right hand side the intersection is over all primitive vectors in $\Gamma\cap N^+$ (not just generators of $\Gamma$)
and hence
\[^\Gamma U_{\bf s} \mathcal{M} =\bigcap_{L\in \mathcal{L}(\Gamma)}{}^LU_{L({\bf s})}\mathcal{M}= \bigcap_{\textup{primitive vectors }L\in\Gamma\cap N^+}{}^LU_{L({\bf s})}\mathcal{M}\]
thanks to Lemma \ref{lm:flatgamma} again. Therefore,
\[\bar U_{\bf s}\mathcal{M}= \bigcap_{\Gamma\in\Sigma}{}^\Gamma U_{\bf s} \mathcal{M}=\bigcap_{L\in \mathcal{L}(\Sigma)}{}^LU_{L({\bf s})}\mathcal{M}\]
and
\[\mu_{\Sigma*}(\widetilde{R_U\mathcal{M}})=R_{\bar U}\mathcal{M}.\]
Since we have proved that $\mu_{\Sigma*}(\widetilde{R_U\mathcal{M}})$ is coherent over $R_V\mathscr{D}_X$, $\bar U_\bullet\mathcal{M}$ is good over $V_\bullet\mathscr{D}_X$.
\end{proof}
\begin{remark}\label{rmk:nabmkm}
Let $\mathcal{M}$ be a holonomic $\mathscr{D}_X$-module.
By Theorem \ref{thm:Lsphol}, the Kashiwara-Malgrange filtration $^L V_\bullet\mathcal{M}$ of $\mathcal{M}$ exists for each slope $L$. We then fix a fan $\Sigma$ adapted to a good multi-filtration $U_\bullet\mathcal{M}$. Now one can naively define
\[^\Sigma V_{\bf s}\mathcal{M}= \bigcap_{L\in \mathcal{L}(\Sigma)} {}^LV_{L({\bf s})}\mathcal{M}\quad \forall {\bf s}\in M,\]
which gives a $\mathbb{Z}^r$-filtration $^\Sigma V_\bullet \mathcal{M}$ over $V_\bullet\mathscr{D}_X$. However, it is not necessarily true in general that $^\Sigma V_\bullet \mathcal{M}$ is good over $V_\bullet\mathscr{D}_X$ even if $\mathcal{M}$ is regular holonomic; see \cite[\S 3.3]{Sab} for further discussions. This means that one cannot define multi-indexed Kashiwara-Malgrange filtrations in general. On the contrary, Bernstein-Sato polynomials can be generalized successfully to the multi-indexed case (see Theorem \ref{thm:mibfs}).
\end{remark}
Using Theorem \ref{thm:goodsat}, Sabbah proved the following beautiful result about the existence of multi-variable $b$-functions. We sketch its proof for completeness.
\begin{theorem}[Existence of Sabbah's generalized $b$-functions]\label{thm:mibfs}
Suppose that $\mathcal{M}$ is a holonomic $\mathscr{D}_X$-module with a $\mathbb{Z}^r$-filtration $U_\bullet\mathcal{M}$ good over $V_\bullet\mathscr{D}_X$ along a smooth complete intersection $Y\subseteq X$ of $H_1,\dots,H_r$. Then there exists a simplicial fan $\Sigma$ subdividing $N^+_\mathbb{Q}$ such that for every nonzero vector ${\bf a}\in M^+$ there exist polynomials $b^{\bf a}_L(s)\in\mathbb{C}[s]$ $($depending on ${\bf a}$$)$ for all slopes $L\in \mathcal{L}(\Sigma)$ so that locally
\[\prod_{L\in \mathcal{L}(\Sigma)}b^{\bf a}_L(L\cdot (t_1\partial_{t_1},\dots,t_r\partial_{t_r})) \textup{ annihilates } \frac{U_{\vec 0}\mathcal{M}}{U_{-{\bf a}}\mathcal{M}},\]
where $t_j$ are local defining functions of $H_j$.
\end{theorem}
\begin{proof}
We take a simplicial fan $\Sigma$ adapted to $U_\bullet\mathcal{M}$, whose existence is guaranteed by Theorem \ref{thm:existadfan}. By Theorem \ref{thm:goodsat}, the saturation of $U_\bullet\mathcal{M}$ is
\[\bar U_{\bf s}\mathcal{M}=\bigcap_{L\in \mathcal{L}(\Sigma)} {}^LU_{L({\bf s})}\mathcal{M} \quad \forall {\bf s}\in M.\]
Since $\bar U_\bullet\mathcal{M}$ is good over $V_\bullet\mathscr{D}_X$, there exists a vector ${\bf k}\in M^+$ depending on ${\bf a}$ such that
\[\bar U_{-{\bf k}}\mathcal{M}\subseteq U_{-{\bf a}}\mathcal{M}.\]
On the other hand, similar to the proof of Theorem \ref{thm:Lsphol}, we conclude that $\textup{gr}^{^LU}_\bullet\mathcal{M}$ is holonomic over $\textup{gr}^{^LV}_\bullet\mathscr{D}_X$ for each slope $L$. Therefore, there exists $b_L(s)\in \mathbb{C}[s]$ such that $b_L(\theta_L)$ kills $\textup{gr}^{^LU}_\bullet\mathcal{M}$. In particular, there exists $b^{\bf a}_L(s)\in \mathbb{C}[s]$ so that
\[b^{\bf a}_L(L\cdot (t_1\partial_{t_1},\dots,t_r\partial_{t_r}))\cdot{ }^LU_{0}\mathcal{M}\subseteq {}^LU_{L(-{\bf k})}\mathcal{M}\]
for each $L\in \mathcal{L}(\Sigma)$. Since $U_{\vec 0}\mathcal{M}\subseteq \bar U_{\vec 0}\mathcal{M}$ and
\[\bar U_{-{\bf k}}\mathcal{M}=\bigcap_{L\in \mathcal{L}(\Sigma)} {}^LU_{L(-{\bf k})}\mathcal{M}\subseteq U_{-{\bf a}}\mathcal{M},\]
the required statement follows.
\end{proof}
\subsection{Relative characteristic cycles for $R_U\mathcal{M}$}\label{subsec:relccU}
We now prove Theorem \ref{thm:CCrelRU}.
Assume that $\mathcal{M}$ is a regular holonomic $\mathscr{D}_X$-module with a good filtration $U_\bullet\mathcal{M}$ over $V_\bullet\mathscr{D}_X$. We write
$$p\colon \varphi^{-1}((\C^*)^r)\simeq X\times (\C^*)^r\longrightarrow X$$
for the natural projection, and $j\colon \varphi^{-1}((\C^*)^r)\hookrightarrow \widetilde X$ for the open embedding. We then set
$\mathcal{N}=j_*(p^*\mathcal{M})$, $\mathcal{N}_{\textup{rel}}=R_U\mathcal{M}$ and $F=\prod_j u_j$. Thus, Theorem \ref{thm:CCrelRU} follows from Lemma \ref{lm:BMM}.
If additionally $R_U\mathcal{M}$ is flat over $\C^r$, then we pick an arbitrary point $\alpha\in \C^r$ and general hyperplanes $\mathcal{H}_1,\dots,\mathcal{H}_r$ such that $\{\alpha\}$ is the smooth complete intersection of these hyperplanes. Applying Lemma \ref{lm:BMM} and Proposition \ref{prop:spnotorsioncc} inductively, we conclude that $\textup{CC}_{\textup{rel}}(R_U\mathcal{M})$ is relative Lagrangian and hence that $R_U\mathcal{M}$ is relative holonomic. We have thus proved Proposition \ref{prop:flatrelhol}. Now we pick a simplicial fan adapted to $U_\bullet\mathcal{M}$ as in Theorem \ref{thm:existadfan}. Then $\widetilde{R_U\mathcal{M}}$ is flat over $\mathcal{S}_\Sigma$. By a similar argument, we can more generally prove:
\begin{prop}\label{prop:usigmarelhol}
In the situation of Theorem \ref{thm:existadfan}, if $\mathcal{M}$ is a regular holonomic $\mathscr{D}_X$-module, then $\widetilde{R_U\mathcal{M}}$ is relative holonomic over $\mathcal{S}_\Sigma$.
\end{prop}
Since the saturation $\bar U_\bullet\mathcal{M}$ is good over $V_\bullet\mathscr{D}_X$ (Theorem \ref{thm:goodsat}), Theorem \ref{thm:CCrelRU} in particular implies
\[\textup{CC}_{\textup{rel}}(R_{\bar U}\mathcal{M})=\overline{q^{-1}(\textup{CC}(\mathcal{M}))}.\]
By construction, $\overline{q^{-1}(\textup{CC}(\mathcal{M}))}$ is a relative conormal space but not necessarily a relative Lagrangian in general (unless $r=1$). Since
\[R_{\bar U}\mathcal{M}={\mu_\Sigma}_*(\widetilde{R_U\mathcal{M}}),\]
from Proposition \ref{prop:usigmarelhol}, we see that the direct image functors for relative $\mathscr{D}$-modules under proper base changes do not necessarily preserve relative holonomicity (cf. \S\ref{subsec:bashchangerelD}).
\section{Graph embedding construction of Malgrange} \label{sec:gemMal}
Let ${\bf f}=(f_1,f_2,\dots,f_r)$ be an $r$-tuple of regular (or holomorphic) functions on a smooth complex variety (or a complex manifold) $Y$. We write $$j_{\bf f}\colon U_Y\coloneqq Y\setminus (\prod_{i=1}^rf_i=0)\hookrightarrow Y$$ for the open embedding.
We consider the graph embedding
\[\iota_{{\bf f}}\colon Y\hookrightarrow X\coloneqq Y\times\C^r\quad x\mapsto (x,f_1(x),\dots,f_r(x)).\]
Let $\mathcal{M}_{Y}$ be a holonomic $\mathscr{D}_{Y}$-module. We set $\widetilde\mathcal{M}=\mathcal{M}_Y(*D_Y)$ with the divisor $D_Y=(\prod_{i=1}^rf_i=0)$, which is a holonomic $\mathscr{D}_Y$-module. We assume
\[\widetilde\mathcal{M}= \mathscr{D}_Y\cdot \mathcal{M}_0\]
for some $\mathscr{O}_Y$-coherent submodule $\mathcal{M}_0\subseteq \widetilde\mathcal{M}$.
Following the idea of Malgrange \cite{MalV}, we have a coherent $\mathscr{D}_Y[{\bf s}]=\mathscr{D}_Y\otimes_\C\C[{\bf s}]$-submodule
\[\mathscr{D}_Y[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0)\subseteq \widetilde\mathcal{M}[{\bf s}]\cdot{\bf f}^{\bf s}\]
where ${\bf s}=(s_1,s_2,\dots,s_r)$,
\[{\bf f}^{\bf s}=\prod_{i=1}^rf_i^{s_i},\]
and the $\mathscr{D}_Y[{\bf s}]$-module structure is induced by
\[\theta\cdot({\bf f}^{\bf s}\cdot m_0)={\bf f}^{\bf s}\cdot\theta(m_0)+\sum_{i=1}^r s_i\frac{\theta(f_i)}{f_i}{\bf f}^{\bf s}\cdot m_0\]
for vector fields $\theta$ on $Y$, where $m_0$ is a section of $\mathcal{M}_0$. Since $\mathscr{D}_Y[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0)$ is both a $\C[{\bf s}]$-module and a $\mathscr{D}_Y$-module, it is a coherent relative $\mathscr{D}$-module over $\C[{\bf s}]$. However, $\widetilde\mathcal{M}[{\bf s}]\cdot{\bf f}^{\bf s}$ is not coherent over $\mathscr{D}_Y[{\bf s}]$.
We denote by $(t_1,\dots,t_r)$ the coordinates of $\C^r$. The key point is that after identifying $s_i$ with $-t_i\partial_{t_i}$, we have a $\mathscr{D}_X$-module isomorphism
\[ \iota_{{\bf f}+}(\widetilde\mathcal{M})\simeq \iota_{{\bf f}*}({\widetilde\mathcal{M}}[{\bf s}]{\bf f}^{\bf s})
\]
with the $t_i$-action on $\iota_{{\bf f}*}({\widetilde\mathcal{M}}[{\bf s}]{\bf f}^{\bf s})$ given by
\[t_i\cdot (b({\bf s}){\bf f}^{\bf s}\cdot m_0)=b(s_1,\dots,s_{i-1},s_i+1,s_{i+1},\dots,s_{r})f_i{\bf f}^{\bf s}\cdot m_0.\]
Consequently, $\iota_{{\bf f}*}(\mathscr{D}_Y[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0))$ is a $\mathscr{D}_{X,D}$-lattice of $\iota_{{\bf f}+}(\widetilde\mathcal{M})$, where $D$ is the divisor defined by $(t_1\cdots t_r=0)$.
Since $\iota_{{\bf f}*}(\mathscr{D}_Y[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0))$ is supported on the graph of $Y$, abusing notation, we also say $\mathscr{D}_Y[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0)$ is a $\mathscr{D}_{X,D}$-lattice of $\iota_{{\bf f}+}(\widetilde\mathcal{M})$. Then $\mathcal{M}_0$ generates a holonomic $\mathscr{D}_X$-module
\[\mathcal{M}=\mathscr{D}_X\cdot\iota_{{\bf f}*}\mathcal{M}_0\subseteq \iota_{{\bf f}+}\widetilde\mathcal{M}\]
and $\mathscr{D}_Y[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0)$ generates an $R_V\mathscr{D}_X$-module
\[R_V\mathscr{D}_X\cdot\iota_{{\bf f}*}(\mathscr{D}_Y[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0)),\]
where the latter induces a $\mathbb{Z}^r$-filtration on $\mathcal{M}$ with
\[U_{\vec0}\mathcal{M}=\iota_{{\bf f}*}(\mathscr{D}_Y[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0))\textup{ and } U_{-\vec1}\mathcal{M}=\iota_{{\bf f}*}(\mathscr{D}_Y[{\bf s}]({\bf f}^{{\bf s}+\vec1}\cdot\mathcal{M}_0)).\]
We then apply Theorem \ref{thm:mibfs} and obtain Sabbah's generalized $b$-function $b({\bf s})\in \C[{\bf s}]$ for $\mathscr{D}_Y[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0)$ such that
\[b({\bf s})\cdot\dfrac{\mathscr{D}_Y[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0)}{\mathscr{D}_Y[{\bf s}]({\bf f}^{{\bf s}+\vec1}\cdot\mathcal{M}_0)}=0\]
with $b({\bf s})$ given by a product of polynomials of degree one. Sabbah's generalized $b$-functions associated to graph embeddings can be further generalized to the notion of Bernstein-Sato ideals (see for instance \cite{Budur}).
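For instance, take $Y=\C^2$, $\mathcal{M}_0=\mathscr{O}_Y$ and ${\bf f}=(x_1,x_2)$. The classical one-variable computation $\partial_x\cdot x^{s+1}=(s+1)x^s$ generalizes to
\[\partial_{x_1}\partial_{x_2}\cdot x_1^{s_1+1}x_2^{s_2+1}=(s_1+1)(s_2+1)\,x_1^{s_1}x_2^{s_2},\]
so $b({\bf s})=(s_1+1)(s_2+1)$ annihilates $\mathscr{D}_Y[{\bf s}]{\bf f}^{\bf s}/\mathscr{D}_Y[{\bf s}]{\bf f}^{{\bf s}+\vec1}$, a product of degree-one factors as predicted.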
In the graph embedding case, we can construct the log rescaled family globally (cf. \S\ref{subsec:logtorel}):
\[p: \widetilde X\coloneqq Y\times \C^r_t\times \C_y^r\to Y\times \C^r_t, \quad (x, t, y)\mapsto (x, e^yt).\]
We now give a counterexample to Theorem \ref{thm:relchrellog} when the flatness assumption is dropped, in the case $r=2$:
\begin{example}\label{ex:notrelhollogres}
We take $Y=\C^2$ with coordinates $(x_1,x_2)$ and ${\bf f}=(x_1x_2,x_2)$. We consider the $\mathscr{D}_{X,D}$-lattice $\bar\mathcal{M}=\mathscr{D}_Y[{\bf s}]{\bf f}^{\bf s}$. Its $\mathscr{D}_{X,D}$-annihilator is
\[{\textup{Ann}}_{\mathscr{D}_{X,D}}(\bar\mathcal{M})=(x_1\partial_{x_1}+t_1\partial_{t_1},x_2\partial_{x_2}+t_2\partial_{t_2},t_1-x_1x_2,t_2-x_2).\]
Since $\bar\mathcal{M}$ and $\mathscr{M}=p^*(\iota_{{\bf f}*}\bar\mathcal{M})$
are both acyclic,
\[\textup{Ch}_{{\textup{rel}}}(\mathscr{M})=(x_1\xi_{x_1}=-\xi_{y_1},x_2\xi_{x_2}=-\xi_{y_2}, e^{y_1}t_1=x_1x_2, e^{y_2}t_2=x_2).\]
Thus, the fiber of $\textup{Ch}_{{\textup{rel}}}(\mathscr{M})$ over $(t_1=0,t_2=0)$ satisfies
\[\textup{Ch}_{{\textup{rel}}}(\mathscr{M})\cap(t_1=0,t_2=0)=(x_1\xi_{x_1}=-\xi_{y_1},\xi_{y_2}=0,x_2=0)\]
and hence the dimension of the fiber is $5>4$. Therefore, $\textup{Ch}_{\textup{rel}}(\mathscr{M})$ is not relative Lagrangian over $\C^2_t$.
\end{example}
The following theorem is a generalization of \cite[R\'esultat 1]{Mai}. See \cite[Theorem 3.3]{WuRHA} for the proof of a more general result and also \cite[Theorem 4.3.4]{BVWZ2} when $\mathcal{M}_0=\mathscr{O}_X$.
\begin{theorem}\label{thm:mairelhol}
If $\widetilde\mathcal{M}$ is a holonomic $\mathscr{D}_Y$-module, then every lattice $\mathscr{D}_{Y}[{\bf s}]({\bf f}^{\bf s}\cdot \mathcal{M}_0)$
is relative holonomic over $\C[{\bf s}]$ and
\[\textup{Ch}^{\textup{rel}}(\mathscr{D}_{Y}[{\bf s}]({\bf f}^{\bf s}\cdot \mathcal{M}_0))= \textup{Ch}(\widetilde\mathcal{M})\times \C^r.\]
\end{theorem}
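As a simple illustration of Theorem \ref{thm:mairelhol} (an elementary check, not needed below): take $Y=\C$, ${\bf f}=x$ and $\mathcal{M}_0=\mathscr{O}_Y$, so that $\widetilde\mathcal{M}=\mathscr{O}_\C(*0)$ is holonomic with $\textup{Ch}(\widetilde\mathcal{M})=T^*_\C\C\cup T^*_0\C$. Since $\mathscr{D}_\C[s]\,x^s$ is annihilated by $x\partial_x-s$, whose symbol relative to $\C[s]$ is $x\xi$, one finds

```latex
\textup{Ch}^{\textup{rel}}(\mathscr{D}_\C[s]\,x^s)
  =\{x\xi=0\}\times\C_s
  =\bigl(T^*_\C\C\cup T^*_0\C\bigr)\times\C_s\,,
```

in agreement with the theorem.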
\begin{proof}[Proof of Theorem \ref{thm:constrlattice}]
Since constructibility is local, it is enough to assume that
\[D=(\prod_{i=1}^r x_i=0)\]
with local coordinates $(x_1,x_2,\dots,x_n)$ and $\widetilde\mathcal{M}=\mathcal{M}(*D)$ satisfies
\[\mathcal{M}(*D)=\mathscr{D}_X\cdot\mathcal{M}_0\]
for some coherent $\mathscr{O}_X$-submodule $\mathcal{M}_0$. Then we take the graph embedding of smooth log pairs
\[\iota_{\bf f}\colon (X,D)\hookrightarrow (Z,D_Z)\]
where ${\bf f}=(x_1,x_2,\dots,x_r)$ and $Z=X\times\C^r$, $D_Z=(t_1\cdots t_r=0)$.
Similar to the non-log case (cf. \cite[Example 1.5.23]{HTT}), we have
\[\iota^{\log}_{{\bf f}+}(\bar\mathcal{M})\simeq \iota_{{\bf f}*}(\bar\mathcal{M}\otimes_\mathscr{O} \omega^{\log}_{\iota_{\bf f}}\otimes_{\mathscr{D}_{X,D}}f^*\mathscr{D}_{Z,D_Z})\]
and
\[\iota^{\log}_{{\bf f}+}(\bar\mathcal{M})\hookrightarrow \iota_{{\bf f}+}(\widetilde\mathcal{M}).\]
Thus, $\iota^{\log}_{{\bf f}+}(\bar\mathcal{M})$ is a $\mathscr{D}_{Z,D_Z}$-lattice of the holonomic $\mathscr{D}_Z$-module $\iota_{{\bf f}+}(\widetilde\mathcal{M})$. But the graph embedding gives us different lattices of $\iota_{{\bf f}+}(\widetilde\mathcal{M})$,
\[\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0(kD))\]
for all $k\in \mathbb{Z}$. The lattices $\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0(kD))$ are relative holonomic over $\C[{\bf s}]$ by Theorem \ref{thm:mairelhol}.
Meanwhile, we can compare lattices:
\[\iota^{\log}_{{\bf f}+}(\bar\mathcal{M})\subseteq \iota_{{\bf f}*}(\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0))\otimes_{\mathscr{O}_Z}\mathscr{O}_Z(kD_Z)=\iota_{{\bf f}*}(\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0(kD)))\]
for $k\gg 0$. The above inclusion and Corollary \ref{cor:relholab} together imply that $\iota^{-1}_{\bf f}(\iota^{\log}_{{\bf f}+}(\bar\mathcal{M}))$ is relative holonomic over $\C[{\bf s}]$. By Lemma \ref{lm:spsmsubvar},
\[\iota^{-1}_{\bf f}(\iota^{\log}_{{\bf f}+}(\bar\mathcal{M}))\otimes_{\C[{\bf s}]}^\mathbf{L} \C_{\mathbf 0}\simeq \mathbf{L} i_{\mathbf0}^* (\iota^{-1}_{\bf f}(\iota^{\log}_{{\bf f}+}(\bar\mathcal{M})))\]
is a complex of $\mathscr{D}_X$-modules with holonomic cohomology sheaves, where $i_{\mathbf 0}\colon \{\mathbf 0\}\hookrightarrow \textup{Spec }\C[{\bf s}]$ is the closed embedding and $\C_{\mathbf 0}$ is the residue field. By the construction of the log de Rham complex and Proposition \ref{prop:logpushfdr}, we have
\begin{equation} \label{eq:logdrtodrt}
\iota_{{\bf f}*}(\textup{DR}_{X,D}(\bar\mathcal{M}))\simeq \textup{DR}_{Z,D_Z}(\iota^{\log}_{{\bf f}+}(\bar\mathcal{M}))\simeq \iota_{{\bf f}*}(\textup{DR}_X(\iota^{-1}_{\bf f}(\iota^{\log}_{{\bf f}+}(\bar\mathcal{M}))\otimes_{\C[{\bf s}]}^\mathbf{L} \C_{\mathbf 0}))
\end{equation}
where the last quasi-isomorphism follows by identifying $s_i$ with $-t_i\partial_{t_i}$. Since
$$\iota^{-1}_{\bf f}(\iota^{\log}_{{\bf f}+}(\bar\mathcal{M}))\otimes_{\C[{\bf s}]}^\mathbf{L} \C_{\mathbf 0}$$ is a complex of $\mathscr{D}_X$-modules with holonomic cohomology sheaves, $\textup{DR}_{X,D}(\bar\mathcal{M})$ is constructible by Kashiwara's constructibility theorem (cf. \cite[Theorem 4.6.3]{HTT}).
\end{proof}
\begin{remark}\label{rmk:stratificationlogDR}
(1) From the proof of Theorem \ref{thm:constrlattice}, $\textup{DR}_{X,D}(\bar\mathcal{M})$ is not necessarily perverse in general, unless $\iota^{-1}_{{\bf f}}(\iota_{{\bf f}+}^{\log}(\bar\mathcal{M}))$ is flat over a neighborhood of ${\mathbf0}\in \C^r$.\\
(2) The stratification for the constructible complex $\textup{DR}_{X,D}(\bar\mathcal{M})$ is determined by the stratification of $\textup{Ch}(\mathcal{M}(*D))$ by \eqref{eq:logdrtodrt} and Theorem \ref{thm:mairelhol}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:j*j!DR}]
Part (1) is the analytification of \cite[Theorem 1.1]{WZ} with the same proof. We now prove Part (2).
We keep the notations as in the proof of Theorem \ref{thm:constrlattice}. By picking some $k\gg 0,$
we have an inclusion of lattices
\[\iota^{\log}_{{\bf f}+}(\bar\mathcal{M})\subseteq \iota_{{\bf f}*}(\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0(kD))).\]
We then consider the short exact sequence of $\mathscr{D}_{Z,D_Z}$-modules
\[0\to \iota^{\log}_{{\bf f}+}(\bar\mathcal{M})\rightarrow\iota_{{\bf f}*}(\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0(kD)))\rightarrow\mathcal{Q}\to 0,\]
where $\mathcal{Q}$ is defined to be the quotient module. Applying Theorem \ref{thm:mibfs} to $\mathcal{Q}$, there exists $b({\bf s})\in\C[{\bf s}]$ as a product of linear polynomials in ${\bf s}$ such that
\[b({\bf s})\cdot \mathcal{Q}=0.\]
Substituting ${\bf s}\mapsto{\bf s}+\mathbf{l}$, we have
\[b({\bf s}+\mathbf{l})\cdot\mathcal{Q}(-lD_Z)=0\]
for $\mathbf{l}=(l,l,\dots,l)\in \mathbb{Z}^r$ and each $l\in\mathbb{Z}$. Choose $l\gg 0$ so that $b({\bf s}+\mathbf{l})$ does not vanish at ${\mathbf 0}\in \C^r$. Thus, $\mathcal{Q}(-lD_Z)\otimes \C_{\mathbf 0}=0.$
Considering the above short exact sequence, since
\[\textup{DR}_{Z,D_Z}(\mathcal{Q}(-lD_Z))\simeq \iota_{{\bf f}*}(\textup{DR}_X(\iota^{-1}_{{\bf f}}(\mathcal{Q}(-lD_Z))\otimes\C_{\mathbf 0}))\]
and $\iota^{-1}_{{\bf f}}(\mathcal{Q}(-lD_Z))\otimes\C_{\mathbf 0}=0$, we obtain a quasi-isomorphism
\begin{equation} \label{eq:abdcde}
\textup{DR}_{Z,D_Z}(\iota^{\log}_{{\bf f}+}(\bar\mathcal{M})(-lD_Z))\xrightarrow{q.i.} \textup{DR}_{Z,D_Z}(\iota_{{\bf f}*}(\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0((k-l)D))))
\end{equation}
for some $l\gg k$. By construction, we have
\[\textup{DR}_{Z,D_Z}(\iota_{{\bf f}*}(\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0((k-l)D))))\simeq \iota_{{\bf f}*}(\textup{DR}_X(\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0((k-l)D))\otimes^\mathbf{L}_{\C[{\bf s}]}\C_{\mathbf 0})).\]
We then apply \cite[Corollary 5.4]{WZ}, and by the quasi-isomorphism \eqref{eq:abdcde} obtain
\[\textup{DR}_{Z,D_Z}(\iota^{\log}_{{\bf f}+}(\bar\mathcal{M})(-lD_Z))\simeq \iota_{{\bf f}*}(j_!(\textup{DR}(\mathcal{M}|_U))).\]
By the projection formula,
\[\iota^{\log}_{{\bf f}+}(\bar\mathcal{M})(-lD_Z)\simeq \iota^{\log}_{{\bf f}+}(\bar\mathcal{M}(-lD)).\]
Since $\iota_{{\bf f}}$ is a closed embedding, the required quasi-isomorphism then follows from Proposition \ref{prop:logpushfdr}.
We now prove the perversity statement without the regularity assumption. Using the argument in the proofs of \cite[Theorems 5.2 and 5.3]{WZ} as well as the discussion in \cite[\S 3.6]{BVWZ} dealing with the local analytic case, one obtains that the $\C[{\bf s}]$-modules $\mathscr{D}_X[{\bf s}]({\bf f}^{\bf s}\cdot\mathcal{M}_0(kD))$ are flat over a Zariski neighborhood of ${\mathbf0}\in \textup{Spec }\C[{\bf s}]$ for all $|k|\gg 0$. Using Sabbah's $b$-functions, we then conclude that $\iota^{-1}_{\bf f}(\iota^{\log}_{{\bf f}+}(\bar\mathcal{M}(kD)))$ are flat over a Zariski neighborhood of ${\mathbf0}\in \textup{Spec }\C[{\bf s}]$ for all $|k|\gg 0$. Consequently, the required perversity follows.
\end{proof}
\bibliographystyle{amsalpha}
\section{Introduction}
Sufficiently cold and dense quark matter is a color superconductor
\cite{bailinlove,RWreview,alfordreview,schaferreview,DHRreview,Renreview,shovyreview,Huang}.
In the limit of asymptotically large quark chemical potentials, $\mu \gg \Lambda_{\rm QCD}$,
quarks are weakly coupled \cite{ColPer}, and interact mainly via single-gluon
exchange. In this regime of weak coupling, the color-superconducting gap $\phi$
can be computed within the fundamental theory of strong interactions, quantum chromodynamics
(QCD) \cite{son,schaferwilczek,rdpdhr,Brown1,shovkovy,Hsu}. It was first noticed in Refs.\
\cite{son,schaferwilczek,rdpdhr} that in order to describe color superconductivity
correctly it is crucial to take into account the specific energy and momentum dependence
of the gluon propagator in dense quark matter. It turned out that the long-ranged,
magnetic gluons generate a logarithmic enhancement in addition to the standard
BCS logarithm and thereby increase the value of the gap at leading logarithmic order.
Furthermore, due to the energy and momentum dependence of the gluon propagator
the gap also is a function of energy and momentum and therefore a complex quantity
\cite{reuter2,ren}. Complex gap functions are well-known from the investigation of
strong-coupling superconductors in condensed matter physics for more than 40 years
\cite{eliashberg,schrieffer,vidberg,mahan}.
The authors of Ref.\ \cite{rdpdhr} estimated the magnitude of the imaginary part of the
color-superconducting gap for massless quarks by considering the cut of the
magnetic gluon propagator in the complex energy plane.
They found that Im$\,\phi =0$ on the Fermi
surface and Im$\,\phi\sim g\,{\rm Re}\,\phi$ exponentially close to the Fermi surface,
$|k-\mu| \sim \mu \exp(-c/g)$ where $g\ll 1$ is the QCD coupling constant in the limit
of weak coupling. One only has ${\rm Im}\,\phi \sim {\rm Re}\,\phi$ for
quarks farther away from the Fermi surface, $|k-\mu| \sim g\mu$.
Therefore, considering quarks exponentially close to the Fermi surface,
the approximation $\phi \simeq {\rm Re}\,\phi$ is valid
for $\phi$ up to corrections of order $g$ to the prefactor of $\phi$, i.e.,
up to its subleading order.
However, the contribution of ${\rm Im}\,\phi$ to $\phi$ through ${\rm Re}\,\phi$
has not been estimated yet.
In this work we show that, in the 2SC phase and to subleading order,
${\rm Im}\,\phi$ does not contribute to $\phi$ for quarks exponentially close
to the Fermi surface. To do so, all momentum and energy dependences
of $\phi$ must be included in the gap equation. The appropriate starting point for
that is an energy- and momentum-dependent
Ansatz for $\phi$, $\phi = \phi(k_0,\mathbf{k})$, where $k_0$ and $\mathbf{k}$ are treated as
independent variables.
As known from the solution for ${\rm Re}\,\phi$, the integrals over energy and
momentum in the gap equation yield large logarithms, $\ln(\mu/\phi)\sim 1/g$,
which cancel powers of $g$ from the quark-gluon vertices. Due to these logarithms
one must actually compute or at least carefully estimate these
integrals in order to determine the importance
of the various terms contributing to ${\rm Re}\,\phi$ and ${\rm Im}\,\phi$.
Moreover, for a complete account,
not only the magnetic cut but also the electric cut, as well as the poles of the
gluon propagator, have to be considered in the solution for ${\rm Im}\,\phi$.
In order to illustrate the latter point, for energies just above the gluon mass
$m_g \sim g\mu$ one has ${\rm Im}\,\phi \sim g^2\mu \gg {\rm Re}\,\phi$ due to a large
contribution from the emission of on-shell electric gluons.
Estimating how this term feeds back into ${\rm Re}\,\phi$ therefore requires a careful analysis.
Treating energy and momentum as independent variables and solving the coupled
gap equations for Im$\,\phi$ and Re$\,\phi$ self-consistently is therefore a non-trivial problem
and, moreover, leads to interesting insights.
To date, the gap function has never been calculated by treating energy and momentum
independently. In Refs.\ \cite{rdpdhr,meanfield,specgluon} it was assumed that the off-shell
gap function is of the same order of magnitude as the gap on the quasiquark mass-shell,
$\phi(k_0,\mathbf{k}) \approx \phi(\epsilon_k,\mathbf{k}) \equiv \phi_\mathbf{k}$.
Furthermore, all contributions that are generated by the energy dependence of the
gap, i.e., by its non-analyticities along the axis of real energies, are
neglected completely against the non-analyticities of the gluon propagator and
the quark poles. In Ref.\ \cite{reuter} it was pointed out
that at least in order to calculate corrections of order $g$ to the prefactor of
the gap, its off-shell behaviour must be included. In Refs.\ \cite{son,RWreview,ren},
on the other hand, the gap has been calculated within the Eliashberg theory
\cite{mahan}. In this model it is assumed that Cooper pairing happens only on the
Fermi surface. Following this assumption, the external quark momentum in the gap
equation is approximated by $k\approx \mu$. By additionally assuming isotropy in
momentum space, the gap becomes completely independent of momentum and a function
of energy only, $\phi(\omega,\mathbf{k}) \approx \phi(\omega,\mu) \equiv \phi(\omega)$.
Originally, the Eliashberg theory was formulated in order to include retardation
effects associated with the phonon interaction between electrons in a metal.
Since the energy that can be transferred between two electrons by a phonon
is restricted by the Debye frequency $\omega_D$,
Cooper pairing is restricted to happen only at the Fermi surface
\cite{mahan}.
In quark matter, however, such an assumption is problematic if
one is interested in understanding quark matter at more realistic densities where the
coupling between the quarks becomes stronger. In this case, quarks further
from the Fermi surface also participate in the pairing \cite{Itakura}.
If color superconductivity is present in the cores of neutron stars, the coupling is certainly strong,
$g\sim 1$, and a non-trivial momentum dependence of the gap cannot be neglected.
The present analysis is, strictly speaking, valid only at weak coupling.
However, treating energy and momentum as independent variables might
still be helpful to catch some aspects of color superconductivity at stronger coupling.
Besides affecting the value of the energy gap $\phi$ at some order, Im$\,\phi$ also
contributes to the damping of the quasiquark excitations in a color superconductor.
With respect to the anomalous propagation of quasiquarks, it is shown that the imaginary
part of the gap broadens the support around the quasiquark poles, see Eq.\ (\ref{specxi}) below.
This broadening strengthens the damping due to the imaginary part of the regular
quark self-energy, $\Sigma$, which is present also in the non-color-superconducting medium
\cite{lebellac,vanderheyden,manuel2,manuel,rockefeller}.
This paper is organized as follows: In Sec.\ \ref{compgapeq} the 2SC gap equation
is set up within the effective theory derived in Ref.\ \cite{reuter}. In Sec.\ \ref{solving}
the gap equation is first decomposed into its real and imaginary parts and then solved
to subleading order, cf.\ the schematic outline given after Eq.\ (\ref{traces}).
It is shown that Im$\,\phi$
contributes to $\phi$ at sub-subleading order for quarks exponentially close to the Fermi surface and
that Im$\,\phi \sim$ Re$\,\phi$ at $|k-\mu| \sim g\mu$, which justifies previous calculations.
Furthermore, an analytical expression for Im$\,\phi$ is given, see Eq.\ (\ref{Imphiexact}) below.
In Sec.\ \ref{outlook} the conclusions and an outlook are given.
A somewhat more detailed presentation can be found in Ref.\ \cite{reuter2}.
The units are $\hbar=c=k_B=1$. 4-vectors are denoted by
capital letters, $K^\mu = (k_0, {\bf k})$, with ${\bf k}$ being a
3-vector of modulus $|{\bf k}| \equiv k$ and direction
$\hat{\bf k}\equiv {\bf k}/k$. For the summation over Lorentz
indices, we employ a metric
$g^{\mu \nu} = {\rm diag}(+,-,-,-)$ and perform the calculations
within a compact Euclidean space-time with volume $V/T$, where $V$
is the 3-volume and $T$ the temperature of the system.
Since space-time is compact, energy-momentum space is
discretized, with sums $(T/V)\sum_{K} \equiv T\sum_n (1/V) \sum_{\bf k}$.
For a large 3-volume $V$, the sum over 3-momenta
can be approximated by an integral, $(1/V)\sum_{\bf k} \simeq
\int d^3 {\bf k}/(2 \pi)^3$. For bosons, the sum over $n$ runs over
the bosonic Matsubara frequencies $\omega_n^{\rm b} = 2n \pi T$, while
for fermions, it runs over the fermionic Matsubara frequencies
$\omega_n^{\rm f} = (2 n+1)\pi T$.
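As an elementary illustration of these conventions (a standard identity, quoted here only for orientation), the fermionic Matsubara sum of a free propagator can be performed in closed form:

```latex
T\sum_{n}\frac{1}{(\omega_n^{\rm f})^2+E^2}
  =\frac{1}{2E}\,\tanh\!\left(\frac{E}{2T}\right),
  \qquad \omega_n^{\rm f}=(2n+1)\pi T\,.
```

The same $\tanh$ weight function reappears in the contour representation of the Matsubara sum in Eq.\ (\ref{Mdef}).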
\section{setting up the complex gap equation}\label{compgapeq}
The complex gap equation is set up within the effective theory derived in Ref.\ \cite{reuter}.
This has three major advantages over a treatment in full QCD: Firstly, self-consistency
of the solutions of the Dyson-Schwinger equations for the quark and gluon propagators is
only required for those momentum modes considered as {\em relevant\/} for the physics of
interest. In full QCD, on the other hand, self-consistency has to be maintained for
{\em all\/} degrees of freedom. Secondly, by a special choice of the cutoffs for relevant
quarks and gluons, $\Lambda_{\rm q}$ and $\Lambda_{\rm gl}$, one can implement the
kinematics of quarks scattering along the Fermi surface into the effective theory. Considering
quarks with momenta $|k-\mu|<\Lambda_{\rm q}$ as relevant degrees of freedom, one can define
the projector onto these modes in Nambu-Gor'kov space as \cite{reuter}
\begin{eqnarray}
{\cal P}_1(K,Q) & \equiv & \left( \begin{array}{cc}
\Lambda_{\bf k}^+ & 0 \\
0 & \Lambda_{\bf k}^- \end{array} \right) \,
\Theta(\Lambda_{\rm q} - | k - k_F|) \, \delta^{(4)}_{K,Q}\;,
\end{eqnarray}
where $ \Lambda_{\bf k}^e\equiv(1+e\gamma_0 \bm{\gamma}\cdot \hat{\mathbf{k}})/2$ projects onto states
with positive ($e = +$) or negative ($e=-$)
energy (quark masses being neglected). The quark modes far away from the Fermi surface as
well as antiquarks have the projector ${\cal P}_2 \equiv 1-{\cal P}_1$. They are integrated
out and are contained in the couplings of the effective theory. For the gluons we introduce
the projector
\begin{eqnarray}
{\cal Q}_1(P_1,P_2) & \equiv & \Theta(\Lambda_{\rm gl} -p_1) \, \delta^{(4)}_{P_1,P_2}\;.
\end{eqnarray}
Consequently, relevant gluons are those with 3-momenta less than $\Lambda_{\rm gl}$, while
gluons with larger momenta, corresponding to ${\cal Q}_2 \equiv 1-{\cal Q}_1$,
are integrated out. Choosing the cutoffs according to
\begin{eqnarray}\label{cutoffs}
\Lambda_{\rm q} \alt g \mu \ll \Lambda_{\rm gl} \alt \mu
\end{eqnarray}
the energy of a gluon exchanged between two quarks is restricted by $p_0 < \Lambda_{\rm q}$.
Its momentum, on the other hand, can be much larger, since $p < \Lambda_{\rm gl}$.
This reflects the fact that quarks typically scatter along the Fermi surface and, due to the Pauli
principle, do not penetrate deeply into the Fermi sea. In addition to that, gluons with $p_0 \ll p$
have the property that they are not screened in the magnetic sector and therefore dominate
the interaction among quarks. The third advantage of this effective theory is that by expanding
the numerous terms in the gap equation in powers of
$\Lambda_{\rm q} /\Lambda_{\rm gl} \sim g $ one can systematically identify contributions of
leading, subleading, and sub-subleading order. This was demonstrated explicitly in
\cite{reuter} for the real part of the gap equation. Similarly, also the terms in the complex
gap equation can be organized in this way.
Obviously, the separation of the scales $\phi\,,g\mu$, and $\mu$ is rigorously valid only
at asymptotically large values of the quark chemical potential, where $g \ll 1$. In the
physically relevant region, $\mu \alt 1$ GeV and $g \sim 1$, this scale hierarchy breaks down.
For that case, more suitable choices for cutoff parameters
have been suggested \cite{schaferschwenzer}.
The Dyson-Schwinger equation for relevant quarks and gluons can be derived in a systematic way
using the Cornwall-Jackiw-Tomboulis (CJT) formalism \cite{CJT}.
For the quarks one finds
\begin{equation} \label{NGinvquark}
{\cal G}^{-1} = \left( \begin{array}{cc}
[ G^+]^{-1} & 0 \\
0 & [ G^-]^{-1} \end{array} \right)
+ \left( \begin{array}{cc}
\Sigma^+ & \Phi^- \\
\Phi^+ & \Sigma^- \end{array} \right) \;.
\end{equation}
Here $[G^+]^{-1}$ is the inverse tree-level propagator for quarks and $[G^-]^{-1}$ is the
corresponding one for charge-conjugate quarks. These effective propagators differ from the QCD
tree-level propagator $[G_0^\pm]^{-1}(K) \equiv \Diracslash{K} \pm \mu \gamma_0$ by additional
loops of irrelevant quark and gluon propagators. In Ref.\ \cite{reuter} it is shown that to subleading
order in the gap equation these loops can be neglected, $[G^\pm]^{-1}\simeq [G_0^\pm]^{-1}$.
The regular self-energy for (charge-conjugate) quarks is denoted as $\Sigma^\pm$. The off-diagonal
self-energies $\Phi^\pm$, the gap matrices, connect regular with charge-conjugate
quark degrees of freedom. A non-zero $\Phi^\pm$ corresponds to the condensation of quark Cooper
pairs. Equation (\ref{NGinvquark}) can be formally solved for ${\cal G}$,
\begin{equation} \label{NGquarkprop}
{\cal G} \equiv \left( \begin{array}{cc}
{\cal G}^+ & \Xi^- \\
\Xi^+ & {\cal G}^- \end{array} \right) \; ,
\end{equation}
where
\begin{equation}
{\cal G}^\pm \equiv \left\{ [G^\pm]^{-1} + \Sigma^\pm -
\Phi^\mp \left( [G^\mp]^{-1} + \Sigma^\mp \right)^{-1} \Phi^\pm \right\}^{-1}
\end{equation}
is the propagator describing normal propagation of quasiparticles
and their charge-conjugate counterpart, while
\begin{equation} \label{Xi}
\Xi^\pm \equiv - \left( [G^\mp]^{-1} + \Sigma^\mp \right)^{-1}
\Phi^\pm {\cal G}^\pm
\end{equation}
describes anomalous propagation of quasiparticles, which is possible if the ground state is
a color-superconducting quark-quark condensate, for details, see Ref.\ \cite{DHRreview}.
To subleading order, it is sufficient to approximate the propagator of the soft gluons by the
HDL-resummed propagator $\Delta_{\rm HDL}$ instead of solving the corresponding
Dyson-Schwinger equation \cite{dirkselfenergy}, while for the hard gluons one may
use the free propagator $\Delta_{0,22}$ \cite{reuter}. The index 22 indicates that this
propagator describes the propagation of a hard gluon mode. One has in total
\begin{eqnarray}\label{splitgluonprop}
\Delta^{\mu \nu}_{ab}(P) \equiv
\left[ \Delta_{\rm HDL}\right]^{\mu \nu}_{ab}(P)\,\theta(\Lambda_{\rm gl}-p) +
\left[ \Delta_{0,22}\right]^{\mu \nu}_{ab}(P) \,\theta(p-\Lambda_{\rm gl})\;.
\end{eqnarray}
In the mean-field approximation \cite{meanfield} the
Dyson-Schwinger equation for the gap matrix $\Phi^+ (K)$ reads
\begin{equation}\label{gapequation1}
\Phi^+ (K) = g^2 \, \frac{T}{V} \sum_Q
\Delta^{\mu \nu}_{ab}(K-Q)
\, \gamma_\mu (T^a)^T \, \Xi^+(Q) \, \gamma_\nu T^b \;,
\end{equation}
cf.\ Eq.\ (97) in Ref.\ \cite{reuter}. As discussed above, in the effective theory the sum runs only
over relevant quark momenta, $\mu - \Lambda_{\rm q} \leq q \leq \mu + \Lambda_{\rm q}$. Due to
the dependence of the gluon propagators $\Delta_{\rm HDL}$ and $\Delta_{0,22}$
on the external quark energy momentum $K$ in Eq.\ (\ref{gapequation1}),
the solution $\Phi^+(K)$
must be energy-dependent itself. Hence, solving the gap equation self-consistently requires an
energy-dependent Ansatz for the gap function. To subleading order in the gap equation, the
contribution from the regular self-energies $\Sigma^{\pm}$ can be subsumed by replacing
$q_0 \rightarrow q_0/Z(k_0)$ in the quark propagators \cite{qwdhr}, where
\begin{equation} \label{wavefunc}
Z(k_0) = \left( 1 + \bar{g}^2 \,\ln \frac{M^2}{k_0^2} \right)^{-1}
\end{equation}
is the quark wave-function renormalization factor \cite{manuel,rockefeller}, with
\begin{eqnarray}\label{gbar}
\bar g\equiv \frac{g}{3\sqrt{2}\pi}\;,
\end{eqnarray}
and
\begin{eqnarray} \label{Mconst}
M^2\equiv \frac{3\pi}{4} \,m_g^2\;,~~~m_g^2 \equiv N_f \frac{g^2\mu^2}{6\pi^2}\;.
\end{eqnarray}
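As a purely numerical illustration of Eqs.\ (\ref{wavefunc})--(\ref{Mconst}) (a sketch; the values $g=1$, $\mu=500$ MeV, $N_f=2$ are chosen for illustration only and are not taken from the text):

```python
import math

def Z(k0, g, mu, Nf=2):
    """Quark wave-function renormalization factor of Eq. (wavefunc):
    Z(k0) = [1 + gbar^2 * ln(M^2 / k0^2)]^(-1)."""
    gbar2 = (g / (3.0 * math.sqrt(2.0) * math.pi)) ** 2   # Eq. (gbar), squared
    mg2 = Nf * g ** 2 * mu ** 2 / (6.0 * math.pi ** 2)    # gluon mass squared, Eq. (Mconst)
    M2 = 0.75 * math.pi * mg2                             # M^2 = (3 pi / 4) m_g^2
    return 1.0 / (1.0 + gbar2 * math.log(M2 / k0 ** 2))

# The logarithm vanishes at k0 = M, where Z = 1; for k0 < M one has Z < 1,
# i.e., the normal self-energy suppresses the quasiparticle weight at small energies.
```

For $g\sim 1$ the suppression is mild, consistent with the smallness of $\bar g^2$.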
The effect of Im$\,\Sigma$ on Re$\,\phi$ has been studied in Ref.\ \cite{manuel2}, where it is
shown that Im$\,\Sigma$ suppresses the formation of quark Cooper pairs. The corresponding
corrections, however, are shown to enter Re$\,\phi$ only beyond subleading order. In the
following it will be assumed that Im$\,\Sigma$ enters Re$\,\phi$ through Im$\,\phi$ also only
beyond subleading order. Consequently, Im$\,\Sigma$ will be neglected completely.
This is self-consistent since it turns out that Im$\,\phi$ itself contributes
only beyond subleading order to Re$\,\phi$.
The main contributions to Im$\,\phi$ are expected to arise from the energy dependence of
the gluon propagator, and not from Im$\,\Sigma$. This amounts to
neglecting the cut of the logarithm in Eq.\ (\ref{wavefunc}) when performing the Matsubara sum
in the complex gap equation (\ref{gapequation1}).
For the sake of definiteness, a two-flavor color superconductor is considered, where the
color-flavor-spin structure of the gap matrix is \cite{DHRreview,reuter}
\begin{equation} \label{gapmatrix}
\Phi^+(K) = J_3 \tau_2 \gamma_5\, \Lambda_{\bf k}^+ \,
\Theta(\Lambda_q -|k-\mu|)\, \phi(K)\;.
\end{equation}
The matrices $(J_3)_{ij} \equiv -i \epsilon_{ij3}$ and $(\tau_2)_{fg} \equiv-i \epsilon_{fg}$
represent the fact that quark pairs condense in the color-antitriplet, flavor-singlet channel.
Then the anomalous propagator reads
\begin{equation}\label{Xi2}
\Xi^+(Q) = J_3 \tau_2 \gamma_5 \, \Lambda_{\bf q}^- \,
\Theta(\Lambda_q - |q-\mu|) \, \frac{\phi(Q)}{[q_0/Z(q_0)]^2 - \epsilon_q^2}\;,
\end{equation}
where
\begin{eqnarray}\label{gapexcite}
{\epsilon_\mathbf{k}} = \sqrt{(k-\mu)^2 + \phi^2}\;.
\end{eqnarray}
Here we employed the analytical continuation $|\phi|^2 \rightarrow \phi^2$
\cite{ren,schrieffer,mahan}. Besides its poles at
$q_0 =\pm Z(\epsilon_q)\epsilon_q \equiv \pm \tilde \epsilon_q$ the anomalous propagator
$\Xi^+$ obtains further non-analyticities along the real $q_0$-axis through the complex gap
function $\phi$.
As presented more explicitly in Appendix \ref{disper}, the energy dependence of the gap function
$\phi(K)$ gives rise to a non-trivial spectral density
\begin{eqnarray}\label{specphi1}
\rho_\phi(\omega,\mathbf{k}) &\equiv & \frac{1}{2\pi i}\,\left[\phi(\omega+i\epsilon,\mathbf{k})-
\phi(\omega-i\epsilon,\mathbf{k})\right]\;,
\end{eqnarray}
which is directly related to the imaginary part of the gap function via
\begin{eqnarray}\label{imphidef}
{\rm Im}\,\phi(\omega+i\epsilon,\mathbf{k})&=& {\pi}\,\rho_\phi(\omega,\mathbf{k})\;.
\end{eqnarray}
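Under the assumption (cf.\ Appendix \ref{disper}) that $\phi(k_0,\mathbf{k})$ is analytic off the real energy axis and tends to a real, energy-independent function $\hat\phi(\mathbf{k})$ for $|k_0|\to\infty$, this spectral density determines the gap function through the standard dispersion relation

```latex
\phi(k_0,\mathbf{k})=\hat\phi(\mathbf{k})
  +\int_{-\infty}^{\infty} d\omega\,
   \frac{\rho_\phi(\omega,\mathbf{k})}{\omega-k_0}\;,
  \qquad {\rm Im}\,k_0\neq 0\;,
```

from which Eqs.\ (\ref{specphi1}) and (\ref{imphidef}) follow with the Dirac identity $(\omega-k_0\mp i\epsilon)^{-1}={\mathcal P}\,(\omega-k_0)^{-1}\pm i\pi\,\delta(\omega-k_0)$.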
For the spectral density of the anomalous quark propagator this yields
\begin{eqnarray}\label{specxi}
\rho_\Xi(\omega,\mathbf{q})&\equiv&
\frac{1}{2\pi i}\left[ \Xi(\omega+i\eta,\mathbf{q}) - \Xi(\omega-i\eta,\mathbf{q}) \right]\nonumber\\
&=&
- Z^2(\omega){\mathcal P}\; \frac{\rho_{\phi}(\omega,\mathbf{q})}{\omega^2-[Z(\omega)\epsilon_\mathbf{q}]^2} -
{\rm sign}(\omega)\,Z^2(\tilde\epsilon_\mathbf{q})\,{\rm Re}\,\phi(\omega+i\eta,\mathbf{q})\,\delta\!
\left(\omega^2 -\tilde\epsilon_\mathbf{q}^2 \right)\;,
\end{eqnarray}
where the cut of $Z(\omega)$ has been neglected. Also the non-analyticities of $\epsilon_q$
can be neglected: In the region $|k-\mu|\sim \phi$ one has $\phi\approx $ Re$\,\phi$,
whereas for $|q-\mu|\sim g\,\mu \gg \phi$ it is ${\epsilon_q}\simeq |q-\mu|$. Hence, in Eq.\ (\ref{specxi})
and in the gap equation (\ref{gapequation1}) one may write
${\epsilon_q} \simeq \sqrt{(q-\mu)^2 + ({\rm Re}\,\phi)^2}$, which is continuous across the
real energy axis. From Eq.\ (\ref{specxi}) it becomes obvious that $\rho_{\phi}(\omega,\mathbf{q})\neq 0$
leads to a broadening of $\rho_\Xi(\omega,\mathbf{q})$ around the quasiparticle pole at
$\omega \equiv \tilde\epsilon_q$. However, in order to
describe the damping of quasiquarks self-consistently, it is necessary to include Im$\,\Sigma$.
Further interesting details of the damping due to Im$\,\phi$ can be found in the recent analysis
of Ref.\ \cite{ren}.
Inserting Eq.\ (\ref{Xi2}) into Eq.\ (\ref{gapequation1}), multiplying from both sides with
$J_3 \tau_2 \gamma_5 \Lambda_{\bf k}^+$,
and tracing over color, flavor, and Dirac degrees of freedom, one finds with
$[\Delta_{0,22}]^{\mu \nu}_{ab} \equiv \delta_{ab}\, \Delta_{0,22}^{\mu
\nu}$ and $[\Delta_{\rm HDL}]^{\mu \nu}_{ab} \equiv \delta_{ab}\,
\Delta_{\rm HDL}^{\mu \nu}$
\begin{eqnarray} \label{gapequation2a}
\phi(K) = \frac{g^2}{3} \, \frac{T}{V} \sum_Q
{\rm Tr}_s \left( \Lambda_{\bf k}^+ \gamma_\mu
\Lambda_{\bf q}^- \gamma_\nu \right)
\Delta^{\mu \nu}(K-Q)\,\tilde\Delta(Q)\,\phi(Q)\;,
\end{eqnarray}
where the remaining traces run over the internal Dirac indices. For convenience, the quark
propagator
\begin{eqnarray}\label{quarkpropcompact}
\tilde\Delta(Q)\equiv \frac{Z^2(q_0)}{q_0^2 - [Z(q_0)\,\epsilon_q]^2} =
\frac{1}{2\tilde\epsilon_\mathbf{q}}\sum\limits_{\sigma=\pm}
\frac{\sigma\,Z^2(q_0)}{q_0 -\sigma Z(q_0)\,\epsilon_\mathbf{q}}
\end{eqnarray}
has been introduced. Before the actual solution of Eq.\ (\ref{gapequation2a}) in the
next section, we first discuss the power-counting scheme of the gap equation in weak coupling.
In order to fulfill the equality in Eq.\ (\ref{gapequation2a}), the integration
over energy and momentum on the r.h.s.\ must yield terms of the order $\phi/g^2$.
After combination with the prefactor $g^2$ they are of order $\phi$, which is
the leading order in the gap equation. Accordingly, terms of order $g\,\phi$ are of
subleading order and terms of order $g^2\phi$ are of sub-subleading order. Until now, the
color-superconducting gap is known up to subleading order, i.e., up to corrections of order $g$
to the prefactor of the gap \cite{DHRreview}.
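Schematically, and only at leading order, this counting works as follows (up to $O(1)$ constants; cf.\ Refs.\ \cite{son,DHRreview}): the unscreened magnetic gluons produce a collinear logarithm $\sim\ln(\mu/\phi)$ on top of the usual BCS logarithm $\sim\ln(\mu/\phi)$, so that

```latex
\phi\;\sim\;\bar g^{\,2}\,\ln^2\!\left(\frac{\mu}{\phi}\right)\phi
\qquad\Longrightarrow\qquad
\ln\frac{\mu}{\phi}\;\sim\;\frac{1}{\bar g}\;\sim\;\frac{1}{g}\;,
```

reproducing the weak-coupling behavior $\phi\sim\mu\,\exp(-c/g)$ with $c=3\pi^2/\sqrt{2}$ \cite{son}.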
\section{Solving the complex gap equation}\label{solving}
\subsection{Derivation of the coupled gap equations of Re$\,\phi$ and Im$\,\phi$}\label{anacont}
In order to derive the coupled gap equations for Re$\,\phi$ and Im$\,\phi$ we
first rewrite the Matsubara sum in Eq.\ (\ref{gapequation2a}) as a contour integral,
\begin{eqnarray}\label{Mdef}
{\mathcal M}^{\ell,t}(k_0,\mathbf{p},\mathbf{q}) \equiv
T \sum\limits_{q_0\neq k_0}\Delta^{\ell,t}(Q-K)\tilde\Delta(Q)\phi(Q)
=\int\limits_{\mathcal C} \frac{dq_0}{2\pi i}\,
\frac{1}{2}\tanh\left(\frac{q_0}{2T} \right)\Delta^{\ell,t}(Q-K)\tilde\Delta(Q)\phi(Q) \;,
\end{eqnarray}
where $\mathbf{p} = \mathbf{q}-\mathbf{k}$ and $\Delta^{\ell}$ denotes the longitudinal and $\Delta^{t}$ the
transverse gluon propagator in pure Coulomb gauge, cf.\ App.\ \ref{specgluons}.
The contribution at $q_0 = k_0$,
where the cut of the gluon propagator is located, has to be omitted.
The contour $\mathcal C$ is shown in Fig.\ \ref{phicontour1}.
\begin{figure}[ht]
\centerline{\includegraphics[width=6cm]{phicontour1.eps}}
\caption{The contour ${\cal C}$ in Eq.\ (\ref{Mdef})
encloses the poles of $\tanh [q_0/(2T)]$ on the
imaginary $q_0$ axis. The additional poles and the cut at $q_0=k_0$ arise from the gluon
propagator, while the two poles on the real axis are due to the quasiquarks, cf.\ Eq.\ (\ref{specxi}).
The as yet undetermined non-analyticities of the gap function on the real $q_0$-axis are indicated
by the shaded area.}
\label{phicontour1}
\end{figure}
In order to introduce the spectral densities of the gap function and of the magnetic
and longitudinal gluon propagators the contour ${\cal C}$
is deformed corresponding to Fig.\ \ref{phicontour2}.
\begin{figure}[ht]
\centerline{\includegraphics[width=6cm]{phicontour2.eps}}
\caption[Deforming the contour ${\cal C}$.]{Deforming the contour ${\cal C}$ to introduce the spectral
densities of $\Delta^{\ell,t}$ and $\phi$, cf.\ Eq.\ (\ref{M1}).}
\label{phicontour2}
\end{figure}
The contour integral can now be
decomposed into three parts
\begin{eqnarray}\label{M}
{\mathcal M}^{\ell,t}(k_0,\mathbf{p},\mathbf{q})= I_\infty^{\ell,t} + I_0^{\ell,t} + I_{k_0}^{\ell,t}\;.
\label{M1}
\end{eqnarray}
The first part, $I_\infty$, is the integral along a circle of asymptotically large radius. It can
be estimated after parameterizing $dq_0 = i|q_0|e^{i\theta}d\theta$ and considering the limit
$|q_0|\rightarrow \infty$. To this end we write without loss of generality, cf.\ Appendix \ref{disper},
\begin{eqnarray}\label{phisplit}
\phi(K)\equiv \tilde\phi(K) +{\hat\phi}(\mathbf{k})\;,
\end{eqnarray}
so that the energy dependence of $\phi(K)$ is contained in $ \tilde\phi(K)$. For asymptotically large
energies, $|k_0|\rightarrow \infty$, we require that $\tilde\phi(K) \rightarrow 0 $ and
$\phi(K) \rightarrow {\hat\phi}(\mathbf{k})$, where ${\hat\phi}(\mathbf{k})$ is a real function of $\mathbf{k}$ only.
Furthermore, we have $\Delta^\ell(Q-K) \rightarrow |\mathbf{q}-\mathbf{k}|^{-2}$
and $\Delta^t(Q-K) \rightarrow -q_0^{-2}$, cf.\ Eq.\ (\ref{longtransprops4}). It follows that the
longitudinal contribution scales as $I_\infty^{\ell}\sim q_0^{-1}$ and the transversal one as
$I_\infty^{t} \sim q_0^{-3}$, and hence that in total $I_\infty^{\ell,t} \rightarrow 0$.
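These scalings can be made explicit. On the large circle $|\tanh[q_0/(2T)]|$ is of order one, and for $|q_0|\rightarrow\infty$ one has $\tilde\Delta(Q)\phi(Q)\sim \hat\phi(\mathbf{q})/q_0^2$, cf.\ Eq.\ (\ref{quarkpropcompact}), so that schematically
\begin{eqnarray}
I_\infty^{\ell}\sim\oint\frac{d\theta}{2\pi}\,|q_0|\,
\frac{1}{|\mathbf{q}-\mathbf{k}|^{2}}\,\frac{\hat\phi(\mathbf{q})}{q_0^{2}}\sim q_0^{-1}
\;,\qquad
I_\infty^{t}\sim\oint\frac{d\theta}{2\pi}\,|q_0|\,
\frac{1}{q_0^{2}}\,\frac{\hat\phi(\mathbf{q})}{q_0^{2}}\sim q_0^{-3}\;,
\end{eqnarray}
both of which vanish for $|q_0|\rightarrow\infty$.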
The integral $I_0^{\ell,t}$ runs along both sides of the real $q_0$ axis,
\begin{eqnarray} \label{I00}
I_0^{\ell,t}=
\int\limits_{-\infty}^\infty \frac{dq_0}{2\pi i}\,
\frac{1}{2}\tanh\left(\frac{q_0}{2T} \right)\Delta^{\ell,t}(Q-K)
\left[\tilde\Delta(q_0+i\eta,\mathbf{q})\phi(q_0+i\eta,\mathbf{q})-
\tilde\Delta(q_0-i\eta,\mathbf{q})\phi(q_0-i\eta,\mathbf{q})\right]\;.
\end{eqnarray}
Applying the Dirac identity and using Eq.\ (\ref{specphi1})
and $\phi(\omega +i \eta)+\phi(\omega-i\eta) = 2{\rm Re}\,\phi(\omega)$ one obtains
\begin{eqnarray}\label{I0}
I_0^{\ell,t}
&=&
\frac{1}{2{\tilde \epsilon_\mathbf{q}}}\sum\limits_{\sigma=\pm}\sigma{\mathcal P}_{\sigma{\tilde \epsilon_\mathbf{q}}}
\int\limits_{-\infty}^\infty dq_0 \frac{1}{2}\tanh\left(\frac{q_0}{2T}\right)
\Delta^{\ell,t}(q_0-k_0,\mathbf{p})Z^2(q_0)\frac{\rho_{\phi}(q_0,\mathbf{q})}{q_0-\sigma{\tilde \epsilon_\mathbf{q}}}\nonumber\\
&&-\frac{1}{2{\tilde \epsilon_\mathbf{q}}}\frac{1}{2}\tanh\left(\frac{{\tilde \epsilon_\mathbf{q}}}{2T}\right)
Z^2(\tilde\epsilon_\mathbf{q})\, {\rm Re}\,\phi({\tilde \epsilon_\mathbf{q}},\mathbf{q}) \sum\limits_{\sigma=\pm}
\Delta^{\ell,t}(\sigma{\tilde \epsilon_\mathbf{q}}-k_0,\mathbf{p})\;,
\end{eqnarray}
where ${\mathcal P}_x$ denotes the principal value with respect to the pole at $x$.
The first term arises from the non-analyticities of the gap function, cf.\ Eq.\ (\ref{specphi1}),
and the second from the poles of the quark propagator at $q_0 = \pm {\tilde \epsilon_\mathbf{q}}$,
cf.\ Eq.\ (\ref{quarkpropcompact}).
The last integral $I_{k_0}^{\ell,t}$ circumvents the non-analyticities
of the gluon propagator as well as the pole of $\tanh$. One finds
\begin{eqnarray}\label{Ik0}
I_{k_0}^{\ell,t}&=&\frac{1}{2{\tilde \epsilon_\mathbf{q}}}\sum\limits_{\sigma=\pm}\sigma{\mathcal P}_{0}
\int\limits_{-\infty}^\infty dq_0 \frac{1}{2}\coth\left(\frac{q_0}{2T}\right)\phi(q_0+k_0,\mathbf{q})\,
Z^2(q_0+k_0)\,\frac{\rho^{\ell,t}(q_0,\mathbf{p})}{q_0-\sigma{\tilde \epsilon_\mathbf{q}}+k_0}\nonumber\\
&& -T\, \Delta^{\ell,t}(0,\mathbf{p}) \,\tilde\Delta(k_0,\mathbf{q})\,\phi(k_0,\mathbf{q})\;.
\end{eqnarray}
The first term is due to the non-analyticities of the longitudinal and magnetic gluon propagators,
$\Delta^{\ell,t}$. The second arises from the pole of $\coth(q_0/2T)$ at $q_0=0$ and corresponds
to the large occupation number density of gluons in the classical limit, $q_0 \ll T$. This
contribution has been shown to be beyond subleading order \cite{rdpdhr} and will therefore be
discarded in the following.
After analytical continuation, $k_0 \rightarrow \omega + i\eta$, the Dirac identity is employed
in order to split the complex gap equation (\ref{gapequation2a}) into its real and imaginary parts.
For the imaginary part one finds, using Eqs.\ (\ref{I0},\ref{Ik0}) and the fact that
Re$\,\phi(q_0)$ and $Z^2(q_0)$ are even functions,
\begin{eqnarray}\label{ImM}
{\rm Im \mathcal M}^{\ell,t}(\omega+i\eta,\mathbf{p},\mathbf{q}) &=&
-\frac{\pi}{4{\tilde \epsilon_\mathbf{q}}}{\rm Re}\,\phi({\tilde \epsilon_\mathbf{q}},\mathbf{q})\,
Z^2(\tilde\epsilon_\mathbf{q})\sum\limits_{\sigma=\pm}\sigma\rho^{\ell,t}
({\omega-\sigma\tilde \epsilon_\mathbf{q}},\mathbf{p})
\left[\tanh\left(\frac{\sigma{\tilde \epsilon_\mathbf{q}}}{2T}\right)
+\coth\left(\frac{{\omega-\sigma\tilde \epsilon_\mathbf{q}}}{2T}\right) \right]\nonumber\\
&&+
\frac{\pi}{4{\tilde \epsilon_\mathbf{q}}}\sum\limits_{\sigma=\pm}\sigma{\mathcal P}
\int\limits_{-\infty}^\infty dq_0
\frac{\rho^{\ell,t}(\omega-q_0,\mathbf{p})\rho_{\phi}(q_0,\mathbf{q})}{q_0-\sigma{\tilde \epsilon_\mathbf{q}}}
Z^2(q_0)\left[\tanh\left(\frac{q_0}{2T}\right) +\coth\left(\frac{\omega-q_0}{2T}\right) \right]
\nonumber\\
&\equiv& {\rm Im \mathcal M}^{\ell,t}_{\cal A}(\omega+i\eta,\mathbf{p},\mathbf{q})+
{\rm Im \mathcal M}^{\ell,t}_{\cal B}(\omega+i\eta,\mathbf{p},\mathbf{q})\,.
\end{eqnarray}
The first term on the r.h.s.\ of Eq.\ (\ref{ImM}), ${\rm Im \mathcal M}^{\ell,t}_{\cal A}$,
contains all contributions to $\phi(K)$ that have already been considered in previous solutions
to subleading order. Here the gap function is always on the quasiparticle mass-shell.
The second term, ${\rm Im \mathcal M}^{\ell,t}_{\cal B}$, is due to the non-analyticities
of $\phi(K)$. It has been neglected in all previous treatments
due to the approximation
\begin{eqnarray}\label{ansatz}
\rho_\Xi(\omega,\mathbf{q})\simeq -{\rm sign}(\omega)\,{\rm Re}\,\phi(\tilde\epsilon_\mathbf{q}+i\eta,\mathbf{q})
\,Z^2(\tilde\epsilon_\mathbf{q})\,\delta\!\left(\omega^2 -\tilde\epsilon_\mathbf{q}^2 \right)\;,
\end{eqnarray}
cf.\ Eq.\ (41) in Ref.\ \cite{rdpdhr}. By doing so, one always forces the gap function
on the r.h.s.\ of the gap equation onto the quasiparticle mass-shell.
The gap equation then takes the standard form, cf.\ Eq.\ (19) of Ref.\ \cite{qwdhr}.
The occurrence of the external energy $\tilde\epsilon_\mathbf{k}$ on the r.h.s., due to the
energy-dependent gluon propagators, indicates that the solution still possesses
some energy dependence, although this dependence is not captured by the Ansatz.
In the limit of small temperatures, $T\rightarrow 0$, the hyperbolic functions
in Eq.\ (\ref{ImM}) simplify, yielding for $\omega > 0$
\begin{eqnarray}\label{ImMT0}
{\rm Im \mathcal M}^{\ell,t}_{T=0}(\omega+i\eta,\mathbf{p},\mathbf{q}) &=&
\frac{\pi}{2{\tilde \epsilon_\mathbf{q}}}\left[\frac{}{}
\!\!-Z^2(\tilde\epsilon_\mathbf{q})\,{\rm Re}\,\phi({\tilde \epsilon_\mathbf{q}},\mathbf{q})\,\rho^{\ell,t}
(\omega-{\tilde \epsilon_\mathbf{q}},\mathbf{p})\,\theta(\omega-{\tilde \epsilon_\mathbf{q}})\right.\nonumber\\
&&\left.
+\sum\limits_{\sigma=\pm}\sigma\,{\mathcal P} \int\limits_{0}^{\omega} dq_0
\;\frac{\rho^{\ell,t}(\omega-q_0,\mathbf{p})\rho_{\phi}(q_0,\mathbf{q})}{q_0-\sigma{\tilde \epsilon_\mathbf{q}}}
\,Z^2(q_0)\right]\\
&\equiv& {\rm Im \mathcal M}^{\ell,t}_{{\cal A},T=0}(\omega+i\eta,\mathbf{p},\mathbf{q})+
{\rm Im \mathcal M}^{\ell,t}_{{\cal B},T=0}(\omega+i\eta,\mathbf{p},\mathbf{q})\;.
\end{eqnarray}
Inserting the Matsubara sums ${\rm Im\mathcal M}^{\ell,t}_{\cal A,B}$ back into
Eq.\ (\ref{gapequation2a}),
\begin{eqnarray}\label{Imphieq}
{\rm Im}\,\phi(\omega+i\eta, \mathbf{k})&=&
\frac{g^2}{3}\int\frac{d^3q}{(2\pi)^3}
\sum \limits _{r=\ell,t}{\rm Tr}_s^r ( k,p, q)\,\left[
{\rm Im\mathcal M}^r_{\cal A}(\omega+i\eta,\mathbf{p},\mathbf{q})+
{\rm Im\mathcal M}^r_{\cal B}(\omega+i\eta,\mathbf{p},\mathbf{q})
\right]\\
&\equiv& {\cal A}(\omega+i\eta, \mathbf{k})+ {\cal B}(\omega+i\eta, \mathbf{k})\label{AB}
\;,
\end{eqnarray}
we have finally derived the gap equation for Im$\,\phi(\omega+i\eta, \mathbf{k})$.
The traces over Dirac space are given by
\begin{subequations} \label{traces}
\begin{eqnarray}
{\rm Tr}_s^{\ell} ( k,p, q)& = & \frac{(k+q)^2 - p^2}{2\, k\,q} \;
, \\
{\rm Tr}_s^{t} ( k,p, q) & = & -2 - \frac{p^2}{2\, k\,q} +
\frac{(k^2-q^2)^2}{2\, k\, q\, p^2}\;.
\end{eqnarray}
\end{subequations}
We have thus found that the imaginary part of the gap function can be split into a term
${\cal A}(\omega+i\eta, \mathbf{k})$, which contains all contributions
from the real part of the gap function on the quasiquark mass-shell,
and a term ${\cal B}(\omega+i\eta, \mathbf{k})$, which contains all new contributions
involving the gap function off the quasiparticle mass-shell,
cf.\ Eq.\ (\ref{AB}).
In the following it will be checked whether the known solution for
Re$\,\phi(\tilde\epsilon_\mathbf{k},\mathbf{k})$, which neglects the contributions contained in
$\cal B$, is self-consistent to subleading order. To this end, we can use the
leading order solution for \mbox{Re$\,\phi(\tilde\epsilon_\mathbf{k},\mathbf{k})$}, which is given by
\cite{DHRreview}
\begin{equation} \label{solution}
\phi(y) \equiv \phi \, \sin\left( \frac{\pi\,y}{2}\right)\,\, ,
\end{equation}
where $\phi$ denotes the value of the gap function on the Fermi surface to leading
logarithmic order in $g$ \cite{son}
\begin{equation} \label{phileading}
\phi \sim \mu \exp\left(-\frac{\pi}{2\, \bar{g}}\right) \;.
\end{equation}
The variable $0\leq y \leq 1$ defines the distance from the Fermi surface through the mixed scale
\begin{eqnarray}\label{Lambday}
\Lambda_y \equiv \phi^y M^{1-y}\;,
\end{eqnarray}
where $M\sim g\mu$ is defined in Eq.\ (\ref{Mconst}). A given value of $y$ corresponds to
momenta $\mathbf{k}$ with $|k-\mu|\sim \Lambda_{y}$. Note that $\Lambda_1=\phi$ and
$\Lambda_0=M$. Furthermore, we have $\Lambda_{\bar g} \sim e^{-\pi/2}\,M$, which is smaller
but still of the order of $M$.
Correspondingly, we refer to quarks with momenta $|k-\mu| \sim \Lambda_{1\geq y>\bar g}$ as
{\it exponentially close} to the Fermi surface and to quarks with $|k-\mu| \sim \Lambda_{\bar g\geq y\geq 0}$
as {\it farther away} from the Fermi surface.
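The estimate $\Lambda_{\bar g}\sim e^{-\pi/2}\,M$ quoted above follows directly from Eqs.\ (\ref{phileading}) and (\ref{Lambday}): with $\phi\sim\mu\,e^{-\pi/(2\bar g)}$ and $M\sim g\mu$ one finds
\begin{eqnarray}
\Lambda_{\bar g}=\phi^{\bar g}M^{1-\bar g}\sim
\left(\mu\,e^{-\pi/(2\bar g)}\right)^{\bar g}M^{1-\bar g}
=e^{-\pi/2}\,M\left(\frac{\mu}{M}\right)^{\bar g}\;,
\end{eqnarray}
where the last factor, $(\mu/M)^{\bar g}\sim g^{-\bar g}$, tends to one for $g\rightarrow 0$, since $\bar g$ is proportional to $g$.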
Inserting $\phi(y)$ into ${\cal A}(\omega+i\eta, \mathbf{k})$ one can estimate
${\cal A}(\omega+i\eta, \mathbf{k})$ for different energy and momentum regimes. This is done in
Sec.\ \ref{estA}. In
the second iteration the part ${\cal B}(\omega+i\eta, \mathbf{k})$
is estimated by inserting $\rho_\phi\simeq {\cal A}/\pi$ into the
expression for ${\cal B}(\omega+i\eta, \mathbf{k})$, which is done in Sec. \ref{estB}.
In Sec.\ \ref{hilbert} these estimates are used to write
\begin{eqnarray}\label{splitting}
{\rm Re}\,\tilde\phi(\epsilon_\mathbf{k},\mathbf{k}) =
\frac{1}{\pi}\mathcal P\left[\;\int\limits_0^{\Lambda_1}+
\int\limits _{\Lambda_1}^{\Lambda_{\bar g}}+
\int\limits _{\Lambda_{\bar g}}^{\Lambda_0}+\cdots\right] d\omega \,\sum\limits_{\sigma=\pm}
\frac{{\cal A}(\omega+i\eta,\mathbf{k})+{\cal B}(\omega+i\eta,\mathbf{k})}{\omega-\sigma\epsilon_\mathbf{k}}\;,
\end{eqnarray}
cf.\ Eq.\ (\ref{disp1}),
where the integral over $\omega$ has been split according to the different energy
regimes of the estimates for $\cal A$ and $\cal B$, cf.\ Table \ref{tablesummary}.
Then, according to the discussion after Eq.\ (\ref{quarkpropcompact}),
terms of order $g^n\phi$ contribute to the (sub)$^n$-leading order
to ${\rm Re}\,\tilde\phi(\epsilon_\mathbf{k},\mathbf{k})$. The main results of this analysis are summarized
in Table \ref{tablesummary} for momenta close to the Fermi surface, $|k-\mu|\ll M$.
\begin{table}
\centerline{\begin{tabular}[t]{|c||c|c|c|c|c|}
\hline
& ~$\omega\lesssim\Lambda_1$~ &~ $\omega \sim\Lambda_{1>y>\bar g}$~ &
~ $\omega\sim \Lambda_{\bar g>y>0}$ ~&
~ $\omega\sim m_g +\Lambda_{1>y>0}$~ &~ $M\lesssim\omega\lesssim 2\mu$ ~
\\ \hline\hline
~dominant gluons~ & $t$-cut & $t$-cut & $t,\ell$-cut & $\ell$-pole & $t$-pole \\ \hline
~ ${\cal A}$~ & ~ $g^2\phi$ ~ & ~ $g\,\phi\,\cos\left(\frac{\pi\,y}{2}\right)$ ~ &
$g\,\phi$ &~ $g\,\phi\left(\frac{M}{\phi}\right)^y$ ~ & $g\,\phi$ \\ \hline
~ ${\cal B}$~ & ~ $g^2{\cal A}$ ~ & ~$g^2{\cal A}$~ & $g{\cal A}$ &
~ $g^2{\cal A}$ ~ & $g{\cal A}$ \\ \hline
~ ${\cal H}\,[{\cal A}]$~ & ~ $g^2\phi$ ~ & ~$\phi$~ & $g\,\phi$ &
~ $\phi$ ~ & $g\,\phi$ \\ \hline
~ ${\cal H}\,[{\cal B}]$~ & ~ $g^4\phi$ ~ & ~$g^2\phi$~ & $g^2\phi$ &
~ $g^2\phi$ ~ & $g^2\phi$ \\ \hline
\end{tabular}}
\caption{Estimates for the terms ${\cal A}$ and ${\cal B}$, cf.\ Eq.\ (\ref{AB}),
and for ${\cal H}\,[{\cal A}]$ and ${\cal H}\,[{\cal B}]$, cf.\ Eq.\ (\ref{splitting}),
for different energy scales and $|k-\mu|\ll M$. The gluon sectors dominating the
respective energies are indicated.}
\label{tablesummary}
\end{table}
The columns correspond to the various energy regimes of these estimates. In the first
row the dominant gluon sectors are given.
The cut of the transversal gluons gives the dominant contribution to ${\cal A}$ and $ {\cal B}$
for energies smaller than the scale $M$. At the scale $M$, the longitudinal and the transversal cut
contribute with the same magnitude. At that energy scale also the poles of the gluons start to
contribute as soon as $\omega > m_g$. At energies just above $m_g$ the longitudinal pole dominates
over the magnetic pole,
whereas for larger energies up to $2\mu$ the transversal pole gives the leading contribution.
The respective orders of magnitude of ${\cal A}$ and $ {\cal B}$, estimated at the various
energy scales, are given in the subsequent rows.
It is found that either ${\cal B} \sim g {\cal A}$ or ${\cal B} \sim g^2 {\cal A}$, and therefore ${\cal B} \ll {\cal A}$.
Finally, the orders of magnitudes of the contributions to the Hilbert transforms
${\cal H}\,[{\cal A}]$ and ${\cal H}\,[{\cal B}]$, obtained by integrating over the respective energy scales,
cf.\ Eq.\ (\ref{splitting}), are listed. When calculating the Hilbert transform
${\cal H}\,[{\cal A}]$ up to energies $\omega \sim \Lambda_{\bar g}$,
the transversal cut gives a contribution of order $\phi$, which is of leading order.
Since ${\cal B}\sim g^2{\cal A}$ for these energies, ${\cal H}\,[{\cal B}] \sim g^2\phi$, i.e.,
the contributions from $\cal B$ are beyond subleading order.
Integrating over larger energies $\omega \sim M$, it turns out that ${\cal H}\,[{\cal A}] \sim g\,\phi$,
which gives a subleading-order contribution. Since ${\cal B}\sim g{\cal A}$ in this energy regime, the
corresponding contribution from ${\cal H}\,[{\cal B}]$ is again beyond subleading order.
For the contributions from the poles one finds that, in the region
$\omega = m_g +\Lambda_{1>y>0}$, a contribution of order $\phi$ is generated by ${\cal H}\,[{\cal A}]$
(which combines with $\hat\phi(\mathbf{k})$ to give a contribution of order $g\,\phi$ in total, i.e., of subleading order).
Since in this energy regime ${\cal B}\sim g^2{\cal A}$, the corresponding contribution
${\cal H}\,[{\cal B}]$ is again only of order $g^2 \phi$, i.e., beyond subleading order.
Integrating over large energies up to $2\mu$, ${\cal H}\,[{\cal A}]$ gives a contribution of order $g\,\phi$, which
is of subleading order. Since in this regime ${\cal B}\sim g{\cal A}$, it follows that ${\cal H}\,[{\cal B}]$ is
beyond subleading order.
In Sec.\ \ref{phi0} it is shown that also ${\hat\phi}(\mathbf{k})\sim \phi$ and that the imaginary part of the
gap function again contributes to ${\hat\phi}(\mathbf{k})$ only at order $g^2\phi$, i.e., beyond subleading order.
This finally proves that the imaginary part of the gap function enters the real part only beyond subleading order.
Hence, to subleading accuracy, the real part of the gap equation can be solved self-consistently neglecting the
imaginary part of the gap function for quark momenta exponentially close to the Fermi surface.
On the other hand, the real part of the gap function always enters the imaginary part of the gap function
at leading order. For energies for which ${\cal B}\sim g^2{\cal A}$, the imaginary part of the
gap function can be calculated to subleading order from the real part alone. In particular, this is the case for
the regime of small energies. Furthermore, since for these energies only the cut of magnetic gluons contributes,
Im$\,\phi$ can be calculated to subleading order without much effort, which is done in Sec.\ \ref{calcimphi}.
In Sec.\ \ref{repro}, we reproduce the known gap equation for Re$\,\phi(\tilde\epsilon_\mathbf{k},\mathbf{k})$ by
Hilbert transforming ${\cal A}(\omega+i\eta, \mathbf{k})$, cf.\ Eq.\ (\ref{disp1}),
and adding the energy independent gap function $\hat\phi(\mathbf{k})$, cf.\ Eq.\ (\ref{phisplit}).
\subsection{Estimating ${\cal A}$ }\label{estA}
For the purpose of estimating the various terms contributing to $\cal A$, cf.\ Eqs.\
(\ref{ImMT0}-\ref{AB}), one may restrict oneself to the leading
contribution of the Dirac traces in Eq.\ (\ref{gapequation2a}), which is of order one. The integral over the
absolute magnitude of the quark momentum is $\int dq \, q^2$, while the angular integration is
$\int d \cos \theta \equiv \int d p \, p / (kq)$. Furthermore, we estimate $Z^2(\tilde\epsilon_\mathbf{q})\sim 1$.
The contribution to $\cal A$
from ${\cal M}^{\ell,t}_{{\cal A},T=0}$ in Eq.\ (\ref{ImMT0}),
which arises from the cut of the soft gluon propagator, is
\begin{eqnarray}\label{Acut}
{\cal A}^{\ell,t}_{\rm cut}(\omega,\mathbf{k})&\sim&g^2\int\limits_0^\delta\frac{d\xi}{\epsilon_\mathbf{q}}\,{\rm Re}\,
\phi(\epsilon_\mathbf{q},\mathbf{q})\int\limits_\lambda^{\Lambda_{\rm gl}} dp\, p \,\rho^{\ell,t}_{\rm cut}(\omega^*,\mathbf{p})\;,
\end{eqnarray}
where $\omega^*\equiv\omega-\epsilon_\mathbf{q}$ and $0<\omega^*<\omega$. Furthermore,
$\delta\equiv{\rm min}(\omega,\Lambda_{\rm q})$ and
$\lambda\equiv {\rm max}(|\xi -\zeta|,\omega^*)$
with $\zeta \equiv k-\mu$ and $\xi \equiv q-\mu$. In the effective theory,
both $|\zeta|$ and $|\xi|$ are bounded by
$\Lambda_{\rm q} \sim g\mu$. Due to the condition $\lambda<p<\Lambda_{\rm gl}$
it follows that ${\cal A}^{\ell,t}_{\rm cut}= 0$ for
$\omega >\Lambda_{\rm gl}+\Lambda_{\rm q}\simeq \mu$. For
the purpose of power counting it is sufficient to employ the following
approximative forms for
$\rho^{\ell,t}_{\rm cut}$,
cf.\ Eqs.\ (\ref{lcut},\ref{tcut}),
\begin{subequations} \label{appcut}
\begin{eqnarray} \label{appcutt}
\rho_{\rm cut}^t (\omega^*, {\bf p}) &\simeq& \frac{M^2}{\pi} \,
\frac{\omega^* \, p}{p^6 + (M^2\, \omega^*)^2}\;,\\
\rho^{\ell}_{\rm cut}(\omega^*,{\bf p}) &\simeq&
\frac{2 M^2}{\pi}\, \frac{\omega^*}{p}\,\frac{1}{
( p^2 + 3\, m_g^2 )^2} \;. \label{appcutl}
\end{eqnarray}
\end{subequations}
These approximations reproduce the correct behavior for $\omega^*\ll p\ll m_g$, while for
$\omega^* < p < \Lambda_{\rm gl}$ they give at least the right order of magnitude.
With that the integration over $p$ can be performed analytically.
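For the transverse part this step is elementary: substituting $u=p^3$ in Eq.\ (\ref{appcutt}) yields
\begin{eqnarray}
\int\limits_\lambda^{\Lambda_{\rm gl}} dp\, p\,\rho^t_{\rm cut}(\omega^*,\mathbf{p})
\simeq\frac{M^2\omega^*}{\pi}\int\limits_\lambda^{\Lambda_{\rm gl}}
dp\,\frac{p^2}{p^6+(M^2\omega^*)^2}
=\frac{1}{3\pi}\left[\arctan\left(\frac{\Lambda_{\rm gl}^3}{M^2\omega^*}\right)
-\arctan\left(\frac{\lambda^3}{M^2\omega^*}\right)\right]\;,
\end{eqnarray}
which is the origin of the arctangents in Eq.\ (\ref{t}).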
For energies $\omega <\Lambda_{\rm gl}$ one finds for the transverse part
\begin{eqnarray}\label{t}
{\cal A}^t_{\rm cut}(\omega,\mathbf{k})&\sim&g^2\int\limits_0^{\delta}\frac{d\xi}{\epsilon_\mathbf{q}}
\,{\rm Re}\,\phi(\epsilon_\mathbf{q},\mathbf{q})
\left[\arctan\left(\frac{\Lambda_{\rm gl}^3}{M^2\omega^*}\right)-\arctan\left(\frac{\lambda^3}
{M^2\omega^*}\right)\right]
\;.
\end{eqnarray}
For all $\zeta \leq \Lambda_{\rm q}$ and $\omega \leq \Lambda_{\rm gl}$ one has
$\Lambda_{\rm gl}^3/(M^2\omega^*)\gg 1$ and the first arctangent in the square brackets
may be set equal to $\pi/2$. In the case that $\omega \gg M$ one has
$\lambda^3/(M^2\omega^*)\gg 1$ and the two arctangents cancel. If $\omega \sim M$, on the other
hand, the arctangents combine to a number of order $1$. Finally, in the case that
$\omega \sim \Lambda_{y}$ with $0\leq y \leq 1$, one always has $0\leq \xi\leq \Lambda_{y}$ due to the
theta-function in Eq.\ (\ref{ImMT0}). Moreover, it turns out that the arctangents cancel if
$ \zeta > \Lambda_{y/3}$, since then
$\lambda^3/(M^2\omega^*)\simeq \zeta^3/[M^2 (\omega-\xi)] \gg1$ for all
$\xi <\omega$.
In the case that $\omega\sim\phi$, the integral over $\xi$ does not yield the BCS logarithm,
cf.\ Appendix \ref{nonBCS}, and we find
\begin{eqnarray}\label{Aphi}
{\cal A}^t_{\rm cut}(\phi,\mathbf{k})&\sim&g^2\phi\;.
\end{eqnarray}
For larger energies $\omega\sim\Lambda_y$ with $0\leq y<1$ one substitutes
$\xi(y^\prime)\equiv \Lambda_{y^\prime}$, $d\xi/\xi = \ln(\phi/M)\,dy^\prime$ and obtains
with Eq.\ (\ref{solution})
\begin{eqnarray}
{\cal A}^t_{\rm cut}(\omega,\mathbf{k})&\sim& g^2\,\ln\left(\frac{\phi}{M}\right) \phi\int\limits_1^y dy^\prime\,
\sin\left( \frac{\pi\,y^\prime}{2}\right)\sim g\,\phi\,\cos\left( \frac{\pi\,y}{2}\right)\;.\label{Aphi2}
\end{eqnarray}
Here, the integration over $0\leq \xi \leq \Lambda_1$ has been neglected, since it
gives at most a contribution of order
$g^2\phi$, cf.\ Eq.\ (\ref{Aphi}).
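As a consistency check of Eq.\ (\ref{Aphi2}), the elementary integral evaluates to
\begin{eqnarray}
\int\limits_1^y dy^\prime\,\sin\left(\frac{\pi\,y^\prime}{2}\right)
=-\frac{2}{\pi}\left[\cos\left(\frac{\pi\,y}{2}\right)-\cos\left(\frac{\pi}{2}\right)\right]
=-\frac{2}{\pi}\,\cos\left(\frac{\pi\,y}{2}\right)\;,
\end{eqnarray}
while $\ln(\phi/M)\sim -\pi/(2\bar g)$ to leading order, cf.\ Eq.\ (\ref{phileading}). The prefactor thus becomes $g^2\ln(\phi/M)\sim -g^2/\bar g\sim -g$ up to numerical factors, reproducing ${\cal A}^t_{\rm cut}\sim g\,\phi\,\cos(\pi y/2)$.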
In the longitudinal sector one finds for the integral over the gluon momentum $p$
\begin{eqnarray}\label{pintlong}
{\cal I}(\lambda)&\equiv& M^2\int\limits_\lambda^{\Lambda_{\rm gl}} \frac{dp}{
( p^2 + X^2 )^2}\sim \frac{1}{X}\left[\arctan\left(\frac{\Lambda_{\rm gl}}{X}\right)-
\arctan\left(\frac{\lambda}{X}\right)\right]
- \frac{1}{\Lambda_{\rm gl} }\frac{\lambda^2-\lambda\Lambda_{\rm gl}+X^2}{\lambda^2+X^2}\nonumber\\
&\sim&
\left\{
\begin{array}{c}
\frac{1}{X}~~~~~~,~{\rm for}~\lambda \leq X\\
\frac{\Lambda_{\rm gl}-\lambda}{\Lambda_{\rm gl}\lambda}~~,~{\rm for}~\lambda \gg X
\end{array}
\right.
\;,
\end{eqnarray}
where $X^2\equiv 3m_g^2$. Since $\zeta \leq \Lambda_{\rm q} \sim X$, solely the magnitude of $\omega$ decides
whether $\lambda\leq X$ or $\lambda\gg X$ is realized. It follows that, in contrast to the transversal case,
the order of magnitude of ${\cal A}^\ell_{\rm cut}$ is independent of $\zeta$. Energies
$\omega \sim \Lambda_{0\leq y\leq 1}$ correspond to $\lambda \leq X$. For the special case
$\omega \sim \Lambda_1$ one finds
\begin{eqnarray}
{\cal A}^\ell_{\rm cut}(\omega,\mathbf{k})&\sim&g^2\int\limits_0^{\Lambda_1}\frac{d\xi}{\epsilon_\mathbf{q}}\,
{\rm Re}\,\phi(\epsilon_\mathbf{q},\mathbf{q})\,\,\frac{\omega^*}{M} \sim g^2\,\phi\,\frac{\phi}{M}\;.
\end{eqnarray}
For $\omega \sim \Lambda_{0\leq y< 1}$ one obtains
\begin{eqnarray}\label{Aphi2b}
{\cal A}^\ell_{\rm cut}(\omega,\mathbf{k})&\sim&g\,\phi\int\limits_1^y dy^\prime\,\sin\left( \frac{\pi\,y^\prime}{2}\right)\,
\frac{\omega^*}{M}
\sim g\,\phi\int\limits_1^y dy^\prime\,\sin\left( \frac{\pi\,y^\prime}{2}\right)\,\left[\left(\frac{\phi}{M} \right)^y
-\left(\frac{\phi}{M} \right)^{y^\prime}\right]\nonumber\\
&\sim& g\,\phi\,\cos\left( \frac{\pi\,y}{2}\right) \,\left(\frac{\phi}{M} \right)^y\;.
\end{eqnarray}
Hence, for $y>0$ the term ${\cal A}^\ell_{\rm cut}$ is suppressed by a factor $(\phi/M)^y$ as compared
to ${\cal A}^t_{\rm cut}$ whereas for $y=0$ the longitudinal and the transversal cut contribute at the same order,
${\cal A}^\ell_{\rm cut} \sim {\cal A}^t_{\rm cut}\sim g\,\phi$.
For much larger energies, $M \ll \omega < \mu$, we have $\lambda=\omega^* \simeq \omega \gg X$
(note that $\zeta$ is bounded by $\Lambda_{\rm q}$). It follows with
$\delta = \Lambda_{\rm q} \sim \Lambda_0$
\begin{eqnarray}
{\cal A}^\ell_{\rm cut}(\omega,\mathbf{k})&\sim&g\,\phi\int\limits_1^0 dy\,\sin\left( \frac{\pi\,y}{2}\right)\,
\left(1-\frac{\omega}{\Lambda_{\rm gl}}\right)
\sim g\,\phi \left(1-\frac{\omega}{\mu}\right) \;. \label{Acuthigh}
\end{eqnarray}
Hence, ${\cal A}^\ell_{\rm cut} \gg {\cal A}^t_{\rm cut}$ in this large-energy regime. The results for
${\cal A}^{\ell,t}_{\rm cut}$ are summarized in Tables \ref{tableAcutzeta<M} and \ref{tableAcutzetasimM}.
\begin{table}
\centerline{\begin{tabular}[t]{|c||c|c|c|c|}
\hline
& $\omega\sim\phi$ & $\omega\sim\Lambda_{1>y>0}$ & ~$\omega\sim M$~ & ~$M\ll\omega<\mu$~
\\ \hline\hline
~ ${\cal A}_{\rm cut}^t$ ~ & $g^2\phi$ & $g\,\phi\,\cos\left(\frac{\pi\,y}{2}\right)$ & $g\,\phi$ & 0
\\ \hline
${\cal A}_{\rm cut}^\ell$ & ~ $g^2\phi\,\frac{\phi}{M}$~ &
~$g\,\phi\left(\frac{\phi}{M}\right)^y\cos\left(\frac{\pi\,y}{2}\right)$
~ & $g\,\phi$ & ~$g\,\phi\left(1-\frac{\omega}{\mu}\right)$~
\\ \hline
\end{tabular}}
\caption{Estimates for ${\cal A}_{\rm cut}^{\ell,t}$ at different energy scales and $\zeta\ll M$.}
\label{tableAcutzeta<M}
\end{table}
\begin{table}
\centerline{\begin{tabular}[t]{|c||c|c|c|c|}
\hline
& $\omega\sim\phi$ & $\omega\sim\Lambda_{1>y>0}$ & ~$\omega\sim M$~ & ~$M\ll\omega<\mu$~
\\ \hline\hline
~ ${\cal A}_{\rm cut}^t$ ~ & 0 & 0 & $g\,\phi$ & 0
\\ \hline
${\cal A}_{\rm cut}^\ell$ & ~ $g^2\phi\,\frac{\phi}{M}$~ &
~$g\,\phi\left(\frac{\phi}{M}\right)^y\cos\left(\frac{\pi\,y}{2}\right)$
~ & $g\,\phi$ & $~g\,\phi\left(1-\frac{\omega}{\mu}\right)$~
\\ \hline
\end{tabular}}
\caption{Estimates for ${\cal A}_{\rm cut}^{\ell,t}$ at different energy scales
and $\zeta\sim M$.}
\label{tableAcutzetasimM}
\end{table}
The contributions from the gluon poles to $\cal A$ read analogously to Eq.\ (\ref{Acut})
\begin{eqnarray}
{\cal A}^{\ell,t}_{\rm pole}(\omega,\mathbf{k})&\sim&
g^2\int\limits_0^\delta\frac{d\xi}{\epsilon_\mathbf{q}}\,{\rm Re}
\,\phi(\epsilon_\mathbf{q},\mathbf{q})\int\limits_{|\xi-\zeta|}^{2\mu}
dp\, p \,\rho^{\ell,t}_{\rm pole}(\omega^*,\mathbf{p})\,
\delta[\omega^*-\omega_{\ell,t}(\mathbf{p})]\;.\label{Apole}
\end{eqnarray}
The boundary $p \leq 2\mu$ in the integral over $p$ is due to the constraint
$\xi\leq\Lambda_{\rm q}$, cf.\ Fig.\ \ref{Sphere3}.
\begin{figure}[ht]
\centerline{\includegraphics[width=6cm]{Spheres3.eps}}
\caption{Hard gluon exchange with momentum $p>\Lambda_{\rm gl}\sim \mu$. The quark has
to remain within a layer of width $2\Lambda_{\rm q}\sim g\mu$ around the Fermi surface.
This effectively restricts the hard gluon momentum to
$p< 2\mu+2\Lambda_{\rm q}\lesssim 2\mu$.}
\label{Sphere3}
\end{figure}
Therefore, $\omega_{\ell,t}<2\mu$ as well. It follows that ${\cal A}^{\ell,t}_{\rm pole}=0$ for
$\omega> 2\mu$, since for these energies one always has $\omega^*\simeq \omega> \omega_{\ell,t}$
and the $\delta$-function in Eq.\ (\ref{Apole}) vanishes. Hence, we can restrict the analysis
of Eq.\ (\ref{Apole}) to $m_g < \omega < 2\mu$.
For the transverse sector we approximate
\begin{eqnarray}\label{approxt}
\rho^{t}_{\rm pole}(\omega_{t}(\mathbf{p}),\mathbf{p})\simeq-\frac{1}{2\omega_t(\mathbf{p})}
\end{eqnarray}
and $\omega_{t}(\mathbf{p})\simeq \sqrt{p^2+m_g^2}$ for all momenta $p$ and obtain
\begin{eqnarray}
{\cal A}^{t}_{\rm pole}(\omega,\mathbf{k})&\sim&
g^2\int\limits_0^{\Lambda_{\rm q}}\frac{d\xi}{\epsilon_\mathbf{q}}\,{\rm Re}\,\phi(\epsilon_\mathbf{q},\mathbf{q})\!\!\!\!
\int\limits_{\sqrt{m_g^2+|\zeta-\xi|^2}}^{2\mu}\!\!\!\!\!\!d\omega_t\,\delta(\omega^*-\omega_{t})\;.
\end{eqnarray}
The condition $\omega^*= \omega_t$ can be satisfied only if $\omega >\sqrt{m_g^2+\zeta^2}$.
Furthermore, the condition $\omega^*= \omega_t$ sets an upper boundary
to the integral over $\xi$ given by
$\xi_{\rm max} \equiv {\rm min}\{\Lambda_{\rm q}, (\omega^2-m_g^2-\zeta^2)/2(\omega-\zeta)\}$.
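The second entry in $\xi_{\rm max}$ follows from the mass-shell condition; approximating $\epsilon_\mathbf{q}\simeq\xi$ and $\omega_t\simeq\sqrt{m_g^2+|\xi-\zeta|^2}$, the requirement $\omega^*\geq\omega_t$ reads
\begin{eqnarray}
\omega-\xi\geq\sqrt{m_g^2+(\xi-\zeta)^2}
\quad\Longrightarrow\quad
\xi\leq\frac{\omega^2-m_g^2-\zeta^2}{2(\omega-\zeta)}\;,
\end{eqnarray}
obtained by squaring both sides, whereby the terms quadratic in $\xi$ cancel.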
Hence, the BCS logarithm is generated for energies \mbox{$\omega \sim \sqrt{m_g^2+\zeta^2}+\Lambda_y$}
with $0\leq y<1$, since then $\xi_{\rm max}\sim \Lambda_y $. For such energies we have
\begin{eqnarray}\label{tpole}
{\cal A}^{t}_{\rm pole}(\omega,\mathbf{k})
&\sim&g^2\int\limits_0^{\xi_{\rm max}}\frac{d\xi}{\epsilon_\mathbf{q}}\,{\rm Re}\,\phi(\epsilon_\mathbf{q},\mathbf{q})
\sim g\,\phi\,\cos\left( \frac{\pi\,y}{2}\right)\;.
\end{eqnarray}
For larger energies up to $2\mu$ one has ${\cal A}^{t}_{\rm pole}(\omega,\mathbf{k}) \sim g\,\phi$.
In the longitudinal gluon sector we approximate
\begin{eqnarray} \label{rholapp}
\rho^\ell_{\rm pole}(\omega_\ell(\mathbf{p}), \mathbf{p}) \simeq -\frac{\omega_\ell(\mathbf{p})}{2p^2}
\end{eqnarray}
for gluon momenta $p$ not much larger than $m_g$. Such values of $p$ are guaranteed
if we consider energies of the form \mbox{$\omega = \sqrt{m_g^2+\zeta^2} +\Lambda_{y_1}$}
with $\bar g < y_1 < 1$.
As in the transversal case we simplify $\omega_{\ell}(\mathbf{p})\simeq \sqrt{p^2+m_g^2}$ and find
\begin{eqnarray}\label{lpole}
{\cal A}^{\ell}_{\rm pole}(\omega,\mathbf{k})&\sim&
g^2\int\limits_0^{\xi_{\rm max}}\frac{d\xi}{\epsilon_\mathbf{q}}\,{\rm Re}\,\phi(\epsilon_\mathbf{q},\mathbf{q})
\!\!\!\!\int\limits_{\sqrt{m_g^2+|\xi-\zeta|^2}}^{2\mu}\!\!\!\!\!\! d\omega_\ell\,
\frac{\omega_\ell^2}{\omega^2_\ell-m_g^2}
\,\delta(\omega^*-\omega_{\ell})\;,
\end{eqnarray}
where $\xi_{\rm max}$ is defined as in the transversal case. For the
considered energies one has $\xi_{\rm max}\sim \Lambda_{y_1}$. We find
\begin{eqnarray}\label{l4}
{\cal A}^{\ell}_{\rm pole}(\omega,\mathbf{k})
&\sim& g^2 \int\limits_0^{\xi_{\rm max}}\frac{d\xi}{\epsilon_\mathbf{q}}\,{\rm Re}\,\phi(\epsilon_\mathbf{q},\mathbf{q})
\frac{(\omega-\epsilon_\mathbf{q})^2}{(\omega-\epsilon_\mathbf{q})^2-m_g^2}
\nonumber\\
&\sim& g\,\phi\,\frac{\omega}{\omega-m_g}\,\cos\left( \frac{\pi\,y_1}{2}\right)\;,
\end{eqnarray}
where $\xi_{\rm max}\ll \omega$ and $\omega \agt m_g$ was exploited in order to simplify
the fraction under the integral, $\omega^2/(\omega^2-m_g^2)\sim\omega/(\omega-m_g)$.
Furthermore, the BCS logarithm has cancelled one power of $g$.
Hence, for $\zeta \sim m_g$ one has ${\cal A}^{\ell}_{\rm pole} \sim g\,\phi$. For quarks
exponentially close to the Fermi surface with $\zeta \sim \Lambda_{y_2/2}$ and
$y_2> 2\,\bar g$, we find
\begin{eqnarray}\label{l3}
{\cal A}^{\ell}_{\rm pole}(\omega,\mathbf{k}) \sim g\,\phi
\,\left(\frac{M}{\phi}\right)^y\cos\left( \frac{\pi\,y_1}{2}\right)\;,
\end{eqnarray}
where $y\equiv {\rm min}\{y_1,y_2\}$.
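The enhancement factor $(M/\phi)^y$ in Eq.\ (\ref{l3}) can be traced back to the pole denominator in Eq.\ (\ref{l4}). Expanding $\sqrt{m_g^2+\zeta^2}\simeq m_g+\zeta^2/(2m_g)$ and using $\Lambda_{y_2/2}^2=\Lambda_{y_2}\,M$ (with $m_g\sim M$) one finds
\begin{eqnarray}
\omega-m_g\simeq\frac{\zeta^2}{2m_g}+\Lambda_{y_1}\sim\Lambda_{y_2}+\Lambda_{y_1}\sim\Lambda_{y}\;,
\qquad
\frac{\omega}{\omega-m_g}\sim\frac{M}{\Lambda_{y}}=\left(\frac{M}{\phi}\right)^{y}\;,
\end{eqnarray}
with $y={\rm min}\{y_1,y_2\}$, since $\Lambda_y$ decreases with $y$, so that the smaller index dominates the sum.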
For much larger energies, $\omega \gg m_g$, we have to consider
gluon momenta $p \gg m_g$, for which the spectral density of longitudinal gluons
is exponentially suppressed
\begin{eqnarray}\label{rholpoleexp}
\rho_{\rm pole}^{\ell}(\omega_\ell(\mathbf{p}), \mathbf{p}) \sim \frac{\exp\left(-\frac{2p^2}{3m_g^2} \right)}{p}\;.
\end{eqnarray}
We find for $m_g \ll \omega<2\mu$ with $\omega_\ell(\mathbf{p}) \simeq p$
\begin{eqnarray}\label{lpole2}
{\cal A}^{\ell}_{\rm pole}(\omega,\mathbf{k})&\sim&g^2\int\limits_0^{\Lambda_{\rm q}}
\frac{d\xi}{\epsilon_\mathbf{q}}\,{\rm Re}\,\phi(\epsilon_\mathbf{q},\mathbf{q})
\!\!\!\!\int\limits_{\sqrt{m_g^2+|\xi-\zeta|^2}}^{2\mu}\!\!\!\!\!\!
d\omega_\ell\,\exp\left(-\frac{2\omega_\ell^2}{3m_g^2} \right)
\,\delta(\omega^*-\omega_{\ell})\nonumber\\
&\sim& g^2\phi\int\limits_{\Lambda_1}^{\Lambda_0}\frac{d\xi}{\xi}\,
\exp\left[-\frac{2(\omega-\xi)^2}{3m_g^2} \right]
\sim g\,\phi \,\exp\left(-\frac{2\omega^2}{3m_g^2} \right)\;,
\end{eqnarray}
which is the continuation of Eq.\ (\ref{l4}) to large energies and for all $\zeta \leq \Lambda_{\rm q}$.
The results for ${\cal A}^{\ell,t}_{\rm pole}$ are summarized in Table \ref{tableApolezeta<M}.
\begin{table}
\centerline{\begin{tabular}[t]{|c||c|c|c|}
\hline
&~ $\omega<\sqrt{m_g^2+\zeta^2}~ $
&~ $\omega \sim \sqrt{ m_g^2+\zeta^2}+\Lambda_{1>y_1>\bar g}$
~ & ~$m_g\ll\omega<2\mu$~
\\ \hline\hline
~ ${\cal A}_{\rm pole}^t$ ~ & 0 & $g\,\phi$ & $g\,\phi$
\\ \hline
${\cal A}_{\rm pole}^\ell$ & 0 & $g\,\phi\left(\frac{M}{\phi}\right)^y
\cos\left( \frac{\pi\,y_1}{2}\right)$ &
~ $g\,\phi \,\exp\left(-\frac{2\omega^2}{3m_g^2} \right)$~
\\ \hline
\end{tabular}}
\caption{Estimates for ${\cal A}_{\rm pole}^{\ell,t}$ at different
energy scales and for $\zeta \sim \Lambda_{y_2}$, where $0\leq y_2\leq 1$ and
$y\equiv {\rm min}\{y_1,y_2\}$. }
\label{tableApolezeta<M}
\end{table}
In the following Sec.\ \ref{estB}, these results will be inserted for $\rho_\phi$ into
Eq.\ (\ref{ImMT0}) in order to estimate $\cal B$.
\subsection{Estimating ${\cal B}$} \label{estB}
The term ${\cal B}$ in Eq.\ (\ref{AB}) can be estimated using
\begin{eqnarray}\label{Blt}
{\cal B}^{\ell,t}_{\rm cut}(\omega,\mathbf{k}) \sim g^2\int\limits_0^{\Lambda_{\rm q}} \frac{d\xi}
{\epsilon_\mathbf{q}} \int\limits _0^\omega dq_0 \sum\limits_{\sigma=\pm}
\frac{\sigma\,{\cal A}(q_0 ,\mathbf{q})}{q_0-\sigma \epsilon_\mathbf{q}} \int\limits _\lambda^{\Lambda_{\rm gl}}
dp\,p \,\rho^{\ell,t}_{\rm cut}(\omega^\prime,\mathbf{p})\;,
\end{eqnarray}
for the Landau-damped gluon sector. We introduced
$\omega^\prime\equiv\omega-q_0<\omega$ and
$\lambda\equiv {\max}(|\xi -\zeta|,\omega^\prime)$. Furthermore, we set $Z^2(\omega)\sim 1$.
From the condition
$\lambda<\Lambda_{\rm gl}$ in Eq.\ (\ref{Blt}) and from ${\cal A}(q_0,\mathbf{q}) = 0$ for
$q_0 > 2\mu$ it follows that ${\cal B}^{\ell,t}_{\rm cut}(\omega,\mathbf{k}) = 0$
for $\omega>\Lambda_{\rm gl}+2\mu\sim 3\mu$. Inserting the approximate forms (\ref{appcut}) for
$\rho^{\ell,t}_{\rm cut}$ into Eq.\ (\ref{Blt}), one can perform the integration over $p$ analogously to
Eqs.\ (\ref{t},\ref{pintlong}). In the transverse case one finds
\begin{eqnarray}\label{t2}
\hspace*{-0.6cm}
{\cal B}^t_{\rm cut}(\omega,\mathbf{k})\sim
g^2\int\limits_0^{\Lambda_{\rm q}} \frac{d\xi}{\epsilon_\mathbf{q}} \int\limits _0^\omega dq_0
\sum\limits_{\sigma=\pm}\frac{\sigma\,{\cal A}(q_0,\mathbf{q}) }{q_0-\sigma \epsilon_\mathbf{q}}
\left[\arctan\left(\frac{\Lambda_{\rm gl}^3}{M^2\omega^\prime}\right)-
\arctan\left(\frac{\lambda^3}{M^2\omega^\prime}\right)\right]\;.
\end{eqnarray}
Analogously to Eq.\ (\ref{t}), one first determines the domains of $\omega,\,\zeta$, and
$\xi$ where the arctangents in the square brackets do not cancel.
Since $\omega^\prime<\omega <3\mu$, we have $\Lambda_{\rm gl}^3/M^2\omega^\prime \gg 1$,
and the first arctangent in the square brackets may be set equal to $\pi/2$.
Furthermore, one finds that the argument of the second arctangent is not very large as
long as the conditions $\omega - M \lesssim q_0$ and
$|\xi-\zeta|^3\lesssim M^2(\omega-q_0)$ are
fulfilled. In order to satisfy the first condition, we restrict the integral over $q_0$ to the region
${\rm max}\{0,\omega - M\}< q_0 <\omega$.
The second condition becomes less restrictive if simplified to $|\xi-\zeta|^3\lesssim M^2\omega$.
The resulting estimate for ${\cal B}^{t}_{\rm cut}$ will turn out to be small so that a
more elaborate estimate is not necessary.
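The behavior of the square bracket in Eq.\ (\ref{t2}) can be illustrated numerically: it is close to
$\pi/2$ as long as $\lambda^3\lesssim M^2\omega^\prime \ll \Lambda_{\rm gl}^3$, and it is strongly
suppressed once $\lambda^3\gg M^2\omega^\prime$. A minimal sketch in plain Python, not part of the
derivation; the numerical values of $M$, $\Lambda_{\rm gl}$, and $\omega^\prime$ are purely illustrative:

```python
import math

def bracket(lam, omega_prime, M=1.0, Lam_gl=100.0):
    """Square bracket of Eq. (t2):
    arctan(Lam_gl^3/(M^2 w')) - arctan(lam^3/(M^2 w'))."""
    a = M * M * omega_prime          # a = M^2 * omega'
    return math.atan(Lam_gl**3 / a) - math.atan(lam**3 / a)

w = 1e-3                                         # illustrative omega' << M
open_window = bracket(lam=1e-2, omega_prime=w)   # lam^3 = 1e-6 << M^2 w'
closed_window = bracket(lam=1.0, omega_prime=w)  # lam^3 = 1   >> M^2 w'
print(open_window, closed_window)
```

The first value is within $10^{-3}$ of $\pi/2$ while the second is of order $10^{-3}$, which is the
window structure used above to restrict the $q_0$- and $\xi$-integrations.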
For energies $\omega\sim \phi$ one may use Eq.\ (\ref{Aphi}) to estimate
${\cal A} \sim{\cal A}^t_{\rm cut}\sim g^2\phi$. For $\zeta \ll M$ one has
\begin{eqnarray}
{\cal B}^t_{\rm cut}(\phi,\mathbf{k}) \sim
g^4 \phi \int\limits_0^{\Lambda_{1/3}} \frac{d\xi}{\epsilon_\mathbf{q}}\,
\ln\left| \frac{\epsilon_\mathbf{q}-\phi}{\epsilon_\mathbf{q} + \phi}\right| \sim g^4\phi\;.\label{Bcutphi}
\end{eqnarray}
The logarithm under the integral prevents the generation of the BCS logarithm, cf.\ Appendix
\ref{nonBCS}. For $\zeta \lesssim M$ the integration over $\xi$ is restricted to the region
$|\xi-\zeta|< \Lambda_{1/3}$. This yields the estimate
${\cal B}^t_{\rm cut}(\phi,\mathbf{k})\sim g^4\phi \,(\Lambda_{1/3}/M)
\sim g^4\phi\, (\phi/M)^{1/3}$.
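The absence of the BCS logarithm in Eq.\ (\ref{Bcutphi}) can also be checked numerically. The
sketch below (plain Python, with $\phi=1$ and all prefactors dropped; an illustration, not part of
the derivation) contrasts the plain quasiparticle integral $\int_0^{\Lambda}d\xi/\epsilon_\mathbf{q}$,
which produces the BCS logarithm $\sim\ln(2\Lambda/\phi)$, with the log-weighted integral of
Eq.\ (\ref{Bcutphi}), which saturates at a cutoff-independent constant ($-\pi^2/2$ for this kernel):

```python
import math

def plain_integral(Lam, phi=1.0):
    # int_0^Lam d(xi)/sqrt(xi^2+phi^2) = asinh(Lam/phi): the BCS logarithm
    return math.asinh(Lam / phi)

def weighted_integral(Lam, phi=1.0, n=200000):
    # int_0^Lam d(xi)/eps * ln|(eps-phi)/(eps+phi)|, eps = sqrt(xi^2+phi^2).
    # The substitution xi = phi*sinh(t) turns the integrand into
    # 2*ln(tanh(t/2)), which is then integrated with the midpoint rule.
    T = math.asinh(Lam / phi)
    h = T / n
    return h * sum(2.0 * math.log(math.tanh(0.5 * (i + 0.5) * h))
                   for i in range(n))

for Lam in (1e3, 1e6):
    print(Lam, plain_integral(Lam), weighted_integral(Lam))
```

Raising the cutoff from $10^3$ to $10^6$ increases the plain integral by $\ln 10^3\approx 6.9$,
while the weighted integral stays pinned near $-\pi^2/2\approx -4.93$: the extra logarithm indeed
prevents the BCS logarithm.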
For $\omega\sim \Lambda_y$ with $ 0 \leq y < 1$ we conservatively estimate
${\cal A }\sim {\cal A}^t_{\rm cut} \sim g\,\phi$, cf.\ Eq.\ (\ref{Aphi2}), and obtain
similarly to Eq.\ (\ref{Bcutphi})
\begin{eqnarray}\label{est1}
{\cal B}^t_{\rm cut}(\Lambda_y,\mathbf{k})\sim
g^3 \phi \int\limits_0^{\Lambda_{y/3}} \frac{d\xi}{\epsilon_\mathbf{q}}\,\ln\left|
\frac{\epsilon_\mathbf{q}-\Lambda_y}{\epsilon_\mathbf{q} + \Lambda_y}\right| \sim g^3\phi\;,
\end{eqnarray}
where we assumed $\zeta \ll M$. Again, no BCS logarithm is generated, owing to the additional
logarithm. For $\zeta \lesssim M$ the integration over $\xi$ is restricted to the region
$|\xi-\zeta|< \Lambda_{y/3}$. If we conservatively estimate ${\cal A }\sim g\,\phi$
throughout this region, we obtain
${\cal B}^t_{\rm cut}(\Lambda_y,\mathbf{k})\sim g^3\phi\, (\phi/M)^{y/3}$.
For energies $\omega \sim M$ and larger, the condition $|\xi-\zeta|^3\lesssim M^2\omega$
is fulfilled for all $\xi < \Lambda_{\rm q}$.
Considering $\omega \sim m_g +\Lambda_y$ with $0\leq y <1$ we find that the dominant
contribution comes from ${\cal A} \sim {\cal A}^\ell_{\rm pole}$, cf.\ Eq.\ (\ref{l3}),
when integrating over $m_g+\Lambda_1<q_0<m_g+\Lambda_y$
\begin{eqnarray}\label{est2}
{\cal B}^t_{\rm cut}(m_g+\Lambda_y,\mathbf{k})&\sim&
g^3 \phi \int\limits_0^{\Lambda_{\rm q}} \frac{d\xi}{\epsilon_\mathbf{q}}
\int\limits _{m_g+\Lambda_1}^{m_g+\Lambda_y} dq_0 \sum\limits_{\sigma=\pm}
\frac{\sigma}{q_0-\sigma \epsilon_\mathbf{q}}\,\frac{q_0}{q_0-m_g}\nonumber\\
&\sim &
g^2 \phi \int\limits_0^{\Lambda_{\rm q}} \frac{d\xi}{\epsilon_\mathbf{q}}\,\sum
\limits_{\sigma=\pm}\frac{\sigma\,m_g}{\sigma \epsilon_\mathbf{q}-m_g}\int\limits _1^y dy^\prime
\sim g^2 \phi \int\limits_0^{\Lambda_{\rm q}} d\xi\,\frac{m_g}{\xi^2-m_g^2}\nonumber\\
&\sim& g^2\,\phi\;.
\end{eqnarray}
The contributions from the other gluon sectors are estimated with
${\cal A} \sim g\,\phi$, cf.\ Eqs.\ (\ref{Acuthigh},\ref{lpole}),
\begin{eqnarray}\label{est4}
g^3 \phi \int\limits_0^{\Lambda_{\rm q}} \frac{d\xi}{\epsilon_\mathbf{q}}
\int\limits _{\omega-M}^{\omega} dq_0\,
\sum\limits_{\sigma=\pm}\frac{\sigma}{q_0-\sigma \epsilon_\mathbf{q}}&\sim&
g^3 \phi \int\limits_0^{\Lambda_{\rm q}} \frac{d\xi}{\epsilon_\mathbf{q}} \left(
\ln\left|\frac{\omega-\epsilon_\mathbf{q}}{\epsilon_\mathbf{q}+\omega} \right|
-\ln\left|\frac{\omega-M+\epsilon_\mathbf{q}}{\omega-M-\epsilon_\mathbf{q}} \right| \right)\nonumber\\
&\sim& g^3 \phi \left( \frac{M}{\omega}\right)^2\;,
\end{eqnarray}
where the logarithms again prevent the generation of the BCS logarithm. The factor
$(M/\omega)^2$ arises from expanding the logarithms for $\omega \gg M$.
Hence, for energies of the form $\omega \sim m_g +\Lambda_y$
with $0\leq y <1$ we have
\begin{eqnarray}
\label{est3}
{\cal B}^t_{\rm cut}(\omega,\mathbf{k})&\sim& g^2\phi\;,
\end{eqnarray}
while for larger energies ${\cal A}^\ell_{\rm pole}$ does not contribute anymore
and ${\cal B}^t_{\rm cut}(\omega,\mathbf{k}) \sim g^3\phi\,(M/\omega)^2$,
cf.\ Eq.\ (\ref{est4}).
For the cut of the longitudinal gluons one obtains
\begin{eqnarray}\label{l2}
{\cal B}^\ell_{\rm cut}(\omega,\mathbf{k})&\sim&
g^2\int\limits_0^{\Lambda_{\rm q}} \frac{d\xi}{\epsilon_\mathbf{q}} \int\limits _0^\omega
dq_0 \sum\limits_{\sigma=\pm}\frac{\sigma\,{\cal A}(q_0,\mathbf{q}) }
{q_0-\sigma \epsilon_\mathbf{q}}\;\omega^\prime\;{\cal I}(\lambda)\;,
\end{eqnarray}
where ${\cal I}(\lambda)$ is defined in Eq.\ (\ref{pintlong}).
Analogously to the analysis of ${\cal A}_{\rm cut}^\ell$ one finds for
$\omega \sim \phi$
\begin{eqnarray}
{\cal B}^\ell_{\rm cut}(\phi,\mathbf{k})&\sim& g^4\phi\int\limits_0^{\Lambda_{\rm q}}
\frac{d\xi}{\epsilon_\mathbf{q}} \int\limits _0^\phi dq_0 \sum\limits_{\sigma=\pm}
\frac{\sigma}{q_0-\sigma \epsilon_\mathbf{q}}\,\frac{\phi}{M}
\sim
g^4\phi\,\frac{\phi}{M} \int\limits_0^{\Lambda_{\rm q}} \frac{d\xi}{\epsilon_\mathbf{q}}
\,\ln\left| \frac{\epsilon_\mathbf{q}-\phi}{\epsilon_\mathbf{q} + \phi}\right|\nonumber\\
&\sim&
g^4\phi\,\frac{\phi}{M}\;, \label{prevent2}
\end{eqnarray}
and similarly for $\omega\sim \Lambda_y$ with $ 0\leq y < 1$
\begin{eqnarray}\label{prevent3}
{\cal B}^\ell_{\rm cut}(\Lambda_y,\mathbf{k})\sim
g^3\phi\left(\frac{\phi}{M}\right)^{y}\;.
\end{eqnarray}
For $\omega \sim m_g +\Lambda_y$ with $0\leq y < 1$ we can simplify
$\omega^\prime\, {\cal I}(\lambda)\simeq1$ and find, as in the transverse case, cf.\
Eq.\ (\ref{est2}),
\begin{eqnarray}
{\cal B}^\ell_{\rm cut}(m_g+\Lambda_y,\mathbf{k})&\sim& g^2 \phi \;.
\end{eqnarray}
In the limit of large energies, $M\ll \omega \sim \Lambda_{\rm gl}\sim \mu$, we estimate the
integral over the range $\omega-\Lambda_{\rm gl}<q_0 < \omega$ by substituting
${\cal A} \sim g\,\phi$ and obtain with $\omega^\prime\, {\cal I}(\lambda) \sim 1$
\begin{eqnarray}\label{large1}
{\cal B}^\ell_{\rm cut}(\omega,\mathbf{k})&\sim&
g^3 \phi \int\limits_0^{\Lambda_{\rm q}} \frac{d\xi}{\epsilon_\mathbf{q}}
\int\limits _{\omega-\Lambda_{\rm gl}}^{\omega} dq_0 \sum\limits_{\sigma=\pm}
\frac{\sigma}{q_0-\sigma \epsilon_\mathbf{q}}
\sim g^3 \phi \int\limits_0^{\Lambda_{\rm q}} d\xi
\int\limits _{\omega-\Lambda_{\rm gl}}^{\omega} \frac{dq_0}{q_0^2} \nonumber\\
&\sim& g^3\phi \,\frac{M\Lambda_{\rm gl}}{\omega^2}\sim g^3\phi\, \frac{M}{\omega}\;.
\end{eqnarray}
Hence, ${\cal B}^\ell_{\rm cut}$ also becomes small in the limit of large energies.
The estimates for ${\cal B}^{\ell,t}_{\rm cut}$ are summarized in
Tables \ref{tableBcutzeta<M} and \ref{tableBcutzetasimM}.
\begin{table}
\centerline{\begin{tabular}[t]{|c||c|c|c|c|}
\hline
&~ $\omega\sim\phi $~ &~ $\omega\sim\Lambda_{1>y>0}$~ &
~$\omega\sim m_g+\Lambda_{1>y>0}$~ & ~$m_g\ll\omega<3\mu$~
\\ \hline\hline
~ ${\cal B}_{\rm cut}^t$ ~ & $g^4\phi$ & $g^3\phi$ & $g^2\phi$ &
$g^3\phi\left(\frac{M}{\omega} \right)^2$
\\ \hline
${\cal B}_{\rm cut}^\ell$ & ~$g^4\phi\,\frac{\phi}{M}$~ & ~
$g^3\phi\left(\frac{\phi}{M}\right)^y$ ~ & $g^2\phi$ & ~ $g^3\phi\,\frac{M}{\omega} $~
\\ \hline
\end{tabular}}
\caption{Estimates for ${\cal B}_{\rm cut}^{\ell,t}$ at different energy
scales and $\zeta\ll M$.}
\label{tableBcutzeta<M}
\vspace{0.5cm}
\centerline{\begin{tabular}[t]{|c||c|c|c|c|}
\hline
&~ $\omega\sim\phi $~ &~ $\omega\sim\Lambda_{1>y>0}$~ &
~$\omega\sim m_g+\Lambda_{1>y>0}$~ & ~$m_g\ll\omega<3\mu$~
\\ \hline\hline
~ ${\cal B}_{\rm cut}^t$ ~ & $~g^4\phi\,\left(\frac{\phi}{M}\right)^{1/3}~$ &
$~g^3\phi\,\left(\frac{\phi}{M}\right)^{y/3}$ &
$g^2\phi$ & $g^3\phi\left(\frac{M}{\omega} \right)^2$
\\ \hline
${\cal B}_{\rm cut}^\ell$ & ~$g^4\phi\,\frac{\phi}{M}$~ & ~
$g^3\phi\left(\frac{\phi}{M}\right)^y$ ~ & $g^2\phi$ & ~ $g^3\phi\,\frac{M}{\omega} $~
\\ \hline
\end{tabular}}
\caption{Estimates for ${\cal B}_{\rm cut}^{\ell,t}$ at different energy scales
and $\zeta\lesssim M$.}
\label{tableBcutzetasimM}
\end{table}
In the undamped gluon sector the term ${\cal M}^{\ell,t}_{{\cal B},T=0}$ in Eq.\ (\ref{ImMT0})
gives the contribution
\begin{eqnarray}\label{Bpole}
{\cal B}^{\ell,t}_{\rm pole}(\omega,\mathbf{k}) \sim g^2\int\limits_0^{\Lambda_{\rm q}} \frac{d\xi}
{\epsilon_\mathbf{q}} \int\limits _0^\omega dq_0 \sum\limits_{\sigma=\pm}\frac{\sigma\,
\rho_\phi(q_0,\mathbf{q})}{q_0-\sigma \epsilon_\mathbf{q}} \int\limits _{|\zeta-\xi|}^{2\mu} dp\,p \,
\rho^{\ell,t}_{\rm pole}(\omega^\prime,\mathbf{p})\,\delta[\omega^\prime-\omega_{\ell,t}(\mathbf{p})]\;.
\end{eqnarray}
Due to the restriction $p< 2\mu$, it follows by arguments similar to those for
${\cal B}^{\ell,t}_{\rm cut}$ that ${\cal B}^{\ell,t}_{\rm pole}(\omega,\mathbf{k})=0$ for $\omega >4\mu$.
For the transverse sector we employ approximations analogous to those for
${\cal A}^{t}_{\rm pole}$ and obtain
\begin{eqnarray}
{\cal B}^{t}_{\rm pole}(\omega,\mathbf{k})&\sim& g^2\int\limits_{\Lambda_1}^{\Lambda_0} \frac{d\xi}{\xi}
\int\limits _0^\omega dq_0 \sum\limits_{\sigma=\pm}
\frac{\sigma\,{\cal A}(q_0,\mathbf{q})}{q_0-\sigma \epsilon_\mathbf{q}}
\int\limits _{\sqrt{m_g^2+|\xi-\zeta|^2}}^{2\mu}\!\!\! d\omega_t \,\delta(\omega^\prime-\omega_{t})\;.
\end{eqnarray}
This contribution is non-zero only if $\omega>m_g$. First we consider energies
$\omega \sim m_g + \Lambda_{2y_1}$ and $\zeta \sim \Lambda_{y_2}$, and analyze the
two cases $y_1 <y_2$ and $y_1 >y_2$ separately. In the first case, the condition
$\omega>\sqrt{m_g^2+|\xi-\zeta|^2}$ requires $0<\xi<\Lambda_{y_1}$ and consequently
\begin{eqnarray}
{\cal B}^{t}_{\rm pole}(\omega,\mathbf{k})&\sim&
g^3\phi\int\limits_{\Lambda_1}^{\Lambda_{y_1}} \frac{d\xi}{\xi} \,
\,\ln\left|\frac{\omega-\sqrt{m_g^2+\xi^2}-\xi}{\omega-\sqrt{m_g^2+\xi^2}+\xi}\right|
\sim g^3\phi\;,\label{Bcuttrans1}
\end{eqnarray}
where ${\cal A}\sim g\,\phi$ and the logarithm prevents the BCS logarithm. The second case,
$y_1 >y_2$, leads to the condition
\mbox{$ \Lambda_{y_2} -\Lambda_{y_1} <\xi < \Lambda_{y_2} + \Lambda_{y_1}$}
and we find
\begin{eqnarray}
{\cal B}^{t}_{\rm pole}(\omega,\mathbf{k})&\sim&
g^3\phi\int\limits_{\Lambda_{y_2}-\Lambda_{y_1}}^{\Lambda_{y_2}+\Lambda_{y_1}}
\frac{d\xi}{\xi} \,\
\ln\left|\frac{m_g-\sqrt{m_g^2+\Lambda_{y_2}^2} -\Lambda_{y_2}}
{m_g-\sqrt{m_g^2+\Lambda_{y_2}^2} +\Lambda_{y_2}}\right|
\sim g^3\phi\,\left( \frac{\phi}{M}\right)^{y_1}\;,\label{Bcuttrans2}
\end{eqnarray}
where in the last step the logarithm was estimated to be of order $(\phi/M)^{y_2}$ and the
integral over $\xi$ to be of order $(\phi/M)^{y_1-y_2}$.
For $\omega > 2m_g$ the upper boundary for the integral over $\xi$ is given by
$\Lambda_{\rm q}$ without further restrictions. The integral over $q_0$ runs over values
$q_0 >m_g$ and therefore receives contributions from ${\cal A}^\ell_{\rm pole}$, cf.\ Eq.\ (\ref{l3}).
As a consequence one finds
\begin{eqnarray}\label{est8}
{\cal B}^{t}_{\rm pole}(\omega,\mathbf{k})\sim g^2\phi
\end{eqnarray}
analogously to Eq.\ (\ref{est2}). For $\omega \gg m_g$ the additional contributions from
$2m_g<q_0 <\omega$ are only $\sim g^3 \phi$, as can be seen in the same way as in
Eq.\ (\ref{est4}), and therefore ${\cal B}^{t}_{\rm pole}\sim g^2\phi$. However, for
$\omega\agt 2\mu+2m_g$ the condition $\omega^\prime = \omega_\ell$ can be fulfilled only
for $q_0>\omega - 2\mu\agt 2m_g $, where ${\cal A }\sim g\,\phi$, and one finds
\begin{eqnarray}\label{large2}
{\cal B}^{t}_{\rm pole}(\omega,\mathbf{k})&\sim&
g^3\phi\int\limits_{\Lambda_1}^{\Lambda_0} \frac{d\xi}{\xi} \int\limits _{\omega-2\mu}^\omega dq_0
\sum\limits_{\sigma=\pm}\frac{\sigma}{q_0-\sigma \epsilon_\mathbf{q}}
\sim g^3\phi\int\limits_{\Lambda_1}^{\Lambda_0} d\xi \int\limits _{\omega-2\mu}^\omega
\frac{dq_0}{q_0^2}\nonumber\\
&\sim& g^3\phi\,\frac{M\,2\mu}{\omega^2}\sim g^3\phi\,\frac{M}{\omega}\;.
\end{eqnarray}
In the longitudinal sector the analysis starts similarly with energies
$\omega \sim m_g +\Lambda_{2y_1}$ and $\zeta \sim \Lambda_{y_2}$. Since this restricts
the gluon momentum to $p \lesssim m_g$, we apply Eq.\ (\ref{rholapp}) and obtain
\begin{eqnarray}
{\cal B}^{\ell}_{\rm pole}(\omega,\mathbf{k})
&\sim& g^2\int\limits_{\Lambda_1}^{\Lambda_0} \frac{d\xi}{\xi}\int\limits _0^{\omega} dq_0
\sum\limits_{\sigma=\pm}\frac{\sigma\,{\cal A}(q_0,\mathbf{q}) }{q_0-\sigma \epsilon_\mathbf{q}}
\int\limits _{\sqrt{m_g^2+|\xi-\zeta|^2}}^{\omega} \!\!\!\!\!d\omega_\ell\,\frac{\omega_\ell^2}
{\omega^2_\ell-m_g^2}
\,\delta(\omega^\prime-\omega_{\ell})\;.
\end{eqnarray}
In the case that $y_1< y_2$, the condition $\omega>\sqrt{m_g^2+|\xi-\zeta|^2}$ requires
$0<\xi<\Lambda_{y_1}$ and one finds
\begin{eqnarray}
{\cal B}^{\ell}_{\rm pole}(\omega,\mathbf{k})
&\sim& g^2\int\limits_{\Lambda_1}^{\Lambda_{y_1}} \frac{d\xi}{\xi}\int\limits _0
^{\omega-\sqrt{m_g^2+|\xi-\zeta|^2}} \!\!\!\!\!dq_0\sum\limits_{\sigma=\pm}\frac{\sigma\,
{\cal A}(q_0,\mathbf{q})}{q_0-\sigma \epsilon_\mathbf{q}}\;\frac{(\omega-q_0)^2}{(\omega-q_0)^2-m_g^2} \;.\label{est5}
\end{eqnarray}
Since $q_0\leq\omega-\sqrt{m_g^2+|\xi-\zeta|^2}<\Lambda_{2y_1}$ is much smaller than
$\omega \sim m_g+ \Lambda_{2y_1}$, we neglect $q_0$ against $\omega$ on the r.h.s.\ of
Eq.\ (\ref{est5}). Furthermore, we estimate ${\cal A}\sim g\,\phi$ and obtain similarly to
Eq.\ (\ref{Bcuttrans1})
\begin{eqnarray}
{\cal B}^{\ell}_{\rm pole}(\omega,\mathbf{k})
&\sim& g^3\phi\,\frac{\omega}{\omega-m_g}\sim g^3\phi\,\left(\frac{M}{\phi}\right)^{2y_1}
\;.\label{prevent4}
\end{eqnarray}
The case that $y_1> y_2$ leads to the condition
$\Lambda_{y_2} -\Lambda_{y_1} <\xi < \Lambda_{y_2} + \Lambda_{y_1}$, and we find
similarly to Eq.\ (\ref{Bcuttrans2})
\begin{eqnarray}
{\cal B}^{\ell}_{\rm pole}(\omega,\mathbf{k})
&\sim& g^3\phi\,\left(\frac{\phi}{M} \right)^{y_1}\frac{\omega}{\omega-m_g}\sim
g^3\phi\,\left(\frac{M}{\phi}\right)^{y_1}
\;.
\end{eqnarray}
For larger energies $\omega \agt 2m_g$ the upper boundary of the integral over $q_0$ just
exceeds $m_g$, where ${\cal A} \sim {\cal A}^\ell_{\rm pole}$, cf.\ Eq.\ (\ref{l3}). One
finds analogously to ${\cal B}^t_{\rm pole}$, cf.\ Eq.\ (\ref{est8}), that this gives the main
contribution of order
\begin{eqnarray}
{\cal B}^\ell_{\rm pole}(\omega,\mathbf{k})\sim g^2\phi \;.\label{est6}
\end{eqnarray}
In order to estimate ${\cal B}^{\ell}_{\rm pole}$ for energies $\omega> 2\mu+2m_g$, for which
$2m_g<q_0<\omega$ and ${\cal A}$ is only $\sim g\,\phi$, we have to employ
Eq.\ (\ref{rholpoleexp})
\begin{eqnarray}\label{lpole3}
{\cal B}^{\ell}_{\rm pole}(\omega,\mathbf{k})
&\sim& g^3\phi\int\limits_0^{\Lambda_{\rm q}}\frac{d\xi}{\epsilon_\mathbf{q}}
\int\limits _{2m_g}^\omega dq_0 \sum\limits_{\sigma=\pm}
\frac{\sigma}{q_0-\sigma \epsilon_\mathbf{q}}\int\limits_{m_g}^{2\mu}
d\omega_\ell\,\exp\left(-\frac{2\omega_\ell^2}{3m_g^2} \right)
\,\delta(\omega^\prime-\omega_{\ell})\nonumber\\
&\sim& g^3\phi\int\limits_{\Lambda_1}^{\Lambda_0}d\xi
\,\int\limits _{2m_g}^\omega \frac{dq_0}{q_0^2}\,
\exp\left[-\frac{2(\omega-q_0)^2}{3m_g^2} \right]
\sim g^3\phi\left(\frac{M}{\omega}\right)^2\;.
\end{eqnarray}
In the last step, the integral over $q_0$ was restricted to the region
$\omega -m_g \lesssim q_0 <\omega$ due to the exponential function.
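This last step can be verified numerically: for $\omega\gg m_g$ the Gaussian confines $q_0$ to a
window of width $\sim m_g$ below $\omega$, so the $q_0$-integral behaves like
$\sqrt{3\pi/8}\,m_g/\omega^2$. A minimal sketch (plain Python with $m_g=1$; illustrative only, not
part of the derivation):

```python
import math

def q0_integral(omega, m_g=1.0, n=400000):
    """int_{2 m_g}^{omega} dq0 / q0^2 * exp[-2(omega-q0)^2/(3 m_g^2)],
    evaluated with the midpoint rule."""
    a, b = 2.0 * m_g, omega
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        q0 = a + (i + 0.5) * h
        total += math.exp(-2.0 * (omega - q0)**2 / (3.0 * m_g**2)) / q0**2
    return total * h

omega = 100.0                                          # omega >> m_g
numeric = q0_integral(omega)
analytic = math.sqrt(3.0 * math.pi / 8.0) / omega**2   # ~ m_g / omega^2
print(numeric, analytic)
```

The two values agree at the percent level, and doubling $\omega$ reduces the integral by a factor
of four. Together with $\int_{\Lambda_1}^{\Lambda_0} d\xi\sim M$, and with $m_g$ and $M$ of the same
parametric order, this reproduces the quoted $g^3\phi\,(M/\omega)^2$ scaling.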
The estimates for ${\cal B}^{t,\ell}_{\rm pole}$ are summarized in
Tables \ref{tableBpolezeta<M} and \ref{tableBpolezetasimM}.
\begin{table}
\centerline{\begin{tabular}[t]{|c||c|c|c|c|}
\hline
& ~ $\omega < m_g+\Lambda_1$ ~ & ~$\omega \sim m_g+\Lambda_{1>y\geq0}$~ &
~$2m_g<\omega <2\mu$~
& ~$2\mu<\omega<4\mu$~
\\ \hline\hline
~ ${\cal B}_{\rm pole}^t$ ~ & 0 & $g^3\phi$ & $g^2\phi$ & $g^3\phi\,\frac{M}{\omega}$
\\ \hline
${\cal B}_{\rm pole}^\ell$ & 0 & ~$g^3\phi\left(\frac{M}{\phi}\right)^y$ ~ & $g^2\phi$ & ~
$g^3\phi \,\left(\frac{M}{\omega}\right)^2$~
\\ \hline
\end{tabular}}
\caption{Estimates for ${\cal B}_{\rm pole}^{\ell,t}$ at different
energy scales and $\zeta\ll M$.}
\label{tableBpolezeta<M}
\vspace{1cm}
\centerline{\begin{tabular}[t]{|c||c|c|c|c|}
\hline
& ~ $\omega < m_g+\Lambda_1$ ~ & ~$\omega \sim m_g+\Lambda_{1>y\geq0}$~ &
~$2m_g<\omega <2\mu$~
& ~$2\mu<\omega<4\mu$~
\\ \hline\hline
~ ${\cal B}_{\rm pole}^t$ ~ & 0 & $g^3\phi\,\left(\frac{\phi}{M}\right)^{y/2}$ & $g^2\phi$ &
$g^3\phi\,\frac{M}{\omega}$
\\ \hline
${\cal B}_{\rm pole}^\ell$ & 0 & ~$g^3\phi\left(\frac{M}{\phi}\right)^{y/2}$ ~ & $g^2\phi$ &
~ $g^3\phi \,\left(\frac{M}{\omega}\right)^2$~
\\ \hline
\end{tabular}}
\caption{Estimates for ${\cal B}_{\rm pole}^{\ell,t}$ at different energy
scales and $\zeta\lesssim M$.}
\label{tableBpolezetasimM}
\end{table}
In the following the estimates for ${\cal A}$ and ${\cal B}$ are used to determine the order of
magnitude of ${\cal H}[{\cal A}]$ and ${\cal H}[{\cal B}]$.
\subsection{Estimating ${\cal H}[{\cal A}]$ and ${\cal H}[{\cal B}]$ }\label{hilbert}
In order to determine the order of magnitude of Re$\,\tilde \phi$ and the order of the
corrections due to $\cal B$ we estimate the Hilbert transforms ${\cal H}[{\cal A}]$ and
${\cal H}[{\cal B}]$. For that the quark momentum has to be exponentially close to the Fermi surface,
$\zeta \ll M$, because for quarks farther away from the Fermi surface Im$\,\phi$ cannot be
treated as a correction anymore. Furthermore, in that case the normal self-energy $\Sigma$
has to be accounted for self-consistently, which is beyond the scope of this work.
The integral over $\omega$ in the Hilbert transforms ${\cal H}[{\cal A}]$ and ${\cal H}[{\cal B}]$
is split into the energy regimes which were used to estimate $\cal A$ and $\cal B$,
cf.\ Eq.\ (\ref{splitting}). As explained in the discussion after Eq.\ (\ref{splitting}), we select for each
energy regime the dominant gluon sectors and estimate their respective contributions to
Re$\,\phi$ and Re$\,\tilde\phi$.
At the smallest scale, $0\leq\omega \leq \Lambda_1$, one has
Im$\,\phi \sim {\cal A}^t_{\rm cut} \sim g^2\phi$, cf.\ Eq.\ (\ref{Aphi}), which yields
\begin{eqnarray}
&&\mathcal P\int\limits _0^{\Lambda_1} d\omega \,\sum\limits_{\sigma=\pm}
\frac{{\cal A}^t_{\rm cut}(\omega,\mathbf{k})}{\omega-\sigma\epsilon_\mathbf{k}} \sim g^2 \phi\,
\ln\left(\frac{\phi}{\epsilon_\mathbf{k}}\right)\sim g^2\, \phi\;.
\end{eqnarray}
At the scale $\Lambda_1\leq \omega \leq \Lambda_{\bar g}$ one has
Im$\,\phi \sim {\cal A}^t_{\rm cut} \sim g\,\phi$, cf.\ Eq.\ (\ref{Aphi2}). With the substitution
$d\omega/\omega = \ln(\phi/M)\,dy$ one finds
\begin{eqnarray}\label{hil1}
&&\mathcal P\int\limits _{\Lambda_1}^{\Lambda_{\bar g}} d\omega \,\sum\limits_{\sigma=\pm}
\frac{{\cal A}^t_{\rm cut}(\omega,\mathbf{k})}{\omega-\sigma\epsilon_\mathbf{k}}\sim g\,\phi
\int\limits _{\Lambda_1}^{\Lambda_{\bar g}}\frac{d\omega}{\omega} \sim g\,
\phi \ln\left(\frac{\phi}{M}\right)\int\limits _{1}^{{\bar g}}dy \sim \phi\;.\label{magcon}
\end{eqnarray}
The contribution from ${\cal A}^\ell_{\rm cut} \sim g\,\phi\, (\phi/M)^y$ at the same scale,
cf.\ Eq.\ (\ref{Aphi2b}), can be shown to be much smaller,
\begin{eqnarray}
&&\mathcal P\int\limits _{\Lambda_1}^{\Lambda_{\bar g}} d\omega \,
\sum\limits_{\sigma=\pm}\frac{{\cal A}^\ell_{\rm cut} (\omega,\mathbf{k})}{\omega-\sigma\epsilon_\mathbf{k}}
\sim g\,\phi\,\ln\left(\frac{\phi}{M}\right)\int\limits _{1}^{\bar g} dy \,\left(\frac{\phi}{M}\right)^y
\sim g\,\phi\,\frac{\phi}{M}\;.
\end{eqnarray}
At the scale $\Lambda_{\bar g}\leq \omega \leq \Lambda_0$ one has
Im$\,\phi \sim {\cal A}^t_{\rm cut} \sim {\cal A}^\ell_{\rm cut} \sim g\,\phi$,
cf.\ Eqs.\ (\ref{Aphi2},\ref{Aphi2b}), and one finds
\begin{eqnarray}\label{hil2}
&&\mathcal P\int\limits_{\Lambda_{\bar g}}^{\Lambda_0} d\omega \,\sum\limits_{\sigma=\pm}
\frac{{\cal A}^{\ell,t}_{\rm cut}(\omega,\mathbf{k})}{\omega-\sigma\epsilon_\mathbf{k}}\sim g\,\phi
\int\limits_{\Lambda_{\bar g}} ^{\Lambda_0} \frac{d\omega}{\omega}
\sim \phi\int\limits_{\bar g}^0 dy\sim g\,\phi\;.
\end{eqnarray}
For energies $\omega \agt m_g + \Lambda_y$ with $0\leq y < 1$ one has
Im$\,\phi \sim {\cal A}^\ell_{\rm pole} \sim g\,\phi\, (M/\phi)^y$, cf.\ Eq.\ (\ref{l3}), and one finds with
$d\omega = \ln(\phi/M) \,\Lambda_y\,dy$
\begin{eqnarray}
\mathcal P\int\limits_{m_g+\Lambda_1}^{m_g+\Lambda_0} d\omega \,
\sum\limits_{\sigma=\pm}\frac{{\cal A}^\ell_{\rm pole}(\omega,\mathbf{k})}
{\omega-\sigma\epsilon_\mathbf{k}}
&\sim&
\frac{g\,\phi}{M}\ln\left(\frac{\phi}{M}\right)\int\limits_{1}^{0} dy \,
\Lambda_{y}\left(\frac{M}{\phi}\right)^y
\sim\frac{\phi}{M}\int\limits_{1}^{0} dy \,M \sim \phi\;.\label{elcon}
\end{eqnarray}
For the regime $m_g < \omega < 2\mu$ we have
Im$\,\phi \sim {\cal A}^t_{\rm pole}\sim g\,\phi$,
cf.\ Eq.\ (\ref{tpole}), and obtain
\begin{eqnarray}
\mathcal P\int\limits_{m_g}^{2\mu} d\omega \,\sum\limits_{\sigma=\pm}
\frac{{\cal A}^t_{\rm pole}(\omega,\mathbf{k})}{\omega-\sigma\epsilon_\mathbf{k}}
\sim g\,\phi\int\limits_{m_g}^{2\mu} \frac{d\omega}{\omega} \sim g\,\phi
\,\ln\left(\frac{\mu}{M} \right)\sim {g\,\phi}\;.\label{conlargeomega}
\end{eqnarray}
Finally, integrating over $2\mu < \omega < 4\mu$ with Im$\,\phi \sim
{\cal B}^t_{\rm pole}\sim g^3\phi\,(M/\omega)$, cf.\ Eqs.\ (\ref{large1},\ref{large2}), one obtains
\begin{eqnarray}
\mathcal P\int\limits_{2\mu}^{4\mu} d\omega \,\sum\limits_{\sigma=\pm}
\frac{{\cal B}^t_{\rm pole}(\omega,\mathbf{k})}{\omega-\sigma\epsilon_\mathbf{k}}
\sim g^3\phi\,M\int\limits_{2\mu}^{4\mu} \frac{d\omega}{\omega^2} \sim
g^3\phi \,\frac{M}{\mu} \sim g^4\phi\;.\label{converylargeomega}
\end{eqnarray}
From Eqs.\ (\ref{magcon}) and (\ref{elcon}) we conclude that Re$\,\tilde \phi\sim \phi$.
Furthermore, ${\cal H}[{\cal B}]$ contributes to Re$\,\phi$ only at sub-subleading order. The
corresponding corrections arise from the following sources. The first is ${\cal B}^t_{\rm cut}$,
which is $\sim g^2{\cal A}^t_{\rm cut}$ for $\omega \sim \Lambda_y$ with $\bar g<y<1$.
After Hilbert transformation it yields a contribution of order $g^2\phi$ to Re$\,\tilde \phi$,
cf.\ Eq.\ (\ref{hil1}), and is therefore of sub-subleading order.
For $\omega \sim \Lambda_y$ with $0<y<\bar g$ we have
${\cal B}^{\ell,t}_{\rm cut}\sim g{\cal A}^{\ell,t}_{\rm cut}$. From Eq.\ (\ref{hil2}) it follows that
${\cal H}[{\cal B}^\ell_{\rm cut}]$ and ${\cal H}[{\cal B}^t_{\rm cut}]$ are of sub-subleading order.
For $\omega = m_g+\Lambda_y$ with $0\leq y< 1$ one has
${\cal B}^\ell_{\rm pole}\sim g^2{\cal A}^\ell_{\rm pole}$ and a sub-subleading-order contribution
seems possible, since the corresponding contribution from ${\cal A}^\ell_{\rm pole}$ is $\sim \phi$,
cf.\ (\ref{elcon}). Since the latter, however, combines with ${\hat\phi}$ into a subleading-order term,
cf.\ Sec.\ \ref{repro}, it would be interesting to investigate whether ${\cal B}^\ell_{\rm pole}$
also has an analogous partner with which it cancels in a similar way. Moreover, we found that
${\cal B}^{\ell,t}_{\rm pole}\sim g {\cal A}_{\rm pole}^t$ for $m_g<\omega < 2\mu$. From the
estimate in Eq.\ (\ref{conlargeomega}) we conclude that the corresponding contributions
to Re$\,\phi$ are of sub-subleading order.
The results are summarized in Table \ref{tablesummary}.
In the next section it is analyzed at which order Im$\,\phi$ contributes to the local part of the gap function,
${\hat\phi}$.
\subsection{The contribution of Im$\,\phi$ to ${\hat\phi}$}\label{phi0}
The gap equation for the energy-independent part ${\hat\phi}(\mathbf{k})$ is obtained by considering
the integrals $I_0$ and $I_{k_0}$, cf.\ Eqs.\ (\ref{I0},\ref{Ik0}), in the limit
$|k_0| \rightarrow \infty$. Since $p\lesssim 2\mu$, the gluon spectral densities
$\rho^{\ell,t}(q_0,\mathbf{p})$ are nonzero only for $q_0 \lesssim 2\mu$. Consequently,
the integral over $q_0$ in Eq.\ (\ref{Ik0}) is bounded by $-2\mu<q_0< 2\mu$.
Then, due to the energy denominator under the integral, $I_{k_0}$ tends to zero
as $1/|k_0|$ for $|k_0| \rightarrow \infty$. In the second term on the r.h.s.\ of Eq.\ (\ref{I0})
one has ${\epsilon_\mathbf{q}}<\Lambda_{\rm q} \sim g\mu$. It follows that for $k_0 \gg 2\mu>p$ the
transverse gluon propagator becomes $\Delta^t\sim 1/k_0^2$. In the longitudinal sector one has
$\Delta^\ell \rightarrow -1/p^2$. Hence, in the limit $|k_0| \rightarrow \infty$ only the longitudinal
contribution of the considered term does not vanish. Similarly, one can argue that also in the first
term on the r.h.s.\ of Eq.\ (\ref{I0}) only the contribution from the static electric gluon
propagator remains. Consequently, we find for ${\hat\phi}(\mathbf{k})$
\begin{eqnarray}\label{localgapeq}
{\hat\phi}(\mathbf{k}) &=& \frac{g^2}{3(2\pi)^2} \int\limits _{0}^{\Lambda_{\rm q}}\frac{d\xi}{\epsilon_\mathbf{q}}
\int\limits _{|\xi-\zeta|}^{2\mu}\frac{dp}{p}\,{\rm Tr}_s^\ell (k,p,q)
\left[{\rm Re}\,\phi({\tilde \epsilon_\mathbf{q}},\mathbf{q})\,
Z^2(\tilde\epsilon_\mathbf{q})\tanh\left(\frac{{\tilde \epsilon_\mathbf{q}}}{2T} \right)\right.
\nonumber\\&&
\left. +{\mathcal P}\int\limits _{-\infty}^\infty d\omega \,
\frac{\rho_{\phi}(\omega,\mathbf{q})}{{\tilde \epsilon_\mathbf{q}}-\omega}\,Z^2(\omega)
\tanh\left(\frac{\omega}{2T} \right)\right]\;.
\end{eqnarray}
In the limit $T\rightarrow 0$ the hyperbolic functions simplify. After performing the integral over
$p$ and with ${\rm Tr}_s^\ell (k,p,q)\sim 1$ and $Z^2(\omega)\sim1$ we obtain
\begin{eqnarray}
{\hat\phi}(\mathbf{k}) &\sim& g^2 \int\limits _{\Lambda_1}^{\Lambda_0}\frac{d\xi}{\xi}
\ln\left(\frac{2\mu}{|\xi-\zeta|}\right)
\left[ {\rm Re}\,\phi({\tilde \epsilon_\mathbf{q}},\mathbf{q})
+{\mathcal P}\int\limits _{0}^\infty d\omega \,
\sum\limits_{\sigma=\pm}\,
\frac{\sigma\,\rho_{\phi}(\omega,\mathbf{q})}{\sigma{\tilde \epsilon_\mathbf{q}}-\omega}\right]
\;,\label{phi02}
\end{eqnarray}
where the large logarithm arises from the $p$-integral. With that and assuming $\zeta \ll M$, the
integral containing ${\rm Re}\,\phi({\tilde \epsilon_\mathbf{q}},\mathbf{q})$ is found to be of order $\phi$,
and hence ${\hat\phi}(\mathbf{k})\sim \phi$. The remaining
contribution from $\rho_\phi$ is identical to Eq.\ (\ref{splitting}) up to the extra $\sigma$ due
to the hyperbolic tangent. One can conservatively estimate this term by approximating
$\rho_\phi \sim g\,\phi$ for $0<\omega<4\mu$ and all $\Lambda_1\leq\xi\leq\Lambda_0$, and
adding $\rho_\phi\sim g\,\phi\,(M/\phi)^y$ in the range $\omega\sim m_g+\Lambda_y\,,~ 1>y>0$.
We find that the contributions from $\rho_\phi$ to ${\hat\phi}$ are of order $g^2\phi$ and hence
of sub-subleading order.
This completes the proof that the contributions from Im$\,\phi$ to
${\rm Re}\,\phi(\epsilon_\mathbf{k}, \mathbf{k}) = {\rm Re}\,\tilde\phi(\epsilon_\mathbf{k}, \mathbf{k})+{\hat\phi}(\mathbf{k})$
are in total beyond subleading order.
\subsection{Re$\,\phi(\epsilon_\mathbf{k},\mathbf{k})$ to subleading order}\label{repro}
In the following we recover the real part of the gap equation to subleading order by Hilbert
transforming the imaginary part of the gap equation (\ref{Imphieq}) and adding the equation for the local
gap, $\hat\phi(\mathbf{k})$, Eq.\ (\ref{localgapeq}). This shows how ${\hat\phi}\sim \phi$ and
${\cal H}[{\cal A}_{\rm pole}^\ell]\sim \phi$ combine to a subleading-order contribution.
The gap equation for Re$\,\tilde\phi(\epsilon_\mathbf{k},\mathbf{k})$ reads to subleading order
\begin{eqnarray}
{\rm Re}\,\tilde\phi({ \epsilon_\mathbf{k}},\mathbf{k})&=&
-\frac{g^2}{3(2\pi)^2} \int\limits_0^{\Lambda_{\rm q}}d\xi
\frac{Z^2(\tilde\epsilon_\mathbf{q})}{2\tilde\epsilon_\mathbf{q}}\,{\rm Re}\,\phi({\tilde \epsilon_\mathbf{q}},\mathbf{q})
\,\tanh\left(\frac{\tilde\epsilon_\mathbf{q}}{2T} \right)
\nonumber\\
&&
\times\sum\limits_{\sigma=\pm}\left[
\int\limits_{|\xi-\zeta|}^{\Lambda_{\rm gl}}\!\!dp\,p\left\{{\rm Tr}_s^\ell (k,p,q)\left[\frac{1}{p^2}+
\Delta_{\rm HDL}^{\ell}(\epsilon_\mathbf{k}-\sigma\tilde\epsilon_\mathbf{q},\mathbf{p})\right]+ {\rm Tr}_s^t (k,p,q)
\Delta^{t}_{\rm HDL}(\epsilon_\mathbf{k}-\sigma\tilde\epsilon_\mathbf{q},\mathbf{p})\right\}\right.\nonumber\\
&&\hspace{1.2cm}
+\left.
\int\limits_{\Lambda_{\rm gl}}^{2\mu}dp\,p\,{\rm Tr}_s^t (k,p,q)\,
\Delta^{t}_{0,22}(\epsilon_\mathbf{k}-\sigma\tilde\epsilon_\mathbf{q},\mathbf{p})
\right]\!,\label{rephitilde2}
\end{eqnarray}
where we used $\rho^\ell(\omega, \mathbf{p}) \equiv 0$ for $p>\Lambda_{\rm gl}$ in the effective
theory, cf.\ Eq.\ (\ref{hardspecl}). Furthermore, all terms $\sim \coth$ have been neglected.
Adding Eq.\ (\ref{localgapeq}), the $1/p^2$-term from the soft electric gluon propagator in
Eq.\ (\ref{rephitilde2}) restricts the $p$-integral of ${\hat\phi}$ from $\Lambda_{\rm gl}$ to
$2\mu$. This is the aforementioned cancellation of ${\hat\phi}$ and
${\cal H}[{\cal A}_{\rm pole}^\ell]$, which reduces these terms to the order $g\,\phi$.
After approximating the hard magnetic gluon propagator as
$\Delta^{t}_{0,22}(\epsilon_\mathbf{k}-\sigma\tilde\epsilon_\mathbf{q},\mathbf{p})=
1/p^2 +O(\Lambda_{\rm q}/\Lambda_{\rm gl})$, one can combine it with the remaining
contribution from ${\hat\phi}$. Using
${\rm Tr}_s^\ell (k,p,q)-{\rm Tr}_s^t(k,p,q) =4 +O(\Lambda_{\rm q}/\Lambda_{\rm gl})$,
one finally arrives at Eq.\ (124) of Ref.\ \cite{qwdhr}.
\subsection{${\rm Im}\,\phi(\epsilon_\mathbf{k}+i\eta,\mathbf{k})$ exponentially close to the Fermi surface}\label{calcimphi}
In Sec.\ \ref{estA} and \ref{estB} the contributions $\cal A$ and $\cal B$ to ${\rm Im}\,\phi$ have
been estimated for different regimes of $\omega$ and $\zeta$, cf.\ Tab.\
\ref{tableAcutzeta<M}-\ref{tableBpolezetasimM}. In the case that $\omega \sim \Lambda_y$
with $\bar g<y\leq 1$ and $\zeta < \Lambda_{y/3}$ we found
${\cal A}^t_{\rm cut}$ to be the dominant contribution to ${\rm Im}\,\phi$. The cut of the
longitudinal gluons is suppressed by a factor $(\phi/M)^y$, while the gluon poles do not contribute
at all. In other regions of $\omega$ and $\zeta$ different gluon sectors are shown to be dominant.
Furthermore, for $\omega >2m_g$ the contributions from $\cal B$
would also have to be considered, since there ${\cal B}$ is suppressed relative to ${\cal A}$ only by
one power of $g$ and therefore contributes at subleading order to
${\rm Im}\,\phi$. For $\omega \sim \Lambda_y$
with $\bar g<y \leq1$ and $\zeta < \Lambda_{y/3}$ we find for the imaginary part of the gap
\begin{eqnarray}\label{Imphiexact}
{\rm Im}\,\phi(\omega+i\eta,\mathbf{k})
&\simeq&
\frac{g^2\,\pi}{3(2\pi)^2}\int\limits_{\Lambda_1}^{\omega}\frac{d\xi}{\xi}\,Z^2(\tilde\epsilon_\mathbf{q})\,
{\rm Re}\,\phi({\tilde \epsilon_\mathbf{q}},\mathbf{q})\int\limits_\lambda^{\Lambda_{\rm gl}} dp
\,\frac{2M^2\,\omega^*}{\pi}\,\frac{p^2}{p^6+(M^2\omega^*)^2}\nonumber\\
&\simeq& \frac{g^2\,\pi}{9(2\pi)^2}\ln\left(\frac{\phi}{M}\right)\,\phi \int\limits_{1}^{y}
dy^\prime\,\sin\left(\frac{\pi\,y^\prime}{2} \right) \,\left(1-\frac{\bar g\,\pi\,y^\prime}{2} \right)
\nonumber\\
&=&\bar{g}\,\phi \;\frac{\pi}{2}\cos\left(\frac{\pi\,y}{2} \right)+ {\cal O}(\bar g^2)\;,
\end{eqnarray}
where we substituted $\omega = \Lambda_y$ and $d\xi/\xi = dy^\prime\,\ln(\phi/M)$ and used
$\ln(\phi/M)=-3\pi^2/(\sqrt{2}\,g)$. Furthermore, it was sufficient to approximate
${\rm Tr}_s^t (k,p,q)\simeq -2$. This result agrees with Eq.\ (81) in Ref.\ \cite{ren} where
a different approach is used.
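As a cross-check (not part of the original derivation), the closed form in the last line of Eq.\ (\ref{Imphiexact}) can be compared with a direct numerical evaluation of the $y^\prime$-integral, using $\bar g = g/(3\sqrt{2}\,\pi)$ and $\ln(\phi/M)=-3\pi^2/(\sqrt{2}\,g)$; all quantities below are in units of the gap $\phi$:

```python
import math

def im_phi_numeric(y, g, n=20000):
    """Numerically evaluate the y'-integral in Eq. (Imphiexact),
    including the O(bar_g) correction factor, in units of phi."""
    bar_g = g / (3.0 * math.sqrt(2.0) * math.pi)
    ln_phi_over_M = -3.0 * math.pi ** 2 / (math.sqrt(2.0) * g)
    prefactor = g ** 2 * math.pi / (9.0 * (2.0 * math.pi) ** 2) * ln_phi_over_M
    # midpoint rule for the integral from 1 to y (note y <= 1, so h < 0)
    h = (y - 1.0) / n
    total = 0.0
    for i in range(n):
        yp = 1.0 + (i + 0.5) * h
        total += math.sin(math.pi * yp / 2.0) * (1.0 - bar_g * math.pi * yp / 2.0)
    return prefactor * total * h

def im_phi_closed(y, g):
    """Leading-order closed form: bar_g * phi * (pi/2) * cos(pi y / 2)."""
    bar_g = g / (3.0 * math.sqrt(2.0) * math.pi)
    return bar_g * (math.pi / 2.0) * math.cos(math.pi * y / 2.0)
```

For $g=0.1$ and $y=0.5$ the two expressions agree to about one per cent, consistent with the neglected ${\cal O}(\bar g^2)$ terms.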
\section{Conclusions and Outlook}\label{outlook}
In this work we studied how the non-local nature of the gluonic interaction between quarks at high
densities affects the energy and momentum dependence of the (2SC) color-superconducting gap
function at weak coupling and zero temperature. For this purpose, energy and momentum have
been treated as independent variables in the gap equation.
By analytically continuing from imaginary to real energies and appropriately
choosing the contour of the integral over energies, we split the gap equation into two
coupled equations: one for ${\rm Re}\, \phi$ and one for ${\rm Im}\, \phi$.
In order to solve these equations self-consistently, the gap had to be estimated for all
energies and for all momenta satisfying $|k-\mu| \leq \Lambda_{\rm q}$, where
$\Lambda_{\rm q}\sim g\mu$ is the quark cutoff of the effective theory
employed in this work. For quarks exponentially close to the Fermi surface,
we have proven the previous conjecture that, to subleading order,
one has $\phi = {\rm Re}\, \phi$, where ${\rm Re}\, \phi$ is the known subleading
order solution for the real part of $\phi$, which neglects all contributions arising from the
non-analyticities of $\phi$.
Furthermore, we found that, exponentially close to the Fermi surface and for small energies,
only the cut of the magnetic gluon propagator contributes to Im$\,\phi$. Thus, the analytic solution
of the imaginary part of the gap equation is rather simple, cf.\ Eq.\ (\ref{Imphiexact}). For energies of
order $m_g$, we showed that also the electric cut and the gluon poles contribute to Im$\,\phi$,
cf.\ Tables \ref{tableAcutzeta<M}-\ref{tableBpolezetasimM}. The increase of the imaginary part
with increasing energies can be interpreted as the opening of decay channels for the quasiquark
excitations. The peak Im$\,\phi\sim g^2\mu$ occurring for energies just above $m_g$ reflects the
decay due to the emission of on-shell electric gluons.
Treating energy and momentum independently, the solution also includes Cooper pairs
further away from the Fermi surface, up to $|k-\mu|\sim g\mu$. This becomes important
when one is interested in extrapolating down to more realistic quark chemical potentials
where the coupling between the quarks becomes stronger: With increasing $g$ also quarks
away from the Fermi surface participate in Cooper pairing. These are not included in the
Eliashberg theory, where one assumes that Cooper pairing happens exclusively at the
Fermi surface.
Finally, it would be interesting to generalize our analysis to non-zero temperatures.
The dependence of $\phi(T)/\phi(T=0)$ on $T/T_c$, where $T_c$ is
the critical temperature for the onset of color superconductivity, agrees with
that of a weakly coupled BCS superconductor if one neglects Im$\,\phi$ \cite{rdpdhr}.
For strongly coupled superconductivity in metals
it is known \cite{schrieffer}, however, that Im$\,\phi$ is
significantly modified at non-zero temperatures due to the presence of thermally
excited quasiparticles.
This in turn gives rise to important deviations from a BCS-like behavior of
$\phi(T)/\phi(T=0)$ at energies larger than the gap. An analogous analysis for color
superconductivity would be an interesting topic for future studies.
\section*{Acknowledgments}
I would like to thank Michael Forbes, Rob Pisarski, Hai-cang Ren, Dirk Rischke, Thomas Sch\"afer, Andreas Schmitt,
Achim Schwenk, and Igor Shovkovy for interesting and helpful discussions. I thank the German
Academic Exchange Service (DAAD) for financial support and the Nuclear Theory Group at the
University of Washington for its hospitality.
\section{INTRODUCTION}
\label{sec:intro}
The interaction of a binary companion with an evolved giant star, such as an asymptotic giant branch (AGB) star that is a progenitor of a planetary nebula (PN), has in principle two sources of gravitational energy that might energize the outflow. The first one is the orbital gravitational energy that the binary system releases as the orbital separation between the core of the giant star and the companion decreases.
This is likely to energize the equatorial outflow more than the polar outflow, either when the secondary star is outside the envelope, e.g., by ejecting mass through the second Lagrangian point (e.g., \citealt{Livioetal1979, MastrodemosMorris1999, Pejchaetal2016, Chenetal2017, Pejchaetal2017}), or during the common envelope evolution (e.g., \citealt{Iaconietal2017b, DeMarcoIzzard2017, Galavizetal2017, Iaconietal2017a, Chamandyetal2018, GarciaSeguraetal2018, MacLeodetal2018}, and references to earlier papers therein).
The second energy source is the gravitational energy released by the mass that the more compact secondary star accretes from the envelope of the giant star. The secondary star can be either inside or outside the envelope of the giant star. The most efficient process to carry the accretion energy to the outflow is the launching of jets from an accretion disk. Such jets influence the outflow along and near the polar directions much more than they affect the equatorial outflow.
In recent years the notion that in many cases jets shape the outflows from evolved stars, and in particular from AGB stars that evolve to become PNe, has benefited from progress in several directions. First is the realization that jets shape many PNe. Although suggestions for jet-shaping of some PNe are old (e.g., \citealt{Morris1987, Soker1990AJ}), in the last two decades researchers have realized that jets shape a large fraction of non-spherical PNe (e.g., \citealt{SahaiTrauger1998, Boffinetal2012, HuarteEspinosaetal2012, Balicketal2013, Miszalskietal2013, Tocknelletal2014, Huangetal2016, Sahaietal2016, RechyGarciaetal2016, GarciaSeguraetal2016, Dopitaetal2018}).
Second is the understanding that in many cases the jets operate in a feedback mechanism (see \citealt{Soker2016Rev} for a review).
In the negative-feedback part the jets remove mass from the ambient medium, which serves as the reservoir of accreted gas, hence reducing the accretion rate, which in turn decreases the jets' power (e.g., \citealt{Sokeretal2013, LopezCamaraetal2018}).
The positive-feedback part comes from the removal of energy and gas from the very inner regions of the accretion disk, just near the accreting star. This reduces the pressure in those regions and hence allows a high accretion rate, even at super-Eddington rates (e.g., \citealt{Shiberetal2016, Chamandyetal2018}).
The third direction of progress has been the finding that binary systems shape most, and probably all, PNe (e.g., \citealt{
Akrasetal2016, Alietal2016, Bondetal2016, Chenetal2016, Chiotellisetal2016, GarciaRojasetal2016, Hillwigetal2016a, Jonesetal2016, Madappattetal2016, Chenetal2017, DeMarcoIzzard2017, Hillwigetal2017,JonesBoffin2017, Sowickaetal2017,
Alleretal2018, Barkeretal2018, Bujarrabaletal2018, Ilkiewiczetal2018, Jones2018, Miszalskietal2018b}, for a sample of papers from the last 3 years; for a different model see \citealt{GarciaSeguraetal2005}).
In some cases there is a direct link between the presence of a binary central star and the presence of jets (e.g., \citealt{Boffinetal2012, Miszalskietal2013, Miszalskietal2018a}), and between binary AGB systems and the presence of jets launched by the companion to the AGB star (e.g., \citealt{Thomasetal2013, Gorlovaetal2015, Bollenetal2017, VanWinckel2017}). The finding of binary systems in PNe is relevant to the present study because single AGB stars cannot launch jets, due to their lack of angular momentum; the presence of a binary companion is crucial for launching jets in these progenitors of PNe.
In light of the large number of observations that show that jets shape many PNe and other nebulae around evolved stars, we continue our study of the different morphological structures that jets can form.
We stress here that by jets we refer to any bipolar outflow from the accretion disk, most likely in a binary system. This bipolar outflow can be two opposite narrow jets or a wide bipolar outflow from the accretion disk, and it can be continuous or a chain of clumps. We refer to all of these, and more, as jets.
There are many simulations of jet-shaping of PNe and related nebulae (e.g.,
\citealt{LeeSahai2004, Dennisetal2009, Leeetal2009, HuarteEspinosaetal2012, Balicketal2013, Akashietal2015, Balicketal2017, Akashietal2018, Balicketal2018}), but here we concentrate on a particular morphological feature that we term the columns crown.
In the present paper we use the hydrodynamic code FLASH (section \ref{sec:numerical}) to study the formation of columns, or filaments, that protrude from the lobes that the jets inflate in the direction of the jets' initial velocity.
We describe the formation of the protruding columns, the columns crown, in section \ref{sec:results}.
We are motivated by such thin columns that are observed in the PN Menzel~3 (Mz~3; PN~G331.7-01.0; the Ant nebula). In an earlier paper \citep{AkashiSoker2008a} we showed that a jet interacting with the AGB wind can form a lobe with a front lobe as observed in Mz~3.
We now perform a simulation with a different set of initial conditions that lead to the formation of a delicate structure of the columns crown.
The PN Mz~3 itself is a well studied PN (e.g., \citealt{Clyneetal2015, Alemanetal2018}) that we discuss further in section \ref{sec:summary}, where we summarize our main findings.
\section{NUMERICAL SET-UP}
\label{sec:numerical}
This study is another one in our exploration of the different morphologies that the interaction of jets with an ambient medium can form, with the goal of accounting for the rich variety of morphologies of PNe and other nebulae around evolved stars. We describe here the initial conditions of the simulation that forms more or less straight columns along the polar directions.
We assume that the ambient medium is a spherical slow wind surrounded by a dense slow spherical shell that itself is surrounded by an outer slow wind zone. A giant star formed the dense shell in an episode of intensive mass loss rate. A main sequence binary companion that is outside the envelope of the giant star launches the two opposite jets $\simeq 3000 ~\rm{yr}$ after the intensive mass loss episode that formed the dense shell ended.
We simulate the region far from the binary system, and so we ignore the orbital motion and launch the jets along the symmetry axis. As well, the large distance from the binary system allows us to neglect the gravitational field of the binary system, and so we do not include gravity. In other words, the velocities at which we inject the wind and jets are much larger than the escape speed from the regions we simulate.
We use version 4.2.2 of the hydrodynamical FLASH code \citep{Fryxell2000} with the unsplit PPM (piecewise-parabolic method) solver to perform our 3D hydrodynamical simulations. FLASH is an adaptive-mesh refinement (AMR) modular code used for solving hydrodynamics and magnetohydrodynamics problems.
We include radiative cooling of the optically thin gas, and take the cooling function for solar abundance from \cite{SutherlandDopita1993}. We turn off radiative cooling below a gas temperature of $10^4 ~\rm{K}$.
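To illustrate the cooling treatment, the following sketch advances the temperature of a single optically thin gas cell and switches cooling off below $10^4~\rm{K}$, as we do in the simulation. The power-law cooling function here is a toy stand-in for illustration only; the simulation itself uses the tabulated solar-abundance curve of \cite{SutherlandDopita1993}.

```python
import math

# Toy power-law cooling curve, for illustration only; the simulation uses the
# tabulated solar-abundance function of Sutherland & Dopita (1993).
def lam(T):
    return 1.0e-23 * math.sqrt(T / 1.0e7)      # erg cm^3 s^-1 (assumed form)

def cool_cell(T, n, dt, T_floor=1.0e4, gamma=5.0 / 3.0, k_B=1.380649e-16):
    """Advance the temperature of one optically thin gas cell over a time step
    dt (s), with number density n (cm^-3); cooling is switched off below
    T_floor, as in the simulation."""
    if T <= T_floor:
        return T
    u = n * k_B * T / (gamma - 1.0)            # internal energy density
    u_new = u - n * n * lam(T) * dt            # du/dt = -n^2 Lambda(T)
    T_new = (gamma - 1.0) * u_new / (n * k_B)
    return max(T_new, T_floor)
```

A cell that starts below the floor is left untouched, and a strongly cooling cell is clamped at $10^4~\rm{K}$ rather than allowed to cool further.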
We employ a full 3D AMR (7 levels; $2^{10}$ cells in each direction) using a Cartesian grid $(x,y,z)$ with outflow boundary conditions at all boundary surfaces. We take the $z=0$ plane to be in the equatorial plane of the binary system, that is also the equatorial plane of the nebula, and we simulate the whole space (the two sides of the equatorial plane).
At time $t=0$ we place a spherical dense shell in the zone $10^{17} ~\rm{cm} < r < 1.75 \times 10^{17} ~\rm{cm}$ and with a density profile of $\rho_s = 4 \times 10^{-22} (r/10^{17} ~\rm{cm})^{-2} ~\rm{g} ~\rm{cm}^{-3}$, such that the total mass in the shell is $0.002M_\odot$.
The gas in the shell has an initial radial velocity of $v_s = 10 ~\rm{km} ~\rm{s}^{-1}$.
The spherical expanding shell corresponds to a wind with a terminal velocity of $v_s = 10 ~\rm{km} ~\rm{s}^{-1}$ that the giant star blew for about 2400 years at a mass loss rate of $\dot M_s=8 \times 10^{-7} M_\odot ~\rm{yr}^{-1}$.
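These numbers follow directly from the density profile (a quick consistency sketch, not part of the simulation code); since $\rho_s \propto r^{-2}$, the mass integrand $4\pi r^2 \rho_s$ is constant across the shell:

```python
import math

M_SUN = 1.989e33      # g
YR = 3.156e7          # s

# Shell: rho = 4e-22 (r/1e17 cm)^-2 g cm^-3 between r_in and r_out;
# with rho ~ r^-2 the mass integrand 4 pi r^2 rho is constant.
r_in, r_out = 1.0e17, 1.75e17                                 # cm
m_shell = 4.0 * math.pi * 4.0e-22 * 1.0e34 * (r_out - r_in)   # g
m_shell_msun = m_shell / M_SUN                                # ~ 0.002 M_sun

# Duration of the intensive mass-loss episode that built the shell
t_shell = m_shell_msun / 8.0e-7                               # yr, ~ 2400

# Density contrast between the shell and the slow wind at r_in:
# rho_wind = Mdot / (4 pi r^2 v)
mdot_wind = 1.0e-7 * M_SUN / YR                               # g/s
v_wind = 10.0e5                                               # cm/s
rho_wind = mdot_wind / (4.0 * math.pi * r_in ** 2 * v_wind)
density_ratio = 4.0e-22 / rho_wind                            # ~ 8
```

The shell-to-wind density contrast of about 8 at $r_{\rm in}$ equals the ratio of the two mass-loss rates, as it should for equal wind speeds.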
The regions outside and inside the dense shell are filled with a much lower density gas, the spherically slow wind, with velocity of $v_{\rm wind}=v_s= 10 ~\rm{km} ~\rm{s}^{-1}$ and a constant mass-loss rate of $\dot M_{\rm wind}= 10^{-7}{\rm M_\odot ~\rm{yr}^{-1}}$.
We note that such a shell embedded inside a slow wind is not observed in any PN (e.g., \citealt{Sahaietal2011}). We explain this by the combination of two properties of the flow. Firstly, the ejection of such a shell due to a binary interaction is rare. Secondly, we expect that the interaction of the jets with the shell takes place closer to the center, namely, during a short time after the ejection of the shell. Most likely, the interaction region will be obscured by dust.
The shorter distance and time imply a higher mass loss rate. For example, we can reduce the radius by a factor of about 20, i.e., decrease the shell production time to $120 ~\rm{yr}$, and increase the mass loss rate into the shell to $\dot M_s=1.6 \times 10^{-5} M_\odot ~\rm{yr}^{-1}$. The entire dense shell is then within $10^{16} ~\rm{cm}$, or about $670 ~\rm{AU}$. In that case the lifetime of the dense shell before the interaction is only 300 years, and it might be hidden by dust at the time of the interaction. After the interaction the structure expands ballistically, reaching the same size as we obtain here. Here we simulate one case to present the principal properties of such an interaction.
We launch the two opposite jets from the inner $10^{16} ~\rm{cm}$ zone along the $z$-axis (at $x=y=0$) and within a half opening angle of $\alpha = 25^\circ$. By the term `jets' we refer also to wide outflows, as we simulate here. More generally, we simulate slow-massive-wide (SMW) outflows.
We terminate the launching of the jets at $t=17~\rm{yr}$.
The jets' initial velocity is $v_{\rm jet}=800 ~\rm{km} ~\rm{s}^{-1}$, just a little above the escape speed from a main sequence star. The mass-loss rate into the two jets together is $\dot M_{\rm 2jets} = 2 \times 10^{-6} M_\odot ~\rm{yr}^{-1}$.
For numerical reasons (to avoid very low densities) we inject a very weak slow wind in the directions where we do not launch the jets, i.e., in the sector $\alpha<\theta<90^\circ$ in each hemisphere (for more numerical details see \citealt{AkashiSoker2013}).
The accretion rate in the disk that launches the jets is about 10 times higher than the mass loss rate in the jets, or $2 \times 10^{-5} M_\odot ~\rm{yr}^{-1}$ in the case we simulate, or $ \approx 10^{-4} M_\odot ~\rm{yr}^{-1}$ had we taken the interaction time to be shorter. This is still below the Eddington accretion rate limit on to a main sequence star. The magnitude of the momentum in the two jets is about $5 \times 10^{36} ~\rm{g} ~\rm{cm} ~\rm{s}^{-1}$. This is much below values that are observed in many bipolar PNe (e.g., \citealt{Bujarrabaletal2018}). The jets' velocity we use is somewhat above the speed of observed collimated outflows in PNe, but it is only a little larger than the escape speed from main sequence stars. As well, the jets' velocity in the proto-PN He~3-1475 is $1200 ~\rm{km} ~\rm{s}^{-1}$ (e.g., \citealt{BorkowskiHarrington2001}).
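A quick check of the jet mass, momentum, and implied accretion-rate figures quoted above:

```python
# Mass, momentum, and accretion rate implied by the jet parameters of Sec. 2.
M_SUN = 1.989e33          # g

mdot_2jets = 2.0e-6       # M_sun/yr, both jets together
t_jets = 17.0             # yr, duration of jet launching
v_jet = 800.0e5           # cm/s

m_2jets = mdot_2jets * t_jets            # 3.4e-5 M_sun
p_2jets = m_2jets * M_SUN * v_jet        # ~ 5e36 g cm/s
mdot_acc = 10.0 * mdot_2jets             # 2e-5 M_sun/yr, disk accretion rate
```

The total jet momentum indeed comes out at $\approx 5 \times 10^{36} ~\rm{g} ~\rm{cm} ~\rm{s}^{-1}$.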
We estimate that a jets' velocity of about $400 ~\rm{km} ~\rm{s}^{-1}$ in our simulation, as observed in young stellar objects, would lead to the same qualitative results.
Overall, we consider the jets' properties we use quite plausible, even if rare.
The initial temperature of the slow wind, the dense shell, and the jets is $10^4 ~\rm{K}$. The initial jets' temperature has no influence on the results (as long as it is highly supersonic) because the jets rapidly cool due to adiabatic expansion.
\section{RESULTS}
\label{sec:results}
\subsection{The formation of the bipolar structure}
\label{subsec:bipolar}
We start by presenting the 3D structure of the new morphological features that we have obtained here from the particular setting of a slow dense spherical shell embedded in the slow wind, and two jets that interact with the slow spherical ambient gas.
In Fig. \ref{fig:3D} we present the appearance of the nebula at six times from $t=12 ~\rm{yr}$ to $t=273 ~\rm{yr}$, where $t=0$ is the time when we start to inject the jets. The jets were active for 17 years. The density structure is depicted by four colors.
\begin{figure}
\begin{center}
\hskip -0.9 cm
\includegraphics[trim= 1.0cm 0.2cm 5.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_3D_2}
\includegraphics[trim= 1.0cm 0.2cm 5.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_3D_7}
\includegraphics[trim= 1.0cm 0.2cm 5.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_3D_11}
\newline
\vskip 0.3 cm
\hskip -0.9 cm
\includegraphics[trim= 1.0cm 0.2cm 5.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_3D_20}
\includegraphics[trim= 1.0cm 0.2cm 5.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_3D_30}
\includegraphics[trim= 1.0cm 0.2cm 5.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_3D_47}
\caption{Three dimensional density structure at six times, from upper left to bottom right: $12 ~\rm{yr}$, $44 ~\rm{yr}$, $70 ~\rm{yr}$, $120 ~\rm{yr}$, $177~\rm{yr}$, and $273~\rm{yr}$.
The size of the box is $5\times 10^{17}~\rm{cm} \times 5\times 10^{17}~\rm{cm} \times 10^{18}~\rm{cm}$.
The density scale is given by the colour-bar, where red, green, blue and pale blue represent densities of $2 \times 10^{-24}~\rm{g} ~\rm{cm}^{-3}$, $6 \times 10^{-24}~\rm{g} ~\rm{cm}^{-3}$, $5 \times 10^{-22}~\rm{g} ~\rm{cm}^{-3}$, and $2 \times 10^{-21}~\rm{g} ~\rm{cm}^{-3}$, respectively.}
\label{fig:3D}
\end{center}
\end{figure}
Two prominent morphological features develop by the end of the simulation. The first one is a bipolar structure, composed of two opposite lobes, one on each side of the equatorial plane. The structure of each lobe is not a simple prolate or oblate shape; rather, we see two bubbles on each side, as we mark in Fig. \ref{fig:Scematic}. A bubble with a denser surface, seen in blue, touches the center on its near side and is connected to a second bubble on the far side. In the panel at $t=273 ~\rm{yr}$ of Fig. \ref{fig:3D} the outer bubble is seen in blue on its side close to the center, and in green on its far side. Dense thin filaments are seen in blue on the surface of the outer bubble. They extend from the boundary between the two bubbles up to the middle of the outer bubble.
\begin{figure}
\begin{center}
\hskip -0.9 cm
\includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.83\textwidth]{Scematics}
\vskip -3.6 cm
\caption{The prominent morphological features that our simulation reveals. }
\label{fig:Scematic}
\end{center}
\end{figure}
The second prominent feature, which is the focus of this paper, is the structure of eight straight wide filaments, or columns, seen in green at $t=273 ~\rm{yr}$ in Fig. \ref{fig:3D}, extending more or less in the $z$ direction outward from the outer bubble. We term this structure the columns crown. As we show below, the columns develop from Rayleigh-Taylor instability modes. We note that here we form 8 thick columns; the numerical grid determines their number and width. Later we raise the possibility that in reality many more, and thinner, columns are formed.
In Figs. \ref{fig:dens_slice} - \ref{fig:vel_arrows} we present the density, temperature, and velocity maps, respectively, in a meridional plane at six times. This meridional plane makes an angle of $60^\circ$ with the plane $x=0$. We chose this meridional plane so that it includes two of the columns of the columns crown, one on each side of the symmetry ($z$) axis. The horizontal axis in the figures that show this meridional plane is the distance along the line $y=x \tan 60^\circ$. We can discern the following interaction phases.
\begin{figure}
\begin{center}
\hskip -0.3 cm
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_dens_slc_2}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_dens_slc_7}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_dens_slc_11}
\end{center}
\begin{center}
\hskip -0.3 cm
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_dens_slc_20}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_dens_slc_30}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_dens_slc_57}
\vskip -0.6 cm
\caption{The density maps in a meridional plane that is rotated by $60^\circ$ from the plane $x=0$. The horizontal axis is the distance along the line $y=x \tan 60^\circ$. We present the density at six times: $12 ~\rm{yr}$, $44 ~\rm{yr}$, $70 ~\rm{yr}$, $120 ~\rm{yr}$, $177~\rm{yr}$, and $317~\rm{yr}$. The density scale is given by the colour-bar in units of $~\rm{g} ~\rm{cm}^{-3}$. In the first 5 panels the red color corresponds to a density of $8 \times 10^{-22}~\rm{g} ~\rm{cm}^{-3}$, while in the last panel it corresponds to a density of $4 \times 10^{-23}~\rm{g} ~\rm{cm}^{-3}$. }
\label{fig:dens_slice}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\hskip -0.3 cm
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_temp_slc_2}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_temp_slc_7}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_temp_slc_11}
\end{center}
\begin{center}
\hskip -0.3 cm
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_temp_slc_20}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_temp_slc_30}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_temp_slc_57}
\vskip -0.6 cm
\caption{The temperature in the same plane and at the same times as in Fig. \ref{fig:dens_slice}. The temperature scale is given by the colour-bar in units of $~\rm{K}$ (the red color is in units of $10^6 ~\rm{K}$; this was trimmed from the colour-bar). }
\label{fig:temp_slice}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\hskip -0.3 cm
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_velmag_arrows_2}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_velmag_arrows_7}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_velmag_arrows_11}
\end{center}
\begin{center}
\hskip -0.3 cm
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_velmag_arrows_20}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_velmag_arrows_30}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.0cm,clip=true,width=0.33\textwidth]{mz3_velmag_arrows_57}
\vskip -0.6 cm
\caption{The velocity in the same plane and at the same times as in Fig. \ref{fig:dens_slice}. Arrows present the direction of the velocity and colors present its magnitude according to the colour-bar, in units of $~\rm{cm} ~\rm{s}^{-1}$ (the red color is in units of $10^7$, except in the second panel where it is in units of $10^8$). }
\label{fig:vel_arrows}
\end{center}
\end{figure}
\textit{1. Jet injection and early interaction.}
We inject the jets only during the first 17 years. The upper left panel in each of the figures, at $t=12 ~\rm{yr}$, depicts this phase. At this phase we can still see the three parts of the spherical structure that the giant star formed according to our initial setting (section \ref{sec:numerical}), all expanding out at a velocity of $10 ~\rm{km} ~\rm{s}^{-1}$: a slow wind that fills the volume from the center to $r=10^{17} ~\rm{cm}$, a denser shell from $r=10^{17} ~\rm{cm}$ to $r=1.75 \times 10^{17} ~\rm{cm}$ that was formed during an episode with an eight times higher mass loss rate, and a slow wind that extends from the dense shell to the edge of the numerical grid. The jets have just started to inflate a bipolar structure.
\textit{2. Early inflation of bipolar nebula.} In the first 45 years, that are depicted by the first two panels of Fig. \ref{fig:dens_slice} - \ref{fig:vel_arrows}, the jets inflate a bipolar structure in the slow wind that was blown by the central star after the formation of the dense shell.
The jets' gas passes through shock waves and forms hot zones with temperatures of up to $\simeq 10^7 ~\rm{K}$. The density structure at $t=44 ~\rm{yr}$ (second panel) reveals that the dense front of the inflated bipolar structure is flat (a red horizontal bar on each side of the equatorial plane), but that it has two protrusions, one at each edge of this bar. These will later develop into columns and be part of the columns crown.
\textit{3. Interaction with the dense shell.} The next two panels, at $t=70 ~\rm{yr}$ and $t=120 ~\rm{yr}$, show the outcome of the interaction with the dense shell. The panel at $t=120 ~\rm{yr}$ shows that the interaction of the expanding bipolar structure with the dense shell forms an outer bubble on each side of the equatorial plane. The lobe that was inflated earlier now forms a bubble closer to the center, while the interaction with the dense shell forms an outer bubble that is connected to the one closer to the center. The boundary between the two bubbles coincides with the inner radius of the dense shell at $r=10^{17} ~\rm{cm}$. The outer bubble on each side of the new bipolar structure opens up on its far (from the center) side, and protrusions extend further out. These protrusions, which are Rayleigh-Taylor instability tongues, form the columns crown.
Only a small fraction of the mass in the dense shell lies along the path of the jets and the bipolar structure they form when interacting with the inner slow wind. We turned off the jets before this bipolar structure hits the dense shell, so it is the bipolar structure that fractures the dense shell. The interaction forms two bubbles, the outer one that forms the columns crown comes from the dense shell.
At the end of our simulation the outflow removed two caps, one at each side of the equatorial plane, from the dense shell. The half opening angle of each of the removed caps is about $40 ^\circ$, such that the total mass in the two caps is about 23 per cent of the dense shell mass, or about $5 \times 10^{-4} M_\odot$. This is about 15 times the mass in the two jets.
We note, though, that the columns crown is formed at an earlier time, and the fraction of the mass of the dense shell that leads to the formation of the columns crown is only about 10 per cent of the mass of the dense shell (a half opening angle of $25^\circ$), or about $2 \times 10^{-4} M_\odot$.
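The quoted cap masses follow from the fraction of a spherical shell subtended by two polar caps, $f=1-\cos\alpha$ (a quick arithmetic sketch):

```python
import math

def cap_fraction(alpha_deg):
    """Fraction of a spherically symmetric shell contained in two polar caps,
    one of half-opening angle alpha on each side of the equatorial plane."""
    return 1.0 - math.cos(math.radians(alpha_deg))

m_shell = 0.002                                # M_sun, total shell mass
m_caps_40 = cap_fraction(40.0) * m_shell       # caps removed by the end of the run
m_caps_25 = cap_fraction(25.0) * m_shell       # material feeding the columns crown
m_2jets = 2.0e-6 * 17.0                        # M_sun, total mass in the two jets
```

For $\alpha=40^\circ$ this gives $f \approx 0.23$ and a cap mass of $\approx 5 \times 10^{-4} M_\odot$, about 14-15 times the mass carried by the two jets; for $\alpha=25^\circ$ it gives $f \approx 0.09$ and $\approx 2 \times 10^{-4} M_\odot$.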
\textit{4. Late evolution.}
At later times the bipolar structure expands into the outer slow wind. Now the outcomes of the instabilities become prominent, in particular the columns crown and the filaments and blobs inside the outer bubble of the bipolar structure.
In the meridional plane that we present in Fig. \ref{fig:dens_slice}, two columns are seen as two red spots at the very upper boundary of the grid, and two red spots with their columns are seen at the very lower boundary of the grid at $t=317 ~\rm{yr}$ (lower right panel). The red spots are dense bullets that we also mark on Fig. \ref{fig:Scematic}. The higher density columns (seen in green) that extend from the outer bubble to the red spots in Fig. \ref{fig:dens_slice} are the columns that we present in Fig. \ref{fig:3D}. The columns are also clearly seen in the lower right panel of Figs. \ref{fig:temp_slice} and \ref{fig:vel_arrows}, where they have the appearance of `ears'. These `ear' structures are formed by the bullets (shrapnel) that move through the ambient gas. The high temperature that forms the `ear' structure behind each bullet results from a shock wave, as the bullets move at a velocity of $\simeq 500 ~\rm{km} ~\rm{s}^{-1}$, which is highly supersonic.
The bullets (shrapnel) themselves expand at an angle of about $25^\circ$ to the symmetry axis with a speed of about $500 ~\rm{km} ~\rm{s}^{-1}$, and the mass in each bullet is several times $10^{-7} M_\odot$.
The red zones inside the outer bubble in Fig. \ref{fig:dens_slice} were formed by instabilities, as we elaborate in the next section.
Finally, we note that because we no longer inject a strong wind or jets from the center, the high pressure in the inner bubble pushes material back toward the center. This back-flow turns into an outflow in the equatorial plane, as seen in the last panel of Fig. \ref{fig:vel_arrows}. In an earlier paper \citep{AkashiSoker2008b} we studied a different kind of back-flow that the jets and the high pressure bubbles that they inflate form. In general, the possibility of gas that flows back to the center is an interesting subject for future studies.
\subsection{Instabilities and the columns crown}
\label{subsec:instabilities}
The interaction of the jets with the inner slow wind, and then the interaction of the expanding bipolar structure with the dense shell and outer slow wind are prone to Rayleigh-Taylor instability modes. The basic condition for the Rayleigh-Taylor instability to develop here is that the density gradient and the pressure gradient have opposite sense. Namely, if
$\overrightarrow {\nabla} \rho \cdot \overrightarrow{\nabla} P <0$, where $\rho$ is the density and $P$ is the pressure, then this region is Rayleigh-Taylor unstable.
In Fig. \ref{fig:RT} we present the quantity
\begin{equation}
\tau_0^{-1} \equiv \rho^{-1} \left( - \overrightarrow {\nabla} \rho \cdot \overrightarrow{\nabla} P \right)^{1/2} \quad {\rm for} \quad
\overrightarrow {\nabla} \rho \cdot \overrightarrow{\nabla} P <0,
\label{eq:tau0}
\end{equation}
in unstable regions in the meridional plane and at six times as in earlier figures.
The growth time of a Rayleigh-Taylor unstable mode of wavelength $\lambda$ is of the order of $\tau_{\rm RT} \simeq \tau_0 (\lambda/D_\rho)^{1/2}$, where $D_\rho \equiv \rho/ \vert \overrightarrow {\nabla} \rho \vert$.
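A minimal sketch of how this criterion and the rate $\tau_0^{-1}$ of equation (\ref{eq:tau0}) can be evaluated on gridded data (the variable names and grid layout here are illustrative, not those of our FLASH analysis scripts):

```python
import numpy as np

def rt_growth_rate(rho, P, dx):
    """Local Rayleigh-Taylor rate 1/tau_0 = sqrt(-grad(rho).grad(P)) / rho
    on a uniform 2D grid; set to zero where grad(rho).grad(P) >= 0 (stable)."""
    drho_0, drho_1 = np.gradient(rho, dx)   # gradients along the two axes
    dP_0, dP_1 = np.gradient(P, dx)
    dot = drho_0 * dP_0 + drho_1 * dP_1     # grad(rho) . grad(P)
    rate = np.zeros_like(rho)
    unstable = dot < 0.0
    rate[unstable] = np.sqrt(-dot[unstable]) / rho[unstable]
    return rate
```

Applied to a region where the density increases outward while the pressure decreases, this returns a positive rate everywhere, as expected; where both gradients point the same way it returns zero.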
\begin{figure}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.2cm,clip=true,width=0.33\textwidth]{inv_tau_log_02}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.2cm,clip=true,width=0.33\textwidth]{inv_tau_log_07}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.2cm,clip=true,width=0.33\textwidth]{inv_tau_log_11}
\newline
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.2cm,clip=true,width=0.33\textwidth]{inv_tau_log_20}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.2cm,clip=true,width=0.33\textwidth]{inv_tau_log_30}
\includegraphics[trim= 1.0cm 0.2cm 7.5cm 0.2cm,clip=true,width=0.33\textwidth]{inv_tau_log_57}
\caption{Maps of the quantity $\tau^{-1}_0$ as given by equation (\ref{eq:tau0}) and in units of $~\rm{yr}^{-1}$, in regions that are Rayleigh-Taylor unstable. Higher values mean higher growth rates of the instabilities. The red color in the panels stands for (from upper left to lower right) $0.2$, $0.2$, $0.15$, $0.1$, $0.05$, and $0.05 ~\rm{yr}^{-1}$, respectively. }
\label{fig:RT}
\end{figure}
The unstable regions follow the bipolar structure.
In the third panel, at $t=70 ~\rm{yr}$, there are two unstable regions that look like `ears' on the front of each bipolar lobe (one on each side of the equatorial plane). These are the early development of the columns from instabilities. The `ears' and their development into columns are seen also in the last four panels of Figs. \ref{fig:dens_slice} - \ref{fig:vel_arrows}. An `ear' is the structure that forms behind a dense bullet that moves supersonically through the ambient gas.
The fourth panel of Fig. \ref{fig:RT}, at $t=120 ~\rm{yr}$, presents an interesting structure: a large unstable region at the contact between the inner and outer bubbles. A few red filaments in that region imply a rapid growth rate of the instability. This region coincides with the inner radius of the dense shell, $r=10^{17} ~\rm{cm}$.
The physical properties of the flow determine where and how fast the instabilities develop. In our case, however, it is the numerical grid, through its structure and resolution, that determines which modes are the most unstable and where exactly they develop the fastest. The Cartesian grid we use leads to the formation of 8 symmetrical columns, or four pairs of columns. In Fig. \ref{fig:z=1e17} we present the locations of the columns in the plane $z=10^{17} ~\rm{cm}$, i.e., parallel to the equatorial plane. The locations of the columns are the high-density spots shown in red in that figure. We suggest that in reality there are many more columns and that they are thinner, as there is then no resolution limit. This implies that the bullets that we obtain (see Fig. \ref{fig:Scematic}) will be smaller, and will not survive for a long time. We might not observe them at late times.
\begin{figure}
\begin{center}
\hskip -0.5 cm
\includegraphics[trim= 1.0cm 0.2cm 0.0cm 0.2cm,clip=true,width=0.50\textwidth]{slc_z_1e17_30}
\includegraphics[trim= 1.0cm 0.2cm 0.0cm 0.2cm,clip=true,width=0.50\textwidth]{slc_z_1e17_47}
\vskip -0.3 cm
\caption{The density maps in the $z=10^{17}~\rm{cm}$ plane (parallel to the equatorial plane) at two times: $177 ~\rm{yr}$, and $273 ~\rm{yr}$, corresponding to the last two panels of Fig. \ref{fig:3D}. The density scale is given by the colour-bar in units of $~\rm{g} ~\rm{cm}^{-3}$.}
\label{fig:z=1e17}
\end{center}
\end{figure}
Let us further comment on our resolution. When the perturbations first develop, as we can see in Fig. \ref{fig:dens_slice} at $t= 44 ~\rm{yr}$, the width of the high-density fingers (four long radial filaments in red) is about $2.5 \times 10^{15} ~\rm{cm}$, and the bullets are then resolved by only 2.5 cells. This shows that the grid cells determine the size of the perturbations that later develop into bullets. At the end of our simulation the bullets have a width of about $3 \times 10^{16} ~\rm{cm}$, and each one is resolved by about 30 cells. The bullets survive to the end of our simulation, although their structure is not smooth: they are shaped like a boomerang and their density is not uniform.
One can define the `bullet crushing time' (e.g., \citealt{Jonesetal1996})
\begin{equation}
t_{\rm bc} = \frac{D \sqrt{\chi}}{v_{\rm rel}} \simeq 60
\left(\frac{D} {3 \times 10^{16} ~\rm{cm}} \right)
\left(\frac{v_{\rm rel}} {500 ~\rm{km} ~\rm{s}^{-1}} \right)^{-1}
\left(\frac{\chi} {10} \right)^{1/2} ~\rm{yr},
\label{eq:tbc}
\end{equation}
where $v_{\rm rel}$ is the relative velocity of the bullet and the ambient gas, $D$ is the size of the bullets, and $\chi$ is the ratio of the bullet density to the ambient density. We have substituted typical values at the end of our simulation.
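The numerical coefficient in equation (\ref{eq:tbc}) can be checked directly; the sketch below (illustrative code, not part of the simulation pipeline) evaluates $t_{\rm bc}$ for the quoted typical values:

```python
import math

YEAR_IN_S = 3.156e7  # seconds per year

def bullet_crushing_time_yr(D_cm, chi, v_rel_cm_s):
    """t_bc = D * sqrt(chi) / v_rel, returned in years."""
    return D_cm * math.sqrt(chi) / v_rel_cm_s / YEAR_IN_S

# Typical end-of-simulation values from the text:
# D = 3e16 cm, chi = 10, v_rel = 500 km/s = 5e7 cm/s.
print(bullet_crushing_time_yr(3e16, 10.0, 5e7))  # ~60 yr
```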
\cite{Jonesetal1996}, for example, show that after several $t_{\rm bc}$ the bullet is highly distorted, although it still exists. We have $t\simeq 5 t_{\rm bc}$ at the end of our simulation, and the bullets are distorted, but they are not yet `crushed'. It is possible that the numerical viscosity maintains the bullets intact for a longer time. A higher resolution simulation with lower numerical viscosity would form smaller bullets that live for a much shorter time. The result would be slower columns that might better fit the observations of Mz~3, as we discuss in section \ref{sec:summary}.
\section{DISCUSSION AND SUMMARY}
\label{sec:summary}
The purpose of this study is to enrich the variety of morphological features that jets can form when they interact with the slow outflow from evolved giant stars. The rich variety of morphological structures of PNe and other nebulae around evolved stars motivates us to conduct these hydrodynamical simulations. In particular, we are motivated by the beautifully complicated structure of the PN Mz~3, the Ant Nebula, which we present in Fig. \ref{fig:ant}.
\begin{figure}
\begin{center}
\vskip -0.7 cm
\includegraphics[trim= 1.5cm 16.0cm 2.0cm 3.0cm,clip=true,width=0.95\textwidth]{ColumnsCrownComparisonFig}
\caption{Left panel: A false-color image of the PN Mz~3 by the Hubble Space Telescope. (Credit: NASA, ESA and The Hubble Heritage Team (STScI/AURA); Acknowledgment: Raghvendra Sahai and Bruce Balick.). Middle panel: An image of Mz~3 with marks from \cite{Clyneetal2015}. Right panel: Our numerical 3D density structure. }
\label{fig:ant}
\end{center}
\end{figure}
As with previous studies in this series, we set very simple initial conditions, i.e., a spherical slow ambient medium and one jet-launching episode. In the present study we set one dense shell within a slow wind from an AGB star, and injected two opposite jets in one episode that lasted $17 ~\rm{yr}$. In reality, the slow outflow is expected to be more complicated than we assume, and there can be several jet-launching episodes. For example, the jets do not need to encounter a closed shell, but rather it is sufficient that each jet catches up and interacts with a dense slow polar cap with a half opening angle of $\ga 25^\circ$. Such caps can be ejected by an earlier slow bipolar outflow.
This would alleviate the problem posed by the non-detection of such spherical shells in PNe (e.g., \citealt{Sahaietal2011}).
This type of interaction forms a bipolar nebula with two prominent morphological features on each of the two sides of the equatorial plane, as we mark on Fig. \ref{fig:Scematic}: (1) A bipolar lobe that is composed of two bubbles, and (2) a columns-crown.
In Figs. \ref{fig:3D} and \ref{fig:dens_slice} - \ref{fig:vel_arrows} we presented the evolution of the interaction. After the jets cease to be launched, the bipolar structure that the jets inflated continues to move forward. The interaction with the dense shell splits each lobe into two bubbles, one touching the center and one further out.
The interaction at its several stages has many regions that are Rayleigh-Taylor unstable. The instability maps in Fig. \ref{fig:RT} show these unstable regions, which form the columns-crown as well as the filaments and clumps inside the outer bubble.
Let us compare the bipolar structure that we obtained in our simulation with the structure of Mz~3. We summarize the comparison in Table \ref{table:Compare}.
Although we terminate the simulation at an age of about 300 years, the outer parts of the nebula, including the columns-crown, have very high Mach numbers, $\simeq 10$, and hence undergo a ballistic motion that preserves the shape of the nebula. We can therefore apply our results to the older PN Mz~3.
Each of the two lobes of Mz~3 is composed of two bubbles. However, these are not exactly the same as the two bubbles we obtained here. While in our simulation the outer bubble is larger than the inner one, in Mz~3 the outer one is smaller than the inner one. In a previous paper \citep{AkashiSoker2008a} we termed these outer small bubbles `front lobes'. In that paper we set different initial conditions, in particular without a dense shell, and showed how a jet can inflate a front lobe.
Based on the earlier paper and the present study, we suggest one of the following two possibilities. (1) The exact setting of the dense ambient gas is more complicated, with some structure in between the setting we used in the two runs. (2) There were two jets-launching episodes in Mz~3, one episode that formed the two opposite crowns and one that formed the two front-lobes.
We also note that a simple way to form two touching bubbles is to have two jets-launching episodes, one after the other. This requires no dense shell.
\begin{table*}[t]
\small
\centering \caption{Observed and simulated properties of Mz~3.}
\begin{tabular}{c c c c}
\hline
Property & Observed & Simulated & Possible solutions for discrepancies\\
[0.5ex]
\hline \hline
General structure & Elongated &Reproduced & \\
& Bipolar nebula & & \\
\hline
Columns-crown & Marked BL~2 &Reproduced & \\
&in fig. \ref{fig:ant}& & \\
\hline
Opening of the & Very small (almost & $25^\circ$ to the & A more complicated initial \\
crown & parallel to axis) & symmetry axis & ambient gas structure \\
\hline
Velocity of & $\simeq 100 ~\rm{km} ~\rm{s}^{-1}$ & $\simeq 500 ~\rm{km} ~\rm{s}^{-1}$ & Slower jets\\
columns & & & \\
\hline
Material near & Not observed. & Leftover from & Interaction closer to center \\
equatorial plane & & dense shell & followed by ballistic expansion\\
\hline
Front lobe & clear on one & Not reproduced & A later jet launching episode\\
(Fig. \ref{fig:ant})& side & & as in \cite{AkashiSoker2008a} \\
\hline
Two connected & Outer bubble is & Two bubbles, but & A somewhat different initial \\
bubbles (Fig. \ref{fig:ant}) & smaller & outer is larger & ambient medium structure \\
\hline
Side rays & Marked BL~3 & Not reproduced & A jet-launching episode \\
&in fig. \ref{fig:ant}& & of wide jets\\
\hline
Width of columns & Narrow & Wider than observed & Simulate with a higher \\
& & & resolution and a somewhat \\
& & & different ambient structure \\
\hline
Origin of columns &Extended zone & A ring on the lobe. & Several short jet episodes \\
&on the lobe. & & or a single longer one \\
\hline
X-ray emission & Observed inside & Simulation has high- & Late jets and a fast wind \\
& the lobes & temperature gas; not& might improve the structural \\
& &exactly the same structure& fitting \\
\hline
\end{tabular}
\label{table:Compare}
\newline
Comparison of the observed properties of Mz~3 with the results of the present simulation.
\end{table*}
Table \ref{table:Compare} lists many properties of Mz~3 that we did not reproduce in the present simulation. There are two options to reconcile the discrepancies. The first is that our suggested jets-shell interaction is not the process that formed the columns-crowns. The second is that the jets-shell interaction is the explanation for the columns-crowns, but several ingredients are missing from our simulations. In the last column of Table \ref{table:Compare} we list these possible missing ingredients. The table shows that we fail to reproduce most of the features of Mz~3. Future studies will have to examine other processes that can form the columns-crowns instead of jets, or else reproduce all properties of Mz~3 with much more complicated simulations, while retaining jets as the explanation for the columns-crown.
The distance to Mz~3 is uncertain, with different studies listing values in the range of about $1-3 ~\rm{kpc}$ (see discussion by \citealt{Clyneetal2015}). At a distance of $1.5 ~\rm{kpc}$ the width of the main nebula we obtained in our simulation is about $20 ^{\prime \prime}$. This is compatible with the observed width of the main nebula (e.g., \citealt{Clyneetal2015}). \cite{SantanderGarciaetal2004} find the maximum velocity of the columns of the crown to be about $300 ~\rm{km} ~\rm{s}^{-1}$. The velocity of the columns that \cite{Clyneetal2015} observe (BL2 in their nomenclature), on the other hand, is only about $100 ~\rm{km} ~\rm{s}^{-1}$. We find that the velocity of the bullets in our simulation is about $500 ~\rm{km} ~\rm{s}^{-1}$, and the velocity of the column that trails each bullet is somewhat lower. This shows that we reproduced the structure of Mz~3 only qualitatively, not quantitatively. To obtain a better match we will have to thoroughly study the parameter space.
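The angular-size comparison above follows from the small-angle relation; a minimal sketch (the conversion constants are standard, and the $20 ^{\prime\prime}$ and $1.5 ~\rm{kpc}$ are the values quoted in the text):

```python
CM_PER_PC = 3.086e18       # centimetres per parsec
ARCSEC_PER_RAD = 206265.0  # arcseconds per radian

def physical_width_cm(theta_arcsec, distance_kpc):
    """Small-angle conversion of an angular width to a physical width."""
    return theta_arcsec / ARCSEC_PER_RAD * distance_kpc * 1.0e3 * CM_PER_PC

# A width of ~20 arcsec at 1.5 kpc corresponds to ~4.5e17 cm, i.e. a few
# times the dense-shell inner radius of 1e17 cm used in the simulation.
print(f"{physical_width_cm(20.0, 1.5):.2e} cm")
```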
The two opposite columns-crowns we have obtained here resemble the `crowns' in Mz~3, but they are not identical. First, the `columns' in Mz~3 are much narrower than what we obtained here, and would better be termed filaments. Second, the `columns' in Mz~3 originate from an extended region on the lobes of Mz~3, while our columns originate from one ring on the outer bubble (Fig. \ref{fig:z=1e17}).
We suggest reconciling these differences as follows. The reason we obtain thick columns is the limited resolution of the grid; a much finer grid (beyond our capabilities) might form thinner columns. As for the origin of the filaments, we used a short jets-launching episode that lasted only 17 years. It is possible that a longer episode, or several short episodes, might lead to a more extended crown.
There are other features in Mz~3 that we do not reproduce. One example is the columns at very large angles, marked as BL3 by \cite{Clyneetal2015}. Another is the exact structure of the X-ray emission within the main nebula \citep{Kastneretal2003}. Although we do obtain high-temperature gas that emits X-rays, it is not clear that it will survive for a long time, and its exact structure is not the same as that of Mz~3. Nonetheless, the X-ray morphology does support shaping by jets \citep{Kastneretal2003}.
We attribute the more complicated observed nebular structure to two types of effects. The first is a much more complicated slow wind prior to the launching of jets, and the second is later jets-launching episodes. On top of these, the central star can blow a fast wind. Adding more features to the slow wind, and adding more jets-launching episodes make the parameter space much too large to follow. For that reason in our different studies we tend to study one type of interaction at a time.
We summarize by stating our main finding: when jets, and the lobes they inflate, encounter circumstellar matter with density gradients larger than those of a wind with a constant mass loss rate, the Rayleigh-Taylor instability modes that develop can account for the outward-extending columns/filaments in Mz~3 and other nebulae.
We thank Bruce Balick and an anonymous referee for enlightening and useful comments. We acknowledge support from the Israel Science Foundation and a grant from the Asher Space Research Institute at the Technion.
This paper addresses a new construction for the class of
capacity-approaching low density parity check (LDPC) codes invented by
Gallager~\cite{gal:it62}. However, we start by introducing a result in
turbo coding~\cite{sun:tak:pp}, which motivated this work. Although
the focus is on LDPC codes, we will establish a bridge between turbo
code and LDPC code constructions using quadratic permutation
polynomials (QPP) over finite integer rings. The algebraic structure and excellent
error performance of the new construction are investigated.
Turbo codes using interleavers generated by permutation polynomials
over finite integer rings~\cite{sun:tak:pp} have proven to be both practical
and yield excellent error
performances~\cite{tak:mcf,ryu:tak:qinv,jpl2}. Moreover, the
construction allows a high degree of parallel processing of turbo
decoding without memory access
contention~\cite{tak:mcf} during the flow of extrinsic information.
To the best of our
knowledge, this is the only known class of algebraic interleavers
with a very simple description producing turbo codes with error rate
performance meeting or exceeding all other algebraic and pseudo-random
constructions at practical error rates~\cite{tak:mcf} for a wide range
of block lengths (256--4096-bit interleavers) such as in the
3GPP standard~\cite{3gpp}. We believe that the excellent error rate
performance is due to two main features of the interleaver: a
``pseudo-randomness'' property obtained by a non-linear
nature~\cite{tak:cos:it} of permutation polynomials of degree two or
larger; and, an algebraic structure allowing designs matched to the constituent
convolutional codes~\cite{sun:tak:pp}. The new construction for LDPC
codes using permutation polynomials in this paper also enjoys this
coexistence of pseudo-randomness and algebraic structure.
LDPC codes, like turbo codes, were initially designed with random
constructions~\cite{mackay:it99,ric:shk:urb:it01,lub:mit:shk:spl:it01,chu:for:ric:urb:cl2001}.
A ``randomness'' was perceived as an important feature for both types
of codes. However, requirements such as ease of implementation,
large girth for the associated graph, and better error performance
quickly spawned other construction methods. Many good algebraic,
combinatoric, and geometric constructions for LDPC codes have been
proposed in the
literature~\cite{margulis,ros:von:margulis,kou:lin:fos,fos:geo,sma:von:qcb}. Additionally,
good algorithmic constructions have also been
proposed~\cite{hu:peg}. Most constructions focus on the maximization of the
girth of the associated graph and employ computer simulations for
validation. We follow the same path in this paper.
Recently, some attention has been given to the error floor of LDPC
codes~\cite{richardson:al41,mackay:pos:near}.
As an example, the $(2640,1320)$ Margulis
code~\cite{ros:von:margulis} has a floor at a frame error rate (FER)
of $10^{-6}$. The two main causes of the floors in LDPC codes are
known to be a small minimum distance and low-weight
near-codewords. Graph symmetries and automorphisms are key properties
being used to investigate error floors in LDPC
codes~\cite{richardson:al41} below the reach
of Monte Carlo simulations. For the $(2640,1320)$ Margulis code,
rather than a poor minimum distance\footnote{However, the minimum
distance of this code is also not impressive because there are codewords of
Hamming weight 40 for this code~\cite{hu:fos:ele:nncs}.}, the main cause for its floor has been
identified as low-weight near-codewords of the type $(12,4)$, i.e.,
near-codewords of weight 12 and syndrome weight 4. In our
simulations, example codes for the new construction are given with no apparent
error floors down to FER's close to $10^{-7}$. This does not mean the new codes do not
have near-codewords; on the contrary, we have identified
near-codewords in one of our example codes. However, their simple
algebraic structure allows an easy identification of graph
automorphisms and may be a valuable framework for the understanding of
error floors in LDPC codes.
The existence of graph automorphisms of our new construction also
implies that permutation polynomial-based LDPC codes are
quasi-cyclic~\cite{fos:itldpcqc}. However, the new construction does not
necessarily generate codes that are equivalent to the codes
in~\cite{fos:itldpcqc}. It is known that the parity check matrices of
quasi-cyclic codes can be written as adjoined circulant square matrices up
to a code equivalence. A recent work~\cite{sma:von:qcb} defines
two subclasses of quasi-cyclic codes. Type I and II codes have equivalent
parity check matrices whose circulant sub-matrices have row/column
weights 0,1 and 0,1,2 respectively. The new construction generates
codes of both types. Codes of
type II were shown in~\cite{sma:von:qcb} to have better minimum
distance upper bounds than codes of type I, which is a superclass of
the codes
in~\cite{fos:itldpcqc}. We further observe a generalization of the result
in~\cite{sma:von:qcb} showing that codes
generated by the new construction have upper bounds on the
minimum distance potentially growing with the block length. This happens because
the so-called ``weight matrices'' for the circulant sub-matrices of the
parity check matrix have larger sizes (and consequently smaller
circulant sub-matrices) than the previously known constructions. Larger
minimum distances are also confirmed by using the nearest nonzero
codeword search (NNCS) method, which finds true codewords of low
weight (with a good likelihood of yielding a codeword of lowest
non-zero Hamming weight) in LDPC codes~\cite{hu:fos:ele:nncs}.
This paper is organized as follows. In section II, we
define the new LDPC construction and review a result for quadratic
permutation polynomials\cite{sun:tak:pp,ryu:tak:qinv} over the
finite integer ring $\mathbb{Z}_N$. The main results are derived in section
III, and examples and computer simulation results are given in section
IV. Finally, conclusions are discussed in section V.
\section{LDPC Construction}
Let $G$ be a $(\lambda,\rho)$ regular bipartite graph. The graph $G$
consists of $n$ variable nodes $\Lambda=\{v_0,v_1,\ldots, v_{n-1}\}$
with degree $\lambda$ and $r$ check nodes $\Gamma=\{c_0,c_1,\ldots,
c_{r-1}\}$ with degree $\rho$. There are
$N=n\lambda=r\rho$ edges $\Xi=\{e_0,e_1,\ldots,e_{N-1}\}$ in $G$. Each
edge $e_i,0\leq i<N$ has a left-label $i$ and right-label $f(i)$,
where $f(\cdot)$ is a permutation on $\{0,1,2, \ldots, N-1\}$.
Naturally, if the right-label of an edge $e_i$ is $0\leq j<N$ then the
left-label is $i=f^{-1}(j)=g(j)$, i.e., $g(\cdot)$ is the inverse
permutation function of $f(\cdot)$. Each variable node $v_m,0\leq m<n$
is connected to $\lambda$ edges whose left-labels are in the set
$\lambda_m=\{m\lambda,
m\lambda+1,\ldots, m\lambda+(\lambda-1)\}$. Each check node $c_m,0\leq
m<r$ is connected to $\rho$ edges whose right-labels are in the set
$\rho_m=\{m\rho,
m\rho+1,\ldots, m\rho+(\rho-1)\}$. Thus every regular $(\lambda,\rho)$
graph $G$ with $N$ edges is completely and uniquely (up to a graph
isomorphism) defined by a permutation $f(\cdot)$ on
$\{0,1,\ldots,N-1\}$. In this paper, we investigate an LDPC
construction when $f(\cdot)$ is a quadratic permutation polynomial
over integer rings~\cite{sun:tak:pp,ryu:tak:qinv}.
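The construction is straightforward to prototype. The following sketch (our illustrative code, not an implementation from the literature; the toy parameters $N=24$, $\lambda=3$, $\rho=6$ and the QPP $f(x)=x+6x^2 \pmod{24}$ are chosen only for demonstration) builds the check-node/variable-node incidence counts and verifies the degree constraints, counting multi-edges with multiplicity:

```python
def qpp(f1, f2, N):
    """The quadratic permutation polynomial f(x) = f1*x + f2*x^2 (mod N)."""
    return lambda x: (f1 * x + f2 * x * x) % N

def build_incidence(N, lam, rho, f):
    """H[c][v] = number of edges joining check node c and variable node v.
    The edge with left-label i attaches to variable node i // lam, and its
    right-label f(i) attaches it to check node f(i) // rho."""
    n, r = N // lam, N // rho
    H = [[0] * n for _ in range(r)]
    for i in range(N):
        H[f(i) // rho][i // lam] += 1
    return H

N, lam, rho = 24, 3, 6   # toy (3,6)-regular example
f = qpp(1, 6, N)         # a QPP: gcd(1, 24) = 1 and every prime of 24 divides 6
H = build_incidence(N, lam, rho, f)
assert all(sum(col) == lam for col in zip(*H))  # every variable node has degree 3
assert all(sum(row) == rho for row in H)        # every check node has degree 6
```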
In this paper, let the set of primes be $\mathcal{P} = \{p_2
= 2, p_3 = 3, p_5 = 5 , ... \}$. Then an integer $N$ can be factored
as $N = \prod\nolimits_{p_i \in \mathcal{P}} p^{n_{N,i}}_i $, where
$p_i$'s are distinct primes, $n_{N,i} \geq 1$ for a finite number of
$i$ and $n_{N,i}=0$ otherwise.
The necessary and sufficient condition for a quadratic polynomial
$f(x)$ to be a permutation polynomial is given in the following
proposition.
\begin{proposition}\cite{ryu:tak:qinv}\cite{sun:tak:pp}
Let $N = \prod\nolimits_{p_i \in \mathcal{P}} p^{n_{N,i}}_{i} $.
The necessary and sufficient condition for a quadratic polynomial $f(x) = f_1 x + f_2 x^2 \pmod{N}$
to be a permutation polynomial can be divided into two cases.
\begin{enumerate}
\item Either $2 \nmid N$ or $4 |N$ (i.e., $n_{N,2}\not = 1$)\\
$\gcd(f_1, N)=1$ and $ f_2 = \prod\nolimits_{p_i \in \mathcal{P}} p^{n_{{f_2},i}}_{i}, n_{{f_2},i} \geq 1 $, $\forall i$
such that $n_{N,i} \geq 1$.
\item $2|N$ and $4\nmid N$ (i.e., $n_{N,2}=1$)\\
$f_1+f_2$ is odd, $\gcd(f_1,\frac{N}{2}) = 1$ and $f_2 = \prod\nolimits_{p_i \in \mathcal{P}} p^{n_{{f_2},i}}_{i}, n_{{f_2},i} \geq 1$, $\forall i$
such that $p_i \neq 2$ and $n_{N,i} \geq 1$.
\end{enumerate}
\label{prop:pp2}
\end{proposition}
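Proposition~\ref{prop:pp2} is easy to confirm by exhaustive search on small rings. The sketch below (our own test code, not from the cited works) compares the stated condition against a direct permutation check for all $N\leq 24$ and all coefficient pairs modulo $N$:

```python
from math import gcd

def prime_factors(n):
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

def is_permutation(f1, f2, N):
    return len({(f1 * x + f2 * x * x) % N for x in range(N)}) == N

def pp_condition(f1, f2, N):
    """The two cases of the proposition (f2 taken modulo N; f2 = 0 reduces
    to the linear case and satisfies the divisibility trivially)."""
    if N % 4 == 0 or N % 2 == 1:                   # case 1: n_{N,2} != 1
        return gcd(f1, N) == 1 and all(f2 % p == 0 for p in prime_factors(N))
    odd = {p for p in prime_factors(N) if p != 2}  # case 2: N = 2 (mod 4)
    return ((f1 + f2) % 2 == 1 and gcd(f1, N // 2) == 1
            and all(f2 % p == 0 for p in odd))

for N in range(2, 25):
    for f1 in range(N):
        for f2 in range(N):
            assert is_permutation(f1, f2, N) == pp_condition(f1, f2, N)
```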
\section{Design of Good LDPC Codes}
We propose an efficient search for LDPC graphs with large girth by
avoiding inspection of isomorphic graphs. Additionally, we only
check the girth of a graph by computing the local girth starting from
vertices that belong to different equivalence classes under a graph
automorphism.
\subsection{Isomorphic Graphs}
The following two propositions identify quadratic permutation
polynomials generating LDPC codes with isomorphic graphs.
\begin{proposition}
The graphs generated by $f(x)=f_1x+f_2x^2$ and
$f^\prime(x)=m\rho+f_1x+f_2x^2$, where $m$ is any integer are
isomorphic.
\label{prop:iso0}
\end{proposition}
\begin{proof}
This is readily seen by the definition of the construction and it
simply corresponds to a difference of a constant $m$ modulo $N/\rho$ in the
indices $c_i$ and $c_i^\prime, 0\leq i<r$ of the check nodes of the two graphs.
\end{proof}
\begin{proposition}
The graphs generated by
$f(x)=f_1x+f_2x^2$ and $f^\prime(x)=(f_1+2m\alpha f_2)x+f_2x^2$, where $m$ is
any integer and $\alpha=\rm{lcm}(\lambda,\rho)$ are
isomorphic.
\label{prop:iso}
\end{proposition}
\begin{proof} First we observe that a graph induced by $f(x+m\alpha)$
is clearly isomorphic to the one induced by $f(x)$ because it just
corresponds to a different relabeling of the variable
nodes. Developing $f(x+m\alpha)$ we obtain
\[
f(x+m\alpha)=f_1(x+m\alpha)+f_2(x+m\alpha)^2=f_1x+f_2x^2+ 2m\alpha f_2x+\alpha m f_1
+\alpha^2 m^2f_2
\]
\[
f(x+m\alpha)=(f_1+2m\alpha f_2)x+f_2x^2+\alpha m f_1 +\alpha^2 m^2f_2
\]
and the proposition follows from Proposition~\ref{prop:iso0}.
\end{proof}
Proposition~\ref{prop:iso} implies that in the search for good
QPP's, the range of search for $f_1$
can be set from 1 to $2f_2\alpha$. This means a small $f_2$ may be
advantageous because it reduces the search space for
$f_1$. Proposition~\ref{prop:iso} can also be interpreted as a
constrained design rule suggested in~\cite{sun:tak:pp} for turbo
codes. Conversely, the search range proposed therein can be interpreted as a
special case of Proposition~\ref{prop:iso} in which $\alpha=1$.
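The constant-offset argument in the proof of Proposition~\ref{prop:iso} can be verified numerically. In the sketch below (illustrative code with the same toy QPP as above) we check that $f(x+m\alpha)-f^\prime(x)$ is constant modulo $N$, and that this constant is a multiple of $\rho$, so that Proposition~\ref{prop:iso0} applies:

```python
from math import gcd

N, f1, f2 = 24, 1, 6                 # toy QPP
lam, rho = 3, 6
alpha = lam * rho // gcd(lam, rho)   # lcm(lambda, rho) = 6

def f(x):
    return (f1 * x + f2 * x * x) % N

for m in range(1, 5):
    f1p = f1 + 2 * m * alpha * f2
    offsets = {(f(x + m * alpha) - (f1p * x + f2 * x * x)) % N
               for x in range(N)}
    assert len(offsets) == 1          # f(x + m*alpha) - f'(x) is constant ...
    assert offsets.pop() % rho == 0   # ... and a multiple of rho
```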
\subsection{Automorphic Graphs}
The nature of permutation polynomials endows the graph with automorphisms.
Hence the determination of the girth of the
graph by exhaustive search is simplified by only examining trees
starting from vertices in $G$ that belong to different equivalence
classes under graph automorphisms. We prove next a theorem showing a
graph automorphism with the help of two lemmas.
\begin{lemma}
Let $u=\gcd(2f_2,N)$. Then the set of edge left-labels
\[
\theta_i=\left\{i,i+\frac{N}{u},i+\frac{2N}{u},\ldots,i+\frac{(u-1)N}{u}\right\}
\]
with $|\theta_i|=u$, forms, for each $i$, an equivalence class under the
difference of the mapped right-labels.
\label{lem:eqv1}
\end{lemma}
\begin{proof}
This is seen by observing that
\[
f(x+\gamma)-f(x)\equiv 2f_2x\gamma +f(\gamma) \pmod{N}
\]
and finding the solutions for
\[
2f_2x\equiv 0 \pmod{N}.
\]
\end{proof}
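A direct numerical check of the two facts used in this proof (with the same toy QPP as before; illustrative code only):

```python
from math import gcd

N, f1, f2 = 24, 1, 6

def f(x):
    return (f1 * x + f2 * x * x) % N

# Identity f(x + g) - f(x) = 2*f2*x*g + f(g) (mod N):
for g in range(N):
    for x in range(N):
        assert (f(x + g) - f(x) - 2 * f2 * x * g - f(g)) % N == 0

# Solutions of 2*f2*x = 0 (mod N) are spaced N/u apart, u = gcd(2*f2, N):
u = gcd(2 * f2, N)
solutions = [x for x in range(N) if (2 * f2 * x) % N == 0]
assert solutions == list(range(0, N, N // u))
```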
\begin{lemma}
Let $t=\rm{lcm}(N/u,\lambda)/\lambda$. Then the set of variable nodes
\[
\{v_i,v_{i+t},v_{i+2t},\ldots,v_{i+((N/(t\lambda))-1)t}\}
\]
for all $i$ forms an equivalence class under the difference of mapped
right-label edges connected to them.
\label{lem:eqv2}
\end{lemma}
\begin{proof}
This is a direct consequence of Lemma~\ref{lem:eqv1}.
\end{proof}
\begin{theorem}
Let $\beta= mt$ such that $m$ is the smallest
positive integer that makes $\rho|f(mt)$. Then the set of variable
nodes
\[
\{v_i,v_{i+\beta},v_{i+2\beta},\ldots,v_{i+((n/\beta)-1)\beta}\}
\]
for all $i$ forms an equivalence class under graph automorphisms.
\label{th:auto}
\end{theorem}
\begin{proof}
The theorem follows from Lemma~\ref{lem:eqv2}.
\end{proof}
\begin{corollary}
Let $\gamma=\frac{\beta\lambda}{\rho}$. Then the set of check
nodes
\[
\{c_i,c_{i+\gamma},c_{i+2\gamma},\ldots,c_{i+((r/\gamma)-1)\gamma}\}
\]
for all $i$ forms an equivalence class under graph automorphisms.
\label{co:auto}
\end{corollary}
\begin{proof}
This follows from Theorem~\ref{th:auto} and the natural periodicity of
the variable and check node labels induced by the permutation
polynomial.
\end{proof}
Theorem~\ref{th:auto} also leads to an intuitive design rule: by
minimizing the number of equivalent classes, the code may look less
uniform and more random. This is achieved by selecting $f_2$ as small
as possible. Following Proposition~\ref{prop:pp2}, we may select $f_2$ to be the
product of every prime factor of $N$ repeated exactly
once, say this number is $f_{2_{\min}}$.\footnote{To simplify the explanation,
we are assuming only case
1) in Proposition~\ref{prop:pp2}, however, the procedure is easily
generalized for case 2).} Moreover,
increasing $f_2$ with more factors of $N$ also means $f_2$ approaches
$N$, eventually becoming a multiple of $N$ and hence zero modulo
$N$. The QPP then reduces to a linear
permutation polynomial~\cite{tak:cos:linear} losing the ``randomness''
that the non-linear quadratic polynomial provides. Our search for
good coefficients confirmed this trend where the girth of the
resulting graph increases with $f_2$ but only up to a certain
point. Often, $f_{2_{\min}}$ is the value that maximizes the girth of
the corresponding graph. A similar fact is described
in~\cite{sun:tak:pp} for turbo codes establishing a close tie between
LDPC and turbo codes based on permutation polynomials. The main
difference between the constructions of turbo codes and LDPC codes
with the permutation polynomials approach is in the constraints:
cycle-lengths of the constituent codes in turbo
codes~\cite{sun:tak:pp} and the degree distribution in LDPC codes.
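For case 1) of Proposition~\ref{prop:pp2}, the value $f_{2_{\min}}$ is simply the radical of $N$, i.e., the product of its distinct prime factors. A minimal sketch:

```python
def radical(n):
    """Product of the distinct prime factors of n; this is f2_min for
    case 1) of the quadratic-PP condition."""
    rad, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            rad *= p
            while n % p == 0:
                n //= p
        p += 1
    return rad * (n if n > 1 else 1)

# For instance, a (3,6)-regular code of length 1008 has N = 3024 edges and
# 3024 = 2^4 * 3^3 * 7, so f2_min = 2 * 3 * 7 = 42.
print(radical(3024))  # 42
```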
\subsection{Quasi-Cyclic Representation}
Theorem~\ref{th:auto} naturally implies the new codes are
quasi-cyclic whose {\em shifting
constraint}~\cite[p. 185]{Lin-Costello-2nd} is
$\beta$. Quasi-cyclic codes can have their generator and parity check
matrices represented by circulant sub-matrices. Quasi-cyclic
constructions are interesting because encoding can be performed by
shift-registers~\cite[pp. 256--261]{Peterson-Weldon}.
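The shifting constraint can also be found empirically. The sketch below (illustrative code, using the same toy $(3,6)$ example with $N=24$ as earlier) searches for the smallest $\beta$ such that shifting variable nodes by $\beta$ and check nodes by $\gamma=\beta\lambda/\rho$ leaves the incidence counts invariant:

```python
def qpp(f1, f2, N):
    return lambda x: (f1 * x + f2 * x * x) % N

def build_incidence(N, lam, rho, f):
    n, r = N // lam, N // rho
    H = [[0] * n for _ in range(r)]
    for i in range(N):
        H[f(i) // rho][i // lam] += 1
    return H

def shifting_constraint(H, lam, rho):
    """Smallest beta > 0 with H[(c + beta*lam/rho) % r][(v + beta) % n]
    equal to H[c][v] for all c, v (i.e., a shift automorphism)."""
    r, n = len(H), len(H[0])
    for beta in range(1, n + 1):
        if (beta * lam) % rho:
            continue
        gamma = beta * lam // rho
        if all(H[(c + gamma) % r][(v + beta) % n] == H[c][v]
               for c in range(r) for v in range(n)):
            return beta
    return None

N, lam, rho = 24, 3, 6
H = build_incidence(N, lam, rho, qpp(1, 6, N))
print(shifting_constraint(H, lam, rho))  # 2: the toy code is quasi-cyclic
```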
Quasi-cyclic LDPC codes from circulant permutation matrices
have some important limiting factors; the girth of the
graph is at most 12~\cite{fos:itldpcqc} and the minimum Hamming
distance of the LDPC code is
upper bounded by $(\lambda+1)!$~\cite{MacKayHighRate98}. These ideas
have been recently generalized~\cite{sma:von:qcb} when circulant
sub-matrices are allowed to have row/column weights 0 and 1 (type I
constraint) and row/column weights 0, 1 and 2 (type II
constraint). Our construction generates both type I and II codes,
which are not equivalent to the construction
in~\cite{fos:itldpcqc}; the latter is of type I with circulant matrices of
weight 1 only. The non-equivalence can be demonstrated by a counterexample. The
codes in~\cite{fos:itldpcqc} always generate rank deficient parity
check matrices whereas we have observed that in general our
construction yields full rank parity check matrices (examples are
given in Section~\ref{sec:examples}). We show next an
example code that has constraints of type II.
The example code II in Table~\ref{tab:codes} is (3,6)-regular with
size $(1008,504)$. The plot of its parity check matrix is shown in
Figure~\ref{fig:parityH}. A dark dot represents a 1 and 0 otherwise.
\begin{figure}[htbp]
\centering
\includegraphics[height=5cm,width=10cm]{graphs/1008_504.eps}
\caption{Parity check matrix for the (3,6)-regular $(1008,504)$ example
code II.}
\label{fig:parityH}
\end{figure}
The matrix apparently lacks regularity. This is the ``randomness''
introduced by the quadratic permutation polynomial. However,
the parity check matrix can be rearranged by grouping columns and rows according to their
corresponding variable node and check node equivalence classes using
Theorem~\ref{th:auto} and Corollary~\ref{co:auto}. Let the parity
check matrix be $H=[x_0x_1\ldots x_{n-1}]$ where $x_i$'s are column
vectors. Then define
\[
H^\prime=[
x_0 x_{0+\beta}\cdots x_{0+(n/\beta-1)\beta},
x_1 x_{1+\beta}\cdots x_{1+(n/\beta-1)\beta}
\cdots
x_{\beta-1}x_{\beta-1+\beta}\cdots x_{\beta-1+(n/\beta-1)\beta}]
\]
Let now $H^\prime=[y_0,y_1,\cdots,y_{r-1}]^T$, where the $y_i$'s are
column vectors and $T$ denotes matrix transposition. Then define
\[
H^{\prime\prime}=[
y_0 y_{0+\gamma}\cdots y_{0+(r/\gamma-1)\gamma},
y_1 y_{1+\gamma}\cdots y_{1+(r/\gamma-1)\gamma}
\cdots
y_{\gamma-1}y_{\gamma-1+\gamma}\cdots y_{\gamma-1+(r/\gamma-1)\gamma}]^T
\]
These simple transformations of the parity check matrix result
in a new matrix consisting of circulant sub-matrices of size
$r/\gamma\times n/\beta$. In our example, the circulant sub-matrices
are of size $84\times 84$ and the resulting parity check matrix is
shown in Fig.~\ref{fig:parityHcirc}.
\begin{figure}[htbp]
\centering
\includegraphics[height=5cm,width=10cm]{graphs/1008_504circulant.eps}
\caption{Parity check matrix for the (3,6)-regular $(1008,504)$ example
code II in circulant form.}
\label{fig:parityHcirc}
\end{figure}
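The regrouping $H\to H^\prime\to H^{\prime\prime}$ described above is easy to reproduce. The sketch below (illustrative code on the same toy $(3,6)$ example with $N=24$, for which $\beta=2$ and $\gamma=1$, not on example code II itself) performs the reordering and verifies that every resulting block is circulant, with entries in $\{0,1,2\}$ as in a type II code:

```python
def qpp(f1, f2, N):
    return lambda x: (f1 * x + f2 * x * x) % N

def build_incidence(N, lam, rho, f):
    n, r = N // lam, N // rho
    H = [[0] * n for _ in range(r)]
    for i in range(N):
        H[f(i) // rho][i // lam] += 1
    return H

def regroup(H, beta, gamma):
    """Columns grouped by residue mod beta, rows by residue mod gamma."""
    r, n = len(H), len(H[0])
    cols = [j for s in range(beta) for j in range(s, n, beta)]
    rows = [i for s in range(gamma) for i in range(s, r, gamma)]
    return [[H[i][j] for j in cols] for i in rows]

def is_circulant(B):
    s = len(B)
    return all(B[(i + 1) % s][(j + 1) % s] == B[i][j]
               for i in range(s) for j in range(s))

N, lam, rho, beta = 24, 3, 6, 2
gamma = beta * lam // rho                 # = 1, so a single row group here
H2 = regroup(build_incidence(N, lam, rho, qpp(1, 6, N)), beta, gamma)
s = (N // rho) // gamma                   # circulant block size (here 4)
blocks = [[row[b * s:(b + 1) * s] for row in H2[:s]]
          for b in range(len(H2[0]) // s)]
assert all(is_circulant(B) for B in blocks)
assert max(max(row) for row in H2) == 2   # weight-2 entries: type II
```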
This means that the new code is quite structured after
all. More precisely, it can be said that the ``pseudo-randomness''
introduced by the non-linearity of the second degree permutation
polynomial has been factored out. We will see next where the
``pseudo-randomness'' gets condensed. The weights of the rows of the
circulant sub-matrices in this example are 0, 1, and 2, as
depicted by the weight matrix $A$ in (\ref{eq:circweight}).
\begin{equation}
A=\left[
\begin{array}{cccccccccccc}
1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 1 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 2 & 1 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 2 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 2 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{array}
\right]
\label{eq:circweight}
\end{equation}
Observing matrix $A$, we see that the ``pseudo-randomness'' is condensed in the
arrangement of the circulant sub-matrices of $A$. Further, matrix $A$ has
a more general form than the matrices obtained
in~\cite{MacKayHighRate98,sma:von:qcb}. Both works derive an upper
bound on the minimum distance assuming that the number of rows in $A$
is $\lambda$. The recent work by Smarandache and
Vontobel~\cite{sma:von:qcb} shows that quasi-cyclic constructions of
type II have larger minimum distance upper bounds than the
ones for type I (a similar result for high-rate quasi-cyclic LDPC
codes is reported in~\cite{kam:hrqc}). For $\lambda=3$, the upper
bound~\cite[Theorem 2]{sma:von:qcb} is $d_{\min}\leq 32$, and an
example code achieving it is provided. Their theorem is stated as follows:
\begin{theorem}[\cite{sma:von:qcb}]
\label{th:dminupper}
Let $C$ be a quasi-cyclic code
with a $\lambda\times \rho$ weight matrix $A$. Then
\[
d_{\min}\leq \min_{\substack{S\subseteq\{1,\ldots,\rho\}\\|S|=\lambda+1}} \sum_{\substack{
S^\prime\subset S\\
S^\prime=\{i_1,\ldots,i_{\lambda}\}
}}\sum_{\sigma \in \Pi}
a_{\sigma(1),i_1}\cdots a_{\sigma(\lambda),i_{\lambda}}
\]
where $\Pi$ is the set of all permutations of $\{1,\ldots, \lambda\}$
and $a_{x,y}$, $x=1,\ldots,\lambda$, $y=1,\ldots,\rho$, are the
entries of $A$.
\end{theorem}
MacKay's Theorem 2 in~\cite{MacKayHighRate98} can be interpreted as a
special case of the previous theorem when all entries in $A$ are equal
to 1. The important point in Theorem~\ref{th:dminupper}
is that by allowing different weights for the circulant matrices, a larger
upper bound is obtained. We make another
straightforward generalization, which improves the upper bound on the
minimum distance even when the codes are of type I. By allowing $A$ to
have a number of rows larger than $\lambda$ (and columns larger than $\rho$) to accommodate our weight
matrices, we obtain the following theorem.
\begin{theorem} Let $C$ be a quasi-cyclic code
with an $s\lambda\times s\rho$ weight matrix $A$. For $S\subseteq
\{1,\ldots, s\rho\}$ with $|S|=s\lambda+1$, let
\[
\psi(S)=\sum_{\substack{
S^\prime\subset S\\
S^\prime=\{i_1,\ldots, i_{s\lambda}\}}}\sum_{\sigma \in \Pi}
a_{\sigma(1),i_1}\cdots a_{\sigma(s\lambda),i_{s\lambda}}
\]
where $\Pi$ is the set of all permutations of $\{1,\ldots,s\lambda\}$.
Then
\[
d_{\min}\leq \min_{\substack{S\subseteq\{1,\ldots,s\rho\}\\
|S|=s\lambda+1\\
\psi(S)\neq 0}}\psi(S)
\]
\label{th:dminupper2}
\end{theorem}
\begin{proof}
This follows directly from Theorem 2 in~\cite{MacKayHighRate98} and Theorem 2
in~\cite{sma:von:qcb}. However, it is important to exclude $\psi(S)=0$,
since otherwise the upper bound would incorrectly evaluate to zero for the
new construction. This is because in the new construction, the weight
matrix $A$ is itself of low density as opposed to the ones
in~\cite{MacKayHighRate98,sma:von:qcb} which are
dense\footnote{Theorem 2 in~\cite{sma:von:qcb} must
also exclude the instances of $\psi(S)=0$ to be strictly
correct.}. The event $\psi(S)=0$ happens when the set $S=\{j_1,j_2,\ldots,
j_{s\lambda+1}\}$ defines a
sub-matrix $A(S)=[b_{j_1} b_{j_2} \ldots b_{j_{s\lambda+1}}]$ containing
a row of zeros where $b_{j_i}$ is the $j_i$th column of $A$.
\end{proof}
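The inner double sum in $\psi(S)$ is the permanent of the $s\lambda\times s\lambda$ submatrix of $A$ with columns $S^\prime$, so the bound can be evaluated by brute force for small weight matrices. A minimal Python sketch (our own naming; exhaustive enumeration is only feasible for small $A$):

```python
import math
from itertools import combinations, permutations

def permanent(M):
    """Permanent of a square matrix: sum over all permutations sigma
    of the products M[0][sigma(0)] * ... * M[k-1][sigma(k-1)]."""
    k = len(M)
    return sum(math.prod(M[i][s[i]] for i in range(k))
               for s in permutations(range(k)))

def dmin_upper_bound(A):
    """Upper bound of the generalized theorem: minimize psi(S) over
    column sets S of size (#rows + 1), where psi(S) sums the
    permanents of all (#rows)-column submatrices of A restricted to S,
    excluding the instances psi(S) = 0."""
    m = len(A)                       # s * lambda rows
    cols = range(len(A[0]))          # s * rho columns
    best = None
    for S in combinations(cols, m + 1):
        psi = sum(permanent([[A[i][j] for j in Sp] for i in range(m)])
                  for Sp in combinations(S, m))
        if psi and (best is None or psi < best):
            best = psi
    return best
```

On an all-ones $\lambda\times\rho$ weight matrix this reproduces MacKay's bound $(\lambda+1)!$, e.g., 24 for $\lambda=3$.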
Applying Theorem~\ref{th:dminupper2} to matrix $A$, we obtain $d_{\min}\leq
62$ for the new $(1008,504)$ code II, which is greater than the upper
bound of 24 for all type I codes with $\lambda=3$. It is also greater than the
upper bound of 32 for type II codes with a dense weight matrix
in~\cite{sma:von:qcb}.
Tighter bounds may be obtained by recursively applying
Theorem~\ref{th:dminupper2} on the sub-matrices corresponding to the
events $\psi(S)=0$ but with the all-zeros row removed. As an example,
in the previous matrix $A$, if $S=\{1,2,3,4,5,6\}$ then $\psi(S)=0$
because the corresponding matrix has the third row from the top
all-zero. We can thus apply Theorem~\ref{th:dminupper2} on the
matrix
\begin{equation}
A^\prime=\left[
\begin{array}{cccccc}
1 & 0 & 1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 2 \\
0 & 0 & 1 & 2 & 1 & 0 \\
1 & 2 & 1 & 0 & 1 & 1 \\
\end{array}
\right]
\end{equation}
However, the bound was not improved for this code.
Another upper bound on the minimum distance for
code II was computed by using the nearest nonzero codeword search
(NNCS) method in~\cite{hu:fos:ele:nncs}, which is likely to yield
the nonzero codeword of lowest Hamming weight. Codewords of weight 44 have been
found. Therefore Theorem~\ref{th:dminupper2} gives a loose upper bound for this
code. Moreover, no low-weight codewords have been found via undetected
errors in the extensive computer simulations of
Section~\ref{sec:examples}. In summary, the combination of circulant matrices of weights 0, 1,
and 2 together with a low density weight matrix gives good minimum
distances for the new code construction. Another interesting example is
code III. Its weight matrix is of type I, therefore the new
construction generates both type I and II codes. Its weight matrix is
of size $64\times 128$, therefore the explicit computation of the
minimum distance upper bound in Theorem~\ref{th:dminupper2} becomes
complex, at least by brute force. However, the NNCS method is still
manageable, giving an upper bound on the minimum distance of
78. We conjecture that the minimum distance
of the new construction grows linearly with the block length. This
conjecture is supported by the upper bounds on the minimum distance computed
for codes of several lengths, as shown in Table~\ref{tab:codes}. We
also observe that the gap between the upper bound of
Theorem~\ref{th:dminupper2} and the NNCS estimate widens with the
block length. Finally, even the NNCS method has its limitations in
terms of complexity. A full run of the algorithm using a ``two
position bit reverse'' is very costly for code V. The upper bound of
248 was obtained from a limited search, with some optimization
exploiting the automorphisms of the code.
\subsection{A Search Procedure for Good Coefficients}
We give an outline of an efficient search procedure for graphs
with large girth using Proposition~\ref{prop:iso} and Theorem~\ref{th:auto}.
\begin{enumerate}
\item Choose the degree distribution $(\lambda,\rho)$
\item Choose the code size $(n,k)$ and set the interleaver length $N=n\lambda$
\item Set $f_2$ to be the product of every prime factor of $N$ repeated
exactly once
\item Search for the $f_1$ that maximizes the girth of the corresponding
graph using Proposition~\ref{prop:iso} and Theorem~\ref{th:auto}
\item Try a larger $f_2$ by including more factors\footnote{The
inclusion of factors not present in $N$ is in some cases
effective. This was the case for code I in Table~\ref{tab:codes}.}
of $N$ and repeat
the previous step. If no improvement in girth is noted, proceed to
the next step; otherwise,
repeat this step
\item Compute the rank of the parity check matrix to find the true
dimension of the code
\end{enumerate}
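Steps 3 and 4 rest on two primitive operations: forming the product of the distinct prime factors of $N$ (the radical of $N$) and testing whether $f(x)=f_1x+f_2x^2 \pmod{N}$ is a permutation. A brute-force Python sketch (helper names are ours; an actual search would use the known QPP criteria rather than exhaustive testing):

```python
def radical(N):
    """Product of the distinct prime factors of N (step 3)."""
    r, p, n = 1, 2, N
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

def is_qpp(f1, f2, N):
    """Brute-force check that f(x) = f1*x + f2*x^2 (mod N)
    permutes {0, 1, ..., N-1}."""
    return len({(f1 * x + f2 * x * x) % N for x in range(N)}) == N
```

For code II, $N=3024=2^4\cdot 3^3\cdot 7$, so the radical is $2\cdot 3\cdot 7=42$, which is exactly its $f_2$ in Table~\ref{tab:codes}.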
The search procedure quickly generates all codes in
Table~\ref{tab:codes} on a regular personal computer.
\section{Example Codes and Simulation Results}
\label{sec:examples}
Nine new QPP-based LDPC codes are listed in
Table~\ref{tab:codes}. Most of the entries are self-explanatory. The
columns $d_{\min}$ and $d_{\min}^{T3}$ represent upper bounds on the
minimum distance computed using the NNCS method in~\cite{hu:fos:ele:nncs} and
Theorem~\ref{th:dminupper2}, respectively. The only code whose rate is not
exactly 1/2 is code VIII, because its parity check matrix is
rank deficient. We simulated codes II, IV and VI using BPSK modulation
under an additive white Gaussian noise (AWGN) channel. We first simulated
code II, which is (3,6)-regular and has size $(1008,504)$. It was
compared with a girth-8 progressive edge growth (PEG) code~\cite{hu:peg}. For a fair
comparison with the curve in~\cite{hu:peg}, we used the same number of
80 belief-propagation (BP) decoding iterations. At least 50 frame errors were
counted per simulated point in all of our simulations unless otherwise
noted. Simulation curves for bit error rate (BER) and frame error rate
(FER) are shown in Figure~\ref{fig:sim1}. The new QPP code outperforms
the PEG code at high signal-to-noise ratios (SNRs).
\begin{table}[htbp]
\centering
\caption{New QPP LDPC codes}
\label{tab:codes}
\[
\begin{array}{|c|c|c|c|c|c|c|c|c|} \hline
\mbox{Code} & (\lambda,\rho) & (n,k) & f(x) & N & \mbox{girth} &
\beta & d_{\min} & d_{\min}^{T\ref{th:dminupper2}}
\\ \hline
I & (3,6) & (504,252) & 5x+210x^2 & 1512 & 8 & 6 & 22 & 22 \\
II & (3,6) & (1008,504) & 29x+42x^2 & 3024 & 8 & 12 & 44 & 62 \\
III & (3,6) & (2048,1024) & 7x+24x^2 & 6144 & 8 & 128 & 78 & -\\
IV & (3,6) & (2432,1216) & 11x+114x^2 & 7296 & 10 & 32 & 90 & 344 \\
V & (3,6) & (4096,2048) & 43x+24x^2 & 12288 & 10 & 256 & 248 & -\\
VI & (3,6) & (8192,4096) & 19x+24x^2 & 24576 & 10 & 512 & - & - \\
VII & (3,6) & (16384,8192) & 7x+24x^2 & 49152 & 10 & 1024 & - & - \\
VIII & (3,6) & (32768,16384^*) & 7x+48x^2 & 98304 & 12 & 1024 & - & -\\
IX & (4,8) & (1120,562) & 87x+70x^2 & 4480 & 8 & 8 & 40 & 96\\ \hline
\end{array}
\]
* The true dimension has not been computed.
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.7\hsize,angle=270]{graphs/ldpc1008_504it80.eps}
\caption{BER and FER curves for the new (3,6)-Regular $(1008,504)$ and PEG
(3,6)-Regular $(1008,504)$ codes using 80 BP decoding iterations}
\label{fig:sim1}
\end{figure}
We next extensively simulated codes II, IV and VI using 200
BP decoding iterations. The BER and FER simulation curves are
shown in Figure~\ref{fig:sim}.
\subsection{Code II (3,6)-Regular $(1008,504)$}
For code II, we simulated over 168 million
frames at 3.00dB and counted 77 detected frame errors but no
undetected frame errors. We also found that among the detected frame
errors there were 8 (10\%) low-weight near-codewords of type $(12,2)$, i.e.,
near-codewords of weight 12 with a syndrome of weight 2.
Over 146 million frames were simulated at 3.10dB. Out of the 19
detectable frame errors, 8 (42\%) were $(12,2)$
near-codewords. This is a significant
increase in the percentage of low-weight near-codewords out of the
detected frame errors compared with the results at the SNR of
3.00dB. This indicates that an error floor appears at an FER near
$5\times 10^{-8}$. Therefore, code II has a similar problem
with near-codewords as the Margulis $(2640,1320)$ code~\cite{margulis}
as reported in~\cite{mackay:pos:near}. However, the floor is about one order of
magnitude below that of the Margulis code.
\subsection{Code IV (3,6)-Regular (2432,1216)}
Because code II has only half the length of the Margulis $(2640,1320)$ code
studied in~\cite{mackay:pos:near}, code IV was simulated for a fair
comparison. The results are very encouraging because there are no
signs of an error floor down to an FER of $3\times
10^{-7}$. Further, the following indicates that a floor must be much
lower. Over 16 million frames were simulated at 2.25dB resulting in a
total of 34 detected frame errors. Near-codeword and
syndrome weights were mostly triple-digit except for 6 of them with
the following weights: $(16,40)$, $(47,95)$, $(80,156)$, $(83,99)$, $(84,140)$,
and $(93,125)$. Over 63 million frames were simulated at 2.35dB
resulting in a total of 21 detected frame errors. Near-codeword and
syndrome weights were mostly triple-digit except for 3 of them with
the following weights: $(45,99)$, $(84,118)$, and $(96,128)$.
\subsection{Code VI (3,6)-Regular (8192,4096)}
Code VI has a block size of approximately 8000
bits, like all the codes compared at the Jet Propulsion Laboratory (JPL)
in~\cite{and:dol:div:tho:jpl42-159}. There are two (3,6)-regular codes
compared in~\cite{and:dol:div:tho:jpl42-159}: a random code and a PEG
code. Neither code has structure or simple encoding and decoding
methods. The PEG code outperforms the random
code, but the new code VI performs no worse than the PEG
code. Further, encoding and decoding of the new code are simpler due to its
algebraic structure. We observed no low-weight near-codewords at
1.70dB for over 3 million simulated frames and 28 detected frame
errors (FER = $8.9\times 10^{-6}$). The number of frames simulated at
1.80dB was over 16 million with 3 detected frame errors (FER =
$1.7\times 10^{-7}$). All detected frame errors at 1.70dB and 1.80dB
had triple-digit near-codeword and syndrome weights.
\begin{figure}
\centering
\includegraphics[width=0.7\hsize,angle=270]{graphs/ldpc1008_504.eps}
\caption{BER and FER curves for the QPP LDPC codes II, IV and VI using 200
BP iterations}
\label{fig:sim}
\end{figure}
\section{Conclusions}
We proposed a new construction for regular LDPC codes using quadratic
permutation polynomials (QPP) over finite integer rings. It is one of
the simplest
known constructions and yet provides enough flexibility to generate
a large family of good codes of practical interest. The new
construction only requires the code size $(n,k)$, the degree
distribution $(\lambda,\rho)$ of the associated bipartite graph with
$N$ edges, and two integers representing the coefficients of a
QPP $f(x)=f_1x+f_2x^2\pmod{N}$ as the
code defining parameters. The algebraic structure allows the
identification of graph isomorphisms and automorphisms, which
significantly simplifies the search for good
coefficients of $f(x)$. The degree of $f(x)$ being two ensures a
non-linearity in the graph structure, which we believe makes it as
good as random constructions with the additional advantages of an
algebraic construction. LDPC codes with corresponding bipartite
graphs with girth as large as 12 are easily obtained. Analytical and
algorithmic upper bounds on the minimum distance were given and computed
for the new codes. Although not formally proven, the new codes appear
to have a minimum distance growing with the block length as opposed to
a fixed small upper bound on the minimum distance of 24 for
quasi-cyclic LDPC codes in~\cite{fos:itldpcqc}. Simulation results
confirm that the new codes have excellent error rate
performance. In particular, we have found neither undetected errors
nor noticeable error floors down to frame error rates close to
$10^{-7}$; therefore, the new QPP LDPC codes exceed the performance of other
algebraic constructions such as the Margulis
construction~\cite{margulis,ros:von:margulis}. We do expect an error
floor around an FER of $10^{-8}$ for
one of our codes caused by low-weight near-codewords. However, because the new
codes have easily identifiable graph automorphisms, we believe a
framework can be set for a further understanding of the important
issue of error floors in LDPC codes due to low-weight near-codewords using the
techniques in~\cite{richardson:al41,mackay:pos:near}.
We also narrowed the theoretical gap between LDPC codes and
turbo codes under the unified method of permutation polynomial
algebraic interleavers over integer rings. The designs of the
permutation polynomials were very similar: the constraint for an
LDPC code was the degree distribution,
while for a turbo code it was the cycle length of the recursive
constituent codes.
In this paper, only regular constructions have been
demonstrated. However, an extension to irregular codes is possible by
laying out vertex labels periodically according to their corresponding
node degrees. This will be investigated in future work.
Finally, from the practical side, the new QPP LDPC codes have very attractive
features. They can be encoded by shift-registers because they are
quasi-cyclic, and a parity check matrix in the form of circulant
sub-matrices is easily obtained. Moreover, the decoding allows a high
degree of parallel processing without exhibiting memory access
contention~\cite{tak:mcf} caused by extrinsic information flow.
\section{Acknowledgement}
The author wishes to thank Marc Fossorier for stimulating
discussions. The author also thanks David MacKay for discussions and
for providing additional data clarifying the low-weight
near-codewords identified in~\cite{mackay:pos:near} for a Margulis
code.
\section{Introduction}
Semantic segmentation is a fundamental task in computer vision, which aims to predict the pixel-wise classification of images and enjoys a wide range of applications, such as medical image analysis, geological inspection, and autonomous driving. Recently, benefiting from deep neural networks, modern semantic segmentation models~\cite{chen2017deeplab,chen2018encoder,lin2016efficient,long2015fully} have achieved remarkable progress with massive human-annotated labeled data.
However, collecting pixel-level labels is very time-consuming and labor-intensive, which shifts much research attention to weakly supervised semantic segmentation (WSSS).
\begin{figure}[]
\begin{center}
\centering
\includegraphics[width=3.3in]{img/CDA1.png}
\end{center}
\caption{Illustration of the difference between conventional augmentation approaches and our method. Classical data augmentation consists of generating images obtained by basic geometrical transformations (rotation) and color changes of original training images. Context Decoupling Augmentation (CDA) aims to randomly paste the given object instances into the scenes, so as to decouple the inherent context position of the original objects in the image.}
\label{fig1}
\end{figure}
There exist various types of weak supervision for semantic segmentation like using bounding boxes~\cite{dai2015boxsup,khoreva2017simple}, scribbles~\cite{lin2016scribblesup,vernaza2017learning}, points~\cite{bearman2016s}, and image-level labels~\cite{hong2017weakly,ahn2018learning,ahn2019weakly,wang2020self,zhang2020splitting}. Among them, image-level class labels have been widely used since they demand the least annotation efforts and are already provided in existing large-scale image datasets.
To tackle WSSS, the mainstream pipelines are based on the visualization technique CAM~\cite{zhou2016learning}: first, a classification network is trained with image-level labels to discover the discriminative regions that are activated for classification; second, by expanding these seed areas using different techniques~\cite{kolesnikov2016seed,ahn2018learning,ahn2019weakly}, pseudo-masks are obtained as the ground truth for training a standard fully supervised semantic segmentation model.
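For reference, the seed-generation step can be sketched as the standard class activation mapping computation of~\cite{zhou2016learning}: a weighted sum of the final convolutional feature maps using the classifier weights of the target class. This is a generic illustration, not the paper's code; the array shapes and function name are our assumptions.

```python
import numpy as np

def cam(features, weights, class_idx):
    """Class activation map for one class.
    features: (K, H, W) final conv feature maps.
    weights:  (num_classes, K) classifier weights after global pooling."""
    m = np.tensordot(weights[class_idx], features, axes=1)  # (H, W)
    m = np.maximum(m, 0)                 # keep positive evidence only
    return m / (m.max() + 1e-8)          # normalize to [0, 1]
```

Thresholding such a map yields the seed regions that the expansion techniques then grow into pseudo-masks.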
In this paper, we focus on augmentation for WSSS, which is crucial for deep networks. As shown in the upper part of Figure~\ref{fig1}, given a training image, traditional data augmentation methods utilize geometrical transformations, such as rotation, scaling, and flipping, and even some color conversions, to increase the diversity of images and avoid overfitting. However, for weakly supervised semantic segmentation, adjusting the image as a whole while maintaining the same contextual semantic relations will not significantly help the networks to mine the object areas.
For example, a ``sofa'' almost always appears indoors in the datasets; therefore, the trained network may recognize objects not only by their instance features but also by co-occurring context information~\cite{li2018tell}. Specifically, when object instances often appear together with certain accompanying backgrounds, the networks will acquire a confounding bias. Namely, a network may perform the classification task well not because it distinguishes the characteristics of the objects, but because it detects the presence of certain contextual semantic information, which is harmful for mining the object regions.
Based on this observation, we propose a Context Decoupling Augmentation (CDA) method, designed to disassemble the inherent contextual information of the original image. As shown in the bottom half of Figure~\ref{fig1}, the ``cat'' appears in the ``sky'', and the ``sofa'' sits on the ``road''. Although some of these scene collocations rarely appear in real life, the models can pay more attention to the objects corresponding to the classification labels.
Unlike the fully-supervised data augmentation approaches~\cite{dvornik2018modeling}, we cannot access the object instance labels to extract the objects under the weakly supervised setting.
Therefore, we first adopt off-the-shelf WSSS approaches to obtain the object instances that have been well-segmented. Secondly, we randomly paste the selected foreground instances into the input images to get the new enhanced images and put them into the model for training together with the original ones without augmentation.
In this way, we can break the dependency between objects and contextual background, and the models will focus on the internal information of the foreground instances rather than the context information to predict the categories they belong to. Besides, we use an online training technique to conduct data augmentation, which means that the combination of the raw input images and the object instances to be pasted are different each time.
This greatly increases the diversity of combinations of various scenes and object instances, and thus enhance the decoupling capability of the networks.
In the proposed context decoupling augmentation framework, we utilize different WSSS networks as our baselines. Specifically, we investigate several architectures, including AffinityNet~\cite{ahn2018learning}, IRNet~\cite{ahn2019weakly}, and SEAM~\cite{wang2020self}. Extensive experiments show that CDA improves pseudo-masks by more than 2.8\% mIoU on average. We achieve new state-of-the-art performance with 66.1\% mIoU on the $\mathit{val}$ set and 66.8\% mIoU on the $\mathit{test}$ set of PASCAL VOC 2012~\cite{everingham2015pascal}.
The main contributions of our paper can be summarized as follows:
\begin{itemize}
\item
We present a generally applicable data augmentation approach for weakly supervised semantic segmentation, which, to the best of our knowledge, has not been well explored.
\item
The proposed context decoupling augmentation (CDA) method does not require additional data and it can remove the correlation between foreground object instances and background context information, which can drive the network focus on object regions rather than the background.
\item
Experimental evaluations on PASCAL VOC 2012 show the effectiveness of our proposed method and CDA can boost the performance of different WSSS methods to the new state-of-the-art by a large margin.
\end{itemize}
\begin{figure*}
\begin{center}
\centering
\includegraphics[width=6.8in]{img/001.png}
\end{center}
\caption{Overview of the proposed augmentation scheme. Stage-I: use the off-the-shelf weakly supervised semantic segmentation methods to obtain some simple object instances with good segmentation. Stage-II: paste the object instances randomly into the raw images to form the new input images, and perform online data augmentation training in a pairwise way with the original input images.}
\label{fig2}
\end{figure*}
\section{Related Work}
In this section, we first give a brief review on weakly supervised semantic segmentation, and then we introduce related work on data augmentation.
\subsection{WSSS}
Image labels as the weak supervision for segmentation have been widely studied in the past few years. Many approaches~\cite{wei2017object,ahn2018learning,ahn2019weakly} use CAM~\cite{zhou2016learning} to mine the object seed regions by predicting image labels. To solve the problem that only the most discriminative regions are highlighted, researchers have designed various ways to expand the object seed regions. For example, in~\cite{yu2015multi}, the target regions are expanded by fusing discriminative regions generated by convolutional layers with different dilation rates. The method of~\cite{wei2017object} drives the network to learn the remaining parts of the objects by iteratively erasing the already-discovered target areas. In addition, some previous works~\cite{hong2017weakly,hou2018self} use additional data, such as videos and saliency maps, to explore the object areas.
Although object expansion technologies emerge endlessly, they all use CAM~\cite{zhou2016learning} as the cornerstone. The effect of subsequent diffusion depends on the first step of the CAM learning features.
As only image-level labels are provided, when objects are closely coupled with contextual backgrounds, such as ``boat'' and ``water'', ``aeroplane'' and ``sky'', or ``train'' and ``track'', CAM will mistakenly recognize the background together with the foreground objects. As mentioned in~\cite{li2018tell}, the trained networks have no incentive to focus attention only on the foreground class, as there may be a bias towards other highly correlated contextual factors acting as distractors. Thus, this is an issue worth investigating and in need of a solution.
\subsection{Data Augmentation}
Data augmentation is a major trick to train deep neural networks, which aims to increase the diversity of the data by increasing the training samples and avoid overfitting to a certain extent.
Conventional data augmentation approaches perform a series of operations on the basic data, such as rotation, flipping, adding Gaussian noise, etc.
Some works have explored synthesizing training data~\cite{frid2018synthetic,peng2015learning} for further generalizability. Generating new training samples by Stylizing ImageNet~\cite{geirhos2018imagenet} can lead to better classification performances. Recently, GAN~\cite{zhu2017unpaired} has been employed to transfer the style of the images and to make the content of the images from one domain to another, which can enrich the semantic information of the images to train the deep neural networks.
Furthermore, ~\cite{mixup} introduced a method to mix two random samples and divide the classification results proportionally to enhance images.~\cite{cutout} conducted augmentation by randomly cutting out some areas in the sample and filled it with 0 pixel value, and keep the result of classification unchanged.
For object detection and segmentation, a popular data augmentation way is “copy-and-paste”~\cite{dvornik2018modeling,dwibedi2017cut}. These works pasted real segmented objects into natural images, which is beneficial to increase the object complexity of the internal images and can help to solve the problem of small target detection. However, obtaining these segmented objects requires pixel-wise instance labels.~\cite{remez2018learning} used box-supervision and the off-the-shelf faster-RCNN~\cite{ren2015faster} method to segment and generate masks via cut-and-paste.~\cite{arandjelovic2019object} adopted the unsupervised cut-and-paste learning method to generate new combined images, but this kind of method is only applicable to the image of single object.
It is the first time that we employ copy-and-paste in the WSSS field and it does not require the help of pixel-wise labels and other auxiliary approaches. Thus, for WSSS, such a data augmentation scheme is significant and has not been well explored.
\section{Framework}
Our approach mainly consists of two stages: (1) we first collect easy examples of well-segmented objects by using off-the-shelf WSSS methods; (2) then we train the network in a pairwise manner with online augmentation. In this section, we describe these two stages in detail.
\subsection{Object Instances Collecting}
We aim to apply data augmentation to one of the WSSS models ($\mathit{e.g.}$, IRNet~\cite{ahn2019weakly}). To some extent, a WSSS method can already predict good masks for some easy objects given only class labels. Therefore, as shown in Figure~\ref{fig2}, in the first stage we train the original network and select qualified object instances by setting criteria on the scene complexity of the image, the extent of the object, and the semantic relevance.
Specifically, for the inference phase after training the network, we follow two main criteria for collecting object instances: (i) the current image should contain only a single class. The intuition is that with only a single class, the image content should be simple and free of a complex semantic environment, so the segmentation results of the model should be more accurate; (ii) the segmentation result of the current image should satisfy $\epsilon_1 < \frac{m}{n} < \epsilon_2$, where $\epsilon_1$ and $\epsilon_2$ are two threshold factors, $m$ is the number of pixels belonging to the foreground object, and $n$ is the number of pixels in the entire image. The rationale is that if the ratio $\frac{m}{n}$ is too large, the background has likely been incorrectly identified as foreground; conversely, if the ratio is too small, the model has likely failed to recognize enough foreground object pixels.
Different from existing synthesis approaches~\cite{dvornik2018modeling,dwibedi2017cut}, our method relies on self-provided masks to obtain qualified object instance images.
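The two criteria can be expressed as a small predicate. The sketch below is illustrative only: the threshold values and the function name are ours, not fixed by the paper.

```python
import numpy as np

def keep_instance(mask, image_labels, eps1=0.1, eps2=0.9):
    """Criteria (i)-(ii): keep an instance only if the image has a
    single class and the foreground/total pixel ratio m/n lies
    strictly between eps1 and eps2 (threshold values are assumed)."""
    if len(image_labels) != 1:           # criterion (i): single class only
        return False
    m = int(np.count_nonzero(mask))      # foreground pixel count
    n = mask.size                        # total pixel count
    return eps1 < m / n < eps2           # criterion (ii)
```

Instances passing the predicate form the object pool used by the second stage.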
\begin{figure}[]
\begin{center}
\centering
\includegraphics[width=3.3in]{img/blend.png}
\end{center}
\caption{Different kinds of pasting methods used in experiments. (a) Raw input, (b) Random rescale pasting, (c) Random rescale + rotation pasting, (d) Random rescale + rotation + Gaussian smoothing pasting. }
\label{fig3}
\end{figure}
\subsection{Online Augmentation Training}
\textbf{Blending.}
Before we take a step to train the network in the second stage, we first introduce how to blend the object instances into the natural images.
As shown in Figure~\ref{fig3}, we illustrate the different pasting techniques used in our experiments. It is worth mentioning that we only paste objects that have not already appeared in the original images. The significance of this is that we can increase the object diversity of the images while also reducing the dependence of those objects on their inherent scenes.
By randomly rescaling the objects, we can paste them into the images appropriately to prevent them from being too large or too small. The addition of random rotation can change the inherent orientation properties of the objects. Adding Gaussian smoothing can help the added objects boundary blend more naturally.
\begin{figure}[]
\begin{center}
\centering
\includegraphics[width=3.3in]{img/good2.png}
\end{center}
\caption{Examples of the input augmented images with varying degrees of occlusion.}
\label{fig4}
\end{figure}
In some cases, the blending may not be ideal; we elaborate on several possibilities for random pasting. As shown in Figure~\ref{fig4}, we list several augmented images produced by random pasting, which we call ``perfect'', ``good'', and ``noise'' examples.
In the ``good'' example, the new object ``bird'' covers part of the ``dog'' in the original image; however, we argue that this could help to erase the discriminative regions and force the network to discover more object regions, similar to the erasing strategy in~\cite{wei2017object}. The ``noise'' example shows that the ``sofa'' completely covers the ``aeroplane'' in the original image, which will confuse the network's classification.
However, we consider that such hard examples do not account for the majority. Most objects occupy the middle or a prominent location of natural images, and the random blending method we employ tends to paste the new objects into off-center positions. Thus, this case does not noticeably affect learning, and our framework is robust to the quality of the augmentation. According to our experiments, this simple random blending method performs well in boosting performance.
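A minimal sketch of the random blending step is given below. It is our own simplification: plain alpha pasting at a random location with a precomputed object mask; the rescaling, rotation, and Gaussian-smoothing variants of Figure~\ref{fig3} are omitted, and all names are ours.

```python
import numpy as np

def paste(image, obj, mask, rng):
    """Alpha-blend a segmented object patch into a float image at a
    random location.  `mask` is the 0/1 (or smoothed) object mask."""
    H, W = image.shape[:2]
    h, w = obj.shape[:2]
    top = rng.integers(0, H - h + 1)     # random paste position
    left = rng.integers(0, W - w + 1)
    out = image.copy()
    region = out[top:top + h, left:left + w]
    alpha = mask[..., None].astype(float)
    out[top:top + h, left:left + w] = alpha * obj + (1 - alpha) * region
    return out
```

With a Gaussian-smoothed `mask`, the same routine reproduces the softer boundaries of Figure~\ref{fig3}(d).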
\renewcommand{\algorithmicrequire}{ \textbf{Input:}}
\renewcommand{\algorithmicensure}{ \textbf{Output:}}
\begin{algorithm}[htb]
\caption{Stage-II: Online Augmentation.}
\label{alg:Framwork}
\begin{algorithmic}[1]
\REQUIRE ~~\\
The training dataset images $\mathcal{I}$ and the corresponding label $\mathcal{L}$;\\
The object instances $\mathcal{O}$ and the corresponding label $\mathcal{T}$.
\label{code:fram:extract}
\WHILE{not done}
\STATE ($\mathcal{I}_i$, $\mathcal{L}_i$) $\gets$ Draw one sample from training dataset;\
\STATE ($\mathcal{O}_j$, $\mathcal{T}_j$) $\gets$ Draw one sample from object instances subset;\
\WHILE{$\mathcal{T}_j$ in $\mathcal{L}_i$}
\STATE ($\mathcal{O}_j$, $\mathcal{T}_j$) $\gets$ Resample;\
\ENDWHILE
\STATE $\mathcal{I}^{'}_i$ $\gets$ Blend $\mathcal{O}_j$ into $\mathcal{I}_i$;\
\STATE $\mathcal{L}^{'}_i$ $\gets$ Append $\mathcal{T}_j$ in $\mathcal{L}_i$;\
\STATE Train CAM $\gets$ Loss($\mathbb{C}$($\mathcal{I}_i$), $\mathcal{L}_i$) + Loss($\mathbb{C}$($\mathcal{I}^{'}_i$), $\mathcal{L}^{'}_i$);\
\ENDWHILE
\label{code:fram:trainbase}
\STATE Expansion.
\end{algorithmic}
\end{algorithm}
\textbf{Online Training.}
The augmentation scheme is conducted online to enhance the network trained in stage-I to improve the ability to distinguish object features.
Formally, in each batch we sample $N$ images from the training dataset and the same number of object instances from the subset provided by stage-I. We then randomly paste the segmented objects into the input images, creating $N$ new images, so a batch of size $2N$ is generated online for each augmentation iteration.
The construction process of the online augmentation learning is summarized in Algorithm~\ref{alg:Framwork}.
Note that we train the online augmentation method in a pairwise manner, as shown on the left of stage-II in Figure~\ref{fig2}. We consider that this further helps the network recognize objects: some images contain newly blended objects while others do not, which helps the classifier find more discriminative features.
The motivation is similar to ``spot the difference'' in the human visual system: when two images share the same background but differ in one object, that object leaves a deep impression. For the same reason, pairwise training lets the network classifier learn better features of such objects.
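The sampling loop of Algorithm~\ref{alg:Framwork} can be sketched as follows. This is a hedged illustration, not the official implementation: `dataset` is assumed to yield (image, label-set) pairs, `instances` yields (object, class) pairs, and `blend` stands in for the random-pasting routine.

```python
import random

def build_pairwise_batch(dataset, instances, n, blend):
    """One online-augmentation step (stage-II, sketched): sample n images,
    blend into each an instance whose class is absent from the image, and
    return original/augmented pairs, i.e. a batch of size 2n."""
    batch = []
    for _ in range(n):
        image, labels = random.choice(dataset)
        obj, tag = random.choice(instances)
        while tag in labels:                 # resample until the class is new
            obj, tag = random.choice(instances)
        aug_image = blend(obj, image)        # paste obj into the image
        aug_labels = labels | {tag}          # append the new class label
        batch.append((image, labels))
        batch.append((aug_image, aug_labels))
    return batch
```

In a real training loop, the classification loss would then be accumulated over both members of each pair, matching the pairwise objective in Algorithm~\ref{alg:Framwork}.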
\subsection{Discussion}
The proposed CDA framework contributes a new data augmentation learning strategy. Unlike previous ``copy-and-paste'' works, we do not use additional pixel-wise labels.
Specifically, using the initial segmentation masks provided by the model itself, we obtain the object instances for the next phase of augmentation training. Furthermore, since our goal is to decouple the high correlation between objects and their contextual background, we do not need to model visual context~\cite{dvornik2018modeling,chu2018deep}, which greatly improves the efficiency of pasting objects into images.
In addition, we adopt an online augmentation training scheme. Static offline data augmentation merely enlarges the training dataset linearly: once the new dataset is formed, the number of images remains fixed. In contrast, our method achieves exponential-level augmentation, because the combination of object instances and natural images changes in every round of training.
\section{Experiments}
To demonstrate the contributions of the proposed method, we conduct several ablation studies showing the effectiveness of CDA and compare different baseline models to the state of the art. Details of the dataset, evaluation metric, and baseline models are given below.
\subsection{Dataset}
All the networks in our framework are trained and evaluated on the PASCAL VOC 2012~\cite{everingham2015pascal} segmentation benchmark for a fair comparison to previous approaches.
The official dataset separation has 1464 images for training, 1449 for validation and 1456 for testing.
Following the common practice, we take additional annotations to build an augmented training set with 10582 images presented in~\cite{hariharan2011semantic}.
We use the standard mean Intersection-over-Union (\textbf{mIoU}) as the evaluation metric for all experiments.
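For reference, the mIoU metric can be sketched in a few lines of Python. This is an illustration rather than the official VOC evaluation script, which additionally handles ignored pixels in the annotations; per-class intersections and unions are accumulated over the whole set before the ratio is taken.

```python
def mean_iou(preds, gts, num_classes):
    """Mean Intersection-over-Union over flattened label maps.
    `preds` and `gts` are lists of equal-length sequences of class ids."""
    inter = [0] * num_classes
    union = [0] * num_classes
    for pred, gt in zip(preds, gts):
        for p, g in zip(pred, gt):
            if p == g:
                inter[p] += 1
                union[p] += 1          # pixel counted once in the union
            else:
                union[p] += 1          # false positive for class p
                union[g] += 1          # false negative for class g
    ious = [inter[c] / union[c] for c in range(num_classes) if union[c] > 0]
    return sum(ious) / len(ious)
```
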
\subsection{Implementation Details}
To validate the applicability of CDA, we deploy it on three popular WSSS models including IRNet~\cite{ahn2019weakly}, AffinityNet~\cite{ahn2018learning} and SEAM~\cite{wang2020self}.
The general training architecture comprises a multi-label image classification step, a pseudo-mask generation step, and the final segmentation model (DeepLab-v2~\cite{chen2017deeplab}). We strictly follow the settings reported in the official codes. Specifically, for the SEAM~\cite{wang2020self} and AffinityNet~\cite{ahn2018learning} baselines, ResNet38~\cite{he2016deep} pre-trained on ImageNet~\cite{deng2009imagenet} is adopted as the backbone, with batch sizes of 8 and 16, respectively. When training the networks, multi-scale and data augmentation techniques such as horizontal flipping, random cropping, and color jittering are deployed in both architectures. Following the poly policy $lr = lr_{init}(1-itr/max\_itr)^\rho$ with decay $\rho$ = 0.9, the models are trained with a fixed input size of 448 $\times$ 448 using the Adam optimizer~\cite{kingma2014adam}. In addition, online hard example mining~\cite{shrivastava2016training} is applied to the training loss in SEAM.
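The poly learning-rate policy used by all three baselines can be written as a one-line helper; the names follow the formula in the text, and the default $\rho = 0.9$ matches the reported setting.

```python
def poly_lr(lr_init, itr, max_itr, rho=0.9):
    """Polynomial learning-rate decay: lr = lr_init * (1 - itr/max_itr)**rho."""
    return lr_init * (1.0 - itr / max_itr) ** rho
```
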
As for IRNet~\cite{ahn2019weakly}, ResNet50~\cite{he2016deep} pre-trained on ImageNet is used as the backbone network. The batch size is set to 16 for the image classification model and 32 for the inter-pixel relation model. The input image is cropped to a fixed size of 512 $\times$ 512, using zero padding if needed. The model is trained for 8,000 iterations with the same polynomial decay strategy as AffinityNet~\cite{ahn2018learning}, using stochastic gradient descent (SGD) for optimization.
The fully-connected CRF~\cite{krahenbuhl2011efficient} is used in three baselines to refine CAM, pseudo-mask, and segmentation mask with the default parameters in the public code.
We set the threshold $\epsilon_1$ = 0.1 and $\epsilon_2$ = 0.7 by experience.
\subsection{Ablation Studies}
To verify the effectiveness of our CDA, we evaluate CAM seed regions, pseudo-masks, and segmentation masks. In our experiments, the standard mean Intersection-over-Union (mIoU) is computed on the training set for CAM seed masks and pseudo-masks, and on the $\mathit{val}$ and $\mathit{test}$ sets for segmentation masks. For simplicity, since all three WSSS models are based on CAM~\cite{zhou2016learning}, we use one representative model (IRNet~\cite{ahn2019weakly}) as the baseline and conduct several ablation studies on CAMs, reported in mIoU, to illustrate the role of each component of our approach.
\begin{table}[]
\begin{center}
\scalebox{0.9}{
\begin{tabular}{c|c|c}
\toprule
\toprule
Method \ & \ operation \ & \ mIoU (\%)\\
\midrule
\multirow{2}*{Conventional Augmentation }& Rotation & 48.5 \\
& Translation & 48.4 \\
\midrule
\multirow{3}*{Mixup~\cite{mixup}}& $\alpha$ = 0.3& 48.7 \\
& $\alpha$ = 0.5 & 48.5 \\
& $\alpha$ = 0.8 & 49.0 \\
\midrule
CutOut~\cite{cutout} & Random& 48.9 \\
\midrule
CutMix~\cite{cutmix} & Random& 49.2 \\
\midrule
Random pasting (ours) & Rescale & \textbf{49.8} \\
\bottomrule
\end{tabular}}
\end{center}\caption{Experiments on different augmentation methods. Here $\alpha$ is the Mixup interpolation strength between the input and target vectors.}\label{table1}
\end{table}
\begin{table}[]
\begin{center}
\scalebox{0.9}{
\begin{tabular}{ccccc}
\toprule
\toprule
Baseline & Rescale &Rotation& Gaussian & mIoU (\%)\\
\midrule
$\checkmark$& & & & 48.3 \\
$\checkmark$& $\checkmark$& & & 49.8 \\
$\checkmark$& $\checkmark$&$\checkmark$ & & \textbf{50.8} \\
$\checkmark$& $\checkmark$& & $\checkmark$& 49.6 \\
$\checkmark$& $\checkmark$&$\checkmark$ & $\checkmark$& 50.4 \\
\bottomrule
\end{tabular}}
\end{center}\caption{The ablation study of the effect on different pasting methods. Baseline indicates the original CAM method without pasting new objects for augmentation.}\label{table2}
\end{table}
\begin{table}[]
\begin{center}
\scalebox{0.9}{
\begin{tabular}{c|c}
\toprule
\toprule
Training manner \ & \ \ mIoU (\%) \\
\midrule
Pairwise& \textbf{50.8} \\
Non-pairwise & 50.1 \\
\bottomrule
\end{tabular}}
\end{center}\caption{Experiments of augmentation training manner.}\label{table3}
\end{table}
\textbf{Random pasting vs. other sophisticated augmentation methods:} For the traditional augmentation methods, we adopt random rotation and translation to expand the dataset to three times its original size; however, they do not bring a significant performance boost. We also compare the Mixup~\cite{mixup}, CutOut~\cite{cutout}, and CutMix~\cite{cutmix} methods for generating augmented images.
As shown in Table~\ref{table1}, random rescale pasting outperforms the other three methods achieving \textbf{49.8}\% mIoU.
These results demonstrate that random pasting suits our CDA framework. We consider that moderate occlusion helps the network mine features from other areas of the objects, while complete occlusion is rare enough not to affect the learning process.
\begin{figure}[]
\begin{center}
\centering
\includegraphics[width=3.3in]{img/cam.png}
\end{center}
\caption{Qualitative visualization of CAMs. Our CDA framework not only suppresses over-activation on contextual backgrounds highly correlated with the objects ($1^{st}$, $2^{nd}$, $3^{rd}$ rows) but also expands CAMs to cover whole object regions ($4^{th}$ row).}
\label{fig5}
\end{figure}
\textbf{Comparison with baseline:} We further explore the impact of different pasting methods on data augmentation. Table~\ref{table2} shows that random rescale pasting yields a 1.5\% improvement over the baseline. Combining rescaling and rotation gives the best performance of \textbf{50.8}\% mIoU on the PASCAL VOC training set, while Gaussian smoothing does not improve the results. Therefore, unless otherwise specified, subsequent experiments use random rescaling combined with rotation.
\begin{table*}[]
\begin{center}
\scalebox{0.9}{
\begin{tabular}{c|c|c|c|c|c}
\toprule
\toprule
Network & \ \ Backbone \ & \ \ \ \ CAM \ \ \ \ & \ Pseudo-Masks \ & \ Seg. Masks (val-set) & \ Seg. Masks (test-set)\\
\midrule
AffinityNet~\cite{ahn2018learning} & ResNet-38 & 48.0 & 59.7 & 61.7 & 63.7\\
+ CDA & ResNet-38 & $48.9_{\color{red}+0.9}$ & $63.3_{\color{red}+3.6}$ & $64.2_{\color{red}+2.5}$ & $65.8_{\color{red}+2.1}$\\
\midrule
IRNet$^{*}$~\cite{ahn2019weakly} & ResNet-50 & 48.3 & 65.9 & 63.5 & 64.8\\
+ CDA & ResNet-50 & $50.8_{\color{red}+2.5}$ & $67.7_{\color{red}+1.8}$ & $65.8_{\color{red}+2.3}$ & $66.4_{\color{red}+1.6}$\\
\midrule
SEAM~\cite{wang2020self} & ResNet-38 & 55.4 & 63.4 & 64.5 & 65.7\\
+ CDA & ResNet-38 & $58.4_{\color{red}+3.0}$ & $66.4_{\color{red}+3.0}$ & $66.1_{\color{red}+1.6}$ & $66.8_{\color{red}+1.1}$\\
\bottomrule
\end{tabular}}
\end{center}\caption{Performance in mIoU on PASCAL VOC of different baselines with our CDA framework. $^{*}$ denotes our reimplemented results, since the original code does not provide pre-trained weights.}\label{table5}
\end{table*}
Figure~\ref{fig5} shows a qualitative comparison between our CAM+Aug (CDA) method and the original CAM.
In the first and second rows the object label is ``table''; the original CAM activates background semantics strongly related to the ``table'', such as the ``chair'', whereas our decoupling augmentation strategy keeps the activation focused on the target areas. For the image labeled ``train'', the original CAM even attends not to the object itself but to the ``track'', which is detrimental to the subsequent segmentation task. Moreover, CDA helps the network expand beyond the most discriminative regions and discover more comprehensive object features, as for the ``cat'' in the last row.
\textbf{The effect of pairwise training:} Rather than training the networks on augmented images alone, we jointly train on pairs of non-augmented and augmented images, as shown in stage-II of Figure~\ref{fig2}.
Table~\ref{table3} shows that the pairwise training strategy outperforms training on single augmented images, illustrating that it helps the network classifier learn more discriminative features.
\begin{table}[]
\begin{center}
\scalebox{0.85}{
\begin{tabular}{c|c|c}
\toprule
\toprule
Number of pasted objects & Same category objects & mIoU (\%)\\
\midrule
1 & $\times$ & \textbf{50.8}\\
2 & $\times$ & 48.9\\
3 & $\times$ & 47.8\\
\midrule
1 & \checkmark & 50.2\\
2 & \checkmark & 48.6\\
3 & \checkmark & 47.4\\
\bottomrule
\end{tabular}}
\end{center}\caption{Experiments with different numbers of pasted objects for augmentation.}\label{table4}
\end{table}
\textbf{The effect of the number of objects:} Under the default settings of our experiments, we paste only one new instance that does not exist in the original image. We further explore pasting multiple objects into the images. As shown above the solid line in Table~\ref{table4}, the mIoU decreases when the number of pasted objects increases from one to two, and with three pasted objects the result is even worse than the baseline. This indicates that over-pasting may cover the objects in the original image, making noise samples dominant; this confuses the classifier and brings negative effects.
In addition, as shown below the solid line in Table~\ref{table4}, when the pasted object is allowed to share a category with an object already in the image, the performance is generally worse than before.
This shows that pasting only objects of categories absent from the image decouples the strong contextual dependence of objects on their original semantic environment.
\begin{figure}[]
\begin{center}
\centering
\includegraphics[width=3.3in]{img/pse.png}
\end{center}
\caption{Visualization of pseudo-masks (baseline: IRNet~\cite{ahn2019weakly}). (a) Input images. (b) Ground-Truth labels. (c) Our CAM+Aug. (d) Original CAM.}
\label{fig6}
\end{figure}
\begin{figure*}
\begin{center}
\centering
\includegraphics[width=6.6in]{img/deeplab.png}
\end{center}
\caption{Qualitative results on the PASCAL VOC 2012 val set. (a) Input images. (b) Ground-truth labels. (c) Results obtained by IRNet~\cite{ahn2019weakly} baseline. (d) Results of our IRNet + CDA. More results can be found in the supplementary material.}
\label{fig7}
\end{figure*}
\begin{table}[]
\begin{center}
\scalebox{0.88}{
\begin{tabular}{ccc|cc}
\toprule
\toprule
Methods & Backbone & Saliency & $\mathit{val}$ & $\mathit{test}$\\
\midrule
CCNN~\cite{pathak2015constrained}$_{\text{ICCV'15}}$ & VGG16 & -& 35.3 & 35.6\\
SEC~\cite{kolesnikov2016seed}$_{\text{ECCV'16}}$ & VGG16 & -&50.7& 51.1\\
STC~\cite{wei2016stc}$_{\text{TPAMI'17}}$ & VGG16 & \checkmark&49.8 & 51.2\\
AdvEra~\cite{wei2017object}$_{\text{CVPR'17}}$& VGG16 & \checkmark & 55.0 & 55.7\\
DCSP~\cite{chaudhry2017discovering}$_{\text{BMVC'17}}$& ResNet101 & \checkmark & 60.8 & 61.9\\
MDC~\cite{wei2018revisiting}$_{\text{CVPR'18}}$& VGG16 & \checkmark & 60.4 & 60.8\\
MCOF~\cite{wang2018weakly}$_{\text{CVPR'18}}$& ResNet101 & \checkmark & 60.3 & 61.2\\
DSRG~\cite{huang2018weakly}$_{\text{CVPR'18}}$& ResNet101 & \checkmark & 61.4 & 63.2\\
AffinityNet~\cite{ahn2018learning}$_{\text{CVPR'18}}$& ResNet-38 & - & 61.7 & 63.7\\
IRNet~\cite{ahn2019weakly}$_{\text{CVPR'19}}$& ResNet50 & - & 63.5 & 64.8\\
FickleNet~\cite{lee2019ficklenet}$_{\text{CVPR'19}}$& ResNet101 & \checkmark & 64.9 & 65.3\\
SEAM~\cite{wang2020self}$_{\text{CVPR'20}}$& ResNet38 & - & 64.5 & 65.7\\
ICD~\cite{IDC}$_{\text{CVPR'20}}$ & ResNet101 & - & 64.1 & 64.3 \\
\midrule
IRNet + CDA (ours)& ResNet50 & - & {\color{blue} 65.8} & {\color{blue} 66.4} \\
SEAM + CDA (ours)& ResNet38 & - & {\color{red} 66.1} & {\color{red} 66.8} \\
\bottomrule
\end{tabular}}
\end{center}\caption{Performance comparisons with other state-of-the-art WSSS methods on PASCAL VOC 2012 dataset. The {\color{red} best} and {\color{blue} second best} performance under each set are marked with corresponding formats.}\label{table6}
\end{table}
\textbf{Analysis of pseudo-masks and segmentation masks:} The overall results are shown in Table~\ref{table5}. Deploying CDA on different weakly supervised semantic segmentation models improves all of their performances. In particular, SEAM~\cite{wang2020self} achieves the best segmentation-mask performance on both the validation and testing sets.
Figure~\ref{fig6} shows that we can obtain more accurate and complete masks covering the object areas.
\subsection{Comparison with State-of-the-arts}
Finally, we compare our framework with state-of-the-art methods on the PASCAL VOC 2012 dataset including both the validation set and the testing set. For a fair comparison, we adopt the same DeepLab~\cite{chen2014semantic,chen2017deeplab} architectures as reported in the original papers.
Table~\ref{table6} shows experiment results of previous approaches.
Although the different baselines already improve on previous methods, deploying CDA lifts them further: SEAM~\cite{wang2020self} achieves the best performance and outperforms the other state-of-the-art methods by a large margin, while IRNet~\cite{ahn2019weakly} yields the second-best performance and beats works published after it.
Figure~\ref{fig7} presents qualitative results of our CDA approach applying on IRNet baseline and compares them to those of IRNet itself.
We observe that CDA makes more accurate predictions on objects, with better demarcation in coherent areas, and helps expand and discover more comprehensive object regions.
\section{Conclusion}
In this paper, we discuss and investigate the confounding bias introduced by the network classifier in object recognition when only class labels are available. To this end, we propose a Context Decoupling Augmentation (CDA) method for weakly supervised semantic segmentation that narrows the gap with full supervision.
Specifically, through two-stage training, object instances provided by the network itself are pasted into the input natural images for augmentation. To further improve the network's ability to learn object features, we adopt a pairwise training manner that helps the classifier distinguish more discriminative features.
Experimental results show that CDA boosts various WSSS methods to new state-of-the-art performance without using extra data or labels.
\vspace{1ex}
\noindent \textbf{Acknowledgment.} \ This work was supported by NSFC 61876208, Key-Area Research and Development Program of Guangdong 2018B010108002, National Research Foundation Singapore under its AI Singapore Programme (AISG-RP-2018-003) and the MOE Tier-1 research grants: RG28/18 (S) and RG22/19 (S).
{\small
\bibliographystyle{ieee}
\section{Introduction}
Dirac materials~\cite{wehling2014}, such as graphene~\cite{novoselov2005,geim2007,castroneto2009} and a growing number of novel two-dimensional systems, exhibit a huge variety of possible ordered states in the presence of sufficiently strong interactions~\cite{sorella1992,khvesh2001,gorbar2002,herbut2006,black2007,raghu2008,roy2009,honerkamp2008,grushin2013,daghofer2013,duric,herbut2009}.
By increasing, for instance, the repulsive onsite or nearest-neighbor density-density interactions, these systems are expected to exhibit a continuous quantum phase transition from the semimetallic phase into spin-density-wave (SDW) and charge-density-wave (CDW) phases, respectively~\cite{herbut2009,araki2012,wu2013,janssen2014}.
More exotic interaction-induced states have also been discussed, such as Kekul\'e states~\cite{hou2007,roy2010,khari2012,classen2014} or a topological Quantum (Spin) Hall state~\cite{raghu2008,grushin2013,daghofer2013,scherer2015}.
In fact, the nature of an ordered state crucially depends on the system's precise interaction profile, e.g., the magnitudes and ratios of the local and non-local short-ranged interaction terms.
Current experimental data suggests that free-standing graphene is in the semimetallic (SM) phase~\cite{experiments,experiments2}.
From the theoretical side, calculations based on the constrained Random Phase Approximation (cRPA) and beyond provide values for the interaction parameters of the Coulomb repulsion for graphene and its few-layer relatives~\cite{wehling2011,rosner2015}.
Quantum Monte Carlo (QMC) studies for these parameters confirm the semimetallic behavior of physical graphene in agreement with the experimental findings~\cite{ulybyshev2013,smith2014}.
At the same time, however, these results suggest that the material may not be too far from a possible transition into an ordered state.
Other QMC calculations also find sizable charge-density and spin-current correlations, although they do not become long-ranged within the accessible parameter region~\cite{golor2015}.
Furthermore, a uniform and isotropic strain of about 15\% can be expected to induce an interaction-driven metal-insulator transition in graphene~\cite{assaad2015}.
It is therefore not inconceivable that physical graphene could possibly be tuned through a symmetry-breaking quantum phase transition~\cite{khvesh2001,herbut2006,Juricic2009,katanin2015}.
Similar conclusions may be expected to hold for other Dirac materials~\cite{wehling2014} and should also be relevant for ``artificial graphene''~\cite{polini2013}.
Despite the great progress in the last years, our theoretical understanding of the role of interactions in Dirac materials is far from being complete.
In fact, QMC simulations typically suffer from a sign problem when nonlocal interaction parameters grow large, inducing a strong bias toward the antiferromagnetic state~\cite{ulybyshev2}.
Fermionic renormalization group approaches have provided important contributions to the understanding of interacting electrons in Dirac materials accounting for further-ranged interactions on equal footing~\cite{herbut2006,honerkamp2008,roy2009,classen2014,scherer2015}.
These approaches are well-suited for the identification of the ordering tendencies and their classification by symmetries.
However, the purely fermionic approach typically misses important order-parameter fluctuations
and the description of symmetry-broken regimes in the phase diagram is intricate~\cite{metzner2012}.
An inclusion of order-parameter fluctuations, aiming at more quantitative studies of the phase transitions and their critical behavior in Dirac materials, can be achieved within bosonized approaches~\cite{rosa2001} that also allow one to describe the symmetry-broken regime.
In this spirit, the SDW and CDW transitions have been investigated, however, only as completely separate transitions~\cite{herbut2009, janssen2014}.
Here, we take the vicinity of graphene-like materials to both the SDW and the CDW ordered states as a motivation to study the nature of the quantum multicritical point connected to the intersection of the different phase transition lines.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\columnwidth]{figure01_sum.pdf}
\caption{(a) Schematic phase diagram of the extended Hubbard model on the honeycomb lattice with onsite interaction $U$ and nearest-neighbor interaction $V$. Neutral suspended graphene is found to be in the semimetallic state indicated by the star. Solid lines denote second-order and dashed lines first-order transitions. The neighborhood of the multicritical point (gray shaded area) may be governed by either a (\RNum 1) second-order tetracritical point or a (\RNum 3) second-order bicritical point with first-order transition between the ordered states or by a (\RNum 2) first-order triple point.
(b) Sketch of stability ranges of fixed points for generalized fermion flavor number $N_f$. Two different fixed points FP1 and FP2 are stable for small and large $N_f$, respectively. Graphene lies in the hatched region, where no stable fixed point exists. This leads to a first-order triple point in the phase diagram [situation (\RNum 2)]. The critical flavor numbers $N_{c_{i}}$ change considerably when including nonperturbative effects. In our approximation we find $N_{c_1}=1.6$ and $N_{c_2}=3.6$.}
\label{fig:phasediag}
\end{figure}
Such a study can be expected to reveal fascinating details of the phase diagram of Dirac materials, in particular, whether first-order or continuous phase transitions appear as a result of the competition of order parameters and whether there can be a coexistence of two ordered phases.
In principle, we can distinguish three different possibilities for the multicritical point: we can have either
(I) a second-order tetracritical point, allowing for the coexistence of the two ordered states, or (II) a triple point at which all transitions become first order, or (III) a second-order bicritical point with first-order transitions between the ordered states. A sketch of the phase diagram together with the three possible behaviors near the multicritical point is depicted in Fig.~\ref{fig:phasediag}(a).
For graphene and related materials multicritical behavior has previously been studied in different contexts using the $\epsilon$ expansion to first order, with $\epsilon$ being the distance from the upper critical space-time dimension of four~\cite{roy2011,roy2014,classen2015}.
Concerning the multicritical behavior and competition of CDW and SDW orders, we have recently put forward a corresponding study in Ref.~\cite{classen2015} using an effective Gross-Neveu-Yukawa model.
To first order in $\epsilon$, we have there found that a rather complex picture emerges as a function of the fermion ``flavor'' number $N_f$, i.e. the number of Dirac fermions.
The graphene case, $N_f=2$, appeared to be dominated by a second-order tetracritical point [situation (I) in Fig.~\ref{fig:phasediag}] with the universal behavior being in the same universality class as the SM-SDW transition, i.e., the ``chiral Heisenberg'' universality class~\cite{rosenstein1993}.
The first-order $\epsilon$ expansion is a formidable tool to detect and discover the qualitative aspects of these systems.
It may, however, be subject to considerable quantitative corrections when including higher orders~\cite{herbut1997,fei}.
Furthermore, the convergence properties of the asymptotic series related to this type of expansion are {\it a priori} not clear, in particular when $\epsilon$ becomes of order one.
This paper aims at a considerable improvement in precision of our previous qualitative investigation by employing the functional renormalization group (FRG), which has proven to be a versatile and reliable tool to study both fermionic~\cite{rosa2001, janssen2012, mesterhazy2012, janssen2014, vacca2015} and bosonic systems~\cite{litim2011} at criticality.
In the context of multicritical behavior of bosonic systems with $\mathrm O(N_1) \oplus \mathrm O(N_2)$ symmetry, the significant quantitative improvement of the FRG approach as compared to the first-order $\epsilon$ expansion has been explicitly demonstrated~\cite{eichhorn2013}.
Employing the FRG, we are able to confirm the qualitative picture that we established in Ref.~\cite{classen2015}.
However, large quantitative modifications appear concerning the stability of the various fixed points as a function of $N_f$, with severe implications for the phase diagram in the graphene case, $N_f=2$.
We find that:
\begin{enumerate}[(1)]
\item For a small number of fermion flavors, $N_{f}<1.6$, the decoupled fixed point related to the antiferromagnetic transition (``chiral Heisenberg'' universality class) is stable.
\item For intermediate fermion flavor numbers, $1.6< N_f < 3.6$, including the graphene case $N_f=2$, there is no admissible stable fixed point, suggesting a triple point and corresponding first-order transitions.
\item For a large number of flavors, $N_f > 3.6$, we rediscover the novel stable fixed point with nontrivial interactions between the different sectors, found previously in the $\epsilon$ expansion~\cite{classen2015}.
\end{enumerate}
Our results concerning the ranges of stable fixed points are sketched in Fig.~\ref{fig:phasediag}(b). In the case where stable fixed points exist, we furthermore study the critical behavior in detail by investigating critical exponents and anomalous dimensions.
The rest of the paper is organized as follows:
In the next section we introduce our effective model that couples the fermionic and bosonic degrees of freedom that become dominant in the vicinity of the multicritical point.
We give an overview over the relevant terminology in Sec.~\ref{sec:critical} and explain our method in Sec.~\ref{sec:frg}.
In Sec.~\ref{sec:results} we discuss the resulting fixed-point structure and the concomitant critical behavior as function of space-time dimension and fermion flavor number $N_f$, and compare various limits to literature results.
In the limit of large $N_f$ we are able to present an analytic solution of the flow equations, including the full form of the fixed-point potential.
The implications for the nature of the phase diagram are studied in Sec.~\ref{sec:phasediag} and we draw our conclusions in Sec.~\ref{sec:conclusion}.
\section{Extended Hubbard model and effective relativistic theory}
Let us start with the single-particle Hamiltonian for electrons with spin $s$ on the honeycomb lattice at half-filling,
\begin{align}
H_0&=-t\sum_{\mathbf{R},i,s}\left[u_s^\dagger(\mathbf{R}) v_s(\mathbf{R}+\boldsymbol{\delta}_i)+\text{h.c.}\right]\,,
\end{align}
summing over the sites $\mathbf{R}$ of the triangular sublattice and the three nearest-neighbor vectors $\boldsymbol{\delta}_i$.
$u_s^{(\dagger)}$ and $v_s^{(\dagger)}$ correspond to annihilation (creation) operators on the two different sublattices of the honeycomb lattice.
This leads to two energy bands $\epsilon_{\mathbf{k}}=\pm t\left|\sum_{i=1}^3 \exp(i\, \mathbf{k}\cdot\boldsymbol{\delta}_i)\right|$ with linear and isotropic slope close to the pointlike Fermi surface located at the two inequivalent points $\mathbf{K}$, $\mathbf{K}'$ at the corners of the Brillouin zone.
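The pointlike Fermi surface and the linear, isotropic slope can be checked directly from this formula. The sketch below (Python, not part of the paper) uses one standard choice of nearest-neighbor vectors with unit bond length, for which the Fermi velocity is $v_F = 3t/2$:

```python
import cmath
import math

def dispersion(kx, ky, t=1.0):
    """Band energy eps_k = t * |sum_i exp(i k . delta_i)| of the
    honeycomb tight-binding model, for unit bond length."""
    deltas = [(0.0, 1.0),
              (math.sqrt(3) / 2, -0.5),
              (-math.sqrt(3) / 2, -0.5)]
    f = sum(cmath.exp(1j * (kx * dx + ky * dy)) for dx, dy in deltas)
    return t * abs(f)

# Brillouin-zone corner K: the two bands touch, eps_K = 0, and the
# dispersion around K is linear and isotropic with slope v_F = 3 t / 2.
K = (4 * math.pi / (3 * math.sqrt(3)), 0.0)
```
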
Onsite and nearest-neighbor interactions are implemented by the interaction terms
\begin{align}\label{eq:int}
H_{I}=U\sum_{i}n_{i,\uparrow}n_{i,\downarrow}+V\sum_{\substack{\spitz{i,j},s,s'}}n_{i,s}n_{j,\pr{s}}\,,
\end{align}
with the density operators $n_{i,s}$ on site $i$.
Retaining the Fourier modes near $\mathbf{K}$, $\mathbf{K}'$ only, the low-energy model of the free electrons at temperature $T=0$ can be written as a relativistic Dirac field theory in the continuum~\cite{herbut2006}
\begin{align}
S_F=\int d\tau dx^{D-1}\left[\bar\Psi\left(\mathbbm{1}_2\otimes\gamma_\mu\right)\partial_\mu\Psi\right]\,,
\end{align}
with space-time index $\mu=0,\dots,D-1$ and the $D$-dimensional derivative $\partial_\mu=(\partial_\tau, \nabla)$. The $(4\times4)$ gamma matrices obey the Euclidian Clifford algebra $\{\gamma_\mu,\gamma_\nu\}=2\delta_{\mu\nu}$. In $D=2+1$ dimensions they are explicitly represented by
%
$\gamma_0=\mathbbm{1}_2\otimes\sigma_z$, $\quad \gamma_1=\sigma_z\otimes\sigma_y$, $\quad \gamma_2=\mathbbm{1}_2\otimes\sigma_x$.
In this frame the spin-$\frac{1}{2}$ electrons and holes are described by an 8-component Dirac fermion $\Psi=\left(\Psi_\uparrow,\Psi_\downarrow\right)^T$ and its conjugate $\bar\Psi=\Psi^\dagger(\mathbbm{1}_2\otimes\gamma_0)$.
The Dirac field $\Psi$ is related to the Grassmann fields $u,v$ by
\begin{align}
\Psi_s^\dagger(\mathbf x,\tau)&=\int\frac{d\omega d^{D-1}\mathbf q}{(2\pi)^{D}}e^{i\omega \tau+i \mathbf q\cdot \mathbf x}\big[
u_s^\dagger(\mathbf K+\mathbf q,\omega),\nonumber\\
& v_s^\dagger(\mathbf K+\mathbf q,\omega),u_s^\dagger(-\mathbf K+\mathbf q,\omega),v_s^\dagger(-\mathbf K+\mathbf q,\omega)
\big]\label{eq:spinor}
\end{align}
with $\mathbf{K}'=-\mathbf{K}$. We can define two additional $(4\times4)$ matrices that anticommute with all $\gamma_\mu$: $\gamma_3=\sigma_x\otimes\sigma_y$ and $\gamma_5=\sigma_y\otimes\sigma_y$. Their product $\gamma_{35}=-i\gamma_3\gamma_5$ commutes with all $\gamma_\mu$, while it anticommutes with $\gamma_3$ and $\gamma_5$.
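These algebraic properties follow directly from the Kronecker-product representation; the following is a quick numerical consistency check (not part of the derivation):

```python
import numpy as np

# Explicit 4x4 representation from the text: gamma matrices as Kronecker
# products of Pauli matrices; verify the Clifford algebra and the
# (anti)commutation relations of gamma_3, gamma_5 and gamma_35.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0, g1, g2 = np.kron(s0, sz), np.kron(sz, sy), np.kron(s0, sx)
g3, g5 = np.kron(sx, sy), np.kron(sy, sy)
g35 = -1j * g3 @ g5

anti = lambda a, b: a @ b + b @ a
comm = lambda a, b: a @ b - b @ a

for mu, gm in enumerate([g0, g1, g2]):
    for nu, gn in enumerate([g0, g1, g2]):
        # {gamma_mu, gamma_nu} = 2 delta_{mu nu} 1_4
        assert np.allclose(anti(gm, gn), 2 * (mu == nu) * np.eye(4))
    # gamma_3, gamma_5 anticommute with all gamma_mu; gamma_35 commutes
    assert np.allclose(anti(g3, gm), 0) and np.allclose(anti(g5, gm), 0)
    assert np.allclose(comm(g35, gm), 0)
assert np.allclose(anti(g35, g3), 0) and np.allclose(anti(g35, g5), 0)
print("Clifford algebra and gamma_35 relations verified")
```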
To describe the multicritical point in the phase diagram we introduce bosonic degrees of freedom related to the SDW and CDW fluctuations. These can be written in terms of the order-parameter fields~\cite{semenoff1984, herbut2006}
\begin{align} \label{eq:order-parameter}
\Phi=\big(\chi,{\boldsymbol\phi}\big)=\left(\langle \bar\Psi\Psi\rangle,\langle \bar\Psi(\boldsymbol{\sigma}\otimes\mathbbm{1}_4)\Psi\rangle\right)\,,
\end{align}
which can also be understood as order parameters for the various possible chiral symmetries~\cite{gehring2015}.
Another very interesting set of order parameters, with possibly the \emph{same} quantum critical behavior~\cite{herbut2009}, is given by
\begin{equation}
\tilde\Phi=\big(\tilde\chi,\tilde{\boldsymbol\phi}\big)=\left(\langle \bar\Psi\gamma_{35}\Psi\rangle,\langle \bar\Psi(\boldsymbol{\sigma}\otimes\gamma_{35})\Psi\rangle\right)\,.
\end{equation}
These can be related to the much-discussed quantum anomalous Hall and quantum spin Hall states~\cite{haldane1988}, and may also be relevant in the phase diagram of Dirac materials~\cite{raghu2008}.
Note that the zeroth component $\tilde \chi$ of $\tilde\Phi$ breaks time-reversal symmetry, defined by the time-reversal operator~\cite{roy2009}
\begin{equation}
T = (\sigma_y \otimes i \gamma_1 \gamma_5) K\,,
\end{equation}
where $K$ denotes complex conjugation, while its spatial part $\tilde{\boldsymbol\phi}$ breaks the SU(2) spin-rotational symmetry but respects time-reversal symmetry. In the following, we will focus on a condensation of the chiral order parameters $\chi$ and $\boldsymbol\phi$ only, cf. Eq.~\eqref{eq:order-parameter}, and assume that the fields $\tilde\chi$ and $\tilde{\boldsymbol\phi}$ are sufficiently massive so that their fluctuations become subdominant.
The spin-singlet Ising field $\chi\propto u_s^\dagger u_s - v_s^{\dagger} v_s$ characterizes the staggered density phase, i.e., the CDW state, whereas the Heisenberg triplet ${\boldsymbol\phi}\propto u_s^\dagger \boldsymbol{\sigma}_{ss'} u_{s'} - v_s^{\dagger} \boldsymbol{\sigma}_{ss'} v_{s'}$ corresponds to the antiferromagnetic SDW state.
Near the putative transitions into the CDW and SDW states the fluctuations of the corresponding order parameters play a crucial role.
We incorporate their dynamics in the bosonic action
\begin{align}\label{eqn:actionB}
\hspace{-0.2cm}S_B&\hspace{-0.1cm}=\hspace{-0.1cm}\int\hspace{-0.1cm} d\tau\, d^{D-1}x\Big[
\frac{1}{2}\chi(-\partial_\mu^2+m_\chi^2)\chi+\frac{1}{2}{\boldsymbol\phi}(-\partial_\mu^2+m_\phi^2){\boldsymbol\phi}\nonumber\\
&\quad\quad\quad+\frac{1}{8}\lambda_{2,0}\chi^4+\frac{1}{8}\lambda_{0,2}\big({\boldsymbol\phi}^2\big)^2+\frac{1}{4}\lambda_{1,1}\chi^2{\boldsymbol\phi}^2
\Big],\,
\end{align}
where we also allow for a coupling $\lambda_{1,1}$ between the two order parameters.
Finally, bosonic and fermionic degrees of freedom are coupled in terms of the Yukawa interactions
\begin{align}
S_Y=\int d\tau\, d^{D-1}x\left[
\bar{g}_{\chi}\chi\bar\Psi (\mathds{1}_2\otimes\mathds{1}_4)\Psi+\bar{g}_{\phi}{\boldsymbol\phi}\bar\Psi({\boldsymbol{\sigma}}\otimes\mathds{1}_4)\Psi\,
\right].\nonumber
\end{align}
The complete action $S$ is then given by
\begin{align}\label{eq:mic}
S=S_F+S_B+S_Y\,,
\end{align}
which respects Lorentz, spin-rotational, time-reversal and sublattice-exchange symmetry. The ordered phases are characterized by a finite expectation value of one or both bosonic fields, leading to the spontaneous breaking of the spin-rotational or sublattice-exchange symmetry.
In the following, it will prove useful to consider an arbitrary number $N_f$ of Dirac points in the spectrum, implemented by the replacement
\begin{align}
\bar\Psi(\boldsymbol{\sigma}\otimes\mathbbm{1}_4)\Psi &\mapsto \bar\Psi(\boldsymbol{\sigma}\otimes\mathbbm{1}_{2N_f})\Psi\,, \\
\bar\Psi(\mathbbm{1}_2\otimes\mathbbm{1}_4)\Psi &\mapsto \bar\Psi(\mathbbm{1}_2\otimes\mathbbm{1}_{2N_f})\Psi\,,
\end{align}
where $\Psi$ and $\bar{\Psi}$ now have $2N_f$ components \emph{for each spin projection}. We will refer to $N_f$ as the fermion ``flavor'' number, with $N_f=2$ for graphene. Let us note that the explicit implementation of the flavor number is not important: to derive the results, only the Clifford algebra and the product $d_\gamma N_f$ are needed, where $d_\gamma$ denotes the dimension of the gamma matrices. We will also consider general space-time dimension $2<D<4$, with an eye on the physical case $D=2+1$.
\section{Fixed points and critical behavior}\label{sec:critical}
\subsection{RG $\beta$ functions and fixed points}\label{sec:FPs}
Renormalization group theory describes the scale dependence of a physical system by providing $\beta$ functions for the different couplings of a theory.
The $\beta$ functions are differential equations encoding the evolution of the system with respect to the energy (or momentum) scale $k$.
Starting from a ``microscopic'' model for a system at some ultraviolet (UV) cutoff scale $k = \Lambda$, one can then infer the low-energy, or infrared (IR), characteristics in terms of the solution of the $\beta$ functions.
In our case, the UV scale $\Lambda$ corresponds to the scale at which our effective model, Eq.~\eqref{eq:mic}, is valid, and as such is much smaller than the bandwidth (at which an accurate lattice description would have to be employed).
More explicitly, we introduce the generalized set of dimensionless couplings for the theory by $\alpha_i$, $i \in \{1,2,\ldots \}$.
The $\beta$ functions can be written in the form $\partial_t \alpha_i=\beta_i(\alpha_1,\alpha_2,\ldots)$, where the change in scale is written in terms of the renormalization group time $t = \ln (k/\Lambda) \leq 0$.
A fixed point~$\alpha^*$ of these equations is given by
\begin{align}
\beta_i(\alpha_1^\ast,\alpha_2^\ast,\ldots)=0 \quad \forall\ i,
\end{align}
and can be associated with a possible continuous phase transition.
The critical properties and scaling behavior near such a transition are encoded in the RG flow in the vicinity of the fixed point~$\alpha^*$,
\begin{align}
\partial_t \alpha_i=B_{i,j} (\alpha_j-\alpha_j^*)+ \mathcal O\left((\alpha_j-\alpha_j^*)^2\right)\,,
\end{align}
where $B_{i,j}=(\partial\beta_i/\partial \alpha_j)|_{\alpha=\alpha^*}$.
The eigenvalues $\theta_i$ of $(-B_{i,j})$ (``critical exponents'') are universal quantities that characterize the scaling laws at the putative continuous phase transition.
All positive critical exponents $\theta_i$ correspond to RG-relevant directions, i.e., the fixed point repels the flow in that direction.
In turn, negative $\theta_i$ are RG irrelevant and correspond to attractive directions.
Fixed points with no more than two relevant directions (corresponding to no more than two positive critical exponents, $\theta_1$ and $\theta_2$) can be accessed by tuning two microscopic parameters, e.g., onsite interaction $U$ and nearest-neighbor interaction $V$ for the microscopic theory, Eq.~\eqref{eq:int}, or the two masses $m_\chi^2$ and $m_\phi^2$ in our effective model, Eq.~\eqref{eq:mic}. In this work, we will call such fixed points ``stable''.
The third-largest critical exponent $\theta_3$ therefore determines the stability of a fixed point.
In addition, unitarity requires real Yukawa couplings $g_{\chi,\ast}, g_{\phi,\ast} \in \mathbbm{R}$, and the action has to be bounded from below, i.e., $\lambda_{2,0}^*, \lambda_{0,2}^*\geq 0$ and $\lambda^*_{1,1}>-\sqrt{\lambda^*_{2,0}\lambda_{0,2}^*}$.
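To make the linearized fixed-point analysis concrete, the following sketch applies it to the one-loop $\beta$ functions of a coupled $\mathrm O(n_1)\oplus\mathrm O(n_2)$ bosonic toy model in $D=4-\epsilon$ (couplings rescaled into a convenient convention). This illustrates only the machinery of solving $\beta_i=0$ and diagonalizing $-B_{i,j}$; it omits the Yukawa sector of the actual model:

```python
import numpy as np

# One-loop beta functions of a coupled O(n1) + O(n2) bosonic toy model in
# D = 4 - eps (rescaled convention); purely illustrative of the
# fixed-point / stability-matrix analysis described in the text.
n1, n2, eps = 1, 3, 1.0

def beta(g):
    g1, g2, g3 = g
    return np.array([
        -eps * g1 + (n1 + 8) * g1**2 + n2 * g3**2,
        -eps * g2 + (n2 + 8) * g2**2 + n1 * g3**2,
        g3 * (-eps + (n1 + 2) * g1 + (n2 + 2) * g2 + 4 * g3),
    ])

def jacobian(g, h=1e-7):
    """B_ij = d beta_i / d g_j by central differences (exact for quadratics)."""
    J = np.zeros((3, 3))
    for j in range(3):
        dg = np.zeros(3); dg[j] = h
        J[:, j] = (beta(g + dg) - beta(g - dg)) / (2 * h)
    return J

def find_fixed_point(g0, steps=100):
    g = np.array(g0, dtype=float)
    for _ in range(steps):            # Newton iteration on beta(g*) = 0
        g = g - np.linalg.solve(jacobian(g), beta(g))
    return g

g_dec = find_fixed_point([0.1, 0.1, 0.0])   # decoupled fixed point
theta = np.sort(np.linalg.eigvals(-jacobian(g_dec)).real)[::-1]
print(g_dec)   # -> approx [0.1111, 0.0909, 0]
print(theta)   # positive entries mark RG-relevant directions
```

For this toy choice ($n_1=1$, $n_2=3$), the decoupled fixed point has one positive eigenvalue in the coupling subspace, i.e., one extra relevant direction beyond the tuned masses, mirroring the stability counting discussed above.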
\subsection{Classification of fixed points}\label{subsec:FPs}
As pointed out above, we are interested in the stable fixed point of the system, governing the quantum multicritical behavior of its phase diagram.
The model incorporates the separate SM-to-SDW and SM-to-CDW transitions, described by the chiral Heisenberg and chiral Ising universality classes, respectively, as well as a purely bosonic model with $\mathrm O(1)\oplus \mathrm O(3)$ symmetry.
Just as in such bosonic models, we can deduce the existence of some of the appearing fixed points and critical properties from symmetry considerations and from the limiting cases of the separate models~\cite{janssen2014,eichhorn2013}:
\begin{enumerate}[(1)]
\item The bosonic system, when the fermions completely decouple, i.e., $g_{\chi,\ast}^2=0$ and $g_{\phi,\ast}^2=0$, which puts the fermionic sector at its Gaussian fixed point. For the remaining bosonic sector there are three possible fixed points of the $\mathrm O(1)\oplus \mathrm O(3)$ model: the decoupled, the isotropic and the biconical one.
\item The chiral Ising sector with the Ising field $\chi$ at its nontrivial fermionic fixed point $g_{\chi,\ast}^2\neq 0$, and with the fermions decoupled from the Heisenberg field ${\boldsymbol\phi}$, $g_{\phi,\ast}^2 = 0$, which is then either at its bosonic Gaussian or Wilson-Fisher (Heisenberg) fixed point. The latter will be called ``chiral Ising plus Heisenberg'' (cI+H) fixed point in the following.
\item The chiral Heisenberg sector with the Heisenberg field ${\boldsymbol\phi}$ at its nontrivial fermionic fixed point $g_{\phi,\ast}^2\neq 0$, and with the fermions decoupled from the Ising field $\chi$, $g_{\chi,\ast}^2 = 0$, which is then either at its bosonic Gaussian or Wilson-Fisher (Ising) fixed point. The latter will be called ``chiral Heisenberg plus Ising'' (cH+I) fixed point.
\end{enumerate}
Regarding the stability of these fixed points, we can infer the following:
Every fixed point of the separate sectors will have one relevant direction related to its mass parameter.
The chiral Heisenberg and the chiral Ising fixed point do not show further relevant directions in the individual, uncoupled systems~\cite{janssen2014}.
In contrast, the Wilson-Fisher fixed point of the uncoupled sector $i$ ($i\in \{\chi,{\boldsymbol\phi}\}$), specified by $g_{i,\ast}^{2}=0$, features one additional relevant direction corresponding to the Yukawa coupling.
But upon coupling this sector to the second bosonic field, this direction may or may not become irrelevant.
Thus, the cI+H and the cH+I are the most promising candidates for stable fixed points.
Since the Yukawa couplings are relevant below four space-time dimensions, the purely bosonic fixed points are unlikely to be stabilized by the coupling of the two sectors.
This general expectation was indeed confirmed in the first-order $\epsilon$-expansion study of the coupled model, see Ref.~\cite{classen2015}, in which the cH+I fixed point appeared stable in the case of graphene ($N_f=2$).
On the other hand, for large $N_f$ a novel fixed point with both Yukawa interactions $g_{\chi,\ast}^2\neq 0$ and $g_{\phi,\ast}^2\neq 0$ became stable.
A third option, which was found in Ref.~\cite{classen2015} for intermediate $N_f$, is that there is no stable fixed point at all.
In this case the flow does not exhibit scale-invariant behavior and the phase diagram close to the intersection of the various phases (SM, SDW, and CDW) is governed by a triple point with first-order transitions.
\section{Functional Renormalization}\label{sec:frg}
\subsection{Method}
We employ the functional renormalization group (FRG) to derive nonperturbative flow equations for the couplings of the quantum multicritical system~\cite{frg}.
The FRG provides a systematic approach to implementing Wilson's idea of successively integrating out degrees of freedom in the functional-integral representation.
It yields an exact functional differential equation describing the evolution of the generating functional for the one-particle irreducible correlation functions, i.e., the effective action $\Gamma$, with an infrared momentum scale $k$, reading~\cite{wetterich1993}
\begin{align}\label{eqn:Wetterich}
\partial_t\Gamma_k = \frac{1}{2}\text{STr}\eck{(\Gamma_k^{(2)}+R_k)^{-1}\partial_t R_k}.
\end{align}
The scale-$k$-dependent or \emph{flowing} action $\Gamma_k$ interpolates between the microscopic action $\Gamma_{k\rightarrow\Lambda}=S$ at UV cutoff $\Lambda$ and the full quantum effective action $\Gamma_{k\rightarrow0}$ in the IR.
To ensure the interpolation, we have introduced the regulator function $R_k$.
It induces the iterative integration procedure and ensures that only modes with high momentum $\abs{q}\gtrsim k$ give a contribution to the integral in $\Gamma_k$, thereby avoiding infrared singularities.
Therefore it has to satisfy the requirements $R_k(q)\rightarrow \infty$ for $k\rightarrow \Lambda \rightarrow \infty$ and $R_k(q)\rightarrow 0$ for $k/|q|\rightarrow 0$.
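For concreteness, the bosonic linear (Litim-type) regulator used later in this work has the standard shape $R_k(q)=(k^2-q^2)\,\theta(k^2-q^2)$ (quoted from the FRG literature); a minimal sketch checking the two limiting requirements:

```python
import numpy as np

def R_linear(q2, k2):
    """Bosonic linear (Litim) regulator: R_k(q) = (k^2 - q^2) theta(k^2 - q^2)."""
    return np.where(q2 < k2, k2 - q2, 0.0)

k2 = 1.0
print(R_linear(0.25, k2))   # low modes |q| < k acquire a mass-like gap k^2 - q^2
print(R_linear(4.0, k2))    # high modes |q| > k are untouched: R_k -> 0
print(R_linear(1.0, 1e8))   # R_k diverges with k -> Lambda -> infinity
```

Low-momentum modes are thus gapped out of the loop integration at scale $k$, while high-momentum modes, already integrated out, propagate unmodified.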
Explicitly, the regulator modifies the microscopic action which appears in the functional integral representation of the partition function, $Z=\int_\Lambda \mathcal{D}\varphi\, e^{-S[\varphi]}$, by replacing
\begin{align}
S\rightarrow S &+ \int\frac{d^Dqd^Dp}{(2\pi)^{2D}}\left[\frac{1}{2}\chi(-q)R_{\chi,k}^{(B)}(q,p)\chi(p)\right. \nonumber\\
&\hspace{-0.2cm}\left.+\frac{1}{2}{\boldsymbol\phi}(-q)R_{\phi,k}^{(B)}(q,p){\boldsymbol\phi}(p) + \bar\Psi(q)R_k^{(F)}(q,p)\Psi(p)\right].\nonumber
\end{align}
The flowing action is then defined as the Legendre transform of the regularized Schwinger functional $W_k[J]=\ln Z_k$, see, for instance, Ref.~\cite{frg} for details.
Further, in Eq.~(\ref{eqn:Wetterich}) (the so-called ``Wetterich'' equation) we have abbreviated $\partial_t=k\partial_k$, using the renormalization group time $t = \ln(k/\Lambda)$. The Hessian $\Gamma_k^{(2)}$ is given by
\begin{align}
\rund{\Gamma_k^{(2)}}_{i,j}(p,q)=\frac{\overrightarrow{\delta}}{\delta\Phi(-p)^T}\Gamma_k\frac{\overleftarrow{\delta}}{\delta\Phi(q)}\,,
\end{align}
with $\Phi=(\chi,{\boldsymbol\phi},\Psi,\bar\Psi^T)^T$ representing a collective field variable for all fermionic and bosonic degrees of freedom of our model~\eqref{eq:mic}.
In the Wetterich equation, the regulators for this model are combined to
\begin{align}
R_k=\begin{pmatrix}R_{\chi,k}^{(B)} & 0 & 0 & 0\\ 0 & R_{\phi,k}^{(B)} & 0 & 0 \\ 0 & 0 & 0 & R_k^{(F)} \\ 0 & 0 & -R_k^{(F)T} & 0 \end{pmatrix}\,,
\end{align}
and STr sums over all degrees of freedom, including a minus sign in the fermionic sector as well as a loop integration over momenta.
The FRG method provides a unified framework to access universal as well as nonuniversal properties of physical systems.
It may be employed to describe the critical behavior in the vicinity of continuous classical or quantum phase transitions and is also applicable to systems away from criticality.
By means of suitable expansion schemes it has also been used to study first-order phase transitions~\cite{frg}.
The FRG can be applied in arbitrary (fractional) dimension, and even low-order truncations already appear to give reasonably accurate results in both purely bosonic as well as coupled boson-fermion systems~\cite{litim2011, rosa2001, janssen2014}.
\subsection{Truncation}
While the Wetterich equation itself is an exact identity, it can usually not be solved exactly. In this work, we use a scheme inspired by the derivative expansion, which we truncate after the leading order. Explicitly, we employ the following ansatz for $\Gamma_k$ [so-called ``improved local potential approximation'' (LPA')],
\begin{align} \label{eq:truncation}
\Gamma_k=&\int d^Dx~\Big(
Z_{\Psi,k}\bar\Psi(\mathds{1}_2\otimes \gamma_\mu)\partial_\mu\Psi\nonumber\\
&-\frac{1}{2}Z_{\chi,k}\chi\partial_\mu^2\chi-\frac{1}{2}Z_{\phi,k}{\boldsymbol\phi}\partial_\mu^2{{\boldsymbol\phi}}+ U_k(\bar\rho_\chi,\bar\rho_\phi) \nonumber\\
&+ \bar{g}_{\chi, k}\chi\bar\Psi (\mathds{1}_2\otimes\mathds{1}_4)\Psi+\bar{g}_{\phi, k}{\boldsymbol\phi}\bar\Psi ({\boldsymbol{\sigma}}\otimes\mathds{1}_4)\Psi \Big)\,,
\end{align}
which is a direct generalization of the quantitatively successful truncation used for the separate chiral Heisenberg and chiral Ising universality classes~\cite{rosa2001, janssen2014}.
In the first line of Eq.~\eqref{eq:truncation}, we have introduced the kinetic part of the fermion fields, followed by the kinetic parts of the order-parameter fields in the second line. We assume scale-dependent, but field-independent wave-function renormalizations $Z_{\Psi,k}$, $Z_{\chi,k}$ and $Z_{\phi,k}$.
In the third line, the Yukawa couplings also become scale-dependent quantities.
The scale-dependent effective bosonic potential $U_k$, also appearing in the second line, only depends on the invariants $\rho_\varphi=\frac{1}{2}\varphi^2$, $\varphi \in \{\chi,{\boldsymbol\phi}\}$, as imposed by the symmetry of the original action, Eq.~\eqref{eq:mic}.
For most purposes, we will in the following expand the effective potential about its scale-dependent minimum $(\bar\rho_{\chi,\text{min}},\bar\rho_{\phi,\text{min}})$, each component being either zero or positive, the positive case describing a regime of spontaneously broken symmetry.
In case a nonvanishing $\bar\rho_{i,\text{min}}$ survives the integration towards the IR, $k\rightarrow 0$, it corresponds to a finite vacuum expectation value for the order-parameter fields $\chi$ and/or ${\boldsymbol\phi}$, i.e., an ordered phase.
\subsection{Flow equations}
\label{sec:frgflow}
\subsubsection{Effective potential}
In order to determine the scaling behavior of the effective action, we will consider dimensionless quantities in the following. Therefore, we define the dimensionless version of the effective potential
\begin{align}
u(\rho_\chi,\rho_\phi)=k^{-D}U\left(\frac{k^{D-2}}{Z_{\chi,k}}\rho_\chi,\frac{k^{D-2}}{Z_{\phi,k}}\rho_\phi\right)\,,
\end{align}
and the corresponding Yukawa couplings
\begin{align}
g_{\chi/\phi}^2=\frac{k^{D-4}}{Z_{\chi/\phi,k}Z_{\Psi,k}^2}\bar g_{\chi/\phi,k}^2\,.
\end{align}
We also define the anomalous dimensions
\begin{align}\label{eqn:anom}
\eta_{\chi/\phi}=-\frac{\partial_t Z_{\chi/\phi,k}}{Z_{\chi/\phi,k}} \quad\text{and}\quad \eta_{\Psi}=-\frac{\partial_t Z_{\Psi,k}}{Z_{\Psi,k}}.
\end{align}
To obtain the flow equation for the dimensionless scale-dependent effective potential $u$, Eq.~(\ref{eqn:Wetterich}) is evaluated for constant bosonic fields $\chi$, ${\boldsymbol\phi}$ and vanishing fermion fields $\Psi$. The resulting flow equation can be compactly written as
\begin{align}\label{eq:potflow}
\partial_t u=&(D-2+\eta_\chi)\rho_\chi u^{(1,0)}+(D-2+\eta_\phi)\rho_\phi u^{(0,1)}-D u\nonumber\\
&+I_{R}(\omega_\chi,\omega_\phi,\omega_{\phi\chi}^2)+2I_{G}(u^{(0,1)})\nonumber\\
&-2N_f\Big[I_\psi(\omega_\psi^+)+I_\psi(\omega_\psi^-)\Big]\,,
\end{align}
where we have defined the following quantities
\begin{eqnarray}
u^{(i,j)}&=&\frac{\partial^{i+j} }{\partial \rho_\chi^i \partial \rho_\phi^j}u(\rho_\chi,\rho_\phi)\,,\\
\omega_\chi&=&u^{(1,0)}+2\rho_\chi u^{(2,0)}\,,\\
\omega_\phi&=&u^{(0,1)}+2\rho_\phi u^{(0,2)}\,,\\
\omega_{\phi\chi}^2&=&4\rho_\phi \rho_\chi \big(u^{(1,1)}\big)^2\,,\\
\omega_\psi^{\pm}&=&2\rho_\chi g_\chi^2+2\rho_\phi g_\phi^2\pm 4\sqrt{\rho_\chi \rho_\phi}g_\chi g_\phi\,.
\end{eqnarray}
The {\it threshold} functions $I_R, I_G$, and $I_\Psi$ involve the loop integrations and the regulator dependence.
For a suitable choice of the regulator functions for the bosons and fermions, these integrations can be performed analytically and the result can be given in a closed form, see Appendix~\ref{app:thresh}.
The effective dimensionless potential $u$ is expanded about its scale-dependent dimensionless minimum at $(\kappa_\chi,\kappa_\phi)=(Z_\chi k^{2-D}\,\bar\rho_{\chi,\text{min}}, Z_\phi k^{2-D}\, \bar\rho_{\phi,\text{min}})$. Its IR limit corresponds to the vacuum expectation values of $\chi$ and ${\boldsymbol\phi}$, $\lim_{k\rightarrow0}\kappa_{\varphi}=\langle\frac{1}{2}\varphi^2\rangle$, $\varphi\in\{\chi,\phi\}$.
We may distinguish four qualitatively different combinations for the location of the minimum of $u$:
\begin{enumerate}[(i)]
\item Both sectors remain in the symmetric regime (SYM-SYM) with $(\kappa_\chi,\kappa_\phi)=(0,0)$, or
\item either of the symmetries is spontaneously broken $\kappa_\chi\neq0, \kappa_\phi=0$~(SSB-SYM) or vice versa (SYM-SSB), or
\item both order parameters attain a nonzero vacuum expectation value $\kappa_{\chi,\phi}\neq0$ (SSB-SSB).
\end{enumerate}
The following parameterization of the effective potential in terms of a two-dimensional Taylor expansion accounts for all of these scenarios
\begin{align}
u(\rho_\chi,\rho_\phi)= \sum_{1\leq m+n\leq N}\frac{\lambda_{n,m}}{n!\,m!}(\rho_\chi-\kappa_\chi)^n(\rho_\phi-\kappa_\phi)^m\,,
\end{align}
with $\lambda_{1,0}=m_\chi^2$ if $\kappa_\chi=0$ and $\lambda_{1,0}=0$ if $\kappa_\chi\neq0$.
Analogous definitions are used for $\lambda_{0,1}, \kappa_\phi$ and $m_\phi^2$.
The $\beta$ functions for the expansion parameters in the different regimes are then obtained by the projections:
\begin{enumerate}[(i)]
\item SYM-SYM regime:
\begin{align}
\partial_t\lambda_{n,m}=&(\partial_tu)^{(n,m)}|_{\begin{subarray}{l}\rho_\chi=0\\\rho_\phi=0\end{subarray}}\,.
\end{align}
\item SSB-SYM regime:
\begin{align}
\label{eqn:SSBSYM1}
\partial_tm_\phi^2&=\left.(\partial_tu)^{(0,1)}+\lambda_{1,1}\partial_t\kappa_\chi\right|_{\begin{subarray}{l}\rho_\chi=\kappa_\chi\\\rho_\phi=0\end{subarray}}\,,\\
\partial_t\kappa_\chi&=-\left.\frac{(\partial_tu)^{(1,0)}}{\lambda_{2,0}}\right|_{\begin{subarray}{l}\rho_\chi=\kappa_\chi\\\rho_\phi=0\end{subarray}}\,,\\
\partial_t\lambda_{n,m}&=\left.(\partial_tu)^{(n,m)}+\lambda_{n+1,m}\partial_t\kappa_\chi\right|_{\begin{subarray}{l}\rho_\chi=\kappa_\chi\\\rho_\phi=0\end{subarray}}\,.\label{eqn:SSBSYM2}
\end{align}
The projections in the SYM-SSB regime are obtained accordingly by exchanging $\chi$ and ${\boldsymbol\phi}$ in Eqs.~(\ref{eqn:SSBSYM1})--(\ref{eqn:SSBSYM2}).
\item SSB-SSB regime:
\begin{align}
\partial_t\kappa_\chi&=\left.\frac{\lambda_{1,1}(\partial_tu)^{(0,1)}-\lambda_{0,2}(\partial_tu)^{(1,0)}}{\lambda_{2,0}\lambda_{0,2}-\lambda_{1,1}^2}\right|_{\begin{subarray}{l} \rho_\chi=\kappa_\chi\\ \rho_\phi=\kappa_\phi\end{subarray}}\,,\\
\partial_t\kappa_\phi&=\left.\frac{\lambda_{1,1}(\partial_tu)^{(1,0)}-\lambda_{2,0}(\partial_tu)^{(0,1)}}{\lambda_{2,0}\lambda_{0,2}-\lambda_{1,1}^2}\right|_{\begin{subarray}{l}\rho_\chi=\kappa_\chi\\\rho_\phi=\kappa_\phi\end{subarray}}\,,\\
\partial_t\lambda_{n,m}&=\Big[(\partial_tu)^{(n,m)} + \lambda_{n+1,m}\partial_t\kappa_\chi \nonumber\\
&\quad\quad+ \left.\lambda_{n,m+1}\partial_t\kappa_\phi\Big]\right|_{\begin{subarray}{l}\rho_\chi=\kappa_\chi\\\rho_\phi=\kappa_\phi\end{subarray}}\,.
\end{align}
\end{enumerate}
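The logic behind these projection rules, namely that the running of the expansion point compensates the flow of the first derivative so that it keeps tracking the minimum, can be illustrated on a toy potential with a prescribed flow (all functions below are invented for the check; cf. the single-field SSB rule $\partial_t\kappa=-(\partial_t u)^{(1,0)}/\lambda_{2,0}$):

```python
import numpy as np

# Toy check of the SSB projection rule d_t kappa = -(d_t u)^{(1,0)} / lambda_{2,0}
# on a hand-built "flowing" potential whose minimum kappa(t) = 0.3 + 0.1 t is
# known exactly (single field for simplicity; not the actual FRG flow).
def u(rho, t):
    kappa, lam = 0.3 + 0.1 * t, 2.0 + t        # prescribed flow of minimum/coupling
    return 0.5 * lam * (rho - kappa) ** 2 + 0.2 * (rho - kappa) ** 3

h = 1e-5
t, kappa = 0.0, 0.3
# (d_t u)^{(1,0)}: mixed derivative of u w.r.t. t and rho, evaluated at rho = kappa
dtu_10 = (u(kappa + h, t + h) - u(kappa - h, t + h)
          - u(kappa + h, t - h) + u(kappa - h, t - h)) / (4 * h * h)
# lambda_{2,0} = u^{(2,0)} at the minimum
lam20 = (u(kappa + h, t) - 2 * u(kappa, t) + u(kappa - h, t)) / h**2
print(-dtu_10 / lam20)   # projection rule recovers d_t kappa = 0.1
```

The finite-difference estimate reproduces the prescribed $\partial_t\kappa=0.1$, confirming that the projection is just the chain rule applied at the moving minimum.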
For our numerical results we expand the effective potential up to order $\chi^8,{\boldsymbol\phi}^8$ (LPA' 8) and check the convergence of critical quantities with respect to the inclusion of higher orders in the fields up to order $\chi^{12},{\boldsymbol\phi}^{12}$ (LPA' 12).
An interesting option to overcome the limitations of local expansions in field space makes use of pseudo-spectral methods. This has recently been put forward within the FRG approach to access fixed points and critical exponents, see Ref.~\cite{borchardt2015}.
We leave the implementation of these methods in the present context for future work.
\subsubsection{Yukawa couplings}
For the projection of the flow of the Yukawa couplings we split the two-point function into its fluctuation dependent and independent parts
$
\Gamma_{k,0}^{(2)}=\left.\Gamma_k^{(2)}\right|_{\chi={\boldsymbol\phi}=\Psi=0},
$ and
$
\Delta\Gamma_k^{(2)}=\Gamma_k^{(2)}-\Gamma_{k,0}^{(2)}\,.
$
Then we expand the Wetterich equation in the following way
\begin{align}\label{eqn:logWetterich}
\partial_t\Gamma_k =& \frac{1}{2}\tilde\partial_t\text{STr}\eck{\ln(\Gamma_k^{(2)}+R_k)} \\
=&\frac{1}{2}\tilde\partial_t\text{STr}\eck{\ln(\Gamma_{k,0}^{(2)}+R_k)} \nonumber\\
&+ \frac{1}{2}\tilde\partial_t\text{STr}\sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}\eck{(\Gamma_{k,0}^{(2)}+R_k)^{-1}\Delta\Gamma_k^{(2)}}^n\nonumber\,.
\end{align}
Here, the scale derivative $\tilde\partial_t$ acts only on the $t$-dependence of the regulator.
The fields are divided into their vacuum expectation value and a fluctuating part, $\chi=\chi_0+\Delta\chi$, ${\phi}_3=\phi_{3,0}+\Delta\phi_3$, and $\phi_{1,2}=\Delta\phi_{1,2}$. This allows us to devise suitable projection rules to extract the flow of Yukawa couplings
\begin{widetext}
\begin{align}
\partial_t g_\chi&=\frac{1}{6 N_f d_\gamma}\text{Tr}\eck{\frac{\overrightarrow\delta}{\delta \Delta\chi(p')}\frac{\overrightarrow\delta}{\delta\bar\Psi(p)}\tilde\partial_t\text{STr}\eck{\rund{\frac{\Delta\Gamma_k^{(2)}}{\Gamma_{k,0}^{(2)}+R_k}}^3}\frac{\overleftarrow\delta}{\delta\Psi(q)}}_{\begin{subarray}{l}p'=p=q=0\\\bar\Psi=\Psi=\Delta\chi=\Delta\phi=0\end{subarray}}\,,\\
\partial_t g_\phi&=\frac{1}{6 N_f d_\gamma}\text{Tr}\eck{(\sigma_1\otimes\mathds{1}_4)\frac{\overrightarrow\delta}{\delta \phi_1(p')}\frac{\overrightarrow\delta}{\delta\bar\Psi(p)}\tilde\partial_t\text{STr}\eck{\rund{\frac{\Delta\Gamma_k^{(2)}}{\Gamma_{k,0}^{(2)}+R_k}}^3}\frac{\overleftarrow\delta}{\delta\Psi(q)}}_{\begin{subarray}{l}p'=p=q=0\\\bar\Psi=\Psi=\Delta\chi=\Delta\phi=0\end{subarray}}\,.
\end{align}
\end{widetext}
The resulting $\beta$ functions can again be calculated analytically and presented in closed form.
For these Yukawa couplings, the expressions, however, are rather lengthy and are therefore deferred to Appendix~\ref{app:betafcts}.
\subsubsection{Anomalous dimensions}
We finally need a projection prescription for $\partial_t Z_i$, $i\in\{\chi,{\boldsymbol\phi},\psi\}$.
To this end, the expansion of the Wetterich equation, Eq.~\eqref{eqn:logWetterich}, is evaluated for momentum-dependent fields to calculate the anomalous dimensions according to Eq.~(\ref{eqn:anom}).
Here, we choose to project onto the Goldstone modes.
Note that a projection onto the radial mode would introduce admixtures of additional terms.
Details on this choice are presented in Appendix~\ref{app:projection}.
For the anomalous dimension of the Heisenberg field we can arbitrarily select one of the two Goldstone modes in order to determine $\partial_t Z_\phi$.
For the Ising field, however, there is no Goldstone mode.
Here, we define $\eta_\chi$ as the limiting case $N \to 1$ of $N$ copies of the Ising field, see Appendix~\ref{app:projection} for details.
In summary, we use the following prescriptions:
\begin{widetext}
\begin{align}
\partial_t Z_\psi&=\left.\frac{i}{4DN_fd_\gamma}\text{Tr}\eck{(\mathds{1}_2\otimes\gamma_\mu)\frac{\partial}{\partial p_\mu}\int\frac{d^Dq}{(2\pi)^D}\frac{\overrightarrow\delta}{\delta\bar\Psi(p)}\tilde\partial_t\text{STr}\eck{\rund{\frac{\Delta\Gamma_k^{(2)}}{\Gamma_{k,0}^{(2)}+R_k}}^2}\frac{\overleftarrow\delta}{\delta\Psi(q)}}\right|_{\begin{subarray}{l}p=q=0 \\ \Delta\chi=\Delta\phi=\bar\Psi=\Psi=0\end{subarray}}\,,
\end{align}
and
\begin{align}
\partial_t Z_\chi&=\lim_{N\rightarrow1}\left[\left.-\frac{1}{4}\frac{\partial}{\partial p^2}\int\frac{d^Dq}{(2\pi)^D}\frac{\overrightarrow\delta}{\delta\Delta\chi_{G,1}(-p)}\tilde\partial_t\text{STr}\eck{\rund{\frac{\Delta\Gamma_k^{(2)}}{\Gamma_{k,0}^{(2)}+R_k}}^2}\frac{\overleftarrow\delta}{\delta\Delta\chi_{G,1}(q)}\right|_{\begin{subarray}{l}p=q=0 \\ \Delta\chi=\Delta\phi=\bar\Psi=\Psi=0\end{subarray}} \right] \label{eqn:etachi}\,,\\
\partial_t Z_\phi&=\left.-\frac{1}{4}\frac{\partial}{\partial p^2}\int\frac{d^Dq}{(2\pi)^D}\frac{\overrightarrow\delta}{\delta\Delta\phi_1(-p)}\tilde\partial_t\text{STr}\eck{\rund{\frac{\Delta\Gamma_k^{(2)}}{\Gamma_{k,0}^{(2)}+R_k}}^2}\frac{\overleftarrow\delta}{\delta\Delta\phi_1(q)}\right|_{\begin{subarray}{l}p=q=0 \\ \Delta\chi=\Delta\phi=\bar\Psi=\Psi=0\end{subarray}}\,,
\end{align}
\end{widetext}
with $\chi\rightarrow(\chi_R+\Delta\chi_{R,1},\Delta\chi_{G,1},\ldots,\Delta\chi_{G,N-1})$.
The full analytical expressions for the anomalous dimensions in terms of threshold functions are given in Appendix~\ref{app:anom}.
Further, in Appendix~\ref{app:threshold}, the corresponding threshold functions are listed for a general regulator, and explicit expressions for the linear and sharp regulators are given.
In this study, we use the linear regulator to calculate critical exponents and the sharp regulator to determine the perturbative limit of the above flow equations.
We use this as a crosscheck in the following way:
The upper critical space-time dimension of the theory is four.
Consequently, in $D=4-\epsilon$ dimensions, perturbation theory becomes reliable.
The one-loop flow equations obtained in a standard Wilsonian approach can be reproduced from the FRG approach as a limiting case.
To this end, we consider the symmetric regime and neglect all perturbatively irrelevant operators in the ansatz for the effective action.
Then, expanding the flow equations in $\epsilon=4-D$ yields exactly the one-loop results of Ref.~\cite{classen2015}.
\section{Results}\label{sec:results}
Using the FRG flow equations, we can now search for fixed points and study their evolution as a function of space-time dimension $D$ and number of fermion flavors $N_f$.
We start by benchmarking against the results for the separate chiral Ising and chiral Heisenberg universality classes from Refs.~\cite{janssen2014,classen2015}.%
\footnote{We have also checked the existence and properties of the purely bosonic fixed points, which can be compared with Ref.~\cite{eichhorn2013}---however, since these fixed points turn out to have more than three relevant directions when fermions are present (as expected), they will not play a role in the remainder of this study.}
\subsection{Chiral Ising and chiral Heisenberg universality class for $N_f=2$}
\begin{table}[b]
\caption{\label{tab:I+H} Anomalous dimensions and largest critical exponents from this work (first line) in comparison with different methods and models for the chiral Ising universality class, $N_f=2$ and $D=3$. The boldface numbers show the values of the decisive third critical exponent, exhibiting that this multicritical fixed point is unstable.}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill} } c c c c c c c c}
\hline\hline
model & method& $\theta_1$ & $\theta_2$&$\theta_3$ & $\eta_\chi$ & $\eta_\phi$ & $\eta_\psi$\\
\hline
cI+H & FRG &1.359 & 0.983 & {\bf 0.719} & 0.760 & 0.041 & 0.032 \\
&$\epsilon^1$ exp~\cite{classen2015} &1.545 & 1.048 & {\bf 0.571} & 0.571 & 0 & 0.071\\
\hline
cI & FRG~\cite{janssen2014} & & 0.982 & & 0.760 & & 0.032 \\
& FRG~\cite{vacca2015} & & 0.996 & & 0.789 & & 0.031 \\
& $\epsilon^2$ exp~\cite{rosenstein1993} & & 1.055 & & 0.695 & & 0.065 \\
& MC~\cite{karkkainen1994} & & 1.00 & & 0.754 & & \\
\hline
O(3)& $\epsilon^5$ exp~\cite{guida1998} &1.419 & & & & 0.037 & \\
& MC~\cite{campostrini2002} &1.406 & & & & 0.038 & \\
\hline\hline
\end{tabular*}
\end{table}
\begin{table}[t]
\caption{\label{tab:H+I} Anomalous dimensions and largest critical exponents from this work (first line) in comparison with different methods and models for the chiral Heisenberg universality class, $N_f=2$ and $D=3$. The boldface numbers show the values of the decisive third critical exponent. Note the dramatic change in $\theta_3$ for the cH+I when going from the $\epsilon$ expansion to the FRG results, rendering the cH+I fixed point unstable.}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill} } c c c c c c c c}
\hline\hline
model & method & $\theta_1$ & $\theta_2$&$\theta_3$ & $\eta_\chi$ & $\eta_\phi$ & $\eta_\psi$\\
\hline
cH+I & FRG &1.564 & 0.773 & {\bf 0.241} & 0.044 & 1.015 & 0.084 \\
& $\epsilon^1$ exp \cite{classen2015} &1.667 & 0.473 & {\bf -0.8} & 0 & 0.8 & 0.3 \\
\hline
cH & FRG \cite{janssen2014} & & 0.772 & & & 1.015 & 0.084 \\
& $\epsilon^2$ exp \cite{rosenstein1993} & & 0.834 & & & 0.959 & 0.242 \\
& MC \cite{toldin2015} & & 1.19 & & & 0.70 & \\
\hline
O(1)& $\epsilon^5$ exp \cite{guida1998} &1.590 & & & 0.036 & & \\
& MC \cite{hasenbusch2011} &1.587 & & & 0.036 & & \\
\hline\hline
\end{tabular*}
\end{table}
In Tables \ref{tab:I+H} and \ref{tab:H+I}, we give our best estimates for the critical exponents of the chiral Ising plus Heisenberg (cI+H) and the chiral Heisenberg plus Ising (cH+I) fixed points, respectively, for $N_f=2$ and $D=3$.
As an important result, we find that both of these fixed points exhibit three relevant directions and are therefore unstable.
This finding is different from the leading-order $\epsilon$ expansion, in which the cH+I fixed point appeared stable, featuring only two relevant directions.
Since these fixed points are composites of a chiral and a purely bosonic model, we can compare several quantities with previous (FRG) calculations and other methods.
The exponent $\theta_1$ is given by the inverse correlation-length exponent $1/\nu$ of an O(1) or an O(3) model for the cH+I and the cI+H fixed point, respectively.
The second critical exponent $\theta_2$ is inherited from the chiral Heisenberg (chiral Ising) model.
Additionally, the values of the anomalous dimensions are inherited from the separate models, cf.~Table~\ref{tab:I+H}.
In the case of the cI+H fixed point, the anomalous dimensions $\eta_\chi$ and $\eta_\psi$ come from the chiral Ising model, and $\eta_\phi$ from the bosonic O(3) (Heisenberg) model.
For the cH+I fixed point, $\eta_\phi$ and $\eta_\psi$ can be inferred from the chiral Heisenberg system, while $\eta_\chi$ is adopted from the bosonic O(1) (Ising) model.
As can be read off from Table~\ref{tab:I+H} (Table~\ref{tab:H+I}), we find good agreement between our estimates and the well-known results for the bosonic Heisenberg (bosonic Ising) universality class, i.e., for the exponents $\theta_1$ and $\eta_\phi$ ($\eta_\chi$).
For the chiral Ising (chiral Heisenberg) universality class, the exponents [$\theta_2$ and $\eta_\chi$ ($\eta_\phi$)] are not precisely known. In any case, we find very good agreement with previous FRG computations~\cite{janssen2014, vacca2015}, and reasonably good agreement with the second-order $\epsilon$-expansion results~\cite{rosenstein1993}. When comparing our estimates with the Monte Carlo results, we find good agreement in the case of the chiral Ising universality class~\cite{karkkainen1994}, whereas in the case of the chiral Heisenberg universality class we observe significant discrepancies with the most recent Monte Carlo estimates~\cite{sorella2012, assaad2013, toldin2015}; see Ref.~\cite{janssen2014} for a discussion.
Let us note that the exponent $\theta_3$ for the decoupled fixed point in the purely bosonic $\mathrm O(N_1)\oplus \mathrm O(N_2)$ model can be deduced from $\theta_1$ and $\theta_2$ by an exact scaling relation~\cite{calabrese2003}.
An equivalent relation is at present unknown for the fermionic model studied here, because there is no {\it a priori} knowledge about the scaling dimensions of the fermionic operators.
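For orientation, the purely bosonic relation can be sketched explicitly. In the notation used here, where $\theta_1=1/\nu_1$ and $\theta_2=1/\nu_2$ are the inverse correlation-length exponents of the two decoupled sectors, it takes the form (our paraphrase of the relation of Ref.~\cite{calabrese2003}):

```latex
% Scaling relation at the decoupled fixed point of the bosonic
% O(N_1) + O(N_2) model (paraphrased from Ref. [calabrese2003]):
% the RG eigenvalue of the mixed quartic coupling is fixed by the
% correlation-length exponents of the two decoupled sectors,
\begin{equation}
  \theta_3 \;=\; \theta_1 + \theta_2 - D \,.
\end{equation}
```

The values listed in the tables do not obey this bosonic relation, consistent with the statement that in the fermionic model $\theta_3$ is instead governed by the Yukawa-coupling direction.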
\subsection{From $\bf{4-\epsilon}$ to 3 space-time dimensions}
\begin{figure}[t!]
\centering
\includegraphics[width=.9\columnwidth]{figure03a_cHI_dim3.pdf}\\
\includegraphics[width=.9\columnwidth]{figure03b_cHI_dim2.pdf}
\caption{Three largest critical exponents $\theta_1$, $\theta_2$ (top), and $\theta_3$ (bottom) at the cH+I fixed point from FRG and $\epsilon$ expansion~\cite{classen2015} as a function of the space-time dimension, for $N_f=2$. The cH+I fixed point is stable close to four dimensions. In the FRG approach, however, $\theta_3$ bends to positive values below $D=3.17$, rendering the cH+I fixed point unstable in $D=3$.}
\label{fig:dim}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=.9\columnwidth]{figure04a_D_d33.pdf}\\
\includegraphics[width=.9\columnwidth]{figure04b_D_d22.pdf}
\caption{Three largest critical exponents $\theta_1$, $\theta_2$ (top), and $\theta_3$ (bottom) at the large-$N_f$ fixed point from FRG and $\epsilon$ expansion~\cite{classen2015} as a function of the space-time dimension, for $N_f=20$. Here, the large-$N_f$ fixed point is stable in both approaches for all $3\leq D <4$.}
\label{fig:dim2}
\end{figure}
Using the FRG equations, we can directly evaluate the fixed points in arbitrary space-time dimensions $2<D<4$.
This allows us to systematically compare to the fixed-point solutions of the $\epsilon$ expansion and track deviations when approaching $D=3$, i.e., for large values of $\epsilon$.
As explained in Sec.~\ref{subsec:FPs}, the study of the quantum multicritical point in first-order $\epsilon$ expansion has revealed two fixed points, which became stable at different ranges of the fermion flavor number $N_f$.
For the graphene case, $N_f=2$, the $\epsilon$ expansion renders the cH+I fixed point stable, whereas a novel interacting fixed point that couples both chiral sectors of the theory became stable at large $N_f$. We will refer to this fixed point as ``large-$N_f$ fixed point'' in the following.
We can identify both fixed points within the FRG approach close to four space-time dimensions and then investigate their stability, as determined by the third-largest critical exponent $\theta_3$, as a function of the dimension $D$.
The evolution of all three largest critical exponents upon varying the dimension is depicted in Fig.~\ref{fig:dim} for the cH+I fixed point and in Fig.~\ref{fig:dim2} for the large-$N_f$ fixed point.
For the first two exponents the leading-order $\epsilon$-expansion results agree fairly well with the nonperturbative values of the FRG.
On the other hand, the decisive third exponent shows large deviations in three space-time dimensions.
Regarding the cH+I fixed point, this effect can be traced back to the propagators in the loop contributions, which, to first order in $\epsilon$, are accounted for only in the flow equations of the masses.
However, for this fixed point $\theta_3$ is predominantly determined by $\beta_{g_\chi^2}$.
Dimensional analysis shows that $g_\chi^2$ scales like $(4-D)$, which is corrected by the loop contributions and the anomalous dimensions.
These are much smaller in the FRG compared to the first order in $\epsilon$ due to the threshold effects of the propagators.
Thus, while to first order in $\epsilon$ the loop contributions reduce $\theta_3$ below zero, the reduction is not as large in the nonperturbative setting.
In contrast, main contributions to $\theta_1$ and $\theta_2$ come from the mass flow equations so that the $\epsilon$ expansion captures their behavior already quite well to first order.
For larger values of $N_f$, loop contributions become less important and the critical exponents are mainly fixed by the canonical scaling.
Here, the first order $\epsilon$ expansion underestimates the anomalous dimensions, so that the exponents tend faster to the final values of $\pm1$ within the FRG approach.
The effect is larger for $\theta_3$ because the anomalous dimension enters it twice, both directly as well as through the derivative with respect to $g_\chi^2$.
Such significant quantitative improvement of the FRG approach as compared to the first-order $\epsilon$ expansion is well-known from the corresponding multicritical bosonic systems with $\mathrm O(N_1)\oplus \mathrm O(N_2)$ symmetry~\cite{eichhorn2013}, for which the true regions of stability of the different fixed points are by now well established~\cite{herbutbook}.
Eventually, the difference in $\theta_3$ leads to an important change of the stability analysis in three space-time dimensions for $N_f=2$, because the cH+I fixed point loses its stability at about $D = 3.17$.
At this point it collides with a new fixed point that couples both sectors.
However, the new fixed point is unphysical for $D < 3.17$ as it has a negative square of the Yukawa coupling $g_\chi^2<0$.
Therefore, in $D=3$ no stable and physically admissible fixed point is found for the graphene case, i.e., all allowed fixed points have more than two relevant directions for $N_f=2$.
\subsection{Dependence on $N_f$ in $D=3$}
In this section, we study the fixed-point structure as a function of the fermion flavor number $N_f$ in three space-time dimensions.
We find several regimes exhibiting the qualitative behavior known from first-order $\epsilon$ expansion:
(1) For small $N_f$ the cH+I fixed point is stable, followed by a regime of (2) intermediate $N_f$ where no stable and physically admissible fixed point exists.
(3) For large $N_f$, a novel FP with a coupling between the different sectors is stable.
On the other hand, the values of $N_f$ marking the borders change considerably when going from the first order $\epsilon$ expansion to the FRG approach.
We again investigate the sign of $\theta_3$ to analyze the stability and determine the critical flavor numbers for the different regimes.
The result is depicted in Fig.~\ref{fig:theta3Nf}, where for comparison we also show the $\epsilon$-expansion results from Ref.~\cite{classen2015}.
One can explicitly see that the qualitative behavior is the same within both approaches, but the different regimes are shifted and shrunk.
Within the FRG, the first regime at small flavor numbers $N_f<1.6$ is characterized by the cH+I fixed point.
At $N_f = 1.6$ it collides with another fixed point and loses stability.
As pointed out in the previous section, the other fixed point is unphysical due to a negative $g_\chi^2$ for $N_f > 1.6$.
Hence, for $N_f \in (1.6,3.6)$ no stable and physically admissible fixed point is found.
Finally, for $N_f>3.6$ the large-$N_f$ fixed point known from the $\epsilon$ expansion becomes stable.
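The three FRG regimes can be summarized compactly. The following sketch merely encodes the boundary values $N_f \approx 1.6$ and $N_f \approx 3.6$ quoted above (the function name is ours, chosen for illustration):

```python
def stable_fixed_point(Nf):
    """Stable fixed point in D=3 as a function of the flavor number N_f,
    encoding the FRG regime boundaries quoted in the text."""
    if Nf < 1.6:
        return "cH+I"      # regime (1): cH+I fixed point stable
    if Nf <= 3.6:
        return None        # regime (2): no stable, physically admissible fixed point
    return "large-Nf"      # regime (3): large-N_f fixed point stable

print(stable_fixed_point(1))    # cH+I
print(stable_fixed_point(2))    # None -> graphene case
print(stable_fixed_point(20))   # large-Nf
```

Note that the graphene case $N_f=2$ falls into the intermediate regime without a stable fixed point.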
For numerical comparison, we give the three largest critical exponents and the anomalous dimensions for two examples of $N_f$, for which a stable fixed point is found, in Table~\ref{tab:stable}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{figure05b_theta3eps_Nf.pdf}\\
\includegraphics[width=0.9\columnwidth]{figure05a_theta3frg_Nf.pdf}
\caption{Third-largest critical exponent $\theta_3$ as a function of the fermion flavor number $N_f$ for the cH+I fixed point (dashed/black) and the large-$N_f$ fixed point (solid/red), from $\epsilon$ expansion~\cite{classen2015} (top) in comparison to FRG (bottom). We also show $1-\eta_\chi$ for the cH+I fixed point in the FRG (bottom); note the offset from 1 for large $N_f$.}
\label{fig:theta3Nf}
\end{figure}
\begin{table}[t!]
\caption{\label{tab:stable} LPA8' results of the stable fixed point for $N_f=1$ and $N_f=20$ in $D=3$.}
\renewcommand{\arraystretch}{1.4}
\setlength{\tabcolsep}{4pt}
\begin{tabular}{c c c c c c c c}
\hline\hline
$N_f$ & stable FP & $\theta_1$ & $\theta_2$&$\theta_3$ & $\eta_\phi$ & $\eta_\chi$ & $\eta_\psi$\\
\hline
1 & cH+I & 1.564 & 0.558 & -0.703 & 1.003 & 0.044 & 0.207 \\
20 & large-$N_f$ & 1.085 & 0.969 & -0.883 & 0.980 & 0.913 & 0.010 \\
\hline\hline
\end{tabular}
\end{table}
\subsubsection{Large $N_f$ behavior}
To complete the stability analysis for the cH+I and the large-$N_f$ fixed points, we finally study the limit $N_f \to \infty$, for which we can calculate the exponents analytically.
To this end, we rescale the potential and the bosonic wave function renormalizations by suitable factors of $N_f$, $U\rightarrow U/N_f, \ Z_{\chi/\phi}\rightarrow Z_{\chi/\phi}/N_f$, ensuring that $g_{\chi}$ and $g_\phi$ remain finite in the large-$N_f$ limit.
After this rescaling, the boson loops become of order $\mathcal{O}(1/N_f)$ and the flow equations to leading order read
\begin{align}\label{eqn:potential_largeNf}
\partial_t u&=-D u + (D-2 +\eta_\chi)\rho_\chi u^{(1,0)}\nonumber\\
&+ (D-2 +\eta_\phi)\rho_\phi u^{(0,1)}\nonumber\\
&- 2\eck{I_\psi(\omega_\psi^+)+I_\psi(\omega_\psi^-)} + \mathcal{O}(1/N_f)\,,
\end{align}
and
\begin{align}
\partial_t g_{\chi/\phi}^2&=(D-4+\eta_{\chi/\phi})g_{\chi/\phi}^2+ \mathcal{O}(1/N_f)\label{eqn:YukawalargeNf}\,,\\
\eta_{\chi/\phi}&=\frac{32v_D}{D} m_4^{(F)}(0)g_{\chi/\phi}^2+\mathcal{O}(1/N_f)\,, \label{eq:anomalous-dimension-large-Nf}\\
\eta_\psi&=\mathcal{O}(1/N_f)\,,
\end{align}
with the threshold functions as given in Appendix~\ref{app:threshold} and $v_D=(\Gamma(D/2)2^{D+1}\pi^{D/2})^{-1}$.
The problem becomes symmetric with respect to $\chi$ and ${\boldsymbol\phi}$ and can be solved exactly. In the sector of the Yukawa couplings, the fixed-point solutions are
\begin{align}
g_{\chi/\phi}^{*^2}&=0 & \text{or} &&
g_{\chi/\phi}^{*^2}&=\frac{(4-D)D}{32v_Dm_4^{(F)}(0)}\,.
\end{align}
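Inserting the nontrivial solution back into Eq.~\eqref{eq:anomalous-dimension-large-Nf} provides a quick consistency check:

```latex
% Consistency check: the nontrivial Yukawa fixed point yields
% eta = 4-D, so that beta_{g^2} = (D-4+eta) g^2 vanishes identically.
\begin{equation}
  \eta_{\chi/\phi}
  = \frac{32 v_D}{D}\, m_4^{(F)}(0)\, g_{\chi/\phi}^{*2}
  = \frac{32 v_D}{D}\, m_4^{(F)}(0)\,\frac{(4-D)D}{32 v_D\, m_4^{(F)}(0)}
  = 4-D \,.
\end{equation}
```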
Further, the partial differential equation for the effective potential is solved by
\begin{align}
u(\rho_\chi,\rho_\phi)&=\frac{8v_D}{D^2}\left[\frac{_2F_1\left(\frac{D}{2},\frac{D}{2};\frac{D+2}{2};\frac{\omega_-}{1+\omega_-}\right)}{\left(1+\omega_-\right)^{D/2}}-2\right.\nonumber\\
&\left. +\frac{_2F_1\left(\frac{D}{2},\frac{D}{2};\frac{D+2}{2};\frac{\omega_+}{1+\omega_+}\right)}{\left(1+\omega_+\right)^{D/2}}\right]+\rho _{\chi }^{D/2} c\Big(\frac{\rho _{\phi }}{\rho _{\chi }}\Big)\,,\nonumber\\
\text{with}\quad \omega_\pm&=\frac{4v_D}{D}\frac{(4-3 D)}{\left(D^2-6 D+8\right)}\left(\sqrt{\rho _{\phi }}\pm\sqrt{\rho _{\chi }}\right)^{-2}\,,
\end{align}
and $_2F_1(a,b;c;z)$ denotes the Gaussian hypergeometric function (see, e.g., \cite{gradshteyn}). For any smooth function $c(\rho_\phi/\rho_\chi)$ that depends only on the ratio of the invariants $\rho_\phi/\rho_\chi$ the corresponding $u(\rho_\chi,\rho_\phi)$ solves the fixed-point equation~(\ref{eqn:potential_largeNf}). We can restrict $c$ by the physical requirement that the effective potential should be bounded from below and finite for $\rho_\chi\rightarrow0$ and $\rho_\phi\rightarrow0$. Alternatively, it can be determined so that $u$ equals the large-$N_f$ limit of the Taylor-expanded effective potential.
Regarding the stability analysis, we find that in this limit the entries $\partial \beta_{g_\chi^2}/\partial g_\chi^2$ and $\partial \beta_{g_\phi^2}/\partial g_\phi^2$ in the stability matrix fix $\theta_3=\theta_4$.
As can be seen in Eq.~(\ref{eqn:YukawalargeNf}), $\theta_3$ is then determined solely by the dimensional scaling of the Yukawa coupling, including its anomalous dimension,
\begin{align}\label{eqn:theta3_largeNf}
\theta_3 = -\frac{\partial \beta_{g_\chi^2}}{\partial g_\chi^2}=-\Big(D-4+\eta_\chi +g_\chi^2 \frac{\partial \eta_\chi}{\partial g_\chi^2} + \mathcal{O}\big(\frac{1}{N_f}\big)\Big).
\end{align}
For the stable large-$N_f$ fixed point both Yukawa couplings are nonzero and uniquely determined by the requirements $\eta_\chi=\eta_\phi=4-D$. We furthermore have $g_\chi^2 \partial \eta_\chi/\partial g_\chi^2=\eta_\chi$ and $g_\phi^2 \partial \eta_\phi/\partial g_\phi^2=\eta_\phi$ [Eq.~\eqref{eq:anomalous-dimension-large-Nf}].
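With these relations, the evaluation of $\theta_3$ becomes a one-line exercise:

```latex
% Inserting eta_chi = 4-D and g^2 (d eta_chi / d g^2) = eta_chi = 4-D
% into the expression for theta_3 above:
\begin{equation}
  \theta_3 = -\big(D-4 + (4-D) + (4-D)\big) = D-4 \,.
\end{equation}
```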
This requires that the third (and fourth) largest critical exponent $\theta_3$ must tend to minus one in $D=3$,
\begin{align}
\text{large-$N_f$ fixed point:}\qquad \lim_{N_f\to \infty}\theta_3 = -1\,.
\end{align}
To investigate the cH+I fixed point in the limit of $N_f \to \infty$, we scale only the Heisenberg sector with the factor $1/N_f$, since the Ising sector completely decouples and becomes purely bosonic. Again the third critical exponent is determined by $\beta_{g_{\chi}^2}$. But now only loops including $g_\phi^2$ are suppressed by $1/N_f$, such that $\theta_3$ is given by
\begin{align}
\theta_3=-\frac{\partial \beta_{g_\chi^2}}{\partial g_\chi^2}=&-\Bigg(D-4+\eta_\chi +g_\chi^2 \frac{\partial \eta_\chi}{\partial g_\chi^2}\nonumber \\
&\quad +\frac{\partial}{\partial g_\chi^2}\mathcal{L}(g_\chi^4) + \mathcal{O}\left(\frac{1}{N_f}\right)\Bigg),
\end{align}
where $\mathcal{L}(g_\chi^4)$ denotes loops that are at least proportional to $g_\chi^4$. With $g_\chi^2=0$ this reduces to
\begin{align}
\text{cH+I:} \qquad \lim_{N_f \to \infty}\theta_3 = 1-\eta_\chi
\end{align}
in $D=3$ space-time dimensions.
We therefore see that the third critical exponent for the cH+I fixed point computed within the FRG does {\it not} coincide with the one from $\epsilon$ expansion.
This is due to the contribution from the anomalous dimension in the Ising sector, which becomes nonvanishing only to second order in $\epsilon$.
In our approximation we have $\eta_\chi=0.044$, such that $\theta_3 \nrightarrow 1$ for large $N_f$.
The large-$N_f$ behavior of both fixed points is exhibited in Fig.~\ref{fig:theta3Nf}.
\subsection{Phase diagram}\label{sec:phasediag}
In the case that a stable, physical fixed point is found, the structure of the phase diagram near the multicritical point can be extracted from a simple criterion~\cite{liu1973,nelson1974,kivelson2001}.
We define
\begin{align}
\Delta=\lambda_{2,0}\lambda_{0,2}-\lambda_{1,1}^{2},
\end{align}
with $\lambda_{2,0}, \lambda_{0,2}$ and $\lambda_{1,1}$ being the corresponding expansion coefficients of the infrared effective potential.%
\footnote{Note that we use conventions different from those of Ref.~\cite{classen2015} regarding the factors in the action Eq.~(\ref{eqn:actionB}), leading to a different factor in the definition of $\Delta$.}
For $\Delta>0$ the effective potential in the ordered phase is minimized when {\it both} sectors simultaneously develop a nonzero vacuum expectation value, corresponding to a phase of coexistence between the separate phases.
The mixed phase cannot exist if $\Delta<0$, and the phase diagram exhibits bicritical behavior in this case.
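This criterion is straightforward to evaluate numerically. As a minimal sketch (with hypothetical, illustrative coupling values, not the fixed-point values of this work):

```python
def multicritical_type(lam20, lam02, lam11):
    """Classify the phase diagram near the multicritical point via
    Delta = lam20*lam02 - lam11**2:
    Delta > 0 -> tetracritical (coexistence phase),
    Delta < 0 -> bicritical (first-order line between the ordered phases)."""
    delta = lam20 * lam02 - lam11 ** 2
    if delta > 0:
        return "tetracritical"
    if delta < 0:
        return "bicritical"
    return "marginal"

# Hypothetical quartic couplings, for illustration only:
print(multicritical_type(1.0, 1.0, 0.5))   # small mixed coupling -> tetracritical
print(multicritical_type(1.0, 1.0, 1.5))   # large mixed coupling -> bicritical
```

The sign of the mixed coupling $\lambda_{1,1}$ relative to the diagonal quartics thus decides between coexistence and mutual exclusion of the two orders.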
The criterion has been used in a variety of studies that investigate the competition between different order parameters~\cite{calabrese2003,eichhorn2013,eichhorn2014,classen2015}.
If we start the RG flow near the stable fixed point, it remains in its vicinity for a long RG ``time''.
In this way, the direct neighborhood of the multicritical point in the phase diagram should depend only on the properties of the effective potential {\it at the fixed point}.
To determine the behavior near the multicritical point, it then suffices to compute the value for $\Delta$ using the fixed-point values for the quartic couplings.
Eventually, of course, the system will flow away from the critical surface and the argument breaks down far from the multicritical point.
Fig.~\ref{fig:Delta} shows the value for $\Delta$ at the cH+I fixed point for $N_f<1.6$ and the large-$N_f$ fixed point for $N_f>3.6$.
$\Delta$ is positive at the cH+I, so that the phase diagram exhibits tetracritical behavior and a coexistence phase [situation I in Fig.~\ref{fig:phasediag}(a)] for small $N_f$.
On the other hand, $\Delta$ is negative at the large-$N_f$ fixed point, leading to bicritical behavior with a first-order transition between the ordered states for large $N_f$. The transition across the multicritical point, however, remains continuous [situation III in Fig.~\ref{fig:phasediag}(a)].
This is in qualitative agreement with the previous result~\cite{classen2015}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{Delta_Nf_frg.pdf}
\caption{$\Delta=\lambda_{2,0}\lambda_{0,2}-\lambda_{1,1}^2$ at the stable fixed point for different fermion flavor numbers. Each $\lambda$ is scaled with $v_{D=3}$. For small $N_f<1.6$, $\Delta$ is positive at the stable (cH+I) fixed point, whereas for large $N_f>3.6$, $\Delta$ is negative at the stable (large-$N_f$) fixed point. For intermediate $N_f$ (gray shaded region), there is no stable and physically admissible fixed point.}
\label{fig:Delta}
\end{figure}
When a stable and physically admissible fixed point does {\it not} exist, the phase diagram close to the intersection of SM, SDW, and CDW is governed by a triple point, with {\it all} transitions in its vicinity appearing first order [situation II in Fig.~\ref{fig:phasediag}(a)].
As an important observation within the FRG approach, we in fact find this situation to be realized for the physical case of graphene, $N_f=2$ (gray shaded region in Fig.~\ref{fig:Delta}).
\section{Conclusions}\label{sec:conclusion}
We have studied a multicritical point in the phase diagram of electrons on the honeycomb lattice using an effective field theory as the low-energy theory of an extended Hubbard model with onsite and nearest-neighbor interactions.
Our theory accounts for the universal behavior in the regime where the semimetallic phase, the charge density wave phase, and the spin density wave phase meet.
Within a nonperturbative FRG approach we were able to investigate the dependence on space-time dimension $2<D<4$ and flavor number $N_f$, thereby extending a previous study close to four space-time dimensions~\cite{classen2015}.
We have calculated the fixed-point structure and its stability ranges to describe the competition of the different phases.
This enables us to determine the nature of the transition lines and the possibility of a CDW-SDW coexistence phase as a function of the number of fermion flavors.
Besides, we provide a quantitative description of the critical behavior in the cases when the transitions are continuous.
We have followed the two fixed points that are stable (at different ranges of $N_f$) from the upper critical space-time dimension $D=4$ down to $D=3$. While our results agree near $D=4$ both qualitatively and quantitatively with the $\epsilon$-expansion results~\cite{classen2015}, we have found significant quantitative changes in the decisive third critical exponent and anomalous dimensions in $D=3$.
This leads to modified stability ranges, although the qualitative picture remains the same as in the $\epsilon$ expansion.
The borders of the different stability regimes are determined by the collision of fixed points.
They move in theory space as a function of dimension and fermion flavor number and exchange stability when they meet.
Explicitly, we have found three different regimes for varying fermion flavor number.
For small number of flavors the cH+I fixed point determines the physical properties at the multicritical point.
In other words, the semimetal-to-antiferromagnet transition also determines the universal behavior at the multicritical point. Beyond this point, our results suggest a mixed phase in which both SDW and CDW orders coexist (tetracritical point).
For large $N_f$ a new fixed point of the coupled system emerges and becomes stable.
The transition between both ordered states is now first order, whereas directly at the multicritical point the transition is continuous and defines a novel universality class (bicritical point), in agreement with the large-$N_f$ calculations within the fermionic description~\cite{herbut2006}.
The graphene case is placed in a third regime that occurs for intermediate $N_f$.
Here we do not find any stable fixed point, leading us to the prediction that a triple point appears with first-order transitions only.
While in our description the number of fermion flavors has been introduced merely as a theoretical control parameter, a similar deformation of the honeycomb lattice system should be relevant for novel systems with a large number of Dirac cones~\cite{miert2015}.
We have employed the FRG in terms of a local potential approximation within the derivative expansion including anomalous dimensions.
As a crosscheck, we have compared our results with known limits from, on the one hand, the separate universality classes~\cite{janssen2014}, and, on the other hand, the $\epsilon$-expansion results near the upper critical dimension~\cite{classen2015}.
In both limits we find perfect agreement, as it should be.
We have also compared our predictions for $D=3$ with various literature results.
As shown in \cite{janssen2014}, the FRG critical exponents in the Gross-Neveu-Yukawa model of the separate transitions also become exact close to the lower critical dimension of the corresponding, purely fermionic Gross-Neveu model, and the FRG interpolates continuously in between both exact limits.
We expect this also to hold for the additional critical exponents arising from the coupling of both the chiral Ising and the chiral Heisenberg sectors.
Concerning our polynomial truncation, we have verified the convergence of the critical exponents upon inclusion of higher polynomial orders in the potential expansion, see Appendix~\ref{app:conv}.
However, in order to resolve the discrepancy between FRG and Monte Carlo results~\cite{sorella2012, assaad2013, toldin2015} for the critical exponents of the chiral Heisenberg universality class, it might be necessary to go beyond our approximation.
This could be done, for instance, by accounting for field-dependent Yukawa couplings \cite{vacca2015} and/or field-dependent wave function renormalizations.
In addition it would be interesting to compute the global behavior of the fixed-point potential at finite $N_f$, in particular in light of the first-order phase transitions we predict.
This could be done, for instance, along the lines suggested in Ref.~\cite{borchardt2015}.
\acknowledgments
We acknowledge discussions with I. Boettcher, J. Borchardt, A. Eichhorn, H. Gies, B. Knorr, F. Rennecke, L. von Smekal, and S. Wessel. L.C.\ acknowledges support by the Studienstiftung des deutschen Volkes. M.M.S.\ is supported by Grant No.\ ERC-AdG-290623. I.F.H.\ and L.J.\ were supported by the NSERC of Canada. L.J.\ was also supported by the DFG under Grant Nos. JA\,2306/1-1, JA\,2306/3-1, and SFB1143.
\section*{Acknowledgment}
Authors acknowledge the CY Initiative of Excellence for the support of the project through the ASIA Chair of Excellence Grant (PIA/ANR-16-IDEX-0008).
\subsection{{\ac{TA}} noise power alleviation ratio}\label{sec:TAMDFT_Noise_Power_Derivation}
According to the {\ac{TA}} processing applied in~\eqref{eq: proposed5}, and assuming that the noise terms of consecutive symbols are uncorrelated, the noise power for the $q$-th estimated channel is expressed as follows
\begin{equation}
\small
{R}_{\text{DL-TA}_{q}} = \Ex{ \norm{\frac{\tilde{\ma{v}}_{{q-1}}}{2} + \frac{\tilde{\ma{v}}_{{q}}}{2}}^2}
= \frac{\sigma_{{q-1}}^2}{4} + \frac{\sigma_{{q}}^2}{4}.
\end{equation}
where $\tilde{\ma{v}}_{{q}}$ denotes the {\ac{AWGN}} at the $q$-th received {\ac{OFDM}} symbol. We consider that $\hat{\bar{\ma{h}}}_{\text{DL-TA}_{1}}= \hat{\tilde{\ma{h}}}_{\text{LS}}$; therefore, the noise power of the first estimated channel is $\sigma^2$, and the noise power alleviation ratio for the successive estimated channels can be computed as follows
\begin{equation}
\small
{R}_{\text{DL-TA}_{q}} = \left\{
\begin{array}{ll}
1 ,&\quad q = 1 \\\\
\frac{1}{4} + \frac{1}{4} = \frac{1}{2},&\quad q = 2 \\\\
\frac{{R}_{\text{DL-TA}_{q-1}}}{4} + \frac{1}{4},&\quad 2 < q \leq I+1
\end{array}\right.
\label{eq:Noise_P1}
\end{equation}
The recursion in~\eqref{eq:Noise_P1} can be unrolled as a sequence whose first element is ${R}_{\text{DL-TA}_{1}} = 1$, as follows
\begin{equation}
\small
\begin{split}
{R}_{\text{DL-TA}_{q}} &= \frac{1}{4} {R}_{\text{DL-TA}_{q-1}} + \frac{1}{4}
= \frac{1}{4} \left( {R}_{\text{DL-TA}_{q-1}} + 1 \right) \\
&= \frac{1}{4} \left( \frac{1}{4} {R}_{\text{DL-TA}_{q-2}} + \frac{1}{4} + 1 \right) \\
&= \frac{1}{4} \left( \frac{1}{4} \left( \frac{1}{4} R_{\text{DL-TA}_{q-3}} + \frac{1}{4} \right) + \frac{1}{4} + 1 \right) \\
&= \frac{1}{4} \left( \frac{1}{4^{(q-1)-1}} {{R}}_{\text{DL-TA}_{q-(q-1)}} + \frac{1}{4^{(q-1)-1}} + \dots + \frac{1}{4^{0}} \right) \\
&= \frac{1}{4} \left( \frac{1}{4^{q-2}} {{R}}_{\text{DL-TA}_{1}} + \frac{1}{4^{q-2}} + \frac{1}{4^{q-3}} + \dots + \frac{1}{4^{0}} \right) \\
&= \frac{1}{4} \left( \frac{1}{4^{q-2}} + \frac{1}{4^{q-2}} + \frac{1}{4^{q-3}} + \dots + \frac{1}{4^{0}} \right).
\end{split}
\label{eq:Noise_P2}
\end{equation}
The sequence derived in~\eqref{eq:Noise_P2} can be written as follows:
\begin{equation}
\small
{R}_{\text{DL-TA}_{q}} = \frac{1}{4} \left( \frac{1}{4^{q-2}} + \sum_{j = 2}^{q} \left({\frac{1}{4}}\right)^{q-j} \right) .
\label{eq:Noise_P3}
\end{equation}
Let $j^{\prime} = q - j$; then~\eqref{eq:Noise_P3} can be rewritten in terms of $j^{\prime}$ and evaluated using the geometric series summation rule~\cite{ref_geo}, such that
\begin{equation}
\small
{R}_{\text{DL-TA}_{q}} = \frac{1}{4} \left( \frac{1}{4^{q-2}} + \sum_{j^{\prime} = 0}^{q-2} \left({\frac{1}{4}}\right)^{j^{\prime}} \right)
= \frac{4^{q-1} + 2}{ 3 \times 4^{q-1}}.
\label{eq:Noise_P5}
\end{equation}
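The recursion~\eqref{eq:Noise_P1} and the closed form~\eqref{eq:Noise_P5} can be cross-checked numerically; a minimal sketch (function names are ours):

```python
def noise_ratio_recursive(q):
    """Noise-power ratio R_q of the DL-TA estimator via the recursion
    R_1 = 1, R_q = R_{q-1}/4 + 1/4 for q >= 2."""
    r = 1.0
    for _ in range(2, q + 1):
        r = r / 4 + 1 / 4
    return r

def noise_ratio_closed(q):
    """Closed form R_q = (4**(q-1) + 2) / (3 * 4**(q-1))."""
    return (4 ** (q - 1) + 2) / (3 * 4 ** (q - 1))

# Both expressions agree for all symbol indices q:
for q in range(1, 10):
    assert abs(noise_ratio_recursive(q) - noise_ratio_closed(q)) < 1e-12
print(noise_ratio_closed(2))   # 0.5, i.e. the noise power is halved at q = 2
```

Note that for large $q$ the ratio approaches $1/3$, the asymptotic noise alleviation of the {\ac{TA}} processing.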
\section{Conclusion} \label{conclusions}
In this paper, we have investigated the channel estimation challenge in vehicular environments. This challenge arises from the doubly selective nature of the vehicular channel, especially in high-mobility scenarios. The recently proposed {\ac{DL}}-based IEEE 802.11p estimators have been presented and their limitations have been discussed. In order to overcome these limitations, we have proposed an {\ac{LSTM}}-based estimator that employs an {\ac{LSTM}} unit for channel estimation and {\ac{TA}} processing as a noise alleviation technique. Simulation results have shown the performance superiority of the proposed {\ac{LSTM}}-{\ac{DPA}}-{\ac{TA}} estimator over the recently proposed {\ac{DL}}-based estimators, while achieving a significant reduction in computational complexity.
\section{Introduction} \label{introduction}
Vehicular communication technologies~\cite{ref1} describe a set of communication models that can be employed by vehicles in different application contexts, resulting in a well-organized network infrastructure. The main motivation behind such technologies is to facilitate several future smart-city applications, including road safety and autonomous driving.
In general, wireless communications in vehicular environment encounter a critical reliability challenge due to the doubly selective nature of the vehicular channel that varies rapidly especially in high mobility scenarios. Moreover, a precisely-estimated channel is critical for the equalization, demodulation, and decoding operations performed at the receiver. Therefore, robust and accurate channel estimation plays a crucial role in determining the overall system performance.
The IEEE 802.11p standard is an international standard that defines vehicular communications specifications. However, IEEE 802.11p is based on the IEEE 802.11a standard, which was proposed for indoor environments and allocates an insufficient number of pilots for channel estimation. As a result, conventional IEEE 802.11p estimators are based on {\ac{DPA}} estimation, where the demapped data subcarriers, besides the pilot subcarriers, are employed in the channel estimation. In order to improve the {\ac{DPA}} estimation performance, the {\ac{STA}} estimator~\cite{ref_STA} applies averaging in both the time and frequency domains as post-processing operations on the {\ac{DPA}} estimated channel. The STA estimator performs well in the low-{\ac{SNR}} region; however, it suffers from a significant performance degradation in the high-{\ac{SNR}} region, especially in high-mobility vehicular scenarios. The {\ac{TRFI}} estimator~\cite{ref_TRFI} assumes high correlation between successive symbols, and thus employs frequency-domain interpolation to improve the {\ac{DPA}} performance. The {\ac{TRFI}} estimator outperforms the {\ac{STA}} estimator in the high-{\ac{SNR}} region; however, these conventional estimators suffer from a considerable performance degradation in high-mobility scenarios.
Recently, the rapid advancements in {\ac{DL}} and its successful applications in several domains have sparked significant interest in adopting {\ac{DL}} techniques for wireless communication applications, including channel estimation. {\ac{DL}} techniques are characterized by robustness, low complexity, and good generalization ability, making their integration into communication systems beneficial. Motivated by these advantages, {\ac{DL}} algorithms have been used for IEEE 802.11p channel estimation, where the authors in~\cite{ref_STA_DNN} and~\cite{ref_TRFI_DNN} employ a {\ac{DNN}} as a post-processing unit after the {\ac{STA}} and {\ac{TRFI}} conventional estimators, respectively. The simulation results have shown that {\ac{STA}}-{\ac{DNN}} and {\ac{TRFI}}-{\ac{DNN}} are able to significantly improve the performance; however, they suffer from an error floor in high-mobility scenarios. A different {\ac{DL}}-based approach has been proposed in~\cite{ref_LSTM_DNN_DPA}, where an {\ac{LSTM}} unit followed by a {\ac{DNN}} network are employed as pre-processing modules for channel estimation and noise error compensation. After that, {\ac{DPA}} estimation is applied using the {\ac{DNN}} output. This {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator outperforms the recently proposed {\ac{STA}}-{\ac{DNN}} and {\ac{TRFI}}-{\ac{DNN}} estimators, but it suffers from a considerable computational complexity due to the employment of two consecutive {\ac{DL}} networks.
In order to overcome the high complexity of the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator while improving the {\ac{BER}} and {\ac{NMSE}} performance, in this paper we propose an {\ac{LSTM}}-based estimator, where the channel is estimated using an {\ac{LSTM}} unit only. After that, {\ac{DPA}} estimation is applied using the {\ac{LSTM}} estimated channel. Finally, unlike the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator, where a {\ac{DNN}} network is used for noise elimination, the proposed estimator employs {\ac{TA}} processing as a noise alleviation technique, where the noise alleviation ratio is calculated analytically. We also provide a detailed computational complexity analysis and compare the recently proposed {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator with our proposed {\ac{LSTM}}-{\ac{DPA}}-{\ac{TA}} estimator.
The remainder of this paper is organized as follows: in Section~\ref{soa_estimators}, the system model and the recently proposed {\ac{SoA}} {\ac{DL}}-based IEEE 802.11p estimators are presented. The proposed {\ac{LSTM}}-{\ac{DPA}}-{\ac{TA}} estimator, as well as the analytical derivation of the {\ac{TA}} processing, are described in Section~\ref{proposed_estimator}. In Section~\ref{simulation_results}, simulation results and a computational complexity analysis are presented, where the performance of the proposed scheme is evaluated in terms of \ac{BER} and \ac{NMSE}. Finally, the paper is concluded in Section~\ref{conclusions}.
\section{Proposed LSTM-based Estimation Scheme} \label{proposed_estimator}
In this section, the proposed LSTM-based estimator is discussed. First, a brief description of the {\ac{LSTM}} network is presented. After that, the proposed {\ac{TA}} noise power alleviation is analytically derived.
\subsection{LSTM Overview}
{\ac{LSTM}} networks are designed to deal with sequential data, where the order of the data matters and there exists a correlation between past and future samples. In this context, {\ac{LSTM}} networks are defined with an architecture that can learn the data correlation over time, giving the {\ac{LSTM}} network the ability to predict future data based on previous observations.
An {\ac{LSTM}} unit contains computational blocks known as gates, which are responsible for controlling and tracking the information flow over time. The {\ac{LSTM}} network mechanism can be explained in four major steps as follows.
\paragraph{Forget the irrelevant information} In general, the {\ac{LSTM}} unit classifies the input data into relevant and irrelevant information. The first processing step is to eliminate the irrelevant information that is not useful for future data prediction. This is performed through the forget gate, which decides which information the {\ac{LSTM}} unit should keep and which it should delete. The forget gate processing is defined as
\begin{equation}
\small
{\ma{f}}_{t} = {\sigma} (\ma{W}_{f, t}\bar{\ma{x}}_{t} + \ma{W}^{\prime}_{f,t}\bar{\ma{h}}_{t-1} + \bar{\ma{b}}_{f,t}),
\label{eq: lstm_fg}
\end{equation}
where ${\sigma}$ is the sigmoid function, $\ma{W}_{f,t} \in \mathbb{R}^{P \times K_{in}}$, $\ma{W}^{\prime}_{f,t} \in \mathbb{R}^{P \times P}$, and $\bar{\ma{b}}_{f,t} \in \mathbb{R}^{P \times 1}$ are the forget gate weights and biases at time $t$. $\bar{\ma{x}}_{t} \in \mathbb{R}^{K_{in} \times 1}$ and $\bar{\ma{h}}_{t-1}$ denote the {\ac{LSTM}} unit input vector of size $K_{in}$ and the previous hidden state of size $P$, respectively.
\paragraph{Store the relevant new information} After identifying the relevant information, the {\ac{LSTM}} unit processes it through the input gate and the candidate cell state
\begin{equation}
\small
{\bar{\ma{i}}_{t}} = {\sigma} (\ma{W}_{\bar{\ma{i}}, t}\bar{\ma{x}}_{t} + \ma{W}^{\prime}_{\bar{\ma{i}},t}\bar{\ma{h}}_{t-1} + \bar{\ma{b}}_{\bar{\ma{i}},t}),
\label{eq: lstm_ing}
\end{equation}
\begin{equation}
\small
{\tilde{{\ma{c}}}}_{t} = \text{tanh} (\ma{W}_{{\tilde{{\ma{c}}}}, t}\bar{\ma{x}}_{t} + \ma{W}^{\prime}_{{\tilde{{\ma{c}}}},t}\bar{\ma{h}}_{t-1} + \bar{\ma{b}}_{{\tilde{{\ma{c}}}},t}).
\label{eq: lstm_incg}
\end{equation}
\paragraph{Update the new cell state} Now, the {\ac{LSTM}} unit should update the current cell state ${{{\ma{c}}}}_{t}$ based on the two previous steps such that
\begin{equation}
\small
{{{\ma{c}}}}_{t} = {\ma{f}}_{t} \odot {\ma{c}}_{t-1} + \bar{\ma{i}}_{t} \odot {\tilde{{\ma{c}}}}_{t},
\label{eq: lstm_cell_state}
\end{equation}
where $\odot$ denotes the Hadamard product.
\paragraph{Generate the LSTM unit output} The final processing step is to update the hidden state and generate the output through the output gate. The output is a filtered version of the cell state and is computed as
\begin{equation}
\small
{\ma{o}}_{t} = {\sigma} (\ma{W}_{o, t}\bar{\ma{x}}_{t} + \ma{W}^{\prime}_{o,t}\bar{\ma{h}}_{t-1} + \bar{\ma{b}}_{o,t}),
\label{eq: lstm_og}
\end{equation}
\begin{equation}
\small
{\bar{{\ma{h}}}}_{t} = {\ma{o}}_{t} \odot \text{tanh}({\ma{c}}_{t}).
\label{eq: lstm_hidden_state}
\end{equation}
We note that several {\ac{LSTM}} architecture variants exist in the literature, where the interactions between the {\ac{LSTM}} unit gates are modified. The authors in~\cite{ref_lstm_var} provide a thorough comparison of popular {\ac{LSTM}} architecture variants. However, for the proposed estimator we focus on the classical {\ac{LSTM}} unit architecture.
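The four gate equations and the two update equations above can be sketched as a single forward step in NumPy. This is an illustrative sketch only: the weight layout, dictionary keys, and function names are our own and not the trained model of the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, Wr, b):
    """One forward step of a classical LSTM unit (illustrative layout).

    W[g] : (P, K_in) input weights, Wr[g] : (P, P) recurrent weights,
    b[g] : (P,) biases, for each gate g in {'f', 'i', 'c', 'o'}.
    """
    f = sigmoid(W['f'] @ x + Wr['f'] @ h_prev + b['f'])        # forget gate
    i = sigmoid(W['i'] @ x + Wr['i'] @ h_prev + b['i'])        # input gate
    c_tilde = np.tanh(W['c'] @ x + Wr['c'] @ h_prev + b['c'])  # candidate cell state
    c = f * c_prev + i * c_tilde                               # cell-state update
    o = sigmoid(W['o'] @ x + Wr['o'] @ h_prev + b['o'])        # output gate
    h = o * np.tanh(c)                                         # new hidden state
    return h, c
```

Since the output gate and $\tanh$ are bounded, every hidden-state entry stays in $(-1, 1)$, which keeps the recurrence numerically stable across long input sequences.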
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/lstmbd.eps}
\caption{LSTM unit architecture.}
\label{fig:lstm_archi}
\end{figure}
\subsection{Proposed LSTM-TA Estimator}
Unlike the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator, where the {\ac{LSTM}} and {\ac{DNN}} networks are used for channel estimation and noise compensation, respectively, the proposed estimator employs only an {\ac{LSTM}} unit. Moreover, the {\ac{LSTM}} input dimension is decreased to include only the $K_{on}$ subcarriers; therefore, the overall computational complexity is significantly reduced. The proposed estimator proceeds as follows:
\paragraph{LSTM-based prediction} The first step is to estimate the channel for the current received {\ac{OFDM}} symbol employing the previously estimated channel $\hat{\tilde{\ma{h}}}_{\text{DL-TA}_{i-1,d}}[k]$. The $i$-th {\ac{LSTM}} unit input is denoted by $\tilde{\bar{{\ma{x}}}}_{{i}} \in \mathbb{R}^{2 K_{on} \times 1}$, where
\begin{equation}
\small
\bar{\ma{x}}_{i} = \left\{
\begin{array}{ll}
\hat{\tilde{\ma{h}}}_{\text{LSTM}_{i-1,d}}[k] ,&\quad k \in \Kd \\
\hat{\tilde{\ma{h}}}_{{i-1,p}}[k],&\quad k \in \Kp
\end{array}\right. .
\label{eq: proposed2}
\end{equation}
$\tilde{\bar{{\ma{x}}}}_{{i}}$ is obtained by applying a complex-to-real conversion to $\bar{\ma{x}}_{i}$, stacking its real and imaginary parts in one vector. We note that $\hat{\tilde{\ma{h}}}_{{i-1,p}}[k]$ denotes the {\ac{LS}} estimated channel at the $\Kp$ subcarriers. After that, $\tilde{\bar{{\ma{x}}}}_{{i}}$ is processed by the {\ac{LSTM}} unit, such that
\begin{equation}
\small
\hat{\tilde{\ma{h}}}_{\text{LSTM}_{i,d}} = \Omega_{\text{LSTM}}(\tilde{\bar{\ma{x}}}_{i},\Theta),
\label{eq: proposed1}
\end{equation}
where $\Omega_{\text{LSTM}}$ is the {\ac{LSTM}} unit processing with overall weights denoted by $\Theta$.
\paragraph{DPA estimation} The {\ac{LSTM}} estimated channel undergoes {\ac{DPA}} estimation using the $i$-th received {\ac{OFDM}} symbol as follows
\begin{equation}
\small
\ma{d}_{\text{LSTM}_{i}}[k] = \mathfrak{D} \big( \frac{\ma{y}_i[k]}{\hat{\tilde{\ma{h}}}_{\text{LSTM}_{i-1}}[k]}\big)
,~ \hat{\tilde{\ma{h}}}_{\text{LSTM}_{0}}[k] = \hat{\tilde{\ma{h}}}_{\text{LS}}[k],
\label{eq: proposed3}
\end{equation}
\begin{equation}
\small
\hat{\tilde{\ma{h}}}_{\text{LSTM-DPA}_{i}}[k] = \frac{\ma{y}_i[k]}{\ma{d}_{\text{LSTM}_{i}}[k]}.
\label{eq: proposed4}
\end{equation}
\paragraph{TA processing} Finally, in order to alleviate the impact of the AWGN, {\ac{TA}} processing is applied to the estimated channel $\hat{\tilde{\ma{h}}}_{\text{LSTM-DPA}_{i}}[k]$, such that
\begin{equation}
\small
\hat{\tilde{\ma{h}}}_{\text{DL-TA}_{i,d}} = (1 - \frac{1}{\alpha}) \hat{\tilde{\ma{h}}}_{\text{DL-TA}_{i - 1,d}} + \frac{1}{\alpha} \hat{\tilde{\ma{h}}}_{\text{LSTM-DPA}_{i,d}}.
\label{eq: proposed5}
\end{equation}
In this paper, we use a fixed $\alpha = 2$ for simplicity. Therefore, the {\ac{TA}} applied in~{\eqref{eq: proposed5}} attenuates the AWGN noise power $\sigma^2$ iteratively within the received {\ac{OFDM}} frame according to the ratio
\begin{equation}
\small
\begin{split}
{R}_{\text{DL-TA}_{q}} &= \left( \frac{1}{4} \right)^{(q-1)} + \sum_{j=2}^{q} \left( \frac{1}{4} \right)^{(q-j+1)}=\frac{4^{q-1} + 2}{3 \times 4^{q-1}}.
\label{eq:noise_degradtion}
\end{split}
\end{equation}
${R}_{\text{DL-TA}_{q}}$ denotes the AWGN noise power ratio of the $q$-th estimated channel, where ${1 < q < I + 1}$, and ${{R}_{\text{DL-TA}_{1}} = 1}$ denotes the AWGN noise power ratio at $\hat{\tilde{\ma{h}}}_{\text{LS}}[k]$. The full derivation of~\eqref{eq:noise_degradtion} is provided in Appendix A. It can be seen from the derivation of ${R}_{\text{DL-TA}_{q}}$ that the noise power decreases iteratively over the received {\ac{OFDM}} frame; hence, the {\ac{SNR}} increases, which leads to better overall performance.
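As a sanity check on~\eqref{eq:noise_degradtion}: with $\alpha = 2$ the TA update keeps one quarter of the previous noise power and adds one quarter of a fresh $\sigma^2$ contribution (assuming independent noise terms across symbols), so the ratio obeys the recursion $R_q = R_{q-1}/4 + 1/4$ with $R_1 = 1$. The following short sketch (ours, not part of the paper) confirms that the closed form matches this recursion:

```python
def r_closed(q):
    """Closed-form noise power ratio R_q of Eq. (noise_degradtion), alpha = 2."""
    return (4 ** (q - 1) + 2) / (3 * 4 ** (q - 1))

def r_recursive(q):
    """Same ratio built from the TA update: each step keeps 1/4 of the
    old noise power and adds 1/4 of a fresh unit-variance contribution."""
    r = 1.0  # R_1 = 1 at the LS estimate
    for _ in range(q - 1):
        r = 0.25 * r + 0.25
    return r
```

The ratio drops from $1$ to $1/2$ at the second symbol and converges to $1/3$, i.e., roughly a $4.8$ dB reduction in noise power over the frame.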
\begin{table}
\renewcommand{\arraystretch}{1.4}
\centering
\caption{Proposed LSTM parameters.}
\label{tb:LSTM_params}
\begin{tabular}{l|l}
\hline
(LSTM units; Hidden size) & (1;128) \\ \hline
Activation function & ReLU ($y= \max(0,x)$) \\ \hline
Number of epochs & 500 \\ \hline
Training samples & 16000 \\ \hline
Testing samples & 2000 \\ \hline
Batch size & 128 \\ \hline
Optimizer & ADAM \\ \hline
Loss function & MSE \\ \hline
Learning rate & 0.001 \\ \hline
Training SNR & 40 dB \\ \hline
\end{tabular}
\end{table}
\begin{figure*}[t]
\setlength{\abovecaptionskip}{6pt plus 3pt minus 2pt}
\centering
\includegraphics[width=2\columnwidth]{figures/Figure_HMS_16QAM_LSTM_Up.eps}
\subfloat[\label{High_BER_16QAM} BER performance employing high mobility scenario.]{\hspace{.5\linewidth}}
\subfloat[\label{High_NMSE_16QAM} NMSE performance employing high mobility scenario.]{\hspace{.5\linewidth}} \\
\vspace*{15pt}
\includegraphics[width=2\columnwidth]{figures/Figure_VHMS_16QAM_LSTM_Up.eps}
\subfloat[\label{Very_High_BER_16QAM} BER performance employing very high mobility scenario.]{\hspace{.5\linewidth}}
\subfloat[\label{Very_High_NMSE_16QAM} NMSE performance employing very high mobility scenario.]{\hspace{.5\linewidth}}
\caption{VTV-SDWW vehicular channel model simulation results.}
\label{fig:High}
\end{figure*}
The proposed LSTM training is performed at {\ac{SNR}} = 40 dB to achieve the best performance, as observed in~{\cite{r20}}: when the training is performed at a high {\ac{SNR}} value, the LSTM is able to better learn the channel statistics and, owing to its good generalization ability, it still performs well in low {\ac{SNR}} regions, where the noise is dominant. Moreover, extensive experiments are performed using the grid search algorithm~{\cite{r210}} in order to select the most suitable LSTM hyperparameters in terms of both performance and complexity. Table~{\ref{tb:LSTM_params}} summarizes the proposed LSTM training parameters.
\section{Simulation Results} \label{simulation_results}
\begin {figure*}[t]
\centering
\begin{tikzpicture}
\begin{axis}[
ybar,
ylabel={Real-Valued Operations x $10^{3}$},
symbolic x coords={LSTM-DNN-DPA, LSTM-DPA-TA (P=128), LSTM-DPA-TA (P=64)},
xtick=data,
legend style={at={(0.48,+1.2)},
anchor=north,legend columns=-1},
nodes near coords align={vertical},
width=2\columnwidth,
height=5cm, width=15cm,
grid=major,
cycle list = {red!70,blue!70,red!40,black!10}
]
\addplot+[semithick,
pattern = dots,
pattern color = red] coordinates { (LSTM-DNN-DPA,133.088) (LSTM-DPA-TA (P=128), 120.136) (LSTM-DPA-TA (P=64), 44.168 )};
\addplot+[fill,text=black!10] coordinates { (LSTM-DNN-DPA,11.448) (LSTM-DPA-TA (P=128), 2.560) (LSTM-DPA-TA (P=64),1.728 )};
\legend{Multiplications/Divisions, Summations/Subtractions}
\end{axis}
\end{tikzpicture}
\caption{Computational complexity of the studied channel estimators.}
\label{fig:bar_graph_LSTM}
\end{figure*}
In this section, {\ac{BER}} and {\ac{NMSE}} simulations, followed by a computational complexity analysis, are conducted in order to evaluate the performance of the proposed estimator compared with the IEEE 802.11p {\ac{DL}}-based estimators.
In this paper, we consider the VTV-SDWW {\ac{TDL}} vehicular channel model~\cite{ref_CM} with two different mobility conditions: (\textit{i}) high mobility: V = 100 km/h, with Doppler shift $f_{d}$ = 550 Hz; (\textit{ii}) very high mobility: V = 200 km/h, with $f_{d}$ = 1100 Hz. The VTV-SDWW channel model represents the communication channel between two vehicles moving on a highway with a center wall between its lanes, and it was obtained from a measurement campaign conducted in metropolitan Atlanta. Detailed measurement setups are provided in~\cite{ref_CMM}.
Concerning the simulation parameters, 16QAM modulation is employed over a frame of $I = 50$ {\ac{OFDM}} symbols. Convolutional channel coding with a rate of $1/2$ is used. Moreover, a three-hidden-layer DNN with $15$ neurons per layer is employed in the STA-DNN and TRFI-DNN estimators, with $\alpha = \beta = 2$ as defined in~\cite{ref_STA_DNN}.
\subsection{BER and NMSE Performances}
Fig.~\ref{fig:High} depicts the {\ac{BER}} and {\ac{NMSE}} performance for the high and very high mobility vehicular scenarios, respectively. As we can notice, the {\ac{DNN}}-based estimators suffer from an error floor, especially in the high {\ac{SNR}} region. Moreover, the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator outperforms them over the entire {\ac{SNR}} range. This is explained by the superior ability of the LSTM, compared with the {\ac{DNN}} network, to capture the temporal correlations of the channel.
On the other hand, the proposed {\ac{LSTM}}-{\ac{DPA}}-{\ac{TA}} estimator outperforms the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator in the high mobility scenario by $7$ dB and $5$ dB of {\ac{SNR}} gain at BER = $10^{-3}$, when $P = 128$ and $P = 64$ are employed, respectively. We note that employing a larger {\ac{LSTM}} hidden state size, i.e., $P = 128$, achieves better performance than the optimized {\ac{LSTM}} unit ($P = 64$). In the very high mobility scenario, since the temporal correlation between subsequent channel realizations is significantly reduced, the proposed estimators outperform the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator by around $3$ dB of {\ac{SNR}} gain at BER = $10^{-3}$.
The performance gain of the proposed estimator in both scenarios is mainly due to the {\ac{TA}} processing, which reduces the AWGN significantly. Moreover, the LSTM achieves better channel estimation performance than the DNN while significantly reducing the computational complexity, which can be explained by the superior ability of the LSTM to learn the channel time correlations compared with a simple DNN architecture.
\subsection{Computational Complexity Analysis}
In general, the estimator computational complexity is expressed in terms of the real-valued multiplication/division and summation/subtraction operations required to estimate the channel for one received {\ac{OFDM}} symbol. We note that {\ac{STA}}-{\ac{DNN}} and {\ac{TRFI}}-{\ac{DNN}} achieve lower complexity than the {\ac{LSTM}}-based estimators because {\ac{LSTM}} processing requires more operations than {\ac{DNN}} processing. Thus, the following analysis focuses on comparing the proposed {\ac{LSTM}}-{\ac{DPA}}-{\ac{TA}} estimator with the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator.
In~\cite{ref_STA_DNN}, the authors provide a detailed computational complexity analysis of the {\ac{DNN}}-based estimators, where the computational complexity of the {\ac{DNN}} network denoted by $C_{\text{DNN}}$ can be expressed as follows
\begin{equation}
\small
C_{\text{DNN}} = 2 \sum_{l=1}^{L+1} {N}_{l-1} {N}_l,
\label{eq:DNNcomp}
\end{equation}
where $L$ is the number of hidden layers within the {\ac{DNN}} network with $N_{l}$ neurons each, and $N_{0}$, $N_{L+1}$ denote the input and output {\ac{DNN}} network dimensions respectively.
The computational complexity of the {\ac{LSTM}} unit can be calculated in terms of the real-valued operations performed by its four gates, where each gate applies $P^{2} + P K_{in}$ real-valued multiplications and $3P + K_{in} - 2$ real-valued summations, in addition to the $3P$ real-valued multiplications and $P$ real-valued summations required by~\eqref{eq: lstm_cell_state} and~\eqref{eq: lstm_hidden_state}. Therefore, the overall computational complexity of the {\ac{LSTM}} becomes
\begin{equation}
\small
C_{\text{LSTM}} = 4 (P^{2} + P K_{in} + 3P + K_{in} -2) + 4P.
\label{eq:LSTMcomp}
\end{equation}
The {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator employs one {\ac{LSTM}} unit with $P=128$ and $K_{in} = 112$, followed by one hidden layer {\ac{DNN}} network with $N_{1} = 40$ neurons. Finally, the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator applies the {\ac{DPA}} estimation which requires $18 K_{d}$ real-valued multiplication/division and $8 K_{d}$ real-valued summation/subtraction. Therefore, the overall computational complexity of the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator is $512 K_{\text{in}} + 98 K_{d} + 71040$ real-valued multiplication/division and $4 K_{\text{in}} + 88 K_{d} + 6776$ real-valued summation/subtraction.
The proposed {\ac{LSTM}}-{\ac{DPA}}-{\ac{TA}} estimator employs one {\ac{LSTM}} unit with $P = 128$, as in the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator, or $P = 64$ when the optimized {\ac{LSTM}} unit architecture is employed. Moreover, the proposed estimator uses $K_{in} = 2 K_{on}$ and applies {\ac{TA}} as a noise alleviation technique to the estimated channel $\hat{\bar{\ma{h}}}_{\text{LSTM-DPA}_{i,d}}$, which requires only $2 K_{on}$ real-valued multiplication/division and $2 K_{on}$ real-valued summation/subtraction. As a result, the proposed {\ac{LSTM}}-{\ac{DPA}}-{\ac{TA}} estimator requires $4 P^{2} + P (8 K_{on} + 3) + 18 K_{d} + 2 K_{on}$ real-valued multiplication/division and $13P + 10 K_{on} + 8 K_{d} - 8$ real-valued summation/subtraction.
Based on this analysis, the proposed estimator achieves lower computational complexity than the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator. It records a $9.73\%$ and $77.63\%$ decrease in the required real-valued multiplication/division and summation/subtraction operations, respectively, when the {\ac{LSTM}} unit is employed with hidden size $P = 128$. On the other hand, a larger complexity reduction is achieved when the optimized {\ac{LSTM}} unit with hidden size $P = 64$ is used, where the proposed estimator decreases the required multiplication/division and summation/subtraction operations by $66.81\%$ and $84.90\%$, respectively. It is worth mentioning that replacing the {\ac{DNN}} network by the {\ac{TA}} processing to alleviate the AWGN is the main factor in decreasing the overall computational complexity, so the proposed estimator outperforms the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator while recording a significant computational complexity reduction. Fig.~\ref{fig:bar_graph_LSTM} shows a detailed computational complexity analysis of the benchmarked estimators in terms of real-valued operations.
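The operation counts above follow directly from the stated expressions. A short sketch evaluating them with the IEEE 802.11p parameters ($K_{on} = 52$, $K_{d} = 48$, and $K_{in} = 112$ for the benchmark estimator):

```python
K_on, K_d = 52, 48  # active and data subcarriers in IEEE 802.11p

def ops_lstm_dnn_dpa(K_in=112):
    """Real-valued (mul/div, sum/sub) counts stated for LSTM-DNN-DPA."""
    return 512 * K_in + 98 * K_d + 71040, 4 * K_in + 88 * K_d + 6776

def ops_lstm_dpa_ta(P):
    """Real-valued (mul/div, sum/sub) counts stated for LSTM-DPA-TA."""
    mul = 4 * P ** 2 + P * (8 * K_on + 3) + 18 * K_d + 2 * K_on
    add = 13 * P + 10 * K_on + 8 * K_d - 8
    return mul, add
```

Evaluating these gives 133088/11448 operations for LSTM-DNN-DPA versus 120136/2560 ($P = 128$) and 44168/1728 ($P = 64$) for the proposed estimator, which reproduces the totals and the quoted percentage reductions.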
\section{SoA DNN-Based Channel Estimators} \label{soa_estimators}
\subsection{System Model}
The IEEE 802.11p standard employs the \ac{OFDM} transmission scheme with ${K = 64}$ total subcarriers. ${K_{\text{on}} = 52}$ active subcarriers are used, divided into ${K_{\text{d}} = 48}$ data subcarriers and ${K_{\text{p}} = 4}$ pilot subcarriers. The remaining ${K_{\text{n}} = 12}$ subcarriers are used as a guard band. Moreover, the IEEE 802.11p frame structure consists mainly of three parts. The first part contains the preamble, which is used at the receiver for signal detection, timing synchronization, and channel estimation. Second, the signal field carries the transmission parameters, such as the employed code rate, modulation order, and frame length. Finally, we have the \ac{OFDM} data symbols. A detailed discussion of the IEEE 802.11p standard and all its features is presented in~\cite{ref_IEEE}.
\begin{figure*}[t]
\setlength{\abovecaptionskip}{6pt plus 3pt minus 2pt}
\centering
\includegraphics[width=0.95\textwidth]{figures/proposed_BD.eps}
\caption{LSTM-based IEEE 802.11p channel estimators block diagram.}
\label{fig:block_diagram}
\end{figure*}
In this paper, we assume perfect synchronization at the receiver, and we omit the signal field for simplicity. Therefore, we consider only the two long training symbols followed by $I$ {\ac{OFDM}} data symbols within each transmitted frame. The received {\ac{OFDM}} symbol can be expressed as:
\begin{equation}
\small
\tilde{\ma{y}}_i[k] = \left\{
\begin{array}{ll}
\tilde{\ma{h}}_{i,d}[k] \tilde{\ma{x}}_{i,d}[k] + \tilde{\ma{v}}_{i,d}[k],&\quad k \in \Kd \\
\tilde{\ma{h}}_{i,p}[k] \tilde{\ma{x}}_{i,p}[k] + \tilde{\ma{v}}_{i,p}[k],&\quad k \in \Kp \\
\end{array}\right. ,
\label{eq: xK}
\end{equation}
where $\tilde{\ma{x}}_{i}[k]$ and $\tilde{\ma{h}}_{i}[k]$ denote the $i$-th transmitted {\ac{OFDM}} symbol and its respective time variant frequency-domain channel response. Moreover, $\tilde{\ma{v}}_{i}[k]$ represents the frequency-domain counterpart of \ac{AWGN} of variance $\sigma^2$. $\Kd$ and $\Kp$ are the sets of data and pilot subcarriers indices respectively.
\subsection{IEEE 802.11p Channel Estimators}
\subsubsection{DPA Basic Estimation}
The {\ac{DPA}} basic estimation employs the demapped data subcarriers, besides the pilot subcarriers, to estimate the channel for the current {\ac{OFDM}} symbol, such that
\begin{equation}
\small
\ma{d}_i[k] = \mathfrak{D} \big( \frac{\ma{y}_i[k]}{\hat{\tilde{\ma{h}}}_{\text{DPA}_{i-1}}[k]}\big)
,~ \hat{\tilde{\ma{h}}}_{\text{DPA}_{0}}[k] = \hat{\tilde{\ma{h}}}_{\text{LS}}[k],~ k \in \Kon,
\label{eq: DPA_1}
\end{equation}
where $\mathfrak{D}(.)$ is the demapping operation to the nearest constellation point according to the employed modulation order. $\hat{\tilde{\ma{h}}}_{\text{LS}}[k]$ refers to the LS estimated channel at the received preambles denoted as $\ma{y}^{(p)}_{1}[k]$, and $\ma{y}^{(p)}_{2}[k]$, such that
\begin{equation}
\small
\hat{\tilde{\ma{h}}}_{\text{LS}}[k] = \frac{\ma{y}^{(p)}_{1}[k] + \ma{y}^{(p)}_{2}[k]}{2\ma{p}[k]},~k \in \Kon,
\label{eq: LS}
\end{equation}
where $\ma{p}[k]$ represents the predefined frequency-domain preamble sequence. After that, the final {\ac{DPA}} channel estimates are updated as follows
\begin{equation}
\small
\hat{\tilde{\ma{h}}}_{\text{DPA}_{i}}[k] = \frac{\ma{y}_i[k]}{\ma{d}_i[k]},~ k \in \Kon.
\label{eq: DPA_2}
\end{equation}
It is worth mentioning that the {\ac{DPA}} basic estimation serves as the starting point for most IEEE 802.11p channel estimators.
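The LS and DPA steps in~\eqref{eq: LS} and~\eqref{eq: DPA_1}--\eqref{eq: DPA_2} can be sketched as below. QPSK demapping is assumed here purely for illustration; the paper's modulation order may differ.

```python
import numpy as np

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def demap(s):
    """Hard demapping D(.) to the nearest constellation point."""
    return QPSK[np.argmin(np.abs(s[:, None] - QPSK[None, :]), axis=1)]

def ls_estimate(y_p1, y_p2, p):
    """LS channel estimate from the two long training symbols."""
    return (y_p1 + y_p2) / (2 * p)

def dpa_step(y_i, h_prev):
    """One DPA iteration: equalize with the previous estimate,
    demap, then re-estimate the channel."""
    d_i = demap(y_i / h_prev)
    return y_i / d_i
```

With a noiseless received symbol and an accurate previous estimate, the demapped data is exact and the DPA update recovers the channel, which is why the estimate degrades mainly through demapping errors under noise and fast fading.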
\subsubsection{STA-DNN Estimator}
The authors in~\cite{ref_STA_DNN} showed that employing an optimized {\ac{DNN}} after the conventional \ac{STA} estimator~\cite{ref_STA} leads to a significant performance improvement while recording lower computational complexity. The conventional {\ac{STA}} estimation applies averaging in both frequency and time successively to the \ac{DPA} estimated channel in~\eqref{eq: DPA_2}, such that
\begin{equation}
\small
\hat{\tilde{\ma{h}}}_{\text{FD}_{i}}[k] = \sum_{\lambda = -\beta}^{\lambda = \beta} \omega_{\lambda} \hat{\tilde{\ma{h}}}_{\text{DPA}_{i}}[k + \lambda], ~ \omega_{\lambda} = \frac{1}{2\beta+1}, k \in \Kd,
\label{eq: STA_1}
\end{equation}
\begin{equation}
\small
\hat{\tilde{\ma{h}}}_{\text{STA}_{i}}[k] = (1 - \frac{1}{\alpha}) \hat{\tilde{\ma{h}}}_{\text{STA}_{i-1}}[k] + \frac{1}{\alpha}\hat{\tilde{\ma{h}}}_{\text{FD}_{i}}[k], k \in \Kon,
\label{eq: STA_2}
\end{equation}
where $\beta$ and $\omega_{\lambda}$ refer to the {\ac{STA}} frequency averaging window size and weight, respectively, while $\alpha \geq 1$ defines the {\ac{STA}} time averaging weight. The main {\ac{STA}} limitation is that the averaging parameters should be updated according to the real-time channel statistics, which are not available in practice. Therefore, the {\ac{STA}} averaging parameters are fixed, resulting in a performance degradation, especially in high \ac{SNR} regions and high-mobility vehicular scenarios. However, as discussed in~\cite{ref_STA_DNN}, when $\hat{\tilde{\ma{h}}}_{\text{STA}_{i}}[k]$ is fed as an input to the {\ac{STA}}-{\ac{DNN}} network, more time-frequency channel correlation can be captured, besides correcting the conventional STA estimation error. Thus, the overall performance is significantly improved. However, an error floor still appears in high-mobility vehicular scenarios, especially in the high {\ac{SNR}} region.
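The two STA averaging stages in~\eqref{eq: STA_1}--\eqref{eq: STA_2} can be sketched as below. The handling of the band edges is our own choice (the window is simply shortened there), since the equations leave it open:

```python
import numpy as np

def sta_average(h_dpa, h_sta_prev, alpha=2, beta=2):
    """STA: frequency-domain windowing followed by time averaging."""
    K = len(h_dpa)
    h_fd = np.empty_like(h_dpa)
    for k in range(K):
        lo, hi = max(0, k - beta), min(K, k + beta + 1)
        h_fd[k] = np.mean(h_dpa[lo:hi])  # window of up to 2*beta + 1 taps
    return (1 - 1 / alpha) * h_sta_prev + (1 / alpha) * h_fd
```

A flat channel passes through unchanged, while rapid frequency-domain variations are smoothed, which is exactly the behavior that hurts STA at high SNR when the fixed window over-smooths a frequency-selective channel.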
\subsubsection{TRFI-DNN Estimator}
The {\ac{DPA}} estimation in~\eqref{eq: DPA_2} can be further improved by applying {\ac{TRFI}}~\cite{ref_TRFI}, where the subcarriers of the received {\ac{OFDM}} symbol are divided into (\textit{i}) the {$\RS_{i}$} set, which includes the reliable subcarrier indices, and (\textit{ii}) the {$\URS_{i}$} set, which contains the unreliable subcarrier indices. Then, the channel estimates for the {$\URS_{i}$} are interpolated using the {$\RS_{i}$} channel estimates. {\ac{TRFI}} employs frequency-domain cubic interpolation, assuming high correlation between two successively received {\ac{OFDM}} symbols. This procedure can be expressed as follows
\begin{itemize}
\item Equalize the previously received {\ac{OFDM}} symbol by ${\hat{\tilde{\ma{h}}}_{\text{TRFI}_{i-1}}[k]}$ and ${\hat{\tilde{\ma{h}}}_{\text{DPA}_{i}}[k]}$, such that
\begin{equation}
\small
\begin{split}
{\ma{d^\prime}_{i-1}[k]} = \mathfrak{D} \big( \frac{\ma{y}_{i-1}[k]}{\hat{\tilde{\ma{h}}}_{\text{DPA}_{i}}[k]} \big), ~
{\ma{d^{\prime\prime}}_{i-1}[k]} = \mathfrak{D} \big( \frac{\ma{y}_{i-1}[k]}{\hat{\tilde{\ma{h}}}_{\text{TRFI}_{i-1}}[k]} \big).
\label{eq: TRFI_1}
\end{split}
\end{equation}
\item According to the demapping results, the subcarriers are grouped as follows
\begin{equation}
\small
\left\{
\begin{array}{ll}
\RS_{i} \leftarrow \RS_{i} + {k} ,&\quad \ma{d^\prime}_{i-1}[k] = \ma{d^{\prime\prime}}_{i-1}[k] \\
\URS_{i} \leftarrow \URS_{i} + {k},&\quad \ma{d^\prime}_{i-1}[k] \neq \ma{d^{\prime\prime}}_{i-1}[k]
\end{array}\right. .
\label{eq: TRFI_2}
\end{equation}
\item Finally, frequency domain cubic interpolation is employed to estimate the channels at the {$\URS_{i}$} as follows
\begin{equation}
\small
\hat{\tilde{\ma{h}}}_{\text{TRFI}_{i}}[k] = \left\{
\begin{array}{ll}
\hat{\tilde{\ma{h}}}_{\text{DPA}_{i}}[k] ,&\quad k \in \RS_{i} \\
\text{Cubic Interpolation},&\quad k \in \URS_{i}
\end{array}\right. .
\label{eq: TRFI_3}
\end{equation}
\end{itemize}
In order to overcome the {\ac{STA}}-{\ac{DNN}} performance limitation in high-mobility vehicular scenarios (high \ac{SNR} region), the authors in~\cite{ref_TRFI_DNN} used the same optimized {\ac{DNN}} architecture as in~\cite{ref_STA_DNN}, but with $\hat{\tilde{\ma{h}}}_{\text{TRFI}_{i}}[k]$ as an input instead of $\hat{\tilde{\ma{h}}}_{\text{STA}_{i}}[k]$. \ac{TRFI}-\ac{DNN} corrects the cubic interpolation error, besides learning the channel frequency-domain correlation, thus improving the performance in the high {\ac{SNR}} region.
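The TRFI steps in~\eqref{eq: TRFI_1}--\eqref{eq: TRFI_3} can be sketched as below. To keep the sketch dependency-free, linear interpolation stands in for the cubic interpolation used in the paper, and the demapper is passed in as a function:

```python
import numpy as np

def trfi_step(y_prev, h_dpa, h_trfi_prev, demap):
    """Classify subcarriers into RS/URS, then interpolate the URS estimates."""
    d1 = demap(y_prev / h_dpa)         # equalized with the current DPA estimate
    d2 = demap(y_prev / h_trfi_prev)   # equalized with the previous TRFI estimate
    rs = np.flatnonzero(d1 == d2)      # reliable subcarriers
    urs = np.flatnonzero(d1 != d2)     # unreliable subcarriers
    h = h_dpa.astype(complex).copy()
    if urs.size and rs.size:
        h[urs] = (np.interp(urs, rs, h_dpa[rs].real)
                  + 1j * np.interp(urs, rs, h_dpa[rs].imag))
    return h, rs, urs
```

A subcarrier whose two equalized versions demap to different symbols is flagged as unreliable, and its channel estimate is rebuilt from the neighboring reliable subcarriers.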
\subsubsection{LSTM-DNN-DPA Estimator}
Unlike the recently proposed \ac{DNN}-based estimators, where the {\ac{DL}} processing is employed after the conventional estimators, the work in~\cite{ref_LSTM_DNN_DPA} shows that employing the {\ac{DL}} processing before the conventional estimator (specifically, the {\ac{DPA}} estimation) can significantly improve the overall performance. In this context, the authors proposed to use two cascaded {\ac{LSTM}} and {\ac{DNN}} networks to estimate the channel for the current {\ac{OFDM}} symbol, as shown in Fig.~\ref{fig:block_diagram}. After that, {\ac{DPA}} estimation is applied using the {\ac{LSTM}}-{\ac{DNN}} estimated channel. Even though the {\ac{LSTM}}-{\ac{DNN}}-{\ac{DPA}} estimator outperforms the recently proposed \ac{DNN}-based estimators, it suffers from a considerable computational complexity that arises from the employment of two {\ac{DL}} networks.